Upload through desktop client fails for large files on a bad internet connection

I have a long-running Seafile installation that has always worked well. However, I am now accessing it via VPN from a place with a bad internet connection, and I am starting to see some problems:

When uploading large files (>20 MB or so) through the desktop client, the client shows "uploading, 0%" for about a minute, and then the error message

 "Data transfer was interrupted. Please check network or firewall"

appears. I once took the laptop to a place with a more stable connection and the upload worked fine.

This seems like some sort of timeout problem. I am using server version 7.0.5 behind an nginx reverse proxy. I checked the logs in /var/log/nginx but could see no error messages. In my nginx configuration file I set all timeouts to really high values (omitting some unrelated parts):

server {
    ...

    ssl_session_timeout 60m;
    ssl_session_cache shared:SSL:5m;

    ...

    location / {
        proxy_pass         http://127.0.0.1:8000;
        proxy_set_header   Host $host:$server_port;
        proxy_set_header   X-Real-IP $remote_addr;
        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Host $server_name;
        proxy_set_header   X-Forwarded-Proto https;

        access_log      /var/log/nginx/seahub.access.log;
        error_log       /var/log/nginx/seahub.error.log;

        proxy_read_timeout  3600s;

        client_max_body_size 0;
    }

    location /seafhttp {
        rewrite ^/seafhttp(.*)$ $1 break;
        proxy_pass http://127.0.0.1:8082;
        client_max_body_size 0;
        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_connect_timeout  7200s;
        proxy_read_timeout  7200s;
        proxy_send_timeout  7200s;
        send_timeout  7200s;
    }
    ...
}
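One directive I have not set, and which I am not sure applies here (it requires nginx >= 1.7.11, and the Seafile manual should be checked for whether it is recommended for this server version), is request buffering on the upload path. By default nginx buffers the whole request body before passing it to the backend, so it might behave differently with streaming. A sketch:

```nginx
location /seafhttp {
    ...
    # Stream the request body to the backend as it arrives instead of
    # spooling the whole upload first (available since nginx 1.7.11).
    proxy_request_buffering off;
}
```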

The error message shows up less than a minute after the upload started, which is nowhere near the timeout values set here. For small files (e.g. source code text files), the upload always works fine.

How can I find out what exactly is causing the problem? Is there any other option I need to add? I am fine with slow uploads; I just want them to finish eventually, which should be possible even over a very bad connection.
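One way I could try to narrow this down, independent of the Seafile client, is a raw timed upload with curl over the same VPN. The server URL and upload path below are placeholders, not real Seafile endpoints, and the server will reject the request; the point is only to see whether a large HTTP upload through this network path also dies after ~20 seconds:

```shell
# Create a ~25 MB dummy file, just above the size at which uploads fail.
dd if=/dev/zero of=/tmp/seafile-upload-test.bin bs=1M count=25 2>/dev/null

# Hypothetical URL -- replace with the real server address.
SERVER="https://seafile.example.com"

# PUT the file through the same network path the client uses. Even if the
# server rejects it, curl reports how many bytes went out and how long it
# took; a failure after ~20 s would point at the network/VPN, not the client.
curl -sS --connect-timeout 10 --max-time 120 -o /dev/null \
    -w 'sent %{size_upload} bytes in %{time_total}s\n' \
    -T /tmp/seafile-upload-test.bin "$SERVER/seafhttp/test-upload" \
    || echo "transfer failed (curl exit code $?)"
```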

This is not a server problem; it is the client's network timeout, which is set to 300 seconds by default.

Thank you for your answer :slight_smile:

I measured the time until the error appeared: across three tries it was between 16 and 25 seconds, which is significantly lower than any timeout I found in any server config file, and also lower than the 300 seconds you mentioned for the client. I also tried to adjust this client timeout option, but it is not in the GUI, and I did not find any Seafile config files on my laptop that I could edit. Can you tell me where to check this value and how to modify it?

I had a look at the client log as well; there, at least, I found an error message, but I don't know how to interpret it:

[{timestamp}] sync-mgr.c(582): Repo '{Lib}' sync state transition from 'synchronized' to 'uploading'.
[{timestamp}] http-tx-mgr.c(1157): Transfer repo '{number}': ('normal', 'init') --> ('normal', 'check')
[{timestamp}] http-tx-mgr.c(1157): Transfer repo '{number}': ('normal', 'check') --> ('normal', 'commit')
[{timestamp}] http-tx-mgr.c(1157): Transfer repo '{number}': ('normal', 'commit') --> ('normal', 'fs')
[{timestamp}] http-tx-mgr.c(1157): Transfer repo '{number}': ('normal', 'fs') --> ('normal', 'data')
[{timestamp}] http-tx-mgr.c(783): libcurl failed to GET {server url}/seafhttp/protocol-version: Couldn't connect to server.
[{timestamp}] http-tx-mgr.c(783): libcurl failed to GET {server url}:{port}/protocol-version: Couldn't connect to server.
[{timestamp}] http-tx-mgr.c(929): libcurl failed to PUT https://{server url}:{port}/seafhttp/repo/{repo number}-{...}/block/{...}: Failure when receiving data from the peer.
[{timestamp}] http-tx-mgr.c(929): libcurl failed to PUT https://{server url}:{port}/seafhttp/repo/{repo number}-{...}/block/{...}: Failure when receiving data from the peer.
[{timestamp}] http-tx-mgr.c(929): libcurl failed to PUT https://{server url}:{port}/seafhttp/repo/{repo number}-{...}/block/{...}: Failure when receiving data from the peer.
[{timestamp}] http-tx-mgr.c(1157): Transfer repo '{repo number}': ('normal', 'data') --> ('error', 'finished')

While trying this out, changes to smaller files were successfully committed, so all the URLs, logins etc. should be fine.
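To get a rawer view than the client log, the failing endpoint from the log could be probed directly with curl (the server URL below is a placeholder):

```shell
# Hypothetical URL -- replace with the real server address.
SERVER="https://seafile.example.com"

# Probe the endpoint the client log shows failing with "Couldn't connect".
# -w prints the HTTP status and total time even though the body is discarded.
curl -sS --connect-timeout 10 -o /dev/null \
    -w 'status=%{http_code} time=%{time_total}s\n' \
    "$SERVER/seafhttp/protocol-version" \
    || echo "curl exit code $? (6=DNS failure, 7=connect failed, 28=timeout, 56=recv error)"
```

For what it's worth, curl exit code 56 is CURLE_RECV_ERROR, i.e. the same "Failure when receiving data from the peer" that appears in the client log, while 28 would indicate a plain timeout.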

Is there a way to get more information about the error?

Hello, the error should be caused by a network timeout; the client currently does not provide a configurable timeout option.

What is a “network timeout”? Do you think it has something to do with the VPN connection?

The strange thing is: as long as I don't have overly large files, the sync can run for much longer than the ~20 seconds I measured, and if there is a reconnection during that time, it does not seem to bother Seafile much. It would be nice if the same were true for synchronising larger files.

Is there some way to get more detailed error messages?

Hm, I switched to a different VPN and now it seems to work. It was not obvious to me, since everything else seemed unaffected by the VPN, but apparently something kept restarting my connection, and Seafile did not like that.
It would be nice if Seafile were made more robust against reconnections, since it still holds that if you can reliably transmit small things, there is no reason why it should be impossible to transmit larger things as well. (For example, Seafile is overly eager to cancel everything and start again from the beginning, which can cause it to never finish, instead of "just" continuing from where it stopped.) But I also see that this might not be a simple feature to implement, and for now I think I have what I want.