Has anyone else noticed this behavior? Is this a known issue, and is there a way to fix it?
I have recently set up a small-ish installation of Seafile (Pro version) for a company with ~70 roaming clients syncing files over (costly & unreliable) metered LTE/4G/satellite connections.
The client connection is unreliable, may fail over to a secondary connection (when available), and frequently goes offline for hours.
Under the impression that the Seafile client supports resuming downloads (i.e. without restarting the transfer from byte 0) when the connection is interrupted, I did not expect to see one of the clients use up more than 2 GB of data during the transfer of a single 1 GB file.
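To be clear about what I mean by "resuming": I expected something like standard HTTP byte-range resume, which libcurl (which the client uses, per the log below) supports out of the box. A minimal sketch of that mechanism follows; whether the seafhttp block endpoint actually honors Range requests is purely my assumption, and the URL is a placeholder:

```c
/* Sketch of HTTP byte-range resume with libcurl -- the behavior I assumed
 * the client would have. The URL is a placeholder; I have NOT verified
 * that the seafhttp block endpoint honors Range requests. */
#include <stdio.h>
#include <sys/stat.h>
#include <curl/curl.h>

int main(void)
{
    const char *url  = "https://SEAFILE-FQDN/seafhttp/repo/REPO-ID/block/BLOCK-ID";
    const char *path = "block.part";

    /* If a partial file already exists, resume from its current size. */
    struct stat st;
    curl_off_t offset = (stat(path, &st) == 0) ? (curl_off_t)st.st_size : 0;

    FILE *out = fopen(path, "ab");            /* append: keep the bytes we have */
    if (!out)
        return 1;

    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl)
        return 1;

    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, out);
    /* Ask the server for bytes from `offset` onward instead of byte 0. */
    curl_easy_setopt(curl, CURLOPT_RESUME_FROM_LARGE, offset);

    CURLcode res = curl_easy_perform(curl);   /* if this fails, re-running
                                                 resumes from the new offset */
    if (res != CURLE_OK)
        fprintf(stderr, "transfer failed: %s\n", curl_easy_strerror(res));

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    fclose(out);
    return res == CURLE_OK ? 0 : 1;
}
```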
At that point, the user decided to cancel the transfer and deleted the file from the Seafile library to avoid excessive bandwidth charges, but is willing to test again if we have a fix for this.
I have checked the seafile.log file from the client installation (C:\users\username\ccnet\logs); it shows the transfer timed out multiple times, e.g.:
```
[11/27/19 08:53:11] http-tx-mgr.c(783): libcurl failed to GET https://SEAFILE-FQDN/seafhttp/repo/d50edf89-6b59-46ca-9e84-7b0c07396034/block/9af37b7269faa17d1017a82bfdee5f3f5ff567dc: Timeout was reached.
[11/27/19 08:53:11] repo-mgr.c(4338): Transfer failed.
[11/27/19 08:53:11] http-tx-mgr.c(1157): Transfer repo 'd50edf89': ('normal', 'data') --> ('error', 'finished')
[11/27/19 08:53:11] sync-mgr.c(621): Repo 'REPO-0001' sync state transition from downloading to 'error': 'Data transfer timed out. Please check network or firewall'.
```
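For what it's worth, "Timeout was reached" is libcurl's generic CURLE_OPERATION_TIMEDOUT message. I have not verified which timeout options http-tx-mgr.c actually sets (that part is my assumption), but the distinction matters on links like ours: a hard per-request timeout aborts the transfer after a fixed number of seconds even while bytes are still trickling in, whereas libcurl's low-speed options only abort a genuinely stalled connection. A minimal sketch of the two behaviors:

```c
#include <curl/curl.h>

/* Two ways libcurl can time a transfer out. I have NOT confirmed which one
 * the Seafile client uses; this is only an illustration of the difference. */

/* (a) Hard deadline: abort after 45 s total, even if data is still arriving.
 * On a slow LTE/satellite link a block download can easily exceed a fixed
 * cap like this and die with CURLE_OPERATION_TIMEDOUT. */
static void hard_timeout(CURL *curl)
{
    curl_easy_setopt(curl, CURLOPT_TIMEOUT, 45L);
}

/* (b) Stall detection: abort only if throughput stays below 100 bytes/s
 * for 60 s in a row. A slow-but-alive transfer is left to finish. */
static void stall_timeout(CURL *curl)
{
    curl_easy_setopt(curl, CURLOPT_LOW_SPEED_LIMIT, 100L);
    curl_easy_setopt(curl, CURLOPT_LOW_SPEED_TIME, 60L);
}
```

If the client does use a hard timeout, that would at least be consistent with what we saw: repeated aborts on a slow but still-working connection.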
I created a pastebin with those parts of the log that I can share, in case it provides further insight: https://pastebin.com/6ZYdRnb6