Seafile Client never finishes

As others have reported as well, under some conditions the Seafile Client never finishes downloading a library. I currently have such a case while fetching a recent library over a very bad wifi connection with at most 1 MiB/s. The client downloads forever (about 2 days so far) without making any progress, and has already downloaded more data than the library actually contains (luckily the connection isn’t metered).

The library in this case has been shared with me. ID is 22a87787-29c0-458f-a412-bc8e2eebf318 (2017-18 - Kopie(22a87787)).

@daniel.pan you can download the log at https://download.seafile.com/lib/97ec9b9a-2740-45fe-868d-bb02a86433c0/file/seafile-log.zip (keep it private, please).

Seaf-fsck did not find anything, which I also expected given the setup (a file system with checksums on a RAID, and ECC RAM):

```
seafile@home:~/seafile-server-latest$ time ./seaf-fsck.sh 22a87787-29c0-458f-a412-bc8e2eebf318
Starting seaf-fsck, please wait ...
[12/04/18 01:59:46] fsck.c(595): Running fsck for repo 22a87787-29c0-458f-a412-bc8e2eebf318.
[12/04/18 01:59:46] fsck.c(422): Checking file system integrity of repo 2017-18 - Kopie(22a87787)...
[12/04/18 02:06:12] fsck.c(659): Fsck finished for repo 22a87787.
seaf-fsck run done
Done.

real    6m26.676s
user    1m49.052s
sys     0m29.708s
```

I had an old 6.x client before and upgraded to the most recent version while the issue was occurring, but it is still there. As you can see in the other report, copying using SeaDrive also doesn’t work, because SeaDrive only works with a perfect connection.

From the log I can see that the transfer was interrupted by a network timeout. Does the transfer finish after switching to a stable network?

Unfortunately I currently cannot switch to a stable network.

My wifi connection is currently interrupted frequently, but apart from Seafile downloading all the time and never finishing, there are no other negative consequences.

Downloading the files via Seahub works, but that is not a solution because it is much more work.

So my impression is that the client runs into an error, cleans the successfully fetched data out of its “cache”, retries the download, runs into another error, and repeats that indefinitely. Or it overwrites successfully fetched data. Either way it runs forever, unless there is a larger outage, in which case the sync is aborted entirely because it is the initial sync.
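To illustrate what I mean, here is a small made-up Python sketch (BLOCKS, fetch_block and both sync functions are invented for this post, they are not Seafile code): a client that throws its progress away on every error practically never gets a clean full pass over a flaky link, while one that keeps already-fetched blocks finishes after a handful of retries.

```python
import random
import time

BLOCKS = [f"block-{i}" for i in range(50)]   # stand-in for a library's block list
FAILURE_RATE = 0.2                           # roughly "bad wifi"

def fetch_block(block_id):
    """Simulated flaky transfer: fails randomly, like a dropped wifi connection."""
    if random.random() < FAILURE_RATE:
        raise ConnectionError(f"timeout while fetching {block_id}")
    return block_id

def sync_restarting(max_attempts=50):
    """Suspected behaviour: every error discards the blocks fetched so far, so the
    total traffic keeps growing and a clean full pass almost never happens."""
    for _ in range(max_attempts):
        fetched = []
        try:
            for block in BLOCKS:
                fetched.append(fetch_block(block))
            return fetched                    # only succeeds if nothing failed
        except ConnectionError:
            time.sleep(0)                     # back off (shortened here), start over
    return None                               # gave up; all earlier traffic was wasted

def sync_resuming(max_attempts=50):
    """Desired behaviour: keep already-fetched blocks across retries, so each
    interruption only costs the block that was in flight."""
    fetched = {}
    for _ in range(max_attempts):
        try:
            for block in BLOCKS:
                if block not in fetched:
                    fetched[block] = fetch_block(block)
            return list(fetched.values())
        except ConnectionError:
            time.sleep(0)                     # back off, but keep the progress
    return None

if __name__ == "__main__":
    print("restarting finished:", sync_restarting() is not None)
    print("resuming finished:  ", sync_resuming() is not None)
```

With the numbers above, the restarting variant transfers far more data than the 50 blocks are worth and usually still gives up, which is roughly the pattern I am seeing.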

@shoeper

I wonder if this is similar to my “uploading forever” case you helped me with earlier. Does Apache have an equivalent to RequestReadTimeout for sending data?

You mean nginx?

You are talking about “When using upload speed limit, large file uploads do not work - #5 by arjones85”, right? In my case it is a download. It could still be timeouts, but in both cases timeouts and connection interruptions can happen and should be handled by the Seafile Client. An average user just sees that it doesn’t work and moves on to other software.

> You mean nginx?

I use Apache, and that’s the name of the Apache configuration directive I had to tweak, but yes, I was referring to a download timeout you may be hitting.
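For reference, these are the kinds of Apache directives I mean; the values are only illustrative and I am not sure which (if any) of these limits you are actually hitting:

```apache
# mod_reqtimeout: limits on receiving request headers/body (my upload case)
RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500

# core: also applies between TCP ACKs while Apache is sending a response (downloads)
Timeout 300

# mod_proxy: how long Apache waits for a proxied backend (e.g. the Seafile file server)
ProxyTimeout 300
```

If ProxyTimeout isn’t set it defaults to the Timeout value, so raising Timeout alone may already change the behaviour.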