Large downloads via shared link broken

I recently upgraded Seafile from 5.x to version 6.0.6 and ran into a problem when downloading large files via shared download links.

The download of a shared 5 GB file is randomly cancelled, depending on the connection speed of the downloader. Over a gigabit link, the download fails somewhere between 800 MB and 2 GB; over a slow 5 MBit/s link, it fails after only 18 MB. The download usually runs for between 20 seconds and 2 minutes before it is stopped.

The server runs Ubuntu 16.04 and has 2 GB of RAM. During the download, I can see cache memory usage continually increasing before immediately dropping back to its initial value when the download fails. The download does not fail the instant all memory is used, though (it fails around half a minute later). Rarely, the download succeeds (restarting the Seafile server seems to increase the chance that the first download succeeds). This issue never occurred with version 5.x.

nginx, which is used as a reverse proxy, logs the download with a status code of 200/OK (although it failed) and the number of bytes actually transmitted, not the full size of the requested file. seafile.log and seahub.log contain no entries regarding the failed download. The download also fails when fetching the file directly (as a logged-in user and owner of the file). Firefox was used to reproduce the issue, though it likely happens with all browsers.

Do you have a hint on how this issue could be fixed? Or may I have encountered a bug?

Note: Uploading large files and downloading using the client work without problems.

There are a few possibilities to check:

  1. Has your seaf-server crashed during the download? You can check this by reading logs/controller.log. If seaf-server crashed, there should be a log entry saying the controller tried to restart it.
  2. I remember seeing a similar log entry in the nginx log before. The cause was a timeout setting in Nginx.
  3. You mention a memory usage increase. Have you disabled buffering in Nginx? Normally Nginx buffers the content from the upstream server to a temp file, then transmits it to the client after it has been completely buffered into the file. If you turn this off, Nginx needs to buffer the content in memory and transmit it to the client at the same time. In that case, depending on the network speed between the client and Nginx, memory usage can increase.
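For reference, the two buffering modes described above map to standard nginx proxy directives roughly like this (a sketch only; the upstream address assumes a default Seafile setup on port 8082, and the values shown are the nginx defaults):

```nginx
location /seafhttp {
    proxy_pass http://127.0.0.1:8082;

    # Default: nginx buffers the upstream response, spilling to an
    # on-disk temp file of up to proxy_max_temp_file_size (1024m).
    proxy_buffering on;
    proxy_max_temp_file_size 1024m;

    # Alternative: stream the response through small in-memory buffers
    # as it arrives from upstream, without the temp file.
    # proxy_buffering off;
}
```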

Thank you for the quick response, Jonathan!

  1. The seaf-server did not crash, the controller.log does not contain any entry indicating it did.
  2. I have set all timeout options to very high values, following the nginx configuration section in the Seafile manual. Currently, the timeouts are set to 36000s, whereas the download mostly fails after only 20s. So this is unlikely.
  3. I had not turned off buffering of upstream server responses until now. For testing purposes, I disabled the temp buffer file - and since then, it seems to work! It looks like nginx closed both the upstream server and client connections once the temporary file (default maximum size: 1 GB) was full. I still do not know what exactly the problem is (it should also work with the temporary file), but disabling it is an adequate solution.
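For completeness, the timeout directives referred to in point 2 are the standard nginx ones (the 36000s value is from my configuration; whether your setup needs all four depends on your installation):

```nginx
location /seafhttp {
    proxy_pass http://127.0.0.1:8082;

    # Generous timeouts for long-running large-file transfers.
    proxy_connect_timeout 36000s;
    proxy_read_timeout    36000s;
    proxy_send_timeout    36000s;
    send_timeout          36000s;
}
```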

For all people having the same problem:
Add proxy_max_temp_file_size 0; to both the location /seafhttp and location /seafdav proxy settings.
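A minimal sketch of the resulting proxy blocks (ports and the rewrite rule assume a default Seafile setup; adapt them to your installation):

```nginx
location /seafhttp {
    rewrite ^/seafhttp(.*)$ $1 break;
    proxy_pass http://127.0.0.1:8082;
    # Disable the on-disk temp buffer so large responses are streamed
    # to the client instead of being cut off when the temp file fills up.
    proxy_max_temp_file_size 0;
}

location /seafdav {
    proxy_pass http://127.0.0.1:8080/seafdav;
    # Same fix for WebDAV downloads.
    proxy_max_temp_file_size 0;
}
```

Setting proxy_max_temp_file_size to 0 keeps proxy_buffering enabled but limits buffering to the in-memory buffers, so nginx streams the response instead of spooling up to 1 GB to disk first.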