Can’t upload big files via web GUI


I cannot upload big files (tested with files larger than 3 GB) via the web GUI to my Seafile server. The progress bar goes up to 100%, then the upload fails with the message “network error”. I usually wait for hours and everything seems fine, but then all of a sudden …

My system: Seafile Pro v6.3.13 on a current Debian 9 with Apache 2.

I’ve checked the logs of the Apache web server and all of the Seafile logs (/home/seafile/logs), to no avail: nothing points me in a reasonable direction. I also rechecked the SSL configuration, which I set up by the book.

Do you have any idea why the upload might fail just when it’s meant to finish? I have always used the same laptop, but different networks, during my attempts.

Best regards,

P.S. The upload of small files worked flawlessly so far.

Just gave it another try while bypassing the Apache web server (accessing Seafile’s own web server through an SSH tunnel) … it didn’t work either. So the trouble does not come from Apache.

Cheers Dandu

Just uploaded a file (4.3 GB, at an upload speed of 3-4 MB/s) … successfully. Strange. Cheers Dandu

It still usually doesn’t work. The cancellation occurs at the very end, as soon as the browser’s upload window shows the message “saving”, always resulting in a “network error”. Does that ring a bell with anyone?

Maybe someone can point me to which server log I should look at specifically!?
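In case it helps: a “network error” right at the “saving” stage is often a reverse-proxy timeout or an upload-size limit rather than a client-side problem. A hedged example of Apache settings worth checking (the 3600 s value and the `/seafhttp` backend address on port 8082 are assumptions that must match your own setup):

```apache
# Apache site config (mod_proxy): raise the proxy timeout so a long
# post-upload "saving" phase is not cut off mid-request.
ProxyTimeout 3600
ProxyPass /seafhttp http://127.0.0.1:8082 timeout=3600
ProxyPassReverse /seafhttp http://127.0.0.1:8082
```

It may also be worth checking whether a `max_upload_size` limit is set in the `[fileserver]` section of seafile.conf, since that caps web uploads independently of Apache.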

Cheers Dandu

The web GUI is not designed to handle large files like that.

Use the normal sync client instead; it breaks the file up into reasonable chunk sizes so that the upload completes successfully.
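To illustrate what the client does differently: instead of one huge request, it reads the file in fixed-size pieces and uploads them one at a time. A minimal sketch of the chunking idea (the 8 MB chunk size is an assumption for illustration, not Seafile’s actual block size):

```python
def iter_chunks(path, chunk_size=8 * 1024 * 1024):
    """Yield a file's contents in fixed-size chunks.

    A sync client can send each chunk as its own small request, so a
    single long-running upload never has to survive for hours.
    """
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                return
            yield chunk
```

Because each chunk is a short, independent request, a transient network hiccup only costs one chunk, not the whole multi-gigabyte transfer.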

Put uploaded data in a temporary but persistent directory (somewhere it won’t get lost, whatever happens, so not /tmp). AFAIK Seafile already does this.
When the upload finishes, add the file to an indexer queue and return HTTP 200. Also show the user a message that the file will be processed and appear online after some time (this message could show a different estimate depending on file and queue size).
Instead of running the indexer on demand, run it as a service and let the admin define how many indexers to run. Every x seconds, and after each finished task, an indexer checks the queue for new work. On completion it also removes the file from the cache and from the queue. The queue would be an SQL table, so it is persistent. This would make sure that no files get lost even on a crash; Seafile would just continue indexing on restart.
For the Pro edition you could also allow dedicated indexer nodes. These could use e.g. SSD storage to speed up indexing.
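The proposal above can be sketched in a few lines. This is an illustrative mock-up only, assuming SQLite as the SQL table; none of the names here are Seafile internals:

```python
import sqlite3
import time

class IndexerQueue:
    """Persistent indexer queue backed by an SQL table.

    Entries survive a crash because they live in the database; a
    restarted indexer simply picks up where the queue left off.
    """

    def __init__(self, db_path=":memory:"):
        self.conn = sqlite3.connect(db_path)
        self.conn.execute(
            """CREATE TABLE IF NOT EXISTS index_queue (
                   id INTEGER PRIMARY KEY AUTOINCREMENT,
                   file_path TEXT NOT NULL,
                   status TEXT NOT NULL DEFAULT 'pending',
                   enqueued_at REAL NOT NULL
               )"""
        )
        self.conn.commit()

    def enqueue(self, file_path):
        # Called when an upload finishes, just before returning HTTP 200.
        self.conn.execute(
            "INSERT INTO index_queue (file_path, enqueued_at) VALUES (?, ?)",
            (file_path, time.time()),
        )
        self.conn.commit()

    def claim_next(self):
        # An indexer worker polls this every x seconds / after each task.
        row = self.conn.execute(
            "SELECT id, file_path FROM index_queue "
            "WHERE status = 'pending' ORDER BY id LIMIT 1"
        ).fetchone()
        if row is None:
            return None
        self.conn.execute(
            "UPDATE index_queue SET status = 'working' WHERE id = ?",
            (row[0],),
        )
        self.conn.commit()
        return row

    def finish(self, task_id):
        # On success, remove the entry from the queue (the cached file
        # would be deleted here as well).
        self.conn.execute("DELETE FROM index_queue WHERE id = ?", (task_id,))
        self.conn.commit()
```

Running several `IndexerQueue` consumers against the same database file would correspond to the admin-configurable number of indexers; the `status` column keeps two workers from claiming the same entry.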