Transferring large files (larger than the amount of RAM your host has)


I'm currently running Seafile on CentOS 7. The machine has 12 GB of RAM available. Whenever I upload or download a file, the file is cached in RAM, and the amount cached is directly proportional to how much of the file has been transferred. The problem is that when I try to up/download a file that is over 12 GB, my machine turns to swap and then reboots.

I tried setting the HDD to sync in /etc/fstab, which nuked speeds, but the problem still persisted.
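For reference, the sync mount option mentioned above looks like this in /etc/fstab (the device and mount point are placeholders; it forces synchronous writes, which is why throughput drops so badly):

```
# /etc/fstab -- 'sync' forces every write to hit the disk before returning
/dev/sdb1  /srv/seafile-data  ext4  defaults,sync  0  2
```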

Can I somehow get the RAM to flush every so often?
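As a diagnostic aid (not a fix), the Linux page cache can be flushed manually. This is a standard kernel interface, sketched here; dropping caches needs root, so the snippet guards for that:

```shell
# Flush dirty pages to disk first
sync
# Drop the clean page cache (only possible as root)
if [ "$(id -u)" -eq 0 ]; then
  echo 3 > /proc/sys/vm/drop_caches
fi
```

Note that the kernel reclaims clean page cache automatically under memory pressure; if the machine is swapping, something is holding dirty or anonymous memory, and dropping caches will not cure that.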

Can I somehow get Seafile to avoid caching in RAM and write directly to disk?

Please help, and thank you for your time!


You can simply use the client to sync; uploading via the browser requires RAM. I don't know why it's like this, but I'd also like a change here. The browser could detect how much RAM is available and then split the file for upload.

@daniel.pan Please put this on the list of features for future releases. Uploading large files should not go through RAM; instead, the files should be split and uploaded in segments that fit into RAM. Something like that.
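The split-and-upload idea above can be sketched in a few lines. This is not Seafile's actual upload code, just a minimal illustration of reading a large file in fixed-size chunks so memory use stays bounded regardless of file size:

```python
CHUNK_SIZE = 8 * 1024 * 1024  # 8 MiB per chunk keeps memory use bounded

def iter_chunks(path, chunk_size=CHUNK_SIZE):
    """Yield a large file piece by piece instead of loading it into RAM."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                return
            yield chunk

# A hypothetical uploader would then POST each chunk separately
# (e.g. with an offset header) instead of sending the whole body at once.
```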


Yes! I feel this is a priority, and I was surprised that my searches for solutions online came up empty, as if nobody has had this problem until now. Folder upload to the web GUI should also be a feature for future releases. I've tried to drag and drop a folder, hoping it would keep subdirectories intact, but instead it just transfers the files within the subdirectories on their own. Strange.

Anyway, thanks for clearing up the fact that there is no workaround as of yet!


It is possible already! It was Pro-only in the past, but it was made available to CE as well. Whether it works depends on the browser. Chrome/Chromium definitely works with it! :slight_smile:

Don’t know about the subfolders for now.
I'd also like to see an unzip option in the web GUI.


UPDATE: @DerDanilo

Even when using the desktop client to sync the folder I want uploaded, the problem of caching in RAM is still present. How is anyone uploading large files?


This is wrong. The client likely buffers 8 MiB at most. It could be a little more (in case you upload files of several hundred GiB), but definitely far below 100 MiB.

Do you use nginx or Apache? Have you checked that they don't buffer too much (this is only relevant for web uploads, not for client synchronization)?
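For nginx, the buffering knobs in question look roughly like this. This is a sketch, assuming the default Seafile fileserver location and port; adjust to your installation:

```
# nginx reverse-proxy sketch: pass uploads through instead of buffering them
location /seafhttp {
    proxy_pass http://127.0.0.1:8082;
    client_max_body_size 0;          # 0 = no hard upload size limit
    proxy_request_buffering off;     # forward the request body as it arrives
}
```

With `proxy_request_buffering on` (the default), nginx spools the whole upload to a temp file before handing it to the backend, which can look like the "whole file gets buffered" behavior described above.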



I'm not using nginx or Apache.


In the normal case, it should not buffer the whole file in RAM. So the problem is specific to @motika's configuration.


Can you suggest a starting place for me to look? It's being run within Proxmox; could that be an issue?


Proxmox is not the issue; I'm running it on several machines with no problems.

You need to use nginx or apache.

@daniel.pan We should change the manual to disallow using Seafile without nginx or Apache. Running it without a local reverse proxy just causes problems, and people don't understand that they should not do it.


100% agree!!!


> In the normal case, it should not buffer the whole file in RAM.

I can confirm that the "large file upload problem" has nothing to do with RAM: increasing RAM from 2 GB to 4 GB didn't fix my inability to sync or upload a 3 GB file via the Seafile client. In my case such uploads failed because my system partition was too small (8 GB). I doubled the size to 16 GB and now syncing 3 GB files works fine.

Anyway, I wonder why this happens, because I have symlinked the whole seafile directory to another virtual disk of around 4 TB. As far as I could tell, the used disk space on the system partition increased during the upload attempt of the 3 GB file. Is there another directory Seafile uses for temp files during the upload process?
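One way to find out where temporary data lands is to compare disk usage of candidate directories before and during an upload. The paths here are just examples, not a statement about where Seafile actually writes; adjust them to your installation:

```shell
# Snapshot disk usage of likely temp locations; rerun mid-upload and diff.
du -sh /tmp /var/tmp 2>/dev/null
# Watch free space on the system partition shrink as the upload runs.
df -h /
```

Whichever directory grows along with the upload is the one sitting on the too-small system partition; a symlink or bind mount can then move just that directory to the large disk.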


Are you using nginx? The web server usually buffers the uploaded file before handing it over to Seafile.


Sorry, I forgot to reply. Thank you, Weeehe!