Transferring large files (larger than the amount of RAM your host has)

I'm currently running Seafile on CentOS 7 with 12 GB of RAM available. Whenever I upload or download a file, the file is cached in RAM, and the amount cached is directly proportional to how much of the file has been transferred. The problem is that when I try to up/download a file larger than 12 GB, my machine starts swapping and then reboots.

I tried setting the HDD to sync in /etc/fstab, which destroyed transfer speeds, but the problem still persisted.
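
For reference, this is roughly what the entry looked like; the device, mount point, and filesystem below are placeholders, not my actual setup:

# /etc/fstab entry with the sync mount option (device and mount point are placeholders)
/dev/sdb1   /srv/seafile-data   ext4   defaults,sync   0   2

With sync, every write is flushed straight to disk, which explains the speed hit.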

Can I somehow get the RAM to flush every so often?

Can I somehow get Seafile to avoid caching in RAM and write directly to disk?

Please help, and thank you for your time!

You can simply use the client to sync; uploading via the browser requires RAM. I don't know why it's like this, but I'd also like to see a change here. The browser could detect the amount of RAM available and then split the file for upload.

@daniel.pan Please add this to the list of features for future releases. Uploading large files should not go through RAM; the files should instead be split and uploaded in segments that fit into RAM, or something along those lines.

Yes! I feel this is a priority, and I was surprised when my searches for solutions online came up empty, as if nobody has had this problem until now. Folder upload in the web GUI should also be a feature for future releases. I've tried to drag and drop a folder, hoping it would keep subdirectories intact, but instead it just transfers the files within the subdirectories on their own. Strange.

Anyway, thanks for clearing up the fact that there is no workaround as of yet!

It is possible already! It was Pro-only in the past but was made available to CE as well. Whether it works depends on the browser; Chrome/Chromium definitely works. :slight_smile:

I don't know about the subfolders for now.
I'd also like to see an unzip option in the web GUI.

UPDATE: @DerDanilo

Even when using the desktop client to sync the folder I want uploaded, the problem of caching in RAM is still present. How is anyone uploading large files???

This is wrong. The client likely uploads 8 MiB at most. It could be a little more (if you upload files of several hundred GiB), but it is definitely far below 100 MiB.

Do you use nginx or Apache? Have you checked that they don't buffer too much (only relevant for web uploads, not for client synchronization)?

@shoeper

I'm not using nginx or Apache.

In the normal case, it should not buffer the whole file in RAM, so the problem is specific to @motika's configuration.

Can you suggest a starting place for me to look? It's running inside Proxmox; could that be an issue?

Proxmox is not the issue; I'm running it on several machines with no problems.

You need to use nginx or Apache.

@daniel.pan We should change the manual to disallow using Seafile without nginx or Apache. This just causes problems, and people don't understand that they should not run it without a local reverse proxy.

100% agree!!!

In the normal case, it should not buffer the whole file in RAM.

I confirm that the “large file upload problem” has nothing to do with RAM (increasing RAM from 2 GB to 4 GB didn't solve the problem; I still couldn't sync or upload a 3 GB file via the Seafile client). In my case such uploads failed because my system partition was too small (8 GB). I doubled its size to 16 GB and now syncing 3 GB files works fine.

Anyway, I wonder why this happens, because I have symlinked the whole seafile directory to another virtual disk of around 4 TB. As far as I could tell, the used disk space on the system partition increased during the upload attempt of the 3 GB file. Is there another directory for temp files that Seafile uses during the upload process?

Are you using nginx? The web server usually buffers the uploaded file before handing it over to Seafile:
https://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_buffer_size
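
As a rough sketch of where to look (the location path, port, and values are only examples, assuming a proxy setup along the lines of the Seafile manual):

# illustrative nginx settings for the Seafile file server location
location /seafhttp {
    proxy_pass http://127.0.0.1:8082;
    client_max_body_size 0;                # no size limit enforced by nginx
    client_body_buffer_size 128k;          # keep only a small part of the body in RAM
    client_body_temp_path /var/cache/nginx/client_temp;  # larger bodies spill to disk here
    proxy_request_buffering off;           # stream the upload to Seafile instead of buffering it first (nginx >= 1.7.11)
}

The point is that nginx keeps request bodies in RAM only up to client_body_buffer_size and writes anything larger to the temp path on disk, so both the buffer size and the free space on that partition matter.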

Sorry, I forgot to reply, and thank you Weeehe!

I still have problems syncing files larger than 2 GB from time to time. I haven't adjusted the ngx_http_core_module settings yet, because according to https://download.seafile.com/published/seafile-manual/deploy/deploy_with_nginx.md my setup should handle any file size without problems. However, the explanation in the official manual is somewhat confusing:

For

client_max_body_size 0;

it is written:

The nginx setting client_max_body_size is 1M by default. Uploading a file bigger than this limit will give you the error HTTP 413 (“Request Entity Too Large”).

…but the config template sets client_max_body_size 0;, not the 1M default.

You should use 0 to disable this feature or write the same value as for the parameter max_upload_size in section [fileserver] of seafile.conf. Client uploads are only partly affected by this limit. With a limit of 100 MiB they can safely upload files of any size.

So what should I do: use client_max_body_size 0; as the first sentence indicates, or client_max_body_size 100; (matching max_upload_size) as the last sentence indicates? And what happens if the feature is disabled, as in the config template?

This is the nginx default as stated.

This is about uploads with the Seafile client. When the limit is 100 MiB, the client can upload files of any size because it splits them up and each individual block stays below 100 MiB. When using Seahub, WebDAV, or the app, the limit will be 100 MiB.
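
To make that concrete, a sketch of the two consistent setups (the 100 MB value is just an example, not a recommendation):

# Option A: no limit in nginx, let Seafile enforce its own limit (or none)
client_max_body_size 0;

# Option B: matching limits in nginx and seafile.conf
client_max_body_size 100m;      # nginx, 100 MiB
# and in seafile.conf:
# [fileserver]
# max_upload_size = 100         # value in MB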

Most likely your problem with larger files is that the server has to index them after upload and the request times out. In my experience they show up in Seafile after some time has passed.
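
If that is the case, raising the proxy timeouts for the file server location is worth a try; a sketch, where the location path, port, and values are examples rather than a recommendation:

# illustrative timeouts for the Seafile file server location in nginx
location /seafhttp {
    proxy_pass http://127.0.0.1:8082;
    proxy_read_timeout 1200s;    # give the server time to index large uploads
    proxy_send_timeout 1200s;
    send_timeout       1200s;
}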
