So I’m trying to seed ~750 GB of data to Seafile. I have the data on the server in a folder and also available via the network. The problem is uploading a 16 GB file via browser or web client: it fails. As indicated earlier, seaf-import creates a null folder. What can I do to fix these issues? I’m using Debian 8 and nginx. Smaller folders/files work fine.
Have you checked the settings in seafile.conf:
max_upload_size
Maybe it’s already enough to increase the value here.
Maybe you also have to adjust other settings, e.g. in php.ini.
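For reference, a minimal sketch of what that could look like in seafile.conf (the values are just assumptions for roughly 20 GB uploads; the section is named [fileserver] in newer releases and [httpserver] in some older ones, so check which one your version uses):
[fileserver]
# maximum size of a single uploaded file, in MB (assumption: allow up to ~20 GB)
max_upload_size = 20000
# maximum total size of a folder download, in MB (optional, shown for context)
max_download_dir_size = 20000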
So, your suggestion didn’t solve my issue; libraries of ~15 GB still stop uploading. I did manage to use seaf-import.sh successfully after running seaf-fsck.
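Roughly what that looked like, for anyone finding this later (paths are placeholders and the flag names are from memory, so verify them against the -h output of both scripts on your version):
cd /opt/seafile/seafile-server-latest   # install directory is an assumption
./seaf-fsck.sh                          # check the libraries first (see -h for repair options)
./seaf-import.sh -p /srv/data/archive -n "Archive" -u admin@example.com
# -p: folder to import (placeholder path), -n: name of the new library, -u: owning user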
Uploading huge files over http/https almost always fails, which is not a seafile issue, as far as I see.
I’d always recommend uploading huge files with the client.
Why is that?
1. Uploading a multi-GB file, you will almost always exceed the capabilities of your web server. For example: Apache 2.2 and nginx prior to 1.8 (I believe) were not even able to handle request bodies bigger than 2 GB, even if you set the relevant directives to "0" (no limit). The sketch below shows the nginx directives in question.
2. Server configuration. Is there even enough temporary storage for you to use? On a poorly configured server there might be no limit at all on the temporary upload directory; for security reasons it is sometimes limited to just a few GB, or even only a few hundred MB.
3. Security reasons (see point 2). Allowing such big files will almost always cause security issues. Just think about multiple huge uploads at once: the server denies service to other users because it has to handle multiple GB of temporary data, cluttering RAM, disk I/O, etc.
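To make point 1 concrete, these are the nginx knobs that usually matter for big request bodies; a sketch, not a drop-in config (the path and timeout values are assumptions):
# inside http {} or the relevant server/location block
client_max_body_size   0;                     # 0 = no nginx-side limit on the request body
client_body_temp_path  /var/lib/nginx/body;   # where bodies get buffered (Debian's default, assumption)
client_body_timeout    300s;                  # slow uploads need generous timeouts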
What do you mean by "web client"?
Sorry - I meant to say seafile client or browser. I’m using nginx as a frontend.
The server has 8 GB of RAM and only runs Seafile. The folder is actually a collection of 1500 or so files. I’m switching to this from a Samba server.
If you use the Seafile client (and not the file browser in the web interface), it should work. Can you post your nginx conf?
user www-data www-data;
worker_processes 4;
events {
worker_connections 8096;
multi_accept on;
use epoll;
}
pid /var/run/nginx.pid;
worker_rlimit_nofile 40000;
http {
server_tokens off;
server_names_hash_bucket_size 128;
client_max_body_size 50M;
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
error_log /var/log/nginx/error.log warn;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
client_body_timeout 12;
client_header_timeout 12;
keepalive_timeout 15;
send_timeout 10;
# Fully disabled gzip compression to mitigate Django BREACH attack: https://www.djangoproject.com/weblog/2013/aug/06/breach-and-django/
gzip off;
#gzip_vary on;
#gzip_proxied expired no-cache no-store private auth any;
#gzip_comp_level 9;
#gzip_min_length 10240;
#gzip_buffers 16 8k;
#gzip_http_version 1.1;
#gzip_types text/plain text/css text/xml text/javascript application/javascript application/x-javascript application/xml font/woff2;
#gzip_disable "MSIE [1-6].";
include /etc/nginx/conf.d/*.conf;
map $scheme $php_https {
default off;
https on;
}
include perfect-forward-secrecy.conf;
}
Hey Gary,
and “Sorry!”. I didn’t want to let this topic die…
I guess what marcusm wanted you to post was the actual host config (the server block for your Seafile vhost), not the main nginx config file.
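For reference, a typical Seafile host config looks roughly like this; a minimal sketch assuming the default ports (Seahub on 8000, the file server on 8082), a hypothetical server name, and a standard install path:
server {
    listen 80;
    server_name seafile.example.com;               # hypothetical

    # Seahub, the web interface
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_read_timeout 1200s;
    }

    # Seafile file server -- uploads and downloads go through here
    location /seafhttp {
        rewrite ^/seafhttp(.*)$ $1 break;
        proxy_pass http://127.0.0.1:8082;
        client_max_body_size 0;                    # no nginx-side upload limit
        proxy_connect_timeout 36000s;
        proxy_read_timeout    36000s;
        proxy_send_timeout    36000s;
        send_timeout          36000s;
    }

    # static files served directly by nginx
    location /media {
        root /opt/seafile/seafile-server-latest/seahub;   # path is an assumption
    }
}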
If your issue still isn’t resolved, feel free to jump back in!
Yes, sorry, I have a lot on my plate currently. But it seems that @Gary_Balliet used the seafile-installer script, am I right?
I didn’t mean to blame you, Marcus. It’s just that there hadn’t been an answer for so long… I just wanted to make sure the OP gets the problem solved…
Agh! I misread the OP’s first post…
Sending a 16 GB file via HTTP(S) MUST fail, imho.
Not immediately, but there are a couple of spots where a server system might just not accept that amount of data.
Here we already find one in the nginx.conf: client_max_body_size 50M;
will cause uploads larger than 50 MB to be aborted.
Even if this is set to 0 in a Seafile-specific server block, there is still the issue that older Apache and nginx versions were not able to accept request bodies larger than roughly 2 GB, and I believe current versions will fail at about 4 GB…
Even if this were not the problem, there’s also the possibility that the temporary partition (e.g. /tmp) on a Linux system is smaller than 16 GB.
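A quick way to check how much room those temp locations actually have (the paths are assumptions; Debian’s nginx buffers request bodies under /var/lib/nginx by default, other distros differ):
# show free space on the locations an upload might be buffered to
df -h /tmp /var/lib/nginx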
Putting the possible file size issues aside, there is also the TIME component.
Sending a 16 GB file takes close to forever in web server terms.
On an uplink of about 2.5 Mbit/s, uploading a file of this size would take nearly 14 hours (16 GB ≈ 128,000 Mbit; 128,000 Mbit ÷ 2.5 Mbit/s ≈ 51,200 s ≈ 14.2 hours).
I could go on like this with some more spots that might lead to a failure. But I think this is more than enough to make the point clear.
This is not a seafile issue.
I’d recommend using the client. It splits files into chunks and uploads them in sequence.