I’ve deployed Seafile Server Professional (latest release) on Ubuntu Server 18.04 behind nginx, created and populated several repos/libraries, and am attempting to download and sync them externally using the CLI client on another Ubuntu 18.04 box.
My repos vary in size from 7.7 GB to 140 GB.
Repos below 50 GB download without issue, but anything above that size fails with an ‘unknown error’, with client log entries like the example below.
[04/24/20 10:43:19] http-tx-mgr.c(4145): Bad response code for POST https://seafile.mydomain.com/seafhttp/repo/60b9212b-9be3-4525-8d3d-3d17f917adb8/pack-fs/: 0.
[04/24/20 10:43:19] http-tx-mgr.c(4572): Failed to get fs objects for repo 60b9212b on server https://seafile.mydomain.com.
[04/24/20 10:43:19] http-tx-mgr.c(1157): Transfer repo '60b9212b': ('normal', 'fs') --> ('error', 'finished')
[04/24/20 10:43:19] clone-mgr.c(697): Transition clone state for 60b9212b from [fetch] to [error]: Unknown error.
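From what I can tell, a response code of 0 means the client never received an HTTP status line at all, i.e. the connection was cut off mid-request rather than the server answering with an error. Since my seafileformat log line ends with $upstream_response_time, I used a small helper (show_packfs is just a name I made up) to check whether the failing pack-fs requests line up with a timeout:

```shell
# show_packfs LOGFILE
# Print the upstream response time (the last field of my seafileformat
# log line) for the most recent pack-fs requests, to see whether the
# failing POSTs are running into a timeout. show_packfs is my own
# helper name; the log path below comes from my nginx config.
show_packfs() {
  grep 'pack-fs' "$1" | tail -n 5 | awk '{print $NF}'
}

# usage: show_packfs /var/log/nginx/seafhttp.access.log
```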
Based on my reading so far, I believe the problem lies in my nginx configuration, which is below:
log_format seafileformat '$http_x_forwarded_for $remote_addr [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $upstream_response_time';
server {
    server_name seafile.mydomain.com;

    proxy_set_header X-Forwarded-For $remote_addr;
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";
    server_tokens off;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Forwarded-Proto https;
        proxy_read_timeout 1200s;
        proxy_buffer_size 16k;
        proxy_busy_buffers_size 16k;

        # used for view/edit office file via Office Online Server
        client_max_body_size 0;

        # logs
        access_log /var/log/nginx/seahub.access.log seafileformat;
        error_log /var/log/nginx/seahub.error.log;
    }

    location /seafhttp {
        rewrite ^/seafhttp(.*)$ $1 break;
        proxy_pass http://127.0.0.1:8082;
        client_max_body_size 0;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_connect_timeout 36000s;
        proxy_read_timeout 36000s;
        send_timeout 36000s;
        proxy_request_buffering off;
        proxy_buffer_size 16k;
        proxy_busy_buffers_size 16k;
        access_log /var/log/nginx/seafhttp.access.log seafileformat;
        error_log /var/log/nginx/seafhttp.error.log;
    }

    location /media {
        root /opt/seafile/seafile-server-latest/seahub;
    }

    location /seafdav {
        fastcgi_pass 127.0.0.1:8080;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_script_name;
        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;
        fastcgi_param REMOTE_ADDR $remote_addr;
        client_max_body_size 0;
        access_log /var/log/nginx/seafdav.access.log seafileformat;
        error_log /var/log/nginx/seafdav.error.log;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/seafile.mydomain.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/seafile.mydomain.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = seafile.mydomain.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name seafile.mydomain.com;
    return 404; # managed by Certbot
}
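One thing I've wondered about (untested, so I may be off base) is whether the 16k proxy buffers in the /seafhttp block are a problem for the pack-fs response of a large repo, since nginx would spool anything bigger than the buffers to a temp file. The variant I've been considering for that location would disable response buffering so the reply streams straight through:

```nginx
location /seafhttp {
    rewrite ^/seafhttp(.*)$ $1 break;
    proxy_pass http://127.0.0.1:8082;
    client_max_body_size 0;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_connect_timeout 36000s;
    proxy_read_timeout 36000s;
    send_timeout 36000s;
    # already disabled for uploads (request bodies)
    proxy_request_buffering off;
    # candidate change: stream large pack-fs responses to the client
    # instead of buffering them through the 16k buffers / temp files
    proxy_buffering off;
    access_log /var/log/nginx/seafhttp.access.log seafileformat;
    error_log /var/log/nginx/seafhttp.error.log;
}
```

I haven't applied this yet, so I'd welcome a sanity check on whether it's even pointing in the right direction.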
I’m a little out of my league here, so I would appreciate any pointers from more experienced people.