Upload stuck at exactly 9.5MB

I’m trying to upload my photos directory. I created a Library for it, and I’m uploading a folder that contains multiple sub-folders with the pictures divided by date.

There are about 30,000 files in there, but only 7.4GB of data. I can’t upload them through the WebUI since it can’t handle more than 1,000 files at a time, but uploading through the client seems to work.

It starts fine and uploads steadily until it reaches 9.5MB, then it stops. It’s always on the same file. Since I didn’t really care about that file, I deleted it. Then it stopped on another file, so I deleted that one too. Then it stopped on yet another file, but always at 9.5MB.

I left it there for an hour and it didn’t move. The files I deleted were about 5MB each, so nothing big, just regular pictures.

I launched the client from a terminal to see if there was any output. Nothing. The files that did get uploaded can be accessed through the WebUI, but the client just sits there.

I only very recently began using Seafile, and while I have synced some other files before for testing (which worked fine), this is the first time I’m uploading something “big”. For all I know, it could get stuck at 9.5MB when uploading something else too.

Before you ask, no, the disk on the server is not running out of space. There’s over 800GB free.

How can I debug this?

I’m not an expert with this, but could it be that it’s not even Seafile’s fault?
For example, some configuration in your reverse proxy (if you use one)?
Have you tried uploading something from a different machine to see that it’s not a client-side issue?

Again, I’m just spitballing things I would try, I don’t know at all what’s happening.

I am using a reverse proxy. I didn’t configure it in any different way than I would any other service. I use Nginx Proxy Manager. I have just forwarded to port 80 and enabled all the toggles in the first tab. Then in the SSL tab I selected a wildcard certificate I already have for my domain and enabled all toggles there too. It works, I can connect to the WebUI.

I am using Docker to run everything. Seafile, the database, and memcached are all in their own network with no open ports. Then I add the Nginx Proxy Manager container to the Seafile network (and to the networks of all the other services I’m hosting). This lets NPM reach the Seafile container, so I can configure NPM through the UI to forward to Seafile by hostname, since the two containers share a network.
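For context, the attachment looks roughly like this in NPM’s own compose file (a sketch only; the image tag, port mappings, and volume names here are illustrative defaults, not my exact setup):

```yaml
# Sketch: joining NPM to the externally created "seafile" network so it can
# reach the Seafile container by hostname. Ports/volumes are NPM's defaults.
services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    ports:
      - "80:80"    # HTTP
      - "443:443"  # HTTPS
      - "81:81"    # NPM admin UI
    volumes:
      - npm_data:/data
      - npm_letsencrypt:/etc/letsencrypt
    networks:
      - seafile    # plus the networks of the other proxied services

volumes:
  npm_data:
  npm_letsencrypt:

networks:
  seafile:
    external: true  # created by the Seafile compose file below
```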

Here is my docker compose for Seafile, in case it helps:

version: "3.8"
services:
  seafile:
    image: seafileltd/seafile-mc:latest
    hostname: seafile
    container_name: seafile
    environment:
      - DB_HOST=seafile_db
      - DB_ROOT_PASSWD=supersecretpassword
      - TIME_ZONE=${TZ}
      - PUID=${PUID}
      - PGID=${PGID}
      - SEAFILE_SERVER_LETSENCRYPT=false
      - SEAFILE_SERVER_HOSTNAME=mysupersecrethostname
    networks:
      - seafile
    volumes:
      - seafile:/shared
    restart: unless-stopped
    depends_on:
      - seafile_db
      - seafile_memcached
  seafile_db:
    image: mariadb:10.6
    hostname: seafile_db
    container_name: seafile_db
    environment:
      - MYSQL_ROOT_PASSWORD=supersecretpassword
    volumes:
      - seafile_db:/var/lib/mysql
    networks:
      - seafile
  seafile_memcached:
    image: memcached:1.6.18
    hostname: seafile_memcached
    container_name: seafile_memcached
    entrypoint: memcached -m 256
    networks:
      - seafile

volumes:
  seafile:
    name: seafile
  seafile_db:
    name: seafile_db

networks:
  seafile:
    name: seafile
    driver: "bridge"

If you edit the host configuration in Nginx Proxy Manager, you can add the following two lines:

client_max_body_size 0;
proxy_max_temp_file_size 0;

I added the following in the advanced tab for the proxy host in NPM:

proxy_pass_header Authorization;
proxy_pass_header Server;
proxy_buffering off;
proxy_redirect off;
client_max_body_size 0;
proxy_max_temp_file_size 0;

The two lines you provided alone didn’t do it, although I admit I didn’t restart the NPM container afterwards. With the lines above, I did restart the container, and it does seem to be working! It did get stuck on some other random picture, but this time it gave me the option to skip it. I don’t know what’s up with that, but it seems to be uploading the files just fine now.

Maybe I don’t need all of the lines here, though; maybe those two are enough. I’ll test later. Thank you!

EDIT: Hmm, seems to be stuck at 251.4MB now…

EDIT 2: I confirmed that client_max_body_size and proxy_max_temp_file_size by themselves do not fix the issue; it still gets stuck at 9.5MB even after restarting the container. Adding proxy_buffering off does solve it, though. I only have these three lines present now, and it got past the 251MB mark too. Here’s hoping it finishes. If not, maybe setting proxy_request_buffering to off will help.
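For anyone following along, here is a summary of the directives involved so far, as they’d appear in NPM’s Advanced tab:

```nginx
proxy_buffering off;          # the directive that actually unblocked the 9.5MB stall
client_max_body_size 0;       # no limit on the request body size
proxy_max_temp_file_size 0;   # don't spool buffered responses to temp files

# If it stalls again, the next thing to try:
proxy_request_buffering off;  # stream the upload to the backend as it arrives
```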

EDIT 3: Hit another snag at 553.4MB… I’ll try that other setting.

EDIT 4: Well, no. It got stuck again at 215MB. There’s no rhyme or reason to it; it just seems to get stuck at random points now. I don’t know how to proceed, so requesting further help.

Have you checked Seafile’s upload/download settings as well?

Change upload/download settings.

[fileserver]
# Set maximum upload file size to 200M.
# If not configured, there is no file size limit for uploading.
max_upload_size=200

# Set maximum download directory size to 200M.
# Default is 100M.
max_download_dir_size=200

There are some timeouts for Nginx that might be useful:

proxy_connect_timeout 36000s;
proxy_read_timeout 36000s;
proxy_send_timeout 36000s;
send_timeout 36000s;

Thank you. It got stuck again at 531.6MB. I added the lines to the Seafile settings file and restarted the server as instructed on that page. I also added the timeouts to the proxy settings. It behaves the same: it works fine until it just halts at some point for no apparent reason. Is there a log somewhere I can read to see what’s going on?

The log files for Seafile are outlined here.
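Since you’re on the Docker deployment, the Seafile logs should end up under the mounted /shared volume rather than the paths the manual lists for a bare-metal install. These locations are my assumption from the seafile-mc image layout, so verify them inside the container:

```
/shared/logs/seafile/seafile.log    # fileserver (seaf-server)
/shared/logs/seafile/seahub.log     # web UI (Django)
/shared/logs/var-log/nginx/         # the container's bundled Nginx
```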

The log files for Nginx are specified in the config file. For example:

access_log /var/log/nginx/seafhttp.access.log seafileformat;
error_log /var/log/nginx/seafhttp.error.log;

I’ve checked the logs. The only relevant thing in seafile.log is a line displayed when I start the upload saying that the root folder doesn’t exist (I assume it creates it then). Throughout the whole duration of the upload, not a single line is displayed. When it got stuck (it got further this time, 769.6MB), no new information appeared.

I did notice that the seahub.log file shows an error from memcached:

  File "/opt/seafile/seafile-server-10.0.1/seahub/thirdpart/django/core/cache/backends/memcached.py", line 149, in set_many
    failed_keys = self._cache.set_multi(safe_data, self.get_backend_timeout(timeout))
pylibmc.ServerDown: error 47 from memcached_set_multi: SERVER HAS FAILED AND IS DISABLED UNTIL TIMED RETRY
2023-05-19 20:26:33,168 [ERROR] django.pylibmc:132 get MemcachedError: error 47 from memcached_get(:1:ENABLE_ENCRYPTED_LIBRARY): SERVER HAS FAILED AND IS DISABLED UNTIL TIMED RETRY
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/django_pylibmc/memcached.py", line 130, in get
    return super(PyLibMCCache, self).get(key, default, version)
  File "/opt/seafile/seafile-server-10.0.1/seahub/thirdpart/django/core/cache/backends/memcached.py", line 77, in get
    return self._cache.get(key, default)
pylibmc.ServerDown: error 47 from memcached_get(:1:ENABLE_ENCRYPTED_LIBRARY): SERVER HAS FAILED AND IS DISABLED UNTIL TIMED RETRY

Don’t know what to make of that.

I looked for Nginx Proxy Manager’s own log but couldn’t find the file.

Now that I think about it, Seafile serves a website for the WebUI, so it must have its own web server. Is it also Nginx? Apache? Something else? Maybe the logs for that contain some info.

You might be thinking of gunicorn.

Seahub: the website. The Seafile server package contains a lightweight Python HTTP server, gunicorn, that serves the website. Seahub runs as an application within gunicorn.

At the end of the gunicorn.conf.py file there is:

# for file upload, we need a longer timeout value (default is only 30s, too short)
timeout = 1200

limit_request_line = 8190

So that isn’t it either…

Maybe post your Nginx config so one of the Nginx people can look it over. Can you run at least temporarily without the proxy manager for a test?

I’ll do that, but after the last attempt where it got stuck again, I cancelled and started uploading the files again without deleting what was already uploaded. It has gotten much further now, 1.7GB, and it hasn’t stopped yet. Hopefully it finishes.

Anyway, here’s the Nginx config file. It was autogenerated by Nginx Proxy Manager and is reproduced as-is. I know the indentation is a bit wonky, but that’s just how it was generated, with all that whitespace and everything. The only modification I made was to replace my domain name with something generic:

server {
  set $forward_scheme http;
  set $server         "seafile";
  set $port           80;

  listen 8080;
listen [::]:8080;

listen 4443 ssl http2;
listen [::]:4443 ssl http2;


  server_name myserverdomain.com;


  # Let's Encrypt SSL
  include conf.d/include/letsencrypt-acme-challenge.conf;
  include conf.d/include/ssl-ciphers.conf;
  ssl_certificate /etc/letsencrypt/live/npm-2/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/npm-2/privkey.pem;




# Asset Caching
  include conf.d/include/assets.conf;


  # Block Exploits
  include conf.d/include/block-exploits.conf;



  # HSTS (ngx_http_headers_module is required) (63072000 seconds = 2 years)
  add_header Strict-Transport-Security "max-age=63072000;includeSubDomains; preload" always;





    # Force SSL
    include conf.d/include/force-ssl.conf;




proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $http_connection;
proxy_http_version 1.1;


  access_log /data/logs/proxy-host-17_access.log proxy;
  error_log /data/logs/proxy-host-17_error.log warn;

proxy_request_buffering off;
proxy_buffering off;
client_max_body_size 0;
proxy_max_temp_file_size 0;
proxy_connect_timeout 36000s;
proxy_read_timeout 36000s;
proxy_send_timeout 36000s;
send_timeout 36000s;





  location / {





  # HSTS (ngx_http_headers_module is required) (63072000 seconds = 2 years)
  add_header Strict-Transport-Security "max-age=63072000;includeSubDomains; preload" always;





    
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $http_connection;
    proxy_http_version 1.1;
    

    # Proxy!
    include conf.d/include/proxy.conf;
  }


  # Custom
  include /data/nginx/custom/server_proxy[.]conf;
}

Any updates on the progress?

Wow! I’m not an Nginx guru by any stretch but I’ve not seen an Nginx config file like this one.

Except for the parts you’ve added in troubleshooting I would not have guessed that this would have worked for Seafile (or even addressed Seafile in its design).

Most likely the include files have something to add to the conversation. Otherwise I am at a loss to explain the absence of /seafhttp and /media, for example. Where are they hiding? Again, I am no expert here.

Comparing this to the sample provided in the Manual (and many others that appear in this forum) baffles me. Assuming the reverse proxy does work as you claim, can it be doing the same thing(s) that the sample or other working configurations do?

Here is another example of a “typical” Nginx configuration found in an online tutorial on how to install Seafile under Ubuntu 22.04.

The upload got to about 2.9GB and stopped again. The last include in the config file points to a file that doesn’t exist. So nothing there.

The one marked # Proxy! does exist. I would have included it before, but I hadn’t figured out what relative path it was referring to. I have now (it’s in /etc/nginx); here it is:

add_header       X-Served-By $host;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Scheme $scheme;
proxy_set_header X-Forwarded-Proto  $scheme;
proxy_set_header X-Forwarded-For    $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP          $remote_addr;
proxy_pass       $forward_scheme://$server:$port$request_uri;

I’ll paste the other files here as well. Sorry not to be of much help; I don’t really understand Nginx configuration that much:

letsencrypt-acme-challenge.conf

# Rule for legitimate ACME Challenge requests (like /.well-known/acme-challenge/xxxxxxxxx)
# We use ^~ here, so that we don't check other regexes (for speed-up). We actually MUST cancel
# other regex checks, because in our other config files have regex rule that denies access to files with dotted names.
location ^~ /.well-known/acme-challenge/ {
        # Since this is for letsencrypt authentication of a domain and they do not give IP ranges of their infrastructure
        # we need to open up access by turning off auth and IP ACL for this location.
        auth_basic off;
        auth_request off;
        allow all;

        # Set correct content type. According to this:
        # https://community.letsencrypt.org/t/using-the-webroot-domain-verification-method/1445/29
        # Current specification requires "text/plain" or no content header at all.
        # It seems that "text/plain" is a safe option.
        default_type "text/plain";

        # This directory must be the same as in /etc/letsencrypt/cli.ini
        # as "webroot-path" parameter. Also don't forget to set "authenticator" parameter
        # there to "webroot".
        # Do NOT use alias, use root! Target directory is located here:
        # /var/www/common/letsencrypt/.well-known/acme-challenge/
        root /data/letsencrypt-acme-challenge;
}

# Hide /acme-challenge subdirectory and return 404 on all requests.
# It is somewhat more secure than letting Nginx return 403.
# Ending slash is important!
location = /.well-known/acme-challenge/ {
        return 404;
}

ssl-ciphers.conf

ssl_session_timeout 5m;
ssl_session_cache shared:SSL:50m;

# intermediate configuration. tweak to your needs.
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384';
ssl_prefer_server_ciphers off;

assets.conf

location ~* ^.*\.(css|js|jpe?g|gif|png|webp|woff|eot|ttf|svg|ico|css\.map|js\.map)$ {
        if_modified_since off;

        # use the public cache
        proxy_cache public-cache;
        proxy_cache_key $host$request_uri;

        # ignore these headers for media
        proxy_ignore_headers Set-Cookie Cache-Control Expires X-Accel-Expires;

        # cache 200s and also 404s (not ideal but there are a few 404 images for some reason)
        proxy_cache_valid any 30m;
        proxy_cache_valid 404 1m;

        # strip this header to avoid If-Modified-Since requests
        proxy_hide_header Last-Modified;
        proxy_hide_header Cache-Control;
        proxy_hide_header Vary;

        proxy_cache_bypass 0;
        proxy_no_cache 0;

        proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504 http_404;
        proxy_connect_timeout 5s;
        proxy_read_timeout 45s;

        expires @30m;
        access_log  off;

        include conf.d/include/proxy.conf;
}

block-exploits.conf

## Block SQL injections
set $block_sql_injections 0;

if ($query_string ~ "union.*select.*\(") {
        set $block_sql_injections 1;
}

if ($query_string ~ "union.*all.*select.*") {
        set $block_sql_injections 1;
}

if ($query_string ~ "concat.*\(") {
        set $block_sql_injections 1;
}

if ($block_sql_injections = 1) {
        return 403;
}

## Block file injections
set $block_file_injections 0;

if ($query_string ~ "[a-zA-Z0-9_]=http://") {
        set $block_file_injections 1;
}

if ($query_string ~ "[a-zA-Z0-9_]=(\.\.//?)+") {
        set $block_file_injections 1;
}

if ($query_string ~ "[a-zA-Z0-9_]=/([a-z0-9_.]//?)+") {
        set $block_file_injections 1;
}

if ($block_file_injections = 1) {
        return 403;
}

## Block common exploits
set $block_common_exploits 0;

if ($query_string ~ "(<|%3C).*script.*(>|%3E)") {
        set $block_common_exploits 1;
}

if ($query_string ~ "GLOBALS(=|\[|\%[0-9A-Z]{0,2})") {
        set $block_common_exploits 1;
}

if ($query_string ~ "_REQUEST(=|\[|\%[0-9A-Z]{0,2})") {
        set $block_common_exploits 1;
}

if ($query_string ~ "proc/self/environ") {
        set $block_common_exploits 1;
}

if ($query_string ~ "mosConfig_[a-zA-Z_]{1,21}(=|\%3D)") {
        set $block_common_exploits 1;
}

if ($query_string ~ "base64_(en|de)code\(.*\)") {
        set $block_common_exploits 1;
}

if ($block_common_exploits = 1) {
        return 403;
}

## Block spam
set $block_spam 0;

if ($query_string ~ "\b(ultram|unicauca|valium|viagra|vicodin|xanax|ypxaieo)\b") {
        set $block_spam 1;
}

if ($query_string ~ "\b(erections|hoodia|huronriveracres|impotence|levitra|libido)\b") {
        set $block_spam 1;
}

if ($query_string ~ "\b(ambien|blue\spill|cialis|cocaine|ejaculation|erectile)\b") {
        set $block_spam 1;
}

if ($query_string ~ "\b(lipitor|phentermin|pro[sz]ac|sandyauer|tramadol|troyhamby)\b") {
        set $block_spam 1;
}

if ($block_spam = 1) {
        return 403;
}

## Block user agents
set $block_user_agents 0;

# Disable Akeeba Remote Control 2.5 and earlier
if ($http_user_agent ~ "Indy Library") {
        set $block_user_agents 1;
}

# Common bandwidth hoggers and hacking tools.
if ($http_user_agent ~ "libwww-perl") {
        set $block_user_agents 1;
}

if ($http_user_agent ~ "GetRight") {
        set $block_user_agents 1;
}

if ($http_user_agent ~ "GetWeb!") {
        set $block_user_agents 1;
}

if ($http_user_agent ~ "Go!Zilla") {
        set $block_user_agents 1;
}

if ($http_user_agent ~ "Download Demon") {
        set $block_user_agents 1;
}

if ($http_user_agent ~ "Go-Ahead-Got-It") {
        set $block_user_agents 1;
}

if ($http_user_agent ~ "TurnitinBot") {
        set $block_user_agents 1;
}

if ($http_user_agent ~ "GrabNet") {
        set $block_user_agents 1;
}

if ($block_user_agents = 1) {
        return 403;
}

force-ssl.conf

if ($scheme = "http") {
        return 301 https://$host$request_uri;
}

Those should be all of them. Since I have located the log file now, I’ll try to upload with the log open and see what I find.

There it goes:

I’ll let you know what happens whenever it gets stuck again.

There, it got stuck:

I left it still trying to upload all this time. The log gets filled with those requests to /api2/repos. The length of the request, though, seems to have shrunk to 520 for some reason. It has stayed that way for a while now.