Upload stuck at exactly 9.5MB

I added the following in the advanced tab for the proxy host in NPM:

proxy_pass_header Authorization;
proxy_pass_header Server;
proxy_buffering off;
proxy_redirect off;
client_max_body_size 0;
proxy_max_temp_file_size 0;

The two lines you provided didn’t do it on their own, although I admit I didn’t restart the NPM container afterwards. With the lines above, I did restart the container, and it does seem to be working! It did get stuck on some other random picture for some reason, but this time it gave me the option to skip it. I don’t know what’s up with that, but otherwise it seems to be uploading the files just fine.

Maybe I don’t need all of these lines, though; maybe those two are enough. I’ll test later. Thank you!

EDIT: Hmm, seems to be stuck at 251.4MB now…

EDIT 2: I confirmed that client_max_body_size and proxy_max_temp_file_size by themselves do not fix the issue; it still gets stuck at 9.5MB even after restarting the container. Adding “proxy_buffering off” does solve it, though. I only have these three lines present now, and it got past the 251MB mark too. Here’s hoping it finishes. If not, then maybe setting “proxy_request_buffering” to off may help.
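For reference, the three lines I have in the advanced tab now, plus the untested fourth candidate, are:

```nginx
# These three are what I have now; "proxy_buffering off" is the one that
# actually gets uploads past the 9.5MB mark.
proxy_buffering off;
client_max_body_size 0;
proxy_max_temp_file_size 0;

# Untested candidate: also stream request bodies straight to the backend.
# proxy_request_buffering off;
```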

EDIT 3: Hit another snag at 553.4MB… I’ll try that other setting.

EDIT 4: Well, no. It got stuck again, at 215MB. There’s no rhyme or reason to it; it just seems to get stuck at random points now. I don’t know how to proceed, so I’m requesting further help.

Have you checked Seafile’s upload/download settings as well?

Change upload/download settings.

[fileserver]
# Set maximum upload file size to 200M.
# If not configured, there is no file size limit for uploading.
max_upload_size=200

# Set maximum download directory size to 200M.
# Default is 100M.
max_download_dir_size=200
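Conversely, if you want effectively unlimited uploads, the limit can be left unset (per the comment above) or raised well beyond the file sizes involved, for example:

```ini
[fileserver]
# Value is in MB; leaving max_upload_size unconfigured removes the limit entirely.
max_upload_size=10000
```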

There are some timeouts for Nginx that might be useful:

proxy_connect_timeout 36000s;
proxy_read_timeout 36000s;
proxy_send_timeout 36000s;
send_timeout 36000s;

Thank you. It got stuck again at 531.6MB. I added the lines to the Seafile settings file and restarted the server as instructed on that page. I also added the timeouts to the proxy settings. It seems to be behaving the same: it works just fine until it halts at some point for no apparent reason. Is there some log somewhere I can read to see what’s going on?

The log files for Seafile are outlined here.

The log files for Nginx are specified in the config file. For example:

access_log /var/log/nginx/seafhttp.access.log seafileformat;
error_log /var/log/nginx/seafhttp.error.log;

I’ve checked the logs. The only relevant thing in seafile.log is a line displayed when I start the upload saying that the root folder doesn’t exist (I assume it creates it then). Throughout the whole duration of the upload, not a single line is displayed. When it got stuck (it reached further this time, 769.6MB), no new information was displayed.

I did notice that the seahub.log file displayed an error from memcached:

  File "/opt/seafile/seafile-server-10.0.1/seahub/thirdpart/django/core/cache/backends/memcached.py", line 149, in set_many
    failed_keys = self._cache.set_multi(safe_data, self.get_backend_timeout(timeout))
pylibmc.ServerDown: error 47 from memcached_set_multi: SERVER HAS FAILED AND IS DISABLED UNTIL TIMED RETRY
2023-05-19 20:26:33,168 [ERROR] django.pylibmc:132 get MemcachedError: error 47 from memcached_get(:1:ENABLE_ENCRYPTED_LIBRARY): SERVER HAS FAILED AND IS DISABLED UNTIL TIMED RETRY
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/django_pylibmc/memcached.py", line 130, in get
    return super(PyLibMCCache, self).get(key, default, version)
  File "/opt/seafile/seafile-server-10.0.1/seahub/thirdpart/django/core/cache/backends/memcached.py", line 77, in get
    return self._cache.get(key, default)
pylibmc.ServerDown: error 47 from memcached_get(:1:ENABLE_ENCRYPTED_LIBRARY): SERVER HAS FAILED AND IS DISABLED UNTIL TIMED RETRY

Don’t know what to make of that.
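From what I can tell, “SERVER HAS FAILED AND IS DISABLED UNTIL TIMED RETRY” generally means pylibmc couldn’t reach memcached at all, rather than a data error. One thing I could check is whether memcached is even reachable; a small probe like this (a sketch, assuming memcached’s default 127.0.0.1:11211 — adjust to wherever the container exposes it) would tell:

```python
import socket

def memcached_alive(host="127.0.0.1", port=11211, timeout=2.0):
    """Return True if a TCP connection to memcached succeeds and it
    answers the ASCII 'version' command."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.sendall(b"version\r\n")
            reply = s.recv(64)
            return reply.startswith(b"VERSION")
    except OSError:
        return False

if __name__ == "__main__":
    print("memcached reachable:", memcached_alive())
```

If this prints False, seahub’s cache backend is simply unreachable, which would explain the error 47s.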

I looked for the log for Nginx Proxy Manager. Couldn’t find the file.

Now that I think about it, Seafile serves a website for the WebUI, so it must have its own web server. Is it also using Nginx? Is it Apache or something else? Maybe the logs for that contain some info.

You might be thinking of gunicorn.

Seahub: the website. The Seafile server package contains a light-weight Python HTTP server, gunicorn, that serves the website. Seahub runs as an application within gunicorn.

At the end of the gunicorn.conf.py file there is:

# for file upload, we need a longer timeout value (default is only 30s, too short)
timeout = 1200

limit_request_line = 8190

So that isn’t it either…

Maybe post your Nginx config so one of the Nginx people can look it over. Can you run at least temporarily without the proxy manager for a test?

I’ll do that, but after the last attempt where it got stuck again, I cancelled and restarted the upload without deleting whatever was already uploaded. It has gotten much further now, to 1.7GB, and it hasn’t stopped yet. Hopefully it finishes.

Anyway, here’s the Nginx config file. This was autogenerated by Nginx Proxy Manager. The file is as is; I know the indentation is a bit wonky, but that’s just how it was generated, with all that whitespace and everything. The only modification I made was to replace my domain name with something generic:

server {
  set $forward_scheme http;
  set $server         "seafile";
  set $port           80;

  listen 8080;
listen [::]:8080;

listen 4443 ssl http2;
listen [::]:4443 ssl http2;


  server_name myserverdomain.com;


  # Let's Encrypt SSL
  include conf.d/include/letsencrypt-acme-challenge.conf;
  include conf.d/include/ssl-ciphers.conf;
  ssl_certificate /etc/letsencrypt/live/npm-2/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/npm-2/privkey.pem;




# Asset Caching
  include conf.d/include/assets.conf;


  # Block Exploits
  include conf.d/include/block-exploits.conf;



  # HSTS (ngx_http_headers_module is required) (63072000 seconds = 2 years)
  add_header Strict-Transport-Security "max-age=63072000;includeSubDomains; preload" always;





    # Force SSL
    include conf.d/include/force-ssl.conf;




proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $http_connection;
proxy_http_version 1.1;


  access_log /data/logs/proxy-host-17_access.log proxy;
  error_log /data/logs/proxy-host-17_error.log warn;

proxy_request_buffering off;
proxy_buffering off;
client_max_body_size 0;
proxy_max_temp_file_size 0;
proxy_connect_timeout 36000s;
proxy_read_timeout 36000s;
proxy_send_timeout 36000s;
send_timeout 36000s;





  location / {





  # HSTS (ngx_http_headers_module is required) (63072000 seconds = 2 years)
  add_header Strict-Transport-Security "max-age=63072000;includeSubDomains; preload" always;





    
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $http_connection;
    proxy_http_version 1.1;
    

    # Proxy!
    include conf.d/include/proxy.conf;
  }


  # Custom
  include /data/nginx/custom/server_proxy[.]conf;
}

Any updates on the progress?

Wow! I’m not an Nginx guru by any stretch, but I’ve not seen an Nginx config file like this one.

Except for the parts you’ve added while troubleshooting, I would not have guessed that this would work for Seafile (or even that it addressed Seafile in its design).

Most likely the include files have something to add to the conversation. Otherwise I am at a loss to explain the absence of /seafhttp and /media, for example. Where are they hiding? Again, I am no expert here.

Comparing this to the sample provided in the manual (and many others that appear in this forum) baffles me. Assuming the reverse proxy does work as you claim, can it be doing the same things that the sample or other working configurations do?

Here is another example of a “typical” Nginx configuration found in an online tutorial on how to install Seafile under Ubuntu 22.04.
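To make the comparison concrete, the manual’s sample splits the proxy into separate locations, roughly like this (a from-memory sketch using the manual’s default ports and paths, not something to paste verbatim):

```nginx
server {
    listen 80;
    server_name seafile.example.com;

    # Seahub web UI, served by gunicorn
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # Seafile file server: uploads and downloads go through here
    location /seafhttp {
        rewrite ^/seafhttp(.*)$ $1 break;
        proxy_pass http://127.0.0.1:8082;
        client_max_body_size 0;
        proxy_request_buffering off;
    }

    # Static assets
    location /media {
        root /opt/seafile/seafile-server-latest/seahub;
    }
}
```

Note that upload traffic has its own /seafhttp location with its own body-size and buffering settings, which your autogenerated config has no equivalent of.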

The upload got to about 2.9GB and stopped again. The last include in the config file points to a file that doesn’t exist. So nothing there.

The one marked #Proxy! does exist. I would have included it before, but I hadn’t figured out what relative path it was referring to. I have now (it’s in /etc/nginx); here it is:

add_header       X-Served-By $host;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Scheme $scheme;
proxy_set_header X-Forwarded-Proto  $scheme;
proxy_set_header X-Forwarded-For    $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP          $remote_addr;
proxy_pass       $forward_scheme://$server:$port$request_uri;

I’ll paste the other files here as well. Sorry not to be of much help; I don’t really understand Nginx configuration all that well:

letsencrypt-acme-challenge.conf

# Rule for legitimate ACME Challenge requests (like /.well-known/acme-challenge/xxxxxxxxx)
# We use ^~ here, so that we don't check other regexes (for speed-up). We actually MUST cancel
# other regex checks, because in our other config files have regex rule that denies access to files with dotted names.
location ^~ /.well-known/acme-challenge/ {
        # Since this is for letsencrypt authentication of a domain and they do not give IP ranges of their infrastructure
        # we need to open up access by turning off auth and IP ACL for this location.
        auth_basic off;
        auth_request off;
        allow all;

        # Set correct content type. According to this:
        # https://community.letsencrypt.org/t/using-the-webroot-domain-verification-method/1445/29
        # Current specification requires "text/plain" or no content header at all.
        # It seems that "text/plain" is a safe option.
        default_type "text/plain";

        # This directory must be the same as in /etc/letsencrypt/cli.ini
        # as "webroot-path" parameter. Also don't forget to set "authenticator" parameter
        # there to "webroot".
        # Do NOT use alias, use root! Target directory is located here:
        # /var/www/common/letsencrypt/.well-known/acme-challenge/
        root /data/letsencrypt-acme-challenge;
}

# Hide /acme-challenge subdirectory and return 404 on all requests.
# It is somewhat more secure than letting Nginx return 403.
# Ending slash is important!
location = /.well-known/acme-challenge/ {
        return 404;
}

ssl-ciphers.conf

ssl_session_timeout 5m;
ssl_session_cache shared:SSL:50m;

# intermediate configuration. tweak to your needs.
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384';
ssl_prefer_server_ciphers off;

assets.conf

location ~* ^.*\.(css|js|jpe?g|gif|png|webp|woff|eot|ttf|svg|ico|css\.map|js\.map)$ {
        if_modified_since off;

        # use the public cache
        proxy_cache public-cache;
        proxy_cache_key $host$request_uri;

        # ignore these headers for media
        proxy_ignore_headers Set-Cookie Cache-Control Expires X-Accel-Expires;

        # cache 200s and also 404s (not ideal but there are a few 404 images for some reason)
        proxy_cache_valid any 30m;
        proxy_cache_valid 404 1m;

        # strip this header to avoid If-Modified-Since requests
        proxy_hide_header Last-Modified;
        proxy_hide_header Cache-Control;
        proxy_hide_header Vary;

        proxy_cache_bypass 0;
        proxy_no_cache 0;

        proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504 http_404;
        proxy_connect_timeout 5s;
        proxy_read_timeout 45s;

        expires @30m;
        access_log  off;

        include conf.d/include/proxy.conf;
}

block-exploits.conf

## Block SQL injections
set $block_sql_injections 0;

if ($query_string ~ "union.*select.*\(") {
        set $block_sql_injections 1;
}

if ($query_string ~ "union.*all.*select.*") {
        set $block_sql_injections 1;
}

if ($query_string ~ "concat.*\(") {
        set $block_sql_injections 1;
}

if ($block_sql_injections = 1) {
        return 403;
}

## Block file injections
set $block_file_injections 0;

if ($query_string ~ "[a-zA-Z0-9_]=http://") {
        set $block_file_injections 1;
}

if ($query_string ~ "[a-zA-Z0-9_]=(\.\.//?)+") {
        set $block_file_injections 1;
}

if ($query_string ~ "[a-zA-Z0-9_]=/([a-z0-9_.]//?)+") {
        set $block_file_injections 1;
}

if ($block_file_injections = 1) {
        return 403;
}

## Block common exploits
set $block_common_exploits 0;

if ($query_string ~ "(<|%3C).*script.*(>|%3E)") {
        set $block_common_exploits 1;
}

if ($query_string ~ "GLOBALS(=|\[|\%[0-9A-Z]{0,2})") {
        set $block_common_exploits 1;
}

if ($query_string ~ "_REQUEST(=|\[|\%[0-9A-Z]{0,2})") {
        set $block_common_exploits 1;
}

if ($query_string ~ "proc/self/environ") {
        set $block_common_exploits 1;
}

if ($query_string ~ "mosConfig_[a-zA-Z_]{1,21}(=|\%3D)") {
        set $block_common_exploits 1;
}

if ($query_string ~ "base64_(en|de)code\(.*\)") {
        set $block_common_exploits 1;
}

if ($block_common_exploits = 1) {
        return 403;
}

## Block spam
set $block_spam 0;

if ($query_string ~ "\b(ultram|unicauca|valium|viagra|vicodin|xanax|ypxaieo)\b") {
        set $block_spam 1;
}

if ($query_string ~ "\b(erections|hoodia|huronriveracres|impotence|levitra|libido)\b") {
        set $block_spam 1;
}

if ($query_string ~ "\b(ambien|blue\spill|cialis|cocaine|ejaculation|erectile)\b") {
        set $block_spam 1;
}

if ($query_string ~ "\b(lipitor|phentermin|pro[sz]ac|sandyauer|tramadol|troyhamby)\b") {
        set $block_spam 1;
}

if ($block_spam = 1) {
        return 403;
}

## Block user agents
set $block_user_agents 0;

# Disable Akeeba Remote Control 2.5 and earlier
if ($http_user_agent ~ "Indy Library") {
        set $block_user_agents 1;
}

# Common bandwidth hoggers and hacking tools.
if ($http_user_agent ~ "libwww-perl") {
        set $block_user_agents 1;
}

if ($http_user_agent ~ "GetRight") {
        set $block_user_agents 1;
}

if ($http_user_agent ~ "GetWeb!") {
        set $block_user_agents 1;
}

if ($http_user_agent ~ "Go!Zilla") {
        set $block_user_agents 1;
}

if ($http_user_agent ~ "Download Demon") {
        set $block_user_agents 1;
}

if ($http_user_agent ~ "Go-Ahead-Got-It") {
        set $block_user_agents 1;
}

if ($http_user_agent ~ "TurnitinBot") {
        set $block_user_agents 1;
}

if ($http_user_agent ~ "GrabNet") {
        set $block_user_agents 1;
}

if ($block_user_agents = 1) {
        return 403;
}

force-ssl.conf

if ($scheme = "http") {
        return 301 https://$host$request_uri;
}

Those should be all. Since I have located the log file now, I’ll try to upload with the log open, and see what I find.

There it goes; the upload is running with the log open.

I’ll let you know what happens whenever it gets stuck again.

There, it got stuck.

I left it still trying to upload all this time. The log gets filled with repeated requests to /api2/repos. The length of the request, though, seems to have shrunk to 520 for some reason. It has stayed that way for a while now.
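To put numbers on it, I could summarize the access log with a small script like this (a sketch; it assumes the request line is quoted and followed by the status and a byte count, which is how these lines look):

```python
import re
from collections import Counter

# Matches the quoted request plus the status and byte count that follow it,
# e.g. ... "GET /api2/repos/ HTTP/1.1" 200 520
LINE_RE = re.compile(
    r'"(?P<method>[A-Z]+) (?P<path>\S+) HTTP/[^"]*" (?P<status>\d{3}) (?P<bytes>\d+|-)'
)

def summarize(lines):
    """Count requests per path and collect the byte counts logged for each."""
    counts = Counter()
    sizes = {}
    for line in lines:
        m = LINE_RE.search(line)
        if not m:
            continue
        path = m.group("path")
        counts[path] += 1
        if m.group("bytes") != "-":
            sizes.setdefault(path, set()).add(int(m.group("bytes")))
    return counts, sizes
```

Feeding it the lines of /data/logs/proxy-host-17_access.log should show how often each path gets hit while the upload is stalled, and whether the logged /api2/repos sizes really did shrink to 520.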

There are some interesting things here but nothing that is specifically “Seafile”. Sorry, I’m still at a loss.

I found a couple previous posts that might help: “Looking for help with Nginx Proxy Manager” and “'Nginx Proxy Manager’ causing failure when uploading files”.

I suspect some of the authors of the two posts have a solution in hand.

I added the custom location from the second link you sent and it seemed to be working… It got to 1.5GB but then got stuck again in the same way as before, with a request to /api2/repos. I have no idea what’s wrong; my Nginx config has everything the manual says it needs. I can try it without Nginx, but that is non-trivial since I have my certificates there and everything…

Found a couple more that look useful: [SOLVED] Using nginx reverse proxy for seafile-docker and Docker-Seafile with NGINX reverse proxy on main server .


Thank you! I added this:

    proxy_set_header Host $host;
    proxy_set_header Forwarded "for=$remote_addr;proto=$scheme";
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Connection "";

To my custom config. No dice; it still got stuck, this time at 1.7GB. I really don’t know how to fix this, and I don’t even know if this is an Nginx problem. It always gets stuck when it makes calls to /api2/repos. They return 200, so they should be fine… Why does it make those calls? What’s the purpose? Why could it fail there?