Docker with nginx reverse proxy - Request Entity Too Large

I recently got Seafile up and running with the new docker image. I am running nginx in another container that reverse proxies to the built-in nginx in the Seafile container. I thought I had all the kinks ironed out, until I tried to upload a file larger than a MB or so. In the web interface, I am getting the ‘Request Entity Too Large’ error.

I’ve double checked the nginx config in the docker image, and see that ‘client_max_body_size 0;’ is set where it should be. I tried adding the same to my reverse proxy config, but that did not solve the issue.

Anyone have an idea of what else I can try? I can post my config if needed.


Have you placed the client_max_body_size setting in the other nginx config as well? Also wanted to mention that passing from one nginx to another, especially over https, can be unpredictable.
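
If it helps, here is a minimal sketch of where that directive would go in the outer proxy; the hostname and upstream port are assumptions, adjust for your setup:

```nginx
server {
    listen 443 ssl;
    server_name seafile.example.com;   # placeholder hostname

    # 0 disables the request-body size check entirely
    client_max_body_size 0;

    location / {
        proxy_pass http://127.0.0.1:8000;   # assumed upstream port
    }
}
```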

Yes, I’ve placed client_max_body_size in the external nginx config as well, but still no change. Here is my location entry for Seafile, in case someone sees something obvious that I’ve missed:

    location / {
        proxy_read_timeout 310s;
        proxy_set_header Host $host;
        proxy_set_header Forwarded "for=$remote_addr;proto=$scheme";
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Connection "";
        proxy_http_version 1.1;
        proxy_pass http://localhost:8000;
        client_max_body_size 0;
        proxy_request_buffering off;
    }

I appreciate the advice about passing between instances of nginx. I am enforcing SSL on the external instance. Should I consider bypassing the container’s internal nginx instance altogether and proxying the docker ports directly? I’m interested to hear if anyone else has tried a similar setup.

Have you restarted your outer nginx to apply the new configuration?
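
For example, assuming the outer proxy runs in a container named `nginx` (the name is a placeholder):

```shell
# Check the config syntax first, then reload nginx inside the container
docker exec nginx nginx -t
docker exec nginx nginx -s reload

# Or simply restart the whole container
docker restart nginx
```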

Yes, others have attempted what you are trying: setting up SSL on one instance and then passing traffic to another nginx instance. I’m not certain why anyone would want to do this, but I’ve not yet seen anyone succeed with it as it pertains to Seafile. However, this really isn’t a Seafile issue, per se, but more of a webserver issue. There may be a way to configure nginx to both send to and receive from another instance, but I’m not well versed enough in nginx/Apache/IIS, etc., to guide you in the right direction.

The problem is, SSL with nginx/Apache is set up to grab the incoming traffic and proxy it elsewhere. If you are sending it to another nginx instance, the question becomes: how does that nginx know what to do with it?

From a Seafile standpoint, it needs to know what the incoming URL is, whether it is https or not, and which port. In your configuration, I’m not certain whether your service URL should be set to https or http. You could try it both ways and see what happens.
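
In Seafile those values live in seahub_settings.py as SERVICE_URL and FILE_SERVER_ROOT; here is a sketch assuming an https setup (the hostname is a placeholder for your own domain):

```python
# seahub_settings.py -- seafile.example.com is a placeholder for your domain
SERVICE_URL = 'https://seafile.example.com'
FILE_SERVER_ROOT = 'https://seafile.example.com/seafhttp'
```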

You may be able to gain some insight from the following link. In it, the author successfully passes traffic from an nginx instance to an Apache instance in a Docker container using an `upstream` block.
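
A rough sketch of that upstream pattern, with the names and ports as assumptions:

```nginx
# Outer nginx: terminate SSL here, then pass everything to the container
upstream seafile_backend {
    server 127.0.0.1:8000;   # assumed port mapping for the app container
}

server {
    listen 443 ssl;
    server_name seafile.example.com;   # placeholder hostname

    client_max_body_size 0;

    location / {
        proxy_pass http://seafile_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```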

There was someone on the forum who got it working:


Sweet! Thanks for providing that link. Good to know. 🙂

Thanks, all,

Thanks for the links. Both were worth reading. The other thread was specific to running the Seafile docker image from a non-root location, but was still useful.

After all that, the issue was indeed that I had forgotten to restart my nginx container after adding client_max_body_size there. I swore I had done it, but I guess I hadn’t. Once I did that, I had much better luck.

Thanks again all.
