Nginx unknown error

Hi everyone,

My Seafile server is reached through Nginx at the subdomain “”. Nginx listens on port 80, but a Sophos firewall redirects communication with clients on the Internet to HTTPS, with the certificate sitting on the Sophos FW.
Now, from inside the local network everything works fine: I can upload and download files to and from Seafile. I can also log in to Seafile from outside, both with a browser and with the Android app. But if I try to download or upload files from there, I get an “unknown error”.
The Service_Url is defined as “” and the File_Server_Root as “”.
Where is the mistake in the configuration? Which log tells me what goes wrong?
Thanks for any help.

Have you checked that your firewall is properly passing port 8082?

You can check the client logs and the server logs to potentially obtain more information about what went wrong.

With Seafile behind Nginx, port 8082 must not be open on the firewall; Nginx connects to this port locally as a proxy.
And is there a client log when I try to upload a file with the Android app?

Could you post your nginx conf for the Seafile page?

Here it is:

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    proxy_set_header X-Forwarded-For $remote_addr;

    location / {
        proxy_pass         http://127.0.0.1:8000;   # seahub (default port)
        proxy_set_header   Host $host;
        proxy_set_header   X-Real-IP $remote_addr;
        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Host $server_name;
        proxy_read_timeout 1200s;

        # used for view/edit office file via Office Online Server
        client_max_body_size 0;

        access_log      /var/log/nginx/seahub.access.log;
        error_log       /var/log/nginx/seahub.error.log;
    }

    location /seafhttp {
        rewrite ^/seafhttp(.*)$ $1 break;
        proxy_pass         http://127.0.0.1:8082;   # seafile fileserver
        client_max_body_size 0;
        proxy_set_header   X-Forwarded-For $remote_addr;

        proxy_request_buffering off;
        proxy_connect_timeout 36000s;
        proxy_read_timeout    36000s;
        proxy_send_timeout    36000s;
        send_timeout          36000s;
    }

    location /seafdav {
        proxy_pass         http://127.0.0.1:8080/seafdav;   # seafdav (default port)
        proxy_set_header   Host $host;
        proxy_set_header   X-Real-IP $remote_addr;
        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Host $server_name;
        proxy_read_timeout 1200s;

        client_max_body_size 0;
        proxy_request_buffering off;

        access_log      /var/log/nginx/seafdav.access.log;
        error_log       /var/log/nginx/seafdav.error.log;
    }

    location /media {
        root /home/seafileserver/seafile-server-latest/seahub;
    }
}

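As a side note on the /seafhttp block: the rewrite strips the /seafhttp prefix before Nginx proxies the request to the file server on port 8082. A quick sketch of the same substitution with sed (the path is a made-up example):

```shell
# The nginx rule:  rewrite ^/seafhttp(.*)$ $1 break;
# strips the /seafhttp prefix, so a client request path like
# /seafhttp/files/abc123/report.txt reaches the file server as
# /files/abc123/report.txt. The same substitution with sed:
printf '/seafhttp/files/abc123/report.txt' | sed -E 's|^/seafhttp(.*)$|\1|'
# prints /files/abc123/report.txt
```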

Yes, but the Sophos firewall can be configured to stop internal traffic as well as external traffic.

Have you checked the Sophos, NGinx, and Seafile logs?

I see the download and upload requests in the seafile log. No errors.
And I can download files from outside, too, but uploads are not possible.
It looks a bit as if the Sophos firewall blocks the upload transfer, but I don’t see any rejects in the firewall log.

What about the NGinx log?

Access to Seafile is logged as successful there.

So, there are no errors in the NGinx log?

Also, I’m beginning to wonder if these two things you mentioned are causing issues:

The Service_Url is defined as “” and the File_Server_Root as “”.


How could that be? It follows the documentation.

Yes, but you are redirecting HTTP to HTTPS in the Sophos firewall and letting it handle that. So the URL being passed to NGinx/Seafile is different from your configuration for either of those: you are passing HTTPS to one or both of them, but NGinx and Seafile are configured for HTTP.

Unfortunately, I’m not certain how Sophos handles the redirect… When a request is initiated from outside, when does Sophos redirect, before or after NGinx? I don’t know how that operates. What I do know is that Seafile expects the request and traffic to come from the service URL, and if it’s coming across as HTTPS, that could be the issue.

Let me expound a little here. I also use https, but via NGinx, as described in the manual. My service url and file server root are both set to https in the web interface. Everything works fine. However, internally, I can use http directly to the IP rather than the domain name, and it works fine there as well. It’s just that when I use it externally, I have to use https to the domain, which then passes that URL to Seafile. Thus, my service url and file server roots both have to reflect that. Otherwise, it will not work.

That’s how the reverse proxy with NGinx works: it uses HTTP to access Seafile behind the scenes, but the original HTTPS URL is still passed on to Seafile.
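For reference, and assuming the default Seafile file layout plus a hypothetical domain seafile.example.com, the two settings this thread keeps coming back to would look like this when set to HTTPS:

```
# conf/ccnet.conf — SERVICE_URL must match the URL clients actually use
[General]
SERVICE_URL = https://seafile.example.com

# conf/seahub_settings.py — FILE_SERVER_ROOT routes uploads/downloads via /seafhttp
FILE_SERVER_ROOT = 'https://seafile.example.com/seafhttp'
```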

Thanks for the long explanation.
In the Webserver Protection of Sophos UTM, a virtual webserver listens on a specific port on the external interface and connects to a real internal webserver (Nginx). The internal webserver is defined to use HTTP, and the virtual webserver uses HTTPS towards the Internet. So the internal traffic in the LAN is not encrypted, but the external traffic is. The elegant part of all that is that the certificates are stored centrally on Sophos UTM, so I do not have to care about Let’s Encrypt renewals on the (real) webservers.
Do you still think that this setup can cause the problems?
Can I set a more verbose debug level for the Seafile logging?

Yes, it can cause problems for several reasons.

First, that’s the whole point of NGinx with HTTPS: it accesses Seafile unencrypted, while NGinx encrypts/decrypts the external traffic via the installed certificates. In your case, it’s Sophos that is supposed to do so… kind of. However, keep in mind that uploading goes through port 8082, and NGinx, as a reverse proxy, handles this via the location sections of its configuration.

So, when a request comes through externally, the desired path is included, and NGinx then knows where to route the traffic: /seafhttp for uploading, if I recall correctly.

So, what does Sophos do with it? How does it pass that information to NGinx? Does it even pass it at all?

The easiest thing for you to try is going into the web UI and changing the service_url and file_server_root to https rather than http. You’ll need to restart Seafile after doing so.
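If you prefer editing the config files directly instead of the web UI, the change amounts to something like this. This is a sketch only: seafile.example.com is a stand-in domain, and the restart commands assume the standard seafile-server-latest scripts.

```shell
# Illustration only: FILE_SERVER_ROOT must switch from http to https.
# In practice you would edit conf/seahub_settings.py (and SERVICE_URL in
# ccnet.conf), then restart: ./seafile.sh restart && ./seahub.sh restart
line="FILE_SERVER_ROOT = 'http://seafile.example.com/seafhttp'"
printf '%s\n' "$line" | sed "s|'http://|'https://|"
# prints FILE_SERVER_ROOT = 'https://seafile.example.com/seafhttp'
```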

I’m trying to do it your way. Which certificates do you use in Nginx?
Letsencrypt does not install; it complains about an “invalid response”.
Obviously there is no document root in the Seafile config file.

You are getting “invalid response” because letsencrypt has been obsolete since January of last year due to a security issue. letsencrypt-auto has replaced it, and it works. I prefer to use certbot-auto, which works with Let’s Encrypt.

The manual is outdated regarding certbot/letsencrypt, and has not been updated since the change to the new “auto” version.

Here are the basic steps to get certbot-auto to grab a certificate for you:

  1. Go to the certbot website and grab certbot-auto via wget as per the instructions there. (You’ll need to place it in your /usr/bin folder, which should be in your path)
  2. On your router, make certain port 80 is port forwarded to the machine with NGinx on it.
  3. Disable your AAAA record on your domain, if it has one. There have been issues with ipv6 and certbot.
  4. Run certbot-auto with the --nginx option as per instructions for certbot.
  5. That should grab your certificates and put them in your NGinx config.
  6. In your NGinx config, move the certificate lines to the proper section.

Here is a link to a thread where someone I helped detailed what we did to get it to work:
