Seafile 12 Docker with HTTPS and custom ports

I have installed Seafile 12 in Docker on an Ubuntu 24.04 server. I can access Seafile from the LAN but not the WAN. The LAN works with the following in the .env file:

SEAFILE_SERVER_HOSTNAME=XXX.XX.XX.XX
SEAFILE_SERVER_PROTOCOL=http

Changing to:

SEAFILE_SERVER_HOSTNAME=my.dns. org : 8443
SEAFILE_SERVER_PROTOCOL=https

doesn’t allow access from either the LAN or the WAN.

I have other services running on my LAN with ports forwarded on my server, so using 80 and 443 all the way through the chain is not an option. I have tried setting my router to forward 8080 to 80 on my Ubuntu server (the server is not using 80 for anything else, but my router forwards external calls on port 80 to port 80 on another machine). I have also set my router to forward 8443 to 8443 on my Ubuntu server (the server already listens on 443 for another service).
In caddy.yml I have

ports:
  - 80:80
  - 8443:443

I could not find a good description of this anywhere, so I'm not sure if this is correct, but I assume that the first port number in an X:X pair is for the host and traffic is passed to the second port number inside Docker.
I have tried various combinations of these settings without luck.
Pointing a browser to https://my.dns.org:8443 results in "This site can't be reached. my.dns.org refused to connect."
Looking at the docker logs for seafile-caddy, I’m getting:

{"level":"error","ts":1749067461.0919232,"msg":"validating authorization","problem":{"type":"urn:ietf:params:acme:error:malformed","title":"","detail":"No such authorization","instance":"","subproblems":null},"order":"https://acme-v02.api.letsencrypt.org/acme/order/2441389537/391241995387","attempt":1,"max_attempts":3,"stacktrace":"github.com/mholt/acmez/v3.(*Client).ObtainCertificate\n\tgithub.com/mholt/acmez/v3@v3.0.0/client.go:152\ngithub.com/caddyserver/certmagic.(*ACMEIssuer).doIssue\n\tgithub.com/caddyserver/certmagic@v0.21.6/acmeissuer.go:477\ngithub.com/caddyserver/certmagic.(*ACMEIssuer).Issue\n\tgithub.com/caddyserver/certmagic@v0.21.6/acmeissuer.go:371\ngithub.com/caddyserver/caddy/v2/modules/caddytls.(*ACMEIssuer).Issue\n\tgithub.com/caddyserver/caddy/v2@v2.9.1/modules/caddytls/acmeissuer.go:249\ngithub.com/caddyserver/certmagic.(*Config).obtainCert.func2\n\tgithub.com/caddyserver/certmagic@v0.21.6/config.go:626\ngithub.com/caddyserver/certmagic.doWithRetry\n\tgithub.com/caddyserver/certmagic@v0.21.6/async.go:104\ngithub.com/caddyserver/certmagic.(*Config).obtainCert\n\tgithub.com/caddyserver/certmagic@v0.21.6/config.go:700\ngithub.com/caddyserver/certmagic.(*Config).ObtainCertAsync\n\tgithub.com/caddyserver/certmagic@v0.21.6/config.go:505\ngithub.com/caddyserver/certmagic.(*Config).manageOne.func1\n\tgithub.com/caddyserver/certmagic@v0.21.6/config.go:415\ngithub.com/caddyserver/certmagic.(*jobManager).worker\n\tgithub.com/caddyserver/certmagic@v0.21.6/async.go:73"}
{"level":"error","ts":1749067461.0921097,"logger":"tls.obtain","msg":"could not get certificate from issuer","identifier":"my.dns.org","issuer":"acme-v02.api.letsencrypt.org-directory","error":"HTTP 404 urn:ietf:params:acme:error:malformed - No such authorization"}
{"level":"error","ts":1749067461.0921655,"logger":"tls.obtain","msg":"will retry","error":"[my.dns.org] Obtain: [my.dns.org] solving challenge: getting authorization at https://acme-v02.api.letsencrypt.org/acme/authz/2441389537/530906449497: attempt 1: https://acme-v02.api.letsencrypt.org/acme/authz/2441389537/530906449497: HTTP 404 urn:ietf:params:acme:error:malformed - No such authorization (ca=https://acme-v02.api.letsencrypt.org/directory)","attempt":1,"retrying_in":60,"elapsed":0.130048906,"max_duration":2592000}

My DNS service is DuckDNS, which is working for other services I'm running.
Any help would be greatly appreciated. I have been beating my head against this for some time.

It sounds like you are on the right track.

You are correct: the Docker config's "8443:443" is saying "listen on the host's port 8443, and forward whatever comes in there to this container's port 443".
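For illustration, that mapping you posted reads like this in compose syntax (comments added):

    ports:
      - "80:80"       # host port 80   -> container port 80
      - "8443:443"    # host port 8443 -> container port 443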

In fact, it looks like Seafile is configured correctly from what you've said here. The only thing that doesn't look right to me is "SEAFILE_SERVER_HOSTNAME=my.dns. org : 8443". There shouldn't be spaces around the colon, so "my.dns.org:8443" instead of "my.dns. org : 8443", but that seems likely to me to be something that happened when pasting into this website rather than something actually in your config. Still worth double-checking.

The errors from Caddy in the Docker log are probably the real clue. I don't use Caddy, so there might be clues in there that I am missing, but here's what I think I understand from it. It is trying to get a certificate from Let's Encrypt using the ACME protocol, and that is failing. ACME is supposed to be an automated way for you to prove that you own the domain you are requesting a certificate for.

This is usually done by your server connecting to a Let's Encrypt server, which gives your server a random token; the Let's Encrypt server then connects back to your server via the domain name you requested, and your server hands back that same token to prove that it is the one at that name. There are other options, like putting the token into a DNS record, but they're variations on the same theme. What I suspect is happening is that when the Let's Encrypt server connects back to your server, it does so on port 80 or 443, so it never reaches the server that has the token (the Caddy container in this case), and the verification fails.
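Concretely, for the default HTTP-01 challenge the CA always connects to port 80 of the name being validated; the port is fixed by the ACME spec and cannot be changed (the TLS-ALPN-01 variant uses 443, and DNS-01 avoids inbound connections entirely). Roughly:

    # HTTP-01 validation request, as seen from your network (token is a placeholder):
    GET http://my.dns.org/.well-known/acme-challenge/<token>
    # If the router sends port 80 to a different machine, this never reaches Caddy.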

So what to do about it? It sounds to me like you have reached the level of complexity where it would make sense to have a single reverse proxy in front of both your Seafile and whatever the other thing is you have on port 443. There are a few ways to do this. What I did was to create a separate VM running nginx. That nginx is configured so that if you connect to tv.mydomain.com it forwards to Jellyfin, if you connect to seafile.mydomain.com it forwards to Seafile, books.mydomain.com to my books server, and so on. That nginx VM runs an ACME client that gets a single wildcard certificate valid for *.mydomain.com, and that same certificate is used for all of those subdomains. With that I don't need the Caddy container for Seafile.
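A minimal sketch of that name-based routing, with made-up hostnames and backend addresses (a real config also needs the usual ssl_certificate lines):

    server {
        listen 443 ssl;
        server_name tv.mydomain.com;
        location / { proxy_pass http://192.168.1.20:8096; }   # Jellyfin backend (example IP/port)
    }

    server {
        listen 443 ssl;
        server_name seafile.mydomain.com;
        location / { proxy_pass http://192.168.1.30:80; }     # Seafile host (example IP/port)
    }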

You are right that the errant spaces are not in the file.

I think you are also right that it's probably time to bite the bullet and set up a reverse proxy server. It sounds like it will simplify things in the future… after the work of reconfiguring every service on my network.

I'm still hitting roadblocks. I added a reverse proxy server. This is my first time setting one up, so I'm probably missing something simple and obvious, but I can't seem to figure it out. I selected Nginx Proxy Manager, thinking that the GUI would help guide my inexperience and prevent some mistakes. NPM seemed to install OK. I used the web interface to request SSL certificates, which was successful. I added a proxy host with a new subdomain and pointed it to my already installed Seafile server. Seafile and the new proxy server are on different host machines. Trying to access Seafile with the subdomain web address returned a completely blank (white) page. I could still access Seafile locally by using the LAN IP address of the Seafile server.

Next I followed the instructions from the online manual to "Use other reverse proxy" to take Caddy out of the chain. Caddy seemed like an unnecessary complication at this point. Now I get "400 Bad Request / nginx" when trying to access from the WAN, but I can still access from the LAN.

I copied and pasted the custom locations from the instructions above into the NPM settings for my Seafile proxy host without really understanding them, so there is a good chance that my problem is there.

One problem (which may also be a clue to my issues) is that if I select the "Websockets Support" option in NPM, the proxy host status changes to offline. If I turn it back off, it changes back to online. I understand that WebSockets are required for SeaDoc.

Any help would be greatly appreciated!

I think I figured out the issue with WebSockets support. I had to remove the "proxy_http_version 1.1" directives from the advanced settings under the custom locations tab. Apparently NPM adds this itself if Websockets Support is selected, and if it's already in the manual settings, there is a conflict.
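For anyone else who hits this: as far as I can tell, ticking "Websockets Support" makes NPM inject roughly the following into its generated config (paraphrased from memory of NPM's templates, so treat it as an assumption), which is why a second proxy_http_version in a custom location conflicts:

    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $http_connection;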

I still can't access Seafile from the WAN, however. Still getting "400 Bad Request / nginx".

I think the error 400 is coming from nginx itself, before it even tries to go on to Seafile. If it thought it was working but just wasn't getting a reply from Seafile, it would normally report "502 Bad Gateway". I don't know what the Nginx Proxy Manager config looks like, but I assume it is at least similar to normal nginx, so here's part of my nginx config. I am trying to simplify this by leaving out parts for things like OAuth that I don't think you are using.

    server {
        listen       80;
        server_name  file.example.com;
        rewrite ^ https://$http_host$request_uri? permanent;    # Forced redirect from HTTP to HTTPS
        server_tokens off;      # Prevents the Nginx version from being displayed in the HTTP response header
    }
    
    server {
        listen 443 http2 ssl;
    
        include /etc/nginx/snippets/ssl.conf;
        include /etc/nginx/snippets/authelia-location-snippet.conf;
    
        server_name file.example.com;
        server_tokens off;
    
        add_header X-Frame-Options "SAMEORIGIN";
        add_header X-XSS-Protection "1; mode=block";
        add_header X-Content-Type-Options "nosniff";
    
        location / {
            proxy_pass         http://10.10.5.5:8000;
    
            include /etc/nginx/snippets/proxy.conf;
            client_max_body_size 0;
    
            access_log      /var/log/nginx/seahub.access.log seafileformat;
            error_log       /var/log/nginx/seahub.error.log;
        }
    
        location /seafhttp {
            rewrite ^/seafhttp(.*)$ $1 break;
            proxy_pass http://10.10.5.5:8082;
    
            include /etc/nginx/snippets/proxy.conf;
    
            client_max_body_size 0;
    
            # supposed to fix large file uploads with web interface
            proxy_request_buffering off;
    
            access_log      /var/log/nginx/seafhttp.access.log seafileformat;
            error_log       /var/log/nginx/seafhttp.error.log;
        }
        
        # only need this part if you are using the notification server
        location /notification {
            proxy_pass http://10.10.5.5:8083/;
    
            include /etc/nginx/snippets/proxy.conf;
    
            access_log      /var/log/nginx/seafile_notification.access.log;
            error_log       /var/log/nginx/seafile_notification.error.log;
        }
    }

One important line is "proxy_pass http://10.10.5.5:8000;". The 10.10.5.5 IP is the IP of the machine hosting the Docker container, not the IP of the container itself. Also, you probably just want to go to port 80 for all of those proxy_pass lines.

Thank you so much! It's getting closer. I deleted the proxy host from Nginx Proxy Manager and started over. I tried to replicate your nginx settings as closely as possible.

Now I can access the web interface. I cannot download or upload anything. When attempting to download, I get the logo banner at the top without any tools, and then a white page with the text "Sorry, but the requested page could not be found." Attempting to upload, I get "1 file(s) failed to upload" and "Network Error".

If I added your "include …" or "access_log …" directives, it took the proxy host offline, so I just removed them for now. I'm guessing that there are critical things in there that should be included.

From your description I think it's probably the /seafhttp part that is wrong. For me that's left over from before using the Docker version. With the Docker setup you don't need that section; there's a small nginx inside the container that takes care of that for you, so you just need "location /" and forward it all to port 80 in the container.
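To put it together, something like this sketch is all the Docker image should need from the outside proxy (the IP, domain, and ssl include are placeholders carried over from my earlier config):

    server {
        listen 443 ssl http2;
        server_name file.example.com;

        include /etc/nginx/snippets/ssl.conf;      # your certificate settings

        location / {
            proxy_pass http://10.10.5.5:80;        # host running the Seafile container
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-Proto $scheme;
            client_max_body_size 0;
        }
    }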

The include parts just load additional settings from other files. The ssl one holds the settings for where to get the certificate files and such (so it's very specific to my machine), and the authelia-location-snippet.conf is part of using Authelia as the OAuth login provider. The include for proxy.conf might have some settings that apply to you, so here is what's in that file. I think these are mostly the defaults anyway, but I added them while troubleshooting a problem, and once everything was working I didn't want to mess with figuring out which ones I could remove again. :)

You can just put these settings directly into the config, instead of doing the include from another file, if that is easier. You probably don't need these at all, but in case you do, here it is:

	proxy_set_header Host $http_host;
	proxy_set_header X-Forwarded-Proto $scheme;
	proxy_set_header X-Forwarded-Host $http_host;
	proxy_set_header X-Forwarded-URI $request_uri;
	proxy_set_header X-Forwarded-Ssl on;
	proxy_set_header X-Forwarded-For $remote_addr;
	proxy_set_header X-Real-IP $remote_addr;
	proxy_set_header Upgrade            $http_upgrade;
	proxy_set_header Connection         "upgrade";

	## Basic Proxy Configuration
	client_body_buffer_size 128k;
	proxy_next_upstream error timeout invalid_header http_500 http_502 http_503; ## Timeout if the real server is dead.
	proxy_redirect  http://  $scheme://;
	proxy_http_version 1.1;
	proxy_cache_bypass $cookie_session;
	proxy_no_cache $cookie_session;
	proxy_buffers 64 256k;
	

I just wanted to start a thread by myself, and stumbled across this one.
I'm stuck on exactly the same problem as @dos286.
The configuration is a Seafile 12 Docker install on one host with an nginx VM as a reverse proxy in front.

Everything works except uploads, and I have been trying for a few days now to fix it.

The underlying problem is that uploads and some downloads (not all for me, but the PDF preview for example) are blocked by the browser as mixed content.

SERVICE_URL and FILE_SERVER_ROOT are both defined with "https://xxx", but requests to the /seafhttp endpoint are made to "http://xxx", so the browser correctly blocks them, causing an instant network error.

I would assume that this is the same problem here, but I couldn't find a solution to it.

(If I should rather open my own thread, please let me know; I don't want to hijack this one.)

I now have a working Seafile! Many thanks to @tomservo for all the guidance and patience. My Nginx Proxy Manager settings are very simple. I have a single custom location for "/" forwarding to my Seafile host machine's IP address and port 80. The custom nginx settings for this custom location are:

proxy_read_timeout 310s;

client_max_body_size 0;

I can upload and download. Markdown file viewing and editing work fine.

SeaDoc still does not work. I get "Load doc content error." The "Use other reverse proxy" instructions say that I need custom locations for this, but the ones they suggest break the proxy host. Without them, I have a functioning Seafile. I think the problem with SeaDoc is finding the correct custom locations.

I am also trying to integrate Collabora, without success. When I try to open files in OpenDocument format I get " took too long to respond." The "Use other reverse proxy" instructions do not indicate the need for any custom locations or settings in the proxy server for Collabora, so this may be strictly a Seafile or Collabora settings issue, which may have been complicated by removing Caddy from the chain. The collabora.yml file does not have a "ports:" setting, so I added one. It also had an "expose:" setting. I wondered if this was a conflict, so I commented that out. No combination of these settings in Collabora seemed to fix the issue.
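For reference, what I added to collabora.yml looked roughly like this (service name and port are my guess at the layout; as I understand it, "expose:" only opens the port to other containers on the Docker network, while "ports:" publishes it on the host, which is what an external proxy needs):

    services:
      collabora:
        ports:
          - "9980:9980"   # publish Collabora's port on the host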

My best guess from your description is that you need to set SEAFILE_SERVER_PROTOCOL to https, since it sounds like that is currently set to http.
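If it helps, the relevant .env lines would look like this (hostname is a placeholder); as I understand it, the Docker setup derives SERVICE_URL and FILE_SERVER_ROOT from these, so the scheme has to match what the browser uses:

    SEAFILE_SERVER_HOSTNAME=seafile.my-domain.com
    SEAFILE_SERVER_PROTOCOL=https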

I can't help with SeaDoc. I tried it very briefly in a test setup, but since I couldn't find the source, or even what license it is released under, I decided it wasn't for me.

I do use Collabora, so I have some tips for you there. I run my seafile on one subdomain (like seafile.my-domain.com) and Collabora on another (like docs.my-domain.com). Here is my nginx config for Collabora.

server {
    listen 443 ssl http2;
    server_name docs.{{ primary_public_domain }};

    gzip off;
    proxy_buffering off;

    include /etc/nginx/snippets/ssl.conf;

    # some extra options for security stuff (prevent some cross-site scripts, etc).
    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Content-Type-Options "nosniff";
    add_header Referrer-Policy strict-origin-when-cross-origin;
    add_header Permissions-Policy interest-cohort=();
    add_header Expect-CT 'enforce; max-age=604800';

    client_max_body_size 1G;
    client_body_buffer_size 400M;

    # Replace robots.txt with custom version.
    include /etc/nginx/snippets/override-robots-txt.conf;

    # static files
    location ^~ /browser {
        proxy_pass http://{{ hostvars['collabra_server'].ansible_host }}:9980;
        proxy_set_header Host $http_host;
    }

    # WOPI discovery URL
    location ^~ /hosting/discovery {
        proxy_pass http://{{ hostvars['collabra_server'].ansible_host }}:9980;
        proxy_set_header Host $http_host;
    }

    # Capabilities
    location ^~ /hosting/capabilities {
        proxy_pass http://{{ hostvars['collabra_server'].ansible_host }}:9980;
        proxy_set_header Host $http_host;
    }

    # main websocket
    location ~ ^/cool/(.*)/ws$ {
        proxy_pass http://{{ hostvars['collabra_server'].ansible_host }}:9980;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host $http_host;
        proxy_read_timeout 36000s;
    }

    # download, presentation and image upload
    location ~ ^/(c|l)ool {
        proxy_pass http://{{ hostvars['collabra_server'].ansible_host }}:9980;
        proxy_set_header Host $http_host;
    }

    # Admin Console websocket
    location ^~ /cool/adminws {
        # Restrict access to home network and VPN
        include /etc/nginx/snippets/restrict_to_local_only_access.conf;

        proxy_pass http://{{ hostvars['collabra_server'].ansible_host }}:9980;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host $http_host;
        proxy_read_timeout 36000s;
    }
}

The only thing I remember causing me trouble that wasn't covered in the guide I followed is that Collabora and Seafile both need to be able to talk to each other at the same URL that is publicly used, with the same certificate. If your Seafile is https://seafile.my-domain.com, you need the Collabora server to be able to reach it at that address.

So inside the Collabora container (if you are using Docker), you need to be able to run "curl https://seafile.my-domain.com" and get a response from Seafile without errors (no "untrusted certificate" or "page not found"). In the same way, inside the Seafile container you need to be able to "curl https://collabora.my-domain.com" without errors. For me this took some extra work to set up, because my router wasn't configured to let internal machines do that.
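As a quick test, something like this from a shell on each side should succeed; /api2/ping/ and /hosting/discovery are endpoints I believe Seafile and Collabora expose, but verify against your versions:

    # from the Collabora side:
    curl -sSf https://seafile.my-domain.com/api2/ping/           # should print "pong"
    # from inside the Seafile container:
    curl -sSf https://collabora.my-domain.com/hosting/discovery  # should return WOPI XML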

@tomservo I’m struggling to understand how to merge your instructions with the instructions for both “Use other reverse proxy” and “Collabora Online Integration”.

Several questions came up:

  • The Collabora Online Integration instructions say to add these lines to .env: "COLLABORA_USERNAME=<your LibreOffice admin username>" and "COLLABORA_PASSWORD=<your LibreOffice admin password>". I assume that I am supposed to substitute a real username and password for the stuff in the angle brackets. I assumed that this is the place where I define the username and password, and that the web UI would ask me for them if I click on administration settings or something. But now I'm wondering if I was supposed to set up a LibreOffice admin account somewhere else, that this has to match, and that it's crashing because of a login error. Can you clarify?
  • Are your Seafile and Collabora running in the same Docker setup (like the instructions say)?
  • If you run/access Collabora from a separate subdomain, I assume that you need to tell Seafile the URL somewhere. The most likely place I can find is this line in seahub_settings.py: "OFFICE_WEB_APP_BASE_URL = 'http://collabora:9980/hosting/discovery'". Do you change that to "https://docs.my-domain.com/hosting/discovery" or something similar?
  • Do you remember what guide you followed to set up Collabora?

Thanks for all the help!

I got Collabora mostly working now. The missing link was to update this line in collabora.yml under the environment heading:

  - server_name=collabora.my-domain.org

I set up a new proxy host for the subdomain collabora.my-domain.org in Nginx Proxy Manager. I had to enable Websockets Support, contrary to the indications in the "Use other reverse proxy" instructions.

I'm still not certain how or where COLLABORA_USERNAME comes into play. I don't see any place to enter an administrative mode or anything.

Everything seems to be working just fine. I cannot, however, see how to create a new LibreOffice document. It seems like there should be links under the "+ New" menu, but there are only SeaDoc, Markdown, and Microsoft formats. Any clue how to add .odt, .ods, etc.?

I think that COLLABORA_USERNAME and the password are for the admin panel, as you suggested. My Collabora's config has a big XML config file with an optional section for an admin username and password, and I think that setting in the Docker config ends up in that XML file.
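From memory, the fragment of that XML file (coolwsd.xml) in question looks roughly like this; the element names are as I recall them and the values are placeholders, so check your own install:

    <admin_console>
        <username>admin</username>
        <password>secret</password>
    </admin_console>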

I think that the admin username and password are used at this address:
https://collabora.my-domain.com/browser/dist/admin/admin.html
I don’t remember there being much there that I found interesting, so once it was all working I disabled the admin user. I don’t want people messing with it, and I can always enable it again in the config if I need to.

I do not run Collabora in Docker. I don't like Docker, so my Collabora is its own separate VM. I only run Seafile in a container because it became clear that there wasn't going to be another option soon, and even then I run it in Podman instead of Docker.

As far as I know, there isn't an option to make a new OpenOffice document, but "New Word File" makes a file that can be edited with Collabora. And if you upload an .odt file, you can edit it.