How to use Seafile 13 behind Nginx Proxy Manager?

Let me first cover my current setup for my system:

I’m using Cloudflare for my DNS and have Nginx Proxy Manager (NPM) running as a reverse proxy. NPM also manages my HTTPS cert via Let’s Encrypt. I’m running TrueNAS Scale and have set up Seafile 13 through the apps interface using this config: https://pastebin.com/0bi8D5DF

In order to get this working with HTTPS, I had to modify some configs. First, in my seahub_settings.py, I added these lines to the bottom:

SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")
FILE_SERVER_ROOT = "https://127.0.0.1/seafhttp"
CSRF_TRUSTED_ORIGINS = ["https://127.0.0.1", "http://127.0.0.1"]

Then I created ccnet.conf next to my seahub settings file and put this in there:

[General]
SERVICE_URL = "https://127.0.0.1"

However, I have one significant problem: Seafile appears to be entirely unaware it’s behind a reverse proxy with HTTPS traffic. I can use the Seafile Client just fine on all of my PCs, but if I try to use Collabora/Seadoc, download files from the Seahub interface, or try to download/view any files on my Android phone, I get an error about trying to download a file over an insecure connection despite the site being HTTPS. This is the exact error from Seahub in the JavaScript console:

Mixed Content: The site at 'https://seafile.example.com/' was loaded over a secure connection, but the file at 'https://seafile.example.com/seafhttp/repos/6becddb4-85c8-4741-b8cb-3738547832cf/files//Documents/frames.txt/?op=download' was redirected through an insecure connection. This file should be served over HTTPS. See https://blog.chromium.org/2020/02/protecting-users-from-insecure.html for more details.

This is the initiating link as shown in the network tab:

http://seafile.example.com/seafhttp/repos/6becddb4-85c8-4741-b8cb-3738547832cf/files//Documents/frames.txt/?op=download

If I copy this link, paste it into my address bar, and change it to HTTPS, it downloads just fine. Clearly everything is working; Seahub is just entirely unaware it’s behind a reverse proxy that’s handling the HTTPS traffic. This makes sense, as the traffic from NPM to Seahub is plain HTTP.

I’ve seen in the Seafile documentation (https://manual.seafile.com/13.0/setup/use_other_reverse_proxy/#add-reverse-proxy-for-related-services) that I need to get rid of Caddy and add a number of Nginx configs. I tried doing this through NPM’s custom configuration interface, but that broke Seahub and I couldn’t access it. Bringing back Caddy let me get back in. So that’s where I’m at right now.

Has anyone else gotten Seafile 13 to fully work with NPM? What am I doing wrong?

Here is a possible fix from our AI (manually checked and modified):

It looks like you are hitting a classic configuration mismatch between your reverse proxy (NPM) and how Seafile 13 handles URL generation. In Seafile 13, the logic for generating links has changed significantly, and some of the settings you are using are now deprecated.

Based on your error log, Seahub is generating http links because it doesn’t “know” the external connection is https. Here is how to fix this for version 13:

1. Remove Deprecated Settings

Since Seafile 12.0, FILE_SERVER_ROOT and SERVICE_URL are no longer used. You should remove them from your seahub_settings.py and ccnet.conf. Keeping them can sometimes cause unexpected behavior or be ignored entirely.

2. Update Environment Variables

In Seafile 13, the preferred way to set the protocol and domain is via environment variables in your Docker/container setup. Ensure these are set:

  • SEAFILE_SERVER_PROTOCOL=https
  • SEAFILE_SERVER_HOSTNAME=seafile.example.com
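
If you deploy via docker compose, these would typically go in the .env file next to your compose files (a sketch only; the hostname is the example domain from your post):

```shell
# .env (fragment) - tells Seafile 13 what protocol/domain to use
# when generating external URLs
SEAFILE_SERVER_PROTOCOL=https
SEAFILE_SERVER_HOSTNAME=seafile.example.com
```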

3. Adjust Nginx Proxy Manager (NPM) Configuration

The “Mixed Content” error happens because the X-Forwarded-Proto header isn’t reaching Seafile’s internal components correctly. In your NPM “Advanced” tab for the Seafile host, add the following to ensure the protocol is passed explicitly:

location / {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
    proxy_pass http://<YOUR_CONTAINER_IP>:80; # Point to the main Seafile entry port
}

4. Verify seahub_settings.py

Keep the SECURE_PROXY_SSL_HEADER but make sure your CSRF_TRUSTED_ORIGINS uses your actual domain rather than 127.0.0.1:

SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")
CSRF_TRUSTED_ORIGINS = ["https://seafile.example.com"]

(Note: normally you don’t need to set SECURE_PROXY_SSL_HEADER; I suggest removing it from seahub_settings.py.)

Why your previous attempt broke:

When you tried to “get rid of Caddy” and add custom Nginx configs, you were likely fighting the internal architecture of the Seafile 13 image. Version 13 treats its internal Nginx as a required component for internal routing. The best practice now is to leave the internal Nginx/Caddy alone and simply treat the entire Seafile container as a single HTTP service that you proxy to from NPM.

Once you update the environment variables and the X-Forwarded-Proto header, Seafile should start generating https:// links for all file downloads and SeaDoc sessions automatically.

Thanks for the quick reply! I’ll attempt to apply this solution tonight and let you know how it goes

I am trying to do exactly that; however, SeaDoc does not seem to work properly (Seafile itself seems OK).

Hereafter is what I am trying to achieve:

         ┌───────────────┐
         │    Client     │
         │   (browser)   │
         └───────┬───────┘
                 │ HTTPS
                 ▼
         ┌─────────────────────┐
         │ Nginx Proxy Manager │
         │  Container: npm     │
         │  Network: npm-net   │
         │  TLS / SSL          │
         └───────┬─────────────┘
                 │ HTTP
                 │ (npm-net)
                 ▼
         ┌───────────────────────────────────┐
         │               Caddy               │
         │  Container: seafile-caddy         │
         │  Networks: seafile-net + npm-net  │
         │  HTTP internal                    │
         └───────┬───────────────────────────┘
                 │ HTTP
                 │ (seafile-net)
                 ▼
 ┌──────────────────────────────────────────────────┐
 │                    Seafile CE                    │
 │  Containers: seafile, seadoc, seafile-mysql,     │
 │              seafile-redis                       │
 │  Network: seafile-net                            │
 └──────────────────────────────────────────────────┘

My npm.yml has a dedicated network:

networks:
  default:
    external: true
    name: npm-net

This network is added to caddy.yml, and ports are no longer exposed since NPM can access the Caddy container directly through that npm-net network:

services:

  caddy:
    image: ${SEAFILE_CADDY_IMAGE:-lucaslorentz/caddy-docker-proxy:2.12-alpine}
    restart: unless-stopped
    container_name: seafile-caddy
    # ports:
    #   - 80:80
    #   - 443:443
    environment:
      - CADDY_INGRESS_NETWORKS=seafile-net
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ${SEAFILE_CADDY_VOLUME:-/opt/seafile-caddy}:/data/caddy
    networks:
      - seafile-net
      - npm-net
    healthcheck:
      test: ["CMD-SHELL", "curl --fail http://localhost:2019/metrics || exit 1"]
      start_period: 20s
      interval: 20s
      timeout: 5s
      retries: 3

networks:
  seafile-net:
    name: seafile-net
  npm-net:
    external: true
    name: npm-net

In NPM, I just add a proxy host which forwards the public domain name to http://seafile-caddy:80 (the name of the internal Caddy container), with ‘Block Common Exploits’ and ‘Websockets Support’ enabled.
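
For reference, that proxy host roughly corresponds to the following Nginx configuration (a sketch of what NPM generates behind the scenes, not a config to paste anywhere; seafile-caddy resolves via the shared npm-net network):

```nginx
server {
    listen 443 ssl;
    server_name seafile.example.com;

    location / {
        # Forward everything to the internal Caddy container over plain HTTP
        proxy_pass http://seafile-caddy:80;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # 'Websockets Support' toggle
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```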

As for the SSL panel, I have the following:

The following also has to be added to seahub_settings.py, otherwise the site fails to load after login.

CSRF_TRUSTED_ORIGINS = ["https://seafile.example.com"]

With the above I manage to have Seafile working, but as soon as SeaDoc is involved, it fails.

I have tried a few things, but they always end up breaking something:

  • If I enable HTTPS in .env (SEAFILE_SERVER_PROTOCOL=https), Caddy tries to generate SSL certificates, which fails (as expected, since it is no longer exposed).
  • I tried playing in NPM with custom locations for SeaDoc paths like /socket.io and /sdoc-server, but I don’t think this should be necessary, since Caddy already does its own thing to proxy everything.
  • I also tried bypassing Caddy like the OP did, but ended up with the same issue.

I have been looking all over, but I couldn’t find a proper way of getting NPM to work with Seafile/SeaDoc. And I have not even tried other extensions yet…

It would be nice to have a proper guide and have the docs updated, especially if putting the internal Caddy behind NPM is now the proper and recommended way to go, as mentioned by @daniel.pan

Any insights are most welcome. Thanks in advance!

Hi there, I have a similar issue behind Cloudflare. I created a ticket on GitHub.

Finally got around to working on this - the fix noted in the last reply solved it! I would link to it, but for some reason I’m not allowed to :man_shrugging:

Here are the steps I took in the docker-compose.yml/app config:

  1. Removed the caddy service
  2. Removed all labels from the seadoc and seafile services referring to caddy (may not be necessary, but that’s what I did)
  3. Modified the seadoc service’s SEAHUB_SERVICE_URL envvar so that it starts with https
  4. Modified the seafile service’s SEADOC_SERVER_URL envvar so that it starts with https
  5. Changed the seafile service’s SEAFILE_SERVER_PROTOCOL envvar from https to http
  6. Added a ports section to the seafile service and exposed internal port 80 (e.g. 8000:80)
  7. Ensured NPM pointed to the seafile container on port 8000 with http traffic
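
In docker-compose.yml terms, the steps above amount to roughly the following (a sketch only: the image tags, the 8000:80 mapping, and the /sdoc-server path are illustrative values from the steps, not a verified config):

```yaml
services:
  seafile:
    image: seafileltd/seafile-mc:13.0-latest      # example tag
    ports:
      - "8000:80"                                 # step 6: expose internal port 80 for NPM
    environment:
      - SEAFILE_SERVER_HOSTNAME=seafile.example.com
      - SEAFILE_SERVER_PROTOCOL=http              # step 5: TLS terminates at NPM
      - SEADOC_SERVER_URL=https://seafile.example.com/sdoc-server   # step 4: https
    # step 2: caddy-related labels removed

  seadoc:
    image: seafileltd/sdoc-server:2.0-latest      # example tag
    environment:
      - SEAHUB_SERVICE_URL=https://seafile.example.com   # step 3: https

# step 1: the caddy service is removed entirely
```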

Now I can use Seadoc and download files through Seahub! Thank you all for the help :smiley: