Seafile docker and other software behind nginx - but how? :-)

Hi

I am new to Docker, but I have been using Seafile since 2012 or so.
I just installed Seafile Community Edition with Docker on a fresh Ubuntu 22.04 VPS.

It took me a couple of server wipes when I made mistakes, but now it's running pretty smoothly in a folder called "seafile" in my home directory.

A great way to install a Seafile server with HTTPS!
I made sure it would restart by adding "restart: unless-stopped" to each service in the docker-compose.yml below.
In general the basic setup is stable.

My question is: what would I have to do to run more instances of Seafile on the same VPS, or other services that need to sit behind NGINX, like Mailcow?

I guess I would need to run a separate NGINX instance.

What would I need to configure in the docker-compose.yml though?

services:
  db:
    image: mariadb:10.11
    container_name: seafile-mysql
    restart: unless-stopped
    environment:
      - MYSQL_ROOT_PASSWORD=xxx  # Required, set the root's password of MySQL service.
      - MYSQL_LOG_CONSOLE=true
      - MARIADB_AUTO_UPGRADE=1
    volumes:
      - /home/seafile/seafile/seafile-mysql/db:/var/lib/mysql  # Required, specifies the path to MySQL data persistent store.
    networks:
      - seafile-net

  memcached:
    image: memcached:1.6.18
    container_name: seafile-memcached
    restart: unless-stopped
    entrypoint: memcached -m 256
    networks:
      - seafile-net
          
  seafile:
    image: seafileltd/seafile-mc:11.0-latest
    container_name: seafile
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"  # If https is enabled, cancel the comment.
    volumes:
      - /home/seafile/seafile/seafile-data:/shared   # Required, specifies the path to Seafile data persistent store.
    environment:
      - DB_HOST=db
      - DB_ROOT_PASSWD=xxx  # Required, the value should be root's password of MySQL service.
      - TIME_ZONE=Etc/UTC  # Optional, default is UTC. Should be uncommented and set to your local time zone.
      - SEAFILE_ADMIN_EMAIL=admin@domain.com # Specifies Seafile admin user, default is 'me@example.com'.
      - SEAFILE_ADMIN_PASSWORD=xxx     # Specifies Seafile admin password, default is 'asecret'.
      - SEAFILE_SERVER_LETSENCRYPT=true   # Whether to use https or not.
      - SEAFILE_SERVER_HOSTNAME=cloud.domain.com # Specifies your host name if https is enabled.
#      - FORCE_HTTPS_IN_CONF=true
    depends_on:
      - db
      - memcached
    networks:
      - seafile-net

networks:
  seafile-net:

I found this post, which seems similar to what I am trying to do.

Suppose I want to have a "main NGINX instance" in front of the "Seafile Docker instance".

The main NGINX instance would then proxy requests to the Seafile Docker container. Should the config file of the main NGINX instance look like this?
And how should HTTPS and port 443 be handled?

server {
    listen 80;
    server_name cloud.domain.com;
        
    location / {
        proxy_pass http://localhost:8001;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

And what would the docker-compose.yml file have to look like?
The manual describes:

ports:
- "8001:80"

Would it need to be
ports:
- "8001:80"
- "8001:443"
?

Thanks so much for helping me

I’m planning to do something similar, i.e. switch to using Docker instead of bare metal installs. That said, Nginx is relatively lightweight so I might be tempted to install that bare metal and keep docker for Seafile.

Cool,

Do you know how to forward ports 80 and 443?

So that SSL and automatic retrieval of Let's Encrypt certificates will work, etc.?

This is my existing Nginx configuration for Seafile - which at the moment runs on the same server, hence the proxy_pass to http://127.0.0.1:8000 and 8082 (Seahub and the Seafile file server).

log_format seafileformat '$http_x_forwarded_for $remote_addr [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $upstream_response_time';

server {
    listen 80;
    server_name seafile.maltsystems.co.uk;
    rewrite ^ https://$http_host$request_uri? permanent;    	# Forced redirect from HTTP to HTTPS
    server_tokens off;      									# Prevents the Nginx version from being displayed in the HTTP response header
}

server {
    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/live/seafile.maltsystems.co.uk/fullchain.pem;    # Path to your fullchain.pem
    ssl_certificate_key /etc/letsencrypt/live/seafile.maltsystems.co.uk/privkey.pem;  # Path to your privkey.pem
    server_name seafile.maltsystems.co.uk;
    server_tokens off;

    location / {
        proxy_pass         http://127.0.0.1:8000;
        proxy_set_header   Host $host;
        proxy_set_header   X-Real-IP $remote_addr;
        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Host $server_name;
        proxy_read_timeout 1200s;

        proxy_set_header   X-Forwarded-Proto https;

        # used for view/edit office file via Office Online Server
        client_max_body_size 0;

        access_log      /var/log/nginx/seahub.access.log seafileformat;
        error_log       /var/log/nginx/seahub.error.log;
    }

    location /seafhttp {
        rewrite ^/seafhttp(.*)$ $1 break;
        proxy_pass http://127.0.0.1:8082;
        client_max_body_size 0;
        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_connect_timeout  36000s;
        proxy_read_timeout  36000s;
        proxy_send_timeout  36000s;
        send_timeout  36000s;
        access_log      /var/log/nginx/seafhttp.access.log seafileformat;
        error_log       /var/log/nginx/seafhttp.error.log;
    }

    location /media {
        root /data/seafile/seafile-server-latest/seahub;
    }
}

So… I would imagine that if Seafile were in a Docker container, that proxy_pass line would be changed to talk to the Docker container. My knowledge of Docker is limited, but I've seen examples whereby a Docker container has its own LAN IP address. But I am guessing here somewhat…
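If the container publishes its web port on the host (say on 8001 - that port number is just a guess on my part), I imagine only the proxy_pass targets would change, something like this:

    location / {
        proxy_pass http://127.0.0.1:8001;   # host port published by the Seafile container (assumed)
        ...
    }

    location /seafhttp {
        rewrite ^/seafhttp(.*)$ $1 break;
        proxy_pass http://127.0.0.1:8001;   # same port - I believe the image runs its own internal Nginx that handles /seafhttp
        ...
    }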

I appreciate that you made the effort to post something, but it's not really related to my question, and it would be really cool if we left this for someone who knows the solution to answer.
I am trying to find a solution as well, but haven't found it yet.
It's probably less a Docker question than an NGINX one …

That’s a bit rude - that is the Nginx configuration to handle Seafile whether or not it’s in a container. There are two blocks in the configuration - the first listens on port 80 and redirects to https. The second block listens on port 443 and proxies to the Seafile server.

Your problem is that you need to configure your Seafile Docker container with its own local IP address. The proxy_pass statements then refer to the IP address of the Seafile container. That’s the only part of this that I’m a little unsure about. There are many tutorials on setting up Docker networks.
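From what I've seen, you can pin a container to a fixed address on a user-defined network, roughly like this (untested on my part - the network name, subnet and address are just examples):

networks:
  seafile-net:
    ipam:
      config:
        - subnet: 172.28.0.0/16        # example subnet for the user-defined network

services:
  seafile:
    image: seafileltd/seafile-mc:11.0-latest
    networks:
      seafile-net:
        ipv4_address: 172.28.0.10      # proxy_pass would then point at http://172.28.0.10:80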

You originally stated you wanted to run multiple instances on the same server. They will either need different IP addresses OR you could configure each instance with different ports - 8001 and 8083 maybe.
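With the different-ports approach, a rough sketch of the compose file could look like this (service names and paths are just placeholders, and each instance would still need its own database, memcached and data directory):

services:
  seafile-one:
    image: seafileltd/seafile-mc:11.0-latest
    container_name: seafile-one
    ports:
      - "8001:80"        # first instance on host port 8001
    volumes:
      - /home/seafile/instance-one/seafile-data:/shared

  seafile-two:
    image: seafileltd/seafile-mc:11.0-latest
    container_name: seafile-two
    ports:
      - "8083:80"        # second instance on host port 8083
    volumes:
      - /home/seafile/instance-two/seafile-data:/shared

The two Nginx server blocks would then proxy_pass to http://127.0.0.1:8001 and http://127.0.0.1:8083 respectively.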

Yes, exactly, that's my question here: how to set this up.

I don't yet understand how to set it up so that, for example, Let's Encrypt in the Seafile Docker instance will still work.

If you have the luxury of waiting, then I suggest you wait, as they are changing their Docker setup and will hopefully document exactly this, as promised here:

Thank you for the tip!

I do have the time to wait actually.
That sounds quite advanced :-)

Thank you for sharing your experience with Docker and Seafile! I noticed your interest in running multiple instances and services behind NGINX.

I’ve actually written a detailed guide that might be helpful for your setup, which you can find here: Deploying Seafile CE 11 with Docker and SWAG as a Reverse Proxy

Multiple Seafile Instances

To run multiple Seafile instances, the key points are:

  1. Use different container names (e.g., seafile-personal, seafile-pro)
  2. Ensure proper NGINX configuration
  3. Connect all containers to the same Docker network

Here’s an example NGINX configuration for SWAG:

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    http2 on;

    server_name seafile-personal.domain.com;

    # SSL standard config
    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    location / {
        # Use Docker's internal DNS resolver
        include /config/nginx/resolver.conf;

        # Proxy configuration for Seafile container
        set $upstream_app seafile-personal;
        set $upstream_port 80;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    }
...
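The compose side to go with that would look roughly like this - a minimal sketch only, with the databases, memcached and volumes left out, and the container names just examples that have to match the $upstream_app values in the NGINX configs:

services:
  seafile-personal:
    image: seafileltd/seafile-mc:11.0-latest
    container_name: seafile-personal   # matches $upstream_app above
    networks:
      - proxy                          # shared network that SWAG is also attached to

  seafile-pro:
    image: seafileltd/seafile-mc:11.0-latest
    container_name: seafile-pro        # second instance, with its own server block in SWAG
    networks:
      - proxy

networks:
  proxy:
    external: true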

Reverse Proxy Options

For managing multiple services, you have several excellent options:

SWAG: A comprehensive solution that makes it easy to manage multiple services.

NGINX Proxy Manager: A user-friendly alternative with a nice GUI.

Standard NGINX: Manual configuration option for those who prefer more control.

Mail Server Experience

Regarding Mailcow - while I successfully set it up with SWAG, I ultimately switched to Mailu which I found to be a superior choice:

  • Significantly lighter on resources
  • More modern and cleaner interface
  • Easier to maintain
  • Better overall user experience

My guide on the Seafile forum provides detailed instructions for setting up SWAG as a reverse proxy, which should help you get multiple instances running smoothly!

You’re absolutely right about the proxy_pass needing to talk to the Docker container! Let me show you how simple it really is.

Basic Network Setup

First, create a shared network for your containers:

docker network create proxy

Simple Example

Here’s a minimal docker-compose.yml showing how to connect services:

services:
  seafile:
    container_name: seafile    # This name is used in NGINX config
    networks:
      - proxy                          # Connect to shared network

  nginx:
    container_name: nginx
    networks:
      - proxy                          # Same network as Seafile

networks:
  proxy:
    external: true                     # Use existing network

Then in your NGINX config, you can simply use:

...
        # Use the internal Docker DNS resolver
        # SWAG example
        include /config/nginx/resolver.conf;

        # Because seafile is your "container_name"
        # In the docker-compose
        proxy_pass http://seafile:80;
...

That’s it! Docker handles all the networking - no IP addresses needed. The container name (here ‘seafile’) becomes the hostname, and Docker manages all the communication between containers on the same network.
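If you ever want to verify which containers can reach each other, you can inspect the shared network:

docker network inspect proxy   # lists the containers attached to the "proxy" network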

Thank you Whisperwarlord

I had to read this a couple of times, to be honest.
I think I understand it but I wouldn’t yet know how to configure it myself.

I will probably need to do some trial and error.

Look at my post above - you have the direct link to my tutorial on the forum:

Deploying Seafile CE 11 with Docker and SWAG as a Reverse Proxy