Alternative steps for switching to the docker version of Seafile

This is my (not so) little guide to switching from the non-docker to the docker version of Seafile in the easiest and most secure way I could come up with.

It’s clear that Seafile is only going to be distributed as a docker image in the future, so since I have the time now, I figured I should learn how I would convert to using the docker version. The process outlined in the Seafile documentation will get you a working docker installation, but it isn’t the most secure setup. I am recording the alternative steps I came up with for my setup in case they are useful to anyone else.

First, a bit about why I don’t want to use the docker edition. I already run Seafile on its own server, so the container stuff does nothing for me; the separate VM is more thorough isolation than a container. Getting a backup of the entire VM is much easier than backing up a container (since I already have the infrastructure for VM backups). I tend to try to run things in as secure a way as I can, so installing programs as a black-box image that’s hard to inspect doesn’t really appeal to me, and the fact that Docker requires another daemon running as root on the server (and setting the data directories’ permissions to a=rwx) is also concerning. This seems like importing a lot of complexity and attack surface unnecessarily.

And a bit about how my setup currently works, since this is the starting point for my process. My Seafile VM runs from a pretty small OS disk image (cloned from a generic server image). The VM has 2 additional virtual disks, a small one for the database (/database) and a larger one for the seafile data (/seafile-data). The reverse proxy, Authelia (for OAuth), and other services all run in separate VMs, isolated by firewalls, VLANs, etc. Since I have the reverse proxy, OAuth, and Collabora working with Seafile version 11, I want to make as few changes outside of the seafile VM as possible for the upgrade.

OK, with the background out of the way, here’s my process to upgrade from the non-docker version 11 to the docker edition of version 12. This was all done on Debian 12, so if you are on another distribution some of the podman stuff might be different. Please share here if you find a difference.

  • Install podman
    I will use podman instead of docker because it seems to have a better design for security. There is no daemon running all the time, and everything with podman can be run as a user instead of root. I also found it easier to install, and a tiny bit faster. It will also happily read the docker files, so there’s not much you have to do differently.

apt install podman-docker systemd-container

  • Create a user for the containers to run as. I decided to name mine podman. We will also need to turn on “linger” for this user so the container can run even when the user is not logged in. I put the podman user’s home directory on the /seafile-data disk because the OS disk is small, and this is where the files downloaded to make the docker containers will be stored. It’s likely that the default home directory will work for you.
sudo adduser --home /seafile-data/podman-home --shell /usr/sbin/nologin podman 
sudo loginctl enable-linger podman
  • Make note of the podman user’s subuids (or create them if the entry wasn’t made for you; see the note after this step).
    cat /etc/subuid
    There should be a line like:
    podman:165536:65536
    Which gives podman 65536 subuids starting at 165536. Make note of that number for later.
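
If that podman line is missing from /etc/subuid, you can create the ranges yourself. This is just a sketch; the 165536-231071 range is an example, so pick one that doesn’t overlap any existing entries:

sudo usermod --add-subuids 165536-231071 --add-subgids 165536-231071 podman
grep podman /etc/subuid /etc/subgid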

  • Configure podman to use the overlay storage driver. This improves performance immensely. Everything seemed to say this is the default, but it wasn’t used until I manually configured it. So in the podman user’s home directory, create the file ~/.config/containers/storage.conf with this content:

[storage]
driver = "overlay"
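
To check (later, once you have a shell as the podman user) that the driver actually took effect, podman can report which storage driver it is using:

podman info --format '{{.Store.GraphDriverName}}'

That should print “overlay”.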
  • Reconfigure mariadb and memcached to accept connections over the network (because connections from the container look like remote connections from the network). Edit /etc/mysql/mariadb.conf.d/50-server.cnf to change the bind-address line to have the server’s real IP (not the docker IP, but the server’s main IP). Edit /etc/memcached.conf to change “-l 127.0.0.1” to instead have the server’s IP (example lines after the restart commands below). Restart both:
sudo systemctl restart mariadb.service
sudo systemctl restart memcached.service
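
For reference, the two edited lines end up looking roughly like this (192.0.2.10 is a placeholder for your server’s real IP):

# /etc/mysql/mariadb.conf.d/50-server.cnf
bind-address = 192.0.2.10

# /etc/memcached.conf
-l 192.0.2.10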
  • Set up firewall rules to block anything from the network from talking to mariadb and memcached. Be sure to test these rules after everything is set up. The exact steps depend on what firewall you are using, so I don’t have specifics that are likely to work for you (for example, mine is actually firewall rules on the Proxmox machines that host the seafile VM).
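
If your firewall lives on the VM itself instead, a minimal nftables sketch of the idea would be something like the following (this assumes the VM’s own IP is 192.0.2.10 and that the container traffic shows up with that source address; mine really lives on the Proxmox hosts, so treat this only as a starting point):

table inet seafile_db_guard {
  chain input {
    type filter hook input priority 0; policy accept;
    ip saddr 192.0.2.10 tcp dport { 3306, 11211 } accept
    tcp dport { 3306, 11211 } drop
  }
}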

  • Stop and disable the old seafile services:

  systemctl stop seafile.service 
  systemctl stop seahub.service 
  systemctl disable seafile.service 
  systemctl disable seahub.service
  • Give the seafile user in the database permission to log in from remote machines, and give it control over the seafile databases when logged in remotely.
mariadb
   GRANT ALL PRIVILEGES ON *.* TO 'seafile'@'%' IDENTIFIED BY 'PASSWORD' WITH GRANT OPTION;
   GRANT ALL PRIVILEGES ON ccnet_db.* TO 'seafile'@'%';
   GRANT ALL PRIVILEGES ON seafile_db.* TO 'seafile'@'%';
   GRANT ALL PRIVILEGES ON seahub_db.* TO 'seafile'@'%';
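
As a quick sanity check (not part of the official steps), you can confirm the remote account exists before leaving the mariadb shell:

   SELECT User, Host FROM mysql.user WHERE User = 'seafile';

You should see a row with Host set to %, alongside the existing localhost one.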
  • Make a directory for the docker files, and download them:
  mkdir /seafile-data/docker
  cd /seafile-data/docker
  wget -O .env https://manual.seafile.com/12.0/docker/ce/env
  wget https://manual.seafile.com/12.0/docker/ce/seafile-server.yml
  # Optional, I wanted the notification server, so I did
  wget https://manual.seafile.com/12.0/docker/notification-server.yml

I decided not to include seadoc. It appears to be written in javascript, and I don’t have time to check that over 400 npm packages aren’t subject to the well-known npm supply-chain attacks. Also, seadoc doesn’t seem to be open source (or at least I can’t find the source and license anywhere), so it appears to be proprietary, and I can’t expect that anyone besides the original developer has checked either. So that’s a solid pass for me, at least for now.

  • Edit these docker files to remove the caddy stuff, as described in the manual’s instructions for not using caddy (“Use other reverse proxy” in the Seafile Admin Manual).

    I made some additional changes beyond what was described there. I removed the entire db and memcached sections, since I already have a working database and memcached and don’t need to install another one of either. I also changed the ports that get forwarded out of the seafile-server.yml to look like this:

ports: 
  - "8000:8000" 
  - "8082:8082"

This will let us talk directly to Seafile without going through the nginx inside the container. This seemed necessary to get OAuth working, but I now suspect it wasn’t. However, it still made things easier, because it means the nginx config I was using on my reverse proxy works without any changes on this new Seafile version (a sketch of what I mean follows below).
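
For context, the kind of reverse-proxy config I mean is the standard non-docker Seafile nginx layout, roughly like this sketch (192.0.2.10 stands in for the Seafile VM’s IP; your real config will also have TLS, headers, timeouts, and so on):

location / {
    proxy_pass http://192.0.2.10:8000;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

location /seafhttp {
    rewrite ^/seafhttp(.*)$ $1 break;
    proxy_pass http://192.0.2.10:8082;
    client_max_body_size 0;
}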

It also makes troubleshooting easier, because this way it is easier to use wireshark to see exactly what is going on. I don’t like that the container is still running an nginx process I don’t need, but at least that process is now out of the way and not causing problems (beyond wasting resources). Maybe someday a variable can be added to the docker config that would let us not start nginx?

I also added “NON_ROOT=true” to the .env file as part of my efforts to improve the security of this setup. This will start the Seafile programs inside the container as the seafile user instead of as root.

  • Move the seahub-data from the old version to the docker location:
mv /seafile-data/seafile/seahub-data/* /seafile-data/persistent-data/seafile/seahub-data
  • Move the Seafile data from the old version to where docker expects it.
    For both of these, I tried to just set up docker to use the files where they already are, but that didn’t work. There were several problems, but they mostly came down to either giving the docker container access to files it doesn’t need, or adding an extra mount point or two, which broke uploads by making it impossible to move the temp files into the storage directory. So in the end moving these was just easier.
mv /seafile-data/data /seafile-data/persistent-data/seafile/seafile-data
  • Add the “current_version” file. This doesn’t appear in the documentation (not that I could find, at least). I found it while reading through scripts in the container. This is how the container knows that you are doing an upgrade, so it runs the upgrade/upgrade_11.0_12.0.sh script for you. For me this was done with:

echo 11.0.13 > /seafile-data/persistent-data/seafile/seafile-data/current_version

That needs to go in the directory above the directory named “storage”. Obviously be sure to put your current version in this file, not mine.

  • Create the logs directory. If you let the container create the logs directory for itself, the permissions will not be set on it correctly, and seafile won’t start (at least if you are using the “NON_ROOT=true” option). This should be fixed in future versions, but it is easy enough to work around for now.
    mkdir /seafile-data/persistent-data/seafile/logs


  • Set file permissions.
    Now we will need the container to have access to the files in the “persistent-data” directory. The admin guide says to run “chmod -R a+rwx /seafile-data/persistent-data” to give everyone access to these files and directories. This means that every user on this server has full access to read, add, and remove Seafile files. I don’t really like that idea, but if you’re running most of the code on this server as root, then file permissions aren’t going to do much to contain any problems anyway.

But since this guide is about running without root, it makes sense to instead just change these files to be owned by the seafile user inside the container. For that we need to know the uid this user will show up as outside the container. We can work this out from the subuid number you got above. Inside the container, the seafile user’s id is 8000, so we take the subuid start from above, add 8000, and subtract 1.
165536 + 8000 - 1 = 173535 for my system. So we run:

chown -R 173535:173535 /seafile-data/persistent-data

And wait a few minutes for that to finish.
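
If you’d rather not do that arithmetic by hand, a small one-liner gives the same number (assuming the subuid entry really is named podman):

awk -F: '/^podman:/ {print $2 + 8000 - 1}' /etc/subuid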

  • Start the container for the first time.
    We need to log into the podman user as a full login session so systemd will work for us. Just su podman doesn’t do it, I believe because su doesn’t register a proper login session (so things like XDG_RUNTIME_DIR and the user’s systemd instance aren’t set up), but that’s a note to myself to do some research.
machinectl shell podman@

Now we need to set up podman to be ready to run for this user.

systemctl --user enable podman.socket 
systemctl --user start podman.socket

The official guide says to use “docker compose up -d”. We will make 2 small changes here, and one larger one. First, because we are using podman to pretend to be docker, the command is now “docker-compose” instead of “docker compose”. Second, the -d tells it to run in the background, which means you don’t see much status, so for this first run let’s run in the foreground to see the action.

The other change is that we need to tell docker-compose how to talk to podman, since we aren’t running podman as root. Get the current user’s uid with the id command. In my case it was 1001, so substitute your number for 1001 in this command:

docker-compose -H unix:///run/user/1001/podman/podman.sock up
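
If you’d rather not repeat the -H flag on every command, you can export the same socket path as DOCKER_HOST for the session instead (again, substitute your uid for 1001):

export DOCKER_HOST=unix:///run/user/1001/podman/podman.sock
docker-compose up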

It might take a few minutes the first time. You should see the startup log messages. You should either get an error, or something like this showing it worked:

seafile           | Done. 
seafile           | 
seafile           | Starting seahub at port 8000 ... 
seafile           | 
seafile           | Seahub is started 
seafile           | 
seafile           | Done. 
seafile           |

You can use ctrl-c to shut down the containers once you know everything is working.

  • Set the containers to start up with the system.
    Now we will create a systemd user service for the podman user. Create the file ~/.config/systemd/user/seafile-containers.service, and paste in this:
[Unit]
Description=Seafile Podman containers via docker-compose
Wants=network-online.target
After=network-online.target
#RequiresMountsFor=/seafile-data
Requires=podman.socket mariadb.service memcached.service

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Environment=PODMAN_USERNS=keep-id
Environment=COMPOSE_HTTP_TIMEOUT=300
Restart=always
TimeoutStartSec=60
TimeoutStopSec=60
ExecStart=/usr/bin/docker-compose -H unix:///run/user/1001/podman/podman.sock up
ExecStop=/usr/bin/docker-compose -H unix:///run/user/1001/podman/podman.sock down
Type=simple
WorkingDirectory=/seafile-data/docker

[Install]
WantedBy=default.target

Save and exit

Now we will enable and start that service:

systemctl --user daemon-reload 
systemctl --user enable seafile-containers 
systemctl --user start seafile-containers
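
To confirm the service is running (and to follow its output), the usual systemd user commands work:

systemctl --user status seafile-containers
journalctl --user -u seafile-containers -f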

And that should be it. When your server boots, it should start the seafile containers, and all with as little exposure to root privileges as possible.

I haven’t yet tackled a few parts I want to get working eventually:

  • The container doesn’t log to my syslog server. I need to figure out how to make that work.
  • I need to update the monitoring system to alert if the firewall doesn’t block access to memcache or mariadb.
  • I also need the monitoring system to alert if any of the expected containers fails to start, or stops running.
  • I had some wrapper scripts to make seaf-fsck.sh and seaf-gc.sh really easy to run, but now those will need to be rewritten (there is a rough sketch of one further down). I don’t think it will be too hard, just the usual stuff: help text, passing arguments, making sure we are running in screen or tmux, and so on, around:
sudo -u podman DOCKER_HOST=unix:///run/user/1001/podman/podman.sock docker exec -it seafile su seafile -c "/opt/seafile/seafile-server-latest/seaf-fsck.sh -r"
sudo -u podman DOCKER_HOST=unix:///run/user/1001/podman/podman.sock docker exec -it seafile su seafile -c "/opt/seafile/seafile-server-latest/seaf-gc.sh --rm-fs -t 8"

That switches to the podman user, and there it runs “docker exec” to run a program inside the container. Inside the container it will run su to become the seafile user, and as that user will run seaf-fsck or seaf-gc.
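
As a starting point for rewriting those wrappers, here is a rough sketch of what one could look like for seaf-fsck (untested as a finished script; it assumes the podman user’s uid is 1001, the container is named seafile, and that arguments are simple flags like -r — anything with spaces would need better quoting):

#!/bin/sh
# seaf-fsck-docker.sh -- rough wrapper around the docker exec command above
set -eu

# A long fsck shouldn't die with a dropped SSH session, so insist on tmux or screen
if [ -z "${TMUX:-}" ] && [ -z "${STY:-}" ]; then
    echo "Please run this inside tmux or screen." >&2
    exit 1
fi

# Hand any arguments (e.g. -r, or a library id) straight to seaf-fsck.sh
exec sudo -u podman DOCKER_HOST=unix:///run/user/1001/podman/podman.sock \
    docker exec -it seafile su seafile \
    -c "/opt/seafile/seafile-server-latest/seaf-fsck.sh $*"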

And finally, I will end with a quick thanks to the seafile developers who helped me get unstuck several times during the testing while I was working out this process, and who have made a really top-notch system that was worth all this effort to keep using.


Wow! Thank you for this amazing and helpful piece of work. :pray:

Did you consider using podman-compose instead?
Or even go “all in” with podman quadlet?

No, I didn’t consider podman-compose. Not for a good reason, but because I don’t know anything about it. I have had to fight docker once at work when filling in for a coworker on vacation (we had a lead developer for some internal tools who had joined the docker religion, but I think was using it all wrong), and that was my only docker experience before trying it with seafile. I was not any more impressed with it this second time, and found podman to be the easier way to feel like I’m doing docker with some security.

I wouldn’t be surprised if there’s a much better way than what I wrote since I’m new to the podman and docker stuff. So what advantages would podman-compose or quadlet bring? And what disadvantages?

podman-compose is basically the (drop-in) replacement for docker compose in the podman-world. This should spare you the DOCKER_HOST=unix:///run/user/1001/podman/podman.sock argument.

Quadlets basically provide you a way to run podman containers as systemd services without any need for compose files. There is even a tool to convert compose files into quadlet definitions.
The benefit is that you define inter-container dependencies as normal systemd service dependencies. Systemd takes care of the life-cycle.

Thanks! That’s interesting. I will have to play with that, it sounds useful. Do you know if it also separates the steps for getting the latest version and starting the container? That’s something I haven’t yet had time to figure out, and I don’t want updates being installed without getting a chance to test first (or at least snapshot the VM first).

I am also new to the topic, but I would tend to separate the services and let systemd do the work. I haven’t yet tested it, but I will use AlmaLinux as a platform for podman, then put mariadb, seafile, the notification server, and probably also seadoc in separate containers. I will wait until the stable community version 12 is fully available as a docker container.
I am not yet sure whether, in this setup, it is still possible to run the apps in the same pod so they can see each other (which is probably more efficient), or if we just run each of them rootless and let them communicate via their external API / HTTP ports.

Afaik it should. You have to manually pull new image versions if you use quadlets. (It’s been some time since I was deeper into this, so I might be wrong about that.)

While there are some information sources about quadlet on the internet, it is hard to find information about its rootless mode and UID mapping. After some searching, I found the following source, which helps to better understand the different modes of user mapping. You also have to follow the links, where you can find even more valuable information. This does not seem to be a topic for one hour; you have to understand the consequences of what you are doing before making any decisions:

And still, the documentation may not answer the question of how you can register and start the service as a non-root user. What’s so easy with the root user seems to be much more difficult, e.g. when dealing with privileged ports.


Thank you @liayn for the tip on how I could improve this process with podman-compose and quadlets. After playing with it a bit, I have some observations. First, the quadlets feature isn’t available in the podman version in Debian 12 stable. I might switch to Debian testing, or build podman from source to get it in the future. I tried instead to just use the older podman systemd generator, but found some problems that make it feel not worth it, like needing to remove the systemd unit files and regenerate them every time you update a container to a new version.

And thank you @d025477 for that link, it does help a lot with understanding the quadlets.

We can pretty easily use podman compose instead of docker-compose, so the “docker-compose -H unix:///run/user/1001/podman/podman.sock up” can become “podman compose -f seafile-server.yml up -d” and “podman compose -f notification-server.yml up -d”. It looks like you can copy sections from these into one unified docker-compose.yml file to have one command start all your containers at once, but I haven’t played with that yet.

Another random note: as @d025477 noted above, containers running without root normally can’t use privileged ports (ports under 1024). I don’t need this for my setup, but I stumbled on it while reading, so I thought I would make a note of it here for anyone who is trying to run their reverse proxy or something similar in a rootless podman container. This probably isn’t the best fix, but you can change the range of privileged ports:

sudo sysctl net.ipv4.ip_unprivileged_port_start=80
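
That sysctl does not survive a reboot on its own; to make it permanent, drop it into a sysctl.d file:

echo 'net.ipv4.ip_unprivileged_port_start=80' | sudo tee /etc/sysctl.d/90-unprivileged-ports.conf
sudo sysctl --system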

One other thing I learned: you can replace the “up” in the docker-compose or podman compose command with “start” to just start the containers that were already created by an “up”. In this way, you can keep your current version until you are ready to start an upgrade. So change that systemd service file above to do docker-compose start and stop instead of up and down (and switch the service type accordingly; there is a sketch of this below).
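
For anyone wanting to try that, the changed lines in the service file would look roughly like this. This is only my untested sketch: since docker-compose start returns once the containers are running, I suspect Type=oneshot with RemainAfterExit=yes (and dropping Restart=always) is the safer fit, but I haven’t settled on it:

Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/docker-compose -H unix:///run/user/1001/podman/podman.sock start
ExecStop=/usr/bin/docker-compose -H unix:///run/user/1001/podman/podman.sock stop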

And then when you are ready to actually install the upgrade do (in the directory with the docker files):
systemctl --user stop seafile-containers
docker-compose down
docker-compose up --no-start
systemctl --user start seafile-containers

I haven’t yet figured out the podman compose version of doing that. And I might not bother, since I have the docker-compose working with minimal changes to the official docker files, which I hope means that I’m less likely to break things from future versions.

A podman version >5 is available in AlmaLinux (or Rocky Linux), as they are Red Hat derivatives or at least compatible.
I also did not mention that at least part of the issues I had are caused by the fact that I have installed AlmaLinux in an unprivileged LXC container in Proxmox. Now I better understand why this can be a pain in the butt :wink:
But this post has helped me a lot.