Podman quadlets for Seafile v13, including the notification server, Redis, MariaDB, OnlyOffice, Caddy and rclone

Hello everyone,
I finally managed to migrate my entire server to rootless podman quadlets.

Advantages in my opinion:

  1. No rootful docker containers
  2. Even regular systemd services like rclone in my example can be made dependent on a container, so they start after each other in the right order
  3. I have a separate config file for each container, so I can e.g. use one MariaDB container for several container apps without having to start them together in one stack; thanks to systemd quadlets, they still start in the right order.
  4. Podman has auto-update built in, so there is no need for Watchtower or similar add-on apps. (I have a small script in place that checks for updates once a week and notifies me by email, so I can check whether some updates contain breaking changes that need manual intervention; afterwards I push the updates with "podman auto-update".)
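A minimal sketch of such a weekly check could look like this (my actual script may differ; the mail command and address are placeholders, and I'm assuming podman's --dry-run and --format flags, which only report what would be updated without pulling anything):

```shell
#!/usr/bin/env bash
# Weekly update-check sketch; mail command and address are placeholders.

# Keep only report rows whose last column says "pending"
filter_pending() {
    awk '$NF == "pending"'
}

if command -v podman >/dev/null; then
    # --dry-run only reports what would be updated, it pulls nothing
    pending=$(podman auto-update --dry-run --format "{{.Image}} {{.Updated}}" | filter_pending)
    if [ -n "$pending" ]; then
        printf 'Pending container updates:\n%s\n' "$pending" \
            | mail -s "Podman updates available" admin@example.com
    fi
fi
```

Run from a user cron job or a systemd timer once a week.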

Drawback:

  1. Without enabling the podman socket (which is not recommended), caddy-docker-proxy does not work, so I use a regular Caddyfile.
  2. There is no dedicated web-based quadlet editor like Portainer; Cockpit provides a podman plugin, but without proper quadlet support so far.

Since this was quite some work, I would like to share this with other forum members in case they are interested.
Quadlet files (.container & .network) go to ~/.config/containers/systemd; I keep the .env files in the same folder.
The .service file for rclone goes to ~/.config/systemd/user.
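For anyone new to quadlets: after placing the files, systemd needs a reload so its quadlet generator can turn each .container/.network file into a service. A rough sequence, assuming the quadlets below are in place (enable-linger is only needed if the containers should keep running while you are logged out):

```shell
# Let the user session (and its containers) survive logout/reboot
loginctl enable-linger "$USER"

# Regenerate units from the quadlet files in ~/.config/containers/systemd
systemctl --user daemon-reload

# Each quadlet becomes <name>.service; starting the top of the
# dependency chain pulls in the rest via Requires=/After=
systemctl --user start seafile-server.service
systemctl --user status seafile-server.service
```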

Let’s start with the seafile.env

TIME_ZONE=your timezone
SEAFILE_SERVER_HOSTNAME=https://my-seafiledomain.com
SEAFILE_SERVER_PROTOCOL=https
JWT_PRIVATE_KEY=your jwt key

ENABLE_SEADOC=false
##I don’t use seadoc at all

SEAFILE_MYSQL_DB_HOST=maria-db
SEAFILE_MYSQL_DB_PORT=3306
SEAFILE_MYSQL_DB_USER=seafile
SEAFILE_MYSQL_DB_PASSWORD=your seafilepassword

For a fresh install you also need to set the root DB password variable; this is not needed when migrating from Docker.

SEAFILE_MYSQL_DB_CCNET_DB_NAME=ccnet_db
SEAFILE_MYSQL_DB_SEAFILE_DB_NAME=seafile_db
SEAFILE_MYSQL_DB_SEAHUB_DB_NAME=seahub_db

CACHE_PROVIDER=redis

REDIS_HOST=seafile-redis
REDIS_PORT=6379

db.network (to create a custom podman network)

[Unit]
Description=Database Network
After=network-online.target

[Network]
NetworkName=db
Subnet=10.90.0.0/24
Gateway=10.90.0.1

[Install]
WantedBy=default.target

seafile-server.container

[Unit]
Description=Seafile Server
Requires=maria-db.service seafile-redis.service seafile-notifications.service oods.service
After=maria-db.service seafile-redis.service seafile-notifications.service oods.service

[Container]
ContainerName=seafile-server
EnvironmentFile=seafile.env
Image=docker.io/seafileltd/seafile-mc:13.0-latest
Label=io.containers.autoupdate=registry

Network=db
PublishPort=3001:80
Volume=/Media/Seafile:/shared

[Service]
Restart=on-failure

[Install]
WantedBy=default.target

seafile-notifications.container

[Unit]
Description=Seafile Notification Server
After=network.target
Wants=network.target

[Container]
Image=docker.io/seafileltd/notification-server:13.0-latest
ContainerName=seafile-notifications
Label=io.containers.autoupdate=registry

Network=db
EnvironmentFile=seafile.env

Volume=/Media/Seafile/seafile/logs:/shared/seafile/logs

PublishPort=8083:8083

[Service]
Restart=on-failure

[Install]
WantedBy=default.target

seafile-redis.container

[Unit]
Description=Seafile Redis Cache Server

[Container]
ContainerName=seafile-redis
Image=docker.io/redis:latest
Label=io.containers.autoupdate=registry
Network=db
HealthCmd=redis-cli ping || exit 1

[Service]
Restart=on-failure
Notify=healthy

[Install]
WantedBy=default.target

maria-db.container

[Unit]
Description=MariaDB container
After=network.target

[Container]
ContainerName=maria-db
Image=docker.io/mariadb:lts
Label=io.containers.autoupdate=registry
Network=db

HealthCmd=/usr/bin/mariadb-admin ping -h 127.0.0.1 -uroot -pyourrootpass --silent
HealthInterval=10s
HealthRetries=5
HealthTimeout=5s

Volume=/Databases/MariaDB:/var/lib/mysql

Environment=MARIADB_ROOT_PASSWORD=yourrootpass
Environment=MYSQL_INITDB_SKIP_TZINFO=1
Environment=MYSQL_LOG_CONSOLE=true
Environment=MARIADB_AUTO_UPGRADE=1

[Service]
Restart=on-failure
Notify=healthy

[Install]
WantedBy=default.target

oods.container

[Unit]
Description=OnlyOffice Document Server

[Container]
ContainerName=oods
Environment=JWT_ENABLED=true
Environment=JWT_SECRET=jwt secret
Image=docker.io/onlyoffice/documentserver:latest
Label=io.containers.autoupdate=registry

PublishPort=8086:80
[Service]
Restart=on-failure

[Install]
WantedBy=default.target

caddy.container

[Unit]
Description=Caddy container
After=network.target

[Container]
ContainerName=caddy
Image=docker.io/caddy:latest
Label=io.containers.autoupdate=registry
Network=host
Environment=XDG_DATA_HOME=/data

Volume=/Storage/Caddy/Caddyfile:/etc/caddy/Caddyfile
Volume=/Storage/Caddy/data:/data/caddy
[Service]
Restart=on-failure

[Install]
WantedBy=default.target

rclone.service (similar to seaf-fuse, but with full write access instead of read-only; optional)

[Unit]
Description=Rclone mount for Seafile
After=seafile-server.service
Requires=seafile-server.service

[Service]
Type=simple
ExecStart=/usr/bin/rclone mount Seafile: /RClone \
    --vfs-cache-mode full \
    --dir-cache-time 72h \
    --poll-interval 15s
ExecStop=/bin/fusermount -u /RClone
Restart=on-failure
RestartSec=5

[Install]
WantedBy=default.target

Rclone needs to be configured once with its built-in assistant (rclone config); this creates a config file under ~/.config, and the service file above will start the mount automatically (or you can comment out the WantedBy= line and start it manually).

Caddyfile

my-seafiledomain.com {
    reverse_proxy :3001
    handle_path /notification/* {
        reverse_proxy :8083
    }
    handle_path /office/* {
        reverse_proxy :8086 {
            header_up X-Forwarded-Host {host}/office
        }
    }
}

Thanks a lot to daniel.pan for helping me fix this Caddy config for running OnlyOffice in a subfolder of the Seafile domain (no additional port or separate subdomain required).

To make this work, you also need to update your seahub_settings.py as follows:

ONLYOFFICE_APIJS_URL = 'https://my-seafiledomain.com/office/web-apps/apps/api/documents/api.js'

Final remarks: I've added the following lines to my .bashrc to facilitate podman container handling (restart, stop, status).

##Custom Podman functions for bash
alias sudr='systemctl --user daemon-reload'

_pcac() {
    local cur container_services
    COMPREPLY=()
    cur="${COMP_WORDS[COMP_CWORD]}"

    # List all user services (active + inactive), including those generated from quadlets (.container)
    container_services=$(systemctl --user list-units --type=service --all --no-legend \
        | awk '{print $1}' \
        | sed 's/\.service$//')

    COMPREPLY=( $(compgen -W "${container_services}" -- "${cur}") )
    return 0
}
complete -F _pcac pcr
complete -F _pcac pcs
complete -F _pcac pcst

pcr() {
    systemctl --user restart "$1.service"
}
pcs() {
    systemctl --user stop "$1.service"
}
pcst() {
    systemctl --user status "$1.service"
}
##End of podman functions

So sudr is used to reload the quadlet configuration after changes have been made to the files.
pcr seafile-server restarts the container (and all dependencies), pcs seafile-server stops the server, and pcst seafile-server provides status information. For more detailed, real-time info on a container you can use e.g. podman logs -f seafile-server.

It is possible to harden this even further by using podman secrets for the passwords, but for now I guess this is enough. Feel free to comment on this setup or ask questions if you are interested in trying this.
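For that secrets hardening, a rough sketch (the secret name is mine; I'm assuming podman's secret subcommand and the quadlet Secret= option with type=env, which exposes a secret as an environment variable):

```shell
# Create the secret once; the value is read from stdin and stored
# in podman's secret store instead of any config file
printf '%s' 'yourrootpass' | podman secret create mariadb_root_pass -

# Then, in maria-db.container, replace the Environment= password line with:
#   Secret=mariadb_root_pass,type=env,target=MARIADB_ROOT_PASSWORD
# so the container sees MARIADB_ROOT_PASSWORD without the value living in the quadlet.
```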

Good luck!
Regards, Ruediger

P.S. Of note, I don’t use podman volumes for permanent data, but I prefer volume mounts directly to my zfs datasets to facilitate backups and direct access to all files from the host.


Good approach, I did the same thing, even scoped to its own Linux user.

Though you shouldn't hardcode secrets into the quadlets. You can pass an env file instead, but that is exactly one of the drawbacks of this approach: you can't load individual env vars from a file, you have to bring in the entire file, which makes isolating secrets a pain (which I never bothered with).

Frankly, in hindsight I think I'd just run the docker compose file with podman-compose, though I haven't tested it.

As far as I understand, you can use a separate feature, podman secrets, instead of placing passwords in the quadlet files or the env files.
I use dedicated env files if the app has a lot of environment variables, so I avoid writing Environment= a lot (lazy person) :slight_smile:
If there are only a few variables, I’ll keep them in the quadlet. Perhaps not very consistent, but reasonable.

You can for sure use podman-compose with compose files, but that's not the recommended way for podman. Also, you would lose podman's ability to have dependencies across different compose stacks.

E.g. if you run Seafile and SOGo with the same MariaDB container, you would have to put all those containers into one compose stack to make sure that MariaDB is up before both dependent containers start. There are some workarounds for that, but they are all rather complicated and not really clean solutions. With quadlets, each container has its own config file (.container), and they can still depend on each other.
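To illustrate the cross-stack dependency: a second app's quadlet just references the generated maria-db.service in its own [Unit] section. A hypothetical sogo.container could begin like this (names are illustrative):

```ini
# sogo.container (hypothetical second app sharing the maria-db container)
[Unit]
Requires=maria-db.service
After=maria-db.service

[Container]
ContainerName=sogo
Network=db
# Image=, volumes etc. follow the same pattern as the quadlets above
```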

Very nice if you use this together with ZFS snapshots.
I created a script for performing my snapshots that starts with

systemctl --user stop maria-db.service

This command will stop maria-db and all dependent quadlets at once.
The script then performs a ZFS snapshot and afterwards restarts the app containers. MariaDB will start automatically, as it is set as a requirement inside the app quadlets.
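A minimal sketch of such a snapshot script, with a placeholder dataset name:

```shell
#!/usr/bin/env bash
# Snapshot script sketch; "tank/Databases" is a placeholder dataset.
set -eu

# Stops maria-db plus every quadlet that lists it under Requires=
systemctl --user stop maria-db.service

# Take the snapshot while the database files are quiescent
zfs snapshot tank/Databases@"backup-$(date +%F)"

# Starting an app container pulls maria-db back up via Requires=/After=
systemctl --user start seafile-server.service
```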

Just a quick comment here to make sure I can find this again later when I have time to tinker again, and also to say that you are my hero! I’m using podman-compose now, and it works, but bothers me because it isn’t the right way to do it. Thank you! I can’t wait to fix my setup.

Please feel free to ask for help if you run into issues. I can definitely recommend starting to play around with quadlets on a separate system, e.g. a virtual machine. That's how I migrated from docker compose to podman quadlets.

I got some time to work on this, and I'm just checking in with a bit of a status update. I never understood quadlets before: I had read about them, but I didn't understand how systemd found out about the containers. It took a while to find out that quadlets don't actually work in Debian 13, and then I had the wrong directory for the .container files, and eventually I put this on hold to work on something else. With your example I'm back on the right track.

I only have some containers working so far, but I have found I needed to make some changes from your files, and thought I’d ask if I’m going down the wrong path again.

  1. "After=network.target" doesn't seem to work. It looks like you can't have user units depend on system units? I've used After=basic.target instead.
  2. maria-db.container should Requires= and After= db-network.service, right?
  3. seafile-server.container says "Requires=maria-db.service seafile-redis.service seafile-notifications.service oods.service", but in the official Docker setup the order is different. I think seafile-server should require maria-db and seafile-redis, but notifications, metadata and the other optional containers should then come after seafile-server (so they depend on seafile-server, not the other way around).
  4. I haven't got it working yet, but I think seafile-server needs something like HealthCmd="curl -f http://localhost:80 || exit 1" so that it is fully up before other containers that need it start, right?

Hi there,

for 1. it should be Wants=network-online.target
for 2. no, db.network gets started beforehand anyway
for 3. I noticed that both oods and seafile-notifications throw no errors when started before seafile-server, so I have it this way. Advantage: when I run "pcr seafile-server" after stopping the containers for some maintenance, it automatically starts all the other containers; otherwise it would not start seafile-notifications until I did so manually. But yes, this can be changed.
for 4. you can try this:

HealthCmd=curl -fs http://localhost/ || exit 1
HealthInterval=30s
HealthTimeout=10s
HealthStartPeriod=10s
HealthRetries=3

Kind regards,

P.S. I just noticed that the wiki feature does not work without SeaDoc; I will add the quadlet later on. Also, I moved Caddy to the custom network, so I can stop publishing container ports on the host. An update will follow…

Thank you, that helped. I also just figured out that a lot of the problems I have been having are because there are variables the containers want that aren't defined in the supplied .env file, but exist only in the .yml file. I thought an easy shortcut to get it running would be to just include that .env file in the environment of all containers, but in the end this forced me to take the time to put the needed variables into each .container file.

With that solved I have some containers working, but more work to do. I did decide to deviate from your dependencies. Instead I have it looking like this:

[Unit]
Description=Database Network
PartOf=seafile-containers.target

[Network]
NetworkName=db
Subnet=10.90.0.0/24
Gateway=10.90.0.1

[Install]
WantedBy=seafile-containers.target

The "WantedBy" and "PartOf" seafile-containers.target lines connect this to a target. In ~/.config/systemd/user, create seafile-containers.target looking like this:

[Unit]
Description=Start all the seafile containers
## Requires for the not-optional containers
Requires=seafile-server.service maria-db.service redis.service db-network.service 
## optional containers should be started if they exist
Wants=notification-server.service seafile-metadata-server.service seafile-thumbnail.service 

[Install]
WantedBy=default.target

After that file is created, run "systemctl --user daemon-reload", then "systemctl --user enable seafile-containers.target". With that, you can start all containers by starting seafile-containers.target, and (because of the PartOf line in the container files) you can stop them all by stopping seafile-containers.target.

I mostly did this because I am deploying this all with an ansible playbook, and so it really helps to have a single target to use to stop and start the containers, but I thought you might like it.

Now that I think I have a clear path to making this work, I am curious about some other stuff you talked about. I wonder if you could explain a bit about your script that tells you about updates; that sounds very useful.

I now have seafile running in production with completely rootless podman with quadlets. I learned a few things along the way that might have been obvious to some people but weren’t obvious to me, so I thought I would share.

  • Debian 13 (current stable) won’t work. The version of podman is too old to do quadlets. There might be some way to make it work, but I couldn’t find one as easy as just running testing, so I just stepped up to Debian testing.

  • I haven't been able to find a license for the SeaDoc component. There is no license in the GitHub project (github.com/haiwen/sdoc-server), so I would advise that you stay away from SeaDoc for now.

  • Use the same container names as the official setup. You might think it doesn't matter if you name the Seafile metadata container "seafile-metadata-server", but you would be wrong: it seems to be hardcoded in Seafile that it wants to talk to "seafile-md-server". The exception is the database, which isn't hard-coded (there is a variable for it), but you still need to be careful to match the variable and the container name.

  • Not all variables for a container are in the .env file. When converting, look in the .yml file for the variables instead. Sometimes the yml file pulls in the value from the .env file, and sometimes the value is just set there in the yml file.

  • See the above post about refining the dependencies to a single .target you can start/stop to start or stop all the containers in the right order.

  • Running seaf-fsck or seaf-gc can be weird, especially if you try to script it. One gotcha in particular if you are trying to run without root inside your containers: if $SEAFILE_VOLUME/seafile/seafile-data/storage (when viewed from outside the container) is owned by the user you run podman as, then seaf-fsck and seaf-gc will expect to be run as root inside the container. If the owner is a really high UID (one of your podman user's sub-UIDs), like 173535, then they will expect to run as the seafile user inside the container. I put this in my script to handle that:

data_dir="{{ seafile_volume_path }}/seafile/seafile-data/storage"
command="/opt/seafile/seafile-server-latest/seaf-fsck.sh \"$@\""

get_data_dir_owner(){
    stat -c %u "${data_dir}"
}

if [[ $(get_data_dir_owner) -gt 10000 ]] ; then
    user=seafile
else
    user=root
fi

sudo -u podman podman exec -it seafile-server \
    su $user -c "$command"