Seafile backup step-by-step how-to, plus: Co-existing with NextCloud, auto-mounting seaf-fuse with Docker

I would like to make regular backups of the Seafile data according to the instructions here.

[UPDATE: The Docker backup instructions here (Seafile CE Docker) or here (Seafile Pro Docker) are a little different and, had I followed them, would have made the scripts and setup a little simpler. That simpler approach and script are outlined in a reply further down. It is probably the better approach for most situations.]

I thought this would be a simple and straightforward project, but as always there are a few more twists and turns than anticipated. I thought others (and my future self) would benefit from a step-by-step walkthrough of how I did it, including various necessary scripts and files.

Setup:

  1. Seafile Server running on docker (Docker Compose setup files)
  2. Data store on external drive
  3. Running on an Ubuntu VM under Windows (though Windows & VM details are not very relevant to the steps below)
  4. Currently about 300 GB in Seafile storage, though that looks to be >1 TB soon.

The basic procedure is:

  1. Dump databases to external folder (I use a new subfolder of the Seafile data store folder)
  2. Copy directory containing Docker Compose .yml files & other setup files to the same new subfolder
  3. Use Rsync or some external backup program to back up those two things plus the Seafile data store files.

I’m using the procedure outlined below to dump the databases + Docker config files to the same folder as the Seafile data stores every morning at 5:55am, then Kopia to back up that entire folder starting at 6:00am; repeat the same process at 5:55pm & 6pm. For various reasons I’m running Kopia under Windows & the DB dumps under Ubuntu - thus the fairly loose coordination re: time. Normally you could run the DB dump routine as a before-backup action within the backup software. But the main need is to run the DB dump first, then the backup of the file store.

Instead of Kopia you could use Rsync, Duplicati, Duplicacy, Borg, etc etc etc. To me, setting up a backup program to back up a certain directory on a regular schedule is fairly routine. I will leave that part to you. The slightly trickier part is getting the DB dumps out of the Docker container and to a place where they can be accessed by the backup program.
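
For example, a minimal Kopia CLI setup for that side of the job might look like this (a sketch only - the repository path, retention numbers, and snapshot directory are placeholders, not from my actual setup):

# One-time: create a filesystem repository for the backups (path is illustrative)
kopia repository create filesystem --path /backups/seafile-repo

# Set a retention policy on the directory to be backed up
kopia policy set /mnt/seafile_main_data_store --keep-daily 14 --keep-weekly 8 --keep-monthly 12

# Run on a schedule (e.g. 6:00 and 18:00), after the DB dump script has finished
kopia snapshot create /mnt/seafile_main_data_store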

Here is the procedure I used to do that:

#1. Added 2 volumes to the db section of the Docker Compose file (last two lines shown here, marked with #*****):

services:
  db:
    image: mariadb:10.11
    container_name: seafile-mysql
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=<<my secret DB password>>  # Requested, set the root's password of MySQL service.
      - MYSQL_LOG_CONSOLE=true
      - MARIADB_AUTO_UPGRADE=1
    volumes:
      - /my/preferred/path/to/seafile-mysql/db:/var/lib/mysql  # Requested, specifies the path to MySQL data persistent store.
      - /my/directory/path/to/Seafile/datastore/seafile_db_dumps:/mnt/seafile_db_dumps  #***** A new directory within the .../seafile directory where we can store the DB dumps      
      - /home/<<my-user-name>>/seafile:/mnt/config #*****Making the directory with my Seafile Docker Compose files & other needed config files (such as the scripts to dump the DB) available to the DB container
...

#2. Within the directory /home/<<my-user-name>>/seafile (the same directory where I keep the Seafile Docker Compose files & other similar setup/config files), I add the new file seafile_databasedump.sh:

#!/bin/bash

#The directory where you want to save the database dumps (inside/relative to the seafile-mysql container)
#Note that this must be the same directory as previously defined in the Docker Compose file under db/volumes:
mydir=/mnt/seafile_db_dumps

#This must be the same directory as previously defined in the Docker Compose file for the config directory:
myconfigdir=/mnt/config

# This requires the same password defined in the Docker Compose .yml file as MYSQL_ROOT_PASSWORD:
myDBpw='<<my secret DB password>>'

dt=`date +"%Y-%m-%d-%H-%M-%S"`

echo "Adding backups and db dumps to $mydir/seafile_db_$dt.tgz (relative path within seafile-mysql container)..."
mkdir $mydir/$dt
mkdir $mydir/$dt/config
mysqldump -u root --password=$myDBpw --opt ccnet_db > $mydir/$dt/ccnet-db.sql
mysqldump -u root --password=$myDBpw --opt seafile_db > $mydir/$dt/seafile-db.sql
mysqldump -u root --password=$myDBpw --opt seahub_db > $mydir/$dt/seahub-db.sql
cp -r $myconfigdir/* $mydir/$dt/config/
cd $mydir
tar --exclude=$dt/config/office-preview -czvf $mydir/seafile_db_$dt.tgz $dt
rm -r $mydir/$dt

echo "Removing any seafile_db_*.tgz archives older than 30 days..."
find $mydir/ -name 'seafile_db_*.tgz' -type f -mtime +30 -exec rm -v {} +

Note that:

  • mydir=/mnt/seafile_db_dumps and myconfigdir=/mnt/config must match the two container-side paths we previously mounted under db: → volumes: in the Docker Compose file (note: no space around = in bash assignments)
  • Similarly, myDBpw='<<my-secret-DB-password>>' must be the database password you defined in the Docker Compose file
  • The backup instructions I was following (perhaps a little outdated?) listed the DB names as ccnet-db, seafile-db, and seahub-db. They are in fact named ccnet_db, seafile_db, and seahub_db (note UNDERSCORE instead of dash).
  • Seafile documentation suggests saving DB dumps for at least 7 days. The script above keeps them for 30 days, which is easy to modify. Given the backup method I am using (2X daily backups via Kopia), together with the fact that the current DB dumps are included in that backup, we really don’t need to keep more than one day’s worth of DB dumps - we can always find older versions simply by looking through older Kopia backups.
  • Now make the script executable: chmod +x seafile_databasedump.sh

This script is designed to be run from within the seafile-mysql container. Since it lives in the directory /home/<<my-user-name>>/seafile - which has been mounted as the volume /mnt/config within seafile-mysql - we will be able to do that using the next script.

#3. Within folder /home/<<my-user-name>>/seafile create script seafile_backup.sh:

#!/bin/bash

#run this daily/as desired via cron
echo Seafile database dump starting...
docker exec -u root seafile-mysql bash /mnt/config/seafile_databasedump.sh
echo Seafile database dump finished.
  • Note, again, that folder /mnt/config was defined within the Docker Compose file as a volume under seafile-mysql.
  • Make this script executable: chmod +x seafile_backup.sh
  • The purpose of this script is to run the first script within the seafile-mysql container.

Now you can test the entire system by running this script:

./seafile_backup.sh

You should see output similar to this, showing where the backup file is created (note that the path shown is within the seafile-mysql container file structure - to find the files outside of that container, you’ll have to translate that to the external file structure depending on how you set up the Docker Compose volumes) and which files were added to it:

Seafile database dump starting...
Adding backups and db dumps to /mnt/seafile_db_dumps/seafile_db_2024-01-18-05-01-29.tgz (relative path within seafile-mysql container)...
2024-01-18-05-01-29/
2024-01-18-05-01-29/ccnet-db.sql
2024-01-18-05-01-29/config/
2024-01-18-05-01-29/config/collabora.env
2024-01-18-05-01-29/config/docker-compose.yml
2024-01-18-05-01-29/config/office-previewer-settings.py
2024-01-18-05-01-29/config/seafile_backup.sh
2024-01-18-05-01-29/config/seafile_databasedump.sh
2024-01-18-05-01-29/seafile-db.sql
2024-01-18-05-01-29/seahub-db.sql
Removing any seafile_db_*.tgz archives older than 30 days...
Seafile database dump finished.

Note that you will find similar output in the system logs (in my system this is in /var/log/syslog, but this varies) and/or emailed to you via your cron system whenever the backup/DB dump runs.
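
You can also sanity-check the newest archive from the host. For example (the host-side path is illustrative - use wherever your seafile_db_dumps directory actually lives on the host):

# List the contents of the most recent dump archive
cd /my/directory/path/to/Seafile/datastore/seafile_db_dumps
tar -tzf "$(ls -t seafile_db_*.tgz | head -n 1)"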

Finally, add a line similar to this to your root crontab file (sudo crontab -u root -e):

55 5,17 * * * /home/<<your-username>>/seafile/seafile_backup.sh 2>&1 | logger -t seafile_db_cron
  • The above script will run at 5:55 and 17:55 daily
  • The database dump takes just a few seconds so it would probably be safe to run it even at 5:59 and 17:59 - assuming the file store backup begins at 6:00 and 18:00
  • The final portion of the line, 2>&1 | logger -t seafile_db_cron, sends the output of the script to the system log file (/var/log/syslog on my system - your exact log file may vary). You can use a command like tail -n 100000 /var/log/syslog | grep seafile_db_cron to find output from recent cron runs of the script (or see the journalctl alternative just below this list).
  • Just for simplicity I have added this to root crontab but it would probably run just as well from your account’s crontab
  • If your backup program allows it, you could also run the script as a pre-backup task.
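
On systemd-based distributions you can also query the journal directly by the tag passed to logger (assuming systemd-journald is capturing syslog messages):

# Show output from recent runs of the dump script
journalctl -t seafile_db_cron --since "24 hours ago"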

Just for reference, here is my full Docker Compose file, docker-compose.yml.

Note that the setup of the network under 10.10.10.x was to allow simultaneous operation of Seafile and NextCloud (both like to sit on port 443 . . . ).

This is a Seafile Pro installation, so it includes Elasticsearch integration and (if I recall correctly) works a little differently with the Collabora & Office Preview integration. But most things are no different from a CE installation, which I had previously set up.

networks:
  seafile_default:
    ipam:
      driver: default
      config:
        - subnet: "10.10.10.0/16"
          gateway: "10.10.10.1"

services:
  db:
    image: mariadb:10.11
    container_name: seafile-mysql
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=<<secret DB password>>  # Requested, set the root's password of MySQL service.
      - MYSQL_LOG_CONSOLE=true
      - MARIADB_AUTO_UPGRADE=1
    volumes:
      - /d/seafile/seafile-mysql/db:/var/lib/mysql  # Requested, specifies the path to MySQL data persistent store.
      - /mnt/seafile_main_data_store/seafile_db_dumps:/mnt/seafile_db_dumps  # Location of regular DB dumps & config file copies, for backups
      - /home/<<my-user-name>>/seafile:/mnt/config #location of my Docker Compose files & other config for Seafile

    networks:
      seafile_default:
        ipv4_address: 10.10.10.2

  memcached:
    image: memcached:1.6.18
    container_name: seafile-memcached
    restart: always
    entrypoint: memcached -m 256
    networks:
      seafile_default:
        ipv4_address: 10.10.10.3

  elasticsearch:
    image: elasticsearch:8.6.2
    container_name: seafile-elasticsearch
    ports:
      - 9200:9200   # 192.x.x.x is the IP address of the machine
    environment:
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
      - xpack.security.enabled=false
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 2g
    volumes:
      - /d/seafile/seafile-elasticsearch/data:/usr/share/elasticsearch/data  # Requested, specifies the path to Elasticsearch data persistent store
      #- /mnt/seafile_main_data_store_on_W/elasticsearch/data:/usr/share/elasticsearch/data  # alternate location for the Elasticsearch data persistent store
    networks:
      seafile_default:
        ipv4_address: 10.10.10.6

  seafile:
    image: docker.seadrive.org/seafileltd/seafile-pro-mc:latest
    #This command runs the normal startup command for the container - /sbin/my_init -- /scripts/enterpoint.sh
    #It also waits 15 seconds and then starts seaf-fuse - (sleep 15 && /opt/seafile/seafile-server-latest/seaf-fuse.sh start /seafile-fuse)&
    command: sh -c "(sleep 15 && /opt/seafile/seafile-server-latest/seaf-fuse.sh start /seafile-fuse)& /sbin/my_init -- /scripts/enterpoint.sh"
    container_name: seafile
    restart: unless-stopped
    privileged: true
    ports:
      # NO PORTS need to be defined here, because we have our external networks & hit them directly from CADDY
      #- "180:80" #Windows uses port 80 for WebDAV services etc, so use something else
      #- "1443:443"  # If https is enabled, cancel the comment.
      - "8000:8000" #backup access port

    volumes:
      - /mnt/seafile_main_data_store:/shared   # Requested, specifies the path to Seafile data persistent store. NEW location on W drive
      #  This is to create a local externally accessible mount point for the entire data store at /mnt/seafile-fuse      
      - type: bind
        source: /mnt/seafile-archive/
        target: /seafile-fuse/
        bind:
          propagation: rshared
    environment:
      - DB_HOST=db
      - DB_ROOT_PASSWD=<<secret-DB-password>>  # Requested, the value should be root's password of MySQL service (same as above)
      - TIME_ZONE=America/Chicago  # Optional, default is UTC. Uncomment and set to your local time zone.
      - SEAFILE_ADMIN_EMAIL=<<my-email-address>> # Specifies Seafile admin user, default is 'me@example.com'.
      - SEAFILE_ADMIN_PASSWORD=<<my-secret-admin-password>>     # Specifies Seafile admin password, default is 'asecret'.
      - SEAFILE_SERVER_LETSENCRYPT=false   # Whether to use https or not. (Not necessary as https is done via Caddy.)
      - SEAFILE_SERVER_HOSTNAME=sf.myddns.org # Specifies your host name if https is enabled.
    depends_on:
      - db
      - memcached
      - elasticsearch
    networks:
      seafile_default:
        ipv4_address: 10.10.10.4

  collabora:
    image: collabora/code:23.05.5.4.1
    container_name: collabora
    restart: unless-stopped
    #privileged: true
    user: cool
    env_file:
      - ./collabora.env
    ports:
      - "19980:9980"   # 192.x.x.x is the IP address of the machine
    networks:
      seafile_default:
        ipv4_address: 10.10.10.5

  seafile-office-preview:
    image: seafileltd/office-preview:latest
    container_name: seafile-office-preview
    command: bash start.sh
    environment:
      - IGNORE_JWT_CHECK=true   # Usually, seafile and office-preview are deployed on the same machine and communicate through the intranet
    ports:
      - "0.0.0.0:8089:8089"   # 192.x.x.x is the IP address of the machine
    volumes:
      - ./office-preview:/shared
      - ./office-previewer-settings.py:/opt/office_convertor/settings.py
    networks:
      seafile_default:
        ipv4_address: 10.10.10.7

File collabora.env in the same folder:

domain="https://sf.myddns.org"
username=<<myusername>>
password=<<mypassword>>

These are the relevant portions of the Caddyfile used to run NextCloud, Seafile, and a few other things on the same network:

#nextcloud
https://nc.myddns.org:443 {
        header Strict-Transport-Security max-age=31536000;
        reverse_proxy localhost:11000
}
#seafile
#Note these are sent directly to the 10.10.10.X network set up for
#Seafile & its various components in the Docker Compose file
#This allows Seafile to co-exist with NextCloud -  both want to
#own & live on ports 443 & 80
sf.myddns.org {
        reverse_proxy 10.10.10.4:80
}
sf.myddns.org/seafhttp* {
        uri strip_prefix seafhttp
        reverse_proxy 10.10.10.4:8082
}
sf.myddns.org/seafdav* {
        uri strip_prefix seafdav
        reverse_proxy 10.10.10.4:8899
}
#paperless
pl.myddns.org {
        reverse_proxy localhost:7500
}
#vaultwarden
vw.myddns.org {
        reverse_proxy localhost:7843
}
#collabora, running with SeaFile
co.myddns.org {
        encode gzip

        reverse_proxy localhost:19980 {
                transport http {
                        tls
                        tls_insecure_skip_verify
                }
        }
}
#Office Preview, running with SeaFile
op.myddns.org {
        reverse_proxy localhost:8089
}
#portainer - access via https://po.myddns.org:9443
po.myddns.org {
        reverse_proxy localhost:9443
}

One other special mod I do is to make the seaf-fuse mount point auto-start within the container.

[UPDATE: I now have a smoother way to accomplish this - see the complete instructions in a reply further down.]

Step 1 is to add a new “bind” volume to seafile in the Docker Compose file. See the full .yml file above, but here is the relevant section:

[services:
...
   seafile:
...
    volumes:
...]
      - type: bind
        source: /mnt/seafile-archive/
        target: /seafile-fuse/
        bind:
          propagation: rshared
  • Note that bind volumes do not work in Docker Desktop. I previously tried to run Seafile under Windows/WSL and Docker Desktop. This was the failure point. The solution was to install Ubuntu in a Hyper-V virtual machine instead, and run Docker/Seafile from there.
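
If the FUSE contents later fail to appear on the host, one diagnostic worth trying (my suggestion, not part of the original setup) is to confirm that the host-side mount point actually has shared propagation:

# Check mount propagation of the host-side mount point
findmnt -o TARGET,PROPAGATION /mnt/seafile-archive
# If needed, bind the directory onto itself and mark it rshared
sudo mount --bind /mnt/seafile-archive /mnt/seafile-archive
sudo mount --make-rshared /mnt/seafile-archive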

With the mount point set, the trick is to run seaf-fuse.sh every time the container restarts. There are probably a few ways to do this, but this is the procedure I’m using for now:

#1. sudo docker exec -it seafile bash

#2. From the container shell: apt-get update

#3. Then apt-get install nano
(The purpose of the above two steps is to install nano for editing the following file. If you prefer, you can skip them and instead edit with vim, which is pre-installed in the container.)

#4. nano /scripts/enterpoint.sh

#5. Look for the line that says log "This is an idle script (infinite loop) . . . ."

#6. Just before that line, add these lines:

/opt/seafile/seafile-server-latest/seaf-fuse.sh start /seafile-fuse

log
log "----------------------------------------"
log "loaded nano & initialized seaf-fuse"
log "----------------------------------------"
log

#7. Note that the specified directory /seafile-fuse matches with the target of the volume specified in the Docker Compose file. Externally the virtual file system will be available at /mnt/seafile-archive/, which is the source of the volume (again, as specified in the Docker Compose file as outlined above).

#8. Exit the container and restart it (exit, then sudo docker restart seafile).

Unfortunately, this must be re-done every time the container is rebuilt (docker compose up etc) or upgraded.

But in the meanwhile, the FUSE mount point is re-established at /mnt/seafile-archive every time the Seafile container restarts.

(It would be nice to have an option to enable or disable seaf-fuse, and to specify its directory, via an environment variable that could be set in the Docker Compose file. All it would take is checking for the environment variable(s) and including or omitting the key line above. But for now this is what we have.)
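
As a rough sketch of that idea (entirely hypothetical - ENABLE_SEAF_FUSE and SEAF_FUSE_MOUNT are made-up variable names), the addition to /scripts/enterpoint.sh in step #6 could become:

# Start seaf-fuse only if the (hypothetical) ENABLE_SEAF_FUSE variable is set to true
if [ "${ENABLE_SEAF_FUSE:-false}" = "true" ]; then
    /opt/seafile/seafile-server-latest/seaf-fuse.sh start "${SEAF_FUSE_MOUNT:-/seafile-fuse}"
    log "started seaf-fuse at ${SEAF_FUSE_MOUNT:-/seafile-fuse}"
fi

The two variables would then be set under environment: in the seafile service of the Compose file.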

Hi,

Regarding the FUSE aspect: I’ve been a long-time user of it, in order to pass Seafile folders through to music streaming servers.

I’ve done this if it is of any help : https://github.com/Cherryblue/seafile-docker/

It is now based on the official Dockerfile, with a few modifications for the points mentioned in the README (on the GitHub front page). With this, FUSE works out of the box - no custom action needed.

You can either build your own image based on what I’ve done, even modify it for your needs, or use the image I’ve built based on that : https://hub.docker.com/repository/docker/kynn/seafile-rpi/general.

It’s using 11.0.2 as the base, but you can easily tell it to use 11.0.3 in the Dockerfile. I’ll probably update the image soon, or wait for a new minor version of Seafile if any is in the works.

Have a nice day guys.


This is an extremely valuable post and should be of great interest to many. Thank you for taking the time to put it together.


As noted above, the Docker backup instructions here (Seafile CE Docker) or here (Seafile Pro Docker) are a bit simpler than what I implemented above.

So here is a script implementing that solution, but otherwise following the same general procedure as outlined above - dumping the databases and config folder using a script run by cron, then backing that up together with the Seafile data stores using Kopia (or the backup software of your choice).

This is different & simpler than the procedure above:

  • Does not require adding any new/extra volumes in the Docker Compose file
  • Requires just one shell script
  • Run that one script daily or as desired via (root) cron OR (if your backup software provides for this) as a pre-backup task just before your backup executes

Here is the step-by-step:

#1. Open the docker-compose.yml file and make a note of MYSQL_ROOT_PASSWORD found there as we will need it in Step #2.

#2. Within the directory /home/<<my-user-name>>/seafile (the same directory where I keep the Seafile Docker Compose files & other similar setup/config files), I add the new file seafile_backup.sh:

#!/bin/bash

#Run this daily via cron (as root user OR user in the root group - seafile runs as root)

#the directory where you want to save the database dumps (inside/relative to the seafile-mysql container):
mydir=/path/to/seafile_main_data_store/seafile_db_dumps
#Make sure this directory exists & has the same permissions as the other directories within the seafile_main_data_store directory

#the directory where seafile config files are found (Docker Compose files or in general any config-related things you want backed up):
configdir=/home/<<YOUR USERNAME>>/seafile

#a directory within the configdir that you wish to exclude from the backup (begin with /):
excludedir=/office-preview
#NOTE: If you don't want/need to exclude a directory, then edit the tar statement below to remove the --exclude option
#But don't just leave the excludedir blank - that will exclude everything in the configdir from the backup

daystosave=3

dt=`date +"%Y-%m-%d-%H-%M-%S"`

myDBpw='<<MYSQL_ROOT_PASSWORD as found in docker-compose.yml>>'

echo "****SDD: Beginning Seafile database dump and config file backup..."
echo "SDD: Adding backups and db dumps to $mydir/seafile_db_$dt.tgz ..."
mkdir $mydir/$dt
mkdir $mydir/$dt/config

docker exec seafile-mysql mysqldump -u root --password=$myDBpw --opt ccnet_db > $mydir/$dt/ccnet_db.sql
docker exec seafile-mysql mysqldump -u root --password=$myDBpw --opt seafile_db > $mydir/$dt/seafile_db.sql
docker exec seafile-mysql mysqldump -u root --password=$myDBpw --opt seahub_db > $mydir/$dt/seahub_db.sql

cp -r $configdir/* $mydir/$dt/config/
cd $mydir
tar --exclude=$dt/config$excludedir -czvf $mydir/seafile_db_$dt.tgz $dt
rm -r $mydir/$dt

echo "SDD: Removing any seafile_db_*.tgz archives older than $daystosave days..."
find $mydir/ -name 'seafile_db_*.tgz' -type f -mtime +$daystosave -exec rm -v {} +
echo "****SDD: Seafile database dump and config file backup finished."

Note that:

  • The directory you specify in mydir=/path/to/seafile_main_data_store/seafile_db_dumps must exist and should have the same permissions as the other files/directories within the Seafile data store folder. So do something like cd /path/to/seafile_main_data_store and sudo mkdir seafile_db_dumps
  • myDBpw='<<DB-password>>' - this must be the same database password you defined in the Docker Compose file as MYSQL_ROOT_PASSWORD
  • Seafile documentation suggests saving DB dumps for at least 7 days. The script above keeps them for just 3 days, because I run the script twice daily (thus 6 full DB dumps on hand) and Kopia also saves daily, weekly, and monthly DB dumps along with the corresponding data stores. Obviously, change this as desired.
  • Now make the script executable: chmod +x seafile_backup.sh

Make sure the Seafile containers are started and running in Docker.

Now you can test-run the script via this command:

sudo ./seafile_backup.sh

(Because Seafile runs as root, various files or directories will have, or in some cases can have, root permissions - thus the need for sudo.)

You should see output similar to this, showing where the backup file is created (note that the path shown is within the seafile-mysql container file structure - to find the files outside of that container, you’ll have to translate that to the external file structure depending on how you set up the Docker Compose volumes) and which files were added to it:

****SDD: Beginning Seafile database dump and config file backup...
SDD: Adding backups and db dumps to /mnt/seafile_db_dumps/seafile_db_2024-01-18-05-01-29.tgz ...
2024-01-18-05-01-29/
2024-01-18-05-01-29/ccnet_db.sql
2024-01-18-05-01-29/config/
2024-01-18-05-01-29/config/collabora.env
2024-01-18-05-01-29/config/docker-compose.yml
2024-01-18-05-01-29/config/office-previewer-settings.py
2024-01-18-05-01-29/config/seafile_backup.sh
2024-01-18-05-01-29/seafile_db.sql
2024-01-18-05-01-29/seahub_db.sql
SDD: Removing any seafile_db_*.tgz archives older than 3 days...
****SDD: Seafile database dump and config file backup finished.

Note that, if you follow the instructions for editing crontab just below, you will find similar output in the system logs (in my system this is in /var/log/syslog, but this varies) and/or emailed to you via your cron system whenever the backup/DB dump runs.

Check that the .tgz file produced by our test run of the script is in the location expected, contains the database dumps and config directory files as expected, and that the database dump files look as expected.
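
For example (host-side paths are illustrative, and the tar flags assume GNU tar):

cd /path/to/seafile_main_data_store/seafile_db_dumps
# List the contents of the most recent archive
tar -tzf "$(ls -t seafile_db_*.tgz | head -n 1)"
# Peek at the start of one of the dumps without extracting the archive
tar -xzOf "$(ls -t seafile_db_*.tgz | head -n 1)" --wildcards '*/seafile_db.sql' | head -n 20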

#3. Add a line similar to this to your root crontab file (sudo crontab -u root -e):

55 5,17 * * * /home/<<your-username>>/seafile/seafile_backup.sh 2>&1 | logger -t seafile_db_cron
  • The above script will run at 5:55 and 17:55 daily
  • The database dump takes just a few seconds so it would probably be safe to run it even at 5:59 and 17:59 - assuming the file store backup begins at 6:00 and 18:00
  • The final portion of the line 2>&1 | logger -t seafile_db_cron adds the output of the script to the system log file (/var/log/syslog in my system - your exact log file may vary). You can use a command like tail -n 100000 /var/log/syslog | grep seafile_db_cron to find output from recent cron runs of the script.
  • Because Seafile runs as root, to access/edit/add files within the data store directory it is simplest to also run this script as root. So I suggest adding this line to the root crontab as described above.
  • If your backup program allows it, you could run the script as a pre-backup task rather than via cron.

#4. Set up your preferred backup solution, or rsync, to run just after the script (say, daily at 6:00 and 18:00) and to back up the entire Seafile library/data store directory, including the seafile_db_dumps directory that we have created within it.
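
If you prefer plain rsync over a snapshot-based tool, the equivalent (a sketch - source and destination paths are placeholders) would be something like:

# Mirror the entire data store (which now includes seafile_db_dumps) to a backup location
rsync -a --delete /path/to/seafile_main_data_store/ /mnt/backup/seafile/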

#5. To be completely confident of your backup, once your backup program has backed up the data store directory (with your database dumps and config files) as well as the entire Seafile data store, you should end-to-end test the backup. Here is a step-by-step:

  1. Set up a different machine or VM with Docker etc

  2. Copy your config directory to the corresponding /home/<<YOUR USERNAME>>/seafile folder on the new machine

  3. Build the Seafile Docker containers using those config files (docker compose up -d etc) - perhaps modifying them to use different data directories etc as necessary.

  4. Restore the databases from the DB dump files into the seafile-mysql container as described here (CE) or here (Pro). In brief that is:

docker cp /opt/seafile-backup/databases/ccnet_db.sql seafile-mysql:/tmp/ccnet_db.sql
docker cp /opt/seafile-backup/databases/seafile_db.sql seafile-mysql:/tmp/seafile_db.sql
docker cp /opt/seafile-backup/databases/seahub_db.sql seafile-mysql:/tmp/seahub_db.sql

docker exec -it seafile-mysql /bin/sh -c "mysql -uroot ccnet_db < /tmp/ccnet_db.sql"
docker exec -it seafile-mysql /bin/sh -c "mysql -uroot seafile_db < /tmp/seafile_db.sql"
docker exec -it seafile-mysql /bin/sh -c "mysql -uroot seahub_db < /tmp/seahub_db.sql"
  5. Restore the Seafile library/data stores to the expected directory as explained in the same document.

  6. Test to make sure the restored version works as expected (a couple of quick sanity checks are sketched below).
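
A couple of quick sanity checks at that point might be (the container name and IP come from the Compose file above; mysql will prompt for the root password):

# Confirm the restored databases contain tables
docker exec -it seafile-mysql mysql -u root -p -e "SHOW TABLES IN seafile_db;"
# Confirm the web UI responds on the internal network
curl -I http://10.10.10.4:80/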

Here is a smoother way to make the seaf-fuse mount point auto-start within the container:

Step 1 is to add a new “bind” volume to seafile in the Docker Compose file. See the full .yml file above, but here is the relevant section:

[services:
...
   seafile:
...
    volumes:
...]
      - type: bind
        source: /mnt/seafile-archive/
        target: /seafile-fuse/
        bind:
          propagation: rshared
  • Note that bind volumes do not work in Docker Desktop. I previously tried to run Seafile under Windows/WSL and Docker Desktop. This was the failure point. The solution was to install Ubuntu in a Hyper-V virtual machine instead, and run Docker/Seafile from there.

Now we need to run seaf-fuse.sh start /seafile-fuse/ within the Docker container. Here is a smoother way to accomplish that, requiring only a small change to the Docker Compose .yml. Add the following line to your docker-compose.yml:

[services:
...
   seafile:
...]
       command: sh -c "(sleep 15 && /opt/seafile/seafile-server-latest/seaf-fuse.sh start /seafile-fuse)& /sbin/my_init -- /scripts/enterpoint.sh"
...

The second part of the command (/sbin/my_init -- /scripts/enterpoint.sh) is just the normal initial command for the Seafile docker container.

The first part - (sleep 15 && /opt/seafile/seafile-server-latest/seaf-fuse.sh start /seafile-fuse)& - waits 15 seconds, then starts seaf-fuse.sh in a background thread.

In your logs you can verify that this command ran - look for output like this:

seafile  | Starting seaf-fuse, please wait ...
seafile  | seaf-fuse started
seafile  |
seafile  | Done.

One possible future issue: if Seafile ever changes the init script for the container (to something besides enterpoint.sh), this command will fail and require some tweaking. However, enterpoint.sh seems to be a standard initial command for the Seafile Docker containers, so it seems unlikely to change. Even if it does change in a future version, the required changes to the Compose file should be relatively simple. The basic idea here is to add another shell command in addition to the one used to start the container.

Again, there may be smoother ways to accomplish this - and I would love to hear about them from anyone who cares to share. But this approach is fairly simple, robust, and portable, in the sense of requiring only a few lines in the Docker Compose file rather than twiddling with internal files in the container itself.