I would like to make regular backups of the Seafile data according to the instructions here.
[UPDATE: The Docker backup instructions in the Seafile Admin Manual for Seafile CE Docker (and the corresponding page for Seafile Pro Docker) are a little different, and following them would have made the scripts and setup a little simpler. That simpler approach and script is outlined here. It is probably better than the approach below for most situations.]
I thought this would be a simple and straightforward project, but as always there are a few more twists and turns than anticipated. I thought others (and my future self) would benefit from a step-by-step walkthrough of how I did it, including various necessary scripts and files.
Setup:
- Seafile Server running on docker (Docker Compose setup files)
- Data store on external drive
- Running on an Ubuntu VM under Windows (though Windows & VM details are not very relevant to the steps below)
- Currently about 300 GB in Seafile storage, though that looks likely to exceed 1 TB soon.
The basic procedure is:
- Dump databases to external folder (I use a new subfolder of the Seafile data store folder)
- Copy directory containing Docker Compose .yml files & other setup files to the same new subfolder
- Use Rsync or some external backup program to back up those two things plus the Seafile data store files.
I’m using the procedure outlined below to dump the databases and Docker config files into the same folder as the Seafile data store every morning at 5:55 am, then Kopia backs up that entire folder starting at 6:00 am; the same process repeats at 5:55 pm and 6:00 pm. For various reasons I’m running Kopia under Windows and the DB dumps under Ubuntu - hence the fairly loose time-based coordination. Normally you could run the DB dump routine as a before-backup action within the backup software. Either way, the key requirement is to run the DB dump first and the backup of the file store second.
Instead of Kopia you could use Rsync, Duplicati, Duplicacy, Borg, etc etc etc. To me, setting up a backup program to back up a certain directory on a regular schedule is fairly routine. I will leave that part to you. The slightly trickier part is getting the DB dumps out of the Docker container and to a place where they can be accessed by the backup program.
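For anyone who does want a starting point for that routine part, here is a minimal sketch using rsync; the source path follows the example paths used in the Compose file below, and the destination is a placeholder you would replace with your own backup target:

#!/bin/bash
# Minimal sketch of the file-store backup step (placeholder paths; adjust to your layout).
# Because the seafile_db_dumps subfolder created below lives inside the data store
# directory, one rsync run captures both the file store and the latest DB dumps.
rsync -aH --delete \
  /my/directory/path/to/Seafile/datastore/ \
  /mnt/backup-drive/seafile-backup/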
Here is the procedure I used to get the DB dumps out of the Docker container and into a place the backup program can see:
#1. Added 2 volumes to the DB section of the Docker Compose file (last two lines shown here, marked with #*****):
services:
  db:
    image: mariadb:10.11
    container_name: seafile-mysql
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=<<my secret DB password>> # Requested, set the root's password of MySQL service.
      - MYSQL_LOG_CONSOLE=true
      - MARIADB_AUTO_UPGRADE=1
    volumes:
      - /my/preferred/path/to/seafile-mysql/db:/var/lib/mysql # Requested, specifies the path to MySQL data persistent store.
      - /my/directory/path/to/Seafile/datastore/seafile_db_dumps:/mnt/seafile_db_dumps #***** A new directory within the .../seafile directory where we can store the DB dumps
      - /home/<<my-user-name>>/seafile:/mnt/config #***** Making the directory with my Seafile Docker Compose files & other needed config files (such as the scripts to dump the DB) available to the DB container
  ...
- This is only the first portion of the Docker Compose file with the necessary two extra lines. For your reference, the full Docker Compose file is in a reply below.
- Make a note of MYSQL_ROOT_PASSWORD found here as we will need it later.
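After editing the Compose file, the db container has to be recreated for the new volumes to take effect. A quick sanity check, run from the directory containing docker-compose.yml (container name as defined above), might look like this:

# Recreate the containers so the new volume definitions are picked up,
# then confirm both mounts are visible inside the seafile-mysql container.
docker compose up -d
docker exec seafile-mysql ls /mnt/config
docker exec seafile-mysql ls /mnt/seafile_db_dumps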
#2. Within the directory /home/<<my-user-name>>/seafile (the same directory where I keep the Seafile Docker Compose files & other similar setup/config files), I add the new file seafile_databasedump.sh:
#!/bin/bash
#The directory where you want to save the database dumps (inside/relative to the seafile-mysql container)
#Note that this must be the same directory as previously defined in the Docker Compose file under db/volumes:
mydir=/mnt/seafile_db_dumps
#This must be the same directory as previously defined in the Docker Compose file for the config directory:
myconfigdir=/mnt/config
# This requires the same password defined in the Docker Compose .yml file as MYSQL_ROOT_PASSWORD:
myDBpw='<<my secret DB password>>'
dt=`date +"%Y-%m-%d-%H-%M-%S"`
echo "Adding backups and db dumps to $mydir/seafile_db_$dt.tgz (relative path within seafile-mysql container)..."
mkdir $mydir/$dt
mkdir $mydir/$dt/config
mysqldump -u root --password=$myDBpw --opt ccnet_db > $mydir/$dt/ccnet-db.sql
mysqldump -u root --password=$myDBpw --opt seafile_db > $mydir/$dt/seafile-db.sql
mysqldump -u root --password=$myDBpw --opt seahub_db > $mydir/$dt/seahub-db.sql
cp -r $myconfigdir/* $mydir/$dt/config/
cd $mydir
tar --exclude=$dt/config/office-preview -czvf $mydir/seafile_db_$dt.tgz $dt
rm -r $mydir/$dt
echo "Removing any seafile_db_*.tgz archives older than 30 days..."
find $mydir/ -name 'seafile_db_*.tgz' -type f -mtime +30 -exec rm -v {} +
Note that:
- mydir=/mnt/seafile_db_dumps and myconfigdir=/mnt/config are the same two directories we previously created under db: / volumes: in the Docker Compose file.
- Similarly, myDBpw='<<my-secret-DB-password>>' must be the database password you defined in the Docker Compose file.
- The backup instructions I was following (perhaps a little outdated?) listed the DB names as ccnet-db, seafile-db, and seahub-db. They are in fact named ccnet_db, seafile_db, and seahub_db (note UNDERSCORE instead of dash).
- Seafile documentation suggests saving DB dumps for at least 7 days. The script above saves them for 30 days, which can easily be modified. Given the backup method I am using (2x daily backups via Kopia), together with the fact that the current DB dumps are included in that backup, we really don’t need to save more than one day’s worth of old DB dumps - because we can always find older versions of the DB dumps simply by looking through older Kopia backups.
- Now make the script executable:
chmod +x seafile_databasedump.sh
This script is designed to be run from within the seafile-mysql container. Since it lives in /home/<<my-user-name>>/seafile, which has been mounted as the volume /mnt/config within seafile-mysql, we will be able to do that using the next script.
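One optional variation (my own suggestion, not part of the Seafile instructions): if you would rather not keep the root password inside the script itself, you can put it in a MySQL/MariaDB option file and point mysqldump at it. Note that --defaults-extra-file must be the first option, and keep in mind that the config directory (including this file) is copied into the dump archive anyway, just like the Compose file that already contains the password:

# Hypothetical variation: create /mnt/config/db_backup.cnf (chmod 600) containing:
#   [client]
#   user=root
#   password=<<my secret DB password>>
# Then the three dump lines in seafile_databasedump.sh become:
mysqldump --defaults-extra-file=$myconfigdir/db_backup.cnf --opt ccnet_db > $mydir/$dt/ccnet-db.sql
mysqldump --defaults-extra-file=$myconfigdir/db_backup.cnf --opt seafile_db > $mydir/$dt/seafile-db.sql
mysqldump --defaults-extra-file=$myconfigdir/db_backup.cnf --opt seahub_db > $mydir/$dt/seahub-db.sql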
#3. Within folder /home/<<my-user-name>>/seafile create script seafile_backup.sh:
#!/bin/bash
#run this daily/as desired via cron
echo Seafile database dump starting...
docker exec -u root seafile-mysql bash /mnt/config/seafile_databasedump.sh
echo Seafile database dump finished.
- Note, again, that the folder /mnt/config was defined within the Docker Compose file as a volume under seafile-mysql.
- Make this script executable:
chmod +x seafile_backup.sh
- The purpose of this script is to run the first script within the seafile-mysql container.
Now you can test the entire system by running this script:
./seafile_backup.sh
You should see output similar to this, showing where the backup file is created and which files were added to it. (Note that the path shown is within the seafile-mysql container's file structure; to find the files outside of that container, you’ll have to translate it to the external file structure depending on how you set up the Docker Compose volumes - see the host-side check after the example output below.)
Seafile database dump starting...
Adding backups and db dumps to /mnt/seafile_db_dumps/seafile_db_2024-01-18-05-01-29.tgz (relative path within seafile-mysql container)...
2024-01-18-05-01-29/
2024-01-18-05-01-29/ccnet-db.sql
2024-01-18-05-01-29/config/
2024-01-18-05-01-29/config/collabora.env
2024-01-18-05-01-29/config/docker-compose.yml
2024-01-18-05-01-29/config/office-previewer-settings.py
2024-01-18-05-01-29/config/seafile_backup.sh
2024-01-18-05-01-29/config/seafile_databasedump.sh
2024-01-18-05-01-29/seafile-db.sql
2024-01-18-05-01-29/seahub-db.sql
Removing any seafile_db_*.tgz archives older than 30 days...
Seafile database dump finished.
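To run that host-side check, translate the container path /mnt/seafile_db_dumps back to the host directory you mounted it from in the Compose file. With the example paths used above, that translation would look something like this:

# On the host (outside the container): list the archives and peek inside the newest one.
# /mnt/seafile_db_dumps inside the container == the host path mapped in docker-compose.yml.
ls -lh /my/directory/path/to/Seafile/datastore/seafile_db_dumps/
tar -tzf /my/directory/path/to/Seafile/datastore/seafile_db_dumps/seafile_db_2024-01-18-05-01-29.tgz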
Note that you will find similar output in the system logs (in my system this is in /var/log/syslog, but this varies) and/or emailed to you via your cron system whenever the backup/DB dump runs.
Finally, add a line similar to this to your root crontab file (sudo crontab -u root -e):
55 5,17 * * * /home/<<your-username>>/seafile/seafile_backup.sh 2>&1 | logger -t seafile_db_cron
- This crontab entry runs the script at 5:55 and 17:55 daily
- The database dump takes just a few seconds, so it would probably be safe to run it even at 5:59 and 17:59, assuming the file store backup begins at 6:00 and 18:00
- The final portion of the line, 2>&1 | logger -t seafile_db_cron, adds the output of the script to the system log file (/var/log/syslog in my system - your exact log file may vary). You can use a command like tail -n 100000 /var/log/syslog | grep seafile_db_cron to find output from recent cron runs of the script.
- Just for simplicity I have added this to the root crontab, but it would probably run just as well from your own account's crontab.
- If your backup program allows it, you could also run the script as a pre-backup task (see the sketch below).
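To illustrate that last point (this is just a sketch, not what I run, since my Kopia instance lives on a separate Windows machine): if your backup program runs on the same host, Kopia's Actions feature can trigger the dump script immediately before each snapshot. Actions have to be explicitly enabled on the repository, and the exact flags may vary by Kopia version, so check the Kopia documentation:

# Sketch only: assumes Kopia runs on the same Ubuntu host and that Actions were
# enabled when connecting the repository (e.g. kopia repository connect ... --enable-actions).
kopia policy set /my/directory/path/to/Seafile/datastore \
  --before-snapshot-root-action /home/<<your-username>>/seafile/seafile_backup.sh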