Backup Seafile data with cp or rsync (CE)

Is it safe to rsync seafile-data instead of using cp, given that Seafile stores data in blocks?

Also, in the non-Docker version 8, is this the folder I back up: /opt/seafile/seafile-data?

Or should I do the recommended full cp, as the manual says:

cp -R /opt/seafile/seafile-data /backup/data/opt/seafile/seafile-data-`date +"%Y-%m-%d-%H-%M-%S"`

The Seafile Admin Manual says either is acceptable:

The data files are all stored in the /data/haiwen directory, so just back up the whole directory. You can directly copy the whole directory to the backup destination, or you can use rsync to do incremental backup.

To directly copy the whole data directory,

cp -R /data/haiwen /backup/data/haiwen-`date +"%Y-%m-%d-%H-%M-%S"`

This produces a separate copy of the data directory each time. You can delete older backup copies after a new one is completed.

If you have a lot of data, copying the whole data directory would take long. You can use rsync to do incremental backup.

rsync -az /data/haiwen /backup/data

This command backup [sic] the data directory to /backup/data/haiwen.

You should also use '-H' (preserve hard links) with rsync, see: https://forum.seafile.com/t/tutorial-for-relocating-seafile-data-directory-optionally-on-an-encrypted-volume/131/5
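For reference, a minimal sketch of such an incremental backup, assuming the data directory is at /opt/seafile/seafile-data and the backup target is /backup/data (adjust both to your setup):

# -a preserves permissions and timestamps, -z compresses in transit,
# -H preserves hard links inside the data directory
rsync -azH /opt/seafile/seafile-data /backup/data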


Idk how I missed that, it says rsync right below it :slight_smile:

It would seem I do have to stop the server for data and database rsyncs
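As a rough sketch of what that could look like on a default non-Docker install (the script locations, database names and credentials below are assumptions, adjust to your setup):

# Stop Seafile so the data directory and databases stay consistent
/opt/seafile/seafile-server-latest/seahub.sh stop
/opt/seafile/seafile-server-latest/seafile.sh stop

# Incremental copy of the data directory (keep hard links with -H)
rsync -azH /opt/seafile/seafile-data /backup/data

# Dump the three MySQL/MariaDB databases of a default install
mysqldump -u seafile -p ccnet_db > /backup/db/ccnet_db.sql
mysqldump -u seafile -p seafile_db > /backup/db/seafile_db.sql
mysqldump -u seafile -p seahub_db > /backup/db/seahub_db.sql

# Start the server again
/opt/seafile/seafile-server-latest/seafile.sh start
/opt/seafile/seafile-server-latest/seahub.sh start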

Mind that (much depending on your concrete hardware setup and desired backup schedule, of course) even rsync may be no suitable solution, because comparing the directory trees alone takes a lot of time. During that time the server application must either be stopped, or the origin data keeps changing while it runs, resulting in inconsistent copies. I've seen this at an instance with a few TB of data and maybe around 100 users.

Copy-on-write (CoW) filesystems with snapshot capabilities, e.g. btrfs, are generally a good choice.
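If /opt/seafile sits on a btrfs subvolume, for example, a read-only snapshot is near-instant and atomic, and the copy to the backup destination can then run at leisure (the subvolume and snapshot paths below are assumptions):

# Take a read-only snapshot of the Seafile subvolume (near-instant)
btrfs subvolume snapshot -r /opt/seafile /opt/.snapshots/seafile-$(date +%Y-%m-%d)

# Copy the frozen snapshot to the backup destination without time pressure
rsync -azH /opt/.snapshots/seafile-$(date +%Y-%m-%d)/ /backup/data/seafile/

The databases would still need their own dump, of course.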

Use rclone and set up a Seafile source, then back up to any other destination. Works like a charm.
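For example, with rclone's seafile backend (the remote name and library name below are placeholders, configured interactively):

# One-time interactive setup; pick "seafile" as the remote type
rclone config

# List the libraries visible to the configured account
rclone lsd seafile-remote:

# Mirror one library to any other rclone-supported destination
rclone sync seafile-remote:MyLibrary /backup/seafile/MyLibrary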