Set it and forget it backup solution

I’ve spent many hours searching these forums and elsewhere for an easy “set it and forget it” backup solution for Seafile, without much success. At this point, I’m considering plugging an external HDD into my laptop once a week, using the Seafile client to sync all libraries to the drive, and then using Backblaze Personal to back it all up. Is there a better solution?

I store 2 TB worth of data on the Seafile server and mount it on my laptop, since my laptop has a small HDD. Seafile is hosted on an Odroid XU4 running YunoHost. Can someone please help me figure out a solid backup solution for my data?

I haven’t tried the backup-server feature myself, and it seems a bit difficult to set up, in my personal opinion. From what I know, you can:

  • attach a USB drive to your Odroid server, copy the seafile-data dir and dump the databases (automating the process, of course), so that if something goes wrong you can set up a new server and restore the data from the USB drive. The doc about it is here. This is the best option for quickly restoring the cloud system (a minimal script sketch follows this list).

  • set up the seafile-cli client on a second Linux device (something like a Raspberry Pi) that just auto-syncs the libraries you want to back up (basically another client that is always on). You need a little Linux terminal knowledge for this; here you can find the how-to (there’s also a command sketch a bit further down). This is the best option for having quick access to the files.
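
For the first option, here is a minimal sketch of what the automated USB backup could look like. The paths, database names and schedule are assumptions, not from the thread; adapt them to your install and to the official backup doc.

```bash
#!/bin/bash
# Hypothetical nightly backup of a Seafile server to a USB drive mounted at /mnt/usb-backup.
# Database names (ccnet_db, seafile_db, seahub_db) and the seafile-data path are placeholders;
# assumes MySQL credentials are available (e.g. via ~/.my.cnf).
set -euo pipefail

BACKUP_ROOT=/mnt/usb-backup
DATE=$(date +%F)
mkdir -p "$BACKUP_ROOT/databases" "$BACKUP_ROOT/data"

# 1. Dump the three Seafile databases
for db in ccnet_db seafile_db seahub_db; do
    mysqldump --single-transaction "$db" > "$BACKUP_ROOT/databases/${db}.${DATE}.sql"
done

# 2. Copy the seafile-data directory (rsync only transfers changes on later runs)
rsync -a --delete /opt/seafile/seafile-data/ "$BACKUP_ROOT/data/seafile-data/"
```

Run it from cron (e.g. `0 3 * * * /usr/local/bin/seafile-usb-backup.sh`) and it takes care of itself.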

I personally do both things: I back up (all of) my BananaPi server to a USB drive, and I set up my old Buffalo NAS running Debian to act as a cli-client.
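
For the second option (the cli-client on the always-on device), a rough sketch of the setup; the server URL, username, library ID and directories are placeholders, and the how-to linked above has the full steps:

```bash
# Hypothetical seaf-cli setup on the second Linux device (e.g. a Raspberry Pi).
# Replace the server URL, user, library ID and paths with your own values.
sudo apt-get install seafile-cli

mkdir -p ~/seafile-client ~/seafile-backup
seaf-cli init -d ~/seafile-client   # one-time client initialisation
seaf-cli start                      # start the sync daemon

# Download/sync one library; repeat for every library you want backed up
seaf-cli download -l <library-id> -s https://seafile.example.org \
    -d ~/seafile-backup -u user@example.org

seaf-cli status                     # check sync progress
```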

Thank you for the reply! This seems like a solid fail-safe option; the only thing is that I’m also trying to avoid having more than one always-on device set up.

I understand that in your case you use a second Linux device to back up the actual file structure, but is there no way to back up the file structure onto a second USB drive attached to my Odroid, and then rsync it remotely to a service like Backblaze B2?

Yes, you could set up the Seafile CLI client on the same machine as the Seafile server and make the cli-client sync to a USB drive, so the data is easily recoverable in case the main Odroid storage is damaged.

Anyway I prefer having the backup far from the main source.
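
On the Backblaze B2 part of the question: B2 is object storage, so rsync can’t push to it directly; a tool like rclone (not mentioned in the thread, just one common choice) can sync the local backup copy to a B2 bucket. A rough sketch, assuming an rclone remote named “b2” and a bucket “seafile-backup” were already set up with `rclone config`:

```bash
# Hypothetical offsite sync of the local backup copy to Backblaze B2 with rclone.
# The remote name "b2" and bucket "seafile-backup" are placeholders.
rclone sync /mnt/usb-backup b2:seafile-backup --transfers 4 --progress
```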

Could I also just use seaf-fuse to mount the actual file structure and rsync from the mounted directory?

Maybe. I’ve never tried SeaDrive, though.
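
For reference, the FUSE extension ships with the server and exposes libraries as plain files; a rough sketch of mounting it and rsyncing the mounted view (mount point and destination are placeholders, and like the other plain-file copies this won’t protect against deletions that have already synced):

```bash
# Hypothetical use of Seafile's FUSE extension, then rsync of the mounted view to a second drive.
# Paths are placeholders; adjust to your install.
mkdir -p /mnt/seafile-fuse
cd /opt/seafile/seafile-server-latest
./seaf-fuse.sh start /mnt/seafile-fuse     # mounts libraries read-only as regular files

rsync -a /mnt/seafile-fuse/ /mnt/usb-backup/fuse-copy/

./seaf-fuse.sh stop                        # unmount when done
```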

I personally use both Option A (db dump + a backup copy of seafile-data) and Option B (syncing everything to another otherwise-unused HDD kept purely as a copy). Option B won’t protect against accidental deletion of files, though Seafile has its own internal recycle bin.

@archivepipper I get both options synced remotely via SSH with rsync, so the backup is kept safe and away from the source as @Pazzoide mentioned, mounting the remote server first. To optimise speed, you may want to copy the data locally first (if you have space) and transfer it as soon as that’s done.
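
A rough sketch of that offsite step (hostname and paths are placeholders; on later runs rsync only transfers what changed):

```bash
# Hypothetical offsite copy of the local backup over SSH; host and paths are placeholders.
rsync -az --delete \
    /mnt/usb-backup/ \
    backupuser@offsite.example.org:/srv/seafile-backup/
```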

Another option I experimented with was copying the raw db data folder (/var/lib/mysql). After a kernel extension failure left the system unable to boot, I was unsuccessful in recovering from that copy and ended up starting the db in safe mode to dump the DBs instead. I would recommend AVOIDING raw db data folder copies, as they remain dependent on the particular database version used in that environment.

This approach has worked for me through 4 major server migrations, with data starting at 800 GB and growing to 1.4 TB over time…

Recovery is straightforward (a rough command sketch follows the steps below).

  1. Install a fresh Seafile.
  2. Import the SQL dumps.
  3. Copy the backed-up seafile-data into the newly installed folder.
  4. Start Seafile and Seahub.
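
As a rough sketch of steps 2–4 (database names, paths and the dump date are placeholders matching the backup sketch earlier in the thread):

```bash
# Hypothetical restore onto a freshly installed Seafile server; adjust names and paths to your setup.
BACKUP_ROOT=/mnt/usb-backup
DATE=2024-01-01   # the date of the backup you want to restore (placeholder)

# 2. Import the SQL dumps into the freshly created databases
for db in ccnet_db seafile_db seahub_db; do
    mysql "$db" < "$BACKUP_ROOT/databases/${db}.${DATE}.sql"
done

# 3. Copy the backed-up seafile-data into place
rsync -a "$BACKUP_ROOT/data/seafile-data/" /opt/seafile/seafile-data/

# 4. Start Seafile and Seahub
cd /opt/seafile/seafile-server-latest
./seafile.sh start
./seahub.sh start
```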

Seafile is great… Enjoy.

This solution https://borgbackup.readthedocs.io/en/stable/ is amazing when it comes to backing up huge amounts of files and keeping snapshots of the data. In addition, it will automatically remove old snapshots based on the parameters you set in its config, encrypt the data before sending it to the destination, and it uses SSH to authenticate.
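
A rough sketch of what that looks like with Borg; the repository location, source paths and retention numbers are placeholders:

```bash
# Hypothetical Borg backup of the Seafile data and DB dumps to a remote repository over SSH.
# Repo location, source paths and retention policy are placeholders.
export BORG_REPO=ssh://backupuser@offsite.example.org/./seafile-borg

borg init --encryption=repokey "$BORG_REPO"        # one-time repository creation

# Create a new deduplicated, encrypted snapshot
borg create --stats "$BORG_REPO::seafile-{now:%Y-%m-%d}" \
    /opt/seafile/seafile-data /mnt/usb-backup/databases

# Thin out old snapshots automatically
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 "$BORG_REPO"
```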

Hi Guys,

Install the Seafile server on a virtual machine, schedule a daily backup, and save it externally.

In my environment, I run Seafile on Debian, virtualized on MS Hyper-V, and create a daily backup to an LTO tape.

In case of problems, you could simply restore the latest VM.

I use ReaR to back up my systems. It works very well.
http://relax-and-recover.org/

I personally use


which I like because it’s as easy as rsync and it makes hard-linked full backups (Apple Time Machine style). So I can easily restore the date I want, and the space used is only that of a differential backup. It also deletes older backups, or keeps one per month / week / year… Fully configurable.
Best choice IMHO.
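
The tool isn’t named in the post, but the hard-link approach it describes can be sketched with plain rsync (paths are placeholders): each run looks like a full backup, yet unchanged files are hard links into the previous snapshot, so they take no extra space.

```bash
# Hypothetical hard-linked snapshot backup in the style described above; paths are placeholders.
SRC=/mnt/seafile-fuse/             # or any plain-file copy of your libraries
DEST=/mnt/usb-backup/snapshots
TODAY=$(date +%F)

mkdir -p "$DEST"
rsync -a --delete \
    --link-dest="$DEST/latest" \
    "$SRC" "$DEST/$TODAY/"

# Point "latest" at the newest snapshot for the next run
ln -sfn "$DEST/$TODAY" "$DEST/latest"
```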

Here’s my 2 cents:
You may containerize your setup using LXD. It’s not a backup solution by itself, but it makes all kinds of admin tasks so much easier. I can’t recommend it enough.

LXD is an open-source container solution by Canonical that IMHO perfectly combines the advantages of VMs and containers:

  • inside the container you have a regular OS setup, like in a VM and contrary to Docker;
  • it’s totally lightweight (runs perfectly fine on Ubuntu for Raspberry Pi);
  • internal storage is not some “binary blob” image as for VMs (i.e. qcow etc.), but regular files (btrfs or another advanced filesystem is highly recommended);
  • from the host you can easily (in mere seconds!) create snapshots of your running container;
  • you can transfer or copy your whole container to another device without the hassle of exporting everything separately (database, data files, etc.); this could then be used as a backup server in order to get up and running again in minutes;
  • you can export an LXD container as a single (huge) tarball as a backup (remember: snapshots are not backups, and neither is RAID); see the command sketch after this list. Warning: currently you may hit this issue;
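
A rough sketch of those operations with the lxc client; the container name and remote are placeholders:

```bash
# Hypothetical LXD commands for the snapshot / copy / export workflow described above.
# "seafile" is the container name and "desktop" a configured remote; both are placeholders.
lxc snapshot seafile nightly-$(date +%F)          # instant snapshot of the running container

lxc copy seafile desktop:seafile-backup           # copy the whole container to another LXD host

lxc export seafile /mnt/usb-backup/seafile-container.tar.gz   # single-tarball backup
```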

Here’s my current personal strategy:

  • seafile-server in LXD on my NAS (btrfs-raid1) with scheduled snapshotting
  • seafile CLI in a separate LXD container to hold current versions of the Seafile contents as plain old files, also with snapshotting. (I don’t really like opaque/proprietary formats for my data…)
  • daily syncing of the seafcli container’s contents to an offsite backup.

As soon as this issue is taken care of, I’ll add regular tarball exports of the whole container. In the meantime I may implement a periodic lxc copy to a second LXD instance (e.g. on my desktop PC), just in case.