What amount of data are we talking about?
What connection do you have between the Seafile server and the remote backup server?
… backup history … how many days/weeks/…?
What kind of backup storage do you have (filesystem, amount of storage, …)?
Rsync full backups and/or rsnapshot are, without any fancy storage backend (ZFS, Ceph, …), the only practical solutions I can think of. If you have e.g. FreeNAS as backup storage, its deduplication/compression would help to keep the amount of data low.
Well, at the moment I live with full backups of the virtual disk on which the Seafile data resides. It means I lose up to a few months of history if the server goes down; for home use that's OK.
I already have a plan for proper daily LIVE incremental backups, but I am too lazy to implement it. I would use btrfs, since it uses WAY fewer resources than ZFS and is stable enough for most features (RAID5/6 is not stable yet, for example, but normal RAID levels and snapshots are stable).
I would use a btrfs filesystem for the Seafile server, which can at any time make a crash-consistent snapshot of the drive (including the SQL databases). Then, while the server continues operating, you can read from this snapshot with any incremental backup program. It can be rsync, another btrfs backup drive with snapshot capability, ZFS, Ceph, … For home usage I would personally use another btrfs, for simplicity.
A step up for a more serious backup would be a separate SQL dump taken about 30–60 seconds BEFORE the data snapshot (since we want every record in SQL to have a matching object on disk; we do not care if a few orphaned files stay on disk, they do no harm). So: a proper SQL backup plus a crash-consistent Seafile data backup.
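That "dump first, snapshot later" ordering could be sketched like this. The database names and paths are assumptions (adjust to whatever your setup script created), and the dump is written inside the data subvolume so the snapshot captures it too:

```shell
#!/bin/sh
# SQL dump ~60 s before a crash-consistent btrfs snapshot, so every
# record in the dump already has its matching object on disk.

sql_then_snapshot() {
    data=$1          # btrfs subvolume holding the Seafile data
    snap=$2          # snapshot path
    wait_s=${3:-60}  # delay between dump and snapshot

    # 1. Consistent dump of the Seafile databases (names are assumed).
    mysqldump --single-transaction --databases ccnet_db seafile_db seahub_db \
        > "$data/sqldump.sql" &&
    # 2. Give the server time to flush objects referenced by the dump.
    sleep "$wait_s" &&
    # 3. Snapshot the data directory, dump file included.
    btrfs subvolume snapshot -r "$data" "$snap"
}

# Example (placeholder paths):
# sql_then_snapshot /srv/seafile "/srv/.snap/seafile-$(date +%F)"
```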
Another very good option, if taking the server offline for a few seconds works for you:
Take the server offline, make a btrfs snapshot and turn the server back on. This way you do not need any separate SQL backup; and instead of merely crash-consistent data you have a fully offline backup, with server downtime of less than 30 seconds when scripted!
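A sketch of that stop/snapshot/start script, assuming a typical CE layout with `seafile.sh`/`seahub.sh` control scripts (install path and data path are placeholders); since the snapshot itself is near-instant, downtime is dominated by the stop/start:

```shell
#!/bin/sh
# Short-offline backup: stop Seafile, snapshot, start again.

offline_snapshot() {
    srv=$1    # directory containing seafile.sh / seahub.sh
    data=$2   # btrfs subvolume with Seafile data and databases
    snap=$3   # snapshot path

    "$srv/seahub.sh" stop && "$srv/seafile.sh" stop
    # With the server at rest this is a fully offline backup;
    # the snapshot completes in well under a second.
    btrfs subvolume snapshot -r "$data" "$snap"
    rc=$?
    "$srv/seafile.sh" start && "$srv/seahub.sh" start
    return $rc
}

# Example (placeholder paths):
# offline_snapshot /opt/seafile/seafile-server-latest /srv/seafile \
#     "/srv/.snap/seafile-$(date +%F)"
```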
I use the seaf-fuse filesystem and CrashPlan. That way I have version-controlled backups of all files in Seafile, and I'm not reliant on Seafile's internal formats if I ever need to rescue any files. This is all done live.
Maybe I didn’t give very detailed information. I’m just looking for a general solution, and want to know how others are handling backup strategies on Seafile CE.
I have the backup server attached to the local network via a gigabit interface.
As Seafile has history and a recycle bin, the backups are mainly for disaster recovery (not user accidents). Daily backups are enough.
As seaf-fuse works in a different way, I would indeed recover the files stored in Seafile, but I would lose all the Seafile “metadata”. Good as a last resort, but not my preferred way.
Regarding btrfs, as proposed by @Lonsarg: should I store both MySQL and the Seafile data directory on a disk with this filesystem? Also, will it require Seafile to be stopped during snapshots? Does anyone have this solution up and running successfully? It also seems to require that the backup server use a snapshot-capable filesystem, to simplify incremental backup/restore.
Thanks for more discussion on this topic!
Until I have a good “on-the-guest” backup, I will back the VM up from the host… It consumes more disk and time, but it is a secure way…
I’ve been dealing with this since CE v. 1.X and pretty much follow the sequence proposed in the original post. The servers I oversee are on Ubuntu and generally serve under 25 users. I can expect to see encrypted libraries among the users.
For what it’s worth, I have had one major event in the time I’ve been using and managing Seafile CE that involved a corrupted MySQL server after an upgrade. This was during the Seafile 2.X days. I have not had a problem with Seafile itself (fingers crossed) although I do the minor upgrades religiously shortly after they come out.
For the sake of newcomers to Seafile, I’d recommend a periodic full backup using Redo or Clonezilla, Redo being drop-dead easy to use. In your case you seem to suggest you back up your VM file, which gets you to the same place.
As for scripted solutions you might want to reference some posts on the German Seafile site too. It’s now called SyncWerk but was formerly Seafile.de (I think). I seem to recall some discussions there like these (in English):
I think if I had to build an industrial-strength system I would base storage on ZFS. I have not done this yet but I might try a ZFS-based system running a VM or Docker instance of Seafile on FreeNAS or TrueNAS (iXsystems). RAIDZ2 and Snapshots could be very useful. In addition, in case seaf-fuse offers any advantages in your system, the server could be used for file storage as well.
I have not tried backup servers like Bacula but would like to. It seems like another steep learning curve but one worth doing as an R&D project on the side. By the way, FreeNAS/TrueNAS does run Bacula as a plug-in, I believe.
If you have a powerful server, you don’t have to stop ZFS for making snapshots. I rsync the snapshots afterwards. But don’t use btrfs; it has been in beta for over 5 years now, and Red Hat gave up on making it stable. Only use it if your system isn’t powerful enough for ZFS.
Okay, I have only tried ZFS in FreeNAS and NAS4Free. FreeNAS requires high-end hardware.
But maybe I’m not too familiar with ZFS.
Is it the best solution? ZFS on Ubuntu?
//Sam
There are two options: ZFS on FUSE and ZFS on Linux. ZFS on FUSE is horrible. Actually, the performance on BSD should be much better than on Linux, but I never used FreeBSD with ZFS. Of course, if you don’t have 16 GB RAM or more, you have to limit ZFS’s memory usage. But you can use it like a normal system if you let it use swap.
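On limiting ZFS memory usage: the usual knob on ZFS on Linux is the `zfs_arc_max` module parameter, which caps the ARC (ZFS's main read cache). The 4 GiB figure below is just an example; the generated line goes into `/etc/modprobe.d/zfs.conf` and takes effect on the next module load or reboot:

```shell
# Cap the ZFS ARC on a RAM-constrained box.
# The value is in bytes; 4 GiB is an example figure.
ARC_MAX=$((4 * 1024 * 1024 * 1024))

# This is the line to put into /etc/modprobe.d/zfs.conf:
echo "options zfs zfs_arc_max=$ARC_MAX"
```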
Okay, it seems to be a good idea to let the ZFS filesystem deal with the backup history instead of building my own retention-policy-based file hierarchy.
But what if I cannot have a Linux ZFS-based backup machine? Are there any other options? I just want to be sure to investigate different paths before making a decision.