So far I’ve synced my local NAS to my Seafile server (Pro edition) via davfs, and it worked as expected.
But now davfs is causing a very high load average, so I’m thinking about something better.
I’d like to use the SeaDrive client, but only to sync existing local folder(s) to the server library, not vice versa, and only from the command line in the background, maybe via a cron job.
So I don’t know which of the two clients is the better solution, and what the right command is to sync in the right direction.
NAS OS: Debian 9 (OpenMediaVault)
Thanks for any suggestions.
I don’t understand your setup or the direction of your backup from what you’ve written.
Maybe invest a bit more energy in a clearer layout people can understand. A simple statement like this would help:
Server A (blabla)
Server B (blabla)
Goal: back up Server A to Server B.
As for backing up FROM Seafile to anything else, you can also use FUSE (it’s built into Seafile) and then rsync to any remote folder.
You can also install the Seafile CLI client on the backup target and connect with your Seafile credentials.
rsync would be time-controlled.
The CLI client would be live.
If you want to sync at night, you might want to go with FUSE + rsync. But I believe you could also start/stop the CLI client with cron.
If your backup target doesn’t have a GUI, you’re good with the CLI client.
I don’t see the Drive client as suitable for proper backups.
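A rough sketch of the FUSE + rsync idea, run from cron on the backup target. The mount point and destination path are assumptions; seaf-fuse.sh ships with the Seafile server, but check the docs for your version before relying on it:

```shell
# Sketch: nightly one-way copy of the libraries exposed via seaf-fuse.sh.
# Both paths below are placeholders; adjust them to your installation.
MNT="/mnt/seafile-fuse"     # where seaf-fuse.sh mounted the libraries
DEST="/backup/seafile"      # target folder on the backup machine

if [ -d "$MNT" ]; then
    # One-way copy; --delete keeps the target identical to the mount.
    rsync -a --delete "$MNT/" "$DEST/"
else
    # Skip quietly if the FUSE mount is not up (e.g. server stopped).
    echo "FUSE mount $MNT not available, skipping" >&2
fi
```

Since this runs on a schedule, deletions only propagate once per night, which already gives you a small window to react.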
Sorry, I didn’t think my question was so unintelligible.
Server A: local NAS with Debian 9
Server B: Seafile Pro server 7.1 (recently upgraded), Debian 10
What do I want?
Syncing (backing up) some directories from server A to server B, not vice versa.
I did this for a long time with davfs and rsync, but davfs eventually produced a very high load average, so I’m thinking about alternatives.
Okay, and have you tried using the Seafile CLI client on server A to sync folders to the Seafile server?
I’ve never heard of davfs.
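For reference, attaching an existing folder to a library with seaf-cli would look roughly like this. The server URL, account, library ID, and paths are all placeholders, and I’m assuming seaf-cli is installed on server A:

```shell
# Sketch, assuming the seaf-cli client is installed on server A.
SERVER="https://seafile.example.com"           # placeholder server URL
ACCOUNT="user@example.com"                     # placeholder account
LIB_ID="11111111-2222-3333-4444-555555555555"  # placeholder library ID (from the web UI)
LOCAL_DIR="/srv/data/folder-to-sync"           # existing folder on server A

if command -v seaf-cli >/dev/null 2>&1; then
    seaf-cli init -d /opt/seafile-client       # one-time client setup
    seaf-cli start                             # start the sync daemon
    # Attach the existing local folder to the existing library:
    seaf-cli sync -l "$LIB_ID" -s "$SERVER" -u "$ACCOUNT" -d "$LOCAL_DIR"
else
    echo "seaf-cli not installed" >&2
fi
```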
No, I haven’t. I’m unsure because the docs say:
Synchronize a library with an existing folder. The existing files in the local folder will be merged with the files in the library.
But I don’t want a local merge; the sync should only go from local server A to the remote Seafile server B.
Maybe I’m misunderstanding the docs?
I’ve run a test with test data and a test library to see how the sync works.
It’s a two-way sync, which is not what I want. Files deleted locally are deleted on the Seafile server too, so there is no protection against malware and crypto trojans.
I have to look for a solution that works better for me.
Thanks for your comments.
So, Seafile is not a one-way backup software; it’s more of a central hub for a bunch of clients and devices. Normally you would back up your Seafile server to get protection against malware or data loss, and there are many proper backup solutions for one-way backups.
I don’t want to force you into Seafile and would recommend you go for another solution like rsync to make backups, but I can see two options here.
This one is ghetto-style:
- You could set up server A to sync files via Seafile, e.g. every evening. After that, another cron job grabs the files out of the library via FUSE and places them next to the seafile-data folder. Hmm, but then you would have the data duplicated.
This one could be okay:
- What if you use the Seafile permissions on the backup library so that server A can write its backup syncs in a two-way manner, but all other clients (other user accounts) only have read permission on that library?
If you are afraid of your server A being encrypted, you could set up a cron job that only starts the Seafile client every night or every weekend. Then you would have time to detach the sync jobs and recover the “backup” from the Seafile server.
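That cron idea could look roughly like this in server A’s crontab. The times are just an example, and the exact start/stop subcommands are worth verifying against your client version:

```shell
# Run the Seafile client only in a nightly window, so the library
# is not writable from server A during the rest of the day.
0 2 * * *  seaf-cli start   # 02:00: start the client, libraries sync
0 5 * * *  seaf-cli stop    # 05:00: stop the client again
```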
I back up my machines to Seafile via libraries (two-way) and THEN back up my entire Seafile server to an external site every evening. If my server gets compromised, I have a couple of hours to notice and detach the next backup process. It’s not perfect, but I’m okay with it.
I have the data on my clients, again on the Seafile server, and again on the external site.
Historical (versioned) backups would be nicer, but they require a lot of storage.