Recommended (system) storage size?

Hi,
I have started to set up a new Seafile server, as I want the system to be on an SSD and the storage on an HDD.
The only thing I’m wondering about is: what’s the recommended size for the Seafile system on the SSD?

I have 5 users and the storage is currently 1TB, but it will increase to 2TB soon, as the 1TB is almost full.

Should I make the SSD 80GB, or should I go bigger than that? I can also go lower, as it’s a VHDX stored on an SSD, so I can choose the size myself.

Use ZFS or btrfs for the data volume. This allows snapshot-based backups.
I would also recommend putting /var/lib/mysql on a separate ZFS/btrfs volume.
You do need sufficient RAM for ZFS, though, but it speeds up storage performance if configured correctly.
You can then also use the SSD as log and cache devices for the HDD.
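As a rough sketch of what that could look like (pool, device and dataset names here are just placeholders, adjust them to your layout):

# pool on the HDD, separate datasets for the Seafile data and MySQL
zpool create tank /dev/sdb
zfs create tank/seafile-data
zfs create -o mountpoint=/var/lib/mysql tank/mysql

# optionally add SSD partitions as log (ZIL) and cache (L2ARC) devices
zpool add tank log /dev/sda4
zpool add tank cache /dev/sda5

# snapshot-based backup of the data dataset
zfs snapshot tank/seafile-data@nightly-2024-01-01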

Thanks for the reply, but I only have one VHDX for the storage and one for the system.
So my question is: how big should the system disk be? 80GB should be good, I guess?

You are running Linux in it, so you can still use ZFS as the file system. :wink:

Hi,

don’t take it as a recommendation, but here are the partition sizes of our Seafile cluster:

System (on each node): 16 GByte, Seafile uses about 2 GByte
Data (MariaDB; on each node): 40 GByte, MariaDB uses about 2 GByte
Shared (tmp files, seahub data): 500 GByte, about 200 GByte used
Storage: 25 TByte

Best regards

Thomas

Same here. Debian 9 and EXT4 FS on all disks.

I have a lot of services installed, such as web services, proxies, VPN, database, Seafile, etc.
You can see below that about 1.5TB is used by Seafile data (only the seafile-data folder), while the rest of the Seafile server is on the SSD, same as the database data etc.
As you can see, I have used 34GB of the 196GB SSD. Today I’m a bit sad that I didn’t just use an 80GB SSD and put a second 80GB SSD into a RAID. But I think disk snapshots are better than RAID on an SSD; for me RAID would only give a blind feeling of having a safe backup.


Disk                       Size   Use  Free Use% Mount point
/dev/mapper/sdb1_crypt     196G   34G  143G  20% /           # SSD
/dev/mapper/sdc1_crypt     2,7T  1,5T  958G  61% /data       # 6x 1TB WD Red - RAID

I have created an 80GB VHDX for Seafile (system, not data) and I’ll go with that; I guess that should be good.
If not, I can just extend it in the future. Thanks for the help!

I’m running a 3TB volume with ext4 on thin LVM. I take automated hourly LVM snapshots of the volume for quick disaster recovery.
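A minimal sketch of what such an hourly snapshot could look like (the volume group and LV names are placeholders, not my actual setup):

# thin snapshots don't need a preallocated size
lvcreate --snapshot --name seafile-$(date +%Y%m%d%H) vg0/seafile

# example root crontab entry, runs at the top of every hour
0 * * * * /usr/local/sbin/seafile-snapshot.sh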

Do you sync them away?

Yes. Once a day a separate script creates an SQL backup and uploads the SQL backup and the Seafile data folder to Backblaze B2.
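Roughly like this, as a sketch (the bucket name, paths, database names and the use of rclone are assumptions, not the exact script):

#!/bin/sh
# dump the Seafile databases first, then push the dump and the data folder to B2
mysqldump -u seafile -p"$DB_PASS" --databases ccnet_db seafile_db seahub_db \
    > /backup/seafile-$(date +%F).sql

rclone sync /backup           b2:my-bucket/sql
rclone sync /opt/seafile-data b2:my-bucket/seafile-data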

That is good. But do you sync the snapshots away?

No, snapshots are designed for local (software) failure recovery only.

That is not entirely true. You can sync snapshots to other storage systems and have them available there as well, either to launch the service or to restore files from there. This is also the best way to make backups of running systems without any service interruption. My remote machine also syncs the whole storage to a third storage system outside the network.
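With ZFS, for example, shipping a snapshot to another machine is a one-liner (hostnames and dataset names are placeholders):

# initial full send; later runs can use -i <older-snapshot> for incremental sends
zfs send tank/seafile-data@nightly-2024-01-01 | ssh backuphost zfs receive backup/seafile-data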

With Seafile the advantage of creating a snapshot is relatively low. If you create the SQL backup first and don’t run the garbage collector while backing up the data, the backup is always consistent (of course advantages like sending snapshots are there as well).
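The order that gives a consistent backup looks roughly like this (paths are assumptions; the seaf-gc.sh script typically sits in the seafile-server directory of the installation):

# 1. dump the databases first (e.g. with mysqldump, as in the backup script above)
# 2. then copy the library data; do NOT run seaf-gc.sh while this is running
rsync -a /opt/seafile-data/ /backup/seafile-data/
# 3. garbage collection only after the copy has finished
#    (depending on the Seafile version this may require stopping the server first)
/opt/seafile/seafile-server-latest/seaf-gc.sh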

I didn’t say you can’t sync snapshots off. Of course you can back them up off-site.

I said for my setup I designed them for use locally. I do off-site backups of the data using a different method.

Well I’m running Seafile like this now:
The Seafile system (Ubuntu OS, MariaDB, etc.) is on a Samsung 850 Pro.
All of the Seafile data is stored on a WD Red 3TB.

I’m doing a backup of the whole VM every night with Veeam to another hard drive.
And once a week there’s an offsite backup :slight_smile:

It’s a fast setup, so for me it works perfectly :slight_smile: