I have a 4 TB drive (USB) attached to my MacBook and want to sync the data to Seafile.
It starts to index and sync - but eventually aborts because it eats up all space on my system disk while indexing that ~3 TB that sit on the external disk.
If I try to put everything on the external disk, Seafile refuses to start and crashes.
How can I fix this? Is there any way to limit the index size?
The issue you are experiencing is likely caused by the Seafile client’s internal metadata and block database management. Even when the source data is on an external drive, Seafile maintains its “seafile-data” (the internal database, block cache, and indexing logs) on your system drive by default (usually in ~/.seafile-data).
When indexing 3 TB of data, the client splits files into small blocks (roughly 1 MB each, so on the order of millions of blocks) and caches them locally before upload. If that cache is not cleaned up after each batch, or the metadata database keeps growing, your system disk fills up, the client can no longer write new entries, and the sync aborts.
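As a rough sanity check on the scale involved (the ~1 MB average block size here is an assumption about the client's chunking, not a documented figure):

```shell
# Back-of-the-envelope: how many ~1 MB blocks does 3 TB of data produce?
DATA_BYTES=$((3 * 1024 * 1024 * 1024 * 1024))   # 3 TiB of source data
BLOCK_BYTES=$((1024 * 1024))                    # assumed ~1 MiB per block
echo "$((DATA_BYTES / BLOCK_BYTES)) blocks"     # prints: 3145728 blocks
```

So the block count is in the millions, and the real disk pressure comes from the cached block data itself rather than per-block metadata.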
To fix this, you can try the following:
- Relocate the seafile-data folder: Instead of manually moving files, use the Seafile client’s settings to move the “Seafile-data” folder to your external drive.
- Important: Ensure your external drive is formatted as APFS or Mac OS Extended (Journaled). Seafile often crashes or fails to start if the data directory is placed on an ExFAT or FAT32 drive because these filesystems do not support the required file locking and symlink features.
- Sync in smaller batches: Instead of syncing the entire 4 TB library at once, try syncing one sub-folder at a time. This allows the client to finalize the indexing and “clean up” temporary block data after each successful upload, preventing the local cache from ballooning all at once.
- Check for disk sleep: Ensure your MacBook is set to not put the external hard disk to sleep during the indexing process, as a sudden disconnection of the drive where the index is stored will cause the client to crash instantly.
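To verify the filesystem and disk-sleep points above from the Terminal, something like this should work on macOS (`/Volumes/MyUSB` is a placeholder mount point — substitute your drive's name):

```shell
# Check the external drive's filesystem and keep it awake (macOS).
VOLUME="/Volumes/MyUSB"   # placeholder -- replace with your drive's mount point
if command -v diskutil >/dev/null 2>&1; then
  # APFS or "Mac OS Extended (Journaled)" is fine; ExFAT/FAT32 is not.
  diskutil info "$VOLUME" | grep "File System Personality"
  # To stop macOS from spinning down external disks mid-index, you can run:
  #   sudo pmset -a disksleep 0
else
  echo "diskutil not available (not macOS?)"
fi
```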
I have managed to find a workaround: I installed the Seafile client on another USB drive with ~500 GB of free space. This way the index is created there instead of eating up my main drive and then failing anyway.
I wish it would create the indices where the stuff actually lives, but at least it’s working now.
If a large number of files are copied into the synced folder or virtual drive, the client indexes and uploads them in batches (100 MB per batch). The batches are cached in an internal cache folder, and the cache space is cleaned up after each batch is uploaded, so in most cases it won’t use much cache space. But a single very large file has to be indexed in one batch, which uses cache space equal to the file size. Perhaps you have some very large files on your 4 TB drive? If not, the cache space may not have been cleaned up successfully after some batches; in that case, check seafile.log for error messages.
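A quick way to scan the client log for failed cleanups; the path below is a guess (log locations vary by platform and client version), so substitute whatever logs folder your client actually uses:

```shell
# Scan seafile.log for errors around batch upload/cleanup.
# The path is an assumed default -- adjust it to your installation.
LOG="$HOME/Library/Application Support/Seafile/logs/seafile.log"
if [ -f "$LOG" ]; then
  # crude heuristic: show the last 20 lines mentioning errors or failures
  grep -iE "error|fail" "$LOG" | tail -n 20
else
  echo "seafile.log not found at $LOG"
fi
```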