Client "storage/blocks" big size

hi,
I am using the Seafile client 5.0.7 on Mac OS to sync my 26 TB RAID system.
It works perfectly, but the local storage is now 176 GB, which is very big for my root file system :sweat_smile:

Is it possible to purge the folder or to move it to another disk?
Thanks for your tips.

Have you seen this link?

The Seafile manual site describes a garbage collection script, seaf-gc.

The tutorial and the garbage collection script are both for the Seafile server, but the OP is asking about the client!

I do not know about macOS, but AFAIK the "storage/blocks" folders are created/accessed by the server process to store the files (and their previous versions too).

I have copied the directory to another disk and modified seafile.ini to point to the new folder.
Two hours later, it seems OK.
Thanks.
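
For anyone who wants to script the move, here is a rough Python sketch. The paths are examples for my setup, and I'm assuming seafile.ini is the small text file that holds the path of the .seafile-data directory (its location differs per platform), so adjust before running:

```python
import shutil
from pathlib import Path

# Example paths only -- adjust for your machine.
old_data = Path("/Users/me/Seafile/.seafile-data")    # current data directory
new_data = Path("/Volumes/BigDisk/seafile-data")      # target on the other disk
seafile_ini = Path.home() / "Library/Application Support/Seafile/seafile.ini"  # assumed location

# 1. Copy the whole data directory to the new disk.
shutil.copytree(old_data, new_data)

# 2. Point seafile.ini at the new location (assumption: the file
#    contains just the path of the data directory).
seafile_ini.write_text(str(new_data) + "\n")

# 3. Only after restarting the client and checking that sync still
#    works, remove the old copy:
# shutil.rmtree(old_data)
```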

Usually the blocks folder on the client won't grow to such a large size, or at least the blocks won't stay there forever. They are removed immediately after a successful upload. So I suspect a library is failing to upload to the server? Can you check the seafile.log in your client?
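
If the log is too big to read by hand, a quick scan like this can surface the most frequent error lines (the keywords are guesses, not the client's exact log format, so tune them to what you actually see):

```python
from collections import Counter
from pathlib import Path

log = Path("seafile.log")  # adjust to your client's log location
hits = Counter()

with log.open(errors="replace") as fh:
    for line in fh:
        low = line.lower()
        # "error"/"fail" are assumed keywords, not Seafile's exact wording.
        if "error" in low or "fail" in low:
            hits[line.strip()[:120]] += 1

# Most frequent error lines first.
for msg, count in hits.most_common(20):
    print(f"{count:6d}  {msg}")
```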

seafile.log is a huge file: 76 MB with 672197 lines.
https://we.tl/ZVs5C7zwVF

What's the size of the largest files in the data on your 26 TB RAID system?

There is a /Volumes/Raid6_1/CACHE_SEAFILE/Seafile/.seafile-data/storage/blocks/1c9d6f2d-0a80-4ab2-9950-bef8b5927a52/.DS_Store in your blocks directory, and the client somehow fails to open it. It stops cleaning up the whole blocks folder after that failure. You can remove this .DS_Store manually.
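
If Finder has dropped more than one of these, a small sketch like this clears every .DS_Store under the blocks directory (path copied from your log):

```python
from pathlib import Path

blocks = Path("/Volumes/Raid6_1/CACHE_SEAFILE/Seafile/.seafile-data/storage/blocks")

# Remove every .DS_Store that Finder may have created in the cache tree.
for ds in blocks.rglob(".DS_Store"):
    print("removing", ds)
    ds.unlink()
```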

It seems that .DS_Store is added by Mac OS X automatically. Can I remove /Volumes/Raid6_1/CACHE_SEAFILE/Seafile/.seafile-data/storage/blocks/1c9d6f2d-0a80-4ab2-9950-bef8b5927a52/ ?

Removed it, and there are no more error messages in seafile.log, but now 184 GB on disk :frowning:

You can remove it manually if you’re sure everything is on the server already. The blocks folder is just a cache.
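
If you want to see which library's cache is taking the space before you purge anything, a rough tally like this works. It assumes the first directory level under storage/blocks is the library ID, as in the path above:

```python
from pathlib import Path

blocks = Path("/Volumes/Raid6_1/CACHE_SEAFILE/Seafile/.seafile-data/storage/blocks")

# Sum the block files of each library (one top-level folder per library ID).
for lib in sorted(blocks.iterdir()):
    if not lib.is_dir():
        continue
    size = sum(f.stat().st_size for f in lib.rglob("*") if f.is_file())
    print(f"{lib.name}  {size / 1e9:8.2f} GB")
```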

Hi there,
does this mean that if I have a 160 GB disk and a 60 GB DB which I dump into /backups as a backup, and I sync the backup folder with seaf-cli, the blocks folder needs another 60 GB of available space on the disk to cache the upload? I.e., this will fail because the hard drive is full?

Yes, this is needed if you have a single large file: the client first writes the file's blocks into the cache and only removes them after the upload succeeds.
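
So before starting the sync you could check that the disk holding the blocks folder has room for the temporary cache. A minimal sketch, assuming the cache needs roughly the file's own size (the dump path and mount point are placeholders):

```python
import shutil
from pathlib import Path

dump = Path("/backups/db_dump.sql")  # hypothetical name for the 60 GB dump
cache_disk = "/"                     # mount point of the disk holding storage/blocks

free = shutil.disk_usage(cache_disk).free
needed = dump.stat().st_size         # rough estimate: block cache ~= file size

if free < needed:
    print(f"Not enough space: need ~{needed / 1e9:.1f} GB, only {free / 1e9:.1f} GB free")
else:
    print("Enough room for the block cache")
```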

This is bad news for me. Is there a workaround to disable the cache, or should I use rsync to write the files to another server and have Seafile sync them from there?