hi,
I am using Seafile client 5.0.7 on Mac OS to sync my 26 TB RAID system.
It works perfectly, but the local cache has now grown to 176 GB, which is very big for my root file system.
Is it possible to purge this folder, or to move it to another disk?
Thanks for your tips.
I do not know about MacOS, but AFAIK the “storage/blocks” folders are created/accessed by the server process to store the files (and their previous versions too).
Usually the blocks folder on the client won’t grow to such a large size, or at least the blocks won’t stay there forever: they are removed immediately after a successful upload. So I suspect a library is failing to upload to the server. Can you check seafile.log in your client?
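For reference, a quick way to scan the log for upload failures. This is a minimal sketch: the log entries below are made up for the demo, and the real seafile.log lives in your client's data folder (its exact location varies by platform and client version), so point `LOG` there instead:

```shell
# Demo: grep seafile.log for failure keywords.
# LOG here is a throwaway file with invented entries; in practice,
# set it to the path of your client's actual seafile.log.
LOG="$(mktemp)"
cat > "$LOG" <<'EOF'
[05/10 10:00:01] Upload finished for library Docs.
[05/10 10:00:02] failed to open block folder: Permission denied
EOF

# Case-insensitive match on the usual failure keywords
grep -iE 'fail|error' "$LOG"
# prints the "failed to open block folder" line
```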
There is a /Volumes/Raid6_1/CACHE_SEAFILE/Seafile/.seafile-data/storage/blocks/1c9d6f2d-0a80-4ab2-9950-bef8b5927a52/.DS_Store in your blocks directory. The client somehow fails to open this entry, and after that failure it stops cleaning up the whole blocks folder. You can remove this .DS_Store manually.
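If Finder keeps recreating these files, you can sweep them all out of the blocks tree in one go. A minimal sketch; the demo below runs against a temporary directory so it is safe to try anywhere, and in practice you would set `BLOCKS` to the real blocks path instead:

```shell
# Demo in a temp dir; in practice set BLOCKS to the client's blocks
# directory, e.g.
# /Volumes/Raid6_1/CACHE_SEAFILE/Seafile/.seafile-data/storage/blocks
BLOCKS="$(mktemp -d)"
mkdir -p "$BLOCKS/1c9d6f2d-0a80-4ab2-9950-bef8b5927a52"
touch "$BLOCKS/1c9d6f2d-0a80-4ab2-9950-bef8b5927a52/.DS_Store"

# Delete every .DS_Store anywhere under the blocks tree
find "$BLOCKS" -name '.DS_Store' -type f -delete

# Count what is left; 0 means the cleanup can proceed
find "$BLOCKS" -name '.DS_Store' | wc -l
```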
It seems that .DS_Store files are added automatically by Mac OS X. Can I remove /Volumes/Raid6_1/CACHE_SEAFILE/Seafile/.seafile-data/storage/blocks/1c9d6f2d-0a80-4ab2-9950-bef8b5927a52/ ?
Hi there,
Does this mean that if I have a 160 GB disk and a 60 GB DB which I dump into /backups as a backup, and I sync the backup folder with seaf-cli, the blocks folder needs another 60 GB of available space on the disk to cache the upload? I.e., the sync will fail because the hard drive is full?
This is bad news for me. Is there a workaround to disable the cache, or should I use rsync to write the files to another server and let Seafile sync them from there?