Seafile Pro 6.0.8
We use an NFS file server as the storage backend.
I just discovered an error in our Seafile storage: a block in a user repo has too many hard links. The file server reports:
Unable to link /ifs/data/rz/seafile/data/blocks/b0/8a01c8d32c45119e40c0b99a219bc74b9f39fa in directory /ifs/data/rz/seafile/data/storage/blocks/259f8a0e-964f-4b98-a4d9-e81a4d6d08c7/b0, Local error : Unable to link lin 105dbcbe5: Too many links: Too many links
Looking at the file, I see that it already has 1000 (!) hard links:
# ls -li 664bf7a2-5b84-4c11-bc30-01cbe0b15f90/b0
4316408274 -rw------- 1000 112 112 229376 Jan 13 2014 8a01c8d32c45119e40c0b99a219bc74b9f39fa
How is this possible, and what can I do to resolve the error? I ran seaf-fsck on the repo, but it didn't find any errors. The repo is empty; it only contains the seafile.doc.
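For anyone who wants to check their own block store: the link count that `ls -li` shows (the "1000" above) is the inode's `st_nlink` field, which you can also read programmatically. A minimal sketch (the path below is just an illustration, not a real Seafile layout):

```python
import os

def link_count(path: str) -> int:
    """Return the number of hard links pointing at this file's inode,
    i.e. the same number `ls -li` prints after the permission bits."""
    return os.stat(path).st_nlink

# Hypothetical usage:
# link_count("/ifs/data/rz/seafile/data/blocks/b0/<block-id>")
```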
When a file is copied from one library to another with the file system backend (the default), the file is not actually copied but hard linked. This saves physical storage. I think there should be a way to increase the number of links per file?
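To illustrate the mechanism (this is only a sketch of the idea, not Seafile's actual code): "copying" a block into another library's store just creates another hard link to the same inode, so the link count grows by one per library while the data exists on disk only once.

```python
import os
import tempfile

# One block "shared" by several libraries via hard links.
store = tempfile.mkdtemp()
block = os.path.join(store, "block")        # hypothetical block file
with open(block, "wb") as f:
    f.write(b"block data")

for repo in range(5):                       # five "libraries" reuse the block
    os.link(block, os.path.join(store, f"repo{repo}-block"))

print(os.stat(block).st_nlink)              # 6: the original name + five links
```

Once that count reaches the filesystem's per-inode limit, the next `link()` call fails with "Too many links" (EMLINK), which is exactly the error above.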
I just found out that it's actually the Seafile doc, which seems to be hard linked into every repo:
# ls -l
-rw-r--r--. 1 root root 229376 1. Jan 1970 Seafile使用指南.doc
It has exactly the same size as the file above.
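Same size is suggestive but not proof; the conclusive check is whether both names point to the same device and inode (the inode is the first column of `ls -li`). A small sketch, with hypothetical paths:

```python
import os

def same_inode(a: str, b: str) -> bool:
    """True when both paths are hard links to the same underlying file,
    i.e. they share the same device and inode number."""
    sa, sb = os.stat(a), os.stat(b)
    return (sa.st_dev, sa.st_ino) == (sb.st_dev, sb.st_ino)

# Hypothetical usage:
# same_inode("/path/to/blocks/b0/<block-id>", "/path/to/Seafile-doc")
```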
It's an Isilon file server running OneFS, and I don't see an option to increase the maximum number of links. Can we prevent the Seafile.doc from being linked into every new repo?
OK, I found out how to increase the number of hard links.
But I have never needed this before; more than 1000 hard links per file is very unusual. Maybe you should add a hint to the server manual (if I didn't overlook it), or stop linking the Seafile.doc into every repo.
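In the meantime, admins could proactively scan the block store for inodes approaching the filesystem's link limit before `link()` starts failing. This is an assumed workaround, not an official Seafile tool; the threshold and paths are up to you:

```python
import os

def find_high_link_blocks(root: str, threshold: int = 900):
    """Walk `root` and yield (path, nlink) for regular files whose
    hard-link count is at or above `threshold`. Each inode is
    reported once, under the first name encountered."""
    seen = set()
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            if st.st_ino in seen:
                continue
            seen.add(st.st_ino)
            if st.st_nlink >= threshold:
                yield path, st.st_nlink

# Hypothetical usage:
# for path, n in find_high_link_blocks("/ifs/data/rz/seafile/data/blocks"):
#     print(n, path)
```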
Is this the case for other storage backends too, like S3, Swift, or Ceph?
Yep, maybe add an option to really copy the data instead of creating hard links.
No, S3 and Ceph use real copies.
And what about deduplication on these backends?
Deduplication only works within a library. Files in different libraries will not be deduped.