Need feedback / best practices: limits of local storage

I can’t find this information anywhere. What are Seafile’s limits in terms of storage volume, and at what point should load balancing be considered?

  • How does the Seafile server behave with 60 TB of data? Is there a critical threshold?
  • Is there a size limit for a single library? Is 8 TB in a single library feasible, or should it be split up?
  • Is there a limit on the number of small files, and therefore on the number of entries in the database? I found a forum post saying a library cannot exceed 100k files. Can a user have several libraries to get around this limit?

When I talk about limits, I really mean recommendations for when to consider evolving the Seafile infrastructure (one vs. two servers, internal vs. external database, load balancing, etc.).

Thank you for your feedback.
Pierre L.

a.) Depends a lot on the type of data and the storage backend used. If you mostly store large static files, Seafile works great! If your data consists of many small files that change often, Seafile may run into problems.
b.) Again, that depends a lot on the file types. Generally speaking, Seafile’s data model handles large files better than tiny ones.
c.) The 100k file limit applies to a single library. A user can have as many libraries as needed, each containing multiple tens of thousands of files.
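To illustrate the point above: instead of one huge library, split the file set across several. A tiny hypothetical helper (not part of Seafile, just arithmetic on the ~100k-files-per-library recommendation from this thread):

```python
import math

def libraries_needed(total_files: int, per_library: int = 100_000) -> int:
    """Hypothetical helper: how many libraries to split a file set
    across, given the ~100k-files-per-library recommendation."""
    return math.ceil(total_files / per_library)

# e.g. a 350,000-file corpus would be split across 4 libraries
print(libraries_needed(350_000))  # → 4
```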

OK, thank you for your feedback.

The backend as seen by Seafile is local storage, which is actually a GlusterFS mount (currently 2 TB, but the target is 40 TB, and more in the more distant future).
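For reference, a setup like this typically mounts the Gluster volume so that Seafile sees it as a local directory. A sketch of an /etc/fstab entry (hostname, volume name, and mount point are illustrative placeholders, not from my actual setup):

```
# /etc/fstab — mount a GlusterFS volume where seafile-data lives
# (gluster1, gv0 and /opt/seafile-data are example values)
gluster1:/gv0  /opt/seafile-data  glusterfs  defaults,_netdev  0 0
```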

The file sizes are hard to pin down, but they are typical user files, so no, mostly not very big files.

I am not familiar with the specific properties of GlusterFS. I hope others can chime in.

If the files you want to store are mostly office, PDF, or jpg/png files, then I don’t see any particular reason why you should suffer a hefty performance penalty. That said, it is probably not the highest-performing setup.

Have a look at this thread: “GlusterFS as local storage target for Seafile-data”. It may be interesting for you.

Most probably everything will be limited by I/O. What kind of storage and disk setup is planned? CPU usage will only go up with extensive use of Seahub; using the clients requires very little CPU.

I think a library can easily exceed 100k files. See also: “Seafile community restrictions?”

The 100k-files figure was a recommendation. Processing many files in a single library may require some RAM. I don’t have numbers, though, and I haven’t run into issues in the past.

Regarding library size: 8 TiB will be possible, but I would recommend using one library per purpose (so do not put everything into a single folder, as one might do with other solutions).

Thank you very much for all your answers. I am reassured.

I have one addition regarding I/O: you can expect a random I/O workload, so don’t trust any sequential read/write tests.
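To get a realistic picture before committing to the 40 TB setup, benchmark with a random-I/O pattern rather than sequential streams. A minimal fio job file sketch (assumes fio is installed; the path, sizes, and read/write mix are illustrative assumptions, not measured Seafile behavior):

```ini
; Rough random-I/O test against the Seafile data directory
; (adjust directory= to your actual GlusterFS mount point)
[seafile-random-io]
directory=/opt/seafile-data
rw=randrw          ; mixed random reads and writes
rwmixread=70       ; assumed 70% reads / 30% writes
bs=4k              ; small blocks, closer to a many-small-files workload
size=1g
iodepth=16
ioengine=libaio
runtime=60
time_based=1
```

Run it with `fio jobfile.fio` and compare the random IOPS figures, not the sequential MB/s numbers, when sizing the disks.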