Ceph integration

What is the current state of the Ceph integration?

Does Seafile support native integration of RBD storage, and can it be guaranteed that the Ceph connector will keep working when a new stable Ceph release comes out? I am asking because Seafile's past has shown that some things were only updated with a considerable delay, leaving some Pro customers more or less stranded.

Does anybody use Seafile with CephFS instead of native RBD?

Why is Ceph only supported for the Pro Edition?
This is a storage backend and doesn’t necessarily require a Seafile cluster.

@daniel.pan It would be great if Ceph, S3 and Swift were supported in the CE as well, allowing us to use safer and larger storage backends.
As said above, this doesn't mean that a Seafile cluster is required.


They should go open source with a license that businesses have to pay for. Adoption would increase immediately after doing that, since they would then be a real alternative to Nextcloud and a trustworthy option for governments.


Also, the ridiculously high pricing should be reconsidered. Seafile has no chance against others because of its pricing, which in the eyes of customers and decision makers buys less value and stability than other solutions provide.
Always remember that your product needs to sell to non-technical personnel more than to those who know why they prefer Seafile.


If you sell at a lower price, you get more customers; if the price is higher, you earn more per sale but lose customers or scare off interested people. You have to find the intersection of both curves to maximize revenue. Seafile is far too expensive to hit this intersection, you're right.


Back to topic:

So far it's working. Currently we observe problems with the library calculation queue when there are many libraries on one system; I hope it's going to be fixed soon.

You should use the Ubuntu version of Seafile Pro (as there aren't any Ceph libs included) and install the Ceph Python packages from the official Ceph repos. This ensures that the Ceph libraries used by Seafile match your cluster version (see: https://manual.seafile.com/deploy_pro/setup_with_ceph.html).
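On Ubuntu that boils down to adding the upstream Ceph repository for the release your cluster runs and installing the Python bindings from it. A rough sketch (the release name "nautilus", the package name, and the paths are examples; substitute whatever matches your cluster and distribution):

```shell
# Add the official Ceph repository for the release your cluster runs
# ("nautilus" is only an example release name).
wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
echo "deb https://download.ceph.com/debian-nautilus/ $(lsb_release -sc) main" \
  | sudo tee /etc/apt/sources.list.d/ceph.list

# Install the Python RADOS bindings that Seafile loads at runtime.
sudo apt-get update
sudo apt-get install -y python-rados
```

Because the packages come from the same repository as your cluster, an upgrade of the cluster can be paired with an `apt-get upgrade` on the Seafile host to keep client and cluster library versions in sync.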

No, but I don't see any advantage, since it adds another layer. For CephFS you would need a metadata server, and I think the performance would not be as good as a direct integration, because Seafile uses a lot of small files, which is not ideal for distributed file systems.

I would like to see this too! It would allow more users to use features which are currently only used by a small number of bigger Seafile customers, and would therefore help gain more experience with those integrations.

From the business aspect I can understand that this is a Pro-only feature.

And this is a potential problem: past problems have shown that Seafile wasn't always ready to work with the latest stable version, sometimes not even weeks after a release.

I don't understand what they gain from limiting this to the Pro version. Features like multiple LDAP server connections or cluster setups can be limited, but not storage backends and basic file search.

Seafile will yet again lose against Nextcloud with the customers that I currently work with. Two of them have major projects which will start with terabytes and scale up to petabytes. It's just sad that the Seafile devs don't want to take Seafile further and still act stubborn. :frowning:

Where is the issue with this for a storage system? A major Ceph upgrade doesn't happen by accident, and obviously one should test compatibility first. When buying Java software there is also no guarantee that it works with all JRE releases (most of the time it is only guaranteed to work with one specific version, which is often already outdated).

That should be solved, because you can now install the correct version of the Ceph libraries directly on your Linux system and Seafile will use them. So they've improved it, but yes, this was a problem some time ago.

I agree with @shoeper. It's not something you do every day.


Since I am looking more into this: currently, installing patches and upgrading to a new Ceph version works on the fly with e.g. croit.io.
It is important to understand the Seafile devs' update policy for the Ceph code. @daniel.pan In what timeframe is the Ceph storage backend code updated and tested against a new Ceph version and its library files?
Thanks in advance.

Since Seafile supports multiple storage backends, I am wondering: how can multiple Ceph backends be integrated?


We want to integrate multiple storage backends, some running Ceph.

Note: Currently file system, S3 and Swift backends are supported. Ceph/RADOS is not supported yet.

@daniel.pan Can you please elaborate on when we can expect multiple Ceph backends to be supported? This is quite important.

What command set does Seafile use to work with any S3 storage backend? Does it work fine and fast with any S3 backend?

Using the S3 interface of Ceph is the recommended way. The performance should be okay. It also avoids compatibility issues with the Ceph library.
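For reference, pointing Seafile Pro at a Ceph RADOS Gateway via its S3 API is done in seafile.conf. The sketch below follows the section layout documented in the Seafile manual; the host, buckets, and credentials are placeholders for your own setup:

```
[commit_object_backend]
name = s3
bucket = seafile-commits
key_id = <access-key-id>
key = <secret-access-key>
host = rgw.example.com:8080
path_style_request = true
use_https = false

[fs_object_backend]
name = s3
bucket = seafile-fs
key_id = <access-key-id>
key = <secret-access-key>
host = rgw.example.com:8080
path_style_request = true
use_https = false

[block_backend]
name = s3
bucket = seafile-blocks
key_id = <access-key-id>
key = <secret-access-key>
host = rgw.example.com:8080
path_style_request = true
use_https = false
```

With this layout Seafile only speaks S3 (PUT/GET/HEAD/DELETE on objects), so no librados version on the Seafile host has to match the cluster.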

Thanks for the fast reply.

What does this mean exactly? Does it scale up to multiple petabytes of storage split across multiple S3 volumes? Do you have any statistics or test results? Surely you must have tested this somehow?

Thanks in advance!

@daniel.pan Do you have a whitepaper on this? How does one migrate existing data from a single local storage to multiple S3 storage backends?

You can first migrate data from local storage to a single S3 storage via the migrate script: https://manual.seafile.com/deploy_pro/migrate.html

Then you can add multiple backends and define a storage policy. The original S3 storage should be defined as default.
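As a sketch of what the second step looks like: multiple backends are declared in a JSON storage-classes file referenced from seafile.conf. The field names below follow the Seafile manual's multiple-storage-backend example, but the storage IDs, hosts, buckets, and credentials are all placeholders, with the original S3 storage marked as the default class:

```
[
  {
    "storage_id": "s3_primary",
    "name": "Primary S3 (default)",
    "is_default": true,
    "commits": { "backend": "s3", "bucket": "seafile-commits",
                 "key_id": "<key-id>", "key": "<secret>",
                 "host": "rgw1.example.com:8080", "path_style_request": true },
    "fs":      { "backend": "s3", "bucket": "seafile-fs",
                 "key_id": "<key-id>", "key": "<secret>",
                 "host": "rgw1.example.com:8080", "path_style_request": true },
    "blocks":  { "backend": "s3", "bucket": "seafile-blocks",
                 "key_id": "<key-id>", "key": "<secret>",
                 "host": "rgw1.example.com:8080", "path_style_request": true }
  },
  {
    "storage_id": "s3_secondary",
    "name": "Secondary S3",
    "is_default": false,
    "commits": { "backend": "s3", "bucket": "seafile2-commits",
                 "key_id": "<key-id>", "key": "<secret>",
                 "host": "rgw2.example.com:8080", "path_style_request": true },
    "fs":      { "backend": "s3", "bucket": "seafile2-fs",
                 "key_id": "<key-id>", "key": "<secret>",
                 "host": "rgw2.example.com:8080", "path_style_request": true },
    "blocks":  { "backend": "s3", "bucket": "seafile2-blocks",
                 "key_id": "<key-id>", "key": "<secret>",
                 "host": "rgw2.example.com:8080", "path_style_request": true }
  }
]
```

The feature is then switched on in seafile.conf (path is an example):

```
[storage]
enable_storage_classes = true
storage_classes_file = /opt/seafile/seafile_storage_classes.json
```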

Why do you want to use multiple S3 storage backends if a single one can scale up to petabytes and even more?