Seafile Real Time Backup Question


I was wondering: what kinds of checks does the real-time backup do to ensure data consistency between the main server and the backup server? I have a production Seafile server in Kubernetes that I need to move to a new Kubernetes setup, and real-time backup looked like the simplest way to move the data. I tried it on a staging deployment with only 3 libraries, which were successfully replicated, but when I attempted to verify the result by running a filesystem-wide checksum comparison, I noticed that the data stored under the seafile-data directory was not identical on the two servers. Our production Seafile is used company-wide and hosts about 7 TB of documents, videos, OS ISOs, and other types of files, so it is not feasible for us to manually verify that each file made the trip to the new server.

The timestamps of the objects replicated to the backup server may not be the same as the originals. And depending on settings such as library history limits, some history objects won't be replicated either.

That would indeed explain why my file checksum verification method wouldn't work, but it doesn't really answer my main question, which is the following:
How does the real-time backup verify that the files on the main and backup servers are the same, or are properly replicated? Does it do this through a checksum check?

The real-time backup uses the same syncing algorithm and protocol as the sync client. So it’s based on commit history of the libraries. The complete history is just synced to the backup server.
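Since replication is driven by commit history rather than on-disk layout, a more meaningful consistency check than filesystem checksums would be comparing each library's head commit ID on both servers: if the backup holds the same head commit, it has the same library state. Below is a minimal, hypothetical sketch of that comparison step. It assumes you can obtain a repo-ID-to-head-commit mapping from each server yourself (for example via the web API or server-side tooling; that fetch step is not shown and the commit IDs here are made up):

```python
def find_unsynced(primary_heads, backup_heads):
    """Compare {repo_id: head_commit_id} mappings from two servers.

    Returns the repo IDs that are missing on the backup or whose
    head commit differs, i.e. libraries where replication has not
    yet caught up.
    """
    mismatched = []
    for repo_id, head in primary_heads.items():
        if backup_heads.get(repo_id) != head:
            mismatched.append(repo_id)
    return mismatched

# Example with made-up library and commit IDs:
primary = {"lib-a": "c1", "lib-b": "c2", "lib-c": "c3"}
backup = {"lib-a": "c1", "lib-b": "old"}
print(find_unsynced(primary, backup))  # → ['lib-b', 'lib-c']
```

Note that a transient mismatch is expected while replication is still in progress; only a mismatch that persists after the backup has caught up would indicate a problem.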