Hello, I’m using Seafile Community Edition 11.0.2, deployed via Docker to a Kubernetes cluster.
The server has a Xeon E3-1230 v6 CPU and 64GB of memory; the repository backend is a 6x6TB RAIDZ2 array, with the database hosted on an SSD.
I am attempting to migrate a large quantity of data (approximately 350k files totaling 1.5TB) into three libraries. I am using rclone, as the source server is an appliance and I cannot install the native Seafile client on it.
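For reference, I set up rclone's seafile backend and am running roughly the following (the URL, credentials, library name, and paths below are placeholders for my actual setup):

```shell
# Create a seafile remote; --obscure encrypts the password in the config.
# All values here are placeholders for my real deployment.
rclone config create seafile seafile \
    url https://seafile.example.com \
    user admin@example.com \
    pass 'my-password' \
    library Images \
    --obscure

# Copy the local tree into the configured library
rclone copy /mnt/source/images seafile:/ --progress
```

Since `library` is set in the remote config, paths on the remote are relative to that library's root.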
Two of the three libraries have migrated perfectly. This post is about the final library, which is only about 35GB in size but contains ~200k files. The files are quite small, most under a megabyte; it is a collection of optimized images, if that helps at all.
The initial migration of this library mostly went fine, but there were approximately 400 errors due to a 1-byte mismatch between rclone's expected size and the size reported back by Seafile. I have verified on the other two libraries that this happens occasionally and the files themselves are fine, so I need to re-run the migration to retry the failed files.
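My plan for the re-run looks roughly like this; using `--ignore-size` to sidestep the 1-byte mismatches is my own assumption, and the remote and path names are placeholders:

```shell
# Re-run the copy: files already present are skipped, and
# --ignore-size keeps the known 1-byte size discrepancies from
# being flagged as failures (rclone then compares by modtime only)
rclone copy /mnt/source/images seafile:/ \
    --ignore-size \
    --retries 5 \
    --progress
```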
The problem is that during the file-checking stage the migration pegs all CPU cores, and after 60 seconds the container restarts. So far I have tried switching to the Go-based fileserver, increasing the number of worker and indexing threads, and putting a bandwidth limit on rclone.
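Two more things I'm planning to try, in case they're relevant: lowering rclone's concurrency so the checking stage hammers the fileserver less, and relaxing the Kubernetes liveness probe so the pod isn't killed while the CPU is pegged. The deployment name, namespace, and probe values below are guesses specific to my cluster:

```shell
# Throttle rclone: fewer parallel checkers/transfers means fewer
# simultaneous metadata requests hitting the Seafile fileserver
rclone copy /mnt/source/images seafile:/ --checkers 2 --transfers 2

# Relax the liveness probe on the Seafile deployment so a busy
# container isn't restarted after ~60s of unresponsiveness
# (deployment/namespace names are placeholders for my cluster)
kubectl patch deployment seafile -n seafile --type json -p '[
  {"op": "replace",
   "path": "/spec/template/spec/containers/0/livenessProbe/failureThreshold",
   "value": 10},
  {"op": "replace",
   "path": "/spec/template/spec/containers/0/livenessProbe/timeoutSeconds",
   "value": 30}
]'
```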
Is there anything else I could tune in Seafile to make this migration more likely to succeed?