Migrate TBs of data from an old on-premises seafile to seafile in Docker

I have a very old version of seafile running on an Odroid XU4 with Ubuntu 16.04. For various reasons (mainly compatibility with this old Odroid device), seafile can no longer be properly upgraded on it. This instance manages about 4 TB of data.
I want to transfer the files to a newer version of seafile, specifically the Docker version (Community Edition). What I have tried so far:

  • Transfer the data (the seafile data blocks) directly from seafile to seafile using rclone's seafile backend (rclone DOT org/seafile/). Here I ran into errors when copying larger files, probably a bug in rclone.
  • Transfer the files as I see them via FUSE (the “real” files, not the seafile data blocks) with rclone over SSH to the other Odroid (rclone DOT org/sftp/), then synchronize them there with the docker-seafile in a second step. Disadvantage of this solution: on the target Odroid the data would exist twice (the copied files plus the seafile blocks), which would exceed the storage capacity. I would have to do this in small chunks and delete the copied files after each sync.
  • On the “old” Odroid, I installed seaf-cli (help DOT seafile DOT com/syncing_client/linux-cli/) inside a Docker container. With seaf-cli sync, the files visible via FUSE (not the seafile blocks) are synchronized to the docker-seafile on the new Odroid.
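For reference, the three approaches above roughly correspond to commands like the following. This is only a sketch: the remote names (`oldseafile`, `newseafile`, `newodroid-sftp`), library name, server URL, user, and mount paths are placeholders from my setup, not literal values.

```shell
# 1) rclone with the seafile backend, old server -> new server
#    ("oldseafile" / "newseafile" are hypothetical rclone remote names)
rclone copy oldseafile:MyLibrary newseafile:MyLibrary --progress

# 2) rclone over SFTP: copy the FUSE-mounted "real" files to the new Odroid,
#    to be synced into the docker-seafile there in a second step
#    (the seaf-fuse mount point and target path are placeholders)
rclone copy /mnt/seafile-fuse/MyLibrary newodroid-sftp:/srv/staging/MyLibrary

# 3) seaf-cli on the old Odroid, syncing the FUSE-visible folder
#    with an existing library on the new docker-seafile
seaf-cli init -d ~/seafile-client
seaf-cli start
seaf-cli sync -l LIBRARY_ID -s http://new-odroid:8000 \
    -u me@example.com -d /mnt/seafile-fuse/MyLibrary
```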

The third solution would be my preferred one: it is a syncing process that can be interrupted and resumed (unlike plain file transfers via SSH), and it should not duplicate data on the target Odroid. However, and now to my main point: when I start the synchronization with seaf-cli sync, the folder ~/seafile-client/seafile-data/storage/ inside the Docker container starts to grow. This folder contains the subfolders blocks, commits and fs. I fear that seaf-cli recreates the local folder as seafile blocks in the background, which would definitely exceed the available storage capacity.

My questions:

  • When synchronizing an existing local folder to a target seafile instance, does seaf-cli create a local copy of the files as seafile blocks?
  • Or is ~/seafile-client/seafile-data/storage/ just some kind of cache whose growth is limited?
  • What other solutions can you think of to get these large amounts of data from the old seafile to the docker-seafile?

According to forum.seafile DOT com/t/another-seafile-data-is-too-big-50gib-for-109gib-of-real-data/7536/16?u=tadaki, the storage folder is a kind of cache which is periodically filled and emptied. My guess is that seaf-cli takes a batch of files, creates block data in storage, syncs it to the remote repository, empties the storage folder, and proceeds to the next batch.
Knowing that, I can start the synchronization with seaf-cli, confident that it won't exhaust the storage capacity on the old Odroid.
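To reassure myself during the sync, I log the size of the storage folder over time; if it really is a cache, the size should go up and down instead of growing monotonically. A minimal sketch (the path and the 5-minute interval are assumptions from my setup):

```shell
# Log the size of seaf-cli's storage cache every 5 minutes.
# If storage/ is a bounded cache, the logged sizes should oscillate
# rather than grow without limit.
STORAGE="$HOME/seafile-client/seafile-data/storage"
while true; do
    printf '%s\t%s\n' "$(date '+%F %T')" "$(du -sh "$STORAGE" | cut -f1)"
    sleep 300
done
```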