Migrate from local storage to S3 using the migration script

Hi there,

Server Version 6.0.12 Pro

I wanted to move my storage backend from local storage to S3.
I copied the config file and added the following settings there:

[commit_object_backend]
name = s3
use_v4_signature = true
aws_region = eu-west-1
use_https = true
bucket = sf-commit-objects
key_id = XXX
key = XXX
memcached_options = --SERVER=localhost --POOL-MIN=10 --POOL-MAX=100

[fs_object_backend]
name = s3
use_v4_signature = true
aws_region = eu-west-1
use_https = true
bucket = sf-fs-objects
key_id = XXX
key = XXX
memcached_options = --SERVER=localhost --POOL-MIN=10 --POOL-MAX=100

[block_backend]
name = s3
use_v4_signature = true
aws_region = eu-west-1
use_https = true
bucket = sf-block-objects
key_id = XXX
key = XXX
memcached_options = --SERVER=localhost --POOL-MIN=10 --POOL-MAX=100
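For reference, the Seafile manual documents each bucket under its own section header ([commit_object_backend], [fs_object_backend], [block_backend]). A minimal sketch that writes a skeleton config and checks all three sections are present; the /tmp path is a placeholder and credentials are omitted:

```shell
# Skeleton of the three S3 backend sections from the Seafile manual;
# the /tmp path and bucket names are placeholders.
conf=/tmp/seafile-s3-demo.conf
cat > "$conf" <<'EOF'
[commit_object_backend]
name = s3
bucket = sf-commit-objects

[fs_object_backend]
name = s3
bucket = sf-fs-objects

[block_backend]
name = s3
bucket = sf-block-objects
EOF

# sanity check: all three section headers must be present
for s in commit_object_backend fs_object_backend block_backend; do
    grep -q "^\[$s\]" "$conf" && echo "$s: ok"
done
```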

All good. I then used the migration script and ran: migrate.sh config/seafile-s3.conf
The script ran successfully (completed blocks, commits and fs). What do I have to do next? I thought the script would move the data to the S3 buckets. Do I have to do this manually?

Thanks!

Best, Andre

@Jonathan, can you help here?

Thanks! Best, Andre

The script should actually move the data to S3.

But this was not working then; still, the response from the migration script was "successful" (or "completed").
The script ran ~15 min for currently 700 GB of data. My assumption is that the script just updates the database or "prepares" the filesystem, and I have to move the data to S3 myself. It would be nice if someone could confirm this, because syncing the folder "data/storage/fs" takes ages to complete (due to the thousands of subfolders).
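One way to check whether the script actually copied anything is to compare object counts between the local storage tree and the bucket. A sketch; the tiny demo tree below stands in for data/storage so the snippet is self-contained, and the bucket name is an example:

```shell
# build a tiny stand-in for data/storage/fs so the count is reproducible
root=/tmp/demo-storage
mkdir -p "$root/fs/repo1/ab"
touch "$root/fs/repo1/ab/cdef" "$root/fs/repo1/ab/0123"

# number of local fs objects
find "$root/fs" -type f | wc -l > /tmp/local-count.txt
cat /tmp/local-count.txt

# on a real migration you would compare this against the bucket, e.g.:
#   aws s3 ls --recursive s3://sf-fs-objects | wc -l
# a large mismatch means the script did not actually move the data
```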

After running the script and using the new config file, you should be able to see your data in your S3 buckets and have Seafile running again. What’s the problem you see?

Pretty much looks like the script aborts after 15 minutes in his case while saying it has finished (which is unlikely with 700 GB).

Right, it took ~15 min and the script said "completed" for commits, fs and blocks. But the data wasn’t uploaded to S3, so I'm not sure what I should do now.
Should I sync the folders manually to the S3 buckets and then use the new config file?

OK, uploading the folders to the S3 buckets is not really working :slight_smile:

That is the result:

Pretty much looks like some error case wasn’t handled, and thus not all data was uploaded to S3.

I uploaded (synced) all files (checked multiple times) from the folder

  • data/storage/
    – blocks
    – commits
    – fs

to the respective S3 buckets and changed the config file, so all files should be there. Do I have to do something else?
I’m afraid to start the server with the old config after the migration script confirmed that the migration is completed. I guess the script did something in those 15 minutes :wink:

Afaik, the naming of the files is done differently for S3, so just uploading them won’t work.
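A dry-run sketch of what a manual upload would have to do under that assumption. The key layout used here (a local path <repo-id>/<xx>/<rest> becoming a single S3 key <repo-id>/<xx><rest>) is my reading of the posts above, not something confirmed in this thread; all paths, ids and the bucket name are made up, and the snippet only prints the aws commands instead of running them:

```shell
# assumed mapping (NOT confirmed upstream): local storage splits the
# 40-char object id as <first2>/<remaining38>, while the S3 key
# re-joins it:  storage/fs/<repo>/ab/cdef...  ->  s3: <repo>/abcdef...
repo="6b0e8c07-1111-2222-3333-444455556666"   # made-up repo id
root=/tmp/demo/storage/fs
mkdir -p "$root/$repo/ab"
touch "$root/$repo/ab/cdef0123456789abcdef0123456789abcdef01"

# dry run: print the copy commands instead of executing them
find "$root/$repo" -type f | while read -r f; do
    xx=$(basename "$(dirname "$f")")
    rest=$(basename "$f")
    echo aws s3 cp "$f" "s3://sf-fs-objects/$repo/$xx$rest"
done > /tmp/dryrun.txt
cat /tmp/dryrun.txt
```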

Just wondering if there has been any update on this issue. I’ve tried migrating my ~200 GB of data to S3 using the above script: it completes in about 30 seconds with no errors, but no data is moved to S3.

We’ve tested the script internally with S3. But we haven’t officially released the script yet, so no manual is available for it right now. Can you tell us how you run the script?

Has there been any progress with this issue? I’m running into the same issue and not seeing any glaring problems that would cause the migration to fail. I’m seeing a significant amount of network traffic on the system while the script is running (mine only runs for 2 minutes, with a 520 GB repo), but at the end, with no errors returned, the buckets are empty.

I’ve been in the seafile-server-latest directory, running ./migrate.sh seafile.s3.conf
I’ve also tried the above, but specifying the full path to the conf file.

I’ve also changed into the data folder and then executed migrate.sh via its full path (with and without the full path to the conf file).

Results are always the same, ~2 minutes of execution, no errors returned, and empty S3 buckets.

Is your config file named ‘seafile-s3.conf’? You should rename it to seafile.conf under config/, then run ./migrate.sh config. Give it a try.
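The rename step above can be sketched like this; migrate.sh itself is not run here, and /tmp/migrate-conf plus the one-line config are placeholders:

```shell
# migrate.sh takes a directory argument and looks for a file named
# exactly seafile.conf inside it, so a file called 'seafile-s3.conf'
# is never picked up
conf_dir=/tmp/migrate-conf
mkdir -p "$conf_dir"
printf '[commit_object_backend]\nname = s3\n' > "$conf_dir/seafile.conf"
ls "$conf_dir"
# then, from the server directory:
#   ./migrate.sh "$conf_dir"
```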

An official document has been added for this migration script: https://manual.seafile.com/deploy_pro/migrate.html

Sorry for the necropost, but this issue is nearly driving me mad. :frowning:

I followed the official manual to migrate data from local storage to an S3-compatible object storage.

When I use the directory path of the new seafile.conf, the output is:

When I use the full path of the new seafile.conf, the output is:

In both situations the script runs fine with no errors, but the buckets are always empty.

Many thanks for any reply!

I would put your temporary seafile.conf in the /opt directory (just like their docs show), since the Seafile migration script has a hardcoded path, based on my last experience.

Hey. I'm currently working with version 11.0.6 and getting the same behaviour when I'm trying to migrate data to S3.
The S3 config is IMHO working: I've set up a test instance, and without existing data I can use the S3 config, create new libraries and save data.
But the migrate.sh script does not copy the data.
I already tried saving the config in /opt/ and running the script with both the folder and the complete path.
It is a Docker setup.

Any hints?