Migrating NFS storage to S3 failed

Hi,

We are running Seafile Pro 11 in Docker with NFS as the storage backend. We want to migrate to Pro 13 with S3 as the backend. What is the best way to do this?

For testing purposes, I installed a new server with Pro 13 and NFS. I tried to migrate the backend to S3 as described in the manual, but that failed.

I opened a shell in my Docker container and created a new seafile.conf file under /shared, where I defined the S3 storage. Then I ran the migrate.sh script, which failed with an error:

/opt/seafile/seafile-server-latest# ./migrate.sh /shared/seafile.conf

/opt/seafile/seafile-pro-server-13.0.18/migrate.py:214: SyntaxWarning: invalid escape sequence '\A'
  if len(obj[1]) != 40 or not re.match('\A[0-9a-f]+\Z', obj[1]):
2026-02-11 12:39:44,846 filesystem storage backend does not support migration between identical storage
Done.

Is this an error in migrate.py?

I can run Seafile with a new S3 backend defined in the .env file without any problems.

Regards,

Dirk

It seems the r prefix is missing in the Python syntax for re.match. When I change the line to

if len(obj[1]) != 40 or not re.match(r'\A[0-9a-f]+\Z', obj[1]):

the error disappears. But I still get the message:

filesystem storage backend does not support migration between identical storage

So, how can I migrate my storage backend from NFS to S3?
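For what it's worth, the raw-string fix can be checked in isolation. The helper below is a minimal sketch mirroring the check in migrate.py, not the actual script code:

```python
import re

def looks_like_object_id(obj_id):
    # Mirrors the check in migrate.py: a 40-character lowercase hex string.
    # The r prefix keeps '\A' and '\Z' as regex anchors instead of being
    # mis-read as Python string escapes (which triggers the SyntaxWarning).
    return len(obj_id) == 40 and re.match(r'\A[0-9a-f]+\Z', obj_id) is not None

print(looks_like_object_id('a' * 40))  # True
print(looks_like_object_id('g' * 40))  # False
```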

You have used the wrong path in the migrate.sh command. The correct command is:

./migrate.sh /shared

The Python warning doesn't affect the outcome of the script. We'll fix the syntax warning in a later version.

Hi Jonathan,

Thanks for the reply, but that doesn't work either.

/opt/seafile/seafile-server-latest# ./migrate.sh /shared

2026-02-12 10:02:16,858 filesystem storage backend does not support migration between identical storage
Done.

In my .env file I’ve defined:

### Storage type

SEAF_SERVER_STORAGE_TYPE=disk # disk, s3, multiple
MD_STORAGE_TYPE=$SEAF_SERVER_STORAGE_TYPE # disk, s3
SS_STORAGE_TYPE=$SEAF_SERVER_STORAGE_TYPE # disk, s3

In my docker container I run:

/opt/seafile/seafile-server-latest# ./migrate.sh /shared

2026-02-12 10:26:10,433 filesystem storage backend does not support migration between identical storage
/opt/seafile/seafile-server-latest# export SEAF_SERVER_STORAGE_TYPE=s3
/opt/seafile/seafile-server-latest# ./migrate.sh /shared

2026-02-12 10:26:29,709 S3 storage backend does not support migration between identical storage

Then I set the multiple storage type, and in my seafile.conf:

[storage]
enable_storage_classes = false

Now, in my container I get the error:

/opt/seafile/seafile-server-latest# export SEAF_SERVER_STORAGE_TYPE=multiple
/opt/seafile/seafile-server-latest# ./migrate.sh /shared

WARNING:root:Failed to load json file
Traceback (most recent call last):
  File "/usr/lib/python3.12/configparser.py", line 767, in get
    value = d[option]
            ~^^^^^^^^
  File "/usr/lib/python3.12/collections/__init__.py", line 1015, in __getitem__
    return self.__missing__(key)            # support subclasses that define __missing__
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/collections/__init__.py", line 1007, in __missing__
    raise KeyError(key)
KeyError: 'storage_classes_file'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/seafile/seafile-pro-server-13.0.18/migrate.py", line 14, in <module>
    from seafobj.objstore_factory import SeafObjStoreFactory
  File "/opt/seafile/seafile-pro-server-13.0.18/seahub/thirdpart/seafobj/__init__.py", line 2, in <module>
    from .commits import commit_mgr
  File "/opt/seafile/seafile-pro-server-13.0.18/seahub/thirdpart/seafobj/commits.py", line 1, in <module>
    from .objstore_factory import objstore_factory
  File "/opt/seafile/seafile-pro-server-13.0.18/seahub/thirdpart/seafobj/objstore_factory.py", line 551, in <module>
    objstore_factory = SeafObjStoreFactory()
                       ^^^^^^^^^^^^^^^^^^^^^
  File "/opt/seafile/seafile-pro-server-13.0.18/seahub/thirdpart/seafobj/objstore_factory.py", line 430, in __init__
    json_file = cfg.get('storage', 'storage_classes_file')
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/configparser.py", line 770, in get
    raise NoOptionError(option, section)
configparser.NoOptionError: No option 'storage_classes_file' in section: 'storage'
Done.


In the manual the path to the file is given as /shared/conf/seafile_storage_classes.json. That doesn't work, because the conf directory is under /shared/seafile/conf.

[storage]
enable_storage_classes = false
storage_classes_file = /shared/seafile/conf/seafile_storage_classes.json

But no matter whether I change the path and create the JSON file, the migrate script still doesn't work. I also wonder why the script looks for the JSON file at all, even though I set enable_storage_classes = false.
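For illustration, the crash comes from an unconditional configparser lookup that runs before the boolean is honored. A guarded version (a sketch only, not the actual seafobj code) would avoid the NoOptionError:

```python
import configparser

cfg = configparser.ConfigParser()
cfg.read_string("""
[storage]
enable_storage_classes = false
""")

# An unconditional cfg.get('storage', 'storage_classes_file') raises
# configparser.NoOptionError here, matching the traceback above.
# Checking the boolean first (with a fallback) avoids the crash when
# storage classes are disabled.
if cfg.getboolean('storage', 'enable_storage_classes', fallback=False):
    json_file = cfg.get('storage', 'storage_classes_file')
else:
    json_file = None

print(json_file)  # None
```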

Now I don’t know what to do :thinking:

We have reproduced your problem. This is a bug in the script in version 13. The SEAF_SERVER_STORAGE_TYPE environment variable makes the script use the original storage configuration as the destination configuration. Setting SEAF_SERVER_STORAGE_TYPE to empty before running the script should work around the issue for now. We'll fix the script in the next version.

Sorry for the inconvenience, and thanks for reporting the issue.

Setting SEAF_SERVER_STORAGE_TYPE to 'multiple' overrides the configuration from the conf file, so it works the same as setting enable_storage_classes to true.

Hi Jonathan,

Good to hear that you could reproduce the issue.

I will try again with the empty variable SEAF_SERVER_STORAGE_TYPE.

Thanks!

In fact, the migration of my data succeeded after I set SEAF_SERVER_STORAGE_TYPE=<empty> in the .env file. I compared the files on the NFS share and in my S3 storage; they were identical. But I couldn't access the S3 data until I deleted the port from the S3 host:

S3_HOST=s3.mydomain:443 (not working)
S3_HOST=s3.mydomain (working)
S3_USE_HTTPS=true

After changing to SEAF_SERVER_STORAGE_TYPE=s3 I could access the data from my S3 storage. But I always got an internal server error when I opened a library. The new file features have disappeared, because MD_STORAGE_TYPE used the same backend as Seafile. Can the metadata be migrated, too?

Just another observation I made during the migration last week:
I got a lot of the following errors:

Connection pool is full, discarding connection: seafile-fs.s3.mydomain. Connection pool size: 10

This seems to have been caused by the entry

DEFAULT_POOLSIZE = 10
...
pool_connections=DEFAULT_POOLSIZE,
pool_maxsize=DEFAULT_POOLSIZE,

in file seafile-server-latest/seahub/thirdpart/requests/adapters.py.
I set this value to 100 and the errors disappeared.
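As an aside, in your own client scripts the same effect can be achieved without patching the bundled seahub/thirdpart/requests/adapters.py: a Session can mount an adapter with a larger pool. This is a hypothetical sketch (the hostname is a placeholder, and Seafile's internal seafobj client may construct its connections differently):

```python
import requests
from requests.adapters import HTTPAdapter

# Mount an adapter with a larger connection pool instead of editing
# the library's DEFAULT_POOLSIZE. s3.example.com is a placeholder host.
session = requests.Session()
adapter = HTTPAdapter(pool_connections=100, pool_maxsize=100)
session.mount('https://', adapter)
```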

The migration script does not work for Metadata yet.