Migration from bare metal (Seafile + DB host) to Docker Professional


I have an existing Community installation on bare metal (1 host for Seafile, 1 host for MySQL).

Now I would like to migrate to a Professional Docker-based installation, including the Dockerized MySQL server rather than the external one.

I have the data on a separate disk. I have SQL dumps from ccnet_db, seafile_db and seahub_db.
The Docker installation of the Professional version works fine until I replace these three databases using:

docker exec -it e084aa9XXXXX /bin/bash
mysql -p ccnet_db < ccnet_db-dump.sql
mysql -p seafile_db < seafile_db-dump.sql
mysql -p seahub_db < seahub_db-dump.sql
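Equivalently, scripted from the Docker host in one loop. This is only a sketch that prints the commands instead of running them; the container name "seafile-mysql" and the dump file paths are assumptions, adjust them to your setup and drop the echo to actually execute:

```shell
# Dry run: print the three import commands instead of executing them.
# "seafile-mysql" and the dump locations are placeholders for my setup.
CONTAINER=seafile-mysql
for DB in ccnet_db seafile_db seahub_db; do
  echo "docker exec -i ${CONTAINER} mysql -uroot -p ${DB} < ${DB}-dump.sql"
done
```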

After this change, the Seafile container can no longer access the database:

seafile-mysql    | 2022-11-27 17:22:35 5 [Warning] Access denied for user 'seafile'@'' (using password: YES)

I tried to re-grant the rights as described in "Migrate from non-docker deployment" in the Seafile Admin Manual, but nothing changed.

As I only have a few users, I could also re-create them, as long as the data stays intact and the libraries remain assigned to their users.

What would be the best way to get to the fully Dockerized Professional installation?

Thanks, -MN

Your problem is your environment setup and MySQL privileges. It has nothing to do with Seafile.

Are you sure the user "seafile" exists? The page you linked only talks about privileges, not about creating the user, and your SQL dumps surely do not contain a MySQL user export.

If the user exists:
Are you sure you granted the privileges to 'seafile'@'%' and not to 'seafile'@'<something else>'? The % means "connections from anywhere".
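A concrete way to check this from the Docker host, as a sketch; the container name "seafile-mysql" is taken from your log, and 'your-password' is a placeholder:

```shell
# Does the user exist, and with which host pattern?
docker exec -it seafile-mysql mysql -uroot -p \
  -e "SELECT user, host FROM mysql.user WHERE user = 'seafile';"

# If it is missing or bound to the wrong host, recreate the user and grants:
docker exec -it seafile-mysql mysql -uroot -p \
  -e "CREATE USER IF NOT EXISTS 'seafile'@'%' IDENTIFIED BY 'your-password';
      GRANT ALL PRIVILEGES ON ccnet_db.* TO 'seafile'@'%';
      GRANT ALL PRIVILEGES ON seafile_db.* TO 'seafile'@'%';
      GRANT ALL PRIVILEGES ON seahub_db.* TO 'seafile'@'%';
      FLUSH PRIVILEGES;"
```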


The user "seafile"@"%.%.%.%" is created automatically by the docker-compose file.

However, each of the three import commands will DROP the database and CREATE a new one. I am honestly not sure whether the GRANTs for the seafile user still apply to the re-created databases.
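As far as I can tell, MySQL/MariaDB does not remove database-level privileges when a database is dropped (they have to be revoked manually), so the GRANTs should still apply after the DROP/CREATE. This can be verified after the import; the container name here is the one from the log above:

```shell
# List the effective grants for the seafile user after re-importing.
docker exec -it seafile-mysql mysql -uroot -p \
  -e "SHOW GRANTS FOR 'seafile'@'%';"
```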

I think I have found a solution for my migration problem, though:

  1. Install the Dockerized Seafile Pro edition from scratch
  2. Copy the data out of the old bare-metal server via seaf-fuse and rclone, then upload it into the new Seafile

This just takes some time for 5 TB of data, but it ensures that the internal storage files are consistent.
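Sketched as commands, printed as a dry run first. The mount point, the rclone remote name "newseafile", and the seaf-fuse.sh location are assumptions for my setup; with rclone's seafile backend, keeping libraries assigned to their users means configuring one remote per user:

```shell
# Dry run: print the migration steps instead of executing them.
# Mount point, remote name and script path are placeholders.
MOUNT=/mnt/seafile-fuse
echo "./seaf-fuse.sh start ${MOUNT}"                # expose libraries as plain files
echo "rclone copy ${MOUNT}/ newseafile: --progress" # copy into the new server
echo "./seaf-fuse.sh stop ${MOUNT}"                 # unmount when done
```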

Thanks, -MN