After migration from community to pro - libraries "lost"

Hi community,

I’ve been using Seafile for quite a while now without major issues.
So I decided to switch from the Community Edition to the Pro Edition yesterday (latest CE version to latest Pro, same minor version).
The migration ran without error messages and was executed in the context of the seafile user on my Ubuntu server using a MySQL DB.

Problems:

  • After the migration, all libraries disappeared from all user accounts.
  • The quota suggests that space is used (and is calculated correctly), but no libraries are shown.
  • Using the admin account and the administration section, the libraries of all users are still listed, but without names.

What I’ve done so far to solve the issue:

  • ran fsck checks (Seafile’s seaf-fsck, Ubuntu’s filesystem check, and folder/file permission checks; Seafile’s one sketched below)
  • ran a DB check (mysqlcheck -u root -p --all-databases --auto-repair; no messages)
  • restarted several times (the server as well as seafile and seahub)
  • cleared the Seafile cache (deleted the folder)
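
For anyone repeating the Seafile check: the fsck ships as a script with the server; the path below is a typical install layout, so treat it as a sketch:

# Seafile's own consistency check (run as the seafile user;
# the script path depends on where the server is installed)
$ ./seafile-server-latest/seaf-fsck.sh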

I don’t want to play around too much before getting this solved,
so any advice is appreciated :wink:

Greets

Please create the missing tables in seafile_db:

CREATE TABLE IF NOT EXISTS RepoStorageId (
  id BIGINT NOT NULL PRIMARY KEY AUTO_INCREMENT,
  repo_id CHAR(40) NOT NULL,
  storage_id VARCHAR(255) NOT NULL,
  UNIQUE INDEX(repo_id)
) ENGINE=INNODB;

CREATE TABLE IF NOT EXISTS RoleQuota (
  id BIGINT NOT NULL PRIMARY KEY AUTO_INCREMENT,
  role VARCHAR(255),
  quota BIGINT,
  UNIQUE INDEX(role)
) ENGINE=INNODB;

CREATE TABLE IF NOT EXISTS FileLocks (
  id BIGINT NOT NULL PRIMARY KEY AUTO_INCREMENT,
  repo_id CHAR(40) NOT NULL,
  path TEXT NOT NULL,
  user_name VARCHAR(255) NOT NULL,
  lock_time BIGINT,
  expire BIGINT,
  KEY(repo_id)
) ENGINE=INNODB;
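
One way to apply these statements in one go, assuming you save them to a file and connect as root:

$ mysql -u root -p seafile_db < missing_tables.sql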

Thank you, @daniel.pan,

I’ve done that and am now waiting for seafile to start up.
It’s not yet reachable and its processes are consuming more than 8GB of RAM right now :wink:
But let’s wait and see. (Your answer really makes sense, though.)
Results will follow soon…

Greetings
Henrik

Back and still waiting, buuuuut:

I have to give some additional information, because your table-creation statements gave me an implicit hint.

I watched my seafile.log and found many more “missing table” errors.
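
The entries looked like MySQL’s standard “table doesn’t exist” errors, so something like this surfaces them all at once (the log path is from my setup and may differ):

$ grep -i "doesn't exist" /opt/seafile/logs/seafile.log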

I decided to create the missing tables with the statements you use in the server code on GitHub:

CREATE TABLE IF NOT EXISTS FileLockTimestamp (
  id BIGINT NOT NULL PRIMARY KEY AUTO_INCREMENT,
  repo_id CHAR(40),
  update_time BIGINT NOT NULL,
  UNIQUE INDEX(repo_id)
);

CREATE TABLE IF NOT EXISTS FolderPermTimestamp (
id BIGINT NOT NULL PRIMARY KEY AUTO_INCREMENT,
repo_id CHAR(36),
timestamp BIGINT,
UNIQUE INDEX(repo_id)
) ENGINE=INNODB;
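
To double-check afterwards that all the tables exist (same DB name as above):

$ mysql -u root -p seafile_db -e "SHOW TABLES;"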

Libraries are back now, and memory usage has also fallen back to a reasonable level.

So.
THANKS!!! :star_struck:

I had to add some additional tables. All done with the help of your repo and the error messages shown in my seafile.log.

I’d like to make one more comment.
I had a custom CSS in use. This did not work well on several pages of the Pro Edition’s web UI (e.g., no password entry field is shown when trying to open an encrypted library).
I used a modified version of a CSS repo on GitHub, which is also mentioned as an example in your manual.

Maybe you could mention that issue in your migration manual.

Greetings
Henrik

Thank you for sharing, @Henrik; I ran into the same problem.

Situation: migrating from 7.0 CE to 7.1 Pro, deployed using docker-compose. No errors from the migration script.

Solution: the same as above, creating the missing tables. Since this deployment runs in docker-compose, I applied the statements inside the MySQL container; a sketch follows.
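
A minimal sketch, assuming the DB container is named seafile-mysql (the name used in the official docker-compose file; yours may differ) and the statements are saved as missing_tables.sql:

$ docker exec -i seafile-mysql sh -c 'exec mysql -u root -p"$MYSQL_ROOT_PASSWORD" seafile_db' < missing_tables.sql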

Then I had more trouble when migrating the storage from local to Aliyun’s OSS (S3-compatible storage):

2021-03-23 04:25:25,614 Connection pool is full, discarding connection: gsseafilefsobject.oss-cn-hongkong-internal.aliyuncs.com
2021-03-23 04:25:31,151 [fs] task: 7700 objects written to destination.                                                        
./migrate.sh: line 55:  2116 Killed                  $PYTHON ${migrate}                                                        
Done.

Despite the “Done”, only a fraction of all the files in the commits directory had been copied.

I’m now copying the files into the buckets manually, mounting them with s3fs and running:

$ cp -a /mnt/nas/seafile-data/storage/commits/* /mnt/gsseafilecommit/
$ cp -a /mnt/nas/seafile-data/storage/blocks/* /mnt/gsseafileblock/
$ cp -a /mnt/nas/seafile-data/storage/fs/* /mnt/gsseafilefsobject/
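
For completeness, the mounts themselves were created roughly like this; the bucket names mirror the mount points, the endpoint matches the log line above (use the -internal variant from inside Aliyun), and the credentials file is s3fs’s standard mechanism:

$ echo 'ACCESS_KEY_ID:SECRET_ACCESS_KEY' > ~/.passwd-s3fs
$ chmod 600 ~/.passwd-s3fs
$ s3fs gsseafilecommit /mnt/gsseafilecommit \
    -o url=https://oss-cn-hongkong.aliyuncs.com -o passwd_file=~/.passwd-s3fs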

Will report after the operation finishes.

I was not successful with the method of mounting the S3 storage with s3fs and copying the files manually. There are over a hundred thousand files in commits alone, the copying took over a day, and in the end the libraries were still not accessible (unfortunately I didn’t keep the logs, but either blocks or commits were not found by the backend).

I found out that the migrate.py script had actually been killed by the OOM (out of memory) killer and, rather than reporting an error, finished with a mere “Done” (how I confirmed that is sketched after the list). I have

  • deleted the buckets that I have been using so far, created new ones
  • decreased the number of workers and the queue length
  • added swap to the server (see the sketch below)
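
Confirming the OOM kill and adding swap are standard Linux steps; the swap size is only an example:

# confirm that the kernel's OOM killer was responsible
$ dmesg | grep -i "out of memory"

# add a swap file
$ sudo fallocate -l 4G /swapfile
$ sudo chmod 600 /swapfile
$ sudo mkswap /swapfile
$ sudo swapon /swapfile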

This time the migration ran smoothly.

Now everything seems to be all right. I’m running seaf-fsck to make sure everything was transferred properly. So far so good!

Deployed OnlyOffice too. Apart from documents staying locked for several minutes after the OnlyOffice tab is closed, it seems to be working great! Finally we are moving into the 21st century in our workflow :stuck_out_tongue: