seaf-fsck.sh restore fails with a segmentation fault

I have a lot of files in a library with repo ID 5c060e2e-16c4-42bb-9c7d-aed4658bfbfc.

While I was uploading some files there, the power went out. After that I couldn't open the library anymore. I tried to restore it, but I get a segmentation fault.
Seafile Pro 13 in Docker.

./seaf-fsck.sh -r 5c060e2e-16c4-42bb-9c7d-aed4658bfbfc
Starting seaf-fsck, please wait ...
[2025-10-28 17:47:48] [INFO] ../../common/seaf-utils.c(128): Failed to read SEAFILE_MYSQL_DB_CCNET_DB_NAME, use ccnet_db by default
[2025-10-28 17:47:48] [INFO] fsck.c(684): Running fsck for repo 5c060e2e-16c4-42bb-9c7d-aed4658bfbfc.
[2025-10-28 17:47:48] [WARNING] Empty input for zlib, invalid.
[2025-10-28 17:47:48] [WARNING] ../../common/fs-mgr.c(2919): Failed to decompress fs object 33e64fea96fcf9f9f88fb54bad6005dab3800e33.
[2025-10-28 17:47:48] [INFO] fsck.c(68): Dir 33e64fea96fcf9f9f88fb54bad6005dab3800e33 is damaged.
[2025-10-28 17:47:48] [INFO] fsck.c(728): Repo 5c060e2e HEAD commit is damaged, need to restore to an old version.
[2025-10-28 17:47:48] [INFO] fsck.c(595): Scanning available commits...
[2025-10-28 17:47:48] [WARNING] Empty input for zlib, invalid.
[2025-10-28 17:47:48] [WARNING] ../../common/fs-mgr.c(2919): Failed to decompress fs object 33e64fea96fcf9f9f88fb54bad6005dab3800e33.
[2025-10-28 17:47:48] [INFO] fsck.c(68): Dir 33e64fea96fcf9f9f88fb54bad6005dab3800e33 is damaged.
[2025-10-28 17:47:48] [INFO] fsck.c(654): Find available commit e2f8bdce(created at 2025-10-27 17:41:31) for repo 5c060e2e.
[2025-10-28 17:47:48] [INFO] fsck.c(507): Checking file system integrity of repo Cloud(5c060e2e)...
./seaf-fsck.sh: line 61: 112269 Segmentation fault      (core dumped) LD_LIBRARY_PATH=$SEAFILE_LD_LIBRARY_PATH ${seaf_fsck} -d "${default_seafile_data_dir}" -F "${default_conf_dir}" ${seaf_fsck_opts}

seaf-fsck run done

Done.

Please help me restore my library.
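As a side note for anyone hitting the same thing: the "Empty input for zlib, invalid" warning suggests the power loss left one or more zero-length object files behind, which fsck then fails to decompress. A minimal sketch (assuming the standard Seafile object layout `<data-dir>/storage/fs/<repo-id>/<xx>/<remaining-hash>`; the data-dir path below is only an example) to look for such truncated objects:

```python
import os

def find_empty_fs_objects(data_dir, repo_id):
    """Return paths of zero-length fs objects for one repo.

    Assumes the standard Seafile layout:
    <data-dir>/storage/fs/<repo-id>/<xx>/<remaining-hash>
    """
    fs_dir = os.path.join(data_dir, "storage", "fs", repo_id)
    empty = []
    for root, _dirs, files in os.walk(fs_dir):
        for name in files:
            path = os.path.join(root, name)
            # A zero-byte object file cannot be valid zlib input.
            if os.path.getsize(path) == 0:
                empty.append(path)
    return empty

if __name__ == "__main__":
    # Example paths; adjust data dir to your installation.
    for path in find_empty_fs_objects("/opt/seafile/seafile-data",
                                      "5c060e2e-16c4-42bb-9c7d-aed4658bfbfc"):
        print(path)
```

This only identifies damaged objects; it does not repair them, and it does not explain the crash itself, which still looks like a bug in fsck's handling of the damaged repo.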

Hello @insigmo ,

this looks like a bug; we will fix it in the next release.

Do you need anything else? A core dump, for example?

Hi,

That’s not necessary. This issue will be fixed in the next 13.0 version.

OK, thanks a lot, I'll be waiting for it.