Virus_scan complains about 'Failed to read object <OBJECT>'

Hi everyone

I have a freshly installed Seafile setup (Pro Edition, v10.0.14, with a license for 9 users).

I’m using Minio as my storage backend. I have been testing the setup for a few days with uploads/downloads of files and checking logs to see if everything seemed fine behind the scenes.

Everything seemed fine, so I decided to add virus scanning (ClamAV) to the mix as well. Running a scan produces several warnings, however, related to objects it cannot find/read.

I eventually decided to delete ALL libraries and files during my troubleshooting.

virus_scan still complains, however:

~/seafile-server-latest$ ./pro/pro.py virus_scan
[03/23/2024 21:05:48] [INFO] [seafevents] database: mysql, name: seafpro_seahub
[03/23/2024 21:05:48] [INFO] [seafevents] database: mysql, name: seafpro_seafile
[03/23/2024 21:05:48] [INFO] [seafevents] database: mysql, name: seafpro_seafile
[03/23/2024 21:05:49] [DEBUG] Using access key provided by client.
[03/23/2024 21:05:49] [DEBUG] Using secret key provided by client.
[03/23/2024 21:05:49] [DEBUG] path=/frederik/seafpro/my-commit-objects/
[03/23/2024 21:05:49] [DEBUG] auth_path=/frederik/seafpro/my-commit-objects/
[03/23/2024 21:05:49] [DEBUG] Method: HEAD
[03/23/2024 21:05:49] [DEBUG] Path: /frederik/seafpro/my-commit-objects/
[03/23/2024 21:05:49] [DEBUG] Data: 
[03/23/2024 21:05:49] [DEBUG] Headers: {}
[03/23/2024 21:05:49] [DEBUG] Host: 127.0.0.1:9000
[03/23/2024 21:05:49] [DEBUG] Port: 9000
[03/23/2024 21:05:49] [DEBUG] Params: {}
[03/23/2024 21:05:49] [DEBUG] establishing HTTP connection: kwargs={'timeout': 70, 'port': 9000}
[03/23/2024 21:05:49] [DEBUG] Token: None
[03/23/2024 21:05:49] [DEBUG] StringToSign:
HEAD


Sat, 23 Mar 2024 20:05:49 GMT
/frederik/seafpro/my-commit-objects/
[03/23/2024 21:05:49] [DEBUG] Signature:
AWS seafpro:knCgbc5gutO8QtuvInw2mhYbX6M=
[03/23/2024 21:05:49] [DEBUG] Final headers: {'User-Agent': 'Boto/2.49.0 Python/3.11.2 Linux/6.1.0-13-amd64', 'Date': 'Sat, 23 Mar 2024 20:05:49 GMT', 'Authorization': 'AWS seafpro:knCgbc5gutO8QtuvInw2mhYbX6M=', 'Content-Length': '0'}
[03/23/2024 21:05:49] [DEBUG] Response headers: [('Accept-Ranges', 'bytes'), ('Content-Length', '0'), ('Server', 'MinIO'), ('Strict-Transport-Security', 'max-age=31536000; includeSubDomains'), ('Vary', 'Origin'), ('Vary', 'Accept-Encoding'), ('X-Amz-Id-2', 'dd9025bab4ad464b049177c95eb6ebf374d3b3fd1af9251148b658df7ac2e3e8'), ('X-Amz-Request-Id', '17BF7D7AA11C0963'), ('X-Content-Type-Options', 'nosniff'), ('X-Minio-Error-Code', 'NoSuchKey'), ('X-Minio-Error-Desc', '"The specified key does not exist."'), ('X-Xss-Protection', '1; mode=block'), ('Date', 'Sat, 23 Mar 2024 20:05:49 GMT')]
[03/23/2024 21:05:49] [WARNING] Failed to scan virus for repo 568f3a58: Failed to read object 568f3a58-8a6f-42aa-a7d4-f2a13658b930/c4982643f1e7198ec60c21da128cefacb0d1362a: S3ResponseError: 404 Not Found
.
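(For reference: the StringToSign/Signature pair in the debug output above is AWS signature v2, i.e. base64 of an HMAC-SHA1 over the string to sign. A minimal sketch with a placeholder secret, not my real key:)

```python
import base64
import hmac
from hashlib import sha1

def sign_v2(secret_key: str, string_to_sign: str) -> str:
    """AWS signature v2: base64(HMAC-SHA1(secret, StringToSign))."""
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(), sha1).digest()
    return base64.b64encode(digest).decode()

# The StringToSign from the debug output above (placeholder secret, so the
# resulting signature will differ from the logged one).
sts = "HEAD\n\n\nSat, 23 Mar 2024 20:05:49 GMT\n/frederik/seafpro/my-commit-objects/"
print(sign_v2("not-the-real-secret", sts))
```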

At the same time seaf-fsck doesn’t complain about anything being wrong:

~/seafile-server-latest$ ./seaf-fsck.sh 

Starting seaf-fsck, please wait ...

2024-03-23 21:05:55 fsck.c(619): Running fsck for repo 568f3a58-8a6f-42aa-a7d4-f2a13658b930.
2024-03-23 21:05:55 fsck.c(444): Checking file system integrity of repo My Library Template(568f3a58)...
2024-03-23 21:05:55 fsck.c(683): Fsck finished for repo 568f3a58.

seaf-fsck run done

Done.

So my question now is this: how did I possibly manage to foobar something during my very simplistic upload/download testing? And how do I fix the issue (if it even is an actual issue)?

The “Failed to read object” warning produced by virus_scan relates to repo 568f3a58 (the “My Library Template”). I then deleted the tutorial file in there, thinking the warning was somehow related to that file. But no.

No clue what object 568f3a58-8a6f-42aa-a7d4-f2a13658b930/c4982643f1e7198ec60c21da128cefacb0d1362a is supposed to be all about. It’s all very confusing to me.

Hmm, I just created a new library and uploaded a 200 MB file.

Immediately, virus_scan now complains about another object that it cannot find/read:

[03/23/2024 21:19:21] [WARNING] Failed to scan virus for repo df305031: Failed to read object df305031-9aaf-4ee4-9674-54d2b3610ce5/3ca886f96fda668fafb22919df574a142bde4f2e: S3ResponseError: 404 Not Found

Again, fsck says everything is just fine for the newly created library (with just one 200 MB file in it):

2024-03-23 21:19:40 fsck.c(619): Running fsck for repo df305031-9aaf-4ee4-9674-54d2b3610ce5.
2024-03-23 21:19:40 fsck.c(444): Checking file system integrity of repo MyTest(df305031)...
2024-03-23 21:19:41 fsck.c(683): Fsck finished for repo df305031.

I mean, it’s not like virus_scan is wrong. There is literally no object to be found called df305031-9aaf-4ee4-9674-54d2b3610ce5/3ca886f96fda668fafb22919df574a142bde4f2e (but then, where does virus_scan stumble on a reference to the object? In a table in MariaDB somewhere?)
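(My current guess at where that reference comes from: the object id in the warning looks like the repo's head commit id, which Seafile tracks in the Branch table of the seafile database. A hedged sketch of that lookup, modelled in in-memory SQLite purely for illustration — the real table lives in MariaDB/MySQL, and the exact columns are my assumption:)

```python
import sqlite3

# Illustrative model of seafile_db's Branch table (repo_id -> head commit id
# for the "master" branch). Schema/column names here are assumptions.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE Branch (name TEXT, repo_id TEXT, commit_id TEXT)")
db.execute(
    "INSERT INTO Branch VALUES (?, ?, ?)",
    ("master", "df305031-9aaf-4ee4-9674-54d2b3610ce5",
     "3ca886f96fda668fafb22919df574a142bde4f2e"),
)

# virus_scan would resolve the head commit id for the repo, then ask the
# storage backend for the object key <repo_id>/<head_commit_id>:
head = db.execute(
    "SELECT commit_id FROM Branch WHERE repo_id = ? AND name = 'master'",
    ("df305031-9aaf-4ee4-9674-54d2b3610ce5",),
).fetchone()[0]
print(f"df305031-9aaf-4ee4-9674-54d2b3610ce5/{head}")
```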

~/seafile-server-latest$ mcli ls seafile/frederik/seafpro/my-block-objects/df305031-9aaf-4ee4-9674-54d2b3610ce5/
[2024-03-23 21:19:02 CET] 8.0MiB STANDARD 006074e30ef3241fa06f0e21f379ccd6c4926c7c
[2024-03-23 21:19:02 CET] 8.0MiB STANDARD 0be9ac3aa78f8c01059b557a7d0308549e43c93c
[2024-03-23 21:19:01 CET] 8.0MiB STANDARD 0cb4295f358e66f2e21d947de67aa68eaa9ba032
[2024-03-23 21:19:02 CET] 8.0MiB STANDARD 21abf1adf6f4e22c29338f3029d4ff02acf68678
[2024-03-23 21:19:01 CET] 8.0MiB STANDARD 3874b78707f8063c19d35655399dc300f6a8fe61
[2024-03-23 21:19:01 CET] 8.0MiB STANDARD 3f29c441906138745caced802126cdea744b809c
[2024-03-23 21:19:03 CET] 8.0MiB STANDARD 4822c60308ebd0e451c892d356adfa51e366d316
[2024-03-23 21:19:03 CET] 8.0MiB STANDARD 51e6063c36f003af42501127375753bea337c554
[2024-03-23 21:19:03 CET] 8.0MiB STANDARD 5a7c3e77d2b68a4755bcc58edc48367d70ed89d8
[2024-03-23 21:19:02 CET] 8.0MiB STANDARD 5ac56e3f1f258e8a5679e8310f7690fbce727bb7
[2024-03-23 21:19:02 CET] 8.0MiB STANDARD 65def033ee11f3ce9c4149881e0c79fef83b7e7f
[2024-03-23 21:19:04 CET] 8.0MiB STANDARD 65efbbab68533690473520179662a7f735b557ae
[2024-03-23 21:19:03 CET] 8.0MiB STANDARD 6b219ee80c42da1cf8505a5e23fa9c54bdefd1e3
[2024-03-23 21:19:03 CET] 8.0MiB STANDARD 7271ee18866ed0b694617462d43f7b48adda02e3
[2024-03-23 21:19:01 CET] 8.0MiB STANDARD 743046c4adee54048b2e69915e9296d0eb893192
[2024-03-23 21:19:02 CET] 8.0MiB STANDARD 80a782239587d2a8c3b3b1f032c577a238a70133
[2024-03-23 21:19:04 CET] 8.0MiB STANDARD 88d6ca43466898e6ca8cacdbb7b5dc89c9d63677
[2024-03-23 21:19:01 CET] 8.0MiB STANDARD 8ed175bf1d19340bccc54efeef008f76a6cedc6f
[2024-03-23 21:19:04 CET] 8.0MiB STANDARD 93160848c01b512728571675c15cab8ea8894d61
[2024-03-23 21:19:04 CET] 6.3MiB STANDARD 9a31a7e844be7bdbbf69b86d983d2f18957d4d14
[2024-03-23 21:19:02 CET] 8.0MiB STANDARD 9d8e5c110be92abf0f4a207601e3be4d3395ddb3
[2024-03-23 21:19:01 CET] 8.0MiB STANDARD c1294c8485ed2a2209fc941f69e16914b15201fa
[2024-03-23 21:19:03 CET] 8.0MiB STANDARD c197bd1d5292ddd3d3448003fb78fc97c3fd55e5
[2024-03-23 21:19:03 CET] 8.0MiB STANDARD c5d8a35281d4a663fb4e1c3c49fbfb4af8f64adb
[2024-03-23 21:19:01 CET] 8.0MiB STANDARD e2bb9bd7c24b6338dfdafe4e1262c4940b0bb6f9

virus_scan is definitely not behaving in a sane manner.

As posted above, it immediately gave me this upon creating a new (empty) library:

[03/23/2024 21:19:21] [WARNING] Failed to scan virus for repo df305031: Failed to read object df305031-9aaf-4ee4-9674-54d2b3610ce5/3ca886f96fda668fafb22919df574a142bde4f2e: S3ResponseError: 404 Not Found

However, the next day (without me touching ANYTHING), that output line is no longer produced by virus_scan. Instead, the warning for the new library (repo df305031) now looks like this:

[03/24/2024 19:36:55] [WARNING] Failed to scan virus for repo df305031: 'SeafCommit' object has no attribute 'root_id'.

It’s truly odd how this WARNING changes “characteristics” from one day to another without me having done anything whatsoever to my setup in the meantime.

Hello @Attefall ,

the object that virus_scan can’t find should be the fs object or commit object of the library.

Are you using S3 as a storage backend? And is there a properly configured storage backend for fs, commits, and blocks in the current seafile.conf?

Sorry for the late response on my part (Easter mode).

Ahh, I was only looking in the block bucket for some reason, not the fs bucket and commit bucket, and there is in fact a 568f3a58-8a6f-42aa-a7d4-f2a13658b930/c4982643f1e7198ec60c21da128cefacb0d1362a to be found in the commit bucket. My bad.

~$ mcli ls seafile/frederik/seafpro/my-commit-objects/568f3a58-8a6f-42aa-a7d4-f2a13658b930/c4982643f1e7198ec60c21da128cefacb0d1362a
[2024-03-23 21:02:03 CET]   581B STANDARD c4982643f1e7198ec60c21da128cefacb0d1362a

But why does it fail to read the object then, I wonder? And why does it return an S3ResponseError: 404 Not Found?

Yeah, the backend is S3 (MinIO) and I have everything configured as per the instructions in Amazon S3 Backend - Seafile Admin Manual with a section for each of the 3 different buckets ([commit_object_backend], [fs_object_backend], [block_backend]).

Everything with the setup seems to be working as intended. Only virus_scan is causing me headaches and confusion.

Hello @Attefall , can you test this with the following python script?

from seafobj import commit_mgr
repo_id = '568f3a58-8a6f-42aa-a7d4-f2a13658b930'
commit_id = 'c4982643f1e7198ec60c21da128cefacb0d1362a'
root_id = commit_mgr.get_commit_root_id(repo_id, 1, commit_id)
print(root_id)

The following environment variables need to be set before running the script:
export PYTHONPATH=~/seafile-server-latest/seahub/thirdpart:$PYTHONPATH
export SEAFILE_CONF_DIR=~/conf

Hi @feiniks

First and foremost, thank you so much for your time on helping me troubleshoot this.

Here goes…

seafpro@attefall:~/seafile-server-latest$ echo $PYTHONPATH
/opt/seafpro/seafile-server-latest/seahub/thirdpart:/opt/seafpro/python-venv/lib
seafpro@attefall:~/seafile-server-latest$ echo $SEAFILE_CONF_DIR
/opt/seafpro/conf
seafpro@attefall:~/seafile-server-latest$ cat feiniks.py 
from seafobj import commit_mgr
repo_id = '568f3a58-8a6f-42aa-a7d4-f2a13658b930'
commit_id = 'c4982643f1e7198ec60c21da128cefacb0d1362a'
root_id = commit_mgr.get_commit_root_id(repo_id, 1, commit_id)
print(root_id)
seafpro@attefall:~/seafile-server-latest$ python feiniks.py 
Traceback (most recent call last):
  File "/opt/seafpro/seafile-server-latest/seahub/thirdpart/seafobj/backends/base.py", line 14, in read_obj
    data = self.read_obj_raw(repo_id, version, obj_id)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/seafpro/seafile-server-latest/seahub/thirdpart/seafobj/backends/s3.py", line 70, in read_obj_raw
    data = self.s3_client.read_object_content(real_obj_id)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/seafpro/seafile-server-latest/seahub/thirdpart/seafobj/backends/s3.py", line 56, in read_object_content
    self.do_connect()
  File "/opt/seafpro/seafile-server-latest/seahub/thirdpart/seafobj/backends/s3.py", line 52, in do_connect
    self.bucket = self.conn.get_bucket(self.conf.bucket_name)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/seafpro/python-venv/lib/python3.11/site-packages/boto/s3/connection.py", line 509, in get_bucket
    return self.head_bucket(bucket_name, headers=headers)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/seafpro/python-venv/lib/python3.11/site-packages/boto/s3/connection.py", line 553, in head_bucket
    raise err
boto.exception.S3ResponseError: S3ResponseError: 404 Not Found


During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/seafpro/seafile-pro-server-10.0.14/feiniks.py", line 4, in <module>
    root_id = commit_mgr.get_commit_root_id(repo_id, 1, commit_id)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/seafpro/seafile-server-latest/seahub/thirdpart/seafobj/commits.py", line 57, in get_commit_root_id
    commit = self.load_commit(repo_id, version, commit_id)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/seafpro/seafile-server-latest/seahub/thirdpart/seafobj/commits.py", line 35, in load_commit
    data = self.obj_store.read_obj(repo_id, version, obj_id)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/seafpro/seafile-server-latest/seahub/thirdpart/seafobj/backends/base.py", line 20, in read_obj
    raise GetObjectError('Failed to read object %s/%s: %s' % (repo_id, obj_id, e))
seafobj.exceptions.GetObjectError: Failed to read object 568f3a58-8a6f-42aa-a7d4-f2a13658b930/c4982643f1e7198ec60c21da128cefacb0d1362a: S3ResponseError: 404 Not Found

I just thought of something…

Could it possibly play a role that I initially installed ‘boto3’ by mistake (instead of ‘boto’)?

And then I added the new [commit_object_backend], [fs_object_backend], [block_backend] sections to seafile.conf and did a restart (all whilst boto3 was installed, not boto).

Could it be the case that upon restarting Seafile with the newly added S3 sections that something was “created/initialised” in an incorrect manner due to boto3 being installed instead of boto?

Shortly after (after realising my mistake) I uninstalled boto3 and installed boto instead - but perhaps the “damage” had already been done at this point?

Hello @Attefall, if you configured commit_object_backend correctly, then this could be caused by boto. You can remove boto and boto3 first, then reinstall boto. seafobj will only use boto3 from 11.0 onwards.
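To double-check which of the two is actually importable in the server’s virtualenv, a quick stdlib-only sketch (here the module names happen to match the pip package names):

```python
import importlib.util

def installed(module_name: str) -> bool:
    """Return True if a module can be imported in this environment."""
    return importlib.util.find_spec(module_name) is not None

# For seafobj on Seafile 10.x, 'boto' should be importable; 'boto3' is not
# needed until 11.0.
for name in ("boto", "boto3"):
    print(name, "installed" if installed(name) else "missing")
```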

Hi again. It doesn’t change anything unfortunately. I have even just now performed a complete wipe and reimplementation of my entire setup. The problem with virus_scan comes back right away.

I wonder if it has to do with the switch from normal disk to S3?

Maybe some references to the system library called “My Library Template” and its content (the tutorial file) get mangled somehow when I do the switch from disk to S3 at a later stage?

A similar thing happens with the initial admin account, which is added during the first startup of seahub, before I add S3 to the mix. Its “My Library” and content (the tutorial doc) become corrupt when I switch to S3 (which is probably the expected behavior unless I intervene manually to avoid it - but it’s not a problem since it’s just the default tutorial doc).

So, in a nutshell, I’d like to just (safely) get rid of the commit object that virus_scan complains it cannot read for some strange reason (even though it exists):

attefall/seafpro/my-commit-objects/de591858-36de-4768-968f-718caedd14be/b3fe734c027b03e0954eb4eb3f4176da62a6e2ec

After switching from local disk to S3, the data are not migrated automatically.

In your case, you can just delete the old libraries created before migration, clean them in “System admin → Libraries → Trash” and then run seaf-gc to remove them completely.

Then virus_scan will not scan those old libraries.

Hi @daniel.pan

I already deleted (and deleted again from within Trash) the “normal” libraries that were created before the move to S3, but one “special” library remains. Namely the “My Library Template”, and this is the library that virus_scan complains about.

I don’t see a way to remove this special template library (and re-initialize it) from within the System Admin web interface.

Currently I only have this special template library plus a brand-new “My Library” (correctly provisioned on S3) for a new user I created. I made that user an admin and then deleted my original admin account, which had been created when seahub.sh first ran (pre-S3).

seafpro@attefall:~/seafile-server-latest$ ./seaf-fsck.sh

Starting seaf-fsck, please wait ...

2024-04-03 19:46:59 fsck.c(619): Running fsck for repo de591858-36de-4768-968f-718caedd14be.
2024-04-03 19:46:59 fsck.c(444): Checking file system integrity of repo My Library Template(de591858)...
2024-04-03 19:46:59 fsck.c(683): Fsck finished for repo de591858.

2024-04-03 19:46:59 fsck.c(619): Running fsck for repo 5d1a2565-eea8-40f2-8be1-de5eb9f03674.
2024-04-03 19:46:59 fsck.c(444): Checking file system integrity of repo My Library(5d1a2565)...
2024-04-03 19:46:59 fsck.c(683): Fsck finished for repo 5d1a2565.

seaf-fsck run done

Done.

Both libraries (My Library Template(de591858) + My Library(5d1a2565)) are completely empty. Trash is also completely empty.

But GC complains majorly:

seafpro@attefall:~/seafile-server-latest$ ./seaf-gc.sh 

Starting seafserv-gc, please wait ...
2024-04-03 19:56:34 gc-core.c(1134): Database is MySQL/Postgre/Oracle, use online GC.
2024-04-03 19:56:34 gc-core.c(1159): Using up to 10 threads to run GC.
2024-04-03 19:56:34 gc-core.c(1103): GC version 1 repo My Library Template(de591858-36de-4768-968f-718caedd14be)
2024-04-03 19:56:34 ../../common/s3-client.c(1314): S3 error status for list bucket: 404.
2024-04-03 19:56:34 ../../common/s3-client.c(1315): Request URL: http://127.0.0.1:9000/seafpro/my-block-objects/?prefix=de591858-36de-4768-968f-718caedd14be/
2024-04-03 19:56:34 ../../common/s3-client.c(1316): Response:
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>NoSuchKey</Code><Message>The specified key does not exist.</Message><Key>my-block-objects/</Key><BucketName>seafpro</BucketName><Resource>/seafpro/my-block-objects/</Resource><RequestId>17C2D6CF44E27EB2</RequestId><HostId>dd9025bab4ad464b049177c95eb6ebf374d3b3fd1af9251148b658df7ac2e3e8</HostId></Error>
2024-04-03 19:56:34 ../../common/block-backend-s3.c(862): Failed to get block list from s3.
2024-04-03 19:56:34 gc-core.c(744): Failed to collect existing blocks for repo de591858, stop GC.

2024-04-03 19:56:34 gc-core.c(1103): GC version 1 repo My Library(5d1a2565-eea8-40f2-8be1-de5eb9f03674)
2024-04-03 19:56:34 ../../common/s3-client.c(1314): S3 error status for list bucket: 404.
2024-04-03 19:56:34 ../../common/s3-client.c(1315): Request URL: http://127.0.0.1:9000/seafpro/my-block-objects/?prefix=5d1a2565-eea8-40f2-8be1-de5eb9f03674/
2024-04-03 19:56:34 ../../common/s3-client.c(1316): Response:
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>NoSuchKey</Code><Message>The specified key does not exist.</Message><Key>my-block-objects/</Key><BucketName>seafpro</BucketName><Resource>/seafpro/my-block-objects/</Resource><RequestId>17C2D6CF44ED7421</RequestId><HostId>dd9025bab4ad464b049177c95eb6ebf374d3b3fd1af9251148b658df7ac2e3e8</HostId></Error>
2024-04-03 19:56:34 ../../common/block-backend-s3.c(862): Failed to get block list from s3.
2024-04-03 19:56:34 gc-core.c(744): Failed to collect existing blocks for repo 5d1a2565, stop GC.

2024-04-03 19:56:34 gc-core.c(993): === Repos deleted by users ===
2024-04-03 19:56:34 gc-core.c(1017): Start to GC deleted repo f8f14148-7979-4593-9dd4-39739f2dcad6.
2024-04-03 19:56:34 gc-core.c(948): Deleting commits for repo f8f14148-7979-4593-9dd4-39739f2dcad6.
2024-04-03 19:56:34 gc-core.c(955): Deleting fs objects for repo f8f14148-7979-4593-9dd4-39739f2dcad6.
2024-04-03 19:56:34 gc-core.c(962): Deleting blocks for repo f8f14148-7979-4593-9dd4-39739f2dcad6.
2024-04-03 19:56:34 gc-core.c(1017): Start to GC deleted repo a42e2805-3737-4d0f-80c8-926b97c23cd0.
2024-04-03 19:56:34 ../../common/s3-client.c(1314): S3 error status for list bucket: 404.
2024-04-03 19:56:34 ../../common/s3-client.c(1314): S3 error status for list bucket: 404.
2024-04-03 19:56:34 ../../common/s3-client.c(1315): Request URL: http://127.0.0.1:9000/seafpro/my-fs-objects/?prefix=f8f14148-7979-4593-9dd4-39739f2dcad6/
2024-04-03 19:56:34 ../../common/s3-client.c(1315): Request URL: http://127.0.0.1:9000/seafpro/my-block-objects/?prefix=f8f14148-7979-4593-9dd4-39739f2dcad6/
2024-04-03 19:56:34 ../../common/s3-client.c(1316): Response:
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>NoSuchKey</Code><Message>The specified key does not exist.</Message><Key>my-block-objects/</Key><BucketName>seafpro</BucketName><Resource>/seafpro/my-block-objects/</Resource><RequestId>17C2D6CF45010A00</RequestId><HostId>dd9025bab4ad464b049177c95eb6ebf374d3b3fd1af9251148b658df7ac2e3e8</HostId></Error>
2024-04-03 19:56:34 ../../common/s3-client.c(1316): Response:
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>NoSuchKey</Code><Message>The specified key does not exist.</Message><Key>my-fs-objects/</Key><BucketName>seafpro</BucketName><Resource>/seafpro/my-fs-objects/</Resource><RequestId>17C2D6CF45027E76</RequestId><HostId>dd9025bab4ad464b049177c95eb6ebf374d3b3fd1af9251148b658df7ac2e3e8</HostId></Error>
2024-04-03 19:56:34 ../../common/block-backend-s3.c(862): Failed to get block list from s3.
2024-04-03 19:56:34 ../../common/obj-backend-s3.c(666): Failed to get object list from s3.
2024-04-03 19:56:34 gc-core.c(948): Deleting commits for repo a42e2805-3737-4d0f-80c8-926b97c23cd0.
2024-04-03 19:56:34 gc-core.c(955): Deleting fs objects for repo a42e2805-3737-4d0f-80c8-926b97c23cd0.
2024-04-03 19:56:34 gc-core.c(962): Deleting blocks for repo a42e2805-3737-4d0f-80c8-926b97c23cd0.
2024-04-03 19:56:34 ../../common/s3-client.c(1314): S3 error status for list bucket: 404.
2024-04-03 19:56:34 ../../common/s3-client.c(1315): Request URL: http://127.0.0.1:9000/seafpro/my-commit-objects/?prefix=f8f14148-7979-4593-9dd4-39739f2dcad6/
2024-04-03 19:56:34 ../../common/s3-client.c(1316): Response:
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>NoSuchKey</Code><Message>The specified key does not exist.</Message><Key>my-commit-objects/</Key><BucketName>seafpro</BucketName><Resource>/seafpro/my-commit-objects/</Resource><RequestId>17C2D6CF45027E77</RequestId><HostId>dd9025bab4ad464b049177c95eb6ebf374d3b3fd1af9251148b658df7ac2e3e8</HostId></Error>
2024-04-03 19:56:34 ../../common/obj-backend-s3.c(666): Failed to get object list from s3.
2024-04-03 19:56:34 ../../common/s3-client.c(1314): S3 error status for list bucket: 404.
2024-04-03 19:56:34 ../../common/s3-client.c(1315): Request URL: http://127.0.0.1:9000/seafpro/my-block-objects/?prefix=a42e2805-3737-4d0f-80c8-926b97c23cd0/
2024-04-03 19:56:34 ../../common/s3-client.c(1316): Response:
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>NoSuchKey</Code><Message>The specified key does not exist.</Message><Key>my-block-objects/</Key><BucketName>seafpro</BucketName><Resource>/seafpro/my-block-objects/</Resource><RequestId>17C2D6CF450A45DE</RequestId><HostId>dd9025bab4ad464b049177c95eb6ebf374d3b3fd1af9251148b658df7ac2e3e8</HostId></Error>
2024-04-03 19:56:34 ../../common/block-backend-s3.c(862): Failed to get block list from s3.
2024-04-03 19:56:34 ../../common/s3-client.c(1314): S3 error status for list bucket: 404.
2024-04-03 19:56:34 ../../common/s3-client.c(1315): Request URL: http://127.0.0.1:9000/seafpro/my-fs-objects/?prefix=a42e2805-3737-4d0f-80c8-926b97c23cd0/
2024-04-03 19:56:34 ../../common/s3-client.c(1316): Response:
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>NoSuchKey</Code><Message>The specified key does not exist.</Message><Key>my-fs-objects/</Key><BucketName>seafpro</BucketName><Resource>/seafpro/my-fs-objects/</Resource><RequestId>17C2D6CF450A9567</RequestId><HostId>dd9025bab4ad464b049177c95eb6ebf374d3b3fd1af9251148b658df7ac2e3e8</HostId></Error>
2024-04-03 19:56:34 ../../common/obj-backend-s3.c(666): Failed to get object list from s3.
2024-04-03 19:56:34 ../../common/s3-client.c(1314): S3 error status for list bucket: 404.
2024-04-03 19:56:34 ../../common/s3-client.c(1315): Request URL: http://127.0.0.1:9000/seafpro/my-commit-objects/?prefix=a42e2805-3737-4d0f-80c8-926b97c23cd0/
2024-04-03 19:56:34 ../../common/s3-client.c(1316): Response:
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>NoSuchKey</Code><Message>The specified key does not exist.</Message><Key>my-commit-objects/</Key><BucketName>seafpro</BucketName><Resource>/seafpro/my-commit-objects/</Resource><RequestId>17C2D6CF450C1F26</RequestId><HostId>dd9025bab4ad464b049177c95eb6ebf374d3b3fd1af9251148b658df7ac2e3e8</HostId></Error>
2024-04-03 19:56:34 ../../common/obj-backend-s3.c(666): Failed to get object list from s3.
2024-04-03 19:56:34 gc-core.c(1211): === GC is finished ===
2024-04-03 19:56:34 gc-core.c(1215): The following repos are damaged. You can run seaf-fsck to fix them.
2024-04-03 19:56:34 gc-core.c(1218): 5d1a2565-eea8-40f2-8be1-de5eb9f03674
2024-04-03 19:56:34 gc-core.c(1218): de591858-36de-4768-968f-718caedd14be
seafserv-gc run done

Done.

Edit: by the way, it is odd that GC claims both repos/libraries are damaged. According to seaf-fsck they are not.

In seafile.conf my S3 config is pretty standard I’d say:

[commit_object_backend]
name = s3
bucket = seafpro/my-commit-objects
key_id = <KEY_ID>
key = <KEY>
host = 127.0.0.1:9000
path_style_request = true

[fs_object_backend]
name = s3
bucket = seafpro/my-fs-objects
key_id = <KEY_ID>
key = <KEY>
host = 127.0.0.1:9000
path_style_request = true

[block_backend]
name = s3
bucket = seafpro/my-block-objects
key_id = <KEY_ID>
key = <KEY>
host = 127.0.0.1:9000
path_style_request = true

I used the example mentioned here: Amazon S3 Backend - Seafile Admin Manual

Edit: the attached policy (also called seafpro) places no limitations on what the S3 seafpro user can do underneath the “root” seafpro/ bucket.

root@attefall:~ # mcli admin user info attefall seafpro
AccessKey: <KEY_ID>
Status: enabled
PolicyName: seafpro
MemberOf: []

root@attefall:~ # mcli admin policy info attefall seafpro
{
 "PolicyName": "seafpro",
 "Policy": {
  "Version": "2012-10-17",
  "Statement": [
   {
    "Effect": "Allow",
    "Action": [
     "s3:ListAllMyBuckets"
    ],
    "Resource": [
     "arn:aws:s3:::*"
    ]
   },
   {
    "Effect": "Allow",
    "Action": [
     "s3:*"
    ],
    "Resource": [
     "arn:aws:s3:::seafpro"
    ]
   },
   {
    "Effect": "Allow",
    "Action": [
     "s3:*"
    ],
    "Resource": [
     "arn:aws:s3:::seafpro/*"
    ]
   }
  ]
 },
 "CreateDate": "2024-04-02T15:46:19.111Z",
 "UpdateDate": "2024-04-02T15:46:19.111Z"
}

In seafile_db, there is a SystemInfo table. You can execute the SQL below to remove the reference to the old template library. A new one will be automatically created when you restart Seafile. The new one should be in S3 if your configuration is correct.

DELETE FROM SystemInfo WHERE info_key='default_repo_id'
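For illustration only, here is the statement’s effect modelled against an in-memory SQLite copy of the table (column names are assumptions; the real table is in MySQL):

```python
import sqlite3

# Illustrative model of seafile_db's SystemInfo key/value table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE SystemInfo (info_key TEXT PRIMARY KEY, info_value TEXT)")
db.execute(
    "INSERT INTO SystemInfo VALUES ('default_repo_id', "
    "'de591858-36de-4768-968f-718caedd14be')"
)

# The cleanup statement from above removes the stale template-library reference:
db.execute("DELETE FROM SystemInfo WHERE info_key='default_repo_id'")
remaining = db.execute("SELECT COUNT(*) FROM SystemInfo").fetchone()[0]
print(remaining)  # row gone; Seafile recreates the template library on restart
```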

Sorry for taking this long to reply. I gave up! :slight_smile: No matter what I tried with regards to cleaning up, I was not able to resolve the issue.

So I decided to start all over - once again! - and this time I made sure NOT to start seafile/seahub before every single S3-related config step was in place!

At the same time, I decided to go with version 11 instead.

The only two repos/libs that currently exist are My Library of the admin user that was created when seahub did its initial launch, and then of course the Template system library.

So at this point I have no “lingering” users and libraries as a result of moving from local disks to S3 buckets (since I never started seafile until all S3 config was in place).

When it comes to virus_scan, it failed to scan because it assumes it can read the buckets via http://127.0.0.1, but my MinIO S3 is listening on port 9000.

However, I already specify port 9000 in seafile.conf (I have intentionally removed key_id and key before posting it here).

[commit_object_backend]
name = s3
bucket = seafile-commit-objects
host = 127.0.0.1:9000
path_style_request = true

[fs_object_backend]
name = s3
bucket = seafile-fs-objects
host = 127.0.0.1:9000
path_style_request = true

[block_backend]
name = s3
bucket = seafile-block-objects
host = 127.0.0.1:9000
path_style_request = true

[virus_scan]
scan_command = clamscan
virus_code = 1
nonvirus_code = 0
scan_interval = 120
scan_size_limit = 7

But no problem. I simply added the following to my Apache config for the default virtual host definition to work around the port issue:

        RewriteEngine on
        RewriteRule ^/(.*)$ http://127.0.0.1:9000/$1 [R,L]
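(Side note: with [R], Apache answers each bucket request with a client-visible 302 that the S3 client then has to follow, as seen in the DEBUG output below. If mod_proxy and mod_proxy_http are loaded, the [P] flag would instead forward the request transparently - an untested sketch, not something I have verified against this setup:)

```apache
# Proxy instead of redirect: requires mod_proxy and mod_proxy_http.
RewriteEngine on
RewriteRule ^/(.*)$ http://127.0.0.1:9000/$1 [P,L]
```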

NOW things really took off, including a million DEBUG output lines (is there an easy way to decrease the verbosity to just INFO or WARNING, perhaps?).

The virus scan now ends with one warning per repo:

[04/10/2024 22:29:30] [WARNING] Failed to scan virus for repo 2d2b4ac0: Failed to read object 2d2b4ac0-268e-4d91-8f69-a7960c86ed51/e9850cbfc7f9eabfdbe1b832003bc0fb83728eb4: maximum recursion depth exceeded.

[04/10/2024 22:29:30] [WARNING] Failed to scan virus for repo caab0677: Failed to read object caab0677-d479-402e-ba57-1f023fa05919/d28ffdec771e9e676394ef8f7be00d78df41186f: maximum recursion depth exceeded.

More output here:

[...]
[04/10/2024 22:29:30] [DEBUG] Event before-endpoint-resolution.s3: calling handler <bound method S3RegionRedirectorv2.redirect_from_cache of <botocore.utils.S3RegionRedirectorv2 object at 0x7fd11aa474d0>>
[04/10/2024 22:29:30] [DEBUG] Event request-created.s3.HeadBucket: calling handler <function add_retry_headers at 0x7fd12a288d60>
[04/10/2024 22:29:30] [DEBUG] Calling endpoint provider with parameters: {'Bucket': 'seafile-commit-objects', 'Region': 'us-east-1', 'UseFIPS': False, 'UseDualStack': False, 'Endpoint': 'http://127.0.0.1', 'ForcePathStyle': True, 'Accelerate': False, 'UseGlobalEndpoint': True, 'DisableMultiRegionAccessPoints': False, 'UseArnRegion': True}
[04/10/2024 22:29:30] [DEBUG] Sending http request: <AWSPreparedRequest stream_output=False, method=HEAD, url=http://127.0.0.1/seafile-commit-objects, headers={'User-Agent': b'Boto3/1.34.29 md/Botocore#1.34.29 ua/2.0 os/linux#6.1.0-13-amd64 md/arch#x86_64 lang/python#3.11.2 md/pyimpl#CPython cfg/retry-mode#legacy Botocore/1.34.29', 'Date': b'Wed, 10 Apr 2024 20:29:30 GMT', 'Authorization': b'AWS seafile:KGRlZtgGXoXPQprcFx1azsVSoD8=', 'amz-sdk-invocation-id': b'9268375c-f796-49a9-8dd4-7d0d187403e8', 'amz-sdk-request': b'attempt=1'}>
[04/10/2024 22:29:30] [DEBUG] Endpoint provider result: http://127.0.0.1/seafile-commit-objects
[04/10/2024 22:29:30] [DEBUG] Selecting from endpoint provider's list of auth schemes: "sigv4". User selected auth scheme is: "s3"
[04/10/2024 22:29:30] [DEBUG] Event before-call.s3.HeadBucket: calling handler <function add_expect_header at 0x7fd12a243060>
[04/10/2024 22:29:30] [DEBUG] Event before-call.s3.HeadBucket: calling handler <bound method S3ExpressIdentityResolver.apply_signing_cache_key of <botocore.utils.S3ExpressIdentityResolver object at 0x7fd11aa4b710>>
[04/10/2024 22:29:30] [DEBUG] http://127.0.0.1:80 "HEAD /seafile-commit-objects HTTP/1.1" 302 0
[04/10/2024 22:29:30] [DEBUG] Event before-call.s3.HeadBucket: calling handler <function add_recursion_detection_header at 0x7fd12a241a80>
[04/10/2024 22:29:30] [DEBUG] Response headers: {'Date': 'Wed, 10 Apr 2024 20:29:30 GMT', 'Server': 'Apache/2.4.57 (Debian)', 'Location': 'http://127.0.0.1:9000/seafile-commit-objects', 'Content-Type': 'text/html; charset=iso-8859-1'}
[04/10/2024 22:29:30] [DEBUG] Event before-call.s3.HeadBucket: calling handler <function inject_api_version_header_if_needed at 0x7fd12a2885e0>
[04/10/2024 22:29:30] [DEBUG] Response body:
b''
[04/10/2024 22:29:30] [DEBUG] Making request for OperationModel(name=HeadBucket) with params: {'url_path': '', 'query_string': {}, 'method': 'HEAD', 'headers': {'User-Agent': 'Boto3/1.34.29 md/Botocore#1.34.29 ua/2.0 os/linux#6.1.0-13-amd64 md/arch#x86_64 lang/python#3.11.2 md/pyimpl#CPython cfg/retry-mode#legacy Botocore/1.34.29'}, 'body': b'', 'auth_path': '/seafile-commit-objects/', 'url': 'http://127.0.0.1/seafile-commit-objects', 'context': {'client_region': 'us-east-1', 'client_config': <botocore.config.Config object at 0x7fd11aa28a10>, 'has_streaming_input': False, 'auth_type': None, 's3_redirect': {'redirected': False, 'bucket': 'seafile-commit-objects', 'params': {'Bucket': 'seafile-commit-objects'}}, 'S3Express': {'bucket_name': 'seafile-commit-objects'}, 'signing': {}, 'endpoint_properties': {'authSchemes': [{'disableDoubleEncoding': True, 'name': 'sigv4', 'signingName': 's3', 'signingRegion': 'us-east-1'}]}}}
[04/10/2024 22:29:30] [DEBUG] Event needs-retry.s3.HeadBucket: calling handler <botocore.retryhandler.RetryHandler object at 0x7fd11b618090>
[04/10/2024 22:29:30] [DEBUG] Event request-created.s3.HeadBucket: calling handler <bound method RequestSigner.handler of <botocore.signers.RequestSigner object at 0x7fd11aa289d0>>
[04/10/2024 22:29:30] [DEBUG] No retry needed.
[04/10/2024 22:29:30] [DEBUG] Event needs-retry.s3.HeadBucket: calling handler <bound method S3RegionRedirectorv2.redirect_from_error of <botocore.utils.S3RegionRedirectorv2 object at 0x7fd11aa3c810>>
[04/10/2024 22:29:30] [WARNING] Failed to scan virus for repo 2d2b4ac0: Failed to read object 2d2b4ac0-268e-4d91-8f69-a7960c86ed51/e9850cbfc7f9eabfdbe1b832003bc0fb83728eb4: maximum recursion depth exceeded.
[04/10/2024 22:29:30] [DEBUG] Event before-parameter-build.s3.HeadBucket: calling handler <function validate_bucket_name at 0x7fd12a242ca0>
[04/10/2024 22:29:30] [DEBUG] Event before-parameter-build.s3.HeadBucket: calling handler <function remove_bucket_from_url_paths_from_model at 0x7fd12a288e00>
[04/10/2024 22:29:30] [DEBUG] Event before-parameter-build.s3.HeadBucket: calling handler <bound method S3RegionRedirectorv2.annotate_request_context of <botocore.utils.S3RegionRedirectorv2 object at 0x7fd11aa3c810>>
[04/10/2024 22:29:30] [DEBUG] Event before-parameter-build.s3.HeadBucket: calling handler <bound method S3ExpressIdentityResolver.inject_signing_cache_key of <botocore.utils.S3ExpressIdentityResolver object at 0x7fd11b618050>>
[04/10/2024 22:29:30] [DEBUG] Event before-parameter-build.s3.HeadBucket: calling handler <function generate_idempotent_uuid at 0x7fd12a242ac0>
[04/10/2024 22:29:30] [DEBUG] Event before-endpoint-resolution.s3: calling handler <function customize_endpoint_resolver_builtins at 0x7fd12a288fe0>
[04/10/2024 22:29:30] [DEBUG] Event before-endpoint-resolution.s3: calling handler <bound method S3RegionRedirectorv2.redirect_from_cache of <botocore.utils.S3RegionRedirectorv2 object at 0x7fd11aa3c810>>
[04/10/2024 22:29:30] [DEBUG] Calling endpoint provider with parameters: {'Bucket': 'seafile-commit-objects', 'Region': 'us-east-1', 'UseFIPS': False, 'UseDualStack': False, 'Endpoint': 'http://127.0.0.1', 'ForcePathStyle': True, 'Accelerate': False, 'UseGlobalEndpoint': True, 'DisableMultiRegionAccessPoints': False, 'UseArnRegion': True}
[04/10/2024 22:29:30] [DEBUG] Endpoint provider result: http://127.0.0.1/seafile-commit-objects
[04/10/2024 22:29:30] [DEBUG] Selecting from endpoint provider's list of auth schemes: "sigv4". User selected auth scheme is: "s3"
[04/10/2024 22:29:30] [DEBUG] Event before-call.s3.HeadBucket: calling handler <function add_expect_header at 0x7fd12a243060>
[04/10/2024 22:29:30] [DEBUG] Event before-call.s3.HeadBucket: calling handler <bound method S3ExpressIdentityResolver.apply_signing_cache_key of <botocore.utils.S3ExpressIdentityResolver object at 0x7fd11b618050>>
[04/10/2024 22:29:30] [DEBUG] Event before-call.s3.HeadBucket: calling handler <function add_recursion_detection_header at 0x7fd12a241a80>
[04/10/2024 22:29:30] [DEBUG] Event before-call.s3.HeadBucket: calling handler <function inject_api_version_header_if_needed at 0x7fd12a2885e0>
[04/10/2024 22:29:30] [DEBUG] Making request for OperationModel(name=HeadBucket) with params: {'url_path': '', 'query_string': {}, 'method': 'HEAD', 'headers': {'User-Agent': 'Boto3/1.34.29 md/Botocore#1.34.29 ua/2.0 os/linux#6.1.0-13-amd64 md/arch#x86_64 lang/python#3.11.2 md/pyimpl#CPython cfg/retry-mode#legacy Botocore/1.34.29'}, 'body': b'', 'auth_path': '/seafile-commit-objects/', 'url': 'http://127.0.0.1/seafile-commit-objects', 'context': {'client_region': 'us-east-1', 'client_config': <botocore.config.Config object at 0x7fd11aec0990>, 'has_streaming_input': False, 'auth_type': None, 's3_redirect': {'redirected': False, 'bucket': 'seafile-commit-objects', 'params': {'Bucket': 'seafile-commit-objects'}}, 'S3Express': {'bucket_name': 'seafile-commit-objects'}, 'signing': {}, 'endpoint_properties': {'authSchemes': [{'disableDoubleEncoding': True, 'name': 'sigv4', 'signingName': 's3', 'signingRegion': 'us-east-1'}]}}}
[04/10/2024 22:29:30] [DEBUG] Event request-created.s3.HeadBucket: calling handler <bound method RequestSigner.handler of <botocore.signers.RequestSigner object at 0x7fd11aa122d0>>
[04/10/2024 22:29:30] [WARNING] Failed to scan virus for repo caab0677: Failed to read object caab0677-d479-402e-ba57-1f023fa05919/d28ffdec771e9e676394ef8f7be00d78df41186f: maximum recursion depth exceeded.
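The log suggests where the “maximum recursion depth exceeded” comes from: the configured S3 endpoint is `http://127.0.0.1` (port 80), where Apache answers every `HeadBucket` with a 302 pointing at Minio on port 9000, and the client’s redirect handling re-issues the request against the same configured endpoint. A toy sketch of that failure mode (not Seafile’s or botocore’s actual code; `head_bucket` and `proxy` are illustrative names):

```python
def head_bucket(endpoint, send):
    """Sketch of a redirect-following client: on a 302 it retries the
    request, as botocore's S3 redirect handler appears to in the log."""
    status, location = send(endpoint)
    if status == 302:
        # The Location header points at port 9000, but the retry still
        # goes to the *configured* endpoint on port 80, which answers
        # 302 again -- so each redirect adds one more recursive call.
        return head_bucket(endpoint, send)
    return status

# Stand-in for Apache on :80 always 302-redirecting to Minio on :9000.
proxy = lambda url: (302, url.replace(":80", ":9000"))

try:
    head_bucket("http://127.0.0.1:80/seafile-commit-objects", proxy)
except RecursionError:
    print("maximum recursion depth exceeded")  # same error as the scan warning
```

Once the endpoint reaches Minio directly (a 200 instead of a 302), the recursion never starts.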

We will look into those issues in the coming week.


Hi, what is the name of the log file you are showing here?

Hi. It’s not from a log file; it’s stdout/stderr printed directly to the screen when running

seafile@attefall:~/seafile-server-latest$ ./pro/pro.py virus_scan
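Given the 302 responses in the debug output, it is worth checking that the S3 backend sections in seafile.conf point the `host` at Minio’s port (9000) rather than at the Apache proxy on port 80. A hedged sketch of the commit-object section, with key names as in Seafile’s S3/Minio backend documentation and placeholder credentials (verify the exact keys against the docs for your version):

```ini
[commit_object_backend]
name = s3
bucket = seafile-commit-objects
key_id = <minio-access-key>
key = <minio-secret-key>
host = 127.0.0.1:9000
path_style_request = true
use_https = false
```

The same `host` setting would apply to the corresponding fs- and block-object backend sections, if your configuration defines them the same way.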