Seafile CE 11.0 on Proxmox

Hi,

So I have installed Seafile within an LXC on Proxmox, without Docker Compose, and it doesn’t quite work.

The problem is that I use a bind mount to share a host directory with the LXC so that I can back up seafile-data from outside the container with rclone, but I get an error: invalid cross-device link. I have tried multiple solutions but nothing seems to work.

Does anyone have a workaround or a solution for this?

Can you explain where you have mounted things?

That error suggests that something is trying to make a hardlink (or move a file by making a link and then removing the old one), but this can’t be done when the source and destination are on different filesystems. So for example, if you have mounted /data/dir from the host and the software tries to move or hardlink a file from /data/file to /data/dir/file, you could get that error.
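To illustrate the mechanism (this is just a rough Python sketch with made-up paths and function names, not Seafile’s actual code, which is C): rename(2) only works within a single filesystem, so software either has to keep the temp file and its destination on the same filesystem or fall back to copying.

import errno
import os
import shutil

def move_into_storage(tmp_path, block_path):
    """Move a finished temp file into its final storage location.

    os.rename() maps to rename(2), which fails with EXDEV when the
    source and destination are on different filesystems, e.g. when
    the destination is a bind-mounted host directory.
    """
    try:
        os.rename(tmp_path, block_path)  # atomic, same-filesystem only
    except OSError as e:
        if e.errno != errno.EXDEV:
            raise
        # Cross-device: fall back to copy + delete instead
        shutil.move(tmp_path, block_path)

Judging by your log, the software just reports the error rather than copying, which is why keeping everything on one filesystem matters.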

It might also be a slightly misleading error. It is possible that you need to set your LXC container to run as privileged instead of unprivileged. I don’t know for sure that this applies here, but it is something I saw a few times when messing with LXC (before I decided it was worth just using a full VM instead of an LXC).

Hey, I just did the install again on a privileged LXC, but sadly nothing changed.

For the LXC I use the following bind mount point:

mp0: /var/lib/vz/shared/seafile-storage,mp=/opt/seafile/seafile-data/storage

Uploading fails with an internal server error, and this is what I see in the log:

2024-11-16 20:52:40 start to serve on pipe client
2024-11-16 20:52:53 ../common/block-backend-fs.c(189): [block bend] failed to commit block 2d5c9eb5-0130-47fe-8ef1-c83f46521606:ef0ca179900485bbbaec9fe6ca85c5282972ddfc: Invalid cross-device link
2024-11-16 20:52:53 ../common/fs-mgr.c(545): failed to commit chunk ef0ca179900485bbbaec9fe6ca85c5282972ddfc.
2024-11-16 20:52:53 ../common/block-backend-fs.c(189): [block bend] failed to commit block 2d5c9eb5-0130-47fe-8ef1-c83f46521606:a506a9556b99c3fe69b4902cb935b4f11f480b90: Invalid cross-device link
2024-11-16 20:52:53 ../common/fs-mgr.c(545): failed to commit chunk a506a9556b99c3fe69b4902cb935b4f11f480b90.
2024-11-16 20:52:53 repo-op.c(1148): failed to index blocks
2024-11-16 20:53:01 ../common/block-backend-fs.c(189): [block bend] failed to commit block 2d5c9eb5-0130-47fe-8ef1-c83f46521606:36922c3805e99f62fcc4f572fe1dcf6d080b3001: Invalid cross-device link
2024-11-16 20:53:01 ../common/fs-mgr.c(545): failed to commit chunk 36922c3805e99f62fcc4f572fe1dcf6d080b3001.
2024-11-16 20:53:01 repo-op.c(1148): failed to index blocks

Did you have this working on a VM, and if so, what do you use to back up your files? The reason I want to avoid rclone or similar from within the container is that I don’t want to pre-allocate space to the disk.

I think you need to move your mount point up at least one level.

On my server I have the data in /seafile-data/data. I just tested by watching file accesses with fatrace while uploading a file, and I saw several times that Seafile created a temporary file “/seafile-data/data/tmpfiles/{random_numbers}”, wrote data into it, and then moved that file into /seafile-data/data/storage/blocks, storage/fs, or storage/commits. So I think you need your mount at that “data” directory level (or above) instead of at the “storage” directory level, so that the temp files are on the same filesystem and can simply be renamed into storage.
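If I’m reading your mount point right, that would mean binding one level up, so the whole seafile-data directory (tmpfiles and storage together) lives on the host share. I haven’t tested this exact line, and the host path is just the one from your earlier post, so adjust it to your layout:

mp0: /var/lib/vz/shared/seafile-data,mp=/opt/seafile/seafile-data

With that, the temp file and its final location under storage/ are on the same filesystem, so the rename can succeed. You may need to copy whatever is already in seafile-data into the host directory before mounting over it.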

I started trying to set it all up in an LXC for that same reason: I didn’t want to allocate free space to just this single machine. At the same time, I tried a VM mounting /seafile-data from the host via NFS (which made performance unacceptable), and a few other options for testing.

In the end I decided that since my Seafile was going to need to be publicly accessible, a container was not enough isolation and I needed a full VM. I made the disk thin-provisioned so it only uses as much space as is actually used inside the disk, and with Ceph this doesn’t have the performance penalty I expected. For backups I am using Proxmox Backup Server.

Thank you, this worked. I can’t believe it was so simple!