Seafile Community Edition 9.0 is ready for testing!

None of that really makes sense. I suspect it's because of how the Seafile Docker images are crafted. That's a problem: if there are any issues with them, they can be a major PITA. I digress… but it really could be overcome and shouldn't be so hard to get beyond, if they were willing. It sounds like Docker is their preferred method to some degree. If they're putting work into the images, I hope there isn't a repeat of what I had to work through with v7 and v8. If we could get some traction to improve that without hard forks, it would be ideal.

I also synced libraries with half a million files in older versions. It may require increasing inotify limits on Linux clients, but that's it.
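For anyone hitting this, a quick sketch of checking and raising the inotify limit on a Linux client (the value below is an illustrative assumption, not an official Seafile recommendation):

```shell
# Show the current per-user inotify watch limit; distro defaults (often
# 8192 or 65536) can be too low when a synced library has hundreds of
# thousands of files.
cat /proc/sys/fs/inotify/max_user_watches

# To raise it persistently (as root; example value, adjust to your needs):
#   echo 'fs.inotify.max_user_watches=1048576' > /etc/sysctl.d/90-inotify.conf
#   sysctl --system
```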

I feel your pain here; I thought I had things sorted out. Those of us managing Docker images should get together, unify a baseline, and see how we can push a good solution back to the team at Seafile. I don't like having to hard-fork, but there was no alternative at the time. I suspect with release 9.0 it'll be back to the drawing board anyhow. Maybe not.

I like your comment.
IMHO the official container image is extremely overloaded.
It contains too many packages and layers.
I would suggest building two images: one for the server, one for the hub.
They shouldn't contain any other parts, only exactly this.
All the sophisticated scripts should be removed or replaced by very simple ones.
Other parts like the database, the webserver, the Let's Encrypt certbot etc. should run
in their own containers and MUST be removed from the Seafile container.
This makes the overall update and maintenance process much easier
and possible to automate.
And it would be the perfect preparation for a Kubernetes deployment.
See my current Seafile Helm chart on Artifact Hub (dr300481 / seafile).
And if not Kubernetes, then at least Docker Compose.
It would be a dream to take this to a new level.
But I'm under time pressure; this is just my hobby.
Working together with, let's say, 4-8 skilled people could make this a real success.
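To make the split concrete, a minimal compose sketch of the layout proposed above; all image names and settings here are placeholders, not official Seafile images:

```yaml
# Hypothetical split: one concern per container. Image names are placeholders.
services:
  db:
    image: mariadb:10.6
    environment:
      MYSQL_ROOT_PASSWORD: example
  seafile-server:                        # fileserver/daemon only
    image: example/seafile-server:9.0
    depends_on: [db]
  seahub:                                # web UI only
    image: example/seafile-hub:9.0
    depends_on: [seafile-server]
  proxy:                                 # TLS termination in its own container
    image: nginx:stable
    ports: ["443:443"]
```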

It was relatively easy to make my Dockerfiles compatible with 9.0.0

Check out ggogel/seafile-containerized on github :wink:

If you could provide your Helm charts for this, it would be amazing, since a Kubernetes deployment is on my list but I haven't had the time for it yet. I think my Dockerfiles are very well suited for a Kubernetes deployment.

I don’t have any Kubernetes stuff set up. Might be interesting to take a look at…
I’ll take a look at your Dockerfiles; I was striving for minimal images with no multiple services per image. Mine are at github/sawdog

This is exactly what my deployment does.

Quick glance - not sure about some of what you’ve got going on there. Is this your design, or is it based on new stuff from Seafile for Docker in 9.0?
There’s an awful lot of opinion there that, at first glance, I wouldn’t be interested in.
I want something simple that works for basic needs out of the box, but is flexible for those who want to implement or use a replacement component. For example, using Traefik instead of nginx for the proxy, or reusing an existing proxy rather than running one in the same container as Seahub, etc.
I hope we’re not required to use this Caddy HTTP server. I know the sync server was rewritten in Go, but I’d be surprised if running it were forced on users.
I think by the time I saw the details mentioned related to storage, I was done for the moment. I just don’t know why that’s being opinionated in Dockerfiles. Maybe I jumped in too quickly, but I just don’t want the tail wagging the dog. I’m using MariaDB, and that sits on a ZFS filesystem, but that shouldn’t make its way into the Dockerfile layer unless it’s a default configuration or the like, IMHO. I’m not sure what the impact of your storage bits is, but if I’m spending time there, I’d rather be running on PostgreSQL and drop MariaDB and family.
I’d be interested in your thoughts, as my impression was Docker wasn’t really to your liking, but what I saw doesn’t suggest that. :wink:

It is my design, which I started with version 8.0. It doesn’t have anything to do with changes in 9.0.

Can you explain what is unclear or complicated? You simply need to take the compose file, make some changes, and deploy it. The storage part, for instance, is specifically for Docker Swarm and doesn’t apply to a normal Docker Compose setup. Hence, that whole part is labeled as Additional Information; nothing there is imposed on anybody. It is just advice for more complicated deployments.

I don’t understand why you write me this lengthy answer instead of reading the document. For example, what you say about nginx is exactly what my deployment does, as described in the first two sections.

Just read the first three sections properly…

Quick question about this.

I’m on Arch Linux and have been manually upgrading using the Pro version “seafile-pro-server_8.0.xx_x86-64_CentOS.tar.gz” file.
When upgrading to release 9, will there be a package I can use?

That would be great, if the Docker image actually worked! I tried for ages to follow the directions to make it work and was unable to. I gave up and went back to the tried-and-true tgz install. See Seafile docker dead on deployment

This is an awful decision to not provide a tarball for RHEL/CentOS systems. Please reconsider.


Hello, I’m unable to delete files in Seahub.
I get the message “access denied”.

Seahub.log
[WARNING] django.request:224 log_response Forbidden: /api/v2.1/repos/06b571ff-d8ea-41b3-8b85-3542f955aad2/file/

EDIT
Maybe an issue with the secure cookie flag on my webserver.

Hi, I’m new here, and I would like to second arjones85’s POV.

I spent ages trying to get the Docker image working on Rocky Linux 8.5, and eventually I succeeded.
I could not get the built-in Let’s Encrypt to work; I had to install without SSL and Let’s Encrypt, and put an nginx front end on with certbot. That all worked on the face of it, but then over a slow fibre broadband connection with a 10 Mbit/s upload, the Docker image would reset the connection after about 10 minutes or 1 GB of download.
I installed tc and slowed down my network interface to emulate the slow connection, and sure enough, even bypassing the nginx front end and going direct to the Docker image, it would still time out.
I applied all the recommended tuning to the built-in nginx to try to mitigate the timeouts.
So I discarded the, in my opinion now troublesome, Docker install and installed the tarball behind my existing nginx, and it’s now working fine. I spent days trying to fix the connection timeouts of the Docker image, to no avail, and the problems I was experiencing with it are replicated like a rash all over Google searches.
The tarball is reliable with an nginx front end for certbot; the Docker image is not.
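For reference, the “recommended tuning” for an nginx front end usually means raising the proxy timeouts and lifting the body-size cap; an illustrative fragment (the upstream address and values are assumptions, not the exact settings used above):

```nginx
location / {
    proxy_pass http://127.0.0.1:8000;   # assumed Seahub/fileserver upstream
    proxy_read_timeout  1200s;          # allow long transfers on slow links
    proxy_send_timeout  1200s;
    client_max_body_size 0;             # don't cap upload size
}
```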
Also, I’m an experienced Red Hat admin with an RHCE.
Please don’t abandon the RHEL tarball; lots of people are moving to Rocky and Alma, and it’s a great, stable basis to install a service on.
Regards, Peter


Please post this in a new thread. Don’t hijack the topic.

I think you are being unfair. The topic is about testing Seafile Community Edition, and Tom posted long before you in this thread - his was actually the third reaction here.


Before 9.0:

  • If you have a small deployment, you can increase the max_sync_file_count option to whatever you need. Then you can sync very large libraries.
  • If you run a large deployment, say with 1000 users, it’s suggested not to allow syncing very large libraries (more than half a million files). The fileserver implementation is thread-based; long requests can exhaust the threads, which makes the fileserver unresponsive to new requests.

9.0 with Go fileserver:

  • The same applies for small deployments.
  • For large deployments, syncing very large libraries will no longer make the fileserver unresponsive. But it can take more CPU power, as it calculates the file list for large libraries. With sufficient CPU power, you can allow syncing very large libraries.
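The option mentioned above lives in seafile.conf; a hedged sketch (the value is illustrative, adjust to your deployment):

```ini
# seafile.conf (illustrative value; see the Seafile manual for the default)
[fileserver]
max_sync_file_count = 500000
```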

Got the same. I ended up copying the third-party Django dir from 8.0.7 to 9.0.2, and it works now.

seafile-server-8.0.7/seahub/thirdpart/django → seafile-server-9.0.2/seahub/thirdpart/django

It seems the method was removed in Django 3, which ships with 9.0.2. I’m curious why we’re the only ones who encounter this; the error should be fairly widespread.

This is probably related to the Django captcha package?

python3-django-captcha 0.5.6-2

I guess it’s because we rely on the Debian version of the mentioned captcha package, while we should have installed a newer version with pip3 (see the manual).

python_2_unicode_compatible has to be imported from six in models.py now.
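A minimal sketch of the import change described above; the fallback shim and the example class are assumptions for illustration, not the actual models.py:

```python
# Django < 3.0 provided python_2_unicode_compatible in django.utils.encoding;
# Django 3.0 removed it, so it must now be imported from six instead.
try:
    from six import python_2_unicode_compatible
except ImportError:
    # Fallback shim (assumption): on Python 3 the decorator is effectively a
    # no-op, since __str__ already returns text.
    def python_2_unicode_compatible(cls):
        return cls

@python_2_unicode_compatible
class ExampleModel:
    """Hypothetical stand-in class showing the decorator usage."""
    def __init__(self, response):
        self.response = response

    def __str__(self):
        return self.response
```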

@ggogel I’ve been using your deployment, and it’s been working fantastically well for a few months now. Upgraded to 9.0.2 from 8.x.x without a hitch.

The biggest feature I’ve been looking forward to with the Go server is the on-the-fly zipping, which seems to work correctly on my end (although the archive downloads without any extension, so I have to rename it to “.zip”). But when I create a link to send to someone, hitting the “zip” button on the shared link brings up a generic “Error” popup. I’ve since switched back from the Go fileserver for now, which fixes it.
