I had a fully working Seafile 11 Docker setup and wanted to upgrade to version 12. I removed seadoc from the COMPOSE_FILE= line in the .env file and copied all the other configuration variables from the old docker-compose.yml into the new .env file. After running sudo docker compose up it upgraded everything and reports that it is running:
seafile | Starting seahub at port 8000 …
seafile |
seafile | Seahub is started
seafile |
seafile | Done.
seafile |
However, when I try to connect I get a connection closed error in Chrome, Edge, and the clients on both my phone and PC. I can see, however, that traffic on ports 80 and 443 is being passed from my proxy to the virtual server. I have SSL enabled (protocol set to https).
I'm not sure what went wrong, and the logs don't give me any clues. They aren't even being updated: the last entries in logs/seahub.access.log, for example, are from before the upgrade.
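For reference, the COMPOSE_FILE line in my .env now reads roughly like this after dropping seadoc (the exact yml file names are from memory, so double-check them against your own .env):

COMPOSE_FILE='seafile-server.yml,caddy.yml'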
EDIT:
One more thing I noticed in the logs:
seafile-mysql | 2025-03-01 17:08:26 7 [Warning] Aborted connection 7 to db: 'ccnet_db' user: 'seafile' host: '172.18.0.5' (Got an error reading communication packets)
seafile-mysql | 2025-03-01 17:08:26 6 [Warning] Aborted connection 6 to db: 'seafile_db' user: 'seafile' host: '172.18.0.5' (Got an error reading communication packets)
seafile-mysql | 2025-03-01 17:08:26 13 [Warning] Aborted connection 13 to db: 'seahub_db' user: 'seafile' host: '172.18.0.5' (Got an error reading communication packets)
seafile-mysql | 2025-03-01 17:08:26 14 [Warning] Aborted connection 14 to db: 'seahub_db' user: 'seafile' host: '172.18.0.5' (Got an error reading communication packets)
seafile-mysql | 2025-03-01 17:08:26 11 [Warning] Aborted connection 11 to db: 'seahub_db' user: 'seafile' host: '172.18.0.5' (Got an error reading communication packets)
seafile-mysql | 2025-03-01 17:08:26 10 [Warning] Aborted connection 10 to db: 'seafile_db' user: 'seafile' host: '172.18.0.5' (Got an error reading communication packets)
EDIT2:
When I disable SSL (protocol set to http) I can get to the login page. However, when I try to sign in I get:
Forbidden (403)
CSRF verification failed. Request aborted.
More information is available with DEBUG=True.
For the database error, you might want to double-check your SQL server username and password. I'm not sure what else might be happening there. Maybe try connecting to the database from the command line on the host and from the command line inside the container.
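Something like this, assuming the database service in your compose file is called db (the container is named seafile-mysql, but the compose service name may differ on your install):

# open a MySQL client session inside the database container
# (the client binary may be called mariadb instead of mysql depending on the image)
sudo docker compose exec db mysql -u seafile -p seafile_db

# or get a shell in the Seafile container and connect from there,
# if a mysql client is installed in it
sudo docker compose exec seafile bash
mysql -h db -u seafile -p seafile_db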
The CSRF error is a known issue caused by the stricter CSRF protection in the newer Django version that Seafile depends on. Here's the doc:
https://manual.seafile.com/12.0/upgrade/upgrade_notes_for_11.0.x/#django-csrf-protection-issue
With SSL disabled you might need to add the address you are connecting with to the list of trusted origins, for example:
CSRF_TRUSTED_ORIGINS = ['https://{{ public_address }}','http://{{ public_address }}','http://192.168.1.2']
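If I remember correctly, that setting goes into seahub_settings.py in your Seafile config directory (the exact path depends on your volume layout), and Seahub needs a restart afterwards. A rough sketch, with a made-up domain:

# conf/seahub_settings.py
CSRF_TRUSTED_ORIGINS = ['https://seafile.example.com', 'http://192.168.1.2']

# then restart so Seahub picks up the change (service name assumed to be seafile)
sudo docker compose restart seafile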
I managed to get the "system" back up and running. My setup: a load balancer (HAProxy) sits in front of the virtual machine that runs the Seafile server. HAProxy is set to TCP mode, and nginx on the VM is configured as a reverse proxy so that the SSL certificates are passed/validated properly.
Seafile version 12, however, no longer handles the certificates and HTTPS with nginx; instead, Caddy now sits in front of everything.
The problem I'm now facing is how to configure Caddy in exactly the same way I had the reverse proxy working in nginx. I followed the guide for deploying without Caddy, and now everything runs through nginx again just as I had it set up with version 11. However, I believe my certificates are no longer renewed automatically.
So my question is: can I still use the "built-in" Caddy while it sits behind my HAProxy load balancer (which is not actually used for load balancing the Seafile server), and still use HTTPS (SSL)?
About the database errors: I figured out that I can log in as the root user with the password I set in the .env file. So I changed the DB user to root instead of the default seafile, and now those errors are gone. I'm wondering, though: if it was never able to log in, how did the web interface ever work? Or is that DB login only used for other things, like the notification server?
EDIT:
It turns out I don't (and shouldn't) need to use root as the user. The password I had in the .env file was the MySQL root password. Looking in the seafile.conf file, however, I found the actual password that was created for the seafile user on the first deployment. With the correct user, seafile, the notification server now works. Is the "webserver" using the username and password stored in seafile.conf rather than the .env environment variables?
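For reference, the two places I'm comparing look roughly like this; the values are placeholders and the exact .env variable names are from memory, so treat them as assumptions:

# .env (what I had been editing)
SEAFILE_MYSQL_DB_USER=seafile
SEAFILE_MYSQL_DB_PASSWORD=<the seafile user's password>
INIT_SEAFILE_MYSQL_ROOT_PASSWORD=<the MySQL root password I had mistakenly reused>

# conf/seafile.conf (where the working password turned up)
[database]
type = mysql
host = db
port = 3306
user = seafile
password = <password generated on first deployment>
db_name = seafile_db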
I think it might be possible to have HAProxy forward to Caddy and have it work, but I don't know enough about either of them to be sure. I do think there's a better way, though. Having that extra reverse proxy in the chain will make troubleshooting that much harder, since there's one more place a problem can come from that you need to investigate, one more set of logs, and so on.
My setup before the upgrade used nginx as the reverse proxy, and I found that I can remove Caddy and bypass the nginx that is inside the container, so that my old nginx config needed almost no changes to keep working. So basically I'm suggesting that you adjust the Docker config to get back to the point where the proxy config you already know works can be used again.
The change I had to make was in seafile-server.yml, publishing the seafile service's ports directly:
ports:
  - "8000:8000"
  - "8082:8082"
If you use the notification server or other containers (seadoc, for example) you will need to make similar changes to them so those components can be reached from outside of Docker.
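For example, a notification server entry would get the same treatment; the service name and port below are from memory, so check them against your notification-server.yml:

notification-server:
  ports:
    - "8083:8083"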
I think the login screen can still show up with the database not working, but logins definitely wouldn't work: Seahub needs to check the submitted username and password against the password hash stored in the database.
Thank you for the reply. However, I'm using the nginx from inside the container, not my own. I have HAProxy on a server that ports 80 and 443 of my DNS records point to, and I use it to route based on the requested (sub)domain and forward to the appropriate VM.
I have HAProxy set to mode tcp and use send-proxy on the backend. On the VM running the Seafile server I then modified the nginx config to add proxy_protocol and set set_real_ip_from to the IP of my HAProxy server. This lets HAProxy pass the correct client IP along so the SSL handshake works properly.
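Roughly, the nginx side looks like this (trimmed down; 192.168.10.101 is my HAProxy):

server {
    listen 443 ssl proxy_protocol;
    set_real_ip_from 192.168.10.101;
    real_ip_header proxy_protocol;
    ...
}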
This all seems to work fine, but I'm wondering if I can set the same thing up with Caddy, meaning the proxy_protocol part. Also, my notification server does not seem to be working when I try to reach /notification/ping.
I did find this: Modules - Caddy Documentation
But I cannot seem to find a way to enable it. I tried adding this to the seafile-server.yml file:
labels:
caddy.servers.trusted_proxies: "static 192.168.10.101"
caddy_0: ${SEAFILE_SERVER_PROTOCOL:-http}://${SEAFILE_SERVER_HOSTNAME:?Variable is not set or empty}
caddy_0.reverse_proxy: "{{upstreams 80}} http {proxy_protocol v1}"
But I still get the connection_closed_err in every browser.
Managed to fix it. I had been applying the proxy protocol to the connection going out to the backend (the nginx inside the Docker container). What I had to do instead was enable the proxy protocol globally, as a listener wrapper on the incoming connection from HAProxy:
labels:
caddy.servers.listener_wrappers.proxy_protocol.allow: "192.168.10.101/24"
caddy.servers.listener_wrappers.tls:
caddy_0: ${SEAFILE_SERVER_PROTOCOL:-http}://${SEAFILE_SERVER_HOSTNAME:?Variable is not set or empty}
caddy_0.reverse_proxy: "{{upstreams 80}}"
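For completeness, the HAProxy side stays in TCP mode with send-proxy on the backend, roughly like this (the Seafile VM's IP is a placeholder):

backend seafile
    mode tcp
    server seafile-vm 192.168.10.50:443 send-proxy check

With that in place Caddy sees the original client information from the PROXY protocol header instead of the HAProxy address.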