Hi,
I’m new to Seafile and really like the design, the fast interface, and the functionality.
But I’m a bit overwhelmed by Caddy:
I’ve seen that the Caddy image “lucaslorentz/caddy-docker-proxy” does not support Let’s Encrypt Cloudflare DNS verification by default (neither does the official Caddy image).
I don’t want to open the ports required for the LE HTTP challenge, for security reasons and because of the problematic handling with split DNS in my setup.
I built the lucaslorentz Caddy image with the official Cloudflare DNS module, but to be honest, I don’t know what to do next.
I could mount the Caddyfile and edit it to do Cloudflare DNS verification, but on the other hand this Caddy image is configured through labels in the server.yml, seadoc.yml and so on…
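For reference, my understanding from the caddy-docker-proxy README is that the label route would look roughly like the fragment below. This is untested on my side: the domain is a placeholder, CLOUDFLARE_API_TOKEN is just an example variable name, and it assumes the image was rebuilt with the Cloudflare DNS module.

```yaml
# Hypothetical compose fragment (untested) - assumes lucaslorentz/caddy-docker-proxy
# was rebuilt with the github.com/caddy-dns/cloudflare module.
services:
  seafile:
    # ...rest of the Seafile service unchanged...
    labels:
      caddy: seafile.example.com                 # placeholder domain
      caddy.reverse_proxy: "{{upstreams 80}}"
      # Solve the ACME challenge via Cloudflare DNS instead of the HTTP challenge:
      caddy.tls.dns: cloudflare {env.CLOUDFLARE_API_TOKEN}

  caddy:
    environment:
      # The token has to live in the Caddy container, since Caddy expands {env.*}
      - CLOUDFLARE_API_TOKEN=${CLOUDFLARE_API_TOKEN}
```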
Maybe someone can help me find the right solution. The goal is to stick as closely as possible to the standard Seafile deployment with Caddy and Let’s Encrypt DNS verification.
I also tried nginx instead of Caddy, as described in the documentation, and it worked, but I had problems with the SeaDoc socket.
A document opened and I was able to edit it, but I couldn’t save: 403 permission denied in the nginx log. I couldn’t find any permission issues on my side, and I couldn’t find any similar problem here in the forum…
Hi,
I struggled with this too, switched away, and might circle back if it gets simpler.
Seafile 12 switching to this derived Caddy container instead of nginx makes things more complicated and restrictive rather than easier (imo).
If doing the LE challenge really isn’t an option due to security concerns anyway, it sounds like your next best option (if staying with Docker) is to use nginx or another reverse proxy out front. I am doing the same thing (although I am using Apache Traffic Server), but I am still using Caddy in my setup. Trying to do an end-run around Caddy or remove it from the workflow is probably more trouble than it’s worth.
Changes I’ve made to my configuration (sketched in the fragments after this list):
.env:
I do not have an entry for SEAFILE_SERVER_PROTOCOL.
SEAFILE_SERVER_HOSTNAME is my external FQDN.
caddy.yml:
Only specifies port 80.
seafile-server.yml:
Under the environment section, in the SEADOC_SERVER_URL entry, SEAFILE_SERVER_PROTOCOL defaults to https.
Under the labels section, in the caddy key, SEAFILE_SERVER_PROTOCOL defaults to http.
seadoc.yml:
(same two items apply as in seafile-server.yml)
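To make that concrete, the relevant fragments look roughly like this in my files (the /sdoc-server path and the variable expansions are from my copies of the yml files and may differ slightly in yours):

```yaml
# caddy.yml (fragment) - only plain HTTP is published; TLS terminates at the
# reverse proxy in front of Caddy
services:
  caddy:
    ports:
      - "80:80"    # no 443 mapping
---
# seafile-server.yml (fragment) - note the two different protocol defaults
services:
  seafile:
    environment:
      # External URL handed to clients for SeaDoc, so the fallback is https
      - SEADOC_SERVER_URL=${SEAFILE_SERVER_PROTOCOL:-https}://${SEAFILE_SERVER_HOSTNAME}/sdoc-server
    labels:
      # The hop from Caddy to the container stays plain http
      caddy: ${SEAFILE_SERVER_PROTOCOL:-http}://${SEAFILE_SERVER_HOSTNAME}
```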
For your permission issues, I’m not quite sure. My main Seafile directory is owned by the root user & group with all directories and files using 755 permissions. My Seafile processes are also all owned by root.
So we can start there if you want: perhaps double-check the above configuration files, make changes if needed, and see if it nets you any improvement?
Bumping this because I’m now in the same situation: Seafile 12 CE in a single-node deployment, including SeaDoc. One thing I ran into while trying to shoehorn in another reverse proxy (NPM in my case) was that the containers wouldn’t talk to each other. For instance, I could create a brand new .sdoc, but I couldn’t open it. I could see the call from Seahub to sdoc-server fail in the dev tools in every browser I tried. Same with trying to convert Markdown or Docx to .sdoc.
Currently everything works, but only over http://seafile.mydomain.com. I can’t get Caddy to use a DNS challenge the way NPM can. I can, however, edit the yamls to use a locally stored copy of the already existing and valid Let’s Encrypt cert I have for “*.mydomain.com” (see the sketch below). That works, but there’s no automated renewal with that setup. I thought about writing a script to copy the certificate files out of NPM to the location Seafile looks for them whenever NPM renews the cert for my other apps that use the same domain.
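In case it helps someone else, the manual-cert variant looks roughly like this. It’s only a sketch: the host path is a placeholder and the label names are from my reading of the caddy-docker-proxy docs, so check them against your version.

```yaml
# caddy.yml (fragment) - mount the existing wildcard cert into the Caddy container
services:
  caddy:
    volumes:
      - /path/to/certs:/certs:ro   # placeholder host path holding fullchain/privkey
---
# seafile-server.yml (fragment) - point the site's tls directive at those files
services:
  seafile:
    labels:
      caddy: seafile.mydomain.com
      caddy.reverse_proxy: "{{upstreams 80}}"
      caddy.tls: "/certs/fullchain.pem /certs/privkey.pem"
```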
It’s really frustrating as a beginner too, because as I’m trying to figure it out I see that there is nginx AND Caddy, and the documentation isn’t clear on why. I think I understand that nginx was used before v12 and now they’re on Caddy, but I could be wrong. Plus, reverse proxies break my brain a little bit, which is why I use NPM instead of plain nginx. I find the GUI very helpful for visualizing and managing the existing services.
I have tried the same approach with Apache, so far without success. It would be great if you could share the virtual host file you used for Seafile/Caddy.
Just to give an update: in the interim, I’ve bypassed Caddy completely for just plain nginx out front with certs, and everything works great. I’d be more than happy to share that config with you but you are using Apache so it may have limited value for you.
@Father_Redbeard I think you’re right as to nginx vs Caddy, but regarding your issue with the sdoc-server call failure, was it related to CSRF? If so, you may want to check your seahub_settings.py.
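For anyone who does hit the CSRF variant of this, the fix is usually a couple of lines in seahub_settings.py along these lines (the hostname is a placeholder for your own service URL):

```python
# seahub_settings.py (fragment) - trust the external origin so CSRF-protected
# POSTs coming through the reverse proxy aren't rejected
SERVICE_URL = "https://seafile.example.com"
CSRF_TRUSTED_ORIGINS = ["https://seafile.example.com"]
```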
Not in this case. I did have that issue initially when setting up the single-node deployment vs. the v12 template available in Unraid, but thankfully I was able to get it figured out by making the change you mention.