Clarification of documentation "other reverse proxy"

The document mentioned describes how to proceed without the Caddy proxy. It says which changes need to be made to the nginx.conf file. At least under Debian, this file is normally left untouched; site configuration instead goes into a file under /etc/nginx/sites-available.

It is also not clear to me at this point whether they mean the NGINX inside the Seafile container or a separate NGINX installation. The document simply says NGINX, but you could just as well use Apache, Traefik or FortiWeb as the reverse proxy.

So, I assume that you don’t have to change the configuration file for the container (which would be /opt/seafile-data/nginx/conf/seafile.nginx.conf).

Am I right or am I wrong?

You are right. Your reverse proxy can be anything, they used nginx for their example. You don’t change the nginx inside the container at all.

Thanks, then I assume that the entries for the notification server are also superfluous, because this service is provided via its own container.

Although the whole thing is still not entirely clear to me: I configure my own reverse proxy, and NGINX runs as a reverse proxy inside the Seafile container as well. But I have tested it, and that is the only way I can access Seafile.
In version 11 I could still access Seafile “directly” from my reverse proxy via ports 8000, 8082 and 8083, with no need to use the NGINX inside the Seafile container. Strange.

I find it confusing and annoying that there is an nginx inside the container. It’s not just a waste of resources but makes everything that much more complicated than it needs to be (one more place to need to wireshark when troubleshooting, one more log file to have to watch, etc).

I modified the container’s config to “open” the ports directly to the seafile parts (seahub, fileserver, etc), bypassing the nginx inside the container. That nginx is still running but at least it isn’t doing anything. That let me keep using the nginx config I already had working from an older version with very few changes.

The change was in the seafile-server.yml:

    ports:
      - "8000:8000"
      - "8082:8082"

I do have an additional section in my nginx config for the notification server, but I don’t know if I needed it before I decided to bypass the superfluous nginx.
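For anyone wanting to replicate this bypass, here is a rough sketch of what the matching outboard nginx site config might look like. This is based on the classic pre-Caddy Seafile nginx setup, not on anything posted in this thread; the server name, certificate lines and the `/seafhttp` rewrite are assumptions you should check against your own install:

    server {
        listen 443 ssl;
        server_name seafile.example.com;
        # ssl_certificate / ssl_certificate_key lines go here

        # Seahub (web UI), published by the container on host port 8000
        location / {
            proxy_pass http://127.0.0.1:8000;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }

        # seaf-server (file upload/download), published on host port 8082
        location /seafhttp {
            rewrite ^/seafhttp(.*)$ $1 break;
            proxy_pass http://127.0.0.1:8082;
            client_max_body_size 0;
        }
    }

As the next post notes, `127.0.0.1` may not work depending on how the container binds its ports; you may need the docker0 bridge address instead.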

Yes, it works for me too, but I cannot proxy to 127.0.0.1. I have to use the IP address of the docker0 interface, and then bind the gunicorn web server to 0.0.0.0.
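A sketch of that variant, assuming the usual default docker0 address of 172.17.0.1 (check yours with `ip addr show docker0`), and assuming gunicorn's bind address in the container's `conf/gunicorn.conf.py` has been changed from 127.0.0.1 to 0.0.0.0:

    # Proxy to the docker0 bridge IP instead of localhost;
    # adjust the address to whatever your host actually uses.
    location / {
        proxy_pass http://172.17.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }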

TL;DR: Can I let my subdomain for Seafile punch through my outboard reverse proxy so that Caddy can take over all the duties and I don’t have to do so much custom configuration?

More detail:
I’ve had trouble with my own NPM reverse proxy server in V11. I was hopeful that the V12 changes might allow me to configure a very simple NPM config to pass traffic through to Seafile and Caddy which could internally manage the minutiae. That doesn’t appear to be the case.

Is there a way to make my NPM proxy server pass traffic to my subdomain, seafile.example.com, through to the VM host for Caddy to manage as though it were the only reverse proxy involved?


I’m not an expert, but as I understand it, you either use Caddy or your own reverse proxy. I use Nginx, so I turn off Caddy completely.

I have switched my reverse proxy to Traefik, which is something I’ve been planning for a while as my self-hosting setup becomes more complex. I’ve had better luck, but I’m still working through a SeaDoc issue. I think I’ve got a solution, but all this makes me wonder:

Is there any way to use my external Traefik reverse proxy to send all traffic for the Seafile subdomain straight to Caddy on port 80, and let it route traffic through all the containers and terminate SSL on its own? I don’t understand exactly how reverse proxies work under the hood, so maybe there is an obvious reason this can’t be done, but on the surface it seems silly to strip Caddy completely out of Seafile PE v12 and recreate everything it does in Traefik. I’d much prefer to leave Caddy in place, while letting Traefik continue to do its job of forwarding Seafile (and other services’) traffic to its particular IP on the LAN.

It is in theory possible, but it’s not recommended. You can get a lot of problems that can be quite a pain to troubleshoot and fix. For example, you need to make sure that the first proxy sets the “X-Forwarded-For” header (because it is the only one that knows the IP of the client machine connecting to it), and that the second one passes that header along without changing it. And the same goes for a number of other headers.
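In nginx terms, the header handling for such a chain might look roughly like this (hypothetical hostnames; other proxies have equivalent directives for the same headers):

    # First proxy (the edge, which actually knows the client's IP):
    location / {
        proxy_pass http://second-proxy:80;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # Second proxy: pass along the header it received, unchanged,
    # instead of overwriting it with the first proxy's own IP.
    location / {
        proxy_pass http://seafile-backend:80;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $http_x_forwarded_for;
    }

If the second proxy instead used `$proxy_add_x_forwarded_for`, it would append the first proxy's IP to the header, which is harmless in itself but means the backend must know how many trusted proxies to strip off the end.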

And every time something isn’t working, you have another set of logs to review. Generally it’s best to avoid any unnecessary complication.