I have spent too many hours trying to use a custom port. Seafile works, but then sdoc doesn't. I'm using Ubuntu with Docker. If I don't change anything, I can use http://192.168.1.17 just fine, including sdoc, but I need it to work on http://192.168.1.17:8002 or another port. I have followed https://manual.seafile.com/12.0/setup/setup_ce_by_docker/#system-requirements many times with many changes, to no avail, and I'm on the brink of giving up. I added this to my .env:
I have a guess about what’s going on here. I think you aren’t using a reverse proxy, based on your config pointing to port 8002 in the SEAFILE_SERVER_HOSTNAME and also using 8002 for the seafile container here:
I am pretty sure that seadoc won't work that way. It is built with the assumption that you would have a reverse proxy in front that directs connections for /sdoc-server to the seadoc container's port, and all other paths to the seafile container's port. Without a reverse proxy doing that job, the connections to http://192.168.1.17:8002/sdoc-server go to the wrong container.
You might be able to just change SEADOC_SERVER_URL to point to port 8003 and put that port in that container's config, or something similar, but I don't actually know if that would work. So I think the more certain option is to add a reverse proxy. Probably not Caddy, since I think it will always try to obtain a certificate automatically.
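If you go the nginx route, a minimal sketch of the split might look like this. The container names (`seafile`, `seadoc`), internal port 80, and the websocket headers are assumptions based on a typical compose setup, not someone's verified config; adjust them to match your own seafile-server.yml:

```nginx
server {
    listen 8002;
    server_name 192.168.1.17;

    # seadoc traffic goes to the seadoc container
    location /sdoc-server/ {
        proxy_pass http://seadoc:80/;
        proxy_http_version 1.1;
        # seadoc uses websockets for collaborative editing
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }

    # everything else goes to the seafile container
    location / {
        proxy_pass http://seafile:80;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

The point is only that something must route /sdoc-server to a different backend than the rest of the site; the exact upstream names depend on your compose network.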
I also went through a bit of a struggle getting Seafile to work in Ubuntu Docker with a custom port. I eventually got there with a combination of an nginx reverse proxy plus a browser-side JavaScript snippet to capture and fix malformed file transfer URLs, which (as of a few months ago when I did this) seems to be a Seafile bug. Happy to share more details if helpful.
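The core of a browser-side fix like the one described could be a function that re-inserts the missing port into transfer URLs. This is a hypothetical sketch (the host, port, and function name are made-up examples, not the poster's actual script); it could be wired into a userscript that wraps `window.fetch` or XHR:

```javascript
// Example values -- replace with your own server host and custom port.
const SERVER_HOST = "192.168.1.17";
const CUSTOM_PORT = "8002";

// If Seafile emitted a URL for our host without the custom port,
// put the port back; leave all other URLs untouched.
function fixTransferUrl(url) {
  const u = new URL(url);
  if (u.hostname === SERVER_HOST && u.port === "") {
    u.port = CUSTOM_PORT;
  }
  return u.toString();
}
```

A userscript would then call this on each outgoing file-transfer request URL before the browser sends it.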
After much pain, sweat, and tears, and many days wasted, I finally got it to work: custom port, remote outside access, sdoc, Collabora, all working. I will post a link to a GitLab repo with the exact Docker env and instructions in the coming days. Woohoo.
I was trying to do essentially the same thing (Docker 12.0, HTTPS on a custom port), but on Raspberry Pi OS. I couldn't forward port 443 on my router at all, so instead I forwarded the external port 4430 to raspberrypi:443. I managed to get the web version running on HTTPS, but only the thumbnails and metadata worked; I couldn't download existing files or upload new ones. Devtools revealed that the client is trying to get the files without using the provided port (image), which obviously fails. Is that the "malformed file transfer URLs" bug you mentioned, mqmoore?
The above was the result I got after I added :4430 in seahub_settings.py (FILE_SERVER_ROOT and SERVICE_URL). I left the hostname in .env unchanged, though, because for some reason adding the port to SEAFILE_SERVER_HOSTNAME results in it not being accessible at all. It just throws a PR_CONNECT_RESET_ERROR in the web browser without providing any more info, and the seafile and seafile-caddy docker logs don't show any errors either (and this happens on a connection through a VPN or from a different network, so it's not a LAN issue).
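For reference, the seahub_settings.py change described above would look roughly like this. The host and port are the values from this post; whether the scheme and path suffix below match your deployment is an assumption you should verify against your own install:

```python
# seahub_settings.py (excerpt) -- include the externally forwarded port
SERVICE_URL = "https://192.168.1.17:4430"
FILE_SERVER_ROOT = "https://192.168.1.17:4430/seafhttp"
```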
Do you know if this can somehow be fixed? It's already driving me crazy, so if it requires modifying the client's code, it would be easier to just switch to different software. Thanks in advance for any help.
I created a topic where I explained how I understand the issue at hand and what could be done to "solve" it.
Essentially, you need to change SEAFILE_SERVER_HOSTNAME to include the port number, and it can't be the port number defined in caddy.yml; rather, you have to specify direct port forwardings in seafile-server.yml:
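A sketch of what that combination might look like (the port values and the internal container port are illustrative assumptions, not the poster's actual files):

```yaml
# .env (excerpt) -- hostname now carries the custom port
# SEAFILE_SERVER_HOSTNAME=192.168.1.17:8002

# seafile-server.yml (excerpt) -- publish the port on the seafile
# service directly, rather than relying on the port in caddy.yml
services:
  seafile:
    ports:
      - "8002:80"
```

With a direct mapping like this, the host port bypasses the bundled caddy container entirely, which is why the caddy.yml port must not be reused.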