SeaDoc does not always access Seahub through SEAHUB_SERVICE_URL

tl;dr: I solved the issues, but I’d suggest an improvement. It seems that SEAHUB_SERVICE_URL is only used to reach Seahub’s /api/ endpoint. For downloading and saving files, SeaDoc uses Seahub’s public HTTPS URL instead. Surprisingly, Seahub also uses its own public HTTPS endpoint rather than communicating directly with itself. So the public HTTPS endpoint needs to be directly reachable by both SeaDoc and Seahub, and that makes installing them within an isolated container network very tricky. See below for details. I think it would be much more efficient and reliable to have this communication happen through the internal container network endpoints.

— ORIGINAL POST —

Hi, I already wrote about this here, but let me start a new topic because it may not be the same issue, and my post might get lost in that thread.

The symptom of my problem is that, when trying to access an sdoc file, I get the dreaded “Load doc content error”.

I have a Seafile 12 installation (upgraded from 11) in a private podman network without internet access. The services are reachable from the internet via a proxy container that can access the exported container ports.

With Seafile 11 I didn’t have Seadoc installed so this is a clean Seadoc installation on top of the current version 12.

SEAHUB_SERVICE_URL is set to http://seafile, which is the correct container name of the Seafile container running within the private network.

On the Seafile side I have SEADOC_SERVER_URL=https://seafile.EXAMPLE.com/sdoc-server, which is the correct way to access SeaDoc from the internet and works in my browser. As I mentioned, this URL cannot work from within the private network, so the Seafile server cannot access it directly.
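For reference, this is roughly how the two variables end up on the containers in a setup like mine (a sketch; container names match my setup, adjust to yours):

# on the seafile container
-e SEADOC_SERVER_URL=https://seafile.EXAMPLE.com/sdoc-server

# on the seadoc container
-e SEAHUB_SERVICE_URL=http://seafile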

Now, my first question: does that URL need to be accessible by Seafile itself? It appears to me that it is just a redirect URL for the user’s browser.

Assuming it is not needed, trying to open an empty sdoc file results in the infamous “Load doc content error”.

The only thing I see in sdoc-server.log is:

[2025-04-22 19:58:05] [ERROR] document-controller.js[74] - Load test.sdoc(55d418e7-1ce7-4847-a0af-f90fa7e1006a) from https://seafile.example.com/seafhttp/files/6e914561-392e-49e0-b0f2-6f4b11bdc319/test.sdoc error

This makes me think that Seadoc tries to access seafile over the public domain instead of using the supplied SEAHUB_SERVICE_URL.

So I decided to try adding a proxy within the private network that would be visible as https://seafile.example.com and forward everything to the Seafile container. As a quick hack, I added an HTTPS proxy block to the nginx config of SeaDoc and gave that container an additional domain name, so that seafile.example.com is resolvable to it from within the private network.

This works, in the sense that the following indeed succeeds from within the container:

curl -v https://seafile.example.com/seafhttp/files/4f5128a7-6eec-49dd-9f4a-38df0b7360dc/test.sdoc

But then I still see exactly the same behavior when trying to open the sdoc file. I figured out that one issue must be the CA certificate. I think so because, using a simple listener program in Python, I can see that a connection is made, yet the proxy I set up shows no attempts in its access log (while it does when I use curl). So it appears that SeaDoc tries to download the file but doesn’t like what it connects to, and to me the most likely explanation is that it doesn’t use the CAs from the standard Debian location.
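(For the record, a quick stand-in for such a listener, using netcat instead of a Python script: run it in place of the proxy and watch whether a connection attempt arrives at all.)

nc -lv 443        # OpenBSD netcat; use "nc -l -p 443 -v" with the traditional variant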

On container startup I make sure that the Debian CA certificates are updated by putting the custom CA into the right place and calling update-ca-certificates via my_init. I know this works because curl works. But maybe SeaDoc reads certificates from somewhere else?

I believe that ideally SeaDoc should only access Seafile through the given SEAHUB_SERVICE_URL. It seems like a bug to me that one has to specify SEAHUB_SERVICE_URL, but it is then only used for some calls, not everything.

But in either case, could you help me understand how to pass the custom CA properly, and whether anything else has to be changed for this to work?

Update: I figured out that Node.js doesn’t use the system CAs automatically, so I had to set an additional environment variable, NODE_EXTRA_CA_CERTS=/etc/ssl/certs/ca-certificates.crt, and now the file opens.
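In my setup that is just one more flag on the SeaDoc container (a sketch):

-e NODE_EXTRA_CA_CERTS=/etc/ssl/certs/ca-certificates.crt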

I’ll check if saving and everything else works alright tomorrow. Too late now.

But I’ll leave the thread open, because I think it makes more sense for SeaDoc to always access Seafile over the provided SEAHUB_SERVICE_URL instead of going through the public proxy.

OK, I couldn’t stop myself from testing it. The second issue is saving the file back to Seafile. I see this in seahub.access.log:

- 10.10.10.34 [22/Apr/2025:21:15:33 +0000] "POST /api/v2.1/seadoc/upload-file/55d418e7-1ce7-4847-a0af-f90fa7e1006a/ HTTP/1.1" 500 285 "-" "axios/1.7.4" 0.768

And in sdoc-server.log:

","3":"O","4":"C","5":"T","6":"Y","7":"P","8":"E","9":" ","10":"h","11":"t","12"
:"m","13":"l","14":">","15":"\n","16":"<","17":"h","18":"t","19":"m","20":"l","2
1":" ","22":"l","23":"a","24":"n","25":"g","26":"=","27":"\"","28":"e","29":"n",
"30":"\"","31":">","32":"\n","33":"<","34":"h","35":"e","36":"a","37":"d","38":"
>","39":"\n","40":" ","41":" ","42":" ","43":" ","44":"<","45":"t","4..."283":">","284":"\n","status":500}

I can’t find any explanation for this. The services appear to see each other now. Any ideas what might be wrong?

Update: I decoded this JSON, and it turns out to be an HTML error page:

<!DOCTYPE html>
<html lang="en">
<head>
    <title>Page unavailable</title>
</head>
<body>
    <h1>Page unavailable</h1>

    <p>Sorry, but the requested page is unavailable due to a server hiccup.</p>

    <p>Our engineers have been notified, so check back later.</p>
</body>
</html>
500

Eventually I figured out that Seahub tries to make calls to the public https://seafile.example.com/seafhttp, so it also needed the custom CA configured. I managed that by copying the CA in the same way as for SeaDoc, but different environment variables were needed to make it effective:

REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt
SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt
DEFAULT_CA_BUNDLE_PATH=/etc/ssl/certs/ca-certificates.crt

I’ll update the tl;dr section of the post with the summary info.


Hi, seadolphin!
Thank you for not letting me despair :)
Please tell me in more detail how exactly I can copy the certificates, and to what location? And where do I enter the necessary settings? I have a Let’s Encrypt certificate for my domain name, obtained automatically when I registered it. How do I use this certificate?

@Juretsky, I think your issue is different from what I describe here. So here is the certificate part of what I do, but look at the bottom for what I think is your actual issue.

What I describe here is how to ensure seafile and seadoc can communicate with each other when using a custom CA certificate.

To install the custom CA, one has to mount the certificate inside both the Seafile and SeaDoc containers, like:

-v /path/to/custom/ca.crt:/usr/local/share/ca-certificates/custom_ca.crt

And then add the certificate to the container’s local trust store by adding an additional my_init script, like:

-v /path/to/seafile/02_updatecerts.sh:/etc/my_init.d/02_updatecerts.sh

The content of 02_updatecerts.sh should be:

#!/bin/bash
# refresh the Debian CA trust store so the mounted custom CA becomes trusted
update-ca-certificates

Finally, you need to set environment variables inside the containers so that the container-local trust store is actually used.

For Seafile I add these variables:

REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt
SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt
DEFAULT_CA_BUNDLE_PATH=/etc/ssl/certs/ca-certificates.crt

For seadoc I use the following variables:

NODE_EXTRA_CA_CERTS=/etc/ssl/certs/ca-certificates.crt
REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt

Both of these are needed for SeaDoc because the SeaDoc server is Node.js, while the SeaDoc converter is Python.
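Putting it together, the SeaDoc container invocation looks roughly like this in my setup (a sketch; the image name, network name, and host paths are placeholders, adjust to your installation):

podman run -d --name seadoc --network seafile-net \
  -v /path/to/custom/ca.crt:/usr/local/share/ca-certificates/custom_ca.crt \
  -v /path/to/seadoc/02_updatecerts.sh:/etc/my_init.d/02_updatecerts.sh \
  -e NODE_EXTRA_CA_CERTS=/etc/ssl/certs/ca-certificates.crt \
  -e REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt \
  seafileltd/sdoc-server:latest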

But your problem is different, as I said. You have a Let’s Encrypt certificate, which should already be trusted by default, so you don’t need any of the above.

What you need to do is to configure your reverse proxy to use your certificate. Instructions depend on the particular proxy you are using.

I would recommend upgrading to Seafile 12 and configuring Caddy to automatically manage the certificate with Let’s Encrypt, as in the official installation instructions.

I myself am using rpxy instead of caddy because I proxy other services as well. But it doesn’t really matter.

Seafile 11 is more complicated, with nginx inside the container and no separate external reverse proxy. I don’t know how to convert a non-SSL config to an SSL config in version 11 without reinstalling.


Thank you, valuable recommendations. I’m trying to figure out how to apply all this correctly. My situation is more complicated than I expected. My Seafile runs on a local machine, and external access to it goes through the CrazeDNS (KeenDNS) service, which automatically generates and installs an SSL certificate for the issued names. The name looks like mysite.keenetic.link.
Seafile works through a forwarded port and has the address seafile.mysite.keenetic.link. I can see in the browser that this address also uses an SSL certificate from CrazeDNS, but I don’t have the certificate itself. I contacted CrazeDNS without success: they replied that “the question does not concern the equipment produced by CrazeDNS”.
And now the question is:
How can I use the same certificate to get HTTPS working correctly on my system?
I mean Seafile, SeaDoc, as well as Portainer, Home Assistant and other services that I have running on separate subdomains (4 levels).
Right now Seafile gives an error when trying to open documents :(
@seadolphin, is your solution applicable in my case?

Yes, it should be applicable. In my case I run seafile inside a private podman network. This sounds like a similar issue to what you’ve got.

So for internal communication I have to use a custom CA and a custom proxy. I reused the nginx config of the SeaDoc container.

I’ll try to expand on this in the following days, but as a high-level overview, you need to:

  1. Generate a custom CA and a server certificate for your external DNS name (e.g. seafile.mysite.keenetic.link). See my previous post for how to enable the custom CA inside the containers.
  2. Make your custom internal reverse proxy accessible from the internal network at your public DNS name (e.g. seafile.mysite.keenetic.link). You can use split DNS, edit the hosts file, or whatever works with the system you’re using. With podman I could just add an additional name to my SeaDoc container (see the sketch after this list).
  3. Configure your internal reverse proxy to serve HTTPS, with / pointing at Seahub and /sdoc-server/converter/ pointing at the converter host:port.
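For step 2, with podman the extra name boils down to one additional flag when starting the container (a sketch; network and image names are placeholders, and alias support depends on your podman network backend):

podman run -d --name seadoc --network seafile-net \
  --network-alias seafile.mysite.keenetic.link \
  seafileltd/sdoc-server:latest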

For step 3, I just added the following site configuration file to the SeaDoc container (nginx-inner-seafile.conf):

server {
    listen 443 ssl;

    server_name _;
    ssl_certificate /shared/proxy_files/seafile.mydomain.com.crt;
    ssl_certificate_key /shared/proxy_files/seafile.mydomain.com.key;

    proxy_set_header X-Forwarded-For $remote_addr;

    # this is mostly a copy of "/" from seafile.nginx.conf
    location / {
        if ($request_method = 'OPTIONS') {
            add_header Access-Control-Allow-Origin *;
            add_header Access-Control-Allow-Methods GET,POST,PUT,DELETE,OPTIONS;
            add_header Access-Control-Allow-Headers "deviceType,token, authorization, content-type";
            return 204;
        }

        proxy_pass         http://seafile;
        proxy_redirect     off;
        proxy_set_header   Host              $host;
        proxy_set_header   X-Real-IP         $remote_addr;
        proxy_set_header   X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Host  $server_name;
        proxy_set_header   X-Forwarded-Proto $scheme;

        client_max_body_size 100m;

        access_log      /shared/logs/seafile-hack-server.access.log;
        error_log       /shared/logs/seafile-hack-server.error.log;
    }

    # copy from /etc/nginx/sites-enabled/nginx-sdoc.conf
    location /sdoc-server/converter/ {
        add_header Access-Control-Allow-Origin *;
        add_header Access-Control-Allow-Methods GET,POST,PUT,DELETE,OPTIONS;
        add_header Access-Control-Allow-Headers "deviceType,token, authorization, content-type";
        if ($request_method = 'OPTIONS') {
            add_header Access-Control-Allow-Origin *;
            add_header Access-Control-Allow-Methods GET,POST,PUT,DELETE,OPTIONS;
            add_header Access-Control-Allow-Headers "deviceType,token, authorization, content-type";
            return 204;
        }

        proxy_pass         http://127.0.0.1:8888/;
        proxy_redirect     off;
        proxy_set_header   Host              $host;
        proxy_set_header   X-Real-IP         $remote_addr;
        proxy_set_header   X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Host  $server_name;
        proxy_set_header   X-Forwarded-Proto $scheme;

        client_max_body_size 100m;

        # access_log      /shared/logs/nginx-converter.access.log sdoc-serverformat;
        # error_log       /shared/logs/nginx-converter.error.log;
    }
}

You have to mount the crt and key inside the container and point the config at the right paths, and mount nginx-inner-seafile.conf at /etc/nginx/sites-enabled/nginx-inner-seafile.conf inside the container.
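For example (the host-side paths are placeholders):

-v /path/to/seafile.mydomain.com.crt:/shared/proxy_files/seafile.mydomain.com.crt
-v /path/to/seafile.mydomain.com.key:/shared/proxy_files/seafile.mydomain.com.key
-v /path/to/nginx-inner-seafile.conf:/etc/nginx/sites-enabled/nginx-inner-seafile.conf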

You can of course use a separate container and a separate proxy by adjusting everything accordingly. This was just the easiest course of action I could find.

An interesting note: you don’t have to handle the /sdoc-server path itself, because it is only used by the client browser, which is outside the private network and can use the public reverse proxy.

HTH. I wanted to write a complete howto but haven’t found the time yet.

Now, for generating the certificates: they need to follow all the current rules, otherwise some things will still fail.

I don’t remember exactly how I generated the CA some time ago. Perhaps something like this (the key generation step is my best guess; adjust the key size as needed):

openssl genrsa -out ca.key 4096
openssl req -new -x509 -days 1826 -key ca.key -out ca.crt

Then your server certificate is more complicated. From my messy notes, it should be something like:

openssl req -new -newkey rsa:4096 -days 3650 -noenc -subj "/C=US/ST=Denial/L=Springfield/O=Dis/CN=localhost" -addext extendedKeyUsage=serverAuth -addext "subjectAltName = DNS:foo.example.com, DNS:another.example.com" -keyout certs/service.key -out certs/service.csr
openssl x509 -req -in certs/service.csr -text -days 3650 -CA ca.crt -CAkey ca.key -CAcreateserial -copy_extensions copy -out certs/service.crt

What is important is to have subjectAltName in the final server certificate, otherwise some clients will reject it as not matching the domain name.
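To double-check that the SAN actually made it into the final certificate, you can inspect it like this:

openssl x509 -in certs/service.crt -noout -text | grep -A1 "Subject Alternative Name"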

Actually, this may be everything. Let me know if something is not clear. I still want to find time to write things down from top to bottom, as currently it’s a lot of information in a somewhat mixed order.

If you use a Let’s Encrypt certificate for your public domain, then you can save yourself from adding the custom CA to the containers, so you’ll only have to set up your internal reverse proxy.

Even easier: if you can allow the containers (Seafile and SeaDoc) in your internal network to access themselves through the external proxy, then just do that, i.e. internal network container → seafile.mysite.keenetic.link → external proxy → internal network container.
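With podman that can be as simple as pointing the public name at the external proxy’s IP from inside the containers (the IP here is a placeholder):

--add-host seafile.mysite.keenetic.link:203.0.113.10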

Although I do see value in having the containers unable to access the internet and reachable only through the reverse proxy.