How to forward from an NGINX server to a Seafile server?

Hi,
I’m thinking of putting up an NGINX server that will direct everything, since I need to serve several other sites over ports 80 and 443.
I have some subdomains that I’ll use, one for every service that I’ve got.
Everything is on different VMs.
So as I understand it, I just forward things from the NGINX server to the Seafile server.
And the configuration file should look something like this, just replace the 127.0.0.1 with the correct IP of the Seafile server.

[…]
location / {
    proxy_pass http://192.168.234.22:8000;
    […]
[…]
location /seafhttp {
    rewrite ^/seafhttp(.*)$ $1 break;
    proxy_pass http://192.168.234.22:8082;
    […]
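For reference, here is a minimal complete sketch of what I mean (cloud.example.com is just a placeholder, and I’m assuming Seahub listens on port 8000 and the Seafile file server on port 8082 on the Seafile machine, as in the snippet above):

server {
    listen 80;
    server_name cloud.example.com;

    # Seahub web interface
    location / {
        proxy_pass http://192.168.234.22:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # Seafile file server (uploads/downloads)
    location /seafhttp {
        rewrite ^/seafhttp(.*)$ $1 break;
        proxy_pass http://192.168.234.22:8082;
        client_max_body_size 0;
    }
}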

Should I use SSL (Let’s Encrypt) on the NGINX server or on the Seafile server?
My guess is that I should use it on the NGINX server only, is that correct?

Should I uninstall NGINX on the Seafile server?
My guess is that I should uninstall NGINX on the Seafile server, is that correct?

BUT
Here is one question that I can’t figure out: on the NGINX server, how should I write the root line?
It looks like this now.

[…]
location /media {
    root /opt/nohatech/seafile-server-latest/seahub;
    […]
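One alternative I could imagine (only a sketch, and it assumes the Seafile host’s own web server already serves /media from its local seahub directory) would be to proxy that location from the front-end box instead of keeping a local copy of the files:

location /media {
    # let the Seafile host deliver the static seahub files itself
    proxy_pass http://192.168.234.22;
    proxy_set_header Host $host;
}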

@DerDanilo You’re using NGINX, do you know how to solve this? :slight_smile:

server {
listen 80;
server_name cloud.domain.com;

    location / {
    rewrite ^ https://cloud.domain.com$uri permanent; 
    }

}

server {
listen 443 ssl http2;
server_name cloud.domain.com;

    add_header Strict-Transport-Security "max-age=31530000; includeSubDomains";

    add_header X-Frame-Options SAMEORIGIN;

    access_log             /var/log/nginx/access.log;
    error_log               /var/log/nginx/error.log;

    ssl_certificate         /etc/nginx/ssl/cloud.pem;
    ssl_certificate_key     /etc/nginx/ssl/cloud.key;
    ssl_dhparam             /etc/nginx/ssl/dhparam.pem;

    resolver 10.10.10.1;
    ssl_stapling on;

    ssl_session_timeout 24h;
    ssl_session_cache shared:SSL:2m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers kEECDH+AES128:kEECDH:kEDH:-3DES:kRSA+AES128:kEDH+3DES:DES-CBC3-SHA:!RC4:!aNULL:!eNULL:!MD5:!EXPORT:!LOW:!SEED:!CAMELLIA:!IDEA:!PSK:!SRP:!SSLv2;
    ssl_prefer_server_ciphers on;
    add_header Strict-Transport-Security "max-age=31536000;";
    add_header Content-Security-Policy-Report-Only "default-src https:; script-src https: 'unsafe-eval' 'unsafe-inline'; style-src https: 'unsafe-inline'; img-src https: data:; font-src https: data:; report-uri /csp-report";


    location /.well-known {

            root /var/www/;
    }

    location / {
            proxy_pass http://10.10.10.100:8000;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
            proxy_read_timeout 1200s;

            # used for view/edit office file via Office Online Server
            client_max_body_size 0;

    }

    location /media {
            root /mnt/cloud_seahub;
    }

    location /seafdav {

            fastcgi_pass                            10.10.10.100:1031;
            fastcgi_param   SCRIPT_FILENAME         $document_root$fastcgi_script_name;
            fastcgi_param   PATH_INFO               $fastcgi_script_name;

            fastcgi_param   SERVER_PROTOCOL         $server_protocol;
            fastcgi_param   QUERY_STRING            $query_string;
            fastcgi_param   REQUEST_METHOD          $request_method;
            fastcgi_param   CONTENT_TYPE            $content_type;
            fastcgi_param   CONTENT_LENGTH          $content_length;
            fastcgi_param   SERVER_ADDR             $server_addr;
            fastcgi_param   SERVER_PORT             $server_port;
            fastcgi_param   SERVER_NAME             $server_name;
            fastcgi_param   HTTPS                   on;
            fastcgi_param   HTTP_SCHEME             https;

            client_max_body_size                    0;

            fastcgi_read_timeout                    86400s;
            fastcgi_send_timeout                    86400s;
            fastcgi_connect_timeout                 86400s;
            fastcgi_request_buffering               off;

            # This option is only available for Nginx >= 1.8.0. See more details below.

            fastcgi_ignore_client_abort             on;

            create_full_put_path                    on;
            dav_methods                             PUT DELETE MKCOL COPY MOVE;
            dav_access                              user:rw group:rw all:rw;

    }

    location /seafhttp {

            rewrite ^/seafhttp(.*)$                 $1 break;
            proxy_pass                              http://10.10.10.100:1030;
            proxy_connect_timeout                   86400s;
            proxy_read_timeout                      86400s;
            proxy_send_timeout                      86400s;
            send_timeout                            86400s;

            client_max_body_size                    0;        

            proxy_request_buffering                 off;
    }

}

What? You need to explain this. First of all, FastCGI is not recommended anymore.

This is a working configuration file. NGINX is the front-end server; the Seafile server is in the DMZ. The fastcgi_param directives are used for WebDAV.
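If you want to avoid FastCGI, SeafDAV can also be run in WSGI mode and proxied like the other locations. A rough sketch, assuming SeafDAV listens on its default port 8080 on the Seafile host (10.10.10.100 in the config above):

location /seafdav {
    # SeafDAV in WSGI mode instead of FastCGI (assumes fastcgi = false and port = 8080 in seafdav.conf)
    proxy_pass http://10.10.10.100:8080/seafdav;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_read_timeout 1200s;
    client_max_body_size 0;
}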

Ok,
So the only thing is that you have this:
location /media {
    root /mnt/cloud_seahub;

That means that you have shared the folder over the network, is that correct?
I thought there was another way to do it without sharing the folder through the network?

I recommend having all services in the DMZ do their configuration locally and just expose their service via 80/443. Way easier to handle.
For the front-end reverse proxy, have a look at HAProxy. There you can then present your validated certificate to the clients.
Internal clients should also call out to the central reverse proxy (via a local DNS server or entries in the hosts file).
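As a sketch of that last point, an internal client can simply resolve the public name to the reverse proxy’s internal address with a hosts entry (both the address and the name here are placeholders):

10.10.10.5    cloud.domain.com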

Calby, /mnt/cloud_seahub is shared via NFS.

That’s not secure. I have found another way to do it and I recommend you do the same.

First, install NGINX on the Seafile server and set it up as usual, then install NGINX on the reverse proxy server.
Here is a correct and secure configuration for the NGINX front-end server (reverse proxy):

server {
    listen       80;
    server_name  dav.xxxx.se;
    rewrite ^ https://$http_host$request_uri? permanent;    # force redirect http to https
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header X-Frame-Options "DENY" always;
    add_header Referrer-Policy "strict-origin" always;
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";
    server_tokens off;
}
server {
    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/live/dav.xxxx.se/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/dav.xxxx.se/privkey.pem;
    server_name dav.xxxx.se;
    ssl_session_timeout 5m;
    ssl_session_cache shared:SSL:5m;
    # Diffie-Hellman parameter for DHE ciphersuites, recommended 2048 bits
    ssl_dhparam /etc/ssl/private/dhparam_dav.pem;
    # secure settings (A+ at SSL Labs ssltest at time of writing)
    # see https://wiki.mozilla.org/Security/Server_Side_TLS#Nginx
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256$
    ssl_prefer_server_ciphers on;
    proxy_set_header X-Forwarded-For $remote_addr;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header X-Frame-Options "DENY" always;
    add_header Referrer-Policy "strict-origin" always;
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";
    server_tokens off;
    location '/.well-known/acme-challenge' {
      default_type "text/plain";
      root /mnt/certbot-webroot;
    }
    location / {
        proxy_pass         https://192.168.1.23; # The IP to your Seafile server (local IP)
        proxy_set_header   Host $host;
        proxy_set_header   X-Real-IP $remote_addr;
        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Host $server_name;
        proxy_set_header   X-Forwarded-Proto https;
        proxy_request_buffering off;
        access_log      /var/log/nginx/dav.access.log;
        error_log       /var/log/nginx/dav.error.log;
        proxy_read_timeout  1200s;
        client_max_body_size 0;
    }
}

This works but haproxy may be easier to use.

I’ll google it. What I have found out is that with NGINX you can just use the plain HTTP configuration on the Seafile server, and on the NGINX server you can have the SSL and Let’s Encrypt etc.
That takes some load off the Seafile server, but it also puts it all on another server, so if your Seafile server is down everything else still gets updated etc.
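So on the Seafile host itself the NGINX instance would only need a plain HTTP server block; a minimal sketch, assuming the front end is the only TLS endpoint, the standard Seahub/file-server ports, and an install under /opt/seafile (the path is just an assumption, adjust it to your own):

server {
    listen 80;
    server_name seafile.internal;   # placeholder for the internal name/IP of the Seafile VM

    # Seahub web interface
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # Seafile file server
    location /seafhttp {
        rewrite ^/seafhttp(.*)$ $1 break;
        proxy_pass http://127.0.0.1:8082;
        client_max_body_size 0;
    }

    # static seahub files served locally, so no network share is needed
    location /media {
        root /opt/seafile/seafile-server-latest/seahub;
    }
}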

But I’m new to this, so I’m not sure; we did not cover this in school, nor have I used it before in my work.

Hi,
Here is a guide that I have been writing about this; it covers best practices, so please read it for your own security.