After Upgrade from 6.2.2 to 6.3.4 on Raspi with Nginx: "Page unavailable"

Hello Everyone,

I have upgraded my server from 6.2.2 to 6.3.4 and everything seems to work properly except the web page. (After upgrading my Raspi from Jessie to Stretch the day before, the DAV server also stopped working, but it isn't needed anymore, so that's not important.)

I know it changed to WSGI, so I changed the nginx config and also set fastcgi = false in seafdav.conf, but I can't access the web UI.
If I try to open it in my browser I see this:

Page unavailable
Sorry, but the requested page is unavailable due to a server hiccup.
Our engineers have been notified, so check back later.

The Seafile clients work, my backup rsync client works, and the Baikal calendar server, which uses the same nginx config, still works fine.
So I don't think it's anything big, but I don't know what the problem is; it would be nice if anyone could help me figure it out.

My nginx config:

server {
listen 443;
ssl on;
ssl_certificate /etc/nginx/ssl/domain_com/domain_com.crt;
ssl_certificate_key /etc/nginx/ssl/domain_com/domain_com.key;
server_name domain.com;
ssl_prefer_server_ciphers on;
error_page 497 https://$host:$server_port$request_uri;
client_max_body_size 20G; # set max upload size

#baikal
root /var/www/html; #
rewrite ^/.well-known/caldav /dav.php redirect; #
rewrite ^/.well-known/carddav /dav.php redirect; #
charset utf-8; #

location / {
  proxy_pass         http://127.0.0.1:8000;
  proxy_set_header   Host $host;
  proxy_set_header   X-Real-IP $remote_addr;
  proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_set_header   X-Forwarded-Host $server_name;
  proxy_read_timeout  1200s;

 # used for view/edit office file via Office Online Server
 client_max_body_size 0;

 access_log      /var/log/nginx/seahub.access.log;
 error_log       /var/log/nginx/seahub.error.log;
}       

#baikal
location ~ ^(.+\.php)(.*)$ {
try_files $fastcgi_script_name =404;
fastcgi_split_path_info ^(.+\.php)(.*)$;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
include /etc/nginx/fastcgi_params;
access_log off;
}

location /seafhttp {
    rewrite ^/seafhttp(.*)$ $1 break;
    proxy_pass http://127.0.0.1:8082;
    client_max_body_size 0;
    proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;

    proxy_connect_timeout  36000s;
    proxy_read_timeout  36000s;
    proxy_send_timeout  36000s;

    send_timeout  36000s;
}

location /media {
    root /home/user/haiwen/seafile-server-latest/seahub;
}

location /seafdav {
    fastcgi_pass    127.0.0.1:8080;
    fastcgi_param   SCRIPT_FILENAME     $document_root$fastcgi_script_name;
    fastcgi_param   PATH_INFO           $fastcgi_script_name;

    fastcgi_param   SERVER_PROTOCOL     $server_protocol;
    fastcgi_param   QUERY_STRING        $query_string;
    fastcgi_param   REQUEST_METHOD      $request_method;
    fastcgi_param   CONTENT_TYPE        $content_type;
    fastcgi_param   CONTENT_LENGTH      $content_length;
    fastcgi_param   SERVER_ADDR         $server_addr;
    fastcgi_param   SERVER_PORT         $server_port;
    fastcgi_param   SERVER_NAME         $server_name;

    fastcgi_param   HTTPS               on;

    client_max_body_size 0;

    # This option is only available for Nginx >= 1.8.0. See more details below.
    #proxy_request_buffering off;

    access_log      /var/log/nginx/seafdav.access.log;
    error_log       /var/log/nginx/seafdav.error.log;

}
}

ccnet.conf

[General]
USER_NAME = domain
ID = 1b873db6cbfbb5ea3ab12a6987136923b1839b9d
NAME = domain
SERVICE_URL = https://domain.com:8001

[Client]
PORT = 13419

gunicorn.conf

import os

daemon = True
workers = 5

# default localhost:8000
bind = "0.0.0.0:8000"

# Pid
pids_dir = '/home/seafile/pids'
pidfile = os.path.join(pids_dir, 'seahub.pid')

# for file upload, we need a longer timeout value (default is only 30s, too short)
timeout = 1200

limit_request_line = 8190

seafdav.conf

[WEBDAV]
enabled = true
port = 8080
fastcgi = false
share_name = /seafdav

seafile.conf

[fileserver]
port=8082

# Set maximum upload file size to 999M.
#max_upload_size=999

# Set maximum download directory size to 999M.
#max_download_dir_size=999

seahub_settings.py

SECRET_KEY = "MY KEY"
HTTP_SERVER_ROOT = "https://domain.com:8001/seafhttp"
CONSTANCE_DATABASE_CACHE_BACKEND = None

Logs (the ones that changed after I rebooted and tried to access the web UI):
controller.log

[12/11/18 04:33:56] seafile-controller.c(117): bad pidfile format: /home/seafile/pids/seafdav.pid
[12/11/18 04:33:56] seafile-controller.c(414): failed to read pidfile /home/seafile/pids/seafdav.pid: Success

ccnet.log

[12/11/18 04:31:10] …/common/session.c(398): Accepted a local client
[12/11/18 04:34:21] …/common/session.c(398): Accepted a local client

seafile.log

[12/11/18 04:29:27] http-server.c(173): fileserver: worker_threads = 10
[12/11/18 04:29:27] http-server.c(188): fileserver: fixed_block_size = 8388608
[12/11/18 04:29:27] http-server.c(203): fileserver: web_token_expire_time = 3600
[12/11/18 04:29:27] http-server.c(218): fileserver: max_indexing_threads = 1
[12/11/18 04:29:27] http-server.c(233): fileserver: max_index_processing_threads= 3
[12/11/2018 04:29:27 AM] …/common/mq-mgr.c(54): [mq client] mq cilent is started
[12/11/2018 04:29:28 AM] size-sched.c(96): Repo size compute queue size is 0
[12/11/2018 04:34:28 AM] size-sched.c(96): Repo size compute queue size is 0

seahub.access.log

109.91.21.49 - - [11/Dec/2018:04:38:10 +0100] "GET /api2/account/info/ HTTP/1.1" 200 250 "-" "Mozilla/5.0"
109.91.21.49 - - [11/Dec/2018:04:39:21 +0100] "GET /api2/events/ HTTP/1.1" 404 35 "-" "Mozilla/5.0"
192.168.2.116 - - [11/Dec/2018:04:41:10 +0100] "GET /api2/ping/ HTTP/1.1" 200 6 "-" "Mozilla/5.0"
109.91.21.49 - - [11/Dec/2018:04:41:10 +0100] "GET /api2/account/info/ HTTP/1.1" 200 250 "-" "Mozilla/5.0"

I'm still trying things out, but it won't work.

What I've tried:
I added

SITE_BASE = "https://domain.com"
FILE_SERVER_ROOT = "https://domain.com/seafhttp"

to seahub_settings.py, as mentioned here: Unable to use WEB UI in 6.2.5 when switching from fastcgi to wsgi - #11 by Mark_O_Polo

And I tried a "blank" nginx config as mentioned in the manual, but still the same.

If I go to the client and right-click on a library to "show in browser", it works.
I am logged in as a client and can see all my data.
But as soon as I go to the menu and click on "System Admin", I get "Page unavailable" again.
If I log out, it's the same: I get "Page unavailable".

If I type the domain name, the address changes to https://domain.com/accounts/login/?next=/
In my opinion the error message also doesn't look like it's coming from nginx.
So it seems to me that the server doesn't have the correct root folder for Seafile, or am I wrong?
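
As a sanity check, something like this, run on the Pi itself, should show whether the error page comes from Seahub directly or only through nginx (port 8000 is taken from the gunicorn.conf above; requires curl):

# bypass nginx and talk to gunicorn/Seahub directly
# if this also returns "Page unavailable", the problem is in Seahub, not in nginx
curl -i http://127.0.0.1:8000/accounts/login/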

P.S. After digging into this for several hours now, I think it's a pity that such fundamental changes are made without a proper manual on what has to be changed to get everything working again.
The CM is a good thing, but I'm missing some simple examples of working configurations for the most common cases like Seafile, Seafile + HTTPS, Seafile + HTTPS + DAV. But that's only my opinion as a non-expert :wink:

Can you add this line to your nginx config as a test?

proxy_set_header X-Forwarded-Proto https;

Then try a normal troubleshooting restart of everything (see the shell sketch after this list):

Shut down Seahub and Seafile
Clear the Seahub cache
Clear the browser cache
Stop and then start the nginx service
Start Seafile and then start Seahub
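
As a rough sketch, that sequence could look like this; the seafile-server path and the /tmp/seahub_cache location are assumptions taken from the configs and commands elsewhere in this thread:

cd /home/seafile/seafile-server-latest   # install path assumed from the /media root above
./seahub.sh stop
./seafile.sh stop
rm -rf /tmp/seahub_cache                 # Seahub cache (also suggested further down)
# clear the browser cache in the browser itself
sudo service nginx stop
sudo service nginx start
./seafile.sh start
./seahub.sh start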

Also make this change in your config:

proxy_set_header Host $host;

to

proxy_set_header Host $host:$server_port;

My working nginx config in that part looks like this…

location / {
proxy_pass http://127.0.0.1:8000;
proxy_set_header Host $host:$server_port;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
proxy_read_timeout 1200s;

Report back, hope this helps.

Thanks for your reply.

I'm always restarting the Pi and the browser, which should clear all the cache and such.

As you said, I tried this nginx config (I cleaned it up a bit to rule out other causes):

server {
listen 443;
ssl on;
ssl_certificate /etc/nginx/ssl/domain_com/domain_com.crt;
ssl_certificate_key /etc/nginx/ssl/domain_com/domain_com.key;
server_name domain.com;
ssl_prefer_server_ciphers on;

location / {
  proxy_pass         http://127.0.0.1:8000;
  proxy_set_header Host $host:$server_port;
  proxy_set_header X-Real-IP $remote_addr;
  proxy_set_header X-Forwarded-Proto https;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_set_header X-Forwarded-Host $server_name;
  proxy_read_timeout 1200s;

 client_max_body_size 0;

 access_log      /var/log/nginx/seahub.access.log;
 error_log       /var/log/nginx/seahub.error.log;
}       

location /seafhttp {
    rewrite ^/seafhttp(.*)$ $1 break;
    proxy_pass http://127.0.0.1:8082;
    client_max_body_size 0;
    proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;

    proxy_connect_timeout  36000s;
    proxy_read_timeout  36000s;
    proxy_send_timeout  36000s;

    send_timeout  36000s;
}

location /media {
    root /home/seafile/seafile-server-latest/seahub;
}

}

But this didn't do the trick.
I also tried using 8001 and 443 instead of $server_port, but that didn't change anything.

I still get the error message on the domain itself or when it uses absolute paths like https://domain.com/accounts/login/?next=/
or https://domain.com/accounts/logout/

If I right-click in the client, it opens the web UI with a client token and that works:
https://domain.com/#common/lib/777ab5f3-136a

Restarting the browser (or rebooting) will definitely not clear the browser cache. It is persistent until you manually go into the browser settings and tell it to clear.

E.g. Firefox: Settings → Options → Privacy & Security → Cookies and Site Data → Clear Data
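
If you want to take the browser out of the equation entirely, you could also fetch the login page with curl, which doesn't use any cache (the -k flag is only needed if curl doesn't trust your certificate):

curl -ik https://domain.com/accounts/login/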

Here is my working config; my domain name has been redacted.

Note: I use Let's Encrypt, so you can disregard my server 80 block and use your own certificate info/location in the 443 section.

Note: I also have a different location /media path, as Debian stores the files in a different location than the Raspi.

Hope this helps.


server {
  listen       80;
  server_name  useyourdomain.net;
rewrite ^ https://$http_host$request_uri? permanent;

location '/.well-known/acme-challenge' {
    default_type "text/plain";
    root /opt/certbot-webroot;
}

}

server {
  listen 443;
  server_name  useyourdomain.net;



  ssl on;
  ssl_certificate /etc/letsencrypt/live/useyourdomain.net/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/useyourdomain.net/privkey.pem;

  proxy_set_header X-Forwarded-For $remote_addr;


location / {
         proxy_pass         http://127.0.0.1:8000;
         proxy_set_header Host $host:$server_port;
         proxy_set_header   X-Real-IP $remote_addr;
         proxy_set_header X-Forwarded-Proto https;
         proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
         proxy_set_header   X-Forwarded-Host $server_name;
         proxy_read_timeout  1200s;

         # used for view/edit office file via Office Online Server
         client_max_body_size 0;

         access_log      /var/log/nginx/seahub.access.log;
         error_log       /var/log/nginx/seahub.error.log;
    }




  location /seafhttp {
    rewrite ^/seafhttp(.*)$ $1 break;
    proxy_pass http://127.0.0.1:8082;
    client_max_body_size 0;
    proxy_connect_timeout  36000s;
    proxy_read_timeout  36000s;
    proxy_send_timeout  36000s;
  }

  location /media {
    root /opt/seafile/seafile-server-latest/seahub;
  }

  location /seafdav {
    fastcgi_pass    127.0.0.1:8080;
    fastcgi_param   SCRIPT_FILENAME     $document_root$fastcgi_script_name;
    fastcgi_param   PATH_INFO           $fastcgi_script_name;
    fastcgi_param   SERVER_PROTOCOL     $server_protocol;
    fastcgi_param   QUERY_STRING        $query_string;
    fastcgi_param   REQUEST_METHOD      $request_method;
    fastcgi_param   CONTENT_TYPE        $content_type;
    fastcgi_param   CONTENT_LENGTH      $content_length;
    fastcgi_param   SERVER_ADDR         $server_addr;
    fastcgi_param   SERVER_PORT         $server_port;
    fastcgi_param   SERVER_NAME         $server_name;
    fastcgi_param   REMOTE_ADDR         $remote_addr;
    fastcgi_param   HTTPS               on;
    client_max_body_size 0;
    access_log      /var/log/nginx/seafdav.access.log;
    error_log       /var/log/nginx/seafdav.error.log;
  }
}

Another item to consider…

What version of nginx are you running?
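
If you're not sure, a quick way to check is:

nginx -v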

I upgraded my nginx from the Jessie backports to

nginx version: nginx/1.10.3

I did that during my troubleshooting prior to eventually getting it working.

Of course it will, because Settings → Options → Privacy & Security → Cookies and Site Data is set to keep until FF is closed :wink: But that's not the topic here :slight_smile:

I don't see any differences between our nginx configs.

I'm also running nginx version 1.10.3 on Stretch.

Could you post your ccnet.conf, gunicorn.conf, seafile.conf, seahub_settings.py and seafdav.conf if you have them? Maybe there is something different.

In my server 443 block I have this:

proxy_set_header X-Forwarded-For $remote_addr;

seahub_settings.py (My server key/password/domain info redacted)


SECRET_KEY = "RedactedforForum"

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'seahub-db',
        'USER': 'seafile',
        'PASSWORD': 'RedactedforForum',
        'HOST': '127.0.0.1',
        'PORT': '3306',
        'OPTIONS': {
            'init_command': 'SET storage_engine=INNODB',
        }
    }
}

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
    'LOCATION': '127.0.0.1:11211',
    }
}

EMAIL_USE_TLS                       = False
EMAIL_HOST                          = 'localhost'
EMAIL_HOST_USER                     = ''
EMAIL_HOST_PASSWORD                 = ''
EMAIL_PORT                          = '25'
DEFAULT_FROM_EMAIL                  = 'noreply@useyourdomain.net'
SERVER_EMAIL                        = 'EMAIL_HOST_USER'
TIME_ZONE                           = 'Europe/Berlin'
SITE_BASE                           = 'https://useyourdomain.net'
SITE_NAME                           = 'Seafile Server'
SITE_TITLE                          = 'Seafile Server'
ENABLE_SIGNUP                       = False
ACTIVATE_AFTER_REGISTRATION         = False
SEND_EMAIL_ON_ADDING_SYSTEM_MEMBER  = True
SEND_EMAIL_ON_RESETTING_USER_PASSWD = True
CLOUD_MODE                          = False
FILE_PREVIEW_MAX_SIZE               = 30 * 1024 * 1024
SESSION_COOKIE_AGE                  = 60 * 60 * 24 * 7 * 2
SESSION_SAVE_EVERY_REQUEST          = False
SESSION_EXPIRE_AT_BROWSER_CLOSE     = False
FILE_SERVER_ROOT                    = 'https://useyourdomain.net/seafhttp'
REPO_PASSWORD_MIN_LENGTH            = 8
USER_PASSWORD_MIN_LENGTH            = 6
USER_PASSWORD_STRENGTH_LEVEL        = 3
USER_STRONG_PASSWORD_REQUIRED       = True
ENABLE_MAKE_GROUP_PUBLIC            = False
ENABLE_THUMBNAIL                    = True
THUMBNAIL_ROOT                      = '/opt/seafile/seahub-data/thumbnail/thumb/'
REPO_PASSWORD_MIN_LENGTH            = 8
USER_PASSWORD_MIN_LENGTH            = 8
USER_PASSWORD_STRENGTH_LEVEL        = 3
USER_STRONG_PASSWORD_REQUIRED       = True

ccnet.conf (my server items changed/redacted)

[General]
USER_NAME = useyourdomainname
ID = RedactedforForum
NAME = useyourdomainname
SERVICE_URL = https://useyourdomainname.net

[Client]
PORT = 13419

[Database]
ENGINE = mysql
HOST = 127.0.0.1
PORT = 3306
USER = seafile
PASSWD = RedactedforForum
DB = ccnet-db
CONNECTION_CHARSET = utf8

seafile.conf

[fileserver]
port=8082

# Set maximum upload file size to 1000M.
max_upload_size=1000



# Set maximum download directory size to 1000M.
max_download_dir_size=1000

host = 127.0.0.1



[database]
type = mysql
host = 127.0.0.1
port = 3306
user = seafile
password = RedactedforForum
db_name = seafile-db
connection_charset = utf8

Thank you Mark_O_Polo,

I added proxy_set_header X-Forwarded-For $remote_addr; to the nginx config.

I added SITE_BASE = "https://domain.com" and FILE_SERVER_ROOT = "https://domain.com/seafhttp" to seahub_settings.py.

I deleted the port behind the domain, so now SERVICE_URL = https://domain.com in ccnet.conf.

I added host = 127.0.0.1 to seafile.conf.

But I still get the same error in the browser with the direct links; the relative paths (right-click in the client) still work.

Could you also post your gunicorn.conf?

Manually delete the Seahub cache just to rule it out:

rm -rf /tmp/seahub_cache

Re: gunicorn.conf, I'm not familiar with this file. Pretty sure it's new with 6.3. I don't have it.