Random errors during downloads with the web client

Hi,
during downloads in the web interface, I encounter random errors: a “network error” is reported and the download is interrupted. I cannot identify the problem. Here are the errors I noted:

fileserver.log

[2023-07-05 17:24:20] failed to pack dir Photo boubou: write tcp 127.0.0.1:8082->127.0.0.1:44628: write: broken pipe

fileserver-error

[mysql] 2023/07/10 15:33:06 packets.go:122: closing bad idle connection: connection reset by peer
[mysql] 2023/07/10 15:33:06 connection.go:158: driver: bad connection
[mysql] 2023/07/10 16:08:25 packets.go:122: closing bad idle connection: EOF
[mysql] 2023/07/10 16:08:25 packets.go:122: closing bad idle connection: EOF
[mysql] 2023/07/10 23:38:20 packets.go:122: closing bad idle connection: EOF
[mysql] 2023/07/10 23:38:20 connection.go:158: driver: bad connection
[mysql] 2023/07/11 11:50:11 packets.go:122: closing bad idle connection: connection reset by peer
[mysql] 2023/07/11 11:50:11 connection.go:158: driver: bad connection
[mysql] 2023/07/11 18:44:57 packets.go:122: closing bad idle connection: EOF
[mysql] 2023/07/11 18:44:57 packets.go:122: closing bad idle connection: EOF
[mysql] 2023/07/11 18:45:10 packets.go:122: closing bad idle connection: EOF
[mysql] 2023/07/11 18:45:10 connection.go:158: driver: bad connection
[mysql] 2023/07/11 18:47:37 packets.go:122: closing bad idle connection: EOF
[mysql] 2023/07/11 18:47:37 packets.go:122: closing bad idle connection: EOF
[mysql] 2023/07/11 18:47:58 packets.go:122: closing bad idle connection: EOF
[mysql] 2023/07/11 18:48:28 packets.go:122: closing bad idle connection: EOF
[mysql] 2023/07/11 18:48:28 connection.go:158: driver: bad connection

notification-server.log

2023/07/12 04:28:15 disconnected because no pong was received for more than 5s
2023/07/12 04:28:50 send repo-update event to client reolink@XXXXXXXX.cloud(2): {"repo_id":"51df52ff-0626-4955-a64d-XXXXXXXX","commit_id":"cb077f402ef8ef6f4e2e2c72fc2cXXXXXXXX"}
2023/07/12 04:29:04 failed to read json data from client: 127.0.0.1:56018: websocket: close 1006 (abnormal closure): unexpected EOF
2023/07/12 04:29:43 send repo-update event to client reolink@XXXXXXXX.cloud(2): {"repo_id":"51df52ff-0626-4955-a64d-XXXXXXXX","commit_id":"f517180547165f7f0f010e044daXXXXXXXX"}
2023/07/12 04:29:43 send repo-update event to client boubou@XXXXXXXX.me(6): {"repo_id":"51df52ff-0626-4955-a64d-XXXXXXXX","commit_id":"f517180547165f7f0f010e044daXXXXXXXX"}
2023/07/12 04:29:49 disconnected because no pong was received for more than 5s
2023/07/12 04:30:38 send repo-update event to client reolink@XXXXXXXX.cloud(2): {"repo_id":"51df52ff-0626-4955-a64d-XXXXXXXX","commit_id":"f316dde17b8c5e59d3ae6fXXXXXXXX"}
2023/07/12 04:32:51 send repo-update event to client reolink@XXXXXXXX.cloud(2): {"repo_id":"51df52ff-0626-4955-a64d-XXXXXXXX","commit_id":"eb7009ea33ddc7197999f071aXXXXXXXX"}
2023/07/12 04:33:58 send repo-update event to client reolink@XXXXXXXX.cloud(2): {"repo_id":"51df52ff-0626-4955-a64d-XXXXXXXX","commit_id":"283645dc71b85d93c24d66cb763XXXXXXXX"}
2023/07/12 04:34:17 send repo-update event to client reolink@XXXXXXXX.cloud(2): {"repo_id":"51df52ff-0626-4955-a64d-XXXXXXXX","commit_id":"020cb3902d48d32ada0952bXXXXXXXX"}

syslog

2023-07-06T03:46:17.492830+02:00 lenovo-TS150 mariadbd[792]: 2023-07-06  3:46:17 1430 [Warning] Aborted connection 1430 to db: 'seahub-db' user: 'seafile' host: 'localhost' (Got an error reading communication packets)
2023-07-06T03:51:19.899484+02:00 lenovo-TS150 mariadbd[792]: 2023-07-06  3:51:19 1440 [Warning] Aborted connection 1440 to db: 'seahub-db' user: 'seafile' host: 'localhost' (Got an error reading communication packets)
2023-07-06T03:56:22.136561+02:00 lenovo-TS150 mariadbd[792]: 2023-07-06  3:56:22 1448 [Warning] Aborted connection 1448 to db: 'seahub-db' user: 'seafile' host: 'localhost' (Got an error reading communication packets)
2023-07-06T04:01:24.052521+02:00 lenovo-TS150 mariadbd[792]: 2023-07-06  4:01:24 1466 [Warning] Aborted connection 1466 to db: 'seahub-db' user: 'seafile' host: 'localhost' (Got an error reading communication packets)

seafile.log

2023-07-12 03:57:19,755 [INFO] seafes:182 load_seafevents_conf [seafes] use highlighter fvh
2023-07-12 03:57:19,956 [INFO] xmlschema:1234 include_schema Resource 'XMLSchema.xsd' is already loaded
2023-07-12 03:57:20,867 [INFO] seafes:182 load_seafevents_conf [seafes] use highlighter fvh
2023-07-12 03:57:21,428 [INFO] seafes:182 load_seafevents_conf [seafes] use highlighter fvh
2023-07-12 03:57:26,549 [INFO] seafes:182 load_seafevents_conf [seafes] use highlighter fvh
2023-07-12 04:07:33,269 [INFO] seafes:182 load_seafevents_conf [seafes] use highlighter fvh
2023-07-12 04:07:33,857 [INFO] seafes:182 load_seafevents_conf [seafes] use highlighter fvh
2023-07-12 04:10:04,485 [INFO] xmlschema:1234 include_schema Resource 'XMLSchema.xsd' is already loaded
2023-07-12 04:10:05,658 [INFO] seafes:182 load_seafevents_conf [seafes] use highlighter fvh
2023-07-12 04:10:07,621 [INFO] seafes:182 load_seafevents_conf [seafes] use highlighter fvh
2023-07-12 04:10:07,626 [INFO] seafes:182 load_seafevents_conf [seafes] use highlighter fvh
2023-07-12 04:10:08,285 [INFO] seafes:182 load_seafevents_conf [seafes] use highlighter fvh
2023-07-12 04:25:23,141 [INFO] xmlschema:1234 include_schema Resource 'XMLSchema.xsd' is already loaded
2023-07-12 04:31:44,252 [INFO] xmlschema:1234 include_schema Resource 'XMLSchema.xsd' is already loaded

by observing the network frames, I can see that each time the network error occurs, a TCP reset is sent, and I don’t know where this reset comes from! It appears on the connection between the PC that downloads the file and the external IP of the Seafile server. The same error occurs on another PC.
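To narrow down which side drops the connection, a small probe can stream the download and report exactly how many bytes arrive before the reset. This is a minimal sketch (standard library only); the URL in the comment is a placeholder for an actual Seafile download link:

```python
# Minimal download probe: stream a URL and report how many bytes arrive
# before the connection drops. Run it against an actual Seafile download
# link (placeholder below) while reproducing the "network error".
import urllib.request


def probe(url, chunk_size=64 * 1024):
    total = 0
    try:
        with urllib.request.urlopen(url) as resp:
            while True:
                data = resp.read(chunk_size)
                if not data:
                    break
                total += len(data)
        print(f"download completed: {total} bytes")
    except OSError as exc:  # ConnectionResetError/URLError are OSError subclasses
        print(f"download aborted after {total} bytes: {exc}")
    return total


# Example (placeholder URL):
# probe("http://192.168.1.5:8000/some-download-link")
```

Comparing the byte count at which the abort happens across runs (and across the direct-IP vs. Nginx/SSL paths) helps show whether the reset is size-, time-, or path-dependent.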

Please help me identify the problem.

Debian 11
MariaDB Server version: 10.11.4-MariaDB-1:10.11.4
Python 3.9.2

Can you please post your configuration from the Seafile conf dir? If you use a reverse proxy, please include its config and version.

Hi,
here is the configuration I use with Seafile:
ccnet.conf

[General]

[Database]
ENGINE = mysql
HOST = 127.0.0.1
PORT = 3306
USER = seafile
PASSWD = my-password
DB = ccnet-db
CONNECTION_CHARSET = utf8

gunicorn.conf

import os

daemon = True
workers = 10

# default localhost:8000
bind = "127.0.0.1:8000"

# Pid
pids_dir = '/home/boubou/cloud/seafile/pids'
pidfile = os.path.join(pids_dir, 'seahub.pid')

# for file upload, we need a longer timeout value (default is only 30s, too short)
timeout = 86400

limit_request_line = 8190

seafdav.conf

[WEBDAV]

# Default is false. Change it to true to enable SeafDAV server.
enabled = true

port = 8070

# If you deploy seafdav behind nginx/apache, you need to modify "share_name".
share_name = /webdav

# SeafDAV uses Gunicorn as web server.
# This option maps to Gunicorn's 'workers' setting. https://docs.gunicorn.org/en/stable/settings.html?#workers
# By default it's set to 5 processes.
workers = 10

# This option maps to Gunicorn's 'timeout' setting. https://docs.gunicorn.org/en/stable/settings.html?#timeout
# By default it's set to 1200 seconds, to support large file uploads.
timeout = 86400

show_repo_id = false

seafevents.conf

[DATABASE]
type = mysql
host = 127.0.0.1
port = 3306
username = seafile
password = my-password
name = seahub-db



[AUDIT]
enabled = true

[INDEX FILES]
enabled = true
interval = 10m
external_es_server = true
#shards = 10

highlight = fvh

## If true, indexes the contents of office/pdf files while updating search index
## Note: If you change this option from "false" to "true", then you need to clear the search index and update the index again. See the FAQ for details.
index_office_pdf = true

[OFFICE CONVERTER]
enabled = true
workers = 2

## where to store the converted office/pdf files. Default is /tmp/.
outputdir = /tmp/

[SEAHUB EMAIL]
enabled = true

## interval of sending Seahub email. Can be s(seconds), m(minutes), h(hours), d(days)
interval = 30m

# Enable statistics
[STATISTICS]
enabled = true

seafile.conf

[fileserver]
use_go_fileserver = true
max_indexing_threads = 10
worker_threads = 10
host = 127.0.0.1
port = 8082
# default to false. If enabled, fileserver-access.log will be written to log directory.
enable_access_log = false

fs_cache_limit = 2000

# Maximum upload file size.
max_upload_size = 2000000000

# Maximum download directory size.
#max_download_dir_size = 2000000000

max_sync_file_count = -1
fs_id_list_request_timeout = -1

# After how much time a temp file will be removed. The unit is in seconds. Default to 3 days.
http_temp_file_ttl = 86400
# File scan interval. The unit is in seconds. Default to 1 hour.
http_temp_scan_interval = 3600

# Set block size to 1 MB.
fixed_block_size = 1

# Web token expiration time, in seconds.
web_token_expire_time = 86400

# skip_block_hash = true

# Use larger connection pool
max_connections = 100

[quota]
# default user quota in GB, integer only
default = 50

[history]
keep_days = 3

[library_trash]
# How often trashed libraries are scanned for removal, default 1 day.
scan_days = 1

# How many days to keep trashed libraries, default 30 days.
expire_days = 7

[zip]
# The file name encoding of the downloaded zip file.
windows_encoding = iso-8859-1

[file_lock]
default_expire_hours = 1
#use_locked_file_cache = true

[Slow_log]
# default to true
ENABLE_SLOW_LOG = true
# the unit of all slow log thresholds is millisecond.
# default to 5000 milliseconds, only RPC queries processed for longer than 5000 milliseconds will be  logged.
RPC_SLOW_THRESHOLD = 5000

[database]
type = mysql
host = 127.0.0.1
port = 3306
user = seafile
password = my-password
db_name = seafile-db
connection_charset = utf8
#unix_socket = /var/run/mysqld/mysqld.sock

[notification]
enabled = true
host = 127.0.0.1
port = 8083
log_level = debug
jwt_private_key = private-key

seahub_settings.py

# -*- coding: utf-8 -*-
SECRET_KEY = "my-private-key"
SERVICE_URL = "http://cloud.XXXXX.me/"

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'seahub-db',
        'USER': 'seafile',
        'PASSWORD': 'my-password',
        'HOST': '127.0.0.1',
        'PORT': '3306',
        'OPTIONS': {'charset': 'utf8mb4'},
    }
}

EMAIL_USE_SSL = True
EMAIL_HOST = 'mail.XXXXX.me'
EMAIL_HOST_USER = 'info@XXXXX.me'
EMAIL_HOST_PASSWORD = 'my-password-smtp'
EMAIL_PORT = 465
DEFAULT_FROM_EMAIL = EMAIL_HOST_USER
SERVER_EMAIL = EMAIL_HOST_USER
REPLACE_FROM_EMAIL = False
ADD_REPLY_TO_HEADER = True

DEBUG = True

# For security consideration, please set to match the host/domain of your site, e.g., ALLOWED_HOSTS = ['.example.com'].
# Please refer https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts for details.
ALLOWED_HOSTS = ['.XXXXXX.me']

ENABLE_WIKI = True

#ENABLE_DEMO_USER = True
#CLOUD_DEMO_USER = 'demo@XXXXXX.me'

# Whether to use a secure cookie for the CSRF cookie
# https://docs.djangoproject.com/en/3.2/ref/settings/#csrf-cookie-secure
CSRF_COOKIE_SECURE = True

# The value of the SameSite flag on the CSRF cookie
# https://docs.djangoproject.com/en/3.2/ref/settings/#csrf-cookie-samesite
CSRF_COOKIE_SAMESITE = 'Strict'

# version for encrypted library
# should only be `2` or `4`.
# version 3 is insecure (using AES128 encryption) so it's not recommended any more.
ENCRYPTED_LIBRARY_VERSION = 4

#BRANDING_CSS = 'custom/custom.css'
#LOGO_PATH = 'custom/mylogo.png'
#LOGO_WIDTH = 250
#LOGO_HEIGHT = 41
#DESKTOP_CUSTOM_BRAND = 'Mon Cloud Personnel'
#DESKTOP_CUSTOM_LOGO = 'custom/desktop-custom-logo.png'
#FAVICON_PATH = 'custom/favicon.png'

# video thumbnails
#ENABLE_VIDEO_THUMBNAIL = False
THUMBNAIL_VIDEO_FRAME_TIME = 15  # use the frame at 15 seconds as thumbnail
ENABLE_RESUMABLE_FILEUPLOAD = True
TIME_ZONE = 'Europe/Paris'
LANGUAGE_CODE = 'fr'
ENABLE_TERMS_AND_CONDITIONS = False
ENABLE_SYS_ADMIN_VIEW_REPO = True
SHOW_TRAFFIC = True
SITE_NAME = 'Seafile'
SITE_TITLE = 'Mon Espace Cloud'
#ENABLE_SHARE_LINK_AUDIT = False
ENABLE_UPLOAD_LINK_VIRUS_CHECK = False
USE_PDFJS = True
FILE_PREVIEW_MAX_SIZE = 40 * 1024 * 1024
ENABLE_THUMBNAIL = True
THUMBNAIL_ROOT = '/home/boubou/cloud/XXXXXXX/seahub-data/thumbnail'
THUMBNAIL_SIZE_FOR_ORIGINAL = 1024
THUMBNAIL_IMAGE_SIZE_LIMIT = 30 # MB
ENABLE_GUEST_INVITATION = True
ENABLE_USER_CLEAN_TRASH = True
ENABLE_SHARE_TO_ALL_GROUPS = True
ENABLE_SHOW_CONTACT_EMAIL_WHEN_SEARCH_USER = True
ENABLE_TWO_FACTOR_AUTH = True

#ENABLE_STORAGE_CLASSES = True

# Whether to allow user to delete account, change login password or update basic user
# info on profile page.
# Since PRO 6.3.10
ENABLE_DELETE_ACCOUNT = True
ENABLE_UPDATE_USER_INFO = True
ENABLE_CHANGE_PASSWORD = True

# Whether to send email when a system admin adding a new member. Default is `True`.
SEND_EMAIL_ON_ADDING_SYSTEM_MEMBER = True
# Whether to send email when a system staff resetting user's password.
SEND_EMAIL_ON_RESETTING_USER_PASSWD = True
# Send system admin notify email when user registration is complete. Default is `False`.
NOTIFY_ADMIN_AFTER_REGISTRATION = True

# Age of cookie, in seconds (default: 2 weeks).
SESSION_COOKIE_AGE = 60 * 60 * 24 * 7 * 2

# Whether a user's session cookie expires when the Web browser is closed.
SESSION_EXPIRE_AT_BROWSER_CLOSE = False

# Whether to save the session data on every request. Default is `False`
SESSION_SAVE_EVERY_REQUEST = False

# Interval for browser requests unread notifications
# Since PRO 6.1.4 or CE 6.1.2
UNREAD_NOTIFICATIONS_REQUEST_INTERVAL = 3 * 60 # seconds

# Add the ability of tagging a snapshot of a library (Use ENABLE_REPO_SNAPSHOT_LABEL = True to turn the feature on)
ENABLE_REPO_SNAPSHOT_LABEL = True

# Enable cloud mode and hide `Organization` tab.
CLOUD_MODE = False
# Whether to enable the global address book
ENABLE_GLOBAL_ADDRESSBOOK = True
MAX_NUMBER_OF_FILES_FOR_FILEUPLOAD = 200000

# Default expire days for share link (since version 6.3.8)
# Once this value is configured, the user can no longer generate a share link with no expiration time.
# If the expiration value is not set when the share link is generated, the value configured here will be used.
#SHARE_LINK_EXPIRE_DAYS_DEFAULT = 7

# Add a report abuse button on download links. (since version 7.1.0)
# Users can report abuse on the share link page, fill in the report type, contact information, and description.
# Default is false.
ENABLE_SHARE_LINK_REPORT_ABUSE = True

#Webdav
ENABLE_WEBDAV_SECRET = True
WEBDAV_SECRET_MIN_LENGTH = 16
WEBDAV_SECRET_STRENGTH_LEVEL = 4
# to disable the check
DATA_UPLOAD_MAX_NUMBER_FIELDS = None

# If you don't want to run seahub website on your site's root path, set this option to your preferred path.
# e.g. setting it to '/seahub/' would run seahub on http://example.com/seahub/.
SITE_ROOT = '/'

# Config Memcached ( Http or Socket )

CACHES = {
    'default': {
        'BACKEND': 'django_pylibmc.memcached.PyLibMCCache',
        'LOCATION': '127.0.0.1:11211',
    },
    'locmem': {
        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
    },
}
COMPRESS_CACHE_BACKEND = 'locmem'

# Enable Only Office
ENABLE_ONLYOFFICE = True
VERIFY_ONLYOFFICE_CERTIFICATE = True
ONLYOFFICE_APIJS_URL = 'https://cloud.XXXXX.me/onlyofficeds/web-apps/apps/api/documents/api.js'
ONLYOFFICE_FILE_EXTENSION = ('doc', 'docx', 'ppt', 'pptx', 'xls', 'xlsx', 'odt', 'fodt', 'odp', 'fodp', 'ods', 'fods')
ONLYOFFICE_EDIT_FILE_EXTENSION = ('docx', 'pptx', 'xlsx')
ONLYOFFICE_JWT_SECRET = 'my-private-key'

ONLYOFFICE_DESKTOP_EDITORS_PORTAL_LOGIN = True

nginx.conf

user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
	worker_connections 1024;
	# multi_accept on;
}

http {
        include       /etc/nginx/mime.types;
        default_type  application/octet-stream;

        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
	##
	# Basic Settings
	##
	sendfile on;
	tcp_nopush on;
	#types_hash_max_size 2048;
	server_tokens off;

	# server_names_hash_bucket_size 64;
	# server_name_in_redirect off;

	##
	# SSL Settings
	##

	ssl_protocols TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
	ssl_prefer_server_ciphers on;

	##
	# Logging Settings
	##

	access_log /var/log/nginx/access.log;
	error_log /var/log/nginx/error.log;

	##
	# Gzip Settings
	##

	gzip on;

	# gzip_vary on;
	# gzip_proxied any;
	# gzip_comp_level 6;
	# gzip_buffers 16 8k;
	# gzip_http_version 1.1;
	# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

	##
	# Virtual Host Configs
	##

	include /etc/nginx/conf.d/*.conf;
	#include /etc/nginx/sites-enabled/*;
}

nginx seafile.conf

#       Required for OnlyOffice DocumentServer

        map $http_x_forwarded_proto $the_scheme {
            default $http_x_forwarded_proto;
            "" $scheme;
        }

        map $http_x_forwarded_host $the_host {
            default $http_x_forwarded_host;
            "" $host;
        }

        map $http_upgrade $proxy_connection {
            default upgrade;
            "" close;
        }

#        Log Format Seafile
         log_format seafileformat '$http_x_forwarded_for $remote_addr [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $upstream_response_time';

        server {
         listen 80;
         server_name  cloud.XXXX.me www.cloud.XXXX.me;
         rewrite ^ https://$http_host$request_uri? permanent;    # force redirect http to https
	 server_tokens off;

        }

        server {
         listen 443 ssl proxy_protocol http2;
	 server_name cloud.XXXX.me www.cloud.XXXX.me;

 	 set_real_ip_from 192.168.1.8;
	 real_ip_header proxy_protocol;

	location /stub_status {
	 stub_status on;
	 access_log off;


        }


        ssl_certificate /etc/letsencrypt/live/XXXX.me/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/XXXX.me/privkey.pem;

	ssl_session_timeout 1d;
	ssl_session_cache shared:SSL:50m;
	ssl_session_tickets off;
	ssl_ecdh_curve X25519:X448:secp521r1:secp384r1:secp256k1;
	ssl_protocols TLSv1.2 TLSv1.3;
	ssl_prefer_server_ciphers on;
	ssl_ciphers 'TLS13-CHACHA20-POLY1305-SHA256:TLS13-AES-256-GCM-SHA384:TLS13-AES-128-GCM-SHA256:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-CHACHA20-POLY1305-D:ECDHE-RSA-CHACHA20-POLY1305-D:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES256-SHA384';
	
	ssl_stapling on;
	ssl_stapling_verify on;
	resolver 8.8.8.8 8.8.4.4 valid=300s;
	resolver_timeout 5s;


	add_header X-Content-Type-Options nosniff;
	add_header X-Frame-Options "SAMEORIGIN";
	add_header X-XSS-Protection "1; mode=block";
	add_header X-Robots-Tag none;
	add_header X-Download-Options noopen;
	add_header X-Permitted-Cross-Domain-Policies none;	
        add_header Strict-Transport-Security "max-age=15552000; includeSubDomains" always;
        proxy_set_header X-Forwarded-For $remote_addr;
        server_tokens off;


        location /media {
         root /home/boubou/cloud/XXXX/seafile-server-latest/seahub;
        }

    location / {
        proxy_pass http://127.0.0.1:8000/;
        proxy_read_timeout 3600s;
        proxy_set_header Host $host;
        proxy_set_header Forwarded "for=$remote_addr;proto=$scheme";
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Connection "";
        proxy_http_version 1.1;

        client_max_body_size 0;
        access_log      /var/log/nginx/seahub.access.log seafileformat;
        error_log       /var/log/nginx/seahub.error.log;
    }

    location /seafhttp {
        rewrite ^/seafhttp(.*)$ $1 break;
        proxy_pass http://127.0.0.1:8082;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        client_max_body_size 0;
        proxy_connect_timeout  86400s;
        proxy_read_timeout  86400s;
	proxy_send_timeout  86400s;
	send_timeout  86400s;
	
        proxy_request_buffering off;
        access_log      /var/log/nginx/seafhttp.access.log seafileformat;
        error_log       /var/log/nginx/seafhttp.error.log;
    }

    location /notification/ping {
        proxy_pass http://127.0.0.1:8083/ping;
        access_log      /var/log/nginx/notification.access.log seafileformat;
        error_log       /var/log/nginx/notification.error.log;
    }

    location /notification {
        proxy_pass http://127.0.0.1:8083/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        access_log      /var/log/nginx/notification.access.log seafileformat;
        error_log       /var/log/nginx/notification.error.log;
    }

    location /webdav {
        proxy_pass         http://127.0.0.1:8070/webdav;
        proxy_set_header   Host $host;
        proxy_set_header   X-Real-IP $remote_addr;
        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Host $server_name;
        proxy_set_header   X-Forwarded-Proto $scheme;
        proxy_read_timeout  1200s;
        client_max_body_size 0;

        access_log      /var/log/nginx/webdav.access.log seafileformat;
        error_log       /var/log/nginx/webdav.error.log;
    }

#       ONLYOFFICESDS

        location /onlyofficeds/ {

#        IMPORTANT ! - Trailing slash !
         proxy_pass http://127.0.0.1:88/;
         proxy_http_version 1.1;
         client_max_body_size 100M; # Limit Document size to 100MB
         proxy_read_timeout 3600s;
         proxy_connect_timeout 3600s;
         proxy_set_header Upgrade $http_upgrade;
         proxy_set_header Connection $proxy_connection;

#        IMPORTANT ! - Subfolder and NO trailing slash !
         proxy_set_header X-Forwarded-Host $the_host/onlyofficeds;		
         proxy_set_header X-Forwarded-Proto $the_scheme;
         proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        }
}

I am using Seafile Pro 10.0.6
Mariadb 10.11.4
Nginx 1.24.0
all on a server freshly installed on Debian 11.7
I use the manual method to install Seafile
Note that there is no problem with the Android client: downloads and uploads work without issue.
Web uploads of large files complete with no errors, and the desktop clients on all computers work fine.
I feel like the error occurs when I view the HAProxy server stats through the web interface while the computer is uploading a file to Seafile… I think HAProxy doesn’t like passing an SSL request and having it sent back to check its status on a private network. I’m going to move Nginx to the server where HAProxy is installed and remove it from the Seafile server, so the SSL certificates will be processed by HAProxy. I am using TCP SNI forwarding at the moment and will try to work in HTTP mode with HAProxy.
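For context, a TCP-mode (layer 4) SNI setup like the one described here typically looks something like the following hypothetical haproxy.cfg fragment; all names and addresses are placeholders, not the actual configuration:

```
# Hypothetical sketch of TCP-mode SNI forwarding (layer 4): HAProxy routes
# on the TLS SNI without terminating SSL, so the certificates stay on the
# Seafile server's Nginx. All names and addresses are placeholders.
frontend https_in
    bind :443
    mode tcp
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend seafile_nginx if { req_ssl_sni -i cloud.example.me }
    default_backend other_servers

backend seafile_nginx
    mode tcp
    # send-proxy matches Nginx's "listen 443 ssl proxy_protocol"
    server seafile 192.168.1.5:443 send-proxy

backend other_servers
    mode tcp
    server fallback 192.168.1.9:443
```

Moving SSL termination to HAProxy, as proposed above, would instead mean an HTTP-mode frontend (`mode http` with `bind :443 ssl crt …`), so HAProxy rather than Nginx handles the certificates.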
I’ll keep looking in this direction.
Thanks!

Appreciate the configs. At first glance, nothing catches my eye directly.

To be clear:

  1. Access & sync (up and download) via Drive/ Sync client for Pc/ Mac works without any issues? (using the Nginx reverse proxy _https://cloud.XXXX.me)
  2. Access & sync via Android/ iOS mobile client works? (via Nginx _https://cloud.XXXX.me)
  3. Web browser access & sync via Firefox/ Chrome/ Edge/ Safari on Pc/ Mac works (via Nginx _https://cloud.XXXX.me) - even with larger files?

Nowhere in this configuration does HAProxy come into play. Why would you use HAProxy in addition to Nginx?

It sounds like everything is working except for some mysterious scenario involving HAProxy - which, to be honest, I don’t understand the reason for.

Hi,
I use HAProxy because I have other servers that use ports 80 and 443 on my network (see image). HAProxy is used in TCP mode (layer 4), which makes it easy to share these ports among the different servers.
I did some tests today and noticed that I don’t get an error when I use Seafile without Nginx, via an internal IP (e.g. 192.168.1.5:8000).
As soon as I use Nginx with SSL, I get errors.
I even isolated the Seafile server, exposing ports 80 and 443 directly to it without HAProxy, and I still get random errors…


Sorry for the delayed response. Okay - now we’re getting somewhere; that graphic explains a lot.

From your logs I see many error messages; some don’t have anything to do with what you describe.

Please correct me if I’m wrong.

You experience errors when you download a file using the Web interface/UI (what size are the affected files?).

  1. What is the message you get from the Web UI? Does it only say “network error”, and in which UI context?
  2. You don’t get an error message when you upload a file using the Web UI?
  3. From your last tests you seem to have isolated the issue - HAproxy and nginx might not cause the issue.
  4. Also did you verify the same error happens with different web browsers, like Chromium/ Chrome, Edge, Firefox (please disable caching and turn off all add-ons)?

All your logs and configs are nice, but they don’t paint a clear picture of what is happening at the exact moment the issue arises. They include log messages that might have nothing to do with the issue. All this information is good to have, but without a reference point, these details tend to obfuscate the real issue.
In your case, that reference point is the exact error message you get in the Web UI and the timestamp when it occurred.

Another good practice is to clear or rotate all log files before testing.
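For example, the relevant logs can be emptied in place before a test run (truncating rather than deleting, so running daemons keep their open file handles). A minimal sketch; the log directory path in the comment is an assumption:

```python
# Truncate (rather than delete) every *.log file in a directory so that
# running daemons keep their open file handles while the logs start empty.
# The example log directory path below is an assumption - adjust it.
from pathlib import Path


def truncate_logs(logdir):
    truncated = []
    for logfile in Path(logdir).glob("*.log"):
        logfile.write_bytes(b"")  # opening for write empties the file in place
        truncated.append(logfile.name)
    return sorted(truncated)


# Example: truncate_logs("/home/boubou/cloud/seafile/logs")
```

After truncating, reproduce the error once and note the exact wall-clock time, so the first lines that appear in each log can be tied to that single event.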