Lots of request timeouts with a Docker deployment

We’re running Seafile in a container based on seafileltd/seafile-mc:8.0.5, and many of the requests handled by Django hit the timeout. In a sample taken over several hours, more than one in four requests was affected.

Since even the /api2/ping/ endpoint is affected, the problem seems to lie with the Django application or its communication with gunicorn.
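To quantify the timeout rate, a small sampler like the following can be pointed at the ping endpoint. This is only a sketch: the URL and port are assumptions based on the bind address in the config below, and the 30-second timeout mirrors the gunicorn setting.

```python
import time
import urllib.request
import urllib.error

def sample_latency(url, timeout=30.0):
    """Return the response time in seconds, or None on timeout/error."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            resp.read()
    except (urllib.error.URLError, OSError):
        return None
    return time.monotonic() - start

def timeout_rate(samples):
    """Fraction of failed samples (None) out of all samples taken."""
    if not samples:
        return 0.0
    return sum(1 for s in samples if s is None) / len(samples)

# Hypothetical usage -- sample the ping endpoint once a minute:
#   samples = [sample_latency("http://127.0.0.1:8000/api2/ping/") for _ in range(60)]
#   print(timeout_rate(samples))
```

Running this over a few hours gives a number that can be compared between the production and testing instances.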

These timeouts occur more often under “high load” (still fewer than 1,000 requests per hour handled by gunicorn) than on a testing instance.

Another possibly related observation from our monitoring: for as long as a Seafile container is running, the number of open file descriptors on the host grows linearly.
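To narrow the leak down to a specific process, the descriptor count can be read per PID from /proc. A minimal sketch, assuming a Linux host (the /proc filesystem is not available elsewhere); the PIDs to watch would be the gunicorn workers:

```python
import os

def open_fd_count(pid="self"):
    """Count open file descriptors of a process via /proc (Linux only).

    Pass a numeric PID as a string, or "self" for the current process.
    """
    return len(os.listdir(f"/proc/{pid}/fd"))

# Hypothetical usage -- log the count for one worker every few minutes:
#   print(open_fd_count("1234"))
```

If the per-worker count grows in step with the host-wide number, that points at a leak in the application rather than elsewhere on the host.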

For reference, this is the current gunicorn.conf.py:

import multiprocessing
import os

daemon = True
workers = multiprocessing.cpu_count() * 2 + 1

bind = "127.0.0.1:8000"

errorlog = "/shared/logs/gunicorn.log"
loglevel = "debug"

pids_dir = "/opt/seafile/pids"
pidfile = os.path.join(pids_dir, "seahub.pid")

timeout = 30

limit_request_line = 8190

Does anyone have a pointer on how to investigate this further, or on how it can be solved?

The “solution” was to set workers to a fixed value of 50. I don’t know for certain, but I suspect this limit comes from some configuration of the database.
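The change amounts to replacing the CPU-based formula in gunicorn.conf.py with a fixed value. The value 50 is empirical, not derived from any documentation; the comment about the database limit is an assumption:

```python
# gunicorn.conf.py (excerpt) -- fixed worker count instead of
# multiprocessing.cpu_count() * 2 + 1. The value 50 is empirical.
workers = 50

# If the ceiling really comes from the database, compare the total number
# of worker connections against its limit, e.g. in MySQL/MariaDB:
#   SHOW VARIABLES LIKE 'max_connections';
# and keep some headroom for other clients.
```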