Lost connection to MySQL server causing Seafile client sync issues

I am running Seafile CE in Docker with seafile server 7.1.4. I get frequent sync errors in the client (on Linux, with both client versions 7.0.4 and 7.0.9). The sync errors say either that the library was damaged on the server or that it was deleted on the server. Neither of these errors is true; it just seems to be a hiccup on the server. Any ideas what is going on?

The seafile.log on the server shows a number of the same MySQL error:

[08/06/2020 04:17:09 PM] …/common/seaf-db.c(589): Failed to connect to MySQL: Lost connection to MySQL server at ‘reading authorization packet’, system error: 110
[08/06/2020 04:17:59 PM] …/common/seaf-db.c(589): Failed to connect to MySQL: Lost connection to MySQL server at ‘reading authorization packet’, system error: 110
[08/06/2020 04:18:40 PM] …/common/seaf-db.c(589): Failed to connect to MySQL: Lost connection to MySQL server at ‘reading authorization packet’, system error: 110
[08/06/2020 04:24:59 PM] …/common/seaf-db.c(589): Failed to connect to MySQL: Lost connection to MySQL server at ‘reading authorization packet’, system error: 110
[08/06/2020 04:35:58 PM] …/common/seaf-db.c(589): Failed to connect to MySQL: Lost connection to MySQL server at ‘reading authorization packet’, system error: 110
[08/06/2020 04:39:20 PM] …/common/seaf-db.c(589): Failed to connect to MySQL: Lost connection to MySQL server at ‘reading authorization packet’, system error: 110
[08/06/2020 04:39:39 PM] …/common/seaf-db.c(589): Failed to connect to MySQL: Lost connection to MySQL server at ‘reading authorization packet’, system error: 110
[08/06/2020 04:39:42 PM] …/common/seaf-db.c(589): Failed to connect to MySQL: Lost connection to MySQL server at ‘reading authorization packet’, system error: 110
[08/06/2020 05:01:23 PM] …/common/seaf-db.c(589): Failed to connect to MySQL: Lost connection to MySQL server at ‘reading authorization packet’, system error: 110
[08/06/2020 05:01:45 PM] …/common/seaf-db.c(589): Failed to connect to MySQL: Lost connection to MySQL server at ‘reading authorization packet’, system error: 110
[08/06/2020 05:01:55 PM] …/common/seaf-db.c(589): Failed to connect to MySQL: Lost connection to MySQL server at ‘reading authorization packet’, system error: 110
[08/06/2020 05:01:55 PM] …/common/seaf-db.c(589): Failed to connect to MySQL: Lost connection to MySQL server at ‘reading authorization packet’, system error: 110
[08/06/2020 05:09:20 PM] …/common/seaf-db.c(589): Failed to connect to MySQL: Lost connection to MySQL server at ‘reading authorization packet’, system error: 110

Some of these errors correspond in time to the errors in the .ccnet/seafile.log file from the client (the time on the server is set to UTC, 4 hrs ahead):

[08/06/20 13:01:18] http-tx-mgr.c(1157): Transfer repo ‘92688666’: (‘normal’, ‘check’) → (‘normal’, ‘commit’)
[08/06/20 13:01:18] http-tx-mgr.c(1157): Transfer repo ‘92688666’: (‘normal’, ‘commit’) → (‘normal’, ‘fs’)
[08/06/20 13:01:18] http-tx-mgr.c(1157): Transfer repo ‘92688666’: (‘normal’, ‘fs’) → (‘normal’, ‘data’)
[08/06/20 13:01:20] http-tx-mgr.c(1157): Transfer repo ‘92688666’: (‘normal’, ‘data’) → (‘finished’, ‘finished’)
[08/06/20 13:01:20] sync-mgr.c(582): Repo ‘EnvironmentalData’ sync state transition from ‘downloading’ to ‘synchronized’.
[08/06/20 13:01:22] sync-mgr.c(582): Repo ‘EnvironmentalData’ sync state transition from ‘synchronized’ to ‘committing’.
[08/06/20 13:01:22] repo-mgr.c(3842): All events are processed for repo 92688666-cc7d-4797-b663-1d4f42a57701.
[08/06/20 13:01:22] sync-mgr.c(582): Repo ‘EnvironmentalData’ sync state transition from ‘committing’ to ‘synchronized’.
[08/06/20 13:01:24] http-tx-mgr.c(2331): Bad response code for POST URLREMOVED/seafhttp/repo/head-commits-multi/: 500
[08/06/20 13:01:55] http-tx-mgr.c(2331): Bad response code for POST URLREMOVED/seafhttp/repo/head-commits-multi/: 500
[08/06/20 13:01:55] sync-mgr.c(621): Repo ‘EnvironmentalData’ sync state transition from ‘initializing’ to ‘error’: ‘Library deleted on server’.
[08/06/20 13:01:55] sync-mgr.c(832): repo EnvironmentalData(92688666) not found on server
[08/06/20 13:02:25] sync-mgr.c(582): Repo ‘EnvironmentalData’ sync state transition from ‘initializing’ to ‘downloading’.

Hey,

we encountered a similar issue with 7.1.3 and 7.1.4 Pro and discovered a large number of connections to the DB server (a few thousand, constantly opening/closing under enough load). This led to numerous problems in our own cloud-based setup because we were running into certain limits. We investigated the issue further in cooperation with the Seafile team (thanks!), which led to them moving (back?) to a connection pool for the DB connections of seafile-server in 7.1.6.

If you see a high number of connections in TIME-WAIT with ss -s or with ss -t -a -n | grep -c 3306 on your Seafile web servers, your system may be affected by the same issue.
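For example, a quick check from the Seafile host could look like this (a sketch only, assuming the database listens on the default port 3306; adjust the port if yours differs):

    # Socket summary -- look at the "timewait" count
    ss -s

    # Count all TCP sockets involving port 3306
    ss -t -a -n | grep -c 3306

    # Count only sockets to/from port 3306 that are currently in TIME-WAIT
    ss -t -n state time-wait | grep -c 3306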

You could also check your DB server's limits (which we did as well at first).
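On the MySQL/MariaDB side, the variables and status counters below are worth a look; the “Lost connection … at ‘reading authorization packet’” error in particular is often related to connect_timeout or to the server running into max_connections. The mysql invocation is just a sketch; the user and host are placeholders for your own credentials:

    # Connection limit vs. the peak number of connections the server has seen
    mysql -u root -p -e "SHOW VARIABLES LIKE 'max_connections'; SHOW GLOBAL STATUS LIKE 'Max_used_connections';"

    # Handshake timeout and the counter of aborted connection attempts
    mysql -u root -p -e "SHOW VARIABLES LIKE 'connect_timeout'; SHOW GLOBAL STATUS LIKE 'Aborted_connects';"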

With 7.1.6 the database connections for seafile-server should be much more stable. Keep in mind that seahub and seafdav still open a new connection to the DB for every single request since that’s the default behavior of Gunicorn/Django.
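For what it's worth, the per-request connections from seahub come from Django's default of CONN_MAX_AGE = 0. Seahub's settings file (seahub_settings.py) is a Python file, so a persistent-connection setup would look roughly like the sketch below. This is a generic Django option, not something I know the Seafile team to recommend, and the database name, user, password and host are placeholders:

    # seahub_settings.py -- sketch only; NAME/USER/PASSWORD/HOST are placeholders
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.mysql',
            'NAME': 'seahub_db',
            'USER': 'seafile',
            'PASSWORD': 'change-me',
            'HOST': 'db',
            'PORT': '3306',
            # Django's default is CONN_MAX_AGE = 0 (new connection per request);
            # a positive value keeps each worker's connection open for that many seconds.
            'CONN_MAX_AGE': 60,
        }
    }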

Best regards


Thanks, that is quite possible. Looking through the logs, these errors only started showing up in May, which is likely when I updated from 7.0.5 to 7.1.3.

Checking ss -t -a -n | grep -c 3306 gives me fluctuating numbers around 300 right now. Admittedly, I don’t know whether that is a normal or a high number.