Seafile Server 7.0.5 on Raspbian suddenly unreachable

Hello,

I have the following setup:
Seafile Server 7.0.5 (Raspberry Pi version)
current Raspbian on a Raspberry Pi
current nginx web server

The problem:
Until noon today the setup ran flawlessly. Then the Raspberry Pi was powered off for a few hours, and since booting it back up every client reports a sync error.

Opening the web page yields the following message:
> # Page unavailable
> Sorry, but the requested page is unavailable due to a server hiccup.
> Our engineers have been notified, so check back later.

Pi-hole also runs on the same Raspberry Pi, likewise behind nginx. It continues to work without any problems; the local Pi-hole web page is still reachable through nginx.

Nothing in the configuration has been changed recently. Where would one start looking?

Is this a message that nginx shows because it cannot reach the Seahub service, or does it come from Seahub itself, meaning I need to look for the error on the Seafile/Seahub side?

Note:
The Pi-hole web page is reached on port 80,
Seafile on port 8004.
This setup has run without problems since the beginning of January.
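For context, a hypothetical sketch of how that port split typically looks in nginx. The upstream ports (8000 for Seahub, 8082 for the seaf-server file service) are Seafile's defaults and are assumed here, not taken from the actual configuration:

```nginx
# Hypothetical sketch - not the actual config on this machine.
server {
    listen 80;                              # Pi-hole admin interface
    # ... Pi-hole location blocks ...
}

server {
    listen 8004;                            # Seafile
    location / {
        proxy_pass http://127.0.0.1:8000;   # Seahub (default port, assumed)
        proxy_set_header Host $host;
    }
    location /seafhttp {
        rewrite ^/seafhttp(.*)$ $1 break;
        proxy_pass http://127.0.0.1:8082;   # seaf-server file transfer (default, assumed)
    }
}
```

In a layout like this the "Page unavailable" message would come from Seahub (or from nginx if the `proxy_pass` upstream is down), while Pi-hole on port 80 is unaffected.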

Take a look at the Seafile logs … you will usually find something there.

I checked the “logs” directory.

The file “seafile.log” contains the following (heavily shortened; there are a great many of these entries in the log):
[04/11/20 11:07:41] ../common/seaf-db.c(124): Failed to get database connection: Failed to connect to MySQL: Can't connect to MySQL server on '127.0.0.1' (111).
[04/11/20 11:07:41] http-server.c(853): DB error when check repo existence.
[04/11/20 11:07:41] ../common/seaf-db.c(124): Failed to get database connection: Failed to connect to MySQL: Can't connect to MySQL server on '127.0.0.1' (111).
[04/11/20 11:07:41] http-server.c(853): DB error when check repo existence.
[04/11/20 11:07:41] ../common/seaf-db.c(124): Failed to get database connection: Failed to connect to MySQL: Can't connect to MySQL server on '127.0.0.1' (111).
[04/11/20 11:07:41] http-server.c(853): DB error when check repo existence.
[04/11/20 11:07:41] ../common/seaf-db.c(124): Failed to get database connection: Failed to connect to MySQL: Can't connect to MySQL server on '127.0.0.1' (111).
[04/11/20 11:07:41] http-server.c(853): DB error when check repo existence.
[04/11/20 11:07:41] ../common/seaf-db.c(124): Failed to get database connection: Failed to connect to MySQL: Can't connect to MySQL server on '127.0.0.1' (111).
[04/11/20 11:07:41] http-server.c(853): DB error when check repo existence.

So it seems to be a database problem. Unfortunately I know next to nothing about databases; I was simply glad when everything worked back then. As far as I know the database was not updated in the last few days either.
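The number in parentheses narrows it down: errno 111 is ECONNREFUSED, i.e. nothing is accepting connections on 127.0.0.1. A quick sketch of a check, assuming MySQL's default port 3306:

```shell
# Errno 111 (ECONNREFUSED) means no process is listening on the target port.
# Probe MySQL's default port 3306 on localhost (the port is an assumption;
# adjust it if the server is configured differently):
if (exec 3<>/dev/tcp/127.0.0.1/3306) 2>/dev/null; then
    echo "port 3306 open - a server is listening"
else
    echo "port 3306 refused - no database server is listening"
fi
```

`/dev/tcp` is a bash feature; `ss -tlnp` would instead list all listening ports at once.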

Where can I dig further?

Addendum:
I looked at the running processes with “sudo ps -e”. I cannot find any entry that points to the MySQL database. What would the database's entry be called? I suspect it is not being started.
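For reference (an assumption based on the Debian/Raspbian packaging): the MariaDB server process appears as `mysqld` on this release. A sketch for checking it:

```shell
# The MariaDB 10.0 server process is named "mysqld" (newer releases: "mariadbd").
# If pgrep prints nothing, the database server is not running.
pgrep -a -x mysqld || echo "no mysqld process running"
# Ask systemd directly; the unit name "mysql.service" matches the Debian packaging:
systemctl is-active mysql.service 2>/dev/null || echo "mysql.service is not active"
```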

So, another addendum:

“systemctl status mysql.service” gives the following output:

● mysql.service - LSB: Start and stop the mysql database server daemon
   Loaded: loaded (/etc/init.d/mysql; generated)
   Active: failed (Result: exit-code) since Sat 2020-04-11 11:27:56 CEST; 53s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 1758 ExecStart=/etc/init.d/mysql start (code=exited, status=1/FAILURE)

Apr 11 11:27:25 server mysqld_safe[1945]: mysqld from pid file /var/run/mysqld/mysqld.pid ended
Apr 11 11:27:56 server /etc/init.d/mysql[2225]: 0 processes alive and '/usr/bin/mysqladmin --defaults-file=/etc/mysql/debian.cnf ping' resulted in
Apr 11 11:27:56 server /etc/init.d/mysql[2225]: [61B blob data]
Apr 11 11:27:56 server /etc/init.d/mysql[2225]: error: 'Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2 "No such file or directory")'
Apr 11 11:27:56 server /etc/init.d/mysql[2225]: Check that mysqld is running and that the socket: '/var/run/mysqld/mysqld.sock' exists!
Apr 11 11:27:56 server /etc/init.d/mysql[2225]:
Apr 11 11:27:56 server mysql[1758]: Starting MariaDB database server: mysqld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . failed!
Apr 11 11:27:56 server systemd[1]: mysql.service: Control process exited, code=exited, status=1/FAILURE
Apr 11 11:27:56 server systemd[1]: mysql.service: Failed with result 'exit-code'.
Apr 11 11:27:56 server systemd[1]: Failed to start LSB: Start and stop the mysql database server daemon.

The directory “/var/run/mysqld/” is empty;
“ls” produces no output.

Now I am at the end of my knowledge. What can I do here?

Look at the database's logs.

Probably corrupt, which points to failing storage.
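Concretely, that means reading the MariaDB error log. A sketch, assuming the Debian/Raspbian default path `/var/log/mysql/`:

```shell
# Show the most recent entries of the MariaDB error log.
# The path is the Debian/Raspbian default and may differ on other setups.
tail -n 100 /var/log/mysql/error.log 2>/dev/null \
    || echo "no error.log at the default path; try: journalctl -u mysql.service"
```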

Here is error.log.1 from “/var/log/mysql/”:

200411 23:56:42 [Note] InnoDB: Using mutexes to ref count buffer pool pages
200411 23:56:42 [Note] InnoDB: The InnoDB memory heap is disabled
200411 23:56:42 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
200411 23:56:42 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
200411 23:56:42 [Note] InnoDB: Compressed tables use zlib 1.2.8
200411 23:56:42 [Note] InnoDB: Using Linux native AIO
200411 23:56:42 [Note] InnoDB: Not using CPU crc32 instructions
200411 23:56:42 [Note] InnoDB: Initializing buffer pool, size = 128.0M
200411 23:56:42 [Note] InnoDB: Completed initialization of buffer pool
200411 23:56:43 [Note] InnoDB: Highest supported file format is Barracuda.
200411 23:56:43 [Note] InnoDB: The log sequence numbers 1623589 and 1623589 in ibdata files do not match the log sequen…
200411 23:56:43 [Note] InnoDB: Database was not shutdown normally!
200411 23:56:43 [Note] InnoDB: Starting crash recovery.
200411 23:56:43 [Note] InnoDB: Reading tablespace information from the .ibd files...
200411 23:56:43 [Note] InnoDB: Restoring possible half-written data pages
200411 23:56:43 [Note] InnoDB: from the doublewrite buffer...
InnoDB: Database page corruption on disk or a failed
InnoDB: file read of page 279.
InnoDB: You may have to recover from a backup.
2020-04-11 23:56:43 b6f54210 InnoDB: Page dump in ascii and hex (16384 bytes):
 len 16384; hex ccfab26b000001170000000000000000000000000048566f0006000000000000000000000000fffffffe0000000000000015000…
InnoDB: End of page dump
2020-04-11 23:56:43 b6f54210 InnoDB: uncompressed page, stored checksum in field1 3438981739, calculated checksums for …
InnoDB: Page may be a system page
InnoDB: Database page corruption on disk or a failed
InnoDB: file read of page 279.
InnoDB: You may have to recover from a backup.
InnoDB: It is also possible that your operating
InnoDB: system has corrupted its own file cache
InnoDB: and rebooting your computer removes the
InnoDB: error.
InnoDB: If the corrupt page is an index page
InnoDB: you can also try to fix the corruption
InnoDB: by dumping, dropping, and reimporting
InnoDB: the corrupt table. You can use CHECK
InnoDB: TABLE to scan your table for corruption.
InnoDB: See also http://dev.mysql.com/doc/refman/5.6/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.
InnoDB: Ending processing because of a corrupt database page.
2020-04-11 23:56:43 b6f54210  InnoDB: Assertion failure in thread 3069526544 in file buf0buf.cc line 4527
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.6/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.
200411 23:56:43 [ERROR] mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
To report this bug, see https://mariadb.com/kb/en/reporting-bugs                        
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,                                                                
something is definitely wrong and this may fail.
Server version: 10.0.28-MariaDB-2+b1
key_buffer_size=16777216
read_buffer_size=131072
max_used_connections=0
max_threads=153
thread_count=0
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 351246 K  bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
Thread pointer: 0x0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x0 thread_stack 0x30000
The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
information that should help you find out what is causing the crash.
addr2line: 'mysqld': No such file

Can this be repaired, or is it better to get a new memory card right away and set everything up from scratch?
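Before wiping the card, it may be worth trying InnoDB forced recovery, which the log itself points to: start MariaDB in a restricted mode, dump whatever is salvageable, then rebuild. A sketch, assuming the Debian config layout:

```ini
# /etc/mysql/my.cnf (or a file under /etc/mysql/conf.d/ - the path depends on the install)
[mysqld]
# Start at 1 and raise the value step by step only if the server still will not start;
# values of 4 and above can permanently change data, so use them as a last resort.
innodb_force_recovery = 1
```

If the server then comes up, `mysqldump --all-databases > all.sql` saves the data; afterwards remove the option again, set up a fresh data directory (or the new card), and import the dump. Since the assertion failure points at a corrupt page on disk, replacing the SD card is advisable either way.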