Problem with Real-time Backup Server

I'm trying to set up a real-time backup server using version 6.3.9. Both servers are running Ubuntu Server 16.04 and working fine; they are configured with the same database names, and seahub is not running on the backup server. I've set up the correct field in seafile.conf as per the manual and restarted the seafile service on both servers, but they fail to sync.
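
To give an idea of the setup, the [backup] sections look roughly like this. The hostnames are placeholders and the key names are as I understand them from the Pro manual, so treat this as a sketch rather than my exact config and verify against the documentation for your version:

# seafile.conf on the primary server (sketch; hostnames are placeholders)
[backup]
backup_url = https://backup.example.com

# seafile.conf on the backup server (sketch)
[backup]
primary_url = https://primary.example.com
sync_poll_interval = 3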

Result of seaf-backup-cmd.sh on the primary server:

$ ./seaf-backup-cmd.sh status
Traceback (most recent call last):
  File "/opt/seafile/seafile-pro-server-6.3.9/seaf-backup-cmd.py", line 50, in <module>
    args.func(args)
  File "/opt/seafile/seafile-pro-server-6.3.9/seaf-backup-cmd.py", line 11, in show_backup_status
    ret_str = seafile_api.get_backup_status()
  File "/opt/seafile/seafile-pro-server-6.3.9/seafile/lib/python2.7/site-packages/seaserv/api.py", line 1124, in get_backup_status
    return seafserv_threaded_rpc.get_backup_status()
  File "/opt/seafile/seafile-pro-server-6.3.9/seafile/lib/python2.7/site-packages/pysearpc/client.py", line 127, in newfunc
    return fret(ret_str)
  File "/opt/seafile/seafile-pro-server-6.3.9/seafile/lib/python2.7/site-packages/pysearpc/client.py", line 25, in _fret_string
    raise SearpcError(dicts['err_msg'])
pysearpc.common.SearpcError: cannot find function get_backup_status.

On the backup server, seafile.log shows:

[02/25/19 22:31:47] ../common/mq-mgr.c(61): [mq client] mq cilent is started
[02/25/19 22:31:47] http-tx-mgr.c(2177): Sync polling timer triggered, start to fetch repo list from primary.
[02/25/19 22:31:48] http-tx-mgr.c(1028): Failed to get repo list from primary: Internal server error.

The firewall is configured with ports 80 and 443 open on both servers, although I'm using https for sync in seafile.conf.
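
For reference, opening those ports on Ubuntu 16.04 looks like this (assuming ufw is the firewall in use):

# Run on both servers to allow HTTP and HTTPS through the firewall
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw status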

Any ideas?

Can you first upgrade to the latest version, 6.3.12?

I've just done that; the problem persists with the same error messages.

I've also tried using http instead of https, but I still get the same error.

This command is only useful on the backup server. It's not supported on the primary server.
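
For example, run it on the backup server instead:

# On the backup server, from the seafile-pro-server directory
./seaf-backup-cmd.sh status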

You should check the seafile.log on the primary server for any error messages.

Hi Daniel, Jonathan,
Thank you for your assistance.
The problem seems to be solved. I found a configuration error on one of the servers (the one I tested after updating to 6.3.12). I had previously tested both with 6.3.9, at least one of which was correctly configured, and both were failing. After updating to 6.3.12 I tested on one server only, and that was the one that was wrongly configured. After correcting the configuration it started syncing without any issue. I then tested the other server and that too was working correctly.
I'm thinking it may have been an issue specific to 6.3.9, or I may have missed something from being too tired.
Thank you for your help.
Kind regards,

I’m still encountering issues with this.

The servers were originally working and I left them syncing overnight. I have 2 primary servers, one with 1.10TB and the other with 450GB. This morning I noticed that the servers were no longer syncing and the backup servers' CPU was maxing out; server 1 stopped syncing after 150GB and server 2 after 400GB.

The logs do not show any problems except for index.log:

02/27/2019 16:48:29 [ERROR] seafes:264 check_concurrent_update: another index task is running, quit now
02/27/2019 16:58:33 [INFO] root:210 main: storage: using filesystem storage backend
02/27/2019 16:58:33 [INFO] root:212 main: index office pdf: True
02/27/2019 16:58:33 [ERROR] seafes:264 check_concurrent_update: another index task is running, quit now
02/27/2019 17:08:28 [INFO] root:210 main: storage: using filesystem storage backend
02/27/2019 17:08:28 [INFO] root:212 main: index office pdf: True
02/27/2019 17:08:29 [ERROR] seafes:264 check_concurrent_update: another index task is running, quit now
02/27/2019 17:18:32 [INFO] root:210 main: storage: using filesystem storage backend
02/27/2019 17:18:32 [INFO] root:212 main: index office pdf: True
02/27/2019 17:18:32 [ERROR] seafes:264 check_concurrent_update: another index task is running, quit now

The process using most of the CPU is this one:

/usr/bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Dfile.encoding=UTF-8 -Djna.nosys=true -Des.path.home=/opt/seafile/seafile-pro-server-6.3.12/pro/elasticsearch -cp /opt/seafile/seafile-pro-server-6.3.12/pro/elasticsearch/lib/elasticsearch-2.4.5.jar:/opt/seafile/seafile-pro-server-6.3.12/pro/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch start -Des.path.logs=/opt/seafile/logs -Des.path.data=/opt/seafile/pro-data/search/data -Des.network.host=127.0.0.1 -Des.insecure.allow.root=true -p /opt/seafile/pids/elasticsearch.pid
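
That is the Elasticsearch instance bundled with Seafile Pro for search indexing. To confirm it against the PID file referenced in the command line above, something like this works:

# Show the Elasticsearch PID recorded by Seafile and watch its CPU usage
cat /opt/seafile/pids/elasticsearch.pid
top -p "$(cat /opt/seafile/pids/elasticsearch.pid)"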

I used the script to install Seafile, but I changed the nginx config file to match the manual and created systemd services instead of using the init.d script. I've also updated the conf files to use https.
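
To illustrate what I mean by the systemd services, the seafile unit is roughly along these lines (a sketch only; the seafile-server-latest symlink, the user and the dependencies are assumptions based on the /opt/seafile layout above, not an exact copy of my unit):

[Unit]
Description=Seafile server
After=network.target mysql.service

[Service]
Type=forking
# seafile-server-latest is the symlink the setup/upgrade scripts create under /opt/seafile
ExecStart=/opt/seafile/seafile-server-latest/seafile.sh start
ExecStop=/opt/seafile/seafile-server-latest/seafile.sh stop
User=seafile

[Install]
WantedBy=multi-user.target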

Thank you in advance for your help.

Hi

You cannot use one backup server for two primary servers. You should set up two separate backup servers, one for each primary server.

Apologies if I did not explain myself well; I do have two primary servers and two backup servers.

I believe I found the cause of the problem this time around: when setting up the servers with the installation script, it automatically enables file indexing, office document previews, etc. Most of the CPU was being used by indexing and scanning office documents, and the sync would stall.

I have now disabled almost everything in seafevents.conf and restarted the servers. It took a long time for the sync to start again (I ended up leaving it overnight), but this morning it is finally syncing again.
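
For anyone else hitting this, the kind of seafevents.conf change I mean is roughly the following (section and option names as I understand them from the Pro manual; verify against your version before applying):

# seafevents.conf (sketch) - turn off full-text indexing and office preview
[INDEX FILES]
enabled = false
index_office_pdf = false

[OFFICE CONVERTER]
enabled = false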

Once again, thanks for your time and help.