Indexing not working in Seafile Pro 6.2.4

Hello,

I recently upgraded to Seafile Pro 6.2.4 and automatic file indexing is not working.

The only way it works is when I manually update the index using

./pro/pro.py search --update

Unfortunately, I see no errors in the logs.
index.log:

[12/30/2017 23:56:50] storage: using filesystem storage backend
[12/30/2017 23:56:50] index office pdf: True
[12/31/2017 00:06:50] storage: using filesystem storage backend
[12/31/2017 00:06:50] index office pdf: True
[12/31/2017 00:16:50] storage: using filesystem storage backend
[12/31/2017 00:16:50] index office pdf: True

Do you have any advice for diagnosing the problem?

It seems that the seafevents process is being restarted every 10 minutes. Can you check seafevents.log?

Hi Daniel,

Thanks for your reply.
Here is my seafevents.log; I thought the 10-minute timeframe was the delay between two indexing runs.

[2017-12-31 11:04:07,767] [INFO] starts to index files
[2017-12-31 11:04:07,767] [DEBUG] Running command: "/usr/bin/python2.7" "-m" "seafes.index_local" "--logfile" "/opt/seafile/logs/index.log" "update", cwd = /opt/seafile/seafile-pro-server-6.2.4/pro/python/seafes
[2017-12-31 11:14:07,859] [INFO] starts to index files
[2017-12-31 11:14:07,860] [DEBUG] Running command: "/usr/bin/python2.7" "-m" "seafes.index_local" "--logfile" "/opt/seafile/logs/index.log" "update", cwd = /opt/seafile/seafile-pro-server-6.2.4/pro/python/seafes
[2017-12-31 11:24:07,953] [INFO] starts to index files
[2017-12-31 11:24:07,955] [DEBUG] Running command: "/usr/bin/python2.7" "-m" "seafes.index_local" "--logfile" "/opt/seafile/logs/index.log" "update", cwd = /opt/seafile/seafile-pro-server-6.2.4/pro/python/seafes

Hi.

How did you start seafevents?

Can you post your seafevents configuration file?

I think there is a problem with the connection to Elasticsearch. You can update the index_local.py file as follows to confirm.

Around line 36:

class IndexLocal(object):
    """ Independent update index.
    """
    def __init__(self, es):
        logger.info("breakpoint one")
        self.fileindexupdater = FileIndexUpdater(es)
        logger.info("breakpoint two")
        self.error_counter = 0
        self.worker_list = []
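
You can also test the Elasticsearch connection directly with a small script. This is just a rough sketch; it assumes the bundled Elasticsearch listens on localhost:9200 (the default in my setup), so adjust the host and port if yours differs:

# Quick check that Elasticsearch answers; run it with the same
# python2.7 that seafevents uses.
import urllib2

try:
    # The root endpoint returns cluster information as JSON when ES is up.
    resp = urllib2.urlopen('http://localhost:9200', timeout=5)
    print('Elasticsearch is reachable: %s' % resp.read())
except Exception as e:
    print('Elasticsearch connection failed: %s' % e)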

I will wait for your response.

Hello,

I don't know how seafevents is started; it is handled by Seafile.
However, here is the ps output showing that seafevents is running:

seafile 1325 0.1 0.3 240740 38304 ? Sl 2017 4:47 /usr/bin/python2.7 -m seafevents.main --config-file /opt/seafile/conf/seafevents.conf --logfile /opt/seafile/logs/seafevents.log -P /opt/seafile/pids/seafevents.pid

Here is my seafevents.conf (comment lines are prefixed with #; the forum had rendered them in bold):

[DATABASE]
type = mysql
host = 127.0.0.1
port = 12346
username = aaaaaa
password = xxxxxxxxxxxxxxx
name = bbbbbbbbb

[AUDIT]
enabled = true

[INDEX FILES]
enabled = true
interval = 10m

# If true, indexes the contents of office/pdf files while updating the search index.
# Note: If you change this option from "false" to "true", then you need to clear the search index and update the index again. See the FAQ for details.
index_office_pdf = true

[OFFICE CONVERTER]
enabled = true
workers = 1

# how many pages are allowed to be previewed online. Default is 50 pages.
max-pages = 50

# the max size of documents allowed to be previewed online, in MB. Default is 10 MB.
# Previewing a large file (for example >30 MB) online is likely to freeze the browser.
max-size = 10

[SEAHUB EMAIL]
enabled = true

# interval of sending Seahub email. Can be s(seconds), m(minutes), h(hours), d(days).
interval = 30m
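
As far as I can tell, interval values like 10m and 30m are converted to seconds (the seafevents log reports 600 sec and 1800 sec for them). Here is a toy sketch of that conversion, just as an illustration; it is not seafevents' actual parser:

# Hypothetical illustration of how "10m"/"30m"-style intervals map to
# seconds; not seafevents' real code.
UNITS = {'s': 1, 'm': 60, 'h': 3600, 'd': 86400}

def parse_interval(value):
    value = value.strip().lower()
    if value[-1] in UNITS:
        return int(value[:-1]) * UNITS[value[-1]]
    return int(value)  # a bare number is taken as seconds

assert parse_interval('10m') == 600
assert parse_interval('30m') == 1800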

I have updated seafile-pro-server-6.2.4/pro/python/seafes/index_local.py with the debug lines, but nothing appears in seafevents.log.

Hi.

Did you restart the server?

The debug lines will appear in index.log, not seafevents.log.

OK, after a reboot, nothing new appears in index.log (I restarted the server at 11:16):

[01/02/2018 11:14:29] storage: using filesystem storage backend
[01/02/2018 11:14:29] index office pdf: True
[01/02/2018 11:27:12] storage: using filesystem storage backend
[01/02/2018 11:27:12] index office pdf: True

But I get an error in seafevents.log every 10 seconds:

[2018-01-02 11:31:51,926] [INFO] remove pidfile /opt/seafile/pids/seafevents.pid
[2018-01-02 11:32:01,939] [INFO] audit is enabled
[2018-01-02 11:32:01,940] [INFO] [seafevents] database: mysql, name: seahub-db
[2018-01-02 11:32:01,988] [INFO] [seafevents] database: mysql, name: seahub-db
[2018-01-02 11:32:02,011] [DEBUG] seafes enabled: True
[2018-01-02 11:32:02,011] [DEBUG] seafes dir: /opt/seafile/seafile-pro-server-6.2.4/pro/python/seafes
[2018-01-02 11:32:02,011] [DEBUG] seafes logfile: /opt/seafile/logs/index.log
[2018-01-02 11:32:02,012] [DEBUG] seafes index interval: 600 sec
[2018-01-02 11:32:02,012] [DEBUG] seafes index office/pdf: True
[2018-01-02 11:32:02,012] [DEBUG] seahub email enabled: True
[2018-01-02 11:32:02,012] [DEBUG] seahub dir: /opt/seafile/seafile-pro-server-6.2.4/seahub
[2018-01-02 11:32:02,012] [DEBUG] send seahub email interval: 1800 sec
[2018-01-02 11:32:02,012] [INFO] LDAP section is not set, disable ldap sync.
[2018-01-02 11:32:02,013] [INFO] [virus_scan] scan_command option is not found in /opt/seafile/conf/seafile.conf, disable virus scan.
[2018-01-02 11:32:02,013] [INFO] [seafevents] database: mysql, name: seahub-db
[2018-01-02 11:32:02,031] [DEBUG] office enabled: True
[2018-01-02 11:32:02,031] [DEBUG] office convert workers: 1
[2018-01-02 11:32:02,031] [DEBUG] office outputdir: /tmp/seafile-office-output
[2018-01-02 11:32:02,031] [DEBUG] office convert max pages: 50
[2018-01-02 11:32:02,031] [DEBUG] office convert max size: 10 MB
[2018-01-02 11:32:02,031] [INFO] login record updater disabled
[2018-01-02 11:32:02,031] [DEBUG] using config file /opt/seafile/conf/ccnet.conf
[2018-01-02 11:32:02,032] [INFO] try to connect to ccnet-server…
[2018-01-02 11:32:02,032] [INFO] connected to ccnet server
[2018-01-02 11:32:02,032] [DEBUG] using config file /opt/seafile/conf/ccnet.conf
[2018-01-02 11:32:02,033] [ERROR] Another instance is already running
Traceback (most recent call last):
  File "/opt/seafile/seafile-pro-server-6.2.4/pro/python/seafevents/app/app.py", line 114, in connect_ccnet
    self._sync_client.register_service_sync('seafevents-events-dummy-service', 'rpc-inner')
  File "/opt/seafile/seafile-pro-server-6.2.4/seafile/lib64/python2.7/site-packages/ccnet/sync_client.py", line 94, in register_service_sync
    self.send_cmd(cmd)
  File "/opt/seafile/seafile-pro-server-6.2.4/seafile/lib64/python2.7/site-packages/ccnet/sync_client.py", line 49, in send_cmd
    raise RuntimeError('Failed to send-cmd: %s %s' % (resp.code, resp.code_msg))
RuntimeError: Failed to send-cmd: 516 The service existed
[2018-01-02 11:32:02,034] [INFO] exit with code 1
[2018-01-02 11:32:02,034] [INFO] remove pidfile /opt/seafile/pids/seafevents.pid

I restarted the server another time and now I don't get the previous error every 10 seconds.
But files are still not indexed every 10 minutes.
I'll try clearing the index again.

Sorry for the delay.

It looks like you haven't killed all the processes before restarting.

I think you need to restart seahub and seafile, like:

./seahub.sh stop
./seafile.sh stop   # make sure all processes have stopped; if not, you can kill them.
# the processes include seahub, seafile, seafevents, and seafes.
# then you can start the server again:
./seafile.sh start
./seahub.sh start
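
If you want to double-check from Python which of these processes are still alive, here is a rough sketch; it assumes a Linux host with /proc and matches on the command lines shown in your ps output:

# Scan /proc for leftover Seafile-related processes before restarting.
import os

def find_processes(keywords):
    hits = []
    for pid in os.listdir('/proc'):
        if not pid.isdigit():
            continue  # not a process directory
        try:
            with open('/proc/%s/cmdline' % pid) as f:
                cmdline = f.read().replace('\0', ' ').strip()
        except IOError:
            continue  # the process exited while we were scanning
        if any(k in cmdline for k in keywords):
            hits.append((pid, cmdline))
    return hits

for pid, cmd in find_processes(['seafevents', 'seafes', 'seahub']):
    print('%s  %s' % (pid, cmd))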

Hi, I restarted the service another time without success :frowning:
I don't understand why I don't see the debug messages you asked me to add.

Hello.

They should be written to index.log at the beginning of a normal start. At least that is what happens on my setup.

OK,
I found the problem!
When I updated Seafile, I was connected as root, and it seems one .lock file ended up owned by root, so the seafile user could no longer run the indexing. Unfortunately, the logs didn't point that out.
I fixed it by changing the owner.
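
In case it helps someone else, root-owned files left behind by an upgrade can be found with a small script like this (a rough sketch; it assumes the install lives under /opt/seafile, as in my paths above):

# List files under the Seafile directory that are still owned by root.
import os

SEAFILE_DIR = '/opt/seafile'  # adjust to your install location

for dirpath, dirnames, filenames in os.walk(SEAFILE_DIR):
    for name in filenames:
        path = os.path.join(dirpath, name)
        try:
            if os.stat(path).st_uid == 0:  # uid 0 is root
                print(path)
        except OSError:
            pass  # broken symlink or unreadable file; skip it

Anything it prints can then be chowned back to the seafile user.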

Thanks a lot for your help, which pointed me in the right direction :slight_smile:
Bye

Hello, I had a similar problem.

Hi Fab, which .lock file did you notice was causing the issue? Could you share it? Thanks.