"Page unavailable" when using search box in web client

I'm using Seafile Server Pro 6.1.4 behind Apache on Ubuntu 16.04.2. When I try to use the search box in the web client, I always get the following error message:

Page unavailable
Sorry, but the requested page is unavailable due to a server hiccup.
Our engineers have been notified, so check back later.

Any ideas?

Check seahub_django_request.log. Also check the ElasticSearch component.
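As a first check, something like this should show whether ElasticSearch is reachable at all (assuming the integrated instance listens on the default port 9200):

# does the integrated ElasticSearch answer on its default port?
curl -s http://localhost:9200
# is an ElasticSearch process running at all?
ps aux | grep -i [e]lasticsearch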

seahub_django_request.log does indeed contain some errors and warnings. The relevant error seems to be:

2017-07-29 21:59:03,779 [ERROR] django.request:256 handle_uncaught_exception Internal Server Error: /search/
Traceback (most recent call last):
  File "/media/raid1/seafile/seafile-pro-server-6.1.4/seahub/thirdpart/Django-1.8.18-py2.7.egg/django/core/handlers/base.py", line 132, in get_response
    response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File "/media/raid1/seafile/seafile-pro-server-6.1.4/seahub/seahub/auth/decorators.py", line 27, in _wrapped_view
    return view_func(request, *args, **kwargs)
  File "/media/raid1/seafile/seafile-pro-server-6.1.4/seahub-extra/seahub_extra/search/views.py", line 89, in search
    results, total = search_repo_file_by_name(request, repo, keyword, suffixes, start, size)
  File "/media/raid1/seafile/seafile-pro-server-6.1.4/seahub-extra/seahub_extra/search/utils.py", line 82, in search_repo_file_by_name
    files_found, total = es_search([repo.id], keyword, suffixes, start, size)
  File "/media/raid1/seafile/seafile-pro-server-6.1.4/pro/python/seafes/__init__.py", line 8, in es_search
    files_index = RepoFilesIndex(conn)
  File "/media/raid1/seafile/seafile-pro-server-6.1.4/pro/python/seafes/indexes/repo_files.py", line 91, in __init__
    self.create_index_if_missing(index_settings=self.index_settings)
  File "/media/raid1/seafile/seafile-pro-server-6.1.4/pro/python/seafes/indexes/base.py", line 21, in create_index_if_missing
    if not self.es.indices.exists(index=self.INDEX_NAME):
  File "/media/raid1/seafile/seafile-pro-server-6.1.4/pro/python/elasticsearch-2.4.1-py2.7.egg/elasticsearch/client/utils.py", line 69, in _wrapped
    return func(*args, params=params, **kwargs)
  File "/media/raid1/seafile/seafile-pro-server-6.1.4/pro/python/elasticsearch-2.4.1-py2.7.egg/elasticsearch/client/indices.py", line 225, in exists
    params=params)
  File "/media/raid1/seafile/seafile-pro-server-6.1.4/pro/python/elasticsearch-2.4.1-py2.7.egg/elasticsearch/transport.py", line 327, in perform_request
    status, headers, data = connection.perform_request(method, url, params, body, ignore=ignore, timeout=timeout)
  File "/media/raid1/seafile/seafile-pro-server-6.1.4/pro/python/elasticsearch-2.4.1-py2.7.egg/elasticsearch/connection/http_urllib3.py", line 106, in perform_request
    raise ConnectionError('N/A', str(e), e)
ConnectionError: ConnectionError(<urllib3.connection.HTTPConnection object at 0x7f61c0c46150>: Failed to establish a new connection: [Errno 111] Connection refused) caused by: NewConnectionError(<urllib3.connection.HTTPConnection object at 0x7f61c0c46150>: Failed to establish a new connection: [Errno 111] Connection refused)

What does that tell me?
I wondered whether the problem might be related to Seafile running behind an Apache virtual host. However, I configured this setup according to the manual (https://manual.seafile.com/deploy/deploy_with_apache.html): the custom service URL is set in ccnet.conf and seahub_settings.py (see the sketch below), and I copied and adapted the Apache vhost config from the manual. The problem also persists when I disable all firewall rules.
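Specifically, the service URL settings look like this (sketched with my domain as a placeholder):

# ccnet.conf
[General]
SERVICE_URL = https://myseafile.domain.com

# seahub_settings.py
FILE_SERVER_ROOT = 'https://myseafile.domain.com/seafhttp'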

What is probably special about my setup is that Apache serves the Seafile vhost on two different ports (443 and 334), both proxied to http://127.0.0.1:8082. Connections to port 443 are accepted only from my LAN. Port 334 is used for NAT behind my router, that is, the router forwards incoming WAN requests for myseafile.domain.com:443 to INTERNAL.IP.OF.SERVER:334. I set this up to expose only the Seafile vhost to the Internet and not any other HTTPS host on my server.
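Schematically, the vhost part of the setup looks like this (a stripped-down sketch; certificates and the manual's proxy/rewrite details omitted):

# Apache listens on both ports (ports.conf)
Listen 443
Listen 334

<VirtualHost *:443>
    # LAN-facing port
    ServerName myseafile.domain.com
    SSLEngine on
    ProxyPass / http://127.0.0.1:8082/
    ProxyPassReverse / http://127.0.0.1:8082/
</VirtualHost>

<VirtualHost *:334>
    # WAN-facing port; the router NATs myseafile.domain.com:443 to this port
    ServerName myseafile.domain.com
    SSLEngine on
    ProxyPass / http://127.0.0.1:8082/
    ProxyPassReverse / http://127.0.0.1:8082/
</VirtualHost>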

Also, my server runs a local DNS that redirects LAN requests for some domains to local network devices. That is, for a host on the Internet, myseafile.domain.com resolves to my router's public IP (and is NATted to my server), while for a host on my LAN, myseafile.domain.com resolves directly to the server's LAN IP.
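For illustration, in dnsmasq syntax the LAN view would be something like this (the LAN IP here is just a placeholder; my actual DNS server may differ):

# LAN clients resolve the name straight to the server's LAN IP
address=/myseafile.domain.com/192.168.1.10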

Although this setup is probably special, everything works fine with it apart from the search box problem.

I hope this information helps to identify the cause of the problem.

Update: I managed to fix the original error message ("Page unavailable"). The problem was that the integrated ElasticSearch does not work with Java 9. Hence, I uninstalled the following packages:

  • openjdk-9-jre
  • openjdk-9-jre-headless

… and installed the following packages instead:

  • openjdk-8-jre
  • openjdk-8-jre-headless

That got ElasticSearch working, i.e., search results are now displayed in the web GUI.
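For reference, the package swap boils down to:

sudo apt-get remove openjdk-9-jre openjdk-9-jre-headless
sudo apt-get install openjdk-8-jre openjdk-8-jre-headless
java -version   # should now report a 1.8.x runtime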

However, the logs still show a recurring error pattern. In index.log, I see the following message for virtually all of my Seafile libraries:

[08/12/2017 11:19:59] 8c9ed8da-dd3e-4b6a-baaf-64081c9239a3: in recovery
[08/12/2017 11:19:59] Error when index repo 8c9ed8da-dd3e-4b6a-baaf-64081c9239a3
Traceback (most recent call last):
  File "/media/raid1/seafile/seafile-pro-server-6.1.4/pro/python/seafes/file_index_updater.py", line 50, in run
    self.update_repo(e.repo_id, e.commit_id)
  File "/media/raid1/seafile/seafile-pro-server-6.1.4/pro/python/seafes/file_index_updater.py", line 116, in update_repo
    self.check_recovery(repo_id)
  File "/media/raid1/seafile/seafile-pro-server-6.1.4/pro/python/seafes/file_index_updater.py", line 112, in check_recovery
    self.update_files_index(repo_id, old, new)
  File "/media/raid1/seafile/seafile-pro-server-6.1.4/pro/python/seafes/file_index_updater.py", line 100, in update_files_index
    self.files_index.add_files(repo_id, version, added_files)
  File "/media/raid1/seafile/seafile-pro-server-6.1.4/pro/python/seafes/indexes/repo_files.py", line 117, in add_files
    self.add_file_to_index(repo_id, version, path, obj_id)
  File "/media/raid1/seafile/seafile-pro-server-6.1.4/pro/python/seafes/indexes/repo_files.py", line 132, in add_file_to_index
    content = extractor.extract(repo_id, version, obj_id, path) if extractor else None
  File "/media/raid1/seafile/seafile-pro-server-6.1.4/pro/python/seafes/extract.py", line 168, in extract
    content = f.get_content(limit=self.text_size_limit)
  File "/media/raid1/seafile/seafile-pro-server-6.1.4/seahub/thirdpart/seafobj/fs.py", line 114, in get_content
    self._content = stream.read(limit)
  File "/media/raid1/seafile/seafile-pro-server-6.1.4/seahub/thirdpart/seafobj/fs.py", line 139, in read
    blocks[self.block_idx])
  File "/media/raid1/seafile/seafile-pro-server-6.1.4/seahub/thirdpart/seafobj/blocks.py", line 13, in load_block
    data = self.obj_store.read_obj(repo_id, version, obj_id)
  File "/media/raid1/seafile/seafile-pro-server-6.1.4/seahub/thirdpart/seafobj/backends/base.py", line 12, in read_obj
    data = self.read_obj_raw(repo_id, version, obj_id)
  File "/media/raid1/seafile/seafile-pro-server-6.1.4/seahub/thirdpart/seafobj/backends/filesystem.py", line 20, in read_obj_raw
    with open(path, 'rb') as fp:
IOError: [Errno 2] No such file or directory: u'/media/raid1/seafile/seafile-data/storage/blocks/8c9ed8da-dd3e-4b6a-baaf-64081c9239a3/86/63a70ef30a5987b440a621483af2044bae1e0a'

I think it is related to the following warning in elasticsearch.log:

[2017-08-12 00:49:55,810][WARN ][env ] [Maverick] max file descriptors [4096] for elasticsearch process likely too low, consider increasing to at least [65536]

As described in the ElasticSearch manual, I permanently raised the limit to 65536 in /etc/security/limits.conf for the user under which Seafile runs (cf. the "File Descriptors" page of the Elasticsearch guide). However, ElasticSearch still prints the above warning and still reports only 4096 as its file descriptor limit:
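For completeness, the entries I added to /etc/security/limits.conf look like this (seafile being the user the server runs under):

seafile soft nofile 65536
seafile hard nofile 65536

# in a fresh login shell of that user, the new limit should be visible:
su - seafile -c 'ulimit -n'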

curl -XGET 'localhost:9200/_nodes/stats/process?filter_path=**.max_file_descriptors&pretty'
{
  "nodes" : {
    "u2lprTMRQQ2FZLPqMzmVKw" : {
      "process" : {
        "max_file_descriptors" : 4096
      }
    }
  }
}

Question:
How can I increase the max number of file descriptors for the ElasticSearch instance integrated in seafile?
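
One suspicion: limits.conf is applied by PAM only to login sessions, so a daemon started from a boot script may never see the new limit. Would raising it directly in the startup script be the right approach, e.g. like this?

# in the script that launches Seafile/ElasticSearch, before the daemons start
ulimit -n 65536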