OK, I gave it another try, and I want to thank you for pointing out the volume mapping in my original docker-compose. I'm now using the same one downloaded from Seafile per the manual. Instead of trying to shoehorn it into the folder/file structure I'm used to seeing in Unraid, I followed the steps in the manual, changing only the port to an unused one.
version: '2.0'
services:
  db:
    image: mariadb:10.11
    container_name: seafile-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=db_dev  # Requested, set the root's password of MySQL service.
      - MYSQL_LOG_CONSOLE=true
    volumes:
      - /opt/seafile-mysql/db:/var/lib/mysql  # Requested, specifies the path to MySQL data persistent store.
    networks:
      - seafile-net

  memcached:
    image: memcached:1.6.18
    container_name: seafile-memcached
    entrypoint: memcached -m 256
    networks:
      - seafile-net

  elasticsearch:
    image: elasticsearch:8.6.2
    container_name: seafile-elasticsearch
    environment:
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 2g
    volumes:
      - /opt/seafile-elasticsearch/data:/usr/share/elasticsearch/data  # Requested, specifies the path to Elasticsearch data persistent store.
    networks:
      - seafile-net

  seafile:
    image: docker.seadrive.org/seafileltd/seafile-pro-mc:latest
    container_name: seafile
    ports:
      - "8888:80"
      # - "443:443"  # If https is enabled, cancel the comment.
    volumes:
      - /opt/seafile-data:/shared  # Requested, specifies the path to Seafile data persistent store.
    environment:
      - DB_HOST=db
      - DB_ROOT_PASSWD=db_dev  # Requested, the value should be root's password of MySQL service.
      # - TIME_ZONE=Asia/Shanghai  # Optional, default is UTC. Should be uncomment and set to your local time zone.
      - SEAFILE_ADMIN_EMAIL=me@example.com  # Specifies Seafile admin user, default is 'me@example.com'
      - SEAFILE_ADMIN_PASSWORD=asecret  # Specifies Seafile admin password, default is 'asecret'
      - SEAFILE_SERVER_LETSENCRYPT=false  # Whether to use https or not
      - SEAFILE_SERVER_HOSTNAME=example.seafile.com  # Specifies your host name if https is enabled
    depends_on:
      - db
      - memcached
      - elasticsearch
    networks:
      - seafile-net

networks:
  seafile-net:
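One thing I'm not sure about: seafile.conf points the server at /opt/seafile_storage_classes.json, but that path has to exist inside the seafile container, and the compose above only maps /opt/seafile-data into /shared. If the JSON lives only on the host, I assume the seafile service would need an extra mapping along these lines (the host-side path here is my assumption; adjust it to wherever the file actually sits):

```yaml
    # Hypothetical extra volume entry for the seafile service only --
    # the host path is an assumption, not from the manual.
    volumes:
      - /opt/seafile-data:/shared
      - /opt/seafile_storage_classes.json:/opt/seafile_storage_classes.json:ro
```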
I should mention I had no success with this before, when I was trying to get it to stand up outside of docker-compose within Unraid using the example template. I should also mention this server is not exposed to the internet, so to further minimize issues I didn't even change the credentials at any level.
seafile.conf:
[fileserver]
port = 8082
[database]
type = mysql
host = db
port = 3306
user = seafile
password = 25f31dee-2d05-44fa-a979-043b0a1c6a02
db_name = seafile_db
connection_charset = utf8
[notification]
enabled = false
host = 127.0.0.1
port = 8083
log_level = info
jwt_private_key = 8@byttkkbqp_m+88&qxn)w+b*(ucovmd)c04#0$$(d2#8zm9*#
[storage]
enable_storage_classes = true
storage_classes_file = /opt/seafile_storage_classes.json
[memcached]
memcached_options = --SERVER=192.168.1.250 --POOL-MIN=10 --POOL-MAX=100
Again, no deviation from the manual at this point.
seahub_settings.py:
# -*- coding: utf-8 -*-
SECRET_KEY = "b'0yg4$&y&hu369noy#-lx!u!(4og#!$4j8t6)cyg@yhqqfliwd#'"
SERVICE_URL = "http://example.seafile.com/"

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'seahub_db',
        'USER': 'seafile',
        'PASSWORD': '25f31dee-2d05-44fa-a979-043b0a1c6a02',
        'HOST': 'db',
        'PORT': '3306',
        'OPTIONS': {'charset': 'utf8mb4'},
    }
}

CACHES = {
    'default': {
        'BACKEND': 'django_pylibmc.memcached.PyLibMCCache',
        'LOCATION': 'memcached:11211',
    },
    'locmem': {
        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
    },
}
COMPRESS_CACHE_BACKEND = 'locmem'

TIME_ZONE = 'America/Denver'
FILE_SERVER_ROOT = "http://example.seafile.com/seafhttp"
ENABLE_STORAGE_CLASSES = True
STORAGE_CLASS_MAPPING_POLICY = 'USER_SELECT'
And finally, the JSON in question, seafile_storage_classes.json (I caught two slips while re-reading: the cold_storage class was also named "Hot Storage", and one blocks dir was misspelled "seaflle-data" — both fixed below):
[
  {
    "storage_id": "cold_storage",
    "name": "Cold Storage",
    "is_default": false,
    "commits": {
      "backend": "s3",
      "bucket": "seafile-commits",
      "key": "secret",
      "key_id": "super secret"
    },
    "fs": {
      "backend": "s3",
      "bucket": "seafile-fs",
      "key": "secret",
      "key_id": "super secret"
    },
    "blocks": {
      "backend": "s3",
      "bucket": "seafile-blocks",
      "key": "secret",
      "key_id": "super secret"
    }
  },
  {
    "storage_id": "hot_storage",
    "name": "Hot Storage",
    "is_default": true,
    "fs": {
      "backend": "fs",
      "dir": "/storage/seafile/seafile-data"
    },
    "commits": {
      "backend": "fs",
      "dir": "/storage/seafile/seafile-data"
    },
    "blocks": {
      "backend": "fs",
      "dir": "/storage/seafile/seafile-data"
    }
  },
  {
    "storage_id": "swift_storage",
    "name": "Swift Storage",
    "fs": {
      "backend": "swift",
      "tenant": "adminTenant",
      "user_name": "admin",
      "password": "openstack",
      "container": "seafile-commits",
      "auth_host": "192.168.56.31:5000",
      "auth_ver": "v2.0"
    },
    "commits": {
      "backend": "swift",
      "tenant": "adminTenant",
      "user_name": "admin",
      "password": "openstack",
      "container": "seafile-fs",
      "auth_host": "192.168.56.31:5000",
      "auth_ver": "v2.0"
    },
    "blocks": {
      "backend": "swift",
      "tenant": "adminTenant",
      "user_name": "admin",
      "password": "openstack",
      "container": "seafile-blocks",
      "auth_host": "192.168.56.31:5000",
      "auth_ver": "v2.0",
      "region": "RegionTwo"
    }
  },
  {
    "storage_id": "ceph_storage",
    "name": "Ceph Storage",
    "fs": {
      "backend": "ceph",
      "ceph_config": "/etc/ceph/ceph.conf",
      "pool": "seafile-fs"
    },
    "commits": {
      "backend": "ceph",
      "ceph_config": "/etc/ceph/ceph.conf",
      "pool": "seafile-commits"
    },
    "blocks": {
      "backend": "ceph",
      "ceph_config": "/etc/ceph/ceph.conf",
      "pool": "seafile-blocks"
    }
  },
  {
    "storage_id": "new_backend",
    "name": "New store",
    "for_new_library": true,
    "is_default": false,
    "fs": {
      "backend": "fs",
      "dir": "/storage/seafile/new-data"
    },
    "commits": {
      "backend": "fs",
      "dir": "/storage/seafile/new-data"
    },
    "blocks": {
      "backend": "fs",
      "dir": "/storage/seafile/new-data"
    }
  }
]
I did run the sanitized JSON through jsonlint.com and it passed.
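That said, jsonlint only checks syntax, not whether each storage class has the fields Seafile expects. As a second sanity check I sketched a small Python validator; the required-key sets below are my assumptions based on the fs and s3 examples above, not an exhaustive list of what Seafile itself validates:

```python
import json

# Minimal per-backend key requirements -- an assumption based on the
# fs and s3 entries in the file above, not Seafile's full validation.
REQUIRED = {"fs": {"dir"}, "s3": {"bucket", "key", "key_id"}}

def check_storage_classes(path):
    """Load the storage-classes JSON and report obviously missing keys."""
    with open(path) as f:
        classes = json.load(f)  # fails loudly on a syntax error
    problems = []
    for cls in classes:
        sid = cls.get("storage_id", "?")
        for section in ("commits", "fs", "blocks"):
            entry = cls.get(section)
            if entry is None:
                problems.append(f"{sid}: missing '{section}' section")
                continue
            needed = REQUIRED.get(entry.get("backend"), set())
            missing = needed - entry.keys()
            if missing:
                problems.append(f"{sid}/{section}: missing {sorted(missing)}")
    return problems
```

Running `check_storage_classes("/opt/seafile_storage_classes.json")` returns an empty list when nothing obvious is wrong.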
All of that said, it still won't work; I get a similar error about being unable to read the JSON:
2023-09-27 16:28:18 …/common/obj-store.c(1131): Failed to load json file: /opt/seafile_storage_classes.json
2023-09-27 16:28:18 …/common/obj-store.c(110): [Object store] Failed to load backend for fs.
Error: failed to create ccnet session
Could it be permissions-related? I did note the required chmod to get Elasticsearch to work and performed that. I also confirmed that Seafile worked without the storage-backend configuration, just to make sure I wasn't trying to fix a server instance that was already in a broken state.
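To rule permissions in or out, I figured I could stat and actually parse the file from inside the container (e.g. via `docker exec` into the seafile container), since jsonlint on the host says nothing about what the server process can read. A rough sketch of the check itself — the path is the one from seafile.conf, and which user the server runs as is something I'd still need to confirm:

```python
import json
import os
import stat

def describe_access(path):
    """Stat a file and try to actually parse it as JSON, which is
    roughly what the obj-store loader has to be able to do."""
    st = os.stat(path)                # raises FileNotFoundError if absent
    mode = stat.filemode(st.st_mode)  # e.g. '-rw-r--r--'
    with open(path) as f:             # raises PermissionError if unreadable
        json.load(f)
    return mode, st.st_uid, st.st_gid
```

If `describe_access("/opt/seafile_storage_classes.json")` raises when run as the server's user, that would explain the load failure: a root-owned file with mode 600 would pass jsonlint on the host yet still be unreadable inside the container.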
I must be missing something…