Hot & Cold storage backends

Hi there! I've been running the Professional edition of Seafile on my Unraid server for several months now and it's been great. One thing I can't wrap my head around is how to configure the multi-storage backends successfully, and what that looks like from the web client side of things. Ideally I would like the option to add a new library and specify at time of creation which "bucket" it should store in, whether that's S3, SFTP, etc.

I have read through this (imagine the link to the manual here, I'm too new to add URLs) and I'm just not understanding it. Nor can I get it to work; I have tried multiple times. What's confusing me is that the linked section of the manual says to enable the feature in seafile.conf and then specify the different types of backend storage in the .json file. But when I went searching for clearer answers, or perhaps a more beginner-friendly tutorial, I found this (if you google Seafile S3 Collabora, you'll find it on Viobotta dot com), which shows adding the configuration for the backends directly to seafile.conf. I tried it that way, per the second article, and could not get that to work either. I also couldn't get Collabora to work, but that's another topic.

Does anyone have any tips or maybe just a clearer explanation of how this is supposed to work? Or, if you're willing, a look at a sanitized version of your .json or .conf files so I can try to figure it out myself? I'm unfortunately getting nowhere with the manual, and I'll admit it makes me feel a bit dumb to be able to read it and still not be successful. I'm willing to learn, but I'm afraid I'm stuck and could use a hand if anyone is willing to offer one.

And I don’t know if this is possible even if I ever got it working, but I’d like to have the ability to designate a library as cold/archive storage for the more critical stuff and then hot libraries as needed for the less critical but often called upon files.

Right now I'm using rclone to mount the Seafile folders (because I've had difficulty getting the database dump to work) and then copying that to a remote on S3 storage. It works, but if I could cut out the middleman, so to speak, that would be terrific!

Hi @Father_Redbeard, I haven't really tried implementing the "multiple storage backends" feature yet, and I am no expert, but I think posting your current Seafile server configuration will help with troubleshooting your issue.

That's a fair point. Here are my sanitized configuration files; hopefully they'll show what I'm doing wrong. I should mention that this is installed on an Unraid server using the compose manager, not the Docker templates from the app store, both of which are either out of date or only offer the current CE. Interestingly enough, if you copy the json as specified in the manual and paste it into an online json validation tool, it throws several formatting errors. That said, I am not a json expert, so I couldn't tell you what was right or wrong either way. I do of course have my S3-compatible bucket keys/key IDs entered, but I have removed them from the examples below.

The error I get as it sits right now is:

2023-09-26 09:13:57 Waiting Nginx 
2023-09-26 09:13:58 Nginx ready 
2023-09-26 09:13:58 This is an idle script (infinite loop) to keep container running. 

** Message: 09:13:58.408: seafile-controller.c(1023): loading seafdav config from /opt/seafile/conf/seafdav.conf

2023-09-26 09:13:58 ../common/seaf-utils.c(434): Use database Mysql
2023-09-26 09:13:58 ../common/obj-store.c(1131): Failed to load json file: /opt/seafile_storage_classes.json
failed to run "seaf-server -t" [65280]
[2023-09-26 09:13:58] Skip running setup-seafile-mysql.py because there is existing seafile-data folder.
Traceback (most recent call last):
  File "/scripts/start.py", line 95, in <module>
    main()
  File "/scripts/start.py", line 80, in main
    call('{} start'.format(get_script('seafile.sh')))
  File "/scripts/utils.py", line 70, in call
    return subprocess.check_call(*a, **kw)
  File "/usr/lib/python3.8/subprocess.py", line 364, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '/opt/seafile/seafile-pro-server-10.0.9/seafile.sh start' returned non-zero exit status 1.
2023-09-26 09:13:58 ../common/obj-store.c(110): [Object store] Failed to load backend for fs.
Error: failed to create ccnet session

Seafile docker-compose.yml:

version: '2.0'
services:
  db:
    image: mariadb:10.6
    container_name: seafile-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=db_dev  # Requested, set the root's password of MySQL service.
      - MYSQL_LOG_CONSOLE=true
    volumes:
      - /mnt/user/seafile/seafile-mysql/db:/var/lib/mysql  # Requested, specifies the path to MySQL data persistent store.
    networks:
      - seafile-net

  memcached:
    image: memcached:1.6.18
    container_name: seafile-memcached
    entrypoint: memcached -m 256
    networks:
      - seafile-net

  elasticsearch:
    image: elasticsearch:7.16.2
    container_name: seafile-elasticsearch
    environment:
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 2g
    volumes:
      - /mnt/user/seafile/seafile-elasticsearch/data:/usr/share/elasticsearch/data  # Requested, specifies the path to Elasticsearch data persistent store.
    networks:
      - seafile-net
          
  seafile:
    image: docker.seadrive.org/seafileltd/seafile-pro-mc:latest
    container_name: seafile
    ports:
      - "8888:80"
#     - "443:443"  # If https is enabled, cancel the comment.
    volumes:
      - /mnt/user/seafile/seafile-data:/shared   # Requested, specifies the path to Seafile data persistent store.
    environment:
      - DB_HOST=db
      - DB_ROOT_PASSWD=db_dev  # Requested, the value should be root's password of MySQL service.
#      - TIME_ZONE=Asia/Shanghai # Optional, default is UTC. Should be uncomment and set to your local time zone.
      - SEAFILE_ADMIN_EMAIL=me@example.com # Specifies Seafile admin user, default is 'me@example.com'
      - SEAFILE_ADMIN_PASSWORD=asecret     # Specifies Seafile admin password, default is 'asecret'
      - SEAFILE_SERVER_LETSENCRYPT=false   # Whether to use https or not
      - SEAFILE_SERVER_HOSTNAME=example.seafile.com # Specifies your host name if https is enabled
    depends_on:
      - db
      - memcached
      - elasticsearch
    networks:
      - seafile-net

networks:
  seafile-net:

seafile.conf:

[fileserver]
port = 8082

[database]
type = mysql
host = db
port = 3306
user = seafile
password = 9987ebdf-5d83-4ccb-ac37-2bbdf672954c
db_name = seafile_db
connection_charset = utf8

[notification]
enabled = false
host = 127.0.0.1
port = 8083
log_level = info
jwt_private_key = x0*p1&ws9)qc0$ghk$h=i069zkufnq3oqez^q2@g(!k-l1aol

[storage]
enable_storage_classes = true
storage_classes_file = /opt/seafile_storage_classes.json

[memcached]
memcached_options = --SERVER=<the IP of Memcached Server> --POOL-MIN=10 --POOL-MAX=100

seafile_storage_classes.json:

[
{
"storage_id": "hot_storage",
"name": "Hot Storage",
"is_default": true,
"commits": {"backend": "s3", "bucket": "seafile-commits", "key": "ZjoJ8RPNDqP1vcdD60U4wAHwUQf2oJYqxN27oR09", "key_id": "AKIAIOT3GCU5VGCCL44A"},
"fs": {"backend": "s3", "bucket": "seafile-fs", "key": "ZjoJ8RPNDqP1vcdD60U4wAHwUQf2oJYqxN27oR09", "key_id": "AKIAIOT3GCU5VGCCL44A"},
"blocks": {"backend": "s3", "bucket": "seafile-blocks", "key": "ZjoJ8RPNDqP1vcdD60U4wAHwUQf2oJYqxN27oR09", "key_id": "AKIAIOT3GCU5VGCCL44A"}
},

{
"storage_id": "cold_storage",
"name": "Cold Storage",
"is_default": false,
"fs": {"backend": "fs", "dir": "/storage/seafile/seafile-data"},
"commits": {"backend": "fs", "dir": "/storage/seafile/seafile-data"},
"blocks": {"backend": "fs", "dir": "/storage/seafile/seaflle-data"}
},

{
"storage_id": "swift_storage",
"name": "Swift Storage",
"fs": {"backend": "swift", "tenant": "adminTenant", "user_name": "admin", "password": "openstack", "container": "seafile-commits", "auth_host": "192.168.56.31:5000", "auth_ver": "v2.0"},
"commits": {"backend": "swift", "tenant": "adminTenant", "user_name": "admin", "password": "openstack", "container": "seafile-fs", "auth_host": "192.168.56.31:5000", "auth_ver": "v2.0"},
"blocks": {"backend": "swift", "tenant": "adminTenant", "user_name": "admin", "password": "openstack", "container": "seafile-blocks", "auth_host": "192.168.56.31:5000", "auth_ver": "v2.0", "region": "RegionTwo"}
}

{
"storage_id": "ceph_storage",
"name": "ceph Storage",
"fs": {"backend": "ceph", "ceph_config": "/etc/ceph/ceph.conf", "pool": "seafile-fs"},
"commits": {"backend": "ceph", "ceph_config": "/etc/ceph/ceph.conf", "pool": "seafile-commits"},
"blocks": {"backend": "ceph", "ceph_config": "/etc/ceph/ceph.conf", "pool": "seafile-blocks"}
}
]

seahub_settings.py:

SECRET_KEY = "b'-%h%vf)*z#a_06)l6dm7(o*&4q8@y4je6dq!zv&lc)mri86i*7'"
SERVICE_URL = "http://example.seafile.com/"

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'seahub_db',
        'USER': 'seafile',
        'PASSWORD': '9987ebdf-5d83-4ccb-ac37-2bbdf672954c',
        'HOST': 'db',
        'PORT': '3306',
        'OPTIONS': {'charset': 'utf8mb4'},
    }
}


CACHES = {
    'default': {
        'BACKEND': 'django_pylibmc.memcached.PyLibMCCache',
        'LOCATION': 'memcached:11211',
    },
    'locmem': {
        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
    },
}
COMPRESS_CACHE_BACKEND = 'locmem'
TIME_ZONE = 'America/Denver'
FILE_SERVER_ROOT = "http://example.seafile.com/seafhttp"
ENABLE_STORAGE_CLASSES =  True
STORAGE_CLASS_MAPPING_POLICY = 'USER_SELECT'

Hello, this error is most likely because your json content is formatted incorrectly (for example, your file is missing the comma between the swift_storage and ceph_storage objects). The correct json content should look like the following:

    [
    {
    "storage_id": "hot_storage",
    "name": "Hot Storage",
    "is_default": true,
    "commits": {"backend": "s3", "bucket": "seafile-commits", "key": "ZjoJ8RPNDqP1vcdD60U4wAHwUQf2oJYqxN27oR09", "key_id": "AKIAIOT3GCU5VGCCL44A"},
    "fs": {"backend": "s3", "bucket": "seafile-fs", "key": "ZjoJ8RPNDqP1vcdD60U4wAHwUQf2oJYqxN27oR09", "key_id": "AKIAIOT3GCU5VGCCL44A"},
    "blocks": {"backend": "s3", "bucket": "seafile-blocks", "key": "ZjoJ8RPNDqP1vcdD60U4wAHwUQf2oJYqxN27oR09", "key_id": "AKIAIOT3GCU5VGCCL44A"}
    },
    {
    "storage_id": "cold_storage",
    "name": "Cold Storage",
    "is_default": false,
    "fs": {"backend": "fs", "dir": "/storage/seafile/seafile-data"},
    "commits": {"backend": "fs", "dir": "/storage/seafile/seafile-data"},
    "blocks": {"backend": "fs", "dir": "/storage/seafile/seaflle-data"}
    },
    {
    "storage_id": "swift_storage",
    "name": "Swift Storage",
    "fs": {"backend": "swift", "tenant": "adminTenant", "user_name": "admin", "password": "openstack", "container": "seafile-commits", "auth_host": "192.168.56.31:5000", "auth_ver": "v2.0"},
    "commits": {"backend": "swift", "tenant": "adminTenant", "user_name": "admin", "password": "openstack", "container": "seafile-fs", "auth_host": "192.168.56.31:5000", "auth_ver": "v2.0"},
    "blocks": {"backend": "swift", "tenant": "adminTenant", "user_name": "admin", "password": "openstack", "container": "seafile-blocks", "auth_host": "192.168.56.31:5000", "auth_ver": "v2.0", "region": "RegionTwo"}
    },
    {
    "storage_id": "ceph_storage",
    "name": "ceph Storage",
    "fs": {"backend": "ceph", "ceph_config": "/etc/ceph/ceph.conf", "pool": "seafile-fs"},
    "commits": {"backend": "ceph", "ceph_config": "/etc/ceph/ceph.conf", "pool": "seafile-commits"},
    "blocks": {"backend": "ceph", "ceph_config": "/etc/ceph/ceph.conf", "pool": "seafile-blocks"}
    }
    ]

You can verify that the json content is valid by using a json formatting website, for example: https://jsonformatter.curiousconcept.com/#
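
If you'd rather check it locally, Python's built-in json.tool will also point at the exact position of any error (assuming the file is in your current directory):

    python3 -m json.tool seafile_storage_classes.json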

I actually did check the json against another site and saw that it was incorrectly formatted. I literally copied and pasted it from the manual and only changed the key/id for storage buckets. And of course that didn’t work.

I will certainly try with your example. Thank you for the assistance!

The json path is /opt/seafile_storage_classes.json, but the folder containing the Seafile data in your container is /shared. That's your problem.
If you run docker exec -it seafile /bin/bash and then ls /opt, you won't find the json file.
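
One way around it, just as a sketch: either put the json inside the folder you already mount (so it shows up under /shared in the container and you can point seafile.conf at /shared/seafile_storage_classes.json), or add an extra bind mount for that single file in your compose (the host-side path here is only an example, adjust it to wherever you keep the file):

    volumes:
      - /mnt/user/seafile/seafile-data:/shared
      - /mnt/user/seafile/seafile_storage_classes.json:/opt/seafile_storage_classes.json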

You are correct. But I'll admit I don't fully understand how to resolve this. I can't specify the path for the json to be in the /mnt/user/seafile directory. And I believe there is some relationship between the /opt/ and /mnt/ directories in an Unraid setup, but I don't quite know what that looks like.

Any hints on how I can fix this configuration?

Ok, I gave it another try. And I want to thank you for pointing out the volume mapping in my original docker compose. I'm now using the same compose file downloaded from Seafile per the manual. Instead of trying to shoehorn it into the folder/file structure that I'm used to seeing in Unraid, I followed the steps in the manual, changing only the port to an unused one.

version: '2.0'
services:
  db:
    image: mariadb:10.11
    container_name: seafile-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=db_dev  # Requested, set the root's password of MySQL service.
      - MYSQL_LOG_CONSOLE=true
    volumes:
      - /opt/seafile-mysql/db:/var/lib/mysql  # Requested, specifies the path to MySQL data persistent store.
    networks:
      - seafile-net

  memcached:
    image: memcached:1.6.18
    container_name: seafile-memcached
    entrypoint: memcached -m 256
    networks:
      - seafile-net

  elasticsearch:
    image: elasticsearch:8.6.2
    container_name: seafile-elasticsearch
    environment:
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 2g
    volumes:
      - /opt/seafile-elasticsearch/data:/usr/share/elasticsearch/data  # Requested, specifies the path to Elasticsearch data persistent store.
    networks:
      - seafile-net
          
  seafile:
    image: docker.seadrive.org/seafileltd/seafile-pro-mc:latest
    container_name: seafile
    ports:
      - "8888:80"
#     - "443:443"  # If https is enabled, cancel the comment.
    volumes:
      - /opt/seafile-data:/shared   # Requested, specifies the path to Seafile data persistent store.
    environment:
      - DB_HOST=db
      - DB_ROOT_PASSWD=db_dev  # Requested, the value should be root's password of MySQL service.
#      - TIME_ZONE=Asia/Shanghai # Optional, default is UTC. Should be uncomment and set to your local time zone.
      - SEAFILE_ADMIN_EMAIL=me@example.com # Specifies Seafile admin user, default is 'me@example.com'
      - SEAFILE_ADMIN_PASSWORD=asecret     # Specifies Seafile admin password, default is 'asecret'
      - SEAFILE_SERVER_LETSENCRYPT=false   # Whether to use https or not
      - SEAFILE_SERVER_HOSTNAME=example.seafile.com # Specifies your host name if https is enabled
    depends_on:
      - db
      - memcached
      - elasticsearch
    networks:
      - seafile-net

networks:
  seafile-net:

I should mention I had no success with this before, when trying to get it to stand up outside of Docker Compose within Unraid using the example template. I should also mention this server is not exposed to the internet, so to further minimize issues I didn't even change the credentials at any level.

seafile.conf:

[fileserver]
port = 8082

[database]
type = mysql
host = db
port = 3306
user = seafile
password = 25f31dee-2d05-44fa-a979-043b0a1c6a02
db_name = seafile_db
connection_charset = utf8

[notification]
enabled = false
host = 127.0.0.1
port = 8083
log_level = info
jwt_private_key = 8@byttkkbqp_m+88&qxn)w+b*(ucovmd)c04#0$$(d2#8zm9*#

[storage]
enable_storage_classes = true
storage_classes_file = /opt/seafile_storage_classes.json

[memcached]
memcached_options = --SERVER=192.168.1.250 --POOL-MIN=10 --POOL-MAX=100

Again, no deviation from the manual at this point.

seahub_settings.py:

# -*- coding: utf-8 -*-
SECRET_KEY = "b'0yg4$&y&hu369noy#-lx!u!(4og#!$4j8t6)cyg@yhqqfliwd#'"
SERVICE_URL = "http://example.seafile.com/"

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'seahub_db',
        'USER': 'seafile',
        'PASSWORD': '25f31dee-2d05-44fa-a979-043b0a1c6a02',
        'HOST': 'db',
        'PORT': '3306',
        'OPTIONS': {'charset': 'utf8mb4'},
    }
}


CACHES = {
    'default': {
        'BACKEND': 'django_pylibmc.memcached.PyLibMCCache',
        'LOCATION': 'memcached:11211',
    },
    'locmem': {
        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
    },
}
COMPRESS_CACHE_BACKEND = 'locmem'
TIME_ZONE = 'America/Denver'
FILE_SERVER_ROOT = "http://example.seafile.com/seafhttp"
ENABLE_STORAGE_CLASSES = True
STORAGE_CLASS_MAPPING_POLICY = 'USER_SELECT'

And finally, the json in question, seafile_storage_classes.json:

[{
		"storage_id": "cold_storage",
		"name": "Hot Storage",
		"is_default": false,
		"commits": {
			"backend": "s3",
			"bucket": "seafile-commits",
			"key": "secret",
			"key_id": "super secret"
		},
		"fs": {
			"backend": "s3",
			"bucket": "seafile-fs",
			"key": "secret",
			"key_id": "super secret"
		},
		"blocks": {
			"backend": "s3",
			"bucket": "seafile-blocks",
			"key": "secret",
			"key_id": "super secret"
		}
	},
	{
		"storage_id": "hot_storage",
		"name": "Hot Storage",
		"is_default": true,
		"fs": {
			"backend": "fs",
			"dir": "/storage/seafile/seafile-data"
		},
		"commits": {
			"backend": "fs",
			"dir": "/storage/seafile/seafile-data"
		},
		"blocks": {
			"backend": "fs",
			"dir": "/storage/seafile/seaflle-data"
		}
	},
	{
		"storage_id": "swift_storage",
		"name": "Swift Storage",
		"fs": {
			"backend": "swift",
			"tenant": "adminTenant",
			"user_name": "admin",
			"password": "openstack",
			"container": "seafile-commits",
			"auth_host": "192.168.56.31:5000",
			"auth_ver": "v2.0"
		},
		"commits": {
			"backend": "swift",
			"tenant": "adminTenant",
			"user_name": "admin",
			"password": "openstack",
			"container": "seafile-fs",
			"auth_host": "192.168.56.31:5000",
			"auth_ver": "v2.0"
		},
		"blocks": {
			"backend": "swift",
			"tenant": "adminTenant",
			"user_name": "admin",
			"password": "openstack",
			"container": "seafile-blocks",
			"auth_host": "192.168.56.31:5000",
			"auth_ver": "v2.0",
			"region": "RegionTwo"
		}
	},
	{
		"storage_id": "ceph_storage",
		"name": "Ceph Storage",
		"fs": {
			"backend": "ceph",
			"ceph_config": "/etc/ceph/ceph.conf",
			"pool": "seafile-fs"
		},
		"commits": {
			"backend": "ceph",
			"ceph_config": "/etc/ceph/ceph.conf",
			"pool": "seafile-commits"
		},
		"blocks": {
			"backend": "ceph",
			"ceph_config": "/etc/ceph/ceph.conf",
			"pool": "seafile-blocks"
		}
	},
	{
		"storage_id": "new_backend",
		"name": "New store",
		"for_new_library": true,
		"is_default": false,
		"fs": {
			"backend": "fs",
			"dir": "/storage/seafile/new-data"
		},
		"commits": {
			"backend": "fs",
			"dir": "/storage/seafile/new-data"
		},
		"blocks": {
			"backend": "fs",
			"dir": "/storage/seafile/new-data"
		}
	}
]

I did run the sanitized json through jsonlint.com and received a pass.
All of that said, it still won’t work. Similar error regarding inability to read the json.

2023-09-27 16:28:18 ../common/obj-store.c(1131): Failed to load json file: /opt/seafile_storage_classes.json
2023-09-27 16:28:18 ../common/obj-store.c(110): [Object store] Failed to load backend for fs.
Error: failed to create ccnet session

Could it be permissions related? I did note the required chmod to get Elasticsearch to work and did perform that. I also confirmed that Seafile worked without the addition of the storage backend configuration, just to make sure I wasn't trying to fix a server instance that was already in a broken state.
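
For what it's worth, I assume something like this would show whether the container can even see the file and what permissions it has, but I haven't dug into it yet:

    docker exec -it seafile ls -l /opt/seafile_storage_classes.json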

I must be missing something…

Hey, let’s try to sort it out.
I believe you are missing how Docker volumes work.
When you write this inside your docker compose file:

    volumes:
      - /opt/seafile-data:/shared

it means that you are saying: the folder on the host machine /opt/seafile-data is accessible inside the Docker container as /shared.
So your container is able to access /shared, right? It doesn’t even know what /opt/seafile-data is.

Now, the matter is: where did you put your seafile_storage_classes.json?
I bet you put that into the host machine in /opt/seafile-data, is that right?
But when you configured seafile.conf, you wrote storage_classes_file = /opt/seafile_storage_classes.json.

And that’s the issue, because the Docker container can only see the files from the host machine in /shared.
So, to recap: keep seafile_storage_classes.json on your host machine in /opt/seafile-data, and inside seafile.conf use storage_classes_file = /shared/seafile_storage_classes.json.
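
In practice (using the paths from your own compose file) it would look roughly like this:

    # on the Unraid host
    cp seafile_storage_classes.json /opt/seafile-data/

    # in seafile.conf (the container sees /opt/seafile-data as /shared)
    [storage]
    enable_storage_classes = true
    storage_classes_file = /shared/seafile_storage_classes.json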

Try it and let me know.

I believe you are missing how Docker volumes work.

Oh, 100% agree! And I appreciate you taking the time to explain it so well. It's finally clicked for me! I started my self-hosting journey early this year, and while I've learned a ton, I still have so far to go.

I did as you suggested and it has finally found the json, as you predicted. Do I need to comment out the unused backends like Swift and Ceph? Could that be why it's not letting me access Seafile via the web anymore? I mean, I'm already excited that I finally get the volumes concept after it didn't sink in for so long…

Error:

2023-09-28 08:43:58 ../common/seaf-utils.c(434): Use database Mysql
2023-09-28 08:43:58 ../common/obj-backend-ceph.c(418): [Obj backend] Cannot read config file: No such file or directory
2023-09-28 08:43:58 ../common/obj-backend-ceph.c(527): [Obj backend] Failed to init ceph: pool name is seafile-fs.
2023-09-28 08:43:58 ../common/obj-store.c(1191): [fs] Failed to load backend ceph -- storage_id: ceph_storage
2023-09-28 08:43:58 ../common/obj-store.c(110): [Object store] Failed to load backend for fs.
Error: failed to create ccnet session

Great! Yes, I suggest removing the stuff you don't need from the JSON.
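
Just as a sketch based on your own file (keep your real key/key_id in there), something as small as this should be enough if you only want the local hot storage and the S3 cold storage:

    [
        {
            "storage_id": "hot_storage",
            "name": "Hot Storage",
            "is_default": true,
            "fs": {"backend": "fs", "dir": "/storage/seafile/seafile-data"},
            "commits": {"backend": "fs", "dir": "/storage/seafile/seafile-data"},
            "blocks": {"backend": "fs", "dir": "/storage/seafile/seafile-data"}
        },
        {
            "storage_id": "cold_storage",
            "name": "Cold Storage",
            "is_default": false,
            "commits": {"backend": "s3", "bucket": "seafile-commits", "key": "secret", "key_id": "super secret"},
            "fs": {"backend": "s3", "bucket": "seafile-fs", "key": "secret", "key_id": "super secret"},
            "blocks": {"backend": "s3", "bucket": "seafile-blocks", "key": "secret", "key_id": "super secret"}
        }
    ]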

Yep, that was it. It now shows Hot & Cold storage options when creating a new library. It fails on Cold (S3), but I need to double-check my bucket keys and IDs to see if that's why it's failing. I didn't see anything in the seafile, elasticsearch, memcached, or mysql logs, so I'm not positive, but the fact that it now starts up without errors and shows the user-choice backends is encouraging.

I’ll have a look at the buckets themselves and make sure I entered the correct credentials and try again.

Error when selecting “Cold Storage” is:

Failed to create library

Well, I thought it might be the fact that Backblaze B2 is only technically S3 compatible and perhaps there was more to it with Seafile. I was able to set up 3 buckets on an iDrive e2 account (free trial) and added the storage key ID and secret keys to the json as described in the manual. I was then able to see the Hot and Cold storage options in the web UI, but I cannot create a new library in cold storage. It errors out every time.

For a point of comparison, I used the same e2 keys and secrets but appended them to seafile.conf instead, as described here. And it worked! However, it only saves to e2 instead of both local FS and e2. I tried a few variations of the appended config to include local FS but could not get it to work. And in case I still can't send links since I'm new here, the seafile.conf has the following appended to it:

[commit_object_backend]
name = s3
bucket = unique-prefix-seafile-commit-objects
key_id = <s3 access key>
key = <s3 secret key>
host = <s3 endpoint hostname>
aws_region = <s3 region>
use_v4_signature = false
path_style_request = true
use_https = true
memcached_options = --SERVER=memcached --POOL-MIN=10 --POOL-MAX=100 --RETRY-TIMEOUT=3600

[fs_object_backend]
name = s3
bucket = unique-prefix-seafile-fs-objects
key_id = <s3 access key>
key = <s3 secret key>
host = <s3 endpoint hostname>
aws_region = <s3 region>
use_v4_signature = false
path_style_request = true
use_https = true
memcached_options = --SERVER=memcached --POOL-MIN=10 --POOL-MAX=100 --RETRY-TIMEOUT=3600

[block_backend]
name = s3
bucket = unique-prefix-seafile-block-objects
key_id = <s3 access key>
key = <s3 secret key>
host = <s3 endpoint hostname>
aws_region = <s3 region>
use_v4_signature = false
path_style_request = true
use_https = true
memcached_options = --SERVER=memcached --POOL-MIN=10 --POOL-MAX=100 --RETRY-TIMEOUT=3600

Now I'm curious if you have had success using the prescribed method in the manual but with an S3-compatible backend.

EDIT: After more testing, I added NEW B2 keys and application IDs to the seafile.conf mentioned in this post and it still doesn't work. So Backblaze may have some weirdness that Seafile doesn't like, or vice versa. However, it does work with iDrive e2.
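
One thing I still want to try: the plain-seafile.conf S3 setup above uses extra options like host, aws_region, use_v4_signature and path_style_request, and I'm assuming (haven't confirmed) the storage-classes json accepts the same keys inside each backend entry, something along the lines of:

    "commits": {"backend": "s3", "bucket": "unique-prefix-seafile-commit-objects", "key_id": "<s3 access key>", "key": "<s3 secret key>", "host": "<s3 endpoint hostname>", "aws_region": "<s3 region>", "use_v4_signature": false, "path_style_request": true, "use_https": true}

If someone has that working with an S3-compatible provider, I'd love to see it.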

It doesn't surprise me at all. They are all "S3 compatible", but very often each provider has a slightly different implementation of the S3 protocol.
I am using iDrive e2 (not with Seafile, yet) for backups, since I've got 1TB of storage for 4 euros, and it's working well for me.

I suggest opening an issue on GitHub.

Are you using iDrive with Seafile? Or have you tested that? It’s so strange that the same keys and same buckets work fine when appended to the seafile.conf yet don’t work as described in the manual by designating the json file and having seafile.conf point to it.

I may still be doing something wrong. I'll have to test more.

Side note: the manual doesn't mention anything about the scenario of using Seafuse with S3 backend storage. And since it has you create 3 separate buckets on your S3 provider (fs, commits, and blocks), I can't imagine it would work at all. The argument could be made that since the data is all on object storage, you shouldn't need to use Seafuse at all. But in the worst-case scenario, you have 3 buckets with data and a non-working Seafile install. How would you then get the data back? Copy it down, then fuse it? Sorry, thinking out loud here.

Thank you for all the help and education, btw. I truly appreciate it.

No, I am using rclone to sync between my Seafile and iDrive e2. I've never tried using iDrive e2 as backend storage for Seafile.

I don’t think you need to create 3 separate buckets.

You reinstall Seafile and that's it. Unless you have lost your database; in that case, there are a couple of ways to get your data back.

Have you tried rclone mount against a Seafile folder? I've tested it locally on my server because I kept running into db dump issues, so I figured why not try to mount it and then sync or copy it over to B2. It works. It defeats the purpose of chunking the data in the first place, I suppose, but I just had to test it.
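
Roughly what I've been doing, in case it's useful to anyone (the remote names are just what I happen to call them in my rclone config):

    # mount a library locally via rclone's seafile backend
    rclone mount seafile:MyLibrary /mnt/seafile-mount --read-only &

    # or copy straight from the remote to B2, skipping the mount entirely
    rclone copy seafile:MyLibrary b2:my-backup-bucket/seafile --progress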

I don’t think you need to create 3 separate buckets.

I could’ve sworn I saw that in the manual. Now I’ll have to try it without.

Thanks again for all the help!

You can analyze and review your JSON code; there are many tools that do that.

https://codebeautify.org/json-fixer will fix your JSON data
https://jsonformatter.org helps to analyze and validate a JSON string