Problems using different storage classes

Hello to all members of the seafile support team. We hope you are well!

We are using Seafile Pro 11.0.11 and have been testing multiple storage classes according to the documentation: Multiple Storage Backends - Seafile Admin Manual

As of today, we only have S3 storage in a single region, and that is the only backend configured in seafile.conf.

To test, we created the storage-classes JSON file as the documentation describes.

The idea is to have different buckets per region. However, when we set the new class as the default, the old libraries can no longer be accessed; we assume Seafile is looking for them in the default class.

When we set the old class as the default, the old libraries are recognized, but new libraries keep getting saved in the old buckets, even when we choose the new class at creation time.

Please help us solve this problem.

Greetings

Hi,

Libraries have to be mapped to storage classes. Currently we only support three mapping policies: user-chosen, role-based, and library-ID-based. These policies don't seem to fit your idea of "regions". Which policy did you choose?

Hi Jonathan, thanks a lot for your reply.
Could you please explain in more detail how the library-ID-based policy is supposed to work?

We have tried all of them; currently we are using:

ENABLE_STORAGE_CLASSES = True
STORAGE_CLASS_MAPPING_POLICY = 'USER_SELECT'

But it doesn't work well, as I already mentioned. If I set the new class as the default, the old libraries cannot be found. If I set the old class as the default, all new libraries are still saved in the old class, even when the new one is chosen at creation time.

Please shed more light on how they should work.

Thank you very much

Could you post your storage class json file here?

Hello Jonathan.

Here is the current version, using:

ENABLE_STORAGE_CLASSES = True
STORAGE_CLASS_MAPPING_POLICY = 'USER_SELECT'

[
    {
        "storage_id": "s3_01",
        "name": "S3 1",
        "is_default": true,
        "commits": {
            "backend": "s3",
            "bucket": "seafile-commit",
            "key": "key-value",
            "key_id": "<key_id>",
            "use_v4_signature": true,
            "aws_region": "region 2",
            "use_https": true
        },
        "fs": {
            "backend": "s3",
            "bucket": "seafile-fs",
            "key": "key-value",
            "key_id": "<key_id>",
            "use_v4_signature": true,
            "aws_region": "region 2",
            "use_https": true
        },
        "blocks": {
            "backend": "s3",
            "bucket": "seafile-block",
            "key": "key-value",
            "key_id": "<key_id>",
            "use_v4_signature": true,
            "aws_region": "region 2",
            "use_https": true
        }
    },
    {
        "storage_id": "s3_02",
        "name": "S3 2",
        "is_default": false,
        "commits": {
            "backend": "s3",
            "bucket": "seafile-commit",
            "key": "key-value",
            "key_id": "<key_id>",
            "use_v4_signature": true,
            "aws_region": "region 1"
        },
        "fs": {
            "backend": "s3",
            "bucket": "seafile-fs",
            "key": "key-value",
            "key_id": "<key_id>",
            "use_v4_signature": true,
            "aws_region": "region 1"
        },
        "blocks": {
            "backend": "s3",
            "bucket": "seafile-block",
            "key": "key-value",
            "key_id": "<key_id>",
            "use_v4_signature": true,
            "aws_region": "region 1"
        }
    }
]

Hi Jonathan, I think we found the bug that causes the storage-class assignment to work incorrectly when creating libraries. I was looking at the library-creation code, and it turns out that if the user belongs to an org, the storage_id is neither validated nor passed when creating the repo.

So I created a test user that does not belong to an org, and everything works fine for that user. I therefore think this part needs to be fixed for users that belong to orgs, as is our case.

I look forward to your response.

def _create_repo(self, request, repo_name, repo_desc, username, org_id):
    passwd = request.data.get("passwd", None)

    # to avoid a 'Bad magic' error when creating a repo, passwd should be
    # None, not an empty string, when creating an unencrypted repo
    if not passwd:
        passwd = None

    if (passwd is not None) and (not config.ENABLE_ENCRYPTED_LIBRARY):
        return None, api_error(status.HTTP_403_FORBIDDEN,
                               'NOT allow to create encrypted library.')

    if org_id and org_id > 0:
        # NOTE: in this org branch, storage_id is neither validated
        # nor passed on to the API
        repo_id = seafile_api.create_org_repo(repo_name,
                repo_desc, username, org_id, passwd,
                enc_version=settings.ENCRYPTED_LIBRARY_VERSION)
    else:
        if is_pro_version() and ENABLE_STORAGE_CLASSES:

            if STORAGE_CLASS_MAPPING_POLICY in ('USER_SELECT',
                    'ROLE_BASED'):

                storages = get_library_storages(request)
                storage_id = request.data.get("storage_id", None)
                if storage_id and storage_id not in [s['storage_id'] for s in storages]:
                    error_msg = 'storage_id invalid.'
                    return None, api_error(status.HTTP_400_BAD_REQUEST, error_msg)

                repo_id = seafile_api.create_repo(repo_name,
                        repo_desc, username, passwd,
                        enc_version=settings.ENCRYPTED_LIBRARY_VERSION,
                        storage_id=storage_id)
            else:
                # STORAGE_CLASS_MAPPING_POLICY == 'REPO_ID_MAPPING'
                repo_id = seafile_api.create_repo(repo_name,
                        repo_desc, username, passwd,
                        enc_version=settings.ENCRYPTED_LIBRARY_VERSION)
        else:
            repo_id = seafile_api.create_repo(repo_name,
                    repo_desc, username, passwd,
                    enc_version=settings.ENCRYPTED_LIBRARY_VERSION)

    if passwd and ENABLE_RESET_ENCRYPTED_REPO_PASSWORD:
        add_encrypted_repo_secret_key_to_database(repo_id, passwd)

    return repo_id, None
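To illustrate, here is a minimal, self-contained sketch of the validation that the non-org branch performs but the org branch skips. The helper name `validate_storage_id` is hypothetical, and `storages` stands in for the result of `get_library_storages(request)`; whether `seafile_api.create_org_repo` can actually accept a storage_id argument is something the Seafile developers would need to confirm.

```python
def validate_storage_id(storage_id, storages):
    """Return True if storage_id is absent (use the default class)
    or matches one of the configured storage classes."""
    if storage_id is None:
        return True
    return storage_id in [s['storage_id'] for s in storages]

# Data shaped like the JSON file posted above:
storages = [{'storage_id': 's3_01'}, {'storage_id': 's3_02'}]

print(validate_storage_id('s3_02', storages))  # known class -> True
print(validate_storage_id('bogus', storages))  # unknown class -> False
print(validate_storage_id(None, storages))     # default class -> True
```

If such a check were run in the org branch before `create_org_repo`, and the storage_id were passed through (assuming the underlying API supports it), org users would get the same behavior the non-org branch already has.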

I’ll reply to you by email.

This is the current design. We considered the multi-tenant feature to be aimed mainly at service providers, who usually run a single scalable storage backend. We'll consider adding storage-class support for multi-tenant deployments.