Synced libraries unavailable after changing server URL

I had to change the URL of my Seafile server (e.g. from example.com:1234/seafile to example.org:1234/seafile). It looks like everything went well: I can reach and use the web interface without errors or warnings, and the renewed SSL certificate is accepted.

After that, I went into the account settings of the Seafile desktop client (Linux, v7.0.6) and changed the URL accordingly. I was given a warning that the certificate is invalid (interestingly, the dialog mentioned the old URL, which had just been replaced and of course can't be validated anymore); I chose to accept it anyway. Since then, all my synced libraries appear as grey clouds. I can browse them in the client's file browser, so it is obviously capable of reaching the files. However, I am unable to sync any library, or even to unsync it, and resyncing doesn't do anything either. What I can do is sync previously unsynced libraries, which are shown with a green arrow cloud when finished, so there is generally no problem communicating with the server.

I assume the client is somehow using outdated information (the old URL) for the previously synced libraries. My question is: is there any local "reset" I can do to tell the client to pick up the new URL, without removing the account and configuring everything from scratch?

In ~/.ccnet/logs/seafile.log I see:

    [09/28/20 14:42:52] sync-mgr.c(1337): File syncing protocol version on server https://example.org:1234 is 2. Client file syncing protocol version is 2. Use version 2.
    [09/28/20 14:42:52] http-tx-mgr.c(782): libcurl failed to GET example.org/seafhttp/protocol-version: SSL peer certificate or SSH remote key was not OK.

It looks like libcurl is omitting the specified port since the URL was changed; I checked that the port is present in the account's configuration.
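
A quick way to double-check that the endpoint itself is fine when the port is included is something along these lines (just a sketch; the URLs are the placeholders from above, and the path is taken straight from the log line):

    import urllib.request

    # Same endpoint the client's log complains about, once with the explicit
    # port and once without it (which is what libcurl appears to be doing).
    for url in (
        "https://example.org:1234/seafhttp/protocol-version",
        "https://example.org/seafhttp/protocol-version",
    ):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                print(url, "->", resp.status, resp.read().decode(errors="replace"))
        except OSError as exc:  # covers SSL, DNS and timeout errors
            print(url, "-> failed:", exc)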

It doesn't even help to delete the account completely and add it again. Contrary to the client's warning that all sync configuration will be lost, I'm back exactly where I left off when I re-add the account. This is really bad.

I examined the source code of a library web page after logging in to Seafile with a browser. There I can see that window.app = { config: { serviceURL: ... and similar entries still show the old URL, although it was changed in the server's configuration (both ccnet.conf and seahub_settings.py). I not only repeatedly restarted seahub, seafile, Apache and memcached, but even rebooted the whole server. Where does this outdated configuration info come from?

I was able to fix the outdated URL info on Seahub's pages by logging in with an admin account and changing the settings for SERVICE_URL and FILE_SERVER_ROOT again via the web form. I take it you have to do this to actually affect the database; updating the configuration files is not enough. This could be more straightforward.
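
For reference, this is roughly what the relevant entries look like. The values are the placeholder URLs from above, and depending on the Seafile version SERVICE_URL may live in ccnet.conf rather than seahub_settings.py, so take this as a sketch rather than the authoritative location:

    # seahub_settings.py (sketch with the placeholder URLs from this post;
    # on 7.0 SERVICE_URL is traditionally configured in ccnet.conf instead)
    SERVICE_URL = 'https://example.org:1234/seafile'
    FILE_SERVER_ROOT = 'https://example.org:1234/seafhttp'

Whatever ends up in the admin web form (i.e. in the database) is what the web pages actually use, so both places need to agree.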

Unfortunately, it didn't help at all with the desktop client's syncing problems. I guess the client is also not able to fully update its local database when the URL is changed? If so, I wonder why that configuration option is there to begin with.

I had a closer look at all SQLite3 *.db files below Seafile/.seafile-data. These three things looked odd to me:

  1. certs.db does not contain an entry for the new URL (nor for the old one, but it does for other configured servers) – why is this missing? Can I add the certificate manually?

  2. Table RepoProperty in repo.db contains multiple entries with key relay-address still pointing to the old URL, while server-url is always set to the new one; can those simply be updated manually?

  3. Table ServerProperty in repo.db contains two entries for the new URL, one starting with "https://", the other directly with the domain name without protocol; can the one without the protocol safely be removed?

Another odd thing: in table RepoProperty of repo.db, server-url is always set to the new URL, but it is missing both the https protocol and the port; it's just the bare domain. In contrast, the other servers with a working configuration have the protocol as part of their URL.
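
For anyone who wants to check their own client for the same symptoms, a read-only sketch that dumps those two tables. The path is from my setup, so adjust it to wherever your client keeps .seafile-data, and stop the client first to avoid locked-database errors:

    import sqlite3
    from pathlib import Path

    # Adjust this to your client's data folder.
    db_path = Path.home() / "Seafile" / ".seafile-data" / "repo.db"

    con = sqlite3.connect(db_path)
    con.row_factory = sqlite3.Row  # access columns by name
    try:
        for table in ("RepoProperty", "ServerProperty"):
            print(f"== {table} ==")
            for row in con.execute(f"SELECT * FROM {table}"):
                print(dict(row))  # look for relay-address / server-url entries
    finally:
        con.close()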

I am under the impression that changing as little as a subdomain in the URL can completely ruin the local databases with incorrect and/or incomplete updates.

Another observation: is the data from certs.db being used at all? I decoded the certificates stored for two other servers; the first one had been expired for years and the other one was total crap (a self-signed Fritz!Box certificate, although the server was never run behind a Fritz!Box). It is impossible that these certs were used for communicating with the designated servers.
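
I don't know the schema of certs.db offhand, so something introspective along these lines works for poking at it: list every table, dump the rows, and wrap anything that looks like a DER blob as PEM so it can be fed to openssl x509 -text -noout. Again, the path is an assumption from my setup:

    import sqlite3
    import ssl
    from pathlib import Path

    db_path = Path.home() / "Seafile" / ".seafile-data" / "certs.db"

    con = sqlite3.connect(db_path)
    try:
        tables = [r[0] for r in con.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table'")]
        for table in tables:
            print(f"== {table} ==")
            for row in con.execute(f"SELECT * FROM {table}"):
                for value in row:
                    if isinstance(value, bytes) and value[:2] == b"\x30\x82":
                        # heuristic: looks like a DER certificate; print as PEM
                        print(ssl.DER_cert_to_PEM_cert(value))
                    elif isinstance(value, bytes):
                        print(f"<{len(value)} bytes of binary data>")
                    else:
                        print(value)
    finally:
        con.close()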

Meanwhile I can sync files again, but it took me hours to get to that point. Here's what I had to do (a scripted sketch of the same steps follows after the list):

  1. Manually remove the incomplete URL from table ServerProperty of database repo.db.

  2. Manually set all instances of relay-address to NULL in table RepoProperty of database repo.db.

  3. Manually fix all instances of server-url from example.org to https://example.org:1234 in table RepoProperty of database repo.db.

  4. Launch the client and demand a full resync of all libraries from that server – now that works!
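
For reference, the same three fixes as a Python sketch. The column layouts (repo_id/key/value and server/key/value) are assumptions based on what I saw on my client, so verify them with sqlite3 repo.db .schema first, stop the client, and keep a backup of repo.db before touching anything:

    import shutil
    import sqlite3
    from pathlib import Path

    OLD_HOST = "example.org"              # the bare host the client wrote
    NEW_URL = "https://example.org:1234"  # full URL incl. protocol and port

    db_path = Path.home() / "Seafile" / ".seafile-data" / "repo.db"
    shutil.copy2(db_path, str(db_path) + ".bak")  # step 0: backup

    con = sqlite3.connect(db_path)
    try:
        with con:  # commits everything on success, rolls back on error
            # 1. drop the protocol-less duplicate from ServerProperty
            con.execute("DELETE FROM ServerProperty WHERE server = ?",
                        (OLD_HOST,))
            # 2. clear the stale relay-address entries
            con.execute("UPDATE RepoProperty SET value = NULL "
                        "WHERE key = 'relay-address'")
            # 3. rewrite the truncated server-url values to the full URL
            con.execute("UPDATE RepoProperty SET value = ? "
                        "WHERE key = 'server-url' AND value = ?",
                        (NEW_URL, OLD_HOST))
    finally:
        con.close()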

Conclusion: (at least) the 7.0.6 client can ruin its local database when you use the account settings dialog to update the server URL, by replacing the old URL with incomplete data (at least omitting the protocol prefix) and adding a dysfunctional relay-address that still uses the outdated URL. I don't know whether that data is taken entirely from the account settings or partly fetched from the server.

However, if you reconfigure the server's URL in its configuration files, its database should be updated accordingly and automatically on the next restart; you should not need to set these addresses again via the web form. Instead of keeping the exact same information in multiple files and databases (server side), the base URL, port and local subfolder (if not /) should be defined in just one place and used from there.