Seafile SYNC Client on DFS


#1

Hi,

I use the sync client 6.1.1 with Seafile Pro 6.1.8.

I wonder whether there are any potential issues with libraries synced from a DFS user folder.

The user files are stored on the Active Directory server and shared with each user.
The local PC works on a local cache that is synchronized with the server (Windows sync).

Everything worked fine on an 8.9 GB library until I deleted 1 GB of files from the local cache.

Now the Seafile sync client stops synchronizing and the log says:

[11/10/17 18:05:05] wt-monitor-win32.c(565): GetQueuedCompletionStatus failed, error code 995
[11/10/17 18:05:05] wt-monitor-win32.c(565): GetQueuedCompletionStatus failed, error code 87
[11/10/17 18:05:05] wt-monitor-win32.c(565): GetQueuedCompletionStatus failed, error code 87
[11/10/17 18:05:05] wt-monitor-win32.c(565): GetQueuedCompletionStatus failed, error code 87

What can I do as a workaround, given that the local files were deleted?

Regards,

Gautier


#2

Likely the same as for Samba: one needs to set a sync interval, because automatic change detection is technically not possible on such network file systems.


#3

Hi,

The process hangs on downloading the file list, and I have to wait for the option to become available:

Then the process hangs on indexing files, so I can set the option (1 second).

But seafile.log shows errors:

[11/13/17 10:53:47] vc-utils.c(165): Failed to update index errno=13 Permission denied
[11/13/17 10:53:47] vc-utils.c(165): Failed to update index errno=13 Permission denied
[11/13/17 10:53:48] vc-utils.c(165): Failed to update index errno=13 Permission denied
[11/13/17 10:53:49] vc-utils.c(165): Failed to update index errno=13 Permission denied
[11/13/17 10:53:49] vc-utils.c(165): Failed to update index errno=13 Permission denied
[11/13/17 10:53:50] vc-utils.c(165): Failed to update index errno=13 Permission denied
[11/13/17 10:53:51] vc-utils.c(165): Failed to update index errno=13 Permission denied
[11/13/17 10:53:52] vc-utils.c(165): Failed to update index errno=13 Permission denied
[11/13/17 10:53:52] vc-utils.c(165): Failed to update index errno=13 Permission denied
[11/13/17 10:53:53] vc-utils.c(165): Failed to update index errno=13 Permission denied
[11/13/17 10:53:54] vc-utils.c(165): Failed to update index errno=13 Permission denied
[11/13/17 10:53:55] vc-utils.c(165): Failed to update index errno=13 Permission denied
[11/13/17 10:53:55] vc-utils.c(165): Failed to update index errno=13 Permission denied
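
(For reference, errno 13 is EACCES, the POSIX "Permission denied" error, which suggests something, plausibly the DFS cache, was blocking access to Seafile's index files. A quick stdlib lookup, nothing Seafile-specific, just the errno table:)

```python
import errno
import os

# errno 13 in the log above maps to EACCES, "Permission denied"
print(errno.EACCES)               # 13
print(os.strerror(errno.EACCES))  # Permission denied
```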

The database seems to be locked:

[11/13/17 10:54:10] http-tx-mgr.c(1132): Transfer repo '022257ce': ('normal', 'data') --> ('finished', 'finished')
[11/13/17 10:54:10] wt-monitor-win32.c(565): GetQueuedCompletionStatus failed, error code 87
[11/13/17 10:54:10] clone-mgr.c(847): Transition clone state for 022257ce from [fetch] to [done].
[11/13/17 10:54:10] SQL error: 5 - database is locked
:	DELETE FROM CloneTasks WHERE repo_id='022257ce-5b1f-487f-a8c2-6af5ab5c6c02'

The sync process ends with success, while the database is still locked:

[11/13/17 10:55:02] sync-mgr.c(702): Repo '_Projets' sync state transition from 'synchronized' to 'committing'.
[11/13/17 11:00:30] repo-mgr.c(3728): All events are processed for repo 022257ce-5b1f-487f-a8c2-6af5ab5c6c02.
[11/13/17 11:00:54] sync-mgr.c(702): Repo '_Projets' sync state transition from 'committing' to 'uploading'.
[11/13/17 11:00:54] http-tx-mgr.c(3423): Upload with HTTP sync protocol version 1.
[11/13/17 11:00:54] http-tx-mgr.c(1132): Transfer repo '022257ce': ('normal', 'init') --> ('normal', 'check')
[11/13/17 11:00:54] sync-mgr.c(702): Repo '_Info' sync state transition from 'synchronized' to 'committing'.
[11/13/17 11:00:55] http-tx-mgr.c(1132): Transfer repo '022257ce': ('normal', 'check') --> ('normal', 'commit')
[11/13/17 11:00:55] http-tx-mgr.c(1132): Transfer repo '022257ce': ('normal', 'commit') --> ('normal', 'fs')
[11/13/17 11:00:55] http-tx-mgr.c(1132): Transfer repo '022257ce': ('normal', 'fs') --> ('normal', 'data')
[11/13/17 11:00:59] http-tx-mgr.c(1132): Transfer repo '022257ce': ('normal', 'data') --> ('normal', 'update-branch')
[11/13/17 11:00:59] http-tx-mgr.c(1132): Transfer repo '022257ce': ('normal', 'update-branch') --> ('finished', 'finished')
[11/13/17 11:00:59] sync-mgr.c(702): Repo '_Projets' sync state transition from 'uploading' to 'initializing'.
[11/13/17 11:00:59] sync-mgr.c(1516): Removing blocks for repo _Projets(022257ce).
[11/13/17 11:01:00] SQL error: 5 - database is locked
:	REPLACE INTO Config VALUES ('notify_sync', 'on');
[11/13/17 11:01:26] sync-mgr.c(702): Repo '_Info' sync state transition from 'committing' to 'initializing'.
[11/13/17 11:01:26] sync-mgr.c(702): Repo '_Projets' sync state transition from 'synchronized' to 'committing'.
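
(For reference, SQL error 5 is SQLITE_BUSY, "database is locked": a writer could not get the lock because another connection, or possibly the DFS sync holding the .db file, was in the way. A minimal stdlib reproduction of the error, with a hypothetical database file rather than Seafile's real one:)

```python
import os
import sqlite3
import tempfile

# Hypothetical demo database; Seafile's real client DBs are e.g. repo.db
path = os.path.join(tempfile.mkdtemp(), "demo.db")

writer = sqlite3.connect(path, isolation_level=None)  # autocommit mode
writer.execute("CREATE TABLE Config (k TEXT, v TEXT)")
writer.execute("BEGIN EXCLUSIVE")          # hold the write lock

other = sqlite3.connect(path, timeout=0)   # do not wait for the lock
try:
    other.execute("REPLACE INTO Config VALUES ('notify_sync', 'on')")
except sqlite3.OperationalError as e:
    print(e)                               # database is locked
```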

Regards,

Gautier


#4

Hi,

I don’t know if it is related, but I now see conflict reports in my DFS (Microsoft Sync Center).
Some Seafile files are not synchronized:

\Documents\Seafile\accounts.db
\Documents\Seafile\repo.db

Have a look at the image, too.

Then I rebooted the PC.
The DFS errors have disappeared.
But the Seafile sync process hangs on indexing.

Here is the seafile.log at startup:

[11/13/17 17:53:49] seaf-daemon.c(558): starting seafile client 6.1.3
[11/13/17 17:53:49] seaf-daemon.c(560): seafile source code version 5fc440fe04370308d8dd9de4b1c63a388da68278
[11/13/17 17:53:49] ../common/mq-mgr.c(60): [mq client] mq cilent is started
[11/13/17 17:53:50] ../common/mq-mgr.c(106): [mq mgr] publish to heartbeat mq: seafile.heartbeat
[11/13/17 17:53:50] wt-monitor-win32.c(565): GetQueuedCompletionStatus failed, error code 995
[11/13/17 17:53:50] wt-monitor-win32.c(565): GetQueuedCompletionStatus failed, error code 87
[11/13/17 17:53:52] sync-mgr.c(702): Repo '_Projets' sync state transition from 'synchronized' to 'committing'.

Then I waited 10 minutes.
The sync process completed.
The log finally shows:

    [11/13/17 18:00:45] sync-mgr.c(702): Repo '_Projets' sync state transition from 'committing' to 'uploading'.
    [11/13/17 18:00:45] http-tx-mgr.c(3423): Upload with HTTP sync protocol version 1.
    [11/13/17 18:00:45] http-tx-mgr.c(1132): Transfer repo '022257ce': ('normal', 'init') --> ('normal', 'check')
    [11/13/17 18:00:45] http-tx-mgr.c(1132): Transfer repo '022257ce': ('normal', 'check') --> ('normal', 'commit')
    [11/13/17 18:00:45] http-tx-mgr.c(1132): Transfer repo '022257ce': ('normal', 'commit') --> ('normal', 'fs')
    [11/13/17 18:00:45] sync-mgr.c(702): Repo '_Info' sync state transition from 'synchronized' to 'committing'.
    [11/13/17 18:00:45] http-tx-mgr.c(1132): Transfer repo '022257ce': ('normal', 'fs') --> ('normal', 'data')
    [11/13/17 18:00:45] http-tx-mgr.c(1132): Transfer repo '022257ce': ('normal', 'data') --> ('normal', 'update-branch')
    [11/13/17 18:00:45] http-tx-mgr.c(1132): Transfer repo '022257ce': ('normal', 'update-branch') --> ('finished', 'finished')
    [11/13/17 18:00:45] sync-mgr.c(702): Repo '_Projets' sync state transition from 'uploading' to 'initializing'.
    [11/13/17 18:00:45] sync-mgr.c(1516): Removing blocks for repo _Projets(022257ce).
    [11/13/17 18:01:25] sync-mgr.c(702): Repo '_Info' sync state transition from 'committing' to 'initializing'.
    [11/13/17 18:01:26] sync-mgr.c(702): Repo '_Dossiers' sync state transition from 'synchronized' to 'committing'.
    [11/13/17 18:01:31] repo-mgr.c(3728): All events are processed for repo 4a338242-c020-4285-8e33-592e8f786dd3.
    [11/13/17 18:01:31] sync-mgr.c(702): Repo '_Dossiers' sync state transition from 'committing' to 'initializing'.
    [11/13/17 18:01:31] sync-mgr.c(702): Repo '_Info' sync state transition from 'synchronized' to 'committing'.
    [11/13/17 18:02:08] sync-mgr.c(702): Repo '_Info' sync state transition from 'committing' to 'initializing'.
    [11/13/17 18:02:09] sync-mgr.c(702): Repo '_Projets' sync state transition from 'synchronized' to 'committing'.

Repo _Projets is now synced. But what a mess!

My conclusion: don’t sync big libraries on DFS!

Files: 35,422
Size: 7.3 GB

Regards,

Gautier


#5

I don’t think 1 second is a good idea, because it will scan the files all day long (so there will be load on the DFS all the time).


#6

It could be. To fix it, place the Seafile directory outside of DFS. It contains a hidden directory that is used heavily. You can still sync libraries located on DFS.


#7

Yes, it does.

The DFS sync process cannot access the files because the Seafile sync process is using them.

I’ll try your suggestion of putting the Seafile folder on a local drive.

Regards


#8

Hi,

I did as you told me and placed the Seafile directory on a local drive that is not synced with DFS.
I do not encounter any problems anymore.
I think this could be mentioned during the client installation process and in the manual.
Actually, there is not a lot of information or best practices about the client.

What is your opinion, @Jonathan?

Regards,

Gautier


#9

Hi,

I think you could implement a default sync interval for all libraries, at the top of the client settings.

It could be useful to prevent sync errors between Seafile and DFS.

It looks like setting a default sync interval is also possible on DFS via Group Policy.

Regards,

Gautier


#10

For all locations where inotify works reliably (which might not be easy to find out automatically), a sync interval is a waste of resources.


#11

I’m not sure I understood. In our case, all the documents of the logged-in Windows user are in the DFS folder, and so are the libraries.


#12

Hi,

I think I found another issue:

In our case, syncing huge libraries located in a CIFS/DFS user home folder hangs a lot.

This clearly makes syncing unusable, even though the Seafile default install folder is located on a local drive.

Regards


#13

I’m pretty sure the issues involved in such a setup have been discussed multiple times. Inotify (a mechanism to get notified when a file changes) does not work with CIFS/DFS, so the client needs to rescan the whole library over and over again (using the sync interval). That can be a resource-hungry process and is likely to take more and more time the more files there are.
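
To illustrate, when change notifications are unavailable, the fallback is a periodic full scan comparing timestamps, whose cost grows with the number of files per interval. A rough stdlib sketch of the idea (not Seafile’s actual code, which lives in the wt-monitor-*.c files):

```python
import os

def snapshot(root):
    """Walk the tree and record each file's modification time."""
    state = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                state[path] = os.stat(path).st_mtime_ns
            except OSError:
                pass  # file vanished between listing and stat
    return state

def changed_paths(old, new):
    """Paths that were added, removed, or got a different mtime."""
    return [p for p in old.keys() | new.keys() if old.get(p) != new.get(p)]
```

Every sync interval requires a fresh `snapshot()`, so the work is proportional to the file count, which matches the slowdown on large libraries reported above.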


#14

Hi,

This is exactly what happens. And it could also be the reason why the process takes so long on huge libraries (>5 GB).
This is a major issue for us. I hope it can be fixed in future releases of the client, but I’m not very optimistic.
Are you, @daniel.pan?


#15

The client has to scan all the folders, checking the timestamp of every file. It does not read the actual file content, so the performance depends on the number of files. This can’t be improved in the future.


#16

To solve the problem, the only thing you can do is run the Seafile client directly on the file server.


#17

But then, the distributed core architecture of Seafile is compromised.

The solution would rather be to limit the number of files per library, which is, from a user’s perspective, very complicated…

I’m very sorry to learn that :confounded:


#18

The file count only matters when using a sync interval on a remote file system. I synchronize more than half a million files from a laptop without issues.

I don’t really get what you mean, though. What do you mean by its distributed core architecture?


#19

I mean that if clients can’t synchronize and only the server does, the cloud model is gone…


#20

It is not really the case. It is still the client that is syncing. Ideally, synchronization always happens from the machine the disk is attached to. The client was always designed to synchronize local file systems.