“Indexing files” task fills up memory and is eventually killed by the OS

Hi there,

I try to move from OwnCloud to Seafile because I like the way data is encrypted with seafile. However, when I try to sync my whole home directory the client shows “indexing files” and is heavily at work. When 8GB of each RAM and swap are filled up, my Arch linux OS does cancel the executable. This awfully sounds like the windows file copy dialog :slight_smile:

Is there a build-in way to split up the task of initial indexing?
if not, can the code of the client be somehow changed in order to split up too big of a changeset? if you guys point me into the right direction I can try to do the coding.



The client already splits large sets of files into multiple batches. I think the problem instead is that the “index” data structure is too large to fit into memory. It's generally not recommended to sync the entire home directory as one library.

Hi Jonathan,

Thanks for the answer! And yes, that's what I thought as well.

Is there a filter option for “splitting” up a directory structure? Wildcards? E.g. leave .tmp files alone, only get .*, skip the folder “delme”?

I couldn't find such an option in the library settings. Did I miss something?

Thanks again!

You can configure an ignore list. See
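To sketch what this looks like: the desktop client reads a file named `seafile-ignore.txt` placed in the root of the library, containing one shell-style glob pattern per line. Assuming the patterns below match your examples (the exact folder names here are just illustrations):

```
# seafile-ignore.txt — put this in the library's root directory
# One glob pattern per line; matched paths are skipped during indexing/sync.

*.tmp        # skip temporary files anywhere in the library
delme/       # skip the entire "delme" folder
.cache/      # hypothetical example: skip a cache directory
```

Note this only excludes paths from an existing library; it doesn't split one big library into smaller ones. For a home directory, creating several smaller libraries (e.g. one per top-level folder) is usually the more robust approach.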