Quick question: I'm running Seafile Server 6.1.2 on Debian 9 with a 4-core Xeon and 32 GB RAM. Nginx only, HTTPS configured of course.
I'm fine-tuning right now and have seen the "max_indexing_threads = 10" option on the seafile.conf manual page quite often, but never thought about changing it.
Now I'm wondering whether raising this value is advisable and could actually bring a performance gain.
The description of that option reads:
After a file is uploaded via the web interface, or the cloud file browser in the client, it needs to be divided into fixed size blocks and stored into storage backend. We call this procedure “indexing”. By default, the file server uses 1 thread to sequentially index the file and store the blocks one by one. This is suitable for most cases. But if you’re using S3/Ceph/Swift backends, you may have more bandwidth in the storage backend for storing multiple blocks in parallel. We provide an option to define the number of concurrent threads in indexing:
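For reference, a minimal sketch of how the option is set in seafile.conf, assuming it belongs in the [fileserver] section as in the manual excerpt above:

```ini
[fileserver]
# Number of threads used to index an uploaded file, i.e. split it into
# fixed-size blocks and write them to the storage backend.
# Default is 1 (sequential); the manual suggests raising it mainly
# for S3/Ceph/Swift backends that can absorb parallel block writes.
max_indexing_threads = 10
```

The file server needs a restart for the change to take effect.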
Which bandwidth are we talking about here?
OK, I'm asking this even though my system only has 2x RAID-1 HDDs, which are very unhappy with lots of parallel I/O. But does anybody here have experience with this option, or has anyone played around with it without an S3/Ceph/Swift backend?
Any suggestions are welcome!