I’m running a private Seafile server on a hosted (Virtuozzo) VPS. By switching the hosting offer to “level 2” I could fix some trouble with inode limits caused by the very many small files in a library.
Yet, I still have some “server killing” trouble. I often hit the beancounter limit of 750 for “numothersock” (checked via “cat /proc/user_beancounters”). After a server boot, 500 are already in use. About 200 of these (stream) sockets alone are accounted to the seaf-server process. This seems an awful lot.
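To see where those sockets come from, you can count the socket file descriptors a process holds via /proc. A small sketch (the helper function `count_sockets` is just something I made up for illustration; for seaf-server you would pass in `$(pidof seaf-server)`):

```shell
# Count open sockets for a PID by scanning its /proc/<pid>/fd entries;
# socket descriptors show up as symlinks like "socket:[12345]".
count_sockets() {
  ls -l "/proc/$1/fd" 2>/dev/null | grep -c socket
}

# For seaf-server: count_sockets "$(pidof seaf-server)"
# Example on the current shell:
count_sockets $$

# On an OpenVZ/Virtuozzo container you can compare that against the limit:
#   grep -E 'uid|numothersock' /proc/user_beancounters
```

This counts only descriptors the process actually holds open, which is what the numothersock beancounter charges against the container.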
Since I just started investigating the numothersock problem, I’m not sure why I sometimes hit the limit of 750. But I wonder:
a) could the seaf-server sockets be limited?
b) are the ~200 a fixed value, or might Seafile open even more sockets?
You know that Seafile stores files as blocks and also splits them, right?
Choose a KVM hoster. These limits are just senseless and dumb.
And that explains the number even though no client is uploading or downloading anything? I’m not complaining, just curious about option a) and the limits in b)…
I would, but currently my time is too limited to give a migration much thought.
I have the same problem. Any chance to limit the usage on the side of Seafile?
Guys, just use a hoster that doesn’t have these issues. There are tons of hosters out there that do not impose such limits.
Migrating Seafile is pretty easy. I once considered writing a script for the migration, but then I’d have to support it (users tend to complain about free things they don’t understand), and that’s the reason I won’t.
We might add this to our best practice manual.
Migrating Seafile in short:
1. Set the TTL for your DNS records to 300 seconds or so.
2. Install the new node.
3. rsync the Seafile server and Seafile data folders over to the new host.
4. Copy the nginx and SSL config over.
5. Shut down the old Seafile server (service).
6. Dump the DBs into the Seafile folder.
7. Run rsync again; in the meantime, switch the DNS records.
8. Import the DBs and start the Seafile server on the new machine. Done.
=> Not so difficult, eh?
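The recipe above can be sketched as a shell script. Everything here is a placeholder sketch, not a tested tool: NEWHOST, the install path, the database names, and the seafile.sh/seahub.sh control scripts are assumptions you must adapt to your own installation before running anything.

```shell
#!/bin/sh
# Sketch of the migration steps above. NEWHOST, paths, and DB names
# are placeholders -- adjust them to your installation first.
NEWHOST=new.example.com
SEAFILE_DIR=/opt/seafile

# First rsync pass while the old server is still running (moves the bulk)
rsync -aH "$SEAFILE_DIR/" "root@$NEWHOST:$SEAFILE_DIR/"

# Copy the nginx and SSL config over
rsync -aH /etc/nginx/ "root@$NEWHOST:/etc/nginx/"

# Stop Seafile on the old host, dump the DBs into the Seafile folder
"$SEAFILE_DIR/seafile-server-latest/seahub.sh" stop
"$SEAFILE_DIR/seafile-server-latest/seafile.sh" stop
mysqldump --databases ccnet_db seafile_db seahub_db > "$SEAFILE_DIR/seafile-dbs.sql"

# Final rsync only transfers the diff; switch DNS records while it runs
rsync -aH "$SEAFILE_DIR/" "root@$NEWHOST:$SEAFILE_DIR/"

# On the new host: import the DBs and start Seafile again
ssh "root@$NEWHOST" "mysql < $SEAFILE_DIR/seafile-dbs.sql && \
  $SEAFILE_DIR/seafile-server-latest/seafile.sh start && \
  $SEAFILE_DIR/seafile-server-latest/seahub.sh start"
```

Because the second rsync only has to transfer the delta since the first pass, the actual downtime is just the stop/dump/rsync/import window, which is why the low DNS TTL from step 1 matters.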