Zombie Processes


Over the last few weeks I have noticed a lot of zombie processes being created by seafile-controller. At the moment I only have one, since I rebooted the VM earlier today.

root@cloud:~# ps ajx | grep Z
92533 94460 92528 92528 ?        -1 Z     1001   0:00 [sh] &lt;defunct&gt;
93598 94477 94476 93588 pts/0 94476 S+       0   0:00 grep --color=auto Z
root@cloud:~# ps aux | grep 92528
seafile  92528  0.0  0.0  54948  3620 ?     Ss   14:12   0:00 /home/seafile/&lt;removed&gt;/seafile-pro-server-6.0.4/seafile/bin/seafile-controller -c /home/seafile/&lt;removed&gt;/ccnet -d /home/seafile/&lt;removed&gt;/seafile-data -F /home/seafile/&lt;removed&gt;/conf
root     94488  0.0  0.0  12944  1080 pts/0 R+   14:32   0:00 grep --color=auto 92528
Prior to the reboot there were 16 zombie processes that had been there for over 48 hours. This has happened on all Pro versions from 6.0.0 to 6.0.4. The removed parts above are just an FQDN I do not want to make public.

Any assistance on how to resolve this or provide more debugging information would be appreciated.

Hi there,

I would suspect that this comes from the fact that pro/python/seafevents/tasks/seahub_email_sender.py:SendSeahubEmailTimer._send_seahub_email just calls pro/python/seafevents/utils/__init__.py:run without doing anything with the returned subprocess (especially not waiting for its exit status which would remove the zombie processes).
IMHO, it could be sufficient to call run_and_wait instead - but since this is code from the Pro Edition, I can’t suggest that as a pull request. :wink:
You could maybe try that change yourself though.
But on the other hand, I wouldn’t suspect any bigger problem here.
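To illustrate the mechanism (not the actual Pro Edition code, which I can't link here): a child process that exits before its parent calls `wait()` stays in the process table as a zombie. A minimal sketch with Python's `subprocess` module, which is what I suspect `run` vs. `run_and_wait` boils down to:

```python
import subprocess

# Spawning a child and never reaping it: once the child exits, it
# lingers in the process table as a zombie until the parent waits on it.
# This is what the suspected `run` code path would do.
child = subprocess.Popen(["true"])

# Reaping the child removes the zombie entry and returns its exit
# status. This is what a `run_and_wait` variant would add.
exit_status = child.wait()
print(exit_status)
```

If the parent never calls `wait()` (or `poll()`), each timer run leaves one more `<defunct>` entry behind, which matches the pattern you are seeing.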


Well, seeing as I am not a programmer, that won't be happening, and since no one from Seafile seems to care, it is sadly going to be easier to migrate to a different solution. I was testing Seafile on the free three-user tier, but it's proving to be just as bad as Nextcloud. At least Nextcloud doesn't leave zombie processes eating resources to the point that the server is unusable.

Zombie processes do not consume resources; they only occupy an entry in the kernel's process table until they are reaped.
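You can verify this yourself on Linux: a zombie's memory has already been released, and only its exit status remains for the parent to collect. A small Linux-only demonstration (it reads `/proc/<pid>/stat`, so it won't run on macOS or Windows):

```python
import os
import time

pid = os.fork()
if pid == 0:
    # Child: exit immediately without doing any work.
    os._exit(0)

# Parent: give the child a moment to exit; it is now a zombie
# because we have not reaped it yet.
time.sleep(0.2)

# Field after the "(comm)" part of /proc/<pid>/stat is the state;
# 'Z' means zombie.
with open(f"/proc/{pid}/stat") as f:
    state = f.read().rsplit(")", 1)[1].split()[0]
print(state)

# Reaping removes the process-table entry.
os.waitpid(pid, 0)
```

The zombie shows state `Z` while holding no memory or CPU; as soon as the parent calls `waitpid()`, the entry disappears.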