I’m curious what kind of bad things would break supervisor.  We’ve been running 
it well for about 2 years, in dev, test and production.

From: Mike Axiak [mailto:[email protected]]
Sent: Thursday, February 21, 2013 10:19 AM
To: Timothy Jones
Cc: [email protected]
Subject: Re: [Supervisor-users] Doing a better job of isolating processes

Neither of these really solves the problem, which is that child processes break 
supervisor when bad things happen, nor does either approach scale when children 
hold many file descriptors.

Cheers,
Mike

On Thu, Feb 21, 2013 at 9:53 AM, Timothy Jones 
<[email protected]> wrote:
You can raise the number of FDs that are available to supervisor.  Here’s how I 
do it in RHEL 5.5.

echo 'fs.file-max = 65000' >> /etc/sysctl.conf && sysctl -p

echo '* soft nofile 8192
* hard nofile 65000
* soft core 8192
* hard core 65535' > /etc/security/limits.d/60-nofile-limit.conf

Then log back in. Verify the higher limit with 'ulimit -n'. It can be further 
raised to 65535.
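For completeness, the same check can be done from inside a process with Python's standard-library resource module, which is a minimal sketch of what `ulimit -n` reports (supervisord itself has a `minfds` config option that enforces a minimum at startup):

```python
import resource

# RLIMIT_NOFILE is the per-process file descriptor limit; the soft limit
# is what `ulimit -n` shows, the hard limit is the ceiling it can be
# raised to without root.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft:", soft, "hard:", hard)
```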

Also there is a fork on github.com [1] where the select() is 
replaced with poll(), which should allow it to monitor many more processes.
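The reason poll() scales better is that select() uses a fixed-size FD_SETSIZE bitmask (typically 1024), while poll() takes an explicit list of descriptors with no such ceiling. A minimal sketch of the poll() API in Python, using a pipe as a stand-in for a child's stdout:

```python
import os
import select

# A pipe stands in for a child process's captured stdout.
r, w = os.pipe()

# poll() registers descriptors explicitly, so it is not capped by
# select()'s FD_SETSIZE bitmask.
p = select.poll()
p.register(r, select.POLLIN)

os.write(w, b"hello")
events = p.poll(1000)  # wait up to 1000 ms for readiness

data = b""
for fd, event in events:
    if event & select.POLLIN:
        data = os.read(fd, 5)

os.close(r)
os.close(w)
```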


tlj
[1] https://github.com/igorsobreira/supervisor/

From: [email protected] 
[mailto:[email protected]] On Behalf Of Mike Axiak
Sent: Wednesday, February 20, 2013 11:26 PM
To: [email protected]
Subject: [Supervisor-users] Doing a better job of isolating processes

Hello all,

At my company we're in the middle of using supervisor for the deployments of 
lots of daemons/apps and it works really well for us. One issue I've been 
running into more and more is supervisor getting into a bad state due to file 
descriptor limits on the OS. If child applications soak up file descriptors, it 
can prevent supervisor from opening up sockets to even get logs. (I am going to 
generate a small test case for this to substantiate it.)
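A minimal sketch of that failure mode (not Mike's actual test case, which has not been posted yet): lower our own soft nofile limit, burn descriptors, and watch socket creation fail with EMFILE, the same error supervisord would hit when children soak up the pool.

```python
import resource
import socket

# Save the current limits so we can restore them afterwards.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (64, hard))

held = []
err = None
try:
    while True:
        held.append(socket.socket())  # consume fds until exhaustion
except OSError as e:
    err = e  # errno EMFILE: "Too many open files"
finally:
    for s in held:
        s.close()
    resource.setrlimit(resource.RLIMIT_NOFILE, (soft, hard))
```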

This got me thinking about the overall architecture of supervisord, and as it 
stands it seems like running the process as a child with a simple fork may not 
be the most robust over the long term. I was wondering if, rather than keeping 
a pipe to the stderr/stdout and running all of the jobs as direct subprocesses, 
supervisord could run processes as daemons (perhaps using the nice python 
daemon library [1]) and separately watch the PIDs to achieve the same state 
management supervisor does today.
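The PID-watching half of that idea can be sketched with signal 0, which performs permission and existence checks without delivering anything; `pid_alive` below is a hypothetical helper, not supervisor code, and it assumes the PID comes from a pidfile written by the daemonized job:

```python
import errno
import os

def pid_alive(pid):
    """Check whether a process we did not fork still exists.

    Works for non-children, which is the case once jobs are
    daemonized; note it cannot distinguish a recycled PID from
    the original process.
    """
    try:
        os.kill(pid, 0)  # signal 0: error checking only
    except OSError as e:
        if e.errno == errno.ESRCH:   # no such process
            return False
        if e.errno == errno.EPERM:   # exists, owned by another user
            return True
        raise
    return True
```

One caveat with this paradigm is exactly that last comment: without being the parent, supervisor loses the authoritative exit status from waitpid() and must rely on pidfiles and liveness polling.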

I will probably embark on trying a branch of supervisor that behaves this way, 
but was wondering if anyone has any insight into issues that may arise with 
this paradigm shift.

Cheers,
Mike Axiak


1: https://pypi.python.org/pypi/python-daemon/

_______________________________________________
Supervisor-users mailing list
[email protected]
https://lists.supervisord.org/mailman/listinfo/supervisor-users
