I have to deal with mod_cluster, and it is extremely memory-hungry (in the GB 
range per process).  As mitigation, I'm trying to get down to a single Apache 
worker process per host when we aren't under heavy load.  That would save me 
about 6 GB per host.



We have several hosts running the exact same thing behind a load balancer and 
I've never seen a crash, so I'm not concerned about running a single instance.  
Running four 20-thread instances takes almost four times the memory of this one 
80-thread instance, for example.
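
Rough numbers behind that, assuming each child really does cost on the order of 
2 GB with mod_cluster loaded (my estimate from what I'm seeing, not a measured 
figure):

  # 4 children x ~2 GB  ~= 8 GB
  # 1 child    x ~2 GB  ~= 2 GB
  # difference          ~= 6 GB saved per host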





This is the relevant portion of the configuration:



LoadModule mpm_event_module modules/mod_mpm_event.so
ServerLimit        8
StartServers       1
ThreadLimit        80
ThreadsPerChild    80
MaxRequestWorkers  640
MaxSpareThreads    120
MinSpareThreads    8
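
To spell out the sizing behind those numbers (my own arithmetic, not anything 
httpd reports):

  # MaxRequestWorkers 640  = ServerLimit 8 x ThreadsPerChild 80
  # one idle child         =  80 spare threads (under MaxSpareThreads 120)
  # two idle children      = 160 spare threads (over  MaxSpareThreads 120)

So once load drops off, a second fully idle child should put us over the spare 
limit.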







The top of the mod_status output:



Apache Server Status for HOSTNAME (via 10.X.X.X)

Server Version: CUSTOMSTRING/2.4.18 (Unix) OpenSSL/1.0.1e-fips 
mod_cluster/1.3.1.Final

Server MPM: event

Server Built: Dec 16 2015 16:07:29

________________________________________

Current Time: Tuesday, 17-May-2016 14:37:00 EDT

Restart Time: Monday, 02-May-2016 09:36:16 EDT

Parent Server Config. Generation: 10

Parent Server MPM Generation: 9

Server uptime: 15 days 5 hours 44 seconds

Server load: 0.72 0.75 0.89

Total accesses: 39007867 - Total Traffic: 1.7 GB

CPU Usage: u2533.2 s168.49 cu0 cs0 - .206% CPU load

29.7 requests/sec - 1364 B/second - 45 B/request

5 requests currently being processed, 155 idle workers

PID      Connections          Threads        Async connections
         total    accepting   busy   idle    writing   keep-alive   closing
11397    35       yes         2      78      0         33           0
29323    26       yes         3      77      0         23           0
Sum      61                   5      155     0         56           0

................................................................
................________________________________________________
_______W____W___________________................................
................................................____________W___
______W__________________________________W______________________
................................................................
................................................................
................................................................
................................................................
................................................................





The idle thread count here usually stays in the mid-150s.  These particular 
workers were started about 40 minutes apart, but I see a similar pattern in 
other regions with similar start times and the same workers staying up for over 
a month.



Given MaxSpareThreads 120, I would expect this to drop the second worker fairly 
quickly and work as described in the documentation 
(https://httpd.apache.org/docs/2.4/mod/mpm_common.html#maxsparethreads).  But 
that's not happening, and I'm stuck with two processes handling the load.  It's 
acting almost as if there were a "ServerMin 2" directive hard-coded somewhere.
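
To spell out the arithmetic behind that expectation (my own back-of-the-envelope 
against the status snapshot above, plus my reading of the linked docs):

  # observed: 2 children x ThreadsPerChild 80 = 160 threads; 5 busy, 155 idle
  # 155 idle > MaxSpareThreads 120
  # per the docs, idle children should be killed off until the idle count is
  # below 120, which with 80-thread children means dropping back to one child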



This certainly looks like a bug (whether in the documentation or the code 
itself).  Any suggestions on how to get this to work before I submit a bug 
ticket?







Thank you,



Rick Houser
