maintaining shared memory size (was: Re: swamped with connection?)
I think I have to reword the question: How do I maintain the size of the shared memory between Apache children? What causes a memory page to be copied (not shared), from Perl's point of view?

This brings up the question of how to increase shared memory usage. I've tried to load every module upfront, but even before any request comes in, the shared memory is only 7 MB. What makes it so small?

Thanks...

---
Badai Aqrandista
Cheepy (?)
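For reference, "loading every module upfront" in mod_perl is usually done from the server configuration, so the code is compiled once in the parent and the pages can stay shared copy-on-write in the children. A minimal sketch (the module names and the startup.pl path here are illustrative examples, not taken from the poster's setup):

```apache
# Hypothetical httpd.conf excerpt: preload commonly used modules in
# the parent process so their compiled bytecode is shared with the
# children via copy-on-write instead of being compiled per child.
PerlModule DBI
PerlModule CGI
PerlRequire /usr/local/apache/conf/startup.pl
```

Any application modules the handlers use would typically be pulled in from startup.pl the same way, before the first fork.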
Re: swamped with connection?
On Tue, 2005-08-23 at 15:52 +1000, Badai Aqrandista wrote:
> RAM = 700 MB
> Per-process total size = 40 MB
> Shared memory = 7 MB
> So, the number of processes = (700 - 7) / 33 = 21 processes
> So, does that mean it can only accept up to 21 connections?

Yes. If you are running a reverse proxy in front of it, that should be enough to handle a lot of traffic.

However, you should be aware that a few months back we discovered that our methods for measuring shared memory didn't work very well on Linux 2.4 kernels and don't really work at all on 2.6 kernels, so there may be more sharing (via copy-on-write) going on than you can see here. Looking at the total free memory on your machine when 21 processes are running may be more useful than just doing the calculation.

- Perrin
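The 33 in the denominator above is the unshared portion of each child: 40 MB total size minus the 7 MB shared with the parent. The sizing rule can be checked directly with the thread's figures:

```shell
# Max child processes = (total RAM - shared) / (per-process size - shared).
# Figures from the thread: 700 MB RAM, 40 MB per process, 7 MB shared.
ram=700
per_proc=40
shared=7
echo $(( (ram - shared) / (per_proc - shared) ))   # prints 21
```

This is the number MaxClients would be capped at to keep the box from swapping; raising the shared portion lowers the denominator and raises the ceiling.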
Re: maintaining shared memory size (was: Re: swamped with connection?)
On Tue, 2005-08-23 at 17:23 +1000, Badai Aqrandista wrote:
> How do I maintain the size of the shared memory between Apache
> children? What causes a memory page to be copied (not shared), from
> Perl's point of view?

Anything that writes to memory: modifying any variable (even just reading one in a different context) or compiling some code are the most common things. There's a bit more here:

http://modperlbook.com/html/ch10_01.html

- Perrin
Re: swamped with connection?
On Tuesday 23 August 2005 14:23, Perrin Harkins wrote:
> However, you should be aware that a few months back we discovered that
> our methods for measuring shared memory didn't work very well on Linux
> 2.4 kernels and don't really work at all on 2.6 kernels, so there may
> be more sharing (via copy-on-write) going on than you can see here.

See also http://marc.theaimsgroup.com/?l=apache-modperl&m=112343986910467&w=2

Torsten
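The newer measurement approach alluded to in the linked thread reads the per-mapping Shared_Clean/Shared_Dirty counters that recent 2.6 kernels expose in /proc/&lt;pid&gt;/smaps. A rough shell sketch of the idea (this is only an illustration of reading smaps, not the actual patch under discussion):

```shell
# Estimate how much of a process is shared by summing the
# Shared_Clean and Shared_Dirty fields of /proc/<pid>/smaps.
# Requires a kernel that provides smaps; defaults to the current
# shell's own pid purely for illustration.
pid=${1:-$$}
awk '/^Shared_(Clean|Dirty):/ { kb += $2 }
     END { printf "%d kB shared\n", kb }' "/proc/$pid/smaps"
```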
Re: swamped with connection?
Torsten Foertsch wrote:
> On Tuesday 23 August 2005 14:23, Perrin Harkins wrote:
> > However, you should be aware that a few months back we discovered
> > that our methods for measuring shared memory didn't work very well
> > on Linux 2.4 kernels and don't really work at all on 2.6 kernels,
> > so there may be more sharing (via copy-on-write) going on than you
> > can see here.
> See also http://marc.theaimsgroup.com/?l=apache-modperl&m=112343986910467&w=2

Speaking of which, is there any reason not to commit this? I can finally verify that it works (doesn't break anything) locally on a new box I've got. One comment might be to add an option to turn it off even if the newer Smaps support is present?

--
Philip M. Gollucci ([EMAIL PROTECTED])
Senior Developer / Liquidity Services, Inc.
Re: swamped with connection?
On Tuesday 23 August 2005 21:08, Philip M. Gollucci wrote:
> One comment might be to add an option to turn it off even if the newer
> Smaps support is present?

I have also thought of it. Sounds sound. I'll send a patch soon.

Torsten
swamped with connection?
Hi all, it's me again :D

I am still trying to improve my company's webapp performance. I'm testing it with httperf and autobench. The application seems to be able to respond when hammered by 20 connections per second and 10 calls per connection. But then it doesn't respond to any request when the connection rate is raised to 40 (with 10 calls per connection) or above.

What does Apache treat as a request (and hence fork another child for):
- every new connection, or
- every HTTP request, regardless of whether it comes on a new connection or through an existing one?

Does anyone have any idea what's going on? My only guess is that the connection count goes over MaxClients while the existing Apache children are still processing the previous requests. But why, then, are none of the requests at 40 conn/sec and above responded to?

I'll give more detail if requested. THANKS A LOT!!!

---
Badai Aqrandista
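For context, a load pattern like the one described would be driven with an httperf invocation along these lines; the server name and URI are placeholders, not the poster's actual site:

```shell
# Hypothetical httperf command matching the scenario described:
# 40 new connections per second, 10 calls per connection.
httperf --server www.example.com --uri /index.html \
        --rate 40 --num-conns 1000 --num-calls 10
```

autobench, also mentioned above, is a wrapper that runs a series of such httperf invocations at increasing rates.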
Re: swamped with connection?
On Tue, 2005-08-23 at 10:20 +1000, Badai Aqrandista wrote:
> I am still trying to improve my company's webapp performance. I'm
> testing it with httperf and autobench. The application seems to be
> able to respond when hammered by 20 connections per second and 10
> calls per connection. But then it doesn't respond to any request when
> the connection rate is raised to 40 (with 10 calls per connection)
> and above.

Did you run out of memory? Is the CPU pegged? Give us something to go on here...

> What does Apache treat as a request (and hence fork another child
> for): every new connection, or every HTTP request regardless of
> whether it comes on a new connection or through an existing one?

Every new connection, if you use Keep-Alive or HTTP 1.1.

> My only guess is that the connection count goes over MaxClients while
> the existing Apache children are still processing the previous
> requests.

What is your MaxClients set to? Have you read the performance tuning docs on the mod_perl site?

- Perrin
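The MaxClients question above would normally be answered by a prefork setting sized from the memory calculation discussed elsewhere in the thread. A sketch using that figure of 21 (the directives are standard Apache prefork configuration; the values are derived from the thread's numbers, not taken from the poster's actual config):

```apache
# Hypothetical prefork settings sized so all children fit in RAM and
# the box never swaps. 21 comes from (700 - 7) / (40 - 7) MB.
MaxClients 21

# With heavy mod_perl children, idle keep-alive connections tie up a
# whole process; either disable Keep-Alive or keep the timeout short.
KeepAlive Off
```

If clients open connections faster than 21 children can drain them, the listen backlog fills and new requests appear to hang, which matches the symptom described.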
Re: swamped with connection?
> Did you run out of memory? Is the CPU pegged? Give us something to go
> on here...
> Have you read the performance tuning docs on the mod_perl site?

Yes, ages ago. I just read it again and did the calculation again. Apparently, yes, it runs out of memory, and that holds up the connections so no other requests can be processed.

So, here are the variables:

RAM = 700 MB
Per-process total size = 40 MB
Shared memory = 7 MB

So, the number of processes = (700 - 7) / 33 = 21 processes

So, does that mean it can only accept up to 21 connections?

This brings up the question of how to increase shared memory usage. I've tried to load every module upfront, but even before any request comes in, the shared memory is only 7 MB. What makes it so small?

Thank you...

---
Badai Aqrandista