Could the problem be that PHP is not using connections efficiently?
Apache KeepAlive with PHP is a double-edged sword, with you holding the blade :-)


If I am not mistaken, what happens is that a connection is kept alive because Apache believes more requests will come in from the client that made the initial connection. So 10 concurrent connections are fine, but with 100 concurrent connections they are not released quickly enough: the system ends up waiting for existing KeepAlive connections to time out before Apache lets new ones in. We had this exact problem in an environment with millions of impressions per day going to the database. Because of the nature of our business, we were able to disable KeepAlive, and the load dropped immediately (concurrent connections on the PostgreSQL database also dropped sharply). We also turned off PHP persistent connections to the database.
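If you want to try this, the relevant httpd.conf directives look roughly like the following (a sketch only; the timeout and request-count values are illustrative, not what we ran in production):

    # Turn KeepAlive off entirely:
    KeepAlive Off

    # Or, if some clients genuinely benefit from it, keep it on
    # but shrink the window so idle clients let go sooner:
    # KeepAlive On
    # KeepAliveTimeout 2
    # MaxKeepAliveRequests 50

With a short KeepAliveTimeout, an idle client ties up an Apache child (and, with persistent connections, a PostgreSQL backend) for a couple of seconds instead of the default 15.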

The drawback is that connections are built up and torn down all the time, and with PostgreSQL that is somewhat expensive. But it's a fraction of the cost of leaving KeepAlive on.
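On the PHP side, persistent connections can be refused globally in php.ini, or you can simply use the non-persistent connect call. A minimal sketch (the host, database name, and credentials are made up):

    ; php.ini -- disallow persistent PostgreSQL connections
    pgsql.allow_persistent = Off

    <?php
    // Non-persistent: opened per request, closed when the script ends.
    $conn = pg_connect("host=localhost dbname=mydb user=myuser");

    // Persistent (what we turned off): the connection survives across
    // requests inside the same Apache child.
    // $conn = pg_pconnect("host=localhost dbname=mydb user=myuser");

    $result = pg_query($conn, "SELECT 1");
    pg_close($conn);
    ?>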

Warmest regards, Ericson Smith
Tracking Specialist/DBA
+-----------------------+--------------------------------------+
| http://www.did-it.com | "Crush my enemies, see them driven   |
| [EMAIL PROTECTED] | before me, and hear the lamentations |
| 516-255-0500 | of their women." - Conan |
+-----------------------+--------------------------------------+




Alex Madon wrote:

Hello,
I am testing a web application (using the DBX PHP functions to call a PostgreSQL backend).
I have 375MB of RAM on my test home box.
I ran ab (ApacheBench) to test the behaviour of the application under heavy load.
When I increase the number of requests, all my memory fills up, and the Linux server starts swapping and remains frozen.


ab -n 100 -c 10 http://localsite/testscript
behaves OK.

If I increase it to
ab -n 1000 -c 100 http://localsite/testscript
I get this memory problem.

If I eliminate the connection to the PostgreSQL (Unix) socket, the script behaves well even under very high load (and, of course, with much less time spent per request).
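For context, each request does something like this with DBX (a simplified sketch; the database name and credentials here are placeholders, not my real setup):

    <?php
    // One non-persistent connection per request; an empty host
    // should go through the local Unix socket.
    $link = dbx_connect(DBX_PGSQL, "", "testdb", "testuser", "secret");
    $result = dbx_query($link, "SELECT * FROM some_table");
    dbx_close($link);
    ?>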

I tried to change some parameters in postgresql.conf:

    max_connections = 32  ->  max_connections = 8
    shared_buffers = 64   ->  shared_buffers = 16

without success.
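(For what it is worth, assuming the default 8KB block size, those shared_buffers values are tiny either way:

    64 buffers x 8KB = 512KB
    16 buffers x 8KB = 128KB

so lowering it saves only about 384KB of shared memory, which probably explains why it made no visible difference.)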

I tried running pmap on the httpd and postmaster process IDs, but it didn't help much.

Does anybody have any ideas to help debug/understand/solve this issue? Any feedback is appreciated.
To me it would not be a problem if the box were merely very slow under heavy load (DoS-like), but I really dislike having my box out of service after such a DoS attack.
I am looking for a way to limit the memory used by postgres.


Thanks
Alex


