Re: maintaining shared memory size (was: Re: swamped with connection?)

2005-08-31 Thread David Hodgkinson


On 30 Aug 2005, at 10:16, Badai Aqrandista wrote:


I used to use A::S::MySQL, but it created 2 connections for every  
process. This caused 'Too many connections' errors. So I tried  
memcached. Now that I know how to make one connection per  
process (by using a database.table identifier in all SQL queries), I'll  
probably try to use A::S::MySQL again.


But now that you have reduced MaxClients, that shouldn't happen.




Re: maintaining shared memory size (was: Re: swamped with connection?)

2005-08-30 Thread Tony Clayton

Quoting Badai Aqrandista [EMAIL PROTECTED]:

 
 
 Then how do you know what to write in C?
 
 
 I have localized one subroutine that is called heavily in the search
 function. That should be the way to pick the candidate, shouldn't it?
 
 I have never done this and I am worried that writing it in C would
 push the project's deadline a little bit further. However, does anyone
 have any hints on doing this? What are the recommended readings?
 
 Anyhow, I've decreased MaxClients to 3 and it worked!!! The speed
 increased significantly, from an average of 110 sec/request to
 85 sec/request. But that's not enough. I need less than 10 sec/request.
 
 Is that possible without changing hardware?

I see from an earlier post on the mason-users list that your app is
using HTML::Mason and Apache::Session::Memcached.  It seems like you've
got quite a few variables to juggle in your performance bottleneck
analysis.

Have you tried switching to, say, Apache::Session::MySQL to see if
there is any difference?

You can also try these measures with your app:
- Use static source mode in Mason, and preload components:
  http://masonhq.com/docs/manual/Admin.html#static_source_mode
- Preload (some/all) of your Perl modules in the httpd parent using the
  PerlModule Apache directive (see the startup.pl sketch below)
- If you are connecting to a database, be sure to use Apache::DBI
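
For the preloading, a minimal startup.pl sketch (the module list here is
illustrative, not your actual dependency list):

# startup.pl -- pulled in once by the parent, e.g. via
#   PerlRequire /etc/apache/startup.pl      (path illustrative)
use strict;

use Apache::DBI ();   # load before DBI so it can cache connections
use DBI ();

use HTML::Mason ();   # preload the big modules in the parent so the
use Template ();      # children share them via copy-on-write

1;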

One cheap way to check for memory leaks (most probably introduced by
your perl code if they exist) is to run httpd -X (non-forking mode) and
watch the memory usage of httpd while you send some requests.  

You may also want to strace the process:
# httpd -X                                     (in one terminal)
# strace -o /tmp/strace.out -p `pidof httpd`   (in a second terminal)
# tail -f /tmp/strace.out

Then hit the server with some requests and watch the strace output. 
This is especially useful for finding I/O or IPC bottlenecks.

good luck
Tony




Re: maintaining shared memory size (was: Re: swamped with connection?)

2005-08-30 Thread Badai Aqrandista

Hi,


I see from an earlier post on the mason-users list that your app is
using HTML::Mason and Apache::Session::Memcached.  It seems like you've
got quite a few variables to juggle in your performance bottleneck
analysis.


Actually, I am using TT and Mason in this application. TT is used to support 
templates from the old version that we did not have time to port to Mason. 
The part that needs the speed is the part that uses TT.



Have you tried switching to, say, Apache::Session::MySQL to see if
there is any difference?
I used to use A::S::MySQL, but it created 2 connections for every process. 
This caused 'Too many connections' errors. So I tried memcached. Now that I 
know how to make one connection per process (by using a database.table 
identifier in all SQL queries), I'll probably try to use A::S::MySQL again.
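
Roughly what I mean by one connection per process (credentials and schema
names here are made up):

use strict;
use DBI;

# Connect to the MySQL *server* without naming a default database,
# then qualify every table as database.table in the SQL.
my $dbh = DBI->connect('dbi:mysql:host=localhost', 'user', 'secret',
                       { RaiseError => 1 });

my $session_id = 'abc123';   # hypothetical

# Application data and session data go through the same handle:
my $products = $dbh->selectall_arrayref(
    'SELECT id, name FROM shop.products WHERE active = 1');
my $session  = $dbh->selectrow_hashref(
    'SELECT * FROM sessions.sessions WHERE id = ?', undef, $session_id);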



One cheap way to check for memory leaks (most probably introduced by
your perl code if they exist) is to run httpd -X (non-forking mode) and
watch the memory usage of httpd while you send some requests.

You may also want to strace the process:
# httpd -X                                     (in one terminal)
# strace -o /tmp/strace.out -p `pidof httpd`   (in a second terminal)
# tail -f /tmp/strace.out

Then hit the server with some requests and watch the strace output.
This is especially useful for finding I/O or IPC bottlenecks.

Hmmm... Interesting... I'll give it a try...


good luck

Thanks...

---
Badai Aqrandista
Cheepy (?)





Re: maintaining shared memory size (was: Re: swamped with connection?)

2005-08-30 Thread Perrin Harkins
On Tue, 2005-08-30 at 14:25 +1000, Badai Aqrandista wrote:
 I have localized one subroutine that is called heavily in the search 
 function. That should be the way to pick the candidate, shouldn't it?

What usually matters when working on speed is where the most wall clock
time is being spent.  However, rewriting sections in C gives the most
benefit on subs that use the most CPU time rather than wall time.
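
A quick way to check which kind of time your candidate sub is burning
(heavy_search_sub below is a made-up stand-in for yours):

use strict;
use Time::HiRes qw(gettimeofday tv_interval);

sub heavy_search_sub {               # stand-in for the real sub
    my $x = 0; $x += sqrt($_) for 1 .. 2_000_000; return $x;
}

my $t0   = [gettimeofday];
my @cpu0 = times();                  # (user, system, cuser, csystem)
heavy_search_sub();
my $wall = tv_interval($t0);
my @cpu1 = times();
my $cpu  = ($cpu1[0] - $cpu0[0]) + ($cpu1[1] - $cpu0[1]);

printf "wall %.2fs, cpu %.2fs\n", $wall, $cpu;
# wall >> cpu : waiting on the database, I/O, or locks -- C won't help
# wall ~= cpu : genuinely CPU bound -- a C rewrite might pay off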

 I have never done this and I am worried that writing it in C would push the 
 project's deadline a little bit further.

It probably will.  It will also make your code harder to maintain,
possibly introduce lots of new bugs, and maybe not help very much.
Rewriting things in C is a last resort.  If you must do it, try
Inline::C.
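
If you do, a minimal Inline::C sketch looks like this (sum_ints is a
made-up stand-in for a hot spot, not anything from your app):

use strict;
use Inline C => <<'END_C';
/* sum an array of integers in C; takes a Perl array reference */
int sum_ints(AV *av) {
    int total = 0;
    int i;
    for (i = 0; i <= av_len(av); i++) {
        SV **elem = av_fetch(av, i, 0);
        if (elem) total += SvIV(*elem);
    }
    return total;
}
END_C

print sum_ints([1 .. 1000]), "\n";   # prints 500500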

 Anyhow, I've decreased MaxClients to 3 and it worked!!! The speed 
 increased significantly, from an average of 110 sec/request to 85 sec/request. 
 But that's not enough. I need less than 10 sec/request.

Your requests take 110 seconds each?  What is your application doing?
That doesn't sound like something that could be cured by simple
optimization.

- Perrin



Re: maintaining shared memory size (was: Re: swamped with connection?)

2005-08-30 Thread David Hodgkinson


On 30 Aug 2005, at 01:56, Badai Aqrandista wrote:





You *do* have KeepAlive off in your httpd, right?


No...


I mean in the backend Apache, not the frontend whatever.
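
That is, in the backend's httpd.conf, something like this (the
MaxClients value is illustrative, not a recommendation):

KeepAlive  Off   # backend: the frontend proxy deals with slow clients
MaxClients 10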

When you're hammering your server, is the CPU on the server
running at or near 100%? If not, you have other problems.


Re: maintaining shared memory size (was: Re: swamped with connection?)

2005-08-30 Thread Badai Aqrandista





You *do* have KeepAlive off in your httpd, right?


No...


I mean in the backend Apache, not the frontend whatever.

Yes, I understand... I put it in the backend...


When you're hammering your server, is the CPU on the server
running at or near 100%? If not, you have other problems.

Almost 100%, around 90%...

---
Badai Aqrandista
Cheepy (?)





Re: maintaining shared memory size (was: Re: swamped with connection?)

2005-08-29 Thread Badai Aqrandista


On 29 Aug 2005, at 01:15, Badai Aqrandista wrote:


I think I have to write some of the code in C. I can't find any other 
places in the code to optimize (or probably I unconsciously don't want to 
make changes because I don't have any test suites)...


Then how do you know what to write in C?


The part that is called the most... I have localized it to one or two 
subroutines... But I haven't written anything yet... My C skills are very, 
very rusty...



Frankly, if your code is CPU bound (rather than waiting for the
database) then a MaxClients of 3 will serve your purposes.

I'll give it a try...


You *do* have KeepAlive off in your httpd, right?

No...

Thanks a lot for the enlightenment...

---
Badai Aqrandista
Cheepy (?)





Re: maintaining shared memory size (was: Re: swamped with connection?)

2005-08-29 Thread David Hodgkinson


On 29 Aug 2005, at 01:15, Badai Aqrandista wrote:


I think I have to write some of the code in C. I can't find any  
other places in the code to optimize (or probably I unconsciously  
don't want to make changes because I don't have any test suites)...


Then how do you know what to write in C?

Read and understand the chapter(s) in the mod_perl guide on
profiling and see where that takes you.
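
Under mod_perl 1.x that usually means Devel::DProf via Apache::DProf;
a minimal sketch (paths illustrative):

# in httpd.conf:
PerlModule Apache::DProf

# each child then writes a Devel::DProf profile, typically to
# $ServerRoot/logs/dprof/$pid/tmon.out -- after sending some requests:
#   dprofpp logs/dprof/12345/tmon.out
# the report lists the most expensive subs first.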

Frankly, if your code is CPU bound (rather than waiting for the
database) then a MaxClients of 3 will serve your purposes.

You *do* have KeepAlive off in your httpd, right?




Re: maintaining shared memory size (was: Re: swamped with connection?)

2005-08-29 Thread Ask Bjørn Hansen


On Aug 29, 2005, at 5:11 PM, David Hodgkinson wrote:


You *do* have KeepAlive off in your httpd, right?


That is one of the great things about perlbal[1].  You can support  
KeepAlive without using more resources.
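
For anyone curious, a minimal perlbal.conf along those lines (addresses
and names are placeholders):

CREATE POOL apache
  POOL apache ADD 127.0.0.1:8080

CREATE SERVICE web
  SET listen         = 0.0.0.0:80
  SET role           = reverse_proxy
  SET pool           = apache
  SET persist_client = on    # keep-alive to clients, cheaply
ENABLE web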



 - ask

[1] http://www.danga.com/perlbal/

--
http://www.askbjoernhansen.com/



Re: maintaining shared memory size (was: Re: swamped with connection?)

2005-08-29 Thread Badai Aqrandista




Then how do you know what to write in C?



I have localized one subroutine that is called heavily in the search 
function. That should be the way to pick the candidate, shouldn't it?


I have never done this and I am worried that writing it in C would push the 
project's deadline a little bit further. However, does anyone have any hints 
on doing this? What are the recommended readings?


Anyhow, I've decreased MaxClients to 3 and it worked!!! The speed 
increased significantly, from an average of 110 sec/request to 85 sec/request. 
But that's not enough. I need less than 10 sec/request.


Is that possible without changing hardware?

---
Badai Aqrandista
Cheepy (?)





Re: maintaining shared memory size (was: Re: swamped with connection?)

2005-08-28 Thread Badai Aqrandista




top - 17:24:27 up 34 days,  9:01,  4 users,  load average: 20.67, 12.84, 9.26
Tasks: 142 total,   7 running, 135 sleeping,   0 stopped,   0 zombie
Cpu(s): 88.7% us,  7.6% sy,  0.0% ni,  0.0% id,  2.0% wa,  0.0% hi,  1.7% si
Mem:    906736k total,   359464k used,   547272k free,     6184k buffers
Swap:  3036232k total,   111564k used,  2924668k free,    17420k cached


Which processes are swapping?

Have you tried setting MaxClients to say 10 (or some other low number 
that'll ensure you don't run out of memory)?




I just did, and cleared the swap before doing another test. The result: 
nothing swapped, but the average response time went down by 20 seconds. I 
guess Perrin was right: with 30 MaxClients, the processes were competing for 
CPU cycles.


I think I have to write some of the code in C. I can't find any other places 
in the code to optimize (or probably I unconsciously don't want to make 
changes because I don't have any test suites)...


---
Badai Aqrandista
Cheepy (?)





Re: maintaining shared memory size (was: Re: swamped with connection?)

2005-08-27 Thread Ask Bjørn Hansen


On Aug 24, 2005, at 0:49, Badai Aqrandista wrote:

I have put a reverse proxy in front of my mod_perl servers and I  
have set MaxClients to 30. I have tried setting it to 50, but it  
slows down the response time.


This is what top gave me when I hammered the test server with httperf:

--

top - 17:24:27 up 34 days,  9:01,  4 users,  load average: 20.67, 12.84, 9.26
Tasks: 142 total,   7 running, 135 sleeping,   0 stopped,   0 zombie
Cpu(s): 88.7% us,  7.6% sy,  0.0% ni,  0.0% id,  2.0% wa,  0.0% hi,  1.7% si
Mem:    906736k total,   359464k used,   547272k free,     6184k buffers
Swap:  3036232k total,   111564k used,  2924668k free,    17420k cached


Which processes are swapping?

Have you tried setting MaxClients to say 10 (or some other low number 
that'll ensure you don't run out of memory)?



 - ask

--
http://www.askbjoernhansen.com/



Re: maintaining shared memory size (was: Re: swamped with connection?)

2005-08-24 Thread Badai Aqrandista



 Does this sound like fixing the wrong problem?

Yes.  Put a reverse proxy in front of your server, tune MaxClients so
you won't go into swap, and then benchmark to see how much load you can
handle.  Then think about tuning.


Thanks for replying...

I have put a reverse proxy in front of my mod_perl servers and I have set 
MaxClients to 30. I have tried setting it to 50, but it slows down the 
response time.


This is what top gave me when I hammered the test server with httperf:

--

top - 17:24:27 up 34 days,  9:01,  4 users,  load average: 20.67, 12.84, 9.26
Tasks: 142 total,   7 running, 135 sleeping,   0 stopped,   0 zombie
Cpu(s): 88.7% us,  7.6% sy,  0.0% ni,  0.0% id,  2.0% wa,  0.0% hi,  1.7% si
Mem:    906736k total,   359464k used,   547272k free,     6184k buffers
Swap:  3036232k total,   111564k used,  2924668k free,    17420k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
22802 www-data  15   0 39664  35m 7968 S 30.7  4.0   0:05.02 apache-perl
22370 www-data  16   0 39664  35m 7968 S 12.8  4.0   0:03.45 apache-perl
13604 www-data  17   0 40096  35m 7968 R  5.7  4.0   0:30.93 apache-perl
15424 root  15   0 32060 7668 1560 S  3.8  0.8   6:49.47 memcached
13611 www-data  15   0 40036  35m 7968 S  3.5  4.0   0:17.13 apache-perl
22804 www-data  17   0 39664  35m 7968 R  3.1  4.0   0:03.01 apache-perl
13612 www-data  16   0 40032  35m 7968 S  0.7  4.0   0:33.38 apache-perl



I ran httperf to create 50 connections. This is the result:

1 connection per second - avg reply time = 103953.7 ms
10 connections per second - avg reply time = 123167.2 ms
20 connections per second - avg reply time = 121483.7 ms
30 connections per second - avg reply time = 114411.3 ms
40 connections per second - avg reply time = 130168.7 ms
50 connections per second - avg reply time = 130457.4 ms

When only creating 1 connection, the avg reply time = 3289.4 ms
When creating 10 connections, with 1 conn per second, the avg reply time = 25929.7 ms
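
The invocations looked roughly like this (host and URI are placeholders):

httperf --server testbox --port 80 --uri /search --rate 10 --num-conns 50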


I have no idea where to start fixing this. It seems that the more 
connections there are, the higher the avg reply time.


Has anyone ever experienced this? Why does it happen?
Basically I just want to make my webapp ready to launch... (doesn't everyone?)

THANK YOU...

---
Badai Aqrandista
Cheepy (?)
