Resource consumption.

2003-10-28 Thread Rudi Starcevic
Hi,

I'm pretty sure I have a cron job analysing apache logs which is
consuming too much of the system's resources.
So much is spent on Webalizer and Awstats that the web server stops
answering requests.

The output of `uptime` showed a load average of something like 2.2 before I
manually killed the script, after which all was OK again.

What can I do about this?

Here is my simple bash script:

# do webazolver
for i in /var/log/apache/access_tmp/*-access_log; do
    webazolver -N 20 -D /var/log/webazolver/dns_cache.db $i
done

# do webalizer
for i in /var/log/apache/access_tmp/*-access_log; do
    site=`echo $i | sed 's/\/var\/log\/apache\/access_tmp\///'`
    site=`echo $site | sed 's/-access_log//'`
    if [ -e /etc/webalizer/$site.webalizer.conf ]; then
        # the config path continues the webalizer command line,
        # so it needs a backslash continuation
        webalizer -D /var/log/webazolver/dns_cache.db -c \
            /etc/webalizer/$site.webalizer.conf
    fi
done

It just loops through the apache logs and analyses them.
I even use 'webazolver' to try and help, but it still grinds down the machine.
I currently have this script fire every 4 hours, so the logs are not
too big.

I'm thinking maybe to add a `sleep 300` or something to the script.
Maybe it's better to check whether an instance of Webalizer is already
running, then sleep and try again.
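The "check if one is already running" idea can be sketched with a simple lock; `mkdir` is atomic, so it doubles as a mutex between cron runs. This is only a sketch - the lock path is illustrative, not from the script above:

```shell
#!/bin/sh
# Minimal sketch: skip this run if a previous one is still active.
# mkdir either creates the directory (we got the lock) or fails
# because it exists (someone else holds it).
LOCKDIR="${TMPDIR:-/tmp}/webalizer-cron.lock"

if mkdir "$LOCKDIR" 2>/dev/null; then
    echo "lock acquired - safe to run webalizer"
    # ... webazolver / webalizer loops would go here ...
    rmdir "$LOCKDIR"
else
    echo "another run is active - sleep and retry, or just exit"
fi
```

A plain `[ -e lockfile ]` test followed by `touch` would have a race between the check and the create; the `mkdir` form does both in one atomic step.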

Any suggestions?
I have about 20 virtual sites on this box and 400 on another.

Many thanks
Regards
Rudi.






-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Resource consumption.

2003-10-28 Thread Rudi Starcevic
Hi,

OK, sorry - I found the answer.
Next time I'll try harder before I bother you.

I found out about the `wait` command in Bash scripting.
I'll try something like:

# do webalizer
for i in /var/log/apache/access_tmp/*-access_log; do
    site=`echo $i | sed 's/\/var\/log\/apache\/access_tmp\///'`
    site=`echo $site | sed 's/-access_log//'`
    if [ -e /etc/webalizer/$site.webalizer.conf ]; then
        # $! is only set for background jobs, so start webalizer with &
        webalizer -D /var/log/webazolver/dns_cache.db -c \
            /etc/webalizer/$site.webalizer.conf &
        WEB_PID=$!
        wait $WEB_PID
    fi
done
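For reference, `$!` only holds the PID of the most recent *background* job, so the command being waited on has to end with `&` - a minimal demonstration:

```shell
#!/bin/sh
# $! is the PID of the last job started with &;
# wait blocks until that job exits and returns its status.
sleep 1 &
WEB_PID=$!
wait "$WEB_PID"
echo "job $WEB_PID finished with status $?"
```

Note that a foreground command (no `&`) is already waited for by the shell, so in that case the explicit `wait` is a no-op.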
 
Cheers
Rudi.







Re: Resource consumption.

2003-10-28 Thread Rudi Starcevic
Hi,

Thanks Russell,

> > I'm pretty sure I have a cron job analysing apache logs which is
> > consuming too much of the system's resources.
> > So much is spent on Webalizer and Awstats that the web server stops
> > answering requests.
>
> CPU time or IO bandwidth?

CPU time is what I meant. Sorry, I should have been more clear.

> > The output of `uptime` was something like 2.2 before I manually kill the
> > script and all is OK again.
>
> 2.2 should not be a great problem.  A machine that has a single CPU and a
> single hard disk probably won't be giving good performance when its load
> average exceeds 2.0, but it should still work.

I thought that if the load average went above 1.0 that was bad, and meant
you needed to do something to bring the load back under 1.0.

Even one process of Awstats uses heaps of CPU - over 90%.
Maybe I need to create a user account for processing Apache logs and limit
CPU consumption with 'ulimit' or something?
Cheers
Rudi.
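The ulimit idea can be tried in a subshell, so the limit applies only to the analysis run and not to the rest of the script. A hedged sketch - the number is illustrative, not a recommendation:

```shell
#!/bin/sh
# Cap CPU time for the log analysis only, by setting the limit
# inside a subshell.  600 CPU-seconds is an illustrative value;
# the process receives SIGXCPU when it is exceeded.
(
    ulimit -t 600
    # webalizer / awstats would run here
    ulimit -t      # show the limit now in effect
)
```

Note that `ulimit -t` caps *total* CPU seconds and kills the process when exceeded; if the goal is just to keep the web server responsive, lowering scheduling priority (nice) may be the gentler tool.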




Re: Resource consumption.

2003-10-28 Thread Chris Foote
On Wed, 29 Oct 2003, Rudi Starcevic wrote:

> > > I'm pretty sure I have a cron job analysing apache logs which is
> > > consuming too much of the system's resources.
> > > So much is spent on Webalizer and Awstats that the web server stops
> > > answering requests.
> >
> > CPU time or IO bandwidth?
>
> CPU time is what I meant. Sorry, I should have been more clear.
>
> > > The output of `uptime` was something like 2.2 before I manually kill the
> > > script and all is OK again.
> >
> > 2.2 should not be a great problem.  A machine that has a single CPU and a
> > single hard disk probably won't be giving good performance when its load
> > average exceeds 2.0, but it should still work.
>
> I thought that if the load average went above 1.0 that was bad, and meant
> you needed to do something to bring the load back under 1.0.
>
> Even one process of Awstats uses heaps of CPU - over 90%.
> Maybe I need to create a user account for processing Apache logs and limit
> CPU consumption with 'ulimit' or something?

I think you might be overlooking the value of the 'nice' shell
builtin - try:

# do webalizer
for i in /var/log/apache/access_tmp/*-access_log; do
    site=`echo $i | sed 's/\/var\/log\/apache\/access_tmp\///'`
    site=`echo $site | sed 's/-access_log//'`
    if [ -e /etc/webalizer/$site.webalizer.conf ]; then
        # background the command so $! is set for the wait below
        nice webalizer -D /var/log/webazolver/dns_cache.db -c \
            /etc/webalizer/$site.webalizer.conf &
        WEB_PID=$!
        wait $WEB_PID
    fi
done
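An aside on `nice` (hedged, but easy to verify): plain `nice` raises the nice value by 10, and `-n` picks the amount, up to 19 for the gentlest setting. GNU coreutils `nice` with no command prints the current niceness, which makes the effect visible:

```shell
#!/bin/sh
# Plain `nice` adds 10 to the nice value; -n chooses the increment.
# With no command, GNU nice prints the niceness it would run at.
nice -n 19 nice    # gentlest setting: prints the raised nice value
```

Higher nice values mean lower scheduling priority, so Apache keeps getting the CPU while webalizer uses only what is left over.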


Cheers,
Chris

Linux.Conf.Au Adelaide Jan 12-17 2004
Australia's Premier Linux Conference
http://lca2004.linux.org.au





Re: Resource consumption.

2003-10-28 Thread Rudi Starcevic
Hi Chris,

> I think you might be overlooking the value of the 'nice' shell builtin - try:

Indeed.
Thanks.
Regards
Rudi.


Re: Resource consumption.

2003-10-28 Thread Rudi Starcevic
Hi,

Me again ..

I guess what I want to do is have this script execute webalizer
one at a time, waiting until webalizer is finished before starting
again.
Instead the script fires off many webalizers at once.
Sorry, I guess my simple bash skills are not up to scratch.
I'll head over to tldp.org to see if I can't find the answer.

> # do webalizer
> for i in /var/log/apache/access_tmp/*-access_log; do
> site=`echo $i | sed 's/\/var\/log\/apache\/access_tmp\///'`
> site=`echo $site | sed 's/-access_log//'`
> if [ -e /etc/webalizer/$site.webalizer.conf ];
> then
> webalizer -D /var/log/webazolver/dns_cache.db -c \
> /etc/webalizer/$site.webalizer.conf;
> fi
> done

Cheers
Rudi.




Re: Resource consumption.

2003-10-28 Thread Russell Coker
On Tue, 28 Oct 2003 23:03, Rudi Starcevic wrote:
> I'm pretty sure I have a cron job analysing apache logs which is
> consuming too much of the system's resources.
> So much is spent on Webalizer and Awstats that the web server stops
> answering requests.

CPU time or IO bandwidth?

> The output of `uptime` was something like 2.2 before I manually kill the
> script and all is OK again.

2.2 should not be a great problem.  A machine that has a single CPU and a
single hard disk probably won't be giving good performance when its load
average exceeds 2.0, but it should still work.

But if your processing takes longer than the cron interval then you will have 
serious problems.  Changing the cron interval from 4 hours to 24 may reduce 
the chance of getting two cron jobs running at the same time.
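One way to rule out overlapping cron runs entirely is to serialize them with flock(1) from util-linux; the second invocation then exits immediately instead of piling more load onto the box. A sketch, with an illustrative lock path:

```shell
#!/bin/sh
# flock -n takes the lock if it is free and fails at once if not,
# so a cron run that overlaps the previous one simply skips its turn.
LOCK="${TMPDIR:-/tmp}/webalizer.cron.lock"

flock -n "$LOCK" -c 'echo "got the lock; analysis would run here"' \
    || echo "another run is still going - skipped"
```

flock creates the lock file if it does not exist, and the lock is released automatically when the command finishes, even if it crashes.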

Also you may want to consider adding more RAM.  Webalizer can get a bit memory 
hungry at times, and it seems to have fairly linear data access patterns so 
when it starts paging it thrashes.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/    Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page




apache and php resource consumption

2002-02-25 Thread Vinai Kopp
Hello,
I am trying to set up a system so I can monitor the cpu and memory usage of
apache processes, in particular of pages containing php scripts, because
sometimes some apache processes eat up a machine. What I want are numbers
to show to the people writing the programs.

I'm thinking of reading /proc/self/statm or /proc/self/status in a php 
script automatically appended to each request and writing the results to a 
logfile.

But I'm not sure how to interpret the output. What I am looking for is a
way - if at all possible - to get the amount of memory used by the apache
child, by the php script (since it's the same process that might be
difficult), and the cpu time of the request (I guess I could also get that
from the apache logs).

This is for machines running apache 1.3.20 and php 4.0.6.
Has someone gone into this before? Am I missing something?
Any pointers are appreciated!
Greetings,
Vinai
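The /proc idea above can be tried from the shell first, before wiring it into PHP. On Linux, /proc/<pid>/status carries a VmRSS line with the resident set size in kB; a sketch that inspects the shell's own process (an Apache child's PID would be substituted in practice):

```shell
#!/bin/sh
# Print the resident set size (VmRSS) of a process, in kB,
# by pulling the field out of /proc/<pid>/status.
# Here we inspect the shell itself as a stand-in for an apache child.
pid=$$
awk '/^VmRSS:/ { print $2 " kB" }' "/proc/$pid/status"
```

Keep in mind VmRSS is the whole process's resident memory, so it will not separate the php script's share from the apache child's - only the growth between two samples (before and after the request) hints at what the script itself used.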