RE: Specific limiting examples (was RE: Apache::SizeLimit for unshared RAM ???)

2001-01-15 Thread Rob Bloodgood

 I think that the problem here is that we've asked for more info and he
 hasn't supplied it.  He's given us generics and as a result has gotten
 generic answers.

I haven't been fishing for a handout, asking others to do the work for me.
I've been trying to see what people have done.  The reason I've been waiting
is that I've looked at no fewer than seven CPAN modules that do some kind of
resource/server monitoring, and I couldn't figure out which one(s) would go
together as a reasonable combination.

 1) What are the average hits per second that you are projecting
 this box to handle?
Up to, say, 3,000,000 hits/day.

 2) What is the peak hits per second that you are projecting this box
 to handle?

This is a guesstimate, but approximately 50/s.

 3) We know you have a gig of RAM, but give us info on the rest of
 the platform.

No, I have 2GB of RAM, on a dual PIII/550 BX board, with an 8GB drive.

 4) What's your OS like? Solaris, AIX, HP-UX, Linux, FreeBSD, etc.?  Which
 version and/or flavor?

Running Red Hat 6.1, updated with all kinds of current patches/updates.

 5) What other processes are you running?

Predominantly, I'm running a count daemon on the same box.  My project is
http://www.exitexchange.com, and I take raw hits at the webserver and fire
them over to my count daemon, which keeps load down on the database... but
I'm getting ahead of myself.
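The pattern described here (absorb raw hits in a lightweight daemon and batch them toward the database) can be sketched roughly as below; the class and method names are illustrative, not taken from the actual exitexchange.com code:

```python
from collections import Counter

class CountDaemon:
    """Toy sketch of a hit-counting daemon that batches updates.

    Raw hits are aggregated in memory; only flush() touches the
    (hypothetical) database, turning N raw hits into a single
    write per key.
    """

    def __init__(self, flush_every=1000):
        self.counts = Counter()
        self.pending = 0
        self.flush_every = flush_every
        self.flushed = []  # stand-in for database writes

    def hit(self, key):
        self.counts[key] += 1
        self.pending += 1
        if self.pending >= self.flush_every:
            self.flush()

    def flush(self):
        # One aggregated write per key instead of one per raw hit.
        for key, n in self.counts.items():
            self.flushed.append((key, n))
        self.counts.clear()
        self.pending = 0
```

At 800K hits/day, batching like this is what keeps the database load proportional to the number of distinct keys rather than the raw hit rate.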

 6) Do you have a database?  Which one? A gig of RAM is nothing to Oracle.

Oracle 8.1.6 on a Sun 450 (4x400MHz UltraSparc ### with 2GB of RAM, 7x9GB SCSI
in a software RAID for the dataspace, running Solaris 8).

 6a) Will you be running queries constantly, or will you be caching a lot?

Whenever possible, I try to cache a lot.  The largest part of the application
that is NOT cached is the part I'm working on right now... ever heard of
POE?

 7) What other modules are you running?  PHP? SpeedyCGI? AxKit? Cocoon?

Well, the count server is only running mod_perl, with a couple of custom
server extensions, all pretty lean.  Per-process size is about 12.5MB, of
which about 5600KB is shared.

 In short, what is the server DOING at any given moment?  Until folks
 have a feel for this, no one is going to be able to offer you any
 insight beyond what you already have.

Well, I get a hit, I hit the database for the response, and I send it back
interpolated into the response content.  Currently about 800K times/day.
:-)

However, I got what I was looking for out of this discussion a couple of
messages back, with Perrin's example.  YES, the numbers are made up.  No
problem... I have a basic syntactic skeleton to work with; now I can
fine-tune.

L8r,
Rob




Re: Specific limiting examples (was RE: Apache::SizeLimit for unshared RAM ???)

2001-01-15 Thread Ask Bjoern Hansen

On Thu, 11 Jan 2001, Perrin Harkins wrote:

[...]
 Even a carefully coded system will leak over time, and I think killing off
 children after 1MB of growth would probably be too quick on the draw.  

I tend to cap each child at N requests.  If each httpd child needs to
be re-forked every N requests, that's pretty insignificant, and it can
save you from some blowups.
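The stock Apache way to express this recycling policy is the MaxRequestsPerChild directive; the value below is only a placeholder, since Ask's actual figure did not survive in the archive:

```apache
# httpd.conf: recycle each mod_perl child after it has served this
# many requests, so slow leaks never accumulate for long.
# (Placeholder value; pick a number high enough that fork cost is
# amortized over thousands of requests.)
MaxRequestsPerChild 10000
```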

[...]
 It would be nice if Apache could dynamically decide how many processes to
 run at any given moment based on available memory and how busy the server
 is, 

It's really hard to take into account other processes on the system
starting and stopping too, and whether other processes could be swapped
out.  Or couldn't.

 but in practice the best thing I've found is to tune it for the
 worst case scenario.  If you can survive that, the rest is cake.  
 Then you can get on to really hard things, like scaling your
 database.

In my experience, setting MaxClients for the mod_perl httpd really,
really low (maybe as low as 5-10 children) and letting your mod_proxy
deal with hundreds of clients works great. It all depends, though; in
my application I pretty much don't have anything blocking, so I
couldn't get much more done with more children.

If most of the time to serve a request is spent waiting for the
database, you may want more children going.
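The split Ask describes is the classic two-server setup from the mod_perl guide: a thin proxy front end relaying dynamic requests to a small pool of fat mod_perl children. A minimal sketch, assuming the mod_perl httpd listens on port 8080 of the same machine (paths and ports are illustrative):

```apache
# Front-end httpd.conf (lightweight Apache, no mod_perl compiled in).
# Serves static files itself; relays /perl/ to the back end.
ProxyPass        /perl/ http://127.0.0.1:8080/perl/
ProxyPassReverse /perl/ http://127.0.0.1:8080/perl/

# Back-end httpd.conf (the mod_perl server):
# Port 8080
# MaxClients 10    # "really low", per the advice above
```

The front end absorbs slow clients and keepalive connections, so each expensive mod_perl child spends its time doing work rather than waiting on the network.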


 - ask

-- 
ask bjoern hansen - http://ask.netcetera.dk/
more than 70M impressions per day, http://valueclick.com




Re: Specific limiting examples (was RE: Apache::SizeLimit for unshared RAM ???)

2001-01-15 Thread Perrin Harkins

On Mon, 15 Jan 2001, Ask Bjoern Hansen wrote:
 I tend to cap each child at N requests.  If each httpd child needs to
 be re-forked every N requests, that's pretty insignificant, and it can
 save you from some blowups.

The reason I like using SizeLimit instead of a number-of-requests cap is that
it won't kill off processes when there isn't a problem.  It also catches
situations where you occasionally do something that raises the size of a
process significantly, by killing those processes off sooner.

- Perrin




Re: Specific limiting examples (was RE: Apache::SizeLimit for unshared RAM ???)

2001-01-11 Thread Perrin Harkins

On Thu, 11 Jan 2001, Rob Bloodgood wrote:
 Second of all, with the literally thousands of pages of docs necessary to
 understand in order to be really mod_perl proficient

Most of the documentation is really reference-oriented.  All the important
concepts in mod_perl performance tuning fit in a few pages of the guide.

 I mean, 1GB is a lot of ram.

It's all relative.  If you have significant traffic on your site, 1GB RAM
might not be nearly enough.

 And finally, I was hoping to prod somebody into posting snippets of
 CODE
 and
 httpd.conf
 
 that describe SPECIFIC steps/checks/modules/configs designed to put a
 reasonable cap on resources so that we can serve millions of hits w/o
 needing a restart.

I think you're making this much harder than it needs to be.  It's this
simple:

MaxClients 30

PerlFixupHandler Apache::SizeLimit
<Perl>
  use Apache::SizeLimit;
  # sizes are in KB
  $Apache::SizeLimit::MAX_PROCESS_SIZE       = 30000;
  $Apache::SizeLimit::CHECK_EVERY_N_REQUESTS = 5;
</Perl>

If you're paranoid, you can throw BSD::Resource in the mix to catch
things like infinite loops in your code.
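The BSD::Resource route is usually wired up through Apache::Resource, which calls setrlimit(2) in each child at startup; a hedged sketch (the limit values here are examples only, not a recommendation):

```apache
# httpd.conf: hard-limit each child via setrlimit(2).
# PERL_RLIMIT_CPU is soft:hard CPU seconds; PERL_RLIMIT_DATA is
# soft:hard megabytes of data segment.  A runaway child hits the
# limit and dies instead of eating the box.
PerlSetEnv PERL_RLIMIT_CPU  120:360
PerlSetEnv PERL_RLIMIT_DATA 64:96
PerlChildInitHandler Apache::Resource
```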

None of this will make your code faster or your server bigger.  It will
just prevent it from going into swap.  Having too much traffic can still
hose your site in lots of other ways that have nothing to do with swap and
everything to do with the design of your application and the hardware you
run it on, but there's nothing mod_perl-specific about those issues.

- Perrin




Re: Specific limiting examples (was RE: Apache::SizeLimit for unshared RAM ???)

2001-01-11 Thread Jimi Thompson

I think that the problem here is that we've asked for more info and he hasn't
supplied it.  He's given us generics and as a result has gotten generic
answers.

1) What are the average hits per second that you are projecting this box to
handle?
2) What is the peak hits per second that you are projecting this box to handle?
3) We know you have a gig of RAM, but give us info on the rest of the platform.

4) What's your OS like? Solaris, AIX, HP-UX, Linux, FreeBSD, etc.?  Which
version and/or flavor?
5) What other processes are you running?
6) Do you have a database?  Which one? A gig of RAM is nothing to Oracle.
6a) Will you be running queries constantly, or will you be caching a lot?
7) What other modules are you running?  PHP? SpeedyCGI? AxKit? Cocoon?
In short, what is the server DOING at any given moment?  Until folks have a
feel for this, no one is going to be able to offer you any insight beyond
what you already have.

Perrin Harkins wrote:

 On Thu, 11 Jan 2001, Rob Bloodgood wrote:
  Second of all, with the literally thousands of pages of docs necessary to
  understand in order to be really mod_perl proficient

 Most of the documentation is really reference-oriented.  All the important
 concepts in mod_perl performance tuning fit in a few pages of the guide.

  I mean, 1GB is a lot of ram.

 It's all relative.  If you have significant traffic on your site, 1GB RAM
 might not be nearly enough.

  And finally, I was hoping to prod somebody into posting snippets of
  CODE
  and
  httpd.conf
 
  that describe SPECIFIC steps/checks/modules/configs designed to put a
  reasonable cap on resources so that we can serve millions of hits w/o
  needing a restart.

 I think you're making this much harder than it needs to be.  It's this
 simple:

 MaxClients 30

 MaxClients 30

 PerlFixupHandler Apache::SizeLimit
 <Perl>
   use Apache::SizeLimit;
   # sizes are in KB
   $Apache::SizeLimit::MAX_PROCESS_SIZE       = 30000;
   $Apache::SizeLimit::CHECK_EVERY_N_REQUESTS = 5;
 </Perl>

 If you're paranoid, you can throw BSD::Resource in the mix to catch
 things like infinite loops in your code.

 None of this will make your code faster or your server bigger.  It will
 just prevent it from going into swap.  Having too much traffic can still
 hose your site in lots of other ways that have nothing to do with swap and
 everything to do with the design of your application and the hardware you
 run it on, but there's nothing mod_perl-specific about those issues.

 - Perrin

--
Jimi Thompson
Web Master
Link.com

"It's the same thing we do every night, Pinky."





Re: Specific limiting examples (was RE: Apache::SizeLimit for unshared RAM ???)

2001-01-11 Thread Perrin Harkins

On Thu, 11 Jan 2001 [EMAIL PROTECTED] wrote:
  I think you're making this much harder than it needs to be.  It's this
  simple:
  MaxClients 30
  PerlFixupHandler Apache::SizeLimit
  <Perl>
    use Apache::SizeLimit;
    # sizes are in KB
    $Apache::SizeLimit::MAX_PROCESS_SIZE       = 30000;
    $Apache::SizeLimit::CHECK_EVERY_N_REQUESTS = 5;
  </Perl>
 This is just like telling an ISP that they can only have 60-ish dial-in
 lines for modems because that could theoretically fill their T1, even
 though they would probably hardly even hit 50% if they only had 60 modems
 for a T1.

 The idea that any process going over 30 megs should be killed is probably
 safe.  The solution, though, is only really valid if our normal process is
 29 megs.  Otherwise we are limiting each system to something lower than it
 can produce.

It's a compromise.  Running a few fewer processes than you could is better
than risking swapping, because your service is basically gone when you hit
swap.  (Note that this is different from your ISP example, because the ISP
could still provide some service, albeit with reduced bandwidth, after
maxing out its T1.)  The numbers I put in here were random, but in a real
system you would adjust this according to your expected process size.

Even a carefully coded system will leak over time, and I think killing off
children after 1MB of growth would probably be too quick on the draw.  
Child processes have a significant startup cost and there's a sweet spot
between how big you let the processes get and how many you run which you
have to find by testing with a load tool.  It's different for different
applications.

It would be nice if Apache could dynamically decide how many processes to
run at any given moment based on available memory and how busy the server
is, but in practice the best thing I've found is to tune it for the worst
case scenario.  If you can survive that, the rest is cake.  Then you can
get on to really hard things, like scaling your database.
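The worst-case tuning described above usually starts from a back-of-the-envelope memory budget. Here is that arithmetic as a short script, using Rob's figures from earlier in the thread (2GB box, ~12.5MB per child of which ~5.6MB is shared) plus an assumed 256MB reserve for the OS and other daemons; the reserve is an illustration, not something from the thread:

```python
def max_clients(total_ram_mb, shared_mb, per_child_mb, reserve_mb):
    """Rough ceiling on MaxClients before the box would swap.

    Shared pages are counted once; each additional child then costs
    only its unshared portion (per_child_mb - shared_mb).
    """
    unshared_mb = per_child_mb - shared_mb
    available_mb = total_ram_mb - reserve_mb - shared_mb
    return int(available_mb / unshared_mb)

# Rob's numbers: 2GB of RAM, 12.5MB children sharing 5.6MB,
# reserving 256MB for everything that isn't Apache.
print(max_clients(2048, 5.6, 12.5, 256))
```

In practice you would measure the real shared/unshared split under load (e.g. with GTop, as the mod_perl guide suggests) rather than trust static numbers, and then verify the resulting MaxClients with a load-testing tool as Perrin describes.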

- Perrin