Apache::Session::DB_File and open sessions

2001-01-19 Thread Todd Finney

I'm having a hard time with Apache::Session::DB_File, and I 
think I have it narrowed down to a small enough problem to 
ask about it.  I haven't given up on A::S::Postgres, I'm 
just trying to get things working with DB_File before I try 
to solve my other problem.

The one-sentence version of my question is: Is there a 
problem with tying a session twice during two different 
HeaderParserHandlers, as long as you're doing the standard 
cleanup stuff (untie | make_modified) in each?

The longer version:

I have managed to boil this problem down to as little code 
as possible for reproduction purposes.

I have two PerlHeaderParserHandlers listed in httpd.conf, 
to be run one after the other when a particular file is 
requested:

<Files handlertest.html>
    SetHandler              perl-script
    PerlHeaderParserHandler LocalSites::TestParser1 LocalSites::TestParser2
</Files>

In the Real World, the first handler is a session manager, 
and the second handler is a login form processor.  The 
login form points to handlertest.html, a dummy file that no 
one should ever land on.

I have also added a couple of debug lines to 
Apache::Session::Lock::File, so I can track things:

In the beginning of acquire_read_lock:

warn "ARL: Write is: ".$self-{write}." Read is: 
".$self-{read}." Opened is: ".$self-{opened};

with similar lines in acquire_write_lock (AWL) and DESTROY.

The first HeaderParserHandler ties the session, puts the 
session id into pnotes, and unties.
--
package LocalSites::TestParser1;
use strict;

use Apache::Constants qw(DECLINED);
use Apache::Session::DB_File;
use Apache::Log;

sub handler {
    my $r   = shift;
    my $log = $r->server->log();
    my $session = $r->header_in('Cookie');
    $session =~ s/SESSION_ID=(\w*)/$1/;
    my %session = ();
    $log->error("TestParser1: tying session");
    tie %session, 'Apache::Session::DB_File', $session, {
        FileName      => '/www/data/sessions/sessions.db',
        LockDirectory => '/var/lock/sessions',
    };
    $r->pnotes('SESSION_ID', $session{_session_id});
    $log->error("TestParser1: untying session");
    untie(%session);
    $log->error("TestParser1: done");
    return DECLINED;
}
1;
__END__
--

The second handler grabs the session id from 
pnotes, ties the session, modifies it, saves, and unties:

--
package LocalSites::TestParser2;
use strict;

use Apache::Constants qw(DECLINED);
use Apache::Session::DB_File;
use Apache::Log;

sub handler {
    my $r   = shift;
    my $log = $r->server->log();
    my $session = $r->pnotes('SESSION_ID');
    my %session = ();
    $log->error("TestParser2: tying session");
    tie %session, 'Apache::Session::DB_File', $session, {
        FileName      => '/www/data/sessions/sessions.db',
        LockDirectory => '/var/lock/sessions',
    };
    $session{'AUTH'} = 1;
    tied(%session)->make_modified();
    $log->error("TestParser2: untying session");
    untie(%session);
    $log->error("TestParser2: done");
    return DECLINED;
}
1;
__END__
--

The error log output follows.  From the looks of it, the 
second module is failing to secure a write lock during the 
2nd handler because the 2nd handler already has it marked 
open.  The second handler does not exhibit this behavior 
when it's running by itself (unchanged except "my $session 
= undef;").

I'm not sure why I'm not seeing the DESTROY call from the 
first handler, but the read lock acquisition in the second 
indicates that the file is free for the plundering.

When both handlers are run, the process hangs where 
indicated below (acquiring the write lock).  Restarting 
Apache kills the process and results in the DESTROY calls 
below the indicated point.  Without a restart the process 
will sit indefinitely.

[Fri Jan 19 03:26:50 2001] [error] TestParser1: tying 
session
ARL: Write is: 0 Read is: 0 Opened is: 0 at 
/usr/lib/perl5/site_perl/5.005/Apache/Session/Lock/File.pm 
line 31.

Apache::Session::Lock::File::acquire_read_lock('Apache::Session::Lock::File=HASH(0x8670cf4)', 'Apache::Session::DB_File=HASH(0x8670c70)') called at /usr/lib/perl5/site_perl/5.005/Apache/Session.pm line 552
Apache::Session::acquire_read_lock('Apache::Session::DB_File=HASH(0x8670c70)') called at /usr/lib/perl5/site_perl/5.005/Apache/Session.pm line 472
Apache::Session::restore('Apache::Session::DB_File=HASH(0x8670c70)') called at /usr/lib/perl5/site_perl/5.005/Apache/Session.pm line 386
Apache::Session::TIEHASH('Apache::Session::DB_File', 'f4a0314ccd0c5e1b84e5670edd924983', 'HASH(0x85ec664)') called at /www/boygenius/libraries/LocalSites/TestParser1.pm line 

Re: Apache::Session::DB_File and open sessions

2001-01-19 Thread Perrin Harkins

Todd Finney wrote:
 The one-sentence version of my question is: Is there a
 problem with tying a session twice during two different
 HeaderParserHandlers, as long as you're doing the standard
 cleanup stuff (untie | make_modified) in each?

It seems like the answer should be no unless there's some kind of bug,
but I don't understand why you're doing it this way.  Why don't you just
put a reference to the %session hash in pnotes and use it in the second
handler, instead of putting the ID in and re-creating it?  That should
be considerably more efficient.
- Perrin
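
[Editor's sketch of the approach Perrin describes, not code from the thread:
tie the session once, stash a reference to the tied hash in pnotes, and let
the second handler use that reference.  Module names and paths are reused
from Todd's examples; putting the untie in a registered cleanup is an
assumption about where the single cleanup would go.]

package LocalSites::TestParser1;
use strict;

use Apache::Constants qw(DECLINED);
use Apache::Session::DB_File;

sub handler {
    my $r  = shift;
    my $id = $r->header_in('Cookie');
    $id =~ s/SESSION_ID=(\w*)/$1/;

    my %session;
    tie %session, 'Apache::Session::DB_File', $id, {
        FileName      => '/www/data/sessions/sessions.db',
        LockDirectory => '/var/lock/sessions',
    };

    # Share the tied hash itself; pnotes entries last for this request only.
    $r->pnotes(SESSION => \%session);

    # Untie once, at the end of the request, instead of in every handler.
    $r->register_cleanup(sub { untie %session });

    return DECLINED;
}
1;
__END__

package LocalSites::TestParser2;
use strict;

use Apache::Constants qw(DECLINED);

sub handler {
    my $r       = shift;
    my $session = $r->pnotes('SESSION');      # hashref stored by TestParser1
    $session->{AUTH} = 1;
    tied(%$session)->make_modified();
    return DECLINED;
}
1;
__END__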



Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl withsc ripts that contain un-shared memory

2001-01-19 Thread Perrin Harkins

Sam Horrocks wrote:
  say they take two slices, and interpreters 1 and 2 get pre-empted and
  go back into the queue.  So then requests 5/6 in the queue have to use
  other interpreters, and you expand the number of interpreters in use.
  But still, you'll wind up using the smallest number of interpreters
  required for the given load and timeslice.  As soon as those 1st and
  2nd perl interpreters finish their run, they go back at the beginning
  of the queue, and the 7th/ 8th or later requests can then use them, etc.
  Now you have a pool of maybe four interpreters, all being used on an MRU
  basis.  But it won't expand beyond that set unless your load goes up or
  your program's CPU time requirements increase beyond another timeslice.
  MRU will ensure that whatever the number of interpreters in use, it
  is the lowest possible, given the load, the CPU-time required by the
  program and the size of the timeslice.

You know, I had a brief look through some of the SpeedyCGI code yesterday,
and I think the MRU process selection might be a bit of a red herring. 
I think the real reason Speedy won the memory test is the way it spawns
processes.

If I understand what's going on in Apache's source, once every second it
has a look at the scoreboard and says "less than MinSpareServers are
idle, so I'll start more" or "more than MaxSpareServers are idle, so
I'll kill one".  It only kills one per second.  It starts by spawning
one, but the number spawned goes up exponentially each time it sees
there are still not enough idle servers, until it hits 32 per second. 
It's easy to see how this could result in spawning too many in response
to sudden load, and then taking a long time to clear out the unnecessary
ones.

In contrast, Speedy checks on every request to see if there are enough
backends running.  If there aren't, it spawns more until there are as
many backends as queued requests.  That means it never overshoots the
mark.

Going back to your example up above, if Apache actually controlled the
number of processes tightly enough to prevent building up idle servers,
it wouldn't really matter much how processes were selected.  If after
the 1st and 2nd interpreters finish their run they went to the end of
the queue instead of the beginning of it, that simply means they will
sit idle until called for instead of some other two processes sitting
idle until called for.  If the systems were both efficient enough about
spawning to only create as many interpreters as needed, none of them
would be sitting idle and memory usage would always be as low as
possible.

I don't know if I'm explaining this very well, but the gist of my theory
is that at any given time both systems will require an equal number of
in-use interpreters to do an equal amount of work, and the differentiator
between the two is Apache's relatively poor estimate of how many
processes should be available at any given time.  I think this theory
matches up nicely with the results of Sam's tests: when MaxClients
prevents Apache from spawning too many processes, both systems have
similar performance characteristics.

There are some knobs to twiddle in Apache's source if anyone is
interested in playing with it.  You can change the frequency of the
checks and the maximum number of servers spawned per check.  I don't
have much motivation to do this investigation myself, since I've already
tuned our MaxClients and process size constraints to prevent problems
with our application.

- Perrin



Re: Apache::Session::DB_File and open sessions

2001-01-19 Thread Todd Finney

Thanks to Perrin's suggestion (read: clue brick), things 
are much happier now.  Going around the problem is just as 
good as fixing it, I suppose.

I'm still curious about that behavior, though.

cheers,
Todd


At 04:22 AM 1/19/01, Perrin Harkins wrote:
Todd Finney wrote:
  The one-sentence version of my question is: Is there a
  problem with tying a session twice during two different
  HeaderParserHandlers, as long as you're doing the standard
  cleanup stuff (untie | make_modified) in each?

It seems like the answer should be no unless there's some kind of bug,
but I don't understand why you're doing it this way.  Why don't you just
put a reference to the %session hash in pnotes and use it in the second
handler, instead of putting the ID in and re-creating it?  That should
be considerably more efficient.
- Perrin




Using rewrite...

2001-01-19 Thread Tomas Edwardsson

Hi

I'm using rewrite to send each request to the relevant server; for
instance, if a filename ends with .pl I rewrite it to the Perl-enabled
Apache:

RewriteEngine On

# Perl Enabled.
RewriteRule ^/(.*\.ehtm)$ http://%{HTTP_HOST}:81/$1 [P]
RewriteRule ^/(.*\.pl)$ http://%{HTTP_HOST}:81/$1 [P]
# PHP Enabled
RewriteRule ^(.*\.php)$ http://%{HTTP_HOST}:83$1 [P]
# Everything else, images etc...
RewriteRule ^/(.*)$ http://%{HTTP_HOST}:82/$1 [P]

The problem is that I can't find a way to send the request
to a relevant port if the request calls for a URL which ends
with a slash ("/"). Any hints ?

- Tomas Edwardsson
- Mekkano ehf



Re: Using rewrite...

2001-01-19 Thread Matthew Byng-Maddick

On Fri, 19 Jan 2001, Tomas Edwardsson wrote:
 The problem is that I can't find a way to send the request
 to a relevant port if the request calls for a URL which ends
 with a slash ("/"). Any hints ?

RewriteCond and %{REQUEST_FILENAME} ?

This happens after the default URI Translation handler.

MBM

-- 
Matthew Byng-Maddick   Home: [EMAIL PROTECTED]  +44 20  8981 8633  (Home)
http://colondot.net/   Work: [EMAIL PROTECTED] +44 7956 613942  (Mobile)
Under  any conditions,  anywhere,  whatever you  are doing,  there is some
ordinance under which you can be booked.  -- Robert D. Sprecht, Rand Corp.




Re: Using rewrite...

2001-01-19 Thread Tomas Edwardsson

RewriteCond %{REQUEST_FILENAME} .*\.php$
RewriteRule ^(.*)$ http://%{HTTP_HOST}:83$1

I tested it like this and it doesn't seem to work; either
I'm misunderstanding RewriteCond or this method doesn't work.

- Tomas

On Fri, Jan 19, 2001 at 10:59:43AM +, Matthew Byng-Maddick wrote:
 On Fri, 19 Jan 2001, Tomas Edwardsson wrote:
  The problem is that I can't find a way to send the request
  to a relevant port if the request calls for a URL which ends
  with a slash ("/"). Any hints ?
 
 RewriteCond and %{REQUEST_FILENAME} ?
 
 This happens after the default URI Translation handler.
 
 MBM
 
 -- 
 Matthew Byng-Maddick   Home: [EMAIL PROTECTED]  +44 20  8981 8633  (Home)
 http://colondot.net/   Work: [EMAIL PROTECTED] +44 7956 613942  (Mobile)
 Under  any conditions,  anywhere,  whatever you  are doing,  there is some
 ordinance under which you can be booked.  -- Robert D. Sprecht, Rand Corp.



Re: Using rewrite...

2001-01-19 Thread Matthew Byng-Maddick

On Fri, 19 Jan 2001, Tomas Edwardsson wrote:
 RewriteCond %{REQUEST_FILENAME} .*\.php$
 RewriteRule ^(.*)$ http://%{HTTP_HOST}:83$1
 I tested it like this and it doesn't seem to work; either
 I'm misunderstanding RewriteCond or this method doesn't work.

What happens if you turn RewriteLog On and set RewriteLogLevel 9?

MBM

-- 
Matthew Byng-Maddick   Home: [EMAIL PROTECTED]  +44 20  8981 8633  (Home)
http://colondot.net/   Work: [EMAIL PROTECTED] +44 7956 613942  (Mobile)
Under  any conditions,  anywhere,  whatever you  are doing,  there is some
ordinance under which you can be booked.  -- Robert D. Sprecht, Rand Corp.




Re: Using rewrite...

2001-01-19 Thread Tomas Edwardsson

It doesn't seem to apply the values of the DirectoryIndex to the filenames.

DirectoryIndex index.php index.ehtm index.pl index.html

RewriteCond %{REQUEST_FILENAME} .*\.php$
RewriteRule ^(.*)$ http://%{HTTP_HOST}:83$1

rewrite.log:
194.144.154.45 - - [19/Jan/2001:11:40:23 +] 
[dave.mekkano.com/sid#816ad70][rid#81b8f88/initial] (4) RewriteCond: input='/' 
pattern='.*\.php$' = not-matched

RewriteCond %{SCRIPT_FILENAME} .*\.php$
RewriteRule ^(.*)$ http://%{HTTP_HOST}:83$1

rewrite.log:
194.144.154.45 - - [19/Jan/2001:11:40:28 +] 
[dave.mekkano.com/sid#816ad70][rid#81b8f88/initial] (4) RewriteCond: input='/' 
pattern='.*\.php$' = not-matched

- Tomas Edwardsson

On Fri, Jan 19, 2001 at 11:32:34AM +, Matthew Byng-Maddick wrote:
 On Fri, 19 Jan 2001, Tomas Edwardsson wrote:
  RewriteCond %{REQUEST_FILENAME} .*\.php$
  RewriteRule ^(.*)$ http://%{HTTP_HOST}:83$1
  I tested it like this and it doesn't seem to work; either
  I'm misunderstanding RewriteCond or this method doesn't work.
 
 What happens if you turn RewriteLog On and set RewriteLogLevel 9?
 
 MBM
 
 -- 
 Matthew Byng-Maddick   Home: [EMAIL PROTECTED]  +44 20  8981 8633  (Home)
 http://colondot.net/   Work: [EMAIL PROTECTED] +44 7956 613942  (Mobile)
 Under  any conditions,  anywhere,  whatever you  are doing,  there is some
 ordinance under which you can be booked.  -- Robert D. Sprecht, Rand Corp.



Re: Using rewrite...

2001-01-19 Thread Matthew Byng-Maddick

On Fri, 19 Jan 2001, Matthew Byng-Maddick wrote:
 On Fri, 19 Jan 2001, Tomas Edwardsson wrote:
  The problem is that I can't find a way to send the request
  to a relevant port if the request calls for a URL which ends
  with a slash ("/"). Any hints ?
 RewriteCond and %{REQUEST_FILENAME} ?
 This happens after the default URI Translation handler.

Oops. It appears that I was wrong: it doesn't work in my setup either.

MBM (needs to go read some source :)

-- 
Matthew Byng-Maddick   Home: [EMAIL PROTECTED]  +44 20  8981 8633  (Home)
http://colondot.net/   Work: [EMAIL PROTECTED] +44 7956 613942  (Mobile)
Under  any conditions,  anywhere,  whatever you  are doing,  there is some
ordinance under which you can be booked.  -- Robert D. Sprecht, Rand Corp.
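
[Editor's note on the trailing-slash problem in this thread: at rewrite time
mod_dir has not yet applied DirectoryIndex, so for a request of "/" the
value mod_rewrite sees is just "/".  One workaround is to map directory
requests to an explicit index file before the catch-all rule.  This is an
untested sketch, reusing the ports from Tomas's config; the index.pl
filename is an assumption.]

RewriteEngine On

# Directory requests (URL ends in "/"): send them to the dynamic server
# with the index file spelled out, since DirectoryIndex hasn't run yet.
RewriteRule ^/(.*/)?$      http://%{HTTP_HOST}:81/$1index.pl   [P]

# Perl enabled
RewriteRule ^/(.*\.ehtm)$  http://%{HTTP_HOST}:81/$1           [P]
RewriteRule ^/(.*\.pl)$    http://%{HTTP_HOST}:81/$1           [P]
# PHP enabled
RewriteRule ^/(.*\.php)$   http://%{HTTP_HOST}:83/$1           [P]
# Everything else, images etc.
RewriteRule ^/(.*)$        http://%{HTTP_HOST}:82/$1           [P]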




Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl withsc ripts that contain un-shared memory

2001-01-19 Thread Sam Horrocks

There's only one run queue in the kernel.  The first task ready to run is
put at the head of that queue, and anything arriving afterwards waits.  Only
if that first task blocks on a resource or takes a very long time, or
a higher priority process becomes able to run due to an interrupt is that
process taken out of the queue.
  
  Note that any I/O request that isn't completely handled by buffers will
  trigger the 'blocks on a resource' clause above, which means that
  jobs doing any real work will complete in an order determined by
  something other than the cpu and not strictly serialized.  Also, most
  of my web servers are dual-cpu so even cpu bound processes may
  complete out of order.

 I think it's much easier to visualize how MRU helps when you look at one
 thing running at a time.  And MRU works best when every process runs
 to completion instead of blocking, etc.  But even if the process gets
 timesliced, blocked, etc, MRU still degrades gracefully.  You'll get
 more processes in use, but still the numbers will remain small.

 Similarly, because of the non-deterministic nature of computer systems,
 Apache doesn't service requests on an LRU basis; you're comparing SpeedyCGI
 against a straw man. Apache's servicing algorithm approaches randomness, so
 you need to build a comparison between forced-MRU and random choice.
  
Apache httpd's are scheduled on an LRU basis.  This was discussed early
in this thread.  Apache uses a file-lock for its mutex around the accept
call, and file-locking is implemented in the kernel using a round-robin
(fair) selection in order to prevent starvation.  This results in
incoming requests being assigned to httpd's in an LRU fashion.
  
  But, if you are running a front/back end apache with a small number
  of spare servers configured on the back end there really won't be
  any idle perl processes during the busy times you care about.  That
  is, the  backends will all be running or apache will shut them down
  and there won't be any difference between MRU and LRU (the
  difference would be which idle process waits longer - if none are
  idle there is no difference).

 If you can tune it just right so you never run out of ram, then I think
 you could get the same performance as MRU on something like hello-world.

Once the httpd's get into the kernel's run queue, they finish in the
same order they were put there, unless they block on a resource, get
timesliced or are pre-empted by a higher priority process.
  
  Which means they don't finish in the same order if (a) you have
  more than one cpu, (b) they do any I/O (including delivering the
  output back which they all do), or (c) some of them run long enough
  to consume a timeslice.
  
Try it and see.  I'm sure you'll run more processes with speedycgi, but
you'll probably run a whole lot fewer perl interpreters and need less ram.
  
  Do you have a benchmark that does some real work (at least a dbm
  lookup) to compare against a front/back end mod_perl setup?

 No, but if you send me one, I'll run it.



RE: Fwd: [speedycgi] Speedycgi scales better than mod_perl withsc ripts that contain un-shared memory

2001-01-19 Thread Stephen Anderson



   This doesn't affect the argument, because the core of it is that:
   
   a) the CPU will not completely process a single task all at once;
      instead, it will divide its time _between_ the tasks
   b) tasks do not arrive at regular intervals
   c) tasks take varying amounts of time to complete
   
[snip]

  I won't agree with (a) unless you qualify it further - what do you claim
  is the method or policy for (a)?

I think this has been answered ... basically, resource conflicts (including
I/O), interrupts, long running tasks, higher priority tasks, and, of course,
the process yielding, can all cause the CPU to switch processes (which of
these qualify depends very much on the OS in question).

This is why, despite the efficiency of single-task running, you can usefully
run more than one process on a UNIX system. Otherwise, if you ran a single
Apache process and had no traffic, you couldn't run a shell at the same time
- Apache would consume practically all your CPU in its select() loop 8-)

  Apache httpd's are scheduled on an LRU basis.  This was discussed early
  in this thread.  Apache uses a file-lock for its mutex around the accept
  call, and file-locking is implemented in the kernel using a round-robin
  (fair) selection in order to prevent starvation.  This results in
  incoming requests being assigned to httpd's in an LRU fashion.

I'll apologise, and say, yes, of course you're right, but I do have a query:

There are (IIRC) 5 methods that Apache uses to serialize requests:
fcntl(), flock(), Sys V semaphores, uslock (IRIX only) and Pthreads
(reliably only on Solaris). Do they _all_ result in LRU?

  Remember that the httpd's in the speedycgi case will have very little
  un-shared memory, because they don't have perl interpreters in them.
  So the processes are fairly indistinguishable, and the LRU isn't as 
  big a penalty in that case.


Yes, _but_, interpreter for interpreter, won't the equivalent speedycgi
have roughly as much unshared memory as the mod_perl? I've had a lot of
(dumb) discussions with people who complain about the size of
Apache+mod_perl without realising that the interpreter code's all shared,
and with pre-loading a lot of the perl code can be too. While I _can_ see
speedycgi having an advantage (because it's got a much better overview of
what's happening, and can intelligently manage the situation), I don't think
it's as large as you're suggesting. I think this needs to be intensively
benchmarked to answer that.

  other interpreters, and you expand the number of interpreters in use.
  But still, you'll wind up using the smallest number of interpreters
  required for the given load and timeslice.  As soon as those 1st and
  2nd perl interpreters finish their run, they go back at the beginning
  of the queue, and the 7th/8th or later requests can then use them, etc.
  Now you have a pool of maybe four interpreters, all being used on an MRU
  basis.  But it won't expand beyond that set unless your load goes up or
  your program's CPU time requirements increase beyond another timeslice.
  MRU will ensure that whatever the number of interpreters in use, it
  is the lowest possible, given the load, the CPU-time required by the
  program and the size of the timeslice.

Yep...no arguments here. SpeedyCGI should result in fewer interpreters.


I will say that there are a lot of convincing reasons to follow the
SpeedyCGI model rather than the mod_perl model, but I've generally thought
that the increase in that kind of performance that can be obtained is
sufficiently minimal as to not warrant the extra layer... thoughts, anyone?

Stephen.



RE: Fwd: [speedycgi] Speedycgi scales better than mod_perl withsc ripts that contain un-shared memory

2001-01-19 Thread Matt Sergeant

There seems to be a lot of talk here, and analogies, and zero real-world
benchmarking.

Now it seems to me from reading this thread, that speedycgi would be
better where you run 1 script, or only a few scripts, and mod_perl might
win where you have a large application with hundreds of different URLs
with different code being executed on each. That may change with the next
release of speedy, but then lots of things will change with the next major
release of mod_perl too, so it's irrelevant until both are released.

And as well as that, speedy still suffers (IMHO) in that it still follows the
CGI scripting model, whereas mod_perl offers a much more flexible
environment and a feature-rich API (the Apache API). What's more, I could
never build something like AxKit in speedycgi without resorting to hacks
like mod_rewrite to hide nasty URLs. At least that's my conclusion from
first appearances.

Either way, both solutions have their merits. Neither is going to totally
replace the other.

What I'd really like to do though is sum up this thread in a short article
for take23. I'll see if I have time on Sunday to do it.

-- 
Matt/

Director and CTO, AxKit.com Ltd (http://axkit.org)
XML Application Serving: XSLT, XPathScript, XSP
Personal Web Site: http://sergeant.org/




Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl withsc ripts that contain un-shared memory

2001-01-19 Thread Sam Horrocks

  You know, I had a brief look through some of the SpeedyCGI code yesterday,
  and I think the MRU process selection might be a bit of a red herring. 
  I think the real reason Speedy won the memory test is the way it spawns
  processes.

 Please take a look at that code again.  There's no smoke and mirrors,
 no red-herrings.  Also, I don't look at the benchmarks as "winning" - I
 am not trying to start a mod_perl vs speedy battle here.  Gunther wanted
 to know if there were "real benchmarks", so I reluctantly put them up.

 Here's how SpeedyCGI works (this is from version 2.02 of the code):

When the frontend starts, it tries to quickly grab a backend from
the front of the be_wait queue, which is a LIFO.  This is in
speedy_frontend.c, get_a_backend() function.

If there aren't any idle be's, it puts itself onto the fe_wait queue.
Same file, get_a_backend_hard().

If this fe (frontend) is at the front of the fe_wait queue, it
"takes charge" and starts looking to see if a backend needs to be
spawned.  This is part of the "frontend_ping()" function.  It will
only spawn a be if no other backends are being spawned, so only
one backend gets spawned at a time.

Every frontend in the queue drops into a sigsuspend and waits for an
alarm signal.  The alarm is set for 1-second.  This is also in
get_a_backend_hard().

When a backend is ready to handle code, it goes and looks at the fe_wait
queue and if there are fe's there, it sends a SIGALRM to the one at
 the front, and sets the sent_sig flag for that fe.  This is done in
speedy_group.c, speedy_group_sendsigs().

When a frontend wakes on an alarm (either due to a timeout, or due to
a be waking it up), it looks at its sent_sig flag to see if it can now
grab a be from the queue.  If so it does that.  If not, it runs various
checks then goes back to sleep.

 In most cases, you should get a be from the lifo right at the beginning
 in the get_a_backend() function.  Unless there aren't enough be's running,
 or something is killing them (bad perl code), or you've set the
 MaxBackends option to limit the number of be's.


  If I understand what's going on in Apache's source, once every second it
  has a look at the scoreboard and says "less than MinSpareServers are
  idle, so I'll start more" or "more than MaxSpareServers are idle, so
  I'll kill one".  It only kills one per second.  It starts by spawning
  one, but the number spawned goes up exponentially each time it sees
  there are still not enough idle servers, until it hits 32 per second. 
  It's easy to see how this could result in spawning too many in response
  to sudden load, and then taking a long time to clear out the unnecessary
  ones.
  
  In contrast, Speedy checks on every request to see if there are enough
  backends running.  If there aren't, it spawns more until there are as
  many backends as queued requests.
 
 Speedy does not check on every request to see if there are enough
 backends running.  In most cases, the only thing the frontend does is
 grab an idle backend from the lifo.  Only if there are none available
 does it start to worry about how many are running, etc.

  That means it never overshoots the mark.

 You're correct that speedy does try not to overshoot, but mainly
 because there's no point in overshooting - it just wastes swap space.
 But that's not the heart of the mechanism.  There truly is a LIFO
 involved.  Please read that code again, or run some tests.  Speedy
 could overshoot by far, and the worst that would happen is that you
 would get a lot of idle backends sitting in virtual memory, which the
 kernel would page out, and then at some point they'll time out and die.
 Unless of course the load increases to a point where they're needed,
 in which case they would get used.

 If you have speedy installed, you can manually start backends yourself
 and test.  Just run "speedy_backend script.pl &" to start a backend.
 If you start lots of those on a script that says 'print "$$\n"', then
 run the frontend on the same script, you will still see the same pid
 over and over.  This is the LIFO in action, reusing the same process
 over and over.
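
[Editor's sketch of the manual test Sam describes; the script name is
arbitrary, and the shell commands are shown as comments.]

# pid.pl: start a few idle backends by hand, repeating this a few times:
#     speedy_backend pid.pl &
# then request pid.pl through the frontend over and over; with the LIFO in
# effect you should keep seeing the same pid printed.
print "$$\n";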

  Going back to your example up above, if Apache actually controlled the
  number of processes tightly enough to prevent building up idle servers,
  it wouldn't really matter much how processes were selected.  If after
  the 1st and 2nd interpreters finish their run they went to the end of
  the queue instead of the beginning of it, that simply means they will
  sit idle until called for instead of some other two processes sitting
  idle until called for.  If the systems were both efficient enough about
  spawning to only create as many interpreters as needed, none of them
  would be sitting idle and memory usage would always be as low as
  possible.
  
  I don't know if I'm explaining this very well, but the gist of my theory
  is that at any given time both 

httpd keeps crashing overnight

2001-01-19 Thread Philip Mak

Hello,

On my machine, I am running two instances of Apache. They both use the
same executable, but different config files; one has "AddModule
mod_perl.c" and the other one doesn't.

I used to run only one instance of Apache, mod_perl-enabled, with the same
executable as I have now. Back then, it was stable.

My problem is that the mod_perl httpd is sometimes crashing overnight. In
the last three days, it has mysteriously crashed twice. When I restart it
with "apachectl_modperl start" (apachectl_modperl is just apachectl but
with the config file path set differently), it comes up with no problem,
but I suppose it might crash again in the future.

Examination of the error_log after I restart it only shows:

[Fri Jan 19 04:44:27 2001] [error] [asp] [4941] [WARN] redefinition of
subroutine Apache::ASP::Compiles::_tmp_global_asa::display_footer at
Apache::ASP::Compiles::_tmp_global_asa::_home_sakura_linazel_index_ssixINL,
originally defined at
Apache::ASP::Compiles::_tmp_global_asa::_home_sakura_linazel_reasons_aspxINL
[Fri Jan 19 08:35:08 2001] [warn] pid file
/usr/local/apache/logs/httpd_modperl.pid overwritten -- Unclean shutdown
of previous Apache run?

which doesn't give me any idea why it crashed. I also tried doing a `find
/ -name "core"` but did not find any core files in a directory that seems
to be related to Apache.

Running "uptime" shows that the server has been up all along, and the
non-mod_perl enabled Apache is running fine.

Does anyone know how I can go about tracking the cause of the crash?

Thanks,

-Philip Mak ([EMAIL PROTECTED])




Re: httpd keeps crashing overnight

2001-01-19 Thread George Sanderson

Hi George,

Thanks for the reply...

 My problem is that the mod_perl httpd is sometimes crashing overnight. In
 the last three days, it has mysteriously crashed twice. When I restart it
 with "apachectl_modperl start" (apachectl_modperl is just apachectl but
 with the config file path set differently), it comes up with no problem,
 but I suppose it might crash again in the future.
 
 How long had it been running OK before you started having problems?
 Did something change just before the problem started occurring?

Previously, I only had one mod_perl httpd running on the system. I split
it into a non-mod_perl httpd and a mod_perl httpd because the system was
running out of memory.

This change happened 4 days ago. Before that, I did not have this crashing 
problem.

 What ports are you using for your httpd that does and does not have the
 problem?

Both httpds listen on port 80. The mod_perl enabled one listens on
216.74.79.145:80 and 216.74.79.194:80, while the non-mod_perl enabled one
listens on port 80 of all other IP addresses on the machine.

 Is there any indications in the access_log at about the time of the crash?

203.177.3.11 - - [19/Jan/2001:05:05:21 -0500] "GET /anime/seraphimcall/
HTTP/1.1" 200 3256 "http://www.animelyrics.com/anime/_S" "Mozilla/4.0
(compatible; MSIE 5.0; Windows 98; DigExt)"
207.35.188.14 - - [19/Jan/2001:08:40:20 -0500] "GET / HTTP/1.0" 200 6205
"-" "Mozilla/4.7 [en]C-CCK-MCD  (WinNT; U)"

Those are the two log entries for animelyrics.com before and after the
crash; I don't see anything unusual. I also looked at slayers.aaanime.net:

24.67.224.12 - - [19/Jan/2001:05:03:24 -0500] "GET
/~linazel/fanfics/fanfic.asp?fanfic=nobility&part=10 HTTP/1.0" 200 15547
"http://slayers.aaanime.net/~linazel/fanfics/fanfic.asp?fanfic=nobility&part=9"
"Mozilla/4.0 (compatible; MSIE 5.0; Windows 98; DigExt; AtHome0107;
sureseeker.com; Hotbar 2.0)"
172.133.91.48 - - [19/Jan/2001:08:35:32 -0500] "GET 
/~linazel/banners/ybanner.gif HTTP/1.1" 200 20544 "-" "Opera/5.02 (Windows
98;
U)  [en]"

I don't see anything out of the ordinary here, either.

 Perhaps you could run a cron job to scan the processes in order to narrow down
 the exact time of the problem.

What would I be looking for?
Is there any indication of a burst load (or a similar pattern) just before
the crash?
Is there a back-end database involved?

It took about 4 hours before the httpd process was restarted.
It would be nice to know how long after the last request the httpd root
process crashed.  If a cron job ran once a minute to scan for the httpd
root process and report when it disappears, it might be a clue as to the
nature of the problem.  In the report you might want to include information
about the last 10 minutes (the last 10 scans, from temp files 1 through 10) of
all the httpd processes running, via `ps -gaux | grep httpd`.  It would be
interesting to know how many httpd processes were running and also what
`vmstat` had to say before and after the crash.

Often when trying to solve an intermittent problem, it helps if you can
duplicate the problem at will.  The information obtained about the problem
should help you to achieve this.

For example, the access_log indicates that the last browser accesses before
the problem were both:
"Mozilla/4.0 (compatible; MSIE 5.0; Windows 98; DigExt;".
However, it is difficult to tell if this is just a coincidence or not until
a pattern can be established.
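
[Editor's sketch of the once-a-minute watchdog George describes (not code
from the thread).  The pid-file path matches the error_log excerpt above;
the log location is made up, and the cron job needs enough privilege to
probe the root httpd with kill 0.]

#!/usr/bin/perl -w
use strict;

my $pidfile = '/usr/local/apache/logs/httpd_modperl.pid';
my $logfile = '/var/tmp/httpd_modperl_watch.log';

open(PID, "< $pidfile") or exit 0;     # no pid file yet: nothing to watch
chomp(my $pid = <PID>);
close PID;

exit 0 if $pid && kill(0, $pid);       # root httpd still there, all is well

open(LOG, ">> $logfile") or die "can't append to $logfile: $!";
print LOG scalar(localtime), ": httpd root pid $pid is gone\n";
print LOG `ps -gaux | grep httpd`;     # which children (if any) are left
print LOG `vmstat`;                    # memory/swap state at that moment
close LOG;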






Re: Upgrading mod_perl on production machine (again)

2001-01-19 Thread rolf van widenfelt


face it, you are trying to perform surgery on a live subject...

with all the Makefiles you'll be making (httpd, modperl, perl...), you're bound
to slip on one of them and install over some of your existing stuff.

I went through a conflict like this once, and avoided it by simply getting
a second machine, and installing all the new stuff there.

but, if someone can offer a procedure for setting up two independent httpd+modperl+perl
environments on one machine it would be pretty interesting!
(sorry, if this was already outlined in the responses last Sept)

-rolf

Bill Moseley [EMAIL PROTECTED] writes:
 
 This is a revisit of a question last September where I asked about
 upgrading mod_perl and Perl on a busy machine.
 
 IIRC, Greg, Stas, and Perrin offered suggestions such as installing from
 RPMs or tarballs, and using symlinks.  The RPM/tarball option worries me a
 bit, since if I do forget a file, then I'll be down for a while, plus I
 don't have another machine of the same type where I can create the tarball.
  Sym-linking works great for moving my test application into live action,
 but it seems trickier to do this with the entire Perl tree.
 
 Here's the problem: this client only has this one machine, yet I need to
 setup a test copy of the application on the same machine running on a
 different port for the client and myself to test.  And I'd like to know
 that when the test code gets moved live, that all the exact same code is
 running (modules and all).



Re: httpd keeps crashing overnight

2001-01-19 Thread Joshua Chamas

Philip Mak wrote:
 
 Examination of the error_log after I restart it only shows:
 
 [Fri Jan 19 04:44:27 2001] [error] [asp] [4941] [WARN] redefinition of
 subroutine Apache::ASP::Compiles::_tmp_global_asa::display_footer at
 Apache::ASP::Compiles::_tmp_global_asa::_home_sakura_linazel_index_ssixINL,
 originally defined at
 Apache::ASP::Compiles::_tmp_global_asa::_home_sakura_linazel_reasons_aspxINL

This has nothing to do with crashing.

This warning says that you have a sub display_footer {} definition
in both index.ssi and reasons.asp, one of which overrides the other
depending on the order of compilation.  The real fix, I think, is to
not have subs defined in ASP scripts, and to move them to a real
perl package, or to global.asa, which serves as a package in the same
namespace as your scripts.

Another fix is to have UniquePackages turned on so each script
has its own package namespace, but then scripts can't share perl 
globals with each other and global.asa, making some things harder.

--Josh

_
Joshua Chamas   Chamas Enterprises Inc.
NodeWorks  free web link monitoring   Huntington Beach, CA  USA 
http://www.nodeworks.com1-714-625-4051
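
[Editor's illustration of the first fix Joshua mentions (not his code); the
sub body and the included filename are placeholders.]

# global.asa: compiled into the same package as the scripts, so a sub
# defined here (or in a real module) is visible to index.ssi, reasons.asp,
# and friends, and there is only one definition of it.
sub display_footer {
    # $Response is the usual Apache::ASP response object
    $Response->Include('footer.inc');
}

# The scripts then call it instead of each defining their own copy:
#   <% display_footer(); %>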



Re: Upgrading mod_perl on production machine (again)

2001-01-19 Thread Marc Spitzer

Make two directories:
/opt/local1
and
/opt/local2

Do an original install in /opt/local1: perl, httpd, mod_perl, whatever packages
you need, etc.  When it is time to upgrade, do a new install in /opt/local2 of
what you need, and run the httpd on an off port, e.g. port 8765, until you get
the new stuff working correctly.  Turn off the old httpd and move the new
httpd to port 80; now you have achieved an upgrade with a clean namespace.
When you are sure that everything is working, back up /opt/local1 and delete
all the files in it.  For the next upgrade, use /opt/local1 as the
build area.

marc
- Original Message -
From: "rolf van widenfelt" [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: "Bill Moseley" [EMAIL PROTECTED]
Sent: Friday, January 19, 2001 3:09 PM
Subject: Re: Upgrading mod_perl on production machine (again)



 face it, you are trying to perform surgery on a live subject...

 with all the Makefiles you'll be making, (httpd, modperl, perl...) you're
bound
 to slip
 on one of them and install over some of your existing stuff.

 i went thru a conflict like this once, and avoided it by simply getting
 a second machine, and installing all the new stuff there.

 but, if someone can offer a procedure for setting up two independent
httpd+modperl+perl
 environments on one machine it would be pretty interesting!
 (sorry, if this was already outlined in the responses last Sept)

 -rolf

 Bill Moseley [EMAIL PROTECTED] writes:
 
  This is a revisit of a question last September where I asked about
  upgrading mod_perl and Perl on a busy machine.
 
  IIRC, Greg, Stas, and Perrin offered suggestions such as installing from
  RPMs or tarballs, and using symlinks.  The RPM/tarball option worries me
a
  bit, since if I do forget a file, then I'll be down for a while, plus I
  don't have another machine of the same type where I can create the
tarball.
   Sym-linking works great for moving my test application into live
action,
  but it seems trickier to do this with the entire Perl tree.
 
  Here's the problem: this client only has this one machine, yet I need to
  setup a test copy of the application on the same machine running on a
  different port for the client and myself to test.  And I'd like to know
  that when the test code gets moved live, that all the exact same code is
  running (modules and all).





ap_configtestonly

2001-01-19 Thread Rodney Tamblyn

Hi everyone,

I am new to mod perl.

Just attempting to install on an mklinux box (DR3).  I am having some problems.

If anyone has successfully installed it and is prepared to give me some pointers or 
suggestions on how to get this going, please email me off-list.

Here's the error I get during make:

in function 'perl_startup':
mod_perl.c:738: 'ap_configtestonly' undeclared (first use in this function)
(Each undeclared identifier is reported only once for each function it appears in.)



-- 
--
Rodney Tamblyn
Educational Media group
Higher Education Development Centre, 75 Union Place
University of Otago, PO Box 56, Dunedin, New Zealand
ph +64 3 479 7580 Fax +64 3 479 8362
http://hedc.otago.ac.nz ~ http://rodney.weblogs.com




Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl withscripts that contain un-shared memory

2001-01-19 Thread Perrin Harkins

On Fri, 19 Jan 2001, Sam Horrocks wrote:
   You know, I had a brief look through some of the SpeedyCGI code yesterday,
   and I think the MRU process selection might be a bit of a red herring. 
   I think the real reason Speedy won the memory test is the way it spawns
   processes.
 
  Please take a look at that code again.  There's no smoke and mirrors,
  no red-herrings.

I didn't mean that MRU isn't really happening, just that it isn't the
reason why Speedy is running fewer interpreters.

  Also, I don't look at the benchmarks as "winning" - I
  am not trying to start a mod_perl vs speedy battle here.

Okay, but let's not be so polite that we don't acknowledge
when someone is onto a better way of doing things.  Stealing good ideas
from other projects is a time-honored open source tradition.

  Speedy does not check on every request to see if there are enough
  backends running.  In most cases, the only thing the frontend does is
  grab an idle backend from the lifo.  Only if there are none available
  does it start to worry about how many are running, etc.

Sorry, I had a lot of the details wrong about what Speedy is doing.
However, it still sounds like it has a more efficient approach than
Apache in terms of managing process spawning.

  You're correct that speedy does try not to overshoot, but mainly
  because there's no point in overshooting - it just wastes swap space.
  But that's not the heart of the mechanism.  There truly is a LIFO
  involved.  Please read that code again, or run some tests.  Speedy
  could overshoot by far, and the worst that would happen is that you
  would get a lot of idle backends sitting in virtual memory, which the
  kernel would page out, and then at some point they'll time out and die.

When you spawn a new process it starts out in real memory, doesn't
it?  Spawning too many could use up all the physical RAM and send a box
into swap, at least until it managed to page out the idle
processes.  That's what I think happened to mod_perl in this test.

  If you start lots of those on a script that says 'print "$$\n"', then
  run the frontend on the same script, you will still see the same pid
  over and over.  This is the LIFO in action, reusing the same process
  over and over.

Right, but I don't think that explains why fewer processes are running.  
Suppose you start 10 processes, and then send in one request at a time,
and that request takes one time slice to complete.  If MRU works
perfectly, you'll get process 1 over and over again handling the requests.  
LRU will use process 1, then 2, then 3, etc.  But both of them have 9
processes idle and one in use at any given time.  The 9 idle ones should
either be killed off, or ideally never have been spawned in the first
place.  I think Speedy does a better job of preventing unnecessary process
spawning.
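
[Editor's note: a toy script, not SpeedyCGI or Apache code, illustrating the
selection-order point above.  With one request at a time, a LIFO (MRU) pool
keeps reusing the same worker while a FIFO (LRU) pool rotates through all of
them; either way nine of the ten workers are idle.]

#!/usr/bin/perl -w
use strict;

for my $policy ('MRU (LIFO)', 'LRU (FIFO)') {
    my @idle = (1 .. 10);              # ten idle worker "pids"
    my @served;
    for my $request (1 .. 5) {         # five requests, one at a time
        my $pid = $policy =~ /MRU/ ? pop @idle : shift @idle;
        push @served, $pid;            # the worker handles the request...
        push @idle, $pid;              # ...then goes back into the pool
    }
    print "$policy serves with pids: @served\n";
}
# prints: MRU (LIFO) serves with pids: 10 10 10 10 10
#         LRU (FIFO) serves with pids: 1 2 3 4 5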

One alternative theory is that keeping the same process busy instead of
rotating through all 10 means that the OS can page out the other 9 and
thus use less physical RAM.

Anyway, I feel like we've been putting you on the spot, and I don't want
you to feel obligated to respond personally to all the messages on this
thread.  I'm only still talking about it because it's interesting and I've
learned a couple of things about Linux and Apache from it.  If I get the
chance this weekend, I'll try some tests of my own.

- Perrin




Compiling mod_perl 1.24 with the Sun Solaris C Compiler

2001-01-19 Thread Matisse Enzer

I'm having trouble getting mod_perl 1.24 to compile using the Solaris compiler
(Version is: Sun WorkShop 6 2000/04/07 C 5.1)

Any ideas?

For various reasons I'm using the Apache 1.3.12 source tree, and I can compile
fine using gcc, but the Solaris compiler complains:


/opt/SUNWspro/WS6/bin/cc -c  -I../../os/unix -I../../include 
-DSOLARIS2=270 -DUSE_EXPAT -I../../lib/expat-lite 
`/export/home/matisse/devel/apache/apache_1.3.12/src/apaci` 
-I/usr/local/include 
-I/usr/local/lib/perl5/5.00503/sun4-solaris/CORE  -I. -I../.. 
-DUSE_PERL_SSI -DMOD_PERL -KPIC -DSHARED_MODULE mod_include.c && mv 
mod_include.o mod_include.lo
"/usr/local/lib/perl5/5.00503/sun4-solaris/CORE/iperlsys.h", line 
319: formal parameter lacks name: param #1
"/usr/local/lib/perl5/5.00503/sun4-solaris/CORE/iperlsys.h", line 
319: formal parameter lacks name: param #2
"/usr/local/lib/perl5/5.00503/sun4-solaris/CORE/iperlsys.h", line 
319: formal parameter lacks name: param #3
"/usr/local/lib/perl5/5.00503/sun4-solaris/CORE/iperlsys.h", line 
319: syntax error before or at: __attribute__
"/usr/local/lib/perl5/5.00503/sun4-solaris/CORE/iperlsys.h", line 
319: warning: syntax error:  empty declaration
"/usr/include/ctype.h", line 48: cannot recover from previous errors
cc: acomp failed for mod_include.c



PS: I need to use this compiler because I want to compile in another 
module, mod_curl.c (StoryServer),
which requires the Solaris compiler.


-- 
---
Matisse Enzer
TechTv Web Engineering
[EMAIL PROTECTED]
415-355-4364 (desk)
415-225-6703 (cellphone)