Re: Modperl/Apache deficiencies... Memory usage.

2000-04-16 Thread shane

Perrin-
On Sat, Apr 15, 2000 at 11:33:15AM -0700, Perrin Harkins wrote:
  Each process of apache has
  its registry, which holds the compiled perl scripts... a copy of
  each for each process.  This has become an issue for one of the
  companies that I work for, and I noted from monitoring the list that
  some people have apache processes that are upwards of 25Megs, which is
  frankly ridiculous.
 
 I have processes that large, but more than 50% of that is shared through
 copy-on-write.
 
  I wrote a very small perl engine
  for phhttpd that worked within its threaded paradigm, sucked up a
  negligible amount of memory, and used a very basic version of
  Apache's registry.
 
 Can you explain how this uses less memory than mod_perl doing the same
 thing?  Was it just that you were using fewer perl interpreters?  If so, you
 need to improve your use of apache with a multi-server setup.  The only way
 I could see phttpd really using less memory to do the same work is if you
 somehow managed to get perl to share more of its internals in memory.  Did
 you?

Yep, very handily I might add ;-).  Basically phhttpd is not process
based, it's thread based, which means that everything is running
inside of the same address space, which means 100% sharing except for
the current local stack of variables... which is very minimal.  In
terms of the perl thing... when you look at your processes and see all
that non-shared memory, most of that is stack variables.  Now most
webservers are running on single processor machines, so they get no
benefit from having 10s or even 100s of copies of these perl stack
variables.  It's much more efficient to have a single process handle
all the perl requests.  On a multiprocessor box that single process
could have multiple threads in order to take advantage of the
processors.  See..., mod_perl stores the stack state of every script
it runs in the apache process... for every script... copies of it,
many, many copies of it.  This is not efficient.  What would be
efficient is to have as many threads/processes as you have processors
for the mod_perl engine.  In other words, separate the engine from the
apache process so that unnecessary stack variables are never being
tracked.

Hmm... can I explain this better?  Let me try.  Okay, for every apache
process there is an entire perl engine, with all the stack variables
for every script you run recorded there.  What I'm proposing is a
system whereby there would be a separate process that would have only
a perl engine in it... you would make as many of these processes as
you have processors.  (Or multithread them... it doesn't really
matter.)  Now your apache processes would not have a bunch of junk
memory in them.  Your apache processes would be the size of a stock
apache process, like 4-6MB or so, and you would have one process of
25MB or so that would have all your registry in it.  For a
high-capacity box this would be an incredible boon to increasing
capacity.  (I'm trying to explain clearly, but I'd be the first to
admit this isn't one of my strong points.)
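
To make that concrete, here's a minimal sketch of such a detached
"perl engine" process.  The socket path and the
one-script-path-per-connection protocol are invented for illustration;
this is not phhttpd's or mod_perl's actual interface:

  #!/usr/bin/perl
  # engine.pl -- one resident process holding the compiled "registry"
  use strict;
  use Socket qw(SOCK_STREAM);
  use IO::Socket::UNIX;

  my $sock_path = '/tmp/perl-engine.sock';      # assumed rendezvous point
  unlink $sock_path;
  my $listener = IO::Socket::UNIX->new(
      Type   => SOCK_STREAM,
      Local  => $sock_path,
      Listen => 5,
  ) or die "listen on $sock_path: $!";

  my %registry;   # script path => compiled code ref; one copy, ever

  while (my $client = $listener->accept) {
      chomp(my $script = <$client>);            # front end sends a path
      unless ($registry{$script}) {             # compile once, on first use
          if (open my $fh, '<', $script) {
              local $/;
              my $body = <$fh>;
              $registry{$script} = eval "sub { $body }";
          }
      }
      if (my $handler = $registry{$script}) {
          local *STDOUT = $client;              # script output -> client
          $handler->();
      }
      close $client;
  }

The front end (apache or anything else) would open the socket, write a
script path, and stream the response back to the browser; the stack and
symbols for every compiled script live in this one process only.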

As to how the multithreaded phhttpd can handle tons of load, well...
that's a separate issue and frankly a question much better handled by
Zach.  I understand it very well, but I don't feel that I could
adequately explain it.  It's based on real-time sigqueue software
technology... for a "decent" reference on this you can take a look at
the O'Reilly book "POSIX.4: Programming for the Real World".  I
should say that this book doesn't go into enough depth... but it's the
only book that goes into any depth that I could find.

 
  What I'm
  thinking is essentially we take the perl engine, which has the apache
  registry and all the perl symbols etc., and separate it into its own
  process, which could be multithreaded (via pthreads) for multiple
  processor boxes.  (Above 2 processors this would probably be
  beneficial.)  On the front side the apache module API would just
  connect into this other process via shared memory pages (shmget et
  al.), or Unix pipes or something like that.
 
 This is how FastCGI, and all the Java servlet runners (JServ, Resin, etc.)
 work.  The thing is, even if you run the perl interpreters in a
 multi-threaded process, it still needs one interpreter per perl thread and I
 don't know how much you'd be able to share between them.  It might not be
 any smaller at all.

But there is no need to have more than one perl thread per processor.
Right now we have a perl "thread" (er... engine is a better term) per
process.  Since most boxes start up 10 or so Apache processes, we'd
be talking about a memory savings something like this:

  6MB  stock apache process
  25MB (we'll say that's average) mod_perl apache process, 50% shared,
       leaving 12.5MB non-shared

  The way it works now:
  12.5MB * 10 = 125MB, + 12.5MB (the shared bit, one instance)
  = 137.5MB total

  Suggested way:
  6MB stock with about 3MB shared or so:
  3MB * 10 = 30MB, + 25MB mod_perl process = 55MB total

That 

Re: Modperl/Apache deficiencies... Memory usage.

2000-04-16 Thread Stas Bekman

On Sat, 15 Apr 2000 [EMAIL PROTECTED] wrote:

   I wrote a very small perl engine
   for phhttpd that worked within its threaded paradigm, sucked up a
   negligible amount of memory, and used a very basic version of
   Apache's registry.
  
  Can you explain how this uses less memory than mod_perl doing the same
  thing?  Was it just that you were using fewer perl interpreters?  If so, you
  need to improve your use of apache with a multi-server setup.  The only way
  I could see phttpd really using less memory to do the same work is if you
  somehow managed to get perl to share more of its internals in memory.  Did
  you?
 
 Yep, very handily I might add ;-).  Basically phhttpd is not process
 based, it's thread based, which means that everything is running
 inside of the same address space, which means 100% sharing except for
 the current local stack of variables... which is very minimal.  In
 terms of the perl thing... when you look at your processes and see all
 that non-shared memory, most of that is stack variables.  Now most
 webservers are running on single processor machines, so they get no
 benefit from having 10s or even 100s of copies of these perl stack
 variables.  It's much more efficient to have a single process handle
 all the perl requests.  On a multiprocessor box that single process
 could have multiple threads in order to take advantage of the
 processors.  See..., mod_perl stores the stack state of every script
 it runs in the apache process... for every script... copies of it,
 many, many copies of it.  This is not efficient.  What would be
 efficient is to have as many threads/processes as you have processors
 for the mod_perl engine.  In other words, separate the engine from the
 apache process so that unnecessary stack variables are never being
 tracked.

I'm not sure you are right in claiming that the best performance will be
achieved when you have a single process/thread per given processor. This
would be true *only* if the nature of your code were CPU bound.
Unfortunately there are various IO operations and communications with
other components, like RDBMS engines, which in turn have their own IO.
Given that your CPU is idle while an IO operation is in progress, you
could use the CPU for processing another request at that time.

Hmm, that's the whole point of a multi-process OS. Unless I
misunderstood your suggestion, what you are offering is a kind of DOS-like
OS where there is only one process that occupies the CPU at any given time.
(Well, assuming that the rest of the OS's essential processes are running
somewhere too, in a multi-process environment.)

__
Stas Bekman | JAm_pH--Just Another mod_perl Hacker
http://stason.org/  | mod_perl Guide  http://perl.apache.org/guide 
mailto:[EMAIL PROTECTED]  | http://perl.orghttp://stason.org/TULARC/
http://singlesheaven.com| http://perlmonth.com http://sourcegarden.org
--




Re: Modperl/Apache deficiencies... Memory usage.

2000-04-16 Thread Gunther Birznieks

I think I may be a bit dense on this list so forgive me if I try to clarify 
(at least for myself to make sure I have this right)...

I think what you are proposing is not that much different from the proxy 
front-end model. The mod_proxy is added overhead, but it solves your 
memory problem. You can have 50 apache processes on the front end dealing 
with images and the like, and then have only 2 or 5 or however many 
Apache/Perl processes on the back end.

The only inefficiency with this is that HTTP is the protocol used for 
the front-end HTTPD daemon to communicate with Perl, instead of a direct 
socket using a binary/compressed data protocol.
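
In Apache 1.3 terms the model is something like this (hostname, port
and paths are examples, not anything from this thread); the slim front
end serves the images itself and relays only the dynamic URLs:

  # front-end httpd.conf
  ProxyPass        /perl/ http://localhost:8080/perl/
  ProxyPassReverse /perl/ http://localhost:8080/perl/

  # or, equivalently, with mod_rewrite:
  RewriteEngine On
  RewriteRule   ^/perl/(.*)$ http://localhost:8080/perl/$1 [P]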

By the way, if you really prefer this out-of-process yet still pooled 
Perl interpreters model, you could always consider purchasing Binary 
Evolution's Velocigen product for Netscape for UNIX. I believe they have a 
mode that allows the Perl engine to run out-of-process with a lightweight 
NSAPI wrapper talking to Perl.

It turns out that this is probably the best way to deal with a buggy 
product like Netscape anyway... NSAPI is such a flaky beast that it's no 
wonder a company would want to separate the application processes out 
(but now I am getting off topic).

It's likely that this is a faster solution than the mod_proxy solution 
mod_perl uses, because mod_proxy and HTTP are both relatively complex and 
designed to do more than provide back-end application server communications.

Here's the relevant Velocigen URL:

http://www.binaryevolution.com/velocigen/arch.vet

However, I would caution that mod_perl speeds things up SO much as 
it is that this architectural improvement over using front-end/back-end 
apache servers is really probably not going to make that big a deal unless 
you are writing something that will be under really, really heavy 
stress. And, of course, you should do your own benchmarking to see if this 
is the case.

While you are at it, you might consider PerlEx from ActiveState, as that 
provides in-process, thread-pooled Perl engines that run in the IIS memory 
space.

But again, I would stress that speed isn't the only thing. Think about 
reliability. I think the mod_perl model tends to be more reliable (in the 
front/back-end scenario) because the apache servers can be monitored and 
killed off independently when they spin out of control... and they can't 
pollute each other's memory space.  Using some mod_rewrite rules, you can 
also very easily control which applications are partitioned from each 
other into which back-end servers.

I don't know how easily you can specify what I would term 
application-affinities in the Velocigen or PerlEx model based on URL alone.

Anyway, good luck with your search for information...

Thanks,
 Gunther

At 10:46 PM 4/15/00 +, [EMAIL PROTECTED] wrote:
Perrin-
On Sat, Apr 15, 2000 at 11:33:15AM -0700, Perrin Harkins wrote:
   Each process of apache has
   its registry, which holds the compiled perl scripts... a copy of
   each for each process.  This has become an issue for one of the
   companies that I work for, and I noted from monitoring the list that
   some people have apache processes that are upwards of 25Megs, which is
   frankly ridiculous.
 
  I have processes that large, but more than 50% of that is shared through
  copy-on-write.
 
   I wrote a very small perl engine
   for phhttpd that worked within its threaded paradigm, sucked up a
   negligible amount of memory, and used a very basic version of
   Apache's registry.
 
  Can you explain how this uses less memory than mod_perl doing the same
  thing?  Was it just that you were using fewer perl interpreters?  If so, you
  need to improve your use of apache with a multi-server setup.  The only way
  I could see phttpd really using less memory to do the same work is if you
  somehow managed to get perl to share more of its internals in memory.  Did
  you?

Yep, very handily I might add ;-).  Basically phhttpd is not process
based, it's thread based, which means that everything is running
inside of the same address space, which means 100% sharing except for
the current local stack of variables... which is very minimal.  In
terms of the perl thing... when you look at your processes and see all
that non-shared memory, most of that is stack variables.  Now most
webservers are running on single processor machines, so they get no
benefit from having 10s or even 100s of copies of these perl stack
variables.  It's much more efficient to have a single process handle
all the perl requests.  On a multiprocessor box that single process
could have multiple threads in order to take advantage of the
processors.  See..., mod_perl stores the stack state of every script
it runs in the apache process... for every script... copies of it,
many, many copies of it.  This is not efficient.  What would be
efficient is to have as many threads/processes as you have processors
for the mod_perl engine.  In other words 

Re: Modperl/Apache deficiencies... Memory usage.y

2000-04-16 Thread shane

On Sat, Apr 15, 2000 at 01:39:38PM -0500, Leslie Mikesell wrote:
 According to [EMAIL PROTECTED]:
 
  Does anyone know of any program which has been developed like this?
  Basically we'd be turning the "module of apache" portion of mod_perl
  into a front end to the "application server" portion of mod_perl that
  would do the actual processing.
 
 This is basically what you get with the 'two-apache' mode.

To be frank... it's not.  Not even close.  Especially in the case of
the present site I'm working on, where they have certain boxes for
dynamic, others for static.  This is useful when you have one box
running dynamic/static requests..., but it's not a solution, it's a
workaround.  (I should say we're moving to have some boxes static,
some dynamic... at present it's all jumbled up ;-()

 
  It seems quite logical that something
  like this would have been developed, but possibly not.  The separation
  of the two components seems like it should be done, but there must be
  a reason why no one has done it yet... I'm afraid this reason would be
  that the apache module API doesn't lend itself to this.
 
 The reason it hasn't been done in a threaded model is that perl
 isn't stable running threaded yet, and based on the history
 of making programs thread-safe, I'd expect this to take at
 least a few more years.  But, using a non-mod-perl front
 end proxy with ProxyPass and RewriteRule directives to hand
 off to a mod_perl backend will likely get you a 10-1 reduction
 in backend processes and you already know the configuration
 syntax for the second instance.

Well, now you're discussing threaded perl... a whole separate bag of
tricks :).  That's not what I'm talking about... I'm talking about
running a standard perl inside of a threaded environment.  I've done this,
and thrown tens of thousands of requests at it with no problems.  I
believe threaded perl is an attempt to allow multiple simultaneous
requests going into a single perl engine that is "multi-threaded".
There are problems with this... it's difficult to accomplish, and
altogether a slower approach than queuing because of the context
switching overhead.  Not to mention the I/O issue of this...
yikes! makes my head spin.

Thanks,
Shane.
 
  Les Mikesell
[EMAIL PROTECTED]



Re: Modperl/Apache deficiencies... Memory usage.

2000-04-16 Thread shane

On Sun, Apr 16, 2000 at 09:28:56AM +0300, Stas Bekman wrote:
 On Sat, 15 Apr 2000 [EMAIL PROTECTED] wrote:
 
I wrote a very small perl engine
for phhttpd that worked within its threaded paradigm, sucked up a
negligible amount of memory, and used a very basic version of
Apache's registry.
   
   Can you explain how this uses less memory than mod_perl doing the same
   thing?  Was it just that you were using fewer perl interpreters?  If so, you
   need to improve your use of apache with a multi-server setup.  The only way
   I could see phttpd really using less memory to do the same work is if you
   somehow managed to get perl to share more of its internals in memory.  Did
   you?
  
  Yep, very handily I might add ;-).  Basically phhttpd is not process
  based, it's thread based, which means that everything is running
  inside of the same address space, which means 100% sharing except for
  the current local stack of variables... which is very minimal.  In
  terms of the perl thing... when you look at your processes and see all
  that non-shared memory, most of that is stack variables.  Now most
  webservers are running on single processor machines, so they get no
  benefit from having 10s or even 100s of copies of these perl stack
  variables.  It's much more efficient to have a single process handle
  all the perl requests.  On a multiprocessor box that single process
  could have multiple threads in order to take advantage of the
  processors.  See..., mod_perl stores the stack state of every script
  it runs in the apache process... for every script... copies of it,
  many, many copies of it.  This is not efficient.  What would be
  efficient is to have as many threads/processes as you have processors
  for the mod_perl engine.  In other words, separate the engine from the
  apache process so that unnecessary stack variables are never being
  tracked.
 
 I'm not sure you are right in claiming that the best performance will be
 achieved when you have a single process/thread per given processor. This
 would be true *only* if the nature of your code were CPU bound.
 Unfortunately there are various IO operations and communications with
 other components, like RDBMS engines, which in turn have their own IO.
 Given that your CPU is idle while an IO operation is in progress, you
 could use the CPU for processing another request at that time.
 
 Hmm, that's the whole point of a multi-process OS. Unless I
 misunderstood your suggestion, what you are offering is a kind of DOS-like
 OS where there is only one process that occupies the CPU at any given time.
 (Well, assuming that the rest of the OS's essential processes are running
 somewhere too, in a multi-process environment.)

That is an excellent point, Stas.  One that I considered a while ago
but sort of forgot about when I started this thread.  Hrmm... that
brings up much more complex issues.  Yes, you're right, that is my
assumption, and that's because that's the case I'm working under 90%
of the time.  It's a horrible assumption though for the "community at
large".  Hmm... well, you've stumped me... that's a very, very clear
problem with the design I had in mind.  There are ways around this I
can see in my brain, but they are far from elegant.  If something
were blocking on a network read it would stop the WHOLE perl engine...
TERRIBLE, not useful at all for anyone that's going to be doing
something like that.  Well, there must be a way around it... if anyone
has any ideas please shoot them my way... this is a paradox of the nth
order.  Actually it's a problem for mod_perl too..., but it's not
nearly as large a problem as for the design I had in mind.

Congrats Stas... good thinking.
Thanks,
Shane.
(DOS-like isn't fair though! :.  Though I see your point... efficiency
was the key element of what I was thinking, but I had mostly
considered the CPU-bound case... the network-bound case hadn't really
entered my mind.  The way around this is horribly yucky from a
programmatic point of view... e-gads!)
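
The sketch below is the sort of thing I mean: an event loop that
multiplexes every connection with select(2), so a slow read stalls only
its own connection.  Names and the canned response are made up, and the
catch remains that the request handler itself must never block either:

  use strict;
  use IO::Select;
  use IO::Socket::INET;

  my $listener = IO::Socket::INET->new(
      LocalPort => 8080,
      Listen    => 10,
      Reuse     => 1,
  ) or die "listen: $!";

  my $sel = IO::Select->new($listener);

  while (my @ready = $sel->can_read) {
      for my $fh (@ready) {
          if ($fh == $listener) {
              $sel->add($listener->accept);   # new client, watch it too
          }
          elsif (sysread $fh, my $request, 8192) {
              # everything called from here must also avoid blocking!
              print $fh "HTTP/1.0 200 OK\r\n",
                        "Content-Type: text/plain\r\n\r\n",
                        "Hello\r\n";
              $sel->remove($fh);
              close $fh;
          }
          else {
              $sel->remove($fh);              # EOF or error
              close $fh;
          }
      }
  }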

 
 __
 Stas Bekman | JAm_pH--Just Another mod_perl Hacker
 http://stason.org/  | mod_perl Guide  http://perl.apache.org/guide 
 mailto:[EMAIL PROTECTED]  | http://perl.orghttp://stason.org/TULARC/
 http://singlesheaven.com| http://perlmonth.com http://sourcegarden.org
 --
 



Re: Modperl/Apache deficiencies... Memory usage.

2000-04-16 Thread Gunther Birznieks

At 11:52 PM 4/15/00 +, [EMAIL PROTECTED] wrote:
On Sun, Apr 16, 2000 at 09:28:56AM +0300, Stas Bekman wrote:
  On Sat, 15 Apr 2000 [EMAIL PROTECTED] wrote:
 
 I wrote a very small perl engine
 for phhttpd that worked within its threaded paradigm, sucked up a
 negligible amount of memory, and used a very basic version of
 Apache's registry.
   
Can you explain how this uses less memory than mod_perl doing the same
thing?  Was it just that you were using fewer perl interpreters?  If so, you
need to improve your use of apache with a multi-server setup.  The only way
I could see phttpd really using less memory to do the same work is if you
somehow managed to get perl to share more of its internals in memory.  Did
you?
  
   Yep, very handily I might add ;-).  Basically phhttpd is not process
   based, it's thread based, which means that everything is running
   inside of the same address space, which means 100% sharing except for
   the current local stack of variables... which is very minimal.  In
   terms of the perl thing... when you look at your processes and see all
   that non-shared memory, most of that is stack variables.  Now most
   webservers are running on single processor machines, so they get no
   benefit from having 10s or even 100s of copies of these perl stack
   variables.  It's much more efficient to have a single process handle
   all the perl requests.  On a multiprocessor box that single process
   could have multiple threads in order to take advantage of the
   processors.  See..., mod_perl stores the stack state of every script
   it runs in the apache process... for every script... copies of it,
   many, many copies of it.  This is not efficient.  What would be
   efficient is to have as many threads/processes as you have processors
   for the mod_perl engine.  In other words, separate the engine from the
   apache process so that unnecessary stack variables are never being
   tracked.
 
  I'm not sure you are right in claiming that the best performance will be
  achieved when you have a single process/thread per given processor. This
  would be true *only* if the nature of your code were CPU bound.
  Unfortunately there are various IO operations and communications with
  other components, like RDBMS engines, which in turn have their own IO.
  Given that your CPU is idle while an IO operation is in progress, you
  could use the CPU for processing another request at that time.
 
  Hmm, that's the whole point of a multi-process OS. Unless I
  misunderstood your suggestion, what you are offering is a kind of DOS-like
  OS where there is only one process that occupies the CPU at any given time.
  (Well, assuming that the rest of the OS's essential processes are running
  somewhere too, in a multi-process environment.)

That is an excellent point, Stas.  One that I considered a while ago
but sort of forgot about when I started this thread.  Hrmm... that
brings up much more complex issues.  Yes, you're right, that is my
assumption, and that's because that's the case I'm working under 90%
of the time.  It's a horrible assumption though for the "community at
large".  Hmm... well, you've stumped me... that's a very, very clear
problem with the design I had in mind.  There are ways around this I
can see in my brain, but they are far from elegant.  If something
were blocking on a network read it would stop the WHOLE perl engine...
TERRIBLE, not useful at all for anyone that's going to be doing
something like that.  Well, there must be a way around it... if anyone
has any ideas please shoot them my way... this is a paradox of the nth
order.  Actually it's a problem for mod_perl too..., but it's not
nearly as large a problem as for the design I had in mind.

Congrats Stas... good thinking.
Thanks,
Shane.
(DOS-like isn't fair though! :.  Though I see your point... efficiency
was the key element of what I was thinking, but I had mostly
considered the CPU-bound case... the network-bound case hadn't really
entered my mind.  The way around this is horribly yucky from a
programmatic point of view... e-gads!)

I guess that's why you would want to try different mixes of numbers of 
servers on the back end to see which gives you the greatest performance. 
Jeffrey Baker also brings IO issues up in his Apache::DBI posts, about why a 
single pooled connection of DBI handles is not so hot in the real world 
when compared against a single handle cached per apache process.
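
For reference, the cached-handle model amounts to this in startup.pl
(the DSN and credentials are placeholders); Apache::DBI overrides
DBI->connect, so scripts keep calling DBI->connect unchanged and get
the per-process cached handle back:

  use Apache::DBI ();

  # optionally open the connection as each child starts, not on first use
  Apache::DBI->connect_on_init(
      'DBI:Pg:dbname=mydb', 'user', 'password',
      { AutoCommit => 1, RaiseError => 1 },
  );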

If you are CPU bound, then it may be just as well to have a few servers 
chugging away, and to limit the number that can be forked off on the back end. 
If you are IO bound, then you would launch many more. However, the Apache 
model of restarts and forking is not entirely shabby. The fact is that if 
you find some apache processes cannot fulfill the task, then another one 
can always be created, even if you have set a general upper limit of 
mod_perl processes that 

Re: [correction] Benchmarking CGI.pm and Apache::Request

2000-04-16 Thread Gunther Birznieks

I have not been a real fan of Apache::Request and so haven't used it, but 
23% does seem like a big difference.

Of course, if there is an initial hit in parsing the incoming 
parameters in Perl versus C, then that would tend to be a fixed cost 
whose effect in slowing down the script goes down quite a bit during 
normal script operation, since getting data out of the parameters (once 
parsed) should be computationally inexpensive.

Can you take out the print statement at the end that goes through the 
parameters and just print "Hello world" instead, and see if the difference 
is still 23%?  If it is, then we can assume that this is the primary 
slowdown, and one that could be resolved if Lincoln Stein (or someone else) 
were to code a CGI.pm wrapper around Apache::Request that simply uses it 
for the core calls and calls CGI.pm for the HTML generation type of calls.
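
Something like this stripped-down variant (a sketch; parsing still
happens, but the output loop drops out of the measurement):

  benchmarks/cgi_pm_hello.pl
  --------------------------
  use strict;
  use CGI;
  my $q = new CGI;
  print $q->header('text/plain');
  print "Hello world";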

In advance I have to say that I am still a bit biased towards the use of 
CGI.pm and Apache::Registry, because I hate writing scripts that can only be 
used on one platform. To me, I am interested in finding out if there are 
some clean ways of making it possible for me not to write something 
proprietary to mod_perl and still not get embarrassed by the speed. :)

Thanks,
 Gunther

At 05:48 PM 4/15/00 +0300, Stas Bekman wrote:
Well, Gunther has pointed out that the benchmark was unfair, since
CGI.pm's methods had not been precompiled. I've also preloaded both
scripts this time to improve the numbers, and unloaded the system a bit
while running the tests. Here is the corrected version:

=head1 Benchmarking CGI.pm and Apache::Request

Let's write two registry scripts that use C<CGI.pm> and
C<Apache::Request> to process the form's input and print it out.

   benchmarks/cgi_pm.pl
   --------------------
   use strict;
   use CGI;
   my $q = new CGI;
   print $q->header('text/plain');
   print join "\n", map {"$_ = ".$q->param($_) } $q->param;

   benchmarks/apache_request.pl
   ----------------------------
   use strict;
   use Apache::Request ();
   my $r = Apache->request;
   my $q = Apache::Request->new($r);
   $r->send_http_header('text/plain');
   print join "\n", map {"$_ = ".$q->param($_) } $q->param;

We preload both modules that we are about to benchmark in the
I<startup.pl>:

   use Apache::Request ();
   use CGI  qw(-compile :all);

We will preload both scripts as well:

   use Apache::RegistryLoader ();
   Apache::RegistryLoader->new->handler(
       "/perl/benchmarks/cgi_pm.pl",
       "/home/httpd/perl/benchmarks/cgi_pm.pl");
   Apache::RegistryLoader->new->handler(
       "/perl/benchmarks/apache_request.pl",
       "/home/httpd/perl/benchmarks/apache_request.pl");

Now let's benchmark the two:

   % ab -n 1000 -c 10 \
   'http://localhost/perl/benchmarks/cgi_pm.pl?a=b&c=+k+d+d+f&d=asf&as=+1+2+3+4'

   Time taken for tests:   23.950 seconds
   Requests per second:    41.75
   Connection Times (ms)
                min   avg   max
   Connect:       0     0    45
   Processing:  204   238   274
   Total:       204   238   319

   % ab -n 1000 -c 10 \
   'http://localhost/perl/benchmarks/apache_request.pl?a=b&c=+k+d+d+f&d=asf&as=+1+2+3+4'

   Time taken for tests:   18.406 seconds
   Requests per second:    54.33
   Connection Times (ms)
                min   avg   max
   Connect:       0     0    32
   Processing:  156   183   202
   Total:       156   183   234

Apparently the latter script, using C<Apache::Request>, is about 23%
faster. If the input gets larger, the speedup in percentage terms
grows as well.

Again, this benchmark measures only the timing of the input
processing; when the script itself is much heavier, the overhead of
using C<CGI.pm> can be insignificant.

__
Stas Bekman | JAm_pH--Just Another mod_perl Hacker
http://stason.org/  | mod_perl Guide  http://perl.apache.org/guide
mailto:[EMAIL PROTECTED]  | http://perl.orghttp://stason.org/TULARC/
http://singlesheaven.com| http://perlmonth.com http://sourcegarden.org
--

__
Gunther Birznieks ([EMAIL PROTECTED])
Extropia - The Web Technology Company
http://www.extropia.com/




FW: Apache::Session::SysV - No space left on device

2000-04-16 Thread Gerald Richter

I'm forwarding this to the mod_perl list; hopefully somebody there knows
more about semaphores...
Gerald

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]] On Behalf Of
Neil Conway
Sent: Sunday, April 16, 2000 4:32 AM
To: [EMAIL PROTECTED]
Subject: Apache::Session::SysV - No space left on device


Hello everyone,

I'm using FreeBSD 3-STABLE, mod_perl 1.22, Apache 1.3.12, and
HTML::Embperl 1.3b2 on an x86 machine. I'm a newbie trying to get
Apache::Session to work (through HTML::Embperl). This is part of my
startup.pl:

use constant DB_INFO => 'DBI:Pg:dbname=template1';

BEGIN {
    $ENV{EMBPERL_SESSION_CLASSES} = 'DBIStore SysVSemaphoreLocker';
    $ENV{EMBPERL_SESSION_ARGS}    = 'Datasource=' . DB_INFO;
}

When I try to access a simple HTML::Embperl page using %mdat or %udat, I get
the following error:

[5596]ERR: 24: Line 10: Error in Perl code: No space left on device at
/usr/local/lib/perl5/site_perl/5.005/Apache/Session/SysVSemaphoreLocker.pm
line 63.

The output of 'ipcs -as' is as follows:

Semaphores:
T     ID      KEY     MODE     OWNER    GROUP    CREATOR  CGROUP  NSEMS  OTIME     CTIME
s  65536  5432014  --rw---  postgres postgres postgres postgres     16  no-entry  5:49:28
s  65537  5432015  --rw---  postgres postgres postgres postgres     16  no-entry  5:49:28

Apache is running as user 'nobody', group 'nobody'.

A similar thread was mentioned before on the modperl list
(http://forum.swarthmore.edu/epigone/modperl/spaygelthang/23142101.NAA14
[EMAIL PROTECTED]), but no one responded. That poster was
trying to use Apache::Session with
SysVSemaphoreLocker and OpenBSD.

What am I doing wrong? Do I need to allow Apache::Session to create
semaphores? How would I do that?

TIA

--
Neil Conway [EMAIL PROTECTED]
Get my GnuPG key from: http://klamath.dyndns.org/mykey.asc
Encrypted mail welcomed

Never criticize anybody until you have walked a mile in their
shoes, because by that time you will be a mile away and have
their shoes.




[3rd correction] Benchmarking Apache::Registry and Perl Content Handler

2000-04-16 Thread Stas Bekman

Geez, mod_perl rocks.  When I ran the benchmarks on the *strong*
machine the results were all different!
700+ RPS for the hello benchmark!!!

The only mystery left is why registry is so slow on the slow machine
relative to the fast machine.  And yeah, I've used the latest apache/mod_perl.

Enough exclamation marks, see it for yourself. I'll repost the
Apache::Request vs CGI.pm comparison on the strong machine tomorrow.

=head1 Benchmarking

Now we will run different benchmarks to learn which techniques should
be used and which should not. The following SW/HW is used for the
tests:

  HW: Dual Pentium II (Deschutes) 400MHz, 512KB cache, 256MB
  RAM (DIMM PC100)

  SW: Linux (RH 6.1), Perl 5.005_03,
  Apache/1.3.12 mod_perl/1.22 mod_ssl/2.6.2 OpenSSL/0.9.5

The relevant Apache configuration:

  MinSpareServers 10
  MaxSpareServers 20
  StartServers 10
  MaxClients 20
  MaxRequestsPerChild 1


=head2 Apache::Registry and Perl Content Handler


=head3 The Light (Empty) Code

First let's see the overhead that C<Apache::Registry> adds. In order to
do that we will use two almost empty scripts that send only a basic
header and one word of content.

The I<registry.pl> script running under C<Apache::Registry>:

  benchmarks/registry.pl
  --
  use strict;
  print "Content-type: text/plain\r\n\r\n";
  print "Hello";

The Perl content handler:

  Benchmark/Handler.pm
  --------------------
  package Benchmark::Handler;
  use Apache::Constants qw(:common);
  
  sub handler{
    my $r = shift;
    $r->send_http_header('text/html');
    $r->print("Hello");
    return OK;
  }
  1;

with these settings:

  PerlModule Benchmark::Handler
  <Location /benchmark_handler>
    SetHandler perl-script
    PerlHandler Benchmark::Handler
  </Location>

so we get C<Benchmark::Handler> preloaded.

We will use C<Apache::RegistryLoader> to preload the script as
well, so the benchmark will be fair and only the processing time will
be measured. In the I<startup.pl> we add:

  use Apache::RegistryLoader ();
  Apache::RegistryLoader->new->handler(
      "/perl/benchmarks/registry.pl",
      "/home/httpd/perl/benchmarks/registry.pl");

And if we check the I<Compiled Registry Scripts> section with the help
of L<Apache::Status|debug/Apache_Status_Embedded_Inter> (
http://localhost/perl-status?rgysubs ), we see the listing of
the already compiled scripts:

  Apache::ROOT::perl::benchmarks::registry_2epl

=head3 The Heavy Code

We will see that the overhead is insignificant when the code itself
is significantly heavier and slower. Let's leave the above code
examples unmodified, but add some CPU-intensive processing operation (it
could also be an IO operation or a database query):

  my $x = 100;
  my $y = log ($x ** 100)  for (0..1);


=head3 Processing and Results

So now we can proceed with the benchmark. We will generate 5000
requests with a concurrency level of 10 (i.e. emulating 10 concurrent
users):

  % ab -n 5000 -c 10 http://localhost/perl/benchmarks/registry.pl
  % ab -n 5000 -c 10 http://localhost/benchmark_handler

And the results:

=over 

=item *

  Light code:

    Type      RPS  Av.CTime
    --------  ---  --------
    Registry  561        16
    Handler   707        13

  Heavy code:

    Type      RPS  Av.CTime
    --------  ---  --------
    Registry   68       146
    Handler    70       141

  Reports: 
  ---
  RPS   : Requests Per Second
  Av. CTime : Average request processing time (msec) as seen by client


=head3 Conclusions

=over 

=item * The Light Code

We can see that the average overhead added by C<Apache::Registry> is
about:

  16 - 13 = 3 milliseconds

per request.

The difference in speed is about 19%.

=item * The Heavy Code

If we look at the average processing time, we see that the time
delta between the two handlers is almost the same, having grown from
3 msec to 5 msec. This means that the identical heavy code that has
been added was running for 130 msec (146-16). It doesn't mean that the
added code itself was running for 130 msec of CPU time. It means that it
took 130 msec for this code to be completed, in a multi-process
environment where each process gets a time slice of the CPU.

If we run this extra code under plain Benchmark:

  benchmark.pl
  
  use Benchmark;
  
  timethis (1_000,
   sub {
my $x = 100;
my $y = log ($x ** 100)  for (0..1);
  });

  % perl benchmark.pl
  timethis 1000: 25 wallclock secs (24.93 usr +  0.00 sys = 24.93 CPU)

We see that it takes about 25 CPU seconds to complete.

The interesting thing is that when the server under test runs on a
slow machine the results are completely different. I'll present them
here for comparison:

  Light code:

    Type      RPS  Av.CTime
    --------  ---  --------
    Registry   61       160
    Handler   196        50

  Heavy code:

    Type      RPS  Av.CTime
    --------  ---  --------
  

Apache::Scoreboard won't compile...

2000-04-16 Thread Rusty Foster

I'm at a loss as to what to do here... 

The system in question: 
slackware 7, 
Apache 1.3.12, 
mod_perl 1.22,
perl 5.005_03
Apache::Scoreboard 0.10

Here's what happens when I run make (perl Makefile.PL runs fine):

[mkdir blib stuff...]
cp lib/Apache/ScoreboardGraph.pm blib/lib/Apache/ScoreboardGraph.pm
cp Scoreboard.pm blib/lib/Apache/Scoreboard.pm
make[1]: Entering directory `/usr/src/Apache-Scoreboard-0.10/Dummy'
mkdir ../blib/arch/auto/Apache/DummyScoreboard
mkdir ../blib/lib/auto/Apache/DummyScoreboard
cp DummyScoreboard.pm ../blib/lib/Apache/DummyScoreboard.pm
/usr/bin/perl -I/usr/lib/perl5/i386-linux -I/usr/lib/perl5
/usr/lib/perl5/ExtUtils/xsubpp  -typemap /usr/lib/perl5/ExtUtils/typemap
-typemap /usr/lib/perl5/site_perl/i386-linux/auto/Apache/typemap
-typemap typemap DummyScoreboard.xs > xstmp.c && mv xstmp.c
DummyScoreboard.c
Please specify prototyping behavior for DummyScoreboard.xs (see perlxs
manual)
cc -c -I../ -I/usr/lib/perl5/site_perl/i386-linux/auto/Apache/include
-I/usr/lib/perl5/site_perl/i386-linux/auto/Apache/include/modules/perl
-I/usr/lib/perl5/site_perl/i386-linux/auto/Apache/include/include
-I/usr/lib/perl5/site_perl/i386-linux/auto/Apache/include/regex
-I/usr/lib/perl5/site_perl/i386-linux/auto/Apache/include/os/unix
-I/include -I/usr/local/apache/include -Dbool=char -DHAS_BOOL
-I/usr/local/include -O2 -DVERSION=\"0.04\" -DXS_VERSION=\"0.04\"
-fpic -I/usr/lib/perl5/i386-linux/CORE  DummyScoreboard.c
In file included from DummyScoreboard.xs:2:
/usr/lib/perl5/site_perl/i386-linux/auto/Apache/include/include/scoreboard.h:150:
field `start_time' has incomplete type
/usr/lib/perl5/site_perl/i386-linux/auto/Apache/include/include/scoreboard.h:151:
field `stop_time' has incomplete type
make[1]: *** [DummyScoreboard.o] Error 1
make[1]: Leaving directory `/usr/src/Apache-Scoreboard-0.10/Dummy'
make: *** [subdirs] Error 2


Note in particular the "Please specify prototyping behavior for
DummyScoreboard.xs" and the "incomplete type" warnings. Does anyone know
what this problem is, or even know where I should start looking? Thanks
in advance.

--R
-- 
===
|  Rusty Foster   | "You can never entirely stop being what   |
|   [EMAIL PROTECTED]| you once were. That's why it's important  |
|[EMAIL PROTECTED]  | to be the right person today, and not put |
| http://www.kuro5hin.org | it off till tomorrow."-Larry Wall |
===



Re: Modperl/Apache deficiencies... Memory usage.y

2000-04-16 Thread Leslie Mikesell

According to [EMAIL PROTECTED]:
  
  This is basically what you get with the 'two-apache' mode.
 
 To be frank... it's not.  Not even close.

It is the same to the extent that you get a vast reduction in the
number of backend mod_perl processes.  As I mentioned before, I
see a fairly constant ratio of 10-1 but it is really going to depend
on how fast your script can deliver its output back to the front
end (some of mine are slow).  It is difficult to benchmark this on
a LAN because the thing that determines the number of front-end
connections is the speed at which the content can be delivered back
to the client.  On internet connections you will see many slow
links, and letting those clients talk directly to mod_perl is the
only real problem.

 Especially in the case of
 the present site I'm working on, where they have certain boxes for
 dynamic, others for static.

This is a perfect setup.  Let the box handling static content
also proxy the dynamic requests to the backend.

 This is useful when you have one box
 running dynamic/static requests..., but it's not a solution, it's a
 workaround.  (I should say we're moving to have some boxes static,
 some dynamic... at present it's all jumbled up ;-()

Mod_rewrite is your friend when you need to spread things over
an arbitrary mix of boxes.  And it doesn't hurt much to
run an extra front end on your dynamic box either - it will
almost always be a win if clients are hitting it directly.
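
For instance, front-end rules along these lines (the hostnames are
invented for the example) proxy each class of URL to the right box:

  RewriteEngine On
  RewriteRule ^/images/(.*)$ http://static.example.com/images/$1 [P]
  RewriteRule ^/app/(.*)$    http://dynamic.example.com/app/$1   [P]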

A fun way to convince yourself that the front/back end setup is
working is to run something called 'lavaps' (at least under Linux,
you can find this at www.freshmeat.net).  This shows your processes
as moving colored blobs floating around with the size related to
memory use and the activity and brightness related to processor
use.  It is pretty dramatic on a box typically running 200 1Meg
frontends, and 20 10Meg backends. You get the idea quickly what
would happen with 200 10Meg processes instead - or trying to
funnel through one perl backend.
  
 Well, now you're discussing threaded perl... a whole separate bag of
 tricks :).  That's not what I'm talking about... I'm talking about
 running a standard perl inside of a threaded environment.  I've done this,
 and thrown tens of thousands of requests at it with no problems.

You could simulate this by configuring the mod_perl backend to only
run one child and letting the backlog sit in the listen queue. But
you will end up with the same problem.

 I
 believe threaded perl is an attempt to allow multiple simultaneous
 requests going into a single perl engine that is "multi-threaded".
 There are problems with this... it's difficult to accomplish, and
 altogether a slower approach than queuing because of the context
 switching overhead.  Not to mention the I/O issue of this...
 yikes! makes my head spin.

What happens in your model - or any single-threaded, single-process
model - when something takes longer than you expect?  If you are
just doing internal CPU processing and never have an error in your
programs you will be fine, but much of my mod_perl work involves
database connections and network I/O to yet another server for the
data to be displayed.  Some of these are slow and I can't allow
other requests to block until all prior ones have finished.  The
apache/mod_perl model automatically keeps the right number of
processes running to handle the load, and since I mostly run
dual-processor machines I want at least a couple running all the
time.

   Les Mikesell
[EMAIL PROTECTED]



Apache::Session beginner

2000-04-16 Thread Tom Peer



Does anyone know any good resources for learning 
about Apache::Session ?

Thanks,

Tom


Re: Apache::Session beginner

2000-04-16 Thread Jeff Beard

Besides the pod docs, there's another usage description in the guide:

http://perl.apache.org/guide/modules.html#Apache_Session_Maintain_session

If you're looking to use Apache::Session::DBI, then you'll need to run down
the docs for your database of choice.
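
A minimal sketch of the tie interface those docs describe, using the
file-backed store (the DBI-backed flavors take DataSource/UserName/etc.
arguments instead; the directories here are just examples):

  use strict;
  use Apache::Session::File;

  tie my %session, 'Apache::Session::File', undef, {  # undef = new session
      Directory     => '/tmp/sessions',
      LockDirectory => '/tmp/sessions/lock',
  };

  my $id = $session{_session_id};   # hand this back in a cookie or URL
  $session{visits}++;               # anything you store persists
  untie %session;                   # flushes the session to the store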


Cheers,

Jeff



At 07:43 PM 4/16/00, Tom Peer wrote:
Does anyone know any good resources for learning about Apache::Session ?

Thanks,

Tom



Jeff Beard
___
Web:www.cyberxape.com
Phone:  303.443.9339
Location:   Boulder, CO, USA





Re: cgiwrap for Apache::ASP?

2000-04-16 Thread Philip Mak

On Fri, 14 Apr 2000, Ime Smits wrote:

 | I also have ASP installed, and I'd like to be able to transparently suid
 | the .asp scripts too. Do you know how I could go about doing this?
 
 I think this is generally a bad idea. The only purpose of running scripts via
 a suexec or setuid mechanism I can think of is to stop different users'
 websites running on the same httpd from digging and interfering in each
 other's data and files.

This server is used by many unaffiliated people who run their own
websites. Some people want to write their own CGI or ASP scripts that work
with files. The simplest example is a form that can be filled out and
stores the data in a file. If I don't suid their scripts, then they can
mess up each others' data files. They also cannot write data files into
their own directories.

Also, my system has cgiexec (does suid for CGI scripts) installed. The
cgiexec documentation says that once cgiexec is installed, it is a
security risk if people can execute code as "nobody" since that user has
special access to the cgiexec code. Right now, anyone can execute code as
nobody by writing ASP code, so in essence I have a security hole in my
system, and I DO need cgiexec.

So, does anyone have suggestions on how to do suid for ASP scripts?

 If you're not trusting the people making websites and you're looking for a
 virtual hosting solution, I think some postings earlier this week about

That's exactly the case here.

 proxying requests to a user-dedicated apache listening on localhost is the
 best solution.

Wouldn't this require running one web server process for each user? I may
be wrong, but it seems to be simpler to just suid their scripts.

-Philip Mak ([EMAIL PROTECTED])




Re: cgiwrap for Apache::ASP?

2000-04-16 Thread Tom Brown

 Also, my system has cgiexec (does suid for CGI scripts) installed. The
 cgiexec documentation says that once cgiexec is installed, it is a
 security risk if people can execute code as "nobody" since that user has
 special access to the cgiexec code. Right now, anyone can execute code as
 nobody by writing ASP code, so in essence I have a security hole in my
 system, and I DO need cgiexec.
 
 So, does anyone have suggestions on how to do suid for ASP scripts?

no (because there isn't an easy, or even moderately difficult, one), but
the solution to the "nobody" problem is to run your mod_perl webserver
under a "modperl" userid. 
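
In httpd.conf terms that's just (the name is only an example):

  User  modperl
  Group modperl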

--
[EMAIL PROTECTED]   | Put all your eggs in one basket and 
http://BareMetal.com/  |  WATCH THAT BASKET!
web hosting since '95  | - Mark Twain





Re: cgiwrap for Apache::ASP?

2000-04-16 Thread Ime Smits

| Also, my system has cgiexec (does suid for CGI scripts) installed. The
| cgiexec documentation says that once cgiexec is installed, it is a
| security risk if people can execute code as "nobody" since that user has
| special access to the cgiexec code. Right now, anyone can execute code as
| nobody by writing ASP code, so in essence I have a security hole in my
| system, and I DO need cgiexec.

Like I said, doing something like suEXEC will solve your file access
problems, but it won't prevent people from messing up things like the
$Session and $Application objects, which are accessible to all users running
their sites on this webserver. It won't even prevent a user from redefining
scalars, subroutines or even complete modules which do not belong to
their own scripts.

This would open up a whole new security hole, because then someone can, with
some smart work, insert handlers or redefine modules which aren't theirs and
then let the code be executed later as someone else, with full access to
*all* files of the victim.

One way or the other, you are not going to get the security you want.

And then again: if you just think how easily badly coded perl scripts (which
work perfectly fine as strict CGI) can become a major pain in the ass
(memory leakage, resource exhaustion) in a mod_perl environment, I wouldn't
trust anybody without prior verification of their coding abilities in such a
virtual hosting environment in the first place.

| So, does anyone have suggestions on how to do suid for ASP scripts?

For the occasional script where you *have* to do suexec because you're going
to have to write to disk, make them plain CGI (that is, with the
#!/usr/bin/perl line at the top) and use the Apache suexec.
Otherwise, rethink your concept and see what can be put in a real database.

| Wouldn't this require running one web server process for each user? I may
| be wrong, but it seems to be simpler to just suid their scripts.

Yep. But that's the only way you are going to have a real secure setup.

Ime




Re: cgiwrap for Apache::ASP?

2000-04-16 Thread Tom Brown

On Mon, 17 Apr 2000, Ime Smits wrote:

 | Also, my system has cgiexec (does suid for CGI scripts) installed. The
 | cgiexec documentation says that once cgiexec is installed, it is a
 | security risk if people can execute code as "nobody" since that user has
 | special access to the cgiexec code. Right now, anyone can execute code as
 | nobody by writing ASP code, so in essence I have a security hole in my
 | system, and I DO need cgiexec.
 
 Like I said, doing something like suEXEC will solve your file access
 problems, but it won't prevent people from messing up things like the
 $Session and $Application objects which are accessible to all users running
 their site on this webserver. It won't even prevent a user to redefine a
 scalars, subroutines or even complete modules which are not belonging to
 their own scripts.

Huh? SuEXEC only works with mod_cgi (e.g. it requires the exec() part of
its name to get the Su part); it is not applicable to the persistent
mod_perl world.

The rest of your discussion seems to relate to the persistence of
the mod_perl environment.

<rest of message snipped>

-Tom




Re: cgiwrap for Apache::ASP?

2000-04-16 Thread Ime Smits

| Huh? SuEXEC only works with mod_cgi (e.g. it requires the exec() part of
| it's name to get the Su part), it is not applicable to the persistant
| mod_perl world.

Ok, I must admit I mixed up references to the concept (setuid()) and the
implementation (suexec).

The point I was making was that even if - hypothetically speaking - you could
get Apache to do some set(e)uid() and set(e)gid() system calls just before
script startup (like the suexec utility does), you would not get the security
you want anyway.

Ime




Re: Modperl/Apache deficiencies... Memory usage.

2000-04-16 Thread Leslie Mikesell

According to Gunther Birznieks:

 If you want the ultimate in clean models, you may want to consider coding 
 in Java Servlets. It tends to be longer to write Java than Perl, but it's 
 much cleaner as all memory is shared and thread-pooling libraries do exist 
 to restrict 1-thread (or few threads) per CPU (or the request is blocked) 
 type of situation.

Do you happen to know of anyone doing XML/XSL processing in
servlets?  A programmer here has written some nice-looking stuff,
but it appears that the JVM never garbage-collects and
will just grow and get slower until someone restarts it.  I
don't know enough Java to tell whether it is his code or the XSLT
classes that are causing it.

Yes, I know this is off-topic for mod_perl, except to point out
that the clean Java model isn't necessarily trouble-free either.

  Les Mikesell
   [EMAIL PROTECTED]



Re: Apache::ASP problem, example index.html not working

2000-04-16 Thread Joshua Chamas

You need to also copy over the global.asa file from
the examples.

--Joshua

Andy Yiu wrote:
 
 Hi, this is Andy again.
 
 It's about the problem that, after I installed the ASP patch, all
 the other *.asp files are working but index.html is not;
 it claims that it couldn't find global.asa.
 
 The .htaccess file I used is from your example folder,
 which is :
 
 
 # Note this file was used for Apache 1.3.0
 # Please see the readme, for what exactly the config
 variables do.
 
 PerlSetVar Global  .
 PerlSetVar GlobalPackage Apache::ASP::Demo
 PerlSetVar StateDir  /tmp/asp_demo
 PerlSetVar StatINC 0
 PerlSetVar StatINCMatch 0
 PerlSetVar Clean 0
 PerlSetVar DynamicIncludes 1
 PerlSetVar FileUploadMax 25000
 PerlSetVar FileUploadTemp 1
 PerlSetVar SessionQueryParse 0
 PerlSetVar SessionQuery 1
 PerlSetVar Debug -2
 PerlSetVar StateCache 0
 
 # .asp files for Session state enabled
 <Files ~ (\.asp)>
 SetHandler perl-script
 PerlHandler Apache::ASP
 PerlSetVar CookiePath  /
 PerlSetVar SessionTimeout  .5
 #   PerlSetVar StateSerializer Storable
 #   PerlSetVar StateDB DB_File
 #   PerlSetVar StatScripts 0
 </Files>
 
 # .htm files for the ASP parsing, but not the $Session object
 # NoState turns off $Session & $Application
 <Files ~ (\.htm)>
 SetHandler perl-script
 PerlHandler Apache::ASP
 PerlSetVar NoState 1
 PerlSetVar BufferingOn 1
 PerlSetVar NoCache 1
 PerlSetVar DebugBufferLength 250
 </Files>
 
 <Files ~ (\.inc|\.htaccess)>
 ForceType text/plain
 </Files>
 
 # .ssi for full ssi support, with Apache::Filter
 <Files ~ (\.ssi)>
 SetHandler perl-script
 PerlHandler Apache::ASP Apache::SSI
 PerlSetVar Global .
 PerlSetVar Filter On
 PerlSetVar NoCache 1
 </Files>
 
 <Files ~ (session_query_parse.asp$)>
 SetHandler perl-script
 PerlHandler Apache::ASP
 PerlSetVar CookiePath  /
 PerlSetVar SessionTimeout  1
 PerlSetVar SessionQueryParseMatch
 ^http://localhost
 </Files>
 -
 
 the folder I put my asp example files is
 /data/home/asp/eg
 
 And here is the error message I get.
 
 ---
 Errors Output
 
  %EG is not defined, make sure you copied
 ./eg/global.asa correctly at (eval 12) line 5.
 , /usr/lib/perl5/site_perl/5.005/Apache/ASP.pm line
 1229
 
 Debug Output
 
  RUN ASP (v0.18) for /data/home/asp/eg/index.html
  GlobalASA package Apache::ASP::Demo
  ASP object created - GlobalASA:
 Apache::ASP::GlobalASA=HASH(0x83c5370); Request:
 Apache::ASP::Request=HASH(0x83947d0); Response:
 Apache::ASP::Response=HASH(0x83946d4); Server:
 Apache::ASP::Server=HASH(0x83945a8); basename:
 index.html; compile_includes: 1; dbg: 2;
 debugs_output: ARRAY(0x82834c0); filename:
 /data/home/asp/eg/index.html; global: /tmp;
 global_package: Apache::ASP::Demo; id: NoCache;
 includes_dir: ; init_packages: ARRAY(0x8302fe4);
 no_cache: 1; no_state: 1; package: Apache::ASP::Demo;
 pod_comments: 1; r: Apache=SCALAR(0x840ca70);
 sig_warn: ; stat_inc: ; stat_inc_match: ;
 stat_scripts: 1; unique_packages: ; use_strict: ;
  parsing index.html
  runtime exec of dynamic include header.inc args ()
  parsing header.inc
  undefing sub Apache::ASP::Demo::_tmp_header_inc code
 CODE(0x840ff20)
  compile include header.inc sub _tmp_header_inc
  runtime exec of dynamic include footer.inc args ()
  parsing footer.inc
  already failed to load Apache::Symbol
  undefing sub Apache::ASP::Demo::_tmp_footer_inc code
 CODE(0x842b6a0)
  compile include footer.inc sub _tmp_footer_inc
  already failed to load Apache::Symbol
  undefing sub Apache::ASP::Demo::NoCache code
 CODE(0x842b6b8)
  compiling into package Apache::ASP::Demo subid
 Apache::ASP::Demo::NoCache
  executing NoCache
  %EG is not defined, make sure you copied
 ./eg/global.asa correctly at (eval 12) line 5.
 , /usr/lib/perl5/site_perl/5.005/Apache/ASP.pm line
 1229
 
 ASP to Perl Program
 
   1: package Apache::ASP::Demo; ;; sub
 Apache::ASP::Demo::NoCache {  ;;  return(1) unless
 $_[0];  ;; no strict;;use vars qw($Application
 $Session $Response $Server $Request);;
   2: # split the page in 2 for nice formatting and
 english style sorting
   3: my(@col1, @col2);
   4: my @keys = sort keys %EG;
   5: @keys || die("\%EG is not defined, make sure you
 copied ./eg/global.asa correctly");
   6: my $half = int(@keys/2);
   7:
   8: for(my $i =0; $i <= $#keys; $i++) {
   9:if($i < $half) {
  10:push(@col1, $keys[$i]);
  11:} else {
  12:push(@col2, $keys[$i]);
  13:}
  14: }
  15: $Response->Debug(\@col1, \@col2);
  16: $title = 'Example ASP Scripts';
  17: $Response->Write('
  18:
  19: '); $Response->Include('header.inc', );
 $Response->Write('
  20:
  21: <table border=0>
  22: '); while(@col1) {
  23:my $col1 = shift @col1;
  24:my $col2 = shift @col2;
  25:$Response->Write('
  26:<tr>
  27:'); for([$col1, 

cvs commit: modperl-2.0/src/modules/perl modperl_log.h

2000-04-16 Thread dougm

dougm   00/04/16 17:02:26

  Modified:.Makefile.PL
   lib/Apache Build.pm
   patches  link-hack.pat
   src/modules/perl modperl_log.h
  Log:
  add Apache::Build::{ccopts,ldopts} methods
  enable libgtop linking if debug and available
  beef up logic for generating/cleaning files
  
  Revision  ChangesPath
  1.9   +5 -5  modperl-2.0/Makefile.PL
  
  Index: Makefile.PL
  ===
  RCS file: /home/cvs/modperl-2.0/Makefile.PL,v
  retrieving revision 1.8
  retrieving revision 1.9
  diff -u -r1.8 -r1.9
  --- Makefile.PL   2000/04/16 01:41:21 1.8
  +++ Makefile.PL   2000/04/17 00:02:22 1.9
  @@ -10,7 +10,7 @@
   
   our $VERSION;
   
   -my $build = Apache::Build->new;
   +my $build = Apache::Build->new(debug => 1);
    my $code  = ModPerl::Code->new;
   
   #quick hack until build system is complete
  @@ -43,6 +43,7 @@
 $httpd_version, $VERSION, $^V;
   
   $build-save;
  +$build-save_ldopts;
   
   $code-generate;
   }
  @@ -57,17 +58,16 @@
   my $opts = shift;
    return clean() if $opts->{m} eq 'c';
   
   -my $ccopts = ExtUtils::Embed::ccopts();
   +my $ccopts = $build->ccopts;
    my @inc = $build->inc;
    my($cc, $ar) = map { $build->perl_config($_) } qw(cc ar);
    
    chdir $code->path;
   
  -my $flags = "-g -Wall";
   -$flags .= " -E" if $opts->{E};
   +$ccopts .= " -E" if $opts->{E};
    
    for (sort { (stat $b)[9] <=> (stat $a)[9] } @{ $code->c_files }) {
   -echo_cmd "$cc $flags $ccopts @inc -c $_";
   +echo_cmd "$cc $ccopts @inc -c $_";
    }
    
    echo_cmd "$ar crv libmodperl.a @{ $code->o_files }";
  
  
  
  1.4   +70 -8 modperl-2.0/lib/Apache/Build.pm
  
  Index: Build.pm
  ===
  RCS file: /home/cvs/modperl-2.0/lib/Apache/Build.pm,v
  retrieving revision 1.3
  retrieving revision 1.4
  diff -u -r1.3 -r1.4
  --- Build.pm  2000/04/15 17:38:45 1.3
  +++ Build.pm  2000/04/17 00:02:23 1.4
  @@ -5,6 +5,7 @@
   use warnings;
   use Config;
   use Cwd ();
  +use ExtUtils::Embed ();
    use constant is_win32 => $^O eq 'MSWin32';
    use constant IS_MOD_PERL_BUILD => grep { -e "$_/lib/mod_perl.pm" } qw(. ..);
   
  @@ -60,6 +61,40 @@
   
   #--- Perl Config stuff ---
   
  +sub gtop_ldopts {
  +my $xlibs = "-L/usr/X11/lib -L/usr/X11R6/lib -lXau";
  +return " -lgtop -lgtop_sysdeps -lgtop_common $xlibs";
  +}
  +
  +sub ldopts {
  +my($self) = @_;
  +
  +my $ldopts = ExtUtils::Embed::ldopts();
  +chomp $ldopts;
  +
   +if ($self->{use_gtop}) {
   +$ldopts .= $self->gtop_ldopts;
  +}
  +
  +$ldopts;
  +}
  +
  +sub ccopts {
  +my($self) = @_;
  +
  +my $ccopts = ExtUtils::Embed::ccopts();
  +
   +if ($self->{use_gtop}) {
  +$ccopts .= " -DMP_USE_GTOP";
  +}
  +
   +if ($self->{debug}) {
  +$ccopts .= " -g -Wall -DMP_TRACE";
  +}
  +
  +$ccopts;
  +}
  +
   sub perl_config {
   my($self, $key) = @_;
   
  @@ -181,22 +216,37 @@
   
   sub new {
   my $class = shift;
  +
  +my $self = bless {
   +cwd => Cwd::fastcwd(),
  +@_,
  +}, $class;
   
  -bless {
   -   cwd => Cwd::fastcwd(),
  -   @_,
  -  }, $class;
   +if ($self->{debug}) {
   +$self->{use_gtop} = 1 if $self->find_dlfile('gtop');
  +}
  +
  +$self;
   }
   
   sub DESTROY {}
   
  -my $save_file = 'lib/Apache/BuildConfig.pm';
  +my %default_files = (
   +'build_config' => 'lib/Apache/BuildConfig.pm',
   +'ldopts' => 'src/modules/perl/ldopts',
  +);
   
   sub clean_files {
   my $self = shift;
   -$self->{save_file} || $save_file;
   +map { $self->default_file($_) } keys %default_files;
   }
   
  +sub default_file {
  +my($self, $name, $override) = @_;
  +my $key = join '_', 'file', $name;
   +$self->{$key} ||= ($override || $default_files{$name});
  +}
  +
   sub freeze {
   require Data::Dumper;
   local $Data::Dumper::Terse = 1;
  @@ -205,12 +255,24 @@
   $data;
   }
   
  +sub save_ldopts {
  +my($self, $file) = @_;
  +
   +$file ||= $self->default_file('ldopts', $file);
   +my $ldopts = $self->ldopts;
   +
   +open my $fh, '>', $file or die "open $file: $!";
  +print $fh "#!/bin/sh\n\necho $ldopts\n";
  +close $fh;
  +chmod 0755, $file;
  +}
  +
   sub save {
   my($self, $file) = @_;
   
   -$self->{save_file} = $file || $save_file;
   +$file ||= $self->default_file('build_config');
    (my $obj = $self->freeze) =~ s/^//;
   -open my $fh, '>', $self->{save_file} or die "open $file: $!";
   +open my $fh, '>', $file or die "open $file: $!";
    
    print $fh <<EOF;
   package Apache::BuildConfig;
  
  
  
  1.2   +6 -9  modperl-2.0/patches/link-hack.pat
  
  Index: link-hack.pat
  ===