SetupConfDa.al

2000-12-21 Thread Myrddin

Hello. :)

I had mod_perl and Embperl working just fine until I decided to upgrade my
apache/mod_perl bits (largely because there was demand to also allow php to
work, which meant having to go through the headache of building a new apache
that supported mod_perl, mod_php, ssl, etc).  I used 'apachetoolbox' to do the
job for me (http://www.apachetoolbox.com/index.php) and everything seemed to
go smoothly.  No errors or anything.

Until I tried to access pages that were using embedded perl.  I got messages
in the error_log file saying (formatted here for ease of readability):

   [Wed Dec 20 01:11:43 2000] [error] Undefined subroutine
  HTML::Embperl::handler called.
   [Wed Dec 20 01:11:44 2000] [error] Can't locate HTML/Embperl.pm in @INC
(@INC contains: /usr/local/lib/perl5/5.6.0/i686-linux
/usr/local/lib/perl5/5.6.0
/usr/local/lib/perl5/site_perl/5.6.0/i686-linux
/usr/local/lib/perl5/site_perl/5.6.0
/usr/local/lib/perl5/site_perl .
/usr/local/apache/
/usr/local/apache/lib/perl) at (eval 7) line 3.

So, I found Embperl.pm, put a copy of it in /usr/local/apache/lib/perl/HTML
(just for testing), and I then got the following messages (I edited out the
@INC path as it's identical to above):

   [Wed Dec 20 01:13:47 2000] [error] Can't locate loadable object for module
HTML::Embperl in @INC at (eval 7) line 3
   [Wed Dec 20 01:13:47 2000] [error] Can't locate
auto/HTML/Embperl/SetupConfDa.al in @INC at
/usr/local/apache/lib/perl/HTML/Embperl.pm line 679
   
So, my natural instinct was to look for SetupConfDa.al.  It's nowhere to be
found on my system.  I suspected I might need to upgrade my Embperl, so I
fired off the old 'perl -MCPAN -e shell' and tried to 'install
HTML::Embperl'... the only problem is that during the test phase, it keeps
crapping out on the chdir.htm test, and so won't install without a force.  I
don't feel comfortable doing that, but at the same time, I have to revert back
to my original apache setup in order for my existing embedded perl pages to
load properly (which they do, by the way).

Any help would be greatly appreciated. :)

- Myrddin



Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2000-12-21 Thread Perrin Harkins

Gunther Birznieks wrote:
 Sam just posted this to the speedycgi list just now.
[...]
 The underlying problem in mod_perl is that apache likes to spread out
 web requests to as many httpd's, and therefore as many mod_perl interpreters,
 as possible using an LRU selection process for picking httpd's.

Hmmm... this doesn't sound right.  I've never looked at the code in
Apache that does this selection, but I was under the impression that the
choice of which process would handle each request was an OS dependent
thing, based on some sort of mutex.

Take a look at this: http://httpd.apache.org/docs/misc/perf-tuning.html

Doesn't that appear to be saying that whichever process gets into the
mutex first will get the new request?  In my experience running
development servers on Linux it always seemed as if the requests
would continue going to the same process until a request came in when
that process was already busy.

As I understand it, the implementation of "wake-one" scheduling in the
2.4 Linux kernel may affect this as well.  It may then be possible to
skip the mutex and use unserialized accept for single socket servers,
which will definitely hand process selection over to the kernel.

 The problem is that at a high concurrency level, mod_perl is using lots
 and lots of different perl-interpreters to handle the requests, each
 with its own un-shared memory.  It's doing this due to its LRU design.
 But with SpeedyCGI's MRU design, only a few speedy_backends are being used
 because as much as possible it tries to use the same interpreter over and
 over and not spread out the requests to lots of different interpreters.
 Mod_perl is using lots of perl-interpreters, while speedycgi is only using
 a few.  mod_perl is requiring that lots of interpreters be in memory in
 order to handle the requests, whereas speedy only requires a small number
 of interpreters to be in memory.

This test - building up unshared memory in each process - is somewhat
suspect since in most setups I've seen, there is a very significant
amount of memory being shared between mod_perl processes.  Regardless,
the explanation here doesn't make sense to me.  If we assume that each
approach is equally fast (as Sam seems to say earlier in his message)
then it should take an equal number of speedycgi and mod_perl processes
to handle the same concurrency.

That leads me to believe that what's really happening here is that
Apache is pre-forking a bit over-zealously in response to a sudden surge
of traffic from ab, and thus has extra unused processes sitting around
waiting, while speedycgi is avoiding this situation by waiting for
someone to try and use the processes before forking them (i.e. no
pre-forking).  The speedycgi way causes a brief delay while new
processes fork, but doesn't waste memory.  Does this sound like a
plausible explanation to folks?

This is probably all a moot point on a server with a properly set
MaxClients and Apache::SizeLimit that will not go into swap.  I would
expect mod_perl to have the advantage when all processes are
fully-utilized because of the shared memory.  It would be cool if
speedycgi could somehow use a parent process model and get the shared
memory benefits too.  Speedy seems like it might be more attractive to
ISPs, and it would be nice to increase interoperability between the two
projects.
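
For context, a minimal sketch of the typical mod_perl 1.x Apache::SizeLimit
setup Perrin refers to; the variable names come from that module's docs of the
era, and the size values here are placeholders, not recommendations from this
thread:

    # in startup.pl -- have children kill themselves off once they grow too big
    use Apache::SizeLimit ();
    $Apache::SizeLimit::MAX_PROCESS_SIZE       = 12000;  # total size, in KB
    $Apache::SizeLimit::CHECK_EVERY_N_REQUESTS = 5;      # how often to check

    # and in httpd.conf:
    #   PerlFixupHandler Apache::SizeLimit

Combined with a MaxClients low enough that MaxClients times the per-process
size fits in RAM, this is what keeps a busy mod_perl server out of swap.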

- Perrin



Re: SetupConfDa.al

2000-12-21 Thread Gerald Richter

Embperl wasn't installed at all before; getting the newest version from CPAN
was the correct solution.


 I suspected I might need to upgrade my Embperl, so I
 fired off the old 'perl -MCPAN -e shell' and tried to 'install
 HTML::Embperl'... the only problem is that during the test phase, it keeps
 crapping out on the chdir.htm test, and so won't install without a force.

Could you run the tests with

make test TESTARGS="-i"

This will continue after the failed chdir test. If the chdir test is the
only one that fails, there should be no problem installing and using Embperl.
The chdir test fails on some platforms (namely Solaris, maybe others) and I
haven't had a chance to track down why.
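
For reference, and assuming the usual ExtUtils::MakeMaker build in the unpacked
HTML-Embperl source directory, the manual sequence that passes TESTARGS through
would be roughly:

    perl Makefile.PL
    make
    make test TESTARGS="-i"
    make install

followed by make install only if, per Gerald's note above, the chdir test is
the sole failure.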

Gerald

-
Gerald Richter    ecos electronic communication services gmbh
Internetconnect * Webserver/-design/-datenbanken * Consulting

Post:   Tulpenstrasse 5 D-55276 Dienheim b. Mainz
E-Mail: [EMAIL PROTECTED] Voice:+49 6133 925131
WWW:    http://www.ecos.de  Fax:  +49 6133 925152
-





Re: sorting subroutine and global variables

2000-12-21 Thread Alexander Farber (EED)

Hi,

thanks for your reply,

Stas Bekman wrote:
 On Wed, 20 Dec 2000, Alexander Farber (EED) wrote:
 
  sub mysort
  {
  my $param = $query->param('sort') || 'MHO'; # XXX global $query,
 # not mod_perl clean?
  return $a->{$param} cmp $b->{$param};
  }
 
  This subroutine is called later as:
 
  for my $href (sort mysort values %$hohref)
  {
  ...
  }
 
 Your code is better written as:
 
   my $param = $query->param('sort') || 'MHO';
   for my $href (sort {$a->{$param} cmp $b->{$param}} values %$hohref) { }

but isn't it the same? The anonymous sub {$a->{$param} cmp $b->{$param}}
uses the "outside"-variable $param.

 why wasting resources...

Also, assuming I would like to have a separate sorting subroutine
mysort, since it is more complicated than listed above... How would
you pass some parameters to this subroutine? Via global vars?

Regards
Alex



Re: POST with PERL

2000-12-21 Thread Glorfindel

Hi,

You should not try to post TO a flat html file, but only FROM it.


Hope it helps.


[EMAIL PROTECTED] wrote:

 Hi!

 I have a little problem. I wrote a perl-script to manage the guestbook-like
 section of my page. The script is working, but only from the shell. When I try to
 post data through an html form I get an error saying that the post method is
 not supported by this document. It's in a directory with options ExecCGI and
 FollowSymLinks. There is also a script handler for .cgi. What's the
 matter?
 Thanks in advance

 IronHand   mailto:[EMAIL PROTECTED]

--
Don't be irreplaceable, if you can't be replaced, you can't be promoted.






Re: fork inherits socket connection

2000-12-21 Thread Kees Vonk 7249 24549

 Below is a solution to this problem:
 
 * fork the long running process from mod_perl
 * and be able to restart the server 
o without killing this process 
o without this process keeping the socket busy 
  and thus preventing the server restart
 
 Thanks to Doug for the hint. You need to patch Apache::SubProcess 
 (CPAN):
 
 --- SubProcess.xs.orig  Sat Sep 25 19:17:12 1999
 +++ SubProcess.xs   Tue Dec 19 21:03:22 2000
 @@ -103,6 +103,14 @@
 XPUSHs(io_hook(ioep, io_hook_read));
  }
  
 +
 +void
 +ap_cleanup_after_fork(r)
 +Apache r
 +
 +CODE:
 +ap_cleanup_for_exec();  
 +
  int
  ap_call_exec(r, pgm=r->filename)
  Apache r
 
 
 which makes the new method available: cleanup_after_fork() 
 
 This is the clean test case that shows that the conditions are
 fulfilled properly:
 
   use strict;
   use POSIX 'setsid';
   use Apache::SubProcess;
 
   my $r = shift;
    $r->send_http_header("text/plain");
 
   $SIG{CHLD} = 'IGNORE';
   defined (my $kid = fork) or die "Cannot fork: $!\n";
   if ($kid) {
 print "Parent $$ has finished, kid's PID: $kid\n";
   } else {
    $r->cleanup_after_fork();
    chdir '/' or die "Can't chdir to /: $!";
    open STDIN, '/dev/null'  or die "Can't read /dev/null: $!";
    open STDOUT, '>/dev/null'
    or die "Can't write to /dev/null: $!";
    open STDERR, '>/tmp/log' or die "Can't write to /tmp/log: $!";
   setsid or die "Can't start a new session: $!";
 
   local $|=1;
   warn "started\n";
   # do something time-consuming
   sleep 1, warn "$_\n" for 1..20;
   warn "completed\n";
   # we want the process to be terminated, Apache::exit() won't
   # terminate the process
   CORE::exit(0);
   }
 
 Both processes are completely independent now. Watch the /tmp/log as 
 the forked process works, while you can restart the server.

Thank you very much, that works great and it looks much neater than what 
I had before.

BTW, what is the function of the chdir '/'? Is that needed or can I 
leave it out?


Kees




Re: sorting subroutine and global variables

2000-12-21 Thread Stas Bekman

On Thu, 21 Dec 2000, Alexander Farber (EED) wrote:
 Stas Bekman wrote:
  On Wed, 20 Dec 2000, Alexander Farber (EED) wrote:
  
   sub mysort
   {
   my $param = $query->param('sort') || 'MHO'; # XXX global $query,
  # not mod_perl clean?
   return $a->{$param} cmp $b->{$param};
   }
  
   This subroutine is called later as:
  
   for my $href (sort mysort values %$hohref)
   {
   ...
   }
  
  Your code is better written as:
  
  my $param = $query->param('sort') || 'MHO';
  for my $href (sort {$a->{$param} cmp $b->{$param}} values %$hohref) { }
 
 but isn't it the same? The anonymous sub {$a->{$param} cmp $b->{$param}}
 uses the "outside"-variable $param.

First, it's not the same waste-wise. Your customized sorting function is
called for every two items in the list to be sorted.

perl -le '@a = (1,5,8,2); print sort {print "waste"; $a <=> $b } @a'
waste
waste
waste
waste
waste
1258

Second, it's not the same closure-wise. It does matter whether you use an anon
sub or a named sub. See:
http://perl.apache.org/guide/perl.html#Understanding_Closures_the_Ea

  why wasting resources...
 
 Also, assuming I would like to have a separate sorting subroutine
 mysort, since it is mopre complicated as listed above... How would
 you pass some parameters to this subroutine? Via global vars?

I'd still use anonymous sub:

my $my_sort = sub {
  $a->{$param} cmp $b->{$param}
  # as much code as you want
};
for my $href (sort $my_sort values %$hohref) { }

It's recompiled on every run, therefore no sticky vars.
See the link above.

Of course the simplest solution is putting the code into the package and
use/require it from your script -- no problem as well. See the guide.
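
To make the closure point concrete, here is a small self-contained sketch;
the hash contents and the 'MHO' field are made-up illustration data, not from
the thread:

    #!/usr/bin/perl -w
    use strict;

    my %hoh = (
        a => { MHO => 'zebra' },
        b => { MHO => 'apple' },
    );

    my $param    = 'MHO';                                  # e.g. from $query->param('sort')
    my $by_param = sub { $a->{$param} cmp $b->{$param} };  # closes over $param

    # sort() accepts a scalar holding a code ref in place of a sub name
    for my $href (sort $by_param values %hoh) {
        print $href->{$param}, "\n";                       # prints: apple, zebra
    }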

_
Stas Bekman  JAm_pH --   Just Another mod_perl Hacker
http://stason.org/   mod_perl Guide  http://perl.apache.org/guide 
mailto:[EMAIL PROTECTED]   http://apachetoday.com http://logilune.com/
http://singlesheaven.com http://perl.apache.org http://perlmonth.com/  





Re: fork inherits socket connection

2000-12-21 Thread Stas Bekman

On Thu, 21 Dec 2000, Kees Vonk 7249 24549 wrote:

  Below is a solution to this problem:
  
  * fork the long running process from mod_perl
  * and be able to restart the server 
 o without killing this process 
 o without this process keeping the socket busy 
   and thus preventing the server restart
  
[snip]
defined (my $kid = fork) or die "Cannot fork: $!\n";
if ($kid) {
  print "Parent $$ has finished, kid's PID: $kid\n";
} else {
$r->cleanup_after_fork();
chdir '/' or die "Can't chdir to /: $!";
[snip]

 Thank you very much, that works great and it looks much neater than what 
 I had before.

Thanks go to Doug for cleanup_after_fork :) BTW, in the official release
it'll be called cleanup_for_exec(), so you'd better use that one from the
beginning.

 BTW. what is the function of the chdir '/', is that needed or can I 
 leave that out?

It's useful in case you decide to umount the partition later; see the perlipc
manpage. You can leave it out, of course.

_
Stas Bekman  JAm_pH --   Just Another mod_perl Hacker
http://stason.org/   mod_perl Guide  http://perl.apache.org/guide 
mailto:[EMAIL PROTECTED]   http://apachetoday.com http://logilune.com/
http://singlesheaven.com http://perl.apache.org http://perlmonth.com/  





Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2000-12-21 Thread Gunther Birznieks

I think you could actually make speedycgi even better for shared memory 
usage by creating a special directive which would indicate to speedycgi to 
preload a series of modules, and then to tell speedycgi to do forking of 
that "master" backend preloaded module process and hand control over to 
that forked process whenever you need to launch a new process.

Then speedy would potentially have the best of both worlds.
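
In generic Perl terms the idea is the standard preload-then-fork pattern; a
conceptual sketch only (there is no such SpeedyCGI directive, and the module
choice is illustrative):

    use strict;
    use CGI ();                 # heavy module compiled once, in the master

    for my $n (1 .. 3) {
        defined(my $pid = fork()) or die "fork failed: $!";
        if ($pid == 0) {
            # worker: in a real server it would sit here handling requests;
            # the compiled CGI.pm code is shared copy-on-write with the master
            print "worker $$ ready\n";
            exit 0;
        }
    }
    1 while wait() != -1;       # master reaps all workers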

Sorry I cross posted your thing. But I do think it is a problem of mod_perl 
also, and I am happily using speedycgi in production on at least one 
commercial site where mod_perl could not be installed so easily because of 
infrastructure issues.

I believe your mechanism of round-robining among MRU perl interpreters is 
actually also accomplished by ActiveState's PerlEx (based on 
Apache::Registry but using multithreaded IIS and a pool of interpreters). A 
method similar to this will be used in Apache 2.0, when Apache is 
multithreaded and can therefore control within program logic which Perl 
interpreter gets called from a pool of Perl interpreters.

It just isn't so feasible right now in Apache 1.0 to do this. And sometimes 
people forget that mod_perl came about primarily for writing handlers in 
Perl, not as an application environment, although it is very good for the 
latter as well.

I think SpeedyCGI needs more advocacy from the mod_perl group because put 
simply speedycgi is way easier to set up and use than mod_perl and will 
likely get more PHP people using Perl again. If more people rely on Perl 
for their fast websites, then you will get more people looking for more 
power, and by extension more people using mod_perl.

Whoops... here we go with the advocacy thing again.

Later,
Gunther

At 02:50 AM 12/21/2000 -0800, Sam Horrocks wrote:
   Gunther Birznieks wrote:
Sam just posted this to the speedycgi list just now.
   [...]
The underlying problem in mod_perl is that apache likes to spread out
web requests to as many httpd's, and therefore as many mod_perl 
 interpreters,
 as possible using an LRU selection process for picking httpd's.
  
   Hmmm... this doesn't sound right.  I've never looked at the code in
   Apache that does this selection, but I was under the impression that the
   choice of which process would handle each request was an OS dependent
   thing, based on some sort of mutex.
  
   Take a look at this: http://httpd.apache.org/docs/misc/perf-tuning.html
  
   Doesn't that appear to be saying that whichever process gets into the
   mutex first will get the new request?

  I would agree that whichever process gets into the mutex first will get
  the new request.  That's exactly the problem I'm describing.  What you
  are describing here is first-in, first-out behaviour which implies LRU
  behaviour.

  Processes 1, 2, 3 are running.  1 finishes and requests the mutex, then
  2 finishes and requests the mutex, then 3 finishes and requests the mutex.
  So when the next three requests come in, they are handled in the same order:
  1, then 2, then 3 - this is FIFO or LRU.  This is bad for performance.

   In my experience running
   development servers on Linux it always seemed as if the requests
   would continue going to the same process until a request came in when
   that process was already busy.

  No, they don't.  They go round-robin (or LRU as I say it).

  Try this simple test script:

  use CGI;
  my $cgi = CGI->new;
  print $cgi->header();
  print "mypid=$$\n";

  With mod_perl you constantly get different pids.  With mod_speedycgi you
  usually get the same pid.  This is a really good way to see the LRU/MRU
  difference that I'm talking about.

  Here's the problem - the mutex in apache is implemented using a lock
  on a file.  It's left up to the kernel to decide which process to give
  that lock to.

  Now, if you're writing a unix kernel and implementing this file locking 
 code,
  what implementation would you use?  Well, this is a general purpose thing -
  you have 100 or so processes all trying to acquire this file lock.  You 
 could
  give out the lock randomly or in some ordered fashion.  If I were writing
  the kernel I would give it out in a round-robin fashion (or the
  least-recently-used process as I referred to it before).  Why?  Because
  otherwise one of those processes may starve waiting for this lock - it may
  never get the lock unless you do it in a fair (round-robin) manner.

  The kernel doesn't know that all these httpd's are exactly the same.
  The kernel is implementing a general-purpose file-locking scheme and
  it doesn't know whether one process is more important than another.  If
  it's not fair about giving out the lock a very important process might
  starve.

  Take a look at fs/locks.c (I'm looking at linux 2.3.46).  In there is the
  comment:

  /* Insert waiter into blocker's block list.
   * We use a circular list so that processes can be easily woken up in
   * the order they blocked. The documentation 

Re: fork inherits socket connection

2000-12-21 Thread Stas Bekman

On Thu, 21 Dec 2000, Kees Vonk 7249 24549 wrote:

  BTW, in the official release it'll be called
  cleanup_for_exec(), so you better use this one from the
  beginning.
 
 Does that mean I don't have to patch SubProcess.xs? Is 
 cleanup_for_exec available by default?

You'll have it in the next version of Apache::SubProcess. I suggested
adjusting my patch to use cleanup_for_exec, so when you upgrade to the new
Apache::SubProcess things won't break.

_
Stas Bekman  JAm_pH --   Just Another mod_perl Hacker
http://stason.org/   mod_perl Guide  http://perl.apache.org/guide 
mailto:[EMAIL PROTECTED]   http://apachetoday.com http://logilune.com/
http://singlesheaven.com http://perl.apache.org http://perlmonth.com/  





Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2000-12-21 Thread Joe Schaefer


[ Sorry for accidentally spamming people on the
  list.  I was ticked off by this "benchmark",
  and accidentally forgot to clean up the reply 
  names.  I won't let it happen again :(  ]

Matt Sergeant [EMAIL PROTECTED] writes:

 On Thu, 21 Dec 2000, Ken Williams wrote:
 
  Well then, why doesn't somebody just make an Apache directive to control how
  hits are divvied out to the children?  Something like
 
NextChild most-recent
NextChild least-recent
NextChild (blah...)
 
  but more well-considered in name.  Not sure whether a config directive
  would do it, or whether it would have to be a startup command-line
  switch.  Or maybe a directive that can only happen in a startup config
  file, not a .htaccess file.
 
 Probably nobody wants to do it because Apache 2.0 fixes this "bug".
 

KeepAlive On

:)

All kidding aside, the problem with modperl is memory consumption, 
and to use modperl seriously, you currently have to code around 
that (preloading commonly used modules like CGI, or running it in 
a frontend/backend config similar to FastCGI.)  FastCGI and modperl
are fundamentally different technologies.  Both have the ability
to accelerate CGI scripts;  however, modperl can do quite a bit
more than that. 
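
A minimal sketch of the preloading Joe mentions, as typically done from a
mod_perl 1.x startup.pl; the module list is illustrative, not from this post:

    # startup.pl, pulled in via "PerlRequire /path/to/startup.pl"
    use strict;

    use CGI ();                # load heavy modules in the parent httpd,
    CGI->compile(':all');      # precompiling CGI's autoloaded methods too,
                               # so the compiled code is shared by all children
    use DBI ();                # other commonly used modules go here as well

    1;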

Claimed benchmarks that are designed to exploit this memory issue 
are quite silly, especially when the actual results are never 
revealed. It's overzealous advocacy or FUD, depending on which 
side of the fence you are sitting on.


-- 
Joe Schaefer




[ANNOUNCE]: mod_perl Pocket Reference

2000-12-21 Thread Andrew Ford

I have just heard from my editor at O'Reilly that the "mod_perl Pocket
Reference" has now been printed and should be in the bookshops
shortly.

I have updated the quick reference card too, but I am having technical
problems (or rather TeX-nichal problems) in formatting it.  Once those
are resolved I will put the new version up on refcards.com.

Andrew
-- 
Andrew Ford,  Director   Ford  Mason Ltd Tel: +44 1531 829900
[EMAIL PROTECTED]  South Wing, Compton HouseFax: +44 1531 829901
http://www.ford-mason.co.uk  Compton Green, Redmarley  Mobile: +44 7785 258278
http://www.refcards.com  Gloucester, GL19 3JB, UK   



Re: slight mod_perl problem

2000-12-21 Thread Vivek Khera

 "SB" == Stas Bekman [EMAIL PROTECTED] writes:

 startup.pl does not get repeated on a restart. However it will when
 started with ./apachectl start. I have never encountered this with Apache
 1.3.12 or 13.

SB I've just tested it -- it's not.

I just tested it also, and the startup script is run exactly once.

This is on: Apache/1.3.14 (Unix) mod_perl/1.24_02-dev

-- 
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D.Khera Communications, Inc.
Internet: [EMAIL PROTECTED]   Rockville, MD   +1-240-453-8497
AIM: vivekkhera Y!: vivek_khera   http://www.khera.org/~vivek/



Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2000-12-21 Thread Joe Schaefer

Gunther Birznieks [EMAIL PROTECTED] writes:

 But instead he crafted an experiment to show that in this particular case 
 (and some applications do satisfy this case) SpeedyCGI has a particular 
 benefit.

And what do I have to do to repeat it? Unlearn everything in Stas'
guide?

 
 This is why people use different tools for different jobs -- because 
 architecturally they are designed for different things. SpeedyCGI is 
 designed in a different way from mod_perl. What I believe Sam is saying is 
 that there is a particular real-world scenario where SpeedyCGI likely has 
 better performance benefits to mod_perl.

Sure, and that's why some people use it.  But to say

"Speedycgi scales better than mod_perl with  scripts that contain un-shared memory"

is to me quite similar to saying

"SUV's are better than cars since they're safer to drive drunk in."

 
 Discouraging the posting of experimental information like this is where the 
 FUD will lie. This isn't an advertisement in ComputerWorld by Microsoft or 
 Oracle, it's a posting on a mailing list. Open for discussion.

Maybe I'm wrong about this, but I didn't see any mention of the 
apparatus used in his experiment.  I only saw what you posted,
and your post had only anecdotal remarks of results without
detailing any config info.

I'm all for free and open discussions because they can
point to interesting new ideas.  However, some attempt at 
full disclosure (comments on the config used are at least as 
important as anecdotal remarks about the results) is 
necessary so objective opinions can be formed.

-- 
Joe Schaefer



Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2000-12-21 Thread Gunther Birznieks

At 09:53 AM 12/21/00 -0500, Joe Schaefer wrote:

[ Sorry for accidentally spamming people on the
   list.  I was ticked off by this "benchmark",
   and accidentally forgot to clean up the reply
   names.  I won't let it happen again :(  ]

Not sure what you mean here. Some people like the duplicate reply names 
especially as the mod_perl list is still a bit slow on responding. I know I 
prefer to see replies to my messages ASAP and they tend to come faster if I 
am CCed on the list.

All kidding aside, the problem with modperl is memory consumption,
and to use modperl seriously, you currently have to code around
that (preloading commonly used modules like CGI, or running it in
a frontend/backend config similar to FastCGI.)  FastCGI and modperl
are fundamentally different technologies.  Both have the ability
to accelerate CGI scripts;  however, modperl can do quite a bit
more than that.

Claimed benchmarks that are designed to exploit this memory issue
are quite silly, especially when the actual results are never
revealed. It's overzealous advocacy or FUD, depending on which
side of the fence you are sitting on.

I think I get your point on the first paragraph. But the 2nd paragraph is 
odd. Are you classifying the original post as being overzealous advocacy or 
FUD? I don't think I would classify it as such.

I could see it bordering on FUD if there was one benchmark which Sam 
produced and he just posted "SpeedyCGI is faster than mod_perl" without 
providing any details.

But instead he crafted an experiment to show that in this particular case 
(and some applications do satisfy this case) SpeedyCGI has a particular 
benefit.

This is why people use different tools for different jobs -- because 
architecturally they are designed for different things. SpeedyCGI is 
designed in a different way from mod_perl. What I believe Sam is saying is 
that there is a particular real-world scenario where SpeedyCGI likely has 
better performance benefits to mod_perl.

Discouraging the posting of experimental information like this is where the 
FUD will lie. This isn't an advertisement in ComputerWorld by Microsoft or 
Oracle, it's a posting on a mailing list. Open for discussion.

   




Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2000-12-21 Thread Vivek Khera

 "KW" == Ken Williams [EMAIL PROTECTED] writes:

KW Well then, why doesn't somebody just make an Apache directive to
KW control how hits are divvied out to the children?  Something like

If memory serves, mod_perl 2.0 uses a most-recently-used strategy
to pull perl interpreters from the thread pool.  It sounds to me like
with apache 2.0 in thread-mode and mod_perl 2.0 you get the same
effect of using the proxy front end that we currently need.

-- 
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D.Khera Communications, Inc.
Internet: [EMAIL PROTECTED]   Rockville, MD   +1-240-453-8497
AIM: vivekkhera Y!: vivek_khera   http://www.khera.org/~vivek/



Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2000-12-21 Thread Keith G. Murphy

Perrin Harkins wrote:

[cut]
 
 Doesn't that appear to be saying that whichever process gets into the
 mutex first will get the new request?  In my experience running
 development servers on Linux it always seemed as if the the requests
 would continue going to the same process until a request came in when
 that process was already busy.
 
Is it possible that the persistent connections utilized by HTTP 1.1 just
made it look that way?  That would happen if the clients were MSIE.

Even recent Netscape browsers only use 1.0, IIRC.

(I was recently perplexed by differing performance between MSIE and NS
browsers hitting my system until I realized this.)



Re: POST with PERL

2000-12-21 Thread apache

If you just visit the script directly rather than posting to it, do you see the script 
source code? If so, it sounds like you have a missing / invalid ScriptAlias entry. Just 
add something like:

ScriptAlias /cgi-bin/ "/home/httpd/html/cgi-bin/"

... on the assumption that '/cgi-bin/' is the URI of the directory in which your 
script can be accessed, and that /home/httpd/html/cgi-bin/ is the path on the 
filesystem to that same directory. Add the above entry either in the global 
configuration (if the site is using the default settings) or in a VirtualHost 
directive (if that's how the site's being accessed).

Another thing to point out is that it's impossible (as far as I know) to have static 
files (such as web pages, images, etc.) and scripts in the same directory. The reason 
for this is that Apache will never serve a page within a ScriptAlias directory as 
above as-is to a browser - the files can only be executed (this is for security 
reasons as I understand it - so someone can't download a copy of your script, only 
execute it on the server).

Hope this helps, and sorry if I got the wrong end of the stick.

Andy.
-- 
Beauty is in the eye of the beholder... Oh, no - it's just an eyelash.

[EMAIL PROTECTED] wrote:

 Hi!

 I have a little problem. I wrote a perl-script to manage the guestbook-like
 section of my page. The script is working, but only from the shell. When I try to
 post data through an html form I get an error saying that the post method is
 not supported by this document. It's in a directory with options ExecCGI and
 FollowSymLinks. There is also a script handler for .cgi. What's the
 matter?
 Thanks in advance

 IronHand   mailto:[EMAIL PROTECTED]



Apache::ASP

2000-12-21 Thread Loay Oweis

Hello

I am trying to find a feasible solution for configuring an apache web
server to serve asp, ssi and jsp.

I am leaning towards RedHat due to my experience with it. I have tried
the latest RH7.0, which comes with mod_perl but not perl-apache-asp. I
have tried an Apache::ASP CPAN install with no luck, simply getting the
Internal Server Error page. I'm thinking of going to RH6.2 and trying the
rpm install, as it seems it may be easier to install. Please recommend
what is the best way to go. I would like to continue with this RH7.0 if
possible; however, RH6.2 would do.

Thank you

Loay Oweis




Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2000-12-21 Thread Sam Horrocks

  Gunther Birznieks wrote:
   Sam just posted this to the speedycgi list just now.
  [...]
   The underlying problem in mod_perl is that apache likes to spread out
   web requests to as many httpd's, and therefore as many mod_perl interpreters,
   as possible using an LRU selection process for picking httpd's.
  
  Hmmm... this doesn't sound right.  I've never looked at the code in
  Apache that does this selection, but I was under the impression that the
  choice of which process would handle each request was an OS dependent
  thing, based on some sort of mutex.
  
  Take a look at this: http://httpd.apache.org/docs/misc/perf-tuning.html
  
  Doesn't that appear to be saying that whichever process gets into the
  mutex first will get the new request?

 I would agree that whichever process gets into the mutex first will get
 the new request.  That's exactly the problem I'm describing.  What you
 are describing here is first-in, first-out behaviour which implies LRU
 behaviour.

 Processes 1, 2, 3 are running.  1 finishes and requests the mutex, then
 2 finishes and requests the mutex, then 3 finishes and requests the mutex.
 So when the next three requests come in, they are handled in the same order:
 1, then 2, then 3 - this is FIFO or LRU.  This is bad for performance.

  In my experience running
  development servers on Linux it always seemed as if the requests
  would continue going to the same process until a request came in when
  that process was already busy.

 No, they don't.  They go round-robin (or LRU as I say it).

 Try this simple test script:

 use CGI;
 my $cgi = CGI->new;
 print $cgi->header();
 print "mypid=$$\n";

 With mod_perl you constantly get different pids.  With mod_speedycgi you
 usually get the same pid.  This is a really good way to see the LRU/MRU
 difference that I'm talking about.

 Here's the problem - the mutex in apache is implemented using a lock
 on a file.  It's left up to the kernel to decide which process to give
 that lock to.

 Now, if you're writing a unix kernel and implementing this file locking code,
 what implementation would you use?  Well, this is a general purpose thing -
 you have 100 or so processes all trying to acquire this file lock.  You could
 give out the lock randomly or in some ordered fashion.  If I were writing
 the kernel I would give it out in a round-robin fashion (or the
 least-recently-used process as I referred to it before).  Why?  Because
 otherwise one of those processes may starve waiting for this lock - it may
 never get the lock unless you do it in a fair (round-robin) manner.

 The kernel doesn't know that all these httpd's are exactly the same.
 The kernel is implementing a general-purpose file-locking scheme and
 it doesn't know whether one process is more important than another.  If
 it's not fair about giving out the lock a very important process might
 starve.

 Take a look at fs/locks.c (I'm looking at linux 2.3.46).  In there is the
 comment:

 /* Insert waiter into blocker's block list.
  * We use a circular list so that processes can be easily woken up in
  * the order they blocked. The documentation doesn't require this but
  * it seems like the reasonable thing to do.
  */
 static void locks_insert_block(struct file_lock *blocker, struct file_lock *waiter)

  As I understand it, the implementation of "wake-one" scheduling in the
  2.4 Linux kernel may affect this as well.  It may then be possible to
  skip the mutex and use unserialized accept for single socket servers,
  which will definitely hand process selection over to the kernel.

 If the kernel implemented the queueing for multiple accepts using a LIFO
 instead of a FIFO and apache used this method instead of file locks,
 then that would probably solve it.

 Just found this on the net on this subject:
http://www.uwsg.iu.edu/hypermail/linux/kernel/9704.0/0455.html
http://www.uwsg.iu.edu/hypermail/linux/kernel/9704.0/0453.html

   The problem is that at a high concurrency level, mod_perl is using lots
   and lots of different perl-interpreters to handle the requests, each
   with its own un-shared memory.  It's doing this due to its LRU design.
   But with SpeedyCGI's MRU design, only a few speedy_backends are being used
   because as much as possible it tries to use the same interpreter over and
   over and not spread out the requests to lots of different interpreters.
   Mod_perl is using lots of perl-interpreters, while speedycgi is only using
   a few.  mod_perl is requiring that lots of interpreters be in memory in
   order to handle the requests, whereas speedy only requires a small number
   of interpreters to be in memory.
  
  This test - building up unshared memory in each process - is somewhat
  suspect since in most setups I've seen, there is a very significant
  amount of memory being shared between mod_perl processes.

 My message and testing concerns un-shared memory only.  If all of your memory
 is shared, then there shouldn't be a 

Apache::ASP permissions problem?

2000-12-21 Thread Michael Hurwitch

I have been trying to install Apache::ASP 2.07 on Solaris 2.6 with Perl
5.6.0. When I try to load an ASP page, I get the following errors:


Errors Output
  1.. Can't open /tmp/.state/3d/3d8ae603196.lock: ,
/raid1/perl5.6.0/lib/site_perl/5.6.0/Apache/ASP.pm line 4831
Debug Output
  1.. Can't open /tmp/.state/3d/3d8ae603196.lock: ,
/raid1/perl5.6.0/lib/site_perl/5.6.0/Apache/ASP.pm line 4831
ASP to Perl Script

Here is the directory listing:
zeus:mhurwitch/tmp/.state/3d ls -l
total 0
-rw-r-       1 nobody   nobody   0 Dec 21 10:07 3d8ae603196.dir
-rw-rw-rw-   1 nobody   nobody   0 Dec 21 10:07 3d8ae603196.lock
-rw-r-       1 nobody   nobody   0 Dec 21 10:07 3d8ae603196.pag

Here is the ASP page:


<HEAD>

<TITLE>Mike's ASP Page</TITLE>

<%
$Session->{"Test"} = "This is a test";
%>

</HEAD>

<BODY>

<%
print "Hello!\n";
%>

</BODY>


Httpd.conf is pretty simple :

<Files *.asp>
SetHandler perl-script
PerlHandler Apache::ASP
PerlSetVar Global /tmp
PerlSetVar Debug 2
</Files>


If we use PerlSetVar NoState 1, the page works fine.

Apache runs as user 'nobody', and I think this is a permissions problem, but
/tmp/.state seems to be fine.

I'd appreciate any help you can offer.

Thanks,

Michael Hurwitch

[EMAIL PROTECTED]




Transferring Headers... w/ multiple 'Set-Cookie'

2000-12-21 Thread Jason Leidigh




Hi,

I have a problem with a "proxy" I'm writing. 
The proxy must pass the headers from the fetched response on to the response that Apache 
will return to the client. I am using the following method as read from 
The Eagle Book (Writing Apache Modules):

 
$headers->scan(sub 
{ 
$r->header_out(@_); 
});

Where $headers is an HTTP::Response object... the 
problem is that there are many Set-Cookie headers in $headers, but mod_perl 
seems to use a tied hash to link to the request's header table, and so each Set-Cookie 
replaces the last because the hash keys are seen as equal. How do I get 
around this? I tried:

$r->headers_out->add('Set-Cookie' => $cookieString);

Inside a loop which modifies $cookieString each 
time through, but it had the same effect. In the end Apache sends you a 
single cookie which is equivalent to the last one set.
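
For reference, a sketch of the Eagle-book style header pass-through using
Apache::Table's add(), which is documented to append rather than replace;
whether it resolves Jason's particular case isn't settled in this thread, and
the package name below is hypothetical:

    package My::PassHeaders;             # hypothetical helper, not from the thread
    use strict;

    sub pass_headers {
        my ($r, $headers) = @_;          # $r: Apache request; $headers: upstream response headers

        $headers->scan(sub {
            my ($field, $value) = @_;
            if (lc($field) eq 'set-cookie') {
                # add() appends to the table, so every Set-Cookie survives
                $r->headers_out->add($field => $value);
            }
            else {
                $r->header_out($field => $value);   # single-valued headers
            }
        });
    }

    1;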

Thanks in advance..

Jason

PS I am still waiting for a good suggestion on how 
to protect code:

I have a mod_perl module which I would like to 
protect. The code is very "private" and I would like to have it 
exist only as perl byte code... which can be used each time the server may 
be restarted... is this possible? How? Thanks in advance and to 
those who responded to my last question "Help me beat 
Java..."

The goal is not to prevent decompiling, which seemed 
to be one person's suggestion using obfuscation... but simply to make it 
impossible to open and read the .pm. Make it a precompiled Perl Byte Code which 
Apache can insert into its parent and child processes without expecting a raw 
perl script which requires compiling...
Jason Z. Leidigh
Project Leader
UOL Internacional
Av. Libertador 1068 - Piso 3
Capital Federal - ( C1112ABN )
Buenos Aires - Argentina
T.E: 0054-11-5777-2446
Fax: 0054-11-5777-2402
www.uol.com.ar
[EMAIL PROTECTED]


Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2000-12-21 Thread Stas Bekman

Folks, your discussion is not short of wrong statements that can be easily
proved, but I don't find it useful. Instead please read:

http://perl.apache.org/~dougm/modperl_2.0.html#new

To quote the most relevant part:

"With 2.0, mod_perl has much better control over which PerlInterpreters
are used for incoming requests. The interpreters are stored in two linked
lists, one for available interpreters, one for busy. When needed to handle
a request, one is taken from the head of the available list and put back
into the head of the list when done. This means if you have, say, 10
interpreters configured to be cloned at startup time, but no more than 5
are ever used concurrently, those 5 continue to reuse Perls allocations,
while the other 5 remain much smaller, but ready to go if the need
arises."

Of course you should read the rest.

So the moment mod_perl 2.0 hits the shelves, this possible benefit
of speedycgi over mod_perl becomes irrelevant. I think this more or less
summarizes this thread.

And Gunther, nobody is trying to stop people from expressing their opinions here,
it's just that different people express their feelings in different ways,
that's the way the open list goes... :) so please keep on forwarding
things that you find interesting. I don't think anybody here is relieved
when you are busy and not posting, as you happen to say -- I believe that
your posts are very interesting and you shouldn't discourage yourself from
keeping on doing that. Those who don't like your posts don't have to read
them.

Hope you are all having fun and getting ready for the holidays :) I'm
going to buy my ski equipment soonish!

_
Stas Bekman  JAm_pH --   Just Another mod_perl Hacker
http://stason.org/   mod_perl Guide  http://perl.apache.org/guide 
mailto:[EMAIL PROTECTED]   http://apachetoday.com http://logilune.com/
http://singlesheaven.com http://perl.apache.org http://perlmonth.com/  





Re: Apache::ASP

2000-12-21 Thread Joshua Chamas

Loay Oweis wrote:
 
 Hello
 
 I am trying to find a feasible solution for configuring an apache web
 server to serve asp, ssi and jsp.
 
 I am leaning towards RedHat due to my experience with it. I have tried
 the latest RH7.0, which comes with mod_perl but not perl-apache-asp. I
 have tried an Apache::ASP CPAN install with no luck, simply getting the
 Internal Server Error page. I'm thinking of going to RH6.2 and trying the

RH 7.0 should be fine (though I'm using 6.2 personally). What's in your 
httpd error_log? The problem is likely easy to fix.

--Josh

_
Joshua Chamas   Chamas Enterprises Inc.
NodeWorks  free web link monitoring   Huntington Beach, CA  USA 
http://www.nodeworks.com1-714-625-4051



ANNOUNCE: TPC5: mod_perl track: Call For Papers

2000-12-21 Thread Stas Bekman

Ok, the CFP is officially out, let those proposals come in:

All the details can be found here:
http://perl.apache.org/conferences/tpc5-us/cfp.html

Notice that while our cfp resembles the general CFP
http://conferences.oreilly.com/perl5/ it's not the same, so don't confuse
the two. Ours has the magic mod_perl token on the top :)

I think I've summarized the last CFP thread into a pretty long wishlist of
topics. If something is missing and you think it's relevant to mod_perl
and interesting to hear about, submit it as well.

Thank you for your support! Looking forward to meeting you all in person
at ApacheCon and TPC5.

_
Stas Bekman  JAm_pH --   Just Another mod_perl Hacker
http://stason.org/   mod_perl Guide  http://perl.apache.org/guide 
mailto:[EMAIL PROTECTED]   http://apachetoday.com http://logilune.com/
http://singlesheaven.com http://perl.apache.org http://perlmonth.com/  





Re: Apache::ASP permissions problem?

2000-12-21 Thread Joshua Chamas

Michael Hurwitch wrote:
 
 I have been trying to install Apache::ASP 2.07 on Solaris 2.6 with Perl
 5.6.0. When I try to load an ASP page, I get the following errors:
 
 Errors Output
   1.. Can't open /tmp/.state/3d/3d8ae603196.lock: ,
 /raid1/perl5.6.0/lib/site_perl/5.6.0/Apache/ASP.pm line 4831
 Debug Output
   1.. Can't open /tmp/.state/3d/3d8ae603196.lock: ,
 /raid1/perl5.6.0/lib/site_perl/5.6.0/Apache/ASP.pm line 4831
 ASP to Perl Script
 
 Here is the directory listing:
 zeus:mhurwitch/tmp/.state/3d ls -l
 total 0
 -rw-r-       1 nobody   nobody   0 Dec 21 10:07 3d8ae603196.dir
 -rw-rw-rw-   1 nobody   nobody   0 Dec 21 10:07 3d8ae603196.lock
 -rw-r-       1 nobody   nobody   0 Dec 21 10:07 3d8ae603196.pag
 

This looks like a bug in the way Apache::ASP sets the file
permissions.  Before you do anything, I would like to see
the ls -allg for .state and .state/3d ... but to 
fix your problem temporarily so you can move on to 
development, you can "chmod -R 0755 .state"

-- Josh

_
Joshua Chamas   Chamas Enterprises Inc.
NodeWorks  free web link monitoring   Huntington Beach, CA  USA 
http://www.nodeworks.com1-714-625-4051



Performance measures of a perl script !!!

2000-12-21 Thread Edmar Edilton da Silva


 Hi all,

 I ran some performance measures on two perl scripts under mod_perl: the
first script accesses an Oracle database using the DBD::Oracle module and
the second accesses an MS SQL Server database using the DBD::Sybase module.
The measured response times were very different in the two cases; the time
for the Oracle database was approximately three times the time for the MS
SQL Server database. Why all this difference between the measured values?
I also understood that an Oracle connection needs much more machine
resources than an MS SQL Server connection (neither database server is
installed on the same machine as the Web server). Is that correct? Please,
is there some place where I can find docs about it? Any help will be very
appreciated. Thanks...


 Edmar Edilton da Silva
 Bacharel em Ciência da Computação - UFV
 Mestrando em Ciência da Computação - UNICAMP




Re: Performance measures of a perl script !!!

2000-12-21 Thread Stas Bekman

On Thu, 21 Dec 2000, Edmar Edilton da Silva wrote:

 Hi all,
 
 I ran some performance measures on two perl scripts under
 mod_perl, the first script access a Oracle database using the
 DBD::Oracle
 module and the second script access a MS SQL Server database using the
 DBD::Sybase module. The measured response times was very different at
 the
 two cases, the time for the Oracle database was approximately three
 times
 the time for MS SQL Server databse. Why all this difference between
 the measured values ? I also understood that an Oracle connection needs
 much more resource of machine than an MS SQL Server connection (both
 database servers are not installed on the same  machine that the Web
 server). Is it correctly? Please, is there some place where I can find
 docs about it? Any help will be very appreciated. Thanks...

Yes, see the guide.

Oracle is slow to connect -- use Apache::DBI and connect_on_init to solve
this problem.
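
A minimal startup.pl sketch of what this suggests; the DSN and credentials
below are placeholders, not values from the thread:

    # startup.pl
    use strict;
    use Apache::DBI ();        # load before DBI and the DBD drivers

    # Open the slow Oracle connection once, when each child starts,
    # instead of paying the connect cost on the first request it serves.
    Apache::DBI->connect_on_init(
        'dbi:Oracle:ORCL',     # placeholder DSN
        'scott', 'tiger',      # placeholder credentials
        { RaiseError => 1, AutoCommit => 1 },
    );

    1;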


_
Stas Bekman  JAm_pH --   Just Another mod_perl Hacker
http://stason.org/   mod_perl Guide  http://perl.apache.org/guide 
mailto:[EMAIL PROTECTED]   http://apachetoday.com http://logilune.com/
http://singlesheaven.com http://perl.apache.org http://perlmonth.com/  





Re: Performance measures of a perl script !!!

2000-12-21 Thread Matt Sergeant

On Thu, 21 Dec 2000, Edmar Edilton da Silva wrote:

 Hi all,

 I ran some performance measures on two perl scripts under
 mod_perl, the first script access a Oracle database using the
 DBD::Oracle
 module and the second script access a MS SQL Server database using the
 DBD::Sybase module. The measured response times was very different at
 the
 two cases, the time for the Oracle database was approximately three
 times
 the time for MS SQL Server databse. Why all this difference between
 the measured values ? I also understood that an Oracle connection needs
 much more resource of machine than an MS SQL Server connection (both
 database servers are not installed on the same  machine that the Web
 server). Is it correctly? Please, is there some place where I can find
 docs about it? Any help will be very appreciated. Thanks...

Dear Fast Car magazine,

My Ferrari uses more petrol than my McLaren F1. Why is this?

Answer: They use different engines.

Seriously, different DB's take different amounts of time to connect
because they are written differently. If you want no connection time, use
Apache::DBI to cache the connection.

-- 
Matt/

/||** Director and CTO **
   //||**  AxKit.com Ltd   **  ** XML Application Serving **
  // ||** http://axkit.org **  ** XSLT, XPathScript, XSP  **
 // \\| // ** Personal Web Site: http://sergeant.org/ **
 \\//
 //\\
//  \\




Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2000-12-21 Thread Sam Horrocks

  Folks, your discussion is not short of wrong statements that can be easily
  proved, but I don't find it useful.

 I don't follow.  Are you saying that my conclusions are wrong, but
 you don't want to bother explaining why?
 
 Would you agree with the following statement?

Under apache-1, speedycgi scales better than mod_perl with
scripts that contain un-shared memory 



Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2000-12-21 Thread Gunther Birznieks

At 09:16 PM 12/21/00 +0100, Stas Bekman wrote:
[much removed]

So the moment mod_perl 2.0 hits the shelves, this possible benefit
of speedycgi over mod_perl becomes irrelevant. I think this more or less
summarizes this thread.
I think you are right about the summarization. However, I also think it's 
unfair for people here to pin too many hopes on mod_perl 2.0.

First Apache 2.0 has to be fully released. It's still in Alpha! Then, 
mod_perl 2.0 has to be released. I haven't seen any realistic timelines 
that indicate to me that these will be released and stable for production 
use in only a few months time. And Apache 2.0 has been worked on for years. 
I first saw a talk on Apache 2.0's architecture at the first ApacheCon 2 
years ago! To be fair, back then they were using Mozilla's NPR which I 
think they learned from, threw away, and rewrote from scratch after all (to 
become APR). But still, the point is that it's been a long time and 
probably will be a while yet.

Who in their right mind would pin their business or production database on 
the hope that mod_perl 2.0 comes out in a few months? I don't think anyone 
would. Sam has a solution that works now, and is open source and provides 
some benefits for web applications that mod_perl and apache is not as 
efficient at for some types of applications.

As people interested in Perl, we should be embracing these alternatives not 
telling people to wait for new versions of software that may not come out soon.

If there is a problem with mod_perl advocacy, it's that it is precisely too 
mod_perl centric. Mod_perl is a niche crowd which has a high learning 
curve. I think the technology mod_perl offers is great, but as has been 
said before, the problem is that people are going to PHP, away from Perl. If 
more people had easier solutions to implement their simple apps in Perl yet 
be as fast as PHP, fewer people would go to PHP.

Those Perl people would eventually discover mod_perl's power as they 
require it, and then they would take the step to "upgrade" to the power of 
handlers away from the "missing link".

But without that "missing link" to make things easy for people to move from 
PHP to Perl, then Perl will miss something very crucial to maintaining its 
standing as the "defacto language for Web applications".

3 years ago, I think it would be accurate to say Perl apps drove 95% of the 
dynamic web. Sadly, I believe (anecdotally) that this is no longer true.

SpeedyCGI is not "THE" missing link, but I see it as a crucial part of this 
link between newbies and mod_perl. This is why I believe that mod_perl and 
its documentation should have a section (even if tiny) on this stuff, so 
that people will know that if they find mod_perl too hard, that there are 
alternatives that are less powerful, yet provide at least enough power to 
beat PHP.

I also see SpeedyCGI as being on the way to being more ISP-friendly already 
for hosting casual users of Perl than mod_perl is. Different apps use a 
different backend engine by default. So the problem with virtual hosts 
screwing each other over by accident is gone for the casual user. There are 
still some needs for improvement (eg memory is likely still an issue with 
different backends)...

Anyway, these are just my feelings. I really shouldn't be spending time on 
posting this as I have some deadlines to meet. But I felt they were still 
important points to make that I think some people may be potentially 
missing here. :)





Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2000-12-21 Thread Stas Bekman

On Thu, 21 Dec 2000, Sam Horrocks wrote:

   Folks, your discussion is not short of wrong statements that can be easily
   proved, but I don't find it useful.
 
  I don't follow.  Are you saying that my conclusions are wrong, but
  you don't want to bother explaining why?
  
  Would you agree with the following statement?
 
 Under apache-1, speedycgi scales better than mod_perl with
 scripts that contain un-shared memory 

I don't know. It's easy to give a simple example and claim to be better.
So far, whoever has tried to show by benchmarks that he is better has most
often been proved wrong, since the technologies in question have so many
features that I believe no benchmark will prove any of them absolutely
superior or inferior. Therefore I said that trying to claim that your grass
is greener is doomed to fail if someone has time on his hands to prove you
wrong. Well, we don't have this time.

Therefore I'm not trying to prove you wrong or right. Gunther's point in
the original forward was to show things that mod_perl may need to adopt to
make it better. Doug already explained in his paper that the MRU approach
has already been implemented in mod_perl-2.0. You can read it in the
link that I've attached and the quote that I've included.

So your conclusions about MRU are correct and we have it implemented
already (well very soon now :). I apologize if my original reply was
misleading.

I'm not saying that benchmarks are bad. What I'm saying is that it's
very hard to benchmark things which are different. You benefit the most
from benchmarking when you take the initial code/product, benchmark
it, then try to improve the code and benchmark again to see whether it
gave you any improvement. That's the area where benchmarks rule and
they are fair, because you test the same thing. Well, you can read more
of my rambling about benchmarks in the guide.

So if you find some cool features in other technologies that mod_perl
might adopt and benefit from, don't hesitate to tell the rest of the gang.



Something that I'd like to comment on:

I find it a bad practice to quote one sentence from a person's post and
follow up on it. Someone from the list has sent me this email (SB == me):

SB I don't find it useful

and follow up. Why not use a single letter:

SB I

and follow up? It's so much easier to flame on things taken out of their
context.

It's not the first time people have done this to each other here on the list; I
think I did too. So please be more careful when taking things out of
context. Thanks a lot, folks!

Cheers...

_
Stas Bekman  JAm_pH --   Just Another mod_perl Hacker
http://stason.org/   mod_perl Guide  http://perl.apache.org/guide 
mailto:[EMAIL PROTECTED]   http://apachetoday.com http://logilune.com/
http://singlesheaven.com http://perl.apache.org http://perlmonth.com/  





Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2000-12-21 Thread Sam Horrocks

I've put your suggestion on the todo list.  It certainly wouldn't hurt to
have that feature, though I think memory sharing becomes a much much smaller
issue once you switch to MRU scheduling.

At the moment I think SpeedyCGI has more pressing needs though - for
example multiple scripts in a single interpreter, and an NT port.


  I think you could actually make speedycgi even better for shared memory 
  usage by creating a special directive which would indicate to speedycgi to 
  preload a series of modules, and then to tell speedycgi to do forking of 
  that "master" backend preloaded module process and hand control over to 
  that forked process whenever you need to launch a new process.
  
  Then speedy would potentially have the best of both worlds.
  
  Sorry I cross posted your thing. But I do think it is a problem of mod_perl 
  also, and I am happily using speedycgi in production on at least one 
  commercial site where mod_perl could not be installed so easily because of 
  infrastructure issues.
  
  I believe your mechanism of round robining among MRU perl interpreters is 
  actually also accomplished by ActiveState's PerlEx (based on 
  Apache::Registry but using multithreaded IIS and pool of Interpreters). A 
  method similar to this will be used in Apache 2.0 when Apache is 
  multithreaded and therefore can control within program logic which Perl 
  interpreter gets called from a pool of Perl interpreters.
  
  It just isn't so feasible right now in Apache 1.0 to do this. And sometimes 
  people forget that mod_perl came about primarily for writing handlers in 
  Perl not as an application environment although it is very good for the 
  latter as well.
  
  I think SpeedyCGI needs more advocacy from the mod_perl group because put 
  simply speedycgi is way easier to set up and use than mod_perl and will 
  likely get more PHP people using Perl again. If more people rely on Perl 
  for their fast websites, then you will get more people looking for more 
  power, and by extension more people using mod_perl.
  
  Whoops... here we go with the advocacy thing again.
  
  Later,
  Gunther
  
  At 02:50 AM 12/21/2000 -0800, Sam Horrocks wrote:
 Gunther Birznieks wrote:
  Sam just posted this to the speedycgi list just now.
 [...]
  The underlying problem in mod_perl is that apache likes to spread out
  web requests to as many httpd's, and therefore as many mod_perl 
   interpreters,
  as possible using an LRU selection processes for picking httpd's.

 Hmmm... this doesn't sound right.  I've never looked at the code in
 Apache that does this selection, but I was under the impression that the
 choice of which process would handle each request was an OS dependent
 thing, based on some sort of mutex.

 Take a look at this: http://httpd.apache.org/docs/misc/perf-tuning.html

 Doesn't that appear to be saying that whichever process gets into the
 mutex first will get the new request?
  
I would agree that whichever process gets into the mutex first will get
the new request.  That's exactly the problem I'm describing.  What you
are describing here is first-in, first-out behaviour which implies LRU
behaviour.
  
Processes 1, 2, 3 are running.  1 finishes and requests the mutex, then
2 finishes and requests the mutex, then 3 finishes and requests the mutex.
So when the next three requests come in, they are handled in the same order:
1, then 2, then 3 - this is FIFO or LRU.  This is bad for performance.
  
 In my experience running
 development servers on Linux it always seemed as if the requests
 would continue going to the same process until a request came in when
 that process was already busy.
  
No, they don't.  They go round-robin (or LRU as I say it).
  
Try this simple test script:
  
use CGI;
my $cgi = CGI->new;
print $cgi->header();
print "mypid=$$\n";
  
With mod_perl you constantly get different pids.  With mod_speedycgi you
usually get the same pid.  This is a really good way to see the LRU/MRU
difference that I'm talking about.
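
One hedged way to drive that test from the command line is a small LWP loop
(this is not from the original post; the URL and request count are
placeholders):

use strict;
use LWP::Simple qw(get);

# Hit the pid-printing script a number of times and show which child
# answered each request.  The URL below is hypothetical.
my $url = 'http://localhost/perl/mypid.pl';
for my $i (1 .. 10) {
    my $body = get($url);
    defined $body or die "request $i failed\n";
    print "request $i: $body";
}

Under LRU-style selection the pids should rotate through the whole pool;
under MRU the same pid should keep coming back as long as requests don't
overlap.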
  
Here's the problem - the mutex in apache is implemented using a lock
on a file.  It's left up to the kernel to decide which process to give
that lock to.
  
Now, if you're writing a unix kernel and implementing this file locking 
   code,
what implementation would you use?  Well, this is a general purpose thing -
you have 100 or so processes all trying to acquire this file lock.  You 
   could
give out the lock randomly or in some ordered fashion.  If I were writing
the kernel I would give it out in a round-robin fashion (or the
least-recently-used process as I referred to it before).  Why?  Because
otherwise one of those processes may starve waiting for this lock - it may
never get the lock unless you do it in a fair (round-robin) manner.
  
The kernel doesn't know 

Proposals for ApacheCon 2001 in; help us choose! (fwd)

2000-12-21 Thread Stas Bekman


Well, this is new. You choose what sessions at ApacheCon you want. 

I don't think it's a fair approach by the ApacheCon committee, as by applying
this approach they are going to make the big players even bigger and kill
the emerging technologies, which will definitely not get enough votes and
thus will not make it in :( I'm not on the committee, so I cannot really
influence it, but I'll try anyway.

So if you want to get as many mod_perl sessions as possible your voting is
most important.

The original announce is attached.

-- Forwarded message --
Date: Thu, 21 Dec 2000 14:21:33 -0500
From: Rodent of Unusual Size [EMAIL PROTECTED]
Reply-To: ApacheCon Planning Ctte [EMAIL PROTECTED]
To: Apache Announcements [EMAIL PROTECTED],
 ApacheCon Announcements [EMAIL PROTECTED],
 Apache Developers [EMAIL PROTECTED], ASF members [EMAIL PROTECTED],
 [EMAIL PROTECTED], [EMAIL PROTECTED], [EMAIL PROTECTED],
 [EMAIL PROTECTED], [EMAIL PROTECTED]
Newsgroups: comp.infosystems.www.servers.unix,
comp.infosystems.www.servers.ms-windows,
comp.infosystems.www.servers.misc, de.comm.infosystems.www.servers
Subject: Proposals for ApacheCon 2001 in; help us choose!

-----BEGIN PGP SIGNED MESSAGE-----

Greetings!

If you're not interested in Apache and the ApacheCon conferences,
read no further.

The deadline for session proposals for the ApacheCon 2001 event
in Santa Clara in April 2001 has passed, and we have received more
than 150 submissions.  That's a lot, and we could sure use your
help figuring out which ones to schedule (since we'll have fewer
than half as many session slots).

To help you help us, two changes have been made to the ApacheCon
Web site.  The first is that anyone can now get an 'account'
on the site, whether you've attended ApacheCon in the past or not.
This allows you to participate in things like.. well, like the
next item.

The second change is a form that lets you rate the submissions
according to how interested you'd be in attending if it were
scheduled.  The planners will use this feedback in figuring out
which sessions are likely to be the best and/or most popular.

Go to URL:http://ApacheCon.Com/html/login.html.  If you already
have an ApacheCon account, log in with it.  If you've forgotten
your password, use the lower half of the form and press 'send password.'
If you don't have an account yet, use the lower part of the form
and click 'create account.'  In either of the last two cases,
wait for the mail message sent to your email address, and then
log in using the top half of the form.

Once you're logged in, choose the 'rate CFPs' link under the
ApacheCon 2001 headline.

Please let me know if you have any questions or suggestions..
and thanks for your support!
- -- 
#kenP-)}

Ken Coarhttp://Golux.Com/coar/
Apache Software Foundation  http://www.apache.org/
"Apache Server for Dummies" http://Apache-Server.Com/
"Apache Server Unleashed"   http://ApacheUnleashed.Com/

-----BEGIN PGP SIGNATURE-----
Version: PGP for Personal Privacy 5.0
Charset: noconv

iQCVAwUBOkJYMprNPMCpn3XdAQFibwQAqRSCITNKCy+ejfhEq/mldJJ+GAZDvx1r
3XkwkZjLJVzwfMjd6diGGcvpvps25rkNjmz2jJqjyTeu+yLalXp30LKaKhL3QOCr
gLEXEDqjI7w3Z/2/oNKNvzYJvKz8arITEfEFOhQipD+KiLTOLsVTV84z+i5APNsq
yIqLLmotvI4=
=itaL
-----END PGP SIGNATURE-----

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]





Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl withscripts that contain un-shared memory

2000-12-21 Thread Sam Horrocks

I really wasn't trying to work backwards from a benchmark.  It was
more of an analysis of the design, and the benchmarks bore it out.
It's sort of like coming up with a theory in science - if you can't get
any experimental data to back up the theory, you're in big trouble.
But if you can at least point out the existence of some experiments
that are consistent with your theory, it means your theory may be true.

The best would be to have other people do the same tests and see if they
see the same trend.  If no-one else sees this trend, then I'd really
have to re-think my analysis.

Another way to look at it - as you say below, MRU is going to be in
mod_perl-2.0.  And what is the reason for that?  If there's no performance
difference between LRU and MRU, why would the author bother to switch
to MRU?  So, I'm saying there must be some benchmarks somewhere that
point out this difference - if there weren't any real-world difference,
why bother even implementing MRU?

I claim that my benchmarks point out this difference between MRU over
LRU, and that's why my benchmarks show better performance on speedycgi
than on mod_perl.

Sam

- SpeedyCGI uses MRU, mod_perl-2 will eventually use MRU.  
  On Thu, 21 Dec 2000, Sam Horrocks wrote:
  
 Folks, your discussion is not short of wrong statements that can be easily
 proved, but I don't find it useful.
   
I don't follow.  Are you saying that my conclusions are wrong, but
you don't want to bother explaining why?

Would you agree with the following statement?
   
   Under apache-1, speedycgi scales better than mod_perl with
   scripts that contain un-shared memory 
  
  I don't know. It's easy to give a simple example and claim to be better.
  So far, whoever tried to show by benchmarks that he is better was most
  often proved wrong, since the technologies in question have so many
  features that I believe no benchmark will prove any of them absolutely
  superior or inferior. Therefore I said that trying to claim that your grass
  is greener is doomed to fail if someone has time on his hands to prove you
  wrong. Well, we don't have this time.
  
  Therefore I'm not trying to prove you wrong or right. Gunther's point of
  the original forward was to show things that mod_perl may need to adopt to
  make it better. Doug already explained in his paper that the MRU approach
  has been already implemented in mod_perl-2.0. You could read it in the
  link that I've attached and the quote that I've quoted.
  
  So your conclusions about MRU are correct and we have it implemented
  already (well very soon now :). I apologize if my original reply was
  misleading.
  
  I'm not saying that benchmarks are bad. What I'm saying is that it's
  very hard to benchmark things which are different. You benefit the most
  from benchmarking when you take the initial code/product, benchmark
  it, then try to improve the code and benchmark again to see whether it
  gave you any improvement. That's the area where benchmarks rule and
  where they are fair, because you test the same thing. You can read more
  of my rambling about benchmarks in the guide.
  
  So if you find some cool features in other technologies that mod_perl
  might adopt and benefit from, don't hesitate to tell the rest of the gang.
  
  
  
  Something that I'd like to comment on:
  
  I find it a bad practice to quote one sentence from a person's post and
  follow up on it. Someone from the list has sent me this email (SB == me):
  
  SB I don't find it useful
  
  and followed up on it. Why not use a single letter:
  
  SB I
  
  and follow up? It's so much easier to flame on things taken out of their
  context.
  
  It has happened more than once that people did this to each other here on
  the list; I think I did too. So please be more careful when taking things
  out of context. Thanks a lot, folks!
  
  Cheers...
  
  _
  Stas Bekman  JAm_pH --   Just Another mod_perl Hacker
  http://stason.org/   mod_perl Guide  http://perl.apache.org/guide 
  mailto:[EMAIL PROTECTED]   http://apachetoday.com http://logilune.com/
  http://singlesheaven.com http://perl.apache.org http://perlmonth.com/  
  



Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl withscriptsthat contain un-shared memory

2000-12-21 Thread Perrin Harkins

Hi Sam,

  Processes 1, 2, 3 are running.  1 finishes and requests the mutex, then
  2 finishes and requests the mutex, then 3 finishes and requests the mutex.
  So when the next three requests come in, they are handled in the same order:
  1, then 2, then 3 - this is FIFO or LRU.  This is bad for performance.

Thanks for the explanation; that makes sense now.  So, I was right that
it's OS dependent, but most OSes use a FIFO approach which leads to LRU
selection in the mutex.

Unfortunately, I don't see that being fixed very simply, since it's not
really Apache doing the choosing.  Maybe it will be possible to do
something cool with the wake-one stuff in Linux 2.4 when that comes out.

By the way, how are you doing it?  Do you use a mutex routine that works
in LIFO fashion?

   In my experience running
    development servers on Linux it always seemed as if the requests
   would continue going to the same process until a request came in when
   that process was already busy.
 
  No, they don't.  They go round-robin (or LRU as I say it).

Keith Murphy pointed out that I was seeing the result of persistent HTTP
connections from my browser.  Duh.

  But a point I'm making is that with mod_perl you have to go to great
  lengths to write your code so as to avoid unshared memory.  My claim is that
  with mod_speedycgi you don't have to concern yourself as much with this.
  You can concentrate more on the application and less on performance tuning.

I think you're overstating the case a bit here.  It's really easy to take
advantage of shared memory with mod_perl - I just add a 'use Foo' to my
startup.pl!  It can be hard for newbies to understand, but there's nothing
difficult about implementing it.  I often get 50% or more of my
application shared in this way.  That's a huge savings.
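
For anyone who hasn't seen it, a minimal sketch of that kind of startup.pl
preloading (the module names are placeholders for whatever your application
actually uses):

# startup.pl -- loaded once in the parent via PerlRequire, so everything
# compiled here is shared copy-on-write by all the children.
use strict;
use Apache::DBI ();    # if you use DBI; must be loaded before DBI
use DBI ();
use CGI ();
CGI->compile(':all');  # precompile CGI.pm's autoloaded methods
use My::App ();        # placeholder for your own modules
1;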

  I don't assume that each approach is equally fast under all loads.  They
  were about the same at concurrency level 1, but at higher concurrency levels
  they weren't.

Well, certainly not when mod_perl started swapping...

Actually, there is a reason why MRU could lead to better performance (as
opposed to just saving memory): caching of allocated memory.  The first
time Perl sees lexicals it has to allocate memory for them, so if you
re-use the same interpreter you get to skip this step and that should give
some kind of performance benefit.

  I am saying that since SpeedyCGI uses MRU to allocate requests to perl
  interpreters, it winds up using a lot fewer interpreters to handle the
  same number of requests.

What I was saying is that it doesn't make sense for one to need fewer
interpreters than the other to handle the same concurrency.  If you have
10 requests at the same time, you need 10 interpreters.  There's no way
speedycgi can do it with fewer, unless it actually makes some of them
wait.  That could be happening, due to the fork-on-demand model, although
your warmup round (priming the pump) should take care of that.

  I don't think it's pre-forking.  When I ran my tests I would always run
  them twice, and take the results from the second run.  The first run
  was just to "prime the pump".

That seems like it should do it, but I still think you could only have
more processes handling the same concurrency on mod_perl if some of the
mod_perl processes are idle or some of the speedycgi requests are waiting.

   This is probably all a moot point on a server with a properly set
   MaxClients and Apache::SizeLimit that will not go into swap.
 
  Please let me know what you think I should change.  So far my
  benchmarks only show one trend, but if you can tell me specifically
  what I'm doing wrong (and it's something reasonable), I'll try it.

Try setting MinSpareServers as low as possible and setting MaxClients to a
value that will prevent swapping.  Then set ab for a concurrency equal to
your MaxClients setting.
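
As a concrete (and entirely hypothetical) example of that setup, with the
numbers depending on the RAM of the test box:

  # httpd.conf for the mod_perl server under test
  MinSpareServers  1
  MaxSpareServers  1
  StartServers     1
  MaxClients       30     # chosen so 30 * per-child size stays out of swap

  # then benchmark at the same concurrency, e.g.:
  #   ab -n 10000 -c 30 http://localhost/perl/hello.pl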

  I believe that with speedycgi you don't have to lower the MaxClients
  setting, because it's able to handle a larger number of clients, at
  least in this test.

Maybe what you're seeing is an ability to handle a larger number of
requests (as opposed to clients) because of the performance benefit I
mentioned above.  I don't know how hard ab tries to make sure you really
have n simultaneous clients at any given time.

  In other words, if with mod_perl you had to turn
  away requests, but with mod_speedycgi you did not, that would just
  prove that speedycgi is more scalable.

Are the speedycgi+Apache processes smaller than the mod_perl
processes?  If not, the maximum number of concurrent requests you can
handle on a given box is going to be the same.

  Maybe.  There must be a benchmark somewhere that would show off
  mod_perl's advantages in shared memory.  Maybe a 100,000 line perl
  program or something like that - it would have to be something where
  mod_perl is using *lots* of shared memory, because keep in mind that
  there are still going to be a whole lot fewer SpeedyCGI processes than
  there are 

Re: Proposals for ApacheCon 2001 in; help us choose! (fwd)

2000-12-21 Thread Gunther Birznieks

I share your concern Stas.

But I also suspect that if the results looked odd, they wouldn't blindly 
accept the web-based votes.  Can't entirely rule out bugs in the 
voting system and people voting multiple times...

However, I think it's an excellent idea to at least get an idea of what 
people generally say they want.

At 02:22 AM 12/22/00 +0100, Stas Bekman wrote:

Well, this is new. You choose what sessions at ApacheCon you want.

I don't think it's a fair approach by the ApacheCon committee, as by applying
this approach they are going to make the big players even bigger and kill
the emerging technologies, which will definitely not get enough votes and
thus will not make it in :( I'm not on the committee, so I cannot really
influence it, but I'll try anyway.

So if you want to get as many mod_perl sessions as possible your voting is
most important.

The original announce is attached.

[forwarded ApacheCon announcement snipped -- quoted in full in the original post above]

__
Gunther Birznieks ([EMAIL PROTECTED])
eXtropia - The Web Technology Company
http://www.extropia.com/




Re: Proposals for ApacheCon 2001 in; help us choose! (fwd)

2000-12-21 Thread John K Sterling

 At 02:22 AM 12/22/00 +0100, Stas Bekman wrote:

 Well, this is new. You choose what sessions at ApacheCon you want.
 
 I don't think it's a fair approach by the ApacheCon committee, as by applying
 this approach they are going to make the big players even bigger and kill
 the emerging technologies, which will definitely not get enough votes and
 thus will not make it in :( I'm not on the committee, so I cannot really
 influence it, but I'll try anyway.
 

you don't want a little say in things? i think it's clear that they are not
going to just pick the speakers by the number of votes they get.  it's just a
great way to allow the people who are going to be at apachecon to put their 2
cents in on what is going to be there.  if anything it gives the public more
say.  the group is obviously interested in picking new diverse topics, but they
also want feedback.

sterling





Re: Proposals for ApacheCon 2001 in; help us choose! (fwd)

2000-12-21 Thread Stas Bekman

On Fri, 22 Dec 2000, Gunther Birznieks wrote:

 I share your concern Stas.
 
 But I also suspect that if the results looked odd, they wouldn't blindly 
 accept the web-based votes.  Can't entirely rule out bugs in the 
 voting system and people voting multiple times...
 
 However, I think it's an excellent idea to at least get an idea of what 
 people generally say they want.

I've contacted the ASF proposing voting inside groups, to make things
fair. Otherwise it's clear which sessions will go in and which will not. I
don't think voting will really help here.

Once you tell php fans to choose between php talks, mod_perl fans to
choose between mod_perl talks, and httpd fans between httpd talks,
etc., the voting will be more fair.

We have seen the ASF make this mistake once already: the Apoloosa awards last
spring, when there was an attempt to choose the people who had
contributed the most. As I was told, many people who have contributed a
lot but aren't known by the community, because they aren't speakers, book
writers, etc. but are doing the dirty work, didn't even enter the selection
list. This was mistake number one.

Mistake number two was to vote across all projects, so of course the php
leader won and mod_perl was second, but I think that each project should
have been awarded separately.

Well, they are making this mistake again.

And I'm not talking about mod_perl here, I'm not worried about mod_perl
talks being voted in -- we have a huge community (I hope the editors of take23
take notes :). I'm more worried about other emerging technologies which might
be left behind.

_
Stas Bekman  JAm_pH --   Just Another mod_perl Hacker
http://stason.org/   mod_perl Guide  http://perl.apache.org/guide 
mailto:[EMAIL PROTECTED]   http://apachetoday.com http://logilune.com/
http://singlesheaven.com http://perl.apache.org http://perlmonth.com/  





Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl withscripts that contain un-shared memory

2000-12-21 Thread Ken Williams

[EMAIL PROTECTED] (Perrin Harkins) wrote:
Hi Sam,
[snip]
  I am saying that since SpeedyCGI uses MRU to allocate requests to perl
  interpreters, it winds up using a lot fewer interpreters to handle the
  same number of requests.

What I was saying is that it doesn't make sense for one to need fewer
interpreters than the other to handle the same concurrency.  If you have
10 requests at the same time, you need 10 interpreters.  There's no way
speedycgi can do it with fewer, unless it actually makes some of them
wait.

Well, there is one way, though it's probably not a huge factor.  If
mod_perl indeed manages the child-farming in such a way that too much
memory is used, then each process might slow down as memory becomes
scarce, especially if you start swapping.  Then if each request takes
longer, your child pool is more saturated with requests, and you might
have to fork a few more kids.

So in a sense, I think you're both correct.  If "concurrency" means the
number of requests that can be handled at once, both systems are
necessarily (and trivially) equivalent.  This isn't a very useful
measurement, though; a more useful one is how many children (or perhaps
how much memory) will be necessary to handle a given number of incoming
requests per second, and with this metric the two systems could perform
differently.


  ------
  Ken Williams Last Bastion of Euclidity
  [EMAIL PROTECTED]The Math Forum



[PATCH] Apache::test docs

2000-12-21 Thread T.J. Mather

Hi,

There is an error in the POD for Apache::test under the fetch method.  It
states that the return value of the fetch method is dependent on whether
it is called under a scalar or list context, when it is actually dependent
on whether it is called as a function or method.  I have attached the
patch for the POD.

Happy holidays,
T.J.


--- test.pm Thu Dec 21 23:03:13 2000
+++ test.pm.new Thu Dec 21 23:35:30 2000
@@ -625,8 +625,8 @@
 submission and we add a 'Content_Type' header with a value of
 'application/x-www-form-urlencoded'.

-In a scalar context, fetch() returns the content of the web server's
-response.  In a list context, fetch() returns the content and the
+If called as a function, fetch() returns the content of the web server's
+response.  If called as a method, fetch() returns the
 HTTP::Response object itself.  This can be handy if you need to check
 the response headers, or the HTTP return code, or whatever.
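
To illustrate the distinction the corrected POD draws (a sketch only; the
argument passed to fetch() here is just a placeholder for whatever request
spec you normally give it):

# called as a plain function: you get back the response content
my $content = Apache::test::fetch('/perl/test.pl');

# called as a class method: you get back the HTTP::Response object
my $response = Apache::test->fetch('/perl/test.pl');
print $response->code, "\n";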



Re: Strangeness with Carp under mod_perl

2000-12-21 Thread Doug MacEachern

On Tue, 10 Oct 2000, darren chamberlain wrote:

 Hi All.
 
 This is a curiosity question, mostly. I have a simple method of sending
 debugging messages to the error log:
 
 use constant DEBUG => 1; # Set to 0 to turn off debugging throughout
 sub debug ($) {
 if (DEBUG) {
 return carp sprintf "[%s] [%s] %s", scalar caller, scalar localtime, shift;
 }
 return 1;
 }
 
 which gets called as:
 
 debug("Entering handler");
 
 and in scripts, I get nicely formatted output (I split the lines here):
 
 [BGEP::Utils] [Tue Oct 10 13:24:33 2000] Getting date list at
   /usr/local/bin/foo.pl line 22
   
 
 But under mod_perl, I'm getting:
 
 [BGEP::TestPkg] [Tue Oct 10 13:17:00 2000] Sending message to
  '[EMAIL PROTECTED]' at /dev/null line 0
  

i cannot reproduce this.  using the mod_perl-1.24_02-dev build,
i put your code into a Foo.pm, along with:
sub test {
print Foo::debug("test");
}

and called it like so from t/net/perl/test.pl:
use Foo ();
my $r = shift;

$r->send_http_header('text/plain');

Foo::test();

[Foo] [Thu Dec 21 20:20:29 2000] test at 
/home/dougm/ap/build/mod_perl-1.24_02-dev/t/net/perl/test.pl line 6

i'm using Perl-5.7.0-dev.  you might want to try 5.6.1-trial1.  there
are some 5.6.0 bugs that clobber the Perl structure that maintains
filename:linenumber (PL_curcop), not sure if they're related to your case
or not.




Re: Handler is preventing redirects on missing trailing / ?

2000-12-21 Thread Doug MacEachern

On Wed, 11 Oct 2000, Clayton Mitchell wrote:
 
 I then noticed that URI's of directories lacking a trailing '/' were not
 being redirected in the browser and so relative links started to break.

since your PerlHandler is handling the directory, you need to manage that.
mod_autoindex and mod_dir both do it themselves, in different ways, here's
mod_dir:
    if (r->uri[0] == '\0' || r->uri[strlen(r->uri) - 1] != '/') {
        char *ifile;
        if (r->args != NULL)
            ifile = ap_pstrcat(r->pool, ap_escape_uri(r->pool, r->uri),
                               "/", "?", r->args, NULL);
        else
            ifile = ap_pstrcat(r->pool, ap_escape_uri(r->pool, r->uri),
                               "/", NULL);

        ap_table_setn(r->headers_out, "Location",
                      ap_construct_url(r->pool, ifile, r));
        return HTTP_MOVED_PERMANENTLY;
    }
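
For a PerlHandler, a rough Perl equivalent might look like this (a sketch
only: the package name is invented and URI escaping is omitted for brevity):

package My::DirHandler;
use Apache::Constants qw(:common :http);

sub handler {
    my $r = shift;
    my $uri = $r->uri;
    # no trailing slash: redirect to the same URI with one appended
    if ($uri eq '' || substr($uri, -1, 1) ne '/') {
        my $new = $uri . '/';
        $new .= '?' . $r->args if defined $r->args;
        $r->header_out(Location => $r->construct_url($new));
        return HTTP_MOVED_PERMANENTLY;
    }
    # ... normal directory handling goes here ...
    return OK;
}
1;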




Re: Recognizing server config, like Aliases, from modules

2000-12-21 Thread Doug MacEachern

On Thu, 12 Oct 2000, Rodney Broom wrote:

 Hi all,
 
 I've got a set of new modules that do things like session handling, URI
 rewriting, authentication, etc. I've got a set of tests to prevent some rewrite
 problems that look like this:
 
 if ($uri =~ m|^/cgi-bin/|) {
     return DECLINED;
 }
 if ($uri =~ m|^/icons/|) {
     return DECLINED;
 }
 etc.
 
 This is done to allow access to /cgi-bin and /icons and the like without
 rewriting the URI. UserDir is the same way. The problem is the fact that now
 folks can't adjust the conf file without adjusting the module, too. I guess I
 could slurp up the config file on load and figure it out for myself, but that
 doesn't seem very healthy or very efficient. What I'd like is a method:
 
   $r->get_aliases
 
 But I don't guess that exists, huh? Any thoughts?

no, because those structures are private to mod_alias.
you can actually call the mod_alias translate handler directly to see if
there is an Alias configured for the uri:

use Apache::Module ();

my $rr = $r->lookup_uri($r->uri);
my $mod_alias = Apache::Module->top_module->find("mod_alias");
my $rc = $mod_alias->translate_handler->($rr);

if ($rc == OK) {
#an Alias matched
}

or, if you configure your Aliases with Perl config, you can save them to
match against yourself, for example:
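
A hedged sketch of that idea in a startup file (paths and names are made up,
and it assumes <Perl> sections support is compiled in so that
Apache->httpd_conf is available):

# startup.pl: declare the aliases once, hand them to Apache, and keep the
# prefixes around for the rewrite handler to consult.
package My::Aliases;
our %map = (
    '/icons/'  => '/usr/local/apache/icons/',
    '/images/' => '/usr/local/apache/htdocs/images/',
);
Apache->httpd_conf("Alias $_ $map{$_}") for keys %map;

# later, inside the translation handler:
#   for my $prefix (keys %My::Aliases::map) {
#       return DECLINED if index($r->uri, $prefix) == 0;
#   }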




Re: END block aborted during httpd shutdown

2000-12-21 Thread Doug MacEachern

On Wed, 18 Oct 2000, Ernest Lergon wrote:

 Dear list members, dear Doug,
 
 it seems to me that my initial mail of this thread was too long to read
 and to be answered - especially because the questions are in the last
 paragraph far down below and need scrolling of the message text ;-))
 
 Ok, I'll try to split it up in bite-sized pieces:
 
 1) Our apache is running 20 children. A perl module is loaded via
 startup.pl. On shutdown of apache the END block of this module is called
 20 times and not only 1 time as I expected. Why?

because perl_destruct() runs the END blocks, and each child calls
perl_destruct() at exit.

if you only want something to run once in the parent on shutdown or
restart, use a registered cleanup:

#PerlRequire startup.pl
warn "parent pid is $$\n";
Apache->server->register_cleanup(sub { warn "server cleanup in $$\n" });




Re: getting rid of nested sub lexical problem

2000-12-21 Thread Doug MacEachern

On Thu, 19 Oct 2000, Chris Nokleberg wrote:

 Following up on my post on this subject a couple of months ago, here is a
 proof-of-concept drop-in replacement for Apache::Registry that eliminates
 the "my() Scoped Variable in Nested Subroutine" problem.

nice hack!
 
 It requires PERL5OPT = "-d" and PERL5DB = "sub DB::DB {}" environment
 variables set when starting the mod_perl-enabled httpd. This enables the
 %DB::sub hash table that holds subroutine start and end line info. I
 presume that this has some negative (marginal?) impact on performance. If
 someone knows of a better way to reliably figure out where a subroutine
 starts and ends, please let me know.

there is quite a bit of overhead when -d is enabled.  have a look at
Apache::DB.xs; there's an init_debugger() function to do what -d does.  if
another method were added to turn it off after the registry script was
compiled, the overhead would be reduced a great deal.




Re: Replacing Authen Authz handlers

2000-12-21 Thread Doug MacEachern

On Thu, 26 Oct 2000, Bill Moseley wrote:

 I've got Authen and Authz protecting an entire site:
 
   <Location />
     PerlAuthenHandler My::Authen
     PerlAuthzHandler My::Authz
     AuthType Basic
     AuthName Test
     require valid-user
   </Location>
 
 I'd like to have one directory that uses Apache's built-in Basic
 Authentication, but I'm having a hard time making it happen.
 
 I've tried using a "PerlSetVar DisableAuthen 1" and then returning DECLINED
 in my handlers, but that's causing this error:
 
configuration error:  couldn't check user.  No user file?: /test
 
 Can someone fill me in, please.  Also, what's Apache seeing that's
 triggering the above error message.

mod_auth.c is not seeing an AuthUserFile
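
In other words (a hedged reading, with a placeholder path): the directory
that should use built-in Basic auth needs its own AuthUserFile so mod_auth
has something to check once the Perl handlers decline:

<Location /plain-auth>
    AuthType Basic
    AuthName "Built-in auth"
    AuthUserFile /usr/local/apache/conf/htpasswd
    require valid-user
</Location>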





Re: POST results in HTTP/1.0 (null) ??

2000-12-21 Thread Doug MacEachern

On Fri, 3 Nov 2000, Paul J. Lucas wrote:

   So from within a function, I'm doing
 
   my $r = Apache::Request->new( Apache->request() );
   warn "request=", $r->as_string(), "\n";
 
   and, when I do a POST request, I get:
 
 Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*
 Content-Length: 6978
 Content-Type: multipart/form-data; boundary=curl3cwvW7Ge8lVBtEGuDRCENOMeIVO
 Host: www.abacus-new.com:80
 Pragma: no-cache
 User-Agent: Mozilla/4.0
 
 HTTP/1.0 (null)
 
   Why is the content merely "HTTP/1.0 (null)"?  What happened to
   the other 6900 bytes or so?

that's the expected result if you haven't called $r->send_http_header yet.

example:
warn $r->as_string;

$r->send_http_header('text/plain');

print $r->as_string;

the warn output in the error_log is:
GET /perl/test.pl HTTP/1.0
...

HTTP/1.0 (null)

now that $r->status_line and $r->headers_out have been set by
$r->send_http_header, the print output is:
HTTP/1.0 200 OK
Connection: close
Content-Type: text/plain





Re: lookup_uri and Environment Variables?

2000-12-21 Thread Doug MacEachern

On Sun, 5 Nov 2000, Hadmut Danisch wrote:

 
 Hi,
 
 sorry if this was discussed before or if it is
 a dull question, but I couldn't find any other
 help than subscribing to this list:
 
 
 I have a Perl Handler Module (PerlAuthenHandler)
 and want to lookup environment variables set by other
 modules, e.g. the variables set by apache-ssl for the
 DN,...
 
 To do so, I have the following piece of code:
 
   $subr = $r->lookup_uri($r->uri);
   $envs = $subr->subprocess_env;
   
   foreach $i ( sort keys %$envs ) {
       $r->log_error("SE $i ", $envs->{$i});
   }
 
 
 This code finds only UNIQUE_ID and variables set by
 the SetEnv directive.
 
 Could anyone give me a hint how to access the other
 variables?

the table has not been initialized at that stage, you need to force that
yourself, from ch9 of the eagle book:

Finally, if you call subprocess_env() in a void context with no
arguments, it will reinitialize the table to contain the standard
variables that Apache adds to the environment before invoking CGI
scripts and server-side include files:

 $r->subprocess_env;
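
So the poster's loop could be rewritten along these lines (a sketch; whether
the SSL variables appear still depends on the SSL module's own configuration
and on which phase this runs in):

$r->subprocess_env;              # void context: (re)initialize the table
my $envs = $r->subprocess_env;   # now fetch it
foreach my $i (sort keys %$envs) {
    $r->log_error("SE $i ", $envs->{$i});
}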




RE: Clarification of PERL_STASH_POST_DATA

2000-12-21 Thread Doug MacEachern

On Wed, 8 Nov 2000, Paul J. Lucas wrote:

 On Wed, 8 Nov 2000, Geoffrey Young wrote:
 
  ... Apache::RequestNotes may be able to help - it basically does
  cookie/get/post/upload parsing during request init and then stashes
  references to the data in pnotes.  The result is a consistent interface to
  the data across all handlers (which is the exact reason this module came
  about)
 
   This is /exactly/ right.  The only caveat is that its API is
   different from Apache::Request.  It Would Be Nice(TM) if the
   module subclassed itself off of Apache::Request so that the
   Apache::Request API would Do The Right Thing(TM).
 
  it requires Doug's libapreq and probably a few code changes, but it may be
  somewhat helpful...
 
   Such functionality should simply be absorbed into
   Apache::Request.

matt has submitted a patch for Apache::Request->instance, you can drop
this in and use it until the next release.

 sub Apache::Request::instance {
     my $class = shift;
     my $r = shift;
     if (my $apreq = $r->pnotes('apreq')) {
         return $apreq;
     }
     my $new_req = $class->new($r);
     $r->pnotes('apreq', $new_req);
     return $new_req;
 }

the POST_DATA hack isn't even worth talking about.  and shouldn't be
documented, where did you find that, the guide?
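
A hedged sketch of why instance() is handy (handler and parameter names are
invented): every phase can ask for the same object, so the POST body is only
parsed once.

use Apache::Constants qw(OK);
use Apache::Request ();

sub My::Fixup::handler {
    my $r = shift;
    my $apr = Apache::Request->instance($r);   # parses the request once
    $r->pnotes(user => $apr->param('user'));
    return OK;
}

sub My::Content::handler {
    my $r = shift;
    my $apr = Apache::Request->instance($r);   # same object, no re-parse
    $r->send_http_header('text/plain');
    $r->print('hello, ', $apr->param('user') || 'anonymous', "\n");
    return OK;
}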




Re: cybersource generating lots of zombies

2000-12-21 Thread Doug MacEachern

On Fri, 10 Nov 2000, Peter J. Schoenster wrote:

 Hi,
 
 Anyone use cybersource?

yep, but not using their client.  (plug time)  i actually had the somewhat
recent pleasure of rewriting their client library from scratch.
that was mostly done because the original is tied to RSA Tipem, which RSA
no longer supports, hence their short/old list of supported platforms.
plenty of other improvements were made too.
it's for a product i'm working on at covalent called 'Credator', and is
due to ship at the end of this month.  the core is written in C, and has
thin Perl, PHP, Java and Python layers ontop.  part of what the product
offers is a DBI-spirit interface to credit card clearing houses.  i.e.,
the interface is the same no matter what clearing house you use.

httpd.conf is configured like so:
<Credator cybersource>
   CredatorUserName ICS2Test
   CredatorPassWord ICS2Test
   CredatorTest On
   #On points to the test server, e.g. ics2test.ic3.com
</Credator>

api looks like so:
my $crd = Apache::Credator->new($r);

$crd->field(name => "john doe");
$crd->field(amount => '27.01');
$crd->field(expiration => "1209");
$crd->field(card_number => "400700027");

if ($crd->submit) {
    printf "success! authorization=%s\n", $crd->authorization;
}
else {
    printf "failure! error=%s\n", $crd->error_message;
}

to switch to another clearing house, just change httpd.conf, the app is
untouched:
<Credator verisign>
   CredatorUserName Test
   CredatorPassWord Test
   CredatorTest On
   #test.signio.com
</Credator>

 I'm using it in a mod_perl environement (registry) and whenever I 
 call the cybersource stuff it creates zombies on my system: look 
 at this (498 zombie):
 
  1:36pm  up 10 days, 19:09,  2 users,  load average: 0.51, 0.53 
 570 processes: 70 sleeping, 2 running, 498 zombie, 0 stopped
 
   PID USER PRI  NI  SIZE  RSS SHARE STAT  LIB %CPU 
 %MEM   TIME COMMAND
 14112 nobody16   0 00 0 Z   0 15.3  0.0   0:00 
 do_enc2 defunct
 
 It seems do_enc2 might be invovled (lots of it everywhere).

Credator does not fork any programs, hence no zombies; you might want to try
it when it's released :)




Re: make fails on perlio.c

2000-12-21 Thread Doug MacEachern

On Mon, 13 Nov 2000, Bob Foster wrote:

 Hello Folks,
 
 I'm trying to compile mod_perl-1.24_01 with apache_1.3.14 on Solaris 2.6.  Everything 
works OK until it hits src/modules/perl/perlio.c and then it fails with the following:
 
 perlio.c:90: parse error before `Sfdisc_t'
 perlio.c:90: warning: no semicolon at end of struct or union
 perlio.c:92: parse error before `}'
 perlio.c:92: warning: data definition has no type or storage class

your 'perl -V' would help.  it looks like you're using a botched
sfio-enabled Perl.




Re: trouble compiling mod_perl-1.24_01

2000-12-21 Thread Doug MacEachern

On Wed, 15 Nov 2000, Jere C. Julian, Jr. wrote:

 I'm on a FreeBSD 3.4-RELEASE box and I've just built and tested apache
 1.3.14 from source.  Then I try to build mod_perl with the following
 commands and get the errors below.
...
 Symbol.xs:106: `na' undeclared (first use this function)
 Symbol.xs:106: (Each undeclared identifier is reported only once
 Symbol.xs:106: for each function it appears in.)

this is supposed to be taken care of in Symbol.xs:
#include "patchlevel.h" 
#if ((PATCHLEVEL = 4)  (SUBVERSION = 76)) || (PATCHLEVEL = 5) 
#define na PL_na 
#endif 

which probably means you have a bogus patchlevel.h (from an old tcl
install) in /usr/local/include; either delete that or move it aside.  the
right thing for Symbol.xs to do is not to use na at all, patch below.

Index: Symbol/Symbol.xs
===
RCS file: /home/cvs/modperl/Symbol/Symbol.xs,v
retrieving revision 1.4
diff -u -r1.4 Symbol.xs
--- Symbol/Symbol.xs1998/11/24 19:10:56 1.4
+++ Symbol/Symbol.xs2000/12/22 06:01:06
@@ -2,11 +2,6 @@
 #include "perl.h"
 #include "XSUB.h"
 
-#include "patchlevel.h" 
-#if ((PATCHLEVEL >= 4) && (SUBVERSION >= 76)) || (PATCHLEVEL >= 5) 
-#define na PL_na 
-#endif 
-
 #ifdef PERL_OBJECT
 #define sv_name(svp) svp
 #define undef(ref) 
@@ -102,8 +97,10 @@
mg_get(sv);
sym = SvPOKp(sv) ? SvPVX(sv) : Nullch;
}
-   else
-   sym = SvPV(sv, na);
+   else {
+STRLEN n_a;
+sym = SvPV(sv, n_a);
+}
if(sym)
cv = perl_get_cv(sym, TRUE);
break;




Re: return DONE;

2000-12-21 Thread Doug MacEachern

On Wed, 15 Nov 2000, Todd Finney wrote:

 Is returning DONE a Bad Thing?

no.  might not be what you want in certain cases, but difficult to guess
what you're doing.




Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl withscriptsthat contain un-shared memory

2000-12-21 Thread Perrin Harkins

On Thu, 21 Dec 2000, Ken Williams wrote:
 So in a sense, I think you're both correct.  If "concurrency" means
 the number of requests that can be handled at once, both systems are
 necessarily (and trivially) equivalent.  This isn't a very useful
 measurement, though; a more useful one is how many children (or
 perhaps how much memory) will be necessary to handle a given number of
 incoming requests per second, and with this metric the two systems
 could perform differently.

Yes, well put.  And that actually brings me back around to my original
hypothesis, which is that once you reach the maximum number of
interprerters that can be run on the box before swapping, it no longer
makes a difference if you're using LRU or MRU.  That's because all
interpreters are busy all the time, and the RAM for lexicals has already
been allocated in all of them.  At that point, it's a question of which
system can fit more interpreters in RAM at once, and I still think
mod_perl would come out on top there because of the shared memory.  Of
course most people don't run their servers at full throttle, and at less
than total saturation I would expect speedycgi to use less RAM and
possibly be faster.

So I guess I'm saying exactly the opposite of the original assertion:
mod_perl is more scalable if you define "scalable" as maximum requests per
second on a given machine, but speedycgi uses fewer resources at less than
peak loads which would make it more attractive for ISPs and other people
who use their servers for multiple tasks.

This is all hypothetical and I don't have time to experiment with it until
after the holidays, but I think the logic is correct.

- Perrin



Re: Uri modification at translation phase ...

2000-12-21 Thread Doug MacEachern

On Thu, 16 Nov 2000, Antonio Pascual wrote:

 Hi Everybody.
 I'm making a module that modifies the uri at the translation phase,
 but I have a doubt.
 
 The way I do it is modifying the uri and returning DECLINED as I read in the
 book "Writing Apache Modules with Perl And C".
 But working like this, the environment variable QUERY_STRING is correctly
 modified,
 but the uri in the browser is not changed.
 I have tried returning REDIRECT, but then POST calls don't work.

try 1.24_01, it fixes a bug that will unclog the REDIRECT+POST problem.




Re: Apache-server_root_relative not found?

2000-12-21 Thread Doug MacEachern

On Mon, 27 Nov 2000, The BOFH wrote:
 
 BEGIN {
use Apache ();
   use lib Apache->server_root_relative('libperl');   ## 
 /usr/local/apache/libperl created
 }
... 
 perl -cw modperl_startup.pl returns:
 
 Can't locate object method "server_root_relative" via package "Apache" at 
 conf/modperl_startup.pl line 5.  mod_perl was built with EVERYTHING=1.

Apache-> methods are only available inside the server, not on the command
line.  perl -c will pass if you remove the BEGIN block, which is not
required in either case.




Re: 1.24 to 1.24_01 spinning httpds on startup (solved)

2000-12-21 Thread Doug MacEachern

On Tue, 28 Nov 2000, Michael J Schout wrote:

 About a month or 2 ago, I had posted a problem where I tried to upgrade from:
... 
 And reported that after doing this, my httpds would spin on startup.  When I
 turned on MOD_PERL_TRACE=all, it was showing that it was stuck in an infinite
 loop processing configuration stuff.  I posted the mod_perl trace for it as
 well.
 
 I have finally gotten a chance to revisit this and it turns out that what was
 causing problems was that I had in my httpd.conf:
 
 <Perl>
 $PerlRequire = '/some/path/file.pl';
 </Perl>

this has been fixed in the cvs tree.




Re: Upgraded to perl 5.6.0, ImageMagick now gives boot_libapreqerror

2000-12-21 Thread Doug MacEachern

On Mon, 11 Dec 2000, Chris Allen wrote:

 I have just done a complete install of RedHat v7.0 which includes
 Perl 5.6.0. Image Magick was running fine on my old system, but now 
 when I attempt to install it, it gives the following error message
 when attempting to do the PerlMagick install:
 
 ./perlmain.o: In function: 'xs_init' :
 ./perlmain.o(.text+0xc1): undefined reference to  'boot_libapreq'
 collect2: ld returned 1 exit status

just delete this file:
perl -MConfig -le 'print "$Config{installsitearch}/auto/libapreq/libapreq.a"'

since you're building static, MakeMaker tries to link *.a, but that's not
an XS .a, it's meant for pure C apps to link against.
it shouldn't be installed there, esp. now that libapreq has autoconf
support for pure C apps.




Re: segmentation faults

2000-12-21 Thread Doug MacEachern

On Wed, 13 Dec 2000, Dr. Fredo Sartori wrote:

 Apache produces segmentation faults when receiving arbitrary 
 requests.
 
 I am running apache-1.3.14 with php-4.0.3pl1, mod_ssl-2.7.1 and 
 mod_perl-1.24_02 (from the CVS tree) on solaris 2.7. 
 The perl version installed is 5.6.0.
 
 According to the backtrace of gdb the problem seems to be
 located in mod_perl:
 
 #0  0x71170 in perl_header_parser (r=0x449040) at mod_perl.c:1021

this is the uselargefiles bug.
Makefile.PL should have told you:

Your Perl is uselargefiles enabled, but Apache is not, suggestions:
*) Rebuild Apache with CFLAGS="-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64"
*) Rebuild Perl with Configure -Uuselargefiles
*) Let mod_perl build Apache (USE_DSO=1 instead of USE_APXS=1)

#2 is probably the best route, since php+mod_ssl would probably also need
the #1 flags.




RE: help with custom Error documents/redirection

2000-12-21 Thread Doug MacEachern

On Wed, 13 Dec 2000, Geoffrey Young wrote:
 
 BTW, it's always good (at least I've found) to call
 my $prev_uri = $r->prev ? $r->prev->uri : $r->uri;

or with one less method call :)
my $prev_uri = ($r->prev || $r)->uri; 




Re: slight mod_perl problem

2000-12-21 Thread Doug MacEachern

On Thu, 21 Dec 2000, Vivek Khera wrote:

  "SB" == Stas Bekman [EMAIL PROTECTED] writes:
 
  startup.pl does not get repeated on a restart. However it will when
  started with ./apachectl start. I have never encountered this with Apache
  1.3.12 or 13.
 
 SB I've just tested it -- it's not.
 
 I just tested it also, and the startup script is run exactly once.

could be he has PerlFreshRestart On, in which case it would be called
twice.




cvs commit: modperl-site/conferences/tpc5-us cfp.html

2000-12-21 Thread sbekman

sbekman 00/12/21 07:41:30

  Added:   conferences/tpc5-us cfp.html
  Log:
  * creating the conferences dirs infrastructure
  * first sketch of the mod_perl cfp (Do not announce it yet!)
  
  Revision  ChangesPath
  1.1  modperl-site/conferences/tpc5-us/cfp.html
  
  Index: cfp.html
  ===
  !doctype html public "-//w3c//dtd html 4.0 transitional//en"
  html
  head
 meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1"
 meta name="description" content="The O'Reilly Open Source Convention brings 
together the leaders of 10 critical open source technologies to give you an inside 
look at how to configure, optimize, code, and manage these powerful tools. It is a 
Convention rooted in a single premise--give people high-quality information so they 
can solve their problems quickly, efficiently, and elegantly. It is the only 
convention available that brings together the diverse open source communities and 
allows you to both go deep in your key technologies and sample other tools to solve 
problems."
 meta name="keywords" content="O'Reilly, O'Reilly  associates, open source, open 
source software convention, open source business, perl conference 4.0, Linux, Apache, 
BSD, Python, sendmail, Mozilla, PHP, technical conferences, conventions and 
conferences, software conferences, software conventions"
 meta name="GENERATOR" content="Mozilla/4.75 [en] (X11; U; Linux 2.2.17-21mdk 
i686) [Netscape]"
 titleconferences.oreilly.com --  O'Reilly Perl Conference 5.0/title
  /head
  body text="#00" bgcolor="#FF" link="#006699" vlink="#CC"
  nbsp;
  table BORDER=0 CELLSPACING=0 CELLPADDING=0 WIDTH="694" 
  tr
  td ALIGN=CENTER VALIGN=TOP BGCOLOR="#FF"img SRC="dotclear.gif" height=1 
width=550
  table BORDER=0 CELLSPACING=0 CELLPADDING=5 WIDTH="100%" 
  tr
  td VALIGN=TOP COLSPAN="2"
  centerbfont size=+0An Annual Perl Community Gathering/font/b
  brbfont size=+2The O'Reilly Perl Conference 5/font/b
  brimg SRC="dotclear.gif" BORDER=0 height=4 width=1
  brbfont size=+0July 23-27, 2001 -- San Diego, California/font/b
  brimg SRC="graydot.gif" height=1 width=320/center
  /td
  /tr
  
  tr
  td ALIGN=RIGHT VALIGN=TOP WIDTH="22%"/td
  
  td VALIGN=TOP WIDTH="60%"img SRC="dotclear.gif" BORDER=0 height=1 width=1
  centerbfont size=+2mod_perl track's Call for Participation/font/b/center
  
  pfont size=-1O'Reilly amp; Associates is pleased to announce the 5th
  annual Perl Conference. This event is the central gathering place for the
  Perl community to exchange ideas, techniques, and to advance the language.
  The Perl Conference is a five-day event designed for Perl programmers and
  developers and technical staff involved in Perl technology and its applications.
  The conference will be held at the Sheraton San Diego Hotel and Marina,
  San Diego, California, July 23-27, 2001./font
  pfont size=-1The Perl Conference 5 includes two days of intensely focused
  tutorials aimed mainly at intermediate and advanced Perl programmers. These
  tutorials are designed to provide concrete knowledge that leads directly
  to better Perl programs. Three days of multi-tracked conference sessions
  focus on the cutting edge Perl technology and feature talks, demonstrations,
  and panel debates on topics ranging from object-orientation to data mining./font
  pfont size=-1Technically sophisticated individuals who are actively
  working with Perl technology make all tutorial and conference presentations.
  Presentations by marketing staff or with a marketing focus will not be
  accepted./fontfont size=-1/font
  pfont size=+1You can find the general a 
href="http://conferences.oreilly.com/perl5/"Call
  for Papers here/a.nbsp;/fontfont size=+1/font
  pfont size=+1bThis/b bCFP/bnbsp;bis specialized for/b
  ba href="http://perl.apache.org"mod_perl community/a./b/font
  h3
  font size=-1Participation Opportunities/font/h3
  font size=-1Individuals and companies interested in making technical
  presentations at the Conference are invited to submit proposals to the
  conference organizers following the guidelines below. Proposals will be
  considered in three classes: tutorial programs, conference presentations,
  and refereed papers./font
  h3
  font size=-1Topics/font/h3
  font size=-1The program committee invites submissions of tutorials, conference
  presentations, or refereed papers on topics of interest to Perl programmers.
  Here are some suggested topics:/font
  ul
  li
  font size=-1Basic mod_perl use (installation/configuration/etc)/font/li
  
  li
  font size=-1mod_perl internals/font/li
  
  li
  font size=-1mod_perl 2.0/font/li
  
  li
  font size=-1Interesting 3rd party Apache::*nbsp;modules/font/li
  
  li
  font size=-1Interesting mod_perl based applications/font/li
  
  li
  font size=-1Templating systems and mod_perl/font/li
  
  li
  font size=-1Application Toolkits based on mod_perl/font/li
  
 

cvs commit: modperl Changes INSTALL.win32 INSTALL.activeperl

2000-12-21 Thread dougm

dougm   00/12/21 11:19:12

  Modified:.Changes INSTALL.win32
  Removed: .INSTALL.activeperl
  Log:
  INSTALL.win32 updates, obsolete INSTALL.activeperl removed
  
  Revision  ChangesPath
  1.559 +3 -0  modperl/Changes
  
  Index: Changes
  ===
  RCS file: /home/cvs/modperl/Changes,v
  retrieving revision 1.558
  retrieving revision 1.559
  diff -u -r1.558 -r1.559
  --- Changes   2000/12/20 18:51:50 1.558
  +++ Changes   2000/12/21 19:19:07 1.559
  @@ -10,6 +10,9 @@
   
   =item 1.24_02-dev
   
  +INSTALL.win32 updates, obsolete INSTALL.activeperl removed
  +[Randy Kobes [EMAIL PROTECTED]]
  +
   Solving an 'uninitialized value' warn in Apache::SizeLimit.
   post_connection() expects a return status from the callback function.
   [Stas Bekman [EMAIL PROTECTED]]
  
  
  
  1.5   +80 -24modperl/INSTALL.win32
  
  Index: INSTALL.win32
  ===
  RCS file: /home/cvs/modperl/INSTALL.win32,v
  retrieving revision 1.4
  retrieving revision 1.5
  diff -u -r1.4 -r1.5
  --- INSTALL.win32 2000/12/20 07:32:12 1.4
  +++ INSTALL.win32 2000/12/21 19:19:08 1.5
  @@ -6,69 +6,81 @@
   
   How to build, test, configure and install mod_perl under Win32
   
  -=head1 PREREQUSITES
  +=head1 PREREQUISITES
   
   =over 3
   
  -patience - mod_perl is considered alpha under NT and Windows95.
  +patience - mod_perl is considered alpha under NT and Windows9x.
   
   MSVC++ 5.0+, Apache version 1.3-dev or higher and Perl 5.004_02 or higher.
   
  -mod_perl will _not_ work with ActiveState's port, only with the "official"
  -Perl, available from: http://www.perl.com/CPAN/src/5.0/latest.tar.gz
  +As of version 1.24_01, mod_perl will build on Win32 ActivePerls
  +based on Perl-5.6.0 (builds 6xx). 
   
   =back
   
  +=head1 BINARIES
  +
  +See http://perl.apache.org/distributions.html for Win32 binaries,
  +including ActivePerl ppms of mod_perl and some Apache::* packages.
  +
   =head1 BUILDING
   
  -=over 3
  +There are two ways to build mod_perl - with MS Developer Studio,
  +and through command-line arguments to 'perl Makefile.PL'.
   
  -=item Binaries
  +=head2 Building with MS Developer Studio
   
  -See: http://perl.apache.org/distributions.html
  +=over 3
   
   =item Setup the Perl side
   
  -run 'perl Makefile.PL'
  -run 'nmake install'
  +Run 
   
  +  perl Makefile.PL
  +  nmake install
  +
   This will install the Perl side of mod_perl and setup files for the library build.
   
   =item Build ApacheModulePerl.dll
   
   Using MS developer studio, 
  -select "File - Open Workspace ...", 
  -select "Files of type [Projects (*.dsp)]"
  -browse and open mod_perl-x.xx/src/modules/ApacheModulePerl/ApacheModulePerl.dsp
  +
  + select "File - Open Workspace ...", 
  + select "Files of type [Projects (*.dsp)]"
  + open mod_perl-x.xx/src/modules/ApacheModulePerl/ApacheModulePerl.dsp
   
   =item Settings
   
  -select "Tools - Options - [Directories]"
  + select "Tools - Options - [Directories]"
   
  -select "Show directories for: [Include files]"
  + select "Show directories for: [Include files]"
   
   You'll need to add the following paths:
  -C:\apache_x.xx\src\include
  -.  (should be expanded to C:\...\mod_perl-x.xx\src\modules\perl for you)
  -C:\perl\lib\Core
  + 
  + C:\apache_x.xx\src\include
  + .  (should expand to C:\...\mod_perl-x.xx\src\modules\perl)
  + C:\perl\lib\Core
   
   select "Project - Add to Project - Files" adding:
  -perl.lib   (e.g. C:\perl\lib\Core\perl.lib)
  -ApacheCore.lib (e.g. C:\Apache\ApacheCore.lib)
  + 
  + perl.lib (or perl56.lib)   (e.g. C:\perl\lib\Core\perl.lib)
  + ApacheCore.lib (e.g. C:\Apache\ApacheCore.lib)
   
  -select "Build - Set Active Configuration... - [ApacheModulePerl - Win32 Release]"
  + select "Build - Set Active Configuration... - 
  + [ApacheModulePerl - Win32 Release]"
   
  -select "Build - Build ApacheModulePerl.dll"
  + select "Build - Build ApacheModulePerl.dll"
   
   You may see some harmless warnings, which can be reduced (along with
   the size of the DLL), by setting:
   
  -"Project - Settings - [C/C++] - Category: [Code Generation] - 
  + "Project - Settings - [C/C++] - Category: [Code Generation] - 
 Use runtime library: [Multithreaded DLL]
   
   =item Testing
   
  -Once ApacheModulePerl.dll is built and apache.exe in installed you may
  +Once ApacheModulePerl.dll is built and apache.exe is installed you may
   test mod_perl with:
   
nmake test
  @@ -80,6 +92,51 @@
   
   =back
   
  +=head2 Building with arguments to 'perl Makefile.PL'
  +
  +Generating the Makefile as, for example,
  +
  + perl Makefile.PL APACHE_SRC=..\apache_1.3.xx INSTALL_DLL=\Apache\modules
  +
  +will build mod_perl (including ApacheModulePerl.dll) entirely from 
  +the command line. The arguments accepted include
  +
  +=over 3
  +
  +=item APACHE_SRC
  +
  +This gives the 

cvs commit: modperl-site/conferences/tpc5-us cfp.html

2000-12-21 Thread sbekman

sbekman 00/12/21 12:49:34

  Modified:conferences/tpc5-us cfp.html
  Log:
  * removing images (leftovers from the original cfp)
  * correcting/extending some notes
  
  Revision  ChangesPath
  1.3   +76 -100   modperl-site/conferences/tpc5-us/cfp.html
  
  Index: cfp.html
  ===
  RCS file: /home/cvs/modperl-site/conferences/tpc5-us/cfp.html,v
  retrieving revision 1.2
  retrieving revision 1.3
  diff -u -r1.2 -r1.3
  --- cfp.html  2000/12/21 18:54:22 1.2
  +++ cfp.html  2000/12/21 20:49:33 1.3
  @@ -4,239 +4,215 @@
   <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
   <meta name="description" content="The O'Reilly Open Source Convention brings together the leaders of 10 critical open source technologies to give you an inside look at how to configure, optimize, code, and manage these powerful tools. It is a Convention rooted in a single premise--give people high-quality information so they can solve their problems quickly, efficiently, and elegantly. It is the only convention available that brings together the diverse open source communities and allows you to both go deep in your key technologies and sample other tools to solve problems.">
   <meta name="keywords" content="O'Reilly, O'Reilly & associates, open source, open source software convention, open source business, perl conference 4.0, Linux, Apache, BSD, Python, sendmail, Mozilla, PHP, technical conferences, conventions and conferences, software conferences, software conventions">
  +   <meta name="GENERATOR" content="Mozilla/4.75 [en] (X11; U; Linux 2.2.17-21mdk i686) [Netscape]">
   <title>O'Reilly Perl Conference 5.0 - mod_perl</title>
   </head>
  -
   <body text="#00" bgcolor="#FF" link="#006699" vlink="#CC">
   &nbsp;
   <table BORDER=0 CELLSPACING=0 CELLPADDING=0 WIDTH="694" >
   <tr>
  -<td ALIGN=CENTER VALIGN=TOP BGCOLOR="#FF"><img SRC="dotclear.gif" height=1 width=550>
  +<td ALIGN=CENTER VALIGN=TOP BGCOLOR="#FF">
   <table BORDER=0 CELLSPACING=0 CELLPADDING=5 WIDTH="100%" >
   <tr>
   <td VALIGN=TOP COLSPAN="2">
   <center><b><font size=+0>An Annual Perl Community Gathering</font></b>
   <br><b><font size=+2>The O'Reilly Perl Conference 5</font></b>
  -<br><img SRC="dotclear.gif" BORDER=0 height=4 width=1>
  -<br><b><font size=+0>July 23-27, 2001 -- San Diego, California</font></b>
  -<br><img SRC="graydot.gif" height=1 width=320></center>
  +<p><b><font size=+0>July 23-27, 2001 -- San Diego, California</font></b></center>
   </td>
   </tr>
   
   <tr>
   <td ALIGN=RIGHT VALIGN=TOP WIDTH="22%"></td>
   
  -<td VALIGN=TOP WIDTH="60%"><img SRC="dotclear.gif" BORDER=0 height=1 width=1>
  +<td VALIGN=TOP WIDTH="60%">
   <center><b><font size=+2>mod_perl track's Call for Participation</font></b></center>
   
  -<p><font size=-1>O'Reilly &amp; Associates is pleased to announce the 5th
  +<p><font size=+0>O'Reilly &amp; Associates is pleased to announce the 5th
   annual Perl Conference. This event is the central gathering place for the
   Perl community to exchange ideas, techniques, and to advance the language.
   The Perl Conference is a five-day event designed for Perl programmers and
   developers and technical staff involved in Perl technology and its applications.
   The conference will be held at the Sheraton San Diego Hotel and Marina,
   San Diego, California, July 23-27, 2001.</font>
  -<p><font size=-1>The Perl Conference 5 includes two days of intensely focused
  +<p><font size=+0>The Perl Conference 5 includes two days of intensely focused
   tutorials aimed mainly at intermediate and advanced Perl programmers. These
   tutorials are designed to provide concrete knowledge that leads directly
   to better Perl programs. Three days of multi-tracked conference sessions
   focus on the cutting edge Perl technology and feature talks, demonstrations,
   and panel debates on topics ranging from object-orientation to data mining.</font>
  -<p><font size=-1>Technically sophisticated individuals who are actively
  +<p><b>The mod_perl track will be held on Wednesday (25) and Thursday (26)</b>.
  +<p><font size=+0>Technically sophisticated individuals who are actively
   working with Perl technology make all tutorial and conference presentations.
   Presentations by marketing staff or with a marketing focus will not be
  -accepted.</font><font size=-1></font>
  +accepted.</font>
  +<p><font size=+1><b>This</b> <b>CFP</b> <b>is specialized for the <a href="http://perl.apache.org">mod_perl
  +community</a>.&nbsp;</b></font><b><font size=+1></font></b>
   <p><font size=+1>You can find the general <a href="http://conferences.oreilly.com/perl5/">Call
  -for Papers here</a>.&nbsp;</font><font size=+1></font>
  -<p><font size=+1><b>This</b> <b>CFP</b>&nbsp;<b>is specialized for the</b>
  -<b><a href="http://perl.apache.org">mod_perl community</a>.</b></font>
  +for Papers here</a>.&nbsp;</font>
   <h3>
  -<font size=-1>Participation Opportunities</font></h3>
  -<font size=-1>Individuals and companies interested in making technical
  +<font size=+0>Participation Opportunities</font></h3>
  +<font size=+0>Individuals and companies interested 

cvs commit: modperl/Symbol Symbol.xs

2000-12-21 Thread dougm

dougm   00/12/21 22:02:32

  Modified: .        Changes
   Symbol   Symbol.xs
  Log:
  rid PL_na usage in Symbol.xs
  
  Revision  ChangesPath
  1.561 +2 -0  modperl/Changes
  
  Index: Changes
  ===
  RCS file: /home/cvs/modperl/Changes,v
  retrieving revision 1.560
  retrieving revision 1.561
  diff -u -r1.560 -r1.561
  --- Changes   2000/12/21 20:00:09 1.560
  +++ Changes   2000/12/22 06:02:30 1.561
  @@ -10,6 +10,8 @@
   
   =item 1.24_02-dev
   
  +rid PL_na usage in Symbol.xs
  +
   INSTALL.win32 updates, obsolete INSTALL.activeperl removed
   [Randy Kobes [EMAIL PROTECTED]]
   
  
  
  
  1.5   +4 -7  modperl/Symbol/Symbol.xs
  
  Index: Symbol.xs
  ===
  RCS file: /home/cvs/modperl/Symbol/Symbol.xs,v
  retrieving revision 1.4
  retrieving revision 1.5
  diff -u -r1.4 -r1.5
  --- Symbol.xs 1998/11/24 19:10:56 1.4
  +++ Symbol.xs 2000/12/22 06:02:32 1.5
  @@ -2,11 +2,6 @@
   #include "perl.h"
   #include "XSUB.h"
   
  -#include "patchlevel.h" 
  -#if ((PATCHLEVEL >= 4) && (SUBVERSION >= 76)) || (PATCHLEVEL >= 5) 
  -#define na PL_na 
  -#endif 
  -
   #ifdef PERL_OBJECT
   #define sv_name(svp) svp
   #define undef(ref) 
  @@ -102,8 +97,10 @@
mg_get(sv);
sym = SvPOKp(sv) ? SvPVX(sv) : Nullch;
}
  - else
  - sym = SvPV(sv, na);
  + else {
  +STRLEN n_a;
  +sym = SvPV(sv, n_a);
  +}
if(sym)
cv = perl_get_cv(sym, TRUE);
break;
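
The change follows the idiom the perl core documentation favors for XS code:
instead of passing the interpreter-global scratch variable PL_na (via the old
'na' alias) to SvPV(), declare a local STRLEN and let SvPV() write the string
length into it. A minimal standalone sketch of that idiom (the lookup_cv()
helper here is purely illustrative, not mod_perl code):

 #include "EXTERN.h"
 #include "perl.h"
 #include "XSUB.h"

 static CV *lookup_cv(SV *sv)
 {
     STRLEN n_a;                 /* local length slot instead of the global PL_na */
     char *sym = SvPV(sv, n_a); /* SvPV() stores the PV's length in n_a */

     return sym ? perl_get_cv(sym, TRUE) : Nullcv;
 }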