Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2001-01-06 Thread Sam Horrocks

  Let me just try to explain my reasoning.  I'll define a couple of my
  base assumptions, in case you disagree with them.
  
  - Slices of CPU time doled out by the kernel are very small - so small
  that processes can be considered concurrent, even though technically
  they are handled serially.

 Don't agree.  You're equating the model with the implementation.
 Unix processes model concurrency, but when it comes down to it, if you
 don't have more CPUs than processes, you can only simulate concurrency.

 Each process runs until it either blocks on a resource (timer, network,
 disk, pipe to another process, etc), or a higher priority process
 pre-empts it, or it's taken so much time that the kernel wants to give
 another process a chance to run.

  - A set of requests can be considered "simultaneous" if they all arrive
  and start being handled in a period of time shorter than the time it
  takes to service a request.

 That sounds OK.

  Operating on these two assumptions, I say that 10 simultaneous requests
  will require 10 interpreters to service them.  There's no way to handle
  them with fewer, unless you queue up some of the requests and make them
  wait.

 Right.  And that waiting takes place:

- In the mutex around the accept call in the httpd

- In the kernel's run queue when the process is ready to run, but is
  waiting for other processes ahead of it.

 So, since there is only one CPU, then in both cases (mod_perl and
 SpeedyCGI), processes spend time waiting.  But what happens in the
 case of SpeedyCGI is that while some of the httpd's are waiting,
 one of the earlier speedycgi perl interpreters has already finished
 its run through the perl code and has put itself back at the front of
 the speedycgi queue.  And by the time that Nth httpd gets around to
 running, it can re-use that first perl interpreter instead of needing
 yet another process.

 This is why it's important that you don't assume that Unix is truly
 concurrent.
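
(A toy Perl simulation of that queue effect - all numbers here are
invented for illustration: ten pre-forked interpreters, one request
arriving per tick, three ticks of service time.  The only difference
between the two runs is which end of the idle list gets picked.)

 use strict;

 for my $policy (qw(MRU LRU)) {
     my @idle = (1 .. 10);   # 10 pre-forked interpreters (MaxClients 10)
     my %busy;               # interpreter id => tick when it frees up
     my %touched;
     for my $now (1 .. 50) {                  # one new request per tick
         for my $id (sort { $a <=> $b } keys %busy) {
             next unless $busy{$id} <= $now;
             push @idle, $id;                 # finished: rejoin idle list
             delete $busy{$id};
         }
         my $id = $policy eq 'MRU' ? pop @idle : shift @idle;
         $touched{$id} = 1;
         $busy{$id} = $now + 3;               # hold it for 3 ticks
     }
     printf "%s touched %d of 10 interpreters\n",
            $policy, scalar keys %touched;
 }

MRU settles on the same three hot interpreters; LRU cycles through all
ten, which is the paging difference under discussion.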

  I also say that if you have a top limit of 10 interpreters on your
  machine because of memory constraints, and you're sending in 10
  simultaneous requests constantly, all interpreters will be used all the
  time.  In that case it makes no difference to the throughput whether you
  use MRU or LRU.

 This is not true for SpeedyCGI, because of the reason I give above.
 10 simultaneous requests will not necessarily require 10 interpreters.

What you say would be true if you had 10 processors and could get
true concurrency.  But on single-cpu systems you usually don't need
10 unix processes to handle 10 requests concurrently, since they get
serialized by the kernel anyways.
  
  I think the CPU slices are smaller than that.  I don't know much about
  process scheduling, so I could be wrong.  I would agree with you if we
  were talking about requests that were coming in with more time between
  them.  Speedycgi will definitely use fewer interpreters in that case.

 This url:

http://www.oreilly.com/catalog/linuxkernel/chapter/ch10.html

 says the default timeslice is 210ms (1/5th of a second) for Linux on a PC.
 There's also lots of good info there on Linux scheduling.

I found that setting MaxClients to 100 stopped the paging.  At concurrency
level 100, both mod_perl and mod_speedycgi showed similar rates with ab.
Even at higher levels (300), they were comparable.
  
  That's what I would expect if both systems have a similar limit of how
  many interpreters they can fit in RAM at once.  Shared memory would help
  here, since it would allow more interpreters to run.
  
  By the way, do you limit the number of SpeedyCGI processes as well?  It
  seems like you'd have to, or they'd start swapping too when you throw
  too many requests in.

 SpeedyCGI has an optional limit on the number of processes, but I didn't
 use it in my testing.

But, to show that the underlying problem is still there, I then changed
the hello_world script and doubled the amount of un-shared memory.
And of course the problem then came back for mod_perl, although speedycgi
continued to work fine.  I think this shows that mod_perl is still
using quite a bit more memory than speedycgi to provide the same service.
  
  I'm guessing that what happened was you ran mod_perl into swap again. 
  You need to adjust MaxClients when your process size changes
  significantly.

 Right, but this also points out how difficult it is to get mod_perl
 tuning just right.  My opinion is that the MRU design adapts more
 dynamically to the load.

   I believe that with speedycgi you don't have to lower the MaxClients
   setting, because it's able to handle a larger number of clients, at
   least in this test.

 Maybe what you're seeing is an ability to handle a larger number of
 requests (as opposed to clients) because of the performance benefit I
 mentioned above.
   
I don't follow.
  
  When not all processes are in use, I 

Re: perl calendar application

2001-01-06 Thread Blue Lang

On Sat, 6 Jan 2001 [EMAIL PROTECTED] wrote:

 On Fri, 5 Jan 2001, Jim Serio wrote:
  Why not just write one to suit your needs? If you want one

 I'd really like to hack on a freeware version, but it'd be nice to start
 with one that at least had some decent scheduling features so I could use

Eh, I'm prepared to take my lynching, but I'd just like to remind everyone
that there's nothing at all wrong with using PHP for things like this.
You'll never be a worse person for learning something new, and the
overhead required to manage a php+perl enabled apache is only minimally
more than managing one or the other.

IMHO, it's just lame to rewrite something for which there exist dozens of
good apps just because of the language in which it is written. You might
as well be arguing about GPL/BSD/Artistic at that point.

Just my two cents, and all that.

-- 
   Blue Lang, Unix Voodoo Priest
   202 Ashe Ave, Apt 3, Raleigh, NC.  919 835 1540
"I was born in a city of sharks and sailors!" - June of 44




newbie needs help

2001-01-06 Thread dave frost,,,
Hi everyone.

I have just built apache mod perl, things seem to be fine since /server-status reports


Apache Server Status for home.itchy.and.scratchy


Server Version: Apache/1.3.14 (Unix) mod_perl/1.24_01

Server Built: Jan  5 2001 17:49:02

I have also edited my httpd.conf file to include the following

#mod perl stuff
Alias /perl/ /usr/local/apache_modperl/perl_mods/
PerlFreshRestart On
<Location /perl>
 SetHandler perl-script
 PerlHandler testing
</Location>
<Location /perl-status>
 SetHandler perl-script
 PerlHandler Apache::Status
</Location>

The /perl-status page runs fine. The problem is with my own (well written :-)
script. The module is in a directory in @INC, and is called testing.pm.

package testing;
use strict;

sub handler
{
 my $r = shift;
 $r->content_type("text/plain";
 $r->send_http_header();
 $r->print "this is daves module running";
}

1

I have tried the url http://home.itchy.and.scratchy/perl/ but that reports an
internal server error - any ideas why this is and what I'm doing wrong would
be of most help.

Thanks,
Dave Frost





Re: newbie needs help

2001-01-06 Thread Gerd Kortemeyer


 $r->content_type("text/plain";

You are missing a ")" here. Check the error_log of Apache for errors like that.
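
(For reference, a corrected version of the handler - the "return OK"
from Apache::Constants at the end is the usual convention, though its
absence is not the cause of this particular error:)

 package testing;
 use strict;
 use Apache::Constants qw(OK);

 sub handler
 {
  my $r = shift;
  $r->content_type("text/plain");
  $r->send_http_header();
  $r->print("this is daves module running");
  return OK;
 }

 1;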

- Gerd.




Re: where can I get MIME/Head.pm modules

2001-01-06 Thread Gerd Kortemeyer



 zhang-sh wrote:

 Re: where can I get MIME/Head.pm  modules

www.cpan.org




Re: Linux Hello World 2000, Results In!!

2001-01-06 Thread Perrin Harkins

Joshua Chamas wrote:
 The Hello World 2000 benchmark is complete, and my results are below

Kind of harsh results for Template Toolkit, but it makes sense given the
nature of the test.  Variable interpolation in TT provides extra
functionality to give transparent access to method calls, coderefs,
etc.  The other systems are just going direct to Perl data structures.

In practice, I wouldn't do all this variable assignment stuff in the
template; I'd do it in a handler before running the template.  That
wouldn't make sense for the Embperl/ASP/Mason style systems though.

By the way, it would be cool if the test ran a few times, dropped the
high and low for each system, and took an average of the remaining
times.  That would smooth out some of the inaccuracies that are
unavoidable with this kind of benchmarking.
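
(Something along these lines, in Perl - the per-trial numbers here are
made up:)

 my @rates  = (451.9, 448.2, 460.3, 430.1, 455.0);  # hits/sec per trial
 my @sorted = sort { $a <=> $b } @rates;
 shift @sorted;                                     # drop the low outlier
 pop @sorted;                                       # drop the high outlier
 my $sum = 0;
 $sum += $_ for @sorted;
 printf "trimmed mean: %.1f hits/sec\n", $sum / @sorted;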

Nice work.

- Perrin



Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2001-01-06 Thread Perrin Harkins

Sam Horrocks wrote:
  Don't agree.  You're equating the model with the implementation.
  Unix processes model concurrency, but when it comes down to it, if you
  don't have more CPUs than processes, you can only simulate concurrency.
[...]
  This url:
 
 http://www.oreilly.com/catalog/linuxkernel/chapter/ch10.html
 
  says the default timeslice is 210ms (1/5th of a second) for Linux on a PC.
  There's also lots of good info there on Linux scheduling.

Thanks for the info.  This makes much more sense to me now.  It sounds
like using an MRU algorithm for process selection is automatically
finding the sweet spot in terms of how many processes can run within the
space of one request and coming close to the ideal of never having
unused processes in memory.  Now I'm really looking forward to getting
MRU and shared memory in the same package and seeing how high I can
scale my hardware.

- Perrin



Re: Configtest yields bad news...

2001-01-06 Thread andrewl

/usr/local/apache/bin/apachectl configtest

produces

"Cannot load /usr/local/apache/modules/libperl.so into server: undefined symbol:
ap_ctx_get"

I've done some searches for clues at RedHat and other sites, but all I see is
something about an IBM issue.
Any clues?

Andrew L.




Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2001-01-06 Thread Buddy Lee Haystack

Does this mean that mod_perl's memory hunger will be curbed in the future
using some of the neat tricks in Speedycgi?


Perrin Harkins wrote:
 
 Sam Horrocks wrote:
    Don't agree.  You're equating the model with the implementation.
    Unix processes model concurrency, but when it comes down to it, if you
    don't have more CPUs than processes, you can only simulate concurrency.
 [...]
   This url:
 
  http://www.oreilly.com/catalog/linuxkernel/chapter/ch10.html
 
   says the default timeslice is 210ms (1/5th of a second) for Linux on a PC.
   There's also lots of good info there on Linux scheduling.
 
 Thanks for the info.  This makes much more sense to me now.  It sounds
  like using an MRU algorithm for process selection is automatically
 finding the sweet spot in terms of how many processes can run within the
 space of one request and coming close to the ideal of never having
 unused processes in memory.  Now I'm really looking forward to getting
 MRU and shared memory in the same package and seeing how high I can
 scale my hardware.
 
 - Perrin

-- 
www.RentZone.org



Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2001-01-06 Thread Perrin Harkins

Buddy Lee Haystack wrote:
 
 Does this mean that mod_perl's memory hunger will be curbed in the future
 using some of the neat tricks in Speedycgi?

Yes.  The upcoming mod_perl 2 (running on Apache 2) will use MRU to
select threads.  Doug demoed this at ApacheCon a few months back.

- Perrin



Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2001-01-06 Thread Les Mikesell


- Original Message -
From: "Sam Horrocks" [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: "mod_perl list" [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Saturday, January 06, 2001 6:32 AM
Subject: Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with
scripts that contain un-shared memory



  Right, but this also points out how difficult it is to get mod_perl
  tuning just right.  My opinion is that the MRU design adapts more
  dynamically to the load.

How would this compare to apache's process management when
using the front/back end approach?

  I'd agree that the size of one Speedy backend + one httpd would be the
  same or even greater than the size of one mod_perl/httpd when no memory
  is shared.  But because the speedycgi httpds are small (no perl in them)
  and the number of SpeedyCGI perl interpreters is small, the total memory
  required is significantly smaller for the same load.

Likewise, it would be helpful if you would always make the comparison
to the dual httpd setup that is often used for busy sites.   I think it must
really boil down to the efficiency of your IPC vs. access to the full
apache environment.

  Les Mikesell
 [EMAIL PROTECTED]




Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2001-01-06 Thread Joshua Chamas

Sam Horrocks wrote:
 
  Don't agree.  You're equating the model with the implementation.
  Unix processes model concurrency, but when it comes down to it, if you
  don't have more CPUs than processes, you can only simulate concurrency.
 

Hey Sam, nice module.  I just installed your SpeedyCGI for a good ol'
HelloWorld benchmark and it was a snap, well done.  I'd like to add to the
numbers below that a fair benchmark would be between mod_proxy in front 
of a mod_perl server and mod_speedycgi, as it would be a similar memory 
saving model ( this is how we often scale mod_perl )... both models would
end up forwarding back to a smaller set of persistent perl interpreters.

However, I did not do such a benchmark, so SpeedyCGI loses out a
bit for the extra layer it has to go through :(   This is based on the
suite at http://www.chamas.com/bench/hello.tar.gz, but I have not
included the speedy test in that yet.

 -- Josh

Test Name                      Test File  Hits/sec  Total Hits  Total Time  sec/Hits  Bytes/Hit
-----------------------------  ---------  --------  ----------  ----------  --------  ---------
Apache::Registry v2.01 CGI.pm  hello.cgi  451.9     27128 hits  60.03 sec   0.002213  216 bytes
Speedy CGI                     hello.cgi  375.2     22518 hits  60.02 sec   0.002665  216 bytes

Apache Server Header Tokens
---
(Unix)
Apache/1.3.14
OpenSSL/0.9.6
PHP/4.0.3pl1
mod_perl/1.24
mod_ssl/2.7.1



ap_ctx_get with libperl.so problem.

2001-01-06 Thread andrewl

This is definitely related to libperl.so ... When I comment out
LoadModule perl_module modules/libperl.so
in httpd.conf, then apache will start.  Unfortunately, I'd like to get
mod_perl working.

Does anyone know about this ap_ctx_get?   I understand that there could
be a bug in another module that I am loading in front of libperl.so, but
which one?

Andrew L.






Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2001-01-06 Thread Sam Horrocks

Right, but this also points out how difficult it is to get mod_perl
tuning just right.  My opinion is that the MRU design adapts more
dynamically to the load.
  
  How would this compare to apache's process management when
  using the front/back end approach?

 Same thing applies.  The front/back end approach does not change the
 fundamentals.

I'd agree that the size of one Speedy backend + one httpd would be the
same or even greater than the size of one mod_perl/httpd when no memory
is shared.  But because the speedycgi httpds are small (no perl in them)
and the number of SpeedyCGI perl interpreters is small, the total memory
required is significantly smaller for the same load.
  
  Likewise, it would be helpful if you would always make the comparison
  to the dual httpd setup that is often used for busy sites.   I think it must
  really boil down to the efficiency of your IPC vs. access to the full
  apache environment.

 The reason I don't include that comparison is that it's not fundamental
 to the differences between mod_perl and speedycgi or LRU and MRU that
 I have been trying to point out.  Regardless of whether you add a
 frontend or not, the mod_perl process selection remains LRU and the
 speedycgi process selection remains MRU.



Re: perl calendar application

2001-01-06 Thread James G Smith

Blue Lang [EMAIL PROTECTED] wrote:
On Sat, 6 Jan 2001 [EMAIL PROTECTED] wrote:

 On Fri, 5 Jan 2001, Jim Serio wrote:
  Why not just write one to suit your needs? If you want one

 I'd really like to hack on a freeware version, but it'd be nice to start
 with one that at least had some decent scheduling features so I could use

Eh, I'm prepared to take my lynching, but I'd just like to remind everyone
that there's nothing at all wrong with using PHP for things like this.
You'll never be a worse person for learning something new, and the
overhead required to manage a php+perl enabled apache is only minimally
more than managing one or the other.

IMHO, it's just lame to rewrite something for which there exist dozens of
good apps just because of the language in which it is written. You might
as well be arguing about GPL/BSD/Artistic at that point.

I have to agree.  At Texas A&M, we just went production with a
combination of TWIG (in php), custom php scripts to handle
directory service tasks (LDAP), php scripts creating a CGI
environment for some Perl scripts (Apache is 32-bit on Irix,
Oracle is 64-bit...), and a smattering of tcl (mail store
management), sh (kerberos), and Perl (PH management) scripts to
help out when php couldn't quite do it.

My rule of thumb is to use whichever language makes the task 
easiest.  Most languages can work together.
+-----------------------------------------------------------------
James Smith - [EMAIL PROTECTED] | http://www.jamesmith.com/
[EMAIL PROTECTED] | http://sourcegarden.org/
  [EMAIL PROTECTED]  | http://cis.tamu.edu/systems/opensystems/
+-----------------------------------------------------------------



Re: ap_ctx_get with libperl.so problem.

2001-01-06 Thread Andrew Ho

Hello,

AL> This is definitely related to libperl.so ... When I comment out
AL> LoadModule perl_module modules/libperl.so
AL> in httpd.conf, then apache will start.  Unfortunately, I'd like to get
AL> mod_perl working.
AL>
AL> Does anyone know about this ap_ctx_get?   I understand that there could
AL> be a bug in another module that I am loading in front of libperl.so, but
AL> which one?

This sounds like a DSO loading problem. Are you on Solaris? It's been my
experience that on Solaris, loading mod_perl as a DSO (especially if your
Perl already has a libperl.so) is usually a disaster. On our Solaris x86
boxen, we statically compile in mod_perl.
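
(The standard static build for mod_perl 1.x goes along these lines - the
source-tree paths here are just examples:)

 cd mod_perl-1.24
 perl Makefile.PL APACHE_SRC=../apache_1.3.14/src \
     DO_HTTPD=1 USE_APACI=1 EVERYTHING=1
 make
 make install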

The ap_ctx_get was one of our core dumps (I don't remember which
permutation of mod_perl as DSO or Perl with shared libperl.so this was,
sorry). Others included failed calls to top_module.

Humbly,

Andrew

--
Andrew Ho   http://www.tellme.com/   [EMAIL PROTECTED]
Engineer   [EMAIL PROTECTED]  Voice 650-930-9062
Tellme Networks, Inc.   1-800-555-TELLFax 650-930-9101
--




Re: Configtest yields bad news...

2001-01-06 Thread G.W. Haywood

Hi there,

On Sat, 6 Jan 2001, andrewl wrote:

 "Cannot load /usr/local/apache/modules/libperl.so into server: undefined symbol:
 ap_ctx_get"

 Any clues?

Have you read .../mod_perl/SUPPORT ?

73,
Ged.




mod_perl / SSI conflict?

2001-01-06 Thread Tom Kralidis

Hi,

I am running Apache 1.3.14 w/ mod_perl 1.24 on a Linux RedHat 6.2 machine.

I have recently begun to implement mod_perl for some of my content.

As per the "Writing Apache Modules with Perl and C" book, I wrote a footer
for each page, with the following directive in httpd.conf:

   <Files ~ "\.html?$">
SetHandler perl-script
PerlHandler Apache::Footer
   </Files>

This adds the footer nicely to all .htm or .html files.
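
(For readers without the book handy, a stripped-down sketch of what such
a footer handler does - an illustration only, not the book's code, and
the footer string is made up:)

 package Apache::Footer;
 use strict;
 use Apache::Constants qw(OK DECLINED);
 use Apache::File ();

 sub handler {
     my $r = shift;
     my $fh = Apache::File->new($r->filename) or return DECLINED;
     my $footer = '<hr>&copy; 2001 My Site';   # hypothetical footer text
     $r->content_type('text/html');
     $r->send_http_header;
     while (<$fh>) {
         s!(</body>)!$footer$1!i;              # inject before </BODY>
         $r->print($_);
     }
     return OK;
 }
 1;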

However, I have a few .shtml pages as part of my site, which have worked 
with no errors.  After making this change:

   <Files ~ "\.s?html?$">
SetHandler perl-script
PerlHandler Apache::Footer
   </Files>

..I imagined that .shtml files would show a footer at the bottom of pages.

However, what I found is that, for the .shtml pages, while the mod_perl 
footer works with no errors, the SSI does not work at all, and I can't find 
anything strange regarding this in error_log.

Has anyone had any similar problem?  Any advice / workarounds, etc. would be 
valued.

Thanks a lot

..Tom






Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2001-01-06 Thread Sam Horrocks

A few things:

- In your results, could you add the speedycgi version number (2.02),
  and the fact that this is using the mod_speedycgi frontend.
  The fork/exec frontend will be much slower on hello-world so I don't
  want people to get the wrong idea.  You may want to benchmark
  the fork/exec version as well.

- You may be able to eke out a little more performance by setting
  MaxRuns to 0 (infinite).  This is set for mod_speedycgi using the
  SpeedyMaxRuns directive, or on the command-line using "-r0" (see the
  sketch after this list).  This setting is similar to the
  MaxRequestsPerChild setting in apache.

- My tests show mod_perl/speedy much closer than yours do, even with
  MaxRuns at its default value of 500.  Maybe you're running on
  a different OS than I am - I'm using Redhat 6.2.  I'm also running
  one rev lower of mod_perl in case that matters.
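
(Concretely, the MaxRuns setting mentioned above - paths hypothetical:)

 # in httpd.conf, for the mod_speedycgi frontend
 SpeedyMaxRuns 0

 # or on the script's #! line, for the fork/exec frontend
 #!/usr/bin/speedy -w -- -r0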


  Hey Sam, nice module.  I just installed your SpeedyCGI for a good ol'
  HelloWorld benchmark and it was a snap, well done.  I'd like to add to the
  numbers below that a fair benchmark would be between mod_proxy in front 
  of a mod_perl server and mod_speedycgi, as it would be a similar memory 
  saving model ( this is how we often scale mod_perl )... both models would
  end up forwarding back to a smaller set of persistent perl interpreters.
  
  However, I did not do such a benchmark, so SpeedyCGI loses out a
  bit for the extra layer it has to do :(   This is based on the 
  suite at http://www.chamas.com/bench/hello.tar.gz, but I have not
  included the speedy test in that yet.
  
   -- Josh
  
  Test Name                      Test File  Hits/sec  Total Hits  Total Time  sec/Hits  Bytes/Hit
  -----------------------------  ---------  --------  ----------  ----------  --------  ---------
  Apache::Registry v2.01 CGI.pm  hello.cgi  451.9     27128 hits  60.03 sec   0.002213  216 bytes
  Speedy CGI                     hello.cgi  375.2     22518 hits  60.02 sec   0.002665  216 bytes
  
  Apache Server Header Tokens
  ---
  (Unix)
  Apache/1.3.14
  OpenSSL/0.9.6
  PHP/4.0.3pl1
  mod_perl/1.24
  mod_ssl/2.7.1



Re: perl calendar application

2001-01-06 Thread modperl

I'll say just a little now since I'm moving semi-slowly on this project.

I'm working on writing a suite that at some point will have a calendar
program in it.  The whole thing is perl based and the entire web interface
is going to be done in mod_perl.  The whole setup will be database-backed
and released under the GPL.

As for a timeline, I've been working more on the backend than any type of
display environment yet.  I'm starting to get into the front end now, so
by the end of the month, hopefully I will have something that can be shown
and that works reasonably well.

If you want more information about it, send me an e-mail offlist.  As for
the statement that it's not good to re-write something that exists many
times in another language that works well: once I have the release, it
should make a good deal more sense.

Scott


On Fri, 5 Jan 2001 [EMAIL PROTECTED] wrote:
 I've looked around the web for perl-based calendar applications for
 several hours.  There are a significant number out there -- I've
 personally checked out a dozen, but they are generally pretty pathetic.  
 Even most of the ones you can pay for are ugly and have very limited
 functionality.  WebTrend and Calcium are decent, but cost $400 for our
 situation and any modifications I make would be unsharable.  (This
 presumes that their source code is even legible and in any shape to hack
 on.)  Am I totally missing something?
 More generally, does anybody have a page of mod_perl business
 applications?  Even more generally, are there any mod_perl applications





Re: mod_perl / SSI conflict?

2001-01-06 Thread Andrew Ho

Tom,

TK> This adds the footer nicely to all .htm or .html files.
TK> However, I have a few .shtml pages as part of my site, which have worked
TK> with no errors.  After making this change:
TK>
TK>    <Files ~ "\.s?html?$">
TK> SetHandler perl-script
TK> PerlHandler Apache::Footer
TK>    </Files>
TK>
TK> ..I imagined that .shtml files would show a footer at the bottom of pages.
TK>
TK> However, what I found is that, for the .shtml pages, while the mod_perl
TK> footer works with no errors, the SSI does not work at all, and I can't find
TK> anything strange regarding this in error_log.

The regular SSI mechanism (through mod_include) won't work with mod_perl,
e.g. you don't get mod_perl output run through SSI out of the box.
Basically, you've told Apache that .shtml files are now to be processed by
a mod_perl PerlHandler, NOT by mod_include. So naturally your SSI is not
showing up in the output.

There are a couple workarounds; see the documentation for Apache::SSI (you
can find it from CPAN) for more information/references.
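
(One such workaround, sketched: let Apache::SSI process the SSI
directives itself under mod_perl - this assumes Apache::SSI is
installed:)

   <Files ~ "\.shtml?$">
SetHandler perl-script
PerlHandler Apache::SSI
   </Files>

Apache::SSI's docs also describe chaining handlers via Apache::Filter
("PerlSetVar Filter On" plus "PerlHandler Apache::Footer Apache::SSI"),
provided both handlers in the chain are Filter-aware.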

Humbly,

Andrew

--
Andrew Ho   http://www.tellme.com/   [EMAIL PROTECTED]
Engineer   [EMAIL PROTECTED]  Voice 650-930-9062
Tellme Networks, Inc.   1-800-555-TELLFax 650-930-9101
--





re newbie needs help

2001-01-06 Thread dave frost,,,

Hi guys,

Sorry for the brain-fart email; I now have it all working peachy.

Thanks for the help though.

dave




RE: Strange log entry, Apache child messed up afterwards

2001-01-06 Thread Paul G. Weiss

I see it also.  However, I only see it after I get an entry of

200 -

in the access log, i.e. the page returned status code 200, but
the content-length is not recorded.

I'm still in the process of investigating.  BTW the above log
entry occurs 90% of the time with a script that does a file
upload.

-P

 -Original Message-
 From: Gerd Kortemeyer [mailto:[EMAIL PROTECTED]]
 Sent: Friday, January 05, 2001 7:14 PM
 To: [EMAIL PROTECTED]
 Subject: Strange log entry, Apache child messed up afterwards
 
 
 Hi,
 
 Did anybody ever see a message like this in the error log 
 after an "internal
 server error"?
 
  [error] Undefined subroutine Apache::lonhomework::handler 
 called at /dev/null
 line 65535.
 
 No further entries.
 
 lonhomework is the mod_perl handler attached to a URL, and is 
 called directly.
 This happens intermittently and seemingly at random; however, 
 after this
 happened once, that Apache child is messed up and will do 
 this again and again
 when hit - so depending on which child happens to answer, you 
 get the correct
 reply or another 500.
 
 Distributions:
 
  Red Hat Linux release 6.2 (Zoot)
  Kernel 2.2.16-3smp on a 2-processor i686
  [www@s10 www]$ rpm -q mod_perl
  mod_perl-1.23-3
  [www@s10 www]$ rpm -q perl
  perl-5.00503-10
  [www@s10 www]$ rpm -q apache
  apache-1.3.14-2.6.2
 
 Thanks to all in advance!
 
 - Gerd.
 



Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2001-01-06 Thread Les Mikesell


- Original Message -
From: "Sam Horrocks" [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]; "mod_perl list" [EMAIL PROTECTED]
Sent: Saturday, January 06, 2001 4:37 PM
Subject: Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with
scripts that contain un-shared memory


Right, but this also points out how difficult it is to get mod_perl
 tuning just right.  My opinion is that the MRU design adapts more
 dynamically to the load.
  
   How would this compare to apache's process management when
   using the front/back end approach?

  Same thing applies.  The front/back end approach does not change the
  fundamentals.

It changes them drastically in the world of slow internet connections,
but perhaps not much in artificial benchmarks or LAN use.   I think
you can reduce the problem to:

 How much time do you spend in non-perl apache code vs. how
 much time you spend in perl code.

and the solution to:

 Only use the memory footprint of perl for the minimal time it is needed.

If your I/O is slow and your program complexity minimal, the bulk of
the wall-clock time is spent in i/o wait by non-perl apache code.  Using
a front-end proxy greatly reduces this time (and correspondingly the
ratio of time spent in non-perl code) for the backend where it matters
because you are tying up a copy of perl in memory. Likewise, increasing
the complexity of the perl code will reduce this ratio, reducing the
potential for saving memory regardless of what you do, so benchmarking
a trivial perl program will likely be misleading.
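
(The front-end/back-end setup being discussed is typically just a
lightweight httpd proxying to a perl-laden one - the port number here is
hypothetical:)

 # in the front-end httpd.conf (mod_proxy)
 ProxyPass /perl/ http://127.0.0.1:8080/perl/
 ProxyPassReverse /perl/ http://127.0.0.1:8080/perl/

The small front end absorbs the slow client I/O, so the big backend
process is freed as soon as it hands off the response.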

 I'd agree that the size of one Speedy backend + one httpd would be the
 same or even greater than the size of one mod_perl/httpd when no memory
 is shared.  But because the speedycgi httpds are small (no perl in them)
 and the number of SpeedyCGI perl interpreters is small, the total memory
 required is significantly smaller for the same load.

   Likewise, it would be helpful if you would always make the comparison
   to the dual httpd setup that is often used for busy sites.   I think it
   must really boil down to the efficiency of your IPC vs. access to the
   full apache environment.

  The reason I don't include that comparison is that it's not fundamental
  to the differences between mod_perl and speedycgi or LRU and MRU that
  I have been trying to point out.  Regardless of whether you add a
  frontend or not, the mod_perl process selection remains LRU and the
  speedycgi process selection remains MRU.

I don't think I understand what you mean by LRU.   When I view the
Apache server-status with ExtendedStatus On,  it appears that
the backend server processes recycle themselves as soon as they
are free instead of cycling sequentially through all the available
processes.   Did you mean to imply otherwise or are you talking
about something else?

   Les Mikesell
 [EMAIL PROTECTED]





Single proc -> multi proc

2001-01-06 Thread modperl

I've got 4 new machines coming in around the 22nd.  I'll have physical
access to them for two weeks before we colo them.  Probably the easiest
way to determine mod_perl's scalability by going multiproc on linux
would be for me to test them.  They are dual proc machines, but I can pull
a proc out of them for testing pretty easily during the two weeks that we
have them here.

Thoughts?
Recommended scripts?
Methods?

As for ab being coarse, I'm not sure I'd agree totally with that.  It
allows a fairly good amount of control.  There is also one called
sslclient that is actually worse, except it does ssl.
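
(For example - URL hypothetical:

 ab -n 10000 -c 1 http://localhost/perl/hello.cgi
 ab -n 10000 -c 4 http://localhost/perl/hello.cgi

where -n is the total number of requests and -c the concurrency level.)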
Scott

On Thu, 4 Jan 2001, Blue Lang wrote:
 On Thu, 4 Jan 2001, Roger Espel Llima wrote:
  JR Mayberry [EMAIL PROTECTED] wrote:
   Linux does serious injustice to mod_perl. Anyone who uses Linux knows
   how horrible it is on SMP, I think some tests showed it uses as little as
   25% of the second processor...
  A simple benchmark with 'ab' shows the number of requests per second
  almost double when the concurrency is increased from 1 to 2.  With a
  concurrency of 4, the number of requests per second increases to
  about 3.2 times the original, which is not bad at all considering
  that these are dynamic requests with DB queries.
 Eh, ab isn't really made as anything other than the most coarsely-grained
 of benchmarks. Concurrency testing is useless because it will measure the
 ratio of requests/second/processor, not the scalability of requests from
 single to multiple processors.