Re: Installing mod_perl-1.24_01 w/o super user and with global perl

2001-01-04 Thread Alexander Farber (EED)

Ian Kallen wrote:
 
 If I were you, I'd install my own perl in /home/eedalf, create
 /home/eedlf/apache and then do (assuming ~/bin/perl is before
 /opt/local/bin/perl in your path) something like:

Thanks, that's how I had it before - with Perl 5.6.0, Apache 
1.1.3 and mod_perl 1.24 in my home dir. However this time I'd 
like to use the system-wide Perl - because of my disk quota.

 perl Makefile.PL \
  APACHE_PREFIX=/home/eedalf/apache \
  APACHE_SRC=/home/eedalf/src/apache_1.3.14 \
  DO_HTTPD=1 \
  USE_APACI=1 \
  LIB=/home/eedalf/lib/perl \
  EVERYTHING=1

  is this error message, when calling "make install"



Re: Installing mod_perl-1.24_01 w/o super user and with global perl

2001-01-04 Thread Alexander Farber (EED)

Sorry, s#1\.1\.3#1.3.13#



RE: How do you run libapreq-0.31/eg/perl/file_upload.pl ?

2001-01-04 Thread Geoffrey Young



 -Original Message-
 From: Alexander Farber (EED) [mailto:[EMAIL PROTECTED]]
 Sent: Thursday, January 04, 2001 4:58 AM
 To: [EMAIL PROTECTED]
 Subject: How do you run libapreq-0.31/eg/perl/file_upload.pl ?
 
[snip]
 
 2) After putting 
 
 PerlModule Apache::Request
 <Location /cgi-bin/file_upload.pl>
 SetHandler perl-script
 PerlHandler Apache::Request
 </Location>

that won't work - Apache::Request is not a PerlHandler.  remember that
Perl*Handlers default to calling handler(), which Apache::Request doesn't
have (which is exactly what your error_log says)

 
 into my httpd.conf, I get the following in my error_log:
 
 [Thu Jan  4 10:47:39 2001] [notice] Apache/1.3.14 (Unix) mod_perl/1.24_01 configured -- resuming normal operations
 [Thu Jan  4 10:47:51 2001] [error] Undefined subroutine Apache::Request::handler called.
 
[snip]
 3) And the:
 
 PerlModule Apache::Request
 PerlModule Apache::Registry
 <Location /cgi-bin/file_upload.pl>
 SetHandler perl-script
 PerlHandler Apache::Registry
 Options ExecCGI
 PerlSendHeader On
 </Location>

well, almost...  either change that to a Files directive or remove the
filename from the Location directive, so that Apache::Request applies to
all scripts within cgi-bin...

see http://perl.apache.org/guide/config.html for more details...
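
For what it's worth, a sketch of the Location-without-filename variant (same directives as in the quoted config, just no script name in the Location, so the handler setup covers everything under /cgi-bin):

    PerlModule Apache::Request
    PerlModule Apache::Registry
    <Location /cgi-bin>
    SetHandler perl-script
    PerlHandler Apache::Registry
    Options ExecCGI
    PerlSendHeader On
    </Location>

The Files alternative would wrap the same SetHandler/PerlHandler lines in something like <Files ~ "\.pl$"> instead.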

 
 Displays the web form, but nothing happens (same form
 displayed again), when I click the "Process File" button 
 and nothing is shown in the error_log.
 
 Also I sometimes have to reload several times to see a change.
 
 Should I wrap the code in the file_upload.pl into a "package"?

no - keep it simple while you figure things out...

 Can't I use Apache::Request w/o Apache::Registry? 

Apache::Request can be used without Apache::Registry, as long as you use it
properly.  They aren't the same type of module - compare man Apache::Request
and man Apache::Registry
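
A minimal sketch of what "using it properly" can look like: a content handler of your own (the package name My::FileUpload here is made up) that defines handler() and calls Apache::Request inside it:

    package My::FileUpload;              # hypothetical module name
    use strict;
    use Apache::Constants qw(OK);
    use Apache::Request ();

    sub handler {
        my $r   = shift;                     # Apache request object
        my $apr = Apache::Request->new($r);  # parses GET/POST data (incl. uploads)
        $r->send_http_header('text/plain');
        $r->print("params: ", join(', ', $apr->param), "\n");
        return OK;
    }
    1;

and then point PerlHandler at My::FileUpload rather than at Apache::Request itself.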

 Am I loading
 the modules the wrong way?
 
 I should of course read and re-read the complete guide and 
 finish the Eagle book (I will),

yes :)

 but maybe someone can provide 
 me a small kick-start? Thank you

HTH

--Geoff

 
 Regards
 Alex
 



Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2001-01-04 Thread Sam Horrocks

Sorry for the late reply - I've been out for the holidays.

  By the way, how are you doing it?  Do you use a mutex routine that works
  in LIFO fashion?

 Speedycgi uses separate backend processes that run the perl interpreters.
 The frontend processes (the httpd's that are running mod_speedycgi)
 communicate with the backends, sending over the request and getting the output.

 Speedycgi uses some shared memory (an mmap'ed file in /tmp) to keep track
 of the backends and frontends.  This shared memory contains the queue.
 When backends become free, they add themselves at the front of this queue.
 When the frontends need a backend they pull the first one from the front
 of this list.

  
I am saying that since SpeedyCGI uses MRU to allocate requests to perl
interpreters, it winds up using a lot fewer interpreters to handle the
same number of requests.
  
  What I was saying is that it doesn't make sense for one to need fewer
  interpreters than the other to handle the same concurrency.  If you have
  10 requests at the same time, you need 10 interpreters.  There's no way
  speedycgi can do it with fewer, unless it actually makes some of them
  wait.  That could be happening, due to the fork-on-demand model, although
  your warmup round (priming the pump) should take care of that.

 What you say would be true if you had 10 processors and could get
 true concurrency.  But on single-cpu systems you usually don't need
 10 unix processes to handle 10 requests concurrently, since they get
 serialized by the kernel anyways.  I'll try to show how mod_perl handles
 10 concurrent requests, and compare that to mod_speedycgi so you can
 see the difference.

 For mod_perl, let's assume we have 10 httpd's, h1 through h10,
 when the 10 concurrent requests come in.  h1 has acquired the mutex,
 and h2-h10 are waiting (in order) on the mutex.  Here's how the cpu
 actually runs the processes:

h1 accepts
h1 releases the mutex, making h2 runnable
h1 runs the perl code and produces the results
h1 waits for the mutex

h2 accepts
h2 releases the mutex, making h3 runnable
h2 runs the perl code and produces the results
h2 waits for the mutex

h3 accepts
...

 This is pretty straightforward.  Each of h1-h10 run the perl code
 exactly once.  They may not run exactly in this order since a process
 could get pre-empted, or blocked waiting to send data to the client,
 etc.  But regardless, each of the 10 processes will run the perl code
 exactly once.

 Here's the mod_speedycgi example - it too uses httpd's h1-h10, and they
 all take turns running the mod_speedycgi frontend code.  But the backends,
 where the perl code is, don't have to all be run fairly - they use MRU
 instead.  I'll use b1 and b2 to represent 2 speedycgi backend processes,
 already queued up in that order.

 Here's a possible speedycgi scenario:

h1 accepts
h1 releases the mutex, making h2 runnable
h1 sends a request to b1, making b1 runnable

h2 accepts
h2 releases the mutex, making h3 runnable
h2 sends a request to b2, making b2 runnable

b1 runs the perl code and sends the results to h1, making h1 runnable
b1 adds itself to the front of the queue

h3 accepts
h3 releases the mutex, making h4 runnable
h3 sends a request to b1, making b1 runnable

b2 runs the perl code and sends the results to h2, making h2 runnable
b2 adds itself to the front of the queue

h1 produces the results it got from b1
h1 waits for the mutex

h4 accepts
h4 releases the mutex, making h5 runnable
h4 sends a request to b2, making b2 runnable

b1 runs the perl code and sends the results to h3, making h3 runnable
b1 adds itself to the front of the queue

h2 produces the results it got from b2
h2 waits for the mutex

h5 accepts
h5 releases the mutex, making h6 runnable
h5 sends a request to b1, making b1 runnable

b2 runs the perl code and sends the results to h4, making h4 runnable
b2 adds itself to the front of the queue

 This may be hard to follow, but hopefully you can see that the 10 httpd's
 just take turns using b1 and b2 over and over.  So, the 10 concurrent
 requests end up being handled by just two perl backend processes.  Again,
 this is simplified.  If the perl processes get blocked, or pre-empted,
 you'll end up using more of them.  But generally, the LIFO will cause
 SpeedyCGI to sort-of settle into the smallest number of processes needed for
 the task.

 The difference between the two approaches is that the mod_perl
 implementation forces unix to use 10 separate perl processes, while the
 mod_speedycgi implementation sort-of decides on the fly how many
 different processes are needed.
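
A toy way to see the effect (plain Perl, not SpeedyCGI code; purely illustrative): if finished backends go back on the front of the queue and requests are handed out from the front, the same backend keeps getting reused.

    # Toy MRU queue: a finished backend goes back on the *front*,
    # and the next request is taken from the front again.
    my @queue = qw(b1 b2 b3 b4 b5);    # five idle backends
    my %used;
    for my $req (1 .. 20) {
        my $backend = shift @queue;    # most-recently-used backend
        $used{$backend}++;             # "run the perl code"
        unshift @queue, $backend;      # LIFO: back onto the front
    }
    print "$_ handled $used{$_} requests\n" for sort keys %used;
    # With strictly serialized requests only b1 ever runs; a fair/LRU
    # queue (push instead of unshift) would spread the 20 requests
    # across all five backends.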

Please let me know what you think I should change.  So far my
benchmarks only show one trend, but if you can tell me specifically
what I'm doing wrong (and it's something reasonable), I'll try it.
  
  Try setting MinSpareServers 

RE: mod_perl confusion.

2001-01-04 Thread Geoffrey Young



 -Original Message-
 From: Tom Karlsson [mailto:[EMAIL PROTECTED]]
 Sent: Thursday, January 04, 2001 8:09 AM
 To: [EMAIL PROTECTED]
 Subject: mod_perl confusion.
 
 
 Hello All,
 
 I've recently looked through the mod_perl mail archives in 
 order to find someone who has/had the same problem as me. 
 
 It seems that a lot of people have had problems with 
 virtualhost IP and location in situations where both 
 virtualhost sections have a similar URI and scriptname.
 
 This problem causes random execution of either of the scripts 
 no matter what virtualhost you're accessing. Like
 
 exampleA.com/cgi-bin/script
 exampleB.com/cgi-bin/script

try setting
$Apache::Registry::NameWithVirtualHost = 1;
in your startup.pl and make sure that you are use()ing Apache::Registry in
there as well...

and see if that fixes things...
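
i.e. something along these lines in the file you pull in with PerlRequire (sketch):

    # startup.pl
    use Apache::Registry ();
    $Apache::Registry::NameWithVirtualHost = 1;
    1;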

HTH

--Geoff
 
[snip]
 Thanks.
 
 Friendly Regards
 /TK
 



Re: Linux Hello World Benchmarks: +PHP,JSP,ePerl

2001-01-04 Thread Roger Espel Llima

JR Mayberry [EMAIL PROTECTED] wrote:
 The Modperl handler benchmark, which was done on a dual P3 500mhz on
 Linux does serious injustice to mod_perl. Anyone who uses Linux knows
 how horrible it is on SMP, I think some tests showed it uses as little as
 25% of the second processor..

It's an old post, but I simply cannot let this one pass uncommented.
I run a busy Apache/mod_perl server on a 4-way SMP Linux box (kernel
2.2.13 from VA Linux), and it sure seems to be using all CPUs quite
effectively.

A simple benchmark with 'ab' shows the number of requests per second
almost double when the concurrency is increased from 1 to 2.  With a
concurrency of 4, the number of requests per second increases to
about 3.2 times the original, which is not bad at all considering
that these are dynamic requests with DB queries.
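
(For reference, comparisons like that are just repeated ab runs with increasing -c; the URL and request count below are placeholders, not the actual ones used:)

    ab -n 1000 -c 1 http://www.example.com/dynamic/page
    ab -n 1000 -c 2 http://www.example.com/dynamic/page
    ab -n 1000 -c 4 http://www.example.com/dynamic/page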

Anyway, I wouldn't expect the OS's SMP to be the limiting factor on
Apache's dynamic page performance.  Apache uses multiple processes,
and dynamic page generation is generally CPU bound, not I/O bound.

-- 
Roger Espel Llima, [EMAIL PROTECTED]
http://www.iagora.com/~espel/index.html



Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2001-01-04 Thread Sam Horrocks

This is planned for a future release of speedycgi, though there will
probably be an option to set a maximum number of bytes that can be
bufferred before the frontend contacts a perl interpreter and starts
passing over the bytes.

Currently you can do this sort of acceleration with script output if you
use the "speedy" binary (not mod_speedycgi), and you set the BufsizGet option
high enough so that it's able to buffer all the output from your script.
The perl interpreter will then be able to detach and go handle other
requests while the frontend process waits for the output to drain.

  Perrin Harkins wrote:
   What I was saying is that it doesn't make sense for one to need fewer
   interpreters than the other to handle the same concurrency.  If you have
   10 requests at the same time, you need 10 interpreters.  There's no way
   speedycgi can do it with fewer, unless it actually makes some of them
   wait.  That could be happening, due to the fork-on-demand model, although
   your warmup round (priming the pump) should take care of that.
  
  I don't know if Speedy fixes this, but one problem with mod_perl v1 is that
  if, for instance, a large POST request is being uploaded, this takes a whole
  perl interpreter while the transaction is occurring. This is at least one
  place where a Perl interpreter should not be needed.
  
  Of course, this could be overcome if an HTTP Accelerator is used that takes
  the whole request before passing it to a local httpd, but I don't know of
  any proxies that work this way (AFAIK they all pass the packets as they
  arrive).



Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2001-01-04 Thread Les Mikesell


- Original Message -
From: "Sam Horrocks" [EMAIL PROTECTED]
To: "Perrin Harkins" [EMAIL PROTECTED]
Cc: "Gunther Birznieks" [EMAIL PROTECTED]; "mod_perl list"
[EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Thursday, January 04, 2001 6:56 AM
Subject: Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl withscripts
that contain un-shared memory


  
   Are the speedycgi+Apache processes smaller than the mod_perl
   processes?  If not, the maximum number of concurrent requests you can
   handle on a given box is going to be the same.

  The size of the httpds running mod_speedycgi, plus the size of speedycgi
  perl processes is significantly smaller than the total size of the httpd's
  running mod_perl.

That would be true if you only ran one mod_perl'd httpd, but can you
give a better comparison to the usual setup for a busy site where
you run a non-mod_perl lightweight front end and let mod_rewrite
decide what is proxied through to the larger mod_perl'd backend,
letting apache decide how many backends you need to have
running?

  The reason for this is that only a handful of perl processes are required by
  speedycgi to handle the same load, whereas mod_perl uses a perl interpreter
  in all of the httpds.

I always see at least a 10-1 ratio of front-to-back end httpd's when serving
over the internet.   One effect that is difficult to benchmark is that clients
connecting over the internet are often slow and will hold up the process
that is delivering the data even though the processing has been completed.
The proxy approach provides some buffering and allows the backend
to move on more quickly.  Does speedycgi do the same?

  Les Mikesell
[EMAIL PROTECTED]





Rewrite arguments?

2001-01-04 Thread Les Mikesell

This may or may not be a mod_perl question: 
I want to change the way an existing request is handled and it can be done
by making a proxy request to a different host but the argument list must
be slightly different.It is something that a regexp substitution can
handle and I'd prefer for the front-end server to do it via mod_rewrite
but I can't see any way to change the existing arguments via RewriteRules.
To make the new server accept the old request I'll have to modify the name
of one of the arguments and add some extra ones.  I see how to make
mod_rewrite add something, but not modify the existing part. Will I
have to let mod_perl proxy with LWP instead or have I missed something
about mod_rewrite?   (Modifying the location portion is easy, but the
argument list seems to be handled separately).
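
One possibility, offered as an untested sketch: mod_rewrite can capture the query string with a RewriteCond and rebuild it in the substitution (a '?' in the target replaces the old arguments). The argument names and hosts below are invented:

    RewriteEngine On
    # rename "oldname=" to "newname=", keep the rest, add an extra argument,
    # and proxy the result to the other host
    RewriteCond %{QUERY_STRING} ^(.*)oldname=([^&]*)(.*)$
    RewriteRule ^/old/path$ http://other.example.com/new/path?%1newname=%2%3&extra=1 [P,L]

This needs mod_proxy available for the [P] flag; otherwise mod_perl plus LWP is the fallback you mention.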

Les Mikesell
  [EMAIL PROTECTED]





[bordering on OT] Re: Linux Hello World Benchmarks: +PHP,JSP,ePerl

2001-01-04 Thread Blue Lang

On Thu, 4 Jan 2001, Roger Espel Llima wrote:

 JR Mayberry [EMAIL PROTECTED] wrote:
  Linux does serious injustice to mod_perl. Anyone who uses Linux knows
  how horrible it is on SMP, I think some tests showed it uses as little as
  25% of the second processor..

 A simple benchmark with 'ab' shows the number of requests per second
 almost double when the concurrency is increased from 1 to 2.  With a
 concurrency of 4, the number of requests per second increases to
 about 3.2 times the original, which is not bad at all considering
 that these are dynamic requests with DB queries.

Eh, ab isn't really made as anything other than the most coarsely-grained
of benchmarks. Concurrency testing is useless because it will measure the
ratio of requests/second/processor, not the scalability of requests from
single to multiple processors.

IOW, you would see almost exactly that same increase in req/second on a
single processor, most likely, unless you have a really slow machine.
You'd have to tune your load to give you one req/second/processor and then
go from there for it to mean anything.

Of course the original poster's statement on linux using only 25% of a
second CPU is a fuddy and false generalization, but that's a different
story. :P

-- 
   Blue Lang, Unix Voodoo Priest
   202 Ashe Ave, Apt 3, Raleigh, NC.  919 835 1540
"I was born in a city of sharks and sailors!" - June of 44




Re: the edge of chaos

2001-01-04 Thread G.W. Haywood

Hi there,

On Thu, 4 Jan 2001, Justin wrote:

 So dropping maxclients on the front end means you get clogged
 up with slow readers instead, so that isnt an option..

Try looking for Randall's posts in the last couple of weeks.  He has
some nice stuff you might want to have a play with.  Sorry, I can't
remember the thread but if you look in Geoff's DIGEST you'll find it.

Thanks again Geoff!

73,
Ged.




RE: the edge of chaos

2001-01-04 Thread Geoffrey Young



 -Original Message-
 From: G.W. Haywood [mailto:[EMAIL PROTECTED]]
 Sent: Thursday, January 04, 2001 10:35 AM
 To: Justin
 Cc: [EMAIL PROTECTED]
 Subject: Re: the edge of chaos
 
 
 Hi there,
 
 On Thu, 4 Jan 2001, Justin wrote:
 
  So dropping maxclients on the front end means you get clogged
  up with slow readers instead, so that isnt an option..
 
 Try looking for Randall's posts in the last couple of weeks.  He has
 some nice stuff you might want to have a play with.  Sorry, I can't
 remember the thread but if you look in Geoff's DIGEST you'll find it.

I think you mean this:
http://forum.swarthmore.edu/epigone/modperl/phoorimpjun

and this thread:
http://forum.swarthmore.edu/epigone/modperl/zhayflimthu

(which is actually a response to Justin :)

 
 Thanks again Geoff!

glad to be of service :)

--Geoff

 
 73,
 Ged.
 



Re: the edge of chaos

2001-01-04 Thread Vivek Khera

 "J" == Justin  [EMAIL PROTECTED] writes:

J When things get slow on the back end, the front end can fill with
J 120 *requests* .. all queued for the 20 available modperl slots..
J hence long queues for service, results in nobody getting anything,

You simply don't have enough horsepower to serve your load, then.

Your options are: get more RAM, get faster CPU, make your application
smaller by sharing more code (pretty much whatever else is in the
tuning docs), or split your load across multiple machines.

If your front ends are doing nothing but buffering the pages for the
mod_perl backends, then you probably need to lower the ratio of
frontends to back ends from your 6 to 1 to something like 3 to 1.

-- 
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D.    Khera Communications, Inc.
Internet: [EMAIL PROTECTED]   Rockville, MD   +1-240-453-8497
AIM: vivekkhera Y!: vivek_khera   http://www.khera.org/~vivek/



Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2001-01-04 Thread Roger Espel Llima

"Jeremy Howard" [EMAIL PROTECTED] wrote:
 A backend server can realistically handle multiple frontend requests, since
 the frontend server must stick around until the data has been delivered
 to the client (at least that's my understanding of the lingering-close
 issue that was recently discussed at length here). 

I won't enter the {Fast,Speedy}-CGI debates, having never played
with these, but the picture you're painting about delivering data to
the clients is just a little bit too bleak.

With a frontend/backend mod_perl setup, the frontend server sticks
around for a second or two as part of the lingering_close routine,
but it doesn't have to wait for the client to finish reading all the
data.  Fortunately enough, spoonfeeding data to slow clients is
handled by the OS kernel.

-- 
Roger Espel Llima, [EMAIL PROTECTED]
http://www.iagora.com/~espel/index.html



Re: mod_perl confusion.

2001-01-04 Thread Vivek Khera

 "TK" == Tom Karlsson [EMAIL PROTECTED] writes:

TK I've recently looked through the mod_perl mail archives in order to find
TK someone who has/had the same problem as me. 

You should have found discussion about the variable
$Apache::Registry::NameWithVirtualHost in the archives.  Curiously, it
seems not to be documented in the perldoc for Apache::Registry.

It defaults to 1 in Registry version 2.01 so unless you're using a
*really* old mod_perl it should work as expected with virtual hosts.

-- 
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D.    Khera Communications, Inc.
Internet: [EMAIL PROTECTED]   Rockville, MD   +1-240-453-8497
AIM: vivekkhera Y!: vivek_khera   http://www.khera.org/~vivek/



missing docs

2001-01-04 Thread Vivek Khera

In answering another question today, I noticed that the variable
$Apache::Registry::NameWithVirtualHost is not documented in the
perldoc for Apache::Registry.

While scanning the Registry.pm file, I further noticed that there is a
call to $r->get_server_name for the virtual host name.  This too is
not documented in perldoc Apache.  The only documented way I see to
get this from the $r object is to use $r->server->server_hostname.

Should $r->get_server_name() be documented, or is it a "private"
method?  It seems wasteful to create an Apache::Server object just to
fetch the virtual host name.
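
For anyone following along, the two calls in question look like this inside a handler (sketch):

    my $r = shift;                             # Apache request object
    my $name_a = $r->get_server_name;          # the call Registry.pm makes
    my $name_b = $r->server->server_hostname;  # the documented route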

-- 
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D.    Khera Communications, Inc.
Internet: [EMAIL PROTECTED]   Rockville, MD   +1-240-453-8497
AIM: vivekkhera Y!: vivek_khera   http://www.khera.org/~vivek/



RE: How do you run libapreq-0.31/eg/perl/file_upload.pl ?

2001-01-04 Thread Michael

  but maybe someone can provide 
  me a small kick-start? Thank you
 
short answer -- you don't need anything more than some simple 
scripting. Nothing at all in the server start up file.

client html file

<form action="myupload.plx" enctype='MULTIPART/FORM-DATA'
  method="post">
<input type="file" name="myfile" size="15">
<input type="submit">
</form>

server side
BEGIN {
  use Apache;
  use Apache::Request;
# etc.
}

my $r = Apache->request;
my $apr = Apache::Request->new($r);

my %qs;
@_ = $apr->param;   # get the query parameters
foreach(@_) {
  $qs{$_} = $apr->param($_);
}

open(F,'>somefiletosave.xxx');        # open the output file for writing
my $uploadhandle = $apr->upload->fh;  # filehandle for the uploaded data
select F;
$| = 1;                               # unbuffer the output file
select STDOUT;
while(read($uploadhandle,$_,1000)) {
  print F $_;
}
close F;
# bye now.
1;

This is short and sweet. It's up to you to insert appropriate error 
checking and failure escapes
[EMAIL PROTECTED]



Re: [bordering on OT] Re: Linux Hello World Benchmarks: +PHP,JSP,ePerl

2001-01-04 Thread Roger Espel Llima

On Thu, Jan 04, 2001 at 09:55:39AM -0500, Blue Lang wrote:
 Eh, ab isn't really made as anything other than the most coarsely-grained
 of benchmarks. Concurrency testing is useless because it will measure the
 ratio of requests/second/processor, not the scalability of requests from
 single to multiple processors.

Yeah, I agree 'ab' is a pretty coarse benchmark.  However, it does
in a way measure how much the various processors are helping,
because running ab with -c 1 should pretty much ensure that apache
only uses one processor at a time (except for a slight overlap while
one process does the logging and another could be reading the next
request from another processor), and similarly -c 2 should let
apache use 2 processors at one given time.  All approximately, of
course.

Anyway, on that 4way server it works that way; the requests per
second increase quickly with the concurrency up to 4, but don't
increase anymore after that.  That is serving relatively slow
dynamic pages; with static content I'd expect more rapidly
diminishing returns.

-- 
Roger Espel Llima, [EMAIL PROTECTED]
http://www.iagora.com/~espel/index.html



seg faults/bus errors

2001-01-04 Thread stujin


 Hi,
 
 I work on a high-traffic site that uses apache/mod_perl, and we're
 seeing some occasional segmentation faults and bus errors in our
 apache error logs.  These errors sometimes result in the entire apache
 process group going down, though it seems to me that the problems
 originate within one of apache's child processes (maybe shared memory
 is getting corrupted somehow?).
 
 I've searched through the archive of this list for similar situations,
 and I found a lot of questions about seg faults, but none quite
 matching our problem.
 
 We installed some signal handlers in our perl code that trap SIGSEGV
 and SIGBUS and then dump a perl stack trace to a log file (see below).
 Using this stack information, we can track the point of failure to a
 call to perl's "fork()" inside the IPC::Open3 standard module.  Since
 it seems very unlikely that fork() is broken, we're speculating that
 there's some funny business going on prior to the fork that's putting
 the process into an unstable state which prevents it from forking
 successfully.
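 
 (For reference, a handler like that can be as simple as the sketch
 below; Carp::longmess gives the Perl-level stack, and the log path
 here is just a placeholder:)
 
     use Carp ();
     for my $sig (qw(SEGV BUS)) {
         $SIG{$sig} = sub {
             open LOG, ">>/tmp/stacktrace.log";   # placeholder path
             print LOG "SIG$sig caught at:\n", Carp::longmess();
             close LOG;
             die "SIG$sig";
         };
     }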
 
 Due to a lot of sloppy, pre-existing Perl code, we're using PerlRun
 (not Registry) with "PerlRunOnce On" (children die after servicing one
 hit).
 
 Does anyone have any suggestions about what might be going on here?
 
 Thanks! 
 Justin Caballero
 
 
 The following are: a backtrace from a core dump, the stack trace from
 the perl signal handler, and the version information for our
 environment.
 
 -
 
 apache1.3.12
 mod_perl1.24
 
 -
 
 (gdb) where
 #0  0xe765c in Perl_sv_free ()
 #1  0xd89f4 in Perl_hv_free_ent ()
 #2  0xd8bd8 in Perl_hv_clear ()
 #3  0xd8b3c in Perl_hv_clear ()
 #4  0x10e760 in Perl_pp_fork ()
 #5  0x11b1d0 in Perl_runops_standard ()
 #6  0xa49e8 in perl_call_sv ()
 #7  0xa4490 in perl_call_method ()
 #8  0x2aea8 in perl_call_handler ()
 #9  0x2a6e0 in perl_run_stacked_handlers ()
 #10 0x28da0 in perl_handler ()
 #11 0x6e0f8 in ap_invoke_handler ()
 #12 0x8a8e8 in ap_some_auth_required ()
 #13 0x8a96c in ap_process_request ()
 #14 0x7e3e4 in ap_child_terminate ()
 #15 0x7e770 in ap_child_terminate ()
 #16 0x7ece8 in ap_child_terminate ()
 #17 0x7f54c in ap_child_terminate ()
 #18 0x7fe80 in main ()
 
 -
 
 SIGSEGV caught at:
 IPC::Open3, /opt/perl-5.005_03/lib/5.00503/IPC/Open3.pm, 102,
 main::cgi_stack_dump
 IPC::Open3, /opt/perl-5.005_03/lib/5.00503/IPC/Open3.pm, 150,
 IPC::Open3::xfork
 IPC::Open2, /opt/perl-5.005_03/lib/5.00503/IPC/Open2.pm, 91,
 IPC::Open3::_open3
 Cyxsub, /prod/APP/vobs/ssp/cgi-bin/Cyxsub.pm, 69, IPC::Open2::open2
 Cyxsub, /prod/APP/vobs/ssp/cgi-bin/Cyxsub.pm, 152, Cyxsub::sd_connect
 main,
 /prod/ssp_2.8_mp_prod_sv.001212/vobs/ssp_perl/cgi-bin/hy_inquiry_zc.pl,
 43, Cyxsub::xs_ods_main
 main,
 /prod/ssp_2.8_mp_prod_sv.001212/vobs/ssp_perl/cgi-bin/cp_pers-io_zc.pl,
 39, main::obtain_data
 main, /prod/APP/vobs/ssp/cgi-bin/cp_pers_ub.pl, 285,
 main::cp_pers_io_zc_get_data_from_host
 main, /prod/APP/vobs/ssp/cgi-bin/cp_pers_ub.pl, 208,
 main::cp_pers_ub_online_update
 main, /prod/APP/vobs/ssp/cgi-bin/cp_pers_ub.pl, 70,
 main::cp_pers_ub_update_ok
 main, /prod/APP/vobs/ssp/cgi-bin/cp_pers_ub.pl, 39, main::cp_pers_ub_main
 Apache::PerlRun,
 /opt/perl-5.005_03/lib/site_perl/5.005/sun4-solaris/Apache/PerlRun.pm,
 122, (eval)
 Apache::PerlRun,
 /opt/perl-5.005_03/lib/site_perl/5.005/sun4-solaris/Apache/PerlRun.pm,
 296, Apache::PerlRun::compile
 Apache::Constants,
 /prod/ssp_2.8_mp_prod_sv.001212/vobs/ssp_perl/cgi-bin/opa_common_zc.pl, 0,
 Apache::PerlRun::handler
 Apache::Constants,
 /prod/ssp_2.8_mp_prod_sv.001212/vobs/ssp_perl/cgi-bin/opa_common_zc.pl, 0,
 (eval)
 
 -
 
  perl -V
 Summary of my perl5 (5.0 patchlevel 5 subversion 3) configuration:
   Platform:
 osname=solaris, osvers=2.6, archname=sun4-solaris
 uname='sunos atlas 5.6 generic_105181-19 sun4u sparc sunw,ultra-250 '
 hint=recommended, useposix=true, d_sigaction=define
 usethreads=undef useperlio=undef d_sfio=undef
   Compiler:
 cc='gcc -B/usr/ccs/bin/', optimize='-O', gccversion=2.95.2 19991024
 (release)
 cppflags='-I/usr/local/include'
 ccflags ='-I/usr/local/include'
 stdchar='unsigned char', d_stdstdio=define, usevfork=false
 intsize=4, longsize=4, ptrsize=4, doublesize=8
 d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=16
 alignbytes=8, usemymalloc=y, prototype=define
   Linker and Libraries:
 ld='gcc -B/usr/ccs/bin/', ldflags =' -L/usr/local/lib'
 libpth=/usr/local/lib /lib /usr/lib /usr/ccs/lib
 libs=-lsocket -lnsl -ldl -lm -lc -lcrypt
 libc=, so=so, useshrplib=false, libperl=libperl.a
   Dynamic Linking:
 dlsrc=dl_dlopen.xs, dlext=so, d_dlsymun=undef, ccdlflags=' '
 cccdlflags='-fPIC', lddlflags='-G -L/usr/local/lib'
 
 
 Characteristics of this binary (from libperl): 
   Built under solaris
   Compiled at Jul 11 2000 15:12:53
   @INC:
 /opt/perl5.005_03-gcc/lib/5.00503/sun4-solaris
 /opt/perl5.005_03-gcc/lib/5.00503
 

RE: missing docs

2001-01-04 Thread Geoffrey Young



 -Original Message-
 From: Vivek Khera [mailto:[EMAIL PROTECTED]]
 Sent: Thursday, January 04, 2001 12:23 PM
 To: Mod Perl List
 Subject: missing docs
 
 
 In answering another question today, I noticed that the variable
 $Apache::Registry::NameWithVirtualHost is not documented in the
 perldoc for Apache::Registry.
 
 While scanning the Registry.pm file, I further noticed that there is a
 call to $r->get_server_name for the virtual host name.  This too is
 not documented in perldoc Apache.  The only documented way I see to
 get this from the $r object is to use $r->server->server_hostname.
 
 Should $r->get_server_name() be documented, or is it a "private"
 method?  

they are both documented (along with their differences) in the Eagle book -
I guess perldoc Apache is just behind (as seems to be the usual case)...

Andrew Ford's new mod_perl Pocket Reference goes a long way toward
documenting new functionality as of 1.24, but I suppose it's virtually
impossible to keep up with the pace of development.  Just take a look at the
Changes file from 1.24 to current cvs.  ack...

--Geoff

 It seems wasteful to create an Apache::Server object just to
 fetch the virtual host name.
 
 -- 
 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
 Vivek Khera, Ph.D.    Khera Communications, Inc.
 Internet: [EMAIL PROTECTED]   Rockville, MD   +1-240-453-8497
 AIM: vivekkhera Y!: vivek_khera   http://www.khera.org/~vivek/
 



Re: seg faults/bus errors

2001-01-04 Thread Michael

 
  Hi,
 
  I work on a high-traffic site that uses apache/mod_perl, and we're
  seeing some occasional segmentation faults and bus errors in our
  apache error logs.  These errors sometimes result in the entire
  apache process group going down, though it seems to me that the
  problems originate within one of apache's child processes (maybe
  shared memory is getting corrupted somehow?).


Don't overlook the possibility of a flaky mbrd or memory. I finally 
gave up and replaced my BP6 with another mbrd and my segfaults 
magically went away. All the same components except the mbrd. The 
sucker seemed solid, would do big compiles, with high load averages, 
etc...

[EMAIL PROTECTED]



ab and cookies

2001-01-04 Thread JR Mayberry

does anyone have any experience with ab and sending multiple cookies ?

It appears to be chaining cookies together, ie:

I'm doing -C cookie1=value1 -C cookie2=value2

and I'm retrieving cookies with
CGI::Cookie->parse($r->header_in('Cookie'));
and foreach-ing over %cookies, and it's doing something like

cookie1's value is 'value1%2C%20cookie2'...

any clues? the only bug I've seen mentioned is someone saying ab didn't support
cookies, but that may have been an older message..

-- 
*



Re: the edge of chaos

2001-01-04 Thread Justin

Hi,
Thanks for the links! But I wasn't sure what in the first link
was useful for this problem, and, the vacuum bots discussion
is really a different topic.
I'm not talking of vacuum bot load. This is real world load.

Practical experiments (ok - the live site :) convinced me that 
the well-recommended modperl setup of fe/be suffers from failure
and much wasted page production when load rises just a little
above *maximum sustainable throughput* ..

If you want to see what happens to actual output when this
happens, check this gif:
   http://www.dslreports.com/front/eth0-day.gif
From 11am to 4pm (in the jaggie middle section delineated by
the red bars) I was madly doing sql server optimizations to
get my head above water.. just before 11am, response time
was sub-second. (That whole day represents about a million
pages). Minutes after 11am, response rose fast to 10-20 seconds
and few people would wait that long, they just hit stop..
(which doesn't provide my server any relief from their request).

By 4pm I'd got the SQL server able to cope with current load,
and everything was fine after that..

This is all moot if you never plan to get anywhere near max
throughput.. nevertheless.. as a business, if incoming load
does rise (hopefully because of press) I'd rather lose 20% of
visitors to a "sluggish" site, than lose 100% of visitors
because the site is all but dead..

I received a helpful recommendation to look into "lingerd" ...
that would seem one approach to solve this issue.. but a
lingerd setup is quite different from popular recommendations.
-Justin

On Thu, Jan 04, 2001 at 11:06:35AM -0500, Geoffrey Young wrote:
 
 
  -Original Message-
  From: G.W. Haywood [mailto:[EMAIL PROTECTED]]
  Sent: Thursday, January 04, 2001 10:35 AM
  To: Justin
  Cc: [EMAIL PROTECTED]
  Subject: Re: the edge of chaos
  
  
  Hi there,
  
  On Thu, 4 Jan 2001, Justin wrote:
  
   So dropping maxclients on the front end means you get clogged
   up with slow readers instead, so that isnt an option..
  
  Try looking for Randall's posts in the last couple of weeks.  He has
  some nice stuff you might want to have a play with.  Sorry, I can't
  remember the thread but if you look in Geoff's DIGEST you'll find it.
 
 I think you mean this:
 http://forum.swarthmore.edu/epigone/modperl/phoorimpjun
 
 and this thread:
 http://forum.swarthmore.edu/epigone/modperl/zhayflimthu
 
 (which is actually a response to Justin :)
 
  
  Thanks again Geoff!
 
 glad to be of service :)
 
 --Geoff
 
  
  73,
  Ged.
  

-- 
Justin Beech  http://www.dslreports.com
Phone:212-269-7052 x252 FAX inbox: 212-937-3800
mailto:[EMAIL PROTECTED] --- http://dslreports.com/contacts



Re: the edge of chaos

2001-01-04 Thread Justin

I need more horsepower. Yes I'd agree with that !

However... which web solution would you prefer:

A. (ideal)
load equals horsepower:
  all requests serviced in <=250ms
load slightly more than horsepower:
  linear falloff in response time, as a function of % overload

..or..

B. (modperl+front end)
load equals horsepower:
  all requests serviced in <=250ms
sustained load *slightly* more than horsepower
  site too slow to be usable by anyone, few seeing pages

Don't all benchmarks (of disk, webservers, and so on),
always continue increasing load well past optimal levels,
to check there are no nasty surprises out there.. ?

regards
-justin

On Thu, Jan 04, 2001 at 11:10:25AM -0500, Vivek Khera wrote:
  "J" == Justin  [EMAIL PROTECTED] writes:
 
 J When things get slow on the back end, the front end can fill with
 J 120 *requests* .. all queued for the 20 available modperl slots..
 J hence long queues for service, results in nobody getting anything,
 
 You simply don't have enough horsepower to serve your load, then.
 
 Your options are: get more RAM, get faster CPU, make your application
 smaller by sharing more code (pretty much whatever else is in the
 tuning docs), or split your load across multiple machines.
 
 If your front ends are doing nothing but buffering the pages for the
 mod_perl backends, then you probably need to lower the ratio of
 frontends to back ends from your 6 to 1 to something like 3 to 1.
 
 -- 
 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
 Vivek Khera, Ph.D.    Khera Communications, Inc.
 Internet: [EMAIL PROTECTED]   Rockville, MD   +1-240-453-8497
 AIM: vivekkhera Y!: vivek_khera   http://www.khera.org/~vivek/

-- 
Justin Beech  http://www.dslreports.com
Phone:212-269-7052 x252 FAX inbox: 212-937-3800
mailto:[EMAIL PROTECTED] --- http://dslreports.com/contacts



Re: the edge of chaos

2001-01-04 Thread ___cliff rayman___

i see 2 things here, classic queuing problem, and the fact
that swapping to disk is 1000's of times slower than serving
from ram.

if you receive 100 requests per second but only have the
ram to serve 99, then swapping to disc occurs which slows
down the entire system.  the next second comes and 100 new
requests come in, plus the 1 you had in the queue that did not
get serviced in the previous second. after a little while,
your memory requirements start to soar, lots of swapping is
occurring, and requests are coming in at a higher rate than can
be serviced by an ever slowing machine.  this leads to a rapid
downward spiral.  you must have enough ram to service all the apache
processes that are allowed to run at one time.  its been my experience
that once swapping starts to occur, the whole thing is going to spiral
downward very quickly.  you either need to add more ram, to service
that amount of apache processes that need to be running simultaneously,
or you need to reduce MaxClients and let apache turn away requests.
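
roughly, in httpd.conf (the numbers here are illustrative only, not a recommendation):

    MaxClients          30     # cap concurrency at what actually fits in RAM
    ListenBacklog      511     # let the kernel queue the overflow
    MaxRequestsPerChild 500    # recycle children to limit memory growth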


--
___cliff [EMAIL PROTECTED]http://www.genwax.com/

P.S. used your service several times with good results! (and no waiting) thanks!

Justin wrote:

 I need more horsepower. Yes I'd agree with that !

 However... which web solution would you prefer:

 A. (ideal)
 load equals horsepower:
   all requests serviced in <=250ms
 load slightly more than horsepower:
   linear falloff in response time, as a function of % overload

 ..or..

 B. (modperl+front end)
 load equals horsepower:
   all requests serviced in <=250ms
 sustained load *slightly* more than horsepower
   site too slow to be usable by anyone, few seeing pages

 Don't all benchmarks (of disk, webservers, and so on),
 always continue increasing load well past optimal levels,
 to check there are no nasty surprises out there.. ?

 regards
 -justin

 On Thu, Jan 04, 2001 at 11:10:25AM -0500, Vivek Khera wrote:
   "J" == Justin  [EMAIL PROTECTED] writes:
 
  J When things get slow on the back end, the front end can fill with
  J 120 *requests* .. all queued for the 20 available modperl slots..
  J hence long queues for service, results in nobody getting anything,
 
  You simply don't have enough horsepower to serve your load, then.
 
  Your options are: get more RAM, get faster CPU, make your application
  smaller by sharing more code (pretty much whatever else is in the
  tuning docs), or split your load across multiple machines.
 
  If your front ends are doing nothing but buffering the pages for the
  mod_perl backends, then you probably need to lower the ratio of
  frontends to back ends from your 6 to 1 to something like 3 to 1.
 
  --
  =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
  Vivek Khera, Ph.D.    Khera Communications, Inc.
  Internet: [EMAIL PROTECTED]   Rockville, MD   +1-240-453-8497
  AIM: vivekkhera Y!: vivek_khera   http://www.khera.org/~vivek/

 --
 Justin Beech  http://www.dslreports.com
 Phone:212-269-7052 x252 FAX inbox: 212-937-3800
 mailto:[EMAIL PROTECTED] --- http://dslreports.com/contacts






Re: the edge of chaos

2001-01-04 Thread Perrin Harkins

Justin wrote:
 Thanks for the links! But. I wasnt sure what in the first link
 was useful for this problem, and, the vacuum bots discussion
 is really a different topic.
 I'm not talking of vacuum bot load. This is real world load.
 
 Practical experiments (ok - the live site :) convinced me that
 the well recommended modperl setup of fe/be suffer from failure
 and much wasted page production when load rises just a little
 above *maximum sustainable throughput* ..

The fact that mod_proxy doesn't disconnect from the backend server when
the client goes away is definitely a problem.  I remember some
discussion about this before but I don't think there was a solution for
it.

I think Vivek was correct in pointing out that your ultimate problem is
the fact that your system is not big enough for the load you're
getting.  If you can't upgrade your system to safely handle the load,
one approach is to send some people away when the server gets too busy
and provide decent service to the ones you do allow through.  You can
try lowering MaxClients on the proxy to help with this.  Then any
requests going over that limit will get queued by the OS and you'll
never see them if the person on the other end gets tired of waiting and
cancels.  It's tricky though, because you don't want a bunch of slow
clients to tie up all of your proxy processes.

It's easy to adapt the existing mod_perl throttling handlers to send a
short static "too busy" page when there are more than a certain number
of concurrent requests on the site.  Better to do this on the proxy side
though, so maybe mod_throttle could do it for you.
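
A rough sketch of what the mod_perl side of that could look like (the concurrency counter is hand-waved as current_request_count(), a stand-in for whatever shared-memory or file-based counter you use; the throttling modules on CPAN do this for real):

    package My::TooBusy;                 # hypothetical module
    use strict;
    use Apache::Constants qw(OK);

    my $LIMIT = 30;                      # illustrative cap

    sub handler {
        my $r = shift;
        if (current_request_count() > $LIMIT) {          # stand-in function
            $r->custom_response(503, "/too_busy.html");  # short static page
            return 503;                                  # Service Unavailable
        }
        return OK;
    }
    1;

configured with something like "PerlInitHandler My::TooBusy" on the backend, or better, equivalent logic on the proxy side as suggested above.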

- Perrin



Re: Apache::AuthCookieDBI BEGIN problems...??

2001-01-04 Thread Jacob Davies

On Wed, Jan 03, 2001 at 12:02:15AM -0600, Jeff Sheffield wrote:
 I am ashamed ... I twiddled with the shiny bits.
 my $auth_name = "WhatEver";
 $SECRET_KEYS{ $auth_name } = "thisishtesecretkeyforthisserver";
 ### END MY DIRTY HACK
 Note that without MY DIRTY LITTLE HACK it does not set those two
 variables. I am/was pretty sure that this somehow relates to
 "StackedHandlers"

I believe the problem is a known documentation bug with the module.  I need
to fix the docs and make a new release (have been meaning to for, oh,
five months now) but I no longer work at the place that originally paid
me to write the module and haven't had a chance so far.  I have a few other
patches to integrate too.  I believe your specific problem stems
from having the:

PerlModule Apache::AuthCookieDBI

line before the

PerlSetVar WhatEverDBI_SecretKeyFile /www/domain.com/test.key

line in the server config file.  Yes, the documentation has it the wrong
way around.  The reason is that the server reads this configuration
directive at module load time (i.e. with PerlModule, at server start time
when it's still running as root) so that it can preload the secret keys
from files on disk.  You want those files to be root-owned and only
readable by root, which is why it does it at start time.  Try putting
all your DBI_SecretKeyFile directives before the PerlModule line and
see if that fixes your problem.
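
i.e., roughly this order in httpd.conf (names and path taken from your message):

    # key file directives first, so they are visible when the module loads
    PerlSetVar WhatEverDBI_SecretKeyFile /www/domain.com/test.key
    # ... any other *DBI_SecretKeyFile lines here ...
    PerlModule Apache::AuthCookieDBI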

It should give better diagnostics when this problem comes up, I need to
fix that.  Right now I don't even have this module running anywhere, but
I will install it again on my home machine at least, for testing.

Hopefully I will have a new release and an announce notice for this out
soon.

-- 
Jacob Davies
[EMAIL PROTECTED]



getting rid of multiple identical http requests (bad users double-clicking)

2001-01-04 Thread Ed Park

Does anyone out there have a clean, happy solution to the problem of users
jamming on links & buttons? Analyzing our access logs, it is clear that it's
relatively common for users to click 2,3,4+ times on a link if it doesn't
come up right away. This is not good for the system for obvious reasons.

I can think of a few ways around this, but I was wondering if anyone else
had come up with anything. Here are the avenues I'm exploring:
1. Implementing JavaScript disabling on the client side so that links become
'click-once' links.
2. Implement an MD5 hash of the request and store it on the server (e.g. in
a MySQL server). When a new request comes in, check the MySQL server to see
whether it matches an existing request and disallow as necessary. There
might be some sort of timeout mechanism here, e.g. don't allow identical
requests within the span of the last 20 seconds.

Has anyone else thought about this?

cheers,
Ed




Re: getting rid of multiple identical http requests (bad users double-clicking)

2001-01-04 Thread Randal L. Schwartz

 "Ed" == Ed Park [EMAIL PROTECTED] writes:

Ed Has anyone else thought about this?

If you're generating the form on the fly (and who isn't, these days?),
just spit a serial number into a hidden field.  Then lock out two or
more submissions with the same serial number, with a 24-hour retention
of numbers you've generated.  That'll keep 'em from hitting "back" and
resubmitting too.

To keep DOS attacks at a minimum, it should be a cryptographically
secure MD5, to prevent others from lojacking your session.
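
A sketch of that idea (the storage helpers remember_token() and claim_token() are placeholders for whatever dbm/SQL table you use with a 24-hour expiry, $secret is your own server-side secret, $q is your CGI or Apache::Request object, and Digest::MD5 is the standard module):

    use Digest::MD5 qw(md5_hex);

    # when generating the form: embed a one-time serial number
    my $token = md5_hex(join ':', $secret, $$, time, rand);
    print qq{<input type="hidden" name="serial" value="$token">\n};
    remember_token($token);                       # placeholder: store it server-side

    # when handling the submission: accept each serial exactly once
    unless (claim_token($q->param('serial'))) {   # placeholder: atomically mark used
        return show_duplicate_page();             # placeholder
    }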

-- 
Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095
[EMAIL PROTECTED] URL:http://www.stonehenge.com/merlyn/
Perl/Unix/security consulting, Technical writing, Comedy, etc. etc.
See PerlTraining.Stonehenge.com for onsite and open-enrollment Perl training!



Re: getting rid of multiple identical http requests (bad users double-clicking)

2001-01-04 Thread Gunther Birznieks

Sorry if this solution has been mentioned before (i didn't read the earlier 
parts of this thread), and I know it's not as perfect as a server-side 
solution...

But I've also seen a lot of people use javascript to accomplish the same 
thing as a quick fix. Few browsers don't support javascript. Of the small 
amount that don't, the venn diagram merge of browsers that don't do 
javascript and users with an itchy trigger finger is very small. The 
advantage is that it's faster than mungling your own server-side code with 
extra logic to prevent double posting.

Add this to the top of the form:

 <SCRIPT LANGUAGE="JavaScript">
 <!--
 var clicks = 0;

 function submitOnce() {
 clicks ++;
 if (clicks < 2) {
 return true;
 } else {
 // alert("You have already clicked the submit button. " + clicks + " clicks");
 return false;
 }
 }
 //-->
 </SCRIPT>

And then just add the submitOnce() function to the submit event for the 
form tag.
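
i.e. roughly (the action here is just a placeholder):

    <form method="post" action="/your/script" onSubmit="return submitOnce()">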

At 05:26 PM 1/4/01 -0800, Randal L. Schwartz wrote:
  "Ed" == Ed Park [EMAIL PROTECTED] writes:

Ed Has anyone else thought about this?

If you're generating the form on the fly (and who isn't, these days?),
just spit a serial number into a hidden field.  Then lock out two or
more submissions with the same serial number, with a 24-hour retention
of numbers you've generated.  That'll keep 'em from hitting "back" and
resubmitting too.

To keep DOS attacks at a minimum, it should be a cryptographically
secure MD5, to prevent others from lojacking your session.

--
Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095
[EMAIL PROTECTED] URL:http://www.stonehenge.com/merlyn/
Perl/Unix/security consulting, Technical writing, Comedy, etc. etc.
See PerlTraining.Stonehenge.com for onsite and open-enrollment Perl training!

__
Gunther Birznieks ([EMAIL PROTECTED])
eXtropia - The Web Technology Company
http://www.extropia.com/




Javascript - just say no(t required)

2001-01-04 Thread Randal L. Schwartz

 "Gunther" == Gunther Birznieks [EMAIL PROTECTED] writes:

Gunther But I've also seen a lot of people use javascript to accomplish the
Gunther same thing as a quick fix. Few browsers don't support javascript. Of
Gunther the small amount that don't, the venn diagram merge of browsers that
Gunther don't do javascript and users with an itchy trigger finger is very
Gunther small. The advantage is that it's faster than mungling your own
Gunther server-side code with extra logic to prevent double posting.

My browser "supports" Javascript, but has it turned off whenever I'm going
to an unknown web page.

Presuming that the CERT notices are being posted widely enough, there
are demonstrably *more* people with Javascript turned off today than
ever before.

That means you can use Javascript to enhance the experience, but I'll
come over and rip your throat out (if I knew your address) if you make
it required for basic services.

And don't forget the corporate firewalls that strip Javascript for
security reasons.  And the hundreds of new "net devices" showing up
that understand HTTP and XHTML, but nothing about Javascript.

Javascript.  Just say no(t required).

-- 
Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095
[EMAIL PROTECTED] URL:http://www.stonehenge.com/merlyn/
Perl/Unix/security consulting, Technical writing, Comedy, etc. etc.
See PerlTraining.Stonehenge.com for onsite and open-enrollment Perl training!



Re: Javascript - just say no(t required)

2001-01-04 Thread Gunther Birznieks

Yeah, but in the real world regardless of the FUD about firewalls and the 
like...

The feedback that I have had from people using this technique is that the 
apps that have had this code implemented experience dramatic reduction in 
double postings to the point where they no longer exist.

And the code I posted is not making the basic application unavailable. It 
just allows double-postings if javascript is not enabled which in practice 
isn't that much when you consider the intersection of people who double 
click and the people likely to have JS disabled.

For a heavily used site, I would recommend ultimately a better server-side 
solution because the amount of time to develop and maintain a server side 
solution is "worth it", but it's not as easy and quick to fix an app in 
this respect as it is to add a quickie javascript fix for the short-term or 
for an app that it's not worth spending more time on.

There's a lot of similar FUD about using cookies (not accepted on PDAs, 
people scared of them, etc). Personally, I don't like to program using 
cookies and I have my browser explicitly warn me of the cookie before 
accepting (which does slow down my browsing experience but is most 
interesting), but the reality is that shedloads of sites use them to 
enhance the user experience but don't make it a problem if they don't go 
and use them.

Anyway, whatever. Happy New Year! :)

Speaking of which, I guess the non-use of Cookies and JavaScript would make 
a great NY Resolution...

At 06:00 PM 1/4/2001 -0800, Randal L. Schwartz wrote:
  "Gunther" == Gunther Birznieks [EMAIL PROTECTED] writes:

Gunther But I've also seen a lot of people use javascript to accomplish the
Gunther same thing as a quick fix. Few browsers don't support javascript. Of
Gunther the small amount that don't, the venn diagram merge of browsers that
Gunther don't do javascript and users with an itchy trigger finger is very
Gunther small. The advantage is that it's faster than mungling your own
Gunther server-side code with extra logic to prevent double posting.

My browser "supports" Javascript, but has it turned off whenever I'm going
to an unknown web page.

Presuming that the CERT notices are being posted widely enough, there
are demonstrably *more* people with Javascript turned off today than
ever before.

That means you can use Javascript to enhance the experience, but I'll
come over and rip your throat out (if I knew your address) if you make
it required for basic services.

And don't forget the corporate firewalls that strip Javascript for
security reasons.  And the hundreds of new "net devices" showing up
that understand HTTP and XHTML, but nothing about Javascript.

Javascript.  Just say no(t required).

--
Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095
[EMAIL PROTECTED] URL:http://www.stonehenge.com/merlyn/
Perl/Unix/security consulting, Technical writing, Comedy, etc. etc.
See PerlTraining.Stonehenge.com for onsite and open-enrollment Perl training!

__
Gunther Birznieks ([EMAIL PROTECTED])
eXtropia - The Web Technology Company
http://www.extropia.com/




Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2001-01-04 Thread Perrin Harkins

Hi Sam,

I think we're talking in circles here a bit, and I don't want to
diminish the original point, which I read as "MRU process selection is a
good idea for Perl-based servers."  Your tests showed that this was
true.

Let me just try to explain my reasoning.  I'll define a couple of my
base assumptions, in case you disagree with them.

- Slices of CPU time doled out by the kernel are very small - so small
that processes can be considered concurrent, even though technically
they are handled serially.
- A set of requests can be considered "simultaneous" if they all arrive
and start being handled in a period of time shorter than the time it
takes to service a request.

Operating on these two assumptions, I say that 10 simultaneous requests
will require 10 interpreters to service them.  There's no way to handle
them with fewer, unless you queue up some of the requests and make them
wait.

I also say that if you have a top limit of 10 interpreters on your
machine because of memory constraints, and you're sending in 10
simultaneous requests constantly, all interpreters will be used all the
time.  In that case it makes no difference to the throughput whether you
use MRU or LRU.

  What you say would be true if you had 10 processors and could get
  true concurrency.  But on single-cpu systems you usually don't need
  10 unix processes to handle 10 requests concurrently, since they get
  serialized by the kernel anyways.

I think the CPU slices are smaller than that.  I don't know much about
process scheduling, so I could be wrong.  I would agree with you if we
were talking about requests that were coming in with more time between
them.  Speedycgi will definitely use fewer interpreters in that case.

  I found that setting MaxClients to 100 stopped the paging.  At concurrency
  level 100, both mod_perl and mod_speedycgi showed similar rates with ab.
  Even at higher levels (300), they were comparable.

That's what I would expect if both systems have a similar limit of how
many interpreters they can fit in RAM at once.  Shared memory would help
here, since it would allow more interpreters to run.

By the way, do you limit the number of SpeedyCGI processes as well?  it
seems like you'd have to, or they'd start swapping too when you throw
too many requests in.

  But, to show that the underlying problem is still there, I then changed
  the hello_world script and doubled the amount of un-shared memory.
  And of course the problem then came back for mod_perl, although speedycgi
  continued to work fine.  I think this shows that mod_perl is still
  using quite a bit more memory than speedycgi to provide the same service.

I'm guessing that what happened was you ran mod_perl into swap again. 
You need to adjust MaxClients when your process size changes
significantly.

 I believe that with speedycgi you don't have to lower the MaxClients
 setting, because it's able to handle a larger number of clients, at
 least in this test.
  
   Maybe what you're seeing is an ability to handle a larger number of
   requests (as opposed to clients) because of the performance benefit I
   mentioned above.
 
  I don't follow.

When not all processes are in use, I think Speedy would handle requests
more quickly, which would allow it to handle n requests in less time
than mod_perl.  Saying it handles more clients implies that the requests
are simultaneous.  I don't think it can handle more simultaneous
requests.

   Are the speedycgi+Apache processes smaller than the mod_perl
   processes?  If not, the maximum number of concurrent requests you can
   handle on a given box is going to be the same.
 
  The size of the httpds running mod_speedycgi, plus the size of speedycgi
  perl processes is significantly smaller than the total size of the httpd's
  running mod_perl.
 
  The reason for this is that only a handful of perl processes are required by
  speedycgi to handle the same load, whereas mod_perl uses a perl interpreter
  in all of the httpds.

I think this is true at lower levels, but not when the number of
simultaneous requests gets up to the maximum that the box can handle. 
At that point, it's a question of how many interpreters can fit in
memory.  I would expect the size of one Speedy + one httpd to be about
the same as one mod_perl/httpd when no memory is shared.  With sharing,
you'd be able to run more processes.

- Perrin



Re: the edge of chaos

2001-01-04 Thread Les Mikesell


- Original Message - 
From: "Justin" [EMAIL PROTECTED]
To: "Geoffrey Young" [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Thursday, January 04, 2001 4:55 PM
Subject: Re: the edge of chaos


 
 Practical experiments (ok - the live site :) convinced me that 
 the well recommended modperl setup of fe/be suffer from failure
 and much wasted page production when load rises just a little
 above *maximum sustainable throughput* ..

It doesn't take much math to realize that if you continue to try to
accept connections faster than you can service them, the machine
is going to die, and as soon as you load the machine to the point
that you are swapping/paging memory to disk the time to service
a request will skyrocket.   Tune down MaxClients on both the
front and back end httpd's to what the machine can actually
handle and bump up the listen queue if you want to try to let
the requests connect and wait for a process to handle them.  If
you aren't happy with the speed the machine can realistically
produce, get another one (or more) and let the front end proxy
to the other(s) running the backends.

 Les Mikesell
 [EMAIL PROTECTED]






Re: getting rid of multiple identical http requests (bad users double-clicking)

2001-01-04 Thread Les Mikesell


- Original Message -
From: "Ed Park" [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, January 04, 2001 6:52 PM
Subject: getting rid of multiple identical http requests (bad users
double-clicking)


 Does anyone out there have a clean, happy solution to the problem of users
 jamming on links & buttons? Analyzing our access logs, it is clear that it's
 relatively common for users to click 2,3,4+ times on a link if it doesn't
 come up right away. This is not good for the system for obvious reasons.

The best solution is to make the page come up right away...  If that isn't
possible, try to make at least something show up.  If your page consists
of a big table the browser may be waiting until the closing </table> tag to compute
the column widths before it can render anything.

 I can think of a few ways around this, but I was wondering if anyone else
 had come up with anything. Here are the avenues I'm exploring:
 1. Implementing JavaScript disabling on the client side so that links become
 'click-once' links.
 2. Implement an MD5 hash of the request and store it on the server (e.g. in
 a MySQL server). When a new request comes in, check the MySQL server to see
 whether it matches an existing request and disallow as necessary. There
 might be some sort of timeout mechanism here, e.g. don't allow identical
 requests within the span of the last 20 seconds.

This might be worthwhile to trap duplicate postings, but unless your page
requires a vast amount of server work you might as well deliver it as
go to this much trouble.

  Les Mikesell
[EMAIL PROTECTED]





Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2001-01-04 Thread Joe Schaefer

Roger Espel Llima [EMAIL PROTECTED] writes:

 "Jeremy Howard" [EMAIL PROTECTED] wrote:

I'm pretty sure I'm the person whose words you're quoting here,
not Jeremy's.

  A backend server can realistically handle multiple frontend requests, since
  the frontend server must stick around until the data has been delivered
  to the client (at least that's my understanding of the lingering-close
  issue that was recently discussed at length here). 
 
 I won't enter the {Fast,Speedy}-CGI debates, having never played
 with these, but the picture you're painting about delivering data to
 the clients is just a little bit too bleak.

It's a "hypothetical", and I obviously exaggerated the numbers to show
the advantage of a front/back end architecture for "comparative benchmarks" 
like these.  As you well know, the relevant issue is the percentage of time 
spent generating the content relative to the entire time spent servicing 
the request.  If you don't like seconds, rescale it to your favorite 
time window.

 With a frontend/backend mod_perl setup, the frontend server sticks
 around for a second or two as part of the lingering_close routine,
 but it doesn't have to wait for the client to finish reading all the
 data.  Fortunately enough, spoonfeeding data to slow clients is
 handled by the OS kernel.

Right- relative to the time it takes the backend to actually 
create and deliver the content to the frontend, a second or
two can be an eternity.  

Best.
-- 
Joe Schaefer