Re: mod_perm and Java servlets

2001-01-17 Thread Les Mikesell


- Original Message - 
From: "Terry Newnham" <[EMAIL PROTECTED]>
To: "mod_perl list" <[EMAIL PROTECTED]>
Sent: Wednesday, January 17, 2001 6:51 PM
Subject: mod_perm and Java servlets


> 
> Hi
> 
> My boss has asked me to set up a web server on Solaris 8 with mod_perl
> and (if possible) Java servlet capabilities as well. Has anybody done
> this ? Any issues ?


If you expect the server to be busy, you will probably want to set up a
lightweight front-end Apache without mod_perl and let it proxy the
mod_perl requests to another server.   In this scheme it works well to
put Apache JServ in the front end, because it also uses a proxy-like
mechanism to hand off to the servlet engine (with load balancing
if you want to spread the servlets over multiple machines).  The
only problems I've seen have been memory leaks in the servlets
causing the JVM to grow, but apache will restart it for you if you
have to kill it once in a while.
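For illustration, the front-end half of that setup might look roughly
like this in httpd.conf (a sketch only - the /perl/ path and back-end
port are assumptions, and the front end needs mod_proxy):

# Front-end Apache (no mod_perl): relay /perl/ URLs to a back-end
# mod_perl server, assumed here to listen on port 8042.
ProxyPass        /perl/ http://localhost:8042/perl/
ProxyPassReverse /perl/ http://localhost:8042/perl/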

 Les Mikesell
 [EMAIL PROTECTED]





Re: killing of greater than MaxSpareServers

2001-01-17 Thread ___cliff rayman___

i think it's worth posting to the list.  it will be forever in the
archives when someone needs it.

thanks!

Balazs Rauznitz wrote:

> On Wed, 17 Jan 2001, ___cliff rayman___ wrote:
>
> > i and others have written on the list before that pushing apache
> > children into swap causes a rapid downward spiral in performance.
> > I don't think that MaxClients is the right way to limit the # of children.  i think
> > MaxSpareCoreMemory would make more sense.  You could
> > set this to 1K if your server was designated for Apache
> > only, or set it to a higher value if it were a multipurpose machine.
> > mod_perl/apache and paging/swapping just don't mix.
>
> Once I wrote a patch to apache so that it would not spawn new children if
> a certain file was present in the filesystem. You can then have a watchdog
> process touch or delete that file based on any criteria you want. IMO,
> having a separate and flexible process is better than apache trying to
> make these decisions...
> I'll dig it up if anyone's interested.
>
> -Balazs

--
___cliff [EMAIL PROTECTED]    http://www.genwax.com/





Re: killing of greater than MaxSpareServers

2001-01-17 Thread Balazs Rauznitz


On Wed, 17 Jan 2001, ___cliff rayman___ wrote:

> i and others have written on the list before that pushing apache
> children into swap causes a rapid downward spiral in performance.
> I don't think that MaxClients is the right way to limit the # of children.  i think
> MaxSpareCoreMemory would make more sense.  You could
> set this to 1K if your server was designated for Apache
> only, or set it to a higher value if it were a multipurpose machine.
> mod_perl/apache and paging/swapping just don't mix.

Once I wrote a patch to apache so that it would not spawn new children if
a certain file was present in the filesystem. You can then have a watchdog
process touch or delete that file based on any criteria you want. IMO,
having a separate and flexible process is better than apache trying to
make these decisions...
I'll dig it up if anyone's interested.
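Until that patch surfaces, the watchdog half might look roughly like this
(a sketch only; the sentinel path, the threshold, and the Linux-style
/proc/meminfo parsing are all assumptions):

#!/usr/bin/perl -w
use strict;

my $sentinel  = '/var/run/httpd.no_spawn';   # file the patched apache checks
my $threshold = 32 * 1024;                   # don't spawn below 32 MB free

while (1) {
    if (free_kb() < $threshold) {
        # Touch the sentinel: the patched apache stops spawning children.
        open my $fh, '>', $sentinel or warn "cannot touch $sentinel: $!";
        close $fh if $fh;
    } else {
        unlink $sentinel;
    }
    sleep 5;
}

# Free memory in KB, parsed from Linux's /proc/meminfo (an assumption;
# a Solaris box would use kstat or vmstat instead).
sub free_kb {
    open my $mi, '<', '/proc/meminfo' or return 0;
    while (<$mi>) {
        return $1 if /^MemFree:\s+(\d+)\s+kB/;
    }
    return 0;
}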

-Balazs




IPC::SharedCache - problem

2001-01-17 Thread Alexandr Efimov

Hi,

I'm having a problem with the IPC::SharedCache module:
when executing code like this:

use IPC::SharedCache;

tie my %cache, 'IPC::SharedCache',
    ipc_key           => 'hash',
    load_callback     => sub { return undef },
    validate_callback => sub { return 1 };

for my $i (1 .. 1000) {
    $cache{"key_$i"} = [$i];
}
it allocates a separate memory segment and a separate
semaphore for each cache record, thus exceeding the system limits
on the number of semaphores and shared memory segments.
Is there any way to change that behaviour, in other
words, to allocate an entire hash in only one shared memory
segment?
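For comparison, one way to get the whole hash into a single segment is to
serialize it yourself (a sketch, assuming the IPC::ShareLite and Storable
modules are available; IPC::ShareLite keeps a single string in a single
shared memory segment):

use strict;
use warnings;
use Storable qw(freeze thaw);
use IPC::ShareLite;

# One segment holds the entire hash as a single Storable blob.
my $share = IPC::ShareLite->new(
    -key     => 'hash',
    -create  => 'yes',
    -destroy => 'no',
) or die "cannot create share: $!";

my %cache;
$cache{"key_$_"} = [$_] for 1 .. 1000;

$share->store(freeze(\%cache));       # one write, one segment
my $copy = thaw($share->fetch);       # read it back anywhere
print scalar(keys %$copy), " keys\n";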

Alexander





Re: mod_perm and Java servlets

2001-01-17 Thread Andrew Ho

Hello,

TN>My boss has asked me to set up a web server on Solaris 8 with mod_perl
TN>and (if possible) Java servlet capabilities as well. Has anybody done
TN>this ? Any issues ?

I've experimented with mod_perl and JRun on Solaris, and they've played
together nicely. As long as you're hefty on memory, it'll likely be a
while before you hit any significant memory problems, provided your load
is distributed evenly between Perl and Java.

If you use one much more than the other, I'd do a proxied setup so you can
scale them separately; the management is also easier. However, for a dev
environment or a low-traffic one, having them co-exist is just fine.

Humbly,

Andrew

--
Andrew Ho   http://www.tellme.com/   [EMAIL PROTECTED]
Engineer   [EMAIL PROTECTED]  Voice 650-930-9062
Tellme Networks, Inc.   1-800-555-TELL   Fax 650-930-9101
--




Re: mod perl and embperl

2001-01-17 Thread Gerald Richter

>
> I have installed mod_perl as a DSO for apache, everything is ok!
> Now i am in need of embperl; the problem begins when i run perl
> Makefile.PL, which asks me for the apache source!
> Since i have every apache module as a DSO, i have deleted everything
> related to the apache source.
>
> How to enable embperl with mod_perl ?
>

Embperl needs only the header files, not the whole source tree. Apache
normally installs these headers under include (e.g. /usr/local/apache/include);
specify this directory when asked for the sources and it should work.

Gerald


-
Gerald Richter    ecos electronic communication services gmbh
Internetconnect * Webserver/-design/-datenbanken * Consulting

Post:   Tulpenstrasse 5         D-55276 Dienheim b. Mainz
E-Mail: [EMAIL PROTECTED]       Voice: +49 6133 925151
WWW:    http://www.ecos.de      Fax:   +49 6133 925152
-





Re: killing of greater than MaxSpareServers

2001-01-17 Thread ___cliff rayman___

if you are able to determine how much core memory
is left, you may also be able to determine the average apache
process size and variance.  then apache can determine
whether or not to start up any additional children.  i'm not
sure how much processor time it would take to determine
free core memory; it might not be something worth doing on
every scoreboard cycle.

there is an additional bonus to determining the child-size mean
and variance.  if a process is noted to be growing exceptionally
large compared to the other processes, it could be slated for death
based on its size and growth rate, rather than on the number of
requests it has served.
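for illustration, the spawn decision might look roughly like this (a
sketch only - apache has no such hook today, and all the numbers here
are made up):

use strict;
use warnings;

# stub data: real sizes would come from the scoreboard, ps, or GTop.
my @child_kb = (10_240, 10_880, 11_264, 10_560, 24_576);

my $n    = @child_kb;
my $mean = 0; $mean += $_ / $n for @child_kb;
my $var  = 0; $var  += ($_ - $mean) ** 2 / $n for @child_kb;

my $free_kb      = 65_536;               # assumed free core memory
my $spare_kb     = 1;                    # the proposed MaxSpareCoreMemory
my $likely_child = $mean + sqrt($var);   # mean plus one std deviation

if ($free_kb - $likely_child > $spare_kb) {
    print "ok to spawn another child\n";
} else {
    print "too close to swap; don't spawn\n";
}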

Perrin Harkins wrote:

> On Wed, 17 Jan 2001, ___cliff rayman___ wrote:
> > i and others have written on the list before that pushing apache
> > children into swap causes a rapid downward spiral in performance. I
> > don't think that MaxClients is the right way to limit the # of
> > children.  i think MaxSpareCoreMemory would make more sense.  You
> > could set this to 1K if your server was designated for Apache only, or
> > set it to a higher value if it were a multipurpose machine.
>
> I've thought about that too.  The trick is, Apache would need to know
> things about your application to do that right.  It would need to know how
> big your processes were likely to be and how big they could get before
> they die.  Otherwise, it has no way of knowing whether or not there's
> enough room for another process.
>
> A combination of Apache::SizeLimit and a dynamically changing MaxClients
> could possibly accomplish this, but you wouldn't want to run it too close
> to the edge since you don't want to have to axe a process that's in the
> middle of doing something just because it got a little too big (i.e. no
> hard limits on per-process memory usage).
>
> You can't change MaxClients while the server is running, can you?
>
> - Perrin

--
___cliff [EMAIL PROTECTED]    http://www.genwax.com/





Re: killing of greater than MaxSpareServers

2001-01-17 Thread Perrin Harkins

On Wed, 17 Jan 2001, ___cliff rayman___ wrote:
> i and others have written on the list before that pushing apache
> children into swap causes a rapid downward spiral in performance. I
> don't think that MaxClients is the right way to limit the # of
> children.  i think MaxSpareCoreMemory would make more sense.  You
> could set this to 1K if your server was designated for Apache only, or
> set it to a higher value if it were a multipurpose machine.

I've thought about that too.  The trick is, Apache would need to know
things about your application to do that right.  It would need to know how
big your processes were likely to be and how big they could get before
they die.  Otherwise, it has no way of knowing whether or not there's
enough room for another process.

A combination of Apache::SizeLimit and a dynamically changing MaxClients
could possibly accomplish this, but you wouldn't want to run it too close
to the edge since you don't want to have to axe a process that's in the
middle of doing something just because it got a little too big (i.e. no
hard limits on per-process memory usage).
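For reference, the Apache::SizeLimit half of that combination looks
roughly like this (a sketch for mod_perl 1.x; the 12 MB limit is an
arbitrary example):

# In startup.pl:
use Apache::SizeLimit;
$Apache::SizeLimit::MAX_PROCESS_SIZE = 12288;   # KB, i.e. 12 MB

# In httpd.conf:
#   PerlFixupHandler Apache::SizeLimit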

You can't change MaxClients while the server is running, can you?

- Perrin




RE: Prototype mismatch in Apache::PerlRun line 343

2001-01-17 Thread Wenzhong Tang

Since nobody seems to care about this problem, I had to find a solution
myself.  Fortunately perl has a "prototype" function that returns the
prototype of a function.  Here is the difference between the original
PerlRun.pm in mod_perl 1.24_01 and the updated one:

343c343,350
< *{$fullname} = sub {};
---
> my $proto = prototype($fullname);
> if (defined $proto) {
>     my $mysub;
>     eval "\$mysub = sub ($proto) {}";
>     *{$fullname} = $mysub;
> } else {
>     *{$fullname} = sub {};
> }

It works fine so far on my server.

It would be great if the next release of PerlRun.pm could do something like
this, to greatly reduce the number of unnecessary "Prototype mismatch"
warnings in error_log.
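For reference, the effect can be seen in a tiny standalone test (the
package and sub names here are made up for illustration):

use strict;
use warnings;

# A sub with an empty prototype, like Time::HiRes::gettimeofday ().
sub hires_clock () { return time }

my $fullname = 'main::hires_clock';
my $proto    = prototype($fullname);   # "" here; undef if no prototype
my $stub     = defined $proto ? eval "sub ($proto) {}" : sub {};

{
    no strict 'refs';
    no warnings 'redefine';
    *{$fullname} = $stub;   # prototypes now agree, so no mismatch warning
}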

Wenzhong Tang
Appliant.com
[EMAIL PROTECTED]

-Original Message-
From: Wenzhong Tang 
Sent: Friday, January 12, 2001 4:36 PM
To: '[EMAIL PROTECTED]'
Subject: Prototype mismatch in Apache::PerlRun line 343


Hi folks,

I am running a CGI script under Apache/mod_perl using Apache::PerlRun.  I
constantly get warning messages similar to the one shown below:

[Thu Jan 11 18:45:49 2001] null: Prototype mismatch: sub
Apache::ROOT::login_2ecgi::gettimeofday () vs none at
/usr/lib/perl5/site_perl/5.005/i386-linux/Apache/PerlRun.pm line 343.

My CGI scripts seem to work just fine, since the complaint happens in
flush_namespace.  I think I know why it is happening: gettimeofday() in
Time::HiRes has a function prototype in that module, but line 343 in
PerlRun.pm
*{$fullname} = sub {};
doesn't know anything about it.

Is there a good way of fixing it?

Thanks,

Wenzhong



Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2001-01-17 Thread Christian Jaeger

Hello Sam and others

If I haven't overlooked something, nobody so far has really mentioned
FastCGI. I'm asking myself why you reinvented the wheel. To summarize
the differences I see:

+ perl scripts are more similar to standard CGI ones than with
FastCGI (downside: see next point)
- it seems you can't control the request loop yourself
+ the protocol is freer than FastCGI's (is it?)
- the protocol isn't widespread (almost a standard) like FastCGI's
- it seems to support only perl (so far)
- it doesn't seem to support external servers (on other machines) like
FastCGI does (does it?)

Question: does speedycgi run a separate interpreter for each script, 
or is there one process loading and calling several perl scripts? If 
it's a separate process for each script, then mod_perl is sure to use 
less memory.

As far as I understand, IF you can collect several scripts together into
one interpreter and IF you do preforking, I don't see essential
performance-related differences between mod_perl and speedy/fastcgi,
provided you set up mod_perl with the proxy approach. With mod_perl the
protocol to the backends is http, with speedy it's speedy's own, and with
fastcgi it's the fastcgi protocol. (The difference between mod_perl
and fastcgi is that fastcgi uses a request loop, whereas mod_perl has
its handlers (sorry, I never really used mod_perl so I don't know
exactly).)

I think it's a pity that during the last years there was so little
interest in and support for fastcgi, and that this should now change with
speedycgi. But why not, if the stuff that people develop can run on
both and speedy is or becomes better than fastcgi.

I'm developing a web application framework (called 'Eile'; you can
see some outdated documentation at testwww.ethz.ch/eile, and I will
release a new, much better version soon) which currently uses fastcgi.
If I can get it to run with speedycgi, I'll be glad to release it
with support for both protocols. I haven't looked very closely at it
yet. One of the problems seems to be that I really depend on
controlling the request loop (initialization, preforking etc. all have
to be done before the application begins serving requests, and I'm
also controlling the exits of children myself; see the sketch below).
If you're interested in helping me solve these issues, please contact
me privately. The main advantages of Eile concerning resources are
a) one process/interpreter runs dozens of 'scripts' (called
page-processing modules), and you don't have to dispatch requests to
each of them yourself, and b) my new version does preforking.
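For context, the FastCGI-style request loop referred to above looks
roughly like this (a sketch using the CPAN FCGI module; the two
do_initialization/handle_one_request subs are illustrative stubs):

use strict;
use warnings;
use FCGI;

sub do_initialization  { }    # illustrative: config, preforking, etc.
sub handle_one_request {      # illustrative: the real per-request work
    print "Content-type: text/plain\r\n\r\nok\n";
}

do_initialization();                  # runs once, before any request

my $request = FCGI::Request();        # bind to the FastCGI listener
while ($request->Accept() >= 0) {     # this is the loop the application owns
    handle_one_request();
}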

Christian.
-- 
Web Office
Christian Jaeger
Corporate Communications, ETH Zentrum
CH-8092 Zurich

office: HG J43 e-mail:   [EMAIL PROTECTED]
phone: +41 (0)1 63 2 5780  [EMAIL PROTECTED]
home:  +41 (0)1 273 65 46 fax:  +41 (0)1 63 2 3525



Re: killing of greater than MaxSpareServers

2001-01-17 Thread ___cliff rayman___

i and others have written on the list before that pushing apache
children into swap causes a rapid downward spiral in performance.
I don't think that MaxClients is the right way to limit the # of children.  i think
MaxSpareCoreMemory would make more sense.  You could
set this to 1K if your server was designated for Apache
only, or set it to a higher value if it were a multipurpose machine.
mod_perl/apache and paging/swapping just don't mix.

Perrin Harkins wrote:

> On Wed, 17 Jan 2001, ___cliff rayman___ wrote:
>
> > here is an excerpt from httpd.h:
>
> Good reading.  Thanks.
>
> It looks as if Apache should find the right number of servers for a steady
> load over time, but it could jump up too high for a bit when the load
> spike first comes in, pushing into swap if MaxClients is not configured
> correctly.  That may be what Sam was seeing.
>
> - Perrin

--
___cliff [EMAIL PROTECTED]    http://www.genwax.com/





Re: mod_perm and Java servlets

2001-01-17 Thread Perrin Harkins

I've heard mod_perm costs a lot more than it's worth.  There was an
open-source clone called mod_home_perm but it wasn't very successful.  
Some people say you should skip it altogether and just use mod_hat.

On Thu, 18 Jan 2001, Terry Newnham wrote:
> My boss has asked me to set up a web server on Solaris 8 with mod_perl
> and (if possible) Java servlet capabilities as well. Has anybody done
> this ? Any issues ?

None that I know of, except that you really don't want the additional
memory overhead of mod_perl in a process that isn't using mod_perl.  You
might save some memory by having a separate server that runs just
mod_perl, and having your jserv (or whatever) server send requests for
mod_perl apps to it using mod_proxy.  See the mod_perl Guide for more info
on using a proxy with mod_perl.

- Perrin




Re: killing of greater than MaxSpareServers

2001-01-17 Thread Perrin Harkins

On Wed, 17 Jan 2001, ___cliff rayman___ wrote:

> here is an excerpt from httpd.h:

Good reading.  Thanks.

It looks as if Apache should find the right number of servers for a steady
load over time, but it could jump up too high for a bit when the load
spike first comes in, pushing into swap if MaxClients is not configured
correctly.  That may be what Sam was seeing.

- Perrin




Re: [OT] Apache wedges when log filesystem is full

2001-01-17 Thread Tom Brown

On Wed, 17 Jan 2001, Andrew Ho wrote:

> Hello,
> 
> The other day we had a system fail because the partition that holds the
> logs became full, and Apache stopped responding to requests. Deleting some
> old log files in that partition solved the problem.
> 
> We pipe logs to cronolog (http://www.ford-mason.co.uk/resources/cronolog/)
> to roll them daily, so this introduces a pipe and also means that the
> individual logfile being written to was relatively small.
> 
> While this points to a fault in our monitoring (we should have been
> monitoring free space on /var), I am also interested in something I was
> unable to find on-line: how to configure Apache to fail over robustly in
> such a case, e.g. keep serving responses but just stop writing to logs.

I haven't tested anything I say below, but I believe that ...

... the child processes would have blocked because the pipe they
were writing to got full.  The simple fix is to have cronolog
keep reading the logging info even if it can't write it out.
This isn't an Apache config issue; it's an operating
system/IPC/logging-agent issue.

It is possible you could modify apache to set the pipe to
non-blocking, and thus it would simply get back an error
when it tried to write(2) to the logging process/pipe... but it's
probably a better idea to do it on the other side (keep reading
from the pipe)... at least in our shop the log agent is much
simpler than apache, and a more logical place to put custom code.
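For illustration, a log agent built around that principle might look
roughly like this (a sketch; the log path is an assumption, and the real
cronolog is of course more elaborate):

#!/usr/bin/perl -w
use strict;

my $log = '/web/logs/access_log';   # assumed destination

while (defined(my $line = <STDIN>)) {
    if (open my $fh, '>>', $log) {
        print $fh $line;            # may fail when the filesystem is full...
        close $fh;
    }
    # ...but stdin is consumed either way, so the pipe never fills up
    # and the apache children never block on their log writes.
}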

Alternatively, of course, you could write your own log handler
instead of using the default apache ones... and now that I think
about it, that probably was your question, wasn't it :-(

-Tom

--
[EMAIL PROTECTED]   | Put all your eggs in one basket and 
http://BareMetal.com/  |  WATCH THAT BASKET!
web hosting since '95  | - Mark Twain





killing of greater than MaxSpareServers

2001-01-17 Thread ___cliff rayman___

here is an excerpt from httpd.h:
/*
 * (Unix, OS/2 only)
 * Interval, in microseconds, between scoreboard maintenance.  During
 * each scoreboard maintenance cycle the parent decides if it needs to
 * spawn a new child (to meet MinSpareServers requirements), or kill off
 * a child (to meet MaxSpareServers requirements).  It will only spawn or
 * kill one child per cycle.  Setting this too low will chew cpu.  The
 * default is probably sufficient for everyone.  But some people may want
 * to raise this on servers which aren't dedicated to httpd and where they
 * don't like the httpd waking up each second to see what's going on.
 */
#ifndef SCOREBOARD_MAINTENANCE_INTERVAL
#define SCOREBOARD_MAINTENANCE_INTERVAL 1000000
#endif

and the code from http_main.c:

if (idle_count > ap_daemons_max_free) {  /* set indirectly from MaxSpareServers */
    /* kill off one child... we use SIGUSR1 because that'll cause it to
     * shut down gracefully, in case it happened to pick up a request
     * while we were counting
     */
    kill(ap_scoreboard_image->parent[to_kill].pid, SIGUSR1);
    idle_spawn_rate = 1;
}

Perrin Harkins wrote:

> On Wed, 17 Jan 2001, Sam Horrocks wrote:
> > If in both the MRU/LRU case there were exactly 10 interpreters busy at
> > all times, then you're right it wouldn't matter.  But don't confuse
> > the issues - 10 concurrent requests do *not* necessarily require 10
> > concurrent interpreters.  The MRU has an effect on the way a stream of 10
> > concurrent requests are handled, and MRU results in those same requests
> > being handled by fewer interpreters.
>
> On a side note, I'm curious about how Apache decides that child
> processes are unused and can be killed off.  The spawning of new processes
> is pretty aggressive on a busy server, but if the server reaches a steady
> state and some processes aren't needed they should be killed off.  Maybe
> no one has bothered to make that part very efficient since in normal
> circumstances most users would prefer to have extra processes waiting
> around than not have enough to handle a surge and have to spawn a whole
> bunch.
>
> - Perrin

--
___cliff [EMAIL PROTECTED]    http://www.genwax.com/





mod_perm and Java servlets

2001-01-17 Thread Terry Newnham


Hi

My boss has asked me to set up a web server on Solaris 8 with mod_perl
and (if possible) Java servlet capabilities as well. Has anybody done
this ? Any issues ?

Terry




Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2001-01-17 Thread Perrin Harkins

On Wed, 17 Jan 2001, Sam Horrocks wrote:
> If in both the MRU/LRU case there were exactly 10 interpreters busy at
> all times, then you're right it wouldn't matter.  But don't confuse
> the issues - 10 concurrent requests do *not* necessarily require 10
> concurrent interpreters.  The MRU has an effect on the way a stream of 10
> concurrent requests are handled, and MRU results in those same requests
> being handled by fewer interpreters.

On a side note, I'm curious about how Apache decides that child
processes are unused and can be killed off.  The spawning of new processes
is pretty aggressive on a busy server, but if the server reaches a steady
state and some processes aren't needed they should be killed off.  Maybe
no one has bothered to make that part very efficient since in normal
circumstances most users would prefer to have extra processes waiting
around than not have enough to handle a surge and have to spawn a whole
bunch.

- Perrin




Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2001-01-17 Thread Sam Horrocks

There is no coffee.  Only meals.  No substitutions. :-)

If we added coffee to the menu it would still have to be prepared by the cook.
Remember that you only have one CPU, and all the perl interpreters large and
small must gain access to that CPU in order to run.

Sam


 > I have a wide assortment of queries on a site, some of which take several
 > minutes to execute, while others execute in less than one second. If I
 > understand this analogy correctly, I'd be better off with the current
 > incarnation of mod_perl because there would be more cashiers around to
 > serve the "quick cups of coffee" that many customers request at my diner.
 > 
 > Is this correct?
 > 
 > 
 > Sam Horrocks wrote:
 > > 
 > > I think the major problem is that you're assuming that just because
 > > there are 10 constant concurrent requests, that there have to be 10
 > > perl processes serving those requests at all times in order to get
 > > maximum throughput.  The problem with that assumption is that there
 > > is only one CPU - ten processes cannot all run simultaneously anyways,
 > > so you don't really need ten perl interpreters.
 > > 
 > > I've been trying to think of better ways to explain this.  I'll try to
 > > explain with an analogy - it's sort-of lame, but maybe it'll give you
 > > a mental picture of what's happening.  To eliminate some confusion,
 > > this analogy doesn't address LRU/MRU, nor waiting on other events like
 > > network or disk i/o.  It only tries to explain why you don't necessarily
 > > need 10 perl-interpreters to handle a stream of 10 concurrent requests
 > > on a single-CPU system.
 > > 
 > > You own a fast-food restaurant.  The players involved are:
 > > 
 > > Your customers.  These represent the http requests.
 > > 
 > > Your cashiers.  These represent the perl interpreters.
 > > 
 > > Your cook.  You only have one.  This represents your CPU.
 > > 
 > > The normal flow of events is this:
 > > 
 > > A cashier gets an order from a customer.  The cashier goes and
 > > waits until the cook is free, and then gives the order to the cook.
 > > The cook then cooks the meal, taking 5-minutes for each meal.
 > > The cashier waits for the meal to be ready, then takes the meal and
 > > gives it to the customer.  The cashier then serves another customer.
 > > The cashier/customer interaction takes a very small amount of time.
 > > 
 > > The analogy is this:
 > > 
 > > An http request (customer) arrives.  It is given to a perl
 > > interpreter (cashier).  A perl interpreter must wait for all other
 > > perl interpreters ahead of it to finish using the CPU (the cook).
 > > It can't serve any other requests until it finishes this one.
 > > When its turn arrives, the perl interpreter uses the CPU to process
 > > the perl code.  It then finishes and gives the results over to the
 > > http client (the customer).
 > > 
 > > Now, say in this analogy you begin the day with 10 customers in the store.
 > > At each 5-minute interval thereafter another customer arrives.  So at time
 > > 0, there is a pool of 10 customers.  At time +5, another customer arrives.
 > > At time +10, another customer arrives, ad infinitum.
 > > 
 > > You could hire 10 cashiers in order to handle this load.  What would
 > > happen is that the 10 cashiers would fairly quickly get all the orders
 > > from the first 10 customers simultaneously, and then start waiting for
 > > the cook.  The 10 cashiers would queue up.  Cashier #1 would put in the
 > > first order.  Cashiers 2-9 would wait their turn.  After 5-minutes,
 > > cashier number 1 would receive the meal, deliver it to customer #1, and
 > > then serve the next customer (#11) that just arrived at the 5-minute mark.
 > > Cashier #1 would take customer #11's order, then queue up and wait in
 > > line for the cook - there will be 9 other cashiers already in line, so
 > > the wait will be long.  At the 10-minute mark, cashier #2 would receive
 > > a meal from the cook, deliver it to customer #2, then go on and serve
 > > the next customer (#12) that just arrived.  Cashier #2 would then go and
 > > wait in line for the cook.  This continues on through all the cashiers
 > > in order 1-10, then repeating, 1-10, ad infinitum.
 > > 
 > > Now even though you have 10 cashiers, most of their time is spent
 > > waiting to put in an order to the cook.  Starting with customer #11,
 > > all customers will wait 50-minutes for their meal.  When customer #11
 > > comes in he/she will immediately get to place an order, but it will take
 > > the cashier 45-minutes to wait for the cook to become free, and another
 > > 5-minutes for the meal to be cooked.  Same is true for customer #12,
 > > and all customers from then on.
 > > 
 > > Now, the question is, could you get the same throughput with fewer
 > > cashiers?  Say you had 2 cashiers instead.  The 10 customers are
 > > there waiting.  The 2 cashiers take orders from customers #1 and #2.
 > > Cashie

Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2001-01-17 Thread Sam Horrocks

 > I guess as I get older I start to slip technically. :) This helps me a bit, 
 > but it doesn't really help me understand the final argument (that MRU is 
 > still going to help on a fully loaded system).
 > 
 > With some modification, I guess I am thinking that the cook is really the 
 > OS and the CPU is really the oven. But the hamburgers on an Intel oven have 
 > to be timesliced instead of left to cook and then after it's done the next 
 > hamburger is put on.
 > 
 > So if we think of meals as Perl requests, the reality is that not all meals 
 > take the same amount of time to cook. A quarter pounder surely takes longer 
 > than your typical paper thin McDonald's Patty.
 > 
 > The fact that a customer requests a meal that takes longer to cook than 
 > another one is relatively random. In fact in the real world, it is likely 
 > to be random. This means that it's possible for all 10 meals to be cooking 
 > but the 3rd meal gets done really fast, so another customer gets time 
 > sliced to use the oven for their meal -- which might be a long meal.

I don't like your mods to the analogy, because they don't model how
a CPU actually works.  Even if the cook == the OS and the oven == the
CPU, the oven *must* work on tasks sequentially.  If you look at the
assembly language for your Intel CPU you won't see anything about it
doing multi-tasking.  It does adds, subtracts, stores, loads, jumps, etc.
It executes code sequentially.  You must model this somewhere in your
analogy if it's going to be accurate.

So I'll modify your analogy to say the oven can only cook one thing at
a time.  Now, what you could do is have the cook take one of the longer
meals (the 10 minute meatloaf) out of the oven in order to cook something
small, then put the meatloaf back later to finish cooking.  But the oven
does *not* cook things in parallel.  Remember that things have
to cook for a very long time before they get timesliced -- 210ms is a
long time for a CPU, and that's the default timeslice on a Linux PC.

If we say the oven cooks things sequentially, it doesn't really change
the overall results that I had in the previous example.  The cook just
puts things in the oven sequentially, in the order in which they were
received from the cashiers - this represents the run queue in the OS.
But the cashiers still sit there and wait for the meals from the cook,
and the cook just stands there waiting for the oven to cook meals
sequentially.

 > In your testing, perhaps the problem is that you are benchmarking with a 
 > homogeneous process. So of course you are seeing this behavior that makes 
 > it look like serializing 10 connections is just the same wait as time 
 > slicing them and therefore an MRU algorithm works better (of course it 
 > works better, because you keep releasing the systems in order)...
 > 
 > But in the world where the 3rd or 5th or 6th process may finish sooner and 
 > release sooner than others, then an MRU algorithm doesn't matter. And 
 > actually a process that finishes in 10 seconds shouldn't have to wait until 
 > a process that takes 30 seconds to complete has finished.

No, homogeneity (or the lack of it) wouldn't make a difference.  Those 3rd,
5th or 6th processes run only *after* the 1st and 2nd have finished using
the CPU.  And at that point you could re-use those interpreters that 1 and 2
were using.

 > And all 10 interpreters are in use at the same time, serving all requests 
 > and randomly popping off the queue and starting again where no MRU or LRU 
 > algorithm will really help. It's all the same.

If in both the MRU/LRU case there were exactly 10 interpreters busy at
all times, then you're right it wouldn't matter.  But don't confuse
the issues - 10 concurrent requests do *not* necessarily require 10
concurrent interpreters.  The MRU has an effect on the way a stream of 10
concurrent requests are handled, and MRU results in those same requests
being handled by fewer interpreters.

 > Anyway, maybe I am still not really getting it. Even with the fast food 
 > analogy. Maybe it is time to throw in the network time and other variables 
 > that seemed to make a difference in Perrin's understanding of how you were 
 > approaching the explanation.

Please again take a look at the first analogy.  The CPU can't do multi-tasking.
Until that gets straightened out, I don't think adding more to the analogy
will help.

Also, I think the analogy is about to break - that's why I put in extra
disclaimers at the top.  It was only intended to show that 10 concurrent
requests don't necessarily require 10 perl interpreters in order to
achieve maximum throughput.

 > I am now curious -- on a fully loaded system of max 10 processes, did you 
 > see that SpeedyCGI scaled better than mod_perl on your benchmarks? Or are 
 > we still just speculating?

It is actually possible to benchmark.  Given the same concurrent load
and the same number of httpds running, speedycgi will use fewer perl
interpreters than mod_perl.  This

Re: cannot execute my cgi perls

2001-01-17 Thread Gustavo Vieira Goncalves Coelho Rios

"G.W. Haywood" wrote:
> 
> Hi G,
> 
> On Wed, 17 Jan 2001, Gustavo Vieira Goncalves Coelho Rios wrote:
> 
> > [Wed Jan 17 18:04:41 2001] [error] [client 192.168.1.11] Premature end
> > of script headers: /home/grios/.public_html/cgi-bin/bench3.cgi
> 
> Who knows?  Something isn't finishing what it started.  Post the script.
> 
> 73,
> Ged.

Got the problem sorted out!

I had not set up suexec properly!

Now it works fine.

Thanks for all the feedback.



RE: cannot execute my cgi perls

2001-01-17 Thread Wilt, Paul

Gustavo:

This usually happens if you get an error before the application has a chance
to send the appropriate HTTP headers back to the client.
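For illustration, the safe ordering looks like this (a minimal sketch;
the script body is made up):

#!/usr/bin/perl -w
use strict;

# Send the complete header block first, before any code that might die().
print "Content-type: text/plain\r\n\r\n";

# A failure from here on can produce an ugly page, but not the
# "Premature end of script headers" error.
do_real_work();

sub do_real_work { print "hello\n" }   # illustrative stub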

Paul E Wilt 
Principal Software Engineer

XanEdu, Inc. ( a division of Bell+Howell Information&Learning)
http://www.XanEdu.com
mailto:[EMAIL PROTECTED]
mailto:[EMAIL PROTECTED]
300 North Zeeb Rd       Phone: (734) 975-6021  (800) 521-0600 x6021
Ann Arbor, MI 48106     Fax:   (734) 973-0737




-Original Message-
From: Gustavo Vieira Goncalves Coelho Rios [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 17, 2001 3:09 PM
To: [EMAIL PROTECTED]
Subject: cannot execute my cgi perls


Hi folks, i have set up my apache box, but i am facing some problems
executing my cgi scripts (perl ones, of course).

That's all i get from error_log:

[Wed Jan 17 18:04:41 2001] [error] [client 192.168.1.11] Premature end
of script headers: /home/grios/.public_html/cgi-bin/bench3.cgi

Can someone here explain to me what is happening?

Another question:

Is there an apache-related mailing list?



Re: cannot execute my cgi perls

2001-01-17 Thread G.W. Haywood

Hi G,

On Wed, 17 Jan 2001, Gustavo Vieira Goncalves Coelho Rios wrote:

> [Wed Jan 17 18:04:41 2001] [error] [client 192.168.1.11] Premature end
> of script headers: /home/grios/.public_html/cgi-bin/bench3.cgi

Who knows?  Something isn't finishing what it started.  Post the script.

73,
Ged.





cannot execute my cgi perls

2001-01-17 Thread Gustavo Vieira Goncalves Coelho Rios

Hi folks, i have set up my apache box, but i am facing some problems
executing my cgi scripts (perl ones, of course).

That's all i get from error_log:

[Wed Jan 17 18:04:41 2001] [error] [client 192.168.1.11] Premature end
of script headers: /home/grios/.public_html/cgi-bin/bench3.cgi

Can someone here explain to me what is happening?

Another question:

Is there an apache-related mailing list?



Re: [OT] Apache wedges when log filesystem is full

2001-01-17 Thread G.W. Haywood

Hi Andrew,

On Wed, 17 Jan 2001, Andrew Ho wrote:

> The other day we had a system fail because the partition that holds the
> logs became full, and Apache stopped responding to requests.

http://perl.apache.org/guide - Controlling and Monitoring the Server

Look for watchdog.pl (if it's still in there:).

73,
Ged.




[OT] Apache wedges when log filesystem is full

2001-01-17 Thread Andrew Ho

Hello,

The other day we had a system fail because the partition that holds the
logs became full, and Apache stopped responding to requests. Deleting some
old log files in that partition solved the problem.

We pipe logs to cronolog (http://www.ford-mason.co.uk/resources/cronolog/)
to roll them daily, so this introduces a pipe and also means that the
individual logfile being written to was relatively small.

While this points to a fault in our monitoring (we should have been
monitoring free space on /var), I am also interested in something I was
unable to find on-line: how to configure Apache to fail over robustly in
such a case, e.g. keep serving responses but just stop writing to logs.

Thanks in advance if anybody has any pointers.

Humbly,

Andrew

--
Andrew Ho   http://www.tellme.com/   [EMAIL PROTECTED]
Engineer   [EMAIL PROTECTED]  Voice 650-930-9062
Tellme Networks, Inc.   1-800-555-TELL   Fax 650-930-9101
--




mod perl and embperl

2001-01-17 Thread Gustavo Vieira Goncalves Coelho Rios

hi folks!

Here am i again with some more doubts.

I have installed mod_perl as a DSO for apache, everything is ok!
Now i am in need of embperl; the problem begins when i run perl
Makefile.PL, which asks me for the apache source!
Since i have every apache module as a DSO, i have deleted everything
related to the apache source.

How to enable embperl with mod_perl ?



Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2001-01-17 Thread Buddy Lee Haystack

I have a wide assortment of queries on a site, some of which take several
minutes to execute, while others execute in less than one second. If I
understand this analogy correctly, I'd be better off with the current
incarnation of mod_perl because there would be more cashiers around to
serve the "quick cups of coffee" that many customers request at my diner.

Is this correct?


Sam Horrocks wrote:
> 
> I think the major problem is that you're assuming that just because
> there are 10 constant concurrent requests, that there have to be 10
> perl processes serving those requests at all times in order to get
> maximum throughput.  The problem with that assumption is that there
> is only one CPU - ten processes cannot all run simultaneously anyways,
> so you don't really need ten perl interpreters.
> 
> I've been trying to think of better ways to explain this.  I'll try to
> explain with an analogy - it's sort-of lame, but maybe it'll give you
> a mental picture of what's happening.  To eliminate some confusion,
> this analogy doesn't address LRU/MRU, nor waiting on other events like
> network or disk i/o.  It only tries to explain why you don't necessarily
> need 10 perl-interpreters to handle a stream of 10 concurrent requests
> on a single-CPU system.
> 
> You own a fast-food restaurant.  The players involved are:
> 
> Your customers.  These represent the http requests.
> 
> Your cashiers.  These represent the perl interpreters.
> 
> Your cook.  You only have one.  This represents your CPU.
> 
> The normal flow of events is this:
> 
> A cashier gets an order from a customer.  The cashier goes and
> waits until the cook is free, and then gives the order to the cook.
> The cook then cooks the meal, taking 5-minutes for each meal.
> The cashier waits for the meal to be ready, then takes the meal and
> gives it to the customer.  The cashier then serves another customer.
> The cashier/customer interaction takes a very small amount of time.
> 
> The analogy is this:
> 
> An http request (customer) arrives.  It is given to a perl
> interpreter (cashier).  A perl interpreter must wait for all other
> perl interpreters ahead of it to finish using the CPU (the cook).
> It can't serve any other requests until it finishes this one.
> When its turn arrives, the perl interpreter uses the CPU to process
> the perl code.  It then finishes and gives the results over to the
> http client (the customer).
> 
> Now, say in this analogy you begin the day with 10 customers in the store.
> At each 5-minute interval thereafter another customer arrives.  So at time
> 0, there is a pool of 10 customers.  At time +5, another customer arrives.
> At time +10, another customer arrives, ad infinitum.
> 
> You could hire 10 cashiers in order to handle this load.  What would
> happen is that the 10 cashiers would fairly quickly get all the orders
> from the first 10 customers simultaneously, and then start waiting for
> the cook.  The 10 cashiers would queue up.  Cashier #1 would put in the
> first order.  Cashiers 2-9 would wait their turn.  After 5-minutes,
> cashier number 1 would receive the meal, deliver it to customer #1, and
> then serve the next customer (#11) that just arrived at the 5-minute mark.
> Cashier #1 would take customer #11's order, then queue up and wait in
> line for the cook - there will be 9 other cashiers already in line, so
> the wait will be long.  At the 10-minute mark, cashier #2 would receive
> a meal from the cook, deliver it to customer #2, then go on and serve
> the next customer (#12) that just arrived.  Cashier #2 would then go and
> wait in line for the cook.  This continues on through all the cashiers
> in order 1-10, then repeating, 1-10, ad infinitum.
> 
> Now even though you have 10 cashiers, most of their time is spent
> waiting to put in an order to the cook.  Starting with customer #11,
> all customers will wait 50-minutes for their meal.  When customer #11
> comes in he/she will immediately get to place an order, but it will take
> the cashier 45-minutes to wait for the cook to become free, and another
> 5-minutes for the meal to be cooked.  Same is true for customer #12,
> and all customers from then on.
> 
> Now, the question is, could you get the same throughput with fewer
> cashiers?  Say you had 2 cashiers instead.  The 10 customers are
> there waiting.  The 2 cashiers take orders from customers #1 and #2.
> Cashier #1 then gives the order to the cook and waits.  Cashier #2 waits
> in line for the cook behind cashier #1.  At the 5-minute mark, the first
> meal is done.  Cashier #1 delivers the meal to customer #1, then serves
> customer #3.  Cashier #1 then goes and stands in line behind cashier #2.
> At the 10-minute mark, cashier #2's meal is ready - it's delivered to
> customer #2 and then customer #4 is served.  This continues on with the
> cashiers trading off between serving customers.
> 
> Does the scenario with two cash

Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2001-01-17 Thread Gunther Birznieks

I guess as I get older I start to slip technically. :) This helps me a bit, 
but it doesn't really help me understand the final argument (that MRU is 
still going to help on a fully loaded system).

With some modification, I guess I am thinking that the cook is really the 
OS and the CPU is really the oven. But the hamburgers on an Intel oven have 
to be timesliced instead of left to cook and then after it's done the next 
hamburger is put on.

So if we think of meals as Perl requests, the reality is that not all meals 
take the same amount of time to cook. A quarter pounder surely takes longer 
than your typical paper thin McDonald's Patty.

The fact that a customer requests a meal that takes longer to cook than 
another one is relatively random. In fact in the real world, it is likely 
to be random. This means that it's possible for all 10 meals to be cooking 
but the 3rd meal gets done really fast, so another customer gets time 
sliced to use the oven for their meal -- which might be a long meal.

In your testing, perhaps the problem is that you are benchmarking with a 
homogeneous process. So of course you are seeing this behavior that makes 
it look like serializing 10 connections is just the same wait as time 
slicing them and therefore an MRU algorithm works better (of course it 
works better, because you keep releasing the systems in order)...

But in the world where the 3rd or 5th or 6th process may finish sooner and 
release sooner than others, then an MRU algorithm doesn't matter. And 
actually a process that finishes in 10 seconds shouldn't have to wait until 
a process that takes 30 seconds to complete has finished.

And all 10 interpreters are in use at the same time, serving all requests 
and randomly popping off the queue and starting again where no MRU or LRU 
algorithm will really help. It's all the same.

Anyway, maybe I am still not really getting it. Even with the fast food 
analogy. Maybe it is time to throw in the network time and other variables 
that seemed to make a difference in Perrin's understanding of how you were 
approaching the explanation.

I am now curious -- on a fully loaded system of max 10 processes, did you 
see that SpeedyCGI scaled better than mod_perl on your benchmarks? Or are 
we still just speculating?

At 03:19 AM 1/17/01 -0800, Sam Horrocks wrote:
>I think the major problem is that you're assuming that just because
>there are 10 constant concurrent requests, that there have to be 10
>perl processes serving those requests at all times in order to get
>maximum throughput.  The problem with that assumption is that there
>is only one CPU - ten processes cannot all run simultaneously anyways,
>so you don't really need ten perl interpreters.
>
>I've been trying to think of better ways to explain this.  I'll try to
>explain with an analogy - it's sort-of lame, but maybe it'll give you
>a mental picture of what's happening.  To eliminate some confusion,
>this analogy doesn't address LRU/MRU, nor waiting on other events like
>network or disk i/o.  It only tries to explain why you don't necessarily
>need 10 perl-interpreters to handle a stream of 10 concurrent requests
>on a single-CPU system.
>
>You own a fast-food restaurant.  The players involved are:
>
> Your customers.  These represent the http requests.
>
> Your cashiers.  These represent the perl interpreters.
>
> Your cook.  You only have one.  This represents your CPU.
>
>The normal flow of events is this:
>
> A cashier gets an order from a customer.  The cashier goes and
> waits until the cook is free, and then gives the order to the cook.
> The cook then cooks the meal, taking 5-minutes for each meal.
> The cashier waits for the meal to be ready, then takes the meal and
> gives it to the customer.  The cashier then serves another customer.
> The cashier/customer interaction takes a very small amount of time.
>
>The analogy is this:
>
> An http request (customer) arrives.  It is given to a perl
> interpreter (cashier).  A perl interpreter must wait for all other
> perl interpreters ahead of it to finish using the CPU (the cook).
> It can't serve any other requests until it finishes this one.
> When its turn arrives, the perl interpreter uses the CPU to process
> the perl code.  It then finishes and gives the results over to the
> http client (the customer).
>
>Now, say in this analogy you begin the day with 10 customers in the store.
>At each 5-minute interval thereafter another customer arrives.  So at time
>0, there is a pool of 10 customers.  At time +5, another customer arrives.
>At time +10, another customer arrives, ad infinitum.
>
>You could hire 10 cashiers in order to handle this load.  What would
>happen is that the 10 cashiers would fairly quickly get all the orders
>from the first 10 customers simultaneously, and then start waiting for
>the cook.  The 10 cashiers would queue up.  Cashier #1 would put in the
>first order.  Cashiers 2-9 wou

Re: [Re: With high request rate, server stops responding with load zero]

2001-01-17 Thread emarkert

I ran into a problem similar to this...

As it turned out, it was due to a communication problem between my web
server and a remote database - specifically, the database was being used
for access control.  When the link between apache and MySQL crapped out,
apache sat there, spun its wheels, and then refused further connections...

MySQL, on the other hand, acted fine - it accepted connections and
performed like it was designed to.  I found this out because I wrote a
perl script that tested connections from each server, e.g. apache =>
MySQL, MySQL => MySQL...

Hope this helps...

Perrin Harkins <[EMAIL PROTECTED]> wrote:
> On Tue, 16 Jan 2001, Honza Pazdziora wrote:
> > The machines are alright memorywise, they seem to be a bit slow on
> > CPU, however what bothers me is the deadlock situation to which they
> > get. No more slow crunching, they just stop accepting connections.
>
> I've only seen that happen when something was hanging them up, like
> running out of memory or waiting for a database resource.  Are you using
> NFS by any chance?
>
> > Is there a way to allow a lot of children to be spawned but limit
> > the number of children that serve requests?
>
> I don't think you want that.  If the server is busy, Apache will spawn
> more as soon as it can.  Of course PerlRunOnce is a huge
> liability.  Getting rid of that would surely help a lot.
>
> - Perrin


===
"If you put three drops of poison into a 100 percent pure Java, you get -
Windows. If you put a few drops of Java into Windows, you still have
Windows."
-- Sun Microsystems CEO, Scott McNealy





Re: FileMan - Not enough arguments for mkdir

2001-01-17 Thread George Sanderson

At 08:59 AM 1/17/2001 +0200, you wrote:
>Hi,
>
>This is me again. Thanks for the quick response. Another two questions:
>in your demo http://www.xorgate.com/FileMan/demo/.XFM/
>I just tried to upload a file "1"; it reported "ERROR: MkFile: Parent access
>denied", but I suspect it managed to do the open() before that... so you can
>see "1" there now with 0k size.
>a hole? - someone malicious could create as many files as he wants this way...
>
Fixed.  The parent-directory check was being done after the open.
The source is available from:
http://www.xorgate.com/FileMan
as the file FileMan-0.05a.tar.gz
Thanks
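A minimal sketch of that ordering fix (illustrative only, not FileMan's
actual code):

use strict;
use warnings;
use File::Basename qw(dirname);

sub mk_file {
    my ($path) = @_;
    # Check parent-directory access *before* the open, not after, so a
    # denied request cannot leave an empty file behind.
    return "ERROR: MkFile: Parent access denied" unless -w dirname($path);
    open my $fh, '>', $path or return "ERROR: MkFile: $!";
    close $fh;
    return "OK";
}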
>
>and another one. I tried to install it (the 0.04 version), and after an hour
>of struggle I got it compiled and make test passed, but I can't get it
>running in apache (1.3.14+1.24_01).
>
Sorry about the struggle part.  This is my first module.
Since you are installing this as a stand-alone module, the other Perl
module dependencies are not automatically installed for you, as CPAN
will hopefully do once the code gets released.

Others may have some advice for me on how CPAN determines a module's
dependencies and installs them automatically.

You need the following other Perl modules installed (CPAN works great
for this):
use Archive::Zip
use File::Copy
use File::Path
use File::NCopy
use Data::Dumper
use Storable
use HTML::HeadParser





Re: [Fwd: AuthenDBI idea]

2001-01-17 Thread Tim Bunce

On Mon, Jan 15, 2001 at 08:37:18PM -0800, Ask Bjoern Hansen wrote:
> On Mon, 15 Jan 2001, Edmund Mergl wrote:
> 
> > any comments ?
> 
> [count number of times a user has logged in and such things]
> 
> Other people would like to count / note / ... other things.
> 
> It would be neater if you made an API the programmer could plug his
> own stuff into. Like "call this class/sub/foobar" when the user logs
> in, enters an invalid password, ...  

I agree entirely.

Tim.

p.s. I have a patch in the works that makes AuthenDBI store all the
fields in the user table for the given user into $r->pnotes().
That way users don't have to query the user table again to get
extra information, which can be very expensive for busy sites.
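The intended usage would look roughly like this (a sketch; the pnotes key
and the field values are assumptions, not taken from the patch):

package My::AuthExample;   # hypothetical handler package
use strict;
use Apache::Constants qw(OK);

sub handler {
    my $r = shift;
    # Stash the user's whole row once, at authentication time
    # (placeholder values here).
    $r->pnotes(user_record => { username => 'xyz', usecount => 3 });
    return OK;
}

# Later in the same request, e.g. in a content handler, read it back
# without another database query:
#   my $rec = $r->pnotes('user_record');

1;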



Re: installing modperl

2001-01-17 Thread Matt Sergeant

On Wed, 17 Jan 2001, George Sanderson wrote:

> At 09:52 AM 1/17/2001 -0200, you wrote:
> >The documentation in the package is extremely poor, so can somebody
> >here point me to the right source of documentation?
> >
> I had the same first impression, but I have readjusted my thinking to:
> "The documentation is extremely good, once it is located."
>
> Start with:
> For mod_perl:
>
> http://perl.apache.org/

You forgot:

http://take23.org/

-- 


** Director and CTO **
** AxKit.com Ltd **  ** XML Application Serving **
** http://axkit.org **  ** XSLT, XPathScript, XSP **
** Personal Web Site: http://sergeant.org/ **




Re: installing modperl

2001-01-17 Thread G.W. Haywood

Hi there,

On Wed, 17 Jan 2001, Gustavo Vieira Goncalves Coelho Rios wrote:

> I trying the following program:
[snip]
> I get printed and not execute in the browser.

There is not enough information in your question to give an answer, but
I suspect that you might be trying to run the program from the command
line.  You need to configure Apache so that IT runs the program when a
browser requests a URI - one which you choose.

For full details, read the documentation to which I have already given
you references.

> My apache error log is:
> [Wed Jan 17 10:04:16 2001] [error] Can't locate object method "inh_tree"
> via package "Devel::Symdump" at
> /usr/local/lib/perl5/site_perl/5.005/i386-freebsd/Apache/Status.pm line 222.

This could be a different problem altogether; it looks like you are trying
to run Apache::Status somehow.  Maybe by accident?

73,
Ged.




Re: installing modperl

2001-01-17 Thread George Sanderson

At 09:52 AM 1/17/2001 -0200, you wrote:
>The documentation in the package is extremely poor, so can somebody
>here point me to the right source of documentation?
>
I had the same first impression, but I have readjusted my thinking to:
"The documentation is extremely good, once it is located."

Start with:
For mod_perl:

http://perl.apache.org/

For Apache:

http://httpd.apache.org/docs

http://httpd.apache.org/docs/mod/core.html

http://modules.apache.org/

The "digest" email messages on this list are also very informative.
When you have a problem, the archives are "a very valuable resource".








Re: installing modperl

2001-01-17 Thread Gustavo Vieira Goncalves Coelho Rios

"G.W. Haywood" wrote:
> 
> Hi again,
> 
> On Wed, 17 Jan 2001, Gustavo Vieira Goncalves Coelho Rios wrote:
> 
> > Security (First),
> > Performance (Second).
> 
> These are large subjects in their own right and will not properly be
> covered by either the mod_perl or the Apache documentation, for good reasons.
> Your Apache server does not live in isolation but with a whole bunch
> of other software on which it may depend for services, or about which
> it may know nothing at all.  All of the other software on the machine
> can have an impact on security and performance.
> 
> Of course that isn't to say that there are no security and performance
> issues which are specific to Apache and mod_perl.  There are, and they
> are given a roasting regularly on the List.
> 
> You need to spend a lot of time reading the List, its archives, the
> Guide, the Eagle Book, and anything else you can get your hands on.
> There are books about Apache itself, "Professional Apache" by Peter
> Wainwright is one I liked.  Only by studying in depth will you have
> any hope of securing your high-performance server.
> 
> 73,
> Ged.


Thanks all for help!

Since this is my first attempt to get mod_perl running with a single
program of mine, i would appreciate your patience if i ask some
stupid questions here.

I am trying the following program:

#!/usr/bin/perl
use strict;
# print the environment in a mod_perl program under Apache::Registry

print "Content-type: text/html\n\n";

print "Apache::Registry Environment\n";

print "\n";
print map { "$_ = $ENV{$_}\n" } sort keys %ENV;
print "\n";


The script source gets printed, not executed, in the browser.

My apache error log is:
[Wed Jan 17 10:04:16 2001] [error] Can't locate object method "inh_tree"
via package "Devel::Symdump" at
/usr/local/lib/perl5/site_perl/5.005/i386-freebsd/Apache/Status.pm line
222.


Can someone explain what i missed here?



Re: installing modperl

2001-01-17 Thread G.W. Haywood

Hi again,

On Wed, 17 Jan 2001, Gustavo Vieira Goncalves Coelho Rios wrote:

> Security (First),
> Performance (Second).

These are large subjects in their own right and will not properly be
covered by either the mod_perl or the Apache documentation, for good reasons.
Your Apache server does not live in isolation but with a whole bunch
of other software on which it may depend for services, or about which
it may know nothing at all.  All of the other software on the machine
can have an impact on security and performance.

Of course that isn't to say that there are no security and performance
issues which are specific to Apache and mod_perl.  There are, and they
are given a roasting regularly on the List.

You need to spend a lot of time reading the List, its archives, the
Guide, the Eagle Book, and anything else you can get your hands on.
There are books about Apache itself, "Professional Apache" by Peter
Wainwright is one I liked.  Only by studying in depth will you have
any hope of securing your high-performance server.

73,
Ged.





Re: [Fwd: AuthenDBI idea]

2001-01-17 Thread George Sanderson

At 08:37 PM 1/15/2001 -0800, you wrote:
>On Mon, 15 Jan 2001, Edmund Mergl wrote:
>
>> any comments ?
>
>[count number of times a user has logged in and such things]
>
Hope I am not out of place here, and also that the ideas are generic enough
to be applied to a wide number of authentication requirements.

Here are two ideas.
I.
The first idea (for authentication):
Provide a directive to perform a comparison on any or all fields of the
current user's record.  If the comparison is true, provide a URL to
REDIRECT the original request.

The supporting directives could be something like:
Auth_DBI_comp  {regexp}
Auth_DBI_url"http://www.redirect.com/ok/"

Where regexp is a comparison string and url is where to REDIRECT the user
if the comparison is true.

The original request URL should be passed as a query-string argument, so
that a REDIRECT cgi target script could determine the originally requested
URL.  The target script could update any fields as required.

The regexp needs to be able to easily access any arbitrary field values
for the current user's record, perhaps simply by prepending a '$' to the
field name.  For example:

Auth_DBI_comp {$username='xyz' && $usecount<4}

This would REDIRECT every login with field "usecount" less than 4 for
field "username" equal to 'xyz'.

A pass and fail condition would also be needed, perhaps just designated as
PASS and FAIL.

Being able to specify multiple conditions per authorization attempt would
be useful.

II.
A second idea (for authorization) is to provide a generic way to set an
Apache environment variable with the contents of any field of the current
user's record.  For example:

Auth_DBI_env  field1,field2

This would set two environment variables, "FIELD1" and "FIELD2", to the
corresponding field contents of the current user's record.

I suppose if the database had multiple records for a user, then the
environment variables would contain a list of values.
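
A sketch of how a handler might then consume these (the variable names
and the comma-separated list format are illustrative assumptions, not
part of the proposal):

# after: Auth_DBI_env  field1,field2
my $f1 = $ENV{FIELD1};
my $f2 = $ENV{FIELD2};
my @values = split /,/, $f1;   # if multiple records yield a list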







Re: installing modperl

2001-01-17 Thread Gustavo Vieira Goncalves Coelho Rios

"G.W. Haywood" wrote:
> 
> Hi there,
> 
> On Wed, 17 Jan 2001, Gustavo Vieira Goncalves Coelho Rios wrote:
> 
> > Where can i obtain such packages ?
> 
> CPAN.  LWP is not necessary for a working mod_perl but it's
> recommended for the test suite (and lots of other things:).
> 
> You will find more documentation in the Guide
> 
> http://perl.apache.org/guide
> 
> and in the Eagle Book
> 
>   "Writing Apache Modules with Perl and C", ISBN 1-56592-567-X,
>   by Lincoln Stein and Doug MacEachern.
> 
> 
> 73,
> Ged.

OK! Now I have everything working correctly. But now I need to know
how to set up Apache correctly with two points in mind:

Security (First),
Performance (Second).

The documentation in the package is extremely poor, so could somebody
here point me to the right source of documentation?

Thanks a lot for your help and time.



Re: installing modperl

2001-01-17 Thread G.W. Haywood

Hi there,

On Wed, 17 Jan 2001, Gustavo Vieira Goncalves Coelho Rios wrote:

> Where can i obtain such packages ?

CPAN.  LWP is not necessary for a working mod_perl but it's
recommended for the test suite (and lots of other things:).
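
For example, from the command line (a minimal sketch, assuming the CPAN
shell has already been configured):

perl -MCPAN -e 'install "LWP"'
perl -MCPAN -e 'install "HTML::Parser"'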

You will find more documentation in the Guide

http://perl.apache.org/guide

and in the Eagle Book

  "Writing Apache Modules with Perl and C", ISBN 1-56592-567-X,
  by Lincoln Stein and Doug MacEachern.

 
73,
Ged.




Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2001-01-17 Thread Sam Horrocks

I think the major problem is that you're assuming that just because
there are 10 constant concurrent requests, that there have to be 10
perl processes serving those requests at all times in order to get
maximum throughput.  The problem with that assumption is that there
is only one CPU - ten processes cannot all run simultaneously anyway,
so you don't really need ten perl interpreters.

I've been trying to think of better ways to explain this.  I'll try to
explain with an analogy - it's sort-of lame, but maybe it'll give you
a mental picture of what's happening.  To eliminate some confusion,
this analogy doesn't address LRU/MRU, nor waiting on other events like
network or disk i/o.  It only tries to explain why you don't necessarily
need 10 perl-interpreters to handle a stream of 10 concurrent requests
on a single-CPU system.

You own a fast-food restaurant.  The players involved are:

Your customers.  These represent the http requests.

Your cashiers.  These represent the perl interpreters.

Your cook.  You only have one.  This represents your CPU.

The normal flow of events is this:

A cashier gets an order from a customer.  The cashier goes and
waits until the cook is free, and then gives the order to the cook.
The cook then cooks the meal, taking 5-minutes for each meal.
The cashier waits for the meal to be ready, then takes the meal and
gives it to the customer.  The cashier then serves another customer.
The cashier/customer interaction takes a very small amount of time.

The analogy is this:

An http request (customer) arrives.  It is given to a perl
interpreter (cashier).  A perl interpreter must wait for all other
perl interpreters ahead of it to finish using the CPU (the cook).
It can't serve any other requests until it finishes this one.
When its turn arrives, the perl interpreter uses the CPU to process
the perl code.  It then finishes and gives the results over to the
http client (the customer).

Now, say in this analogy you begin the day with 10 customers in the store.
At each 5-minute interval thereafter another customer arrives.  So at time
0, there is a pool of 10 customers.  At time +5, another customer arrives.
At time +10, another customer arrives, ad infinitum.

You could hire 10 cashiers in order to handle this load.  What would
happen is that the 10 cashiers would fairly quickly get all the orders
from the first 10 customers simultaneously, and then start waiting for
the cook.  The 10 cashiers would queue up.  Cashier #1 would put in the
first order.  Cashiers 2-10 would wait their turn.  After 5-minutes,
cashier number 1 would receive the meal, deliver it to customer #1, and
then serve the next customer (#11) that just arrived at the 5-minute mark.
Cashier #1 would take customer #11's order, then queue up and wait in
line for the cook - there will be 9 other cashiers already in line, so
the wait will be long.  At the 10-minute mark, cashier #2 would receive
a meal from the cook, deliver it to customer #2, then go on and serve
the next customer (#12) that just arrived.  Cashier #2 would then go and
wait in line for the cook.  This continues on through all the cashiers
in order 1-10, then repeating, 1-10, ad infinitum.

Now even though you have 10 cashiers, most of their time is spent
waiting to put in an order to the cook.  Starting with customer #11,
all customers will wait 50-minutes for their meal.  When customer #11
comes in he/she will immediately get to place an order, but it will take
the cashier 45-minutes to wait for the cook to become free, and another
5-minutes for the meal to be cooked.  Same is true for customer #12,
and all customers from then on.

Now, the question is, could you get the same throughput with fewer
cashiers?  Say you had 2 cashiers instead.  The 10 customers are
there waiting.  The 2 cashiers take orders from customers #1 and #2.
Cashier #1 then gives the order to the cook and waits.  Cashier #2 waits
in line for the cook behind cashier #1.  At the 5-minute mark, the first
meal is done.  Cashier #1 delivers the meal to customer #1, then serves
customer #3.  Cashier #1 then goes and stands in line behind cashier #2.
At the 10-minute mark, cashier #2's meal is ready - it's delivered to
customer #2 and then customer #4 is served.  This continues on with the
cashiers trading off between serving customers.

Does the scenario with two cashiers go any more slowly than the one with
10 cashiers?  No.  When the 11th customer arrives at the 5-minute mark,
what he/she sees is that customer #3 is just now putting in an order.
There are 7 other people there waiting to put in orders.  Customer #11 will
wait 40 minutes until he/she puts in an order, then wait another 10 minutes
for the meal to arrive.  Same is true for customer #12, and all others arriving
thereafter.
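
A quick way to check this arithmetic is to simulate the lone cook
directly.  A minimal Perl sketch (not part of the original post; note
that the cashier count never enters the math, because the single cook
serializes all the work):

#!/usr/bin/perl
use strict;
my $cook = 5;    # minutes per meal
my $free = 0;    # time at which the cook is next available
for my $c (1 .. 15) {
    # 10 customers at time 0, then one more every 5 minutes
    my $arrive = $c <= 10 ? 0 : ($c - 10) * $cook;
    my $start  = $arrive > $free ? $arrive : $free;
    $free      = $start + $cook;
    printf "customer %2d waits %2d minutes\n", $c, $free - $arrive;
}

Customer #11 comes out at 50 minutes, matching both scenarios above.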

The only difference between the two scenarios is the number of cashiers,
and where the waiting is taking place.  In the first scenario,

installing modperl

2001-01-17 Thread Gustavo Vieira Goncalves Coelho Rios

Dear gentleman/madam,

This is my first post to the mod_perl mailing list.
I am trying to get mod_perl to compile on FreeBSD 4.2-STABLE, but so far
I am having some problems; here is what I get when I try to build it:

grios@etosha$ perl Makefile.PL USE_APXS=1
WITH_APXS=/usr/local/apache/bin/apxs
Will configure via APXS (apxs=/usr/local/apache/bin/apxs)
PerlDispatchHandler.disabled (enable with PERL_DISPATCH=1)
PerlChildInitHandlerenabled
PerlChildExitHandlerenabled
PerlPostReadRequestHandler..disabled (enable with
PERL_POST_READ_REQUEST=1)
PerlTransHandlerdisabled (enable with PERL_TRANS=1)
PerlHeaderParserHandler.disabled (enable with PERL_HEADER_PARSER=1)
PerlAccessHandler...disabled (enable with PERL_ACCESS=1)
PerlAuthenHandler...disabled (enable with PERL_AUTHEN=1)
PerlAuthzHandlerdisabled (enable with PERL_AUTHZ=1)
PerlTypeHandler.disabled (enable with PERL_TYPE=1)
PerlFixupHandlerdisabled (enable with PERL_FIXUP=1)
PerlHandler.enabled
PerlLogHandler..disabled (enable with PERL_LOG=1)
PerlInitHandler.disabled (enable with PERL_INIT=1)
PerlCleanupHandler..disabled (enable with PERL_CLEANUP=1)
PerlRestartHandler..disabled (enable with PERL_RESTART=1)
PerlStackedHandlers.disabled (enable with
PERL_STACKED_HANDLERS=1)
PerlMethodHandlers..disabled (enable with
PERL_METHOD_HANDLERS=1)
PerlDirectiveHandlers...disabled (enable with
PERL_DIRECTIVE_HANDLERS=1)
PerlTableApidisabled (enable with PERL_TABLE_API=1)
PerlLogApi..disabled (enable with PERL_LOG_API=1)
PerlUriApi..disabled (enable with PERL_URI_API=1)
PerlUtilApi.disabled (enable with PERL_UTIL_API=1)
PerlFileApi.disabled (enable with PERL_FILE_API=1)
PerlConnectionApi...enabled
PerlServerApi...enabled
PerlSectionsdisabled (enable with PERL_SECTIONS=1)

PerlSSI.disabled (enable with PERL_SSI=1)

Will run tests as User: 'grios' Group: 'ordinary'
Configuring mod_perl for building via APXS
 + Creating a local mod_perl source tree
 + Setting up mod_perl build environment (Makefile)
 + id: mod_perl/1.24_01
 + id: Perl/5.00503 (freebsd) [perl]
Now please type 'make' to build libperl.so
Checking CGI.pm VERSION..ok
Checking for LWP::UserAgent..failed
Can't locate LWP/UserAgent.pm in @INC (@INC contains: ./lib
/usr/libdata/perl/5.00503/mach /usr/libdata/perl/5.00503
/usr/local/lib/perl5/site_perl/5.005/i386-freebsd
/usr/local/lib/perl5/site_perl/5.005 .) at Makefile.PL line 1072.

The libwww-perl library is needed to run the test suite.
Installation of this library is recommended, but not required.   

Checking for HTML::HeadParserfailed
Can't locate HTML/HeadParser.pm in @INC (@INC contains: ./lib
/usr/libdata/perl/5.00503/mach /usr/libdata/perl/5.00503
/usr/local/lib/perl5/site_perl/5.005/i386-freebsd
/usr/local/lib/perl5/site_perl/5.005 .) at Makefile.PL line 1090.

The HTML-Parser package is needed (by libwww-perl) to run the test
suite.
Checking if your kit is complete...
Looks good
Writing Makefile for Apache
Writing Makefile for Apache::Connection
Writing Makefile for Apache::Constants
Writing Makefile for Apache::File
Writing Makefile for Apache::Leak
Writing Makefile for Apache::Log
Writing Makefile for Apache::ModuleConfig
Writing Makefile for Apache::PerlRunXS
Writing Makefile for Apache::Server
Writing Makefile for Apache::Symbol
Writing Makefile for Apache::Table
Writing Makefile for Apache::URI
Writing Makefile for Apache::Util
Writing Makefile for mod_perl
grios@etosha$ 



So, I have some questions:

Why am I getting this:

Checking for LWP::UserAgent..failed
Can't locate LWP/UserAgent.pm in @INC (@INC contains: ./lib
/usr/libdata/perl/5.00503/mach /usr/libdata/perl/5.00503
/usr/local/lib/perl5/site_perl/5.005/i386-freebsd
/usr/local/lib/perl5/site_perl/5.005 .) at Makefile.PL line 1072.

The libwww-perl library is needed to run the test suite.
Installation of this library is recommended, but not required.   

Checking for HTML::HeadParserfailed
Can't locate HTML/HeadParser.pm in @INC (@INC contains: ./lib
/usr/libdata/perl/5.00503/mach /usr/libdata/perl/5.00503
/usr/local/lib/perl5/site_perl/5.005/i386-freebsd
/usr/local/lib/perl5/site_perl/5.005 .) at Makefile.PL line 1090.


Where can I obtain these packages?



My current environment is:

Apache: Server Version: Apache/1.3.12 (Unix) PHP/4.0.4pl1
mod_fastcgi/2.2.6
OS: FreeBSD etosha 4.2-STABLE FreeBSD 4.2-STABLE #0: Fri Dec 29 03:17:20
GMT 2000 root@etosha:/usr/obj/usr/src/sys/ETOSHA i386


Thanks a lot for your time and cooperation.



Re: With high request rate, server stops responding with load zero

2001-01-17 Thread modperl

OK, just to check it: find out which file descriptor your processes
are hanging on, and then do an ls -l /proc/$PID/fd

Check a few of them and see if they are all hanging on the same
file.  Obviously substitute the proper values for the variables listed
there.

Ideally something will show up here that points to the
problem.  If a process is reading from a file, check to make sure that
the file is still available.  Network file systems can just wait on
this forever.
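
A small Perl sketch of the same check across several PIDs (assumes a
Linux-style /proc; not from the original post):

#!/usr/bin/perl
# usage: perl fdcheck.pl <pid> [<pid> ...]
use strict;
for my $pid (@ARGV) {
    opendir(my $dh, "/proc/$pid/fd") or next;
    for my $fd (sort { $a <=> $b } grep { /^\d+$/ } readdir $dh) {
        my $target = readlink "/proc/$pid/fd/$fd";
        printf "%s fd %-3s -> %s\n", $pid, $fd,
               defined $target ? $target : "?";
    }
    closedir $dh;
}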

Scott




Re: Help! --- mod_perl + DBI:mysql:connect - Error.

2001-01-17 Thread yen-ying . chen-dreger

Hi Haywood,

Thanks for your help! Yes, we compiled everything from source, as usual,
so that should not be the cause. I have looked at the mod_perl Guide for
help and cannot find a proper solution to my problem. Following your
suggestion, I shall try to debug my program.
Cheers,
Yen-Ying
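
One quick way to see how far the connect gets before the child dies (a
sketch, not from this thread; the DSN and credentials are placeholders)
is to turn on DBI tracing before the connect:

use DBI;
DBI->trace(2, "/tmp/dbi-trace.log");   # trace level 2, logged to a file
my $dbh = DBI->connect("dbi:mysql:database=test;host=localhost",
                       $user, $password)
    or die "connect failed: $DBI::errstr";

The trace log usually shows the last driver call before a segfault.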

> --
> From:  G.W. Haywood[SMTP:[EMAIL PROTECTED]]
> Reply to:   G.W. Haywood
> Sent: Tuesday, 16 January 2001 17:52
> To:   [EMAIL PROTECTED]
> Cc:   [EMAIL PROTECTED]
> Subject:  Re: Help! --- mod_perl + DBI:mysql:connect - Error.
> 
> Hi there,
> 
> On Tue, 16 Jan 2001 [EMAIL PROTECTED] wrote:
> 
> > from a mod_perl module using DBI and Apache::Registry as PerlModule i
> can
> > not connect to a mysql database. 
> 
> Have you looked at the mod_perl Guide?
> 
> http://perl.apache.org/guide
> 
> > The Browser gets an error message that the document hat no data
> > while in the logfile of apache stands "child pid 31955 exit signal
> > Segmentation fault (11)". I am sure that the connection fails.
> 
> Your Apache child is crashing.  You may need to debug that.
> See the debugging section of the Guide too.  Did you compile
> everything from source?
>  
> 73,
> Ged.
> 
> 



Re: Apache::Session::Postgres error

2001-01-17 Thread Dirk Lutzebaeck


Hi, I would just like to chime in here. I am using Apache::Session 1.53,
also with Postgres, and have the problem that sessions are not closed in
every case. I am also using Embperl 1.3.0, but maintaining the session
variable on my own. The effect is that when Apache is restarted
everything works fine, but after some time (or heavy load) some sessions
stop storing modifications. This culminates after a while in an unusable
system which needs a restart. The application code needs very careful
examination to ensure the session is closed in all cases.

Whether it is the application that is not closing the session, or a
problem in Apache::Session or in the database, is very hard to track
down. I have checked all the application code, and I am using vanilla
Apache::Session::Store::Postgres with FOR UPDATE, i.e. using locks on
the session table. I would appreciate any suggestion on how to
debug/find unclosed sessions on a high-volume site. E.g. one could work
with timeouts: when a session is not closed after some period of time,
a warning would be issued. That would help a lot in finding out who is
not closing the session.
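
A sketch of that timeout idea under mod_perl (variable names are
illustrative, not from this post): note when the session is tied, and
register a cleanup that complains and unties if the session is still
open at the end of the request.

use Apache::Session::Postgres;

# $r is the Apache request object; $id and $dbh come from the application
my %session;
my $opened = time;
tie %session, 'Apache::Session::Postgres', $id,
    { Handle => $dbh, Commit => 1 };

$r->register_cleanup(sub {
    if (tied %session) {
        warn "session $session{_session_id} still tied after ",
             time - $opened, "s - untieing\n";
        untie %session;
    }
    0;
});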

Regards,

Dirk