Re: Loading modules in Parent??

2000-12-20 Thread Doug MacEachern

On Thu, 28 Sep 2000, Bill Moseley wrote:

 Hi,
 
 I'm seeing the opposite results from pre-loading modules in the parent
 process than I would expect.  It looks like pre-loading modules ends up
 using more non-shared ("private") memory.
...
 Here's the pre-loaded module list. When running as non-pre-loaded I'm
 commenting out the Search, SWISH::Fork, and CGI->compile lines below.  That's
 the only difference. 

that's a BIG difference.

% perlbloat 'require CGI'
require CGI added  784k

% perlbloat 'require CGI; CGI->compile(":all")'
require CGI; CGI->compile(":all") added  2.0M

try without preloading CGI.pm/CGI->compile in either.

p.s. this is the perlbloat script:

use GTop ();

my $gtop = GTop->new;
my $before = $gtop->proc_mem($$)->size;

for (@ARGV) {
    if (eval "require $_") {
        eval {
            $_->import;
        };
    }
    else {
        eval $_;
        die $@ if $@;
    }
}

my $after = $gtop->proc_mem($$)->size;

printf "@ARGV added %s\n", GTop::size_string($after - $before);





Re: cgi scripts

2000-12-20 Thread G.W. Haywood

Hi there,

On Tue, 19 Dec 2000, Mike Egglestone wrote:

 ScriptAlias /cgi-bin/ /var/www/Scripts/
 ...and later down the file
 AddHandler cgi-script .cgi

Look for mention of the ScriptAlias directive in
http://perl.apache.org/guide

 Options ExecCGI FollowSymLinks

You might want to investigate the use of '+' in front of those.

 I'm thinking that the script isn't being executed by perl ... 

It's that Guide again!

 gotta hunt through those docs again

Start with the one entitled "SUPPORT".  It comes with mod_perl.
Then the Guide.  (You'll be a while, reading that:)

73,
Ged.




Re: Document contains no data

2000-12-20 Thread G.W. Haywood

Hi there,

On Tue, 19 Dec 2000, Darren Duncan wrote:

 I have been having a problem with my scripts where I 
 periodically get a Netscape 4 error saying "Document contains no 
 data" when they run under mod_perl, but not with the same script 
 under CGI.

Is this only on Netscape 4?

That message often means either bad HTML or a server crash.  Have you
looked in the error_log?  If you can't make sense of it, connect to
the server using

telnet my.host.wherever 80

and then say

GET /your/troublesome/uri HTTP/1.0

followed by two returns and look at the output.  It's instructive if
you just say "GET / HTTP/1.0" (again followed by two returns).

When something fishy happens then it's time to look in the error_log.
When I'm debugging I usually stop the server, rename the error_log to
something like error_log.2000.12.20, then restart it.  That way apache
creates a new, empty log and I can follow everything that happens from
the moment I start the server without wading through reams of output.

Wouldn't you be better off using the request object methods to send
your headers?  The Eagle Book will tell you all about it.  For debugging,
if there's a server on port 80 listening to real-world requests and I
have no other machine to play with, I'd want to set up another Apache
for debugging which listens on another port, above 1024.  That way I
can crash the server (almost) all I want without hurting real users.
It's all in the Guide.  Sorry, have to drive 1000 miles North now...
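A minimal sketch of such a second instance, under the assumption of a stock Apache 1.3 layout (the paths and port here are illustrative, not taken from the thread): a separate config file that differs from the live one mainly in its port and log locations, started explicitly with `httpd -f`.

```apache
# debug-httpd.conf -- sketch only; paths and port are hypothetical
Port 8080
User nobody
Group nobody
ServerName localhost
PidFile     /usr/local/apache/logs/debug-httpd.pid
ErrorLog    /usr/local/apache/logs/debug-error_log
TransferLog /usr/local/apache/logs/debug-access_log
```

Started with `httpd -f /usr/local/apache/conf/debug-httpd.conf`, this server can be crashed and restarted freely while the real one keeps serving port 80, and its fresh, separate error_log is exactly the uncluttered log described above.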

73,
Ged.




Re: Apache::LogSTDERR

2000-12-20 Thread Kees Vonk 7249 24549

Has anyone found this module yet?


Kees



mod_perl training

2000-12-20 Thread Gunther Birznieks

I got swamped with an unexpected project last Friday, and coupled with XMas 
stuff, I am probably not going to be able to give any more input until 
after XMas (next week).

Anyway, I know this topic has been very quiet since last week. But I just 
wanted to say that I don't want to let it die (for those that expressed 
interest), and I am definitely still interested even if I am going to 
shutup for the next week (which many of you may be happy about). :)




POST with PERL

2000-12-20 Thread czwartag

Hi!

I have a little problem. I wrote a perl script to manage the guestbook-like
section of my page. The script works from the shell, but when I try to
post data through an HTML form I get an error saying that the POST method is
not supported by this document. It's in a directory with the options ExecCGI and
FollowSymLinks. There is also a script handler for .cgi. What's the
matter?
Thanks in advance

IronHandmailto:[EMAIL PROTECTED]




Re: [crit] (98)Address already in use: make_sock: could not bind to port 8529

2000-12-20 Thread G.W. Haywood

Hi there,

On Wed, 20 Dec 2000 [EMAIL PROTECTED] wrote:

 I have the same problem,
 
There are so many things to look at and so many things you haven't
told me I hardly know where to start!  What are you doing that gives
this error?  Starting 'make test'?  Are you the superuser?  What port
are you telling Apache to bind to in your httpd.conf?  Can you start
Apache without using a script?  Have you tried 'apachectl configtest'?
Is there anything useful in your error_log?

Have a look in SUPPORT and the Guide.  SUPPORT tells you about the
sort of information you can give to help people help you.  The Guide
gives you loads of useful configuration information.  I'm just about
to leave for a Christmas break so I won't be able to get at my email
for a couple of days.  I'll have a look for your messages when I can
but in the meantime post as much information as concisely as you can
to the List and I'm sure you'll get a helpful response.

73,
Ged.




Re: Apache::LogSTDERR

2000-12-20 Thread Stas Bekman

 Has anyone found this module yet?

Did you read my reply to your original question? with original post link
from Doug?

It doesn't exist in the public domain yet.

_
Stas Bekman  JAm_pH --   Just Another mod_perl Hacker
http://stason.org/   mod_perl Guide  http://perl.apache.org/guide 
mailto:[EMAIL PROTECTED]   http://apachetoday.com http://logilune.com/
http://singlesheaven.com http://perl.apache.org http://perlmonth.com/  





Re: mod_perl training

2000-12-20 Thread Randal L. Schwartz

 "Gunther" == Gunther Birznieks [EMAIL PROTECTED] writes:

Gunther Anyway, I know this topic has been very quiet since last
Gunther week. But I just wanted to say that I don't want to let it
Gunther die (for those that expressed interest), and I am definitely
Gunther still interested even if I am going to shutup for the next
Gunther week (which many of you may be happy about). :)

We at Stonehenge have also taken very seriously all the comments
spoken here so far, and have been having direction-setting
conversations amongst our development team, based in part on what
I've heard recently.  We can't make any announcements just yet, but I
hope you'll be pleased when we do.

-- 
Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095
[EMAIL PROTECTED] URL:http://www.stonehenge.com/merlyn/
Perl/Unix/security consulting, Technical writing, Comedy, etc. etc.
See PerlTraining.Stonehenge.com for onsite and open-enrollment Perl training!



RE: mod_perl training

2000-12-20 Thread Geoffrey Young



 -Original Message-
 From: Gunther Birznieks [mailto:[EMAIL PROTECTED]]
 Sent: Wednesday, December 20, 2000 5:28 AM
 To: mod_perl list
 Subject: mod_perl training
 
 Anyway, I know this topic has been very quiet since last 
 week. But I just 
 wanted to say that I don't want to let it die (for those that 
 expressed 
 interest), and I am definitely still interested even if I am going to 
 shutup for the next week (which many of you may be happy about). :)

not to suppress an important topic, but do you think we could impress on the
interested parties to move the discussion to [EMAIL PROTECTED] and
away from the main list?

--Geoff

 



Re: Apache::LogSTDERR

2000-12-20 Thread Kees Vonk 7249 24549

Stas,

I am sorry I didn't see the 'it has not been released yet' 
bit of your message. I read Doug's note, which says:

 it's in our cvs tree here at CP, not on CPAN.  it shouldn't
 be a problem to release this one to CPAN, I'll check.

but I didn't realise that CP wasn't public domain, I just 
couldn't find it anywhere.

My apologies once more,


Kees Vonk



Re: Apache::LogSTDERR

2000-12-20 Thread Stas Bekman

On Wed, 20 Dec 2000, Kees Vonk 7249 24549 wrote:

 Stas,
 
 I am sorry I didn't see the 'it has not been released yet' 
 bit of your message. I read Doug's note, which says:
 
  it's in our cvs tree here at CP, not on CPAN.  it shouldn't
  be a problem to release this one to CPAN, I'll check.
 
 but I didn't realise that CP wasn't public domain, I just 
 couldn't find it anywhere.

In the mod_perl jargon CP stands for Critical Path, Inc.
(www.criticalpath.net)

 My apologies once more,

No prob :)


_
Stas Bekman  JAm_pH --   Just Another mod_perl Hacker
http://stason.org/   mod_perl Guide  http://perl.apache.org/guide 
mailto:[EMAIL PROTECTED]   http://apachetoday.com http://logilune.com/
http://singlesheaven.com http://perl.apache.org http://perlmonth.com/  





Re: Apache::LogSTDERR

2000-12-20 Thread brian moseley


no worries. i'll go put it on cpan.

On Wed, 20 Dec 2000, Kees Vonk 7249 24549 wrote:

 Stas,
 
 I am sorry I didn't see the 'it has not been released yet' 
 bit of your message. I read Doug's note, which says:
 
  it's in our cvs tree here at CP, not on CPAN.  it shouldn't
  be a problem to release this one to CPAN, I'll check.
 
 but I didn't realise that CP wasn't public domain, I just 
 couldn't find it anywhere.
 
 My apologies once more,
 
 
 Kees Vonk
 




[OT]Problems with `use locale`

2000-12-20 Thread martin langhoff

hi,

sorry for being so OT. The problem is showing up in a mod_perl app, but
it's certainly not related at all. 

Dealing with Spanish as we are, we always have problems with regexp,
uc() and lc(). I've found that on my dev box, just adding `use locale`
made at least uc() and lc() work all right (meaning ñ got changed into Ñ
properly).

Now I've built a customer's machine with a newer distro and my uc() is
broken where it was working. 

The devbox has RHLinux 6.1 and perl 5.005_03-6
The customer's box has RHLinux 6.2 and perl 5.005_03-10

And the output of `locale` and `locale -a` is identical on both boxes.
Unluckily I'm not about to downgrade the box to 6.1 ... it's a complex
Compaq beast that doesn't like 6.1 ...

Have you seen anything similar? have any pointers? flames? rants? 

Thanks!



martin



sorting subroutine and global variables

2000-12-20 Thread Alexander Farber (EED)

Hi,

http://perl.apache.org/guide/perl.html#my_Scoped_Variable_in_Nested_S
advises not to use external "my $variables"
from subroutines. I have
the following subroutine in my CGI-script, which I would like to keep
mod_perl-kosher, just in case:


# Sort %hoh-values by the 'sort'-parameter or by default ("MHO")   #


sub mysort
{
    my $param = $query->param('sort') || 'MHO'; # XXX global $query,
                                                # not mod_perl clean?
    return $a->{$param} cmp $b->{$param};
}
This subroutine is called later as:

for my $href (sort mysort values %$hohref)
{
...
}

Is using the "outside" $query dangerous here and how would you handle it?

Thank you
Alex

PS: Is there something to be aware of, when using the new "our" keyword?



Re: recommendation for image server with modperl

2000-12-20 Thread Justin

I did try thttpd.

As I understood it, and I did send an email to acmesoftware to ask
but got no reply, thttpd does not handle keep-alive, and indeed
users complained that images "came in slowly". I also observed this.
I'm happy to be corrected, maybe I picked up the wrong version or
did not study the source carefully enough. I could not find any
config variables relating to keep-alive either..

I found some benchmarks which showed mathopd and thttpd similar in
speed. Only the linux kernel httpd can do better than either.. but
request rates of 1000+ per second are of academic interest only..

-Justin

On Tue, Dec 19, 2000 at 08:37:23PM -0800, Perrin Harkins wrote:
 On Tue, 19 Dec 2000, Justin wrote:
  I've been catching up on the modperl list archives, and would 
  just like to recommend "mathopd" as an image web server.
 
 I think you'll find thttpd (http://www.acme.com/software/thttpd/) faster
 and somewhat better documented.  However, I'd like to point out that we've
 had no problems using Apache as an image server.  We need the ability to
 serve HTTPS images, which mathopd and thttpd can't do, but more than that
 we've found the performance to be more than good enough with a stripped
 down Apache server.
 
  After having difficulties with the sheer number of front end apache
  processes necessary to handle 10 backend modperls, (difficulties: high
  load average and load spikes, kernel time wasted scheduling lots of
  httpds, higher than expected latency on simple requests)
 
 Load averages are tricky beasts.  The load can get high on our machines
 when many processes are running, but it doesn't seem to mean much: almost
 no CPU is being used, the network is not saturated, the disk is quiet,
 response is zippy, etc.  This leads me to think that these load numbers
 are not significant.
 
 Select-based servers are very cool though, and a good option for people
 who don't need SSL and want to squeeze great performance out of budget
 hardware.
 
 - Perrin

-- 
Justin Beech  http://www.dslreports.com
Phone:212-269-7052 x252 FAX inbox: 212-937-3800
mailto:[EMAIL PROTECTED] --- http://dslreports.com/contacts



RE: [OT]Problems with `use locale`

2000-12-20 Thread Khachaturov, Vassilii

Did you try perlfaq?  It has a couple of questions on locales.
To start off, make sure your Perl has locale support in it: open
your perl's Config.pm
(it's /usr/local/lib/perl5/5.6.0/sun4-solaris/Config.pm on my system,
do 'locate Config.pm' to find one on yours).
Lines with "locale" there should have "define" in them, e.g.
d_setlocale='define'.

(Don't just change it if it's undef; rather, have your Perl reconfigured and
rebuilt appropriately.)

HTH,
Vassilii

-Original Message-
From: martin langhoff [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 20, 2000 12:39 PM
To: [EMAIL PROTECTED]
Subject: [OT]Problems with `use locale`
Dealing with Spanish as we are, we always have problems with regexp,
uc() and lc(). I've found that on my dev box, just adding `use locale`
at least uc() and lc() would work allright (meaning ñ got changed into Ñ
properly).

Now I've built a customer's machine with a newer distro and my uc()
is
broken where it was working. 

The devbox has RHLinux 6.1 and perl 5.005_03-6
The customer's box has RHLinux 6.2 and perl 5.005_03-10



Re: sorting subroutine and global variables

2000-12-20 Thread Stas Bekman

On Wed, 20 Dec 2000, Alexander Farber (EED) wrote:

 Hi,
 
 http://perl.apache.org/guide/perl.html#my_Scoped_Variable_in_Nested_S
 advises not to use external "my $variables"
 from subroutines. I have
 the following subroutine in my CGI-script, which I would like to keep
 mod_perl-kosher, just in case:
 
 
 # Sort %hoh-values by the 'sort'-parameter or by default ("MHO")   #
 
 
 sub mysort
 {
 my $param = $query->param ('sort') || 'MHO'; # XXX global $query,
# not mod_perl clean?
 return $a->{$param} cmp $b->{$param};
 }
 
 This subroutine is called later as:
 
 for my $href (sort mysort values %$hohref)
 {
 ...
 }

Your code is better written as:

  my $param = $query->param('sort') || 'MHO';
  for my $href (sort {$a->{$param} cmp $b->{$param}} values %$hohref) { }

why waste resources...

 Is using the "outside" $query dangerous here and how would you handle it?

Yes, inside the script. Read again
http://perl.apache.org/guide/perl.html#my_Scoped_Variable_in_Nested_S

 PS: Is there something to be aware of, when using the new "our" keyword?

our == use vars, which declares global variables.


_
Stas Bekman  JAm_pH --   Just Another mod_perl Hacker
http://stason.org/   mod_perl Guide  http://perl.apache.org/guide 
mailto:[EMAIL PROTECTED]   http://apachetoday.com http://logilune.com/
http://singlesheaven.com http://perl.apache.org http://perlmonth.com/  





Freeing cyclic references

2000-12-20 Thread Radovan Semancik

Hello!

I have perl objects with cyclic references to each other in a mod_perl
environment. I know that these objects will never be freed unless I
break the reference cycle. But how do I do it transparently for the user
of the object?

Is there a way in perl to make a reference to an object that is not
counted in the reference count? I thought of symbolic references, but I
suppose they are far less efficient than 'normal' references. Am I
right?

-- 
Ing. Radovan Semancik ([EMAIL PROTECTED])
 System Engineer, Business Global Systems a.s.
   http://storm.alert.sk



RE: [crit] (98)Address already in use: make_sock: could not bind to port 8529

2000-12-20 Thread Phil_Stubbington

Hi Ged,

I'm running "make test" for mod_perl, logged in as root.

error_log reads:-

"[crit] (98) Address already in use: make_sock: could not bind to port 8529"

Port in httpd.conf is 8529

Haven't tried "apachectl configtest" - will when I get to the machine
tonight.

Apologies for contacting you directly - I've been searching all over the
place and yours was the only article that was close to suggesting a
solution.

Thanks.

regards,
phil

-Original Message-
From: G.W. Haywood [mailto:[EMAIL PROTECTED]]
Sent: 20 December 2000 11:12
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: Re: [crit] (98)Address already in use: make_sock: could not
bind to port 8529


Hi there,

On Wed, 20 Dec 2000 [EMAIL PROTECTED] wrote:

 I have the same problem,
 
There are so many things to look at and so many things you haven't
told me I hardly know where to start!  What are you doing that gives
this error?  Starting 'make test'?  Are you the superuser?  What port
are you telling Apache to bind to in your httpd.conf?  Can you start
Apache without using a script?  Have you tried 'apachectl configtest'?
Is there anything useful in your error_log?

Have a look in SUPPORT and the Guide.  SUPPORT tells you about the
sort of information you can give to help people help you.  The Guide
gives you loads of useful configuration information.  I'm just about
to leave for a Christmas break so I won't be able to get at my email
for a couple of days.  I'll have a look for your messages when I can
but in the meantime post as much information as concisely as you can
to the List and I'm sure you'll get a helpful response.

73,
Ged.




slight mod_perl problem

2000-12-20 Thread Jamie Krasnoo

Ok, it seems that my startup.pl is being run twice on server start.

Startup init running
startup.pl - loading templates into memory
--- Loaded template file user_reg.tmpl
Startup init running
startup.pl - loading templates into memory
--- Loaded template file user_reg.tmpl
[Wed Dec 20 15:18:21 2000] [notice] Apache/1.3.14 (Unix) mod_perl/1.24_01\
configured -- resuming normal operations

Anyone have an explanation as to why this is happening?  I have no hair
left from trying to figure this one out.


Thanks for your help,

Jamie





RE: [crit] (98)Address already in use: make_sock: could not bind to port 8529

2000-12-20 Thread Stas Bekman

On Wed, 20 Dec 2000 [EMAIL PROTECTED] wrote:

 Hi Ged,
 
 I'm running "make test" for mod_perl, logged in as root.
 
 error_log reads:-
 
 "[crit] (98) Address already in use: make_sock: could not bind to port 8529"
 
 Port in httpd.conf is 8529
 
 Haven't tried "apachectl configtest" - will when I get to the machine
 tonight.
 
 Apologies for contacting you directly - I've been searching all over the
 place and yours was the only article that was close to suggesting a
 solution.

% make kill_httpd

See
http://perl.apache.org/guide/install.html#Built_Server_Testing_make_test_


_
Stas Bekman  JAm_pH --   Just Another mod_perl Hacker
http://stason.org/   mod_perl Guide  http://perl.apache.org/guide 
mailto:[EMAIL PROTECTED]   http://apachetoday.com http://logilune.com/
http://singlesheaven.com http://perl.apache.org http://perlmonth.com/  





Re: Freeing cyclic references

2000-12-20 Thread Darren Duncan

On Wed, 20 Dec 2000, Radovan Semancik wrote:
 I have perl objects with cyclic references to each other in a mod_perl
 environment. I know that these objects will never be freed unless I
 break the reference cycle. But how do I do it transparently for the user
 of the object?

What you need to do is to have a "starting" node in your
reference cycle that doesn't have anything pointing to it.  Such as the
"head" pointer of a linked list.  If the DESTROY method of this object
were to explicitly call some clean-up method of yours that is in the
regular node objects (clearing internode references), then the whole
process should be transparent to the user because all they have to do is
remove any references to the starting node, and they all vanish.  If your
entire set of linked objects is encapsulated in another one, then that
would effectively be your "start".

// Darren Duncan




Re: slight mod_perl problem

2000-12-20 Thread Stas Bekman

On Wed, 20 Dec 2000, Jamie Krasnoo wrote:

 Ok, it seems that my startup.pl is being run twice on server start.
 
 Startup init running
 startup.pl - loading templates into memory
 --- Loaded template file user_reg.tmpl
 Startup init running
 startup.pl - loading templates into memory
 --- Loaded template file user_reg.tmpl
 [Wed Dec 20 15:18:21 2000] [notice] Apache/1.3.14 (Unix) mod_perl/1.24_01\
 configured -- resuming normal operations
 
 Anyone have an explanation as to why this is happening, I have no hair
 left due to trying to figure this one out.

See
http://perl.apache.org/guide/config.html#Apache_Restarts_Twice_On_Start

Apache restarts twice indeed, but it shouldn't rerun the startup.pl since
it's already require()d and in %INC. 

 
 
 Thanks for your help,
 
 Jamie
 
 
 



_
Stas Bekman  JAm_pH --   Just Another mod_perl Hacker
http://stason.org/   mod_perl Guide  http://perl.apache.org/guide 
mailto:[EMAIL PROTECTED]   http://apachetoday.com http://logilune.com/
http://singlesheaven.com http://perl.apache.org http://perlmonth.com/  





Re: slight mod_perl problem

2000-12-20 Thread Darren Duncan

On Wed, 20 Dec 2000, Jamie Krasnoo wrote:
 Ok, it seems that my startup.pl is being run twice on server start.

Since configuration scripts can include other scripts, you probably have
more than one script that includes startup.pl, or more than one
script that includes something that includes startup.pl, or some such.
This is a commonly encountered situation in programming.

What you need to do is scan your other relevant config files to look for
multiple includes of another file and remove one.  Or, if this isn't
feasible, then there may be some conditional that you can use to check if
you already ran a script, and then not do it again.

Of course, there could be a different reason that this is happening...

// Darren Duncan






RE: slight mod_perl problem

2000-12-20 Thread Douglas Wilson

Would this be the reason?
http://perl.apache.org/guide/config.html#Apache_Restarts_Twice_On_Start

 -Original Message-
 From: Jamie Krasnoo [mailto:[EMAIL PROTECTED]]
 Sent: Wednesday, December 20, 2000 3:21 PM
 To: [EMAIL PROTECTED]
 Subject: slight mod_perl problem


 Ok, it seems that my startup.pl is being run twice on server start.

 Startup init running
 startup.pl - loading templates into memory
 --- Loaded template file user_reg.tmpl
 Startup init running
 startup.pl - loading templates into memory
 --- Loaded template file user_reg.tmpl
 [Wed Dec 20 15:18:21 2000] [notice] Apache/1.3.14 (Unix) mod_perl/1.24_01\
 configured -- resuming normal operations

 Anyone have an explanation as to why this is happening, I have no hair
 left due to trying to figure this one out.


 Thanks for your help,

 Jamie







Re: slight mod_perl problem

2000-12-20 Thread Jamie Krasnoo

startup.pl does not get repeated on a restart. However it will when
started with ./apachectl start. I have never encountered this with Apache
1.3.12 or 1.3.13.

Jamie


On Thu, 21 Dec 2000, Stas Bekman wrote:

 On Wed, 20 Dec 2000, Jamie Krasnoo wrote:
 
  Ok, it seems that my startup.pl is being run twice on server start.
  
  Startup init running
  startup.pl - loading templates into memory
  --- Loaded template file user_reg.tmpl
  Startup init running
  startup.pl - loading templates into memory
  --- Loaded template file user_reg.tmpl
  [Wed Dec 20 15:18:21 2000] [notice] Apache/1.3.14 (Unix) mod_perl/1.24_01\
  configured -- resuming normal operations
  
  Anyone have an explanation as to why this is happening, I have no hair
  left due to trying to figure this one out.
 
 See
 http://perl.apache.org/guide/config.html#Apache_Restarts_Twice_On_Start
 
 Apache restarts twice indeed, but it shouldn't rerun the startup.pl since
 it's already require()d and in %INC. 
 
  
  
  Thanks for your help,
  
  Jamie
  
  
  
 
 
 
 _
 Stas Bekman  JAm_pH --   Just Another mod_perl Hacker
 http://stason.org/   mod_perl Guide  http://perl.apache.org/guide 
 mailto:[EMAIL PROTECTED]   http://apachetoday.com http://logilune.com/
 http://singlesheaven.com http://perl.apache.org http://perlmonth.com/  
 
 




Re: slight mod_perl problem

2000-12-20 Thread Stas Bekman

 startup.pl does not get repeated on a restart. However it will when
 started with ./apachectl start. I have never encountered this with Apache
 1.3.12 or 13.

I've just tested it -- it's not.

The startup.pl file and similar files loaded via PerlModule or
PerlRequire are compiled only once, because once the module is
compiled it enters the special %INC hash. When Apache
restarts, Perl checks whether the module or script in question is
already registered in %INC and won't try to compile it again.

 
 Jamie
 
 
 On Thu, 21 Dec 2000, Stas Bekman wrote:
 
  On Wed, 20 Dec 2000, Jamie Krasnoo wrote:
  
   Ok, it seems that my startup.pl is being run twice on server start.
   
   Startup init running
   startup.pl - loading templates into memory
   --- Loaded template file user_reg.tmpl
   Startup init running
   startup.pl - loading templates into memory
   --- Loaded template file user_reg.tmpl
   [Wed Dec 20 15:18:21 2000] [notice] Apache/1.3.14 (Unix) mod_perl/1.24_01\
   configured -- resuming normal operations
   
   Anyone have an explanation as to why this is happening, I have no hair
   left due to trying to figure this one out.
  
  See
  http://perl.apache.org/guide/config.html#Apache_Restarts_Twice_On_Start
  
  Apache restarts twice indeed, but it shouldn't rerun the startup.pl since
  it's already require()d and in %INC. 
  
   
   
   Thanks for your help,
   
   Jamie
   
   
   
  
  
  
  _
  Stas Bekman  JAm_pH --   Just Another mod_perl Hacker
  http://stason.org/   mod_perl Guide  http://perl.apache.org/guide 
  mailto:[EMAIL PROTECTED]   http://apachetoday.com http://logilune.com/
  http://singlesheaven.com http://perl.apache.org http://perlmonth.com/  
  
  
 
 



_
Stas Bekman  JAm_pH --   Just Another mod_perl Hacker
http://stason.org/   mod_perl Guide  http://perl.apache.org/guide 
mailto:[EMAIL PROTECTED]   http://apachetoday.com http://logilune.com/
http://singlesheaven.com http://perl.apache.org http://perlmonth.com/  





RE: [OT]Problems with `use locale`

2000-12-20 Thread Enrique I.Rodriguez

From: martin langhoff [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 20, 2000 12:39 PM
To: [EMAIL PROTECTED]
Subject: [OT]Problems with `use locale`
   Dealing with Spanish as we are, we always have problems with regexp,
uc() and lc(). I've found that on my dev box, just adding `use locale`
at least uc() and lc() would work allright (meaning ñ got changed into Ñ
properly).

   Now I've built a customer's machine with a newer distro and my uc()
is
broken where it was working. 

Test your LC's (man's for locale(1) and locale(7)).
---
Enrique I.Rodriguez - http://club.idecnet.com/~esoft
Las Palmas de Gran Canaria - Canary Islands
Spain



experience on modperl-killing vacuum bots

2000-12-20 Thread Justin

Hi again,

Tracing down periods of unusual modperl overload I've
found it is usually caused by someone using an aggressive
site mirror tool of some kind.

The Stonehenge Throttle module (a lifesaver) was useful
for catching the really evil ones that masquerade as a real
browser, although the version I grabbed did need tweaking: when
you get hit really hard, deciding that yes, it is that spider
again involved a long read loop over a rapidly growing
fingerprint of doom, to the point where identifying the same
evil spider was taking quite a long time per hit! (Some real
nasty ones can hit you with 1000s of requests per minute!)

Also, sleeping to delay the reader as it reached the
soft limit was bad news for modperl.

So I changed it to be more brutal about number of requests
per time frame, and bytes read per time frame, and also
black-list the md5 of the IP/useragent combination for
longer when that does happen.  Matching on the IP/useragent
combo rather than just the IP is necessary to avoid blocking
big proxies on one IP, which are in use at some large companies
and some telco ISPs.

In filtering error_logs over time, I've assembled a list
of nastys that have triggered the throttle repeatedly.

The trouble is, the throttle can take some time to 
wake up which can still floor your server for very
short periods..
So I also simply outright ban these user agents:

(EmailSiphon)|(LinkWalker)|(WebCapture)|(w3mir)|
(WebZIP)|(Teleport Pro)|(PortalBSpider)|(Extractor)|
(Offline Explorer)|(WebCopier)|(NetAttache)|(iSiloWeb)|
(eCatch)|(ecila)|(WebStripper)|(Oxxbot)|(MuscatFerret)|
(AVSearch)|(MSIECrawler)|(SuperBot 2.4)

Nasty little collection huh..

MSIECrawler is particularly annoying. I think that is
when somebody uses one of the bill gates IE5 "ideas":
save for offline view, or something.
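One way to ban such agents outright in Apache 1.3, as a sketch (mod_setenvif plus mod_access, with the pattern list abridged from the one above; extend it to taste):

```apache
# Tag known vacuum bots by User-Agent, then refuse them everywhere.
BrowserMatchNoCase "EmailSiphon|WebZIP|Teleport Pro|WebStripper|MSIECrawler" bad_bot
<Location />
    Order Allow,Deny
    Allow from all
    Deny from env=bad_bot
</Location>
```

Rejecting these at the front end keeps them from ever occupying a mod_perl backend, which is the whole point: the throttle then only has to deal with bots that lie about their User-Agent.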

Anyway.. hope this is helpful next time your modperl
server gets so busy you have to wait 10 seconds just to
get a server-status URL to return.

This also made me think that perhaps it would be nice
to design a setup that reserved 1 or 2 modperl processes
for serving (say) the home page .. that way, when the site
gets jammed up at least new visitors get a reasonably 
fast home page to look at (perhaps including an alert
warning against slow response lower down..).. that is
better than them coming in from a news article or search
engine, and getting no response at all.

It would also be nice for mod_proxy to have a better
way of controlling the timeout on fetching from the backend,
and the page to show in case a timeout occurs. Has anyone
done something here? Then after 10 seconds (say) mod_proxy
could show a pretty page explaining that due to the awesome
success of your product/service, the website is busy and
please try again very soon :-) [we should be so lucky].
At the moment what happens under load is that mod_proxy seems
to queue the request up (via the tcp listen queue) .. the
user might give up and press stop or reload (mod_proxy does
not seem to know this) and thus queue up another request via
another front end, and pretty soon there is a 10 second
page backlog for everyone and loads of useless requests
start to fill ..

-Justin



Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2000-12-20 Thread Gunther Birznieks

FYI --

Sam just posted this to the speedycgi list just now.

X-Authentication-Warning: www.newlug.org: majordom set sender to 
[EMAIL PROTECTED] using -f
To: [EMAIL PROTECTED]
Subject: [speedycgi] Speedycgi scales better than mod_perl with scripts 
that contain un-shared memory
Date: Wed, 20 Dec 2000 20:18:37 -0800
From: Sam Horrocks [EMAIL PROTECTED]
Sender: [EMAIL PROTECTED]
Reply-To: [EMAIL PROTECTED]

Just a point in speedy's favor, for anyone interested in performance tuning
and scalability.

A lot of mod_perl performance tuning involves trying to keep from creating
"un-shared" memory - that is memory that a script uses while handling
a request that is private to that script.  All perl scripts use some
amount of un-shared memory - anything derived from user-input to the
script via queries or posts for example has to be un-shared because it
is unique to that run of that script.

You can read all about mod_perl shared memory issues at:

 http://perl.apache.org/guide/performance.html#Sharing_Memory
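The usual countermeasure on the mod_perl side is to preload shared code in the Apache parent via a startup file, so the compiled modules sit in pages shared copy-on-write by every child. A minimal sketch — the file path and module choices are illustrative, and (as the perlbloat numbers earlier in this thread show) you should only preload what every child actually uses:

```perl
# startup.pl -- pulled into the Apache *parent* process with:
#   PerlRequire /usr/local/apache/conf/startup.pl
use strict;

# Modules compiled here are shared copy-on-write with every child httpd.
use CGI ();
CGI->compile(':all');   # precompile CGI.pm's autoloaded methods up front

1;   # a PerlRequire'd file must return true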

The underlying problem in mod_perl is that apache likes to spread out
web requests to as many httpd's, and therefore as many mod_perl interpreters,
as possible using an LRU selection process for picking httpd's.  For
static web-pages where there is almost zero un-shared memory, the selection
process doesn't matter much.  But when you load in a perl script with
un-shared memory, it can really bog down the server.

In SpeedyCGI's case, all perl memory is un-shared because there's no
parent to pre-load any of the perl code into memory.  It could benefit
somewhat from reducing this amount of un-shared memory if it had such
a feature, but the fact that SpeedyCGI chooses backends using an MRU
selection process means that it is much less prone to problems that
un-shared memory can cause.

I wanted to see how this played out in real benchmarks, so I wrote the
following test script that uses un-shared memory:

use CGI;
$x = 'x' x 5;   # Use some un-shared memory (*not* a memory leak)
my $cgi = CGI->new();
print $cgi->header();
print "Hello ";
print "World";

I then ran ab to benchmark how well mod_speedycgi did versus mod_perl
on this script.  When using no concurrency ("ab -c 1 -n 1")
mod_speedycgi and mod_perl come out about the same.  However, by
increasing the concurrency level, I found that mod_perl performance drops
off drastically, while mod_speedycgi does not.  In my case at about level
100, the rps number drops by 50% and the system starts paging to disk
while using mod_perl, whereas the mod_speedycgi numbers stay at about
the same level.

The problem is that at a high concurrency level, mod_perl is using lots
and lots of different perl-interpreters to handle the requests, each
with its own un-shared memory.  It's doing this due to its LRU design.
But with SpeedyCGI's MRU design, only a few speedy_backends are being used
because as much as possible it tries to use the same interpreter over and
over and not spread out the requests to lots of different interpreters.
Mod_perl is using lots of perl-interpreters, while speedycgi is only using
a few.  mod_perl is requiring that lots of interpreters be in memory in
order to handle the requests, whereas speedy only requires a small number
of interpreters to be in memory.  And this is where the paging comes in -
at a high enough concurrency level, mod_perl starts using lots of memory
to hold all of those interpreters, eventually running out of real memory
and at that point it has to start paging.  And when the paging starts,
the performance really nose-dives.

With SpeedyCGI, at the same concurrency level, the total memory
requirements for all the interpreters are much much smaller.  Eventually
under a large enough load and with enough un-shared memory, SpeedyCGI
would probably have to start paging too.  But due to its design the point
at which SpeedyCGI will start doing this is at a much higher level than
with mod_perl.
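The LRU-versus-MRU effect described above is easy to see in a toy model (my illustration, not SpeedyCGI's or mod_perl's actual scheduler): dispatch requests to a pool of idle workers, taking either the longest-idle worker (LRU, roughly what Apache's accept behaviour does) or the most-recently-freed one (MRU, what SpeedyCGI does), and count how many distinct workers get touched.

```python
def distinct_workers(policy, num_workers=10, concurrency=3, requests=100):
    """Count distinct workers touched under a dispatch policy.

    policy: "lru" takes the worker that has been idle longest,
            "mru" takes the worker that finished most recently.
    At most `concurrency` workers are busy at any one time.
    """
    idle = list(range(num_workers))  # ordered oldest-idle first
    busy = []
    used = set()
    for _ in range(requests):
        worker = idle.pop(0) if policy == "lru" else idle.pop()
        used.add(worker)
        busy.append(worker)
        if len(busy) >= concurrency:    # the oldest busy worker finishes
            idle.append(busy.pop(0))    # ...and becomes the newest-idle
    return len(used)

# LRU cycles through the whole pool; MRU reuses only `concurrency` workers.
print(distinct_workers("lru"))  # 10
print(distinct_workers("mru"))  # 3
```

In this toy model MRU keeps the working set pinned at the concurrency level, which is exactly the memory argument made above. The real systems are messier, but the direction of the effect is the same.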

__
Gunther Birznieks ([EMAIL PROTECTED])
eXtropia - The Web Technology Company
http://www.extropia.com/




Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2000-12-20 Thread Ken Williams

Well then, why doesn't somebody just make an Apache directive to control how
hits are divvied out to the children?  Something like 

  NextChild most-recent
  NextChild least-recent
  NextChild (blah...)

but more well-considered in name.  Not sure whether a config directive
would do it, or whether it would have to be a startup command-line
switch.  Or maybe a directive that can only happen in a startup config
file, not a .htaccess file.


[EMAIL PROTECTED] (Gunther Birznieks) wrote:
[Sam Horrocks's post quoted in full -- see the forwarded message above]



  ------
  Ken Williams Last Bastion of Euclidity
  [EMAIL PROTECTED]The Math Forum