PerlInitHandler w/o PerlPostReadRequestHandler.

2001-10-28 Thread Steve Piner


I came across an unexpected feature of PerlInitHandler today.

In the distant past, for whatever reason, I configured mod_perl to have
PerlInitHandler, but not PerlPostReadRequestHandler.

Today I tried to hook a handler to PerlPostReadRequestHandler, and got
an error on start up, saying that I hadn't compiled it in.

Rather than recompile Apache, I dug out the Eagle book to find another
suitable handler. I discovered PerlInitHandler.

The Eagle says that PerlInitHandler is an alias for
PerlPostReadRequestHandler when used at the 'top-level' of a
configuration file.

Great, I thought. I'll try it.

Apache started quite happily.

But my handler wasn't being called.

Hmm, I thought. So I had a closer read of the PerlInitHandler section,
and noted it said *alias*.

To cut a long story short, I recompiled Apache, adding the necessary
'PERL_POST_READ_REQUEST=1'. It worked perfectly the first time.


My question is this: should PerlInitHandler have given me an error
message?


Steve

-- 
Steve Piner
Web Applications Developer
Marketview Limited
http://www.marketview.co.nz



Re: When to use 'use' for accessing modules?

2001-10-23 Thread Steve Piner



Perrin Harkins wrote:

 Chris Allen wrote:
[...]
  Is $ENV{foo}='bar'; in startup.pl equivalent to PerlSetEnv foo bar
  in httpd.conf?
 
 Yes.

Are you sure? I experimented a few months ago, and found that
$ENV{foo}='bar'; would only last in each child until the first request
of the child completed.

Steve

-- 
Steve Piner
Web Applications Developer
Marketview Limited
http://www.marketview.co.nz



Re: mod_info.c and others via libexec?

2001-10-03 Thread Steve Cotton

Sounds like you did a static compile; read the docs regarding the DSO
mechanism. Also, Apache 1.3.20 and mod_perl 1.26 are available.

Steve.

- Original Message -
From: El Capitan [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, October 03, 2001 4:03 PM
Subject: mod_info.c and others via libexec?


 Hi folks,
 I'm new to the list as well as mod_perl and have just recently loaded
 Apache/mod_perl (Server Version: Apache/1.3.19 (Unix) mod_perl/1.25) and
 can't find any of the modules such as mod_info.c and others which are
 supposed to be in ../libexec.  In fact ../libexec is empty!  Did I do
 something wrong or do I need to load them separately...
 Any help would be appreciated.

 Kirk







Re: AuthCookie access denied messages

2001-08-20 Thread Steve van der Burg

you can set these in yourself by overwriting 
the AuthCookie Response method

you should catch these in your 
own subs and send back messages

for instance
in my Auth.pm authen_ses_key sub
[ snip ]

In addition to that, what I found confusing was actually getting authen_ses_key to be 
called in the first place, after a failed login attempt.
The stock authen_cred returns data that will be loaded into a cookie only if 
authentication is successful.  To get authen_ses_key to be called after an 
unsuccessful attempt, your authen_cred needs to do this:

if ( check_creds() ) {
   # make a ticket, start a session, etc
   return $valid_ticket_data;
}
else {
   return 'oops';   # make sure we never accept this as a valid cookie!
}

Now authen_ses_key gets called and AuthCookie will set AuthCookieReason to bad_cookie 
if you return undef.  Also, you now have a chance to set other environment variables.
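For illustration, here is a minimal sketch of an authen_ses_key along those
lines (the 'oops' marker matches the authen_cred above; validate_session is a
hypothetical helper that maps a session key back to a username or undef):

sub authen_ses_key {
    my ($class, $r, $ses_key) = @_;
    # a failed login stored the dummy 'oops' value, so reject it here
    return undef if $ses_key eq 'oops';
    # otherwise look the key up; undef means AuthCookieReason => bad_cookie
    my $user = validate_session($r, $ses_key);
    return $user;
}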

...Steve


-- 
Steve van der Burg
Information Services
London Health Sciences Centre
(519) 685-8300 ext 35559
[EMAIL PROTECTED]




Embperl, modules, cleanup and the stop button

2001-08-20 Thread Steve Smith

Hi,

I realise this is covered in part in the modperl guide, but I'd to ask
for a bit of clarification/confirmation ...

I have pages generated with Embperl, with each page having its own
module to pull data from the database and pass it back to the page in a
hash, the first line of the Embperl page being the call to the module
(a pipeline/callback hybrid, if you like).  This module in turn
creates the appropriate database object.

As the database routines may create table locks, and as I'm using
Apache::DBI, the DB object constructor registers a cleanup handler
that will unlock the tables if an abort has occurred
($r->connection->aborted).
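By way of illustration, a rough sketch of such a constructor-registered
cleanup (the helper name and the MySQL-style UNLOCK TABLES are my
assumptions, not from the original setup):

use Apache ();
use Apache::Constants qw(OK);

sub register_unlock_cleanup {
    my ($r, $dbh) = @_;    # $dbh is the Apache::DBI-cached handle
    $r->register_cleanup(sub {
        # release table locks only if the client went away mid-request
        $dbh->do('UNLOCK TABLES') if $r->connection->aborted;
        return OK;
    });
}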

My feeling is that this cleanup is in fact unnecessary (although worth
having regardless): given the pipeline nature of the request (call the
database for *all* data required, *then* output), any database calls
(and corresponding locks) will have completed before the abort
(SIGPIPE) is recognised.  This should go double for Embperl, which
executes all the Perl code before outputting any headers and data.

So I'd like to ask the group, am I right in this analysis, or am I
missing anything here?

Thanks,
Steve



Re: Problem with arrayrefs in PSections

2001-07-29 Thread Steve Piner



Geoffrey Young wrote:
 
 without having an environment to test on or the Eagle book to reference...
 
 I seem to recall something in the Eagle book about arguments to Allow and
 Deny - that from 10.3.4.1 is really a single argument and not two (in the
 TAKE2 sense), so maybe your approach is wrong and you need to make each of
 those entries in your array a single string.

Thanks, but that's not it.

Allow => ['from 1.2.3.0/24', 'from 192.168.1.0/24'],

is treated as the directive 'Allow from 1.2.3.0/24 from 192.168.1.0/24'
which of course doesn't work.

Allow => [['from 1.2.3.0/24'], ['from 192.168.1.0/24']],

gives me the following error:

[Mon Jul 30 09:55:21 2001] [error] Perl: allow requires at least two
arguments, 'from' followed by hostnames or IP-address wildcards

The Eagle says that directives that occur multiple times should be an
array of arrays. And it works when I'm not using a single arrayref for
the configuration.
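For what it's worth, a hedged sketch of one way to sidestep the sharing:
build the access settings freshly for each container via a small helper
instead of reusing one lexical:

sub access_block {
    # returns a brand-new structure on every call, so no arrayref is shared
    return (
        Order => 'deny,allow',
        Deny  => 'from all',
        Allow => [ ['from', '1.2.3.0/24'],
                   ['from', '192.168.1.0/24'] ],
    );
}

%Location = (
    '/server-status' => { SetHandler => 'server-status', access_block() },
    '/server-info'   => { SetHandler => 'server-info',   access_block() },
);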

Steve

 -Original Message-
 From: Steve Piner
 To: [EMAIL PROTECTED]
 Sent: 7/27/01 12:26 AM
 Subject: Problem with arrayrefs in PSections
 
 I've come across an oddity in configuring Apache through Perl sections.
 
 If I have a local as follows,
 
 my %access = (
 Order => 'deny,allow',
 Deny => 'from all',
 Allow => [['from', '1.2.3.0/24'],
   ['from', '192.168.1.0/24']],
 );
 
 then set up locations (or directories) as follows
 
 %Location = (
 '/server-status' => {
 SetHandler => 'server-status',
 %access,
 },
 '/server-info' => {
 SetHandler => 'server-info',
 %access,
 },
 );
 
 Then only one of the locations will let me access it.
 
 http://servername/server-status will let me in,
 http://servername/server-info won't.
 
 The problem seems to be with the shared reference: changing the 'Allow'
 line above to
 Allow => 'from all' works - though without the desired restriction of
 course, as does
 changing the code above to the following.
 
 %Location = (
 '/server-status' => {
 SetHandler => 'server-status',
 %access,
 Allow => [['from', '1.2.3.0/24'],
   ['from', '192.168.1.0/24']],
 },
 '/server-info' => {
 SetHandler => 'server-info',
 %access,
 Allow => [['from', '1.2.3.0/24'],
   ['from', '192.168.1.0/24']],
 },
 );
 
 Is this a bug, a stupid-user problem, or something else?
 
 I'm using Apache/1.3.20, mod_perl/1.25 and 1.26, and Perl v5.6.1
 
 Steve
 
 --
 Steve Piner
 Web Applications Developer
 Marketview Limited
 http://www.marketview.co.nz

-- 
Steve Piner
Web Applications Developer
Marketview Limited
http://www.marketview.co.nz



Problem with arrayrefs in PSections

2001-07-26 Thread Steve Piner


I've come across an oddity in configuring Apache through Perl sections.

If I have a local as follows,

my %access = (
Order => 'deny,allow',
Deny => 'from all',
Allow => [['from', '1.2.3.0/24'],
  ['from', '192.168.1.0/24']],
);

then set up locations (or directories) as follows

%Location = (
'/server-status' => {
SetHandler => 'server-status',
%access,
},
'/server-info' => {
SetHandler => 'server-info',
%access,
},
);

Then only one of the locations will let me access it. 

http://servername/server-status will let me in,
http://servername/server-info won't.

The problem seems to be with the shared reference: changing the 'Allow'
line above to
Allow => 'from all' works - though without the desired restriction of
course, as does 
changing the code above to the following.

%Location = (
'/server-status' => {
SetHandler => 'server-status',
%access,
Allow => [['from', '1.2.3.0/24'],
  ['from', '192.168.1.0/24']],
},
'/server-info' => {
SetHandler => 'server-info',
%access,
Allow => [['from', '1.2.3.0/24'],
  ['from', '192.168.1.0/24']],
},
);

Is this a bug, a stupid-user problem, or something else?

I'm using Apache/1.3.20, mod_perl/1.25 and 1.26, and Perl v5.6.1


Steve

-- 
Steve Piner
Web Applications Developer
Marketview Limited
http://www.marketview.co.nz



Capturing CGI output

2001-06-17 Thread Steve Wells

I have a mod_perl library that utilizes the Template Toolkit so that HTML
files are parsed through the toolkit before being sent back to the
browser.  It works great, but now they require that the library be
upgraded to include support for CGIs.  In other words, I need to capture
the output of the CGI as though it were a template and parse it as well.

I can use $r->lookup_uri('/cgifile.cgi') to gather up the subrequest and
run it using the run() command.  However, the information from the CGI
is passed back to the browser instead of handed off to me for
processing.  Is there some way to capture that information?

Thanks,
STEVE



Re: comparison of templating methods?

2001-06-06 Thread Steve Smith

 HTML::Embperl

For me, this has one major win over the other toolkits: auto form
population from a hash.  The online mortgage application system I
wrote has about 1,800 form fields, which have to be populated with
data from a database.  By making the form fields match DB column
names, I can reduce the code to do this to:

   my $data = $dbh->fetchrow_hashref($query);
   %fdat = (%fdat, %$data);

Embperl then parses the form and populates it with the matching
name=value pairs in %fdat, including select options.  Beautiful!

Steve



RE: Trying to find correct format for PerlSetVar's -- or get Apache::AuthNetLDAP working.

2001-06-01 Thread Steve Haslam

Hi,

(btw threading broken because I'm replying from a digest)

I do multiple PerlSetVars from inside a Location (itself inside a
VirtualHost) like this:

  PerlSetVar => [['User', $user],
 ['Locale', $locale],
 ...
]

I've also had some weirdness happen with mod_perl not reporting syntax
errors and other problems in Perl sections that I'm now unable to
reproduce :|

SRH
-- 
+ Steve Haslam |W: +44-20-7447-1839+
+   /excite/intl/uk/softeng|M: +44-7775-645618 +
.Spare a thought for me because I see the things that you don't see.



Re: Content-Disposition to change type and action?

2001-05-29 Thread Steve Piner



Jay Jacobs wrote:

 I've got a form that will (should) send various formats back to the client
 depending on form values.  They may want the results back in csv, pdf or
 plain html.  The form always submits to a .html, and the browser usually
 expects an html.

My suggestion is to use mod_rewrite to create a mapping so that the
actual file name doesn't matter. I have a rule in the Apache conf file:

RewriteRule ^/reports/ /bin/report.pl [PT]

So going to http://www.mysite.com/reports/foo.csv?param1=val1 would be
the same as going to http://www.mysite.com/bin/report.pl?param1=val1
except if the page is to be downloaded, the browser will use the name
foo.csv.

There's another parameter which gets passed to /reports/whatever.csv to
indicate that it should generate a csv, and send a suitable
Content-Type, but getting the 'name' right solves half the problem.
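As a rough sketch of that parameter handling (the parameter name 'format'
and the use of CGI.pm are assumptions on my part):

use CGI ();
my $q = CGI->new;
if (($q->param('format') || '') eq 'csv') {
    print $q->header(-type => 'text/csv');
    # ... emit the report as comma-separated rows ...
}
else {
    print $q->header(-type => 'text/html');
    # ... emit the HTML version ...
}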

Steve

-- 
Steve Piner
Web Applications Developer
Marketview Limited
http://www.marketview.co.nz



Re: Content-Disposition to change type and action?

2001-05-29 Thread Steve Smith

Steve == Steve Piner [EMAIL PROTECTED] writes:
 So going to http://www.mysite.com/reports/foo.csv?param1=val1
 would be the same as going to
 http://www.mysite.com/bin/report.pl?param1=val1 except if the page
 is to be downloaded, the browser will use the name foo.csv.

This also works :

  http://www.mysite.com/bin/report.pl/foo.csv?param1=val1

report.pl gets called with param1=val1, but if you set the appropriate
Content-Type the browser prompts to save it as foo.csv.  Works in
Netscape and IE.

Of course, this is what Content-Disposition is *supposed* to do; ho hum.
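For reference, a minimal sketch of the Content-Disposition route under
mod_perl (the filename is illustrative):

$r->content_type('text/csv');
$r->header_out('Content-Disposition' => 'attachment; filename="foo.csv"');
$r->send_http_header;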

Cheers,
Steve



Re: Reverse engineered HTML

2001-05-15 Thread Steve van der Burg

Does a package exist that will read an HTML document and generate an
Apache::Registry CGI script? Even better if it accepts an <!--Perl--> tag.

Randal Schwartz has a Web Techniques column about this.  Well, it parses
an HTML document and spits out CGI.pm code that reproduces it, so that
should get you most of the way there.

The columns are here:
   http://www.stonehenge.com/merlyn/WebTechniques/ 

and it's column 30.

...Steve  

-- 
Steve van der Burg
Information Services
London Health Sciences Centre
(519) 685-8300 ext 35559
[EMAIL PROTECTED]




Re: Reading the environment in perl block

2001-05-07 Thread Steve Willer

On Mon, 7 May 2001, Benoit Caron wrote:

 The way I've set the whole thing up is like this: a script named restart is
 called with some parameters telling it to reload one or all of the
 developers' environments, or the testing copy. This script would have some
 environment variables called SITE_USER and SITE_USER_PORT that will give
 me the value (read from a file defining the different users) of the username
 (and by the same token the file paths) and the port where the user should work.
 
 My problem is that my environment variables are not set. If I do a
 Dumper(\%ENV), I only get values for the variables TZ, GATEWAY_INTERFACE,
 MOD_PERL and PATH. (I did double-check that my variables were set up correctly.)

You could try PerlPassEnv:

PerlPassEnv SITE_USER
PerlPassEnv SITE_USER_PORT

...but the solution I've used is to have the startup script dynamically
build a configuration based on a configuration template. In fact, the
script doesn't even live in /etc anywhere -- it's part of the CVS checked
out area that each developer has individually. The config template is just
a standard Apache config file with special @@ tokens in it like the Apache
*.orig files:

ServerRoot @@SERVERROOT@@
Port @@SERVERPORT@@

The script changes these tokens when (re)starting Apache, and runs apache
-f /tmp/httpd-[user]-[port].conf.
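A rough sketch of that substitution step (file locations, and any token
beyond the two shown above, are assumptions):

#!/usr/bin/perl -w
use strict;

my ($user, $server_root, $port) = ($ENV{USER}, '/home/alice/website', 8042);
my %subst = (SERVERROOT => $server_root, SERVERPORT => $port);

open(IN,  "< $server_root/httpd.conf.tmpl") or die "read: $!";
open(OUT, "> /tmp/httpd-$user-$port.conf")  or die "write: $!";
while (<IN>) {
    s/\@\@(\w+)\@\@/$subst{$1}/g;   # swap each @@TOKEN@@ for its value
    print OUT $_;
}
close IN; close OUT;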

It also takes things a step further in that it automatically calculates a
port number based on the value of the server root, by running it through
sum. This way, you don't need to decide on a port for everybody -- they
just check out a copy of the website and start it. If they want to have
another copy of the website, let's say under ~/website-hacking, that's
fine -- it'll decide on a different port automatically.

I can send the startup script and config template if you want to see what
I mean.

 The only way I still see to make it work is having my restart script 
 saving the current user/port in a file and letting the perl section read
 its configuration from there. But it looks so patchy...

Eyuuc




RE: Exception modules

2001-04-30 Thread Steve Coco


unsubscribe please- thanks


-Original Message-
From: Matt Sergeant [mailto:[EMAIL PROTECTED]]
Sent: Monday, April 30, 2001 4:29 PM
To: Jeffrey W. Baker
Cc: [EMAIL PROTECTED]
Subject: Re: Exception modules



On Mon, 30 Apr 2001, Jeffrey W. Baker wrote:


 
 
 On Mon, 30 Apr 2001, Matt Sergeant wrote:
 
 
   [1] for my Perl exception package (yes, another one :) which, in its
   development version, now mostly does the Right Thing for mod_perl. See
   http://sourceforge.net/projects/perlexception/ for the curious.
 
  Since I'm doing the mod_perl exception handling talk at TPC, I feel
  obligated to ask about this...
 
  It doesn't seem any different from Error.pm to me, except in syntax. Maybe
  you could expand on why/where it is different?
 
 I tried using some different exception packages in the past. What I
 realized is, die() and eval {} ARE Perl's exception handling mechanism.
 die() and eval {}, together, have complete exception throwing and handling
 functionality. As a bonus, they lack Java's exception bondage and
 discipline.
 
 So, what's wrong with die() and eval {}?


Nothing, IMHO. In fact I've now switched away from using Error.pm's
try/catch syntax, because it creates closures and it's really easy to
generate memory leaks that way with mod_perl. But I still use Error.pm's
exception object structure...


Without some sort of structured exception handling, you don't know exactly
what type of exception was thrown. For example, in AxKit I need to know in
certain places if an IO exception occurred, or if it was some other kind of
exception. I could do this with regexps, but then I'm relying on people
using the right strings in their error messages. Plus exception objects
can give you a stack trace, which eval catching a string can't (well, it
kinda can in a few ways, but not in quite as clean a manner).
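To make that concrete, a small hedged sketch of the die()/eval{} style with
a home-grown exception class (My::Exception::IO is purely illustrative):

package My::Exception::IO;
sub new     { my ($class, %args) = @_; return bless {%args}, $class }
sub throw   { my $class = shift; die $class->new(@_) }
sub message { return $_[0]->{message} }

package main;

eval {
    open(FH, "< /no/such/file")
        or My::Exception::IO->throw(message => "open failed: $!");
};
if (my $err = $@) {
    if (ref($err) && $err->isa('My::Exception::IO')) {
        warn "IO problem: ", $err->message, "\n";
    }
    else {
        die $err;   # not one of ours, so re-throw it
    }
}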


Try it though, you might be surprised you like it. (unless by die() and
eval{} you mean you're already using exception objects, in which case I'm
preaching to the choir ;-)


-- 
Matt/


 /|| ** Founder and CTO ** ** http://axkit.com/ **
 //|| ** AxKit.com Ltd ** ** XML Application Serving **
 // || ** http://axkit.org ** ** XSLT, XPathScript, XSP **
// \\| // ** mod_perl news and resources: http://take23.org **





Must restart Apache when any .pm changes?

2001-04-22 Thread Steve Leibel

I'm experimenting with using Perl modules (.pm files) underneath 
Mason components.

As far as I can see, the only way to guarantee that changes made in 
the .pm are seen by the Mason code is to restart Apache whenever the 
.pm file changes.  This is true whether the "use" statement is in 
handler.pl or in the component.

I believe the way this works is that the first time any Apache child 
process sees "use Foo" that is the version of Foo.pm that will be 
used by that process.  No subsequent "use Foo" within components will 
have any effect during the life of that Apache process.

Am I understanding this correctly?



Re: Shared memory between child processes

2001-03-30 Thread Steve Leibel

At 5:30 PM -0800 3/30/01, Randy J. Ray wrote:
I understand the forking model of Apache, and what that means in terms of
data initialized in the start-up phase being ready-to-go in each child
process. But what I need to do is manage it so that a particular value is
shared between all children, such that changes made by one are recognized
by the others. In a simple case, imagine wanting to count how many times
total the given handler is invoked. Bumping a global counter will still be
local to the given child process, and if part of the handler's interface is
to report this counter value, then the reported number is going to be
dependent upon which child answers the request.

I'm needing to implement a handler that uses a true Singleton pattern for
the class instance. One per server, not just one per process (or thread).



You'll need to use some form of persistence mechanism such as a 
database, file, or perhaps (assuming you're on a Unix system) 
something like System V shared memory or semaphores.

One quick 'n cheap way to implement mutual exclusion between Unix 
processes (executing on the same processor) is to use mkdir, which is 
atomic (ie once a process requests a mkdir, the mkdir will either be 
done or rejected before the requesting process is preempted by any 
other process).

So you can do

mkdir "xyz"
if "xyz" already exists, wait or return an error
read or write shared variable on disc
rmdir "xyz"

to guarantee that only one process at a time can be trying to access 
a disc file.

There are many possible variations on this theme.
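A minimal sketch of that idea in Perl (the lock path and retry policy are
arbitrary choices):

my $lock  = '/tmp/shared-counter.lock';
my $tries = 0;
until (mkdir($lock, 0700)) {
    die "could not obtain lock: $!" if ++$tries > 50;
    select(undef, undef, undef, 0.1);   # sleep 100ms, then retry
}
# ... read or write the shared file here ...
rmdir($lock) or warn "could not remove lock: $!";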




Re: /dev/null problems

2001-03-28 Thread Steve Reppucci


Not answering your mod_perl question here, but I believe this
suggestion in the guide isn't useful advice in any event -- 
this isn't 'echo'ing to /dev/null as su (root); rather it's 'echo'ing
a line as su, and you (normal user) are redirecting that output to 
/dev/null.

I.e., the grouping of that command is like so (yeah, I know, this is
in no way intended to be real shell syntax, just to show the
semantics...):

(sudo echo) > /dev/null

rather than:

sudo (echo > /dev/null)

Not sure what is trying to be accomplished by either of these, but in
the interests of clarity in the guide, I think this ought to be either
corrected or removed entirely.  I'll volunteer to make the changes, if
someone can clarify exactly what the intended result is.

Stas? What do you say? Am I missing something here?

Steve Reppucci


On 28 Mar 2001, Matthew Kennedy wrote:

 Hello,
 
 From the mod_perl guide:
 
   syntax error at /dev/null line 1, near "line arguments:"
   Execution of /dev/null aborted due to compilation errors.
   parse: Undefined error: 0
   There is a chance that your /dev/null device is broken. Try:
   % sudo echo > /dev/null
 
 This is exactly the problem I have been getting when starting Apache
 mod_perl, however the suggested fix does not work for me. We're on a
 HPUX 11 machine. Is there another way to solve this problem? As I
 understand it, if /dev/null is being used as the $0 argument to the
 handler, perhaps I could somehow explicitly set it to another (empty)
 file? How would I go about that?
 
 Does anyone have any suggestions?
 
 Thanks,
 
 Matt
 

-- 
=-=-=-=-=-=-=-=-=-=-  My God!  What have I done?  -=-=-=-=-=-=-=-=-=-=
Steve Reppucci   [EMAIL PROTECTED] |
Logical Choice Software  http://logsoft.com/ |





Re: [OT] ApacheCon BOF

2001-03-21 Thread Steve Reppucci


Well, I've been resisting any replies here, especially since I've *never*
been accused of being "politically correct", but since we're tossing
in pennies, here are my two:

I agree that the use of *any* symbols of a race or religion to represent a
sports team (or anything else of that ilk) is at least distasteful, and
probably even downright insulting given the way those symbols are used in a
typical sports setting -- see the Atlanta Braves and their idiot fans'
"Tomahawk Chop".

However, IMHO, the use of the name "Apache" shouldn't in any way be
interpreted as demeaning here. We're using it for something that we all
hold in the highest respect -- well written, open, highly useable software
that's the most popular in the world for its task. I don't believe I've
ever seen any representation of the Apache logos used in any way that
connoted anything but respect and admiration.

Yes, I'm *not* of Native American descent (I'd love to hear the
viewpoint of someone who *is*...), so maybe there's something that I
don't understand here.  But I don't think I'd be personally offended if we
were calling this "the Italian server", or "the French-Canadian server",
or "the American server" (which covers my ethnic backgrounds...;^) In
fact, I think I'd feel some pride in having a quality product associated
with what I identify with.

(And in a little tangent to give folks something to flame, I've never
understood why people get so offended about sports teams using "warrior"
in their names. My home town recently changed their team names from "The
Golden Warriors" to "The Golden Eagles", because of a discussion like the
one we're involved in here.  Isn't "warriors" a generic term?  Weren't
there Amazon warriors?  The Vikings? The Romans? etc.?)

Some folks spend way too much time looking for something to be offended
by, again IMHO.

That's my 2 (or 3) cents...
Steve

On Wed, 21 Mar 2001, Bakki Kudva wrote:

 
 I am not trolling here nor am I particularly trying to be 'politically
 correct' but after seeing Sherman Alexie's award winning movie "Smoke
 Signals" and listening to him (just yesterday on 60 Minutes II) I have a
 developed a new understanding and respect for Native American symbologies
 and their relegious significance to them.
 
 To quote Alexie:(http://www.fallsapart.com/art-side.html)
 
 "Alexie: It's part of the national consciousness. If people start dealing
 with Indian culture and Indian peoples truthfully in this country, we're
 going to have to start dealing with the genocide that happened here. In
 order to start dealing truthfully with our cultures, they have to start
 dealing truthfully with that great sin, the original sin of this country,
 and that's not going to happen.
 
  Just look at the sports teams. You couldn't have a team called the
 Washington Kikes or the Washington Micks. But yet you can have the
 Washington Redskins and this Indian with a big nose and big lips running
 around. How would you feel if it was the Washington Rabbis and you had a
 guy with braids running around throwing bagels? Or the Washington Jesuits
 with some guy handing out communion wafers. It wouldn't happen. So, it's
 an insult. It's proof of the ways in which we get ignored."
 
 So it MIGHT be distasteful to use these Native American metaphors no
 matter how innocuous they might seem to us.
 
 My 2cents worth,

-- 
=-=-=-=-=-=-=-=-=-=-  My God!  What have I done?  -=-=-=-=-=-=-=-=-=-=
Steve Reppucci   [EMAIL PROTECTED] |
Logical Choice Software  http://logsoft.com/ |




Shared variables, inner subs and our

2001-03-16 Thread Steve Hay

Hi,

I was just tidying up an old mod_perl script which had some ugly "use
vars qw(...);" lines in it which I thought I'd replace with "our ...;".
I realise this isn't always a good idea since "our" is not intended as a
replacement for "use vars", but its often OK and I thought it would be
in my case.

I was only half right:  The script still works fine, but emits warnings
which it previously didn't about "variable will not stay shared".

The mod_perl Guide (1.28) refers to such problems in section 3.5:

It gives as an example the following program:

use strict;
use warnings;

sub print_power_of_2 {
my $x = shift;
sub power_of_2 {
 return $x ** 2;
}
my $result = power_of_2();
print "$x^2 = $result\n";
}

print_power_of_2(5);
print_power_of_2(6);

This prints:

Variable "$x" will not stay shared at ./nested.pl line 7.
5^2 = 25
6^2 = 25

The solution is to use a package-global $x which won't get deep-bound
into power_of_2():

use strict;
use warnings;

sub print_power_of_2 {
use vars qw($x);
$x = shift;
sub power_of_2 {
 return $x ** 2;
}
my $result = power_of_2();
print "$x^2 = $result\n";
}

print_power_of_2(5);
print_power_of_2(6);

This prints:

5^2 = 25
6^2 = 36

However, if you change the ugly "use vars" to the
sexier-although-not-quite-the-same "our":

use strict;
use warnings;

sub print_power_of_2 {
our $x = shift;
sub power_of_2 {
 return $x ** 2;
}
my $result = power_of_2();
print "$x^2 = $result\n";
}

print_power_of_2(5);
print_power_of_2(6);

then it prints:

Variable "$x" will not stay shared at ./nested.pl line 7.
5^2 = 25
6^2 = 36

!!!

In other words, we get a bizarre cross between the two: the warning
about $x not staying shared is emitted, but of course it's nonsense (?)
because package-globals don't get deep-bound into subroutines anyway,
and the program actually works fine!

The eagle-eyed will have noticed that the above "use vars" solution is
not *exactly* as presented in the mod_perl Guide:  the solution there
puts the "use vars" *outside" of the declaration of print_power_of_2(),
not *inside* as above.  This, of course, makes no difference to "use
vars" which affects the package, not a lexical scope.

But it *does* make a big difference to "our", which applies to a lexical
scope, not a package:  If we move the "our" *outside* of the declaration
of print_power_of_2():

use strict;
use warnings;

our $x;

sub print_power_of_2 {
$x = shift;
sub power_of_2 {
 return $x ** 2;
}
my $result = power_of_2();
print "$x^2 = $result\n";
}

print_power_of_2(5);
print_power_of_2(6);

then the confusing warning goes away:

5^2 = 25
6^2 = 36

Why am I bringing this up?

(a) because I think the mod_perl Guide needs to mention the use of "our"
as well as "use vars" (they're only very briefly mentioned, regarding
something else, in section 10.6.5);
(b) because I can't actually do what I just did above in my mod_perl
script!

I run my mod_perl script under Apache::Registry, which (as we all know)
makes the script into a subroutine, and therefore any subroutines into
inner subroutines.

In the example above, print_power_of_2() is like my script, power_of_2()
is like a subroutine in my script, and the two calls to
print_power_of_2() are like my script being run twice.

Obviously I can't move the "our" declaration *outside* my script like I
did above (unless Apache::Registry did this for me when it does its
stuff with my script), so I'm stuck with the warning (or else "use
vars").

Is there some reason why the warning gets emitted with "our" inside
print_power_of_2()?  Was I just lucky that this particular example
worked and I should really heed the warning, or is the warning actually
bogus?

Is there any way I can use "our" rather than "use vars" and not get
these warnings?

- Steve Hay





Re: Perl incompat. with apache/mod_perl upgrade

2001-03-12 Thread Steve Leibel

At 12:26 PM -0500 3/12/01, Khachaturov, Vassilii wrote:
When I upgraded from
Solaris Apache/1.3.14 (Unix) mod_perl/1.24_01
to
Solaris Apache/1.3.17 (Unix) mod_perl/1.25

the following code in my debugging httpd.conf broke:

<Perl>
sub WWW_DIR () { $ENV{'HOME'} . '/www' ; } # this sub will persist to next
<Perl>
... more code, using WWW_DIR sometimes
</Perl>


When I built Apache 1.3.17 with mod_perl 1.25 and numerous other 
mods, the resulting httpd was unable to read its usual configuration 
file.  When I upgraded to Apache 1.3.19 the problem went away.

I'd try 1.3.19.




Re: Stop button (was: Re: General Question)

2001-02-27 Thread Steve Hay

Bill Moseley wrote:

 At 02:02 PM 02/26/01 +, Steve Hay wrote:
 I have a script which I wish to run under either mod_perl or CGI which does
 little more than display content and I would like it to stop when the user
 presses Stop, but I can't get it working.

 You need to do different things under mod_perl and mod_cgi.  Refer to the
 Guide for running under mod_perl -- you probably should check explicitly
 for an aborted connection as the guide shows.

Oh dear.  The program has to run on various different machines around the place,
some of which run Apache/mod_perl and some of which run Microsoft IIS/CGI, so I
really want one solution which works in both environments if at all possible.

 [This is all from my memory, so I hope I have the details correct]

 Under mod_cgi Apache will receive the SIGPIPE when it tries to print to the
 socket.  Since your CGI script is running as a subprocess (that has been
 marked "kill_after_timeout", I believe), apache will first close the pipe
 from your CGI program, send it a SIGTERM, wait three seconds, then send a
 SIGKILL, and then reap.  This all happens in alloc.c, IIRC.

 This is basically the same thing that happens when you have a timeout.

 So, you can catch SIGTERM and then have three seconds to clean up.  You
 won't see a SIGPIPE unless you try to print in that three second gap.

I'm fairly sure the program does print in any given three second gap -- I see
the "x"s appearing in my browser window (since output is "unbuffered") at the
rate of two or three per second, so I really should get the SIGPIPE.

I've also tried adding in a similar handler to try and catch a SIGTERM and
exit(), but that doesn't seem to work either.

Has anybody else had any luck responding to "Stop" on NT?

Cheers,
Steve Hay





Re: Stop button (was: Re: General Question)

2001-02-26 Thread Steve Hay

Hi,

Stas Bekman wrote:

 Apache 1.3.6 and up -- STOP pressed:

 the code keeps on running until it tries to read from or write to the
 socket. the moment this happens, the script will stop the execution, and
 run cleanup phase.

 I think it's the same under mod_perl and mod_cgi. Am I right?

I have a script which I wish to run under either mod_perl or CGI which does
little more than display content and I would like it to stop when the user
presses Stop, but I can't get it working.

I've been trying to figure things out with the following test program:

---
use strict;
use warnings;
$SIG{PIPE} = \&handler;
$| = 1;
print "Content-Type: text/plain\n\n";
for (;;) {
for (1 .. 100) { ; }
print "x\n";
}
sub handler {
# Unreliable signals on NT:-
$SIG{PIPE} = \&handler;
exit;
}
---

(The pointless time-wasting loop just before each print() is so that I can
easily see whether the program actually has exited or not -- I'm running on NT
(groan!) and I can see in my "Task Manager" display that the Apache child
process is flat out 100% CPU while it's running.)

I would expect that when the user presses Stop and the script next tries a
print() it'll get a SIGPIPE, call the handler(), and exit().

But it doesn't -- the Apache child process just carries on at 100% CPU.

It makes no difference whether I run it under mod_perl or mod_cgi (except that,
of course, I get a Perl process at 100% CPU instead of the Apache child), and it
also makes no difference if I take out the first "$SIG{PIPE} = \&handler;" line
(and rely on mod_perl to handle the SIGPIPE for me as Stas described above)
and/or put the "PerlFixupHandler Apache::SIG" directive in my httpd.conf.

Can anybody help/explain?

I'm running Apache/1.3.17 and mod_perl/1.25 on Windows NT 4.

Cheers,
Steve Hay





Re: possible solution for exec cgi SSI in mod_perl

2001-02-25 Thread Steve Reppucci


If you build modperl with 'perl Makefile.PL EVERYTHING=1' (or, at least,
with 'PERL_SSI=1'), then your server side includes will have an additional
option that looks like this:

  <!--#perl sub="DoSomething"-->

This will invoke routine 'DoSomething' when this page is expanded.
You'll need to pre-load your module with a PerlRequire or PerlModule
directive.

You could also use Apache::SSI as the handler to do the same type of
thing. Many details of how this works in the Eagle book.

One warning: mod_perl *must* be built statically for PerlSSI stuff to
work -- if you try to build it dynamically, the build tool prints a
warning that "PerlSSI disabled in DSO build", or something like that.

HTH,
Steve


On Sun, 25 Feb 2001, Surat Singh Bhati wrote:

 Hi,
 
 I am using lots of exec cgi SSI in my site; all the
 CGIs called using exec are written in perl.
 <!--#exec cgi="standardcgi.cgi"-->
 
 I want to take advantage of mod_perl for performance,
 but as I know "exec" will run as mod_cgi , not as mod_perl.
 
 Can I use <!--#include virtual="modperlscript.pl"--> ?
 If the above will run, will it be run as a subrequest?
 
 Is there any better solution to serve a page with mod_perl scripts
 included via SSI in it, without running the subrequest/new process?
 
 Regards,
 
 -Surat Singh Bhati
 
 
 
 
 
 

=-=-=-=-=-=-=-=-=-=-  My God!  What have I done?  -=-=-=-=-=-=-=-=-=-=
Steve Reppucci   [EMAIL PROTECTED] |
Logical Choice Software  http://logsoft.com/ |




[OT] Re: Just learning, and a little stuck...

2001-02-23 Thread Steve Reppucci


Yes, this doesn't belong on this list.

But: Couple of immediate problems:

- If you pass a hash in an argument list, it gets inserted into the list
as just a sequence of key,value pairs -- there's no way in the subroutine
to determine that a hash was passed, as opposed to a simple list, or an
array.  You *can* pass it the way you're doing in your first example
(since the final thing you're taking off the argument list in the
subroutine is the hash), but I think most perl folks would agree that the
second example (passing a hash reference) is better form.
Obviously, this depends upon the semantics of the function you're
writing.

- 'shift' shifts one item off a list.  You seem to be assuming that it
will shift off as many items as needed -- you can just use a list assignment,
if that's what you want to do.

I think you want one of these options:

  join( $db, \%post);

  sub join {
my ($db, $post_ref) = @_;
foreach $key (keys %$post_ref) {

or:

  join( $db, %post);

  sub join {
my ($db, %post) = @_;
foreach $key (keys %post) {

No, you definitely want to limit yourself from using 'local' until you
understand the semantic differences between it and 'my'. 'Effective Perl
Programming', by Joseph Hall has a nice description of this.

HTH,
Steve

On Fri, 23 Feb 2001, Alec Smith wrote:

 This isn't specifically a mod_perl question, but something I'm having
 trouble doing within mod_perl code. I'm far from a Perl expert, but I
 try...
 
 I've got a hash called %post which contains submitted form info and a
 variable $db which is the result of a DBI->connect call. I need to take
 these 2 values and pass them into a subroutine. I've tried something like
 
 join($db, %post);
 
 sub join
 {
my ($db, %post) = shift (@_);
...
 
foreach $key (keys(%post))
{
   ...
}
 }
 
 and
 
 join ($db, \%post);
 
 sub join
 {
   my ($db, $post) = shift (@_);
 
   foreach $key (keys(%$post))
   {
   %$post{$key} = $db->quote("%$post{$key}");
   }
 }
 
 Using CGI-based Perl I suppose I could just use local() instead of my()
 to avoid having to pass arguments, but didn't think this would be
 advisable in mod_perl code.
 
 How can I manage to do what I'm trying to do?
 
 
 Alec
 
 

=-=-=-=-=-=-=-=-=-=-  My God!  What have I done?  -=-=-=-=-=-=-=-=-=-=
Steve Reppucci   [EMAIL PROTECTED] |
Logical Choice Software  http://logsoft.com/ |




Re: mod_vhost_alias / ProxyPassReverse problem

2001-02-12 Thread Steve Reppucci


I was going to suggest having the backend server listen on localhost:80,
with the proxying server listening on the public address:80, then the
redirects from either would be to port 80.

This suggestion of Tony's certainly seems like a cleaner solution though.
(Plus, I learned something I didn't know about this interaction of Listen
and Port directives -- always "a good thing" (tm) ;^)

On Mon, 12 Feb 2001, Ime Smits wrote:

 | Use the following config:
 | Listen 81
 | Port 80
 | In the presence of a Listen directive, the Port directive acts like
 | ServerName, i.e. it's what the server calls itself regardless of the
 | name that other people use to get to it.
 
 OK, thanks a lot, that does the trick. I tried that earlier today, but I
 overlooked an explicit ServerName directive somewhere else in httpd.conf, 
 overruling mod_vhost_alias...
 
 Thanks again!
 
 Ime
 

=-=-=-=-=-=-=-=-=-=-  My God!  What have I done?  -=-=-=-=-=-=-=-=-=-=
Steve Reppucci   [EMAIL PROTECTED] |
Logical Choice Software  http://logsoft.com/ |




Re: Apache::SubProcess failures

2001-02-10 Thread Steve Reppucci


Are you certain that your problem is in the output redirection?  That
message seems to indicate that the problem is in exec'ing /usr/bin/ls. Are
you sure that it (ls) exists at that path, rather than just /bin/ls?


On Sat, 10 Feb 2001, Aaron Kennedy wrote:

 Hi all,
 
 I'm having some issues involving directing the output of system() to the
 client.  I'm trying to use Apache::Subprocess to over-ride
 system().  However, whenever I use it, I get the following errors in my
 Apache error log:
 
 --- test.pl ---
 
 use strict;
 use Apache::SubProcess qw(system);
 
 select STDOUT; $| = 1;
 print "Content-type: text/html\n\n";
 system ("/usr/bin/ls");
 
 --- error.log ---
 
 [Sat Feb 10 22:54:29 2001] [error] fdopen failed! at
 /usr/local/lib/perl5/site_perl/5.6.0/i586-linux/Apache/SubProcess.pm line
 36.
 
 [Sat Feb 10 22:54:29 2001] [error] (2)No such file or
 directory: Apache::SubProcess exec of /usr/bin/ls failed
 
 I'm using Apache version 1.3.17, mod_perl version 1.25 and
 Apache::SubProcess version 0.02.  Any help would be greatly appreciated!
 
 Cheers,
 
 Aaron
 

=-=-=-=-=-=-=-=-=-=-  My God!  What have I done?  -=-=-=-=-=-=-=-=-=-=
Steve Reppucci   [EMAIL PROTECTED] |
Logical Choice Software  http://logsoft.com/ |




Re: Send a cookie, AND a redirect ?

2001-02-08 Thread Steve Reppucci


I believe you want to use 'err_header_out' rather than 'header_out' if
you're returning a status other than OK.
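For instance, a hedged sketch of that combination (my reading of the
mod_perl 1 API: the cookie rides along in the error headers, while Apache
still picks up Location from the normal headers on a redirect):

use Apache::Constants qw(REDIRECT);

$r->err_header_out('Set-Cookie' => $cookie);
$r->header_out('Location' => $the_url);
return REDIRECT;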

HTH,
Steve

On Thu, 8 Feb 2001, Harrison wrote:

 Dear All.
 
 I can set a cooke fine using:
 
  $r->content_type('text/html');
  $r->header_out('Set-Cookie' => $cookie);
  $r->send_http_header;
 
 And i can also send a redirect fine using:
 
  $r->content_type('text/html');
  $r->header_out('Location' => $the_url);
  return REDIRECT;
 
 BUT! 
 
 how do i do both? if i use my redirect code, and add an extra header_out , the 
cookie is not sent (because i have not called send_http_header ? ).
 
 If i add send_http_header, i see the full sent http_header in my browser.
 
 My idea was to have something like 
 
  $r->content_type('text/html');
  $r->header_out('Location' => $the_url);
  $r->header_out('Set-Cookie' => $cookie);
  $r->send_http_header;
  return REDIRECT;
 
 
 Which does not work.
 
 Thinking about it whilst typing this email, does header_out have a field where i can 
set the REDIRECT status?
 
 Thanks in advance, 
 
 Richard Harrison.
 

=-=-=-=-=-=-=-=-=-=-  My God!  What have I done?  -=-=-=-=-=-=-=-=-=-=
Steve Reppucci   [EMAIL PROTECTED] |
Logical Choice Software  http://logsoft.com/ |




Using Filter Module under mod_perl

2001-02-08 Thread Steve Hay

Hi,

I'm having trouble trying to use the Filter module under mod_perl.

The attached script + module correctly outputs "Goodbye, world." under
Apache/CGI, but says "Hello, world." under Apache/mod_perl (with
Apache::Registry), i.e. the filter is not being applied.

I looked into this once before, but got nowhere with it.

Doug MacEachern had a *very* quick look last time and suggested maybe:

"... the filter mechanism is tied into the perl_parse() and/or
perl_run() functions, which are only called once by the perl (command
line) binary, and only once by mod_perl.  So it could be the case that
Apache::Registry is simply too late in the game to use Perl filters."

I e-mailed the module's author (Paul Marquess) recently.  He is not
familiar with the internals of mod_perl (neither am I), but he said:

"If mod_perl calls perl_parse, I'm not sure why the filters aren't
working. The filters hooks all live in yylex, which get called
indirectly by perl_parse."

Is there anyone familiar with both Filter and mod_perl who could shed
any more light on what's going on here?

Thanks,
Steve Hay


 Hello2Goodbye.pm
 filtertest.pl


Re: Best GCC compiler options for Intel (perl apache)

2001-02-01 Thread Steve Fink

Tim Bunce wrote:
 
 Can anyone recommend extra gcc options to squeeze the last ounce of
 performance out of code (perl and apache in this case) on Intel?
 
 I don't mind tying the code down to one cpu type or losing the ability
 to debug etc. We're already doing -O6 and are looking for more.
 
 I recall Malcom Beattie (CC'd, Hi Malcolm!) experimenting in this area,
 something about not wasting a register for the frame pointer.

That particular option would be gcc -fomit-frame-pointer.

You might try -ffast-math -fexpensive-optimizations (never played with
the latter, though, and it's probably on with -O6 anyway).

If you really want to go crazy, you could try -fbranch-probabilities
(requires more than just turning it on; read the gcc man page.) I doubt
it's worth the trouble.

And you'd probably want -march=i686 (or whatever CPU you're using).

I don't know the state of pentium-specific optimizations, but does
Cygnus's Code Fusion still have a gcc with Pentium-specific
optimizations that aren't in the main tree? I just remember the numbers
saying that they'd slightly overtaken Intel's compiler, but that was a
year and a half ago.

Unrelated to the compiler, if you're throwing around significant chunks
of data, you might want to try tuning your drives. Especially if they're
IDE, since UDMA is often disabled for safety by default. I don't know
much about SCSI tuning, but whichever interface you're using, make sure
the heads are able to go around in circles really fast.

You can also play tricks with RAM disks, or solid-state hard drives like
the ones from platypustechnologies.com. But this gets too far afield.



Re: [RFC] mod_perl Digest path...

2001-01-30 Thread Steve Reppucci


My vote is to keep a plain text version available.  I don't use an
html-capable mail reader, so sending a link normally means "I'll save this
and read it later when I have time", which often means I'll delete it
three weeks later in cleaning out my 'READ' mail file...

I like the text version because I can quickly scan it to see if there are
any interesting topics that I missed during the week.

My 2 cents...

Steve

On Tue, 30 Jan 2001, Geoffrey Young wrote:

 sorry again for all the confusion with this morning's digest (I do code more
 carefully than I write, really I do...)
 
 this does present the opportune time to ask the list about the future of
 this digest...
 
 currently, the digest does not have a HTML home.  Matt at take23.org has
 graciously agreed to host it and work on the XML stylesheets required for
 the site.  This is a very good thing - but unfortunately, there is no easy
 way to derive a decent plain text version from an XML base...
 
 thus, the move to take23.org may mean that the digest no longer appears on
 the list in plaintext, but merely as a posting with a link to the current
 version...
 
 how does this strike everyone?
 
 --Geoff
 

=-=-=-=-=-=-=-=-=-=-  My God!  What have I done?  -=-=-=-=-=-=-=-=-=-=
Steve Reppucci   [EMAIL PROTECTED] |
Logical Choice Software  http://logsoft.com/ |




Re: Runaways

2001-01-29 Thread Steve Reppucci


Yes, I've seen this happen often, maybe once a day on a relatively heavily
used site running mod_perl, where a child process goes into a state where
it consumes lots of memory and cpu cycles.  I did some investigation, but
(like you, it sounds) couldn't garner any useful info from gdb traces.

I solved (?) this by writing a little perl script to run from cron
and watch for and kill these runaways, but it's an admittedly lame
solution.  I've meant for a while to look into Stas'
Apache::Watchdog::RunAway module to handle these more cleanly, but never
did get around to doing this.
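For what it's worth, a rough sketch of that sort of cron job (the ps format
and the 200MB threshold are assumptions; adjust per platform):

#!/usr/bin/perl -w
use strict;

my $limit_kb = 200 * 1024;   # kill anything over roughly 200MB resident
for (`ps -eo pid,rss,comm`) {
    my ($pid, $rss, $comm) = split;
    next unless defined $pid && $pid =~ /^\d+$/;   # skip the header line
    next unless defined $comm && $comm =~ /httpd/;
    next unless $rss > $limit_kb;
    kill 'TERM', $pid;
    warn "sent TERM to runaway httpd $pid (rss ${rss}KB)\n";
}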

Let us know if you do get to the bottom of this.

Steve

On Mon, 29 Jan 2001, Robert Landrum wrote:

 I have some very large httpd processes (35 MB) running our 
 application software.  Every so often, one of the processes will grow 
 infinitly large, consuming all available system resources.  After 300 
 seconds the process dies (as specified in the config file), and the 
 system usually returns to normal.  Is there any way to determine what 
 is eating up all the memory?  I need to pinpoint this to a particular 
 module.  I've tried coredumping during the incident, but gdb has yet 
 to tell me anything useful.
 
 I was actually playing around with the idea of hacking the perl 
 source so that it will change $0 to whatever the current package 
 name, but I don't know that this will translate back to mod perl 
 correctly, as $0 is the name of the configuration from within mod 
 perl.
 
 Has anyone had to deal with this sort of problem in the past?
 
 Robert Landrum
 

=-=-=-=-=-=-=-=-=-=-  My God!  What have I done?  -=-=-=-=-=-=-=-=-=-=
Steve Reppucci   [EMAIL PROTECTED] |
Logical Choice Software  http://logsoft.com/ |




Re: Runaways

2001-01-29 Thread Steve Reppucci

On Mon, 29 Jan 2001, Robert Landrum wrote:

 I did the exact same thing... But the kill(-9,$pid) didn't work, even 
  when run as root.  Unfortunately, Apache::Watchdog::RunAway is just as 
 lame as our solutions (Sorry Stas), in that it relies on an external 
 process that checks the apache scoreboard and kills anything that's 
 been running for "X" amount of time.

Yep, we've had a few of these too -- but it seems I can avoid these if I
kill the runaways early enough before they become too brain dead.

 You could, in theory just reduce the "Timeout" option in apache to 
 "X" above to achieve the same result, and avoid the external process 
 altogether.

Hmmm, are you sure about that?  According to the apache manual:

   The TimeOut directive currently defines the amount of time Apache
   will wait for three things: 

   1.The total amount of time it takes to receive a GET request. 
   2.The amount of time between receipt of TCP packets on a POST
  or PUT request. 
   3.The amount of time between ACKs on transmissions of TCP packets
 in responses. 

I've never known 'Timeout' to affect the amount of time a child process
takes to service a request though...

 The problem, I'm afraid, is that I start hemorrhaging memory at the 
 rate about 4 megs per second, and after 300 seconds, I have a process 
 with just over 1200 megs of memory.  The machine itself handles this 
 fine, but if the user stops and  does whatever it is they're doing 
 again, I end up with two of those 1200 meg processes... which the 
 machine cannot handle.
 
 I'm hoping someone else has a more sophisticated solution to tracing 
 runaway processes to their source.  If not, I'll have to write some 
 internal stuff to do the job...

Afraid I can't offer anything better than what it sounds like you already
have...

Steve

=-=-=-=-=-=-=-=-=-=-  My God!  What have I done?  -=-=-=-=-=-=-=-=-=-=
Steve Reppucci   [EMAIL PROTECTED] |
Logical Choice Software  http://logsoft.com/ |




Re: Upgrading mod_perl on production machine (again)

2001-01-16 Thread Steve Reppucci


Not that I have a complete answer to this problem, but I have had a similar
situation, so I'll also be interested in the solutions you uncover.

I've always handled the support of multiple perl versions by installing
new versions of perl using a prefix like /usr/local/perl/5.6.0, etc.,
(I also place CPAN's build directory under that tree.)

This makes it easy to install and test new versions of perl without
affecting running applications that have been built against a previous
perl version, as well as making it possible to test with specific versions
just by referencing the appropriate version in the script's shebang line
(or by setting my PATH appropriately when building mod_perl.)

Using this method, I symlink the "current default" version of perl and its
tools in the standard public directory (/usr/bin or /usr/local/bin).

Upgrading mod_perl versions has been a headache though, as I'm similarly
hesitant to simply 'make install' new mod_perl releases without being able
to test that all of my running applications work correctly.  I suppose
your idea of archiving the lib tree (/usr/local/perl/5.6.0/lib in my
setup) before running the 'make install' so that it's easy to roll back
should something fail is prudent.  But it still doesn't solve the problem
of being able to fully install (in its real final location, not in a
private directory...) new versions of mod_perl without affecting stuff
that's already running.  (Maybe I'm just tilting at windmills in worrying
about testing modperl from a private directory install...) 

So how *do* others handle this upgrade situation?

Steve

On Tue, 16 Jan 2001, Bill Moseley wrote:

 This is a revisit of a question last September where I asked about
 upgrading mod_perl and Perl on a busy machine.
 
 IIRC, Greg, Stas, and Perrin offered suggestions such as installing from
 RPMs or tarballs, and using symlinks.  The RPM/tarball option worries me a
 bit, since if I do forget a file, then I'll be down for a while, plus I
 don't have another machine of the same type where I can create the tarball.
  Sym-linking works great for moving my test application into live action,
 but it seems trickier to do this with the entire Perl tree.
 
 Here's the problem: this client only has this one machine, yet I need to
 setup a test copy of the application on the same machine running on a
 different port for the client and myself to test.  And I'd like to know
 that when the test code gets moved live, that all the exact same code is
 running (modules and all).
 
 What to do in this situation?
 
 a) not worry about it, and just make install mod_perl and restart the server
 and hope all goes well?
 
 b) cp -rp /usr/local/lib/perl5 and use symlinks to move between the two?
 When ready to move, kill httpd, change the perl symlinks for the binary,
 perl lib, and httpd, and restart?
 
 c) setup a new set of perl, httpd, and my application and when ready to go
 live just change the port number? 
 
 Or simply put - how would you do this:
 
 With one machine I want to upgrade perl to 5.6.0, upgrade your application
 code, new version of mod_perl, and allow for testing of the new setup for a
 few weeks, yet only require a few seconds of downtime to switch live (and
 back again if needed)?
 
 Then I wonder which CPAN module I'll forget to install...
 
 
 
 Bill Moseley
 mailto:[EMAIL PROTECTED]
 

=-=-=-=-=-=-=-=-=-=-  My God!  What have I done?  -=-=-=-=-=-=-=-=-=-=
Steve Reppucci   [EMAIL PROTECTED] |
Logical Choice Software  http://logsoft.com/ |




Re: [OT] Availability of Jobs -- was Re: [SOLICITATION] Programmer available for contracting..

2001-01-10 Thread Steve Smith

 The most important thing I learned from fuckedcompany.com is the
 term "Javateer".

So what does it mean?  The fuckedcompany search isn't very forthcoming :(



[JOB WANTED]: Boston area modperl contract

2001-01-09 Thread Steve Reppucci


I'm looking for Boston area companies, or possibly something that can be
done on a telecommuting basis, requiring expertise with perl (modperl) and
apache in a large-scale environment.

I've got some pretty good experience using modperl, please email me for a
resume.  I'm mainly targeting contract work, but would be interested in
talking about a full time position if an interesting opportunity arises.

Interested parties please email me off list.

=-=-=-=-=-=-=-=-=-=-  My God!  What have I done?  -=-=-=-=-=-=-=-=-=-=
Steve Reppucci   [EMAIL PROTECTED] |
Logical Choice Software  http://logsoft.com/ |




Mod Perl v1.24_01

2001-01-02 Thread Steve Haemelinck

I want to  configure my apache server (1.3.14) with mod_perl (1.24_01)

But when I configure mod_perl with  the following command

*   perl Makefile.PL APACHE_SRC=/usr/src/http/apache_1.3.14/src
DO_HTTPD=1 USE_APACI=1 PREP_HTTPD=1 EVERYTHING=1
*   make
*   make test -- Gives the following error : Syntax error in Line 3
.../t/conf/http.conf Invalid Command POD

I am quite new with Apache  Mod_Perl.

Is it also possible to receive a configuration file for Apache from someone?
I use a SuSE Linux 7.0 system

THX


 winmail.dat


Configuration File for Apache

2001-01-02 Thread Steve Haemelinck

Can someone send me a Configuration File for Apache please?

THX

 winmail.dat


Re: session expiration

2000-11-21 Thread Steve van der Burg

So basically I want to set a cookie that will allow them to enter the site
under their userid, but I can't allow them to enter if they are currently
logged in from elsewhere.

Any ideas?

I use cookie-based auth in a few places, with a "can be logged in
only once" restriction, but I duck the "don't allow them to enter"
scenario by letting each new session supersede the old one.
I use a database that maps logged-in user IDs to cookies, and once
authentication is done (which happens if the user doesn't send a
cookie, or doesn't send the right cookie), the new cookie simply
overwrites the old one, and the new session becomes the "allowed" one.
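In sketch form (the table and column names are assumptions of mine, not the
real schema):

# the newest login wins: drop any earlier cookie for this user, then
# record the one just issued
$dbh->do('DELETE FROM sessions WHERE user_id = ?', undef, $user_id);
$dbh->do('INSERT INTO sessions (user_id, cookie) VALUES (?, ?)',
         undef, $user_id, $cookie);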

...Steve

-- 
Steve van der Burg
Information Services
London Health Sciences Centre
(519) 685-8300 ext 35559
[EMAIL PROTECTED]






Tempfile and send_fd()

2000-11-16 Thread Steve Smith

Hi,

Could somebody tell me why the following testcase doesn't work?

use Apache (); 
use Apache::File ();

my $r = Apache->request();
$r->content_type('text/plain');
$r->send_http_header();

my $f = Apache::File->tmpfile();
print $f "test\ntest\n";
$r->send_fd($f);
$f->close;

All I get is an empty document.  My understanding is that the data
written to the tmpfile should be available immediately through the
filehandle even if it hasn't been flushed.

This is running under Registry, on Linux.

Thanks,
Steve



Re:Tempfile and send_fd()

2000-11-16 Thread Steve Smith

"Steve" == Steve Smith [EMAIL PROTECTED] writes:
 Hi, Could somebody tell me why the following testcase doesn't work?

snip

Nevermind, I got it from the archives eventually :

seek $f, 0, 0;
$r->send_fd($f);

Cheers,
Steve



Subject Matter Expert- Web Performance Reliability

2000-11-15 Thread Steve Coco



Good Afternoon:

I am seeking an expert in the field of Web Performance and Reliability. The
responsibilities include having subject matter expertise in performance and
web assessment.

This is to help an emerging product company focusing on back-end
infrastructure of the web. Our goal is to provide a full assessment of a
company's web infrastructure across seven different attributes: security,
performance, reliability, scalability, manageability, flexibility and long
term viability. While other companies in this space are primarily niche
players, we provide an end to end web assessment tool. Headquartered in
Wakefield, MA, we will be expanding to the seacoast of NH (Portsmouth) after
the first of the year. Currently we are still in stealth mode while we are
completing our initial product development and develop a formal marketing
campaign.

This person would own the metrics around performance and reliability and
serve in multiple roles (client assessment, product development, customer
delivery, etc).

I am not sure if anyone on this list would know of interested parties, but
please feel free to contact me or pass this info along.

I appreciate the time. Best,

Stephen L. Coco
Emerging Markets Associate
Darwin Partners
100 Quannapowitt Parkway
Wakefield, MA 01880
Office 800-274-1174 x7842
Direct 781-213-7842
Mobile 617-233-9900
Email [EMAIL PROTECTED]
Web www.darwinpartners.com







Re: a web interface to visualize tables

2000-11-01 Thread Steve Lloyd

Hi Guys,

The technology called Datilink has just been bought out by a new company called
Inshift Technologies.
If you are interested I can get you a killer price on a copy right now since I
have connections with both companies.
We have been using it for several years now and really like its flexibility and
ease of use.
Let me know and I will send you a win32 demo version.

Steve Lloyd
801 318-0591

Tim Harsch wrote:

 As a part of further research into this area I am going to seriously look
 into Oracle WebDB.  Other users in my shop have had great success with it.
 And the output is *very* high quality.  I would appreciate hearing more
 about it from any users here that have experience with it.

 Also there is a product that seems to be a cross platform version of what
 you want called "DatiLink".  It hooks up natively to all the major
 databases.  It's a bit pricy but the output seems to be high quality.
 http://datigen.com

  -Original Message-
  From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]On
  Behalf Of Louis-David Mitterrand
  Sent: Wednesday, November 01, 2000 7:58 AM
  To: dbi-users; [EMAIL PROTECTED]
  Subject: a web interface to visualize tables
 
 
 
  Hello,
 
  I need a tool to interactively visualize DB tables from a web interface.
  Ideally this tool would let me:
 
  - rename column headers,
  - set cell alignments, widths, background colors,
  - reorder columns,
  - save all these visualisation settings in a DB,
  - it would be written in perl (even better: mod_perl),
 
  Does such a beast exist? I am in the process of writing one, so I
  thought I'd check first...
 
  Thanks in advance,
 
  --
  Louis-David Mitterrand - [EMAIL PROTECTED] - http://www.apartia.org
 
"Kill a man, and you are an assassin. Kill millions of men, and you
are a conqueror. Kill everyone, and you are a god." -- Jean Rostand
 
 
  --
  
  DBI HOME PAGE AND ARCHIVES:
 http://www.symbolstone.org/technology/perl/DBI/
 To unsubscribe from this list, please visit:
 http://www.isc.org/dbi-lists.html
 If you are without web access, or if you are having trouble with the web
 page,
 please send mail to [EMAIL PROTECTED] with the subject line of:
 'unsubscribe'.
 
 --

 --
 DBI HOME PAGE AND ARCHIVES: http://www.symbolstone.org/technology/perl/DBI/
 To unsubscribe from this list, please visit: http://www.isc.org/dbi-lists.html
 If you are without web access, or if you are having trouble with the web page,
 please send mail to [EMAIL PROTECTED] with the subject line of:
 'unsubscribe'.
 --




Re: PUT handling (somewhat off-topic)

2000-09-06 Thread Steve van der Burg

When I send Apache a PUT request using 'telnet', the request is
received.  However, my PUT script does not run.  Instead, Apache
fabricates a 200 response that looks like this:

I just added
   Script PUT /cgi-bin/put-handler
to my Apache config (apache 1.3.12  mod_perl 1.24 on Solaris 8 SPARC),
copied http://www.apacheweek.com/issues/put1 to put-handler, added
some more logging code, and tried uploading something from
Netscape Composer.

It worked like a charm, the first time, and the request was handled by
the script (the script's own log says what I expected it to say) which
means I've been of almost no help!

If it hadn't worked, I probably would've trussed Apache while I made the
request to see what was going on.

...Steve


-- 
Steve van der Burg
Information Services
London Health Sciences Centre
(519) 685-8300 ext 35559
[EMAIL PROTECTED]




Re: HTML Template Comparison Sheet ETA

2000-09-04 Thread Steve Manes

At 11:26 AM 9/4/00 -0300, Nelson Correa de Toledo Ferraz wrote:
I agree that one shouldn't put lots of code inside of a template, but
variables and loops are better expressed in Perl than in a "little
crippled language".

Your example makes perfect sense to me.  But that's why I'm in "Tech" and 
not "Creative".  I wrote my own quick 'n nasty templating package a few 
years ago that allowed Perl code to be embedded inside PERL/PERL 
brackets.  So long as I was coding the pages, it worked great, if not as 
efficiently as embperl or mason.  But in the real world of NYC new media, 
Creative typically drives the project.  It's more common for the site to be 
built by artists and HTML sitebuilders, not programmers.  The first time I 
see the pages is when they get handed off to Tech to glue it all together. 
This usually happens sometime past Tech's scheduled hand-off date, i.e. 
five days to do fifteen budgeted days' work in order to make the launch date.

I had more success with Sam's HTML::Template package.  The sitebuilders 
seemed to better understand how to work with its simpler concept, although 
I had to stay away from HTML::Template's looping constructs for the same 
reason.  No doubt, if there had been better communications and coordination 
between Tech and Creative and I'd had more hands-on input on what Creative 
was doing to those templates I could have eliminated most of the 
screwups.  But in practice, I've found turf warfare to be status quo 
between Tech and Creative in larger agencies.

My favorite anecdote with embedded Perl templates: after a 100-page 
creative update to an existing site, nothing worked.  Turned out that some 
funky HTML editor had HTML-escaped the Perl code.   That was a fun all-nighter.

---[ http://www.magpie.com ]---=oo---
Steve Manes
Brooklyn, N'Yawk




Re: Getting data from external URL

2000-08-29 Thread Steve Reppucci


Hmmm

Looking at _trivial_http_get:

if ($code =~ /^30[1237]/ && $buf =~ /\012Location:\s*(\S+)/) {
   # redirect

So it certainly seems like it's *trying* to handle it.

As I recall (it was a late night when I had an application that wasn't
working), I had single stepped down into the guts of LWP::Simple and
realized that it was returning a failure indicator when encountering a 302
status.  I had assumed that this was intended behavior, but now that I
look at the pod of what we've currently got installed (1.32), it sure
seems like it should work.

I'll look into this a bit to see if I can recreate it, but for now, let's
chalk it up to either (1) something that's been fixed since the version
that I was using at the time, or (2) I'm just out of my head.

More likely the latter...

Sorry for the confusion.
Steve


On 29 Aug 2000, Gisle Aas wrote:

 Steve Reppucci [EMAIL PROTECTED] writes:
 
  Just a word of warning: LWP::Simple doesn't follow redirects (at least,
  the last I checked, not sure if it's been changed in the 3 or 4
  months since I've last used it...),
 
 If it does not follow redirects then that is a bug.  Do you have a
 test case?
 
 Not much has changed in the last 3 or 4 months either.
 
 Regards,
 Gisle
 

=-=-=-=-=-=-=-=-=-=-  My God!  What have I done?  -=-=-=-=-=-=-=-=-=-=
Steve Reppucci   [EMAIL PROTECTED] |
Logical Choice Software  http://logsoft.com/ |
508/958-0183 Be Open |




Re: Getting data from external URL

2000-08-28 Thread Steve Reppucci


Just a word of warning: LWP::Simple doesn't follow redirects (at least,
the last I checked, not sure if it's been changed in the 3 or 4
months since I've last used it...), so you need to be certain that you're
using it in a context where you're fetching something that won't return a
redirect.
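
(If you do need redirects followed, the full LWP::UserAgent interface does it
by default for GET; a minimal sketch, with a placeholder URL:)

    use LWP::UserAgent ();
    use HTTP::Request ();

    my $ua  = LWP::UserAgent->new;
    my $res = $ua->request(HTTP::Request->new(GET => 'http://www.example.com/'));
    if ($res->is_success) {
        my $content = $res->content;
        # ... use $content ...
    } else {
        warn "fetch failed: ", $res->status_line, "\n";
    }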

HTH...

On Sat, 26 Aug 2000, Stas Bekman wrote:

 On Sat, 26 Aug 2000, Rodney Broom wrote:
 
  OK, lots of banter...
  
  Hey V, if you are on a *NIX system, then this is a fast way:
  
  open U, "lynx -source www.some.url.dom |";
$data = join '', <U>;
  
  There, you're finished. Admittedly, this isn't terribly efficiant, but it works
  just fine and has short devel time.
 
 This one is much more efficient and requires even less coding:
 
 use LWP::Simple;
 $content = get("http://www.sn.no/")
 
 And it doesn't require you to be on any particular OS, as far as I know.
 
 see perldoc LWP::Simple and as advised by many others LWP::UserAgent for
 more advanced uses.


=-=-=-=-=-=-=-=-=-=-  My God!  What have I done?  -=-=-=-=-=-=-=-=-=-=
Steve Reppucci   [EMAIL PROTECTED] |
Logical Choice Software  http://logsoft.com/ |
508/958-0183 Be Open |




Re: PerlAuthenHandler -- doesn't get there...? SOLVED

2000-08-21 Thread Steve van der Burg

[ previous discussion snipped ]

httpd.conf or .htaccess (PerlModule hasta be in httpd.conf,
from my experience)--
   PerlAccessHandler My::Auth::access_handler
   PerlSetVar Intranet "10.10.10.1 = userA, 10.10.10.2 = userB"
   PerlAuthenHandler My::Auth::authen_handler
   AuthName realm
   AuthType Basic
   Require valid-user

   order deny,allow
   deny from all
   #
   # add 'order/deny', and we're done (as far as i can tell)
   #


Before any changes to the Guide solidify out of this, I'd like to know that we're not 
pushing bad information into it.

- order, deny, allow are all handled by mod_access, which worries about hostname- and 
IP address-based restrictions.
- AuthType Basic is handled right in the core Apache code, where it, along with 
digest, is special-cased for in http_request and elsewhere.  You aren't really doing 
Basic auth with your module, are you?  That is, you're not putting the Auth-Required 
headers into your responses (to cause the browser to prompt for credentials) if you 
don't see the Basic auth headers in the requests, right?

I'm using Apache::AuthCookie, not doing this from scratch, so that clouds things a bit 
for me, but I've been looking at Apache's behaviour a lot.

Here's my test config (for Apache::AuthCookie):

<Location /some/where>
 AllowOverride None
 Options +ExecCGI
 SetHandler cgi-script
 AuthType Site::AuthCookieHandler
 AuthName Testing
 PerlAuthenHandler  Site::AuthCookieHandler-authenticate
 PerlAuthzHandler   Site::AuthCookieHandler-authorize
 require valid-user
</Location>

Notice that there are no order, allow, deny directives in sight, and this works as it 
should.
If I truss apache while I hit this spot with a request, I see the results of the 
handlers being invoked, which in AuthCookie's case is a redirection to a login form.
If I replace "AuthType Site::AuthCookieHandler" with "AuthType Basic", the handlers 
don't get invoked, and I instead see this error from apache:

  configuration error: couldn't check user.  No user file?: /some/where

This comes from http_request.c, which is responding to "AuthType Basic".  It's giving 
an error because I haven't told it where to find a user file (AuthUserFile) or 
database (AuthDBMUserFile) to check requests against, but I've requested Basic auth.

...Steve

-- 
Steve van der Burg
Information Services
London Health Sciences Centre
(519) 685-8300 ext 35559
[EMAIL PROTECTED]




Re: PerlAuthenHandler -- doesn't get there...?

2000-08-18 Thread Steve van der Burg

 i canna get the PerlAuthenHandler to do ANYTHING. first
 line of code after $r = shift is $r-warn() but nothing
 shows up in the log. aaugh!

[snip]

 <Location /auth>
 PerlAccessHandler Serensoft::Auth::access_handler
 PerlSetVar Intranet "this = that"
 PerlAuthenHandler Serensoft::Auth::authen_handler
 AuthName "dontUthink subscriber"
 AuthType Basic
 Require valid-user
 </Location>

[snip]

After looking at my own configuration for Apache::AuthCookie, and snooping in the 
Apache source a bit, I think that your "AuthType Basic" needs to be changed to 
"AuthType Serensoft::Auth".

...Steve

-- 
Steve van der Burg
Information Services
London Health Sciences Centre
(519) 685-8300 ext 35559
[EMAIL PROTECTED]




Win32: system() calls with STDOUT re-directed

2000-08-11 Thread Steve Hay


I've finally solved a problem which I've had for a long time which may
be of interest.

I know some people looked at it for me at the time, including Randy
Kobes.

The problem was that the following script did not correctly execute it's
system() call. The ip.txt file was never written and the status was set
to 256. Running under CGI the file was written and the status was 0.
(The problem never occurred if STDOUT from the ipconfig program is not
re-directed to a file.)

$| = 1;
$pg  = "Content-Type: text/plain\n\n";
$status = system "D:\\WINNT\\system32\\ipconfig.exe 
D:\\Temp\\ip.txt";
$pg .= "The system call exited with status $status.\n";
print $pg;

I've found that since re-building everything with the inclusion of
mod_ssl (2.6.5) the problem goes away!  I kept everything else the same
(Perl 5.6.0, Apache 1.3.12, mod_perl 1.24 on NT4 SP6).

I wonder if this is anything to do with the EAPI (extended API) which
mod_ssl patches the Apache core code with, since I don't actually need
to *use* mod_ssl, just build with it - i.e. I don't even need to have a
"LoadModule ssl_module modules/ApacheModuleSSL.dll" line in my
httpd.conf file!

It could also explain why other people were unable to re-produce my
problem.

Randy:  I think you looked at this for me around the beginning of April
and couldn't re-produce it.  Do you think you had mod_ssl included in
your build?

Does this have any other implications for mod_perl???  Does mod_perl
need the EAPI like mod_ssl does (at least on Win32)???

Steve Hay





Full-time web programmer needed.

2000-08-07 Thread Steve Chitwood

Full-time web programmer needed.

Denver-based firm is looking to add an additional full-time web programmer
to our staff. Main responsibility is working with development team to
develop web applications for client projects.

Fluency in developing web-based applications in PERL/mod_perl/apache/mySQL
on BSD/LINUX a must.  Additional experience in jscript/java a plus as well
as experience with Oracle or additional web-development technologies.

CONTACT:
--
STEVE CHITWOOD
Summit Communication Design, Inc.
6065 South Quebec Street, Suite 202
Englewood, CO 80111
303/290-1898 voice
303/265-9379 fax
http://www.SummitDesign.net




Re: can't get unbuffered output to work

2000-07-13 Thread Steve van der Burg

Hi, unbuffered output in a handler just doesn't work for me:
[ details of setup and handler snipped ]

If I 'GET /unbuffered' in Netscape nothing is printed until I stop the
server. Setting $|++ does not help. Something is still buffering. This
is modperl 1.21 and Apache 1.3.12.

Any clues?

Netscape is waiting for an HTML visual break of some kind before showing any output - 
if you modify your test handler to spew "<b>Hello!</b><p>" before sleeping, you should 
see it.
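
(For the record, a bare-bones handler along these lines should show output
incrementally -- a sketch only, with the module name invented:)

    package My::Unbuffered;
    use strict;
    use Apache::Constants qw(OK);

    sub handler {
        my $r = shift;
        $r->content_type('text/html');
        $r->send_http_header;
        for my $i (1 .. 5) {
            $r->print("<b>chunk $i</b><p>\n");
            $r->rflush;        # push this chunk out to the client now
            sleep 1;
        }
        return OK;
    }
    1;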

...Steve

-- 
Steve van der Burg
Information Services
London Health Sciences Centre
(519) 685-8300 ext 35559
[EMAIL PROTECTED]




HTTP Headers

2000-07-10 Thread Steve Parker

My mod_perl configuration seems to automatically send a header everytime
(Content-type:  text/html)

I know this because, Perl scripts that print back out do not get a
malformed header error

This is causing big problems b/c I cannot det the header myself (for
cookies and redirects).

In my httpd.conf, I made sure to have:
-PerlSendHeader Off
-comment out the DefaultType directive



Does anybody have any idea where I might look in my configuration for
the cause of this?


Any help is very greatly appreciated.

Thanks,
steve




Re: Simple program _setting_ REMOTE_ADDR

2000-07-04 Thread Steve van der Burg

 Ack!  That was pretty stupid of me.  It doesn't explain why
 SetEnvIf Request_URI "/cgi-bin/VENDOR" REMOTE_ADDR=1.2.3.4
 didn't work, but I can take that to the Apache folks.

It's because mod_cgi sets the environment variable just before it
runs the program (and thus overriding whatever you set before).

A fixup handler to set the fake ip address (as you figured out) and
possibly a loghandler to set remote_ip to the right address again is
the way to go.

Thanks.  Between the help I got on the list, and a quick reading of some of the Apache 
source (esp. mod_cgi), I had a working solution up and running 90 minutes (!) after 
first deciding to attack the problem from that angle.

...Steve 

-- 
Steve van der Burg
Information Services
London Health Sciences Centre
(519) 685-8300 ext 35559
[EMAIL PROTECTED]




Simple program _setting_ REMOTE_ADDR

2000-06-23 Thread Steve van der Burg

In order to get a certain buggy, poorly-designed application from a well-known vendor 
to maintain its sessions in the face of the user's IP address changing (vendor doesn't 
understand that addresses may change between requests), I'd like to have Apache lie to 
the vendor's canned CGI app about it.
That is, I'd like to set REMOTE_ADDR like so:

<Location /cgi-bin/VENDOR>
# Feed vendor's crappy CGI code a fake address that won't change:
PerlSetEnv REMOTE_ADDR 1.2.3.4
</Location>

When I test this with a simple dump-the-environment script (/cgi-bin/VENDOR/test), it 
still shows my real IP address!  I've tried doing it with Apache's SetEnvIf (OT here, 
I know), and that doesn't do it either.
A quick check of the Eagle book, and a search through dejanews didn't turn up 
anything, and this should be easy...

Help!

...Steve

-- 
Steve van der Burg
Information Services
London Health Sciences Centre
(519) 685-8300 ext 35559
[EMAIL PROTECTED]




Re: Simple program _setting_ REMOTE_ADDR

2000-06-23 Thread Steve van der Burg

 Vivek Khera [EMAIL PROTECTED] wrote:
 "SvdB" == Steve van der Burg [EMAIL PROTECTED] wrote:
SvdB That is, I'd like to set REMOTE_ADDR like so:
SvdB <Location /cgi-bin/VENDOR>
SvdB # Feed vendor's crappy CGI code a fake address that won't change:
SvdB PerlSetEnv REMOTE_ADDR 1.2.3.4
SvdB </Location>

But is /cgi-bin running under Perl?

Ack!  That was pretty stupid of me.  It doesn't explain why
SetEnvIf Request_URI "/cgi-bin/VENDOR" REMOTE_ADDR=1.2.3.4
didn't work, but I can take that to the Apache folks.

I think what you have to do is make a handler that calls Apache's
remote_ip() method with the proper value, then runs a sub-request for
your CGI program, taking care not to recurse on itself.

How about an access handler that just sets it?  As long as I can get to it before 
mod_cgi launches the vendor's code, I should be okay.  Of course, that's what SetEnvIf 
is supposed to do...

If they don't understand how an IP address can change on a client, do
they understand that clients can share the same IP?

They're fine that way.  A couple of LWP tests confirmed that.

These people should be taken out and shot if they don't understand
that their app fails this way.  Really.  Just shoot 'em.

I agree.  How a major vendor with lots of staff and billions of dollars can get the 
basics wrong, I don't know.

Thanks for your help.

...Steve

-- 
Steve van der Burg
Information Services
London Health Sciences Centre
(519) 685-8300 ext 35559
[EMAIL PROTECTED]



Re: Simple program _setting_ REMOTE_ADDR - SOLUTION

2000-06-23 Thread Steve van der Burg

Like that workaround earlier today for Apache::Request, to work
around a MIME formatting bug in IE on the Mac.  Lame.

Taking your remote_ip hint, and reading the Eagle a bit more closely, I came up with 
this:

In httpd.conf:

<Location /cgi-bin/VENDOR>
PerlAccessHandler LHSC::FakeRemoteIP
</Location>

Here's the handler:

#!/bin/perl

package LHSC::FakeRemoteIP;

use Apache::Constants qw /:common/;
use strict;

sub handler {
   my $r = shift;
   $r->connection->remote_ip("1.2.3.4");
   return OK;
}

1;

I've tested it and it works perfectly.

...Steve


-- 
Steve van der Burg
Information Services
London Health Sciences Centre
(519) 685-8300 ext 35559
[EMAIL PROTECTED]




Re: Simple program _setting_ REMOTE_ADDR - SOLUTION

2000-06-23 Thread Steve van der Burg

 In httpd.conf:
 
 <Location /cgi-bin/VENDOR>
 PerlAccessHandler LHSC::FakeRemoteIP
 </Location>

Why an Access handler?  I realize it works, but a more appropriate
phase would be PerlFixupHandler, since you aren't doing any access
control in your module.  A couple other nitpicky points: you probably
should return 'DECLINED' at the end, not 'OK', in case there are more
handlers that want to do something during that phase and it also probably
would be a good idea to restore the "real" address after so your logs
show the actual client IP.  Something like this:

Good points.  This is my first real foray outside of content handlers, so I chose 
something early on in the request phase.  I'll give the code you've provided a try 
this afternoon.
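
(The code referred to above didn't survive the quoting, so here is roughly the
idea as I understand it -- a sketch with the module name and fake address
invented, not the original poster's code:)

    package My::FakeRemoteIP;
    use strict;
    use Apache::Constants qw(DECLINED);

    # httpd.conf would point at these instead of the access handler:
    #   <Location /cgi-bin/VENDOR>
    #       PerlFixupHandler My::FakeRemoteIP::fixup
    #       PerlLogHandler   My::FakeRemoteIP::log_handler
    #   </Location>

    # Fixup phase: remember the real address, then lie to the CGI.
    sub fixup {
        my $r = shift;
        $r->notes(real_ip => $r->connection->remote_ip);
        $r->connection->remote_ip('1.2.3.4');
        return DECLINED;
    }

    # Log phase: put the real address back so the access log stays honest.
    sub log_handler {
        my $r = shift;
        my $real = $r->notes('real_ip');
        $r->connection->remote_ip($real) if $real;
        return DECLINED;
    }
    1;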

...Steve

-- 
Steve van der Burg
Information Services
London Health Sciences Centre
(519) 685-8300 ext 35559
[EMAIL PROTECTED]




mod_perl caches compiled quotes?

2000-06-17 Thread Steve Smith

Hi,

I'm seeing some very disturbing behavior while running mod_perl.
I'm sure there must be a workaround for this.  Sorry if this is a FAQ,
but I haven't seen anything discussing this.

I use mod_perl to generate pages based on a user from cookies and a
database (DBI/MySql).  Use strict is in effect everywhere (I've
checked). I print this data using syntax of the form: 

print qq( Your personal details are $name $salary);

(Simplified obviously, but you get the idea).  This works fine the
first time, but the person who accesses the page frequently gets the
details of the last person to access the server.  Needless to say this
is a Bad Thing.

Is it the case that the results of the string interpolation are cached
and used in the next run of the script?  This would seem to be the
case, as touching the script or restarting the server seems to remove
the problem, until the next run.

I've attached some version info below.

   Apache/1.3.12 (Unix) mod_perl/1.23 mod_ssl/2.6.3 OpenSSL/0.9.5
   Perl: 5.005_03
   Redhat6.1

However, I tried compiling up to the latest versions (including perl
5.6.0) on a development machine and saw the same behavior.

What am I doing wrong?

Thanks,
Steve




Re: Apache children hanging

2000-06-01 Thread Steve Reppucci


This is *exactly* the symptoms we see, and we're just about always up to
date with Apache/Perl/modperl releases.

We've spent a fair amount of time trying to isolate the cause of these,
but haven't been able to point the finger at any one cause.  Some of the
things we've determined:

- The same behavior is displayed under Solaris (5.6 and 5.7) and Linux
  (2.2.14).
- We've seen this through through a bunch of releases of
  Apache/Perl/modperl over the past 6 months.
- When a child process goes astray, it is in a tight loop, quickly growing
  to consume 95 to 100% of the cpu cycles.
- Under Linux, running strace on the runaway results in nothing -- 
  no system calls are shown whatsoever, so it's apparently spinning in
  a tight CPU loop (though see the next bullet -- it's possible I've
  just never caught it at an early enough stage.)
- Under Solaris, I've managed to catch a few of these at an early stage
  and observed (via truss) an endless series of 'sbrk' calls, eventually
  this gets bound up tight with no system calls displayed, like the
  Linux case.
- This seems to happen more often under heavy load, but we've also seen it
  fairly regularly during low traffic periods.
- We did have some luck in doing a thorough read of our handlers that use
  DBI, making sure that all database connections are explicitly closed 
  at the end of a request (we *don't* use Apache::DBI).  This cut down on
  the number of runaways, but we still see them.
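
(Following up on that last point, the "explicitly closed" pattern amounts to
roughly this sketch -- connection details invented:)

    package My::Content;
    use strict;
    use DBI ();
    use Apache::Constants qw(OK);

    sub handler {
        my $r = shift;
        # Open the handle inside the request, use it, and close it before
        # returning, so the child never sits on an idle connection.
        my $dbh = DBI->connect('dbi:mysql:mydb', 'user', 'pass',
                               { RaiseError => 1 });
        # ... generate the response here ...
        $dbh->disconnect;
        return OK;
    }
    1;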

We've kept our runaways under control by running a watchdog script that
looks for modperl processes with the correct load numbers (cpu% > 10% and
run time > something), but we've all along thought that this would be a
temporary solution until we determined what we're doing wrong.
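
(A sketch of that kind of watchdog -- the threshold and ps flags here are
invented for illustration, not our actual script:)

    #!/usr/bin/perl -w
    use strict;

    my $cpu_limit = 10;    # percent

    open PS, "ps -eo pid,pcpu,comm |" or die "can't run ps: $!";
    while (<PS>) {
        my ($pid, $pcpu, $comm) = split;
        next unless defined $comm and $comm =~ /httpd/;
        next unless $pid =~ /^\d+$/;
        if ($pcpu > $cpu_limit) {
            warn "killing runaway httpd $pid at $pcpu% cpu\n";
            kill 'TERM', $pid;
        }
    }
    close PS;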

Now that I've seen this report from a couple of others on the list, I'm
wondering if it's not something we're doing, but rather something within
Apache or modperl.

If there's anything anyone on the list can recommend that I do to try to
collect more clues on the cause, I'll be happy to try it.

Or maybe if there are others who've seen the same behavior, pipe in so
that we can get a feeling for how many sites are experiencing this?

Steve Reppucci

On Thu, 1 Jun 2000, Gustavo Duarte wrote:

 Hi there people,
 
 I have inherited a web server running mod_perl and I am experiencing a
 somewhat critical problem: http processes sometimes get into an infinite
 loop, using 100% cpu time, and given enough time bring the machine to a
 halt.
 
 I've done a lot of testing, and there isn't a specific http request that
 triggers the behaviour, eventhough it always happens after a request. It
 seems to happen every few hours: the httpd process simply starts hogging
 up the CPU, and won't let go of it. After a while, I have a few of these
 processes running, and the machine's load average skyrockets. Sometimes
 it's bad enough I'm not even able to log in via console.
 
 I'll upgrade all the software to new versions, but apparently this
 problem has been ocurring for a while, and survived a couple of
 hardware/software upgrades. I'll also be rewriting the perl code running
 there to see if it stops the problem (the code isn't too clean - lots of
 global variables, not written under strict, etc, but "it works").
 However, it would be cool if someone could enlighten me on what's going
 on, and possibly suggest a fix :).
 
 Thanks a lot!
 
 signed,
 gustavo
 
 begin debugging info
 
 = our OS is:
 
 [root@blueland /root]# uname -a
 Linux blueland 2.2.14-5.0 #4 Wed Apr 12 20:28:28 MDT 2000 i586 unknown
 
 = Apache:
 
 Server Version: Apache/1.3.6 (Unix) mod_perl/1.19 mod_ssl/2.2.8
 OpenSSL/0.9.2b
 
 = let's look into one of the monster processes:
 
 497 ?R288:06 /usr/local/apache_1.3.6/bin/httpd
 
 = (nice cpu time there...)
 = now for gdb...
 
 [root@blueland /root]# gdb /usr/local/apache/bin/httpd 497
 GNU gdb 4.18
 Copyright 1998 Free Software Foundation, Inc.
 GDB is free software, covered by the GNU General Public License, and you
 are
 welcome to change it and/or distribute copies of it under certain
 conditions.
 Type "show copying" to see the conditions.
 There is absolutely no warranty for GDB.  Type "show warranty" for
 details.
 
 Attaching to program: /usr/local/apache/bin/httpd, Pid 497
 Reading symbols from /lib/libNoVersion.so.1...done.
 Reading symbols from /lib/libm.so.6...done.
 Reading symbols from /lib/libcrypt.so.1...done.
 Reading symbols from
 /usr/lib/perl5/5.00502/i586-linux/CORE/libperl.so...done.
 Reading symbols from /lib/libnsl.so.1...done.
 Reading symbols from /lib/libdl.so.2...done.
 Reading symbols from /lib/libc.so.6...done.
 Reading symbols from /lib/ld-linux.so.2...done.
 Reading symbols from /lib/libnss_files.so.2...done.
 Reading symbols from
 /usr/lib/perl5/site_perl/5.005/i586-linux/auto/Sybase/DBlib/DBlib.none...done.
 
 Reading symbols from /opt/sybase/lib/libsybdb.so...done.
 Reading symbols from /opt/sybase/lib/libinsck.so

Re: REPOST: Limiting Resources

2000-05-25 Thread Steve van der Burg

I am trying to limit the execution of a mod_perl script by setting the 
limit of RLimitCPU. But
this does not seem to work. I am using apache 1.3.12 , mod_perl. I 
tired using the module
Apache::Resource but even that did not work.

Does any body know why this is or am i missing something.


The RLimit stuff only affects processes forked from the httpd children (ie. CGI 
processes).  mod_perl code is part of the webserver child, so the limits don't apply.  
You'll need to look into limiting the httpd processes using another method (like in 
the shell that's used to launch the httpd parent).

...Steve


-- 
Steve van der Burg
Information Services
London Health Sciences Centre
(519) 685-8300 ext 35559
[EMAIL PROTECTED]




Re: Problem compiling mod_perl 1.23 on Solaris 2.4

2000-05-08 Thread Steve Hay

John D Groenveld wrote:

 -Xa is a Sun WorkShop Compiler C 4.2 option,
  -X[a|c|s|t]
   Specifies the degree of conformance to the ANSI C stan-
   dard.  Specifies one of the following:

   a (ANSI)
ANSI C plus K&R C compatibility  extensions,  with
semantic  changes required by ANSI C.  Where K&R C
and ANSI C specify  different  semantics  for  the
same  construct,  the compiler will issue warnings
about the conflict and use the ANSI C  interpreta-
tion. This is the default compiler mode.

The machine I was building on was Solaris 2.4 with the Sun Compiler v3.0
in which the cc manpage says "-Xt" ('transition') is the default.  I
also have a Solaris 2.6 machine with the Sun Compiler v4.0 which says,
like the snippet above, that "-Xa" is the default.

Steve Hay





Re: make test fails, httpd fails to start

2000-05-08 Thread Steve Bauer

The file t/logs/error_log is never created.  httpd fails because 
/t/conf/httpd.conf looks more like a pod file than a httpd.conf.

How is the correct httpd.conf file supposed to be created?

What can I do to determine why the creation is not happening??

Steve Bauer

Stas Bekman wrote:
 
 On Mon, 8 May 2000, Steve Bauer wrote:
 
  I have apache 1.3.12, and mod_perl 1.23.  both apache and mod_perl build
  successfully, but make test fails. the output from "make test" is:
 
  cp t/conf/mod_perl_srm.conf t/conf/srm.conf

httpd fails to start because of the following.
  will write error_log to: t/logs/error_log
  Syntax error on line 3 of /tmp/mod_perl-1.23/t/conf/httpd.conf:
  Invalid command '=pod', perhaps mis-spelled or defined by a module not
  included in the server configuration
  letting apache warm up...done
  /usr/local/bin/perl t/TEST 0
  still waiting for server to warm up...not ok
  server failed to start! (please examine t/logs/error_log) at t/TEST line
  95.
  *** Error code 146
  make: Fatal error: Command failed for target `run_tests'
 
 Yeah, how about:
  server failed to start! (please examine t/logs/error_log) at t/TEST line
^^^
  95.
 
 
 
  The file /t/httpd.conf contain several pod commands.
 
  Anybody have any ideas of where to look or how to fix this??
 
  Steve Bauer
  Cyber Database Solutions.
 
 
 __
 Stas Bekman | JAm_pH--Just Another mod_perl Hacker
 http://stason.org/  | mod_perl Guide  http://perl.apache.org/guide
 mailto:[EMAIL PROTECTED]  | http://perl.orghttp://stason.org/TULARC/
 http://singlesheaven.com| http://perlmonth.com http://sourcegarden.org
 --



Re: Problem compiling mod_perl 1.23 on Solaris 2.4

2000-04-27 Thread Steve Hay

Steve Hay wrote:

 I'm having a problem compiling mod_perl 1.23 (with Apache 1.3.12 / Perl
 5.6.0) as a DSO using APXS on Solaris 2.4.

In case anyone is interested...

I've solved my own problem (just as well, really).  If I re-compile
everything with the -Xa compiler flag then it all works out fine.





Compiling mod_perl on Windows NT

2000-04-26 Thread Steve Hay

Hi,

I've just been building Perl 5.6.0 / mod_perl 1.23 / Apache 1.3.12 on
Windows NT and found something which may be of some use.

The Apache installation does not seem to copy the headers and library
files from the build directory into the install directory (which it DOES
on Unix), so to build mod_perl you need to keep that Apache build
directory handy.

I found that manually copying

apache_1.3.12\src\include
apache_1.3.12\src\os\win32\*.h
apache_1.3.12\src\CoreR\ApacheCore.lib

into the install directory and then pointing the DevStudio
include/library paths at these locations in the install directory works
fine, thus removing the need to keep the build directory floating about.

This presumably means that I can now ditch the build directory and still
be able to build mod_perl 1.24 when it comes using only my Apache
install directory.

- Steve Hay





Problem with CGI::Carp under mod_perl

2000-04-18 Thread Steve Hay


I'm having problems using "CGI::Carp qw(fatalsToBrowser);" in modperl
scripts.
Below are three short scripts and their output under Apache/CGI
and Apache/modperl. All three of them produce (more or less) useful
output under Apache/CGI, but only the last one does under Apache/modperl.
The first one calls die() itself. Under Apache/CGI the die() message
appears in the web browser (albeit preceded by a spurious Content-Type
line), but under Apache/modperl the message goes to the error.log and a
bizarre message appears in the web browser consisting of some output from
the script, followed by a "200 OK" HTTP header, followed by a message suggesting
that all was not OK after all (all the same as if CGI::Carp was not being
used).
The second one has a syntax error. Under Apache/CGI a message about
a compilation error appears in the web browser (but not the detail of the
syntax error itself, which disappears completely - not even going to error.log!);
under Apache/modperl an "Internal Server Error" message appears in the
web browser (again just like CGI::Carp was not being used) and (curiously)
the detail of the syntax error does now at least appear in error.log!
The third one attempts a division by zero and correctly says so
in the web browser under both Apache/CGI and Apache/modperl.
Can anybody explain what's going on here???
The first script is closest to the problem I've really got.
I'm using DBI/DBD::mysql and I want SQL syntax errors (which I keep making)
to appear in the web browser instead of having to keep opening the error.log.
Running under Apache/CGI I get useful messages like:
Software error:
DBD::mysql::st execute failed: You have an error in your SQL syntax
near 'BINARY USER_NAME LIKE 'mk-%' LIMIT 10' at line 1
at d:/inetpub/cgi-bin/mysql.pl line 300.
but under Apache/modperl I just get useless garbage like the error_die.pl
below produces.
I'm running Perl 5.005_03 / Apache 1.3.6 / mod_perl 1.22 on NT 4.

error_die.pl

 use CGI::Carp qw(fatalsToBrowser);
 $| = 1;
 print "Content-Type: text/html\n\n";
 print "I'm about to die() ...\n";
 die "I'm dead.\n";
Apache/CGI:
I'm about to die() ... Content-type: text/html
Software error:
I'm dead.
For help, please send mail to the webmaster ([EMAIL PROTECTED]),
giving this error message and the time and date of the error.
Apache/modperl:
I'm about to die() ... HTTP/1.1 200 OK Date: Tue, 18 Apr 2000 11:09:35
GMT Server: Apache/1.3.6 (Win32) mod_perl/1.22 Connection: close Content-Type:
text/html
OK
The server encountered an internal error or misconfiguration and
was unable to complete your request.
Please contact the server administrator, [EMAIL PROTECTED]
and inform them of the time the error occurred, and anything you might
have done that may have caused the error.
More information about this error may be available in the server
error log.

error_syntax.pl
---
 use CGI::Carp qw(fatalsToBrowser);
 $| = 1;
 print "Content-Type: text/html\n\n";
 print "Syntax error at the end of this line
...\n"
 print "blah blah blah.\n";
Apache/CGI:
Software error:
Execution of d:/inetpub/cgi-bin/error_syntax.pl aborted due to
compilation errors.
For help, please send mail to the webmaster ([EMAIL PROTECTED]),
giving this error message and the time and date of the error.
Apache/modperl:
Internal Server Error
The server encountered an internal error or misconfiguration and
was unable to complete your request.
Please contact the server administrator, [EMAIL PROTECTED]
and inform them of the time the error occurred, and anything you might
have done that may have caused the error.
More information about this error may be available in the server
error log.

error_divide.pl
---
 use CGI::Carp qw(fatalsToBrowser);
 $| = 1;
 print "Content-Type: text/html\n\n";
 print "I'm about to divide by zero ...\n";
 my $x = 1 / 0;
Apache/CGI:
Software error:
Illegal division by zero at d:/inetpub/cgi-bin/error_divide.pl
line 5.
For help, please send mail to the webmaster ([EMAIL PROTECTED]),
giving this error message and the time and date of the error.
Apache/modperl:
Software error:
Illegal division by zero at d:/inetpub/cgi-bin/error_divide.pl
line 5.
For help, please send mail to the webmaster ([EMAIL PROTECTED]),
giving this error message and the time and date of the error.




Problem with CGI::Carp under mod_perl

2000-04-18 Thread Steve Hay

Sorry!  Here it is again in text/plain this time...

(My mail client doesn't ask whether I want to send in text or HTML,
hence the slip.  Maybe *I* should get a new one!)

---

I'm having problems using "CGI::Carp qw(fatalsToBrowser);" in modperl
scripts.

Below are three short scripts and their output under Apache/CGI and
Apache/modperl.  All three of them produce (more or
less) useful output under Apache/CGI, but only the last one does under
Apache/modperl.

The first one calls die() itself. Under Apache/CGI the die() message
appears in the web browser (albeit preceded by a
spurious Content-Type line), but under Apache/modperl the message goes
to the error.log and a bizarre message appears in
the web browser consisting of some output from the script, followed by a
"200 OK" HTTP header, followed by a message
suggesting that all was not OK after all (all the same as if CGI::Carp
was not being used).

The second one has a syntax error. Under Apache/CGI a message about a
compilation error appears in the web browser (but not
the detail of the syntax error itself, which disappears completely - not
even going to error.log!); under Apache/modperl an
"Internal Server Error" message appears in the web browser (again just
like CGI::Carp was not being used) and (curiously)
the detail of the syntax error does now at least appear in error.log!

The third one attempts a division by zero and correctly says so in the
web browser under both Apache/CGI and
Apache/modperl.

Can anybody explain what's going on here???

The first script is closest to the problem I've really got.  I'm using
DBI/DBD::mysql and I want SQL syntax errors (which I
keep making) to appear in the web browser instead of having to keep
opening the error.log.  Running under Apache/CGI I get
useful messages like:

Software error:
DBD::mysql::st execute failed: You have an error in your SQL syntax near
'BINARY USER_NAME LIKE 'mk-%' LIMIT 10' at line 1
at d:/inetpub/cgi-bin/mysql.pl line 300.

but under Apache/modperl I just get useless garbage like the
error_die.pl below produces.

I'm running Perl 5.005_03 / Apache 1.3.6 / mod_perl 1.22 on NT 4.


error_die.pl


use CGI::Carp qw(fatalsToBrowser);
$| = 1;
print "Content-Type: text/html\n\n";
print "I'm about to die() ...\n";
die "I'm dead.\n";

Apache/CGI:

I'm about to die() ... Content-type: text/html
Software error:
I'm dead.
For help, please send mail to the webmaster ([EMAIL PROTECTED]),
giving this error message and the time and date of
the error.

Apache/modperl:

I'm about to die() ... HTTP/1.1 200 OK Date: Tue, 18 Apr 2000 11:09:35
GMT Server: Apache/1.3.6 (Win32) mod_perl/1.22
Connection: close Content-Type: text/html
OK
The server encountered an internal error or misconfiguration and was
unable to complete your request.
Please contact the server administrator, [EMAIL PROTECTED] and
inform them of the time the error occurred, and
anything you might have done that may have caused the error.
More information about this error may be available in the server error
log.


error_syntax.pl
---

use CGI::Carp qw(fatalsToBrowser);
$| = 1;
print "Content-Type: text/html\n\n";
print "Syntax error at the end of this line ...\n"
print "blah blah blah.\n";

Apache/CGI:

Software error:
Execution of d:/inetpub/cgi-bin/error_syntax.pl aborted due to
compilation errors.
For help, please send mail to the webmaster ([EMAIL PROTECTED]),
giving this error message and the time and date of
the error.

Apache/modperl:

Internal Server Error
The server encountered an internal error or misconfiguration and was
unable to complete your request.
Please contact the server administrator, [EMAIL PROTECTED] and
inform them of the time the error occurred, and
anything you might have done that may have caused the error.
More information about this error may be available in the server error
log.


error_divide.pl
---

use CGI::Carp qw(fatalsToBrowser);
$| = 1;
print "Content-Type: text/html\n\n";
print "I'm about to divide by zero ...\n";
my $x = 1 / 0;

Apache/CGI:

Software error:
Illegal division by zero at d:/inetpub/cgi-bin/error_divide.pl line 5.
For help, please send mail to the webmaster ([EMAIL PROTECTED]),
giving this error message and the time and date of
the error.

Apache/modperl:

Software error:
Illegal division by zero at d:/inetpub/cgi-bin/error_divide.pl line 5.
For help, please send mail to the webmaster ([EMAIL PROTECTED]),
giving this error message and the time and date of
the error.





Re: [OT] mysql-modules for Win32 platform

2000-04-17 Thread Steve Hay

Erich Markert wrote:

 I've been trying to get the msql-mysql-modules compiled and installed on
 my Win98 machine for a couple weeks without much luck.

I managed to get these working on NT4 with both 5.005_03 and 5.6.0 (both
built myself from the standard distribution, not the ActiveState build)
after a bit of hacking...

 I tried using the latest version of perl 5.6 from active state but ran
 into nothing but problems.  Basically running perl Makefile.PL for
 Data::ShowTable (a required module) failed because the version of perl
 couldn't be determined - even after reinstalling.

I also had trouble with this when I had mSQL installed.  It said "Unable to
find a perl 5" and then proceeded to name the files it was looking for
(including perl.exe!) and the directories where it was looking (including
D:\perl5\bin, which is where it was!).  To my surprise, I found that
uninstalling mSQL and then trying to build Data::ShowTable again worked
fine!  Weird.  Anyway, I've now ditched mSQL in favour of the much better
MySQL which doesn't suffer this problem.

I don't know if this is the same problem you had -- you might have a
different problem because you're using the ActiveState build?  I never had
much luck building any CPAN modules with that, which is why I never use
it...

I do still get 17/17 tests failed when running "nmake test" (!!!), but its
enough to stop msql-mysql-modules complaining that a pre-requisite is
missing.

 I finally reverted back the a previous version of perl (Gurusamy
 Sarathy's version 5.004_02) and was able to get Data::ShowTable and DBI
 installed but now when I run perl Makefile.PL for Msql-Mysql-modules I
 receive these errors:

 Note (probably harmless): No library found for 'm.lib'

I got a similar message regarding "-lm" which I just ignored (!!!... well,
it said it was probably harmless :-)

I also found that I had to hack the Makefile generated by "perl Makefile.PL"
to change the two lines which say:

-e ppp '...' '...' '...'

to:

-e "ppp('...', '...', '...')"

and I hacked the Makefile in the mysql sub-directory to change:

OTHERLDFLAGS = -LD:\mysql/lib/opt

to:

OTHERLDFLAGS = -LIBPATH:D:\mysql/lib/opt

None of this was necessary on my Solaris 2.6 box, however, where everything
went like a dream...


- Steve Hay




Problem Compiling with Perl 5.6.0

2000-03-30 Thread Steve Hay

Since I had no reply to my previous problem (re-directing STDOUT in
system() calls), I thought I would try using Perl 5.6.0 instead of
5.005_03 (probably a good idea anyway) to see if that helped.

Unfortunately, now I can't get (the Apache side of) mod_perl to compile.

I'm using MSVC++ 6.0 on Windows NT 4.  Perl 5.6.0 and Apache 1.3.12
compiled fine, as does the Perl side of mod_perl 1.22, but when I go
into Dev Studio to compile the Apache side I get the following output
for each file:

D:\Temp\apache_1.3.12\src\include\../os/win32/os.h(87) : warning C4005:
'crypt' : macro redefinition
D:\perl5\lib\CORE\win32iop.h(301) : see previous definition of
'crypt'
D:\Temp\apache_1.3.12\src\include\../os/win32/os.h(109) : warning C4142:
benign redefinition of type
D:\Temp\apache_1.3.12\src\include\../os/win32/os.h(110) : warning C4142:
benign redefinition of type
D:\Temp\apache_1.3.12\src\include\../os/win32/os.h(112) : error C2371:
'mode_t' : redefinition; different basic types
D:\perl5\lib\CORE\win32.h(197) : see declaration of 'mode_t'
D:\Temp\apache_1.3.12\src\include\../os/win32/os.h(146) : warning C4005:
'sleep' : macro redefinition
D:\perl5\lib\CORE\win32iop.h(279) : see previous definition of
'sleep'
D:\Program Files\Microsoft Visual Studio\VC98\Include\stddef.h(78) :
warning C4005: 'errno' : macro redefinition
D:\perl5\lib\CORE\win32iop.h(188) : see previous definition of
'errno'
D:\Temp\apache_1.3.12\src\include\../os/win32/os.h(165) : warning C4005:
'stat' : macro redefinition
D:\perl5\lib\CORE\win32iop.h(223) : see previous definition of
'stat'
D:\Temp\apache_1.3.12\src\include\../os/win32/readdir.h(34) : error
C2373: 'win32_opendir' : redefinition; different type modifiers
D:\perl5\lib\CORE\win32iop.h(116) : see declaration of
'win32_opendir'
D:\Temp\apache_1.3.12\src\include\../os/win32/readdir.h(35) : error
C2373: 'win32_readdir' : redefinition; different type modifiers
D:\perl5\lib\CORE\win32iop.h(117) : see declaration of
'win32_readdir'
D:\Temp\apache_1.3.12\src\include\../os/win32/readdir.h(36) : error
C2373: 'win32_closedir' : redefinition; different type modifiers
D:\perl5\lib\CORE\win32iop.h(121) : see declaration of
'win32_closedir'

Any ideas, anyone?

Has anyone else got 5.6.0 / 1.3.12 / 1.22 going on NT 4?


Steve Hay





Re: Problem Compiling with Perl 5.6.0

2000-03-30 Thread Steve Hay

"G.W. Haywood" wrote:

 Come to think of it, NT probably wasn't the best idea you ever had
 either.

I agree, but we're selling a web application and most of our customers
want it on NT.

  Unfortunately, now I can't get (the Apache side of) mod_perl to
  compile.

 You aren't alone.  You really are on the bleeding edge with that lot.
 My advice would be ro try Linux, and stick with Perl 5.005_03 and
 mod_perl 1.21/Apache 1.3.11 (or .12) for a few weeks.

Well, I got mod_perl 1.22 going on NT with Perl 5.005_03 / Apache 1.3.12
(apart from the problem with system() calls...) after two quick hacks (one
to lib\Apache\src.pm and one to src\modules\perl\Util.xs) so it just seems
a shame I can't get it to go with Perl 5.6.0.  I just wondered if anyone
out there new of any more hacks to help...





Re: Problem Compiling with Perl 5.6.0

2000-03-30 Thread Steve Hay

Thanks for this!

I tried it with the latest mod_perl cvs: mine now compiles perfectly too
(_never_ seen that before!!!), and there's now only 1 unresolved external
symbol instead of 7.  Looks like it could be nearly there - I'll try another
one some time.

I look forward to mod_perl 1.23 ...

Steve Hay


Randy Kobes wrote:

 On Thu, 30 Mar 2000, Steve Hay wrote:

  Has anyone else got 5.6.0 / 1.3.12 / 1.22 going on NT 4?

 Hi,
  There's a couple things you can do -

 - add the flag /D "WIN32IOP_H" - this handles the win32_opendir
 and similar errors.
 - for the mode_t error, in apache/src/os/win32/os.h, change
 the typedef of mode_t from 'int' to 'unsigned short', so as
 to agree with the mode_t typedef of perl in perl/lib/core/win32.h.
 In the same apache os.h file, if you change the typedef of
 uid_t and gid_t from 'int' to 'long', again so as to agree
 with Perl's typedefs, then the compilation, at least for me,
 proceeeds without warnings.

 Unfortunately, there's some problems in the linking phase - some
 symbols that were present in perl.lib of 5.005_03 have been
 removed from perl56.lib of 5.6.0. Doug has worked on this - you
 may want to get the latest mod_perl cvs snapshot from
 http://perl.apache.org/ and try that.




Problem re-directing STDOUT in system() calls

2000-03-29 Thread Steve Hay

Hi,

I've had this problem before, but never got to the bottom of it.

I'm cursed with a situation in which I need to run some .exe file from a
(mod)perl script.  The program concerned is a console application so it
just writes its output on STDOUT.  I need to re-direct that output to a
temporary file, and then read the file in to process in the perl script.

Sounds simple enough, and it works fine running Apache without
mod_perl.  But as soon as I put mod_perl into the equation I find that I
can't re-direct STDOUT in the system() call.  The following script
illustrates the problem:

print "Content-Type: text/html\n\n";
$status = system "D:\\WINNT\\system32\\ipconfig.exe 
D:\\Temp\\ip.txt";
print "The system() call exited with status $status.\n";

Without mod_perl this works fine: "ip.txt" is created and $status is 0.
But with mod_perl "ip.txt" is not created, $status is 256 and the
following line appears in error.log:

The handle could not be opened
during redirection of handle 1.

Can anyone help?


My setup is as follows:

- NT 4 Workstation, Service Pack 6

- Perl 5.005_03 built with VC++ 6 and the Makefile options:
CFG = Optimize
USE_PERLCRT
PERL_MALLOC

- Apache 1.3.12 built with VC++ 6

- mod_perl 1.22 built with VC++ 6

- D: is a local disk which I have full access to


I've found that the problem goes away if I downgrade to Apache 1.3.6 and
keep everything else the same!


Steve Hay





Re: Problem re-directing STDOUT in system() calls

2000-03-29 Thread Steve Hay

"Andrei A. Voropaev" wrote:

 See the guide. Under modperl the output from system will not go to the
 user unless your perl was compiled with sfio. The reason for that I
 guess is that under modperl STDOUT is tied to a package, while system
 commands expect a file descriptor. The easiest way to overcome it is to
 use `` (backticks) and capture all output into a variable and then print
 it out.

 Andrei

The guide actually says:

3.5.5  Output from system calls
Output of system(), exec(), and open(PIPE,"|program") calls will not be
sent to the browser
unless your Perl was configured with sfio.

which is fair enough, but does it explain the problem I've got?

It DOES explain why the following script prints the output of IPCONFIG in the
browser when running under Apache and doesn't when running under Apache +
mod_perl:

$| = 1;
print "Content-Type: text/plain\n\n";
$status = system "D:\\WINNT\\system32\\ipconfig.exe";
print "The system() call exited with status $status.\n";

but that isn't my problem.

How does it explain why the following works under Apache 1.3.6 + mod_perl and
not under Apache 1.3.12 + mod_perl:

print "Content-Type: text/plain\n\n";
$status = system "D:\\WINNT\\system32\\ipconfig.exe  D:\\Temp\\ip.txt";
print "The system() call exited with status $status.\n";

?

I'm not trying to get the output of the system() call into the browser - I
want to re-direct it to a file - and the difference between the one which
works and the one which doesn't work is not mod_perl: it's the Apache version!

Am I also correct in thinking that configuring Perl with sfio is only an
option on Unix (which, BTW, doesn't have my problem anyway!)?

Help!


Steve Hay





RE: ANNOUNCE: HTML::Embperl 1.3b2

2000-02-13 Thread Steve Willer


On Mon, 14 Feb 2000, Gerald Richter wrote:

 If you really like to do so, we have to compile the perl (of every object)
 for every namespace it will run into. Currently I think more of a feature
 like exporting variables (like Perl modules can do), so that they are
 visible in all objects during the request. What do you think?

Yes, that makes sense. Perhaps an optional argument for Execute that lists
the variables to export? Actually, there has to be a way to get variables
in both directions. Perhaps an export list and an import list in the
params to Execute? Or maybe if you just make sure we can export hashes
like %fdat, %cdat and %errors, then that would take care of the
bidirectional need. 

 Currently you can get $ENV{SCRIPT_NAME} that contains /x/y/z.html and
 $ENV{PATH_TRANSLATED} will contain the filename of page your are in at the
 moment. Everything else you have strip out of these two at the moment.

Yep. I'm kinda thinking of making templates easy to use for newbie
engineers. Although specialized variables tend to duplicate things that
are in other variables, they are useful for making things easier to use. I
don't know if there's an elegant way to do it, though -- after all my
meetings, I don't have a lot of brainpower left right now for design. 

  This is useful in situations where I want to have nav bars that
  are page-specific, then directory-specific, then site-default.
  In this example, for the top nav, I would look for y/z_top.html,
  then y/_top.html, then _top.html.
  Anyway, that's what I do. I don't know that my solution is the
  most elegant, though.
 
 If you do an Execute ("_top.html") then it will look in the same directories
 for _top.html, as it looked for the _template.html. In your example this
 means Embperl will automatically look for /x/y/_top.html and /x/_top.html, but
 not /_top.html. Maybe it would be useful to search for /_top.html also, but
 I am not sure how best to determine where to stop the search for
 such sub-objects. The current solution would be to have a /_template.html
 which that calls a _another_template.html and _another_template.html is the
 one that is overwritten (and calls _top.html). In that way EmbperlObject
 will always serach "/x/y", "/x", "/".

I realized this, and I definitely like the way you did it. The only thing
it doesn't give me is _per page_ "_top.html" files. If you see my example
at the top, you see it looks for y/z_top.html first. This functionality
isn't supported in the path right now. Just a thought: What if the path
somehow supported a prefix on filenames? Like if it was "y/z:y/:./", then
you tack on the filename after each element in the path. The only problem
is that this would require a trailing "/" on true directory names, but if
you changed EmbperlObject to generate conforming paths it wouldn't be so
much of a problem (I think).

 But this could be easily done with a Perl regex and I don't want to setup to
 much globals, because they pollute your namespace and it takes time to set
 them up. Both are bad in cases you don't need them. When we have the export
 feature (I decribed above, or something which serve the same purpose), you
 could simple set them up at the top of your template.

Yes. I may do this, since there's typically only one template per website.
Or perhaps in a single function each template calls (which is what I'm
currently doing).

  The reason I ask is that we use these things called "actions",
  where the filename in the URL doesn't usually exist. The "action"
  is a library object referenced by $fdat{AID}. The library object
  is executed, and depending on error or success state, it returns
  back different names for the object that should be loaded.
  *Then* we load the object. This very useful for form handlers that
  have different states, as it avoids redirects or big [$if$]
  blocks in html files.
  Any ideas?
 
 I think we can make Embperl here smarter/more configurable so it is able to
 handle this case in your way as well

I really liked this "action" idea when I found out about it, as it solves
multi-state responses to forms (error state, success state, etc.) very
elegantly. Out of curiosity, what do you typically do for forms? An
example might be a registration form where there are required fields. How
does your page flow work?

  Okay, so I'm a little verbose. Now I'll report a bug I've found lately:
  Remember, I run templates and objects in the same namespace? I've found
  that variables declared in the template often are not cleaned up at the
  end of the request. The next page load goes to the same template, of
  course, and the variable is still there.
 
 I guess this is because you call the object from different pages (with
 different namespaces). As mentioned above Perl can only compile them for one
 namespace, so the object is compiled for namespace A, but Embperl things it
 has to cleanup namespace B. Could this be the 

Slight performance enhancement for dynamic sites.

2000-02-02 Thread Steve Reppucci


Here's a tiny performance tweak that I stumbled across that I don't
believe I've seen in any of the other online docs.  (Stas, maybe you
can suck this into the guide if you think it's something new...)

In doing some tweaking on one of our modperl servers earlier this
week, I noticed via 'truss' that a bunch of 'stat's were being done
for non-existent files.

The fact that they are non-existent is cool, because it's a site that
has no document root, its sole purpose is to dynamically generate and
cache date bar images in a bunch of different styles
(http://date.boston.com/, BTW.)

But I didn't like the thought that these stat calls were happening for
each request.

So, my solution was to add the following to the VirtualHost section
for that host:

<VirtualHost 199.95.74.82:80>
.
.
  PerlTransHandler  "sub { return OK; }"
.
.
</VirtualHost>

This has the effect of short-circuiting the normal TransHandler
processing of trying to find a filesystem component that matches the
given URI -- no more 'stat's!

For this vhost, I was able to apply this globally, but I can imagine
cases where others have portions of a document tree that are static
along with portions that are dynamic, and would need to supply a slightly
more sophisticated TransHandler routine that only returns OK if the
requested URI matches the dynamic portions of the document tree.
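
To make that concrete, an (untested) sketch of such a handler -- the
package name and the "/dynamic" prefix are made up for the example --
might look like:

package My::Trans;
use Apache::Constants qw(OK DECLINED);

sub handler {
    my $r = shift;
    # Claim the translation phase (and skip the filesystem walk and its
    # stats) only for the dynamic part of the tree; DECLINED lets Apache
    # do its normal filename translation for everything else.
    return OK if $r->uri =~ m{^/dynamic/};
    return DECLINED;
}
1;

with something along these lines in httpd.conf:

  PerlModule My::Trans
  PerlTransHandler My::Trans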

If anyone has a better way of handling this, or a pointer to somewhere
in the book or online where this is discussed, I'll be happy to hear
of it.

Steve

-=-=-=-=-=-=-=-=-=-  My God!  What have I done?  =-=-=-=-=-=-=-=-=-=
Steve Reppucci  617/929-7003
Director of Software Development [EMAIL PROTECTED]
Boston.com (Times Company Digital)   Be Open



As long as we're at it...

2000-02-01 Thread Steve Reppucci


As long as we're on the job thread:

Boston.com is looking to hire another programmer, preferably one with
modperl experience, but we're extremely willing to hire someone who's
just a good, solid web programmer, with an interest in learning about
apache and modperl.

Requirements are a strong knowledge of perl and a desire to explore new
ways to use it to help drive site traffic.

We'd be willing to hire someone on a contract basis, but would prefer
a fulltime employee. Database background would be nice.  

Other stuff: We're in Boston, MA (USA) near the Children's Museum,
about 3 blocks from South Station.  A good, young, fun group of
people, who are committed to writing clean, fast code.  (You'll be
hard pressed to find stuff without 'use strict' here...)  We're all
Unix, all the time (Solaris, Linux), so please, MSCE me no MSCEs.

mailto:[EMAIL PROTECTED]

Thanks.
Steve

-=-=-=-=-=-=-=-=-=-  My God!  What have I done?  =-=-=-=-=-=-=-=-=-=
Steve Reppucci  617/929-7003
Director of Software Development [EMAIL PROTECTED]
Boston.com (Times Company Digital)   Be Open



Re: splitting mod_perl and sql over machines

2000-01-18 Thread Steve Reppucci


Stas:

One other thing you might want to mention in your thread: the use of
Apache::DBI to maintain persistent connections to the DB can cause a
problem if you have multiple modperl servers all talking to the same DB
server.

For instance, on our site, we have 2 hosts running modperl, each of which
is set to have a MaxClients of 128 (probably too much, but...) In
addition, there are various conventional CGIs talking to the same
host running a MySQL server.  If we try to run more modperl servers (or
even during heavy traffic times with only 2 modperl servers), we
frequently see MySQL errors from "maximum number of connections exceeded".
This makes sense, as all of those long-lived, persistent DB connections
are presumably tying up MySQL resources...

Granted, this is using MySQL pretty much out of the box, without much
attention spent on whether it is possible to configure a larger connection
limit, but I think it's something folks might want to be aware of.
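
As a rough rule of thumb, Apache::DBI holds one open handle per child per
distinct connect string, so you need on the order of (MaxClients) x
(number of modperl hosts) connections available on the MySQL side, plus
whatever the plain CGIs grab.  For reference, the usual way those
persistent handles get set up in startup.pl is something like this (the
DSN, user and password here are placeholders, not our real setup):

use Apache::DBI;

# Open the handle when each child starts rather than on its first
# request; the same handle is then reused for the life of the child.
Apache::DBI->connect_on_init(
    "DBI:mysql:database=mydb;host=dbhost",
    "user", "password",
    { RaiseError => 1, AutoCommit => 1 }
);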

Not sure if I've added anything to this thread, but...

Steve

On Tue, 18 Jan 2000, Leslie Mikesell wrote:

 According to Stas Bekman:
 
  We all know that mod_perl is quite hungry for memory, but when you have
  lots of SQL requests, the sql engine (mysql in my case) and httpd are
  competing for memory (also I/O and CPU of course). The simplest solution
  is to bring in a stronger server until it gets "outgrown" as the loads
  grow and you need a more sophisticated solution.
 
 In a single box you will have contention for disk i/o, RAM, and CPU.
 You can avoid most of the disk contention (the biggest time issue)
 by putting the database on its own drive.  I've been running dual
 CPU machines, which seems to help with the perl execution although
 I haven't really done timing tests against a matching single
 CPU box.  RAM may be the real problem when trying to expand a
 Linux pentium box.
 
  My question is about the cost-effectiveness of adding another cheap PC vs.
  replacing it with a new, expensive machine. The question is: what are the
  immediate implications for performance (speed), since the 2 machines have
  to interact with each other, e.g. when setting mysql to run on one machine
  and leaving mod_perl/apache/squid on the other? Has anyone done that? 
 
 Yes, and a big advantage is that you can then add more web servers
 hitting the same database server.
 
  Most of my requests are served within 0.05-0.2 secs, but I'm afraid that
  adding a network (even a very fast one) to deliver mysql results will
  make the response time go up considerably, so I'll need more httpd
  processes and I'll be back in the original situation where I don't have
  enough resources. Hints?
 
 The network just has to match the load.  If you go to a switched 100M
 net you won't add much delay.  You'll want to run persistent DBI
 connections, of course, and do all you can with front-end proxies
 to keep the number of working mod_perl's as low as possible.
 
  I know that when you have a really big load you need to build a cluster of
  machines or the like, but when the requirement is in the middle - not too
  big, but not small either - it's a hard decision to make... especially when
  you don't have the funds :)
 
 The real killer time-wise is virtual memory paging to disk.  Try to 
 estimate how much RAM you are going to need at once for the mod_perl
 processes and the database and figure out whether it is cheaper to
 put it all in one box or two.  If you are just borderline on needing
 the 2nd box, you might try a different approach.  You can use a
 fairly cheap box as a server for images and static pages, and perhaps
 even your front-end proxy server as long as it is reliable.
 
   Les Mikesell
[EMAIL PROTECTED]
 

=-=-=-=-=-=-=-=-=-=-  My God!  What have I done?  -=-=-=-=-=-=-=-=-=-=
Steve Reppucci   [EMAIL PROTECTED] |
Logical Choice Software  http://logsoft.com/ |
508/958-0183 Be Open |



Re: Apache 1.3.9 + mod_perl 1.21 + Solaris 2.7 dumps core

2000-01-13 Thread Steve van der Burg

$ httpd -X
Segmentation Fault(coredump)

Ouch!

My config:

Apache 1.3.9
mod_perl 1.21
Solaris 2.7

$ cat makepl_args.modperl
APACHE_SRC=/home2/web/build/apache_1.3.9/src
EVERYTHING=1
USE_APXS=1
WITH_APXS=/home2/web/apache_1.3.9/bin/apxs

This is almost identical to my setup (same OS, architecture (sun4u), same apache and
mod_perl versions, same compiler), except I don't do APXS:

APACHE_SRC=../apache_1.3.9/src
DO_HTTPD=1
USE_APACI=1
PREP_HTTPD=1
EVERYTHING=1

$ perl -V
[ snipped ]

Here's mine, used in production for about 4 months now, under heavy usage.  Note that 
my compiler optimizations are different than yours.

Summary of my perl5 (5.0 patchlevel 5 subversion 3) configuration:
  Platform:
osname=solaris, osvers=2.7, archname=sun4-solaris
uname='sunos titan 5.7 generic sun4u sparc sunw,ultra-250 '
hint=previous, useposix=true, d_sigaction=define
usethreads=undef useperlio=undef d_sfio=undef
  Compiler:
cc='cc', optimize='-O -native', gccversion=
cppflags='-I/opt/include'
ccflags ='-I/opt/include'
stdchar='char', d_stdstdio=define, usevfork=false
intsize=4, longsize=4, ptrsize=4, doublesize=8
d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=16
alignbytes=8, usemymalloc=y, prototype=define
  Linker and Libraries:
ld='cc', ldflags ='-L/opt/lib'
libpth=/opt/lib /lib /usr/lib /usr/ccs/lib
libs=-lgdbm -lsocket -lnsl -ldl -lm -lc -lcrypt
libc=/lib/libc.so, so=so, useshrplib=false, libperl=libperl.a
  Dynamic Linking:
dlsrc=dl_dlopen.xs, dlext=so, d_dlsymun=undef, ccdlflags=' '
cccdlflags='-KPIC', lddlflags='-G -L/opt/lib'

Characteristics of this binary (from libperl):
  Built under solaris
  Compiled at Sep 17 1999 14:21:43
  @INC:
/opt/lib/perl5/5.00503/sun4-solaris
/opt/lib/perl5/5.00503
/opt/lib/perl5/site_perl/5.005/sun4-solaris
/opt/lib/perl5/site_perl/5.005
.



...Steve

-- 
Steve van der Burg
Information Services
London Health Sciences Centre
(519) 685-8300 ext 35559
[EMAIL PROTECTED]



Re: perl -V ??

2000-01-11 Thread Steve van der Burg

perl -V got me the following, BUT how do I tell if Perl modules: 
Digest::MD5, Crypt::DES, Crypt::CBC
are installed?

# perl -V
 [snipped]

You can poke around under the directories mentioned by @INC in the perl -V output, 
or you can do things like:

% perl -MDigest::MD5 -le 'print $Digest::MD5::VERSION'

for each module.
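
If you have a bunch of modules to check at once, a quick throwaway script
does the trick too (nothing official about this, just a sketch):

#!/usr/bin/perl
# Usage: perl checkmods.pl Digest::MD5 Crypt::DES Crypt::CBC
use strict;

for my $mod (@ARGV) {
    if (eval "require $mod") {
        no strict 'refs';
        print "$mod ", ${"${mod}::VERSION"} || '(no $VERSION set)', "\n";
    }
    else {
        print "$mod is not installed\n";
    }
}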

...Steve


-- 
Steve van der Burg
Information Services
London Health Sciences Centre
(519) 685-8300 ext 35559
[EMAIL PROTECTED]



Re: Reasons why DBI should fail w/mod_perl

1999-11-24 Thread Steve Willer


On Wed, 24 Nov 1999, Martin A. Langhoff wrote:
 Wow! 41 words and not a single colon|comma|period|semicolon  :)

Congrats. :-)

 Is there a list of possible reasons to explain why a DBI connect to
 a mysql server (apache and mysqld running on the same host) fails if
 called from mod_perl and succeeds from a standard CGI perl script?

Is this a reasonably busy site? How many Apache children are typically
running? Perhaps you're hitting a max-connections limit.

Do you get an error string when the connect fails?
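
If you don't, it's worth capturing it explicitly.  Something along these
lines (the DSN and credentials are obviously placeholders):

use DBI;

my $dbh = DBI->connect(
    "DBI:mysql:database=mydb;host=localhost",
    "someuser", "somepass",
    { PrintError => 0, RaiseError => 0 }
) or die "connect failed: $DBI::errstr\n";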



Re: still having errors on 'make test'

1999-11-10 Thread Steve Reppucci


I've seen this problem too.  There's a 'sleep 5' in the test harness,
apparently to wait for the server to start.  On a heavily loaded system,
this is too short.  Whenever I've encountered this, I change the sleep to
10 seconds and it works fine...
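
In other words, in the test harness (I don't recall the exact spot, and it
moves around between versions), the one-line change amounts to:

    sleep 10;   # was: sleep 5 -- give a loaded box more time to start httpd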

HTH,
Steve

On Mon, 8 Nov 1999, Scott R. Every wrote:

 Whenever I run make test:
 httpd listening on port 8529
 will write error_log to: t/logs/error_log
 letting apache warm up...\c
 Syntax error on line 3 of
 /data/test/ssl_apache/mod_perl-1.21/t/conf/httpd.conf:
 Invalid command '=pod', perhaps mis-spelled or defined by a module not
 included
 in the server configuration
 done
 /usr/bin/perl t/TEST 0
 still waiting for server to warm upnot ok
 server failed to start! at t/TEST line 95.
 make: *** [run_tests] Error 9
 
 Everything appears to compile correctly.
 
 Does anyone have any ideas what I'm doing wrong?
 
 tia
 
 s
 
 --
 Scott R. Every - mailto:[EMAIL PROTECTED]
 EMJ Internet - http://www.emji.net
 voice : 1-888-258-8959  fax : 1-919-363-4425
 

=-=-=-=-=-=-=-=-=-=-  My God!  What have I done?  -=-=-=-=-=-=-=-=-=-=
Steve Reppucci   [EMAIL PROTECTED] |
Logical Choice Software  http://logsoft.com/ |
508/958-0183 Be Open |



mod_perl with APXS plus Raven equals segfault

1999-10-18 Thread Steve Snodgrass

I've been using mod_perl with Raven's SSL package for some time now, but I'm
building a refresh of our environment with new versions of everything and I
ran into trouble.  I decided to use APXS this time instead of building
mod_perl statically.  Everything compiled and installed fine but Apache
immediately segfaults on startup.  The details:

Sun Ultra Enterprise 3500
Solaris 7 (HW 5/99)
Apache 1.3.9 (built from Raven pre-patched source)
Raven SSL 1.4.1
mod_perl 1.21
perl 5.005_03
gcc 2.95.1 (regular Solaris ld, GNU ld is not even on the system)

I guess I can go back to compiling mod_perl statically, but it would be nice
to get this fixed.  Any thoughts?  Thanks.

-- 
Steve "Pheran" Snodgrass * [EMAIL PROTECTED] * FORE Systems Unix Administrator
Geek Code: GCS d? s: a- C++ US$ P+++ L+ w PS+ 5++ b++ DI+ D++ e++ r++ y+*
"What to do I find it hard to know/The road I walk is not the one I chose" -Yes


