Re: does pnotes() work at all in 1.27? [RESOLVED]

2003-07-20 Thread Mark Maunder
An upgrade to 1.28 fixed this. Never found out what caused it under
1.27.

On Sat, 2003-07-19 at 13:20, Mark Maunder wrote:
 Hi. This is a rather comprehensive (read 'cathartic') message, so if you
 have something productive to go and do, then you'd probably be better
 off doing that. For all other interested parties, read on
 
 I've done a few more tests and isolated this to my production server
 only. pnotes() works fine on my workstation. And the problems described
 below apply to notes() too. So it's something to do with my
 apache/mod_perl installation. Take a look at this:
 
 I've cut down the production server to a bare bones httpd.conf with the
 following virtual hosts section:
 <VirtualHost *>
 DocumentRoot /home/mark/lib
 ServerName localhost
 <Location />
 PerlFixupHandler Handler1
 PerlHandler Handler2
 PerlLogHandler Handler3
 </Location>
 </VirtualHost>
 
 my startup.pl looks like this:
 #!/usr/bin/perl
 require '/home/mark/lib/Test.pl';
 1;
 
 And Test.pl looks like this:
 package Handler1;
 sub handler {
 my $r = shift @_;
 $r->pnotes('note1', 'msg1');
 warn "HANDLER1 says: " . $r->pnotes('note1');
 return OK;
 }
 package Handler2;
 sub handler {
 my $r = shift @_;
 $r->pnotes('note2', 'msg2');
 warn "HANDLER2 says: " . $r->pnotes('note2');
 warn "HANDLER2 got: " . $r->pnotes('note1');
 $r->send_http_header('text/html');
 $r->print("Hello World!\n");
 return OK;
 }
 package Handler3;
 sub handler {
 my $r = shift @_;
 warn "HANDLER3 got: " . $r->pnotes('note1') .
 ' and ' . $r->pnotes('note2');
 return OK;
 }
 1;
 
 This gives an output in error_log of the following:
 HANDLER1 says: msg1 at /home/mark/lib/Test.pl line 5.
 HANDLER1 says: msg1 at /home/mark/lib/Test.pl line 5.
 HANDLER1 says: msg1 at /home/mark/lib/Test.pl line 5.
 HANDLER2 says: msg2 at /home/mark/lib/Test.pl line 12.
 HANDLER2 got: msg1 at /home/mark/lib/Test.pl line 13.
 HANDLER3 got: msg1 and  at /home/mark/lib/Test.pl line 21.
 
 Which shows that pnotes can pass data from the Fixup handler to the
 Response handler, but anything the Response handler sets is lost by the
 time the Logging handler is called. Although the data that the Fixup
 handler sets is still there. 
 
 Question: Why is the Fixup handler being called 3 times? If you look at
 the sniffer output I've included, you'll see there's a single request
 and response. I checked the URI that was being called and it's '/' in
 all three cases.
 
 Just to be sure, I added this to Handler2 (the Response handler):
 if ($r->is_main())
 {
     $r->print('You are in main');
 }
 And it prints out the string. So it is the main request.
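
 One way to narrow down the repeated Fixup calls is to log more request
 context from inside Handler1. A small diagnostic sketch (not part of the
 original test; is_initial_req(), main() and prev() are standard mod_perl 1
 request methods that distinguish sub-requests and internal redirects from
 the first pass over the main request):

 package Handler1;
 use Apache::Constants qw( :common );
 sub handler {
     my $r = shift @_;
     # Log enough context to tell the three invocations apart.
     warn sprintf("FIXUP uri=%s initial=%d subreq=%d redirect=%d",
         $r->uri,
         $r->is_initial_req ? 1 : 0,
         $r->main ? 1 : 0,    # true only for sub-requests
         $r->prev ? 1 : 0);   # true only after an internal redirect
     $r->pnotes('note1', 'msg1');
     return OK;
 }
 1;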
 
 Here is some more info:
 When I did this test, I stripped out everything from httpd.conf (relying
 heavily on vim's undo feature because this server will be live in 48
 hours, pnotes or no pnotes!).
 
 Here is the output from httpd -l
 Compiled-in modules:
   http_core.c
   mod_env.c
   mod_log_config.c
   mod_mime.c
   mod_negotiation.c
   mod_status.c
   mod_include.c
   mod_autoindex.c
   mod_dir.c
   mod_cgi.c
   mod_asis.c
   mod_imap.c
   mod_actions.c
   mod_userdir.c
   mod_alias.c
   mod_rewrite.c
   mod_access.c
   mod_auth.c
   mod_proxy.c
   mod_headers.c
   mod_setenvif.c
   mod_gzip.c
   mod_perl.c
 suexec: disabled; invalid wrapper /usr/local/apache/bin/suexec
 
 Here is the output from httpd -V:
 Server version: ZipTree (Unix)
 Server built:   Jul  8 2003 12:56:03
 Server's Module Magic Number: 19990320:13
 Server compiled with
  -D HAVE_MMAP
  -D HAVE_SHMGET
  -D USE_SHMGET_SCOREBOARD
  -D USE_MMAP_FILES
  -D HAVE_FCNTL_SERIALIZED_ACCEPT
  -D HAVE_SYSVSEM_SERIALIZED_ACCEPT
  -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT
  -D HARD_SERVER_LIMIT=256
  -D HTTPD_ROOT=/usr/local/apache
  -D SUEXEC_BIN=/usr/local/apache/bin/suexec
  -D DEFAULT_PIDLOG=logs/httpd.pid
  -D DEFAULT_SCOREBOARD=logs/apache_runtime_status
  -D DEFAULT_LOCKFILE=logs/accept.lock
  -D DEFAULT_ERRORLOG=logs/error_log
  -D TYPES_CONFIG_FILE=conf/mime.types
  -D SERVER_CONFIG_FILE=conf/httpd.conf
  -D ACCESS_CONFIG_FILE=conf/access.conf
  -D RESOURCE_CONFIG_FILE=conf/srm.conf
 
 Got a sniffer on the wire too and the output looks like this. The
 request is:
 GET / HTTP/1.1
 Host: testserver
 User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.0.1)
 Gecko/20021003
 Accept:
 text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,video/x-mng,image/png,image/jpeg,image/gif;q=0.2,text/css,*/*;q=0.1
 Accept-Language: en-us, en;q=0.50
 Accept-Encoding: gzip, deflate, compress;q=0.9
 Accept-Charset: ISO-8859-1, utf-8;q=0.66, *;q=0.66
 Keep-Alive: 300
 Connection: keep-alive
 Cache-Control: max-age=0
 
 
 And the response is:
 HTTP/1.1 200 OK
 Date: Sat, 19 Jul 2003 20:17:05 GMT
 Server: ZipTree
 Keep-Alive: timeout=15, max=100
 Connection: Keep-Alive
 Transfer-Encoding

Re: does pnotes() work at all in 1.27?

2003-07-19 Thread Mark Maunder
Hi Stas,

Thanks for the input. Tried that and no luck. I tried using ->instance()
instead of ->new() in both handlers, and it didn't work. Just for kicks
I tried a few combinations of new() and instance(), and no go there
either. I also checked that I had the main request using is_main, just to
be safe, after retrieving the existing Apache::Request instance.

I'm going to upgrade to 1.28 in a couple of days, so hopefully that'll
solve it. Weird that no one seems to have reported this. The server I'm
using is a stock RH 8.0 server - I don't run Red Carpet or anything like
that, and I've only upgraded the minimum to keep it secure. I'm no C
developer, just a Perl geek, but I was wondering where pnotes() stores
its data. In an Apache::Table object? Is there a way for me to manually
access the store somehow at various phases to figure out where the data
gets deleted? Any suggestions would help.
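
As far as I know, notes() wraps Apache's C notes table (strings only),
while pnotes() lives in a per-request Perl hash that mod_perl clears at
request cleanup time, so there is no table file to poke at directly. The
simplest way to watch the store is a tiny extra handler installed at each
phase you care about; a sketch ('marktest' is just the key from the
earlier test):

package DumpNotes;
use Apache::Constants qw( :common );

sub handler {
    my $r = shift @_;
    # Report whether the key is visible at this phase.
    warn sprintf("phase check: pnotes(marktest)=%s notes(marktest)=%s",
        defined $r->pnotes('marktest') ? $r->pnotes('marktest') : '(undef)',
        defined $r->notes('marktest')  ? $r->notes('marktest')  : '(undef)');
    return OK;
}
1;

Installing it as, say, an extra PerlFixupHandler and an extra
PerlLogHandler (handlers can be stacked alongside the real ones when
mod_perl is built with EVERYTHING=1) would show exactly which phase loses
the data.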

Mark.

On Fri, 2003-07-18 at 23:40, Stas Bekman wrote:
 James Hartling wrote:
  I use pnotes all over the place in 1.27, and haven't noticed any problems.
  I just stepped through some code and everything looks good between the Init
  phase and the content handling phase.
  
  I'm using Apache::Request->instance everywhere so I'm dealing with the same
  request object, but even if you're using Apache::Request->new I'd still
  expect that to work.
 
 Apache::Request->instance is probably the magic pill for Mark ;)
 
 __
 Stas BekmanJAm_pH -- Just Another mod_perl Hacker
 http://stason.org/ mod_perl Guide --- http://perl.apache.org
 mailto:[EMAIL PROTECTED] http://use.perl.org http://apacheweek.com
 http://modperlbook.org http://apache.org   http://ticketmaster.com
-- 
Mark Maunder [EMAIL PROTECTED]
ZipTree Inc.



Re: does pnotes() work at all in 1.27?

2003-07-19 Thread Mark Maunder
).   Mark, have you tested $r->notes, and is that working?  I notice
 that your 'marktest' pnote test used a string value.  Perhaps pnotes insists
 on storing refs, not strings - I've never tried it though.  You might also
 start farther back in the request cycle and see if pnotes are being passed
 along at any stage (Init, Access, Fixup, etc).
 
 Jim
 
 - Original Message -
 From: Stas Bekman [EMAIL PROTECTED]
 To: Mark Maunder [EMAIL PROTECTED]
 Cc: James Hartling [EMAIL PROTECTED]; [EMAIL PROTECTED]
 Sent: Saturday, July 19, 2003 4:55 AM
 Subject: Re: does pnotes() work at all in 1.27?
 
 
  Mark Maunder wrote:
   Hi Stas,
  
   Thanks for the input. Tried that and no luck. I tried using ->instance()
   instead of ->new() in both handlers, and it didn't work. Just for kicks
   I tried a few combinations of new() and instance(), and no go there
   either. I also checked that I had the main request using is_main, just to
   be safe, after retrieving the existing Apache::Request instance.
 
  What happens if you remove Apache::Request altogether and try 2 simple
  handlers, e.g. response handler and logging handler. Does that setup work
  for you? If it does, it has something to do with Apache::Request and not
  mod_perl per se.
 
   I'm going to upgrade to 1.28 in a couple of days, so hopefully that'll
   solve it. Weird that no one seems to have reported this. The server I'm
   using is a stock RH 8.0 server - I don't run Red Carpet or anything like
   that, and I've only upgraded the minimum to keep it secure. I'm no C
   developer, just a Perl geek, but I was wondering where pnotes() stores
   its data. In an Apache::Table object? Is there a way for me to manually
   access the store somehow at various phases to figure out where the data
   gets deleted? Any suggestions would help.
 
  You can debug on the C level.
 
 
  __
  Stas BekmanJAm_pH -- Just Another mod_perl Hacker
  http://stason.org/ mod_perl Guide --- http://perl.apache.org
  mailto:[EMAIL PROTECTED] http://use.perl.org http://apacheweek.com
  http://modperlbook.org http://apache.org   http://ticketmaster.com
 
-- 
Mark Maunder [EMAIL PROTECTED]
ZipTree Inc.



Re: does pnotes() work at all in 1.27?

2003-07-18 Thread Mark Maunder
Hi Perrin, thanks for the reply.

No progress yet. I just tested pnotes in the same handler and it works.
Tested it again by setting a value in the content handler and trying to
retrieve it in my logging handler, and no luck.

# The line in my content handler is:
$sess->get_r()->pnotes('marktest', 'anotherpnotestest');
warn "PNOTES: " . $sess->get_r()->pnotes('marktest')
    if ($sess->get_r()->is_main());
# $sess is my session object where I store $r and my $dbh etc.

# And the one in my logging phase handler is:
warn "PNOTES2: " . $r->pnotes('marktest') if ($r->is_main());

This prints out the following:
PNOTES: anotherpnotestest at /usr/local/ziptree/lib/ZT/ViewItem.pm line
16.
PNOTES2:  at /usr/local/ziptree/lib/ZT/Logger.pm line 11.

I'm using Apache::Request in the content handler, but I've tried it
using the standard Apache->request object in both handlers and still no
luck.

Thanks,

Mark.

On Fri, 2003-07-18 at 10:09, Perrin Harkins wrote:
 On Thu, 2003-07-17 at 16:51, Mark Maunder wrote:
   And then install those as a content and logging phase handler. If you
   have the time and the interest. I've tried this and the logging handler
   comes up with nothing in pnotes. I've also checked that it's not a sub
   request.
 
 Did you get any further with this?  I've never heard of any problems
 with pnotes(), but I also don't have a 1.27 installed to check it with. 
 Does it work if you just set and read a note in the same handler?
 
 - Perrin
-- 
Mark Maunder [EMAIL PROTECTED]
ZipTree Inc.



Re: templating system opinions

2003-07-18 Thread Mark Maunder
Hey Peter,

Template Toolkit rocks! (Sorry about the overt glee, but I am just
finishing a project where it has been very good to me.) Besides the
complete separation it gives you between presentation and back-end
coding, it's super fast. I benchmarked a 2GHz server with 256 MB of
RAM using ab (Apache bench), with around 10 concurrent requests and a
total of 10,000 requests, and it handled over 40 hits per second
on our most dynamic page, which has lots of conditionals and loops and
even makes a few function calls like [% IF sess.is_logged_in %],
where 'sess' is a Perl object. NOTE: Make sure you cache your template
object in a package global or something like that, or you'll lose
performance.
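
For reference, "caching the template object" just means building it once
per child process and reusing it, e.g. by keeping it in a lexical or
package variable that outlives the request. A minimal sketch (the package
name, include path and template file are made up):

package My::TTHandler;
use strict;
use Apache::Constants qw( :common );
use Template;

# Created once when the module is loaded, then reused for every request
# served by this child, so compiled templates stay cached in memory.
my $tt = Template->new({
    INCLUDE_PATH => '/home/mark/templates',   # illustrative path
    COMPILE_EXT  => '.ttc',                   # also cache compiled templates on disk
});

sub handler {
    my $r = shift;
    my $output = '';
    $tt->process('page.tt', { uri => $r->uri }, \$output)
        or do { warn $tt->error; return SERVER_ERROR };
    $r->send_http_header('text/html');
    $r->print($output);
    return OK;
}
1;

Loading the module from startup.pl also lets the Apache children share the
preloaded code copy-on-write.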

I've written a couple of workable templating systems myself with good
old $html =~ s///egs and a content handler (a Perl developer's rite of
passage, don't ya know), and I wouldn't recommend it because you end up
with something non-standard and are basically re-inventing Template
Toolkit, which seems to have become the standard in templating over the
last couple of years.

Old, but still useful benchmarks if you're interested:
http://www.chamas.com/bench/

mark.

On Fri, 2003-07-18 at 13:26, Ken Y. Clark wrote:
 On Fri, 18 Jul 2003, Patrick Galbraith wrote:
 
  Date: Fri, 18 Jul 2003 14:25:32 -0700 (PDT)
  From: Patrick Galbraith [EMAIL PROTECTED]
  To: [EMAIL PROTECTED]
  Subject: templating system opinions
 
  Hi there,
 
  Just wondering what the best templating system is to use and/or learn.
 
  I've briefly read up on the pros and cons of each, and am just wondering
  which one is the most widely _used_ and best to learn if you're wanting to
  know something that there are jobs for.
 
  thanks ;)
 
 Search the guide:
 
 http://perl.apache.org/search/swish.cgi?query=templatesbm=submit=search
 
 ky
-- 
Mark Maunder [EMAIL PROTECTED]
ZipTree Inc.



Re: pnotes and notes not working from Apache::Registry to handler

2003-07-17 Thread Mark Maunder
thanks. :)

I've ploughed through the manual and I'm pretty sure I've been trying
both notes and pnotes correctly. There's also a fairly common mistake of
assuming you're dealing with the initial request when, in fact,
Apache->request is a sub-request and you need to use $r->main to
retrieve the main request to access your pnotes. I'm not making that
mistake either.
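
For anyone who does hit that trap, the check is a one-liner (a sketch):

my $r = Apache->request;
# main() returns the initial request for a sub-request, undef otherwise.
$r = $r->main if $r->main;
my $value = $r->pnotes('keyname');   # read from the main request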

The last option you mention about mod_perl compilation options: I've
compiled with EVERYTHING=1 and I even grep'd the apache and mod_perl
source just now for 'pnotes' to see if I could spot anything. Nothing
popped out at me. 

I'll try to recompile with
PLUS_THAT_OTHER_LITTLE_THING_NOT_INCLUDED_IN_EVERYTHING=1 now and see if
I have better luck ;)

On Wed, 2003-07-16 at 22:58, Dennis Stout wrote:
  I'm trying to store data about a user who has authenticated in
  $r->pnotes so that a Perl logging phase handler can stick the user_id in
  the db. I call $r->pnotes('keyname' => 'somevalue'); in an Apache::Registry
  script, and then call $r->pnotes('keyname') in the logging handler later
  on, during the logging phase, but am getting nothing back. No errors, just
  undef. I've tried notes too, and no luck there either. I'm using
  Apache::Request, btw. I've also tried retrieving a tied hash using
  $r->pnotes() and there are no key/values in that either.
 
 the mod_perl API book specifically said pnotes is the way to communicate
 between handlers.  As I have the PDF version, I can't exactly cut & paste it
 easily...
 
 pnotes gets cleared after every request, so good thinking on trying notes, as
 it apparently doesn't.
 
 the basic usage is this:
 
 $r->pnotes(MY_HANDLER => [qw(one two)]);
 my $val = $r->pnotes(MY_HANDLER);
 print $val->[0]; # prints "one"
 
 So basically, $r->pnotes(MY_HANDLER => [qw(one two)]); will create a hash
 where MY_HANDLER is a key to an anonymous array.
 
 my $val = $r->pnotes(MY_HANDLER); sets $val to be the reference to that
 array.
 
 print $val->[0]; dereferences the first spot in the array reference.  The
 dereferencing thing is key here.  $val[0] will throw errors about globals not
 being declared as arrays or something of that sort.
 
 
  Did I forget to compile apache or mod_perl with an option of some sort?
  I can't think of any other explanation. I compiled mod_perl with
  EVERYTHING=1
 
 There is the problem right there.  It needs to be compiled with EVERYTHING=1
 PLUS_THAT_OTHER_LITTLE_THING_NOT_INCLUDED_IN_EVERYTHING=1.
 
 :P
 
 Dennis
-- 
Mark Maunder [EMAIL PROTECTED]
ZipTree Inc.



Re: Double erroneous requests in POST with multipart/form-data

2003-07-17 Thread Mark Maunder
Thanks for all the feedback. This problem is strange in that I haven't
been able to duplicate it. IE got itself into a weird situation where I
would hit Reload (and press OK to confirm a re-POST) and the problem
would consistently occur. Once I took IE out of that loop, I couldn't
duplicate it.

The thing that bugged me is that when I googled it I found a few other
people who had the same problem with POSTed multipart data. So I suspect
it's just some IE strangeness that causes the 'boundary' spec in the
header to be malformed in some circumstances and makes Apache think
it's two pipelined requests, the second of which is asking for '/'.

I'll re-post if I come up with anything useful or am able to reproduce
this.

On Thu, 2003-07-17 at 04:39, Stas Bekman wrote:
 Mark Maunder wrote:
  I'm running all scripts under Apache::Registry and using Apache::Request
  because I'm handling file uploads. Sorry, should have included that. 
  
  I did test this: I modified the Apache::Registry script that was being
  posted to so that it didn't create an Apache::Registry request object,
  but simply did a print "Content-type: text/html\n\nTesting123\n"; And I
  got the same double request problem.  So it seems that it's Apache not
  liking that particular request for some reason. 
  
  Here's something weird. I telnetted to my server's port 80 and pasted
  the request, and it didn't reproduce the problem. 
 
 Try using the command line user agent, e.g. lwp, lynx or else, where you can 
 control the involvement of the user agent. For example you can see response 
 headers, which probably aren't available under IE.
 
 [...]
 
 __
 Stas BekmanJAm_pH -- Just Another mod_perl Hacker
 http://stason.org/ mod_perl Guide --- http://perl.apache.org
 mailto:[EMAIL PROTECTED] http://use.perl.org http://apacheweek.com
 http://modperlbook.org http://apache.org   http://ticketmaster.com
-- 
Mark Maunder [EMAIL PROTECTED]
ZipTree Inc.



does pnotes() work at all in 1.27?

2003-07-17 Thread Mark Maunder
Sorry about the repost, but this is driving me nuts.

Has anyone gotten $r->pnotes() to work under Apache 1.3.27 and mod_perl
1.27? A simple "yes" will do, because then at least I'll know if it's my
mistake or a bug. 

It's this posting that makes me think it's a bug:
http://groups.yahoo.com/group/modperl/message/45472





Re: does pnotes() work at all in 1.27?

2003-07-17 Thread Mark Maunder
Thanks - it would be helpful if you could try to use pnotes to communicate
between two mod_perl handlers. Just some really basic code like:
package MyContentHandler;
use Apache::Constants qw( :common );
sub handler
{
  my $r = shift @_;
  $r->pnotes('mykey', 'A regular string scalar');
  $r->send_header('text/html');
  print "Hi there. This is a regular content handler.";
  return OK;
}
1;

and 
package MyLoggingHandler;
use Apache::Constants qw( :common );
sub handler
{
  my $r = shift @_;
  # Do some logging
  warn "Value of pnotes is: " . $r->pnotes('mykey');
  return OK;
}
1;

And then install those as a content and logging phase handler, if you
have the time and the interest. I've tried this and the logging handler
comes up with nothing in pnotes. I've also checked that it's not a
sub-request. Thanks :)

On Thu, 2003-07-17 at 12:44, Dennis Stout wrote:
  Has anyone gotten $r->pnotes() to work under Apache 1.3.27 and mod_perl
  1.27? A simple "yes" will do, because then at least I'll know if it's my
  mistake or a bug.
 
 I'll work on it when I get home again.  weee, thursday, gotta have this
 project done monday..  *sigh*
 
 The good news, is that I run Apache 1.3.27 and mod_perl 1.27.
 
 Anyways, thought you might like to know I'll work on it and someone out there
 HAS read your email :)
 
 Dennis
-- 
Mark Maunder [EMAIL PROTECTED]
ZipTree Inc.



Re: does pnotes() work at all in 1.27?

2003-07-17 Thread Mark Maunder
(That's supposed to be send_http_header() - and there are probably a few
other errors in there. :)

On Thu, 2003-07-17 at 13:50, Mark Maunder wrote:
 Thanks - it would be helpful if you could try to use pnotes to communicate
 between two mod_perl handlers. Just some really basic code like:
 package MyContentHandler;
 use Apache::Constants qw( :common );
 sub handler
 {
   my $r = shift @_;
   $r->pnotes('mykey', 'A regular string scalar');
   $r->send_header('text/html');
   print "Hi there. This is a regular content handler.";
   return OK;
 }
 1;
 
 and 
 package MyLoggingHandler;
 use Apache::Constants qw( :common );
 sub handler
 {
   my $r = shift @_;
   # Do some logging
   warn "Value of pnotes is: " . $r->pnotes('mykey');
   return OK;
 }
 1;
 
 And then install those as a content and logging phase handler, if you
 have the time and the interest. I've tried this and the logging handler
 comes up with nothing in pnotes. I've also checked that it's not a
 sub-request. Thanks :)
 
 On Thu, 2003-07-17 at 12:44, Dennis Stout wrote:
   Has anyone gotten $r->pnotes() to work under Apache 1.3.27 and mod_perl
   1.27? A simple "yes" will do, because then at least I'll know if it's my
   mistake or a bug.
  
  I'll work on it when I get home again.  weee, thursday, gotta have this
  project done monday..  *sigh*
  
  The good news, is that I run Apache 1.3.27 and mod_perl 1.27.
  
  Anyways, thought you might like to know I'll work on it and someone out there
  HAS read your email :)
  
  Dennis
-- 
Mark Maunder [EMAIL PROTECTED]
ZipTree Inc.



Double erroneous requests in POST with multipart/form-data

2003-07-16 Thread Mark Maunder
This has got me stumped, any help is much appreciated:

I'm using IE6 and mod_perl 1.27 with apache 1.3.27. I have mod_rewrite
and mod_proxy and mod_gzip compiled into the server, but have now
disabled all of them until I sort this problem out. IE generates a
request whose headers look like this from a sniffer's point of view:

POST /e/myg HTTP/1.1
Accept: */*
Referer: http://ziptree.com/e/myg
Accept-Language: en-us
Content-Type: multipart/form-data;
boundary=---7d31a435d08
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows 98)
Host: ziptree.com
Content-Length: 797
Connection: Keep-Alive
Cache-Control: no-cache
Cookie: ztid=52616e646f6d4956aca247f49143acab646412868d6eda23;
ztid=52616e646f6d495616e14f825d3799273ac52995e708d08b


It's only generating one request - I've double checked that. But in my
access log I see this:
68.5.106.9 - - [16/Jul/2003:14:37:51 -0500] "POST /e/myg HTTP/1.1" 200
16
68.5.106.9 - - [16/Jul/2003:14:37:51 -0500]
"-7d31a435d08" 501 -

(The two lines above have probably been split by your mail reader, but
they both start with the ip 68.5...)

Also, intermittently I get "Invalid method in request" reported in the
error_log like this:
[Wed Jul 16 14:37:51 2003] [error] [client 68.5.106.9] Invalid method in
request -7d31a435d08

It looks like Apache is getting confused by the boundary data and thinks
it's another request. It's occurred to me that this could be a bug in IE
incorrectly specifying the boundary?

One of the unpleasant side effects of this is that my user loses their
session because what Apache considers the first 'request' does not
contain a cookie, so we just issue a fresh session ID which overwrites
the previous one.

I found these in the list archives, but no replies to either.
http://groups.yahoo.com/group/modperl/message/34118
http://groups.yahoo.com/group/modperl/message/52778

-- 
Mark Maunder [EMAIL PROTECTED]




Re: Double erroneous requests in POST with multipart/form-data

2003-07-16 Thread Mark Maunder
I'm running all scripts under Apache::Registry and using Apache::Request
because I'm handling file uploads. Sorry, should have included that. 

I did test this: I modified the Apache::Registry script that was being
posted to so that it didn't create an Apache::Registry request object,
but simply did a print "Content-type: text/html\n\nTesting123\n"; And I
got the same double request problem.  So it seems that it's Apache not
liking that particular request for some reason. 

Here's something weird. I telnetted to my server's port 80 and pasted
the request, and it didn't reproduce the problem. 

Also, this doesn't happen on every POST to that script. Just that
particular one. So I kept hitting Reload and got prompted by IE whether
I wanted to retry the POST (in less technical terms) and said yes. And
every time it would kick out the errors described. Then when I left that
page and went back in, everything was fine. 

It's one of those toughies that is hard to reproduce, but my gut feel
says it's going to come up again.

On Wed, 2003-07-16 at 13:18, David Dick wrote:
 What are you using to parse the request? CGI.pm?
 
 Mark Maunder wrote:
 
 This has got me stumped, any help is much appreciated:
 
 I'm using IE6 and mod_perl 1.27 with apache 1.3.27. I have mod_rewrite
 and mod_proxy and mod_gzip compiled into the server, but have now
 disabled all of them until I sort this problem out. IE generates a
 request whose headers look like this from a sniffer's point of view:
 
 POST /e/myg HTTP/1.1
 Accept: */*
 Referer: http://ziptree.com/e/myg
 Accept-Language: en-us
 Content-Type: multipart/form-data;
 boundary=---7d31a435d08
 Accept-Encoding: gzip, deflate
 User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows 98)
 Host: ziptree.com
 Content-Length: 797
 Connection: Keep-Alive
 Cache-Control: no-cache
 Cookie: ztid=52616e646f6d4956aca247f49143acab646412868d6eda23;
 ztid=52616e646f6d495616e14f825d3799273ac52995e708d08b
 
 
 It's only generating one request - I've double checked that. But in my
 access log I see this:
 68.5.106.9 - - [16/Jul/2003:14:37:51 -0500] "POST /e/myg HTTP/1.1" 200
 16
 68.5.106.9 - - [16/Jul/2003:14:37:51 -0500]
 "-7d31a435d08" 501 -
 
 (The two lines above have probably been split by your mail reader, but
 they both start with the ip 68.5...)
 
 Also, intermittently I get "Invalid method in request" reported in the
 error_log like this:
 [Wed Jul 16 14:37:51 2003] [error] [client 68.5.106.9] Invalid method in
 request -7d31a435d08
 
 It looks like Apache is getting confused by the boundary data and thinks
 it's another request. It's occurred to me that this could be a bug in IE
 incorrectly specifying the boundary?
 
 One of the unpleasant side effects of this is that my user loses their
 session because what Apache considers the first 'request' does not
 contain a cookie, so we just issue a fresh session ID which overwrites
 the previous one.
 
 I found these in the list archives, but no replies to either.
 http://groups.yahoo.com/group/modperl/message/34118
 http://groups.yahoo.com/group/modperl/message/52778
 
   
 
-- 
Mark Maunder [EMAIL PROTECTED]
ZipTree Inc.



Re: cookies

2003-07-16 Thread Mark Maunder
Forgot to include the list.

-Forwarded Message-
 From: Mark Maunder [EMAIL PROTECTED]
 To: Dennis Stout [EMAIL PROTECTED]
 Subject: Re: cookies
 Date: 16 Jul 2003 14:19:27 -0700
 
 Hi Dennis,
 
 One possibility: check the -path option. It's supposed to be set to '/'
 by default if you don't specify it, but it isn't. I discovered this
 about 20 minutes ago with a similar bug. So manually specify something
 like:
 my $cookie = Apache::Cookie->new($r,
     -name    => 'cookiename',
     -value   => 'someval',
     -expires => '+7d',
     -domain  => '.dontvisitus.org',
     -path    => '/',
 );
 
 CGI::Cookie works the same in case that's what you're using. If you have
 Mozilla, go to Preferences/Privacy/Cookies, run cookie manager and check
 the path that's being set. That's how I discovered this. 
 
 Hope that helps.
 
 Mark.
 
 On Wed, 2003-07-16 at 14:13, Dennis Stout wrote:
  Okay, so technically this isn't really mod_perl specific...  but the cookie
  is being set with mod_perl and it's a huge mod_perl program being affected by
  this :)
  
  I have a cookie, the domain is set to .stout.dyndns.org (with the leading .).
  
  I set the cookie just fine now (thanks to those helping me on that)
  
  I had a problem parsing the cookie.  Added some debugging (okay, warn lines up
  the yingyang) and after cycling through the headers and warning them out to
  the errorlog...  I never saw any cookie info.
  
  So... If the website is ttms.stout.dyndns.org shouldn't the cookie domain be
  .stout.dyndns.org?
  
  *sigh*  6 more days to finish this database.  I doubt I'll make it.
  
  Dennis
 -- 
 Mark Maunder [EMAIL PROTECTED]
 ZipTree Inc.
-- 
Mark Maunder [EMAIL PROTECTED]
ZipTree Inc.



pnotes and notes not working from Apache::Registry to handler

2003-07-16 Thread Mark Maunder
Hi,

I'm trying to store data about a user who has authenticated in
$r->pnotes so that a Perl logging phase handler can stick the user_id in
the db. I call $r->pnotes('keyname' => 'somevalue'); in an Apache::Registry
script, and then call $r->pnotes('keyname') in the logging handler later
on, during the logging phase, but am getting nothing back. No errors, just
undef. I've tried notes too, and no luck there either. I'm using
Apache::Request, btw. I've also tried retrieving a tied hash using
$r->pnotes() and there are no key/values in that either.

Is it possible to use pnotes to pass data from an Apache::Registry
script to a handler? Perhaps that's the problem - I didn't find anything
that said otherwise.
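
Passing data from a Registry script to a later phase is exactly what
pnotes is meant for, as far as I know: the script runs during the content
phase and shares the request object with the later phases. The two ends
would look roughly like this (a sketch; the logger package name is made
up):

# In the Apache::Registry script (content phase):
my $r = Apache->request;
$r->pnotes('keyname' => 'somevalue');

# In the logging phase handler:
package My::Logger;
use Apache::Constants qw( :common );
sub handler {
    my $r = shift;
    my $user_id = $r->pnotes('keyname');   # expected to be 'somevalue'
    # ... write $user_id to the db here ...
    return OK;
}
1;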

Did I forget to compile apache or mod_perl with an option of some sort?
I can't think of any other explanation. I compiled mod_perl with
EVERYTHING=1

Thanks,

Mark.




Re: pnotes and notes not working from Apache::Registry to handler

2003-07-16 Thread Mark Maunder
Found this piece of info in the archives. I'm also running 1.27. Is this
a known bug?

http://groups.yahoo.com/group/modperl/message/45472
*snip*
Subject:  notes/pnotes broke between 1.25 => 1.27


So I got the advisory about the Apache servers having a security hole, so I
decided to upgrade some servers. I've been on v1.25 for a while, so I decided
to upgrade to 1.27 while I was at it... big mistake.

NONE of my notes/pnotes were getting through on the new version.
*snip*

On Wed, 2003-07-16 at 19:37, Mark Maunder wrote:
 Hi,
 
 I'm trying to store data about a user who has authenticated in
 $r->pnotes so that a Perl logging phase handler can stick the user_id in
 the db. I call $r->pnotes('keyname' => 'somevalue'); in an Apache::Registry
 script, and then call $r->pnotes('keyname') in the logging handler later
 on, during the logging phase, but am getting nothing back. No errors, just
 undef. I've tried notes too, and no luck there either. I'm using
 Apache::Request, btw. I've also tried retrieving a tied hash using
 $r->pnotes() and there are no key/values in that either.
 
 Is it possible to use pnotes to pass data from an Apache::Registry
 script to a handler? Perhaps that's the problem - I didn't find anything
 that said otherwise.
 
 Did I forget to compile apache or mod_perl with an option of some sort?
 I can't think of any other explanation. I compiled mod_perl with
 EVERYTHING=1
 
 Thanks,
 
 Mark.
-- 
Mark Maunder [EMAIL PROTECTED]
ZipTree Inc.



Re: cookies

2003-07-16 Thread Mark Maunder
From perldoc CGI::Cookie
# fetch existing cookies
%cookies = fetch CGI::Cookie;
$id = $cookies{'ID'}->value;
# You're doing $cookies->value;

ID == the name that you used when you set the cookie.
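
Since Dennis is actually using Apache::Cookie rather than CGI::Cookie, the
equivalent fix looks roughly like this (a sketch; 'ttms_user' is the cookie
name from his code, and fetch() returning name => object pairs is the
libapreq 1.x behaviour as I understand it):

my %cookies = Apache::Cookie->fetch;          # name => Apache::Cookie object
if (my $c = $cookies{ttms_user}) {
    warn "z - " . $c->value;                  # the actual cookie value
} else {
    warn "z - no ttms_user cookie sent";
}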

On Wed, 2003-07-16 at 21:27, Dennis Stout wrote:
 *pounds head against brick wall*  why must it work against me???
 
 A cookie for anyone who solves this.
 
 sub handler {
 my $r = shift;
 my $result = undef;
 
 eval { $result = inner_handler($r) };
 return $result unless $@;
 
 warn "Uncaught Exception: $@";
 
 return SERVER_ERROR;
 }
 
 sub inner_handler {
 my $r = shift;
 
 my %q = ($r->args, $r->content);
 my %state = (r => $r, q => \%q);
 
 $state{title} = '';
 $state{template} = '';
 $state{auth_status} = password_boxes(\%state);
 
 #   warn "%ENV: \n";
 #   foreach (keys %ENV) {
 #   warn "$_ = $ENV{$_}\n";
 #   }
 #   my %headers = $r->headers_in;
 #   warn "Headers: \n";
 #   foreach (keys %headers) {
 #   warn "$_: $headers{$_}\n";
 #   }
 my $cookie = Apache::Cookie->fetch;
 warn "z - $cookie->value";
 validate_auth_cookie(\%state, $cookie);
 
 my $function = $r->uri;
 if (($state{login_user} eq '') and ($function ne '/login.cgi')) {
 $function = '/login.html';
 }
 my $func = $Dispatch{$function} || $Dispatch{DEFAULT};
 
 return DECLINED unless $func;
 return $func->(\%state);
 }
 
 Upon accessing a page (therefore generating lots of warning info in logs...) I
 get this in my error log.
 
 z - HASH(0x916ea08)->value at /home/httpd/ttms/perl/RequestHandler.pm line
 108.
 
 (the z is there so I know where at in my code the line in the log file is
 being generated.  I like z's and a's more than I do
 some/long/path/and/filename line 108)
 
 I have tried using $cookie as a value in and of itself, I've tried
 $cookie->{ttms_user}  (the name of the cookie is ttms_user), I've tried
 changing $cookie to %cookie and doing a $cookie{ttms_user} ..
 
 I might break down, declare this a bug, and use $ENV{HTTP_COOKIE} instead.
 
 Any ideas how to fix this to return to me the cookie itself?  Thanks.
 
 Dennis
 
 - Original Message - 
 From: Dennis Stout [EMAIL PROTECTED]
 To: Dennis Stout [EMAIL PROTECTED]; [EMAIL PROTECTED]
 Sent: Wednesday, July 16, 2003 20 13
 Subject: Re: cookies
 
 
  Well I'll be damned.
 
  My computer at home does the cookie thing perfectly well.  My workstation at
  work does not do cookies.  So my mod_perl creation is working fine as far as
  getting the cookies.
 
  <rant>
  YAY FOR WIN2K DOMAINS AND ADMIN WHO USE HELP DESK TECHS TO PROGRAM TICKETING
  SYSTEMS FOR DSL, DIGITAL TV, AND DOMAINS!
  </rant>
 
  I still have a problem tho.  The cookie string itself is not being passed
  along.  Instead, I am getting Apache::Cookie=SCALAR(0x9115c24).
 
  I imagine somewhere I need to do something like ->as_string or something.
  blah
 
  Thanks for helping, sorry I didn't spot that the error was, in fact, in the
  dumb terminal called a Win2k box I was using, and not in any actual code.
 
  Dennis Stout
 
  - Original Message - 
  From: Dennis Stout [EMAIL PROTECTED]
  To: [EMAIL PROTECTED]
  Sent: Wednesday, July 16, 2003 13 13
  Subject: cookies
 
 
   Okay, so technically this isn't really mod_perl specific...  but the
   cookie is being set with mod_perl and it's a huge mod_perl program
   being affected by this :)
  
   I have a cookie, the domain is set to .stout.dyndns.org (with the
   leading .).
  
   I set the cookie just fine now (thanks to those helping me on that)
  
   I had a problem parsing the cookie.  Added some debugging (okay, warn
   lines up the yingyang) and after cycling through the headers and warning
   them out to the error log...  I never saw any cookie info.
  
   So... If the website is ttms.stout.dyndns.org, shouldn't the cookie
   domain be .stout.dyndns.org?
  
   *sigh*  6 more days to finish this database.  I doubt I'll make it.
  
   Dennis
  
 
-- 
Mark Maunder [EMAIL PROTECTED]
ZipTree Inc.



Re: cookies

2003-07-16 Thread Mark Maunder
Cool dude. Now if you know why $r->pnotes() isn't working under
Apache/mod_perl 1.27, you'll make my day!

:wq

On Wed, 2003-07-16 at 21:42, Dennis Stout wrote:
 w00t!
 
 ttms_user: mp2Ti5p1JkhCObm9LKBFGsiAltop8aAWwl6vLLDr/3rtb09MRzZrEg==
 
 Here,
 
 your $cookie = Apache::Cookie->new($state->{r},
 -name    => 'Mark',
 -value   => 'AWESOME!!!',
 -expires => time + 86400*30*7,
 -domain  => '.dyndns.org',
 -path    => '/',
 );
 
 (okay, I made up "your", it sounds better than "my", and since this is fake
 anyways... heh)
 
 oop, looking at that, I should set the domain to something more sane again,
 like stout.dyndns.org.  :P
 
 Dennis
 
 P.S. Does anyone else try to use Outlook Express like vi and get odd error
 messages after a days worth of coding?
 
 - Original Message - 
 From: Mark Maunder [EMAIL PROTECTED]
 To: Dennis Stout [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED]
 Sent: Wednesday, July 16, 2003 20 33
 Subject: Re: cookies
 
 
  From perldoc CGI::Cookie
  # fetch existing cookies
  %cookies = fetch CGI::Cookie;
  $id = $cookies{'ID'}->value;
  # You're doing $cookies->value;
 
  ID == the name that you used when you set the cookie.
 
  On Wed, 2003-07-16 at 21:27, Dennis Stout wrote:
   *pounds head against brick wall*  why must it work against me???
  
   A cookie for anyone who solves this.
  
   sub handler {
   my $r = shift;
   my $result = undef;
  
   eval { $result = inner_handler($r) };
   return $result unless $@;
  
   warn "Uncaught Exception: $@";
  
   return SERVER_ERROR;
   }
  
   sub inner_handler {
   my $r = shift;
  
   my %q = ($r->args, $r->content);
   my %state = (r => $r, q => \%q);
  
   $state{title} = '';
   $state{template} = '';
   $state{auth_status} = password_boxes(\%state);
  
   #   warn "%ENV: \n";
   #   foreach (keys %ENV) {
   #   warn "$_ = $ENV{$_}\n";
   #   }
   #   my %headers = $r->headers_in;
   #   warn "Headers: \n";
   #   foreach (keys %headers) {
   #   warn "$_: $headers{$_}\n";
   #   }
   my $cookie = Apache::Cookie->fetch;
   warn "z - $cookie->value";
   validate_auth_cookie(\%state, $cookie);
  
   my $function = $r->uri;
   if (($state{login_user} eq '') and ($function ne '/login.cgi')) {
   $function = '/login.html';
   }
   my $func = $Dispatch{$function} || $Dispatch{DEFAULT};
  
   return DECLINED unless $func;
   return $func->(\%state);
   }
  
   Upon accessing a page (therefore generating lots of warning info in
 logs...) I
   get this in my error log.
  
   z - HASH(0x916ea08)->value at /home/httpd/ttms/perl/RequestHandler.pm line
   108.
  
   (the z is there so I know where at in my code the line in the log file is
   being generated.  I like z's and a's more than I do
   some/long/path/and/filename line 108)
  
   I have tried using $cookie as a value in and of itself, I've tried
   $cookie->{ttms_user}  (the name of the cookie is ttms_user), I've tried
   changing $cookie to %cookie and doing a $cookie{ttms_user} ..
  
   I might break down, declare this a bug, and use $ENV{HTTP_COOKIE} instead.
  
   Any ideas how to fix this to return to me the cookie itself?  Thanks.
  
   Dennis
  
   - Original Message - 
   From: Dennis Stout [EMAIL PROTECTED]
   To: Dennis Stout [EMAIL PROTECTED]; [EMAIL PROTECTED]
   Sent: Wednesday, July 16, 2003 20 13
   Subject: Re: cookies
  
  
Well I'll be damned.
   
My computer at home does the cookie thing perfectly well.  My
 workstation at
work does not do cookies.  So my mod_perl creation is working fine as
 far as
getting the cookies.
   
 <rant>
YAY FOR WIN2K DOMAINS AND ADMIN WHO USE HELP DESK TECHS TO PROGRAM
 TICKETING
SYSTEMS FOR DSL, DIGITAL TV, AND DOMAINS!
 </rant>
   
I still have a problem tho.  The cookie string itself is not being
 passed
along.  Instead, I am getting Apache::Cookie=SCALAR(0x9115c24).
   
I imagine somewhere I need to do something like ->as_string or
 something.
blah
   
Thanks for helping, sorry I didn't spot that the error was, in fact,
in the dumb terminal called a Win2k box I was using, and not in any
actual code.
   
Dennis Stout
   
- Original Message - 
From: Dennis Stout [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, July 16, 2003 13 13
Subject: cookies
   
   
 Okay, so technically this isn't really mod_perl specific...  but the
 cookie is being set with mod_perl and it's a huge mod_perl program
 being affected by this :)

 I have a cookie, the domain is set to .stout.dyndns.org (with the
 leading .).

 I set the cookie just fine now (thanks to those helping me on that)

 I

Re: [ANNOUNCE] Apache::FillInForm

2002-03-17 Thread Mark Maunder

Why not just use HTML::FillInForm?

Maurice Aubrey wrote:

 http://www.creation.com/~maurice/Apache-FillInForm-0.01.tar.gz
 I'll put it on CPAN if there's interest.

 NAME
 Apache::FillInForm - mod_perl interface to HTML::FillInForm

 SYNOPSIS
 httpd.conf:

   PerlModule Apache::Filter
   PerlModule Apache::FillInForm
   <FilesMatch \.foo$>
 PerlSetVar Filter on
 PerlHandler Apache::RegistryFilter Apache::FillInForm
   </FilesMatch>

 And then somewhere in your application:

   use Apache::FillInForm;
    Apache::FillInForm->fill; # We have a form to fill out

 DESCRIPTION
 This is a mod_perl filter that uses HTML::FillInForm and Apache::Request
 to automatically populate forms with user submitted data.

  Your application should call Apache::FillInForm->fill to indicate that
  you need a form filled in. If you don't do that, the filter passes the
  content through unmodified to minimize the performance hit for pages
  with no forms. Regardless of how many times you call
  Apache::FillInForm->fill, your content will only be filtered once per
 request.

 The data source for the forms is taken from Apache::Request by calling
 its instance() method. If you're unfamiliar with how the instance()
 method works, see the Apache::Request documentation.

 If you don't want to use Apache::Request you should be able to subclass
 this module and override its data() method. The data() method should
 return either a hash reference or an object that has a CGI.pm-style
 param() interface.

 BUGS
 May want to allow specific forms to be targeted by name and use separate
 data sources for each.

 Warning: This interface is experimental and may change based on
 experience and feedback.

 AUTHOR
 Copyright 2002, Maurice Aubrey [EMAIL PROTECTED]. All rights
 reserved.

 This module is free software; you may redistribute it and/or modify it
 under the same terms as Perl itself.

 SEE ALSO
 perl(1), mod_perl(3), Apache(3), Apache::Filter(3), HTML::FillInForm(3),
 Apache::Request(3)




exit with alarm

2002-03-11 Thread Mark Maunder

I've written a rather large mod_perl app that initiates a bunch of
socket connects using a subclassed IO::Socket::INET. Every now and then
I'm getting this:

child pid 22743 exit signal Alarm clock (14)

It happens infrequently and I can't find a pattern of any kind. The
request that seems to cause it gets served up without any problems. I
suspect something I'm using is setting $SIG{ALRM}, but I thought this was
fixed a while ago so that mod_perl saves SIGALRM for restoring at the
end of a request? (I'm running mod_perl 1.26)

thanks

Mark.





Re: modperl growth

2002-02-05 Thread Mark Maunder

Rod Butcher wrote:

 My .05... I run a small communal webserver. Software had to be free, secure,
 stable, support Perl, multiple domains and ASP, be reasonably simple,
 originally run on Win32 and be capable of migration to Linux later.
 Nobrainer -- Apache, mod_perl, Apache::ASP.
 Only difficulty was getting mod_perl installed, it helped that I had a
 background in IT, I suspect a non-professional would find it impossible.
 Which is a shame because Win$ users expect everything to work out of the box
  without having to know anything. That's not meant as a criticism, but I think
 it's the reality now.

I was thinking that too, but then I remembered that if you're not from an IT
background, you're probably not going to be able to write a line of mod_perl
code anyhoo.

But, yeah, the installation/compilation process is daunting for a
javascript/html jockey who is trying to pick which server side language (PHP,
Perl, Python, JSP, etc.) to learn.




Re: Single login/sign-on for different web apps?

2002-01-16 Thread Mark Maunder

Daniel Little wrote:

  From: Mark Maunder [mailto:[EMAIL PROTECTED]]
 
   Here's one idea that worked for me in one application:
  
1) assume that all hosts share the same domain suffix:
  
 www.foo.com
 www.eng.foo.com
 www.hr.foo.com
  
2) Define a common authentication cookie that is sent to *.foo.com.
   This cookie might contain the following information:
  
  username, timestamp
 
  The only way I could come up with, was to have the browser
  redirected to every domain name with an encrypted uri variable
  to prove it is signed on which causes each host included in
  the single sign on to assign an auth cookie to the browser.
 
  So the browser is logged into foo.com, bar.com baz.com and
  boo.com by logging into foo.com which assigns a cookie and
  redirects to bar.com which assigns a cookie and redirects
  it to baz.com which assigns a cookie and redirects it to
  boo.com which assigns a cookie and redirects it back to
  foo.com. It has now collected all cookies required for
  signon to all domain names and is logged into all of them.

 An alternative to this scheme - and depending on how much control you have
 over the applications / servers at each end - is to do this in a delayed
 fashion. The only time you really need to get authenticated at each server
 is when the browser is sent off to the new site. Instead of redirecting the
 browser to the new site directly, it sends it to a script on the server that
 they are currently connected to (and therefore already authenticated with)
 which requests a 'transition' token of some kind from the authentication
 server. The transition token then is used to transfer them to the requested
 server, which based on the token, does a lookup on the authentication server
 to find out if its a valid transition token, and if so, generates a new
 cookie for them (if necessary) and logs them into the site.


This assumes they don't just type in the URL of the other site they want to
visit manually. It limits the user to visiting sites via links on sites they
are currently logged on to.






Re: Request Limiter

2002-01-14 Thread Mark Maunder

Geoffrey Young wrote:

  Ken Miller wrote:
 
  There was a module floating around a while back that did request
  limiting (a DOS preventional tool).  I've searched the archives
  (unsuccessfully), and I was wondering if anyone knows what the heck
  I'm talking about.

 maybe you had Stonehenge::Throttle in mind?


I wrote something a while back in response to users holding down the F5
key in IE and DOS'ing our website. It's called Apache::GateKeeper and is
more polite than Throttle in that it serves cached content to the client
instead of sending a 'come back later' message. It's configurable so after
exceeding a threshold the client gets content from the shared memory
cache, and if a second threshold is exceeded (ok this guy is getting
REALLY irritating) then they get the 'come back later' message. They will
only get cached content if they exceed x number of requests within y
number of seconds.

It works with Apache::Filter and there are two components -
Apache::GateKeeper which is the first handler in the line of filters, and
Apache::GateKeeper::Gate, which is the last in the line of filters and
does the caching of content which will be served to the client if they are
naughty.

I would have liked to write this so that it just drops into an existing
mod_perl app, but I couldn't find a way to grab an application's output
before it got sent to the client for storage in the cache, so I set it up
with Apache::Filter. Any suggestions on how to solve this?
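
For what it's worth, the capture end of an Apache::Filter chain reads the
upstream output through the usual filter filehandle; a sketch of that part
only (not the actual GateKeeper code, and the cache store itself is left
out):

package My::CaptureFilter;
use strict;
use Apache::Constants qw( :common );

# Must be the last PerlHandler in the Filter chain so $fh carries the
# combined output of the application handlers before it.
sub handler {
    my $r = shift;
    $r = $r->filter_register;
    my ($fh, $status) = $r->filter_input();
    return $status unless $status == OK;

    local $/;                  # slurp everything the upstream handlers printed
    my $content = <$fh>;

    # ... store $content in the shared memory cache, keyed on $r->uri ...

    print $content;            # pass it through to the client unchanged
    return OK;
}
1;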

I've put the source on http://www.swiftcamel.com/gatekeeper.tgz

It isn't packaged at all, and only includes the two modules I've grabbed
straight out of our app - Apache::GateKeeper and Apache::GateKeeper::Gate.
Currently this uses pnotes to pass POST data and messages between modules
that are in the Apache::Filter chain, so it's really not the kind of thing
you can drop into an app.

Any ideas on how to write a version of this that one CAN simply drop into
an existing application would be most welcome.

~mark.




Re: Request Limiter

2002-01-14 Thread Mark Maunder

Perrin Harkins wrote:

  It's configurable so after
  exceeding a threshold the client gets content from the shared memory
  cache, and if a second threshold is exceeded (ok this guy is getting
  REALLY irritating) then they get the 'come back later' message. They will
  only get cached content if they exceed x number of requests within y
  number of seconds.

 Nice idea.  I usually prefer to just send an ACCESS DENIED if someone is
 behaving badly, but a cached page might be better for some situations.

 How do you determine individual users?  IP can be a problem with large
 proxies.  At eToys we used the session cookie if available (we could verify
  that it was not faked by using a message digest) and would fall back to the
 IP if there was no cookie.


I'm also using cookies with a digest. There's also the option of using the IP
instead, which I added in as an afterthought since my site requires cookie
support.  But I have nightmares of large corporate proxies seeing the same page
over and over.
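
A minimal sketch of that kind of check, assuming the session cookie carries
"id:digest" where the digest was made with a server-side secret when the
cookie was issued (the names and cookie format here are illustrative, not
taken from any particular site's code):

use Digest::MD5 qw(md5_hex);

my $secret = 'some-server-side-secret';   # never sent to the client

# Returns a throttling key for this client: the verified session id if
# the cookie checks out, otherwise the remote IP as a fallback.
sub client_key {
    my ($r) = @_;
    my ($raw) = ($r->header_in('Cookie') || '') =~ /\bsession=([^;]+)/;
    if (defined $raw) {
        my ($id, $digest) = split /:/, $raw, 2;
        return $id if defined $digest
                   && $digest eq md5_hex($id . $secret);   # not faked
    }
    return $r->connection->remote_ip;
}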

I wonder if this would be easier to implement as a drop-in with mod_perl 2,
since filters are supposed to be replacing handlers. And while I'm at it, is
there a mod_perl 2 users (or testers) mailing list yet?





Re: Apache and Perl togheter

2002-01-09 Thread Mark Maunder

Alan Civita wrote:

 Surely...
 and I've done all of it...
 ...do I have to use some particular option during the
 configuration and installation of Apache in order to
 use/enable Perl in Apache?
 thx again

Alan,

For basic CGI under Apache, you will need to make sure your scripts
print out the following before sending anything else:
"Content-type: text/html\n\n"

If they don't, you will get an internal server error. Take a look at the
Perl CGI module on CPAN. It should get you started with creating some
reasonably complex CGI apps. If you have any problems or questions about
CGI, direct them to the Perl beginners list. Once you've mastered the
basics of CGI, check out http://perl.apache.org/guide for an intro to
using mod_perl for increased performance and flexibility.

Please do take a look at the error_log, and if you don't understand what
it means, then cut and paste the line you don't understand into Google
and hit search. It will usually come up with an email discussion about
your exact problem. I have included a hello world script that should
work:

#!/usr/bin/perl
print "Content-type: text/html\n\n";
print "<CENTER><H1>Hello world!</H1></CENTER>\n";






Re: [ANNOUNCE] Apache::AppCluster 0.2

2002-01-09 Thread Mark Maunder

Gunther Birznieks wrote:

 Is this a lot different from PlRPC?

 Perhaps this should be a transport layer that is added to PlRPC rather than
 creating a brand-new service?

 Ideally it would be nice if it were architected with mod_perl and HTTP
 being the default mechanism of transport but that if you wanted to, you
 could make it into a standalone Perl daemon that forked by itself.

The difference is that AppCluster allows you to call multiple remote methods on
the server (or on multiple distributed servers) simultaneously. However, I wasn't
aware of PlRPC and I really like the interface, i.e. the way it creates a copy of
the remote object locally and allows you to call methods on it as if it were just
another object.

I could not do this with AppCluster without sacrificing the concurrency it
offers. At present you create a single appcluster object and then register the
method calls you would like it to call, one by one. You then call a single method
(with a timeout) that calls all registered remote methods simultaneously. (It
uses non-blocking IO and so provides concurrency in a single thread of execution)

re: allowing the server to be a standalone daemon that forks by itself.
This project actually started exactly like that. I grabbed the pre-forking server
example from the Cookbook and used that as the basis for the server. I found that
performance was horrible though, because forking additional servers took too long
and I was also using DBI and missed Apache::DBI (and all the other great
Apache::* mods) too much. So I used a good solid server that offers a great Perl
persistence engine. I'm not sure why anyone would want to roll their own server
in Perl. If there is a reason, then I could change the server class to create a
server abstraction layer of some kind.

~mark.




Re: [ANNOUNCE] Apache::AppCluster 0.2

2002-01-09 Thread Mark Maunder

brian moseley wrote:

 On Wed, 9 Jan 2002, Mark Maunder wrote:

  The difference is that AppCluster allows you to call
  multiple remote methods on the server (or on multiple
  distributed servers) simultaneously. However, I wasn't
  aware of PlRPC and I really like the interface i.e. the
  way it creates a copy of the remote object locally and
  allows you to call methods on it as if it were just
  another object.

 would it require too much surgery and api change if you
 added the concurrency support to PlRPC?

Well, I guess two methods could be added to the client object. One to
add a concurrent request to be called (register_request()) and one to
send all registered requests concurrently. I'm not the author though, so
you'll have to chat to Jochen about that.

The server and transport would have to be rewritten pretty much from
scratch I think. The transport needs to be HTTP POST requests and
responses. The server needs to be set up as a mod_perl handler that
takes advantage of everything mod_perl has to offer.

From my point of view, it's easier to duplicate Jochen's work in
AppCluster by adding the same type of interface i.e. creating a copy of
a remote object locally and calling methods on that object as per normal
while the actual method call is submitted via a POST to a remote
mod_perl app server.

I don't really mind whether we incorporate this stuff into PlClient or
AppCluster or both, but I do think that both the concurrency in
AppCluster and tied object API in PlRPC are really useful and would be
even better with the remote app server being mod_perl.

An idea might be to incorporate both the AppCluster concurrency and
PlRPC style api  into an Apache::* module that gives us the best of both
worlds with mod_perl performance (etc.) on the server side. (and then
get rid of AppCluster since it will be redundant)

Let me know if that sounds like a good idea and I'll start work on it.
Perhaps we could call it Apache::PlRPC (now with added concurrency!)

~mark





Re: [ANNOUNCE] Apache::AppCluster 0.2

2002-01-09 Thread Mark Maunder

brian moseley wrote:

 On Wed, 9 Jan 2002, Mark Maunder wrote:

  Well, I guess two methods could be added to the client
  object. One to add a concurrent request to be called
  (register_request()) and one to send all registered
  requests concurrently. I'm not the author though, so
  you'll have to chat to Jochen about that.

 couldn't you just subclass RPC::PlClient?

The transport is different (HTTP/POST) and I don't think I can easily
just drop in another (alternative) transport - I may as well rewrite.

  The server and transport would have to be rewritten
  pretty much from scratch I think. The transport needs to
  be HTTP POST requests and responses. The server needs to
  be set up as a mod_perl handler that takes advantage of
  everything mod_perl has to offer.

 why needs? i'm sure lots of people would rather run a very
 lightweight non-http/apache-based server.


Agreed. Are there any more besides a standalone pure perl daemon and
mod_perl/apache?


  I don't really mind whether we incorporate this stuff
  into PlClient or AppCluster or both, but I do think that
  both the concurrency in AppCluster and tied object API
  in PlRPC are really useful and would be even better with
  the remote app server being mod_perl.

 seems like the ideal api gives you the best functionality
 from both original apis and abstracts away the choice of
 transport and server.


yeah - agreed.


  An idea might be to incorporate both the AppCluster
  concurrency and PlRPC style api into an Apache::* module
  that gives us the best of both worlds with mod_perl
  performance (etc.) on the server side. (and then get rid
  of AppCluster since it will be redundant)

 perhaps i misunderstand, but you're suggesting making the
 client an Apache module? why?

Well, the server component (at present) is a mod_perl handler, and I
wanted to bundle both together, so I stuck it in the Apache namespace
(pending any objections of course). Seems like RPC might make more sense
if it becomes platform/server neutral, since Apache::* binds the server
platform to mod_perl.

 i like the idea of being able to write client code that uses
 the same rpc api no matter whether i choose to use soap,
 xml-rpc, a more specific http post, plrpc's transport
 (whatever that might be), or whatever as the transport. not
 all of the drivers would have to support the same feature
 set (i think your mechanism supports arbitrarily deep data
 structures?).

 that rpc api is one of the things p5ee is looking for, i
 believe.

It seems like you're asking a bit much here (the holy grail of RPC?).
SOAP and XML-RPC are their own client/server systems. Are we going to
integrate this with both of them and support standalone Perl daemons,
etc.? I considered writing a SOAP client that allows for concurrent
requests, but I found SOAP::Lite to be slow under mod_perl, so I opted for
rolling my own instead. Also, SOAP is platform neutral and I'm not sure,
but I think it won't allow for Perl data structures as complex as
Storable does.

I think you should probably distinguish between transport and encoding.
Transports could be HTTP, HTTPS, a plain old socket (if we're talking
to a Perl daemon), or an encrypted socket connection using CBC.
Encodings we've chatted about are Storable's freeze/thaw and SOAP's XML,
and then XML-RPC (which I assume has its own encoding of method names
and params etc. in XML - although I don't know it at all). I think having
various transports is fine. But I'm not sure what the motivation is for
varying the encodings, unless we want our client to talk to SOAP and
XML-RPC servers, or vice versa.

Perhaps integrating PlRPC and AppCluster's client API's and allowing for
either standalone daemon or mod_perl server is a good start? We can use
HTTP, HTTPS, direct sockets and encrypted sockets as the first
transports. The client can have two modes - concurrent remote procedure
calls, or creating a copy of the remote object PlRPC style.

~mark.







Re: [ANNOUNCE] Apache::AppCluster 0.2

2002-01-08 Thread Mark Maunder

Apache::AppCluster is now in CPAN and can be accessed at:
http://search.cpan.org/search?dist=Apache-AppCluster

This consists of a client and server module that can be used to develop
mod_perl clustered web services. A client application can make multiple
simultaneous API calls to a mod_perl server (or multiple servers) passing
and receiving any perl reference or object as parameters. The server module
runs the called module::method in an eval loop and handles errors gracefully
while taking advantage of the persistence offered by mod_perl. The client
has a bunch of useful methods and includes a timeout for the total time
allowed for all remote calls to complete.

Let me know if you have any suggestions or find this useful. (or find any
bugs)

~mark

*snip*
The uploaded file

Apache-AppCluster-0.02.tar.gz

has entered CPAN as

  file: $CPAN/authors/id/M/MA/MAUNDER/Apache-AppCluster-0.02.tar.gz
  size: 23995 bytes
   md5: 0abff0a4a2aa053c9f8ae3c00dd86434

No action is required on your part
Request entered by: MAUNDER (Mark D. Maunder)
Request entered on: Sun, 06 Jan 2002 16:31:57 GMT
Request completed:  Sun, 06 Jan 2002 16:33:33 GMT







[ANNOUNCE] Apache::AppCluster 0.2

2002-01-06 Thread Mark Maunder

Hi all,

I'm about to post this module to CPAN. Please take a look and let me
know if you think this is appropriate for the Apache::* namespace and if
you have any problems with it ('make test' is quite comprehensive).

The module is available from:
http://www.swiftcamel.com/modules/Apache-AppCluster-0.02.tar.gz

~mark

Here is the readme:

Apache::AppCluster is a lightweight mod_perl RPC mechanism that allows
you to use your mod_perl web servers as distributed application servers
serving multiple concurrent RPC requests from remote clients across a
network. The client component has the ability to fire off multiple
simultaneous requests to multiple remote application servers and collect
the responses simultaneously.

This is similar to SOAP::Lite in that it is a web based RPC mechanism,
but it has the advantage of being able to send/receive multiple
concurrent requests to the same or different remote application servers,
and the methods/functions called on the remote servers may receive and
return Perl data structures of arbitrary complexity - entire objects can
be flung back and forth with ease.

Please see the Apache::AppCluster::Client and Apache::AppCluster::Server
documentation for full details on server configuration (very easy) and
client usage (OO interface).

INSTALLATION:

Untar the distribution into a directory that is readable by the user
'nobody' (i.e. don't use /root for installation). The test suite runs a
web server on port 8228, and that server runs as user nobody.

As per usual do the following:

perl Makefile.PL
make
make test
make install


If you run into problems during the 'make test' stage, please email me
the error log, which is at Server/t/error_log. Also include the last few
lines of the 'make test' output.

APACHE CONFIG:

The documentation for Apache::AppCluster::Server contains everything
you'll need to set up the server component. The only thing to keep in
mind is that if you are going to be sending multiple concurrent requests
from the client to an apache server, make sure the server is set up to
handle the load. Do this by setting MaxClients, StartServers,
MinSpareServers and MaxSpareServers. If you're going to be hitting it
with 20 concurrent requests, make sure there are 20 child servers
standing by to handle your requests.
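For example, for a server expected to take around 20 concurrent client
requests, the relevant httpd.conf directives might look something like
this (the numbers are illustrative only - tune them for your own box):

StartServers     20
MinSpareServers  10
MaxSpareServers  25
MaxClients       50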




Re: Fast template system

2001-12-31 Thread Mark Maunder

Ryan Thompson wrote:

 Mark Maunder wrote to Ryan Thompson:

  Ryan Thompson wrote:
 
   There must be a faster way. I have thought about pre-compiling each
   HTML file into a Perl module, but there would have to be an automated
   (and secure) way to suck these in if the original file changes.
  
   Either that, or maybe someone has written a better parser. My code
   looks something like this, to give you an idea of what I need:
 
  Sure there are tons of good template systems out there. I think
  someone made a comment about writing a template system being a
  right of passage as a perl developer. But it's also more fun to do
  it yourself.

 :-)

  I guess you've tried compiling your regex with the o modifier?

 Yep, problem is there are several of them. I've done some work
 recently to simplify things, which might have a positive effect.

  Also, have you tried caching your HTML in global package variables
  instead of shared memory?  I think it may be a bit faster than
  shared memory segments like Apache::Cache uses. (The first request
  for each child will be slower, but after they've each served once,
  they'll all be fast). Does your engine stat (access) the html file
  on disk for each request? You mentioned you're caching, but
  perhaps you're checking for changes to the file. Try to stat as

 My caching algorithm uses 2 levels:

 When an HTML file is requested, the instance of my template class
 checks in its memory cache. If it finds it there, great... everything
 is done within that server process.

 If it's not in the memory cache, it checks in a central MySQL cache
 database on the local machine. These requests are on the order of a
 few ms, thanks to an optimized query and Apache::DBI. NOT a big deal.

 If it's not in either cache, it takes it's lumps and goes to disk.


If you're using a disk based table, in most cases mysql would access the
disk itself anyway. So whether you're getting the cached data from mysql
or a file, it's still coming from disk. (Yes, mysql caches - especially
if you're using InnoDB tables - but you're not guaranteed to save a disk
access.) Not sure how much html/content you have, but is there any chance
you can stick it all in shared memory, or even better, give each child
its own copy in a package global variable (like a hashref)? If it's under
a meg (maybe even 2) you might be able to get away with that.


 In each cache, I use a TTL. (time() + $TTL), which is configurable,
 and usually set to something like 5 minutes in production, or 60
 seconds during development/bug fixes. (And, for this kind of data, 5
 minutes is pretty granular, as templates don't change very often.. but
 setting it any higher would, on average, have only a negligible
 improvement in performance at the risk of annoying developers :-).

 And, with debugging in my template module turned on, it has been
 observed that cache misses are VERY infrequent ( 0.1% of all
 requests).

 In fact, if I use this cache system and disable all parsing (i.e.,
 just use it to include straight HTML into mod_perl apps), I can serve
 150-200 requests/second on the same system.

 With my parsing regexps enabled, it drops to 50-60 requests/second.

 So, to me, it is clear where performance needs to be improved. :-)

How about instead of having a cache expiry/TTL, you parse the HTML on
the first request only and then always serve from the cache. To refresh
the cache, you set a flag in shared memory. Whenever a child is about to
serve from its cache, it checks the flag to see if it needs to refresh
first. That way you can 'push' out new content by setting the flag and
unsetting it once all running children have re-read their caches, and
every request except the first is served from the cache. You also get
the benefit of having each child serve live content on every request by
simply keeping the flag set, so your developers can work in realtime
without a 60 second latency before new HTML takes effect.
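A rough sketch of that idea (untested - I'm assuming IPC::Shareable for
the flag and a plain package global for each child's copy; parse_template()
stands in for your existing parser):

package My::TemplateCache;
use strict;
use IPC::Shareable;

my %cache;          # this child's copy of the parsed templates
my $refresh_flag;   # shared across all children
tie $refresh_flag, 'IPC::Shareable', 'tmpl', { create => 1, mode => 0666 };

sub fetch {
    my ($file) = @_;
    %cache = () if $refresh_flag;              # flag set: drop our copy
    $cache{$file} ||= parse_template($file);   # re-parse only on a miss
    return $cache{$file};
}

1;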








Re: Fast template system

2001-12-30 Thread Mark Maunder

Ryan Thompson wrote:

 There must be a faster way. I have thought about pre-compiling each
 HTML file into a Perl module, but there would have to be an automated
 (and secure) way to suck these in if the original file changes.

 Either that, or maybe someone has written a better parser. My code
 looks something like this, to give you an idea of what I need:


Sure, there are tons of good template systems out there. I think someone
made a comment about writing a template system being a rite of passage
for a perl developer. But it's also more fun to do it yourself.

I guess you've tried compiling your regex with the o modifier? Also, have
you tried caching your HTML in global package variables instead of shared
memory? I think it may be a bit faster than shared memory segments like
Apache::Cache uses. (The first request for each child will be slower, but
after they've each served once, they'll all be fast.) Does your engine
stat (access) the html file on disk for each request? You mentioned
you're caching, but perhaps you're checking for changes to the file. Try
to stat as infrequently as possible, or if you have to, use the _ special
filehandle for repeated stats.
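For reference, the _ trick looks like this - after any stat or file test,
_ reuses the buffered result of that call instead of hitting the disk
again (the path is just for illustration):

my $file = '/home/www/templates/page.html';
if (-e $file) {                     # one stat(2) call
    my $mtime = (stat(_))[9];       # reuses the buffered result
    my $size  = -s _;               # still no extra stat
    print "size $size bytes, modified $mtime\n";
}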

Just my 2c.

~mark.




Re: [OT] Tips tricks needed :)

2001-12-20 Thread Mark Maunder

Mark Fowler wrote:

 I'd really appreciate it other people could check this and confirm that IE6
 is not
 offering any actual privacy level protection and is just discriminated
 against people that don't have P3P headers.


I tried a few header combinations before I got IE6 to send cookies in frames
where one frame is an external site, so it is parsing the header, not just
requiring its existence. I'm not sure if it actually looks at a user's
privacy settings to determine whether the policy is acceptable.




Re: Deleting a cookie

2001-11-27 Thread Mark Maunder

Jon Robison wrote:

 I have created a login system using the wonderful Ticket system from the
 Eagle book.  I have modified TicketAccess so that after authentication,
 it reviews the arguments in the query string and does push_handler, the
 handler being chosen based on the args.

 My only problem is that I want to provide the users with a logout button
 which will delete the cookie from thier browser, yet I cannot find how!.

Jon,

I had the same problem and could not successfully delete the cookie from
all browsers (IE, Netscape, Konqueror, Lynx, Opera etc.). I eventually
solved it by keeping the existing (session) cookie which was assigned
when the user first logged in, but marking the user as logged out on the
server side. That is, associate the cookie with session data stored in a
database, and instead of deleting the cookie on the client side, just set
something in the server side session data that marks the user as logged
out. If the user then logs in again, reuse the same cookie and mark the
user as logged in. This way you only have to assign an authentication
cookie once per browser session.

This may be tough to drop into TicketTool because IIRC it stores the
authentication info in the cookie itself, rather than in a server side
session it associates with a cookie. Not very helpful, but it's another
approach. I'd like to hear if you do get it working across various
browsers by expiring the cookie - for future ref.
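For what it's worth, the server side flag could look roughly like this
with Apache::Session::MySQL (the DSN, credentials and the logged_in key
are just illustrative):

use Apache::Session::MySQL;

sub logout {
    my ($session_id) = @_;    # the value from the existing cookie
    my %session;
    tie %session, 'Apache::Session::MySQL', $session_id, {
        DataSource     => 'dbi:mysql:sessions',
        UserName       => 'www',
        Password       => 'secret',
        LockDataSource => 'dbi:mysql:sessions',
        LockUserName   => 'www',
        LockPassword   => 'secret',
    };
    $session{logged_in} = 0;  # keep the cookie, just flip the flag
    untie %session;
}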

~mark




Re: [OT] Re: search.cpan.org

2001-11-27 Thread Mark Maunder

Nick Tonkin wrote:

 Because it does a full text search of all the contents of the DB.


Not sure what he's using for a back end, but mysql 4.0 (in alpha) has
very fast and feature-rich full text searching now, so perhaps he can
migrate to that once it's released in December sometime. I'm using it on
our site, searching fulltext indexes on three fields (including a large
text field) in under 3 seconds across over 70,000 records on a P550 with
490 megs of RAM.




Re: [OT] A couple of dubious network problems...

2001-11-27 Thread Mark Maunder

Dave Hodgkinson wrote:

 1. On a RH6.0 (yes, ick) box without persistent DBI connections, the
 server side of the DBD::mysql connection was successfully closed
 (netstat shows nothing), but the client side shows a TIME_WAIT state,
 which hangs around for 30 seconds or so before
 disappearing. Obviously, using Apache::DBI makes this go away, but
 it's disturbing nonetheless. Does this ring any bells?

Dunno about number 2, but 1 is perfectly normal. TIME_WAIT is a state
the OS puts a closed socket into to prevent another app from reusing the
socket, just in case the peer host has any more packets to send to that
port. The host that closes the socket is the one that puts it into
TIME_WAIT. BSD IP stack implementations keep sockets in TIME_WAIT for
about 30 seconds; others go up to 2 minutes. The duration is called 2MSL
(2 * maximum segment lifetime). Don't worry about it and don't mess with
it (unless you're consuming 64000+ sockets per 30 seconds, in which case
you have other problems to deal with ;-)

~mark




Re: [OT] Re: Seeking Legal help

2001-11-22 Thread Mark Maunder

Matt Sergeant wrote:

 Step three: Once you've given them 90 days after date of invoice, get a
 solicitor (not a barrister) to draft a threatening letter. It'll cost you
 about $100. I'm afraid you'll have to give them another 30 days at this
 point.

 Step four: Get a lawyer. Sue. $25,000 is not to be sniffed at.

What many small companies and one man operations don't realise is that
debt collecting is an art. Also, some large companies (large banks in
particular) have a policy of 'if you want to do business with us, we
take 60 days to pay'. It's all about keeping the cashflow on their side.

I did some work for a certain Linux distributor in the UK recently and
they took 100 days to pay, after much harassment. If you're small you
have to be tough - put the geek aside and become that vicious old lady
who is usually hired to badger late payers.

Since you're also UK based, a good line you might want to try is I've
already paid the VAT on this invoice; what I'd like to know is whether I
should write you off as a bad debt so I can claim the VAT back -
assuming you're VAT registered, that is.

~mark




[OT] open source jobsite

2001-10-31 Thread Mark Maunder

Hi,

We launched a free open source jobsite today (open source jobs only, and
non-profit). Check it out at http://www.freeusall.com/

It's built on mod_perl and Apache. Any feedback would be much
appreciated. (please send directly to me as this is very OT).

thanks,

~mark




Re: Apache::Compress - any caveats?

2001-10-29 Thread Mark Maunder

 Ged Haywood wrote:

 There was one odd browser that didn't seem to deal with gzip encoding
 for type text/html, it was an IE not sure 4.x or 5.x, and when set
 with a proxy but not really using a proxy, it would render garbage
 to the screen.  This was well over a year ago at this point when this
 was seen by QA.  The compression technique was the same used as
 Apache::Compress, where all of the data is compressed at once.
 Apparently, if one tries to compress in chunks instead, that will
 also result in problems with IE browsers.

We've been testing with Opera, Konqueror, NS 4.7 and 6, IE 5, 5.5 and 6,
AOL and Lynx, and haven't had any probs (haven't tested IE below version
5 though *gulp*). The only real problem was NS 4.7, which freaked out
when you compressed both the style sheet and the HTML (it wouldn't load
the style sheet), so we're just compressing text/html.

 Note that it wasn't I that gave up on compression for the project,
 but a lack of management understanding the value of squeezing 40K
 of HTML down to 5K !!  I would compress text/html output to
 netscape browsers fearlessly, and approach IE browsers more
 carefully.

I differ in that NS instils fear while IE seems to cause fewer
migraines. Agree on your point about management ignorance though. Isn't
bandwidth e-commerce's biggest expense?





[OT] P3P policies and IE 6 (something to be aware of)

2001-10-29 Thread Mark Maunder

Just thought I'd share a problem I've found with IE 6 and sites (like
mine) that insist on cookie support.

If you use cookies on your site and you send a customer an email
containing a link to your site:
If the customer's email address is based at a web based mail service
like hotmail, IE 6's default behaviour is to disable cookies when the
customer clicks on the link and your site is opened in frames. This is
because IE 6 considers your site to be a third party (the frames cause
this) and unless you have a compact P3P policy set up, which it approves
of, it disables cookies. A compact P3P policy is just an HTTP header
containing an abbreviated version of a full P3P policy, which is an XML
document. Here's how you set up a compact P3P policy under mod_perl:

# This policy will make IE6 accept your cookies as a third party, but
# you should generate your own policy using one of the apps at the W3C
# site.
my $p3p_compact_policy = 'CP="ALL DSP COR CURa ADMa DEVa TAIa PSAa PSDa '
                       . 'IVAa IVDa CONa TELa OUR STP UNI NAV STA PRE"';
$r->err_header_out(P3P => $p3p_compact_policy);
$r->header_out(P3P => $p3p_compact_policy);

Check out http://www.w3.org/P3P/ for the full info on P3P.
Check out
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnpriv/html/ie6privacyfeature.asp

for M$ info on IE6 and cookies/privacy

Apologies for the OT post, but I'm just hoping I'll save someone else
the same trouble I just went through.

~mark




Re: Apache::Compress - any caveats?

2001-10-25 Thread Mark Maunder

Ged Haywood wrote:

 Hi there,

 On Wed, 24 Oct 2001, Mark Maunder wrote:

  I noticed that there are very few sites out there using
  Content-Encoding: gzip - in fact yahoo was the only one I could
  find. Is there a reason for this

 I think because many browsers claim to accept gzip encoding and then
 fail to cope with it.

The only bug I have noticed is in Netscape 4.7, which does not like
anything other than HTML to be compressed. So the only thing I'm
compressing is text/html. I don't know of any browsers that won't accept
compressed html (so far).





Re: stacked handlers return vals

2001-10-22 Thread Mark Maunder

Geoffrey Young wrote:

   what is wrong with DONE?  DONE immediatly closes the client
  connection
   and starts the logging phase.  if you have sent the content already
   then there is nothing to worry about.  the call to send_http_header
   will pick up on the any status you set previously or use the default
   HTTP_OK set at the start of the request.
 
  Isn't DONE ignored like OK and DECLINED? Writing Apache Modules
  w. P.a.C. says:
 
  ..Each handler will be called in turn in the order in which it was
  declared. The exception to this rule is if one of the handlers in the
  series returns an error code (anything other than OK, DECLINED, or
  DONE)...

 ok, I think that paragraph can be read a few ways.

 pretty much anything other than OK, DECLINED, or DONE is treated as an error
 and starts the error processing cycle.  even things like REDIRECT.  that's
 what I think the parenthetical was talking about.  now, that aside, DONE has
 special meaning to Apache, namely ending the request and moving to the
 logging phase.  Looks like I was wrong about it closing the connection,
 though - I thought I had both read and tested that, but I can't see in the
 code where Apache overrides the current keepalive settings and in my tests
 the connection was left open.

 the current implementation in perl_call_handler has exceptions for != OK,
 then further processing for SERVER_ERROR and DECLINED.  DONE appears to fall
 through, which would appear to make it behave the way I described.

 however, the real proof is in testing, which for me shows exactly what I
 expected - DONE terminates the chain and returns 200.  try it yourself :)

 --Geoff

OK, thanks for all the help. I read the eagle book properly the second
time and found the description of DONE (so this thread is getting a
little embarrassing!). I'm using chained handlers now with $r->pnotes to
coordinate all handler responses (along with Apache::Filter to pass data
along the chain), e.g. if the first handler returns DECLINED, the rest
will too because of a flag set with pnotes. Someone else suggested pnotes
earlier in the thread.
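The flag idea boils down to something like this (a simplified sketch -
the package names and the content-type test are made up for
illustration):

package My::FirstFilter;
use Apache::Constants qw(OK DECLINED);

sub handler {
    my $r = shift;
    if ($r->content_type =~ m{^image/}) {
        $r->pnotes(chain_declined => 1);   # tell the rest of the chain
        return DECLINED;
    }
    # ... generate or transform content here ...
    return OK;
}

package My::SecondFilter;
use Apache::Constants qw(OK DECLINED);

sub handler {
    my $r = shift;
    return DECLINED if $r->pnotes('chain_declined');
    # ... further processing ...
    return OK;
}

1;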

thanks again.







Re: stacked handlers return vals

2001-10-21 Thread Mark Maunder

Nikolaus Rath wrote:

 * Mark Maunder [EMAIL PROTECTED] wrote:
  Hi,
 
  If I'm using stacked handlers, what should I return if I dont want the
  next handler in line to run because I've returned all required content
  to the client? (the eagle book says anything other than OK, DECLINED
  or DONE, but what's the appropriate return val that wont cause the
  client to think an error occured?)

 200 / HTTP_DOCUMENT_FOLLOWS?

--Nikolaus

Nope, tried it already. It just goes on to the next handler as if you
returned OK.





Re: stacked handlers return vals

2001-10-21 Thread Mark Maunder

Geoffrey Young wrote:

  -Original Message-
  From: Mark Maunder [mailto:[EMAIL PROTECTED]]
  Sent: Sunday, October 21, 2001 1:49 PM
  To: Nikolaus Rath
  Cc: [EMAIL PROTECTED]
  Subject: Re: stacked handlers return vals
 
 
  Nikolaus Rath wrote:
 
   * Mark Maunder [EMAIL PROTECTED] wrote:
Hi,
   
If I'm using stacked handlers, what should I return if I
  dont want the
next handler in line to run because I've returned all
  required content
to the client? (the eagle book says anything other than
  OK, DECLINED
or DONE, but what's the appropriate return val that wont cause the
client to think an error occured?)
  
   200 / HTTP_DOCUMENT_FOLLOWS?
  
  --Nikolaus
 
  Nope, tried it already. It just goes on to the next handler as if you
  returned OK.

 what is wrong with DONE?  DONE immediatly closes the client connection and
 starts the logging phase.  if you have sent the content already then there
 is nothing to worry about.  the call to send_http_header will pick up on the
 any status you set previously or use the default HTTP_OK set at the start of
 the request.

 no matter what status you return, it matters not once you've sent your
 headers.  DONE is there if you want to close the client connection, which
 will prevent any other PerlHandler from getting to the client

 HTH

 --Geoff

Thanks, I missed that.





stacked handlers return vals

2001-10-19 Thread Mark Maunder

Hi,

If I'm using stacked handlers, what should I return if I don't want the
next handler in line to run because I've already returned all required
content to the client? (The eagle book says anything other than OK,
DECLINED or DONE, but what's the appropriate return value that won't
cause the client to think an error occurred?)

tnx!






multiple rapid refreshes - how to handle them.

2001-10-17 Thread Mark Maunder

Is there a standard way of dealing with users on high bandwidth
connections who hit refresh (hold down F5 in IE, for example) many times
on a page that generates a lot of database activity?

On a 10 meg connection, holding down F5 in IE for a few seconds generates
around 300 requests and grinds our server to a halt. The app is written
as a single mod_perl handler that maintains state with Apache::Session
and cookies. Content is generated from a backend mysql database.

tnx!






Re: search engine module? [drifting OT DBI related]

2001-10-15 Thread Mark Maunder

Matt J. Avitable wrote:

 Hi,

  I've written a search engine that searches for jobs in a database based
  on keywords. I'm assembling a string of sql and then submitting it to
  the database based on the user's search criteria. It's working but is

 It sounds like you are writing a web front end for mysql.  I'm not
 sure about modules on cpan about that specifically.  If you wanted to get
 a bit more fancy, you might try DBIx::FullTextSearch.

Thanks. I checked out FullTextSearch on some earlier advice and it's not
exactly what I'm after, but quite useful nonetheless. I've started using
MySQL's MATCH/AGAINST with fulltext indexes instead, and it is extremely
fast (!!), but I'm waiting for a feature that's available in mysql 4.0
(due end of this month) that allows you to use +word and -word syntax to
specify required or unwanted keywords. Also, just as an aside,
MATCH/AGAINST only works with MyISAM tables, so I've had to convert some
of mine from InnoDB at the cost of losing transactions.
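For anyone curious, the query side is just something like this via DBI
(assuming $dbh is a connected handle; the table and column names are
invented, with a FULLTEXT index on (title, description) on a MyISAM
table):

my $sth = $dbh->prepare(q{
    SELECT id, title
    FROM   jobs
    WHERE  MATCH (title, description) AGAINST (?)
});
$sth->execute($keywords);
while (my ($id, $title) = $sth->fetchrow_array) {
    print "$id: $title\n";
}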





Re: search engine module? [drifting OT DBI related]

2001-10-15 Thread Mark Maunder

Mark Maunder wrote:

  I've started using
 MySQL's MATCH/AGAINST with fulltext indexes instead, and it is extremelly
 fast (!!), but am waiting for a feature that's available in mysql 4.0 (due
 end of this month) that allows you to use +word and -word syntax to specify
 required or unwanted keywords. Also just as an asside, match/against only
 works with MyISAM tables so I've had to convert some of mine from InnoDB at
 the cost of losing transactions.

er - lo and behold, mysql 4.0 alpha has been released a few minutes ago by
Monty.
http://www.mysql.com/downloads/mysql-4.0.html









search engine module?

2001-10-12 Thread Mark Maunder

I've written a search engine that searches for jobs in a database based
on keywords. I'm assembling a string of sql and then submitting it to
the database based on the user's search criteria. It's working but is
really simple right now - it just does a logical AND with all the
keywords the user submits. I'd like to include features like the ability
to submit a query like:
(perl AND apache) OR java NOT microsoft

I don't want to reinvent the wheel and I'm sure this has been done a
zillion times, so does anyone know of a module in CPAN that I can use
for this? I'm using MySQL on the back end and DBI under mod_perl, which
runs as a handler.






Re: [OT] What hourly rate to charge for programming?

2001-10-10 Thread Mark Maunder

Purcell, Scott wrote:

 What kind of thread is this?
 I ask a question about modperl on NT and I get riddled from the list for
 using NT. Then we have a thread that goes for two days about hourly charges?

What did you expect? You shoulda been using Win2K! *duck*






[huge OT]Re: Lets Get it on!

2001-10-10 Thread Mark Maunder

Randy Kobes wrote:

 Friendly ribbing aside, let's not lose sight of Scott's original
 sentiment ... In the two years since we've been keeping Win32 mod_perl
 binaries here, there's been an average of about 30 downloads per
 day, suggesting Win32 users make up a fair percentage of mod_perl
 users ...

[advance apologies for persisting with the advocacy thread]
Absolutely. Win32 is an entry point for many developers before they move
to open source, as I've noticed on the Perl beginners list, and I think
it's quite important to support the platform (both technically and
politically) because from an advocacy POV it's a great source of new
recruits (to apache, mod_perl and an open source platform).

[footnote: use linux for its flexibility and built in compiler - win32 is
a pain in the rear on both counts]





Re: POST and GET and getting multiple unsynced requests

2001-10-08 Thread Mark Maunder

 I think that's just a coincidence.  IIRC, the spec doesn't require this to
 work, and it doesn't work in all browsers.  The only real solution is to not
 do it.  PATH_INFO was a good suggestion.  I'd go with that if it can't be
 added to the POST data.

Thanks. I've taken your advice and am using a redirect after form
submission back to the original URL (the URL with the GET args I was
using in the form 'action' attribute) as a workaround.

I can't use PATH_INFO (or $r->path_info()) because I submit forms via GET
at various parts of the application so that users can bookmark the
results (search results, for example).

Both browsers (IE and NS) seem to be submitting both a POST and a GET
request. In the case of the POST request, the GET params are included in
the URL, and both GET and POST params are accessible via $r->args() and
$r->content respectively (provided it is the POST request you're
processing and not the GET - and you can't be sure which). I put a
sniffer on the wire and the headers of the GET and POST requests look
like this:
POST /?step=10&search=Perl HTTP/1.1
GET /?step=10&search=Perl HTTP/1.1
(The POST request has the posted data included at the end of the
header.) So it seems it depends on which request is received first. In
the case of Netscape it's always the GET first; with IE it is mostly the
POST first, but it varies.





POST and GET and getting multiple unsynced requests

2001-10-07 Thread Mark Maunder

Hi all,

I've written a web app as a single mod_perl handler. I started writing
my forms so they would do a POST and GET simultaneously. I did this by
making the form method="POST" and action="/job_details?job=65", for
example. Now I notice that IE and Netscape issue both a POST and a GET
request every time the form is submitted (so I'm logging two requests
for the same URI and MIME type for each form submission; the first is
GET and the second is POST). My problem is that the data returned to the
browser is from the GET request. The page that is generated has content
that is affected by the POSTed data, but this is only visible on a
refresh of the same page.

I've done tests with Netscape and IE. I consistently have this problem
with Netscape; with IE it works most of the time, but approximately
every 20 requests I'll get a page generated from the GET data, not the
POSTed data.

I've included a source snippet from my handler below. If anyone has seen
this before, I'd appreciate any help! (I scoured the guide and archive.
I'm really sorry if I missed it!)

--snip--
sub handler
{
    my $r = Apache::Request->new(shift @_);
    $r->log_error("Request being handled: " . $r->content_type() .
        " - " . $r->uri() . " - " . $r->method());
    # We don't want to handle image, CSS or javascript requests
    if ($r->content_type() =~ m/(^image|^text\/css|javascript)/)
    {
        return DECLINED;
    }
    my $dbh = FUtil::connect_database();
--end snippet--

And the error log shows:
[Sun Oct  7 23:05:38 2001] [error] Request being handled:  -
/job_details - GET
[Sun Oct  7 23:05:38 2001] [error] Request being handled:  -
/job_details - POST
[Sun Oct  7 23:05:38 2001] [error] Request being handled: text/css -
/style.css - GET
[Sun Oct  7 23:05:38 2001] [error] Request being handled:
application/x-javascript - /js.js - GET

The form HTML tag that did this was:
<form action="http://www.freeusall.com/job_details?job=61" method="POST"
name="hotlist_form_jd_61">
This is doing both a POST and a GET.




Re: Porting

2001-09-24 Thread Mark Maunder

Any clues as to your motivation for porting to mod_perl? I've been
trying to sell a mod_perl solution to some Java nuts for some time, and
any help would be much appreciated. What really makes mod_perl better
than Java? Are there any performance benchmarks out there that anyone
knows about? Scalability? JDBC vs. DBI? Child/servlet memory footprint
size?

If someone asks you why you didn't do it in Java, what do you say?
(Besides mentioning Sun's lame license.)

Didn't the eToys guys do some benchmarking? (Perrin?)

Manoj Anjaria wrote:

 Hello all,

 We have an application written in Java using MVC which we would like to port
 to mySQL/Perl platform. We have used Struts,to create tags.
 Any inputs in this regards will be appreaciated.

 Thanks
 Manoj

 _
 Get your FREE download of MSN Explorer at http://explorer.msn.com/intl.asp

--
Mark Maunder
Senior Architect
SwiftCamel Software
http://www.swiftcamel.com
mailto:[EMAIL PROTECTED]





Shared cache with IPC::Shareable

2001-09-19 Thread Mark Maunder

Hi all,

I'm sharing memory between httpd processes using IPC::Shareable. It is
working, but seems to behave inconsistently (memory is often not being
freed, etc.). I'm using it to create common cached areas for file and
database contents shared between httpd children. Is there a better way
to do this, i.e. am I stuck with IPC::Shareable? I'm running mod_perl
and the whole application is running as a single content handler.

It also isn't as fast as I thought it would be - sure, it takes a long
time to do the initial load of all caches on the first request, but I
just thought it would be a little faster than it is. Are there any
performance issues I should be aware of with IPC::Shareable or shared
memory in general?

Thanks!

~mark






Re: keeping client images private

2001-09-12 Thread Mark Maunder

I'm afraid I'm not familiar with (although aware of) Mason, so I can't
help you in that context. I wrote something similar a while ago. When a
user uploads an image file it goes into a common directory that contains
all images. The file is renamed to the following format:
<image file checksum in hex>.<image extension> - .gif, for example.
The checksum ensures that all filenames are unique and offers a quick
way to check whether an image has already been uploaded (just generate a
checksum of the uploaded image and check if the file already exists in
the images directory). This also gives you a single copy of an image
where multiple users may have uploaded the same one.
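A bare-bones sketch of that renaming step, using Digest::MD5 (the paths
and the sub name are just for illustration):

use Digest::MD5 ();
use File::Copy qw(copy);

sub store_upload {
    my ($tmp_path, $orig_name) = @_;
    my ($ext) = $orig_name =~ /(\.[^.]+)$/;    # keep the original extension
    open my $fh, '<', $tmp_path or die "open $tmp_path: $!";
    binmode $fh;
    my $sum = Digest::MD5->new->addfile($fh)->hexdigest;
    close $fh;
    my $dest = "/var/images/$sum$ext";         # assumed image directory
    copy($tmp_path, $dest) unless -e $dest;    # duplicates share one file
    return "$sum$ext";                         # store this name in the DB
}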

Once the image has been stored, I write an entry for each file to a
table in an RDBMS (mysql) which contains the file name
(checksum.extension), description, original file name of the image, date
uploaded, time last viewed, number of hits, etc.

Then just write an apache handler that provides a virtual directory
structure to view each users images. e.g.
http://example.com/images/mark/image1.jpg will be intercepted by the
handler. Handler checks if user is logged in and has access to
/images/mark and if all checks out ok, then handler fetches image1.jpg's
information from the database, fetches the corresponding
checksum.extension file and returns an image/jpeg (or image/gif or
whatever)

You can also do some funky stuff like using Image::Magick to generate
thumbnails on the fly, which are cached in a separate directory. So the
first time a thumbnail is accessed you generate it dynamically; the
second time it is served from the directory cache. You also store the
thumbnails under the checksum of the original image (perhaps with a
different extension) so that if the original changes, the thumbnail will
have to be regenerated.

(Sorry if the above seems a little unstructured - just a brain dump
really).

~mark

will trillich wrote:

 y'all seem to have some clever brains out here, so i'm wondering
 if some of you can offer suggestions--

 what's a good clean way to keep images private per customer?

 i'm using mod_perl and HTML::Mason with session cookies, but
 coming up with a paradigm for storing and revealing images
 privately has got me a bit flummoxed.

 mr. smith has pix which he can see when he logs in to the
 website, and mr. brown has pix of his own, but neither can
 see the other's images. each customer can have two levels of
 imagery (main images/subsidiary images).

 i could have a handler intercept image requests and deny access
 if session-user isn't valid ... should i just make an apache
 alias to handle images as if they're from a certain subdir? and
 then use mod_perl to redirect the requests to the actual images
 internally?

 or actually store the images in actual subdirs of the
 documentroot?

 is there a better/faster/cheaper way?

 i'm sure there's more than one way to do this -- and before i
 take the likely-to-be-most-circuituitous route, i thought i'd
 cull advice from the clever minds on this list...

 --
 [EMAIL PROTECTED]
 http://sourceforge.net/projects/newbiedoc -- we need your brain!
 http://www.dontUthink.com/ -- your brain needs us!

--
Mark Maunder
Senior Architect
SwiftCamel Software
http://www.swiftcamel.com
mailto:[EMAIL PROTECTED]





Re: Help with cookies

2001-08-09 Thread Mark Maunder

If you're chaining handlers, they should all return OK. They will all
get called, so long as they either appear in the config file on the same
line, or have been registered using $r->push_handlers().

One of them must send the header though, or return REDIRECT (for
example) to perform a redirect. Does Apache::Cookie->bake() set both the
regular headers and the error headers?

I usually call the header setting routines manually like so:
$r->header_out('Set-Cookie' => $cookie->as_string()); # and also call
$r->err_header_out('Set-Cookie' => $cookie->as_string());

If you're returning OK, the server assumes you handled the request. So
you should have sent a response page and would probably have used
$r->send_http_header to generate the header for that page.
If you return REDIRECT, the header is sent for you based on what you set
with $r->err_header_out(). So if you want to be sure that the cookie
will be set, you should set it in both the error headers and the regular
headers.

Then, depending on whether you're redirecting or just generating a page,
call:
return REDIRECT;
or call
$r->send_http_header('text/html'); # if you're about to send some HTML

Either way the cookie gets sent. (The err_header_out is for the REDIRECT
and the header_out is for the regular send_http_header.) Also remember
to set the Location header if you're doing a redirect. I use both
err_header_out and header_out to set this too. (I should probably just
be using err_header_out for that.)

~mark.




Robert Landrum wrote:

 At 3:50 PM -0400 8/8/01, Perrin Harkins wrote:
 
 It depends on what's happening in that second module.  If you don't send an
 actual response to the client, headers (including cookies) will not be sent
 out.

 Umm... Is

return OK;

 the correct thing to return when using multiple handlers?  I thought
 DECLINED was the correct status code.  Then the last module (in this
 case MIS_APPS::RHS::Control::Scan) would return OK.

 Robert Landrum

 --
 A good magician never reveals his secret; the unbelievable trick
 becomes simple and obvious once it is explained. So too with UNIX.

--
Mark Maunder
Senior Architect
SwiftCamel Software
http://www.swiftcamel.com
mailto:[EMAIL PROTECTED]





Re: using DBI with apache

2001-08-09 Thread Mark Maunder

DBI works under Apache. Apache::DBI just gives you some performance
gains, like persistent connections. Get the script working with plain
DBI under Apache and then start messing with Apache::DBI. Your immediate
problem is that you need to print "Content-type: text/html\n\n" before
you print anything else (CGI basics).
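Once the script works, turning on persistent connections is just a
matter of loading Apache::DBI before DBI in your startup.pl (or with
PerlModule Apache::DBI in httpd.conf):

# startup.pl
use Apache::DBI;   # must come before DBI so connects become persistent
use DBI;
1;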

Greg Cobb wrote:

 I can run this simple script through perl itself, but when I put it in the
 cgi-bin and try to run it using mod_perl Perl pops up in windows with an
 error.  I assume this means I need something like Apache::DBI?...  I
 originally got Apache in binary form with mod_perl installed and did not
 have to compile anything.  I tried to follow the instructions but i dont
 seem to be able to build Apache::DBI
 I ran
 perl makefile.pl which creates a file called makefile
 then the instructions tell you to run
 make
 make test
 make install

 make by run by itself says 'No terminator specified for in-line operator'
 and i dont have a test or install file that came with the Apache:DBI
 download.  I have a test.pl but that doesnt seem to be what i need.  I am
 running Win 98se if that helps.  Anyone have any suggestions?

 Here is the code I tried to run.
 #!\perl\bin\perl
 use DBI;
 $dbh = DBI->connect('dbi:ODBC:Test1Db');
 $sqlstatement = "SELECT * FROM ASTAB";
 $sth = $dbh->prepare($sqlstatement);
 $sth->execute ||
    die "Could not execute SQL statement ... maybe invalid?";

 #output database results
 while (@row = $sth->fetchrow_array)
   { print "@row\n" }

 _
 Get your FREE download of MSN Explorer at http://explorer.msn.com/intl.asp

--
Mark Maunder
Senior Architect
SwiftCamel Software
http://www.swiftcamel.com
mailto:[EMAIL PROTECTED]





Re: module to hit back at default.ida atack ?

2001-08-06 Thread Mark Maunder

Perhaps we should just keep a central database of where the attempts are
coming from. We could even extend it to work like the RBL - connections
are not allowed from IPs that have attempted the exploit (an explanation
page appears instead of the requested page) and are listed in our
blacklist. That might force ISPs to kick the k1dd13z off their systems.
We could host the db on a webpage (searchable) and make it available for
download by the script that does the banning on a daily/hourly basis. We
could probably extend this to cover a few other exploits if this works.
Would anyone use this?


Sean Chittenden wrote:

Anybody know of any module I can use to hit back at these default.ida bozos
(i.e. keep them away from my IP addresses ?). I'm running apache/modperl on
Win32.
  
  [snip]
   ::grin::  In the post he mentioned about trashing the kernel on NT so
   this might be kinda fun...
 
  Well you might think it's fun but there are those who'd say it's criminal.

 Sorry, you're right.  I meant fun in the same way that Looney
 Toons and Wilie Coyote.  Funny to watch a cartoon fall off a cliff, but
 not necessarily funny in life.

  Please don't promote irresponsible ideas on the mod_perl List.

 My bad script kiddies, go away, grow up, be responsible, and
 look to other security oriented lists such as incidents and bugtraq for
 bad ideas.  -sc

 PS line type=fine personal_opinion=trueBad ideas aren't
 bad, execution of bad ideas is bad./line

 --
 Sean Chittenden

   

--
Mark Maunder
Senior Architect
SwiftCamel Software
http://www.swiftcamel.com
mailto:[EMAIL PROTECTED]





Re: module to hit back at default.ida atack ?

2001-08-06 Thread Mark Maunder

AFAIK most large backbone routers out there don't support application
layer filtering, e.g. filtering based on what type of http request it
is, or what is requested. Too much CPU overhead, methinks.

Some examples: In the case of the user having a dynamically assigned IP address,
the next person assigned that IP who hits any site subscribing to the realtime web
blackhole list (let's call it RWBL) will see a polite message saying this IP has
been used for a hack attempt (with explanation on how to get it unblocked) and
will hopefully report it to their ISP. In the case of the user having a static IP
- well either their server was hacked, or they are the hacker, in which case the
effect will be similar - user will either stop hacking (or patch their server) or
risk being permanently banned from surfing any site subscribing to the RWBL.

To get off the black-hole list is a similar process to getting off the current
mail RBL list. Send a request explaining the cause of the hack-attempt and
assurances that a remedy is in place, or will be shortly.

Any suggestions on where to implement this in the server to ensure
minimal reconfiguration and impact on existing mod_perl handlers? It
needs to be able to block a request based on the contents of a text file
or the type of request, and chuck out an explanation page. It also needs
to be able to append hack attempts to the text file when the IP is not
yet listed. The text file can be stored in the server root somewhere
(like robots.txt) and gathered once daily by the central system. The
logic the central system uses to ban IPs can be something like 'if more
than X hack attempts have been logged by different servers from a
particular IP, it's banned'. Perhaps X can be 7.

Also, a list of banned request URIs can be made available for download
for use by the RWBL checker running on each server, which would let us
adapt to different worms or exploits.
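As a starting point, the per-server check could be a PerlAccessHandler
along these lines (a rough, untested sketch - the blacklist path is an
assumption, and you'd pair it with an ErrorDocument 403 pointing at the
explanation page):

package RWBL::Block;
use strict;
use Apache::Constants qw(OK FORBIDDEN);

my %banned;
my $loaded = 0;

sub handler {
    my $r = shift;
    unless ($loaded) {    # load the local blacklist once per child
        if (open my $fh, '<', '/etc/rwbl/banned_ips.txt') {
            while (<$fh>) { chomp; $banned{$_} = 1 }
            close $fh;
        }
        $loaded = 1;
    }
    return FORBIDDEN if $banned{ $r->connection->remote_ip };
    return OK;
}

1;

Then 'PerlAccessHandler RWBL::Block' plus 'ErrorDocument 403
/blocked.html' in httpd.conf would put the explanation page in front of
banned visitors.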

David Young wrote:

 From: Mark Maunder [EMAIL PROTECTED]
  Perhaps we should just keep a central database of where the attempts are
  coming from.
  We could even extend it to work like the RBL - connects are not allowed from
  IP's that have attempted the exploit

 Would that really help anything? The traffic would still be reaching your
 server and clogging up the net, the only difference is that you'd be
 returning an access denied response rather than a 404.

 What would really help is if all the ISPs out there put filters on their
 routers to catch these requests as close to their source as possible.

--
Mark Maunder
Senior Architect
SwiftCamel Software
http://www.swiftcamel.com
mailto:[EMAIL PROTECTED]





Re: module to hit back at default.ida atack ?

2001-08-06 Thread Mark Maunder

I have a test system up and running. Anyone want to write a mod_perl
handler to redirect to a warning page if the client's IP is in the list?
I'm not really sure which phase would be the least intrusive for
existing applications.

telnet www.swiftcamel.com 
Then hit enter and you'll see the latest list of servers that have
attempted the hack, including the number of attempts per IP address
(comma separated). I only list servers if we've received more than one
attempt on different web servers. I've used our logs to compile the
initial list. (Quite scary how many machines out there are infected.)

You can also dump a list of IP addresses once you connect (one per line)
and they will be added to the database. A blank line ends reception.
Optionally you can add the requested URI after the IP address on the
same line, separated by a comma, and it too will be logged. I'm working
on a web interface to search the list of IPs.

grep default.ida access_log | mail -s 'codered' [EMAIL PROTECTED]
and we'll add the IPs you logged to the system.

Jim Smith wrote:

 On Mon, Aug 06, 2001 at 02:46:54PM +0100, Mark Maunder wrote:
  AFAIK most large backbone routers out there dont support application layer
  filtering e.g. filtering based on what type of http request it is, or what is
  requested. Too much CPU overhead methinks.

  Of course, for those of us in state universities, content filtering makes
  us uneasy wrt first amendment rights, besides the CPU overhead.  Losing
  legitemate content is too much a risk.  It is far easier to cut the
  infected machines off the network until they are fixed.


  Some examples: In the case of the user having a dynamically assigned IP address,
  the next person assigned that IP who hits any site subscribing to the realtime web
  blackhole list (Lets call it RWBL) will see a polite message saying this IP has
  been used for a hack attempt (with explanation on how to get it unblocked) and
  will hopefully report it to their ISP. In the case of the user having a static IP
  - well either their server was hacked, or they are the hacker, in which case the
  effect will be similar - user will either stop hacking (or patch their server) or
  risk being permanently banned from surfing any site subscribing to the RWBL.
 [snip]
  Any suggestions on where to implement this in the server to ensure minimal
  reconfiguration and impact to existing mod_perl handlers? It needs to be able to
  block a request based on the contents of a text file or type of request and chuck
  out an explanation page. Also needs to be able to append hack attempts into the
  text file when the IP is not listed. The text file can be stored in the server
  root somewhere (like robots.txt) and is gathered once daily by the central system.
  The logic that will be used in the central system to ban IP's can be something
  like 'if more than X number of hack attempts have been logged by different servers
  from a particular IP, it's banned'. Perhaps X can be 7.

  If based on IP, use DNS - that's how the email RBLs are implemented.
  Makes a central database easy to maintain.  Take a look at the Sendmail
  rulesets for the RBLS. :)

 --jim

--
Mark Maunder
Senior Architect
SwiftCamel Software
http://www.swiftcamel.com
mailto:[EMAIL PROTECTED]





Re: Throwing die in Apache::Registry

2001-05-09 Thread Mark Maunder

Thanks Tom.

Yeah, for XML::Parser line 236, perhaps we can get Clark (the current
maintainer according to the POD) to change it to
return undef if $err;

Mark.
ps: I'll check that rule (tomorrow. ..must...have...sleep..)

Tom Harper wrote:

 Mark--

 While you may be having problems with segfaults because
 of expat = yes rule--  i was having similar problems
 with XML parser relating to the the die statement.

 I do the same thing as far as eval'ing the parsefile
 call.  Also, I removed the die statement from parser.pm
 (v 2.29 line 240 or so) so it would return a useful error
 message rather than just die uninformatively.

 Maybe this is what you were asking about?

 Tom

 At 09:19 AM 5/4/01 +0100, Matt Sergeant wrote:
 On Fri, 4 May 2001, Perrin Harkins wrote:
 
  on 5/4/01 9:28 AM, Mark Maunder at [EMAIL PROTECTED] wrote:
   I have an Apache::Registry script that is using XML::Parser. The
 parser throws
   a
   'die' call if it encounters a parse error (Why?).
 
  Because it's an exception and the parser can't continue.
 
   I was handling this by
   putting
   the code in an eval block, but this no longer works since all Registry
 scripts
   are already in one huge eval block.
 
  It should still work.  An eval{} is scoped like any other block.  Maybe you
  have a typo?  Post your code and we'll look at it.
 
 More likely is a b0rked $SIG{__DIE__} handler, like fatalsToBrowser. Yick.
 
 --
 Matt/
 
 /||** Founder and CTO  **  **   http://axkit.com/ **
//||**  AxKit.com Ltd   **  ** XML Application Serving **
   // ||** http://axkit.org **  ** XSLT, XPathScript, XSP  **
  // \\| // ** mod_perl news and resources: http://take23.org  **
  \\//
  //\\
 //  \\
 

--
Mark Maunder
[EMAIL PROTECTED]
http://swiftcamel.com/

 Try not.
 Do.
 Or do not.
 There is no try.
 ~yoda





Throwing die in Apache::Registry

2001-05-04 Thread Mark Maunder

Hi,

I'm sure this has been discussed - apologies if it has - but I scoured
the lists and docs and didn't get any help.

I have an Apache::Registry script that uses XML::Parser. The parser
throws a 'die' call if it encounters a parse error (why?). I was
handling this by putting the code in an eval block, but this no longer
works since all Registry scripts are already in one huge eval block. So
whenever I get a parse error, my code ignores the eval block which
encapsulates it and jumps to the end of the Registry eval block,
effectively exiting. How does one 'eval' code that might call 'die'
under Apache::Registry?
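The sort of thing I'm doing looks roughly like this (simplified - the
parser style and file name are just for illustration):

use XML::Parser;

my $parser = XML::Parser->new(Style => 'Tree');
my $tree = eval { $parser->parsefile('/path/to/doc.xml') };
if ($@) {
    # I expect to end up here on a parse error instead of aborting
    warn "XML parse failed: $@";
}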

Mark.





Re: Throwing die in Apache::Registry

2001-05-04 Thread Mark Maunder

OK, this is a little embarrassing... I assumed the script was die'ing
when it hit the XML::Parser routine and that eval wasn't catching the
exception. Well, the Apache child is actually segfaulting. (Excuse: I'm
running virtual hosts with separate logs; I didn't check the main
error_log.) I checked the list archive and there's tons of documentation
about this, and I think I saw a patch. (If anyone has more info, though,
I'd appreciate it.) I'm running XML::Parser 2.30.

Thanks for the help, and sorry about the time waster :)

Perrin Harkins wrote:

 on 5/4/01 9:28 AM, Mark Maunder at [EMAIL PROTECTED] wrote:
  I have an Apache::Registry script that is using XML::Parser. The parser throws
  a
  'die' call if it encounters a parse error (Why?).

 Because it's an exception and the parser can't continue.

  I was handling this by
  putting
  the code in an eval block, but this no longer works since all Registry scripts
  are already in one huge eval block.

 It should still work.  An eval{} is scoped like any other block.  Maybe you
 have a typo?  Post your code and we'll look at it.

 - Perrin

--
Mark Maunder
[EMAIL PROTECTED]
http://swiftcamel.com/

 Try not.
 Do.
 Or do not.
 There is no try.
 ~yoda





Re: modify Server header via a handler

2001-05-02 Thread Mark Maunder

You can get the server string in the header down to a minimum (just
'Apache') by putting
ServerTokens ProductOnly
in your httpd.conf (only supported from 1.3.12 on).
You can then use ap_add_version_component (C API) to add stuff after
that.

IMHO you should at least mention 'Apache' and 'mod_perl' in the header
so we look good on netcraft. Or, if you must, you can change the whole
thing in the source - I think it's src/include/httpd.h.

~mark.

Alistair Mills wrote:

 On Tue, 1 May 2001, will trillich wrote:
 
  On Tue, May 01, 2001 at 12:10:34PM -0700, Randal L. Schwartz wrote:
newsreader == newsreader  [EMAIL PROTECTED] writes:
  
   newsreader randal s. posted a way to do that
   newsreader sometime back.  search for it in
   newsreader the archive.  his stonehenge
   newsreader website apparently uses the same trick.
  
   If he's already doing it in the fixup phase, that's where I'm doing it
   too, so that's probably not going to work.
 
  is it actually possible via perl?
 
  according to doug at
http://www.geocrawler.com/archives/3/182/1997/6/0/1014229/
  we shouldn't get our hopes up.
 

 I struggled to find a way of sending out a custom server response using
 Perl.

 Instead I want into into the Apache source to get it to print out a
 non-stanard server Apache response - I'm sure there might be an easier
 way though?

 --
 [EMAIL PROTECTED]
 http://www.kplworks.com/

  --
  [EMAIL PROTECTED]
  http://sourceforge.net/projects/newbiedoc -- we need your brain!
  http://www.dontUthink.com/ -- your brain needs us!
 

--
Mark Maunder
[EMAIL PROTECTED]
http://swiftcamel.com/

 Try not.
 Do.
 Or do not.
 There is no try.
 ~yoda





Re: an unusual [job request] + taking mod_perl to the commercial world

2001-04-27 Thread Mark Maunder

Well, hopefully the mod_perl community isn't so small that etoys counted
as a sizable fraction :)
I'm ex etoys Europe and have set up a mod_perl webdev company in London
assembling high traffic web sites, so I guess you can count me in as one
of those freed up mod_perl people. I was tempted to email Stas, but
there's no way I could pay his salary. I'm sure a lot of companies out
there would kill to have your name associated with them, though.

Jim Winstead wrote:

 On Fri, Apr 27, 2001 at 10:01:39AM -0700, Michael Lazzaro wrote:
  At 12:00 PM 4/27/01 -0400, JR Mayberry wrote:
  there will be more dreams jobs like you described.. simple fact is, I
  couldn't name more then 3 companies in my area who use it, and I never
  expect to do work with it again.
 
  ... on the other hand, even as recently as one year ago, it was almost
  impossible for our company (in southern california) to find mod_perl
  programmers.  Our last few job searches, tho, we've been able to find a
  *very* good supply of applicants with mod_perl experience... it's no longer
  been an issue.  (Most mod_perl applicants seem to have come by their
  experience from working on college campuses, BTW... which is another
  interesting -- and valuable -- change.  Not the fact that schools use it,
  but the _volume_ of applicants who are now learning it there.)

 well, i suspect a lot of those candidates actually surfaced as other
 idealab-backed companies either tanked or shifted direction. the
 death of etoys freed up a number of mod_perl-savvy developers. :)

 (in all seriousness, though, idealab and many of the companies it
 has spawned is a mod_perl-friendly place.)

 and my experience is that you don't need to hire mod_perl experts --
 specific skillsets are some distance down on the list of things i
 look at in hiring someone. given a good framework to develop in,
 and a good programmer who is willing to learn, mod_perl skills
 will bloom.

 but, outside of the linux companies and covalent, i don't know where
 one would look for a job just developing mod_perl itself.

 jim

--
Mark Maunder
[EMAIL PROTECTED]
http://swiftcamel.com/

 Try not.
 Do.
 Or do not.
 There is no try.
 ~yoda





Re: Initializing CGI Object from $r

2001-04-22 Thread Mark Maunder

Hi,

CGI accepts filehandles, hashrefs, a manually typed query string, or
another CGI object (type 'perldoc CGI'). You don't want to pass it any
of these because they're mainly used for debugging. Just create an
instance of CGI without passing it any params. Since you're writing a
handler, I assume you're just using CGI for its HTML routines and not to
retrieve POST or GET data. You want to use the Apache mod_perl modules
as far as possible for getting input data and outputting headers, HTML
etc.
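Something along these lines, for example (a minimal sketch - the package
name and the page content are made up):

package My::Handler;
use strict;
use CGI ();
use Apache::Constants qw(OK);

sub handler {
    my $r = shift;
    my %args = $r->args;              # GET params via mod_perl, not CGI
    my $q = CGI->new;                 # no arguments needed
    $r->send_http_header('text/html');
    $r->print($q->start_html('Hello'),
              $q->h1('Hello ' . ($args{name} || 'World')),
              $q->end_html);
    return OK;
}

1;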



Wade Burgett wrote:

 Can I initilize a new CGI object just by passing in a request from a
 handler?
 ie

 sub handler {

 my $r = shift;
 my $CGIQuery = new CGI($r);

 };

--
Mark Maunder
[EMAIL PROTECTED]
http://swiftcamel.com/

 Try not.
 Do.
 Or do not.
 There is no try.
 ~yoda