Re: Connection Pooling / TP Monitor

2000-10-29 Thread Gunther Birznieks

I guess part of the question is what is meant by "balanced" with regard to 
the non-Apache back-end servers that were mentioned?

I am also concerned that the original question brings up the notion of 
failover. mod_backhand is not a failover solution. Backhand does have some 
facilities to do some failover (e.g. ByAge weeding), but it's not failover 
in the traditional sense. Backhand is for load balancing, not failover.

While Matt is correct that you could probably write your own load-balancing 
function, the main interesting function in mod_backhand is ByLoad, which, as 
far as I know, is Apache-specific and relies on the Apache scoreboard (or a 
patched version of it).

Non-Apache servers won't have this scoreboard file, although perhaps you 
could program your own server(s) to emulate one even if they aren't running 
mod_backhand.

The other requirement non-Apache servers may have for optimal use with 
mod_backhand is that the load-balanced servers may need to report 
themselves to the main backhand server, since one of the important 
functions is ByAge, which weeds out downed servers (and servers too heavily 
loaded to report their latest stats).

Otherwise, if you need to load-balance a set of non-Apache servers evenly 
and don't need ByLoad, you could always just use mod_rewrite with the 
reverse-proxy/load-balancing recipe from Ralf's guide. This solution would 
get you up and running fast. But the main immediate downside (other than no 
true *load* balancing) is the lack of keep-alive upgrading.
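
Roughly, that recipe boils down to a prg: RewriteMap feeding a little
round-robin script. The paths and host names below are placeholders, so
check the guide for the real thing:

  # httpd.conf on the front-end box (needs mod_proxy for the [P] flag)
  RewriteEngine on
  RewriteMap    lb      prg:/usr/local/apache/conf/lb.pl
  RewriteRule   ^/(.+)$ ${lb:$1}   [P,L]

  #!/usr/bin/perl
  # /usr/local/apache/conf/lb.pl -- hand back a different backend each time
  $| = 1;                          # unbuffered, or mod_rewrite will hang
  my @backends = qw(www1.foo.dom www2.foo.dom www3.foo.dom);
  my $cnt = 0;
  while (<STDIN>) {
      chomp;
      my $server = $backends[$cnt++ % @backends];
      print "http://$server/$_\n";
  }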

I am also not sure whether mod_log_spread has hooks to work with 
mod_backhand in particular, which would make mod_rewrite load balancing 
(the poor man's load balancing) less desirable. I suspect mod_log_spread is 
not backhand-specific, although it's made by the same group; not having 
played with the module yet, I couldn't say for sure.

At 09:24 AM 10/29/00 +, Matt Sergeant wrote:
>On Sat, 28 Oct 2000, Les Mikesell wrote:
>
> > Is there any way to tie proxy requests mapped by mod_rewrite to
> > a balanced set of servers through mod_backhand (or anything
> > similar)?  Also, can mod_backhand (or any alternative) work
> > with non-apache back end servers?  I'm really looking for a way
> > to let mod_rewrite do the first cut at deciding where (or whether)
> > to send a request, but then be able to send to a load balanced, fail
> > over set, preferably without having to interpose another physical
> > proxy.
>
> From what I could make out, I think you should be able to use backhand
>only within certain <Location> sections, and therefore have a request come
>in outside of that, rewrite it to a backhand enabled location and have
>backhand do its thing. Should work, but you'd have to try it.
>
>Alternatively write your own decision module for backhand. There's even a
>mod_perl module to do so (although apparently it needs patching for the
>latest version of mod_backhand).
>
>--
>
>
> /||** Director and CTO **
>//||**  AxKit.com Ltd   **  ** XML Application Serving **
>   // ||** http://axkit.org **  ** XSLT, XPathScript, XSP  **
>  // \\| // ** Personal Web Site: http://sergeant.org/ **
>  \\//
>  //\\
> //  \\

__
Gunther Birznieks ([EMAIL PROTECTED])
eXtropia - The Web Technology Company
http://www.extropia.com/




Re: Connection Pooling / TP Monitor

2000-10-29 Thread G.W. Haywood

Hi guys,

On Mon, 30 Oct 2000, Gunther Birznieks wrote:
> At 09:24 AM 10/29/00 +, Matt Sergeant wrote:
> >On Sat, 28 Oct 2000, Les Mikesell wrote:

Load balancing, failover, etc.

Really useful stuff guys, how about when you write messages like this
putting in some (full) URIs for reference?  Most of the time it isn't
immediately necessary, I know, but I'm thinking that it would make it
so very easy for Geoff Y to cut and paste into the DIGEST.  People who
are floundering around looking for the stuff might get a flying start.

73,
Ged.








Re: Connection Pooling / TP Monitor

2000-10-29 Thread David Hodgkinson

Gunther Birznieks <[EMAIL PROTECTED]> writes:

> I am also concerned that the original question brings up the notion of 
> failover. mod_backhand is not a failover solution. Backhand does have some 
> facilities to do some failover (eg ByAge weeding) but it's not failover in 
> the traditional sense. Backhand is for load balance not failover.

Are we talking about failing "out" a server that's lost the plot, or
bringing a new server "in" as well? Isn't it just a case of defaulting
the apparent load of a failed machine up really high (like infinite)?

-- 
Dave Hodgkinson, http://www.hodgkinson.org
Editor-in-chief, The Highway Star   http://www.deep-purple.com
  Apache, mod_perl, MySQL, Sybase hired gun for, well, hire
  -



Re: [ RFC ] New Module Apache::SessionManager

2000-10-29 Thread Greg Cope

Matt Sergeant wrote:
> 
> On Wed, 25 Oct 2000, Gerald Richter wrote:
> 
> > I have three annotations:
> >
> > 1.)
> >
> > $r->header_out(Location => $r->uri());
> >
> > Although this code works with most browsers, it doesn't conform to the
> > HTTP spec. A Location header must include a host part. Shouldn't be too
> > hard to add something like
> >
> > $r->header_out(Location => 'http://' . $r -> server -> server_hostname .
> > $r->uri());
> 
> + port too.

Ok

I've not had a chance to play with anything as I'm recovering from a
rather long week ...

On another note, I've had little success with SourceForge - I've set a
project up but cannot seem to log in to FTP, upload a CVS snapshot, or
add a description.  I've been in touch with support and am
awaiting more news.

Hence, is it worth a CPAN entry?  (I've not got an account.)

Anyway thanks for the tips.

Greg

> 
> --
> 
> 
> /||** Director and CTO **
>//||**  AxKit.com Ltd   **  ** XML Application Serving **
>   // ||** http://axkit.org **  ** XSLT, XPathScript, XSP  **
>  // \\| // ** Personal Web Site: http://sergeant.org/ **
>  \\//
>  //\\
> //  \\



Re: [ RFC ] New Module Apache::SessionManager

2000-10-29 Thread Gunther Birznieks

How long have you been a member of SourceForge, and when was the
project created?

Sometimes changes (e.g. giving you write access to the project
directory) take the famous 6-hour wait for their cluster of machines to
get up to date.

You know that you are the default project admin, but if I remember 
correctly you have to give yourself write privileges to the project just 
like anyone else on the project list there.

Anyway, we've had similar issues with SourceForge. In the FAQ a wait
like that sounds OK, but in practice it does get frustrating. However,
in the end, it's a lot easier to have the CVS infrastructure hosted
somewhere else, cuz it's a pain in the ass to set up CVS for anonymous
access securely, plus all the tools surrounding it (e.g. cvsweb), and
keep them up to date as security patches come out.

With that said though, I think their support has been responsive to us
for a free service. We've usually had 24-hour turnaround on any question
(except no answer on weekends -- but I've never had a high-priority
support issue to warrant that).

Later,
Gunther

At 12:53 PM 10/29/00 +, Greg Cope wrote:
>Matt Sergeant wrote:
> >
> > On Wed, 25 Oct 2000, Gerald Richter wrote:
> >
> > > I have three annotations:
> > >
> > > 1.)
> > >
> > > $r->header_out(Location => $r->uri());
> > >
> > > Although this code works with most browsers, it doesn't conform to
> > > the HTTP spec. A Location header must include a host part. Shouldn't
> > > be too hard to add something like
> > >
> > > $r->header_out(Location => 'http://' . $r -> server -> server_hostname .
> > > $r->uri());
> >
> > + port too.
>
>Ok
>
>I've not had a chance to play with anything as I recovering from a
>rather long week 
>
>On another note I've had little success with Sourceforge - I've set a
>project up but cannot seem to login to FTP nor upload a CVS snapshot,
>nor add a description .  I've been in touch with support and am
>awaiting more news.
>
>Hence is it worth a CPAN entry ? (I've not got an account )
>
>Anyway thanks for the tips.
>
>Greg
>
> >
> > --
> > 
> >
> > /||** Director and CTO **
> >//||**  AxKit.com Ltd   **  ** XML Application Serving **
> >   // ||** http://axkit.org **  ** XSLT, XPathScript, XSP  **
> >  // \\| // ** Personal Web Site: http://sergeant.org/ **
> >  \\//
> >  //\\
> > //  \\

__
Gunther Birznieks ([EMAIL PROTECTED])
eXtropia - The Web Technology Company
http://www.extropia.com/




Re: how to really bang on a script?

2000-10-29 Thread Adi

martin langhoff wrote:
> 
> Chris,
> 
> i'd bet my head a few months ago someone announced an apache::bench
> module, that would take a log and run it as a benchmarking sequence of
> HTTP requests. just get to the list archives and start searching with
> benchmarks and logs. CPAN is your friend, also.

It was HTTPD::Bench::ApacheBench.  It is a Perl API to ab.  It doesn't take a
log per se, it simply sends sequences of HTTP requests and benchmarks the
results.  I'm sure you could very easily write a script to parse a log and then
make a benchmarking run out of it.
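
Something like this ought to do it (the method names here are from my
memory of the module's synopsis, so double-check them against the docs
before relying on this):

  use strict;
  use HTTPD::Bench::ApacheBench;

  # pull GET request paths out of a common-log-format access log
  my @urls;
  open LOG, "access_log" or die "can't open access_log: $!";
  while (<LOG>) {
      next unless m{"GET (\S+) HTTP/1\.[01]"};
      push @urls, "http://localhost$1";
  }
  close LOG;

  # replay them as a single benchmarking run
  my $b = HTTPD::Bench::ApacheBench->new;
  $b->concurrency(5);
  $b->add_run(HTTPD::Bench::ApacheBench::Run->new({ urls => \@urls }));
  $b->execute;
  printf "%d requests in %d ms\n", $b->total_requests, $b->total_time;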

-Adi




Re: [ RFC ] New Module Apache::SessionManager

2000-10-29 Thread Greg Cope

darren chamberlain wrote:
> 
> Greg Cope ([EMAIL PROTECTED]) said something to this effect:
> > > $r->header_out(Location => 'http://' . $r -> server -> server_hostname .
> > > $r->uri());
> >
> > Seems easy - will add it in.
> 
> It's not that simple, of course -- you need to maintain port numbers and
> all that. I recommend using Apache::URI -- create a new Apache::URI object,
> set its attributes from the Apache object, and then call unparse on it.

I don't see the complication - this appears to work ok:

  my $uri = Apache::URI->parse($r);

  # hostinfo gives the port if necessary - otherwise not
  my $hostinfo = $uri->hostinfo;
  my $scheme   = $uri->scheme . '://';

  $redirect = $scheme . $hostinfo . '/' . $session_id . '/' . $rest . $args;

This should always give the following redirect URI:

http://www.foo.com/456456456456456/original_request.html

with the scheme and port changing as necessary.

The only possible issue is if the hostname / scheme contain duff
(unencoded) chars - which appears illogical to me.


Greg

> 
> (darren)
> 
> --
> In the fight between you and the world, back the world.



Re: [ RFC ] New Module Apache::SessionManager

2000-10-29 Thread Greg Cope

Gunther Birznieks wrote:
> 
> How long have you been a member of sourceforge and when was the project
> created?

7 days ago.

I am the admin - it's called - wait for the great surprise - Session
Manager.

> 
> Sometimes it takes changes (eg giving you access to the project directory
> with write permissions) the famous 6 hour wait for their cluster of
> machines to get up to date.
> 
> You know that you are the default project admin, but if I remember
> correctly you have to give yourself write privileges to the project just
> like anyone else in the project list there.
> 
> Anyway, we've had similar issues with SourceForge. As a FAQ, it sounds OK
> to have a wait like that, but in practice it does get frustrating. However,
> in the end, it's a lot easier to have the CVS infrastructure hosted
> somewhere else cuz it's a pain in the ass to set up CVS for anonymous
> access securely and then all the tools surrounding it (eg cvsweb) and keep
> them up to date as security patches come out.
> 
> With that said though, I think their support has been responsive to us for
> a free service. We've usually had 24 hour turnaround on any question
> (except no answer on weekends -- but I've never had a high priority support
> issue to warrant that).

No, I've no argument with the service at all - it's great.

It could do with a few obvious touches like "Go here to add a
description" if $description == undef && user eq 'admin'.

I've managed to add a description etc ... but as yet have not mastered
uploading a file.

Although this is not mod_perl related - because it's about SourceForge -
I am trying to sort out a mod_perl module, hence any clues as to how to
upload a file would be welcome.

Greg


> 
> Later,
> Gunther
> 
> At 12:53 PM 10/29/00 +, Greg Cope wrote:
> >Matt Sergeant wrote:
> > >
> > > On Wed, 25 Oct 2000, Gerald Richter wrote:
> > >
> > > > I have three annotations:
> > > >
> > > > 1.)
> > > >
> > > > $r->header_out(Location => $r->uri());
> > > >
> > > > Although this code works with most browsers, it doesn't conform to
> > > > the HTTP spec. A Location header must include a host part. Shouldn't
> > > > be too hard to add something like
> > > >
> > > > $r->header_out(Location => 'http://' . $r -> server -> server_hostname .
> > > > $r->uri());
> > >
> > > + port too.
> >
> >Ok
> >
> >I've not had a chance to play with anything as I recovering from a
> >rather long week 
> >
> >On another note I've had little success with Sourceforge - I've set a
> >project up but cannot seem to login to FTP nor upload a CVS snapshot,
> >nor add a description .  I've been in touch with support and am
> >awaiting more news.
> >
> >Hence is it worth a CPAN entry ? (I've not got an account )
> >
> >Anyway thanks for the tips.
> >
> >Greg
> >
> > >
> > > --
> > > 
> > >
> > > /||** Director and CTO **
> > >//||**  AxKit.com Ltd   **  ** XML Application Serving **
> > >   // ||** http://axkit.org **  ** XSLT, XPathScript, XSP  **
> > >  // \\| // ** Personal Web Site: http://sergeant.org/ **
> > >  \\//
> > >  //\\
> > > //  \\
> 
> __
> Gunther Birznieks ([EMAIL PROTECTED])
> eXtropia - The Web Technology Company
> http://www.extropia.com/



Re: Connection Pooling / TP Monitor

2000-10-29 Thread Gunther Birznieks

At 12:21 PM 10/29/00 +, David Hodgkinson wrote:
>Gunther Birznieks <[EMAIL PROTECTED]> writes:
>
> > I am also concerned that the original question brings up the notion of
> > failover. mod_backhand is not a failover solution. Backhand does have some
> > facilities to do some failover (eg ByAge weeding) but it's not failover in
> > the traditional sense. Backhand is for load balance not failover.
>
>Are we talking about failing "out" a server that's lost the plot, or
>bringing a new server "in" as well? Isn't it just a case of defaulting
>the apparent load of a failed machine up really high (like infinite)?

This question gets into the realm of stuff that I am not really well 
qualified to answer. However, I think the way it works is that there are 
several candidacy functions that successively whittle down the list of 
servers to direct a given request to.

Simulating a failed machine by defaulting it to infinite load is a bit 
odd in mod_backhand, for a couple of reasons.

1) The ByLoad candidacy function relies on resource information having 
been broadcast by the potential backhand destinations, not something that 
is collected by the backhand origin.

Should a backhand destination server go down, it will not broadcast 
itself and ByLoad will not see the resource update.  In my experience, 
few servers ever know they are going down before something catastrophic 
happens. They may complain about something, but they don't know they are 
going down. Of course, there are cases when a machine knows it is on the 
doomed list, but I would argue that this is, unfortunately, a rare case.

In other words, the way mod_backhand's ByLoad function works would require 
mod_psychic to be compiled as well. :)

2) This then leads to the natural thing that you were probably thinking 
(?)... which is that ByLoad might end up pinging the destination server to 
make sure it is up before distributing the load to it.

Unfortunately, I don't think that this is in ByLoad. Or at least it's not 
documented at http://www.backhand.org/mod_backhand/

Also, Theo's slide http://www.backhand.org/ApacheCon2000/EU/img4.htm 
explicitly x'ed out the fail-over part of mod_backhand as a solution.

However, the question is at what point in the chain the ping candidacy 
function would run. If you put it too early, you waste time pinging all 
the servers. If you put it too late, you might have too few machines left 
to test. Let's go through an example of this...

Destination servers 1,2,3,4,5,6,7,8 are mod_backhand'ed... Let's assume 
that the load is lowest on the lowest-numbered server and highest on the 
highest-numbered server.

A reasonable example of a candidacy chain is the following: ByAge, 
ByRandom, ByLog, ByLoad. Let's assume that servers 5-8 have just gone 
down because someone decided to purchase one big UPS for all 4 servers 
instead of separate ones, and the UPS just burned out and shorted the 
power, taking all 4 servers down with it.

So let's say 5 seconds have gone by with requests..

1. ByAge says that they all responded within the last 20 seconds (this 
is the default)...

Now, this provides some failover, but 20 seconds can be a long time for a 
server to be down and not weeded out.  In this case, only 5 seconds have 
gone by, so all 8 are still seen by backhand as being up.

2. ByRandom randomizes the list (1,2,3,4,5,6,7,8); let's say this 
becomes 8,6,5,1,3,2,7,4.

3. ByLog strips everything but the first log2(n) servers (where n is the 
number of elements in the list). Thus, for 8 elements, we now keep 3: 
8,6,5.

4. ByLoad checks the reported loads and then distributes the request to 
5, which has the lowest load. But whoops... 5 is down. Remember, 5-8 went 
down.

Now it would be smart to build a ping into ByLoad, but that still 
wouldn't help here, because 8, 6 and 5, the servers left after step 3, 
are all down too.
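
To make that concrete, here's a toy Perl rendition of the whittling in
the example. This is not mod_backhand's real API or code, just the
arithmetic of the scenario above:

  use strict;
  use List::Util qw(shuffle);

  my %load    = map { $_ => $_ } 1 .. 8;  # pretend load == server number
  my @servers = (1 .. 8);                 # 5-8 are dead, but their last
                                          # broadcast stats still look fine

  # ByRandom: shuffle the candidate list
  @servers = shuffle(@servers);

  # ByLog: keep only the first log2(n) candidates (rounded)
  my $keep = int(log(scalar @servers) / log(2) + 0.5);
  @servers = @servers[0 .. $keep - 1];

  # ByLoad: pick the survivor with the lowest *reported* load, which can
  # easily be one of the dead boxes
  my ($choice) = sort { $load{$a} <=> $load{$b} } @servers;
  print "request goes to server $choice\n";

Run it a few times: whenever only dead servers survive the first two
cuts, ByLoad happily picks one of them.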

You also can't put a ByPing candidacy function at the start of the chain, 
because that would mean every request generates pings to every server 
asking whether it is working, which would be quite intense and would 
defeat the performance advantage of the multicast broadcast of status 
data.

The moral is that, to be more accurate, mod_backhand would actually have 
to build something into the candidacy mechanism to tell it to start all 
the candidacy functions over from scratch and wipe that server off the 
list.

If all the servers are down, then there's nothing to be done. But if at 
least one is up, that one should be the chosen one.

However, my understanding from the mod_backhand talk and the 
documentation is that failover is not discussed as a goal of 
mod_backhand, and that there are other products to recommend, such as 
Alteon/BIGip/whatever switches or other such failover products.

Anyway, I think that to some degree it does make sense that, within the 
context of the original mod_backhand server distributing the connections, 
there should be some failover for the destinations, to back up the ByAge 
function at the very end of all the candidacy functions.

[ ANNOUNCE ] Apache::SessionManager-0.06

2000-10-29 Thread Greg Cope

Dear All

I've finally got a tarball onto SourceForge (but that's another [OT]
story ;-)

Announcing Apache::SessionManager.

For those that do not know, this is a (near) transparent session manager
module - it will get (and optionally set) a session ID from a client
request.  It does no more than this with the ID - authenticating it /
checking validity is up to you.  It is supposed to complement
Apache::Session, which can be used as the backend session store.

It has some perldoc in the Apache::SessionManager.pm file.

There is an example startup.pl in the example dir. of the tar ball.

It is quite configurable in that you can change its behaviour to taste -
see the perldoc / source.

It works in my development environment.

It is BETA and the API may change.  Use it at your own risk!  The usual
warranty applies - in that if it breaks please send any bits to
/dev/null, and any patches / bug fixes to me!

Please send me any feedback directly.

A BIG thank you to all who have helped / sent me feedback / ideas, and
also to Doug for mod_perl, and to the Apache and Perl authors.

Oh, Find it here:

http://sourceforge.net/projects/sessionmanager/

Regards

Greg Cope



Re: [ ANNOUNCE ] Apache::SessionManager-0.06

2000-10-29 Thread Bill Moseley

At 05:24 PM 10/29/00 +, Greg Cope wrote:
>Announcing Apache::SessionManager.

Hi Greg,

Here's a couple of other comments.

Don't forget to keep track of args on redirects:

GET /a5cc39a8c110566e41b5b8efafc2a055/index.html/abc/123?query=abc http/1.0
Cookie: SESSION=cb74254c1de96365e91fa6d6d481f952

HTTP/1.1 302 Found
Date: Sun, 29 Oct 2000 18:28:54 GMT
Server: Apache/1.3.14 (Unix) mod_perl/1.24_01
Location: /index.html/abc/123   <<== lost the args here.
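
A rough sketch of keeping them when building the Location header
(variable names here are illustrative, not the module's own):

  # inside the translation handler; REDIRECT comes from Apache::Constants
  # and $redirect already holds the session-prefixed path
  my $args = $r->args;                   # the raw query string, if any
  $redirect .= '?' . $args if defined $args && length $args;
  $r->header_out(Location => $redirect);
  return REDIRECT;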

And don't forget about the use of DirectoryIndex:
GET /index.html http/1.0

HTTP/1.1 302 Found  <<== here's your redirect

Now this gets through:
GET / http/1.0

HTTP/1.1 200 OK


I'm also unclear about excluding some files with $NON_MATCH.  Perhaps I
didn't set it up correctly.  

If the session is in the URL, and a browser uses relative links it will try
to use that session for all links.  So if $NON_MATCH is used to ignore
.gif, for example, I see this:

File does not exist:
/usr/local/apache/htdocs/f0d960ddbbe1e82ca55ed2372447751e/apache_pb.gif

You might consider moving some of your code into other handlers later in
the request and just let the trans handler extract the session id.  That
way you can use <Location> and friends to configure what requires a
session and what doesn't, and you can use PerlSetVars to control behavior
section-by-section in httpd.conf.

Hope some of this helps.

Have fun,

Bill Moseley
mailto:[EMAIL PROTECTED]



Re: how to really bang on a script?

2000-10-29 Thread Christopher L. Everett

Adi wrote:
> 
> martin langhoff wrote:
> >
> > Chris,
> >
> > i'd bet my head a few months ago someone announced an apache::bench
> > module, that would take a log and run it as a benchmarking secuence of
> > HTTP requests. just get to the list archives and start searching with
> > benchmarks and logs. CPAN is your friend, also.
> 
> It was HTTPD::Bench::ApacheBench.  It is a Perl API to ab.  It doesn't take a
> log per se, it simply sends sequences of HTTP requests and benchmarks the
> results.  I'm sure you could very easily write a script to parse a log and then
> make a benchmarking run out of it.

Yes, I considered ab, and I did find that HTTPD::Bench::ApacheBench,
while excellently done and copiously documented, isn't quite what I need:

1) I want to spoof the IP addresses of the browsers (I just realized that
   since I'm using mod_proxy_add_forward anyway, I can make the requester
   script behave as a proxy; the rest is cookbook, see the sketch after
   this list).  I can't find provision for that in the interface for
   HTTPD::Bench::ApacheBench.
2) Record the query parameters as well as the response's MD5 checksum
   directly in a database table on the fly.
3) The interface is more suited to setting up, then executing a batch run
   programmatically, rather than replaying a log.
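
Something like this is what I have in mind for (1) and (2); the URL and
the forged IP are made-up placeholders:

  use strict;
  use LWP::UserAgent;
  use HTTP::Request;
  use Digest::MD5 qw(md5_hex);

  my $ua  = LWP::UserAgent->new;
  my $req = HTTP::Request->new(GET => 'http://backend.example.com/foo.cgi?bar=baz');
  $req->header('X-Forwarded-For' => '10.1.2.3');  # the "browser" IP being replayed
  my $res = $ua->request($req);

  # the code, checksum and size could go straight into a database table here
  printf "%s  md5=%s  %d bytes\n",
         $res->code, md5_hex($res->content), length($res->content);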

Having examined the ApacheBench.pm source, I don't see how I can make it
do what I want by subclassing it.  Also the code is a little bit
mystifying to me in that the last line in the execute method,
"return $self->ab;", is the only mention of the class method "ab" in the
entire file.  Obviously I have _much, much_ more to learn ... :)

--Christopher

Christopher L. Everett
[EMAIL PROTECTED]



Out Of Memory While Running Apache_OWA

2000-10-29 Thread Mark Kirkwood

Dear list,

I am getting this error "Out of memory during large request for - bytes at OWA.pm 
line 347" in the Apache error log when attempting to run any Oracle PLSQL procedure.

Even trivial procedures like  :

procedure hello is
begin
  htp.p('hello');
end;

give the above error.


There seems to be quite a lot of free memory available on the server. Is 
there some mod_perl parameter I need to tweak?

I am running Apache 1.3.12/Mod_perl 1.24/OWA-0.7 on HPUX 11.00
 

Thanks

Mark




Re: Apache::GzipChain

2000-10-29 Thread G.W. Haywood

Hi again,

On Sat, 28 Oct 2000, G.W. Haywood wrote:
> On Sat, 28 Oct 2000, Jerrad Pierce wrote:
> > Is anybody using GzipChain?
> IIRC, Josh said he was.

There are apparently some problems with IE claiming to support it and
then not supporting it.  Quote from Josh, edited to preserve anonymity:

--
The compression stuff is amazing, for 10% CPU, a 40K doc like [menu]
can be squeezed down to 5K.  I already shaved off 10% request time by
optimizing [module names] so that's a wash.  There was a problem at
[client name] with a couple users with proxy configured for IE, but
not really using one for some reason, and I never worked past that
issue, caught up in the rest.
--

> > Is there some known means of verifying that it is in fact working properly?

Have you some reason to suspect it isn't?  Send something to yourself?
Ask your users?
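
One quick check from the outside, assuming plain LWP on the client side
(the URL is a placeholder): ask for gzip explicitly and look at the
Content-Encoding header that comes back.

  #!/usr/bin/perl -w
  use strict;
  use LWP::UserAgent;
  use HTTP::Request;

  my $ua  = LWP::UserAgent->new;
  my $req = HTTP::Request->new(GET => 'http://localhost/some/page.html');
  $req->header('Accept-Encoding' => 'gzip');  # claim gzip support, like a browser
  my $res = $ua->request($req);

  print "Status:           ", $res->status_line, "\n";
  print "Content-Encoding: ", ($res->header('Content-Encoding') || '(none)'), "\n";
  print "Body length:      ", length($res->content), " bytes\n";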

73,
Ged.






[ ANNOUNCE ] Apache::SessionManager-0.07

2000-10-29 Thread Greg Cope

Bill Moseley wrote:
> 
> At 05:24 PM 10/29/00 +, Greg Cope wrote:
> >Announcing Apache::SessionManager.
> 
> Hi Greg,
> 
> Here's a couple of other comments.

I should have mentioned that this was my first bit of public code - so
be gentle ...

> 
> Don't forget to keep track of args on redirects:
> 
> GET /a5cc39a8c110566e41b5b8efafc2a055/index.html/abc/123?query=abc http/1.0
> Cookie: SESSION=cb74254c1de96365e91fa6d6d481f952
> 
> HTTP/1.1 302 Found
> Date: Sun, 29 Oct 2000 18:28:54 GMT
> Server: Apache/1.3.14 (Unix) mod_perl/1.24_01
> Location: /index.html/abc/123=== lost the args here.

I thought I had this covered ages ago ... I've just tried this on my
setup and I do not lose the args, e.g.

Netscrape 4.75 on Linux / Redhat

switch cookies off in preferences - redirect on:

get Cart/cat?foo=bar

redirect to:

get /{insert session id here}/Cart/cat?foo=bar

But I was not checking COOKIE_CHECK.

A few minutes later, that should be fixed ...

I have not been doing a COOKIE_CHECK-style redirect against a
foo.html?arg=bar style GET request - all mine have been simple HTML
pages.

Thanks.

> 
> And don't forget about the use of DirectoryIndex:
> GET /index.html http/1.0
> 
> HTTP/1.1 302 Found  <<== here's your redirect
> 
> Now this gets through:
> GET / http/1.0

Hum ...

Nice one - I had not tried this.  It took me a while to trap a '/'
request (it would go into an endless loop and blow out when the GET
request topped 4k, mostly full of session IDs).

Not sure how to fix this - ideas anyone ?

I do the following to trap a '/' style request - this is where I am
losing the /index.html (that's getting turned into a simple '/').

  if ($redirect !~ m#/$# && -d $r->lookup_uri($redirect)->filename) {
      $redirect .= '/';
  }

Is this a major issue?  (As Apache should, if I am not mistaken, turn
that back into index.html or whatever the DirectoryIndex directive says.)

As Apache has yet to fully do the URI translation, I appear to be missing
this and am assuming it's a '/' on its own ?!?!

> 
> HTTP/1.1 200 OK
> 
> I'm also unclear about excluding some files with $NON_MATCH.  Perhaps I
> didn't set it up correctly.
> 
> If the session is in the URL, and a browser uses relative links it will try
> to use that session for all links.  So if $NON_MATCH is used to ignore
> .gif, for example, I see this:
> 
> File does not exist:
> /usr/local/apache/htdocs/f0d960ddbbe1e82ca55ed2372447751e/apache_pb.gif

I wondered about this for some time - as I use a separate virtual host
for static content {gif|jpeg|js|css} files (with logging turned off).

Err - on this point I am a plank and offer my apologies - a few '#' too
many and this functionality no longer works - whoops ...

In the meantime, remove the hashes that comment out lines 48 to 51
inclusive; also in the source is the line:

$NON_MATCH = '\.gif|\.jpeg|\.jpg';# ignore images

So if you only want gifs try:

$NON_MATCH = '\.gif';


> 
> You might consider moving some of your code into other handlers later in
> the request and just let the transhandler extract out the session id.  That
> way you can use  and friends to configure what requires a
> session and what doesn't, and you can use PerlSetVars to control behavior
> section-by-section in httpd.conf.

Gerald has already suggested this - I was thinking of controlling
directory access via the match variable.  Why?  Well, a PerlSetVar gets
checked on each request - performance-wise a little nasty.  This is,
however, due to most of my projects being easy to split on a URI '/foo/'
entry, i.e. I know which parts of a URI need sessioning, as it were.

I was going to go for a PerlSetVar as per Gerald's suggestion - but am
having second thoughts.  What do you|everyone think - using globals (a
la Apache::DBI style) is not as clean, but easy and fast, whilst
PerlSetVar is slower yet better to configure.

> Hope some of this helps.

Yup, you spotted a few bugs I had not seen - thanks

> 
> Have fun,

Define fun ;-)

I have found this one of the most rewarding coding exercises recently -
why:

a) Its complex.

b) Having all the mod_perl people have a go at breaking it (which they
have done ! )

c) fixing it - which has thus far been quite easy (famous last words
;-)

I'll be uploading a version 0.07 with the fixes outlined above.

Thanks Bill, much appreciated, as you have tested it in ways that I had
not thought of (and I've tried a few); hopefully v0.07 should fix most
of the above.

Thanks again.

Greg

> 
> Bill Moseley
> mailto:[EMAIL PROTECTED]



Re: [ ANNOUNCE ] Apache::SessionManager-0.07

2000-10-29 Thread Gunther Birznieks

At 01:31 AM 10/30/2000 +, Greg Cope wrote:
>[...snip...]
> >
> > And don't forget about the use of DirectoryIndex:
> > GET /index.html http/1.0
> >
> > HTTP/1.1 302 Found  <<== here's your redirect
> >
> > Now this gets through:
> > GET / http/1.0
>
>Hum ...
>
>Nice one - I had not tried this  It took me a while to trap a  '/'
>request (it would go into an endless loop and blow out when the GET
>request topped 4k, mostly full of session ID's).
>
>Not sure how to fix this - ideas anyone ?
>
>I do the following to trap a '/' style request - this is where I am
>losing the /index.html (that's getting turned into a simple '/').
>
>if ($redirect !~ m#/$# && -d $r->lookup_uri($redirect)->filename) {
> $redirect .= '/';
>   }
>
>Is this a major issue ? (as apache should if I am not mistaken turn that
>back into an index.html or whatever is the directory index directive)

This is an issue if your index.html requires the session id. So if you 
redirect to / you'll lose it. It's not bad if index.html is static, but it 
could be generated via a handler, or perhaps it's an index.cgi.

>As apache has yet fully to do the URI translation I apear to be missing
>this an assuming its a '/' on its own ?!?!
>
> >
> > HTTP/1.1 200 OK
> >
> > I'm also unclear about excluding some files with $NON_MATCH.  Perhaps I
> > didn't set it up correctly.
> >
> > If the session is in the URL, and a browser uses relative links it will try
> > to use that session for all links.  So if $NON_MATCH is used to ignore
> > .gif, for example, I see this:
> >
> > File does not exist:
> > /usr/local/apache/htdocs/f0d960ddbbe1e82ca55ed2372447751e/apache_pb.gif
>
>I wondered on this for some time - as I use a new virtual host for
>static content {gif|jpeg|js|css} files (with logging turned off).
>
>Err - on this point I am a plank and offer my apologies - a few '#' too
>many and this functionality no longer works - whops 
>
>In the meantime remove the hashes that
>comment out the lines between 48 and 51 inclusive, also in the source is
>a line:
>
>$NON_MATCH = '\.gif|\.jpeg|\.jpg';# ignore images
>
>So if you only want gifs try:
>
>$NON_MATCH = '\.gif';

So what is the logic here? You must always process an existing session id 
for images because they will be in the path, but you just shouldn't 
generate the session id if one does not exist for these mime types.


> >
> > You might consider moving some of your code into other handlers later in
> > the request and just let the transhandler extract out the session id.  That
> > way you can use  and friends to configure what requires a
> > session and what doesn't, and you can use PerlSetVars to control behavior
> > section-by-section in httpd.conf.
>
>Gerald has already suggested this - I was thinking of controlling
>directory access via the match variable.  Why ? well perlset var gets
>checked each request - performance wise a little nasty.  This is however
>due to most of my projects being a easy to split on a URI '/foo/' entry
>i.e I know which parts of a URI need sessioning as it were.
>
>I was going to go for a perlset var as per Gerald Suggestions - but am
>having secound thoughts - What do you|everyone think - using globals (a
>la Apache::DBI style) is not as clean, but easy and fast whilst
>perlsetvar is slower yet better to configure.

I think you should use a hybrid. Globals for providing a server-wide 
DEFAULT and then PerlSetVar that overrides the global on a per-config 
basis. That way everyone wins.
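
Something like this is the shape I mean (the directive and variable names
here are invented for the example, not the module's real ones):

  package Apache::SessionManager;
  use strict;
  use Apache::Constants qw(DECLINED);
  use vars qw($NON_MATCH);

  # server-wide default, settable once from startup.pl
  $NON_MATCH = '\.gif|\.jpe?g|\.css|\.js';

  sub handler {
      my $r = shift;
      # a PerlSetVar in a <Location> (or wherever) overrides the global
      my $non_match = $r->dir_config('SessionManagerNonMatch') || $NON_MATCH;
      return DECLINED if $r->uri =~ /(?:$non_match)$/i;
      # ... normal session handling goes here ...
      return DECLINED;
  }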

> > Hope some of this helps.
>
>Yup, you spotted a few bugs I had not seen - thanks
>
> >
> > Have fun,
>
>Define fun ;-)
>
>I have found this one of the most rewarding coding exercises recently -
>why:
>
>a) Its complex.
>
>b) Having all the mod_perl people have a go at breaking it (which they
>have done ! )
>
>c) fixing it - which is has thus far been quite eaay (famous last words
>;-)
>
>I'll be uploading a version 0.07 with the fixes outlined above.

By the way, when the bugs are worked out of the Perl version, how difficult 
do you think it would be to convert to C?

I think it's a good module in Perl if you are running your apps in Perl, 
but I have a client that uses Windows NT, Java servers, and mod_perl... So 
I'd like to have a module like this on the front-end server and then proxy 
the requests to back-end servers by setting a custom auth header that the 
backend servers would recognize.

For a module like this, I'd prefer it not to be mod_perl because it seems 
like a fat thing to have on a front-end proxy server. On the other hand, I 
don't relish writing it in C unless there is a really solid Perl version 
that I can steal a well-tested algorithm from. :)

Later,
Gunther




maximum (practical) size of $r->notes

2000-10-29 Thread Todd Finney

This is a follow-up on a question that I asked a couple of 
months ago.  The subject was "executing a cgi from within a 
handler (templating redux)", dated 8/23/00.

The gist of the matter is that we need a handler which will 
serve html pages ('content files') inside of other html 
files ('template files'), while sticking other files and 
scripts ('component files') into the template at the same 
time.  Quasi-frames, if you will.  We've already covered 
why we didn't pick one of the readily-available packages to 
do this.

We've had a working handler that does _almost_ everything it 
needs to do.  When the user requests a page, it figures out 
which template to use (based on the page requested), which 
component set to use (based on the user and the page), 
rolls the whole thing together and sends it along.

The wrapper can handle both static files and scripts as 
components or content files, and works really well most of 
the time.  However, we've run into a problem when a page 
needs to redirect after execution, such as a login page.

The problem is that when a component or content file is a 
script, the server executes that script when it encounters 
it in the template, a la

- hey, the user wants foo.html
- the user is a member of group 'coders', and their
  component path is /www/htdocs/components/coders/
- foo.html wants template set 1
- go get /www/htdocs/components/coders/tmpl_1, and open
- begin printing the template file to the browser.  As the
  file goes by, watch for <[tags]> containing insertion points.
- hey, there's <[head]>, print or execute
  /www/htdocs/components/coders/head_1
- hey, there's <[tool]>, print or execute
  /www/htdocs/components/coders/tool_1
- hey, there's <[cont]>, print or execute foo.html
- hey, there's <[foot]>, print or execute
  /www/htdocs/components/coders/foot_1
- finish printing /www/htdocs/components/coders/tmpl_1 and
  close

If /www/htdocs/components/coders/tool_1 has a redirect call 
in it, it's too late for the browser to actually do 
anything about it.

I managed to corner Nathan in New York (thanks, Nathan!). 
He recommended a two-stage handler, one that processes the 
components and content, and another that actually handles 
the printing.  Using $r->notes, the second handler could be 
aware of what it needed to do before printing 
anything.  This is a really good idea, but it's turning out 
to be more difficult than I anticipated.

The only way I can think of doing this is adding a third 
handler, in the middle, that executes any scripts and 
stores the output somewhere.  Then it would check the 
output for a Location: header and set something like 
$notes->{'redirect'} if it finds anything.  The printing 
handler would then check that value before printing 
anything.  If it's there, do it; if not, grab the output 
and the static files and print them to the user.

I'm concerned about putting large amounts of data into 
$r->notes.  Some of our script output can be pretty 
heavy.  If $r->notes can only take simple strings, how 
large of a simple string is it safe to put in it?  Is there 
a better way to do this?
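
For illustration, the hand-off could look something like
this (the handler and helper names are made up).  The notes
table only holds plain strings, while $r->pnotes can carry
references to arbitrarily large Perl data without an extra
string copy:

  package My::RunComponents;            # the "middle" handler
  use strict;
  use Apache::Constants qw(OK);

  sub handler {
      my $r = shift;
      my $output = run_scripts($r);     # hypothetical: capture component output
      if ($output =~ /^Location:\s*(\S+)/m) {
          $r->notes(redirect => $1);    # a small string: fine for notes
      }
      $r->pnotes(body => \$output);     # the heavy stuff: stash a reference
      return OK;
  }

  package My::PrintPage;                # the handler that actually prints
  use Apache::Constants qw(OK REDIRECT);

  sub handler {
      my $r = shift;
      if (my $where = $r->notes('redirect')) {
          $r->header_out(Location => $where);
          return REDIRECT;
      }
      my $body_ref = $r->pnotes('body');
      $r->send_http_header('text/html');
      $r->print($$body_ref);
      return OK;
  }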

cheers and thanks,
Todd













Apache::DB and core dump

2000-10-29 Thread Marek W

Do you possibly know what could have caused this error while trying to
run this module?  I am using Linux RH 6.2 and mod_perl 1.23.

Mark





Re: Connection Pooling / TP Monitor

2000-10-29 Thread Matt Sergeant

On Sat, 28 Oct 2000, Les Mikesell wrote:

> Is there any way to tie proxy requests mapped by mod_rewrite to
> a balanced set of servers through mod_backhand (or anything
> similar)?  Also, can mod_backhand (or any alternative) work
> with non-apache back end servers?  I'm really looking for a way
> to let mod_rewrite do the first cut at deciding where (or whether)
> to send a request, but then be able to send to a load balanced, fail
> over set, preferably without having to interpose another physical
> proxy.

>From what I could make out, I think you should be able to use backhand
only within certain <Location> sections, and therefore have a request come
in outside of that, rewrite it to a backhand enabled location and have
backhand do its thing. Should work, but you'd have to try it.

Alternatively write your own decision module for backhand. There's even a
mod_perl module to do so (although apparently it needs patching for the
latest version of mod_backhand).

-- 


/||** Director and CTO **
   //||**  AxKit.com Ltd   **  ** XML Application Serving **
  // ||** http://axkit.org **  ** XSLT, XPathScript, XSP  **
 // \\| // ** Personal Web Site: http://sergeant.org/ **
 \\//
 //\\
//  \\