Re: proxy front-ends (was: Re: ApacheCon report)

2000-11-03 Thread Joe Schaefer

Gunther Birznieks [EMAIL PROTECTED] writes:

 Although I don't have much to add to the conversation, I just wanted to say 
 that this is one of the most absolutely technically enlightening posts I've 
 read on the mod_perl list in a while. It's really interesting to finally 
 clarify this once and for all.

You bet - brilliant detective/expository work going on here!  
On a side note, a while back I was trying to coerce the TUX developers 
to rework their server a little.  I've included snippets of the 
email correspondence below:



From: Joe Schaefer [EMAIL PROTECTED]
Subject: Can tux 'proxy' for the user space daemon?
Date: 06 Oct 2000 13:32:10 -0400

It would be great if TUX is someday capable of 
replacing the "reverse proxy" kludge for mod_perl.
From skimming the docs, it seems that

TUX on port 80 +
apache on 8080

fits this bill.

Question: In this setup, how does TUX behave wrt 
HTTP/1.1 keepalives to/from apache? 

Say apache is configured with mod_perl, and 
keepalives are disabled on apache.
Is TUX capable of maintaining keepalives on the 
browser - TUX connection, while maintaining a 
separate "pool" of (closed) TUX - apache connections?

If I'm way off here on how TUX works  (or will work),
please correct me!

Thanks.

==

From: Ingo Molnar [EMAIL PROTECTED]
Subject: Re: Can tux 'proxy' for the user space daemon?
Date: Sat, 7 Oct 2000 13:42:49 +0200 (CEST)

if TUX sees a request that is redirected to Apache, then all remaining
requests on the connection are redirected to Apache as well. TUX won't ever
see that connection again; the redirection works by 'trimming' all
previous input up to the request which goes to Apache, then the socket
itself is hung into Apache's listen socket, as if it came as a unique
request from the browser. This technique is completely transparent both to
Apache and to the browser. There is no mechanism to 'bounce back' a
connection from Apache to TUX. (while connections do get bounced back and
forth between the kernel and user-space TUX modules.)

so eg. if the first 2 requests within a single persistent HTTP/1.1
connection can be handled by TUX then they will be handled by TUX, and the
third (and all succeeding) requests will be redirected to Apache. Logging
will be done by TUX for the first 2 requests, and the remaining requests
will be logged by Apache.

==

From: Joe Schaefer [EMAIL PROTECTED]
Subject: Re: Can tux 'proxy' for the user space daemon?
Date: 07 Oct 2000 19:52:31 -0400

Too bad - this means that HTTP/1.1 pages generated by an apache module won't 
benefit from TUX serving the images and stylesheet links contained therein. I 
guess disabling keepalives on the apache connection is (still) the only way 
to go.

I still think it would be cool if there were some hack to make this work - 
perhaps a TUX "gateway" module could do it?  Instead of handing off a 
request directly to apache, maybe a (user-space) TUX module could hand it 
off and then return control back to TUX when the page has been delivered.
Is such a "gateway" TUX module viable?

=

From: Ingo Molnar [EMAIL PROTECTED]
Subject: Re: Can tux 'proxy' for the user space daemon?
Date: Mon, 9 Oct 2000 11:42:57 +0200 (CEST)

depends on the complexity of the module. If it's simple functionality then
it might be best to write a dedicated TUX module for it, without Apache.

but if it's too complex then the same code that is used to hand a TCP
connection over to Apache can be used by Apache to send a connection back
to TUX as well. A new branch of the TUX system-call could handle this.

Ingo




This might be worth looking into (for linux anyway :).
-- 
Joe Schaefer



Re: proxy front-ends (was: Re: ApacheCon report)

2000-11-02 Thread Gunther Birznieks

Although I don't have much to add to the conversation, I just wanted to say 
that this is one of the most absolutely technically enlightening posts I've 
read on the mod_perl list in a while. It's really interesting to finally 
clarify this once and for all.

Smells like a mod_perl guide addition. :)

At 07:21 PM 11/2/2000 +0100, Roger Espel Llima wrote:
Ask Bjoern Hansen [EMAIL PROTECTED] wrote:
  Mr. Llima must do something I don't, because with real world
  requests I see a 15-20 to 1 ratio of mod_proxy/mod_perl processes at
  "my" site. And that is serving 500byte stuff.

and Michael Blakeley [EMAIL PROTECTED] later replied:
  Solaris lets a user-level application close() a socket immediately
  and go on to do other work. The sockets layer (the TCP/IP stack) will
  continue to keep that socket open while it delivers any buffered
  sends - but the user application doesn't need to know this  [...]
  Anyway, since the socket is closed from the mod_perl point of view,
  the heavyweight mod_perl process is no longer tied up. I don't know
  if this holds true for Linux as well, but if it doesn't, there's
  always the source code.

This is exactly it.  I did some tests with a real-world server, and the
conclusion was that, as long as the total write() size is less than the
kernel's max write buffer, then write() followed by close() doesn't
block.  That was using Linux, where the kernel buffer size can be set by
echo'ing numbers into /proc/sys/net/core/wmem_{default,max}.

However, Apache doesn't use a plain close(), but instead calls a
function called lingering_close, which tries to make sure that the other
side has received all the data.  That is done by select()ing on the
socket in 2 second increments, until the other side either closes the
connection too, or times out.  And *THIS* is the reason why front-end
servers are good.  An apache process spends an average of 0.8 seconds
(in my measurements) per request doing lingering close.  This is
consistent with Ask's ratio of 15-20 to 1 frontend to backend servers.

Now, there's no reason in principle why lingering_close() couldn't be
done in the kernel, freeing the user process from the waiting job, and
making frontend servers unnecessary.  There's even an interface for it,
namely SO_LINGER, and Apache knows how to use it.  But SO_LINGER is
badly specified, and known to be broken in most tcp/ip stacks, so
currently it's kind of a bad idea to use it, and we're stuck with the
two server model.

--
Roger Espel Llima, [EMAIL PROTECTED]
http://www.iagora.com/~espel/index.html

__
Gunther Birznieks ([EMAIL PROTECTED])
eXtropia - The Web Technology Company
http://www.extropia.com/




Re: ApacheCon report

2000-11-01 Thread Michael Blakeley

  From: "Perrin Harkins" [EMAIL PROTECTED]
  To: "Ask Bjoern Hansen" [EMAIL PROTECTED]
  Cc: [EMAIL PROTECTED]
  Sent: Tuesday, October 31, 2000 8:47 PM
  Subject: Re: ApacheCon report

Mr. Llima must do something I don't, because with real world
requests I see a 15-20 to 1 ratio of mod_proxy/mod_perl processes at
"my" site. And that is serving 500byte stuff.
  
   I'm not following.  Everyone agrees that we don't want to have big
   mod_perl processes waiting on slow clients.  The question is whether
   tuning your socket buffer can provide the same benefits as a proxy server
   and the conclusion so far is that it can't because of the lingering close
   problem.  Are you saying something different?

  A tcp close is supposed to require an acknowledgement from the
  other end or a fairly long timeout.  I don't see how a socket buffer
  alone can change this.  Likewise for any of the load balancer
  front ends that work on the tcp connection level (but I'd like to
  be proven wrong about this).

Solaris lets a user-level application close() a socket immediately 
and go on to do other work. The sockets layer (the TCP/IP stack) will 
continue to keep that socket open while it delivers any buffered 
sends - but the user application doesn't need to know this (and 
naturally won't be able to read any incoming data if it arrives). 
When the tcp send buffer is empty, the socket will truly close, with 
all the usual FIN et. al. dialogue.

Anyway, since the socket is closed from the mod_perl point of view, 
the heavyweight mod_perl process is no longer tied up. I don't know 
if this holds true for Linux as well, but if it doesn't, there's 
always the source code.

The socket buffers on most Unix and Unix-like OSes tend to be 32kB to 
64 kB. Some OSes allow these to be tuned (ndd on Solaris).

-- Mike
-- 
Michael Blakeley   [EMAIL PROTECTED] http://www.blakeley.com/
 Performance Analysis for Internet Technologies



Re: ApacheCon report

2000-11-01 Thread Leslie Mikesell

According to Michael Blakeley:

I'm not following.  Everyone agrees that we don't want to have big
mod_perl processes waiting on slow clients.  The question is whether
tuning your socket buffer can provide the same benefits as a proxy server
and the conclusion so far is that it can't because of the lingering close
problem.  Are you saying something different?
 
   A tcp close is supposed to require an acknowledgement from the
   other end or a fairly long timeout.  I don't see how a socket buffer
   alone can change this.  Likewise for any of the load balancer
   front ends that work on the tcp connection level (but I'd like to
   be proven wrong about this).
 
 Solaris lets a user-level application close() a socket immediately 
 and go on to do other work. The sockets layer (the TCP/IP stack) will 
 continue to keep that socket open while it delivers any buffered 
 sends - but the user application doesn't need to know this (and 
 naturally won't be able to read any incoming data if it arrives). 
 When the tcp send buffer is empty, the socket will truly close, with 
 all the usual FIN et. al. dialogue.
 
 Anyway, since the socket is closed from the mod_perl point of view, 
 the heavyweight mod_perl process is no longer tied up. I don't know 
 if this holds true for Linux as well, but if it doesn't, there's 
 always the source code.

I still like the idea of having mod_rewrite in a lightweight
front end, and if the request turns out to be static at that
point there isn't much point in dealing with proxying.  Has
anyone tried putting software load balancing behind the front
end proxy with something like eddieware, balance or ultra
monkey?  In that scheme the front ends might use an IP takeover
failover and/or DNS load balancing and would proxy to what they
think is a single back end server - then this would hit a
tcp level balancer instead.

  Les Mikesell
[EMAIL PROTECTED]



Re: ApacheCon report

2000-11-01 Thread Perrin Harkins

On Wed, 1 Nov 2000, Leslie Mikesell wrote:
 I still like the idea of having mod_rewrite in a lightweight
 front end, and if the request turns out to be static at that
 point there isn't much point in dealing with proxying.

Or if the request is in the proxy cache...

 Has anyone tried putting software load balancing behind the front end
 proxy with something like eddieware, balance or ultra monkey?  In that
 scheme the front ends might use an IP takeover failover and/or DNS
 load balancing and would proxy to what they think is a single back end
 server - then this would hit a tcp level balancer instead.

We use that setup with a hardware load balancer.  It works very well.

- Perrin




Re: ApacheCon report

2000-10-31 Thread Gunther Birznieks

At 12:00 AM 10/31/2000 -0800, Perrin Harkins wrote:
On Tue, 31 Oct 2000, Les Mikesell wrote:
   Ultimately, I don't see any way around the fact that proxying from one
   server to another ties up two processes for that time rather than one, so
   if your bottleneck is the number of processes you can run before running
   out of RAM, this is not a good approach.
 
  The point is you only tie up the back end for the time it takes to deliver
  to the proxy, then it moves on to another request while the proxy
  dribbles the content back to the client.   Plus, of course, it doesn't
  have to be on the same machine.

I was actually talking about doing this with no front-end proxy, just
mod_perl servers.  That's what Theo was suggesting.

That might work. Although I think there are enough image files that get 
served by front-end proxies that it still makes sense to have the front-end 
proxy engines. As a bonus, if you write your app smart with cache directive 
headers, some of the dynamic content can truly be cached by the front-end 
server.

I use a mod_proxy front-end myself and it works very well.

- Perrin

__
Gunther Birznieks ([EMAIL PROTECTED])
eXtropia - The Web Technology Company
http://www.extropia.com/




Re: ApacheCon report

2000-10-31 Thread Perrin Harkins

On Tue, 31 Oct 2000, Gunther Birznieks wrote:
 As a bonus, if you write your app smart with cache directive 
 headers, some of the dynamic content can truly be cached by the front-end 
 server.

We're using this technique now and it really rocks.  Great performance.

- Perrin




Re: ApacheCon report

2000-10-31 Thread Bill Moseley

At 04:13 PM 10/31/00 +0800, Gunther Birznieks wrote:
As a bonus, if you write your app smart with cache directive 
headers, some of the dynamic content can truly be cached by the front-end 
server.

Gunther,

Can you give some details?  I have co-branded template driven content that
is dynamically generated, but I allow caching.  Is this an example of what
you mean, or are you describing something else?

Thanks,



Bill Moseley
mailto:[EMAIL PROTECTED]



Re: ApacheCon report

2000-10-31 Thread Gunther Birznieks

At 10:43 AM 10/31/2000 -0800, Bill Moseley wrote:
At 04:13 PM 10/31/00 +0800, Gunther Birznieks wrote:
 As a bonus, if you write your app smart with cache directive
 headers, some of the dynamic content can truly be cached by the front-end
 server.

Gunther,

Can you give some details?  I have co-branded template driven content that
is dynamically generated, but I allow caching. Is this an example of what
you mean, or are you describing something else?

No, that should be all you need. If you don't turn off caching, mod_proxy 
is a caching proxy even in reverse proxy mode. So if you support caching, 
then that should be a bonus for you!
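For readers wanting to try this, a minimal front-end setup might look like the following (a hypothetical Apache 1.3 httpd.conf fragment; the port, paths, and cache sizes are examples, not configuration from the thread):

```apache
# Reverse-proxy everything to a mod_perl back end on 8080 and let
# mod_proxy cache responses that carry cache-friendly headers.
ProxyRequests Off
ProxyPass        /  http://localhost:8080/
ProxyPassReverse /  http://localhost:8080/
CacheRoot       /var/cache/apache-proxy
CacheSize       102400      # KB
CacheMaxExpire  24          # hours
```

Dynamic responses sent with suitable Expires/Cache-Control headers are then served from the front-end cache without ever touching the back end.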







Re: ApacheCon report

2000-10-31 Thread Ask Bjoern Hansen

On Mon, 30 Oct 2000, Perrin Harkins wrote:

[...]
 - Don't use a proxy server for doling out bytes to slow clients; just set
 the buffer on your sockets high enough to allow the server to dump the
 page and move on.  This has been discussed here before, notably in this
 post:
 
 
http://forum.swarthmore.edu/epigone/modperl/grerdbrerdwul/[EMAIL PROTECTED]
 
 The conclusion was that you could end up paying dearly for the lingering
 close on the socket.

Mr. Llima must do something I don't, because with real world
requests I see a 15-20 to 1 ratio of mod_proxy/mod_perl processes at
"my" site. And that is serving 500byte stuff.
 
-- 
ask bjoern hansen - http://www.netcetera.dk/~ask/
more than 70M impressions per day, http://valueclick.com





Re: ApacheCon report

2000-10-31 Thread Perrin Harkins

On Tue, 31 Oct 2000, Ask Bjoern Hansen wrote:

 On Mon, 30 Oct 2000, Perrin Harkins wrote:
 
 [...]
  - Don't use a proxy server for doling out bytes to slow clients; just set
  the buffer on your sockets high enough to allow the server to dump the
  page and move on.  This has been discussed here before, notably in this
  post:
  
  
http://forum.swarthmore.edu/epigone/modperl/grerdbrerdwul/[EMAIL PROTECTED]
  
  The conclusion was that you could end up paying dearly for the lingering
  close on the socket.
 
 Mr. Llima must do something I don't, because with real world
 requests I see a 15-20 to 1 ratio of mod_proxy/mod_perl processes at
 "my" site. And that is serving 500byte stuff.

I'm not following.  Everyone agrees that we don't want to have big
mod_perl processes waiting on slow clients.  The question is whether
tuning your socket buffer can provide the same benefits as a proxy server
and the conclusion so far is that it can't because of the lingering close
problem.  Are you saying something different?

- Perrin




Re: ApacheCon report

2000-10-31 Thread Ask Bjoern Hansen

On Tue, 31 Oct 2000, Perrin Harkins wrote:

  [...]
   
http://forum.swarthmore.edu/epigone/modperl/grerdbrerdwul/[EMAIL PROTECTED]
  
  Mr. Llima must do something I don't, because with real world
  requests I see a 15-20 to 1 ratio of mod_proxy/mod_perl processes at
  "my" site. And that is serving 500byte stuff.
 
 I'm not following.  Everyone agrees that we don't want to have
 big mod_perl processes waiting on slow clients.  The question is
 whether tuning your socket buffer can provide the same benefits
 as a proxy server and the conclusion so far is that it can't
 because of the lingering close problem.  Are you saying
 something different?

No.

Maybe I misunderstood the url quoted above.

Reminds me; would it make sense to put code like what's in
mod_proxy_add_forward.c in the mod_perl distribution?


 - ask

-- 
ask bjoern hansen - http://www.netcetera.dk/~ask/
more than 70M impressions per day, http://valueclick.com





Re: ApacheCon report

2000-10-31 Thread Les Mikesell


- Original Message - 
From: "Perrin Harkins" [EMAIL PROTECTED]
To: "Ask Bjoern Hansen" [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Tuesday, October 31, 2000 8:47 PM
Subject: Re: ApacheCon report


  Mr. Llima must do something I don't, because with real world
  requests I see a 15-20 to 1 ratio of mod_proxy/mod_perl processes at
  "my" site. And that is serving 500byte stuff.
 
 I'm not following.  Everyone agrees that we don't want to have big
 mod_perl processes waiting on slow clients.  The question is whether
 tuning your socket buffer can provide the same benefits as a proxy server
 and the conclusion so far is that it can't because of the lingering close
 problem.  Are you saying something different?
 

A tcp close is supposed to require an acknowledgement from the
other end or a fairly long timeout.  I don't see how a socket buffer
alone can change this.  Likewise for any of the load balancer
front ends that work on the tcp connection level (but I'd like to
be proven wrong about this).

  Les Mikesell
[EMAIL PROTECTED]





Re: ApacheCon report

2000-10-30 Thread Perrin Harkins

On Mon, 30 Oct 2000, Tim Sweetman wrote:

 Matt Sergeant wrote:
  
  On Fri, 27 Oct 2000, Tim Sweetman wrote:
  
   In no particular order, and splitting hairs some of the time...
  
   Sounded like mod_backhand was best used NOT in the same Apache as a phat
   application server (eg. mod_perl), because you don't want memory-heavy
   processes sitting waiting for responses. You'd be better off with a
   separate switching machine - or serve your static content from
   machine(s) that know to backhand dynamic requests to a phat machine. I
   think that's what Theo reckoned...
  
  Yes, but the backend mod_perl servers are running backhand. So you have:
  
  B  B  B  B
   \ |  | /
\ \/ /
 \|/
  F
  
Where all the servers are running mod_backhand, but only F is publicly
accessible. There may also be more than one F. It's in his slides, and is prettier
  than the above :-)
 
 Yeah. I know how it was set up in Theo's demo (like that) but I got the
 impression that this wouldn't be optimal for a mod_Perl setup (or other
 big-footprinted configuration). You _can_ run mod_backhand on your
 content servers. You don't _have_ to.

Here's what I recall Theo saying (relative to mod_perl):

- Don't use a proxy server for doling out bytes to slow clients; just set
the buffer on your sockets high enough to allow the server to dump the
page and move on.  This has been discussed here before, notably in this
post:

http://forum.swarthmore.edu/epigone/modperl/grerdbrerdwul/[EMAIL PROTECTED]

The conclusion was that you could end up paying dearly for the lingering
close on the socket.

- If you use apache+mod_proxy as a load balancer in front of your back-end
servers (as opposed to a commercial solution like big ip), use
mod_backhand instead and your front-end server will be able to handle
requests itself when it's not too busy.

- He has created a way for proxied requests to use keep-alive without
enabling keep-alive for the whole server.  The obvious problem - that each
server will soon use up every other server's available clients - is
somehow avoided by sharing open sockets to the other servers in some
external daemon.  This sounded cool, but fishy.

Ultimately, I don't see any way around the fact that proxying from one
server to another ties up two processes for that time rather than one, so
if your bottleneck is the number of processes you can run before running
out of RAM, this is not a good approach.  If your bottleneck is CPU or
disk access, then it might be useful.  I guess that means this is not so
hot for the folks who are mostly bottlenecked by an RDBMS, but might be
good for XML'ers running CPU hungry transformations.  (Yes, I saw Matt's
talk on AxKit's cache...)

 One thing I have my eye on (which doesn't mean I'll necessarily get it
 done :) ) is some sort of data-holding-class that sits between an
 application and a template in a transparent way (eg. it could hold the
 method names & args that you were passing to a templating system, like a
 "command" design pattern IIRC).

In Perl, we call that a "variable".  (Sorry, couldn't resist...)  
Templating systems in Java jump through hoops to make generic data
structures, but Perl doesn't have to.  Just freeze it with Storable if you
need to save it.

 This would potentially allow:
 + switching between different templating systems ...?

Nearly all of them use standard perl hash refs.

 + checking out template tweaks without rerunning the app - good for
 "interactive" systems which
   keep chucking form data at users

This is easy to do with most systems.  I once wrote something that could
turn arbitrary form inputs into a data structure suitable for feeding to
Template Toolkit or similar.  Then I could create a form for entering
different kinds of test data, and save the test data using Storable.  It
was used for building templates for a system which ran the templating as a
batch process and so had a terrible turnaround time for testing changes.

 + (fairly transparent) conversion to XML/XSL/etc at an appropriate time,
 as/when/if a site/project
   grows to an appropriate size

There are some modules out there that serialize perl data structures as
XML.  Or you can just write a template for it.

- Perrin




Re: ApacheCon report

2000-10-30 Thread Les Mikesell


- Original Message -
From: "Perrin Harkins" [EMAIL PROTECTED]


 Here's what I recall Theo saying (relative to mod_perl):

 - Don't use a proxy server for doling out bytes to slow clients; just set
 the buffer on your sockets high enough to allow the server to dump the
 page and move on.  This has been discussed here before, notably in this
 post:


http://forum.swarthmore.edu/epigone/modperl/grerdbrerdwul/2811200559.B17[EMAIL PROTECTED]

 The conclusion was that you could end up paying dearly for the lingering
 close on the socket.

In practice I see a fairly consistent ratio of 10 front-end proxies running
per one back end on a site where most hits end up being proxied so
the lingering is a real problem.

 Ultimately, I don't see any way around the fact that proxying from one
 server to another ties up two processes for that time rather than one, so
 if your bottleneck is the number of processes you can run before running
 out of RAM, this is not a good approach.

The point is you only tie up the back end for the time it takes to deliver
to the proxy, then it moves on to another request while the proxy
dribbles the content back to the client.   Plus, of course, it doesn't
have to be on the same machine.

 If your bottleneck is CPU or
 disk access, then it might be useful.  I guess that means this is not so
 hot for the folks who are mostly bottlenecked by an RDBMS, but might be
 good for XML'ers running CPU hungry transformations.  (Yes, I saw Matt's
 talk on AxKit's cache...)

Spreading requests over multiple backends is the fix for this.  There is
some gain in efficiency if you dedicate certain backend servers to
certain tasks since you will then tend to have the right things in the
cache buffers.

  Les Mikesell
 [EMAIL PROTECTED]




Re: ApacheCon report

2000-10-30 Thread Perrin Harkins

On Tue, 31 Oct 2000, Les Mikesell wrote:
  Ultimately, I don't see any way around the fact that proxying from one
  server to another ties up two processes for that time rather than one, so
  if your bottleneck is the number of processes you can run before running
  out of RAM, this is not a good approach.
 
 The point is you only tie up the back end for the time it takes to deliver
 to the proxy, then it moves on to another request while the proxy
 dribbles the content back to the client.   Plus, of course, it doesn't
 have to be on the same machine.

I was actually talking about doing this with no front-end proxy, just
mod_perl servers.  That's what Theo was suggesting.

I use a mod_proxy front-end myself and it works very well.

- Perrin




Re: ApacheCon report

2000-10-28 Thread Greg Cope

Matt Sergeant wrote:
 
 http://modperl.sergeant.org/ApacheConRep.txt
 
 Enjoy.

Thanks for that Matt, I did enjoy it - IBM's party coinciding with Sun's
keynote made me chuckle ;-)

I eventually could not make the conference due to a nasty deadline 

Did Doug mention when mod_perl 2.0 would / maybe / might possibly be
ready (I know, I know that it will be ready when it's ready, only
asking!)

Greg

 



Re: ApacheCon report

2000-10-28 Thread Matthew Byng-Maddick

On Sat, 28 Oct 2000, Greg Cope wrote:
 Matt Sergeant wrote:
  http://modperl.sergeant.org/ApacheConRep.txt
  Enjoy.
 Thanks for that Matt, I did enjoy it - IBM's party coinciding with Sun's
 I eventually could not make the conference due to a nasty deadline 

You missed a lot.

 Did Doug mention when mod_perl 2.0 would / maybe / might possibly be
 ready (I know, I know that it will be ready when its ready, only
 asking!)

I don't remember him mentioning anything, but it will certainly have to
wait until all the bugs in apache 2.0 are fixed. :) (hopes that rbb
doesn't read this...)

According to various asf people, apache2 is at least a month away from
being finished, and probably more...

MBM

-- 
Matthew Byng-Maddick   Home: [EMAIL PROTECTED]  +44 20  8981 8633  (Home)
http://colondot.net/   Work: [EMAIL PROTECTED] +44 7956 613942  (Mobile)
perl -e '$_="Oyvv bsswjfw Thtm mefmfw2\n";while(m([^\n])){$_=$'"'"';$a=$;
$a=($a=~m(^\s)?$a:pack "c",unpack("c",$a)-5+($i++%5));print $a}print"\n";'




Re: ApacheCon report

2000-10-28 Thread Matt Sergeant

On Sat, 28 Oct 2000, Greg Cope wrote:

 Matt Sergeant wrote:
  
  http://modperl.sergeant.org/ApacheConRep.txt
  
  Enjoy.
 
 Thanks for that Matt, I did enjoy it - IBM's party coinciding with Sun's
 keynote made me chuckle ;-)
 
 I eventually could not make the conference due to a nasty deadline 
 
 Did Doug mention when mod_perl 2.0 would / maybe / might possibly be
 ready (I know, I know that it will be ready when its ready, only
 asking!)

Unfortunately I had to run (I had a beer with my name on it) before
getting a chance to speak to Doug again. But it also depends a *lot* on the
progress of Apache 2.0, which is really holding mod_perl 2.0 up, from what
I can tell. But once that's out of the way, I don't think anything is
stopping mod_perl 2.0 progress - Doug seems to be able to move pretty
quickly on things, as he has a good idea where he's going with the
project.

-- 
Matt/

/||** Director and CTO **
   //||**  AxKit.com Ltd   **  ** XML Application Serving **
  // ||** http://axkit.org **  ** XSLT, XPathScript, XSP  **
 // \\| // ** Personal Web Site: http://sergeant.org/ **
 \\//
 //\\
//  \\




Re: ApacheCon report

2000-10-28 Thread Greg Cope

Matt Sergeant wrote:
 
 On Sat, 28 Oct 2000, Greg Cope wrote:
 
  Matt Sergeant wrote:
  
   http://modperl.sergeant.org/ApacheConRep.txt
  
   Enjoy.
 
  Thanks for that Matt, I did enjoy it - IBM's party coinciding with Sun's
  keynote made me chuckle ;-)
 
  I eventually could not make the conference due to a nasty deadline 
 
  Did Doug mention when mod_perl 2.0 would / maybe / might possibly be
  ready (I know, I know that it will be ready when its ready, only
  asking!)
 
 Unfortunately I had to run (I had a beer with my name on it) before
 getting a chance to speak to Doug again. But it also depends a *lot* on the
 progress of Apache 2.0, which is really holding mod_perl 2.0 up, from what
 I can tell. But once that's out of the way, I don't think anything is
 stopping mod_perl 2.0 progress - Doug seems to be able to move pretty
 quickly on things, as he has a good idea where he's going with the
 project.

Sounds sooner than I thought - great.

Doug and Ryan appear to work for the same company - Covalent - so Doug
should go bend Ryan's ear - I want my mod_perl 2.0 ;-)

Thanks again Matt.

Greg

 



Re: ApacheCon report

2000-10-27 Thread Matt Sergeant

On Fri, 27 Oct 2000, Tim Sweetman wrote:

 In no particular order, and splitting hairs some of the time...
 
 Sounded like mod_backhand was best used NOT in the same Apache as a phat
 application server (eg. mod_perl), because you don't want memory-heavy
 processes sitting waiting for responses. You'd be better off with a
 separate switching machine - or serve your static content from
 machine(s) that know to backhand dynamic requests to a phat machine. I
 think that's what Theo reckoned...

Yes, but the backend mod_perl servers are running backhand. So you have:

B  B  B  B
 \ |  | /
  \ \/ /
   \|/
F

Where all the servers are running mod_backhand, but only F is publicly
accessible. There may also be more than one F. It's in his slides, and is prettier
than the above :-)

 "make simple things easy, and hard things possible" -
 What concerns me about systems like AxKit & Cocoon is that they may make
 simple things complex, and some hard things possible. But this is a
 naive comment not based on trying to build rilly big systems with them.
 Perl, maybe, doesn't make simple things anything like as easy as PHP.
 (Again, a naive comment that may be incorrect)

No, it's correct, I think. I'd like to maybe next time do the second half
of the AxKit stuff as a Demo, but that takes some demo-able material that's
actually going to make you say "Ooh, that *is* easier than what I'm using
right now". So I'll work on it :-)

 Douglas Adams, who spoke at ApacheCon, previously made an interesting
 BBC documentary on hypermedia & its possibilities, in about 1992. Ted
 Nelson, I think it was, realised that the ability to _include stuff from
 other sources in your documents_ was important, and called it a
 "transclusion" (though that concept, IIRC, may have included the
 propagation of nanopayments to the source - not sure).
 
 And at Apachecon, the XML guys say: "This Document() function's really
 cool! You can build a portal very easily..." And after falling asleep
 (reflex on hearing /portal/, marketing allergy) I thought, it's
 syndication/transclusion again. Evidently, a big idea. But a big idea
 buried in a heap of other big ideas.

It's all been done before. I spoke a bit to Rael Dornfest about P2P (Rael
is an O'Reilly guy behind the P2P summit). Basically it's all the same
stuff we've already been doing, but it's just a buzzword. But it often
takes buzzwords to make the world sit up and take notice and focus on the
right thing to be doing, even though they may not know it themselves!

-- 
Matt/

/||** Director and CTO **
   //||**  AxKit.com Ltd   **  ** XML Application Serving **
  // ||** http://axkit.org **  ** XSLT, XPathScript, XSP  **
 // \\| // ** Personal Web Site: http://sergeant.org/ **
 \\//
 //\\
//  \\




RE: ApacheCon report

2000-10-27 Thread David Waldo

Do you happen to have the URL for Theo's presentation?
I don't see it on the apachecon site.

Many thanks,

Dave Waldo

 -Original Message-
 From: Matt Sergeant [mailto:[EMAIL PROTECTED]]
 Sent: Friday, October 27, 2000 12:37 PM
 To: Tim Sweetman
 Cc: [EMAIL PROTECTED]
 Subject: Re: ApacheCon report
 
 
 Yes, but the backend mod_perl servers are running backhand. 
 So you have:
 
 B  B  B  B
  \ |  | /
   \ \/ /
\|/
 F
 
 Where all the servers are running mod_backhand, but only F is publicly
 accessible. There may also be more than one F. It's in his slides, and is
 prettier than the above :-)
 



RE: ApacheCon report

2000-10-27 Thread Geoffrey Young



 -Original Message-
 From: David Waldo [mailto:[EMAIL PROTECTED]]
 Sent: Friday, October 27, 2000 12:53 PM
 To: [EMAIL PROTECTED]
 Subject: RE: ApacheCon report
 
 
 Do you happen to have the URL for Theo's presentation?
 I don't see it on the apachecon site.

http://www.backhand.org/


 



RE: ApacheCon report

2000-10-27 Thread Geoffrey Young



 -Original Message-
 From: Matt Sergeant [mailto:[EMAIL PROTECTED]]
 Sent: Friday, October 27, 2000 12:37 PM
 To: Tim Sweetman
 Cc: [EMAIL PROTECTED]
 Subject: Re: ApacheCon report
 
 
 On Fri, 27 Oct 2000, Tim Sweetman wrote:
 
  In no particular order, and splitting hairs some of the time...
  
  Sounded like mod_backhand was best used NOT in the same 
 Apache as a phat
  application server (eg. mod_perl), because you don't want 
 memory-heavy
  processes sitting waiting for responses. You'd be better off with a
  separate switching machine - or serve your static content from
  machine(s) that know to backhand dynamic requests to a phat 
 machine. I
 think that's what Theo reckoned...
 
 Yes, but the backend mod_perl servers are running backhand. 
 So you have:
 
 B  B  B  B
  \ |  | /
   \ \/ /
\|/
 F
 


I was really impressed with backhand at Theo's presentation at ApacheCon US
in March.  From what I remember though, it had serious limitations in the
SSL space.  Did Theo touch on that?  The conversation I had with him about
it back then was that it was going to be addressed in a future release...

also IIRC, backhand was only terribly useful behind something like BigIP
(which is what we use).  Is there another implementation scheme now?

perhaps my memory is fading...

--Geoff



RE: ApacheCon report

2000-10-27 Thread Matt Sergeant

On Fri, 27 Oct 2000, Geoffrey Young wrote:

 I was really impressed with backhand at Theo's presentation at ApacheCon US
 in March.  From what I remember though, it had serious limitations in the
 SSL space.  Did Theo touch on that?  The conversation I had with him about
 it back then was that it was going to be addressed in a future release...

Yes he did touch on that, but I didn't really understand what he was
saying. I think it was that only the frontend server is SSL-enabled, and
so the backend servers don't get the SSL cert. But he said you could use a
module to put the cert in a header, or something like that, and it would
work fine so long as you programmed it all right :-)

But then I know very little about SSL, and maybe got the wrong idea.
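The "cert in a header" trick probably looks something like the following. This is a hedged sketch using later Apache 2.x-era mod_ssl/mod_headers directives, not necessarily whatever module Theo had in mind, and the header name is invented:

```apache
# On the SSL-terminating front-end:
SSLEngine on
SSLVerifyClient require
# Export the client cert details into environment variables:
SSLOptions +StdEnvVars +ExportCertData

# Copy the client cert's subject DN into a request header before the
# request is handed to the backend (header name is made up here):
RequestHeader set X-SSL-Client-DN "%{SSL_CLIENT_S_DN}e"
```

The catch Matt alludes to ("so long as you programmed it all right") is that the backend must trust this header, so it has to accept connections only from the front-end; otherwise anyone could forge X-SSL-Client-DN.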

 also IIRC, backhand was only terribly useful behind something like BigIP
 (which is what we use).  Is there another implementation scheme now?

No, that's right. It's the difference between high availability
(which BigIP *can* do) and load balancing (which backhand does).

-- 
Matt/

/||** Director and CTO **
   //||**  AxKit.com Ltd   **  ** XML Application Serving **
  // ||** http://axkit.org **  ** XSLT, XPathScript, XSP  **
 // \\| // ** Personal Web Site: http://sergeant.org/ **
 \\//
 //\\
//  \\