Re: squid performance

2000-01-30 Thread Leslie Mikesell

According to Greg Stark:
> Leslie Mikesell <[EMAIL PROTECTED]> writes:
> 
> > The 'something happens' is the part I don't understand.  On a unix
> > server, nothing one httpd process does should affect another
> > one's ability to serve up a static file quickly, mod_perl or
> > not.  (Well, almost anyway). 
> 
> Welcome to the real world, however, where "something" can and does happen.
> Developers accidentally put untuned SQL code in a new page that takes too long
> to run. Database backups slow down normal processing. Disks crash, slowing down
> the RAID array (if you're lucky). Developers include dependencies on services
> like mail directly in the web server instead of handling mail asynchronously,
> and mail servers slow down for no reason at all. Etc.

Of course.  I have single httpd processes screw up all the time.  They
don't affect the speed of other httpd processes unless they consume
all of the machine's resources or lock something in common.  I suppose
if you have a small limit on the number of backend programs you
could get to a point where they are all busy doing something wrong. 

> > If you are using squid or a caching proxy, those static requests
> > would not be passed to the backend most of the time anyway. 
> 
> Please reread the analysis more carefully. I explained that. That is
> precisely the scenario I'm describing faults in.

I read it, but just wasn't convinced.  I'd like to understand this
better, though.  What did you do to show that there is a difference
when Netscape accesses different hostnames for fast static content,
as opposed to a single hostname where a cache responds quickly but
dynamic content is slow?  I thought Netscape would open 6 or so
separate connections regardless, and would only make a request wait
if all 6 were in use.  That is, it should not make anything wait
unless you have dynamically-generated images (or redirects) tying up
the other connections besides the one supplying the main html.  Do
you have some reason to think it will open fewer connections
if they are all to the same host?

  Les Mikesell
   [EMAIL PROTECTED]



Re: squid performance

2000-01-29 Thread Greg Stark

Leslie Mikesell <[EMAIL PROTECTED]> writes:

> The 'something happens' is the part I don't understand.  On a unix
> server, nothing one httpd process does should affect another
> one's ability to serve up a static file quickly, mod_perl or
> not.  (Well, almost anyway). 

Welcome to the real world, however, where "something" can and does happen.
Developers accidentally put untuned SQL code in a new page that takes too long
to run. Database backups slow down normal processing. Disks crash, slowing down
the RAID array (if you're lucky). Developers include dependencies on services
like mail directly in the web server instead of handling mail asynchronously,
and mail servers slow down for no reason at all. Etc.

> > The proxy server continues to get up to 20 requests per second
> > for proxied pages, for each request it tries to connect to the mod_perl
> > server. The mod_perl server can now only handle 5 requests per second though.
> > So the proxy server processes quickly end up waiting in the backlog queue. 
> 
> If you are using squid or a caching proxy, those static requests
> would not be passed to the backend most of the time anyway. 

Please reread the analysis more carefully. I explained that. That is
precisely the scenario I'm describing faults in.

-- 
greg



Re: squid performance

2000-01-29 Thread Leslie Mikesell

According to Greg Stark:

> > > 1) Netscape/IE won't intermix slow dynamic requests with fast static requests
> > >on the same keep-alive connection
> > 
> > I thought they just opened several connections in parallel without regard
> > for the type of content.
> 
> Right, that's the problem. If the two types of content are coming from the
> same proxy server (as far as NS/IE is concerned) then they will intermix the
> requests and the slow page could hold up several images queued behind it. I
> actually suspect IE5 is cleverer about this, but you still know more than it
> does.

They have a maximum number of connections they will open at once
but I don't think there is any concept of queueing involved. 

> > > 2) static images won't be delayed when the proxy gets bogged down waiting on
> > >the backend dynamic server.
> 
> Picture the following situation: The dynamic server normally generates pages
> in about 500ms or about 2/s; the mod_perl server runs 10 processes so it can
> handle 20 connections per second. The mod_proxy runs 200 processes and it
> handles static requests very quickly, so it can handle some huge number of
> static requests, but it can still only handle 20 proxied requests per second.
> 
> Now something happens to your mod_perl server and it starts taking 2s to
> generate pages.

The 'something happens' is the part I don't understand.  On a unix
server, nothing one httpd process does should affect another
one's ability to serve up a static file quickly, mod_perl or
not.  (Well, almost anyway). 

> The proxy server continues to get up to 20 requests per second
> for proxied pages, for each request it tries to connect to the mod_perl
> server. The mod_perl server can now only handle 5 requests per second though.
> So the proxy server processes quickly end up waiting in the backlog queue. 

If you are using squid or a caching proxy, those static requests
would not be passed to the backend most of the time anyway. 

> Now *all* the mod_proxy processes are in "R" state and handling proxied
> requests. The result is that the static images -- which under normal
> conditions are handled quickly -- become delayed until a proxy process is
> available to handle the request. Eventually the backlog queue will fill up and
> the proxy server will hand out errors.

But only if it doesn't cache or know how to serve static content itself.

> Use a separate hostname for your pictures; it's a pain for the html authors,
> but it's worth it in the long run.

That depends on what happens in the long run. If your domain name or
vhost changes, all of those non-relative links will have to be
fixed again.

  Les Mikesell
   [EMAIL PROTECTED]



Re: squid performance

2000-01-29 Thread Greg Stark


Leslie Mikesell <[EMAIL PROTECTED]> writes:

> I agree that it is correct to serve images from a lightweight server
> but I don't quite understand how these points relate.  A proxy should
> avoid the need to hit the backend server for static content if the
> cache copy is current unless the user hits the reload button and
> the browser sends the request with 'pragma: no-cache'.

I'll try to expand a bit on the details:

> > 1) Netscape/IE won't intermix slow dynamic requests with fast static requests
> >on the same keep-alive connection
> 
> I thought they just opened several connections in parallel without regard
> for the type of content.

Right, that's the problem. If the two types of content are coming from the
same proxy server (as far as NS/IE is concerned) then they will intermix the
requests and the slow page could hold up several images queued behind it. I
actually suspect IE5 is cleverer about this, but you still know more than it
does.

By putting them on different hostnames the browser will open a second set of
parallel connections to that server and keep the two types of requests
separate.

> > 2) static images won't be delayed when the proxy gets bogged down waiting on
> >the backend dynamic server.

Picture the following situation: The dynamic server normally generates pages
in about 500ms or about 2/s; the mod_perl server runs 10 processes so it can
handle 20 connections per second. The mod_proxy runs 200 processes and it
handles static requests very quickly, so it can handle some huge number of
static requests, but it can still only handle 20 proxied requests per second.

Now something happens to your mod_perl server and it starts taking 2s to
generate pages. The proxy server continues to get up to 20 requests per second
for proxied pages, for each request it tries to connect to the mod_perl
server. The mod_perl server can now only handle 5 requests per second though.
So the proxy server processes quickly end up waiting in the backlog queue. 

Now *all* the mod_proxy processes are in "R" state and handling proxied
requests. The result is that the static images -- which under normal
conditions are handled quickly -- become delayed until a proxy process is
available to handle the request. Eventually the backlog queue will fill up and
the proxy server will hand out errors.
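
For anyone checking along, the capacity figures above are just the process
count divided by the per-page latency; a couple of lines of Perl make that
concrete:

    my $procs = 10;
    for my $latency (0.5, 2.0) {    # normal vs. degraded seconds per page
        printf "%.1fs/page -> %.0f requests/s\n", $latency, $procs / $latency;
    }
    # prints: 0.5s/page -> 20 requests/s, then 2.0s/page -> 5 requests/s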

> This is a good idea because it is easy to move to a different machine
> if the load makes it necessary.  However, a simple approach is to
> use a non-mod_perl apache as a non-caching proxy front end for the
> dynamic content and let it deliver the static pages directly.  A
> short stack of RewriteRules can arrange this if you use the 
> [L] or [PT] flags on the matches you want the front end to serve
> and the [P] flag on the matches to proxy.

That's what I thought. I'm trying to help others avoid my mistake :)

Use a separate hostname for your pictures; it's a pain for the html authors,
but it's worth it in the long run.
-- 
greg



Re: squid performance

2000-01-25 Thread Peter Haworth

Gerald Richter wrote:
> > > No, that's the size of the system call buffer.  It is not an
> > > application buffer.
> >
> > So how should one interpret the info at:
> > http://www.apache.org/docs/mod/mod_proxy.html#proxyreceivebuffersize
> >
> > 
> > The ProxyReceiveBufferSize directive specifies an explicit network buffer
> > size for outgoing HTTP and FTP connections, for increased throughput. It
> > has to be greater than 512 or set to 0 to indicate that the system's
> > default buffer size should be used.
> > 
> >
> > So what's the application buffer parameter? A hardcoded value?
> >
> 
> Yes, as Joshua posted this morning (at least it was morning in Germany
> :-), the application buffer size is hardcoded at 8192 bytes (named
> IOBUFSIZE). You will find it in proxy_util.c:ap_proxy_send_fb().
> 
> ProxyReceiveBufferSize sets the receive buffer size of the socket, so
> it's an OS issue.

I've patched my frontend server so that there are two buffer sizes:
  ProxyReceiveBufferSize sets the socket buffer size
  ProxyInternalBufferSize sets the application buffer size

This meant renaming ap_bcreate() to ap_bcreate_size() and adding a size
parameter, which defaults to IOBUFSIZE if 0 is passed. Then add
  #define ap_bcreate(p,flags) ap_bcreate_size(p,flags,0)
and add a new ap_bcreate() which calls ap_bcreate_size(), for binary
compatibility (actually I haven't added the new ap_bcreate() yet, and I never
got round to sending this to the Apache development group).

This is all necessary because some of the proxied pages on my site are large
PDF and PS files which can't be cached for security reasons. I have the
socket buffer set to the max allowed 64K (on Solaris), with a 1M application
buffer.

In my opinion, ProxyReceiveBufferSize should be called ProxySocketBufferSize,
leaving the old name free for my new use. This would also remove some of the
confusion about what it actually does.

-- 
Peter Haworth   [EMAIL PROTECTED]
"Save the whales. Feed the hungry. Free the mallocs."



Re: squid performance

2000-01-20 Thread Leslie Mikesell

According to Greg Stark:

> I tried to use the minspareservers and maxspareservers and the other similar
> parameters to let apache tune this automatically and found it didn't work out
> well with mod_perl. What happened was that starting up perl processes was the
> single most cpu intensive thing apache could do, so as soon as it decided it
> needed a new process it slowed down the existing processes and put itself into
> a feedback loop. I prefer to force apache to start a fixed number of processes
> and just stick with that number.

I've never noticed that effect, but I thought that apache always
grew in increments of 'StartServers' so I've tried to keep that
small, equal to MinSpareServers, and an even divisor of MaxSpareServers,
just on general principles.  Maybe you are starting a large number
as you cross the MinSpareServers boundary.

  Les Mikesell
   [EMAIL PROTECTED]



Re: squid performance

2000-01-20 Thread Leslie Mikesell

According to Greg Stark:

> > I think if you can avoid hitting a mod_perl server for the images,
> > you've won more than half the battle, especially on a graphically
> > intensive site.
> 
> I've learned the hard way that a proxy does not completely replace the need to
> put images and other static components on a separate server. There are
> two reasons that you really really want to be serving images from another
> server (possibly running on the same machine of course).

I agree that it is correct to serve images from a lightweight server
but I don't quite understand how these points relate.  A proxy should
avoid the need to hit the backend server for static content if the
cache copy is current unless the user hits the reload button and
the browser sends the request with 'pragma: no-cache'.

> 1) Netscape/IE won't intermix slow dynamic requests with fast static requests
>on the same keep-alive connection

I thought they just opened several connections in parallel without regard
for the type of content.

> 2) static images won't be delayed when the proxy gets bogged down waiting on
>the backend dynamic server.

Is this under NT where mod_perl is single threaded?  Serving a new request
should not have any relationship to delays handling other requests on
unix unless you have hit your child process limit.

> Eg, if the dynamic content generation becomes slow enough to cause a 2s
> backlog of connections for dynamic content, then a proxy will not protect the
> static images from that delay. Netscape or IE may queue those requests after
> another dynamic content request, and even if they don't the proxy server will
> eventually have every slot taken up waiting on the dynamic server. 

A proxy that already has the cached image should deliver it with no
delay, and a request back to the same server should be serviced
immediately anyway.

> So *every* image on the page will have another 2s latency, instead of just a
> 2s latency for the entire page. This is worst in Netscape, of course,
> where the page can't draw until all the image sizes are known.

Putting the sizes in the IMG SRC tag is a good idea anyway.

> This doesn't mean having a proxy is a bad idea. But it doesn't replace putting
> your images on pics.mydomain.foo, even if that resolves to the same address,
> and running a separate apache instance for them.

This is a good idea because it is easy to move to a different machine
if the load makes it necessary.  However, a simple approach is to
use a non-mod_perl apache as a non-caching proxy front end for the
dynamic content and let it deliver the static pages directly.  A
short stack of RewriteRules can arrange this if you use the 
[L] or [PT] flags on the matches you want the front end to serve
and the [P] flag on the matches to proxy.
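
As an illustration only (hostnames, ports and URL patterns invented), such a
front-end stack might look like:

    RewriteEngine On
    # static content: serve it from the front end and stop rewriting
    RewriteRule ^/images/ - [L]
    # dynamic content: proxy it through to the mod_perl back end
    RewriteRule ^/perl/(.*) http://localhost:8001/perl/$1 [P]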

  Les Mikesell
[EMAIL PROTECTED]



Re: squid performance

2000-01-20 Thread Stas Bekman

On 20 Jan 2000, Greg Stark wrote:
> I tried to use the minspareservers and maxspareservers and the other similar
> parameters to let apache tune this automatically and found it didn't work out
> well with mod_perl. What happened was that starting up perl processes was the
> single most cpu intensive thing apache could do, so as soon as it decided it
> needed a new process it slowed down the existing processes and put itself into
> a feedback loop. I prefer to force apache to start a fixed number of processes
> and just stick with that number.

This shouldn't happen if you preload most or all of the code that you use.
The fork call is very efficient on modern OSes, and since most use a
copy-on-write method, the spawning of a new process should be almost
unnoticeable.
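
A minimal sketch of that kind of preloading, assuming the usual startup-file
arrangement (the file name and module list are only examples):

    # in httpd.conf:
    #   PerlRequire /usr/local/apache/conf/startup.pl

    # startup.pl: load shared code before Apache forks its children
    use strict;
    use DBI ();
    use CGI ();
    CGI->compile(':all');    # precompile CGI.pm's autoloaded methods
    1;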

___
Stas Bekman          mailto:[EMAIL PROTECTED]    http://www.stason.org/stas
Perl,CGI,Apache,Linux,Web,Java,PC     http://www.stason.org/stas/TULARC
perl.apache.org   modperl.sourcegarden.org   perlmonth.com   perl.org
single o-> + single o-+ = singlesheaven    http://www.singlesheaven.com



Re: squid performance

2000-01-20 Thread Greg Stark


"G.W. Haywood" <[EMAIL PROTECTED]> writes:

> Would it be breaching any confidences to tell us how many
> kilobyterequests per memorymegabyte or some other equally daft
> dimensionless numbers?

I assume the number you're looking for is an ideal ratio between the proxy and
the backend server? No single number exists. You need to monitor your system
and tune. 

In theory you can calculate it by knowing the size of the average request, and
the latency to generate an average request in the backend. If your pages take
200ms to generate, and they're 4k on average, then they'll take 1s to spool
out to a 56kbps link and you'll need a 5:1 ratio. In practice, however, that
doesn't work out so cleanly because the OS is also doing buffering and because
it's really the worst case you're worried about, not the average.
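
To make the back-of-envelope arithmetic concrete (numbers from the paragraph
above; note the raw spool time comes out under 1s, so the 1s figure already
rounds up for overhead and slower-than-average links):

    my $gen_s   = 0.2;                    # 200ms to generate a page
    my $raw_s   = 4 * 1024 * 8 / 56_000;  # 4k page over a 56kbps link: ~0.59s
    my $spool_s = 1;                      # rounded-up spool time
    printf "raw spool %.2fs, ratio ~%.0f:1\n", $raw_s, $spool_s / $gen_s;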

If you have the memory you could just shoot for the most processes you can
handle, something like 256:32 for example is pretty aggressive. If your
backend scripts are written efficiently you'll probably find the backend
processes are nearly all idle.

I tried to use the minspareservers and maxspareservers and the other similar
parameters to let apache tune this automatically and found it didn't work out
well with mod_perl. What happened was that starting up perl processes was the
single most cpu intensive thing apache could do, so as soon as it decided it
needed a new process it slowed down the existing processes and put itself into
a feedback loop. I prefer to force apache to start a fixed number of processes
and just stick with that number.
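
For the record, one way to pin the process count in httpd.conf looks roughly
like this (the 20 is only an example; MinSpareServers hardly matters since
spawning is already capped by MaxClients):

    StartServers     20
    MaxClients       20
    MaxSpareServers  20    # never reap idle children
    MinSpareServers  1

The server then comes up at full strength and stays there.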

-- 
greg



Re: squid performance

2000-01-20 Thread Greg Stark


Vivek Khera <[EMAIL PROTECTED]> writes:

> Squid does indeed cache and buffer the output like you describe.  I
> don't know if Apache does so, but in practice, it has not been an
> issue for my site, which is quite busy (about 700k pages per month).
> 
> I think if you can avoid hitting a mod_perl server for the images,
> you've won more than half the battle, especially on a graphically
> intensive site.

I've learned the hard way that a proxy does not completely replace the need to
put images and other static components on a separate server. There are
two reasons that you really really want to be serving images from another
server (possibly running on the same machine of course).

1) Netscape/IE won't intermix slow dynamic requests with fast static requests
   on the same keep-alive connection

2) static images won't be delayed when the proxy gets bogged down waiting on
   the backend dynamic server.

Both of these result in a very slow user experience if the dynamic content
server gets at all slow -- even out of proportion to the slowdown. 

Eg, if the dynamic content generation becomes slow enough to cause a 2s
backlog of connections for dynamic content, then a proxy will not protect the
static images from that delay. Netscape or IE may queue those requests after
another dynamic content request, and even if they don't the proxy server will
eventually have every slot taken up waiting on the dynamic server. 

So *every* image on the page will have another 2s latency, instead of just a
2s latency for the entire page. This is worst in Netscape, of course,
where the page can't draw until all the image sizes are known.

This doesn't mean having a proxy is a bad idea. But it doesn't replace putting
your images on pics.mydomain.foo, even if that resolves to the same address,
and running a separate apache instance for them.

-- 
greg



RE: squid performance

2000-01-18 Thread Gerald Richter

>
> I looked at mod_proxy and found the pass-thru buffer size
> is IOBUFSIZE; it reads that much from the remote server, then
> writes it to the client, in a loop.
> Squid has 16K.
> Neither is enough.
> In an effort to get those mod_perl daemons to free up for long
> requests, it is possible to patch mod_proxy to read as much as
> you like in one gulp then write it..
> Having done that, I am now pretty happy - mod_rewrite mod_proxy
> mod_forwarded_for in front of modperl works great.. just a handful
> of mod_perls can drive scores of slow readers! I think that is better
> than squid for those with this particular problem.
> -Justin

Just read the whole thread and you will notice that patching isn't
necessary, because your OS does the buffering for you.

Gerald



Re: squid performance

2000-01-18 Thread Ask Bjoern Hansen

On 17 Jan 2000, Michael Alan Dorman wrote:

> Vivek Khera <[EMAIL PROTECTED]> writes:
> > This has infinite more flexibility than squid, and allows me to have
> > multiple personalities to my sites.  See for example the sites
> > http://www.morebuiness.com and http://govcon.morebusiness.com
> 
> If when you say "multiple personalities", you mean virtual hosts, then
> squid---at least recent versions---can do this as well.

(plain) Apache can also serve static files (maybe after language/whatever
negotiations etc), run CGI's and much more - as Vivek mentioned.


 - ask 

-- 
ask bjoern hansen - 
more than 60M impressions per day, 



Re: squid performance

2000-01-18 Thread Michael Alan Dorman

Vivek Khera <[EMAIL PROTECTED]> writes:
> This has infinitely more flexibility than squid, and allows me to have
> multiple personalities to my sites.  See for example the sites
> http://www.morebuiness.com and http://govcon.morebusiness.com

If when you say "multiple personalities", you mean virtual hosts, then
squid---at least recent versions---can do this as well.

Mind you, the FAQ doesn't explain this at all, but the Users Guide at
http://www.squid-cache.org/Doc/Users-Guide/detail/accel.html gives
you all the details.

You have to have a redirector process, but there are several out there
(squirm, a couple of others), and the specs for building your own are
really trivial---like five lines of perl for a basic shell.
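
For instance, a minimal accelerator redirector might look like the sketch
below (the host-to-backend map is invented; squid writes one request per line
to the helper's stdin -- URL, client address, ident, method -- and reads the
possibly rewritten URL back from stdout):

    #!/usr/bin/perl -w
    use strict;
    $| = 1;    # unbuffered output, or squid will appear to hang

    my %backend = (
        'www.example.com'    => 'http://127.0.0.1:8001',
        'govcon.example.com' => 'http://127.0.0.1:8002',
    );

    while (my $line = <STDIN>) {
        my ($url) = split ' ', $line;
        next unless defined $url;
        my ($host, $path) = $url =~ m!^http://([^/]+)(/.*)?$!;
        my $map = defined $host ? $backend{lc $host} : undef;
        # print a rewritten URL, or echo the original to leave it untouched
        my $out = defined $map ? $map . (defined $path ? $path : '/') : $url;
        print "$out\n";
    }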

Now I've not done bench tests to see which of these options does the
best (quickest, lowest load, etc) job, and of course familiarity with
the configuration options counts for a lot---but I have to say (not
that it's entirely pertinent) that even though I spend a lot more time
with apache than squid, the times I've had to play with acls, I've
found squid easier than apache.

Now if you meant something else, well, I don't know what to say other
than I think there are drugs that can help...

Mike.



Re: squid performance

2000-01-18 Thread jb

I looked at mod_proxy and found the pass-thru buffer size
is IOBUFSIZE; it reads that much from the remote server, then
writes it to the client, in a loop.
Squid has 16K.
Neither is enough.
In an effort to get those mod_perl daemons to free up for long
requests, it is possible to patch mod_proxy to read as much as
you like in one gulp then write it.. 
Having done that, I am now pretty happy - mod_rewrite mod_proxy
mod_forwarded_for in front of modperl works great.. just a handful
of mod_perls can drive scores of slow readers! I think that is better
than squid for those with this particular problem.
-Justin

On Mon, Jan 17, 2000 at 07:56:33AM -0800, Ask Bjoern Hansen wrote:
> On Sun, 16 Jan 2000, DeWitt Clinton wrote:
> 
> [...]
> > On that topic, is there an alternative to squid?  We are using it
> > exclusively as an accelerator, and don't need 90% of its admittedly
> > impressive functionality.  Is there anything designed exclusively for this
> > purpose?
> 
> At ValueClick we can't use the caching for obvious reasons so we're using
> a bunch of apache/mod_proxy processes in front of the apache/mod_perl
> processes to save memory.
> 
> Even with our average <1KB per request we can keep hundreds of mod_proxy
> children busy with very few active mod_perl children.
> 
> 
>   - ask
> 
> -- 
> ask bjoern hansen - 
> more than 60M impressions per day, 



RE: squid performance

2000-01-18 Thread Ask Bjoern Hansen

On Tue, 18 Jan 2000, Stas Bekman wrote:

> I'm still confused... which is the right scenario:
> 
> 1) a mod_perl process generates a response of 64k; if the
> ProxyReceiveBufferSize is 64k, the process gets released immediately, as
> all 64k are buffered at the socket. Then a proxy process comes in, picks
> up 8k of data at a time, and sends it down the wire.

Yes.

Or at least the mod_perl process gets released quickly, even for responses > 8KB.

 - ask 

-- 
ask bjoern hansen - 
more than 60M impressions per day, 



RE: squid performance

2000-01-17 Thread Gerald Richter

>
> Gerald, thanks for your answer.
> I'm still confused... which is the right scenario:
>
> 1) a mod_perl process generates a response of 64k; if the
> ProxyReceiveBufferSize is 64k, the process gets released immediately, as
> all 64k are buffered at the socket. Then a proxy process comes in, picks
> up 8k of data at a time, and sends it down the wire.
>

Yes, though I am not quite sure if it gets released immediately, or if it has
to wait until the whole transmission is successful.

> 2) a mod_perl process generates a response of 64k; a proxy process reads
> from the mod_perl socket in 8k chunks and sends them down its own socket. No
> matter what the client's speed is, the data gets buffered once again at that
> socket. So even if the client is slow, the proxy server completes the
> proxying of the 64k of data even before the client has absorbed it. Thus the
> system socket serves as another buffer on the way to the client.
>

Yes, too (but the receive and transmit buffers may be of different sizes,
depending on the OS).

The part I don't know is whether the call to close the socket waits until
all the data is actually sent successfully. If it doesn't wait, you may not
be notified of any failure, but because the proxying Apache can write to the
socket transmit buffer as fast as it can read, it should be possible for the
proxying Apache to copy all the data from the receive buffer to the transmit
buffer and then release the receive buffer, so the mod_perl Apache is free
to do other things while the proxying Apache still waits for the client to
acknowledge the data transmission. (That last part is the one I am not sure
about.)

>
> Also, if scenario 1 is the right one, and it looks like:
>
> [mod_perl] => [ socket buffer ] => [mod_proxy] => wire
>
> When the buffer size is 64k and the generated data is 128k, is it a
> shift-register (pipeline) style buffer, so that every time a chunk of 8k is
> picked up by mod_proxy, a new 8k can enter the buffer? Or can no new data
> enter the buffer before it gets empty, i.e. before all 64k are read by
> mod_proxy?
>
> As you can see, the pipeline mode provides better performance, as it
> releases the heavy mod_perl process as soon as the amount of data awaiting
> to be sent to the client equals the socket buffer size + 8k. I think it's
> not a shift-register type of buffer, though...
>

That depends on your OS, but a normal OS should of course use a pipelined
buffer (often implemented as a ring buffer: when the write pointer reaches
the end of the buffer, it wraps around to the start, and it stops when it
hits the current read position in the buffer).

Gerald



RE: squid performance

2000-01-17 Thread Stas Bekman

> > > Yes, as Joshua posted this morning (at least it was morning in
> > > Germany :-), the application buffer size is hardcoded at 8192 bytes
> > > (named IOBUFSIZE). You will find it in proxy_util.c:ap_proxy_send_fb().
> > >
> > > ProxyReceiveBufferSize sets the receive buffer size of the socket, so
> > > it's an OS issue.
> >
> > Which means that setting ProxyReceiveBufferSize higher than 8k is
> > useless unless you modify the sources. Am I right? (I want to make it as
> > clear as possible in the Guide.)
> >
> 
> No, that means that Apache reads (and writes) the data of the request in
> chunks of 8K, but the OS is providing a buffer of the size of
> ProxyReceiveBufferSize (as long as you don't hit a limit). So the proxied
> request data is buffered by the OS, and if the whole page fits inside the OS
> buffer, the sending Apache should be released immediately after sending the
> page, while the proxying Apache can read and write the data in 8K chunks as
> slowly as the client requires.

Gerald, thanks for your answer.
I'm still confused... which is the right scenario:

1) a mod_perl process generates a response of 64k; if the
ProxyReceiveBufferSize is 64k, the process gets released immediately, as
all 64k are buffered at the socket. Then a proxy process comes in, picks
up 8k of data at a time, and sends it down the wire.

2) a mod_perl process generates a response of 64k; a proxy process reads
from the mod_perl socket in 8k chunks and sends them down its own socket. No
matter what the client's speed is, the data gets buffered once again at that
socket. So even if the client is slow, the proxy server completes the
proxying of the 64k of data even before the client has absorbed it. Thus the
system socket serves as another buffer on the way to the client.

3) neither of them

Also, if scenario 1 is the right one, and it looks like:

[mod_perl] => [ socket buffer ] => [mod_proxy] => wire

When the buffer size is 64k and the generated data is 128k, is it a
shift-register (pipeline) style buffer, so that every time a chunk of 8k is
picked up by mod_proxy, a new 8k can enter the buffer? Or can no new data
enter the buffer before it gets empty, i.e. before all 64k are read by
mod_proxy?

As you can see, the pipeline mode provides better performance, as it
releases the heavy mod_perl process as soon as the amount of data awaiting
to be sent to the client equals the socket buffer size + 8k. I think it's
not a shift-register type of buffer, though...

Thank you!

> 
> That's the result of the discussion. I haven't tried it out myself yet to
> see if it really behaves this way. I will do so next time and let you know
> if I find any different behaviour.
> 
> Gerald
> 
> 
> 



___
Stas Bekman          mailto:[EMAIL PROTECTED]    http://www.stason.org/stas
Perl,CGI,Apache,Linux,Web,Java,PC     http://www.stason.org/stas/TULARC
perl.apache.org   modperl.sourcegarden.org   perlmonth.com   perl.org
single o-> + single o-+ = singlesheaven    http://www.singlesheaven.com



RE: squid performance

2000-01-17 Thread Gerald Richter

Hi Stas,

> >
> > Yes, as Joshua posted this morning (at least it was morning in
> > Germany :-), the application buffer size is hardcoded at 8192 bytes
> > (named IOBUFSIZE). You will find it in proxy_util.c:ap_proxy_send_fb().
> >
> > ProxyReceiveBufferSize sets the receive buffer size of the socket, so
> > it's an OS issue.
>
> Which means that setting ProxyReceiveBufferSize higher than 8k is
> useless unless you modify the sources. Am I right? (I want to make it as
> clear as possible in the Guide.)
>

No, that means that Apache reads (and writes) the data of the request in
chunks of 8K, but the OS is providing a buffer of the size of
ProxyReceiveBufferSize (as long as you don't hit a limit). So the proxied
request data is buffered by the OS, and if the whole page fits inside the OS
buffer, the sending Apache should be released immediately after sending the
page, while the proxying Apache can read and write the data in 8K chunks as
slowly as the client requires.

That's the result of the discussion. I haven't tried it out myself yet to
see if it really behaves this way. I will do so next time and let you know
if I find any different behaviour.

Gerald




RE: squid performance

2000-01-17 Thread Stas Bekman

> > > No, that's the size of the system call buffer.  It is not an
> > > application buffer.
> >
> > So how should one interpret the info at:
> > http://www.apache.org/docs/mod/mod_proxy.html#proxyreceivebuffersize
> >
> > 
> > The ProxyReceiveBufferSize directive specifies an explicit network buffer
> > size for outgoing HTTP and FTP connections, for increased throughput. It
> > has to be greater than 512 or set to 0 to indicate that the system's
> > default buffer size should be used.
> > 
> >
> > So what's the application buffer parameter? A hardcoded value?
> >
> 
> Yes, as Joshua posted this morning (at least it was morning in Germany :-),
> the application buffer size is hardcoded at 8192 bytes (named
> IOBUFSIZE). You will find it in proxy_util.c:ap_proxy_send_fb().
> 
> ProxyReceiveBufferSize sets the receive buffer size of the socket, so
> it's an OS issue.

Which means that setting ProxyReceiveBufferSize higher than 8k is
useless unless you modify the sources. Am I right? (I want to make it as
clear as possible in the Guide.)

___
Stas Bekman          mailto:[EMAIL PROTECTED]    http://www.stason.org/stas
Perl,CGI,Apache,Linux,Web,Java,PC     http://www.stason.org/stas/TULARC
perl.apache.org   modperl.sourcegarden.org   perlmonth.com   perl.org
single o-> + single o-+ = singlesheaven    http://www.singlesheaven.com



RE: squid performance

2000-01-17 Thread Gerald Richter

> > No, that's the size of the system call buffer.  It is not an
> > application buffer.
>
> So how should one interpret the info at:
> http://www.apache.org/docs/mod/mod_proxy.html#proxyreceivebuffersize
>
> 
> The ProxyReceiveBufferSize directive specifies an explicit network buffer
> size for outgoing HTTP and FTP connections, for increased throughput. It
> has to be greater than 512 or set to 0 to indicate that the system's
> default buffer size should be used.
> 
>
> So what's the application buffer parameter? A hardcoded value?
>

Yes, as Joshua posted this morning (at least it was morning in Germany :-),
the application buffer size is hardcoded at 8192 bytes (named
IOBUFSIZE). You will find it in proxy_util.c:ap_proxy_send_fb().

ProxyReceiveBufferSize sets the receive buffer size of the socket, so
it's an OS issue.

Gerald


-
Gerald Richter        ecos electronic communication services gmbh
Internet connectivity * web servers/design/databases * consulting

Post:   Tulpenstrasse 5 D-55276 Dienheim b. Mainz
E-Mail: [EMAIL PROTECTED] Voice:+49 6133 925151
WWW:http://www.ecos.de  Fax:  +49 6133 925152
-



Re: squid performance

2000-01-17 Thread Ask Bjoern Hansen

On Mon, 17 Jan 2000, G.W. Haywood wrote:

> > At ValueClick we can't use the caching for obvious reasons so we're using
> > a bunch of apache/mod_proxy processes in front of the apache/mod_perl
> > processes to save memory.
> > 
> > Even with our average <1KB per request we can keep hundreds of mod_proxy
> > children busy with very few active mod_perl children.
> 
> Would it be breaching any confidences to tell us how many
> kilobyterequests per memorymegabyte or some other equally daft
> dimensionless numbers?

Uh, I don't understand the question.

The replies to the requests are all redirects to the real content (which
is primarily served by Akamai) so it's quite non-typical.


 - ask

-- 
ask bjoern hansen - 
more than 60M impressions per day, 



RE: squid performance

2000-01-17 Thread Stas Bekman

> On Mon, 17 Jan 2000, Markus Wichitill wrote:
> 
> > > So, if you want to increase RCVBUF size above 65535, the default max
> > > value, you first have to raise the absolute limit in
> > > /proc/sys/net/core/rmem_max, 
> > 
> > Is "echo 131072 > /proc/sys/net/core/rmem_max" the proper way to do
> > this? I don't have much experience with /proc, but this seems to work.
> 
> Yes, that's the way described in the Linux kernel documentation, and the
> one I use myself.

So you should put this into /etc/rc.d/rc.local ?

> > If it's ok, it could be added to the Guide, which already mentions how
> > to change it in FreeBSD.
> 
> I'd also like to see this info added to the Guide.

Of course! Thanks for this factoid!

___
Stas Bekman          mailto:[EMAIL PROTECTED]    http://www.stason.org/stas
Perl,CGI,Apache,Linux,Web,Java,PC     http://www.stason.org/stas/TULARC
perl.apache.org   modperl.sourcegarden.org   perlmonth.com   perl.org
single o-> + single o-+ = singlesheaven    http://www.singlesheaven.com



RE: squid performance

2000-01-17 Thread Radu Greab



On Mon, 17 Jan 2000, Markus Wichitill wrote:

> > So, if you want to increase RCVBUF size above 65535, the default max
> > value, you first have to raise the absolute limit in
> > /proc/sys/net/core/rmem_max, 
> 
> Is "echo 131072 > /proc/sys/net/core/rmem_max" the proper way to do
> this? I don't have much experience with /proc, but this seems to work.

Yes, that's the way described in the Linux kernel documentation, and the
one I use myself.

> If it's ok, it could be added to the Guide, which already mentions how
> to change it in FreeBSD.

I'd also like to see this info added to the Guide.


Radu Greab




Re: squid performance

2000-01-17 Thread Stas Bekman

On Mon, 17 Jan 2000, Vivek Khera wrote:

> > "OB" == Oleg Bartunov <[EMAIL PROTECTED]> writes:
> 
> OB> I always thought ProxyReceiveBufferSize is supposed to be a 
OB> buffer size. I have it set to 1MB - FreeBSD, Apache 1.3.9
> 
> No, that's the size of the system call buffer.  It is not an
> application buffer.

So how should one interpret the info at:
http://www.apache.org/docs/mod/mod_proxy.html#proxyreceivebuffersize


The ProxyReceiveBufferSize directive specifies an explicit network buffer
size for outgoing HTTP and FTP connections, for increased throughput. It
has to be greater than 512 or set to 0 to indicate that the system's
default buffer size should be used.


So what's the application buffer parameter? A hardcoded value?

Oleg, were you able to benchmark the buffer size change?



___
Stas Bekman          mailto:[EMAIL PROTECTED]    http://www.stason.org/stas
Perl,CGI,Apache,Linux,Web,Java,PC     http://www.stason.org/stas/TULARC
perl.apache.org   modperl.sourcegarden.org   perlmonth.com   perl.org
single o-> + single o-+ = singlesheaven    http://www.singlesheaven.com



Re: squid performance

2000-01-17 Thread Vivek Khera

> "OB" == Oleg Bartunov <[EMAIL PROTECTED]> writes:

OB> I always thought ProxyReceiveBufferSize is supposed to be a 
OB> buffer size. I have it set to 1MB - FreeBSD, Apache 1.3.9

No, that's the size of the system call buffer.  It is not an
application buffer.



RE: squid performance

2000-01-17 Thread Markus Wichitill

> So, if you want to increase RCVBUF size above 65535, the default max
> value, you first have to raise the absolute limit in
> /proc/sys/net/core/rmem_max, 

Is "echo 131072 > /proc/sys/net/core/rmem_max" the proper way to do this? I don't have 
much experience with /proc, but this seems to work. If it's ok, it could be added to 
the Guide, which already mentions how to change it in FreeBSD.



Re: squid performance

2000-01-17 Thread G.W. Haywood

Hi there,

On Mon, 17 Jan 2000, Ask Bjoern Hansen wrote:

> At ValueClick we can't use the caching for obvious reasons so we're using
> a bunch of apache/mod_proxy processes in front of the apache/mod_perl
> processes to save memory.
> 
> Even with our average <1KB per request we can keep hundreds of mod_proxy
> children busy with very few active mod_perl children.

Would it be breaching any confidences to tell us how many
kilobyterequests per memorymegabyte or some other equally daft
dimensionless numbers?

73,
Ged.



Re: squid performance

2000-01-17 Thread Stas Bekman

> > On Solaris, default seems to be 256K ...
> 
> As I remember, that's what Linux defaults to.  Don't take my word for
> it, I can't remember exactly where or when I read it - but I think it
> was on this list some time during the last couple of months!

Guide is your friend :)
http://perl.apache.org/guide/scenario.html#Building_and_Using_mod_proxy


You can control the buffering feature with the ProxyReceiveBufferSize
directive:

ProxyReceiveBufferSize 16384

The above setting sets the buffer size to 16KB. If it is not set
explicitly, or is set to 0, then the default buffer size is used. It may
not be smaller than 512, and it should be a multiple of 512.

Both the default and the maximum possible values depend on the OS. For
example, on Linux with kernel 2.2.5 the maximum and default values are
either 32k or 64k (hint: grep the kernel sources for the SK_RMEM_MAX
variable). If you set the value bigger than the limit, the default will be
used.

Under FreeBSD it's possible to configure the kernel to allow bigger socket
buffers:

   % sysctl -w kern.ipc.maxsockbuf=2621440

When you tell the kernel to allow bigger socket buffers, you can set bigger
values for ProxyReceiveBufferSize, e.g. 1048576 (1MB) or more.

So basically, to get an immediate release of the mod_perl server instead of
leaving it waiting, ProxyReceiveBufferSize should be set to a value greater
than the biggest response produced by any mod_perl script, but not bigger
than the limit. But even if not every response is small enough, or the
buffer big enough, to absorb it all, you've still got an improvement, since
the processes that generate smaller responses will be released immediately.




___
Stas Bekman          mailto:[EMAIL PROTECTED]    http://www.stason.org/stas
Perl,CGI,Apache,Linux,Web,Java,PC     http://www.stason.org/stas/TULARC
perl.apache.org   modperl.sourcegarden.org   perlmonth.com   perl.org
single o-> + single o-+ = singlesheaven    http://www.singlesheaven.com



Re: squid performance

2000-01-17 Thread G.W. Haywood

Hi there,

On Mon, 17 Jan 2000, Joshua Chamas wrote:

> On Solaris, default seems to be 256K ...

As I remember, that's what Linux defaults to.  Don't take my word for
it, I can't remember exactly where or when I read it - but I think it
was on this list some time during the last couple of months!

> I needed to buffer up to 3M files, which I did by dynamically 
> allocating space in ap_proxy_send_fb.

For such large transfers between proxy and server, is there any reason
why one shouldn't just dump it into a tempfile in a ramdisk for the
proxy to deal with at its leisure, and let the OS take care of all the
virtual and sharing stuff?  After all, that's what it's for...

73
Ged.



Re: squid performance

2000-01-17 Thread Ask Bjoern Hansen

On Sun, 16 Jan 2000, DeWitt Clinton wrote:

[...]
> On that topic, is there an alternative to squid?  We are using it
> exclusively as an accelerator, and don't need 90% of its admittedly
> impressive functionality.  Is there anything designed exclusively for this
> purpose?

At ValueClick we can't use the caching for obvious reasons so we're using
a bunch of apache/mod_proxy processes in front of the apache/mod_perl
processes to save memory.

Even with our average <1KB per request we can keep hundreds of mod_proxy
children busy with very few active mod_perl children.


  - ask

-- 
ask bjoern hansen - 
more than 60M impressions per day, 



RE: squid performance

2000-01-17 Thread Vivek Khera

> "GR" == Gerald Richter <[EMAIL PROTECTED]> writes:

>> Lately I've been using apache on the front end with mod_rewrite and
>> mod_proxy to send mod_perl-required page requests to the heavy back

GR> Do you know how this works with slow clients compared to
GR> squid?  I always thought (but never tried it) that one benefit of
GR> squid is that it temporarily caches (or should I say buffers?) the

Squid does indeed cache and buffer the output like you describe.  I
don't know if Apache does so, but in practice, it has not been an
issue for my site, which is quite busy (about 700k pages per month).

I think if you can avoid hitting a mod_perl server for the images,
you've won more than half the battle, especially on a graphically
intensive site.

-- 
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D.                Khera Communications, Inc.
Internet: [EMAIL PROTECTED]   Rockville, MD   +1-301-545-6996
PGP & MIME spoken herehttp://www.kciLink.com/home/khera/



RE: squid performance

2000-01-17 Thread radu



On Mon, 17 Jan 2000, Gerald Richter wrote:

> Look at proxy_http.c line 263 (Apache 1.3.9):
> 
>   if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF,
>  (const char *) &conf->recv_buffer_size, sizeof(int))
> 
> I am not an expert in socket programming, but the setsockopt man page on my
> Linux says: "The system places an absolute limit on these values", but
> doesn't say where this limit is.


For 2.2 kernels the max limit is in /proc/sys/net/core/rmem_max and the
default value is in /proc/sys/net/core/rmem_default. It's good to note the
following comment from the kernel source:

"Don't error on this BSD doesn't and if you think about it this is right.
Otherwise apps have to play 'guess the biggest size' games. RCVBUF/SNDBUF
are treated in BSD as hints."

So, if you want to increase the RCVBUF size above 65535, the default max
value, you first have to raise the absolute limit in
/proc/sys/net/core/rmem_max; otherwise you might think that by calling
setsockopt you increased it to, say, 1 MB, while in fact the RCVBUF size is
still 65535.
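
In command form (Linux 2.2; the 1MB figure is only an example):

    echo 1048576 > /proc/sys/net/core/rmem_max    # raise the hard limit first

and only then will a matching setting in httpd.conf take full effect:

    ProxyReceiveBufferSize 1048576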


HTH,
Radu Greab



Re: squid performance

2000-01-17 Thread Joshua Chamas

Gerald Richter wrote:
> 
> I have seen this in the source too, that's why I wrote that it will not work
> with Apache, because most pages will be greater than 8K. Patching Apache is
> one possibility, that's right, but I just looked at the
> ProxyReceiveBufferSize which Oleg pointed to, and this one sets the socket
> options and therefore should do the same job (as far as the OS supports it).
> Look at proxy_http.c line 263 (Apache 1.3.9):
> 
> if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF,
>(const char *) &conf->recv_buffer_size, sizeof(int))
> 
> I am not an expert in socket programming, but the setsockopt man page on my
> Linux says: "The system places an absolute limit on these values", but
> doesn't say where this limit is.
> 

On Solaris, the default seems to be 256K ...

tcp_max_buf

    Specifies the maximum buffer size a user is allowed to specify with the
    SO_SNDBUF or SO_RCVBUF options. Attempts to use larger buffers fail with
    EINVAL. The default is 256K. It is unwise to make this parameter much
    larger than the maximum buffer size your applications require, since that
    could allow malfunctioning or malicious applications to consume
    unreasonable amounts of kernel memory.

I needed to buffer up to 3M files, which I did by dynamically
allocating space in ap_proxy_send_fb.  I didn't know that you
could raise tcp_max_buf at the time, and would be interested
in anyone's experience in doing so -- whether this can actually
be used to buffer large files.  It would save me a source tweak
in the future. ;)

-- Joshua
_
Joshua Chamas   Chamas Enterprises Inc.
NodeWorks >> free web link monitoring   Huntington Beach, CA  USA 
http://www.nodeworks.com1-714-625-4051



RE: squid performance

2000-01-17 Thread Gerald Richter

Joshua,
>
> I don't know what squid's buffer is like, but back in apache
> 1.3.4, the proxy buffer IOBUFSIZE was #defined to 8192 bytes,
> which would be used in proxy_util.c:ap_proxy_send_fb() to loop
> over content being proxy passed in 8K chunks, passing that
> on to the client.
>
> So if all the web files are <8K, perfect, but I'd suggest
> increasing the value that ap_proxy_send_fb uses to buffer
> to the largest size output commonly sent by the mod_perl server.
> If this is done, then apache's mod_proxy can be used as
> effectively as squid to buffer output from a mod_perl server.
>
I have seen this in the source too, that's why I wrote that it will not work
with Apache, because most pages will be greater than 8K. Patching Apache is
one possibility, that's right, but I just looked at the
ProxyReceiveBufferSize which Oleg pointed to, and this one sets the socket
options and therefore should do the same job (as far as the OS supports it).
Look at proxy_http.c line 263 (Apache 1.3.9):

if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF,
   (const char *) &conf->recv_buffer_size, sizeof(int))

I am not an expert in socket programming, but the setsockopt man page on my
Linux says: "The system places an absolute limit on these values", but
doesn't say where this limit is.

Gerald



Re: squid performance

2000-01-17 Thread Oleg Bartunov

I always thought ProxyReceiveBufferSize is supposed to be a 
buffer size. I have it set to 1MB - FreeBSD, Apache 1.3.9

Oleg
On Mon, 17 Jan 2000, Joshua Chamas wrote:

> Date: Mon, 17 Jan 2000 00:36:17 -0800
> From: Joshua Chamas <[EMAIL PROTECTED]>
> To: Gerald Richter <[EMAIL PROTECTED]>
> Cc: Vivek Khera <[EMAIL PROTECTED]>, Modperl list <[EMAIL PROTECTED]>
> Subject: Re: squid performance
> 
> Gerald Richter wrote:
> > 
> > > These are on the same server, and all images and CGI's run on the
> > > small apache, and the page contents are dynamically generated by a
> > > heavy back-end proxied transparently.  The front end apache proxies to
> > > a different back-end based on the hostname it is contacted under.
> > >
> > Do you know how this works with slow clients compared to squid? I always
> > thought (but never tried it) that one benefit of squid is that it temporarily
> > caches (or should I say buffers?) the output generated by mod_perl scripts, so
> > the script can run as fast as possible and deliver its output to squid,
> > while squid delivers the output to a slower client; the process running
> > mod_perl can already serve the next request, therefore keeping the number of
> > mod_perl processes small.
> > 
> > Does this work in this way with squid? I don't think this will work with
> > Apache and a simple ProxyPass...
> > 
> 
> Gerald,
> 
> I don't know what squid's buffer is like, but back in apache 
> 1.3.4, the proxy buffer IOBUFSIZE was #defined to 8192 bytes, 
> which would be used in proxy_util.c:ap_proxy_send_fb() to loop 
> over content being proxy passed in 8K chunks, passing that
> on to the client.  
> 
> So if all the web files are <8K, perfect, but I'd suggest 
> increasing the value that ap_proxy_send_fb uses to buffer
> to the largest size output commonly sent by the mod_perl server.
> If this is done, then apache's mod_proxy can be used as 
> effectively as squid to buffer output from a mod_perl server.
> 
> Regards,
> 
> Joshua
> _
> Joshua Chamas Chamas Enterprises Inc.
> NodeWorks >> free web link monitoring Huntington Beach, CA  USA 
> http://www.nodeworks.com1-714-625-4051
> 

_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83



Re: squid performance

2000-01-17 Thread Joshua Chamas

Gerald Richter wrote:
> 
> > These are on the same server, and all images and CGI's run on the
> > small apache, and the page contents are dynamically generated by a
> > heavy back-end proxied transparently.  The front end apache proxies to
> > a different back-end based on the hostname it is contacted under.
> >
> Do you know how this works with slow clients compared to squid? I always
> thought (but never tried it) that one benefit of squid is that it temporarily
> caches (or should I say buffers?) the output generated by mod_perl scripts, so
> the script can run as fast as possible and deliver its output to squid,
> while squid delivers the output to a slower client; the process running
> mod_perl can already serve the next request, therefore keeping the number of
> mod_perl processes small.
> 
> Does this work in this way with squid? I don't think this will work with
> Apache and a simple ProxyPass...
> 

Gerald,

I don't know what squid's buffer is like, but back in apache 
1.3.4, the proxy buffer IOBUFSIZE was #defined to 8192 bytes, 
which would be used in proxy_util.c:ap_proxy_send_fb() to loop 
over content being proxy passed in 8K chunks, passing that
on to the client.  

So if all the web files are <8K, perfect, but I'd suggest 
increasing the value that ap_proxy_send_fb uses to buffer
to the largest size output commonly sent by the mod_perl server.
If this is done, then apache's mod_proxy can be used as 
effectively as squid to buffer output from a mod_perl server.

Regards,

Joshua
_
Joshua Chamas   Chamas Enterprises Inc.
NodeWorks >> free web link monitoring   Huntington Beach, CA  USA 
http://www.nodeworks.com1-714-625-4051



RE: squid performance

2000-01-16 Thread Gerald Richter

> Lately I've been using apache on the front end with mod_rewrite and
> mod_proxy to send mod_perl-required page requests to the heavy back
> end.  This has infinitely more flexibility than squid, and allows me to
> have multiple personalities to my sites.  See for example the sites
> http://www.morebuiness.com and http://govcon.morebusiness.com
>
> These are on the same server, and all images and CGI's run on the
> small apache, and the page contents are dynamically generated by a
> heavy back-end proxied transparently.  The front end apache proxies to
> a different back-end based on the hostname it is contacted under.
>
Do you know how this works with slow clients compared to squid? I always
thought (but never tried it) that one benefit of squid is that it temporarily
caches (or should I say buffers?) the output generated by mod_perl scripts, so
the script can run as fast as possible and deliver its output to squid,
while squid delivers the output to a slower client; the process running
mod_perl can already serve the next request, therefore keeping the number of
mod_perl processes small.

Does this work in this way with squid? I don't think this will work with
Apache and a simple ProxyPass...

Gerald



Re: squid performance

2000-01-16 Thread Jeffrey W. Baker

Vivek Khera wrote:

> Wouldn't running mod_perl on the front end kinda defeat the whole
> purpose of an accelerator?

Perhaps not.  The thing that adds the most heft to my httpd processes is
the Oracle libraries.  Mod_perl processes can be very small if they
aren't linking in a bunch of libraries.

-jwb



Re: squid performance

2000-01-16 Thread Vivek Khera

> "DC" == DeWitt Clinton <[EMAIL PROTECTED]> writes:

DC> On that topic, is there an alternative to squid?  We are using it
DC> exclusively as an accelerator, and don't need 90% of its admittedly

Lately I've been using apache on the front end with mod_rewrite and
mod_proxy to send mod_perl-required page requests to the heavy back
end.  This has infinitely more flexibility than squid, and allows me to
have multiple personalities to my sites.  See for example the sites
http://www.morebuiness.com and http://govcon.morebusiness.com

These are on the same server, and all images and CGI's run on the
small apache, and the page contents are dynamically generated by a
heavy back-end proxied transparently.  The front end apache proxies to
a different back-end based on the hostname it is contacted under.

DC> impressive functionality.  Is there anything designed exclusively for this
DC> purpose?  Perhaps a modperl module that implements caching of files based
DC> on Expires headers?

Wouldn't running mod_perl on the front end kinda defeat the whole
purpose of an accelerator?



Re: squid performance

2000-01-16 Thread DeWitt Clinton

On Sun, 16 Jan 2000, Steven Lembark wrote:

> given you have the core to support it...  try using libmm and
> a tied hash to just stash the stuff until it's asked for.  

Actually, we are currently developing a generic object caching interface
that will support a TTL based on expiration dates, and implementing this
in both Java and perl.  Since the perl version will be used most in a
multi-process environment (like apache/modperl) we'll back-end this on
mmap.  The java version will be used in both our middle tier (a
single-process threaded server) and in a servlet implementation of our
front-end, which also runs in a single process.
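
As a toy sketch of just the TTL part (a plain in-memory hash here; the real
version would tie the hash to shared memory, e.g. via libmm, as Steven
suggested):

    my %cache;    # key => [ expiry time, value ]

    sub cache_set {
        my ($key, $value, $ttl) = @_;
        $cache{$key} = [ time + $ttl, $value ];
    }

    sub cache_get {
        my ($key) = @_;
        my $slot = $cache{$key} or return undef;
        if (time >= $slot->[0]) {    # expired: evict and report a miss
            delete $cache{$key};
            return undef;
        }
        return $slot->[1];
    }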

Right now, I do cache data in each httpd process, but since I haven't
implemented a TTL, I rely on the apache process dying naturally to clean
up the data.  Obviously, this creates *huge* httpd processes (50+ MB) with
non-shared but redundant data, and isn't particularly predictable.  Our
site is a little different from most ecommerce sites, because we have a
very low depth of inventory, and have to keep all the state in the
database, so that customers will rarely see out-of-date product data.

Ultimately, I hope to open up our entire source tree under the GPL anyway,
but I'll try to get the generic caching code out there earlier, if
possible.  I'll keep this list posted.  Unless someone has already done
this, that is.

-DeWitt



Re: squid performance

2000-01-16 Thread Jeffrey W. Baker

DeWitt Clinton wrote:
> 
> On Fri, 14 Jan 2000 [EMAIL PROTECTED] wrote:
> 
> > Hi, I am switching my modperl site to squid in httpd accelerator mode
> > and everything works as advertised, but was very surprised to find
> > squid 10x slower than apache on a cached 6k gif as measured by
> > apache bench... 100 requests/second vs almost 1000 for apache.
> 
> I've experienced the same results with accelerating small static files.
> However, we still use squid on our site (www.eziba.com) because our httpd
> processes are so heavy, and to save a trip to the database for documents
> that are cachable, such as product images.  We've found the net
> performance gain to be noticeable.  However, if all you are serving is
> static files that live on the same tier as the apache server, you'll
> probably be better off without squid.
> 
> On that topic, is there an alternative to squid?  We are using it
> exclusively as an accelerator, and don't need 90% of its admittedly
> impressive functionality.  Is there anything designed exclusively for this
> purpose?  Perhaps a modperl module that implements caching of files based
> on Expires headers?

How about mod_backhand?  http://www.backhand.org/.  It is capable of not
only buffering, but also load balancing and failover.

If all you want is buffering, it would be very easy to write a small
program that accepts http requests, forwards them to another
daemon, and buffers the response.  Perhaps this would be a good
weekend project for one of us.
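
To make the idea concrete, here is a toy, single-connection sketch in Perl
(HTTP/1.0 GETs only, ports invented; a real one would need concurrency,
error handling and header rewriting):

    #!/usr/bin/perl -w
    use strict;
    use IO::Socket::INET;

    my $listen = IO::Socket::INET->new(LocalPort => 8080, Listen => 5,
                                       Reuse => 1) or die "listen: $!";
    while (my $client = $listen->accept) {
        my $request = '';
        while (my $line = <$client>) {    # read the request head
            $request .= $line;
            last if $line =~ /^\r?\n$/;
        }
        my $backend = IO::Socket::INET->new(PeerAddr => '127.0.0.1',
                                            PeerPort => 8001)
            or do { close $client; next };
        print $backend $request;
        my $response = do { local $/; <$backend> };  # slurp; backend is freed
        close $backend;
        print $client $response;    # now feed the slow client at its own pace
        close $client;
    }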

I've been tinkering with the idea of writing an httpd.  As much as I
like Apache, there are many things I don't like about it, and a ton of
functionality that I won't ever need.  All I really want is a web server
that:

1) Is multithreaded or otherwise highly parallel.
2) Has a request stage interface like Apache's
3) Uses async I/O

Number 3 would relieve us of this squid hack that we are all using.
That doesn't seem too hard.  I think I will write it.  I'll need someone
to hook in the Perl interpreter, though :)

-jwb



Re: squid performance

2000-01-16 Thread DeWitt Clinton

On Fri, 14 Jan 2000 [EMAIL PROTECTED] wrote:

> Hi, I am switching my modperl site to squid in httpd accelerator mode
> and everything works as advertised, but was very surprised to find
> squid 10x slower than apache on a cached 6k gif as measured by
> apache bench... 100 requests/second vs almost 1000 for apache.

I've experienced the same results with accelerating small static files.  
However, we still use squid on our site (www.eziba.com) because our httpd
processes are so heavy, and to save a trip to the database for documents
that are cachable, such as product images.  We've found the net
performance gain to be noticeable.  However, if all you are serving is
static files that live on the same tier as the apache server, you'll
probably be better off without squid.

On that topic, is there an alternative to squid?  We are using it
exclusively as an accelerator, and don't need 90% of its admittedly
impressive functionality.  Is there anything designed exclusively for this
purpose?  Perhaps a modperl module that implements caching of files based
on Expires headers?

-DeWitt