On Mon, 31 Mar 2003 [EMAIL PROTECTED] wrote:

> > we should also put in a directive to only compress when system load is 
> > below a certain level. (but we would need an apr_get_system_load() 
> > function first .. any volunteers? )
> 
> If you go down this route watch out for what's called 'back-flash'.
> 
> You can easily get into a catch-22 at the 'threshold' rate
> where you are ping-ponging over/under the threshold because
> currently executing ZLIB compressions will always be included in
> the 'system load' stat you are computing.
> 
> In other words... if you don't want to compress because you
> think the machine is too busy, then it might only be too
> busy because it's already compressing. The minute you
> turn off compression you drop under the threshold, and now
> you are 'thrashing' and 'ping-ponging' over/under the
> threshold.
> 
> You might want to always compare system load against
> transaction/compression task load to see if something other 
> than normal compression activity is eating the CPU.
> 
> Low transaction count + high CPU load = Something other than
> compression is eating the CPU and stopping compressions
> won't really make much difference.
> 
> High transaction count + high CPU load + high number
> of compressions in progress = Might be best to back
> off on the compressions for a moment.
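
A minimal sketch of that heuristic (the thresholds and the two counters
are invented for illustration, and since apr_get_system_load() does not
exist this uses getloadavg(), which is available on BSD and glibc
systems):

    /* Sketch of the back-off heuristic quoted above; nothing here is
     * real Apache or APR code. */
    #include <stdlib.h>                   /* getloadavg() on BSD/glibc */

    #define LOAD_THRESHOLD      4.0       /* assumed tunable */
    #define BUSY_COMPRESSIONS   8         /* assumed tunable */

    extern int active_compressions(void); /* hypothetical shared counter */
    extern int recent_transactions(void); /* hypothetical request-rate stat */

    static int should_compress(void)
    {
        double load[1];

        if (getloadavg(load, 1) != 1)
            return 1;                     /* no stat available: compress */

        if (load[0] < LOAD_THRESHOLD)
            return 1;                     /* machine is not busy */

        /* Low transaction count + high CPU load: something other than
         * compression is eating the CPU, so stopping would not help. */
        if (recent_transactions() < BUSY_COMPRESSIONS)
            return 1;

        /* High transaction count + high load + many compressions in
         * progress: back off for a moment. */
        return active_compressions() < BUSY_COMPRESSIONS;
    }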

In my mod_deflate for Apache 1.3.x I take the idle time into account
on FreeBSD 3.x+. To reduce ping-ponging I do not disable compression
entirely when idle time falls below the specified level; instead I
limit the number of processes that can concurrently compress output.
This should gracefully throttle compression while idle time
is low.
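
Roughly like this on FreeBSD (a sketch, not the actual mod_deflate
source; the tunables are invented, and on older FreeBSD CPUSTATES and
CP_IDLE live in <sys/dkstat.h> rather than <sys/resource.h>):

    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <sys/resource.h>            /* CPUSTATES, CP_IDLE */

    #define MIN_IDLE_PERCENT    10       /* assumed tunable */
    #define MAX_COMPRESSORS     4        /* assumed tunable */

    static long cp_prev[CPUSTATES];

    /* Idle percentage since the previous call, or -1 on error.
     * kern.cp_time returns cumulative per-state CPU ticks. */
    static int idle_percent(void)
    {
        long cp[CPUSTATES], total = 0, idle;
        size_t len = sizeof(cp);
        int i;

        if (sysctlbyname("kern.cp_time", cp, &len, NULL, 0) == -1)
            return -1;

        idle = cp[CP_IDLE] - cp_prev[CP_IDLE];
        for (i = 0; i < CPUSTATES; i++) {
            total += cp[i] - cp_prev[i];
            cp_prev[i] = cp[i];
        }
        return total > 0 ? (int)(idle * 100 / total) : -1;
    }

    /* When idle time is low, do not turn compression off outright
     * (that would ping-pong); just cap how many processes may
     * compress at once. */
    static int may_compress(int now_compressing)
    {
        int idle = idle_percent();

        if (idle < 0 || idle >= MIN_IDLE_PERCENT)
            return 1;
        return now_compressing < MAX_COMPRESSORS;
    }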

But I found this limitation is almost useless on modern CPUs with small
enough responses (roughly < 100K). A much bigger limiting factor is the
memory overhead that zlib requires for compression: 256K-384K per
stream. So I have a directive that disables compression when the number
of Apache processes grows bigger than specified, to avoid intensive
swapping.
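
For example (again only a sketch; the limit and the scoreboard helper
are invented, not the real directive):

    /* Per the zlib docs, deflate needs roughly
     * (1 << (windowBits+2)) + (1 << (memLevel+9)) bytes per stream:
     * ~256K with the defaults windowBits=15/memLevel=8, ~384K with
     * memLevel=9. Capping the number of children that may compress
     * bounds that overhead and avoids swapping. */
    #define DEFLATE_MAX_PROCS   50       /* assumed directive value */

    extern int running_children(void);   /* hypothetical scoreboard walk */

    static int compression_allowed(void)
    {
        return running_children() <= DEFLATE_MAX_PROCS;
    }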


Igor Sysoev
http://sysoev.ru/en/
