Please fully describe the hardware you're using (NICs, etc.). This is
not normal behavior.

On 10/31/05, Peter Zaitsev <[EMAIL PROTECTED]> wrote:
> On Sun, 2005-10-30 at 23:14 +0100, Espen Johansen wrote:
> > Hi Peter,
> >
> > I have seen you have done a lot of testing with apache benchmarking.
> > I find it a little strange to use this as a test. Basically you will hit
> > the ceiling of standing I/O operations, because you introduce latency with
> > pfsense. The lower the latency, the more finished tasks/connections per
> > time unit. Most people don't take this into consideration when they tune
> > apache, although it is one of the most important aspects of web-server tuning.
>
> Espen,
>
> If you look at my set of emails you will see that the growing
> latency with network polling is not my concern, nor is the
> dropping throughput with pfsense in the middle - that is all
> understandable.
>
> What is NOT ok, however, is the stall (20+ seconds) when CPU usage on
> pfsense drops almost to zero and no traffic comes over the connections.
> Sometimes it causes the apache benchmark to abort; sometimes it just
> shows crazy response times.
>
> This does not happen in a direct benchmark (no pfsense in the middle) or
> with pfsense with the firewall disabled.
>
> Why did I use the apache benchmark?  Well, it is a simple stress test which
> results in a lot of traffic and a lot of states in the state table.
>
> >
> > This is the scenario:
> >
> > A client with low BW and high latency will generate a standing I/O because
> > of the way apache is designed. So if a client with 100ms latency asks for a
> > file of 100Kbyte and he has a 3KB/s transfer rate, he will generate a
> > standing I/O operation for "latency + transfer time", and the I/O operation
> > will not be finished until the transfer is complete. You do basically the
> > same thing: because you change the amount of time the request takes to
> > process, you will have more standing I/O operations than if pfsense does
> > routing only (faster than routing and filtering). So let's say that you
> > increase latency from 0.4 ms to 2 ms; that means your standing I/O lasts
> > five times as long, which in turn means that your ability to serve
> > connections will be 1/5 of what it was at 0.4 ms latency.
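As an aside, the back-of-envelope model in the paragraph above is easy to sanity-check. The numbers (0.4 ms, 2 ms) are the hypothetical ones from that paragraph; the model simply assumes a fixed concurrency limit, so throughput scales inversely with how long each standing I/O lasts:

```python
# Back-of-envelope: with a fixed concurrency limit, connections served
# per unit time scale inversely with how long each standing I/O lasts.
base_latency_ms = 0.4   # routing only (hypothetical figure from above)
new_latency_ms = 2.0    # routing + filtering (hypothetical figure from above)

slowdown = new_latency_ms / base_latency_ms   # how much longer each I/O stands
capacity_ratio = 1 / slowdown                 # relative connections served

print(f"each standing I/O lasts {slowdown:.0f}x as long")
print(f"relative serving capacity: {capacity_ratio:.2f}")  # roughly 1/5
```

This confirms the "1/5" figure: 2 ms / 0.4 ms is a 5x increase in standing-I/O lifetime, i.e. 400% longer, not 250%.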
>
> Well... This would be the case in a real-life scenario - slow clients
> blowing up the number of apache children.  But it is not the case in a
> synthetic Apache benchmark test.   In that case you set a fixed
> concurrency, and I obviously set it low enough for my Apache box to
> handle.
>
> Furthermore, pfsense locks up even with a single connection (this is
> independent of whether device polling is enabled).
>
>
> >
> > The ones listed below seem to be the ones that have the most effect on
> > polling and performance. You will have to play around with these settings
> > to find out what works best on your HW, as I can't seem to find a common
> > setting that works well for all kinds of HW.
> >
> > kern.polling.each_burst=80
> > kern.polling.burst_max=1000
> > kern.polling.user_frac=50
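(For reference, the sysctls quoted above can be tried at runtime with sysctl(8) - a sketch, assuming a FreeBSD box with DEVICE_POLLING support; the values are the starting points suggested above, not universal settings:

```shell
# Apply the suggested polling tunables at runtime (root required):
sysctl kern.polling.each_burst=80
sysctl kern.polling.burst_max=1000
sysctl kern.polling.user_frac=50

# To persist across reboots, add the same lines to /etc/sysctl.conf:
#   kern.polling.each_burst=80
#   kern.polling.burst_max=1000
#   kern.polling.user_frac=50
```

Runtime changes take effect immediately, so you can iterate on the values while the benchmark is running.)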
>
>
> Thanks.
>
>
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
>
>
