I've run SMOKE tests and not seen any discernible difference
in performance, but they have not been especially
stressful tests.

On Feb 2, 2011, at 7:02 PM, David Dabbs wrote:

> Hi.
> 
> Has anyone compared before/after performance by pounding a pre-patch httpd
> and one with the fdqueue mods (using ab or another load generator)?
> Or, for those more daring readers, observed improvements in a production
> environment?
> Before deploying a manually patched 2.2.x branch, we're probably going to
> run some sort of load test. 
> Having read the thread, I don't think we'd need to do anything other than
> throw a lot of load at it, right?
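
For reference, an illustrative ab invocation for that kind of load test could
look something like the following; the URL, request count and concurrency are
placeholders only, not values anyone in this thread has actually used:

    # ~100k keep-alive requests at a concurrency of 200, run against the
    # unpatched and patched builds, comparing requests/sec and latency
    ab -k -n 100000 -c 200 http://test-host/some/path
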
> 
> Thanks,
> 
> David
> 
> 
> -----Original Message-----
> From: Niklas Edmundsson [mailto:ni...@acc.umu.se] 
> Sent: Friday, January 28, 2011 8:58 AM
> To: dev@httpd.apache.org
> Subject: Re: Performance fix in event mpm
> 
> On Fri, 28 Jan 2011, Jim Jagielski wrote:
> 
>> I was going to submit it as a backport, yes.
> 
> I have a strong feeling that this can explain the weird performance 
> issues/behavior we've seen when hitting any bottleneck that results in 
> requests being queued up.
> 
> Thanks for finding/fixing this :)
> 
>> 
>> On Jan 27, 2011, at 9:08 PM, David Dabbs wrote:
>> 
>>> I see that the changes described below were applied to the trunk worker
>>> and event MPM code.
>>> Would you consider applying it to the 2.2.x branch? I will do so myself
>>> and test in my env.
>>> 
>>> 
>>> Many thanks,
>>> 
>>> David Dabbs
>>> 
>>> 
>>> 
>>> -----Original Message-----
>>> From: Jim Jagielski [mailto:j...@jagunet.com]
>>> Sent: Thursday, January 27, 2011 12:43 PM
>>> To: dev@httpd.apache.org
>>> Subject: Re: Performance fix in event mpm
>>> 
>>> 
>>> On Jan 27, 2011, at 1:31 PM, Jim Jagielski wrote:
>>> 
>>>> 
>>>> On Jan 27, 2011, at 12:21 PM, Jim Van Fleet wrote:
>>>> 
>>>>> I have been developing an application using Apache 2.2 on Linux 2.6. My
>>>>> test environment creates a very heavy workload and puts a strain on
>>>>> everything.
>>>>> 
>>>>> I would get good performance for a while and as the load ramped up,
>>>>> performance would quickly get very bad.  Erratically, transactions would
>>>>> finish quickly or take a very long time -- tcpdump analysis showed
>>>>> milliseconds or seconds between responses. Also, the recv queue got very
>>>>> large.
>>>>> 
>>>>> I noticed that ap_queue_pop removes elements from the queue LIFO rather
>>>>> than FIFO.  I also noticed that apr_queue_pop uses a different technique,
>>>>> which is not too expensive and is FIFO, so I changed ap_queue_pop/push to
>>>>> use that technique and the receive problems went away.
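
As a toy illustration of the two pop disciplines being described (a sketch
only, not the actual httpd fdqueue or apr_queue code; locking and the push
side are omitted, and the names are made up):

    /* Sketch only -- not the real fdqueue/apr_queue code; no locking. */
    typedef struct {
        void        **data;    /* element slots                          */
        unsigned int  bounds;  /* capacity of data[]                     */
        unsigned int  nelts;   /* number of queued elements              */
        unsigned int  out;     /* FIFO case: index of the oldest element */
    } toy_queue_t;

    /* LIFO: pop the most recently pushed element (stack behaviour). */
    static void *toy_pop_lifo(toy_queue_t *q)
    {
        return q->data[--q->nelts];
    }

    /* FIFO: pop the oldest element, wrapping the index with '%' --
     * the ring-buffer style technique apr_queue uses.              */
    static void *toy_pop_fifo(toy_queue_t *q)
    {
        void *elem = q->data[q->out];
        q->out = (q->out + 1) % q->bounds;
        q->nelts--;
        return elem;
    }

With a LIFO pop, the most recently queued entries are served first, so under
sustained load older entries can starve, which would be consistent with the
erratic response times described above.
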
>>>>> 
>>>>> Please let me know if you think this change is appropriate and/or if
>>>>> you'd like more data.
>>>>> 
>>>> 
>>>> Hmmm.... Not sure why the fdqueue would be LIFO. But certainly
>>>> the above ain't right for pop! :)
>>> 
>>> OK, looking over the history, it looks like the queue was changed from
>>> FIFO to LIFO ~10 years ago (worker)... The reasoning:
>>> 
>>> This is a rather simple patch that may improve cache-hit performance
>>> under some conditions by changing the queue of available worker threads
>>> from FIFO to LIFO. It also adds a tiny reduction in the arithmetic that
>>> happens in the critical section, which will definitely help if you have
>>> a lame compiler.
>>> 
>>> Seems to me that changing back to FIFO would make sense, especially
>>> with trunk. We can profile the expense of the '% queue->bounds'
>>> but it seems to me that if it was really bad, we'd have seen it
>>> in apr and changed it there... after all, all we're doing
>>> with that is keeping it in bounds and a comparison and subtraction
>>> would do that just as well...
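
To make the '% queue->bounds' point concrete, here is a hedged sketch of the
two equivalent ways of keeping a ring-buffer index in bounds (illustrative
names only, not the actual queue code):

    /* Illustrative only: both keep an incremented index inside [0, bounds). */
    static unsigned int wrap_with_modulo(unsigned int idx, unsigned int bounds)
    {
        return (idx + 1) % bounds;      /* costs an integer division */
    }

    static unsigned int wrap_with_compare(unsigned int idx, unsigned int bounds)
    {
        idx += 1;
        if (idx >= bounds)              /* a comparison ...          */
            idx -= bounds;              /* ... and a subtraction     */
        return idx;
    }

Either form works as long as the index only ever advances by one per
operation, so swapping the modulo out later would not change behaviour.
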
>>> 
>> 
> 
> 
> /Nikke
> -- 
> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
>  Niklas Edmundsson, Admin @ {acc,hpc2n}.umu.se      |     ni...@acc.umu.se
> ---------------------------------------------------------------------------
>  I am Bashir on Borg: I'd be hostile too if my poop was cubed!
> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
> 
