On Sun, Oct 1, 2017 at 2:50 PM, wrote:
> https://bz.apache.org/bugzilla/show_bug.cgi?id=61551
>
> --- Comment #18 from Eric Covener ---
>
> Maybe a bit premature to ask mod_security to make a change API-wise.
>
> Looks like a process_connection() hook could complete without changing
> the state
Thanks for testing and verifying the fix, Stefan!
> Am 31.07.2017 um 11:32 schrieb Stefan Priebe - Profihost AG
> :
>
> i was able to fix this with mod_h2 v1.10.10
>
> Greets,
> Stefan
>
> Am 25.07.2017 um 15:40 schrieb Stefan Eissing:
>> Well, if the customer could reproduce this at a
>>
i was able to fix this with mod_h2 v1.10.10
Greets,
Stefan
Am 25.07.2017 um 15:40 schrieb Stefan Eissing:
> Well, if the customer could reproduce this at a
>
> LogLevel http2:trace2
>
> that would help.
>
>> Am 25.07.2017 um 15:38 schrieb Stefan Priebe - Profihost AG
>> :
>>
>> Hello Ste
I am waiting to hear back from the peeps that opened the github issue. From how
I read their logs, the patch should help them. Will report what they say.
-Stefan
> Am 25.07.2017 um 15:40 schrieb Stefan Eissing :
>
> Well, if the customer could reproduce this at a
>
> LogLevel http2:trace2
>
Well, if the customer could reproduce this at a
LogLevel http2:trace2
that would help.
> Am 25.07.2017 um 15:38 schrieb Stefan Priebe - Profihost AG
> :
>
> Hello Stefan,
>
> thanks for the patch. No it does not solve the problem our customer is
> seeing.
>
> What kind of details / logs y
Hello Stefan,
thanks for the patch. No, it does not solve the problem our customer is
seeing.
What kind of details / logs do you need?
Greets,
Stefan
Am 25.07.2017 um 11:59 schrieb Stefan Eissing:
> The issue was opened here: https://github.com/icing/mod_h2/issues/143
>
> I made a patch that i hop
The issue was opened here: https://github.com/icing/mod_h2/issues/143
I made a patch that i hope addresses the problem. The 2.4.x version I attach to
this mail.
Thanks!
Stefan
h2_stream_stall_2.4.x-v0.diff
Description: Binary data
> Am 25.07.2017 um 08:13 schrieb Stefan Priebe - Profihost
Am 24.07.2017 um 23:06 schrieb Stefan Eissing:
> I have another report of request getting stuck, resulting in the error you
> noticed. Will look tomorrow and report back here what I find.
Thanks, if you need any logs, please ask.
Stefan
>
>> Am 24.07.2017 um 22:20 schrieb Stefan Priebe - Profi
I have another report of request getting stuck, resulting in the error you
noticed. Will look tomorrow and report back here what I find.
> Am 24.07.2017 um 22:20 schrieb Stefan Priebe - Profihost AG
> :
>
> Hello all,
>
> currently 8 hours of testing without any issues.
>
> @Stefan
> i've mos
Hello all,
currently 8 hours of testing without any issues.
@Stefan
i most probably have another issue with http2 where some elements of the
page are sometimes missing and the connection results in
ERR_CONNECTION_CLOSED after 60s. What kind of details do you need?
Greets,
Stefan
Am 22.07.2017 um 1
First test with version five looks good so far; will continue extensive testing
tomorrow.
Greets,
Stefan
Excuse my typo sent from my mobile phone.
> Am 22.07.2017 um 13:35 schrieb Yann Ylavic :
>
>> On Sat, Jul 22, 2017 at 2:18 AM, Yann Ylavic wrote:
>> On Fri, Jul 21, 2017 at 10:31 PM, Stefan
On Sat, Jul 22, 2017 at 2:18 AM, Yann Ylavic wrote:
> On Fri, Jul 21, 2017 at 10:31 PM, Stefan Priebe - Profihost AG
> wrote:
>>
>> your new defer linger V3 deadlocked as well.
>>
>> GDB traces:
>> https://www.apaste.info/LMfJ
>
> This shows the listener thread waiting for a worker while there ar
And to answer myself: no, the v3 patch does not expose anything when running in
h2fuzz.
> Am 22.07.2017 um 07:17 schrieb Stefan Eissing :
>
> Profihost, where bugs come to die!
>
> I am currently fully overloaded, but it would be interesting to check how the
> previous versions of the patch fa
Profihost, where bugs come to die!
I am currently fully overloaded, but it would be interesting to check how the
previous versions of the patch fare in a h2fuzz setup.
-Stefan
> Am 22.07.2017 um 02:18 schrieb Yann Ylavic :
>
> On Fri, Jul 21, 2017 at 10:31 PM, Stefan Priebe - Profihost AG
> w
On Fri, Jul 21, 2017 at 10:31 PM, Stefan Priebe - Profihost AG
wrote:
>
> your new defer linger V3 deadlocked as well.
>
> GDB traces:
> https://www.apaste.info/LMfJ
This shows the listener thread waiting for a worker while there are
many available.
My mistake, the worker threads failed to rearm
Hello Yann,
your new defer linger V3 deadlocked as well.
GDB traces:
https://www.apaste.info/LMfJ
But this time i have no fullstatus for you as the apache didn't serve
any connections at all anymore. But even before i did NOT see those
strange values for closing connections.
Thanks!
Greets,
St
Hello Yann,
i downloaded V3. Can't guarantee when i can test. Maybe today or on Monday.
Greets,
Stefan
Am 21.07.2017 um 01:08 schrieb Yann Ylavic:
> On Thu, Jul 20, 2017 at 12:48 PM, Stefan Priebe - Profihost AG
> wrote:
>> V3 didn't help.
>
> I just posted a new patch in this thread, with a
> So, should we favor the draining of defer_linger_chain with as many workers
> as necessary, like the current patch, or should we have as few workers
> as possible and not start new workers in loops with no effect on
> defer_linger_chain?
I think the fewer workers option could lead to hard to debug (fr
> Also, it seems that in the deferred lingering case we should probably
> shorten the socket timeout before calling (and possibly blocking on)
> ap_start_lingering_close()'s hooks/flush, since we likely come from a
> time-up already...
+1
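The defer_linger_chain discussed above holds connections whose lingering close has been deferred to a worker thread. As a rough, hypothetical sketch (names and types here are invented for illustration; the actual patch works on the MPM's own connection-state records with APR's atomic helpers), such a chain can be a lock-free LIFO list that the listener/timeout path pushes onto without ever blocking, and that a worker drains in one swap:

```c
#include <stdatomic.h>
#include <stddef.h>

/* Hypothetical stand-in for the MPM's connection state record. */
typedef struct deferred_conn {
    struct deferred_conn *chain;   /* next entry in the deferred list */
    int fd;                        /* placeholder payload */
} deferred_conn;

static _Atomic(deferred_conn *) defer_linger_chain = NULL;

/* Push a connection whose lingering close is deferred to a worker.
 * Lock-free, so the timeout/listener path never blocks on a mutex. */
void defer_linger_push(deferred_conn *cs)
{
    deferred_conn *head = atomic_load(&defer_linger_chain);
    do {
        cs->chain = head;
    } while (!atomic_compare_exchange_weak(&defer_linger_chain, &head, cs));
}

/* A worker drains the whole chain at once by swapping in NULL. */
deferred_conn *defer_linger_drain(void)
{
    return atomic_exchange(&defer_linger_chain, NULL);
}
```

With this shape, "starvation" is the case where entries sit on the chain because no worker runs defer_linger_drain(), which is why the thread above discusses having the listener push work to a worker when the chain fills.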
On Fri, Jul 21, 2017 at 3:07 PM, Luca Toscano wrote:
>>
>> To prevent starvation of deferred lingering closes, the listener may
>> create a worker at the end of its loop, when/if the chain is (fully)
>> filled.
>
> IIUC the trick is to run "(have_idle_worker && push2worker(NULL) ==
> APR_SUCCESS)" tha
Hi Yann,
2017-07-21 1:05 GMT+02:00 Yann Ylavic :
> On Fri, Jul 14, 2017 at 9:52 PM, Yann Ylavic wrote:
> >
> > So overall, this patch may introduce the need for more workers than
> > before, what was (wrongly) done by the listener thread has to be done
> > somewhere anyway...
>
> That patch didn
2017-07-21 1:16 GMT+02:00 Yann Ylavic :
> On Fri, Jul 21, 2017 at 1:05 AM, Postmaster
> wrote:
> > This message was created automatically by mail delivery software. Your
> email message was not delivered as is to the intended recipients because
> malware was detected in one or more attachments in
On Fri, Jul 21, 2017 at 1:08 AM, Yann Ylavic wrote:
> On Thu, Jul 20, 2017 at 12:48 PM, Stefan Priebe - Profihost AG
> wrote:
>> V3 didn't help.
>
> I just posted a new patch in this thread, with a new approach which I
> think is better anyway.
>
> Would you mind testing it in your environment?
On Fri, Jul 21, 2017 at 1:05 AM, Postmaster
wrote:
> This message was created automatically by mail delivery software. Your email
> message was not delivered as is to the intended recipients because malware
> was detected in one or more attachments included with it. All attachments
> were delet
On Thu, Jul 20, 2017 at 12:48 PM, Stefan Priebe - Profihost AG
wrote:
> V3 didn't help.
I just posted a new patch in this thread, with a new approach which I
think is better anyway.
Would you mind testing it in your environment?
Regards,
Yann.
On Fri, Jul 14, 2017 at 9:52 PM, Yann Ylavic wrote:
>
> So overall, this patch may introduce the need for more workers than
> before, what was (wrongly) done by the listener thread has to be done
> somewhere anyway...
That patch didn't work (as reported by Stefan Priebe) and I now don't
feel the n
Yes:
Slot  PID   Stopping  Connections       Threads     Async connections
                      total  accepting  busy  idle  writing  keep-alive  closing
0     3614  no        1      no         4     196   0        0           4294966701
1     3615  no        0      no         5     195   0        0           4294966697
2     1022
21022
On Thu, Jul 20, 2017 at 2:58 PM, Stefan Priebe - Profihost AG
wrote:
> Yes it looks the same but I can't tell if it is.
>
> Here's a backtrace from V3:
> https://apaste.info/Aw0r
Thanks Stefan, how about mod_status, still some strange entries?
Regards,
Yann.
Yes it looks the same but I can't tell if it is.
Here's a backtrace from V3:
https://apaste.info/Aw0r
Greets,
Stefan
Excuse my typo sent from my mobile phone.
> Am 20.07.2017 um 13:01 schrieb Yann Ylavic :
>
> On Thu, Jul 20, 2017 at 12:48 PM, Stefan Priebe - Profihost AG
> wrote:
>> V3 didn'
On Thu, Jul 20, 2017 at 12:48 PM, Stefan Priebe - Profihost AG
wrote:
> V3 didn't help. Will post a new gdb backtrace soon; it takes some time as
> I'm on holiday.
Thanks Stefan, still some/the same issue in status?
Regards,
Yann.
V3 didn't help. Will post a new gdb backtrace soon; it takes some time as I'm on
holiday.
Stefan
Excuse my typo sent from my mobile phone.
> Am 20.07.2017 um 01:26 schrieb Yann Ylavic :
>
> On Wed, Jul 19, 2017 at 11:14 PM, Stefan Priebe - Profihost AG
> wrote:
>> Am 19.07.2017 um 22:46 schrieb Y
Am 20.07.2017 um 01:26 schrieb Yann Ylavic:
> On Wed, Jul 19, 2017 at 11:14 PM, Stefan Priebe - Profihost AG
> wrote:
>> Am 19.07.2017 um 22:46 schrieb Yann Ylavic:
>>>
>>> Attached is a v2 if you feel confident enough, still ;)
>>
>> Thanks, yes i will.
>
> If you managed to install v2 already
On Wed, Jul 19, 2017 at 11:14 PM, Stefan Priebe - Profihost AG
wrote:
> Am 19.07.2017 um 22:46 schrieb Yann Ylavic:
>>
>> Attached is a v2 if you feel confident enough, still ;)
>
> Thanks, yes i will.
If you managed to install v2 already you may want to ignore this new
v3, which only addresses a
Am 19.07.2017 um 22:46 schrieb Yann Ylavic:
> Hi Stefan,
>
> thanks for testing again!
>
> On Wed, Jul 19, 2017 at 7:42 PM, Stefan Priebe - Profihost AG
> wrote:
>>
>> What looks strange
>> from a first view is that async connections closing has very high and
>> strange values:
>> 4294967211
>
Hi Stefan,
thanks for testing again!
On Wed, Jul 19, 2017 at 7:42 PM, Stefan Priebe - Profihost AG
wrote:
>
> What looks strange
> from a first view is that async connections closing has very high and
> strange values:
> 4294967211
Indeed, I messed up with mpm_event's lingering_count in the fir
On Wed, Jul 19, 2017 at 2:25 PM, Stefan Priebe - Profihost AG
wrote:
> Hello,
>
> here we go:
>
> This one is from a server where the first httpd process got stuck:
>
>Slot PID Stopping ConnectionsThreads Async connections
>total accepting busy idle writin
Hello,
here we go:
This one is from a server where the first httpd process got stuck:
Slot  PID    Stopping  Connections       Threads     Async connections
                       total  accepting  busy  idle  writing  keep-alive  closing
0     31675  no        0      no         0     200   0        0
42
Hello Luca,
i need to wait until a machine crashes again. What looks strange
at first glance is that the async connections closing count has very high and
strange values:
4294967211
Even a not yet crashed system has those:
Slot PID Stopping Connections Threads Async connections
Hello Stefan,
2017-07-19 17:05 GMT+02:00 Stefan Priebe - Profihost AG <
s.pri...@profihost.ag>:
>
> Am 19.07.2017 um 16:59 schrieb Stefan Priebe - Profihost AG:
> > Hello Yann,
> >
> > i'm observing some deadlocks again.
> >
> > I'm using
> > httpd 2.4.27
> > + mod_h2
> > + httpd-2.4.x-mpm_event-
Hello,
fullstatus says:
Slot  PID    Stopping  Connections       Threads     Async connections
                       total  accepting  busy  idle  writing  keep-alive  closing
0     25042  no        0      no         2     198   0        0           4294966698
1     4347   no        0      no         0     200   0
Am 19.07.2017 um 16:59 schrieb Stefan Priebe - Profihost AG:
> Hello Yann,
>
> i'm observing some deadlocks again.
>
> I'm using
> httpd 2.4.27
> + mod_h2
> + httpd-2.4.x-mpm_event-wakeup-v7.1.patch
> + your ssl linger fix patch from this thread
>
> What kind of information do you need? If you
Hello Yann,
i'm observing some deadlocks again.
I'm using
httpd 2.4.27
+ mod_h2
+ httpd-2.4.x-mpm_event-wakeup-v7.1.patch
+ your ssl linger fix patch from this thread
What kind of information do you need? If you need a full stack backtrace
- from which pid? Or from all httpd pids?
Thanks!
Gre
2017-07-17 9:33 GMT+02:00 Stefan Eissing :
>
> > Am 14.07.2017 um 21:52 schrieb Yann Ylavic :
> >
> > On Fri, Jun 30, 2017 at 1:33 PM, Yann Ylavic
> wrote:
> >> On Fri, Jun 30, 2017 at 1:20 PM, Ruediger Pluem
> wrote:
> >>>
> >>> On 06/30/2017 12:18 PM, Yann Ylavic wrote:
>
> IMHO mod_
Threw it into my test suite and it works nicely.
> Am 17.07.2017 um 14:02 schrieb Yann Ylavic :
>
> On Mon, Jul 17, 2017 at 9:33 AM, Stefan Eissing
> wrote:
>>
>> I will test the patch, most likely today. A lot of +1s for the initiative!
>
> Thanks Stefan, as I said the proposed patch currently
On Mon, Jul 17, 2017 at 9:33 AM, Stefan Eissing
wrote:
>
> I will test the patch, most likely today. A lot of +1s for the initiative!
Thanks Stefan, as I said the proposed patch currently reuses the
existing CONN_STATE_LINGER state to shutdown connections, but if it
needs to be set from outside m
> Am 14.07.2017 um 21:52 schrieb Yann Ylavic :
>
> On Fri, Jun 30, 2017 at 1:33 PM, Yann Ylavic wrote:
>> On Fri, Jun 30, 2017 at 1:20 PM, Ruediger Pluem wrote:
>>>
>>> On 06/30/2017 12:18 PM, Yann Ylavic wrote:
IMHO mod_ssl shouldn't (BIO_)flush unconditionally in
modssl_smart_
On Fri, Jun 30, 2017 at 1:33 PM, Yann Ylavic wrote:
> On Fri, Jun 30, 2017 at 1:20 PM, Ruediger Pluem wrote:
>>
>> On 06/30/2017 12:18 PM, Yann Ylavic wrote:
>>>
>>> IMHO mod_ssl shouldn't (BIO_)flush unconditionally in
>>> modssl_smart_shutdown(), only in the "abortive" mode of
>>> ssl_filter_io_
Hi Yann and Ruediger,
2c from a mpm-event newbie inline:
2017-06-30 13:33 GMT+02:00 Yann Ylavic :
> On Fri, Jun 30, 2017 at 1:20 PM, Ruediger Pluem wrote:
> >
> > On 06/30/2017 12:18 PM, Yann Ylavic wrote:
> >>
> >> IMHO mod_ssl shouldn't (BIO_)flush unconditionally in
> >> modssl_smart_shutdown
> Am 30.06.2017 um 13:33 schrieb Yann Ylavic :
>
> On Fri, Jun 30, 2017 at 1:20 PM, Ruediger Pluem wrote:
>>
>> On 06/30/2017 12:18 PM, Yann Ylavic wrote:
>>>
>>> IMHO mod_ssl shouldn't (BIO_)flush unconditionally in
>>> modssl_smart_shutdown(), only in the "abortive" mode of
>>> ssl_filter_io_
On Fri, Jun 30, 2017 at 1:20 PM, Ruediger Pluem wrote:
>
> On 06/30/2017 12:18 PM, Yann Ylavic wrote:
>>
>> IMHO mod_ssl shouldn't (BIO_)flush unconditionally in
>> modssl_smart_shutdown(), only in the "abortive" mode of
>> ssl_filter_io_shutdown().
>
> I think the issue starts before that.
> ap_pr
On Fri, Jun 30, 2017 at 12:52 PM, Luca Toscano wrote:
>
> 2017-06-30 12:18 GMT+02:00 Yann Ylavic :
>> >
>> > http://svn.apache.org/viewvc?view=revision&revision=1706669
>> > http://svn.apache.org/viewvc?view=revision&revision=1734656
>> >
>> > IIUC these ones are meant to provide a more async beha
On 06/30/2017 12:18 PM, Yann Ylavic wrote:
> Hi Luca,
>
> [better/easier to talk about details on dev@]
>
> On Fri, Jun 30, 2017 at 11:05 AM, wrote:
>> https://bz.apache.org/bugzilla/show_bug.cgi?id=60956
>>
>> --- Comment #11 from Luca Toscano ---
>> Other two interesting trunk improvements
Hi Yann!
2017-06-30 12:18 GMT+02:00 Yann Ylavic :
> Hi Luca,
>
> [better/easier to talk about details on dev@]
>
> On Fri, Jun 30, 2017 at 11:05 AM, wrote:
> > https://bz.apache.org/bugzilla/show_bug.cgi?id=60956
> >
> > --- Comment #11 from Luca Toscano ---
> > Other two interesting trunk imp
Hi Luca,
[better/easier to talk about details on dev@]
On Fri, Jun 30, 2017 at 11:05 AM, wrote:
> https://bz.apache.org/bugzilla/show_bug.cgi?id=60956
>
> --- Comment #11 from Luca Toscano ---
> Other two interesting trunk improvements that have not been backported yet:
>
> http://svn.apache.o
Sure, Bill. Love to have your feedback on this and make it work for mod_ftp,
too.
> Am 30.01.2016 um 06:04 schrieb William A Rowe Jr :
>
> If you can give me a few days (not httpd'ing again until
> late Sun eve) - this is very close to the issues we have
> in mod_ftp with the data connection/re
If you can give me a few days (not httpd'ing again until
late Sun eve) - this is very close to the issues we have
in mod_ftp with the data connection/request aside the
control connection. The right patch will improve both
sets of dirty hacks :)
Thanks for the proposal!
Bill
On Fri, Jan 29, 2016
Ditto, didn't see anything controversial.
On Fri, Jan 29, 2016 at 9:49 AM, Jim Jagielski wrote:
> Looks good to me... If it results in problems or issues,
> we'll fix 'em as they come along ;)
>
>> On Jan 29, 2016, at 8:01 AM, Stefan Eissing
>> wrote:
>>
>> I would like to propose some additions
Looks good to me... If it results in problems or issues,
we'll fix 'em as they come along ;)
> On Jan 29, 2016, at 8:01 AM, Stefan Eissing
> wrote:
>
> I would like to propose some additions to event that help me get rid of two
> ugly hacks in mod_http2:
>
> 1. Initialization of slave connecti
I would like to propose some additions to event that help me get rid of two
ugly hacks in mod_http2:
1. Initialization of slave connections
event registers on pre_connection hook and checks if c is a slave
(c->master) and if the connection state is either not there or the same as
master (poi
Still missing 1 more vote... this has been running on our infra
with NO problems.
On Sep 26, 2013, at 8:25 AM, Jim Jagielski wrote:
>
> On Sep 25, 2013, at 8:07 PM, William A. Rowe Jr. wrote:
>
>> Before we incorporate it... can we have some sense of the impact of the
>> optimization? So fa
Now that skiplist is being added to APR 1.5, I will start
the process of moving trunk to use it and will propose
a backport for 2.4...
+1...
On Sep 28, 2013, at 12:12 PM, Graham Leggett wrote:
>
>> On 26 Sep 2013, at 15:44, Jim Jagielski wrote:
>>
>> Like I said, I think that skiplist fits better in APR; in
>> fact there are a few other things in httpd that would be
>> "better" in APR, but APR and httpd are 2 sep projects an
> On 26 Sep 2013, at 15:44, Jim Jagielski wrote:
>
> Like I said, I think that skiplist fits better in APR; in
> fact there are a few other things in httpd that would be
> "better" in APR, but APR and httpd are 2 sep projects and so
> we can't "force" things.
>
> In fact, I'm adding dev@apr to
On Sep 26, 2013, at 10:20 AM, William A. Rowe Jr. wrote:
> On Thu, 26 Sep 2013 08:25:46 -0400
> Jim Jagielski wrote:
>
>>
>> On Sep 25, 2013, at 8:07 PM, William A. Rowe Jr.
>> wrote:
>>
>>> Before we incorporate it... can we have some sense of the impact of
>>> the optimization? So far we
On Thu, 26 Sep 2013 08:25:46 -0400
Jim Jagielski wrote:
>
> On Sep 25, 2013, at 8:07 PM, William A. Rowe Jr.
> wrote:
>
> > Before we incorporate it... can we have some sense of the impact of
> > the optimization? So far we don't have much data to go on.
>
> From the orig post: "My benchmark
On Sep 25, 2013, at 8:07 PM, William A. Rowe Jr. wrote:
> Before we incorporate it... can we have some sense of the impact of the
> optimization? So far we don't have much data to go on.
From the orig post: "My benchmarks show decreased latency and a performance
boost of ~5% (on avg)"
>
>
Before we incorporate it... can we have some sense of the impact of the
optimization? So far we don't have much data to go on.
There is talk of releasing some apr 1.5 enhancements. I'd personally favor
adding skip list to apr rather than -util or httpd, since it could be
useful core functionalit
Bueller? Bueller?
On Sep 19, 2013, at 12:17 PM, Jim Jagielski wrote:
> With the successful running, does anyone wish to add
> some votes to STATUS to allow the backport to be
> approved? :)
>
> On Sep 16, 2013, at 9:27 AM, Jim Jagielski wrote:
>
>> visible if pushed, yes.
>>
>> On Sep 15, 20
With the successful running, does anyone wish to add
some votes to STATUS to allow the backport to be
approved? :)
On Sep 16, 2013, at 9:27 AM, Jim Jagielski wrote:
> visible if pushed, yes.
>
> On Sep 15, 2013, at 2:23 PM, Marion & Christophe JAILLET
> wrote:
>
>>
>> Le 15/09/2013 16:30, R
visible if pushed, yes.
On Sep 15, 2013, at 2:23 PM, Marion & Christophe JAILLET
wrote:
>
> Le 15/09/2013 16:30, Rainer Jung a écrit :
>> I'm pretty sure from those pictures you would not be able to find the point
>> in time where I switched 2.4.6 and 2.4.7-dev between the servers.
>
> In ot
s are far from
being saturated. The load mix isn't similar enough and constant enough
to derive anything from the response times we could extract from the
access logs. I doubt that we can measure the benefits in this scenario,
but we can check, whether in a pretty complex situation the skiplist
Le 15/09/2013 16:30, Rainer Jung a écrit :
I'm pretty sure from those pictures you would not be able to find the
point in time where I switched 2.4.6 and 2.4.7-dev between the servers.
In other words, does it mean that no special performance improvement is
to be expected ?
I remember to hav
On 15.09.2013 05:31, Rainer Jung wrote:
> On 10.09.2013 16:13, Jim Jagielski wrote:
>> For completeness, a full, combined patch is:
>>
>> http://people.apache.org/~jim/patches/httpd-2.4-event-test.patch
>>
>> It requires a patch that knows about creating new files
>> when encountering /dev/null
On 10.09.2013 16:13, Jim Jagielski wrote:
> For completeness, a full, combined patch is:
>
> http://people.apache.org/~jim/patches/httpd-2.4-event-test.patch
>
> It requires a patch that knows about creating new files
> when encountering /dev/null...
The code (plus r1410004) runs on eos (US
For completeness, a full, combined patch is:
http://people.apache.org/~jim/patches/httpd-2.4-event-test.patch
It requires a patch that knows about creating new files
when encountering /dev/null...
On Sep 10, 2013, at 9:52 AM, Jim Jagielski wrote:
> For the testing, we need:
>
>http://
For the testing, we need:
http://people.apache.org/~jim/patches/httpd-2.4-skiplist.patch
http://people.apache.org/~jim/patches/httpd-2.4-podx.patch
http://people.apache.org/~jim/patches/httpd-2.4-event.patch
This includes all the performance/sync updates
Can we get infra to test that
Am Donnerstag, 5. September 2013, 23:46:21 schrieb Rainer Jung:
> In addition: what about eventopt?
AFAICT, the problem that the listener thread busy-loops if there are
not enough worker threads is still unfixed [1]. Or did I miss the fix?
[1]
http://mail-archives.apache.org/mod_mbox/httpd-dev/
On Sep 5, 2013, at 5:46 PM, Rainer Jung wrote:
> On 05.09.2013 16:35, Jim Jagielski wrote:
>> BTW, the main diff between event in trunk and 2.4 is
>> the use of skiplist. My benchmarks show decreased latency
>> and a performance boost of ~5% (on avg). Can anyone confirm?
>> It would be nice to p
On 05.09.2013 16:35, Jim Jagielski wrote:
> BTW, the main diff between event in trunk and 2.4 is
> the use of skiplist. My benchmarks show decreased latency
> and a performance boost of ~5% (on avg). Can anyone confirm?
> It would be nice to possibly get that in 2.4.7 as well.
I can offer to negot
BTW, the main diff between event in trunk and 2.4 is
the use of skiplist. My benchmarks show decreased latency
and a performance boost of ~5% (on avg). Can anyone confirm?
It would be nice to possibly get that in 2.4.7 as well.
On Sep 5, 2013, at 9:08 AM, Jim Jagielski wrote:
> It would be nice
On Sat, 12 Jan 2013, Stefan Fritsch wrote:
On Thursday 10 January 2013, Niklas Edmundsson wrote:
To reiterate back to the event mpm / mod_status integration, is
there any work in progress on implementing a more verbose status
display for the event mpm? I'm thinking of something that can
On Thursday 10 January 2013, Niklas Edmundsson wrote:
> To reiterate back to the event mpm / mod_status integration, is
> there any work in progress on implementing a more verbose status
> display for the event mpm? I'm thinking of something that can show
> all requests currentl
To reiterate back to the event mpm / mod_status integration, is there
any work in progress on implementing a more verbose status display for
the event mpm? I'm thinking of something that can show all requests
currently being processed like we have today for prefork/worker. The
cu
On Monday 07 January 2013, Daniel Lescohier wrote:
> I see that event mpm uses a worker queue that uses a condition
> variable, and it does a condition variable signal when something
> is pushed onto it. If all of the cpu cores are doing useful work,
> the signal is not going to for
I see that event mpm uses a worker queue that uses a condition variable,
and it does a condition variable signal when something is pushed onto it.
If all of the cpu cores are doing useful work, the signal is not going to
force a context switch out of a thread doing useful work, the thread will
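The push-with-signal pattern described here can be sketched in plain pthreads. This is an illustrative stand-in, not the event MPM's actual worker-queue implementation (which is built on APR's mutex and condition-variable wrappers); the point is that pushing signals at most one sleeping worker, and a worker that is already running is never forcibly switched out:

```c
#include <pthread.h>
#include <stddef.h>

#define QUEUE_CAP 64

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  not_empty;
    void  *items[QUEUE_CAP];
    size_t count;
} work_queue;

int queue_push(work_queue *q, void *item)
{
    pthread_mutex_lock(&q->lock);
    if (q->count == QUEUE_CAP) {
        pthread_mutex_unlock(&q->lock);
        return -1;                       /* full: caller must back off */
    }
    q->items[q->count++] = item;
    /* Wakes at most one sleeping worker; if every core is already busy
     * doing useful work, no context switch is forced. */
    pthread_cond_signal(&q->not_empty);
    pthread_mutex_unlock(&q->lock);
    return 0;
}

void *queue_pop(work_queue *q)
{
    pthread_mutex_lock(&q->lock);
    while (q->count == 0)                /* idle worker sleeps here */
        pthread_cond_wait(&q->not_empty, &q->lock);
    void *item = q->items[--q->count];
    pthread_mutex_unlock(&q->lock);
    return item;
}
```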
+1... a lot of little improvements can result in a BIG
improvement.
On Jan 5, 2013, at 8:34 AM, Graham Leggett wrote:
> On 05 Jan 2013, at 2:05 AM, Stefan Fritsch wrote:
>
>> For 1., a better thread selection would definitely be a win. For 2.
>> and 3., it is less obvious.
>
> +1.
>
> Just
On 05 Jan 2013, at 2:05 AM, Stefan Fritsch wrote:
> For 1., a better thread selection would definitely be a win. For 2.
> and 3., it is less obvious.
+1.
Just because in some cases a cache might not help, doesn't mean we shouldn't
take advantage of the cache when it will help.
Regards,
Graha
On Friday 04 January 2013, Daniel Lescohier wrote:
> I just saw this from last month from Stefan Fritsch and Niklas
> Edmundsson:
>
> The fact that the client ip shows up on all threads points to some
>
> >> potential optimization: Recently active threads should be
> >> preferred, because their m
I just saw this from last month from Stefan Fritsch and Niklas Edmundsson:
The fact that the client ip shows up on all threads points to some
>> potential optimization: Recently active threads should be preferred,
>> because their memory is more likely to be in the cpu caches. Right
>> now, the th
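The cache-warmth optimization suggested above amounts to picking idle workers LIFO rather than FIFO: dispatch to the thread that went idle most recently, since its stack and data are likeliest still in the CPU caches. A hypothetical sketch (all names invented for illustration; this is not httpd's actual thread-selection code):

```c
#include <stddef.h>

#define MAX_IDLE 256

/* Hypothetical idle-worker stack: LIFO dispatch favors the thread
 * whose memory is most likely still warm in the CPU caches. */
typedef struct {
    int    slots[MAX_IDLE];  /* ids of idle worker threads */
    size_t top;
} idle_stack;

/* A worker pushes its own id when it finishes a request and goes idle. */
void idle_push(idle_stack *s, int worker_id)
{
    if (s->top < MAX_IDLE)
        s->slots[s->top++] = worker_id;
}

/* The listener pops the most recently idled worker; -1 if none idle. */
int idle_pop(idle_stack *s)
{
    return s->top ? s->slots[--s->top] : -1;
}
```

A FIFO ring buffer in the same place would instead rotate through all workers, touching cold caches on every dispatch, which is the behavior the quoted message argues against.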
On Mon, 17 Dec 2012, Stefan Fritsch wrote:
On Sunday 16 December 2012, Niklas Edmundsson wrote:
I'm playing around with the event mpm in httpd 2.4.3, and I've come
to the conclusion that the stats reported by mod_status is rather
strange. I'm not sure if it's a bug or si
On Sunday 16 December 2012, Niklas Edmundsson wrote:
> I'm playing around with the event mpm in httpd 2.4.3, and I've come
> to the conclusion that the stats reported by mod_status is rather
> strange. I'm not sure if it's a bug or simply not implemented...
>
>
Hi all.
I'm playing around with the event mpm in httpd 2.4.3, and I've come to
the conclusion that the stats reported by mod_status is rather
strange. I'm not sure if it's a bug or simply not implemented...
My test case is just a simple file transfer of a DVD imag
On Mon, Nov 14, 2011 at 11:12 AM, Paul Querna wrote:
>
> The problem became that in trunk, we had to hold the lock for the
> timeout queues while we were doing the pollset operation. The
> pollset already had its own internal mutex too, for its own rings. So
> we were double locking a piece of
On Mon, Nov 14, 2011 at 7:47 AM, Greg Ames wrote:
>
>
> On Fri, Nov 11, 2011 at 11:07 PM, Paul Querna wrote:
>>
>> 4) Have the single Event thread de-queue operations from all the worker
>> threads.
>
>
> Since the operations include Add and Remove, are you saying we would have
> to wait
On Fri, Nov 11, 2011 at 11:07 PM, Paul Querna wrote:
>
> 4) Have the single Event thread de-queue operations from all the worker
> threads.
>
Since the operations include Add and Remove, are you saying we would have
to wait for a context switch to the listener thread before
apr_pollset_a
na/httpd/compare/trunk...event-performance>
>
> I did some basic benchmarking to validate it, though if anyone has a
> real test lab setup that can throw huge traffic numbers at it that
> would be very helpful.
>
> For the "It works" default index page, I got the followi
that
would be very helpful.
For the "It works" default index page, I got the following:
event mpm trunk: 15210.07 req/second
event mpm performance branch: 15775.42 req/second (~4%)
nginx 0.7.65-1ubuntu2: 12070.35
Event MPM was using a 100% default install configuration, nginx was
usi
On Sat, 12 Nov 2011, Stefan Fritsch wrote:
locking for the timeout queues. But what we really should do in 2.4.0 is
remove all the MPM-implementation specific details from conn_state_t. The
only field that is actually used outside of the MPMs is 'state'. If we make
the rest non-public and someh
in thread would need to do a sorted insert into its
main timeout queue, which is expensive. Or how would you find out when the
next timeout is due?
Without modification to the event mpm, it would potentially cause some
issues as the event thread isn't always waking up that often, but I