Re: svn commit: r1742794 - /httpd/httpd/branches/2.4.x/STATUS

2016-05-10 Thread William A Rowe Jr
On Tue, May 10, 2016 at 7:26 PM, Yann Ylavic  wrote:

> On Tue, May 10, 2016 at 5:23 PM, William A Rowe Jr 
> wrote:
> > On Sun, May 8, 2016 at 7:29 AM, Yann Ylavic 
> wrote:
> >>
> >> ISTM that the request line has already been clobbered (blanked) by the
> >> previous call to ap_update_child_status_from_conn(SERVER_BUSY_READ) in
> >> ap_process_http_[a]sync_connection(), where we already lost the
> >> previous request's line.
> >
> > That's FINE.  In the previous call we are already on to another thread,
> > this thread did not perform work on the previous request (or if it had,
> > it had surrendered the thread and accidentally ended back up there.)
>
> Right, that's why/when the new thread clobbers the request-line.
> We are talking about MPM event only here, with sync MPMs we do
> SERVER_BUSY_READ for the first request on the connection only, and my
> goal is to have consistent reporting for all MPMs.
>

But that's not what the third suggested backport does; it toggles the request
state to "W"rite when no request was actually received (either one was not
yet ready, or the client had spuriously disconnected).


> > The scoreboard reports what work this *thread* has performed.  So
> > far, if it is a wakeup to read a new request, on the event MPM in 2.4
> > this is not request-based work.
>
> The scoreboard reports that the thread is now working on a new
> connection (possibly kept alive), but clears half of the data
> associated with that connection (i.e. the last request/vhost)
> altogether.
>

Of course it should, because the old request is still parked on the old
worker thread until it is reclaimed, and the newly activated thread for
this connection never processed work for the old request...


> >> That's why I proposed to save the previous request line and host in
> >> the conn_rec, and use them in the scoreboard until a new (real)
> >> request is available (it didn't work as expected because the
> >> scoreboard was clobbered in multiple places before Bill's changes, but
> >> it should work better/easily now).
> >
> >
> > It isn't workable because every time we hack around this issue, we
> > mask the actual errors in the calls to the scoreboard API.  In 2.4,
> > the API design is DSS.
>
> DSS?


Dirt simple stupid.


> > Complicating it is a really bad idea, and misleads, if not actually
> > breaks, existing consumers of the scoreboard.
>
> My proposed patch (not that complicated IMHO) shouldn't break anything
> (if working as expected), it is meant for consistency between event
> and other MPMs (now that Rainer fixed the blanks there due to
> keepalive/close), see below.
>

I still need to review this further, but you toggle a writing state when
there is no inbound request, which is clearly wrong.


> >> That would be: when no r is available but c is, use the saved
> >> request-line/host from c; when r is available, use the ones from r and
> >> update them in c.
> >>
> >> That way there would be no blank at all (IOW, the scoreboard entry
> >> would be left with the latest request handled on the connection, not a
> >> blank which inevitably ends every connection after keepalive timeout
> >> or close, and shouldn't be taken into account as an empty request
> >> IMHO).
> >
> > Which is all wrong IMHO. If the request completed, that thread still
> > reports the previous request in an idle or keepalive state until that
> thread
> > is woken up to do new work.
>
> Right, for MPM event we can indeed consider that the reporting of the
> previous request for a thread is dead once this thread is given
> another connection, but when yet another thread will take care of the
> former connection, the previous request could show up in its reporting
> (that's how my patch works, faithfully reporting how each MPM works,
> without clobbering data unnecessarily).
>
> The case where this happens is for keepalive/lingering-close handling.
> Suppose thread t1 handles request r1 on connection c1, and t1 is
> released after r1 (when c1 is kept alive).
> Same for t2 with r2 on c2.
>
> Now the following cases (not exhaustive):
> 1. r3 arrives on c1, t2 handles the event (read).
> 2. c2 is closed/reset remotely, t1 handles the event (read).
> 3. c1 is closed/reset remotely, t2 handles the event (read).
>
> Without my patch:
> 1. t2 reports r3 on c1, "losing" r2 on c2 (no more reported by the
> scoreboard), while t1 still reports r1 on c1 (although "superseded" by
> r3).
> 2. t1 reads/closes c2 and clobbers r1 (blank).
> 3. t2 reads/closes c1 and clobbers r3 (blank).
> => t1 and t2 scoreboard's request/vhost are blank.
>
> With my patch:
> 1. t2 reports r3 on c1, "losing" r2 on c2 (no more reported by the
> scoreboard), while t1 still reports r1 on c1 (although "superseded" by
> r3).
> 2. t1 reads/closes c2 and restores r2.
> 3. t2 reads/closes c1 and restores r3.
> => t1 and t2 scoreboard's request/vhost contain relevant (last) info.
>

Thanks for all of the additional details; they will be helpful throughout
the rest of this discussion.

Re: svn commit: r1742794 - /httpd/httpd/branches/2.4.x/STATUS

2016-05-10 Thread Yann Ylavic
On Tue, May 10, 2016 at 5:23 PM, William A Rowe Jr  wrote:
>
> On Sun, May 8, 2016 at 7:29 AM, Yann Ylavic  wrote:
>>
>> ISTM that the request line has already been clobbered (blanked) by the
>> previous call to ap_update_child_status_from_conn(SERVER_BUSY_READ) in
>> ap_process_http_[a]sync_connection(), where we already lost the
>> previous request's line.
>
> That's FINE.  In the previous call we are already on to another thread,
> this thread did not perform work on the previous request (or if it had,
> it had surrendered the thread and accidentally ended back up there.)

Right, that's why/when the new thread clobbers the request-line.
We are talking about MPM event only here, with sync MPMs we do
SERVER_BUSY_READ for the first request on the connection only, and my
goal is to have consistent reporting for all MPMs.

>
> The scoreboard reports what work this *thread* has performed.  So
> far, if it is a wakeup to read a new request, on the event MPM in 2.4
> this is not request-based work.

The scoreboard reports that the thread is now working on a new
connection (possibly kept alive), but clears half of the data
associated with that connection (i.e. the last request/vhost)
altogether.

>
>> That's why I proposed to save the previous request line and host in
>> the conn_rec, and use them in the scoreboard until a new (real)
>> request is available (it didn't work as expected because the
>> scoreboard was clobbered in multiple places before Bill's changes, but
>> it should work better/easily now).
>
>
> It isn't workable because every time we hack around this issue, we
> mask the actual errors in the calls to the scoreboard API.  In 2.4,
> the API design is DSS.

DSS?

> Complicating it is a really bad idea, and misleads, if not actually
> breaks, existing consumers of the scoreboard.

My proposed patch (not that complicated IMHO) shouldn't break anything
(if working as expected), it is meant for consistency between event
and other MPMs (now that Rainer fixed the blanks there due to
keepalive/close), see below.

>
>> That would be: when no r is available but c is, use the saved
>> request-line/host from c; when r is available, use the ones from r and
>> update them in c.
>>
>> That way there would be no blank at all (IOW, the scoreboard entry
>> would be left with the latest request handled on the connection, not a
>> blank which inevitably ends every connection after keepalive timeout
>> or close, and shouldn't be taken into account as an empty request
>> IMHO).
>
> Which is all wrong IMHO. If the request completed, that thread still
> reports the previous request in an idle or keepalive state until that thread
> is woken up to do new work.

Right, for MPM event we can indeed consider that the reporting of the
previous request for a thread is dead once this thread is given
another connection, but when yet another thread will take care of the
former connection, the previous request could show up in its reporting
(that's how my patch works, faithfully reporting how each MPM works,
without clobbering data unnecessarily).

The case where this happens is for keepalive/lingering-close handling.
Suppose thread t1 handles request r1 on connection c1, and t1 is
released after r1 (when c1 is kept alive).
Same for t2 with r2 on c2.

Now the following cases (not exhaustive):
1. r3 arrives on c1, t2 handles the event (read).
2. c2 is closed/reset remotely, t1 handles the event (read).
3. c1 is closed/reset remotely, t2 handles the event (read).

Without my patch:
1. t2 reports r3 on c1, "losing" r2 on c2 (no more reported by the
scoreboard), while t1 still reports r1 on c1 (although "superseded" by
r3).
2. t1 reads/closes c2 and clobbers r1 (blank).
3. t2 reads/closes c1 and clobbers r3 (blank).
=> t1 and t2 scoreboard's request/vhost are blank.

With my patch:
1. t2 reports r3 on c1, "losing" r2 on c2 (no more reported by the
scoreboard), while t1 still reports r1 on c1 (although "superseded" by
r3).
2. t1 reads/closes c2 and restores r2.
3. t2 reads/closes c1 and restores r3.
=> t1 and t2 scoreboard's request/vhost contain relevant (last) info.

>
>> How about the attached patch?
>
> This is the right direction for  - we will be waking up threads
> to resume work on an already in-flight request, correct?  That request
> needs to be displayed because that is the work this thread is performing.

This would indeed also work in full event mode (i.e. future/trunk),
but it's legitimate in 2.4.x too, IMHO, since requests already cross
threads at keepalive/lingering-close time (with MPM event).


Re: [Bug 57771] cleanup_script uses incorrect connection ID

2016-05-10 Thread Eric Covener
On Tue, May 10, 2016 at 2:06 PM, Jeff Trawick  wrote:
> On Tue, May 10, 2016 at 4:57 PM, Eric Covener  wrote:
>>
>> Can I entice anyone else into looking at this PR?  I think I am
>> missing the boat entirely.
>
>
> What's the deal?  Pear-flavored drink in return?


Commensurate with how dumb the solution makes my PR updates look.
Guarantee at least a Tim Horton's hash brown.


Re: [Bug 57771] cleanup_script uses incorrect connection ID

2016-05-10 Thread Jeff Trawick
On Tue, May 10, 2016 at 4:57 PM, Eric Covener  wrote:

> Can I entice anyone else into looking at this PR?  I think I am
> missing the boat entirely.
>

What's the deal?  Pear-flavored drink in return?


>
> My last unwritten thought here was that it might be best to grab the
> pid before returning.
>
> -- Forwarded message --
> From:  
> Date: Wed, Apr 27, 2016 at 8:41 AM
> Subject: [Bug 57771] cleanup_script uses incorrect connection ID
> To: b...@httpd.apache.org
>
>
> https://bz.apache.org/bugzilla/show_bug.cgi?id=57771
>
> --- Comment #8 from Nick Warne  ---
> (In reply to Eric Covener from comment #7)
> > > 4) Thread #1 puts request A to sleep while it waits for the CGI.
> >
> > This bit doesn't make sense to me, but maybe someone else can clarify.
>
> FYI Eric, I see the issue on my NRTG page, which uses a lot of #flastmod.
> This small patch totally fixes it up, and I have to apply it manually with
> each new Apache release.
>
> Nick
>
> --
> You are receiving this mail because:
> You are the assignee for the bug.
>
> -
> To unsubscribe, e-mail: bugs-unsubscr...@httpd.apache.org
> For additional commands, e-mail: bugs-h...@httpd.apache.org
>
>
>
> --
> Eric Covener
> cove...@gmail.com
>



-- 
Born in Roswell... married an alien...
http://emptyhammock.com/


Fwd: [Bug 57771] cleanup_script uses incorrect connection ID

2016-05-10 Thread Eric Covener
Can I entice anyone else into looking at this PR?  I think I am
missing the boat entirely.

My last unwritten thought here was that it might be best to grab the
pid before returning.

-- Forwarded message --
From:  
Date: Wed, Apr 27, 2016 at 8:41 AM
Subject: [Bug 57771] cleanup_script uses incorrect connection ID
To: b...@httpd.apache.org


https://bz.apache.org/bugzilla/show_bug.cgi?id=57771

--- Comment #8 from Nick Warne  ---
(In reply to Eric Covener from comment #7)
> > 4) Thread #1 puts request A to sleep while it waits for the CGI.
>
> This bit doesn't make sense to me, but maybe someone else can clarify.

FYI Eric, I see the issue on my NRTG page, which uses a lot of #flastmod.  This
small patch totally fixes it up, and I have to apply it manually with each new
Apache release.

Nick

--
You are receiving this mail because:
You are the assignee for the bug.

-
To unsubscribe, e-mail: bugs-unsubscr...@httpd.apache.org
For additional commands, e-mail: bugs-h...@httpd.apache.org



-- 
Eric Covener
cove...@gmail.com


End of the road of 2.2.x maintenance?

2016-05-10 Thread William A Rowe Jr
It's been a year, and seems to be a good time to revisit this topic
while those folks who are present at ApacheCon can discuss f2f
the merits of bringing the 2.2.x chapter to a close, and share their
thoughts back here on-list.

According to http://w3techs.com/technologies/history_details/ws-apache/2,
the crossover point where 2.4 deployments outnumber 2.2 appears to occur
about 9 months from now.

OpenSUSE 13.1 adopted 2.4 way back in Nov of '13.

Ubuntu - 14.04 LTS, and Debian 8 (Jessie) switched to 2.4 in April '14.

RHEL / CentOS 7 are well over a year old, adopted 2.4 in June '14.
Fedora 18 shipped 2.4 way back in Jan '13.

That is, every user of the broadly distributed Linux releases will have had
three full years to adopt 2.4 by June of 2017.  I expect the BSD world
looks similar (modulo any Apache License 2.0 stupidity we are all
too familiar with.)  If someone in the BSD, Solaris and other spheres
wants to chime in here with those milestones, that would be great.

I am prepared to RM a final bug-fix release of 2.2 in conjunction with
the next 2.4 release effort, to gather in any final requests for fixes
before we move to a 12-month, security-fixes-only window on that branch.
Once those 12 months expire, as we've done with 1.3 and 2.0, there's
the possibility that a few committers would collect some critical
patches as apply-to-2.2.xx final security fixes, but no further releases
would occur.

Are we ready to start the 12 month countdown as of the next/final bug
fix release of 2.2, and highlight this in both the 2.2 and 2.4 announce
broadcasts?

I'm hoping we conclude some fixes of 2.4 scoreboard regressions and
get to the point of releasing 2.4 with mod_proxy_http2 sometime within
the next month or so, and that we can reach a consensus about how
we will proceed on the 2.2 branch, before we get to that release.

Feedback desired,

Cheers,

Bill


Re: svn commit: r1742794 - /httpd/httpd/branches/2.4.x/STATUS

2016-05-10 Thread William A Rowe Jr
I believe this is the right patch (or close to correct) for the 2.4 branch,
and perhaps even trunk with additional refactoring.

In the case of resuming a thread, it should be up to the event/advanced MPM
to reflect the correct no-info | [connection-info [request-info]] state of
the conn at the time the thread is resumed with different work to perform.

On Sun, May 8, 2016 at 5:30 AM,  wrote:

> Author: rjung
> Date: Sun May  8 10:30:38 2016
> New Revision: 1742794
>
> URL: http://svn.apache.org/viewvc?rev=1742794&view=rev
> Log:
> Propose another (small) scoreboard/mod_status fix.
>
> Modified:
> httpd/httpd/branches/2.4.x/STATUS
>
> Modified: httpd/httpd/branches/2.4.x/STATUS
> URL:
> http://svn.apache.org/viewvc/httpd/httpd/branches/2.4.x/STATUS?rev=1742794&r1=1742793&r2=1742794&view=diff
>
> ==============================================================================
> --- httpd/httpd/branches/2.4.x/STATUS (original)
> +++ httpd/httpd/branches/2.4.x/STATUS Sun May  8 10:30:38 2016
> @@ -181,6 +181,20 @@ PATCHES PROPOSED TO BACKPORT FROM TRUNK:
>   2.4.x patch: trunk works
>   +1: wrowe, rpluem
>
> +   * Don't clobber scoreboard request info if read_request_line() fails with
> +     a timeout. In that case there's not yet any new useful request info
> +     available.
> +     Noticed via server-status showing request "NULL" after keep-alive
> +     connections timed out.
> +     No CHANGES entry needed, because there's already one for PR 59333.
> +     Sorry for the two patches. The second is better. Both change the same
> +     line, so technically only the second is needed, but merging both keeps
> +     mergeinfo more complete.
> +     trunk patch: http://svn.apache.org/r1742791
> +                  http://svn.apache.org/r1742792
> +     2.4.x patch: trunk works
> +     +1: rjung
> +
>  PATCHES/ISSUES THAT ARE BEING WORKED
>
>*) http: Don't remove the Content-Length of zero from a HEAD response if
>
>
>


Re: svn commit: r1742794 - /httpd/httpd/branches/2.4.x/STATUS

2016-05-10 Thread William A Rowe Jr
On Sun, May 8, 2016 at 7:29 AM, Yann Ylavic  wrote:

>
> ISTM that the request line has already been clobbered (blanked) by the
> previous call to ap_update_child_status_from_conn(SERVER_BUSY_READ) in
> ap_process_http_[a]sync_connection(), where we already lost the
> previous request's line.
>

That's FINE.  In the previous call we are already on to another thread,
this thread did not perform work on the previous request (or if it had,
it had surrendered the thread and accidentally ended back up there.)

The scoreboard reports what work this *thread* has performed.  So
far, if it is a wakeup to read a new request, on the event MPM in 2.4
this is not request-based work.


> That's why I proposed to save the previous request line and host in
> the conn_rec, and use them in the scoreboard until a new (real)
> request is available (it didn't work as expected because the
> scoreboard was clobbered in multiple places before Bill's changes, but
> it should work better/easily now).
>

It isn't workable because every time we hack around this issue, we
mask the actual errors in the calls to the scoreboard API.  In 2.4,
the API design is DSS.  Complicating it is a really bad idea, and
misleads, if not actually breaks, existing consumers of the scoreboard.


> That would be: when no r is available but c is, use the saved
> request-line/host from c; when r is available, use the ones from r and
> update them in c.

> That way there would be no blank at all (IOW, the scoreboard entry
> would be left with the latest request handled on the connection, not a
> blank which inevitably ends every connection after keepalive timeout
> or close, and shouldn't be taken into account as an empty request
> IMHO).
>

Which is all wrong IMHO. If the request completed, that thread still
reports the previous request in an idle or keepalive state until that thread
is woken up to do new work.


> Also, that would be consistent in both MPMs worker and event, whereas
> currently event is more likely to show this behaviour, thanks to
> queuing (no worker thread used for keepalive handling and hence no
> BUSY_READ during that time, no clobbering either).
> Thus for mpm_event, we'd need to use
> ap_update_child_status_from_conn(SERVER_CLOSING, c) so that the saved
> request-line/vhost from the connection get reused when/if a worker
> thread handles the lingering close.
>
> How about the attached patch?
>

This is the right direction for  - we will be waking up threads
to resume work on an already in-flight request, correct?  That request
needs to be displayed because that is the work this thread is performing.

On trunk and even with your patch as-is, IIUC you are filling in the score
request field for a thread that never performed work on a given request,
which is invalid data.

If we refine this idea somewhat to null out the request once the request
has been entirely fulfilled on a given thread, such that the thread still
reflects what last request work it performed, but the resumed connection
does not report a request it didn't satisfy, I think we will have a solid
fix for trunk.

For 2.4 - I'm strongly -1 until convinced we can't navigate without it,
given that 2.4 does not allow cross-threaded request_rec lifespans.


Re: Detecting client aborts and stream resets

2016-05-10 Thread Michael Kaufmann

Quoting William A Rowe Jr:


On Wed, May 4, 2016 at 3:44 PM, Michael Kaufmann 
wrote:


William is right, this is not a good idea. The ->aborted flag should serve
this purpose of telling anyone interested that this connection is no
longer delivering. I will make a github release soon where that is working
and you can test.



Thank you Stefan! It is now working for stream resets, but it's not yet
working if the client closes the whole TCP connection.



As expected... this is why I pointed out in my first reply that you don't
want a single-protocol solution to this puzzle.


Of course I'd also prefer a general solution.

I have created a patch for mod_http2: with the patch, it sets
c->aborted on the master connection and also on the "dummy"
connections (streams) when the client closes the TCP connection.


@Stefan: It would be great if you could check this code and add it to
mod_http2 :-)



Index: modules/http2/h2_mplx.c
===================================================================
--- modules/http2/h2_mplx.c (revision 1743146)
+++ modules/http2/h2_mplx.c (working copy)
@@ -488,6 +488,15 @@
     return 1;
 }
 
+static int task_abort_connection(void *ctx, void *val)
+{
+    h2_task *task = val;
+    if (task->c) {
+        task->c->aborted = 1;
+    }
+    return 1;
+}
+
 apr_status_t h2_mplx_release_and_join(h2_mplx *m, apr_thread_cond_t *wait)
 {
     apr_status_t status;
@@ -537,6 +546,8 @@
      * and workers *should* return in a timely fashion.
      */
     for (i = 0; m->workers_busy > 0; ++i) {
+        h2_ihash_iter(m->tasks, task_abort_connection, m);
+
         m->join_wait = wait;
         status = apr_thread_cond_timedwait(wait, m->lock,
                                            apr_time_from_sec(wait_secs));
@@ -591,6 +602,7 @@
     AP_DEBUG_ASSERT(m);
     if (!m->aborted && enter_mutex(m, &acquired) == APR_SUCCESS) {
         m->aborted = 1;
+        m->c->aborted = 1;
         h2_ngn_shed_abort(m->ngn_shed);
         leave_mutex(m, acquired);
     }





See my later reply about detecting connection tear-down; patches welcome.


Sounds good! If someone sends a patch, I will help to test it.