On Tue, May 10, 2016 at 7:26 PM, Yann Ylavic <ylavic....@gmail.com> wrote:

> On Tue, May 10, 2016 at 5:23 PM, William A Rowe Jr <wr...@rowe-clan.net>
> wrote:
> > On Sun, May 8, 2016 at 7:29 AM, Yann Ylavic <ylavic....@gmail.com>
> wrote:
> >>
> >> ISTM that the request line has already been clobbered (blanked) by the
> >> previous call to ap_update_child_status_from_conn(SERVER_BUSY_READ) in
> >> ap_process_http_[a]sync_connection(), where we already lost the
> >> previous request's line.
> >
> > That's FINE.  In the previous call we are already on to another thread,
> > this thread did not perform work on the previous request (or if it had,
> > it had surrendered the thread and accidentally ended back up there.)
>
> Right, that's why/when the new thread clobbers the request-line.
> We are talking about MPM event only here, with sync MPMs we do
> SERVER_BUSY_READ for the first request on the connection only, and my
> goal is to have consistent reporting for all MPMs.
>

But that's not what the third suggested backport does; it toggles the
request state to "W"rite when no request has actually been received
(either it was not yet ready, or the client had spuriously disconnected).


> > The scoreboard reports what work this *thread* has performed.  So
> > far, if it is a wakeup to read a new request, on the event MPM in 2.4
> > this is not request-based work.
>
> The scoreboard reports that the thread is now working on a new
> connection (possibly kept alive), but clears half of the data
> associated with that connection (i.e. the last request/vhost)
> altogether.
>

Of course it should, because the old request is still parked on the old
worker thread until it is reclaimed, and the newly activated thread for
this connection never performed work for the old request...


> >> That's why I proposed to save the previous request line and host in
> >> the conn_rec, and use them in the scoreboard until a new (real)
> >> request is available (it didn't work as expected because the
> >> scoreboard was clobbered in multiple places before Bill's changes, but
> >> it should work better/easily now).
> >
> >
> > It isn't workable because every time we hack around this issue, we
> > mask the actual errors in the calls to the scoreboard API.  In 2.4,
> > the API design is DSS.
>
> DSS?


Dirt simple stupid.


> > Complicating it is a really bad idea, and
> > misleads, if not actually breaks, existing consumers of the scoreboard.
>
> My proposed patch (not that complicated IMHO) shouldn't break anything
> (if working as expected), it is meant for consistency between event
> and other MPMs (now that Rainer fixed the blanks there due to
> keepalive/close), see below.
>

I still need to review further, but you toggle a writing state for a
connection with no inbound request, which is clearly wrong.
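To make the objection concrete, here is a minimal, self-contained C sketch of a guarded status update. The names (worker_score, SERVER_BUSY_*) mirror httpd's, but this is an illustrative model, not the real scoreboard API:

```c
#include <string.h>

enum { SERVER_BUSY_READ, SERVER_BUSY_WRITE, SERVER_BUSY_KEEPALIVE };

typedef struct {
    int status;
    char request[64];   /* last request line shown by mod_status */
} worker_score;

/* Guarded update: only advertise a "W"rite state when there actually is
 * a request line to report; otherwise stay in (or fall back to) "R"ead. */
static void update_worker_status(worker_score *ws, int status,
                                 const char *request_line)
{
    if (status == SERVER_BUSY_WRITE
        && (request_line == NULL || *request_line == '\0')) {
        /* No inbound request: reporting "W" would be the oxymoron
         * objected to above, so report "R" with a blank request. */
        ws->status = SERVER_BUSY_READ;
        ws->request[0] = '\0';
        return;
    }
    ws->status = status;
    if (request_line) {
        strncpy(ws->request, request_line, sizeof(ws->request) - 1);
        ws->request[sizeof(ws->request) - 1] = '\0';
    }
}
```

The point of the guard is that a "W" entry with an empty request column is self-contradictory; the sketch simply refuses to produce one.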


> >> That would be: when no r is available but c is, use the saved
> >> request-line/host from c; when r is available, use the ones from r and
> >> update them in c.
> >>
> >> That way there would be no blank at all (IOW, the scoreboard entry
> >> would be left with the latest request handled on the connection, not a
> >> blank which inevitably ends every connection after keepalive timeout
> >> or close, and shouldn't be taken into account as an empty request
> >> IMHO).
> >
> > Which is all wrong IMHO. If the request completed, that thread still
> > reports the previous request in an idle or keepalive state until that
> thread
> > is woken up to do new work.
>
> Right, for MPM event we can indeed consider that the reporting of the
> previous request for a thread is dead once this thread is given
> another connection, but when yet another thread will take care of the
> former connection, the previous request could show up in its reporting
> (that's how my patch works, faithfully reporting how each MPM works,
> without clobbering data unnecessarily).
>
> The case where this happens is for keepalive/lingering-close handling.
> Suppose thread t1 handles request r1 on connection c1, and t1 is
> released after r1 (when c1 is kept alive).
> Same for t2 with r2 on c2.
>
> Now the following cases (not exhaustive):
> 1. r3 arrives on c1, t2 handles the event (read).
> 2. c2 is closed/reset remotely, t1 handles the event (read).
> 3. c1 is closed/reset remotely, t2 handles the event (read).
>
> Without my patch:
> 1. t2 reports r3 on c1, "losing" r2 on c2 (no longer reported by the
> scoreboard), while t1 still reports r1 on c1 (although "superseded" by
> r3).
> 2. t1 reads/closes c2 and clobbers r1 (blank).
> 3. t2 reads/closes c1 and clobbers r3 (blank).
> => t1's and t2's scoreboard request/vhost are blank.
>
> With my patch:
> 1. t2 reports r3 on c1, "losing" r2 on c2 (no longer reported by the
> scoreboard), while t1 still reports r1 on c1 (although "superseded" by
> r3).
> 2. t1 reads/closes c2 and restores r2.
> 3. t2 reads/closes c1 and restores r3.
> => t1's and t2's scoreboard request/vhost contain relevant (last) info.
>
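The save/restore idea under discussion can be sketched in self-contained C. The field and function names here (last_request, publish_to_scoreboard) are illustrative, not httpd's actual identifiers:

```c
#include <string.h>

typedef struct {
    char last_request[64];  /* request line saved on the connection,
                             * surviving handoffs between threads */
} conn_model;

typedef struct {
    char request[64];       /* what mod_status would display */
} score_model;

/* When a real request is available, report it and remember it on the
 * connection; when only the connection is available (keepalive or
 * lingering close), restore the saved line instead of blanking. */
static void publish_to_scoreboard(score_model *ws, conn_model *c,
                                  const char *request_line)
{
    if (request_line && *request_line) {
        strncpy(c->last_request, request_line,
                sizeof(c->last_request) - 1);
        c->last_request[sizeof(c->last_request) - 1] = '\0';
    }
    /* Either the fresh request line or the one restored from c. */
    strncpy(ws->request, c->last_request, sizeof(ws->request) - 1);
    ws->request[sizeof(ws->request) - 1] = '\0';
}
```

In scenario 2/3 above, the thread handling the remote close would call this with no request line and still end up displaying the connection's last request rather than a blank.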

Thanks for all of the additional details. It would be helpful throughout
the rest of this discussion if we summarized the before-and-after examples
of the actual scoreboard lines we propose, so we make sure we aren't
talking past one another. I'll study these examples in the morning and
check back in after I've evaluated them, but I'm pretty sure that setting
a "W" state for a connection with no request is an oxymoron.


> >> How about the attached patch?
> >
> > This is the right direction for <trunk> - we will be waking up threads
> > to resume work on an already in-flight request, correct?  That request
> > needs to be displayed because that is the work this thread is performing.
>
> This would indeed also work in full event mode (i.e. future/trunk),
> but it's legitimate in 2.4.x too, IMHO, since requests already cross
> threads at keepalive/lingering-close time (with MPM event).
>

No, we don't care.

The only state in 2.4 that we might care about is write-completion if we
have fully satisfied the request, that the request was logged, but the data
wasn't fully transmitted (e.g. a sendfile completion state.)  In that case
alone we would want to report what that thread is continuing to write.

In any other case, keepalive/lingering close completion, the previous
request did *not happen on our thread*, so the previous work for that
request is still noted (until superseded) on the actual **work thread**,
and this thread is only alive to shutter/continue the connection, with
zero interest in the request which preceded it.
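The distinction drawn here reduces to a small decision rule, sketched below in self-contained C (the enum and struct names are illustrative, not httpd's):

```c
/* Which connection phase a thread is resuming work for. */
enum conn_phase {
    PHASE_WRITE_COMPLETION,   /* response logged but not fully sent */
    PHASE_KEEPALIVE,          /* waiting for a possible next request */
    PHASE_LINGERING_CLOSE     /* draining/closing the connection */
};

typedef struct {
    /* nonzero: keep reporting the previous request line on this thread */
    int show_previous_request;
} report_policy;

static report_policy policy_for(enum conn_phase phase)
{
    report_policy p;
    /* Only a partially transmitted response (e.g. a sendfile
     * completion) keeps the finished request visible on the thread
     * that resumes it; keepalive and lingering close do not. */
    p.show_previous_request = (phase == PHASE_WRITE_COMPLETION);
    return p;
}
```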

My 2c, which seems to beat all of the other regressions introduced
into the 2.4.x maintenance branch, but I'm happy to be convinced of
the value of introducing other behaviors.  I'm more likely to encourage
us to shove these at trunk for a new behavior for the next generation
of httpd.
