On Tue, May 10, 2016 at 5:23 PM, William A Rowe Jr <wr...@rowe-clan.net> wrote:
>
> On Sun, May 8, 2016 at 7:29 AM, Yann Ylavic <ylavic....@gmail.com> wrote:
>>
>> ISTM that the request line has already been clobbered (blanked) by the
>> previous call to ap_update_child_status_from_conn(SERVER_BUSY_READ) in
>> ap_process_http_[a]sync_connection(), where we already lost the
>> previous request's line.
>
> That's FINE. In the previous call we are already on to another thread,
> this thread did not perform work on the previous request (or if it had,
> it had surrendered the thread and accidentally ended back up there.)
Right, that's why/when the new thread clobbers the request-line.

We are talking about MPM event only here; with sync MPMs we do
SERVER_BUSY_READ for the first request on the connection only, and my
goal is to have consistent reporting for all MPMs.

>
> The scoreboard reports what work this *thread* has performed.  So
> far, if it is a wakeup to read a new request, on the event MPM in 2.4
> this is not request-based work.

The scoreboard reports that the thread is now working on a new
connection (possibly kept alive), but clears half of the data
associated with that connection (i.e. the last request/vhost)
altogether.

>
>> That's why I proposed to save the previous request line and host in
>> the conn_rec, and use them in the scoreboard until a new (real)
>> request is available (it didn't work as expected because the
>> scoreboard was clobbered in multiple places before Bill's changes, but
>> it should work better/easily now).
>
> It isn't workable because every time we hack around this issue, we
> mask the actual errors in the calls to the scoreboard API.  In 2.4,
> the API design is DSS.

DSS?

> Complicating it is a really bad idea, and
> misleads, if not actually breaking existing consumers of the scoreboard.

My proposed patch (not that complicated IMHO) shouldn't break anything
(if it works as expected); it is meant for consistency between event and
the other MPMs (now that Rainer fixed the blanks there due to
keepalive/close), see below.

>
>> That would be: when no r is available but c is, use the saved
>> request-line/host from c; when r is available, use the ones from r and
>> update them in c.
>>
>> That way there would be no blank at all (IOW, the scoreboard entry
>> would be left with the latest request handled on the connection, not a
>> blank which inevitably ends every connection after keepalive timeout
>> or close, and shouldn't be taken into account as an empty request
>> IMHO).
>
> Which is all wrong IMHO.
> If the request completed, that thread still
> reports the previous request in an idle or keepalive state until that
> thread is woken up to do new work.

Right, for MPM event we can indeed consider that the reporting of the
previous request for a thread is dead once this thread is given another
connection, but when yet another thread takes care of the former
connection, the previous request could show up in its reporting (that's
how my patch works, faithfully reporting how each MPM works, without
clobbering data unnecessarily).

The case where this happens is keepalive/lingering-close handling.
Suppose thread t1 handles request r1 on connection c1, and t1 is
released after r1 (when c1 is kept alive).  Same for t2 with r2 on c2.

Now the following cases (not exhaustive):
1. r3 arrives on c1, t2 handles the event (read).
2. c2 is closed/reset remotely, t1 handles the event (read).
3. c1 is closed/reset remotely, t2 handles the event (read).

Without my patch:
1. t2 reports r3 on c1, "losing" r2 on c2 (no longer reported by the
   scoreboard), while t1 still reports r1 on c1 (although "superseded"
   by r3).
2. t1 reads/closes c2 and clobbers r1 (blank).
3. t2 reads/closes c1 and clobbers r3 (blank).
=> t1's and t2's scoreboard request/vhost are blank.

With my patch:
1. t2 reports r3 on c1, "losing" r2 on c2 (no longer reported by the
   scoreboard), while t1 still reports r1 on c1 (although "superseded"
   by r3).
2. t1 reads/closes c2 and restores r2.
3. t2 reads/closes c1 and restores r3.
=> t1's and t2's scoreboard request/vhost contain relevant (last) info.

>
>> How about the attached patch?
>
> This is the right direction for <trunk> - we will be waking up threads
> to resume work on an already in-flight request, correct?  That request
> needs to be displayed because that is the work this thread is performing.

This would indeed also work in full event mode (i.e.
future/trunk), but it's legitimate in 2.4.x too, IMHO, since requests already cross threads at keepalive/lingering-close time (with MPM event).