On Tue, Sep 21, 2021 at 12:25 PM Ruediger Pluem <[email protected]> wrote:
>
> On 9/20/21 1:31 PM, Yann Ylavic wrote:
> >
> > For CONN_STATE_WRITE_COMPLETION + CONN_SENSE_WANT_READ we indeed reach
> > here and will POLLIN, then once readable we come back with
> > CONN_STATE_WRITE_COMPLETION + CONN_SENSE_DEFAULT so if pending != OK
> > we'll reach the below:
> >
> > else if (ap_run_input_pending(c) == OK) {
> > cs->pub.state = CONN_STATE_READ_REQUEST_LINE;
> > goto read_request;
> > }
> >
> > and thus ap_run_process_connection() directly.
>
> Thanks for the pointer. But I guess the similar
>
> else if (c->data_in_input_filters) {
> cs->pub.state = CONN_STATE_READ_REQUEST_LINE;
> goto read_request;
>
> in 2.4.x does not give this to us as I think that c->data_in_input_filters is
> not updated until we get here. Hence we probably
> need to fix this differently here.
Yes, possibly with a speculative read of one byte on c->input_filters.
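Something like this untested sketch (reusing the cs/c/read_request context
from event.c's process_socket(), purely to illustrate):

    /* Hypothetical: probe c->input_filters with a speculative, non-blocking
     * one-byte read instead of relying on c->data_in_input_filters being
     * up to date.
     */
    apr_bucket_brigade *tmp_bb = apr_brigade_create(c->pool, c->bucket_alloc);
    apr_status_t rv = ap_get_brigade(c->input_filters, tmp_bb,
                                     AP_MODE_SPECULATIVE, APR_NONBLOCK_READ, 1);
    if (rv == APR_SUCCESS && !APR_BRIGADE_EMPTY(tmp_bb)) {
        /* Some data is already buffered in the input filters, process it
         * now rather than going back to the pollset.
         */
        apr_brigade_destroy(tmp_bb);
        cs->pub.state = CONN_STATE_READ_REQUEST_LINE;
        goto read_request;
    }
    apr_brigade_destroy(tmp_bb);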
> >
> > What about this patch (attached)?
> > I tried to comment a bit the WRITE_COMPLETION_STATE too, because it's
> > not only about writing...
> >
> > + /* CONN_STATE_WRITE_COMPLETION is a misnomer, this is actually the
> > + * user driver polling state which can be POLLOUT but also POLLIN
> > + * depending on cs->pub.sense. But even for CONN_SENSE_WANT_READ
> > + * we want the outgoing pipe to be empty before POLLIN, so there
>
> Why do we need to empty the outgoing pipe first? It will automatically go
> into blocking once it thinks that it is needed.
If we don't flush what we have retained, we might end up in a situation
where the remote peer is waiting for that data before sending us
anything, and we'd deadlock.
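Roughly (a sketch of the intent only, not the actual patch):

    /* Even when the handler asked for CONN_SENSE_WANT_READ, flush retained
     * output before waiting for the client, otherwise the peer may be
     * blocked on that data while we are blocked on the peer.
     */
    int want_read = (cs->pub.sense == CONN_SENSE_WANT_READ);
    if (want_read && ap_run_output_pending(c) == OK) {
        want_read = 0; /* something to flush, poll for write first */
    }
    /* ... then arm POLLIN if want_read, POLLOUT otherwise ... */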
> If not I
> think that we can start processing input in parallel. If this produces more
> output that cannot be queued we get into blocking
> anyway. The only drawback of starting input processing again could be that if
> the request we start processing is running for a
> long time the output will not be flushed further until new output was
> produced. And I guess working on the same connection
> in two different threads would not be possible as the connection structure is
> not thread safe.
I don't think we should ever have the same connection handled by
multiple threads; that's a can of worms where we'd have to synchronize
at some point and determine which packet arrived first. Packets need
to be handled in sequence: while a connection is handled by a worker
thread it should never be in the MPM's pollset at the same time (i.e.
possibly scheduled by/to another thread).
> >
> > if (cs->pub.state == CONN_STATE_WRITE_COMPLETION) {
> > - int pending = DECLINED;
> > + int pending, want_read = 0;
> >
> > - ap_update_child_status(cs->sbh, SERVER_BUSY_WRITE, NULL);
> > -
> > - if (from_wc_q) {
> > - from_wc_q = 0; /* one shot */
> > - pending = ap_run_output_pending(c);
> > + if (cs->pub.sense == CONN_SENSE_DEFAULT) {
> > + /* If the user did not ask for READ or WRITE explicitely then
> > + * we are in normal post request processing, i.e. busy write
> > + * in the scoreboard.
> > + */
> > + ap_update_child_status(cs->sbh, SERVER_BUSY_WRITE, NULL);
> > }
> > - else if (ap_filter_should_yield(c->output_filters)) {
>
> Why no longer use the ap_filter_should_yield(c->output_filters) approach when the
> processing just before left us in write completion?
I just wanted to "show" in the patch that we'd better call
ap_run_output_pending() for CONN_SENSE_WANT_READ, to avoid the POLLOUT
if we can flush right now. ap_filter_should_yield() is an optimization
which avoids running the output filters if we already ran them or
called ap_run_output_pending() just before, but that's not necessarily
the case when CONN_SENSE_WANT_READ is asked for (I suppose).
Anyway, I think we should call ap_run_output_pending() in more cases
than the current "from_wc_q" one (that's why I removed this logic); the
right test is possibly something along the lines of
"from_wc_q || want_read", or we can just leave ap_filter_should_yield()
as is and do the POLLOUT even if writing is already non-blocking.
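Something like this, just to illustrate the "from_wc_q || want_read"
option (untested):

    /* Illustration only: also flush via ap_run_output_pending() when the
     * handler asked to read, and keep ap_filter_should_yield() as the
     * fallback for the default write completion path.
     */
    int pending = DECLINED;
    if (from_wc_q || want_read) {
        from_wc_q = 0; /* one shot */
        pending = ap_run_output_pending(c);
    }
    else if (ap_filter_should_yield(c->output_filters)) {
        pending = OK; /* output already backlogged, keep polling for write */
    }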
Regards,
Yann.