Thanks Simone,
Responses inline

On Mon, Oct 3, 2022 at 2:52 PM, Simone Bordet ([email protected])
wrote:

> Hi,
>
> On Mon, Oct 3, 2022 at 9:52 PM Tomas Fernandez Lobbe <[email protected]>
> wrote:
> >
> > Hello,
> > I was looking at the way SolrJ (Solr's client library) is using Jetty to
> > issue requests to the Solr server (which also uses Jetty). As of now,
> > Solr is using Jetty 9.4.48 in main (I don't know if that's relevant to my
> > question, unless behavior in this area changed recently).
> > From what I could see in the code and docs, the way the Jetty client
> > handles a new request to a particular destination is:
> > 1. Add the request to a queue
> > 2. Attempt to get a connection from the pool
> > 3. If successful getting a connection, use it to send the request;
> > otherwise, just exit: something else will grab the request from the
> > queue and send it when connections are available.
> >
> > The caller is expected to provide listener(s) that will receive a
> > callback for events happening in the request/response. Solr is using an
> > "InputStreamResponseListener" like[1]:
> >
> >       InputStreamResponseListener listener = new InputStreamResponseListener();
> >       req.send(listener);
> >       Response response = listener.get(idleTimeout, TimeUnit.MILLISECONDS);
> >       InputStream is = listener.getInputStream();
> >       return processErrorsAndResponse(solrRequest, parser, response, is);
> >
> > This pattern looks similar to what's in the Jetty docs, and even similar
> > to what the blocking APIs in the Jetty client itself are using.
> >
> > The thread that sends the request (because it was successful acquiring
> > the connection from the pool) will continue fetching requests from the
> > queue and sending them for as long as there are requests in the queue.
> > My question is: can't this be problematic as the queue grows? Can the
> > thread that sends a request "FOO" be stuck sending other requests from
> > the queue for longer than request "FOO" takes to be processed on the
> > server side, even after a response is available on the client (at which
> > point listener.get(...) would return immediately)?
> >
>
> When thread T1 sends a request, it acquires connection C1 and sends
> the request bytes.
> In order for T1 to come back and find another request queued, another
> thread T2 must have queued it, found no connection available, and
> initiated the opening of a second connection C2.
>

I see. And does this happen only in the "open new connection" case? What
about the case where all the connections are in use? If the queue, for
whatever reason, got N requests, won't T1 try to clear them all?

> T1 would try to acquire a connection, but it cannot because C2 is not
> opened yet, so it returns.
>

But doesn't the "process(Connection)" method in HttpDestination continue
processing requests without releasing the connection?
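
To make my concern concrete, here is a toy model of my reading of that
loop. This is not actual Jetty code: the names (`process`, the
single-connection pool, the queue) are my assumptions, simplified to a
single thread and no real I/O, just to illustrate the "one thread drains
the whole queue while holding the connection" scenario I'm asking about:

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.atomic.AtomicBoolean;

// Toy model (NOT Jetty code) of my reading of the send loop:
// the thread that acquires the single connection keeps polling the
// queue, so it sends every queued request before returning.
public class SendLoopSketch {
    static final Queue<String> queue = new ArrayDeque<>();
    static final AtomicBoolean connectionBusy = new AtomicBoolean(false);
    static int sentBySingleThread = 0;

    static void send(String request) {
        sentBySingleThread++; // stand-in for writing the request bytes
    }

    // Loosely analogous, in this toy model, to what I understand
    // HttpDestination.process(Connection) to do.
    static void process() {
        if (!connectionBusy.compareAndSet(false, true))
            return; // another thread holds the connection; just exit
        try {
            String request;
            while ((request = queue.poll()) != null)
                send(request); // the same thread keeps draining the queue
        } finally {
            connectionBusy.set(false);
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++)
            queue.add("req-" + i);
        process(); // one thread enters once...
        System.out.println(sentBySingleThread); // ...and has sent all 5
    }
}
```

If this model is roughly right, a thread entering `process()` with N
requests queued would send all N before returning, which is the scenario
behind my question above.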


>
> When C2 is opened, another thread, say T3, runs the after-opening
> code, which would poll a request from the queue and try to send it.
> T1 may steal C2 from T3, and do additional work that is not pertinent
> to the first request, but it's typically a quite rare case.
>
> For T1 to find a large queue of requests, it would have to compete
> with a large number of connection-opening threads, and the chances
> that T1 steals from all of them are minimal.
>
> In case of a single connection per destination, T1 can only send a
> second request if the first request/response cycle is completed (for
> HTTP1).
> Typically it is not completed, due to network latency, so T1 would
> return after sending the first request.
>
> For multiplexed protocols such as HTTP2, sender thread T2 would find a
> connection, so it would typically send the request.
> Again, it may be possible that T1 steals the request from T2, but that
> again should be a rare case.
>
> I guess there is a degenerate case in HTTP1 where, with 1 connection
> per destination, the responses arrive so fast that T1 is always busy
> sending, but things must be aligned pretty precisely for that to
> happen.
>
> > Would it make more sense for the `send` to happen on the client's
> > executor, something like this?
> >
> > httpClient.getExecutor().execute(() -> req.send(listener));
>
> This will always pay the cost of a CPU context switch also for the
> non-degenerate cases, so it's not particularly desirable.
> Calling execute() for request.send(listener) would not be the right
> place to cope with the degenerate case -- the execute() should be
> called from much deeper in the implementation when T1 loops to try to
> send another request and it has found another connection.
> Feels like quite some work for little return to cover a rare case.
>
> Do you have evidence that you are hitting the degenerate case often?
>

I started looking at this while chasing a "Max Requests queued per
Destination" issue, which I believe was caused by Solr not aborting
requests correctly (see [2] if you are interested), so at this point it's
just me trying to understand the code and making sure we are using it
correctly.

Tomás

[2] https://issues.apache.org/jira/browse/SOLR-16229


>
> --
> Simone Bordet
> ----
> http://cometd.org
> http://webtide.com
> Developer advice, training, services and support
> from the Jetty & CometD experts.
> _______________________________________________
> jetty-users mailing list
> [email protected]
> To unsubscribe from this list, visit
> https://www.eclipse.org/mailman/listinfo/jetty-users
>