On 15/03/2021 16:12, Michael Goulish wrote:
*So why is the request/second rate significantly lower with the router?*

My model, so far, is as follows. Does this seem plausible?

   When there is no router present, the senders send with less constraint,
filling up whatever buffers there are between them and the server. This
achieves high throughput, but at the cost of higher latency. If they could
somehow restrain themselves, they would probably achieve significantly lower
latency than in the with-router case.

When the router is present, it imposes more constraints on how fast the
senders can send. This reduces throughput but improves latency, to the point
where the with-router case is actually slightly faster along much of the
curve, although at higher levels of concurrency the difference is very
small.

If latency is measured from the point at which the request has been fully written to the socket on the client, up until the point at which the response has been read from it, then in theory I agree that the rate could be lower without the latency being any worse: any time spent waiting for the socket to become writable would not be included in the latency, but it would affect the rate.

I would expect latency to be measured from the time the request was *triggered* until the response was received, in which case any time spent waiting for the socket to become writable *would* impact the latency.

For the client socket not to be writable, the local buffers would have to be full and the router would have to have stopped reading from that socket on the other side. I wonder whether some instrumentation on the router, showing how long it spends not reading from that socket, would help confirm the issue?
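The suggested instrumentation could be as simple as accumulating the intervals during which reads are disabled on the client-facing connection. A hypothetical sketch (the hook names are assumptions; the real router would call these wherever it enables and disables read events):

```python
import time

class ReadStallTracker:
    """Accumulate the total time a socket spends not being read.

    Illustrative only: reads_disabled()/reads_enabled() stand in for
    whatever points the router uses to stop and resume reading a
    connection under backpressure.
    """
    def __init__(self):
        self.stalled_since = None   # monotonic timestamp, or None
        self.total_stalled = 0.0    # seconds spent with reads disabled

    def reads_disabled(self):
        # Called when the router stops reading from the socket.
        if self.stalled_since is None:
            self.stalled_since = time.monotonic()

    def reads_enabled(self):
        # Called when the router resumes reading from the socket.
        if self.stalled_since is not None:
            self.total_stalled += time.monotonic() - self.stalled_since
            self.stalled_since = None
```

A large `total_stalled` on the with-router path would support the theory that the rate difference comes from the router holding back the senders rather than from added per-request latency.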

