*So why is the request/second rate significantly lower with the router?*

My model, so far, is as follows. Does this seem plausible?
When there is no router present, the senders send with less constraint, filling up whatever buffers there are between them and the server. This achieves high throughput but at the cost of higher latency. If they could somehow restrain themselves, they would probably achieve significantly lower latency than in the with-router case.

When the router is present, it imposes more constraints on how fast the senders can send. This reduces throughput but improves latency to the point where the with-router case is actually slightly faster along much of the curve, although at higher levels of concurrency the difference is very small.

On Mon, Mar 15, 2021 at 12:06 PM Michael Goulish <[email protected]> wrote:

> I have updated the document to include "4 KB Buffers" in the title, and new
> graphs for Average Latency and Slowest Latency.
>
> The link I already sent still works for the new version.
>
> On Mon, Mar 15, 2021 at 10:51 AM Gordon Sim <[email protected]> wrote:
>
>> On 15/03/2021 14:06, Michael Goulish wrote:
>> > *How does the average latency compare?*
>> >
>> > I had to check this 3 times before I believed it, but the average latency
>> > is dead equal -- to within 0.1 msec -- with or without the router, all
>> > the way from 10 to 100 concurrent workers.
>>
>> So why is the request/second rate significantly lower with the router?
>> If each client is just issuing one request after the other, and the
>> average time to finish each request is as good with the router as
>> without, where is the extra time that a reduced rate implies going?
>>
>> Could it be that the small number of slower requests are so much slower
>> that it is impacting the overall rate?
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: [email protected]
>> For additional commands, e-mail: [email protected]
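Gordon's closing hypothesis is easy to sanity-check arithmetically: in a closed-loop benchmark where N workers each issue one request after another, the achieved rate is N / mean(latency), so a small tail of very slow requests can depress the overall rate while leaving the typical (median) latency untouched. A minimal sketch of that arithmetic, using invented numbers (the latencies and outlier counts below are assumptions for illustration, not measurements from these runs):

```python
# Illustration only: all latency values here are made up.
# In a closed loop with N workers, achieved rate = N / mean(latency),
# so rare-but-huge outliers lower the rate even when the median is unchanged.
from statistics import mean, median

n_workers = 100

fast = [0.010] * 990                 # 990 requests at ~10 ms each

no_router = fast + [0.010] * 10      # no slow outliers
with_router = fast + [0.500] * 10    # hypothetical: 1% of requests take 500 ms

for label, latencies in [("no router", no_router), ("with router", with_router)]:
    rate = n_workers / mean(latencies)
    print(f"{label}: median={median(latencies) * 1000:.1f} ms, "
          f"mean={mean(latencies) * 1000:.2f} ms, rate={rate:.0f} req/s")
```

In this made-up example the median latency is identical in both runs, yet ten 500 ms outliers out of a thousand requests cut the request rate by roughly a third. That would reconcile a lower request/second rate with a seemingly equal latency, if the reported "average" is in practice dominated by the typical case rather than the slowest tail.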
