On 09/06/2016 08:27 AM, Amos Jeffries wrote:
> On 27/08/2016 12:32 p.m., Alex Rousskov wrote:
>>           W1    W2    W3    W4    W5    W6
>> v3.1     32%   38%   16%   48%   16+    9%
>> v3.3     23%   31%   14%   42%   15%    8%
>> v3.5     11%   16%   12%   36%    7%    6%
>> v4.0     11%   15%    9%   30%   14%    5%
> That trend goes all the way back to at least 2.6. Which is a bit weird,
> since it contradicts the total request-per-second capacity we have been
> watching in polygraph results.

I do not know what you mean by "total request-per-second capacity", but I
most likely have not been watching it. Is there a historical results table
for that measure that I can study?

> The speed per request goes down and the number that can be handled rises.

Even if concurrent transaction handling has been improving (and that is a
big if -- I have no data to back it up), it does not excuse the significant
worsening of per-transaction performance IMO.

> As far as my simple observations have been, the per-request slowdown
> correlates to the amount of code being added (ie new features) and also
> to the amount of AsyncCalls involved with the request handling (where
> more AsyncCall is more latency).

Additional code obviously slows things down, but the existing rate of adding
new features/async calls does not seem to match the magnitude of the measured
performance decline, especially for the basic code paths exercised by these
micro tests. For things to be getting worse in such a manner, we probably
have to be making the old code/features worse (in addition to adding new
features/async calls) and/or adding those new features in a terribly
inefficient way.

Alex.
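
P.S. For anyone who wants a concrete picture of the "more AsyncCall is more
latency" effect: below is a minimal, self-contained sketch. It is plain C++,
not Squid's actual AsyncCalls machinery; the queue and the scheduleCall /
doRead / doParse / doReply names are invented purely for illustration. The
point is only that each asynchronous hop trades a direct function call for an
extra pass through the event loop, plus queueing and std::function overhead.

  #include <functional>
  #include <iostream>
  #include <queue>

  // Minimal stand-in for an async dispatch queue (not Squid code).
  using Call = std::function<void()>;
  static std::queue<Call> eventQueue;

  // Instead of running the next step now, defer it to a later loop pass.
  static void scheduleCall(Call step) { eventQueue.push(std::move(step)); }

  // One "transaction" split into three steps. A synchronous design would
  // simply run doRead(); doParse(); doReply(); in one straight call chain.
  static void doReply() { std::cout << "reply sent\n"; }
  static void doParse() { scheduleCall(doReply); }
  static void doRead()  { scheduleCall(doParse); }

  int main() {
      scheduleCall(doRead);

      int passes = 0;
      // The event loop: every scheduled call costs a dispatch pass, so the
      // same three steps now take three loop iterations before the reply
      // goes out, instead of one direct call chain.
      while (!eventQueue.empty()) {
          ++passes;
          Call next = std::move(eventQueue.front());
          eventQueue.pop();
          next();
      }
      std::cout << passes << " event-loop passes for one request\n";
  }

Of course, the open question remains whether that per-hop cost is anywhere
near large enough to explain the regressions in the table above.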