At 03:21 am, gl...@twistedmatrix.com wrote:

On Jan 12, 2015, at 1:08 PM, exar...@twistedmatrix.com wrote:
E.g. I need latency histograms, but this seems unsupported (benchmark results can only have avg/min/max/stddev). For me, this isn't "nice to have" but essential. Throughput is one thing; consistent low latency is a completely different matter. The latter is much, much harder.
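As a minimal illustration of the point above (not tied to codespeed's actual schema, and the function name here is hypothetical): the four aggregates codespeed stores can make a service with a bad latency tail look healthy, while percentiles read off a histogram of the raw samples expose it.

```python
import statistics

def latency_summary(samples_ms):
    """Summarize latency samples: the aggregates codespeed stores,
    plus the percentiles a histogram of raw samples would expose."""
    ordered = sorted(samples_ms)

    def percentile(p):
        # Nearest-rank percentile over the sorted samples.
        idx = max(0, int(round(p / 100.0 * len(ordered))) - 1)
        return ordered[idx]

    return {
        "avg": statistics.mean(ordered),
        "min": ordered[0],
        "max": ordered[-1],
        "stddev": statistics.pstdev(ordered),
        "p50": percentile(50),
        "p99": percentile(99),
    }

# 95 fast requests and 5 slow outliers: the average stays low,
# but the 99th percentile shows the tail that users actually feel.
samples = [1.0] * 95 + [500.0] * 5
summary = latency_summary(samples)
```

With these numbers the average is about 26 ms while p99 is 500 ms, which is exactly the information a histogram preserves and a single avg/stddev pair obscures.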

Codespeed is terrible. But this is not one of the ways in which it is terrible. Codespeed doesn't care if you label your measurement "latency". I think you've just noticed that what the existing benchmarks measure is mostly (entirely?) throughput. If you wanted to write a latency benchmark, I don't think anything's stopping you.

I believe Tobias was not saying "codespeed can't have a measurement called 'latency'" but rather "codespeed can't do histograms of measurements, which we need for measurement of latency and you don't need for measurement of throughput". Is that accurate? I don't know if there's a histogram feature hidden in the UI somewhere.

It would be nice to at least try a little bit to contribute things (like a histogram feature) to codespeed before charging off in a completely different direction.

I wasn't suggesting it would be a good idea to contribute to codespeed. I think codespeed should be thrown in the trash. It was a great demonstration of a concept and we should thank it for that. However, as the basis of future development - no, it's an awful piece of unmaintained software.

I was just trying to say that work towards replacing it should learn what it can from codespeed to try to avoid creating another piece of awful, ultimately unmaintained software.

Jean-Paul

_______________________________________________
Twisted-Python mailing list
Twisted-Python@twistedmatrix.com
http://twistedmatrix.com/cgi-bin/mailman/listinfo/twisted-python