Erik Cederstrand wrote:
Hi

I'd like to send a small update on my progress on the Performance Tracker project.

I now have a small setup with a server and a slave chugging along, currently collecting data. The slave follows CURRENT, and I'm collecting results from super-smack and unixbench.

The project still needs some work, but there's a temporary web interface to the data here: http://littlebit.dk:5000/plot/. Apart from the plotting, it's possible to compare two dates and see which files have changed. Error bars are 3 * standard deviation for the points with multiple measurements.
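Roughly, the error bars are computed like this (a simplified sketch, not the actual tracker code; the names are made up):

    # Simplified sketch: mean and 3-sigma half-width for one point's samples.
    from statistics import mean, stdev

    def error_bar(values):
        """Return (mean, half_width), where half_width = 3 * standard deviation."""
        m = mean(values)
        half = 3 * stdev(values) if len(values) > 1 else 0.0  # stdev needs >= 2 samples
        return m, half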

Of interest is e.g. super-smack (select-key, 1 client) right when the GENERIC kernel was moved from the 4BSD to the ULE scheduler on Oct. 19. Unixbench (arithmetic test, float) also shows a significant jump on Oct. 3.

The setup of the slave is documented roughly on the page, but I'll be writing a full report and documentation over the next month.

Comments are very welcome, but please follow up on [EMAIL PROTECTED].

This is coming along very nicely indeed!

One suggestion I have: as more metrics are added, it becomes important to have an "at a glance" overview of changes, so we can monitor for performance improvements and regressions across many workloads.

One way to do this would be a matrix of each metric with its change compared to recent samples. E.g. you could do a Student's t-test comparing today's numbers with yesterday's, or with those from a week ago, and colour-code the metrics that show a significant deviation from "no change". This might be a bit noisy on short timescales, so you could aggregate data into larger bins and compare e.g. moving 1-week aggregates. Fluctuations on short timescales won't stand out, but if there is a real change then it will show up less than a week later.
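Roughly what I have in mind, as a sketch (the metric names, data layout and threshold are invented, and I've used Welch's variant of the t-test since the variances of the two bins may differ):

    # Sketch of the "at a glance" matrix: compare this week's samples for
    # each metric against last week's and flag significant changes.
    from scipy.stats import ttest_ind

    ALPHA = 0.05  # significance threshold for flagging a change

    def classify(this_week, last_week):
        """Return 'improved', 'regressed' or 'no change' for one metric.

        Assumes higher = better and at least two samples per bin.
        """
        t, p = ttest_ind(this_week, last_week, equal_var=False)  # Welch's t-test
        if p >= ALPHA:
            return "no change"
        return "improved" if t > 0 else "regressed"

    def overview(metrics):
        """metrics maps a metric name to (this_week_samples, last_week_samples)."""
        return {name: classify(a, b) for name, (a, b) in metrics.items()}

The web page could then render this dict as a colour-coded table.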

These significant events could also be graphed themselves, and/or a history log could be maintained (or the events automatically annotated on the individual graphs) so that historical changes can be pinpointed.
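For example, each detected event could be appended to a simple log (again just a sketch; the file name and format are only an idea):

    # Sketch: append one detected change to a CSV event log for later
    # plotting or annotation. Path and format are arbitrary choices.
    import csv
    from datetime import date

    def log_event(metric, direction, logfile="events.csv"):
        with open(logfile, "a", newline="") as f:
            csv.writer(f).writerow([date.today().isoformat(), metric, direction])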

At some point the ability to annotate the data will become important (e.g. "We understand the cause of this, it was r1.123 of foo.c, which was corrected in r1.124. The developer responsible has been shot.")
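Even a trivial store would do to start with, e.g. (a sketch; the storage format is only an idea, and the example note is of course hypothetical):

    # Sketch: free-form notes attached to sample dates, to be shown on graphs.
    annotations = {}  # date string -> list of notes

    def annotate(day, note):
        annotations.setdefault(day, []).append(note)

    annotate("2007-10-19", "Caused by r1.123 of foo.c; corrected in r1.124.")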

Kris

P.S. If I understand correctly, the float test shows a regression? The metric is calculations/second, so higher = better?
