On 03/06/2012 02:59 PM, Henrik Ingo wrote:
On Tue, Mar 6, 2012 at 8:27 PM, pcrews <[email protected]> wrote:
2) What do we want to do with the data on a failure?  Do we want to just
remove the run?  (I need to refresh my memory on how drizzle-automation
currently handles things)

For performance regressions, you of course want to keep the data
especially for failures. (I assume you mean the output from the
benchmark?)

I'll have to refresh my memory, but we do have room for improvement in how we handle our data. IIRC, there was some work / thinking to do regarding our benchmark data + automated regression identification. As I work on an initial patch and remember more, I'll post / request input : )


I haven't followed these closely, but from superficial browsing it
seems that the variability between normal runs is only 1-2 percent,
while the recent regression was around 15 percent, so clearly
visible. The threshold should probably be configurable, but in any
case will be something between 3 and 5%.

++. Sounds reasonable and it should be easy to get something working. I think we can have an initial patch this week. Will play with some params / options to help us tune this and get it into shape.
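For what it's worth, the threshold check described above could be sketched roughly like this (a minimal illustration only; the function and parameter names are hypothetical and not drizzle-automation's actual API):

```python
def is_regression(baseline_results, new_result, threshold_pct=5.0):
    """Flag a performance regression when the new result falls more than
    threshold_pct percent below the baseline average (higher is better)."""
    baseline_avg = sum(baseline_results) / len(baseline_results)
    drop_pct = (baseline_avg - new_result) / baseline_avg * 100.0
    return drop_pct > threshold_pct

# Normal run-to-run noise of 1-2% stays under a 5% threshold...
print(is_regression([1000, 1010, 990], 985))   # ~1.5% drop -> False
# ...while a 15% drop like the recent regression is clearly flagged.
print(is_regression([1000, 1010, 990], 850))   # ~15% drop -> True
```

Making threshold_pct a tunable parameter matches the idea of keeping the cutoff configurable while defaulting to the 3-5% range.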

Thanks for the input.

-patrick

henrik




_______________________________________________
Mailing list: https://launchpad.net/~drizzle-discuss
Post to     : [email protected]
Unsubscribe : https://launchpad.net/~drizzle-discuss
More help   : https://help.launchpad.net/ListHelp
