Sean Dague <s...@dague.net> writes:

> On 01/02/2014 04:29 PM, Michael Still wrote:
>> Heh, I didn't know that wiki page existed. I've added an entry to the
>> checklist.
>>
>> There's also some talk of adding some help text to the vote message
>> turbo-hipster leaves in gerrit, but we haven't gotten around to doing
>> that yet.
>>
>> Cheers,
>> Michael
>
> So was there enough countable slowness earlier in the run that you could
> have predicted these runs would be slower overall?
>
> My experience looking at Tempest run data is that there can be as much
> as a +60% variance from the fastest to the slowest nodes (same instance
> type) within the same cloud provider, which is the reason we've never
> tried to performance gate on it.
>
> However, if there were some earlier benchmark that would let you realize
> that the whole run was slow, and so give it more of a buffer, that would
> probably be useful.
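The idea above (run a quick benchmark early, then stretch the run's timeout
buffer in proportion to how slow the node turned out to be) could be sketched
roughly as follows. This is a hypothetical illustration, not turbo-hipster's
actual code; the function names, the workload, and the baseline value are all
assumptions:

```python
import time

def cpu_benchmark(iterations=2_000_000):
    """Time a fixed CPU-bound workload on this node; returns elapsed seconds.
    Run once near the start of the job, before the real test run."""
    start = time.perf_counter()
    total = 0
    for i in range(iterations):
        total += i * i
    return time.perf_counter() - start

def scaled_timeout(base_timeout, baseline_secs, measured_secs, cap=2.0):
    """Stretch base_timeout by the node's measured slowdown relative to a
    known-good baseline, capped so a pathologically slow node still fails.
    A node faster than baseline keeps the unmodified timeout."""
    factor = min(max(measured_secs / baseline_secs, 1.0), cap)
    return base_timeout * factor

# e.g. a node measuring 1.5x the baseline gets a 1.5x larger buffer:
# scaled_timeout(600, baseline_secs=1.0, measured_secs=1.5) -> 900.0
```

The cap matters: without it, an arbitrarily slow node would always be given
enough buffer to pass, defeating the point of timing the run at all.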
If you are able to do this and benchmark the performance of a cloud
server reliably enough, we might be able to make progress on performance
testing, which has long been desired. The large ops test is (somewhat
accidentally) a performance test, and predictably, it has failed when we
change cloud node provider configurations. A benchmark could make this
test more reliable and other performance tests more feasible.

-Jim

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev