Excerpts from Sean Dague's message of 2016-02-10 04:33:44 -0800:
> The largeops tests at this point are mostly finding out that some of our
> new cloud providers are slow - http://tinyurl.com/j5u4nf5
>
> This is fundamentally a performance test, with timings having been tuned
> to pass 98% of the time on two clouds that were very predictable in
> performance. We're now running on 4 clouds, and the variance between
> them all, and between every run on each, can be as much as a factor of 2.
>
> We could just bump all the timeouts again, but that's basically the same
> thing as dropping them.
>
> These tests are not instrumented in a way that any real solution can be
> addressed in most cases. Tests without a path forward, that are failing
> good patches a lot, are very much the kind of thing we should remove
> from the system.
>
I think we need to replace this with something that measures work
counters, not clock time. As you say, some of the other test suites out
there already pick up a lot of this slack. I'm also working on this with
the counter-inspection spec, so hopefully dropping largeops now won't
leave too much of a gap in coverage while counter-inspection ramps up.

+1 to getting rid of it now: instability in the test suites, which slows
down development velocity, is worse than missing a few performance
regressions in the corners.

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
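[To make the "work counters, not clock time" idea concrete, here is a
minimal sketch. The `WorkCounter` class, the `boot_servers` workload, and
the budget numbers are all illustrative assumptions, not part of any
OpenStack test suite or the counter-inspection spec: the point is only
that the assertion bounds discrete units of work, so a slow cloud cannot
fail the test the way a wall-clock timeout can.]

```python
# Hypothetical sketch: assert on work counters rather than elapsed time.
# WorkCounter, boot_servers, and the per-boot costs below are invented
# for illustration; they are not real OpenStack APIs.

class WorkCounter:
    """Counts discrete units of work (e.g. DB queries, RPC calls)."""

    def __init__(self):
        self.counts = {}

    def incr(self, name, n=1):
        self.counts[name] = self.counts.get(name, 0) + n


def boot_servers(counter, num_servers):
    # Stand-in for the real workload; each boot is modeled as a fixed
    # number of DB queries and RPC calls.
    for _ in range(num_servers):
        counter.incr("db_queries", 3)
        counter.incr("rpc_calls", 2)


def test_boot_work_is_bounded():
    # A timing-free assertion: the work performed must stay within a
    # budget, no matter how slow the underlying cloud happens to be.
    counter = WorkCounter()
    boot_servers(counter, num_servers=10)
    assert counter.counts["db_queries"] <= 30
    assert counter.counts["rpc_calls"] <= 20


test_boot_work_is_bounded()
```

A regression that doubles the number of DB queries per boot would trip
this assertion deterministically on every provider, whereas a timeout
only catches it when the cloud also happens to be slow that day.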