On 02/10/2016 08:42 AM, David Moreau Simard wrote:
> Lots of questions, I'm sorry.
> 
> Are you planning to drop them indefinitely or is it temporary? Is it to
> help alleviate the gate from its current misery?
> 
> Why were these tests introduced in the first place? To find issues or
> bottlenecks relative to scale or amount of operations? Was it a request
> from the operator community?
> 
> I have a strong feeling there is a very real need for *something* that
> is able to find silly issues that only manifest themselves beyond the
> scale of one VM before we ship something to the operator community.

Permanently.

A test suite is only useful if it gives you a set of bread crumbs to go
from fail to fix, and it's predictable enough to believe the results are
real.

Macro performance testing is not possible in the environment we operate
in, because a 10x performance regression in one operation under the
covers gets smoothed out to a 5 or 10% variance at the macro level.
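To make the smoothing effect concrete, here's a back-of-the-envelope calculation with purely hypothetical numbers (the 10-second run time and 50ms operation cost are assumptions for illustration, not measurements from the actual job):

```python
# Illustrative only: how a 10x regression in one small operation
# nearly vanishes at the macro level.

macro_total = 10.0    # hypothetical total wall time of a macro test run (s)
op_baseline = 0.05    # hypothetical time spent in one operation (s)

# The operation regresses 10x under the covers.
op_regressed = op_baseline * 10

# Recompute the macro run time with the regressed operation.
new_total = macro_total - op_baseline + op_regressed
macro_change = (new_total - macro_total) / macro_total

print(f"macro-level change: {macro_change:.1%}")  # about 4.5%
```

A 4.5% shift is well inside normal run-to-run noise on shared CI infrastructure, so the 10x regression underneath is effectively invisible.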

That's a variance we can't detect. Over time things fail a bit more
often, and people bump the timeouts or reduce the parallelism.

When this job was first created, no one was looking at performance at
all. It was a minor stopgap to catch a class of issues. Since then we've
grown DB performance testing, Rally, and the current performance team.
Many more people are running performance analysis in their downstream QA
teams and feeding the results back.

The Neutron team stopped running this job a while ago because it was
just noise. And I agree with their call there. We should do the same
across OpenStack.

        -Sean

-- 
Sean Dague
http://dague.net

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
