Rather than strong-arm surefire + JUnit, I'd prefer to see this effort going into making the flaky tests more deterministic.
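To make that concrete: a common source of nondeterminism is a fixed Thread.sleep() racing against asynchronous work, and the "waitFor stuff" mentioned later in this thread addresses exactly that. Below is a minimal sketch of such a polling helper; the class and method names are illustrative, not HBase's actual test utility:

```java
import java.util.concurrent.Callable;

/**
 * Sketch of a deterministic wait: retry a condition until it holds
 * or a timeout elapses, instead of sleeping a fixed amount and hoping.
 */
public final class WaitFor {
    public static boolean waitFor(long timeoutMs, long intervalMs,
                                  Callable<Boolean> condition) throws Exception {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.call()) {
                return true;  // condition satisfied before the deadline
            }
            Thread.sleep(intervalMs);  // back off briefly, then re-check
        }
        return condition.call();  // one last check at the deadline
    }

    public static void main(String[] args) throws Exception {
        long start = System.currentTimeMillis();
        // Condition that only becomes true after roughly 200 ms.
        boolean ok = waitFor(5000, 50,
                () -> System.currentTimeMillis() - start > 200);
        System.out.println(ok ? "condition met" : "timed out");
    }
}
```

A test rewritten this way passes as soon as the condition holds and only fails after a generous timeout, which removes the timing sensitivity that makes it flaky.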
-n

On Wed, Apr 10, 2013 at 9:07 AM, Ted Yu <[email protected]> wrote:

> 0.95 and trunk builds, as of this email, are both green.
>
> Looks like what we can do is to pay more attention to flaky test(s) and fix
> them, considering the awareness of this matter we have gathered so far.
>
> Cheers
>
> On Wed, Apr 3, 2013 at 11:14 AM, Jimmy Xiang <[email protected]> wrote:
>
> > HBASE-8256 was filed. We can discuss it further on the Jira if interested.
> >
> > Thanks,
> > Jimmy
> >
> > On Tue, Apr 2, 2013 at 10:50 AM, Nicolas Liochon <[email protected]> wrote:
> >
> > > I'm between +0 and -0.5.
> > > +0 because I like green statuses: they help to detect regressions.
> > >
> > > -0.5 because:
> > > - If we can't afford to fix it now, I guess it won't be fixed in the
> > > future: we will continue to keep it in the codebase (i.e. paying the cost
> > > of updating it when we change an interface), but without any added value,
> > > as we don't run it.
> > > - Some test failures are actually issues in the main source code. Ok,
> > > they're often minor, but still they are issues. The last example I have
> > > is from today: the one found by Jeff, related to HBASE-8204.
> > > - And sometimes it shows gaps in the way we test (for example, the
> > > waitFor stuff, while quite obvious in a way, was added only very recently).
> > > - Often a flaky test is better than no test at all: it can still
> > > detect regressions.
> > > - I also don't understand why the precommit build now seems to be better
> > > than the main build.
> > >
> > > For me, doing it in a case-by-case way would be simpler (using the
> > > component owners: if a test on a given component is flaky, the decision
> > > can be taken between the people who want to remove the test and the
> > > component owners, with a Jira, an analysis, and a traced decision).
> > >
> > > Cheers,
> > >
> > > Nicolas
> > >
> > > On Tue, Apr 2, 2013 at 7:09 PM, Jimmy Xiang <[email protected]> wrote:
> > >
> > > > We have not seen a couple of blue Jenkins builds for 0.95/trunk for
> > > > quite some time. Because of this, sometimes we ignore the precommit
> > > > build failures, which could let some bugs (code or test) sneak in.
> > > >
> > > > I was wondering if it is time to disable all flaky tests and let
> > > > Jenkins stay blue. We can maintain a list of disabled tests, and get
> > > > them back once they are fixed. For each disabled test, if someone
> > > > wants to get it back, please file a Jira so that we don't duplicate
> > > > the effort and work on the same one.
> > > >
> > > > As to how to define a flaky test: to me, if a test fails twice in the
> > > > last 10/20 runs, then it is flaky, if there is no apparent env issue.
> > > >
> > > > We have different Jenkins jobs for hadoop 1 and hadoop 2. If a test
> > > > is flaky for either one, it is flaky.
> > > >
> > > > What do you think?
> > > >
> > > > Thanks,
> > > > Jimmy
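For reference, if the project did go the disable-and-track route Jimmy describes, one low-ceremony way to implement it would be Maven Surefire's standard `<excludes>` configuration, sketched below. The test class name and the tracking Jira number here are placeholders, not real entries:

```xml
<!-- pom.xml sketch: exclude known-flaky tests from the main build.
     Class name and Jira reference are hypothetical examples. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <excludes>
      <!-- Flaky: tracked in HBASE-XXXX; remove this entry once fixed. -->
      <exclude>**/TestSomeFlakyFeature.java</exclude>
    </excludes>
  </configuration>
</plugin>
```

Keeping the tracking Jira in a comment next to each exclusion makes the "maintain a list of disabled tests" part of the proposal self-documenting in the pom itself.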
