Just to throw my 2c in here. It would also be good if we could run the
tests that failed "last time" first. That way at least if they are broken
then we'd fail fast rather than waiting forever. Not much sucks more than
waiting for 2 hours to find out a test failed when it fails regularly and
you c
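The "run the previously failed tests first" idea above could be sketched roughly like this in plain Java (an illustration only; a real implementation would hook into the test runner's ordering mechanism, and the names here are hypothetical):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Set;

// Hypothetical sketch: reorder test classes so that the ones that failed
// in the previous run execute first, giving fast feedback on known breakage.
public class FailedFirstOrdering {
    public static List<String> orderFailedFirst(List<String> allTests,
                                                Set<String> previouslyFailed) {
        List<String> ordered = new ArrayList<>(allTests);
        // Stable sort: previously failed tests move to the front while the
        // relative order inside each group is preserved.
        ordered.sort(Comparator.comparingInt(
                (String t) -> previouslyFailed.contains(t) ? 0 : 1));
        return ordered;
    }

    public static void main(String[] args) {
        List<String> tests = List.of("A", "B", "C", "D");
        Set<String> failedLastTime = Set.of("C");
        System.out.println(orderFailedFirst(tests, failedLastTime)); // [C, A, B, D]
    }
}
```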
Hi All,
I've created a PR for what we have internally for retrying flaky tests. Any
reviews and ideas are welcome: https://github.com/apache/kafka/pull/6506
It basically collects the failed classes and reruns them at the end. If
they're successful, it overwrites the test report.
Thanks,
Viktor
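For readers skimming the thread, the rerun logic can be sketched roughly like this (a simplified illustration, not the actual PR code; `runClass` here is a hypothetical stand-in for whatever executes one test class and reports pass/fail):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Simplified illustration of the rerun-at-the-end approach: run all test
// classes once, collect the failures, rerun only those, and report as
// genuinely failed only the classes that failed both runs.
public class FlakyRerun {
    public static List<String> runWithRerun(List<String> classes,
                                            Predicate<String> runClass) {
        List<String> failedOnce = new ArrayList<>();
        for (String c : classes) {
            if (!runClass.test(c)) failedOnce.add(c); // first pass
        }
        List<String> failedTwice = new ArrayList<>();
        for (String c : failedOnce) {
            if (!runClass.test(c)) failedTwice.add(c); // second chance
        }
        return failedTwice; // non-empty would fail the build
    }
}
```

A class that passes on the rerun is treated as flaky rather than broken, which matches the behavior described above.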
On
I agree with Ron.
I think improving the framework with a configurable number of retries on
some tests will yield the highest ROI in terms of passing builds.
On Fri, Mar 8, 2019 at 10:48 PM Ron Dagostino wrote:
> It's a classic problem: you can't string N things together serially and
> expect hig
It's a classic problem: you can't string N things together serially and
expect high reliability. 5,000 tests in a row isn't going to give you a
bunch of 9's. It feels to me that the test frameworks themselves should
support a more robust model -- like a way to tag a test as "retry me up to
N time
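A minimal sketch of what such a "retry me up to N times" hook could look like, written in plain Java for illustration (JUnit and TestNG each have their own extension points for this, and the names below are hypothetical):

```java
// Hypothetical sketch of a retry-up-to-N-times helper, not actual
// framework support: rerun a failing test body a bounded number of
// times before declaring it failed.
public class RetryRunner {
    @FunctionalInterface
    public interface TestBody { void run() throws Exception; }

    /**
     * Runs the test body, retrying up to maxAttempts times. Returns the
     * attempt number that passed; rethrows the last failure if every
     * attempt failed.
     */
    public static int retry(int maxAttempts, TestBody body) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                body.run();
                return attempt; // passed on this attempt
            } catch (Exception e) {
                last = e; // remember the failure and retry
            }
        }
        throw last;
    }
}
```

With something like this, a test that passes on any of its N attempts no longer fails the whole serial chain.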
> We have had an internal improvement for half a year now which reruns the
flaky test classes at the end of the Gradle test task, lets you know that
they were rerun and are probably flaky. It fails the build only if the second
run of the test class was also unsuccessful. I think it works pretty well,
w
Hey All,
Thanks for the loads of ideas.
@Stanislav, @Sonke
I probably left this out of my email, but I really imagined this as a
case-by-case change. If we think that it wouldn't cause problems,
then it might be applied. That way we'd limit the blast radius somewhat.
The 1 hour gain is reall
It's an idea that has come up before and worth exploring eventually.
However, I'd first try to optimize the server startup/shutdown process. If
we measure where the time is going, maybe some opportunities will present
themselves.
Ismael
On Wed, Feb 27, 2019, 3:09 AM Viktor Somogyi-Vass
wrote:
Hi Colin.
> On Wed, Feb 27, 2019, at 10:02, Ron Dagostino wrote:
> > Hi everyone. Maybe providing the option to run it both ways -- start
> your
> > own cluster vs. using one that is pre-started -- might be useful? Don't
> > know how it would work or if it would be useful, but it is s
On Wed, Feb 27, 2019, at 10:02, Ron Dagostino wrote:
> Hi everyone. Maybe providing the option to run it both ways -- start your
> own cluster vs. using one that is pre-started -- might be useful? Don't
> know how it would work or if it would be useful, but it is something to
> think about.
>
>
Hi everyone. Maybe providing the option to run it both ways -- start your
own cluster vs. using one that is pre-started -- might be useful? Don't
know how it would work or if it would be useful, but it is something to
think about.
Also, while the argument against using a pre-started cluster due
Hi,
while I am also extremely annoyed at times by the amount of coffee I
have to drink before tests finish, I think the argument about flaky
tests is valid! The current setup has the benefit that every test case
runs on a pristine cluster; if we changed this, we'd need to go through
all tests and en
Hey Viktor,
I am all for the idea of speeding up the tests. Running the
`:core:integrationTest` command takes an absurd amount of time as it is, and
it will only keep going up if we don't do anything about it.
Having said that, I am very scared that your proposal might significantly
increase th
Hi Folks,
I've been observing lately that unit tests usually take 2.5 hours to run
and a very big portion of these are the core tests where a new cluster is
spun up for every test. This takes most of the time. I ran a test
(TopicCommandWithAdminClient with 38 tests inside) through the profiler and