>
> > How would we handle commits that break the integration tests? Would
> > we revert commits on trunk, or fix-forward?
>
> This is currently up to committer discretion, and I don't think that
> would change if we were to re-tool the PR builds. In the presence of
> flaky failures, we can't
David,
Thanks for finding that Gradle plugin. The `changedModules` mode is
exactly what I had in mind for fairness to modules earlier in the
dependency graph.
> if we moved to a policy where PRs only need some of the tests to pass
> to merge, when would we run the full CI? On each trunk commit
Hey folks, interesting discussion.
I came across a Gradle plugin that calculates a DAG of modules based on the
diff and can run only the affected module's tests or the affected +
downstream tests.
https://github.com/dropbox/AffectedModuleDetector
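To make the two modes concrete, here is a minimal sketch (not the plugin's actual API; the graph and names are illustrative) of what "affected only" versus "affected + downstream" selects, given a module dependency graph:

```python
from collections import deque

def affected_modules(dep_graph, changed):
    """Return the changed modules plus everything downstream of them.

    dep_graph maps each module to the modules that depend on it
    (edges point from a module to its dependents).
    """
    result = set(changed)
    queue = deque(changed)
    while queue:
        module = queue.popleft()
        for dependent in dep_graph.get(module, ()):
            if dependent not in result:
                result.add(dependent)
                queue.append(dependent)
    return result

# Illustrative graph: :core is depended on by :client and :server,
# and :client is depended on by :app.
graph = {
    ":core": [":client", ":server"],
    ":client": [":app"],
}
# "affected + downstream" mode: a change to :core runs all four modules.
print(sorted(affected_modules(graph, {":core"})))
# "changed modules only" mode would run just {":core"}.
```

The fairness point above follows from this: modules early in the graph (like `:core`) pull in their whole downstream closure, while leaf modules only run themselves.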
I tested it out locally, and it seems to work as
Gaurav,
The target-determinator is certainly the "off-the-shelf" solution I
expected would be out there. If the project migrates to Bazel I think
that would make the partial builds much easier to implement.
I think we should look into the other benefits of migrating to Bazel
to see if it is worth
Hi Greg,
I can see the point about enabling partial runs as a temporary measure to
fight flakiness, and it does carry some merit. In that case, though, we
should have an idea of what the desired end state is once we've stopped
relying on any temporary measures. Do you think we should aim to
Hey Greg,
Thanks for sharing this idea!
The idea of building and testing a relevant subset of code certainly seems
interesting.
Perhaps this is a good fit for Bazel [1] where
target-determinator [2] can be used to find a subset of targets that have
changed between two commits.
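Roughly, the idea is to hash every target from its own inputs plus its dependencies' hashes, then diff the hashes at the two commits; a toy sketch of that scheme (purely illustrative, not target-determinator's real implementation) could look like:

```python
import hashlib

def target_hashes(sources, deps):
    """Compute a stable hash per target from its own sources plus the
    hashes of its dependencies, so a change propagates transitively.

    sources: target -> source contents; deps: target -> dependency list.
    Assumes the dependency graph is acyclic, as Bazel requires.
    """
    memo = {}
    def h(target):
        if target not in memo:
            digest = hashlib.sha256(sources[target].encode())
            for dep in sorted(deps.get(target, [])):
                digest.update(h(dep).encode())
            memo[target] = digest.hexdigest()
        return memo[target]
    return {t: h(t) for t in sources}

def changed_targets(before, after):
    """Targets whose hash differs between two snapshots (or are new)."""
    return {t for t, v in after.items() if before.get(t) != v}

# //lib changed between the commits; //app depends on //lib, //tool does not.
deps = {"//app": ["//lib"]}
old = target_hashes({"//lib": "v1", "//app": "main", "//tool": "x"}, deps)
new = target_hashes({"//lib": "v2", "//app": "main", "//tool": "x"}, deps)
print(sorted(changed_targets(old, new)))  # ['//app', '//lib']
```

Because a dependency's hash feeds into its dependents' hashes, changing `//lib` also flags `//app`, while the unrelated `//tool` is skipped.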
Even
David,
Thanks for your thoughts!
> Indeed, they will be more likely to pass but the
> downside is that folks may start to only rely on that signal and commit
> without looking at the full test suite. This seems dangerous to me.
I completely agree with you that it is not desirable for committers
Hey Greg,
Thanks for bringing this up.
I am not sure I understand the benefit of triggering a subset of the
tests in parallel. Indeed, they will be more likely to pass but the
downside is that folks may start to only rely on that signal and commit
without looking at the full test suite. This
Hey all,
I've been working on test flakiness recently, and I've been trying to
come up with ways to tackle the issue top-down as well as bottom-up,
and I'm interested to hear your thoughts on an idea.
In addition to the current full-suite runs, can we in parallel trigger
a smaller test run which