I am of the same opinion; this is the approach we're taking for Flink
as well. Various mitigations (e.g. capping the parallelism at 2 rather
than the default of num cores) have helped.

The idea of grouping unit tests together for "expensive" backends has
been proposed several times. E.g. for self-contained tests, one can
create a single pipeline that contains all the tests with their
asserts, and then run it once to amortize the overhead (which is quite
significant when you're only manipulating literally bytes of data).
Only on failure would the tests be exercised individually (either
sequentially, or via a binary search).
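To make the batch-then-bisect idea concrete, here is a minimal sketch
(not Beam code; all names are illustrative): run every check in one
amortized pass, and only if that combined run fails, bisect to isolate
the failing check(s).

```python
# Hypothetical sketch of batching checks into one run and bisecting on
# failure. `checks` is a list of zero-arg callables that raise
# AssertionError on failure; `run` executes a list of checks as one
# "pipeline" run and returns True iff all of them pass.

def run_batched(checks, run):
    if run(checks):
        return []  # everything passed in a single amortized run
    return bisect_failures(checks, run)

def bisect_failures(checks, run):
    # Base case: a single failing check is the culprit.
    if len(checks) == 1:
        return list(checks)
    mid = len(checks) // 2
    left, right = checks[:mid], checks[mid:]
    failures = []
    if not run(left):
        failures += bisect_failures(left, run)
    if not run(right):
        failures += bisect_failures(right, run)
    return failures

def run_all(checks):
    """A trivial stand-in for an expensive pipeline run."""
    try:
        for check in checks:
            check()
        return True
    except AssertionError:
        return False
```

In the common all-green case this costs one expensive run instead of N;
only a failure pays for O(log N) extra runs to pinpoint the culprit.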

On Mon, Jan 21, 2019 at 10:07 AM Etienne Chauchot <echauc...@apache.org> wrote:
>
> Hi guys,
>
> Lately I have been fixing various Elasticsearch flakiness issues in the 
> UTests by introducing timeouts, countdown latches, force refreshes, a 
> decreased embedded cluster size, ...
>
> These flakiness issues are due to the embedded Elasticsearch not coping well 
> with the Jenkins overload. Still, IMHO having an embedded backend for UTests 
> is a lot better than mocks. Even if it is less tolerant to load, I prefer 
> having UTests 100% representative of a real backend and adding 
> countermeasures to protect against Jenkins overload.
>
> WDYT ?
>
> Etienne
>
>
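The countermeasures Etienne lists (timeouts, countdown latches) share
one underlying pattern: instead of asserting immediately against an
eventually-consistent embedded backend, wait for it with a bounded
deadline. A minimal sketch of that pattern (illustrative names, not any
Beam or Elasticsearch API):

```python
import time

def await_condition(check, timeout_s=30.0, interval_s=0.5):
    """Poll `check` until it returns True or the deadline passes.
    Bounded waiting keeps the test deterministic-ish under CI load:
    a slow backend gets time to catch up, but a genuine failure still
    surfaces after `timeout_s` instead of hanging the build."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval_s)
    return check()  # one last attempt at the deadline
```

A test would then call e.g. `await_condition(lambda: doc_count() == 3)`
before asserting, rather than asserting right after the write.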
