Hi Dan,

Good catch!

I think it makes sense to have a predictable order. It's worth doing, because even if we then miss some test failures, the root cause would most likely be in the test itself (like missing resource cleanup), not in the code the test is covering.

+1 on the proposal.

Regards
JB

On 30/01/2018 17:33, Daniel Kulp wrote:
I spent a couple of hours this morning trying to figure out why two of the SQL tests are failing on my machine, but not for Jenkins or for JB. Not knowing anything about the SQL stuff, it was very hard to debug, and it wouldn't fail within Eclipse or even if I ran that individual test from the command line with -Dtest= . Thus, a real pain…

It turns out there is an interaction problem between it and a test that runs before it on my machine; on Jenkins and on JB's machine, the tests run in a different order, so the problem doesn't surface. So here's the question:

Should the surefire configuration specify a "runOrder" so that the tests run in the same order on all of our machines? By default, the runOrder is "filesystem", so depending on the order in which the filesystem returns the test classes to surefire, the tests run in a different order. It looks like my APFS Mac returns them in a different order than JB's Linux box does. That also means that if there is a Jenkins test failure or similar, I might not be able to reproduce it (nor might a Windows person, or even a Linux user on a different filesystem than Jenkins). For most of the projects I work on, we generally set "<runOrder>alphabetical</runOrder>" to make things completely predictable. That said, non-deterministic ordering can surface issues like this one, where tests aren't cleaning up after themselves correctly. We could use runOrder=hourly to flip back and forth between alphabetical and reverse-alphabetical: the order stays predictable, but it changes over time, which helps detect these issues.
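
For reference, a minimal surefire configuration along these lines might look like the sketch below (plugin version and the rest of our actual pom omitted, so this would need to be adapted to whatever plugin block the build already has):

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <configuration>
        <!-- "filesystem" is the default; other supported values include
             alphabetical, reversealphabetical, random, hourly, and balanced -->
        <runOrder>alphabetical</runOrder>
      </configuration>
    </plugin>

With runOrder=hourly, surefire runs the tests alphabetically in even hours and reverse-alphabetically in odd hours, so every machine gets the same order at a given time, but the order still flips regularly.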

Thoughts?

