While cleaning up the tests, is there any value in splitting out tests that are redundant? For example:
- tests of low-level functions whose failures will be picked up by tests of the higher-level functions that use them;
- tests that are run on modules that "never" change.

The lower-level tests may still be useful for testing a change to a low-level function, or for tracking down a failure in a higher-level function that uses a low-level routine, but they may not add much value to a test suite that is run frequently.

Would this reduce the time taken to run a full test pass, at the expense of some increased risk that an edge case might be missed? Would setting aside the clutter allow the team to focus on the tests that really matter?
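For illustration, here is a minimal sketch of such a split using generic pytest markers (the "lowlevel" marker name and the toy functions are hypothetical, and this is plain pytest rather than Marvin's own tagging mechanism):

# A sketch of tiering tests so the frequently-run suite can skip
# low-level tests without deleting them.
import pytest

def multiply(a, b):             # hypothetical low-level routine
    return a * b

def scale_vector(vec, factor):  # hypothetical higher-level function
    return [multiply(v, factor) for v in vec]

@pytest.mark.lowlevel  # custom marker; register it in pytest.ini to avoid warnings
def test_multiply():
    assert multiply(3, 4) == 12

def test_scale_vector():
    # A failure in multiply() would surface here too, which is the
    # redundancy argument for skipping the low-level test day to day.
    assert scale_vector([1, 2], 3) == [3, 6]

The frequent run would then be `pytest -m "not lowlevel"` while the full run stays plain `pytest`, so the low-level tests remain available for debugging without costing time on every run.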

Ron

On 20/12/2017 1:21 PM, Paul Angus wrote:
Hi Marc-Aurèle, (and everyone else)

The title is probably slightly incorrect. It should really say known Marvin 
test failures. Trillian is the automation that creates the environments the 
tests run in; the tests themselves are purely those in the Marvin codebase, so 
anyone can repeat them. In fact, we would like to see other people running the 
tests in their environments and comparing the results.

With regard to the failing tests, I agree that it would be dangerous to hide 
failures.
I would, however, like to see a matrix of known-good and known-bad tests; any 
PR that then fails a known-good test has a problem.
With a visible list of known-bad tests we can 'not fail' a PR for failing a 
bad test, and the community can attack that list and whittle it down until all 
tests *should* pass.

That way we can make clear (automated) decisions on pass/fail, rather than 
getting a list of passes and failures that we then have to interpret.
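For illustration, a hedged sketch of how such a matrix could drive the automated decision (the test names, the result format, and the evaluate() helper are all hypothetical, not the actual Trillian/Marvin output):

# KNOWN_BAD would be the visible, community-maintained list.
KNOWN_BAD = {"test_vpc_redundant", "test_snapshot_limits"}  # hypothetical entries

def evaluate(results):
    """results: dict mapping test name -> 'PASS' or 'FAIL'."""
    # Only failures of known-good tests should block a PR.
    unexpected = [t for t, r in results.items()
                  if r == "FAIL" and t not in KNOWN_BAD]
    # Known-bad tests that now pass are candidates to come off the list.
    fixed = [t for t, r in results.items()
             if r == "PASS" and t in KNOWN_BAD]
    verdict = "FAIL" if unexpected else "PASS"
    return verdict, unexpected, fixed

verdict, unexpected, fixed = evaluate({
    "test_vpc_redundant": "FAIL",  # known bad: reported, but does not block
    "test_deploy_vm": "PASS",
})
print(verdict, unexpected, fixed)  # -> PASS [] []

A PR would only be blocked on the 'unexpected' list, while the 'fixed' list shows tests the community has whittled off the known-bad matrix.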



Kind regards,

Paul Angus

paul.an...@shapeblue.com
www.shapeblue.com
53 Chandos Place, Covent Garden, London WC2N 4HS, UK
@shapeblue

-----Original Message-----
From: Marc-Aurèle Brothier [mailto:ma...@exoscale.ch]
Sent: 20 December 2017 12:56
To: dev@cloudstack.apache.org
Subject: Known trillian test failures

@rhtyd

Could something be done to avoid confusing people pushing PRs with Trillian 
test failures which are apparently known to fail all the time, or often? I know 
it's hard to keep the tests in good shape and make them run smoothly, but I find 
it very disturbing, and therefore I have to admit I'm not paying attention to 
those outputs, sadly.

Skipping them adds a high risk of their never getting fixed... I would hope 
that someone with full access to the management & agent logs could fix them, 
since AFAIK those logs aren't available.

Cheers


--
Ron Wheeler
President
Artifact Software Inc
email: rwhee...@artifact-software.com
skype: ronaldmwheeler
phone: 866-970-2435, ext 102
