Re: Known trillian test failures

2017-12-20 Thread Ron Wheeler
While cleaning up the tests, is there any value in splitting out tests 
that are redundant:

- tests that exercise low-level functions whose failures will be picked 
up in tests of higher-level functions

- tests that run against modules that "never" change.

The lower-level tests may still be useful for testing a change to a 
low-level function, or for tracking down a failure in a higher-level 
function that uses a low-level routine, but they may not add much value 
to a test suite that is run frequently.


Would this reduce the time taken to do a full test run, at the expense 
of some increased risk that an edge case might be missed?
Would setting aside the clutter allow the team to focus on the tests 
that really matter?
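A rough sketch of what that split could look like, assuming each test can be tagged with a tier (the tier names, test names, and structure below are hypothetical, not from the Marvin codebase):

```python
# Hypothetical split of a test suite into a frequent tier (run on every
# PR) and a full tier (run nightly / before release). Low-level tests go
# in the full tier only, on the assumption that their failures would
# also surface in the higher-level frequent tests.

FREQUENT = "frequent"   # run on every PR
FULL = "full"           # run nightly / before release

# (test name, tier) -- names are illustrative only.
TESTS = [
    ("test_parse_cidr", FULL),          # low-level helper test
    ("test_create_network", FREQUENT),  # exercises the helper indirectly
    ("test_deploy_vm", FREQUENT),
]

def select(tier):
    """Return the test names to run for a given tier."""
    if tier == FULL:
        # A full run still includes everything, so the low-level tests
        # remain available for tracking down failures.
        return [name for name, _ in TESTS]
    return [name for name, t in TESTS if t == tier]
```

A frequent run (`select(FREQUENT)`) would then omit `test_parse_cidr` while the full run keeps it, trading some edge-case coverage for a shorter per-PR suite.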


Ron




--
Ron Wheeler
President
Artifact Software Inc
email: rwhee...@artifact-software.com
skype: ronaldmwheeler
phone: 866-970-2435, ext 102



RE: Known trillian test failures

2017-12-20 Thread Paul Angus
Hi Marc-Aurèle, (and everyone else)

The title is probably slightly incorrect: it should really say known Marvin 
test failures. Trillian is the automation that creates the environments to run 
the tests in; the tests themselves are purely those in the Marvin codebase, so 
anyone can repeat them. In fact, we would like to see other people running the 
tests in their own environments and comparing the results.

With regard to the failing tests, I agree that it would be dangerous to hide 
failures.
I would, however, like to see a matrix of known-good and known-bad tests; any 
PR that then fails a known-good test has a problem.
With a visible list of known-bad tests we can 'not fail' a PR for failing a 
bad test, and the community can attack that list and whittle it down until all 
tests *should* pass.

That way we can make clear (automated) decisions on pass/fail, rather than 
getting a list of passes and failures that we then have to interpret.
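A minimal sketch of that automated decision, assuming the known-bad matrix is just a set of test names (the names below are hypothetical, not real Marvin tests):

```python
# Hypothetical gate: fail a PR only when a known-good test fails;
# report known-bad failures without blocking, so the community has a
# visible list to whittle down.

KNOWN_BAD = {"test_vpc_redundant", "test_snapshots"}  # shrinks over time

def evaluate(failures):
    """Split a PR's failing tests into blocking and non-blocking sets."""
    failures = set(failures)
    blocking = failures - KNOWN_BAD   # a known-good test failed: real problem
    ignored = failures & KNOWN_BAD    # known-bad: reported, not blocking
    return blocking, ignored

def pr_passes(failures):
    """Automated pass/fail decision for a PR's test run."""
    blocking, _ = evaluate(failures)
    return not blocking
```

With this, a PR that only trips known-bad tests passes automatically, while any failure outside the list blocks it, so no human interpretation of the raw pass/fail list is needed.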



Kind regards,

Paul Angus

paul.an...@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London WC2N 4HS, UK
@shapeblue


-Original Message-
From: Marc-Aurèle Brothier [mailto:ma...@exoscale.ch] 
Sent: 20 December 2017 12:56
To: dev@cloudstack.apache.org
Subject: Known trillian test failures

@rhtyd

Could something be done to avoid confusing people who push a PR and get 
Trillian test failures from tests which are apparently known to fail all the 
time, or often? I know it's hard to keep the tests in good shape and make them 
run smoothly, but I find it very disturbing, and I have to admit that, sadly, 
I'm not paying attention to those outputs.

Skipping them adds the high risk that they never get fixed... I would hope 
that someone with full access to the management & agent logs could fix them, 
since AFAIK those logs aren't available.

Cheers


Re: Known trillian test failures

2017-12-20 Thread Rohit Yadav
Hi Marc,


You've raised a very valid concern. When we have a known list of smoketest 
failures, it's understandable that most people may not know how to interpret 
them and will ignore them. Access to the Trillian environment is another 
issue. I don't have all the answers or a solution to these problems at the 
moment, but let me discuss this internally and get back to you.


Can you and others help review PR 2211, where I've tried to address that? I 
ask because this PR not only tries to migrate us to a newer Debian 
systemvmtemplate, but also focuses on stabilizing master by getting an almost 
100% smoketest pass rate on VMware/KVM/XenServer; to get there I had to fix 
some tests as well. Once we have such a pass rate on master, it will be easier 
to verify test results on other PRs against that baseline.


I'll see if we can improve the Trillian test runs to include the management 
server (and agent) logs in the Marvin log zip that is posted as part of the 
result on the GitHub PR.


- Rohit



rohit.ya...@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London WC2N 4HS, UK
@shapeblue