Santhosh - In the current model there is no need to know. All 20 check-ins are 
considered culprits. That subgroup of check-ins needs to be reviewed, and the 
responsible developers need to fix the failures. 

Ideally CI should run for each check-in; however, given the time it takes to 
run the Marvin automation, it is run on a schedule and groups check-ins. Unit 
tests for each check-in would be a suitable option. 

This is what I understand from what Alex has proposed [1]:

" Continuous Integration

The BVT will be run on master and the current release branch on a continuous 
basis.  If the BVT fails, the commits submitted between the last successful BVT 
run and the failed BVT runs are reverted.  The developers who submitted the 
commits are notified of the revert and can merge their changes again when they 
have figured out the problem on their own branch." 

 [1] https://cwiki.apache.org/confluence/display/CLOUDSTACK/Development+Process
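
For illustration only, here is a rough sketch (not from Alex's proposal; the 
variable names and SHAs are hypothetical) of how the suspect subgroup of 
check-ins could be listed, assuming the CI job records the SHA of the last 
successful BVT run:

    # Sketch only: list the check-ins between the last green BVT run and the
    # failed one; these are the "culprit" subgroup that gets reviewed/reverted.
    import subprocess

    LAST_GOOD_SHA = "abc1234"   # hypothetical: SHA of the last successful BVT run
    FAILED_SHA = "def5678"      # hypothetical: SHA at the failed BVT run

    def suspect_commits(last_good, failed):
        out = subprocess.check_output(
            ["git", "log", "--oneline", "%s..%s" % (last_good, failed)])
        return out.decode("utf-8").splitlines()

    for commit in suspect_commits(LAST_GOOD_SHA, FAILED_SHA):
        print(commit)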


Thanks
/Sudha

-----Original Message-----
From: Santhosh Edukulla [mailto:santhosh.eduku...@citrix.com] 
Sent: Tuesday, July 22, 2014 4:14 AM
To: dev@cloudstack.apache.org
Subject: RE: Disabling failed test cases (was RE: Review Request 23605: 
CLOUDSTACK-7107: Disabling failed test cases)

1. If, for example, 20 check-ins happen between two runs, and say 2 test 
cases fail for a given run, how do we know which check-in caused the failure 
in an automated CI environment that runs automatically from start to end?

Regards,
Santhosh
________________________________________
From: Sudha Ponnaganti [sudha.ponnaga...@citrix.com]
Sent: Monday, July 21, 2014 7:42 AM
To: dev@cloudstack.apache.org
Subject: RE: Disabling failed test cases (was RE: Review Request 23605: 
CLOUDSTACK-7107: Disabling failed test cases)

Hugo,

I absolutely agree with you that tests should not be disabled and fixes 
should be made before check-in. 

As per what Alex has mentioned in his CI enablement mail [1], the premise of 
CI is that it runs at a 100% pass rate, and if any check-in causes a failure 
in a CI run, the bad check-in is easily identified and reverted. The rest of 
the check-ins can then move forward, so the failure does not block the rest 
of the community and the health of the branch is maintained.

To enable CI in production, it is absolutely necessary to get a 100% pass 
rate before turning on CI; otherwise all master check-ins will halt because 
of these legacy issues, which require some investigation and fixing. If the 
offending commit is known, it can be reverted and there is no need to disable 
the test, but this seems to be an old issue rather than a current check-in. 
To me it looks like a one-off just to get CI up and running for the very 
first time.

Once this is fixed and the tests are enabled, there should not be any such 
test disabling in the future.

Alternatively, if this is too confusing, CI can be stopped now before it goes 
into production, the fixes can be made, and then CI can be enabled - we have 
waited long enough and we can wait some more for these last couple of issues 
to be fixed before turning on CI. But running CI with an arbitrary pass rate 
is not desirable; it defeats the purpose and is hard to manage.

Thanks
/Sudha

[1] https://cwiki.apache.org/confluence/display/CLOUDSTACK/Development+Process


-----Original Message-----
From: Trippie [mailto:trip...@gmail.com] On Behalf Of Hugo Trippaers
Sent: Monday, July 21, 2014 3:32 AM
To: dev@cloudstack.apache.org
Subject: Re: Disabling failed test cases (was RE: Review Request 23605: 
CLOUDSTACK-7107: Disabling failed test cases)

Hey Sudha,

Sorry, but I disagree. The purpose of tests should not be to get a 100% pass 
rate. The tests should give an accurate picture of how the tests are doing 
against the current state of the branch being tested. If tests fail, we 
should fix why they fail, and the system should not report an OK in the 
meantime. Doing so is too confusing; we need to be able to rely on the fact 
that if the tests report OK, everything is actually OK.



Cheers,

Hugo


On 21 jul. 2014, at 12:28, Sudha Ponnaganti <sudha.ponnaga...@citrix.com> wrote:

> In the beginning, to get CI up and running, it would be OK to disable this 
> handful of tests while the fixes go in, to achieve a 100% pass rate. When CI 
> runs in production, code changes need to be reverted if there are any "new" 
> failures, to keep the CI pass rate at 100% (a known state that makes CI 
> effective). But we should not just disable a test and move forward in the long run.
>
> This should not be automated and made part of the production CI process.
>
> Thanks
> /Sudha
>
> -----Original Message-----
> From: Santhosh Edukulla [mailto:santhosh.eduku...@citrix.com]
> Sent: Monday, July 21, 2014 3:22 AM
> To: Gaurav Aradhye; Stephen Turner; Hugo Trippaers; 
> dev@cloudstack.apache.org
> Cc: Girish Shilamkar
> Subject: RE: Disabling failed test cases (was RE: Review Request
> 23605: CLOUDSTACK-7107: Disabling failed test cases)
>
> All,
>
> Alex wanted to disable test cases in between CI (continuous integration) 
> runs for the "reason" below for failures. I only provided a way to achieve 
> this using tags, so that it serves a dual purpose: it does not affect the 
> community and it can be used in CI as well, and it does not affect anybody 
> who wants to run all test cases irrespective of tags.
>
> Reason: In CI, the automation kick-starts automatically every 3 hours 
> (configurable), picks up the delta changes, and runs a few checks, including 
> sanity. The idea was to keep the baseline of test cases always passing. Now, 
> between two CI runs, say T1 and T2, if "new" failures are introduced, they 
> are automatically detected along with the new git changes and bugs are 
> logged automatically against those check-ins.
>
> Now, until those bugs get fixed, those tests are disabled, keeping the 
> baseline always passing. The window to fix those failures (either product or 
> test case) through triage was nearly constant and the fixes need to happen 
> soon; the test cases are then re-enabled once fixed and are available again 
> in the next CI run. This was done to isolate the failures between T1 and T2, 
> since the baseline is always clean and passing; otherwise CI runs may 
> accumulate failures and it becomes confusing, over successive runs, which 
> commits introduced the failures.
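> 
> To make the tag idea concrete, here is a minimal sketch (illustrative only; 
> the tag names are hypothetical and the actual Marvin tags may differ) of how 
> a scheduled CI run could select only the always-passing baseline by tag, 
> while a full run without the filter still executes everything:
> 
>     # Sketch only: nose attribute tags let CI select the passing baseline.
>     from nose.plugins.attrib import attr
>     import unittest
> 
>     class TestExample(unittest.TestCase):
> 
>         @attr(tags=["baseline"])        # picked up by the scheduled CI run
>         def test_in_baseline(self):
>             self.assertTrue(True)
> 
>         @attr(tags=["known_failure"])   # left out of CI until the bug is fixed
>         def test_known_failure(self):
>             self.assertTrue(True)
> 
>     # CI run (baseline only): nosetests -a tags=baseline test_example.py
>     # Full run (everything):  nosetests test_example.py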
>
> But it's not a hard and fixed rule; we can discuss a better way as well. 
> This was followed in the 4.4 release in phase 1 of CI; in phase 2 (WIP), if 
> we agree on some other, better solution, then it should definitely be adopted.
>
> Santhosh
> ________________________________________
> From: Gaurav Aradhye [gaurav.arad...@clogeny.com]
> Sent: Monday, July 21, 2014 5:40 AM
> To: Stephen Turner; Hugo Trippaers; dev@cloudstack.apache.org; 
> Santhosh Edukulla
> Cc: Girish Shilamkar
> Subject: Re: Disabling failed test cases (was RE: Review Request
> 23605: CLOUDSTACK-7107: Disabling failed test cases)
>
> Hugo, Stephen,
>
> We have been following this practice as part of the Continuous Integration 
> changes as defined in doc [1]. I personally think that tagging a test case 
> with a BugId is a good idea to map test cases to bugs, but the test case 
> should not be skipped when tagged. We can have a discussion on this and 
> change the process if the majority agrees.
>
> Adding Santhosh.
>
> [1]: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Cloudstack+-+Continuous+Integration
>
> Regards,
> Gaurav
>
>
> On Mon, Jul 21, 2014 at 2:37 PM, Stephen Turner <stephen.tur...@citrix.com> wrote:
> In the case that it's a product bug, wouldn't it be better to keep running 
> the test even if you know it's going to fail? That way, you get a consistent 
> view of the overall pass rate from build to build. If you disable all the 
> tests that are failing, you're going to get a 100% pass rate, but you can't 
> see whether your quality is going up or down.
>
> --
> Stephen Turner
>
>
> -----Original Message-----
> From: Gaurav Aradhye [mailto:nore...@reviews.apache.org] On Behalf Of Gaurav Aradhye
> Sent: 21 July 2014 09:58
> To: Girish Shilamkar
> Cc: Gaurav Aradhye; Hugo Trippaers; cloudstack
> Subject: Re: Review Request 23605: CLOUDSTACK-7107: Disabling failed 
> test cases
>
>
>
>> On July 21, 2014, 1:03 p.m., Hugo Trippaers wrote:
>>> Why would we want to disable test cases that fail? Doesn't this mean we 
>>> need to fix something else so they don't fail anymore?
>
> Hi Hugo,
>
> Whenever we find a test case failing, we create a bug for it, be it a test 
> script issue or a product bug, so that the test case gets associated with a 
> particular bug and it's easy to track in the future why it is failing.
>
> Adding this BugId decorator to a test case skips the test in the run.
>
> Whenever the bug gets fixed, the person who fixed it removes the BugId 
> decorator from the test case so that the test case gets picked up in the 
> next run.
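> 
> For clarity, a rough sketch (illustrative only; the actual implementation in 
> Marvin may differ) of what such a BugId decorator could look like:
> 
>     # Sketch only: skip a test until the linked bug is fixed; removing the
>     # decorator lets the test run again in the next CI cycle.
>     import functools
>     import unittest
> 
>     def BugId(bug_id):
>         def decorator(test_func):
>             @functools.wraps(test_func)
>             def wrapper(*args, **kwargs):
>                 raise unittest.SkipTest("Blocked by %s" % bug_id)
>             return wrapper
>         return decorator
> 
>     class TestPrimaryStorage(unittest.TestCase):
> 
>         @BugId("CLOUDSTACK-7074")   # skipped until the bug is resolved
>         def test_add_primary_storage(self):
>             self.assertTrue(True)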
>
>
> - Gaurav
>
>
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/23605/#review48204
> -----------------------------------------------------------
>
>
> On July 17, 2014, 1:17 p.m., Gaurav Aradhye wrote:
>>
>> -----------------------------------------------------------
>> This is an automatically generated e-mail. To reply, visit:
>> https://reviews.apache.org/r/23605/
>> -----------------------------------------------------------
>>
>> (Updated July 17, 2014, 1:17 p.m.)
>>
>>
>> Review request for cloudstack and Girish Shilamkar.
>>
>>
>> Bugs: CLOUDSTACK-7074 and CLOUDSTACK-7107
>>    https://issues.apache.org/jira/browse/CLOUDSTACK-7074
>>    https://issues.apache.org/jira/browse/CLOUDSTACK-7107
>>
>>
>> Repository: cloudstack-git
>>
>>
>> Description
>> -------
>>
>> Disabling failed test cases on master.
>>
>>
>> Diffs
>> -----
>>
>>  test/integration/smoke/test_primary_storage.py 66aec59 
>> test/integration/smoke/test_vm_life_cycle.py 240ab68
>>
>> Diff: https://reviews.apache.org/r/23605/diff/
>>
>>
>> Testing
>> -------
>>
>>
>> Thanks,
>>
>> Gaurav Aradhye
>>
>>
>
>
