Re: [openstack-dev] [all][code quality] Voting coverage job (-1 if coverage get worse after patch)

2015-04-22 Thread Boris Pavlovic
David,


1. It will either lead to people doing things to game the system or overuse
> of the #no-coverage-check  tag you mentioned.


The job doesn't force people to increase coverage; it just checks that
coverage has not decreased.

Btw, after the latest refactoring we compare absolute numbers of missing
lines, so now we don't need #no-coverage-check at all.


2. It really doesn't tell you too much. A core developer should really be
> looking at the tested use cases to see if they are all there. Line coverage
> and even branch coverage won't tell you that.


It saves a lot of time.

1) I don't want to review patches that are not fully covered by tests (they
just won't be merged in any case).
Checking coverage by eye takes about 2 minutes on average, and I am doing
750 reviews / release.
This automation saves about ~25 hours / release, which can be spent on
analyzing and reviewing the quality of the unit test code.

2) It simplifies the dev process:

I write my code. Before publishing I just run "tox -e cover", which shows
the files in my change that are not fully covered.

3) It reduces CI hardware usage and the number of patch sets

If "tox" runs the coverage job by default, most developers will make it
pass before sending a patch for review.
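
For anyone who wants to experiment with the idea locally, here is a minimal
sketch of the comparison logic using coverage.py's Python API. The data file
names and the threshold are illustrative assumptions, not Rally's actual job:

    # Sketch only: count missing statement lines in two coverage data
    # files (one from master, one from master + patch) and fail when the
    # patch introduces more missing lines than allowed.
    import sys

    import coverage

    def missing_lines(data_file):
        cov = coverage.Coverage(data_file=data_file)
        cov.load()
        total = 0
        for path in cov.get_data().measured_files():
            # analysis2() returns (filename, statements, excluded lines,
            # missing lines, missing lines formatted as a string)
            _, _, _, missing, _ = cov.analysis2(path)
            total += len(missing)
        return total

    ALLOWED_EXTRA = 0  # hypothetical "allowed to introduce" threshold

    master = missing_lines(".coverage.master")   # illustrative file names
    patched = missing_lines(".coverage.patch")
    print("Missing lines in master         : %d" % master)
    print("Missing lines in proposed change: %d" % patched)
    if patched > master + ALLOWED_EXTRA:
        sys.exit("Please write more unit tests, "
                 "we should keep our test coverage :(")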


Best regards,
Boris Pavlovic

On Wed, Apr 22, 2015 at 5:00 PM, David Stanek  wrote:

>
> On Sat, Apr 18, 2015 at 9:30 PM, Boris Pavlovic  wrote:
>
>> Code coverage is one of the most important metrics of overall code
>> quality, especially in the case of Python. It's quite important to ensure
>> that code is covered fully with well-written unit tests.
>>
>> One of the nice things is the coverage job.
>>
>
> I really like the idea of adding the coverage job everywhere so that
> developers can view the results by using a link in Gerrit. I think this
> would make it easier for many.
>
> I don't like the idea of checking that coverage is increased. There are
> many issues with that. The two biggest ones for me are:
>
> 1. It will either lead to people doing things to game the system or
> overuse of the #no-coverage-check  tag you mentioned.
>
> 2. It really doesn't tell you too much. A core developer should really be
> looking at the tested use cases to see if they are all there. Line coverage
> and even branch coverage won't tell you that.
>
>
> --
> David
> blog: http://www.traceback.org
> twitter: http://twitter.com/dstanek
> www: http://dstanek.com
>


Re: [openstack-dev] [all][code quality] Voting coverage job (-1 if coverage get worse after patch)

2015-04-22 Thread David Stanek
On Sat, Apr 18, 2015 at 9:30 PM, Boris Pavlovic  wrote:

> Code coverage is one of the most important metrics of overall code quality,
> especially in the case of Python. It's quite important to ensure that code
> is covered fully with well-written unit tests.
>
> One of the nice things is the coverage job.
>

I really like the idea of adding the coverage job everywhere so that
developers can view the results by using a link in Gerrit. I think this
would make it easier for many.

I don't like the idea of checking that coverage is increased. There are
many issues with that. The two biggest ones for me are:

1. It will either lead to people doing things to game the system or overuse
of the #no-coverage-check  tag you mentioned.

2. It really doesn't tell you too much. A core developer should really be
looking at the tested use cases to see if they are all there. Line coverage
and even branch coverage won't tell you that.


-- 
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek
www: http://dstanek.com


Re: [openstack-dev] [all][code quality] Voting coverage job (-1 if coverage get worse after patch)

2015-04-20 Thread Boris Pavlovic
Ian,



If you were thinking instead to provide coverage *tools* that were easy for
> developers to use,


Hm, it seems like you missed the point. This "gate job" can be run locally
just like the unit tests, via "tox -e cover". That will point you to the
missing lines introduced by your patch.

  As a dev, I would not be terribly interested in finding that I've
> improved overall test coverage from 90.1% to 90.2%


That is not the goal of the job I added. The job checks that your patch
doesn't introduce code that is not covered by unit tests (that is all).


but I might be *very* interested to know that I got 100% decision (or even
> boolean) coverage on the specific lines of the feature I just added by
> running just the unit tests that exercise it.


And this is exactly what "tox -e cover" does, as does the job that runs it
in the gates.

Best regards,
Boris Pavlovic


On Tue, Apr 21, 2015 at 3:28 AM, Ian Wells  wrote:

> On 20 April 2015 at 07:40, Boris Pavlovic  wrote:
>
>> Dan,
>>
>> IMHO, most of the test coverage we have for nova's neutronapi is more
>>> than useless. It's so synthetic that it provides no regression
>>> protection, and often requires significantly more work than the change
>>> that is actually being added. It's a huge maintenance burden with very
>>> little value, IMHO. Good tests for that code would be very valuable of
>>> course, but what is there now is not.
>>> I think there are cases where going from 90 to 91% means adding a ton of
>>> extra spaghetti just to satisfy a bot, which actually adds nothing but
>>> bloat to maintain.
>>
>>
>> Let's not mix the bad unit tests in Nova with the principle that code
>> should be fully covered by well-written unit tests.
>> This big task can be split into 2 smaller tasks:
>> 1) A bot that checks that new code is covered by tests and that we don't
>> introduce regressions
>>
>
> http://en.wikipedia.org/wiki/Code_coverage
>
> You appear to be talking about statement coverage, which is one of the
> weaker coverage metrics.
>
> if a:
>     thing
>
> gets 100% statement coverage if a is true, so I don't need to test when a
> is false (which would be at a minimum decision coverage).
>
> I wonder if the focus is wrong.  Maybe helping devs is better than making
> more gate jobs, for starters; and maybe overall coverage is not a great
> metric when you're changing 100 lines in 100,000.  If you were thinking
> instead to provide coverage *tools* that were easy for developers to use,
> that would be a different question.  As a dev, I would not be terribly
> interested in finding that I've improved overall test coverage from 90.1%
> to 90.2%, but I might be *very* interested to know that I got 100% decision
> (or even boolean) coverage on the specific lines of the feature I just
> added by running just the unit tests that exercise it.
> --
> Ian.
>
>
>


Re: [openstack-dev] [all][code quality] Voting coverage job (-1 if coverage get worse after patch)

2015-04-20 Thread Joshua Harlow
It'd be nice to have something like https://coveralls.io/features
which afaik just reports back on pull requests (and doesn't try to 
enforce much of anything, aka non-voting).


For example: https://github.com/aliles/funcsigs/pull/13

In general it'd be neat if we could more easily hook into these kinds of
github.com integrations (for lack of a better word) somehow, but I'm not
sure any such integration exists (something that translates Gerrit reviews
into a format these systems can understand and post back to?).


Ian Wells wrote:

On 20 April 2015 at 07:40, Boris Pavlovic  wrote:

Dan,

IMHO, most of the test coverage we have for nova's neutronapi is more
than useless. It's so synthetic that it provides no regression
protection, and often requires significantly more work than the change
that is actually being added. It's a huge maintenance burden with very
little value, IMHO. Good tests for that code would be very valuable of
course, but what is there now is not.
I think there are cases where going from 90 to 91% means adding a ton
of extra spaghetti just to satisfy a bot, which actually adds nothing
but bloat to maintain.


Let's not mix the bad unit tests in Nova with the principle that code
should be fully covered by well-written unit tests.
This big task can be split into 2 smaller tasks:
1) A bot that checks that new code is covered by tests and that we
don't introduce regressions


http://en.wikipedia.org/wiki/Code_coverage

You appear to be talking about statement coverage, which is one of the
weaker coverage metrics.

 if a:
     thing

gets 100% statement coverage if a is true, so I don't need to test when
a is false (which would be at a minimum decision coverage).

I wonder if the focus is wrong.  Maybe helping devs is better than
making more gate jobs, for starters; and maybe overall coverage is not a
great metric when you're changing 100 lines in 100,000.  If you were
thinking instead to provide coverage *tools* that were easy for
developers to use, that would be a different question.  As a dev, I
would not be terribly interested in finding that I've improved overall
test coverage from 90.1% to 90.2%, but I might be *very* interested to
know that I got 100% decision (or even boolean) coverage on the specific
lines of the feature I just added by running just the unit tests that
exercise it.
--
Ian.




Re: [openstack-dev] [all][code quality] Voting coverage job (-1 if coverage get worse after patch)

2015-04-20 Thread Ian Wells
On 20 April 2015 at 07:40, Boris Pavlovic  wrote:

> Dan,
>
> IMHO, most of the test coverage we have for nova's neutronapi is more
>> than useless. It's so synthetic that it provides no regression
>> protection, and often requires significantly more work than the change
>> that is actually being added. It's a huge maintenance burden with very
>> little value, IMHO. Good tests for that code would be very valuable of
>> course, but what is there now is not.
>> I think there are cases where going from 90 to 91% means adding a ton of
>> extra spaghetti just to satisfy a bot, which actually adds nothing but
>> bloat to maintain.
>
>
> Let's not mix the bad unit tests in Nova with the principle that code
> should be fully covered by well-written unit tests.
> This big task can be split into 2 smaller tasks:
> 1) A bot that checks that new code is covered by tests and that we don't
> introduce regressions
>

http://en.wikipedia.org/wiki/Code_coverage

You appear to be talking about statement coverage, which is one of the
weaker coverage metrics.

if a:
    thing

gets 100% statement coverage if a is true, so I don't need to test when a
is false (which would be at a minimum decision coverage).
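
To make the distinction concrete, here is a tiny self-contained illustration
(the function and test names are made up for the example):

    def f(a):
        result = 0
        if a:            # one call with a=True executes every statement,
            result = 1   # so statement coverage already reports 100%
        return result

    def test_statement_coverage_only():
        assert f(True) == 1   # 100% statement coverage, decision untested

    def test_decision_coverage():
        assert f(True) == 1
        assert f(False) == 0  # exercises the implicit "else" branch that
                              # statement coverage never asks for

(coverage.py can measure the branch case with the --branch option to
"coverage run".)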

I wonder if the focus is wrong.  Maybe helping devs is better than making
more gate jobs, for starters; and maybe overall coverage is not a great
metric when you're changing 100 lines in 100,000.  If you were thinking
instead to provide coverage *tools* that were easy for developers to use,
that would be a different question.  As a dev, I would not be terribly
interested in finding that I've improved overall test coverage from 90.1%
to 90.2%, but I might be *very* interested to know that I got 100% decision
(or even boolean) coverage on the specific lines of the feature I just
added by running just the unit tests that exercise it.
-- 
Ian.


Re: [openstack-dev] [all][code quality] Voting coverage job (-1 if coverage get worse after patch)

2015-04-20 Thread Boris Pavlovic
Morgan,

Thank you for your input. I improved the coverage job in this patch:
https://review.openstack.org/#/c/175557/1

Now:
* It is based on missing lines and not coverage percentage.
* It shows nice messages and coverage diffs:

   Allowed to introduce missing lines : 8
   Missing lines in master            : 649
   Missing lines in proposed change   : 669

   Name                                Stmts   Miss   Branch   BrMiss     Cover
   @@ -43 +43 @@
   -rally/benchmark/processing/utils      52      0       30        1   98.780%
   +rally/benchmark/processing/utils      52     20       30       15   57.317%
   @@ -190 +190 @@
   -TOTAL                               9400    649     2284      394   91.073%
   +TOTAL                               9400    669     2284      408   90.782%

   Please write more unit tests, we should keep our test coverage :(


Best regards,
Boris Pavlovic

On Mon, Apr 20, 2015 at 9:11 PM, Hayes, Graham  wrote:

> On 20/04/15 18:01, Clint Byrum wrote:
> > Excerpts from Boris Pavlovic's message of 2015-04-18 18:30:02 -0700:
> >> Hi stackers,
> >>
> >> Code coverage is one of the most important metrics of overall code
> >> quality, especially in the case of Python. It's quite important to
> >> ensure that code is covered fully with well-written unit tests.
> >>
> >> One of the nice things is the coverage job.
> >>
> >> In Rally we are running it against every check, which allows us to get
> >> detailed information about coverage before merging a patch:
> >> http://logs.openstack.org/84/175084/1/check/rally-coverage/c61d5e1/cover/
> >>
> >> This helped the Rally core team to automate checking that new/changed
> >> code is covered by unit tests, and we raised unit test coverage from
> >> ~78% to almost 91%.
> >>
> >> But it produces a few issues:
> >> 1) >9k nitpicking - core reviewers have to put a -1 if something is not
> >> covered by unit tests
> >> 2) core team scaling issues - core team members spend a lot of time
> >> just checking that the whole code is covered by unit tests and leaving
> >> messages like "this should be covered by a unit test"
> >> 3) not friendly community - it's not nice to get a -1 on your code from
> >> somebody who is just asking you to write unit tests
> >> 4) coverage regressions - sometimes we accidentally accept patches that
> >> reduce coverage
> >>
> >> To resolve these issues I improved the coverage job in the Rally
> >> project a bit, and now it compares master vs. master + patch coverage.
> >> If the new coverage is less than master's, the job is marked as -1.
> >>
> >> Here is the patch for the job enhancement:
> >> https://review.openstack.org/#/c/174645/
> >>
> >> Here is the coverage job in action:
> >> patch https://review.openstack.org/#/c/174677/
> >> job message:
> >> http://logs.openstack.org/77/174677/4/check/rally-coverage/ba49c90/console.html#_2015-04-17_15_57_17_695
> >>
> >
> > The link to the important line was key, because without it, just clicking
> > through from the review was incomprehensible to me. Can I suggest some
> > whitespace or bordering so we can see where the error is easily?
> >
> > Anyway, interesting thoughts from everyone. I have to agree with those
> > that say this isn't reliable enough to make it vote. Non-voting would be
> > interesting though, if it gave a clear score difference, and a diff of
> > the two coverage reports. I think this is more useful as an automated
> > pointer to how things probably should be, but sometimes it's entirely
> > o-k to regress this number a few points.
> >
> > Also graphing this over time in a post-commit job seems like a
> > no-brainer.
>
>
> Designate has started doing this - it is still a WIP as we continue
> tweaking settings, but we have a dashboard here -
>
> http://sonar.designate-ci.com/
>


Re: [openstack-dev] [all][code quality] Voting coverage job (-1 if coverage get worse after patch)

2015-04-20 Thread Hayes, Graham
On 20/04/15 18:01, Clint Byrum wrote:
> Excerpts from Boris Pavlovic's message of 2015-04-18 18:30:02 -0700:
>> Hi stackers,
>>
>> Code coverage is one of the most important metrics of overall code
>> quality, especially in the case of Python. It's quite important to ensure
>> that code is covered fully with well-written unit tests.
>>
>> One of the nice things is the coverage job.
>>
>> In Rally we are running it against every check, which allows us to get
>> detailed information about coverage before merging a patch:
>> http://logs.openstack.org/84/175084/1/check/rally-coverage/c61d5e1/cover/
>>
>> This helped the Rally core team to automate checking that new/changed code
>> is covered by unit tests, and we raised unit test coverage from ~78% to
>> almost 91%.
>>
>> But it produces a few issues:
>> 1) >9k nitpicking - core reviewers have to put a -1 if something is not
>> covered by unit tests
>> 2) core team scaling issues - core team members spend a lot of time just
>> checking that the whole code is covered by unit tests and leaving messages
>> like "this should be covered by a unit test"
>> 3) not friendly community - it's not nice to get a -1 on your code from
>> somebody who is just asking you to write unit tests
>> 4) coverage regressions - sometimes we accidentally accept patches that
>> reduce coverage
>>
>> To resolve these issues I improved the coverage job in the Rally project a
>> bit, and now it compares master vs. master + patch coverage. If the new
>> coverage is less than master's, the job is marked as -1.
>>
>> Here is the patch for the job enhancement:
>> https://review.openstack.org/#/c/174645/
>>
>> Here is the coverage job in action:
>> patch https://review.openstack.org/#/c/174677/
>> job message:
>> http://logs.openstack.org/77/174677/4/check/rally-coverage/ba49c90/console.html#_2015-04-17_15_57_17_695
>>
> 
> The link to the important line was key, because without it, just clicking
> through from the review was incomprehensible to me. Can I suggest some
> whitespace or bordering so we can see where the error is easily?
> 
> Anyway, interesting thoughts from everyone. I have to agree with those
> that say this isn't reliable enough to make it vote. Non-voting would be
> interesting though, if it gave a clear score difference, and a diff of
> the two coverage reports. I think this is more useful as an automated
> pointer to how things probably should be, but sometimes it's entirely
> o-k to regress this number a few points.
> 
> Also graphing this over time in a post-commit job seems like a no-brainer.


Designate has started doing this - it is still a WIP as we continue
tweaking settings, but we have a dashboard here -

http://sonar.designate-ci.com/

> 


Re: [openstack-dev] [all][code quality] Voting coverage job (-1 if coverage get worse after patch)

2015-04-20 Thread Boris Pavlovic
Clint,


Anyway, interesting thoughts from everyone. I have to agree with those
> that say this isn't reliable enough to make it vote. Non-voting would be
> interesting though, if it gave a clear score difference, and a diff of
> the two coverage reports. I think this is more useful as an automated
> pointer to how things probably should be, but sometimes it's entirely
> o-k to regress this number a few points.


Diffs between the reports are almost ready.


Best regards,
Boris Pavlovic


On Mon, Apr 20, 2015 at 7:59 PM, Clint Byrum  wrote:

> Excerpts from Boris Pavlovic's message of 2015-04-18 18:30:02 -0700:
> > Hi stackers,
> >
> > Code coverage is one of the most important metrics of overall code
> > quality, especially in the case of Python. It's quite important to ensure
> > that code is covered fully with well-written unit tests.
> >
> > One of the nice things is the coverage job.
> >
> > In Rally we are running it against every check, which allows us to get
> > detailed information about coverage before merging a patch:
> > http://logs.openstack.org/84/175084/1/check/rally-coverage/c61d5e1/cover/
> >
> > This helped the Rally core team to automate checking that new/changed
> > code is covered by unit tests, and we raised unit test coverage from ~78%
> > to almost 91%.
> >
> > But it produces a few issues:
> > 1) >9k nitpicking - core reviewers have to put a -1 if something is not
> > covered by unit tests
> > 2) core team scaling issues - core team members spend a lot of time just
> > checking that the whole code is covered by unit tests and leaving
> > messages like "this should be covered by a unit test"
> > 3) not friendly community - it's not nice to get a -1 on your code from
> > somebody who is just asking you to write unit tests
> > 4) coverage regressions - sometimes we accidentally accept patches that
> > reduce coverage
> >
> > To resolve these issues I improved the coverage job in the Rally project
> > a bit, and now it compares master vs. master + patch coverage. If the new
> > coverage is less than master's, the job is marked as -1.
> >
> > Here is the patch for the job enhancement:
> > https://review.openstack.org/#/c/174645/
> >
> > Here is the coverage job in action:
> > patch https://review.openstack.org/#/c/174677/
> > job message:
> > http://logs.openstack.org/77/174677/4/check/rally-coverage/ba49c90/console.html#_2015-04-17_15_57_17_695
> >
>
> The link to the important line was key, because without it, just clicking
> through from the review was incomprehensible to me. Can I suggest some
> whitespace or bordering so we can see where the error is easily?
>
> Anyway, interesting thoughts from everyone. I have to agree with those
> that say this isn't reliable enough to make it vote. Non-voting would be
> interesting though, if it gave a clear score difference, and a diff of
> the two coverage reports. I think this is more useful as an automated
> pointer to how things probably should be, but sometimes it's entirely
> o-k to regress this number a few points.
>
> Also graphing this over time in a post-commit job seems like a no-brainer.
>


Re: [openstack-dev] [all][code quality] Voting coverage job (-1 if coverage get worse after patch)

2015-04-20 Thread Clint Byrum
Excerpts from Boris Pavlovic's message of 2015-04-18 18:30:02 -0700:
> Hi stackers,
> 
> Code coverage is one of the most important metrics of overall code quality,
> especially in the case of Python. It's quite important to ensure that code
> is covered fully with well-written unit tests.
>
> One of the nice things is the coverage job.
>
> In Rally we are running it against every check, which allows us to get
> detailed information about coverage before merging a patch:
> http://logs.openstack.org/84/175084/1/check/rally-coverage/c61d5e1/cover/
>
> This helped the Rally core team to automate checking that new/changed code
> is covered by unit tests, and we raised unit test coverage from ~78% to
> almost 91%.
>
> But it produces a few issues:
> 1) >9k nitpicking - core reviewers have to put a -1 if something is not
> covered by unit tests
> 2) core team scaling issues - core team members spend a lot of time just
> checking that the whole code is covered by unit tests and leaving messages
> like "this should be covered by a unit test"
> 3) not friendly community - it's not nice to get a -1 on your code from
> somebody who is just asking you to write unit tests
> 4) coverage regressions - sometimes we accidentally accept patches that
> reduce coverage
>
> To resolve these issues I improved the coverage job in the Rally project a
> bit, and now it compares master vs. master + patch coverage. If the new
> coverage is less than master's, the job is marked as -1.
>
> Here is the patch for the job enhancement:
> https://review.openstack.org/#/c/174645/
>
> Here is the coverage job in action:
> patch https://review.openstack.org/#/c/174677/
> job message:
> http://logs.openstack.org/77/174677/4/check/rally-coverage/ba49c90/console.html#_2015-04-17_15_57_17_695
> 

The link to the important line was key, because without it, just clicking
through from the review was incomprehensible to me. Can I suggest some
whitespace or bordering so we can see where the error is easily?

Anyway, interesting thoughts from everyone. I have to agree with those
that say this isn't reliable enough to make it vote. Non-voting would be
interesting though, if it gave a clear score difference, and a diff of
the two coverage reports. I think this is more useful as an automated
pointer to how things probably should be, but sometimes it's entirely
o-k to regress this number a few points.

Also graphing this over time in a post-commit job seems like a no-brainer.



Re: [openstack-dev] [all][code quality] Voting coverage job (-1 if coverage get worse after patch)

2015-04-20 Thread Mike Perez
On 09:30 Apr 20, Jay Pipes wrote:
> On 04/20/2015 07:13 AM, Sean Dague wrote:
> >On 04/18/2015 09:30 PM, Boris Pavlovic wrote:
> >>Hi stackers,
> >>
> >>Code coverage is one of the most important metrics of overall code
> >>quality, especially in the case of Python. It's quite important to ensure
> >>that code is covered fully with well-written unit tests.
> >>
> >>One of the nice things is the coverage job.
> >>
> >>In Rally we are running it against every check, which allows us to get
> >>detailed information about coverage before merging a patch:
> >>http://logs.openstack.org/84/175084/1/check/rally-coverage/c61d5e1/cover/
> >>
> >>This helped the Rally core team to automate checking that new/changed
> >>code is covered by unit tests, and we raised unit test coverage from ~78%
> >>to almost 91%.
> >>
> >>But it produces a few issues:
> >>1) >9k nitpicking - core reviewers have to put a -1 if something is not
> >>covered by unit tests
> >>2) core team scaling issues - core team members spend a lot of time just
> >>checking that the whole code is covered by unit tests and leaving
> >>messages like "this should be covered by a unit test"
> >>3) not friendly community - it's not nice to get a -1 on your code from
> >>somebody who is just asking you to write unit tests
> >>4) coverage regressions - sometimes we accidentally accept patches that
> >>reduce coverage
> >>
> >>To resolve these issues I improved the coverage job in the Rally project
> >>a bit, and now it compares master vs. master + patch coverage. If the new
> >>coverage is less than master's, the job is marked as -1.
> >>
> >>Here is the patch for the job enhancement:
> >>https://review.openstack.org/#/c/174645/
> >>
> >>Here is the coverage job in action:
> >>patch https://review.openstack.org/#/c/174677/
> >>job message:
> >>http://logs.openstack.org/77/174677/4/check/rally-coverage/ba49c90/console.html#_2015-04-17_15_57_17_695
> >
> >Honestly, I'm suspicious of approaches like this. While 90% coverage is
> >clearly better than 0% coverage, it's not clear that 91% coverage is
> >always better than 90% coverage. Especially when you talk about larger
> >and more complex code bases.
> 
> Well, I think there are very few cases where *less* coverage is better.
> 
> >I actually firmly feel that #3 is wrong. I think it's a lot better to
> >have an early conversation with people about unit tests that need to be
> >written when they don't submit any. Because I think it's a lot less
> >friendly to have someone iterate 10 patches to figure out how to pass a
> >bot's idea of good tests, and then have a reviewer say it's wrong.
> 
> This I completely agree with. Asking for unit tests is a common
> thing, and if done early in the review process, is not a
> "non-friendly" thing. It's just matter of fact. And if the comment is
> given with some example unit test code -- or a pointer to a unit test
> that could be used as an example -- even better. It grows the
> submitter.

I also agree talking with people early about this is fine. I deal with new
reviewers often, and it has never been a negative experience. Just don't be
a jerk when asking, and follow up soon.

-- 
Mike Perez



Re: [openstack-dev] [all][code quality] Voting coverage job (-1 if coverage get worse after patch)

2015-04-20 Thread Dan Smith
> Let's not mix the bad unit tests in Nova with the fact that code should
> be fully covered by well written unit tests. 

I'm not using bad tests in nova to justify not having coverage testing.
I'm saying that the argument that "more coverage is always better" has
some real-life counter examples.

Also, "a test that says coverage increased might lead to lazy +2s" is
very similar to "this code made it into the tree because it had a ton of
(bad) tests that made it look well-tested", which we already know is true :)

> P.S. Unit tests are the first criterion of code quality: if code is hard
> to cover with unit tests, it is badly written and should be refactored.

Totally agree. Draw what conclusions you will about my feelings on the
quality of the code that is tested by those tests :)

--Dan



Re: [openstack-dev] [all][code quality] Voting coverage job (-1 if coverage get worse after patch)

2015-04-20 Thread Boris Pavlovic
Dan,

IMHO, most of the test coverage we have for nova's neutronapi is more
> than useless. It's so synthetic that it provides no regression
> protection, and often requires significantly more work than the change
> that is actually being added. It's a huge maintenance burden with very
> little value, IMHO. Good tests for that code would be very valuable of
> course, but what is there now is not.
> I think there are cases where going from 90 to 91% means adding a ton of
> extra spaghetti just to satisfy a bot, which actually adds nothing but
> bloat to maintain.


Let's not mix the bad unit tests in Nova with the principle that code
should be fully covered by well-written unit tests.
This big task can be split into 2 smaller tasks:
1) A bot that checks that new code is covered by tests and that we don't
introduce regressions
2) Core and other reviewers ensure that tests are well written.

P.S. Unit tests are the first criterion of code quality: if code is hard
to cover with unit tests, it is badly written and should be refactored.

Best regards,
Boris Pavlovic

On Mon, Apr 20, 2015 at 5:26 PM, Dan Smith  wrote:

> > Well, I think there are very few cases where *less* coverage is better.
>
> IMHO, most of the test coverage we have for nova's neutronapi is more
> than useless. It's so synthetic that it provides no regression
> protection, and often requires significantly more work than the change
> that is actually being added. It's a huge maintenance burden with very
> little value, IMHO. Good tests for that code would be very valuable of
> course, but what is there now is not.
>
> I think there are cases where going from 90 to 91% means adding a ton of
> extra spaghetti just to satisfy a bot, which actually adds nothing but
> bloat to maintain.
>
> > This I completely agree with. Asking for unit tests is a common thing,
> > and if done early in the review process, is not a "non-friendly" thing.
> > It's just a matter of fact. And if the comment is given with some example
> > unit test code -- or a pointer to a unit test that could be used as an
> > example -- even better. It grows the submitter.
>
> +1
>
> > I certainly am not opposed to introducing coverage regression jobs that
> > produce a history of coverage. I would not personally support them being
> > a blocking/gate job, though.
>
> As Gordon said elsewhere in this thread, I'm not even sure I want to see
> it reporting as PASS/FAIL. It sounds like this would end up being like
> our pylint job, which was utterly confused by a lot of things and
> started to be something that wasn't even reliable enough to use as an
> advisory test.
>
> Recording the data for long-term analysis would be excellent though.
> It'd be nice to see that we increased coverage over a cycle.
>
> --Dan
>


Re: [openstack-dev] [all][code quality] Voting coverage job (-1 if coverage get worse after patch)

2015-04-20 Thread Boris Pavlovic
Gordon,

+1. i don't think this would be a good idea as a non-voting job either as
> it can/will lead to lazy reviews.


That is like saying let's remove unit/functional/pep8/pylint/"any other
testing" because it will lead to lazy reviews.

Best regards,
Boris Pavlovic





On Mon, Apr 20, 2015 at 5:14 PM, gordon chung  wrote:

> 
> > Date: Mon, 20 Apr 2015 07:13:31 -0400
> > From: s...@dague.net
> > To: openstack-dev@lists.openstack.org
> > Subject: Re: [openstack-dev] [all][code quality] Voting coverage job (-1
> > if coverage get worse after patch)
> >
> > On 04/18/2015 09:30 PM, Boris Pavlovic wrote:
> >> Hi stackers,
> >>
> >> Code coverage is one of the most important metrics of overall code
> >> quality, especially in the case of Python. It's quite important to
> >> ensure that code is covered fully with well-written unit tests.
> >>
> >> One of the nice things is the coverage job.
> >>
> >> In Rally we are running it against every check, which allows us to get
> >> detailed information about coverage before merging a patch:
> >> http://logs.openstack.org/84/175084/1/check/rally-coverage/c61d5e1/cover/
> >>
> >> This helped the Rally core team to automate checking that new/changed
> >> code is covered by unit tests, and we raised unit test coverage from
> >> ~78% to almost 91%.
> >>
> >> But it produces a few issues:
> >> 1) >9k nitpicking - core reviewers have to put a -1 if something is not
> >> covered by unit tests
> >> 2) core team scaling issues - core team members spend a lot of time
> >> just checking that the whole code is covered by unit tests and leaving
> >> messages like "this should be covered by a unit test"
> >> 3) not friendly community - it's not nice to get a -1 on your code from
> >> somebody who is just asking you to write unit tests
> >> 4) coverage regressions - sometimes we accidentally accept patches that
> >> reduce coverage
> >>
> >> To resolve these issues I improved the coverage job in the Rally
> >> project a bit, and now it compares master vs. master + patch coverage.
> >> If the new coverage is less than master's, the job is marked as -1.
> >>
> >> Here is the patch for the job enhancement:
> >> https://review.openstack.org/#/c/174645/
> >>
> >> Here is the coverage job in action:
> >> patch https://review.openstack.org/#/c/174677/
> >> job message:
> >> http://logs.openstack.org/77/174677/4/check/rally-coverage/ba49c90/console.html#_2015-04-17_15_57_17_695
> >
> > Honestly, I'm suspicious of approaches like this. While 90% coverage is
> > clearly better than 0% coverage, it's not clear that 91% coverage is
> > always better than 90% coverage. Especially when you talk about larger
> > and more complex code bases.
> >
> > I actually firmly feel that #3 is wrong. I think it's a lot better to
> > have an early conversation with people about unit tests that need to be
> > written when they don't submit any. Because I think it's a lot less
> > friendly to have someone iterate 10 patches to figure out how to pass a
> > bot's idea of good tests, and then have a reviewer say it's wrong.
> >
> > Honestly, coverage for projects seems to me to be more of an analog
> > measure that would be good to graph over time and make sure it's not
> > regressing, but personally I think the brownian motion of individual
> > patches seems like it's not a great idea to gate on every single patch.
> > I personally would be -1 for adding this to projects that I have +2 on.
> >
> > -Sean
> >
> > --
> > Sean Dague
> > http://dague.net
>
> +1. i don't think this would be a good idea as a non-voting job either as
> it can/will lead to lazy reviews.
>
> cheers,
> gord


Re: [openstack-dev] [all][code quality] Voting coverage job (-1 if coverage get worse after patch)

2015-04-20 Thread Dan Smith
> Well, I think there are very few cases where *less* coverage is better.

IMHO, most of the test coverage we have for nova's neutronapi is more
than useless. It's so synthetic that it provides no regression
protection, and often requires significantly more work than the change
that is actually being added. It's a huge maintenance burden with very
little value, IMHO. Good tests for that code would be very valuable of
course, but what is there now is not.

I think there are cases where going from 90 to 91% means adding a ton of
extra spaghetti just to satisfy a bot, which actually adds nothing but
bloat to maintain.

> This I completely agree with. Asking for unit tests is a common thing,
> and if done early in the review process, is not a "non-friendly" thing.
> It's just a matter of fact. And if the comment is given with some example
> unit test code -- or a pointer to a unit test that could be used as an
> example -- even better. It grows the submitter.

+1

> I certainly am not opposed to introducing coverage regression jobs that
> produce a history of coverage. I would not personally support them being
> a blocking/gate job, though.

As Gordon said elsewhere in this thread, I'm not even sure I want to see
it reporting as PASS/FAIL. It sounds like this would end up being like
our pylint job, which was utterly confused by a lot of things and
started to be something that wasn't even reliable enough to use as an
advisory test.

Recording the data for long-term analysis would be excellent though.
It'd be nice to see that we increased coverage over a cycle.

--Dan



Re: [openstack-dev] [all][code quality] Voting coverage job (-1 if coverage get worse after patch)

2015-04-20 Thread gordon chung

> Date: Mon, 20 Apr 2015 07:13:31 -0400
> From: s...@dague.net
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [all][code quality] Voting coverage job (-1 if 
> coverage get worse after patch)
>
> On 04/18/2015 09:30 PM, Boris Pavlovic wrote:
>> Hi stackers,
>>
>> Code coverage is one of the most important metrics of overall code
>> quality, especially in the case of Python. It's quite important to ensure
>> that code is covered fully with well-written unit tests.
>>
>> One of the nice things is the coverage job.
>>
>> In Rally we are running it against every check, which allows us to get
>> detailed information about coverage before merging a patch:
>> http://logs.openstack.org/84/175084/1/check/rally-coverage/c61d5e1/cover/
>>
>> This helped the Rally core team to automate checking that new/changed code
>> is covered by unit tests, and we raised unit test coverage from ~78% to
>> almost 91%.
>>
>> But it produces a few issues:
>> 1) >9k nitpicking - core reviewers have to put a -1 if something is not
>> covered by unit tests
>> 2) core team scaling issues - core team members spend a lot of time just
>> checking that the whole code is covered by unit tests and leaving messages
>> like "this should be covered by a unit test"
>> 3) not friendly community - it's not nice to get a -1 on your code from
>> somebody who is just asking you to write unit tests
>> 4) coverage regressions - sometimes we accidentally accept patches that
>> reduce coverage
>>
>> To resolve these issues I improved the coverage job in the Rally project a
>> bit, and now it compares master vs. master + patch coverage. If the new
>> coverage is less than master's, the job is marked as -1.
>>
>> Here is the patch for the job enhancement:
>> https://review.openstack.org/#/c/174645/
>>
>> Here is the coverage job in action:
>> patch https://review.openstack.org/#/c/174677/
>> job message:
>> http://logs.openstack.org/77/174677/4/check/rally-coverage/ba49c90/console.html#_2015-04-17_15_57_17_695
>
> Honestly, I'm suspicious of approaches like this. While 90% coverage is
> clearly better than 0% coverage, it's not clear that 91% coverage is
> always better than 90% coverage. Especially when you talk about larger
> and more complex code bases.
>
> I actually firmly feel that #3 is wrong. I think it's a lot better to
> have an early conversation with people about unit tests that need to be
> written when they don't submit any. Because I think it's a lot less
> friendly to have someone iterate 10 patches to figure out how to pass a
> bot's idea of good tests, and then have a reviewer say it's wrong.
>
> Honestly, coverage for projects seems to me to be more of an analog
> measure that would be good to graph over time and make sure it's not
> regression, but personally I think the brownian motion of individual
> patches seems like it's not a great idea to gate on every single patch.
> I personally would be -1 for adding this to projects that I have +2 on.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net

+1. i don't think this would be a good idea as a non-voting job either as it 
can/will lead to lazy reviews.

cheers,
gord

  


Re: [openstack-dev] [all][code quality] Voting coverage job (-1 if coverage get worse after patch)

2015-04-20 Thread Jay Pipes

On 04/20/2015 07:13 AM, Sean Dague wrote:
> On 04/18/2015 09:30 PM, Boris Pavlovic wrote:
>> Hi stackers,
>>
>> Code coverage is one of the most important metrics of overall code
>> quality, especially in the case of Python. It's quite important to ensure
>> that code is covered fully with well-written unit tests.
>>
>> One of the nice things is the coverage job.
>>
>> In Rally we are running it against every check, which allows us to get
>> detailed information about coverage before merging a patch:
>> http://logs.openstack.org/84/175084/1/check/rally-coverage/c61d5e1/cover/
>>
>> This helped the Rally core team to automate checking that new/changed
>> code is covered by unit tests, and we raised unit test coverage from ~78%
>> to almost 91%.
>>
>> But it produces a few issues:
>> 1) >9k nitpicking - core reviewers have to put a -1 if something is not
>> covered by unit tests
>> 2) core team scaling issues - core team members spend a lot of time just
>> checking that the whole code is covered by unit tests and leaving
>> messages like "this should be covered by a unit test"
>> 3) not friendly community - it's not nice to get a -1 on your code from
>> somebody who is just asking you to write unit tests
>> 4) coverage regressions - sometimes we accidentally accept patches that
>> reduce coverage
>>
>> To resolve these issues I improved the coverage job in the Rally project
>> a bit, and now it compares master vs. master + patch coverage. If the new
>> coverage is less than master's, the job is marked as -1.
>>
>> Here is the patch for the job enhancement:
>> https://review.openstack.org/#/c/174645/
>>
>> Here is the coverage job in action:
>> patch https://review.openstack.org/#/c/174677/
>> job message:
>> http://logs.openstack.org/77/174677/4/check/rally-coverage/ba49c90/console.html#_2015-04-17_15_57_17_695
>
> Honestly, I'm suspicious of approaches like this. While 90% coverage is
> clearly better than 0% coverage, it's not clear that 91% coverage is
> always better than 90% coverage. Especially when you talk about larger
> and more complex code bases.

Well, I think there are very few cases where *less* coverage is better.

> I actually firmly feel that #3 is wrong. I think it's a lot better to
> have an early conversation with people about unit tests that need to be
> written when they don't submit any. Because I think it's a lot less
> friendly to have someone iterate 10 patches to figure out how to pass a
> bot's idea of good tests, and then have a reviewer say it's wrong.

This I completely agree with. Asking for unit tests is a common thing,
and if done early in the review process, is not a "non-friendly" thing.
It's just a matter of fact. And if the comment is given with some example
unit test code -- or a pointer to a unit test that could be used as an
example -- even better. It grows the submitter.

> Honestly, coverage for projects seems to me to be more of an analog
> measure that would be good to graph over time and make sure it's not
> regressing, but personally I think the brownian motion of individual
> patches seems like it's not a great idea to gate on every single patch.
> I personally would be -1 for adding this to projects that I have +2 on.

I certainly am not opposed to introducing coverage regression jobs that
produce a history of coverage. I would not personally support them being
a blocking/gate job, though.


Best,
-jay



Re: [openstack-dev] [all][code quality] Voting coverage job (-1 if coverage get worse after patch)

2015-04-20 Thread Sean Dague
On 04/18/2015 09:30 PM, Boris Pavlovic wrote:
> Hi stackers, 
> 
> Code coverage is one of the most important metrics of overall code
> quality, especially in the case of Python. It's quite important to ensure
> that code is covered fully with well-written unit tests.
>
> One of the nice things is the coverage job.
>
> In Rally we are running it against every check, which allows us to get
> detailed information about coverage before merging a patch:
> http://logs.openstack.org/84/175084/1/check/rally-coverage/c61d5e1/cover/
>
> This helped the Rally core team to automate checking that new/changed code
> is covered by unit tests, and we raised unit test coverage from ~78% to
> almost 91%.
>
> But it produces a few issues:
> 1) >9k nitpicking - core reviewers have to put a -1 if something is not
> covered by unit tests
> 2) core team scaling issues - core team members spend a lot of time just
> checking that the whole code is covered by unit tests and leaving messages
> like "this should be covered by a unit test"
> 3) not friendly community - it's not nice to get a -1 on your code from
> somebody who is just asking you to write unit tests
> 4) coverage regressions - sometimes we accidentally accept patches that
> reduce coverage
>
> To resolve these issues I improved the coverage job in the Rally project a
> bit, and now it compares master vs. master + patch coverage. If the new
> coverage is less than master's, the job is marked as -1.
>
> Here is the patch for the job enhancement:
> https://review.openstack.org/#/c/174645/
>
> Here is the coverage job in action:
> patch https://review.openstack.org/#/c/174677/
> job message:
> http://logs.openstack.org/77/174677/4/check/rally-coverage/ba49c90/console.html#_2015-04-17_15_57_17_695

Honestly, I'm suspicious of approaches like this. While 90% coverage is
clearly better than 0% coverage, it's not clear that 91% coverage is
always better than 90% coverage. Especially when you talk about larger
and more complex code bases.

I actually firmly feel that #3 is wrong. I think it's a lot better to
have an early conversation with people about unit tests that need to be
written when they don't submit any. Because I think it's a lot less
friendly to have someone iterate 10 patches to figure out how to pass a
bot's idea of good tests, and then have a reviewer say it's wrong.

Honestly, coverage for projects seems to me to be more of an analog
measure that would be good to graph over time and make sure it's not
regressing, but personally I think the brownian motion of individual
patches seems like it's not a great idea to gate on every single patch.
I personally would be -1 for adding this to projects that I have +2 on.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [all][code quality] Voting coverage job (-1 if coverage get worse after patch)

2015-04-19 Thread Boris Pavlovic
Morgan,


Good catch. This can easily be fixed if we add a special tag to the commit
message, e.g. #no-coverage-check


Best regards,
Boris Pavlovic

On Sun, Apr 19, 2015 at 9:33 AM, Morgan Fainberg 
wrote:

> This is an interesting idea, but just a note on implementation:
>
> It is absolutely possible to reduce the % of coverage without losing (or
> even gaining) coverage of the code base. This can occur if deprecated code
> is removed and no new unit tests are added. Overall % of code covered by
> tests can decline since covered code has been removed with its unit tests,
> and non-covered code remains the same. In reality, coverage has not changed
> (or has improved). It is simply a limitation in going purely by % of code
> covered.
>
> I suggest that this move towards checking classes/methods, making sure
> that coverage for those does not change between the two runs. If a given
> method or class has 100% coverage prior to a patch set, and in the new
> revision it drops to less than 100%, it should at least flag that there
> was a change in coverage. If a class, function, or method is removed, it
> won't be in the new coverage report, and should not impact the check.
>
> --Morgan
>
> Sent via mobile
>
> On Apr 18, 2015, at 18:30, Boris Pavlovic  wrote:
>
> Hi stackers,
>
> Code coverage is one of the most important metrics of overall code quality,
> especially in the case of Python. It's quite important to ensure that code
> is covered fully with well-written unit tests.
>
> One of the nice things is the coverage job.
>
> In Rally we are running it against every check, which allows us to get
> detailed information about coverage before merging a patch:
> http://logs.openstack.org/84/175084/1/check/rally-coverage/c61d5e1/cover/
>
> This helped the Rally core team to automate checking that new/changed code
> is covered by unit tests, and we raised unit test coverage from ~78% to
> almost 91%.
>
> But it produces a few issues:
> 1) >9k nitpicking - core reviewers have to put a -1 if something is not
> covered by unit tests
> 2) core team scaling issues - core team members spend a lot of time just
> checking that the whole code is covered by unit tests and leaving messages
> like "this should be covered by a unit test"
> 3) not friendly community - it's not nice to get a -1 on your code from
> somebody who is just asking you to write unit tests
> 4) coverage regressions - sometimes we accidentally accept patches that
> reduce coverage
>
> To resolve these issues I improved the coverage job in the Rally project a
> bit, and now it compares master vs. master + patch coverage. If the new
> coverage is less than master's, the job is marked as -1.
>
> Here is the patch for the job enhancement:
> https://review.openstack.org/#/c/174645/
>
> Here is the coverage job in action:
> patch https://review.openstack.org/#/c/174677/
> job message:
> http://logs.openstack.org/77/174677/4/check/rally-coverage/ba49c90/console.html#_2015-04-17_15_57_17_695
>
>
> Best regards,
> Boris Pavlovic
>


Re: [openstack-dev] [all][code quality] Voting coverage job (-1 if coverage get worse after patch)

2015-04-18 Thread Morgan Fainberg
This is an interesting idea, but just a note on implementation:

It is absolutely possible to reduce the % of coverage without losing (or even 
gaining) coverage of the code base. This can occur if deprecated code is 
removed and no new unit tests are added. Overall % of code covered by tests can 
decline since covered code has been removed with its unit tests, and 
non-covered code remains the same. In reality, coverage has not changed (or has 
improved). It is simply a limitation in going purely by % of code covered. 

I suggest that this move towards checking classes/methods, making sure that
coverage for those does not change between the two runs. If a given method
or class has 100% coverage prior to a patch set, and in the new revision it
drops to less than 100%, it should at least flag that there was a change in
coverage. If a class, function, or method is removed, it won't be in the new
coverage report, and should not impact the check.
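
A rough sketch of that idea at file granularity (the same approach extends
to classes and methods); the data file names are illustrative and this is
not a worked-out implementation:

    import coverage

    def per_file_missing(data_file):
        cov = coverage.Coverage(data_file=data_file)
        cov.load()
        # map each measured file to its set of missing line numbers;
        # analysis2() returns (filename, statements, excluded, missing,
        # missing formatted as a string)
        return {path: set(cov.analysis2(path)[3])
                for path in cov.get_data().measured_files()}

    before = per_file_missing(".coverage.master")
    after = per_file_missing(".coverage.patch")
    # only files present in both runs are compared, so removing a module
    # together with its tests does not register as a coverage drop
    for path in sorted(set(before) & set(after)):
        newly_missing = after[path] - before[path]
        if newly_missing:
            print("%s: newly uncovered lines: %s"
                  % (path, sorted(newly_missing)))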

--Morgan

Sent via mobile

> On Apr 18, 2015, at 18:30, Boris Pavlovic  wrote:
> 
> Hi stackers, 
> 
> Code coverage is one of the most important metrics of overall code quality,
> especially in the case of Python. It's quite important to ensure that code
> is covered fully with well-written unit tests.
>
> One of the nice things is the coverage job.
>
> In Rally we are running it against every check, which allows us to get
> detailed information about coverage before merging a patch:
> http://logs.openstack.org/84/175084/1/check/rally-coverage/c61d5e1/cover/
>
> This helped the Rally core team to automate checking that new/changed code
> is covered by unit tests, and we raised unit test coverage from ~78% to
> almost 91%.
>
> But it produces a few issues:
> 1) >9k nitpicking - core reviewers have to put a -1 if something is not
> covered by unit tests
> 2) core team scaling issues - core team members spend a lot of time just
> checking that the whole code is covered by unit tests and leaving messages
> like "this should be covered by a unit test"
> 3) not friendly community - it's not nice to get a -1 on your code from
> somebody who is just asking you to write unit tests
> 4) coverage regressions - sometimes we accidentally accept patches that
> reduce coverage
>
> To resolve these issues I improved the coverage job in the Rally project a
> bit, and now it compares master vs. master + patch coverage. If the new
> coverage is less than master's, the job is marked as -1.
>
> Here is the patch for the job enhancement:
> https://review.openstack.org/#/c/174645/
>
> Here is the coverage job in action:
> patch https://review.openstack.org/#/c/174677/
> job message:
> http://logs.openstack.org/77/174677/4/check/rally-coverage/ba49c90/console.html#_2015-04-17_15_57_17_695
> 
> 
> Best regards,
> Boris Pavlovic 


[openstack-dev] [all][code quality] Voting coverage job (-1 if coverage get worse after patch)

2015-04-18 Thread Boris Pavlovic
Hi stackers,

Code coverage is one of the most important metrics of overall code quality,
especially in the case of Python. It's quite important to ensure that code
is covered fully with well-written unit tests.

One of the nice things is the coverage job.

In Rally we are running it against every check, which allows us to get
detailed information about coverage before merging a patch:
http://logs.openstack.org/84/175084/1/check/rally-coverage/c61d5e1/cover/

This helped the Rally core team to automate checking that new/changed code
is covered by unit tests, and we raised unit test coverage from ~78% to
almost 91%.

But it produces a few issues:
1) >9k nitpicking - core reviewers have to put a -1 if something is not
covered by unit tests
2) core team scaling issues - core team members spend a lot of time just
checking that the whole code is covered by unit tests and leaving messages
like "this should be covered by a unit test"
3) not friendly community - it's not nice to get a -1 on your code from
somebody who is just asking you to write unit tests
4) coverage regressions - sometimes we accidentally accept patches that
reduce coverage

To resolve these issues I improved the coverage job in the Rally project a
bit, and now it compares master vs. master + patch coverage. If the new
coverage is less than master's, the job is marked as -1.

Here is the patch for the job enhancement:
https://review.openstack.org/#/c/174645/

Here is the coverage job in action:
patch https://review.openstack.org/#/c/174677/
job message:
http://logs.openstack.org/77/174677/4/check/rally-coverage/ba49c90/console.html#_2015-04-17_15_57_17_695
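
The comparison at the heart of the job can be sketched in a few lines of
Python with coverage.py's API (the data file names are illustrative; the
real job is in the review linked above):

    import io
    import sys

    import coverage

    def percent_covered(data_file):
        cov = coverage.Coverage(data_file=data_file)
        cov.load()
        # report() returns the total coverage percentage as a float;
        # the textual report itself is discarded here
        return cov.report(file=io.StringIO())

    master = percent_covered(".coverage.master")
    patched = percent_covered(".coverage.patch")
    if patched < master:
        sys.exit("-1: coverage dropped from %.3f%% to %.3f%%"
                 % (master, patched))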


Best regards,
Boris Pavlovic