David,
> 1. It will either lead to people doing things to game the system or overuse
> of the #no-coverage-check tag you mentioned.
The job doesn't force people to increase coverage; it just checks that
coverage was not decreased.
Btw, according to the latest refactoring, we are using absolute values of
missing lines rather than coverage percentages.
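The policy described above can be sketched as a comparison of absolute missing-line counts (the function name, numbers, and threshold are illustrative, not the actual job's code):

```python
# Illustrative sketch, not the real gate job: compare absolute counts of
# uncovered ("missing") lines instead of coverage percentages, so that
# deleting covered code can never flip the result.
def coverage_regressed(master_missing, patch_missing, allowed_new=0):
    """True if the patch leaves more uncovered lines than allowed."""
    return patch_missing - master_missing > allowed_new

# master had 120 uncovered lines; the patch leaves 125; 8 new are allowed
print(coverage_regressed(120, 125, allowed_new=8))  # False -> job passes
```

Because the check is on absolute counts, a patch that only deletes well-tested code does not trip it, which a percentage check might.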
On Sat, Apr 18, 2015 at 9:30 PM, Boris Pavlovic wrote:
> Code coverage is one of the most important metrics of overall code quality,
> especially in the case of Python. It's quite important to ensure that code is
> covered fully with well written unit tests.
>
> One of the nice things is the coverage job.
>
Ian,
> If you were thinking instead to provide coverage *tools* that were easy for
> developers to use,
Hm, it seems like you missed the point. This "gate job" can be run locally,
just like the unit tests: "tox -e cover". That will point you to the missing
lines introduced by your patch.
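To make the "missing lines" idea concrete, here is a minimal, self-contained sketch of pulling the "Missing" column out of a coverage.py-style text report; the sample report text and the helper function are made up for illustration:

```python
# Sketch only: parse a `coverage report -m` style table and map each
# file with uncovered code to its missing line numbers/ranges.
import re

SAMPLE_REPORT = """\
Name            Stmts   Miss  Cover   Missing
---------------------------------------------
rally/api.py      120      3    97%   44, 71-72
rally/utils.py     80      0   100%
"""

def missing_lines(report):
    """Map file name -> list of missing line/range strings."""
    result = {}
    for line in report.splitlines():
        m = re.match(r"(\S+\.py)\s+\d+\s+(\d+)\s+\d+%\s*(.*)", line)
        if m and int(m.group(2)) > 0:
            result[m.group(1)] = [s.strip() for s in m.group(3).split(",")]
    return result

print(missing_lines(SAMPLE_REPORT))  # {'rally/api.py': ['44', '71-72']}
```

A job could run this over the master report and the patched report and diff the two mappings per file.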
As a dev, I would
It'd be nice to have something like https://coveralls.io/features
which afaik just reports back on pull requests (and doesn't try to
enforce much of anything, aka non-voting).
For example: https://github.com/aliles/funcsigs/pull/13
In general it'd be neat if we could more easily interconnect
On 20 April 2015 at 07:40, Boris Pavlovic wrote:
> Dan,
>
>> IMHO, most of the test coverage we have for nova's neutronapi is more
>> than useless. It's so synthetic that it provides no regression
>> protection, and often requires significantly more work than the change
>> that is actually being added.
Morgan,
Thank you for your input. I improved the coverage job in this patch:
https://review.openstack.org/#/c/175557/1
Now:
* It is based on missing lines, not coverage percentage.
* It shows nice messages and coverage diffs:
Allowed to introduce missing lines : 8
Missing lines in master
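A hypothetical sketch of the comparison such a job could perform (this is not the code under review at that link; names, data, and the threshold are illustrative): diff the per-line "missing" sets for master and the patch, then apply the allowed threshold.

```python
# Illustrative sketch: compute a coverage diff between master and patch.
# Each set contains (filename, line_number) pairs that tests never hit.
def coverage_verdict(master_missing, patch_missing, allowed=8):
    introduced = patch_missing - master_missing   # newly uncovered lines
    fixed = master_missing - patch_missing        # lines now covered
    passed = len(introduced) - len(fixed) <= allowed
    return introduced, fixed, passed

master = {("api.py", 44), ("api.py", 71)}
patch = {("api.py", 44), ("utils.py", 10)}
introduced, fixed, passed = coverage_verdict(master, patch)
print(sorted(introduced), sorted(fixed), passed)
# [('utils.py', 10)] [('api.py', 71)] True
```

Reporting `introduced` and `fixed` separately is what makes the "nice messages and coverage diffs" possible, rather than a bare pass/fail.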
On 20/04/15 18:01, Clint Byrum wrote:
> Excerpts from Boris Pavlovic's message of 2015-04-18 18:30:02 -0700:
>> Hi stackers,
>>
>> Code coverage is one of the most important metrics of overall code quality,
>> especially in the case of Python. It's quite important to ensure that code is
>> covered fully with well written unit tests.
Clint,
> Anyway, interesting thoughts from everyone. I have to agree with those
> that say this isn't reliable enough to make it vote. Non-voting would be
> interesting though, if it gave a clear score difference, and a diff of
> the two coverage reports. I think this is more useful as an automated
Excerpts from Boris Pavlovic's message of 2015-04-18 18:30:02 -0700:
> Hi stackers,
>
> Code coverage is one of the most important metrics of overall code quality,
> especially in the case of Python. It's quite important to ensure that code is
> covered fully with well written unit tests.
>
> One of the nice things is the coverage job.
On 09:30 Apr 20, Jay Pipes wrote:
> On 04/20/2015 07:13 AM, Sean Dague wrote:
> >On 04/18/2015 09:30 PM, Boris Pavlovic wrote:
> >>Hi stackers,
> >>
> >>Code coverage is one of the most important metrics of overall code
> >>quality, especially in the case of Python. It's quite important to ensure
> >>that code is covered fully with well written unit tests.
> Let's not mix the bad unit tests in Nova with the fact that code should
> be fully covered by well written unit tests.
I'm not using bad tests in nova to justify not having coverage testing.
I'm saying that the argument that "more coverage is always better" has
some real-life counterexamples.
Dan,
> IMHO, most of the test coverage we have for nova's neutronapi is more
> than useless. It's so synthetic that it provides no regression
> protection, and often requires significantly more work than the change
> that is actually being added. It's a huge maintenance burden with very
> little value.
On Mon, Apr 20, 2015 at 5:14 PM, gordon chung wrote:
>
> > Date: Mon, 20 Apr 2015 07:13:31 -0400
> > From: s...@dague.net
> > To: openstack-dev@lists.openstack.org
> > Subject: Re: [openstack-dev] [all][code quality] Voting coverage job (-1
> > if coverage get worse after patch)
> Well, I think there are very few cases where *less* coverage is better.
IMHO, most of the test coverage we have for nova's neutronapi is more
than useless. It's so synthetic that it provides no regression
protection, and often requires significantly more work than the change
that is actually being added.
> Date: Mon, 20 Apr 2015 07:13:31 -0400
> From: s...@dague.net
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [all][code quality] Voting coverage job (-1 if
> coverage get worse after patch)
>
> On 04/18/2015 09:30 PM, Boris Pavlovic wrote:
On 04/20/2015 07:13 AM, Sean Dague wrote:
> On 04/18/2015 09:30 PM, Boris Pavlovic wrote:
>> Hi stackers,
>>
>> Code coverage is one of the most important metrics of overall code
>> quality, especially in the case of Python. It's quite important to ensure
>> that code is covered fully with well written unit tests.
>>
>> One of the nice things is the coverage job.
On 04/18/2015 09:30 PM, Boris Pavlovic wrote:
> Hi stackers,
>
> Code coverage is one of the most important metrics of overall code
> quality, especially in the case of Python. It's quite important to ensure
> that code is covered fully with well written unit tests.
>
> One of the nice things is the coverage job.
Morgan,
Good catch. This can be easily fixed if we add a special tag in the commit
message, e.g. #no-coverage-check.
Best regards,
Boris Pavlovic
On Sun, Apr 19, 2015 at 9:33 AM, Morgan Fainberg
wrote:
> This is an interesting idea, but just a note on implementation:
>
> It is absolutely possible to reduce the % of coverage without losing (or
> even gaining) coverage of the code base.
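The opt-out tag proposed above could be checked with something as small as this (illustrative only; the tag name comes from the thread, the function is not the actual job code):

```python
# Illustrative only: skip the coverage check when the commit message
# carries the opt-out tag discussed in the thread.
NO_COVERAGE_TAG = "#no-coverage-check"

def should_skip_coverage_check(commit_message):
    """True if the committer explicitly opted out of the coverage gate."""
    return NO_COVERAGE_TAG in commit_message

print(should_skip_coverage_check(
    "Remove deprecated v1 API\n\n#no-coverage-check"))  # True
```

As David notes earlier in the thread, the risk with such a tag is overuse, so reviewers would still need to police when it appears.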
This is an interesting idea, but just a note on implementation:
It is absolutely possible to reduce the % of coverage without losing (or even
gaining) coverage of the code base. This can occur if deprecated code is
removed and no new unit tests are added. Overall % of code covered by tests can
drop in that case even though no coverage of the remaining code was lost.
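A worked example of the effect described above (the numbers are invented): deleting well-covered deprecated code lowers the overall percentage even though no coverage of the remaining code is lost.

```python
# Invented numbers: 800 of 1000 lines covered before the cleanup.
covered, total = 800, 1000
before = covered / total          # 0.80

# Remove 300 lines of deprecated code that were fully covered,
# and add no new unit tests.
covered, total = covered - 300, total - 300
after = covered / total           # 500 / 700, about 0.714

print(before, round(after, 3))    # 0.8 0.714
```

This is exactly why a percentage-based vote would wrongly penalize a patch that only deletes dead code, and why the thread moves toward absolute missing-line counts instead.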
Hi stackers,
Code coverage is one of the most important metrics of overall code quality,
especially in the case of Python. It's quite important to ensure that code is
covered fully with well written unit tests.
One of the nice things is the coverage job.
In Rally we are running it against every check which