It'd be nice to have something like https://coveralls.io/features which, afaik, just reports coverage results back on pull requests (and doesn't try to enforce much of anything, aka non-voting).

For example: https://github.com/aliles/funcsigs/pull/13

In general it'd be neat if we could more easily hook into these kinds of github.com integrations (for lack of a better word) somehow, but I'm not sure anything like that exists for us (something that translates gerrit reviews into a format these systems can understand and post back to?).
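
As a straw man, the posting-back half could be tiny (a sketch only: the endpoint is Gerrit's documented "set review" REST call, but the change id, credentials, and message below are all made up):

     # Post a non-voting coverage summary as a Gerrit review comment.
     import json
     import requests

     GERRIT = 'https://review.openstack.org'
     change, revision = 'I0123456789abcdef', 'current'  # hypothetical

     review = {
         'message': 'Coverage: 90.1% -> 90.2% (non-voting)',
         'labels': {},  # no labels set, so this is comment-only
     }
     resp = requests.post(
         '%s/a/changes/%s/revisions/%s/review' % (GERRIT, change, revision),
         auth=('coverage-bot', 'http-password'),  # hypothetical account
         headers={'Content-Type': 'application/json'},
         data=json.dumps(review),
     )
     resp.raise_for_status()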

Ian Wells wrote:
On 20 April 2015 at 07:40, Boris Pavlovic <bo...@pavlovic.me> wrote:

    Dan,

        IMHO, most of the test coverage we have for nova's neutronapi is
        more than useless. It's so synthetic that it provides no
        regression protection, and often requires significantly more
        work than the change that is actually being added. It's a huge
        maintenance burden with very little value, IMHO. Good tests for
        that code would be very valuable of course, but what is there
        now is not.

        I think there are cases where going from 90 to 91% means adding
        a ton of extra spaghetti just to satisfy a bot, which actually
        adds nothing but bloat to maintain.


    Let's not mix the bad unit tests in Nova with the principle that
    code should be fully covered by well-written unit tests.
    This big task can be split into 2 smaller tasks:
    1) A bot that checks that new code is covered by tests and that we
    don't introduce regressions
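
    As a rough sketch of that check (assuming coverage.py's Cobertura
    XML from 'coverage xml'; the file paths here are invented):

        # Fail when the patched tree lowers overall line coverage.
        # 'coverage xml' writes a Cobertura-style report whose root
        # element carries an overall 'line-rate' attribute.
        import sys
        import xml.etree.ElementTree as ET

        def line_rate(path):
            return float(ET.parse(path).getroot().get('line-rate'))

        base = line_rate('base/coverage.xml')        # invented path
        patched = line_rate('patched/coverage.xml')  # invented path
        if patched < base:
            print('coverage dropped: %.2f%% -> %.2f%%'
                  % (base * 100, patched * 100))
            sys.exit(1)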


http://en.wikipedia.org/wiki/Code_coverage

You appear to be talking about statement coverage, which is one of the
weaker coverage metrics.

     if a:
         do_thing()

gets 100% statement coverage from a single test where a is true, so
statement coverage never obliges me to test the case where a is false
(covering both outcomes would be, at a minimum, decision coverage).
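
Spelled out as a toy (nothing here is real nova code):

     # A single test reaches 100% statement coverage of guarded(),
     # yet the a-false path is never exercised.
     def guarded(a):
         result = []
         if a:
             result.append('thing')
         return result

     def test_guarded_true():
         assert guarded(True) == ['thing']

     # Decision coverage would additionally demand something like:
     # def test_guarded_false():
     #     assert guarded(False) == []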

I wonder if the focus is wrong.  Maybe helping devs is better than
making more gate jobs, for starters; and maybe overall coverage is not a
great metric when you're changing 100 lines in 100,000.  If you were
thinking instead to provide coverage *tools* that were easy for
developers to use, that would be a different question.  As a dev, I
would not be terribly interested in finding that I've improved overall
test coverage from 90.1% to 90.2%, but I might be *very* interested to
know that I got 100% decision (or even boolean) coverage on the specific
lines of the feature I just added by running just the unit tests that
exercise it.
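
Most of that tooling exists today; a sketch using coverage.py's API
(recent versions; the include pattern and test selection are
illustrative, not nova's actual layout):

     # Measure branch coverage for just the module being touched.
     import coverage

     cov = coverage.Coverage(branch=True,
                             include=['nova/network/neutronv2/*'])
     cov.start()
     # ... run only the unit tests that exercise the new feature ...
     cov.stop()
     cov.save()
     cov.report(show_missing=True)  # per-line report, touched module only

And something like diff-cover already handles the "coverage of just the
lines my diff touches" part by combining a coverage XML report with a
git diff.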
--
Ian.

