On Tue, Apr 11, 2017 at 2:49 PM, Brian Warner <war...@lothar.com> wrote:

>
> Tahoe-LAFS devchat 11-Apr-2017
>
> * warner saw intermittent test coverage (flagged by codecov on the last
>   few PRs that landed)
>   * will file a ticket with the lines that are sometimes covered
>   * we should figure out what's going on, add tests to cover them
>     properly
>   * new PRs that don't touch code (e.g. docs) should (obviously) never
>     cause coverage regressions
>   * it'd be nice if there was a tool to show which test files provided
>     which coverage
>     * so we can make sure unit-test files get full coverage on
>       individual modules
>     * then integration-like tests are useful too, but we don't depend
>       upon them for coverage
>


Here is the very ancient, abandoned piece of software I mentioned, which can
tell you which code was executed by which test methods:

https://launchpad.net/merit

Note that it works by instrumenting a pyunit-style TestResult, so it will
probably only work with pyunit-style tests.

It's also so ancient that I don't know whether it works at all anymore.  But
it illustrates one possible solution to the problem, at least.
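To make the technique concrete, here is a minimal sketch (my own, not
merit's actual implementation) of the same idea: a pyunit-style TestResult
subclass that uses the standard library's sys.settrace to record which
lines each test method executes.  A real tool would filter the recorded
lines down to the project's own source files; this sketch records
everything, unittest internals included.

```python
import sys
import unittest


class PerTestCoverageResult(unittest.TestResult):
    """Record the set of (filename, lineno) pairs executed by each test."""

    def __init__(self):
        super().__init__()
        self.per_test = {}  # test id -> set of (filename, lineno)

    def startTest(self, test):
        super().startTest(test)
        self._lines = set()
        sys.settrace(self._trace)  # trace all frames created from here on

    def _trace(self, frame, event, arg):
        if event == "line":
            self._lines.add((frame.f_code.co_filename, frame.f_lineno))
        return self._trace  # keep receiving line events in this frame

    def stopTest(self, test):
        sys.settrace(None)  # stop tracing before doing our bookkeeping
        self.per_test[test.id()] = self._lines
        super().stopTest(test)


# Usage: run one test and inspect what it touched.
class Demo(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(1 + 1, 2)


result = PerTestCoverageResult()
Demo("test_addition").run(result)
# result.per_test now maps the test's id to the lines it executed
```

The coverage.py API (Coverage.start/stop plus CoverageData) would be a
sturdier basis for the measurement itself, but sys.settrace keeps the
sketch dependency-free.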

With a properly working tool of this sort, one could compare reports from
multiple test runs of the same code and see which code is not consistently
covered by the same tests.  Removing such cases would presumably make
coverage-reporting tools like codecov more useful, by eliminating the noise
that comes from coverage varying between runs without any underlying code
changes.
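The comparison step is simple once per-test reports exist.  Assuming a
report shaped as a mapping from test id to a set of (filename, lineno)
pairs (a hypothetical format, chosen for this sketch), diffing two runs is
just a symmetric set difference per test:

```python
def inconsistently_covered(report_a, report_b):
    """Return {test_id: lines covered in one run but not the other}."""
    flaky = {}
    for test_id in report_a.keys() & report_b.keys():
        differing = report_a[test_id] ^ report_b[test_id]  # symmetric diff
        if differing:
            flaky[test_id] = differing
    return flaky


# Example: the test covered line 7 of foo.py in the first run only,
# so that line is flagged as inconsistently covered.
run1 = {"test_x": {("foo.py", 5), ("foo.py", 7)}}
run2 = {"test_x": {("foo.py", 5)}}
flagged = inconsistently_covered(run1, run2)
```

Any line that shows up in the output is a candidate source of the
coverage noise described above.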

Jean-Paul
_______________________________________________
tahoe-dev mailing list
tahoe-dev@tahoe-lafs.org
https://tahoe-lafs.org/cgi-bin/mailman/listinfo/tahoe-dev
