Hi,
IMO, there are two scenarios in which users need test coverage:
1. When new features or major optimizations are submitted, the PR owners may
care about the code coverage of the new code.
2. When we deliberately work on the existing code to increase Pulsar's
overall test coverage.
And there are two scenarios in which users don't need test coverage:
1. The PR is only a bug fix.
2. The PR is a small, local optimization.
In fact, most Pulsar PRs right now are bug fixes or small optimizations, so
their owners do not need to consider test coverage.

So, my suggestion is that we provide a button to run the coverage job and
download the coverage report, and put this button on the CI detail page, so
that people who really need it can get the test coverage data without
affecting the other CI runs.

Sincerely,
Xiangying

On Sat, Sep 10, 2022 at 5:13 AM asn <yaa...@gmail.com> wrote:

> Hi,
>
> IMO, code coverage is not just a number; 50% or 70% means nothing by itself
> except to make us feel more confident. So what is really important? I think
> it is the *coverage* itself: we need to see where we still need to write
> tests based on the result, because only with this data do we know what to
> cover. Furthermore, we can then expect every PR to come with high-quality
> tests.
>
> On Sat, Sep 10, 2022 at 1:37 AM tison <wander4...@gmail.com> wrote:
>
> > > Please provide data point its impact to CI stability.
> >
> > If you take a look at https://github.com/apache/pulsar/pull/17382, it
> > changes pulsar-ci.yml to run commands for generating the codecov report.
> > I'm unsure of the impact of a new step; it may not affect things too much
> > since there's already a runner taking the whole job.
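> >
> > Concretely, the kind of step I have in mind looks roughly like the sketch
> > below. It is only an illustration of the shape of such a change, not the
> > exact diff in that PR, and it assumes JaCoCo via Maven plus the Codecov
> > uploader action:
> >
> > # Illustrative sketch only, not the actual content of PR-17382.
> > # Assumes the tests already ran with the JaCoCo agent attached.
> > - name: Generate coverage report
> >   run: mvn -B -ntp jacoco:report
> > # Upload the report; the Codecov uploader auto-detects JaCoCo XML files.
> > - name: Upload coverage to Codecov
> >   uses: codecov/codecov-action@v3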
> >
> > For the rest, I wrote:
> >
> > > I read the output in your proposal as simply:
> > > > The report will serve as additional input for the reviewers. The
> > > > requester is expected to explain any significant negative impact.
> > > This works for me as a nonmandatory helper.
> >
> > Feelings can be biased. No objection to the action.
> >
> > Best,
> > tison.
> >
> >
> > On Sat, Sep 10, 2022 at 1:26 AM Lin Zhao <lin.z...@streamnative.io.invalid> wrote:
> >
> > > Hi Tison,
> > >
> > > Thanks for your input. I agree that the community should focus on the
> > > priorities regarding CI that you mentioned. At the same time, I'm having
> > > a hard time understanding the negative impact that you suggested from
> > > this change.
> > >
> > > 1. To my knowledge, code coverage calculation adds little overhead to
> > > CI. Please provide a data point on its impact to CI stability.
> > > 2. Code coverage isn't completely accurate, and yet it's a valuable data
> > > point. The goal is not to reach 100%, but to prevent coverage from
> > > becoming worse. To me this change has only benefits and no drawbacks.
> > > 3. I don't agree with your sentiments about code coverage. There's a
> > > material difference between 50% and 70%, and we are at the low end of
> > > 50%. *We will not add a commit gate on coverage changes.* The report
> > > serves as an additional data point for the approvers.
> > >
> > > Lin
> > >
> > >
> > > On Fri, Sep 9, 2022 at 9:51 AM tison <wander4...@gmail.com> wrote:
> > >
> > > > Hi Lin,
> > > >
> > > > Thanks for starting this discussion!
> > > >
> > > > As long as it takes a different resource set from current CI tasks,
> > > > I'm +0 as commented on PR-17382. I hardly read the report.
> > > >
> > > > I read the output in your proposal as simply:
> > > >
> > > > > The report will serve as additional input for the reviewers. The
> > > > > requester is expected to explain any significant negative impact.
> > > >
> > > > This works for me as a nonmandatory helper.
> > > >
> > > > Thoughts on the baseline:
> > > >
> > > > 1. I don't like code coverage numbers because, in my experience, they
> > > > are _never_ precise. And chasing 100% test coverage is unnecessary.
> > > > 2. That said, if test coverage is below 50%, it _is_ a signal to me
> > > > that we suffer from too few tests.
> > > > 3. Once we're over 50%, whether coverage is 50% or 70% looks the same
> > > > to me.
> > > >
> > > > And our tests suffer from:
> > > >
> > > > 1. Heavyweight. We're now working on resolving CI congestion. Many
> > > > "unit tests" are actually integration tests with a mock Pulsar service
> > > > etc., whose setup/teardown takes over ten seconds, sometimes tens of
> > > > seconds.
> > > > 2. Brittle. Too many mocks and heavy use of PowerMock make tests
> > > > brittle. Mockito's advice is to never mock classes you don't own, but
> > > > it seems nobody cares. We mock the ES and ZK clients and so on.
> > > >
> > > > I don't want to turn this thread into a general discussion about
> > > > tests. But when it comes to improving tests, I want to share the most
> > > > urgent items here.
> > > >
> > > > Best,
> > > > tison.
> > > >
> > > >
> > > > On Sat, Sep 10, 2022 at 12:26 AM Lin Zhao <lin.z...@streamnative.io.invalid> wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > I'd like to start a discussion about turning on the Codecov report
> > > > > for PRs to master, to show each PR's impact on unit test coverage.
> > > > > The previous discussion is on
> > > > > https://github.com/apache/pulsar/pull/17382.
> > > > >
> > > > > Proposal:
> > > > > 1. Unit test coverage will be added to the CI pipeline and reported
> > > > > to the PR page.
> > > > > Sample report:
> > > > >
> > > > > @@            Coverage Diff            @@
> > > > > ##             master   #17382   +/-   ##
> > > > > =========================================
> > > > >   Coverage          ?   32.10%
> > > > >   Complexity        ?     4141
> > > > > =========================================
> > > > >   Files             ?      387
> > > > >   Lines             ?    42806
> > > > >   Branches          ?     4420
> > > > > =========================================
> > > > >   Hits              ?    13741
> > > > >   Misses            ?    27032
> > > > >   Partials          ?     2033
> > > > >
> > > > > 2. The report will serve as additional input for the reviewers. The
> > > > > requester is expected to explain any significant negative impact.
> > > > >
> > > > > Why?
> > > > >
> > > > >
> > > > >    1. The existing code coverage for Pulsar is very poor, at just
> > > > > above 50%. A reasonable expectation is 90% for libraries and 70 or
> > > > > 80% for the broker. We are at 60% and 50%.
> > > > >    2. The coverage report would prevent coverage from getting worse
> > > > > and bring the conversation to the table.
> > > > >    3. Even though code coverage has its limitations in assessing
> > > > > test coverage, it's the only viable tool.
> > > > >
> > > > >
> > > > >
> > > > > Thoughts?
> > > > >
> > > >
> > >
> >
>
>
> --
> yaalsn
>
