Coverage runs with gcc can run in parallel. With clang, not so much... CC=gcc 
is your friend...
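
In case someone wants to wire that into a job, here is a minimal sketch (plain Python, so it can sit in a CI script) of kicking off a parallel coverage run with gcc; the "test-cov" target and the TEST_JOBS knob are assumptions from memory of the VPP build help, so please check "make help" in your tree for the exact names:

#!/usr/bin/env python3
"""Sketch: run the VPP test suite with coverage using gcc, so the test
cases can run in parallel (clang coverage effectively runs serially).
"make test-cov" and TEST_JOBS are assumptions - verify against "make help".
"""
import os
import subprocess

env = dict(os.environ,
           CC="gcc",          # gcc coverage tolerates parallel test runs
           TEST_JOBS="auto")  # let the test framework pick the parallelism
subprocess.run(["make", "test-cov"], env=env, check=True)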

D.

From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Andrew Yourtchenko
Sent: Thursday, June 18, 2020 4:25 PM
To: Balaji Venkatraman (balajiv) <bala...@cisco.com>
Cc: Neale Ranns (nranns) <nra...@cisco.com>; vpp-dev <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] VPP API CRC compatibility check process in checkstyle 
merged and active

Hi Balaji,

Yeah that was what I was thinking, though weekly ain’t good enough - one would 
have to run a coverage report before and after and ensure it doesn’t drop.

But it’s only one point, and it’s also not a given that the commit with the api 
change/addition contains all the code for that new api version - almost the 
opposite, in my experience...

The best case would of course be to ensure that *every* commit has a 
non-decreasing code coverage value, and to trigger some kind of alert if it 
drops.... That would fulfil the requirements from the api standpoint 
automatically, and also automatically nudge improvements in the overall code 
coverage...
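
Something along these lines, perhaps - a minimal sketch assuming lcov-style summaries captured before and after the change; the file names are hypothetical:

#!/usr/bin/env python3
"""Sketch of a per-commit coverage gate: compare two lcov --summary dumps
("before.summary" / "after.summary" are hypothetical names) and fail if
the line-coverage percentage drops."""
import re
import sys


def line_coverage(path):
    """Extract the "lines......: NN.N%" figure from an lcov summary."""
    with open(path) as f:
        m = re.search(r"lines\.+:\s*([0-9.]+)%", f.read())
    if not m:
        sys.exit("no line-coverage figure found in %s" % path)
    return float(m.group(1))


before = line_coverage("before.summary")
after = line_coverage("after.summary")
if after < before:
    sys.exit("coverage dropped: %.1f%% -> %.1f%%" % (before, after))
print("coverage OK: %.1f%% -> %.1f%%" % (before, after))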

I vaguely remember hearing that code coverage can’t run the test cases in 
parallel - is that right?

—a



On 18 Jun 2020, at 19:04, Balaji Venkatraman (balajiv) 
<bala...@cisco.com> wrote:

Hi Andrew,

Just a few comments regarding coverage.

We could use the coverage report (which we currently run on a weekly basis) as a 
baseline and monitor for incremental increases when a versioning change occurs. 
If there were a way to check that the UT for _v2 covers the ‘new/modified’ code 
and, if possible, add the coverage data as part of the commit criteria, that 
would be ideal. Until then, we could manually check that the coverage shows the 
_v2 code being touched by the new test added for it before it is approved.

Just a suggestion!

--
Regards,
Balaji.


From: <vpp-dev@lists.fd.io> on behalf of Andrew Yourtchenko <ayour...@gmail.com>
Date: Thursday, June 18, 2020 at 8:58 AM
To: "Neale Ranns (nranns)" <nra...@cisco.com<mailto:nra...@cisco.com>>
Cc: vpp-dev <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] VPP API CRC compatibility check process in checkstyle 
merged and active

Hi Neale,



On 18 Jun 2020, at 17:11, Neale Ranns (nranns) 
<nra...@cisco.com> wrote:
Hi Andrew,
A couple of questions:

Absolutely! That’s how we improve it! Thanks a lot for the questions! Replies 
inline:




Firstly, about unit testing aka make test. This is the salient passage in your 
guide:
  "foo_message_v2 is tested in "make test" to the same extent as the 
foo_message"
IMHO "to the same extent" implies everywhere v1 is used v2 should now be used 
in its place. One would hope that in most cases a simple find and replace 
through all test cases would do the job. However, once one has created such a 
fork and verified (presumably through some objective measure like lcov) that it 
provides the same extent of coverage, what becomes of it? V1 and V2 APIs must 
co-exist for some time, so how do we continue to run the original v1 tests and 
the v2 fork?

For most of the practical use cases the _v2 will be a trivial change compared 
to _v1 (e.g. a field change, etc.), and it would be implemented by the v1 
handler calling the v2 handler, so one can start by adding tests for v2 that 
touch just the new/changed functionality; in that case the tests calling v1 
will “count” against the v2 coverage without test duplication.


https://gerrit.fd.io/r/c/vpp/+/27586 is a fresh example of just this approach.
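
To make it concrete, a minimal sketch of what such a _v2-only test could look like in the “make test” framework - foo_enable_disable_v2 and its new_flag field are made-up names standing in for the real message:

import unittest

from framework import VppTestCase, VppTestRunner


class TestFooV2(VppTestCase):
    """Exercise only the behaviour added in _v2; the existing _v1 tests
    keep covering the shared handler path.
    (foo_enable_disable_v2 / new_flag are hypothetical names.)"""

    def test_foo_v2_new_flag(self):
        # the generated PAPI binding exposes each .api message as a
        # method on self.vapi, keyed by the message name
        reply = self.vapi.foo_enable_disable_v2(enable=1, new_flag=1)
        self.assertEqual(reply.retval, 0)


if __name__ == '__main__':
    unittest.main(testRunner=VppTestRunner)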

I discussed this with Ole, and I tried to make a stricter and more concise 
description of an API change here:

https://wiki.fd.io/view/VPP/ApiChangeProcess#Tooling

So I would say we can explicitly say “the tests need to be converted to use the 
new API” either at the moment of “productizing” the new API or at the deletion 
of the old API. And yeah, the idea is that we could eventually run automatic 
code coverage tests specifically at those points to ensure the coverage doesn’t 
drop (or that it monotonically increases :)

I am not sure there is a good way to test the “code coverage for an API” per 
se, since none of the tests exercise only one API - the before/after overall 
comparison should be good enough?

Given that between any two releases multiple APIs may go through a version 
upgrade, there will be many such forks to manage.

I think it should be just one per message at most? (If one uses the 
“in-progress” transition phase for new messages - in fact, we are pondering 
that it might be a good idea to also enforce that via the tool, which would add 
an explicit “yes, this is ready” phase and avoid “accidental production 
status”.)



Additionally, are we also going to test all combinations of messages and their 
versions, e.g. foo_v2 with bar_v2?

I think the best judgement still applies. If you have foo_v1 and bar_v1 which 
are related and are replaced by foo_v2 and bar_v2, their deprecations would 
probably be synced, and the same would apply to their use by consumers. So 
either “v1 and v1” or “v2 and v2”.

Again - the logic behind all of this is to allow a user sitting on release X, 
not using any deprecated APIs, to painlessly upgrade to a pre-X+1 master branch 
or the X+1 release, so they can keep their wheels turning *and* have time to 
fix the now-deprecated APIs that they use.

As for a commitment to “any version with any version” functionality - I think 
we can hold off on that commitment until we see how well the weaker promise 
works in practice.

What do you think ?




Secondly, what's the process for determining the initial categorization of 
existing APIs?

Basically, we shipped all of the APIs in the releases - so anything is fair 
game to be production.

Given that some of the APIs are actually not used by anyone yet and need some 
more work (like IKEv2), the plan is to have a one-month grace period to 
“deproductize” those APIs:

https://wiki.fd.io/view/VPP/ApiChangeProcess#An_in-progress_API_accidentally_marked_as_.22production.22

This comes with a little bit of overhead, but it gives good visibility for the 
consumers, if there are any, to react.

We will also keep this “noisy deproductize” process in the future to handle 
one-off accidents (of which we should have none if we enforce that additions 
happen via the in-progress state).

What do you think ?




/neale
tpyed by my fat tumhbs
