On Wed, Dec 2, 2020 at 1:57 PM David Gow <david...@google.com> wrote:
> On Wed, Dec 2, 2020 at 6:06 PM Andy Shevchenko
> <andy.shevche...@gmail.com> wrote:
> > On Wed, Dec 02, 2020 at 09:51:19AM +0530, Arpitha Raghunandan wrote:

...

> > What I'm talking about is the output. How it will be implemented (using the
> > same variable or differently) is up to you. So the point is I want to see
> > the statistics of success/total at the end.
> >
> > I think this should be done in KUnit rather than in the individual test
> > cases.
>
> I tend to agree here that this really is something for KUnit. At the
> moment, the tools/testing/kunit/kunit.py script will parse the kernel
> log and generate these sorts of statistics. I know that needing to run
> it through a script might seem like a step backwards, but there's no
> formal place for statistics in the KTAP specification[1] being worked
> on to standardise kselftest/kunit output formats.

Then it sucks. Fix the specification (in the long term). In the meantime,
does it have a comment style of messages so that these statistics can be
printed (even if not parsed)?
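(For illustration: TAP treats '#'-prefixed lines as diagnostics that
parsers pass through, so the statistics could already be printed like
the sample below even before the KTAP spec gives them a formal place.
The numbers are invented.)

TAP version 14
1..1
# hexdump: 418/418 checks passed
ok 1 - hexdump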

> Note that there are
> other parsers for TAP-like formats which are being used with KUnit
> results, so systems like LAVA could also sum up these statistics. It's
> also possible, as Arpitha alluded to, to have the test dump them out
> as a comment.

Fine by me.
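(A sketch of that: kunit_info() prints a '#'-prefixed diagnostic line
into the TAP stream, so a single call at the end of the test would
surface the numbers. The counter names here are made up; they stand in
for whatever the test already tracks.)

	kunit_info(test, "%u/%u hexdump checks passed", passed, total);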

> This won't actually work for this test as-is, though, as the KUnit
> version is running as a single giant test case (so KUnit believes that
> 1/1 tests have passed, rather than having any more-detailed
> statistics). It looks like there are a few ways to split it up a bit
> which would make it neater (a test each for the for() loops in
> test_hexdump_init() seems sensible to me), but at the moment, there's
> not really a way of programmatically generating test cases which KUnit
> then counts.

Fix it, please. We rely on these statistics quite heavily.
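
(For the record, a minimal sketch of the split being described, so that
kunit_tool counts one result per category. The function names follow the
suggestion below; the bodies are elided.)

#include <kunit/test.h>

static void test_hexdump_set(struct kunit *test)
{
	/* Body of the former test_hexdump_set() loop, with the
	 * manual pass/fail accounting replaced by KUNIT_EXPECT_*()
	 * macros.
	 */
}

static void test_hexdump_overflow_set_ascii(struct kunit *test)
{
	/* Likewise for the overflow/ascii category. */
}

static struct kunit_case hexdump_test_cases[] = {
	KUNIT_CASE(test_hexdump_set),
	KUNIT_CASE(test_hexdump_overflow_set_ascii),
	/* ... one KUNIT_CASE() per remaining category ... */
	{}
};

static struct kunit_suite hexdump_test_suite = {
	.name = "hexdump",
	.test_cases = hexdump_test_cases,
};
kunit_test_suite(hexdump_test_suite);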

> The "Parameterised Tests"[2] work Arpitha has been working on ought to
> go some way to helping here, though it won't solve this completely in
> this initial version. The problem there is that parameterised tests
> are not reported individually in a way the kunit.py parser can report
> cleanly, yet, so it'll still only be counted as one test until that's
> changed (though, at least, that shouldn't require any test-specific
> work).
>
> My suggestion for the ultimate state of the test would be:
> - Split up the test into separate KUnit tests for the different
> "categories" of tests: (e.g., test_hexdump_set,
> test_hexdump_overflow_set_ascii, etc)
> - Replace the for loops in test_hexdump_init() with parameters, so
> that KUnit is aware of the original runs.
> - Once KUnit and the tooling supports it, these will be reported as
> subtests. (In the meantime, the results will be listed individually,
> commented out)

I'm fine with that, as long as this information is printed for the user.
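
(To make that concrete: with the interface from [2], each for() loop
could become a parameter array, roughly as below. The parameter struct,
its values, and the NULL description callback are invented for
illustration, and the macro signatures may still change before the
series lands.)

struct hexdump_param {
	size_t len;
	int rowsize;
	int groupsize;
};

static const struct hexdump_param hexdump_params[] = {
	{ .len = 16, .rowsize = 16, .groupsize = 1 },
	{ .len = 32, .rowsize = 32, .groupsize = 4 },
	/* ... one entry per former for() iteration ... */
};

/* Defines hexdump_gen_params(), which walks the array above. */
KUNIT_ARRAY_PARAM(hexdump, hexdump_params, NULL);

static void test_hexdump_set(struct kunit *test)
{
	const struct hexdump_param *p = test->param_value;

	/* One former loop iteration, driven by p->len etc. */
}

static struct kunit_case hexdump_test_cases[] = {
	KUNIT_CASE_PARAM(test_hexdump_set, hexdump_gen_params),
	{}
};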

> Of course, it'll take a while before all of those KUnit pieces are in
> place. I personally think that a good compromise would be to just do
> the first of these for now, which would make kunit_tool give at least
> a 4/4 rather than 1/1 result. Then, once the parameterised testing
> work is merged (and perhaps the tooling fixes are finished), the tests
> could be updated to take advantage of that.

How can we guarantee it will not be forgotten?

> [1]: https://lore.kernel.org/linux-kselftest/cy4pr13mb1175b804e31e502221bc8163fd...@cy4pr13mb1175.namprd13.prod.outlook.com/T/
> [2]: https://lore.kernel.org/linux-kselftest/20201116054035.211498-1-98.a...@gmail.com/

-- 
With Best Regards,
Andy Shevchenko
