Re: [RFC v3 18/19] of: unittest: split out a couple of test cases from unittest

2019-09-21 Thread Frank Rowand
On 9/20/19 9:57 AM, Rob Herring wrote:
> Following up from LPC discussions...
> 
> On Thu, Mar 21, 2019 at 8:30 PM Brendan Higgins
>  wrote:
>>
>> On Thu, Mar 21, 2019 at 5:22 PM Frank Rowand  wrote:
>>>
>>> On 2/27/19 7:52 PM, Brendan Higgins wrote:
>>>> On Wed, Feb 20, 2019 at 12:45 PM Frank Rowand  
>>>> wrote:
>>>>>
>>>>> On 2/18/19 2:25 PM, Frank Rowand wrote:
>>>>>> On 2/15/19 2:56 AM, Brendan Higgins wrote:
>>>>>>> On Thu, Feb 14, 2019 at 6:05 PM Frank Rowand  
>>>>>>> wrote:
>>>>>>>>
>>>>>>>> On 2/14/19 4:56 PM, Brendan Higgins wrote:
>>>>>>>>> On Thu, Feb 14, 2019 at 3:57 PM Frank Rowand  
>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>> On 12/5/18 3:54 PM, Brendan Higgins wrote:
>>>>>>>>>>> On Tue, Dec 4, 2018 at 2:58 AM Frank Rowand 
>>>>>>>>>>>  wrote:
>> < snip >
>>>>>
>>>>> In the base version, the order of execution of the test code requires
>>>>> bouncing back and forth between the test functions and the coding of
>>>>> of_test_find_node_by_name_cases[].
>>>>
>>>> You shouldn't need to bounce back and forth because the order in which
>>>> the tests run shouldn't matter.
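
For context, the cases array being discussed here is the standard KUnit
idiom: a suite is driven by an array of struct kunit_case entries, and the
array order is the execution order.  A minimal sketch, using the macro
spellings that later landed in mainline (the RFC spellings differed
slightly); the test body and node name are invented for illustration:

  #include <kunit/test.h>
  #include <linux/of.h>

  static void of_test_find_node_by_name_basic(struct kunit *test)
  {
          struct device_node *np;

          /* Invented check: look up a node and expect to find it. */
          np = of_find_node_by_name(NULL, "testcase-data");
          KUNIT_EXPECT_NOT_ERR_OR_NULL(test, np);
          of_node_put(np);
  }

  static struct kunit_case of_test_find_node_by_name_cases[] = {
          /* Array order here is also execution order. */
          KUNIT_CASE(of_test_find_node_by_name_basic),
          {},
  };
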
>>>
>>> If one can't guarantee total independence of all of the tests, with no
>>> side effects, then yes.  But that is not my world.  To make that
>>> guarantee, I would need to be able to run just a single test in an
>>> entire test run.
>>>
>>> I actually want to make side effects possible.  Whether from other
>>> tests or from live kernel code that is accessing the live devicetree.
>>> Any extra stress makes me happier.
>>>
>>> I forget the exact term that has been tossed around, but to me the
>>> devicetree unittests are more like system validation, release tests,
>>> acceptance tests, and stress tests.  Not unit tests in the philosophy
>>> of KUnit.
>>
>> Ah, I understand. I thought that they were actually trying to be unit
>> tests; that pretty much voids this discussion then. Integration tests
>> and end to end tests are valuable as long as that is actually what you
>> are trying to do.
> 
> There's a mixture. There's a whole bunch of tests that are basically
> just testing various DT APIs and use a static DT. Those are all unit
> tests IMO.
> 
> Then there's all the overlay tests Frank has added. I guess some of
> those are not unittests in the strictest sense. Regardless, if we're
> reporting test results, we should align our reporting with what will
> become the rest of the kernel.

The last time I talked to you at LPC, I was still resisting moving the
DT unittests to the KUnit framework.  But I think I am on board now.

Brendan agreed to accept a kunit patch from me (when I write it) to enable
the DT unittests to report # of tests run and # of tests passed/failed,
as is currently the case.

Brendan also agreed that null initialization and clean up would be OK
for the DT unittests, but that he does not want that model to be
frequently used.  (You mention this idea later in the email I am
replying to.)
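
For reference, the reporting being preserved is the summary that
drivers/of/unittest.c prints today; roughly (paraphrased, not a verbatim
excerpt):

  static struct unittest_results {
          int passed;
          int failed;
  } unittest_results;

  /* ... each unittest(...) check increments passed or failed ... */

  pr_info("end of unittest - %i passed, %i failed\n",
          unittest_results.passed, unittest_results.failed);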


> 
>>> I do see the value of pure unit tests, and there are rare times that
>>> my devicetree use case might be better served by that approach.  But
>>> if so, it is very easy for me to add a simple pure test when debugging.
>>> My general use case does not map onto this model.
>>
>> Why do you think it is rare that you would actually want unit tests?
> 
> I don't. We should have a unittest (or multiple) for every single DT
> API call and that should be a requirement to add any new APIs.
> 
>> I mean, if you don't get much code churn, then maybe it's not going to
>> provide you a ton of value to immediately go and write a bunch of unit
>> tests right now, but I can't think of a single time where it's hurt.
>> Unit tests, from my experience, are usually the easiest tests to
>> maintain, and the most helpful when I am developing.
>>
>> Maybe I need to understand your use case better.
>>
>>>
>>>
>>>>>
>>>>> In the frank version the order of execution of the test code is obvious.
>>>>
>>>> So I know we were arguing before over whether order *does* matter in
>>>> some of the other test cases (none in the example 

Re: [PATCH v5 00/18] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-06-19 Thread Frank Rowand
Hi Brendan,

I am only responding to this because you asked me to in the v4 thread.

Thank you for evaluating my comments in the v4 thread and asking me to
comment on v5.

On 6/17/19 1:25 AM, Brendan Higgins wrote:
> ## TL;DR
> 
> A not so quick follow-up to Stephen's suggestions on PATCH v4. Nothing
> that really changes any functionality or usage with the minor exception
> of a couple public functions that Stephen asked me to rename.
> Nevertheless, a good deal of clean up and fixes. See changes below.
> 
> As for our current status, right now we got Reviewed-bys on all patches
> except:
> 
> - [PATCH v5 08/18] objtool: add kunit_try_catch_throw to the noreturn
>   list
> 
> However, it would probably be good to get reviews/acks from the
> subsystem maintainers on:
> 
> - [PATCH v5 06/18] kbuild: enable building KUnit
> - [PATCH v5 08/18] objtool: add kunit_try_catch_throw to the noreturn
>   list
> - [PATCH v5 15/18] Documentation: kunit: add documentation for KUnit
> - [PATCH v5 17/18] kernel/sysctl-test: Add null pointer test for
>   sysctl.c:proc_dointvec()
> - [PATCH v5 18/18] MAINTAINERS: add proc sysctl KUnit test to PROC
>   SYSCTL section
> 
> Other than that, I think we should be good to go.
> 
> One last thing, I updated the background to include my thoughts on KUnit
> vs. in kernel testing with kselftest in the background sections as
> suggested by Frank in the discussion on PATCH v2.
> 
> ## Background
> 
> This patch set proposes KUnit, a lightweight unit testing and mocking
> framework for the Linux kernel.
> 
> Unlike Autotest and kselftest, KUnit is a true unit testing framework;
> it does not require installing the kernel on a test machine or in a VM
> (however, KUnit still allows you to run tests on test machines or in VMs
> if you want[1]) and does not require tests to be written in userspace
> running on a host kernel. Additionally, KUnit is fast: From invocation
> to completion KUnit can run several dozen tests in under a second.
> Currently, the entire KUnit test suite for KUnit runs in under a second
> from the initial invocation (build time excluded).
> 
> KUnit is heavily inspired by JUnit, Python's unittest.mock, and
> Googletest/Googlemock for C++. KUnit provides facilities for defining
> unit test cases, grouping related test cases into test suites, providing
> common infrastructure for running tests, mocking, spying, and much more.
> 

I looked only at this section, as was specifically requested:

> ### But wait! Doesn't kselftest support in kernel testing?!
> 
> In a previous version of this patchset Frank pointed out that kselftest
> already supports writing a test that resides in the kernel using the
> test module feature[2]. LWN did a really great summary on this
> discussion here[3].
> 
> Kselftest has a feature that allows a test module to be loaded into a
> kernel using the kselftest framework; this does allow someone to write
> tests against kernel code not directly exposed to userland; however, it
> does not provide much of a framework around how to structure the tests.
> The kselftest test module feature just provides a header which has a
> standardized way of reporting test failures, and then provides
> infrastructure to load and run the tests using the kselftest test
> harness.

The in-kernel tests can also be invoked at boot time if they are
configured (Kconfig) as in-kernel instead of as modules.  I did not
check how many of the tests have tri-state configuration to allow
this, but the few that I looked at did.
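
The pattern looks roughly like the sketch below, modeled on
lib/test_strscpy.c (the check itself is invented).  KSTM_MODULE_LOADERS()
wires up module_init()/module_exit(), so when the test is configured =y
(built-in) rather than =m, the init call -- and therefore the test -- runs
during boot:

  #include <linux/string.h>
  #include "../tools/testing/selftests/kselftest_module.h"

  KSTM_MODULE_GLOBALS();

  static void __init selftest(void)
  {
          /* Invented check; a real test calls the code under test. */
          KSTM_CHECK_ZERO(strcmp("abc", "abc"));
  }

  KSTM_MODULE_LOADERS(test_example);
  MODULE_LICENSE("GPL");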

> 
> The kselftest test module does not seem to be opinionated at all in
> regards to how tests are structured, how they check for failures, how
> tests are organized. Even the method it provides for reporting
> failures is pretty simple; it doesn't have any more advanced failure
> reporting or logging features. Given what's there, I think it is fair to
> say that it is not actually a framework, but a feature that makes it
> possible for someone to do some checks in kernel space.

I would call that description a little dismissive.  The set of in-kernel
tests that I looked at followed a common pattern and reported results
in a uniform manner.

> 
> Furthermore, kselftest test module has very few users. I checked for all
> the tests that use it using the following grep command:
> 
> grep -Hrn -e 'kselftest_module\.h'
> 
> and only got three results: lib/test_strscpy.c, lib/test_printf.c, and
> lib/test_bitmap.c.

You missed many tests.  I listed many more than that in the v4 thread,
and someone else listed more there as well.
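
The broader search used elsewhere in this thread turns up the rest of the
in-kernel tests:

$ grep module_init lib/test_*.c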


> 
> So despite kselftest test module's existence, there really is no feature
> overlap between kselftest and KUnit, save one: that you can use either
> to write an in-kernel test, but this is a very small feature in
> comparison to everything that KUnit allows you to do. KUnit is a full
> x-unit style unit testing framework, whereas kselftest looks a lot more
> like an end-to-end/functional testing framework, with

Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-05-14 Thread Frank Rowand
On 5/11/19 10:33 AM, Theodore Ts'o wrote:
> On Fri, May 10, 2019 at 02:12:40PM -0700, Frank Rowand wrote:
>> However, the reply is incorrect.  Kselftest in-kernel tests (which
>> is the context here) can be configured as built in instead of as
>> a module, and built into a UML kernel.  The UML kernel can boot,
>> running the in-kernel tests before UML attempts to invoke the
>> init process.
> 
> Um, Citation needed?

The paragraph that you quoted tells you exactly how to run a kselftest
in-kernel test in a UML kernel.  Just do what that paragraph says.


> 
> I don't see any evidence for this in the kselftest documentation, nor
> do I see any evidence of this in the kselftest Makefiles.
> 
> There exist test modules in the kernel that run before the init
> scripts run --- but that's not strictly speaking part of kselftests,
> and do not have any kind of infrastructure.  As noted, the
> kselftests_harness header file fundamentally assumes that you are
> running test code in userspace.

You are ignoring the kselftest in-kernel tests.

We are talking in circles.  I'm done with this thread.

-Frank

> 
>   - Ted
> .
> 



Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-05-14 Thread Frank Rowand
On 5/14/19 1:38 AM, Brendan Higgins wrote:
> On Fri, May 10, 2019 at 03:13:40PM -0700, Frank Rowand wrote:
>> On 5/10/19 9:17 AM, Logan Gunthorpe wrote:
>>>
>>>
>>> On 2019-05-09 11:18 p.m., Frank Rowand wrote:
>>>
>>>> YES, kselftest has in-kernel tests.  (Excuse the shouting...)
>>>
>>> Cool. From my cursory look, in my opinion, these would be greatly
>>> improved by converting them to the framework Brendan is proposing for Kunit.
>>>
>>>>> If they do exist, it seems like it would make sense to
>>>>> convert those to kunit and have Kunit tests run-able in a VM or
>>>>> baremetal instance.
>>>>
>>>> They already run in a VM.
>>>>
>>>> They already run on bare metal.
>>>>
>>>> They already run in UML.
>>>
>>> Simply being able to run in UML is not the only thing here. Kunit
>>> provides the infrastructure to quickly build, run and report results for
>>> all the tests from userspace without needing to worry about the details
>>> of building and running a UML kernel, then parsing dmesg to figure out
>>> what tests were run or not.
>>
>> Yes.  But that is not the only environment that KUnit must support to be
>> of use to me for devicetree unittests (this is not new, Brendan is quite
>> aware of my needs and is not ignoring them).
>>
>>
>>>> This is not to say that KUnit does not make sense.  But I'm still trying
>>>> to get a better description of the KUnit features (and there are
>>>> some).
>>>
>>> So read the patches, or the documentation[1] or the LWN article[2]. It's
>>> pretty well described in a lot of places -- that's one of the big
>>> advantages of it. In contrast, few people seem to have any concept of
>>> what kselftests are or where they are or how to run them (I was
>>> surprised to find the in-kernel tests in the lib tree).
>>>
>>> Logan
>>>
>>> [1] https://google.github.io/kunit-docs/third_party/kernel/docs/
>>> [2] https://lwn.net/Articles/780985/
>>
>> I have been following the RFC versions.  I have installed the RFC patches
>> and run them to the extent that they worked (devicetree unittests were
>> a guinea pig for test conversion in the RFC series, but the converted
>> tests did not work).  I read portions of the code while trying to
>> understand the unittests conversion.  I made review comments based on
>> the portion of the code that I did read.  I have read the documentation
>> (very nice btw, as I have said before, but should be expanded).
>>
>> My comment is that the description to submit the patch series should
>> be fuller -- KUnit potentially has a lot of nice attributes, and I
>> still think I have only scratched the surface.  The average reviewer
>> may have even less in-depth knowledge than I do.  And as I have
>> commented before, I keep diving into areas that I had no previous
>> experience with (such as kselftest) to be able to properly judge this
>> patch series.
> 
> Thanks for the praise! That means a lot coming from you!
> 
> I really cannot disagree that I could use more documentation. You can
> pretty much always use more documentation. Nevertheless, is there a
> particular part of the documentation that you think is lacking?

I wasn't talking about the documentation that is part of KUnit.  I was
targeting patch 0.


> It sounds like there was a pretty long discussion here about a number
> of different things.
> 
> Do you want a better description of what unit testing is and how KUnit
> helps make it possible?
> 
> Do you want more of an explanation distinguishing KUnit from kselftest?
> How so?

The high level issue is to provide reviewers with enough context to be
able to evaluate the patch series.  That is probably not very obvious
at this point in the thread.  At this point I was responding to Logan's
response to me that I should be reading Documentation to get a better
description of KUnit features.  I _think_ that Logan thought that I
did not understand KUnit features and was trying to be helpful by
pointing out where I could get more information.  If so, he missed
my intended point, which was that patch 0 should provide more information
to justify adding this feature.

One thing that has become very apparent in the discussion of this patch
series is that some people do not understand that kselftest includes
in-kernel tests, not just userspace tests.  As such, KUnit is an
additional implementation of "the same feature".  (One can debate
exactly which in

Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-05-10 Thread Frank Rowand
On 5/10/19 1:54 PM, Brendan Higgins wrote:
> On Fri, May 10, 2019 at 5:13 AM Knut Omang  wrote:
>> On Fri, 2019-05-10 at 03:23 -0700, Brendan Higgins wrote:
>>>> On Fri, May 10, 2019 at 7:49 AM Knut Omang  wrote:
>>>>>
>>>>> On Thu, 2019-05-09 at 22:18 -0700, Frank Rowand wrote:
>>>>>> On 5/9/19 4:40 PM, Logan Gunthorpe wrote:
>>>>>>>
>>>>>>>
>>>>>>> On 2019-05-09 5:30 p.m., Theodore Ts'o wrote:
>>>>>>>> On Thu, May 09, 2019 at 04:20:05PM -0600, Logan Gunthorpe wrote:
>>>>>>>>>
>>>>>>>>> The second item, arguably, does have significant overlap with
>>>>>>>>> kselftest. Whether you are running short tests in a lightweight UML
>>>>>>>>> environment or higher level tests in a heavier VM, the two could be
>>>>>>>>> using the same framework for writing or defining in-kernel tests. It
>>>>>>>>> *may* also be valuable for some people to be able to run all the UML
>>>>>>>>> tests in the heavy VM environment alongside other higher level tests.
>>>>>>>>>
>>>>>>>>> Looking at the selftests tree in the repo, we already have similar
>>>>>>>>> items to what Kunit is adding as I described in point (2) above.
>>>>>>>>> kselftest_harness.h contains macros like EXPECT_* and ASSERT_* with
>>>>>>>>> very similar intentions to the new KUNIT_EXPECT_* and KUNIT_ASSERT_*
>>>>>>>>> macros.
>>>>>>>>>
>>>>>>>>> However, the number of users of this harness appears to be quite
>>>>>>>>> small. Most of the code in the selftests tree seems to be a random
>>>>>>>>> mishmash of scripts and userspace code so it's not hard to see it as
>>>>>>>>> something completely different from the new Kunit:
>>>>>>>>>
>>>>>>>>> $ git grep --files-with-matches kselftest_harness.h *
>>>>>>>>
>>>>>>>> To the extent that we can unify how tests are written, I agree that
>>>>>>>> this would be a good thing.  However, you should note that
>>>>>>>> kselftest_harness.h currently assumes that it will be included in
>>>>>>>> userspace programs.  This is most obviously seen if you look closely
>>>>>>>> at the functions defined in the header files which make calls to
>>>>>>>> fork(), abort() and fprintf().
>>>>>>>
>>>>>>> Ah, yes. I obviously did not dig deep enough. Using kunit for
>>>>>>> in-kernel tests and kselftest_harness for userspace tests seems like
>>>>>>> a sensible line to draw to me. Trying to unify kernel and userspace
>>>>>>> here sounds like it could be difficult so it's probably not worth
>>>>>>> forcing the issue unless someone wants to do some really fancy work
>>>>>>> to get it done.
>>>>>>>
>>>>>>> Based on some of the other commenters, I was under the impression
>>>>>>> that kselftests had in-kernel tests but I'm not sure where or if they
>>>>>>> exist.
>>>>>>
>>>>>> YES, kselftest has in-kernel tests.  (Excuse the shouting...)
>>>>>>
>>>>>> Here is a likely list of them in the kernel source tree:
>>>>>>
>>>>>> $ grep module_init lib/test_*.c
>>>>>> lib/test_bitfield.c:module_init(test_bitfields)
>>>>>> lib/test_bitmap.c:module_init(test_bitmap_init);
>>>>>> lib/test_bpf.c:module_init(test_bpf_init);
>>>>>> lib/test_debug_virtual.c:module_init(test_debug_virtual_init);
>>>>>> lib/test_firmware.c:module_init(test_firmware_init);
>>>>>> lib/test_hash.c:module_init(test_hash_init);  /* Does everything */
>>>>>> lib/test_hexdump.c:module_init(test_hexdump_init);
>>>>>> lib/test_ida.c:module_init(ida_checks);
>>>>>> lib/t

Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-05-10 Thread Frank Rowand
On 5/10/19 9:17 AM, Logan Gunthorpe wrote:
> 
> 
> On 2019-05-09 11:18 p.m., Frank Rowand wrote:
> 
>> YES, kselftest has in-kernel tests.  (Excuse the shouting...)
> 
> Cool. From my cursory look, in my opinion, these would be greatly
> improved by converting them to the framework Brendan is proposing for Kunit.
> 
>>> If they do exist, it seems like it would make sense to
>>> convert those to kunit and have Kunit tests run-able in a VM or
>>> baremetal instance.
>>
>> They already run in a VM.
>>
>> They already run on bare metal.
>>
>> They already run in UML.
> 
> Simply being able to run in UML is not the only thing here. Kunit
> provides the infrastructure to quickly build, run and report results for
> all the tests from userspace without needing to worry about the details
> of building and running a UML kernel, then parsing dmesg to figure out
> what tests were run or not.

Yes.  But that is not the only environment that KUnit must support to be
of use to me for devicetree unittests (this is not new, Brendan is quite
aware of my needs and is not ignoring them).


>> This is not to say that KUnit does not make sense.  But I'm still trying
>> to get a better description of the KUnit features (and there are
>> some).
> 
> So read the patches, or the documentation[1] or the LWN article[2]. It's
> pretty well described in a lot of places -- that's one of the big
> advantages of it. In contrast, few people seem to have any concept of
> what kselftests are or where they are or how to run them (I was
> surprised to find the in-kernel tests in the lib tree).
> 
> Logan
> 
> [1] https://google.github.io/kunit-docs/third_party/kernel/docs/
> [2] https://lwn.net/Articles/780985/

I have been following the RFC versions.  I have installed the RFC patches
and run them to the extent that they worked (devicetree unittests were
a guinea pig for test conversion in the RFC series, but the converted
tests did not work).  I read portions of the code while trying to
understand the unittests conversion.  I made review comments based on
the portion of the code that I did read.  I have read the documentation
(very nice btw, as I have said before, but should be expanded).

My comment is that the description to submit the patch series should
be fuller -- KUnit potentially has a lot of nice attributes, and I
still think I have only scratched the surface.  The average reviewer
may have even less in-depth knowledge than I do.  And as I have
commented before, I keep diving into areas that I had no previous
experience with (such as kselftest) to be able to properly judge this
patch series.



Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-05-10 Thread Frank Rowand
On 5/10/19 3:23 AM, Brendan Higgins wrote:
>> On Fri, May 10, 2019 at 7:49 AM Knut Omang  wrote:
>>>
>>> On Thu, 2019-05-09 at 22:18 -0700, Frank Rowand wrote:
>>>> On 5/9/19 4:40 PM, Logan Gunthorpe wrote:
>>>>>
>>>>>
>>>>> On 2019-05-09 5:30 p.m., Theodore Ts'o wrote:
>>>>>> On Thu, May 09, 2019 at 04:20:05PM -0600, Logan Gunthorpe wrote:
>>>>>>>
>>>>>>> The second item, arguably, does have significant overlap with kselftest.
>>>>>>> Whether you are running short tests in a lightweight UML environment or
>>>>>>> higher level tests in a heavier VM, the two could be using the same
>>>>>>> framework for writing or defining in-kernel tests. It *may* also be
>>>>>>> valuable for some people to be able to run all the UML tests in the
>>>>>>> heavy VM environment alongside other higher level tests.
>>>>>>>
>>>>>>> Looking at the selftests tree in the repo, we already have similar
>>>>>>> items to what Kunit is adding as I described in point (2) above.
>>>>>>> kselftest_harness.h contains macros like EXPECT_* and ASSERT_* with
>>>>>>> very similar intentions to the new KUNIT_EXPECT_* and KUNIT_ASSERT_*
>>>>>>> macros.
>>>>>>>
>>>>>>> However, the number of users of this harness appears to be quite small.
>>>>>>> Most of the code in the selftests tree seems to be a random mishmash of
>>>>>>> scripts and userspace code so it's not hard to see it as something
>>>>>>> completely different from the new Kunit:
>>>>>>>
>>>>>>> $ git grep --files-with-matches kselftest_harness.h *
>>>>>>
>>>>>> To the extent that we can unify how tests are written, I agree that
>>>>>> this would be a good thing.  However, you should note that
>>>>>> kselftest_harness.h currently assumes that it will be included in
>>>>>> userspace programs.  This is most obviously seen if you look closely
>>>>>> at the functions defined in the header files which make calls to
>>>>>> fork(), abort() and fprintf().
>>>>>
>>>>> Ah, yes. I obviously did not dig deep enough. Using kunit for
>>>>> in-kernel tests and kselftest_harness for userspace tests seems like
>>>>> a sensible line to draw to me. Trying to unify kernel and userspace
>>>>> here sounds like it could be difficult so it's probably not worth
>>>>> forcing the issue unless someone wants to do some really fancy work
>>>>> to get it done.
>>>>>
>>>>> Based on some of the other commenters, I was under the impression
>>>>> that kselftests had in-kernel tests but I'm not sure where or if they
>>>>> exist.
>>>>
>>>> YES, kselftest has in-kernel tests.  (Excuse the shouting...)
>>>>
>>>> Here is a likely list of them in the kernel source tree:
>>>>
>>>> $ grep module_init lib/test_*.c
>>>> lib/test_bitfield.c:module_init(test_bitfields)
>>>> lib/test_bitmap.c:module_init(test_bitmap_init);
>>>> lib/test_bpf.c:module_init(test_bpf_init);
>>>> lib/test_debug_virtual.c:module_init(test_debug_virtual_init);
>>>> lib/test_firmware.c:module_init(test_firmware_init);
>>>> lib/test_hash.c:module_init(test_hash_init);  /* Does everything */
>>>> lib/test_hexdump.c:module_init(test_hexdump_init);
>>>> lib/test_ida.c:module_init(ida_checks);
>>>> lib/test_kasan.c:module_init(kmalloc_tests_init);
>>>> lib/test_list_sort.c:module_init(list_sort_test);
>>>> lib/test_memcat_p.c:module_init(test_memcat_p_init);
>>>> lib/test_module.c:static int __init test_module_init(void)
>>>> lib/test_module.c:module_init(test_module_init);
>>>> lib/test_objagg.c:module_init(test_objagg_init);
>>>> lib/test_overflow.c:static int __init test_module_init(void)
>>>> lib/test_overflow.c:module_init(test_module_init);
>>>> lib/test_parman.c:module_init(test_parman_init);
>>>> lib/test_printf.c:module_init(test_printf_init);
>>>> lib/test_rhashtable.c:module_init(test_rht

Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-05-10 Thread Frank Rowand
On 5/9/19 3:20 PM, Logan Gunthorpe wrote:
> 
> 
> On 2019-05-09 3:42 p.m., Theodore Ts'o wrote:
>> On Thu, May 09, 2019 at 11:12:12AM -0700, Frank Rowand wrote:
>>>
>>>     "My understanding is that the intent of KUnit is to avoid booting a 
>>> kernel on
>>>     real hardware or in a virtual machine.  That seems to be a matter of 
>>> semantics
>>>     to me because isn't invoking a UML Linux just running the Linux kernel 
>>> in
>>>     a different form of virtualization?
>>>
>>>     So I do not understand why KUnit is an improvement over kselftest.
>>>
>>>     ...
>>> 
>>> What am I missing?"
>> 
>> One major difference: kselftest requires a userspace environment;
>> it starts systemd, requires a root file system from which you can
>> load modules, etc.  Kunit doesn't require a root file system;
>> doesn't require that you start systemd; doesn't allow you to run
>> arbitrary perl, python, bash, etc. scripts.  As such, it's much
>> lighter weight than kselftest, and will have much less overhead
>> before you can start running tests.  So it's not really the same
>> kind of virtualization.

I'm back to reply to this subthread, after a delay, as promised.


> I largely agree with everything Ted has said in this thread, but I
> wonder if we are conflating two different ideas, and that is causing an
> impasse. From what I see, Kunit actually provides two different
> things:

> 1) An execution environment that can be run very quickly in userspace
> on tests in the kernel source. This speeds up the tests and gives a
> lot of benefit to developers using those tests because they can get
> feedback on their code changes a *lot* quicker.

kselftest in-kernel tests provide exactly the same when the tests are
configured as "built-in" code instead of as modules.


> 2) A framework to write unit tests that provides a lot of the same
> facilities as other common unit testing frameworks from userspace
> (ie. a runner that runs a list of tests and a bunch of helpers such
> as KUNIT_EXPECT_* to simplify test passes and failures).
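
To make point (2) concrete, a minimal suite in the API as it eventually
landed in mainline (the v2 patchset spelled some of this slightly
differently); add() is a stand-in for whatever function is under test:

  #include <kunit/test.h>

  static int add(int a, int b) { return a + b; }  /* stand-in */

  static void example_add_test(struct kunit *test)
  {
          /* EXPECT logs a failure and lets the case continue;
           * ASSERT aborts the case on failure. */
          KUNIT_EXPECT_EQ(test, 3, add(1, 2));
          KUNIT_ASSERT_EQ(test, 0, add(-1, 1));
  }

  static struct kunit_case example_cases[] = {
          KUNIT_CASE(example_add_test),
          {},
  };

  static struct kunit_suite example_suite = {
          .name = "example",
          .test_cases = example_cases,
  };
  kunit_test_suite(example_suite);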

> The first item from Kunit is novel and I see absolutely no overlap
> with anything kselftest does. It's also the valuable thing I'd like
> to see merged and grow.

The first item exists in kselftest.


> The second item, arguably, does have significant overlap with
> kselftest. Whether you are running short tests in a lightweight UML
> environment or higher level tests in a heavier VM, the two could be
> using the same framework for writing or defining in-kernel tests. It
> *may* also be valuable for some people to be able to run all the UML
> tests in the heavy VM environment alongside other higher level
> tests.
> 
> Looking at the selftests tree in the repo, we already have similar
> items to what Kunit is adding as I described in point (2) above.
> kselftest_harness.h contains macros like EXPECT_* and ASSERT_* with
> very similar intentions to the new KUNIT_EXECPT_* and KUNIT_ASSERT_*
> macros.

I might be wrong here because I have not dug deeply enough into the
code!!!  Does this framework apply to the userspace tests, the
in-kernel tests, or both?  My "not having dug enough GUESS" is that
these are for the user space tests (although if so, they could be
extended for in-kernel use also).

So I think this one maybe does not have an overlap between KUnit
and kselftest.
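
That guess matches the header: kselftest_harness.h builds an ordinary
userspace program -- TEST_HARNESS_MAIN supplies main(), and each TEST()
runs in a fork()ed child.  A minimal sketch:

  /* Compiled as a normal userspace program against
   * tools/testing/selftests/kselftest_harness.h */
  #include "kselftest_harness.h"

  TEST(example_arithmetic)
  {
          EXPECT_EQ(3, 1 + 2);  /* failure is logged, test continues */
          ASSERT_NE(-1, 0);     /* failure aborts this test */
  }

  TEST_HARNESS_MAIN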


> However, the number of users of this harness appears to be quite
> small. Most of the code in the selftests tree seems to be a random
> mishmash of scripts and userspace code so it's not hard to see it as
> something completely different from the new Kunit:
> $ git grep --files-with-matches kselftest_harness.h *
> Documentation/dev-tools/kselftest.rst
> MAINTAINERS
> tools/testing/selftests/kselftest_harness.h
> tools/testing/selftests/net/tls.c
> tools/testing/selftests/rtc/rtctest.c
> tools/testing/selftests/seccomp/Makefile
> tools/testing/selftests/seccomp/seccomp_bpf.c
> tools/testing/selftests/uevent/Makefile
> tools/testing/selftests/uevent/uevent_filtering.c


> Thus, I can personally see a lot of value in integrating the kunit
> test framework with this kselftest harness. There's only a small
> number of users of the kselftest harness today, so one way or another
> it seems like getting this integrated early would be a good idea.
> Letting Kunit and Kselftests progress independently for a few years
> will only make this worse and may become something we end up
> regretting.

Yes, this I agree with.

-Frank

> 
> Logan


Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-05-10 Thread Frank Rowand
On 5/9/19 2:42 PM, Theodore Ts'o wrote:
> On Thu, May 09, 2019 at 11:12:12AM -0700, Frank Rowand wrote:
>>
>>"My understanding is that the intent of KUnit is to avoid booting a 
>> kernel on
>>real hardware or in a virtual machine.  That seems to be a matter of 
>> semantics
>>to me because isn't invoking a UML Linux just running the Linux kernel in
>>a different form of virtualization?
>>
>>So I do not understand why KUnit is an improvement over kselftest.
>>
>>...
>>
>>What am I missing?"
> 
> One major difference: kselftest requires a userspace environment; it
> starts systemd, requires a root file system from which you can load
> modules, etc.  Kunit doesn't require a root file system; doesn't
> require that you start systemd; doesn't allow you to run arbitrary
> perl, python, bash, etc. scripts.  As such, it's much lighter weight
> than kselftest, and will have much less overhead before you can start
> running tests.  So it's not really the same kind of virtualization.
> 
> Does this help?
> 
>   - Ted
> 

I'm back to reply to this subthread, after a delay, as promised.

That is the type of information that I was looking for, so
thank you for the reply.

However, the reply is incorrect.  Kselftest in-kernel tests (which
is the context here) can be configured as built in instead of as
a module, and built into a UML kernel.  The UML kernel can boot,
running the in-kernel tests before UML attempts to invoke the
init process.

No userspace environment needed.  So exactly the same overhead
as KUnit when invoked in that manner.


Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-05-10 Thread Frank Rowand
On 5/10/19 3:43 AM, Theodore Ts'o wrote:
> On Thu, May 09, 2019 at 10:11:01PM -0700, Frank Rowand wrote:
>>>> You *can* run in-kernel tests using modules; but there is no framework
>>>> for the in-kernel code found in the test modules, which means each of
>>>> the in-kernel tests has to create its own in-kernel test
>>>> infrastructure.
>>
>> The kselftest in-kernel tests follow a common pattern.  As such, there
>> is a framework.
> 
> So we may have different definitions of "framework".  In my book, code
> reuse by "cut and paste" does not make a framework.  Could they be
> rewritten to *use* a framework, whether it be KTF or KUnit?  Sure!
> But they are not using a framework *today*.
> 
>> These next two paragraphs you ignored entirely in your reply:
>>
>>> Why create an entire new subsystem (KUnit) when you can add a header
>>> file (and .c code as appropriate) that outputs the proper TAP formatted
>>> results from kselftest kernel test modules?
> 
> And you keep ignoring my main observation, which is that spinning up a
> VM, letting systemd start, mounting a root file system, etc., is all
> unnecessary overhead which takes time.  This is important to me,
> because developer velocity is extremely important if you are doing
> test driven development.

No, I do not "keep ignoring my main observation".  You made that
observation in an email of Thu, 9 May 2019 09:35:51 -0400.  In my
reply to Tim's reply to your email, I wrote:

   "< massive snip >

   I'll reply in more detail to some other earlier messages in this thread
   later.

   This reply is an attempt to return to the intent of my original reply to
   patch 0 of this series."


I have not directly replied to any of your other emails that have made
that observation (I may have replied to other emails that were replies
to such an email of yours, but not in the context of the overhead).
After this email, my next reply will be my delayed response to your
original email about overhead.

And the "mommy, he hit me first" argument does not contribute to a
constructive conversation about a kernel patch submittal.


> Yes, you can manually unload a module, recompile the module, somehow
> get the module back into the VM (perhaps by using virtio-9p), and then
> reload the module with the in-kernel test code, and then restart the
> test.  BUT: (a) even if it is faster, it requires a lot of manual
> steps, and would be very hard to automate, and (b) if the test code
> ever OOPS or triggers a lockdep warning, you will need to restart the
> VM, and so this involves all of the VM restart overhead, plus trying
> to automate determining when you actually do need to restart the VM
> versus unloading and reloading the module.   It's clunky.

I have mentioned before that the in-kernel kselftest tests can be
run in UML.  You simply select the configuration options to build them
into the kernel instead of building them as modules.  Then build
a UML kernel and execute ("boot") the UML kernel.

This is exactly the same as for KUnit.  No more overhead.  No less
overhead.  No more steps.  No fewer steps.
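
Concretely, the steps look something like this (a sketch; the exact
CONFIG_TEST_* symbols depend on which tests you want):

$ make ARCH=um defconfig
$ make ARCH=um menuconfig   # set the desired CONFIG_TEST_* options to =y
$ make ARCH=um
$ ./linux                   # built-in tests run and log during boot

UML will eventually complain about the missing root filesystem, but by
then the built-in tests have already run and logged their results.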


> 
> Being able to do the equivalent of "make && make check" is a really
> big deal.  And "make check" needs to go fast.
> 
> You keep ignoring this point, perhaps because you don't care about this
> issue?  Which is fine, and why we may just need to agree to disagree.

No, I agree that fast test execution is useful.


> 
> Cheers,
> 
>   - Ted
> 
> P.S.  Running scripts is Turing-equivalent, so it's self-evident that
> *anything* you can do with other test frameworks you can somehow do in
> kselftests.  That argument doesn't impress me, and is why I do consider
> it quite flippant.  (Heck, /bin/vi is Turing equivalent so we could
> use vi as a kernel test framework.  Or we could use emacs.  Let's
> not.  :-)

I have not been talking about running scripts, other than to the extent
that _one of the ways_ the in-kernel kselftests can be invoked is via
a script that loads the test module.  The same exact in-kernel test can
instead be built into a UML kernel, as mentioned above.


> The question is whether it is the best and most efficient way to
> do that testing.  And developer velocity is a really big part of my
> evaluation function when judging whether or not a test framework is fit
> for that purpose.
> .
> 



Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-05-09 Thread Frank Rowand
On 5/9/19 4:40 PM, Logan Gunthorpe wrote:
> 
> 
> On 2019-05-09 5:30 p.m., Theodore Ts'o wrote:
>> On Thu, May 09, 2019 at 04:20:05PM -0600, Logan Gunthorpe wrote:
>>>
>>> The second item, arguably, does have significant overlap with kselftest.
>>> Whether you are running short tests in a lightweight UML environment or
>>> higher level tests in a heavier VM, the two could be using the same
>>> framework for writing or defining in-kernel tests. It *may* also be valuable
>>> for some people to be able to run all the UML tests in the heavy VM
>>> environment alongside other higher level tests.
>>>
>>> Looking at the selftests tree in the repo, we already have similar items to
>>> what Kunit is adding as I described in point (2) above. kselftest_harness.h
>>> contains macros like EXPECT_* and ASSERT_* with very similar intentions to
>>> the new KUNIT_EXPECT_* and KUNIT_ASSERT_* macros.
>>>
>>> However, the number of users of this harness appears to be quite small. Most
>>> of the code in the selftests tree seems to be a random mishmash of scripts
>>> and userspace code so it's not hard to see it as something completely
>>> different from the new Kunit:
>>>
>>> $ git grep --files-with-matches kselftest_harness.h *
>>
>> To the extent that we can unify how tests are written, I agree that
>> this would be a good thing.  However, you should note that
>> kselftest_harness.h currently assumes that it will be included in
>> userspace programs.  This is most obviously seen if you look closely
>> at the functions defined in the header files which make calls to
>> fork(), abort() and fprintf().
> 
> Ah, yes. I obviously did not dig deep enough. Using kunit for
> in-kernel tests and kselftest_harness for userspace tests seems like
> a sensible line to draw to me. Trying to unify kernel and userspace
> here sounds like it could be difficult so it's probably not worth
> forcing the issue unless someone wants to do some really fancy work
> to get it done.
> 
> Based on some of the other commenters, I was under the impression
> that kselftests had in-kernel tests but I'm not sure where or if they
> exist.

YES, kselftest has in-kernel tests.  (Excuse the shouting...)

Here is a likely list of them in the kernel source tree:

$ grep module_init lib/test_*.c
lib/test_bitfield.c:module_init(test_bitfields)
lib/test_bitmap.c:module_init(test_bitmap_init);
lib/test_bpf.c:module_init(test_bpf_init);
lib/test_debug_virtual.c:module_init(test_debug_virtual_init);
lib/test_firmware.c:module_init(test_firmware_init);
lib/test_hash.c:module_init(test_hash_init);  /* Does everything */
lib/test_hexdump.c:module_init(test_hexdump_init);
lib/test_ida.c:module_init(ida_checks);
lib/test_kasan.c:module_init(kmalloc_tests_init);
lib/test_list_sort.c:module_init(list_sort_test);
lib/test_memcat_p.c:module_init(test_memcat_p_init);
lib/test_module.c:static int __init test_module_init(void)
lib/test_module.c:module_init(test_module_init);
lib/test_objagg.c:module_init(test_objagg_init);
lib/test_overflow.c:static int __init test_module_init(void)
lib/test_overflow.c:module_init(test_module_init);
lib/test_parman.c:module_init(test_parman_init);
lib/test_printf.c:module_init(test_printf_init);
lib/test_rhashtable.c:module_init(test_rht_init);
lib/test_siphash.c:module_init(siphash_test_init);
lib/test_sort.c:module_init(test_sort_init);
lib/test_stackinit.c:module_init(test_stackinit_init);
lib/test_static_key_base.c:module_init(test_static_key_base_init);
lib/test_static_keys.c:module_init(test_static_key_init);
lib/test_string.c:module_init(string_selftest_init);
lib/test_ubsan.c:module_init(test_ubsan_init);
lib/test_user_copy.c:module_init(test_user_copy_init);
lib/test_uuid.c:module_init(test_uuid_init);
lib/test_vmalloc.c:module_init(vmalloc_test_init)
lib/test_xarray.c:module_init(xarray_checks);


> If they do exist, it seems like it would make sense to
> convert those to kunit and have Kunit tests run-able in a VM or
> baremetal instance.

They already run in a VM.

They already run on bare metal.

They already run in UML.

This is not to say that KUnit does not make sense.  But I'm still trying
to get a better description of the KUnit features (and there are
some).

-Frank


Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-05-09 Thread Frank Rowand
Hi Ted,

I'll try answering this again.

The first time I was a little flippant in part of my answer because I
thought your comments somewhat flippant.  This time I'll provide a
more complete answer.


On 5/8/19 7:13 PM, Frank Rowand wrote:
> On 5/8/19 6:58 PM, Theodore Ts'o wrote:
>> On Wed, May 08, 2019 at 05:43:35PM -0700, Frank Rowand wrote:

*  begin context from Greg KH that you snipped  *

On 5/7/19 1:01 AM, Greg KH wrote:

<< note that I have snipped my original question above this point >>

>
> kselftest provides no in-kernel framework for testing kernel code
> specifically.  That should be what kunit provides, an "easy" way to
> write in-kernel tests for things.

*  end   context from Greg KH that you snipped  *

>>> kselftest provides a mechanism for in-kernel tests via modules.  For
>>> example, see:
>>>
>>>   tools/testing/selftests/vm/run_vmtests invokes:
>>> tools/testing/selftests/vm/test_vmalloc.sh
>>>   loads module:
>>> test_vmalloc
>>> (which is built from lib/test_vmalloc.c if CONFIG_TEST_VMALLOC)
>>
>> The majority of the kselftests are implemented as userspace programs.

My flippant answer:

> Non-argument.

This time:

My reply to Greg was pointing out that in-kernel tests do exist in
kselftest.

Your comment that the majority of kselftests are implemented as userspace
programs has no bearing on whether kselftest supports in-kernel tests.
It does not counter the fact that kselftest supports in-kernel tests.


>> You *can* run in-kernel tests using modules; but there is no framework
>> for the in-kernel code found in the test modules, which means each of
>> the in-kernel tests has to create its own in-kernel test
>> infrastructure.

The kselftest in-kernel tests follow a common pattern.  As such, there
is a framework.
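
A paraphrase of that common pattern (names invented; the real modules
differ in detail):

  #include <linux/module.h>
  #include <linux/printk.h>
  #include <linux/types.h>

  static unsigned int total_tests, failed_tests;

  static void __init check(bool pass, const char *name)
  {
          total_tests++;
          if (!pass) {
                  failed_tests++;
                  pr_err("test_foo: FAIL: %s\n", name);
          }
  }

  static int __init test_foo_init(void)
  {
          check(1 + 2 == 3, "arithmetic");  /* invented check */
          pr_info("test_foo: %u of %u tests passed\n",
                  total_tests - failed_tests, total_tests);
          return failed_tests ? -EINVAL : 0;
  }
  module_init(test_foo_init);

  MODULE_LICENSE("GPL");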

These next two paragraphs you ignored entirely in your reply:

> Why create an entire new subsystem (KUnit) when you can add a header
> file (and .c code as appropriate) that outputs the proper TAP formatted
> results from kselftest kernel test modules?
> 
> There are already a multitude of in-kernel test modules used by
> kselftest.  It would be good if they all used a common TAP-compliant
> mechanism to report results.
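
For reference, TAP ("Test Anything Protocol") output is just lines like
the following (test names invented), which is what such a shared header
could emit:

  TAP version 13
  1..3
  ok 1 - test_foo_first
  ok 2 - test_foo_second
  not ok 3 - test_foo_third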


 
>> That's much like saying you can use vice grips to turn a nut or
>> bolt-head.  You *can*, but it might be that using a monkey wrench
>> would be a much better tool that is much easier.
>>
>> What would you say to a wood worker objecting that a toolbox should
>> contain a monkey wrench because he already knows how to use vise
>> grips, and his tiny brain shouldn't be forced to learn how to use a
>> wrench when he knows how to use a vise grip, which is a perfectly good
>> tool?
>>
>> If you want to use vice grips as a hammer, screwdriver, monkey wrench,
>> etc.  there's nothing stopping you from doing that.  But it's not fair
>> to object to other people who might want to use better tools.
>>
>> The reality is that we have a lot of testing tools.  It's not just
>> kselftests.  There is xfstests for file system code, blktests for
>> block layer tests, etc.   We use the right tool for the right job.

My flippant answer:

> More specious arguments.

This time:

I took your answer as a straw man, and had no desire to spend time
countering a straw man.

I am not proposing using vice grips (to use your analogy).  I
am saying that maybe the monkey wrench already exists.

My point of this whole thread has been to try to get the submitter
to provide a better, more complete explanation of how and why KUnit 
is a better tool.

I have not yet objected to the number of (and differences among) the
many sub-system tests.  I am questioning whether there is a need
for another _test framework_ for in-kernel testing.  If there is
something important that KUnit provides that does not exist in
existing frameworks then the next question would be to ask how
to implement that important thing (improve the existing
framework, replace the existing framework, or have two
frameworks).  I still think it is premature to ask this
question until we first know the answer to what unique features
KUnit adds (and apparently until we know what the existing
framework provides).

-Frank

> 
> -Frank
> 
>>
>>  - Ted
>>
> 
> 



Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-05-09 Thread Frank Rowand
On 5/9/19 10:00 AM, tim.b...@sony.com wrote:
>> -Original Message-
>> From: Theodore Ts'o 
>>

< massive snip >

I'll reply in more detail to some other earlier messages in this thread
later.

This reply is an attempt to return to the intent of my original reply to
patch 0 of this series.


>> Ultimately, I'm a pragmatist.  If KTF serves your needs best, good for
>> you.  If other approaches are better for other parts of the kernel,
>> let's not try to impose a strict "There Must Be Only One" religion.
>> That's already not true today, and for good reason.  There are many
>> different kinds of kernel code, and many different types of test
>> philosophies.  Trying to force all kernel testing into a single
>> Procrustean Bed is simply not productive.
> 
> Had to look up "Procrustean Bed" - great phrase.  :-)
> 
> I'm not of the opinion that there must only be one test framework
> in the kernel. But we should avoid unnecessary multiplication. Every
> person is going to have a different idea for where the line of necessity
> is drawn.  My own opinion is that what KUnit is adding is different enough
> from kselftest that it's a valuable addition.
> 
>  -- Tim

My first reply to patch 0 was in the context of knowing next to nothing
about kselftest.  My level of understanding was essentially from slideware.
(As the thread progressed, I dug a little deeper into kselftest, so now have
a slightly better, though still fairly surface understanding of kselftest).

Maybe I did not explain myself clearly enough in that reply.  I will try
again here.

Patch 0 provided a one paragraph explanation of why KUnit exists, titled

   "## What's so special about unit testing?"

Patch 0 also provided a statement that it is not meant to replace
kselftest, in a paragraph titled

   "## Is KUnit trying to replace other testing frameworks for the kernel?"

I stated:

   "My understanding is that the intent of KUnit is to avoid booting a kernel on
   real hardware or in a virtual machine.  That seems to be a matter of
   semantics to me because isn't invoking a UML Linux just running the
   Linux kernel in a different form of virtualization?

   So I do not understand why KUnit is an improvement over kselftest.

   ...

   What am I missing?"


I was looking for a fuller, better explanation than was given in patch 0
of how KUnit provides something that is different from what kselftest
provides for creating unit tests for kernel code.

New question "(2)":

(2) If KUnit provides something unique, then the obvious follow-on
question would be: does it make sense to (a) integrate that feature into
kselftest, (b) integrate the current kselftest in kernel test functionality
into KUnit and convert the in kernel test portion of kselftest to use
KUnit, or (c) KUnit and kselftest are so different that they need to
remain separate features.

*  Please do not reply to this email with a discussion of "(2)".
*  Such a discussion is premature if there is not an answer
*  to my first question.

Observation (3), rephrased:

(3) I also brought up the issue that if option (c) was the answer to
question "(2)" (instead of either option (a) or option (b)) then this
is extra overhead for any developer or maintainer involved in different
subsystems that use the different frameworks.  I intended this as one
possible motivation for why my first question mattered.

Ted grabbed hold of this issue and basically ignored what I intended
to be my core question.  Nobody else has answered my core question,
though Knut's first reply did manage to get somewhere near the
intent of my core question.

*  Please do not reply to this email with a discussion of "(3)".
*  Such a discussion is premature if there is not an answer
*  to my first question.
*
*  Also that discussion is already occurring in this thread,
*  feel free to discuss it there if you want, even though I
*  feel such discussion is premature.

I still do not have an answer to my original question.

-Frank


Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-05-08 Thread Frank Rowand
On 5/8/19 6:44 PM, Theodore Ts'o wrote:
> On Wed, May 08, 2019 at 05:58:49PM -0700, Frank Rowand wrote:
>>
>> If KUnit is added to the kernel, and a subsystem that I am submitting
>> code for has chosen to use KUnit instead of kselftest, then yes, I do
>> *have* to use KUnit if my submission needs to contain a test for the
>> code unless I want to convince the maintainer that somehow my case
>> is special and I prefer to use kselftest instead of KUnit.
> 
> That's going to be between you and the maintainer.  Today, if you want
> to submit a substantive change to xfs or ext4, you're going to be
> asked to create test for that new feature using xfstests.  It doesn't
> matter that xfstests isn't in the kernel --- if that's what is
> required by the maintainer.

Yes, that is exactly what I was saying.

Please do not cut the pertinent parts of context that I am replying to.


>>> supposed to be a simple way to run a large number of small tests
>>> for specific small components in a system.
>>
>> kselftest also supports running a subset of tests.  That subset of tests
>> can also be a large number of small tests.  There is nothing inherent
>> in KUnit vs kselftest in this regard, as far as I am aware.


> The big difference is that kselftests are driven by a C program that
> runs in userspace.  Take a look at 
> tools/testing/selftests/filesystem/dnotify_test.c
> it has a main(int argc, char *argv) function.
> 
> In contrast, KUnit are fragments of C code which run in the kernel;
> not in userspace.  This allows us to test internal functions inside
> complex file system (such as the block allocator in ext4) directly.
> This makes it *fundamentally* different from kselftest.

No, totally incorrect.  kselftest also supports in-kernel test modules,
as I mentioned in another reply to this patch.

This is talking past each other a little bit, because your next reply
is a reply to my email about modules.

-Frank

> 
> Cheers,
> 
>   - Ted
> 



Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-05-08 Thread Frank Rowand
On 5/8/19 6:58 PM, Theodore Ts'o wrote:
> On Wed, May 08, 2019 at 05:43:35PM -0700, Frank Rowand wrote:
>> kselftest provides a mechanism for in-kernel tests via modules.  For
>> example, see:
>>
>>   tools/testing/selftests/vm/run_vmtests invokes:
>> tools/testing/selftests/vm/test_vmalloc.sh
>>   loads module:
>> test_vmalloc
>> (which is built from lib/test_vmalloc.c if CONFIG_TEST_VMALLOC)
> 
> The majority of the kselftests are implemented as userspace programs.

Non-argument.


> You *can* run in-kernel tests using modules; but there is no framework
> for the in-kernel code found in the test modules, which means each of
> the in-kernel tests has to create its own in-kernel test
> infrastructure.

Why create an entire new subsystem (KUnit) when you can add a header
file (and .c code as appropriate) that outputs the proper TAP formatted
results from kselftest kernel test modules?

There are already a multitude of in-kernel test modules used by
kselftest.  It would be good if they all used a common TAP-compliant
mechanism to report results.

 
> That's much like saying you can use vice grips to turn a nut or
> bolt-head.  You *can*, but it might be that using a monkey wrench
> would be a much better tool that is much easier.
> 
> What would you say to a wood worker objecting that a toolbox should
> contain a monkey wrench because he already knows how to use vise
> grips, and his tiny brain shouldn't be forced to learn how to use a
> wrench when he knows how to use a vise grip, which is a perfectly good
> tool?
> 
> If you want to use vice grips as a hammer, screwdriver, monkey wrench,
> etc.  there's nothing stopping you from doing that.  But it's not fair
> to object to other people who might want to use better tools.
> 
> The reality is that we have a lot of testing tools.  It's not just
> kselftests.  There is xfstests for file system code, blktests for
> block layer tests, etc.   We use the right tool for the right job.

More specious arguments.

-Frank

> 
>   - Ted
> 



Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-05-08 Thread Frank Rowand
On 5/7/19 8:23 AM, shuah wrote:
> On 5/7/19 2:01 AM, Greg KH wrote:
>> On Mon, May 06, 2019 at 08:14:12PM -0700, Frank Rowand wrote:
>>> On 5/1/19 4:01 PM, Brendan Higgins wrote:
>>>> ## TLDR
>>>>
>>>> I rebased the last patchset on 5.1-rc7 in hopes that we can get this in
>>>> 5.2.
>>>>
>>>> Shuah, I think you, Greg KH, and myself talked off thread, and we agreed
>>>> we would merge through your tree when the time came? Am I remembering
>>>> correctly?
>>>>
>>>> ## Background
>>>>
>>>> This patch set proposes KUnit, a lightweight unit testing and mocking
>>>> framework for the Linux kernel.
>>>>
>>>> Unlike Autotest and kselftest, KUnit is a true unit testing framework;
>>>> it does not require installing the kernel on a test machine or in a VM
>>>> and does not require tests to be written in userspace running on a host
>>>> kernel. Additionally, KUnit is fast: From invocation to completion KUnit
>>>> can run several dozen tests in under a second. Currently, the entire
>>>> KUnit test suite for KUnit runs in under a second from the initial
>>>> invocation (build time excluded).
>>>>
>>>> KUnit is heavily inspired by JUnit, Python's unittest.mock, and
>>>> Googletest/Googlemock for C++. KUnit provides facilities for defining
>>>> unit test cases, grouping related test cases into test suites, providing
>>>> common infrastructure for running tests, mocking, spying, and much more.
>>>
>>> As a result of the emails replying to this patch thread, I am now
>>> starting to look at kselftest.  My level of understanding is based
>>> on some slide presentations, an LWN article, 
>>> https://kselftest.wiki.kernel.org/
>>> and a _tiny_ bit of looking at kselftest code.
>>>
>>> tl;dr; I don't really understand kselftest yet.
>>>
>>>
>>> (1) why KUnit exists
>>>
>>>> ## What's so special about unit testing?
>>>>
>>>> A unit test is supposed to test a single unit of code in isolation,
>>>> hence the name. There should be no dependencies outside the control of
>>>> the test; this means no external dependencies, which makes tests orders
>>>> of magnitude faster. Likewise, since there are no external dependencies,
>>>> there are no hoops to jump through to run the tests. Additionally, this
>>>> makes unit tests deterministic: a failing unit test always indicates a
>>>> problem. Finally, because unit tests necessarily have finer granularity,
>>>> they are able to test all code paths easily, solving the classic problem
>>>> of difficulty in exercising error handling code.
>>>
>>> (2) KUnit is not meant to replace kselftest
>>>
>>>> ## Is KUnit trying to replace other testing frameworks for the kernel?
>>>>
>>>> No. Most existing tests for the Linux kernel are end-to-end tests, which
>>>> have their place. A well tested system has lots of unit tests, a
>>>> reasonable number of integration tests, and some end-to-end tests. KUnit
>>>> is just trying to address the unit test space which is currently not
>>>> being addressed.
>>>
>>> My understanding is that the intent of KUnit is to avoid booting a kernel on
>>> real hardware or in a virtual machine.  That seems to be a matter of
>>> semantics to me because isn't invoking a UML Linux just running the
>>> Linux kernel in a different form of virtualization?
>>>
>>> So I do not understand why KUnit is an improvement over kselftest.
> 
> They are in two different categories. Kselftest falls into the black-box
> regression-test-suite category: a collection of user-space tests, with a
> few kernel test modules back-ending the tests in some cases.
> 
> Kselftest can be used by both kernel developers and users and provides
> a good way to regression test releases in test rings.
> 
> KUnit is in the white-box category and is a better fit as a unit test
> framework for development, and it provides in-kernel testing. I wouldn't
> view one as replacing the other. They just provide coverage for different
> areas of testing.

I don't see what about kselftest or KUnit is inherently black box or
white box.  I would expect both frameworks to be used for either type
of testing.


> I wouldn't view KUnit as something that would be easily run in test rings,
> for example.

I don't see

Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-05-08 Thread Frank Rowand
Hi Ted,

On 5/7/19 10:22 AM, Theodore Ts'o wrote:
> On Tue, May 07, 2019 at 10:01:19AM +0200, Greg KH wrote:
Not very helpful to cut the text here without explicitly indicating that
text was cut (yes, I know the ">>>" will be a clue for the careful reader);
it loses the set-up for my question.


>>> My understanding is that the intent of KUnit is to avoid booting a kernel on
>>> real hardware or in a virtual machine.  That seems to be a matter of
>>> semantics to me because isn't invoking a UML Linux just running the
>>> Linux kernel in a different form of virtualization?
>>>
>>> So I do not understand why KUnit is an improvement over kselftest.
>>>
>>> It seems to me that KUnit is just another piece of infrastructure that I
>>> am going to have to be familiar with as a kernel developer.  More overhead,
>>> more information to stuff into my tiny little brain.
>>>
>>> I would guess that some developers will focus on just one of the two test
>>> environments (and some will focus on both), splitting the development
>>> resources instead of pooling them on a common infrastructure.
>>>
>>> What am I missing?
>>
>> kselftest provides no in-kernel framework for testing kernel code
>> specifically.  That should be what kunit provides, an "easy" way to
>> write in-kernel tests for things.
>>
>> Brendan, did I get it right?
> 
> Yes, that's basically right.  You don't *have* to use KUnit.  It's

If KUnit is added to the kernel, and a subsystem that I am submitting
code for has chosen to use KUnit instead of kselftest, then yes, I do
*have* to use KUnit if my submission needs to contain a test for the
code, unless I want to convince the maintainer that somehow my case
is special and I prefer to use kselftest instead of KUnit.


> supposed to be a simple way to run a large number of small tests for
> specific small components in a system.

kselftest also supports running a subset of tests.  That subset of tests
can also be a large number of small tests.  There is nothing inherent
in KUnit vs kselftest in this regard, as far as I am aware.


> For example, I currently use xfstests using KVM and GCE to test all of
> ext4.  These tests require using multiple 5 GB and 20 GB virtual disks,
> and it works by mounting ext4 file systems and exercising ext4 through
> the system call interfaces, using userspace tools such as fsstress,
> fsx, fio, etc.  It requires time overhead to start the VM, create and
> allocate virtual disks, etc.  For example, to run a single 3-second
> xfstest (generic/001), it requires a full 10 seconds to run it via
> kvm-xfstests.
> 


> KUnit is something else; it's specifically intended to allow you to
> create lightweight tests quickly and easily, and by reducing the
> effort needed to write and run unit tests, hopefully we'll have a lot
> more of them and thus improve kernel quality.

The same is true of kselftest.  You can create lightweight tests in
kselftest.
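
To make that concrete, a lightweight kselftest can be a tiny standalone
C program.  A minimal sketch, assuming the ksft_*() helpers from
tools/testing/selftests/kselftest.h (the check itself is a made-up
placeholder):

  #include "../kselftest.h"

  int main(void)
  {
          ksft_print_header();
          ksft_set_plan(1);

          /* Placeholder check; a real test would poke at kernel
           * behavior through syscalls, /sys, /proc, etc.
           */
          if (1 + 1 == 2) {
                  ksft_test_result_pass("addition works\n");
                  return ksft_exit_pass();
          }

          ksft_test_result_fail("addition is broken\n");
          return ksft_exit_fail();
  }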


> As an example, I have a volunteer working on developing KUnit tests
> for ext4.  We're going to start by testing the ext4 extent status
> tree.  The source code is at fs/ext4/extent_status.c; it's
> approximately 1800 LOC.  The KUnit tests for the extent status tree
> will exercise all of the corner cases for the various extent status
> tree functions --- e.g., ext4_es_insert_delayed_block(),
> ext4_es_remove_extent(), ext4_es_cache_extent(), etc.  And it will do
> this in isolation without our needing to create a test file system or
> using a test block device.
> 

> Next we'll test the ext4 block allocator, again in isolation.  To test
> the block allocator we will have to write "mock functions" which
> simulate reading allocation bitmaps from disk.  Again, this will allow
> the test writer to explicitly construct corner cases and validate that
> the block allocator works as expected without having to reverse
> engineer file system data structures which will force a particular
> code path to be executed.

This would be a difference, but mock functions do not exist in KUnit.
The KUnit test will call the real kernel function in the UML kernel.
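
For reference, what Ted describes would presumably be built on the usual
plain-C substitution trick: route the dependency through a function
pointer that the test can override.  A hypothetical sketch (none of
these names exist in KUnit or ext4; they are invented for illustration):

  #include <linux/string.h>
  #include <linux/types.h>

  /* The code under test reads bitmaps only through this hook. */
  struct bitmap_ops {
          int (*read_bitmap)(unsigned long block, void *buf, size_t len);
  };

  /* Production implementation: really reads from the block device. */
  static int read_bitmap_from_disk(unsigned long block, void *buf,
                                   size_t len)
  {
          /* submit I/O, wait for completion, copy the result ... */
          return 0;
  }

  /* Test implementation: hands back a bitmap the test constructed. */
  static int read_bitmap_fake(unsigned long block, void *buf, size_t len)
  {
          memset(buf, 0xff, len);   /* pretend everything is allocated */
          return 0;
  }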

I think Brendan has indicated a desire to have mock functions in the
future.

Brendan, do I understand that correctly?

-Frank

> So this is why it's largely irrelevant to me that KUnit uses UML.  In
> fact, it's a feature.  We're not testing device drivers, or the
> scheduler, or anything else architecture-specific.  UML is not about
> virtualization.  What it's about in this context is allowing us to
> start running test code as quickly as possible.  Booting KVM takes
> about 3-4 seconds, and this includes initializing virtio_scsi and
> other device drivers.  If by using UML we can hold the amount of
> unnecessary kernel subsystem initialization down to the absolute
> minimum, and if it means that we can communicate with the test
> framework via a userspace "printf" from UML/KUnit code, as opposed to
> via a virtual serial port to KVM's virtual console, it all makes for
> lighter

Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-05-08 Thread Frank Rowand
On 5/7/19 1:01 AM, Greg KH wrote:
> On Mon, May 06, 2019 at 08:14:12PM -0700, Frank Rowand wrote:
>> On 5/1/19 4:01 PM, Brendan Higgins wrote:
>>> ## TLDR
>>>
>>> I rebased the last patchset on 5.1-rc7 in hopes that we can get this in
>>> 5.2.
>>>
>>> Shuah, I think you, Greg KH, and myself talked off thread, and we agreed
>>> we would merge through your tree when the time came? Am I remembering
>>> correctly?
>>>
>>> ## Background
>>>
>>> This patch set proposes KUnit, a lightweight unit testing and mocking
>>> framework for the Linux kernel.
>>>
>>> Unlike Autotest and kselftest, KUnit is a true unit testing framework;
>>> it does not require installing the kernel on a test machine or in a VM
>>> and does not require tests to be written in userspace running on a host
>>> kernel. Additionally, KUnit is fast: From invocation to completion KUnit
>>> can run several dozen tests in under a second. Currently, the entire
>>> KUnit test suite for KUnit runs in under a second from the initial
>>> invocation (build time excluded).
>>>
>>> KUnit is heavily inspired by JUnit, Python's unittest.mock, and
>>> Googletest/Googlemock for C++. KUnit provides facilities for defining
>>> unit test cases, grouping related test cases into test suites, providing
>>> common infrastructure for running tests, mocking, spying, and much more.
>>
>> As a result of the emails replying to this patch thread, I am now
>> starting to look at kselftest.  My level of understanding is based
>> on some slide presentations, an LWN article, 
>> https://kselftest.wiki.kernel.org/
>> and a _tiny_ bit of looking at kselftest code.
>>
>> tl;dr; I don't really understand kselftest yet.
>>
>>
>> (1) why KUnit exists
>>
>>> ## What's so special about unit testing?
>>>
>>> A unit test is supposed to test a single unit of code in isolation,
>>> hence the name. There should be no dependencies outside the control of
>>> the test; this means no external dependencies, which makes tests orders
>>> of magnitude faster. Likewise, since there are no external dependencies,
>>> there are no hoops to jump through to run the tests. Additionally, this
>>> makes unit tests deterministic: a failing unit test always indicates a
>>> problem. Finally, because unit tests necessarily have finer granularity,
>>> they are able to test all code paths easily, solving the classic problem
>>> of difficulty in exercising error handling code.
>>
>> (2) KUnit is not meant to replace kselftest
>>
>>> ## Is KUnit trying to replace other testing frameworks for the kernel?
>>>
>>> No. Most existing tests for the Linux kernel are end-to-end tests, which
>>> have their place. A well tested system has lots of unit tests, a
>>> reasonable number of integration tests, and some end-to-end tests. KUnit
>>> is just trying to address the unit test space which is currently not
>>> being addressed.
>>
>> My understanding is that the intent of KUnit is to avoid booting a kernel on
>> real hardware or in a virtual machine.  That seems to be a matter of
>> semantics to me because isn't invoking a UML Linux just running the
>> Linux kernel in a different form of virtualization?
>>
>> So I do not understand why KUnit is an improvement over kselftest.
>>
>> It seems to me that KUnit is just another piece of infrastructure that I
>> am going to have to be familiar with as a kernel developer.  More overhead,
>> more information to stuff into my tiny little brain.
>>
>> I would guess that some developers will focus on just one of the two test
>> environments (and some will focus on both), splitting the development
>> resources instead of pooling them on a common infrastructure.
>>
>> What am I missing?
> 
> kselftest provides no in-kernel framework for testing kernel code
> specifically.  That should be what kunit provides, an "easy" way to
> write in-kernel tests for things.

kselftest provides a mechanism for in-kernel tests via modules.  For
example, see:

  tools/testing/selftests/vm/run_vmtests invokes:
tools/testing/selftests/vm/test_vmalloc.sh
  loads module:
test_vmalloc
(which is built from lib/test_vmalloc.c if CONFIG_TEST_VMALLOC)

A very quick and dirty search (likely to miss some tests) finds modules:

  test_bitmap
  test_bpf
  test_firmware
  test_printf
  test_static_key_base
  test_static_keys
  test_user_copy
  test_vmalloc
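
Those modules generally follow the same pattern: run the checks from
module_init() and report counts via printk().  A rough sketch of that
shape (illustrative only, not a copy of any existing module):

  #include <linux/errno.h>
  #include <linux/module.h>
  #include <linux/printk.h>
  #include <linux/types.h>

  static unsigned int tests_run, tests_failed;

  static void check(bool ok, const char *name)
  {
          tests_run++;
          if (!ok) {
                  tests_failed++;
                  pr_err("test_example: %s failed\n", name);
          }
  }

  static int __init test_example_init(void)
  {
          check(1 + 1 == 2, "addition");

          pr_info("test_example: %u tests run, %u failures\n",
                  tests_run, tests_failed);
          /* some of these modules fail the module load on test failure */
          return tests_failed ? -EINVAL : 0;
  }
  module_init(test_example_init);

  static void __exit test_example_exit(void)
  {
  }
  module_exit(test_example_exit);

  MODULE_LICENSE("GPL");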

-Frank

> 
> Brendan, did I get it right?
> 
> thanks,
> 
> greg k-h
> .
> 



Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-05-06 Thread Frank Rowand
On 5/1/19 4:01 PM, Brendan Higgins wrote:
> ## TLDR
> 
> I rebased the last patchset on 5.1-rc7 in hopes that we can get this in
> 5.2.
> 
> Shuah, I think you, Greg KH, and myself talked off thread, and we agreed
> we would merge through your tree when the time came? Am I remembering
> correctly?
> 
> ## Background
> 
> This patch set proposes KUnit, a lightweight unit testing and mocking
> framework for the Linux kernel.
> 
> Unlike Autotest and kselftest, KUnit is a true unit testing framework;
> it does not require installing the kernel on a test machine or in a VM
> and does not require tests to be written in userspace running on a host
> kernel. Additionally, KUnit is fast: From invocation to completion KUnit
> can run several dozen tests in under a second. Currently, the entire
> KUnit test suite for KUnit runs in under a second from the initial
> invocation (build time excluded).
> 
> KUnit is heavily inspired by JUnit, Python's unittest.mock, and
> Googletest/Googlemock for C++. KUnit provides facilities for defining
> unit test cases, grouping related test cases into test suites, providing
> common infrastructure for running tests, mocking, spying, and much more.

As a result of the emails replying to this patch thread, I am now
starting to look at kselftest.  My level of understanding is based
on some slide presentations, an LWN article, https://kselftest.wiki.kernel.org/
and a _tiny_ bit of looking at kselftest code.

tl;dr; I don't really understand kselftest yet.


(1) why KUnit exists

> ## What's so special about unit testing?
> 
> A unit test is supposed to test a single unit of code in isolation,
> hence the name. There should be no dependencies outside the control of
> the test; this means no external dependencies, which makes tests orders
> of magnitude faster. Likewise, since there are no external dependencies,
> there are no hoops to jump through to run the tests. Additionally, this
> makes unit tests deterministic: a failing unit test always indicates a
> problem. Finally, because unit tests necessarily have finer granularity,
> they are able to test all code paths easily, solving the classic problem
> of difficulty in exercising error handling code.

(2) KUnit is not meant to replace kselftest

> ## Is KUnit trying to replace other testing frameworks for the kernel?
> 
> No. Most existing tests for the Linux kernel are end-to-end tests, which
> have their place. A well tested system has lots of unit tests, a
> reasonable number of integration tests, and some end-to-end tests. KUnit
> is just trying to address the unit test space which is currently not
> being addressed.

My understanding is that the intent of KUnit is to avoid booting a kernel on
real hardware or in a virtual machine.  That seems to be a matter of semantics
to me because isn't invoking a UML Linux just running the Linux kernel in
a different form of virtualization?

So I do not understand why KUnit is an improvement over kselftest.

It seems to me that KUnit is just another piece of infrastructure that I
am going to have to be familiar with as a kernel developer.  More overhead,
more information to stuff into my tiny little brain.

I would guess that some developers will focus on just one of the two test
environments (and some will focus on both), splitting the development
resources instead of pooling them on a common infrastructure.

What am I missing?

-Frank


> 
> ## More information on KUnit
> 
> There is a bunch of documentation near the end of this patch set that
> describes how to use KUnit and best practices for writing unit tests.
> For convenience I am hosting the compiled docs here:
> https://google.github.io/kunit-docs/third_party/kernel/docs/
> Additionally for convenience, I have applied these patches to a branch:
> https://kunit.googlesource.com/linux/+/kunit/rfc/v5.1-rc7/v1
> The repo may be cloned with:
> git clone https://kunit.googlesource.com/linux
> This patchset is on the kunit/rfc/v5.1-rc7/v1 branch.
> 
> ## Changes Since Last Version
> 
> None. I just rebased the last patchset on v5.1-rc7.
> 



Re: [PATCH v2 12/17] kunit: tool: add Python wrappers for running KUnit tests

2019-05-05 Thread Frank Rowand
On 5/3/19 4:14 PM, Brendan Higgins wrote:
>> On 5/2/19 10:36 PM, Brendan Higgins wrote:
>>> On Thu, May 2, 2019 at 6:45 PM Frank Rowand  wrote:
>>>>
>>>> On 5/2/19 4:45 PM, Brendan Higgins wrote:
>>>>> On Thu, May 2, 2019 at 2:16 PM Frank Rowand  
>>>>> wrote:
>>>>>>
>>>>>> On 5/2/19 11:07 AM, Brendan Higgins wrote:
>>>>>>> On Thu, May 2, 2019 at 4:02 AM Greg KH  
>>>>>>> wrote:
>>>>>>>>
>>>>>>>> On Wed, May 01, 2019 at 04:01:21PM -0700, Brendan Higgins wrote:
>>>>>>>>> From: Felix Guo 
>>>>>>>>>
>>>>>>>>> The ultimate goal is to create minimal isolated test binaries; in the
>>>>>>>>> meantime we are using UML to provide the infrastructure to run
>>>>>>>>> tests, so define an abstract way to configure and run tests that
>>>>>>>>> allow us to change the context in which tests are built without
>>>>>>>>> affecting the user.
>>>>>>>>> This also makes pretty and dynamic error reporting, and a lot of other
>>>>>>>>> nice features easier.
>>>>>>>>>
>>>>>>>>> kunit_config.py:
>>>>>>>>>   - parse .config and Kconfig files.
>>>>>>>>>
>>>>>>>>> kunit_kernel.py: provides helper functions to:
>>>>>>>>>   - configure the kernel using kunitconfig.
>>>>>>>>>   - build the kernel with the appropriate configuration.
>>>>>>>>>   - provide function to invoke the kernel and stream the output back.
>>>>>>>>>
>>>>>>>>> Signed-off-by: Felix Guo 
>>>>>>>>> Signed-off-by: Brendan Higgins 
>>>>>>>>
>>>>>>>> Ah, here's probably my answer to my previous logging format question,
>>>>>>>> right?  What's the chance that these wrappers output stuff in a
>>>>>>>> standard format that test-framework-tools can already parse?  :)
>>>>>
>>>>> To be clear, the test-framework-tools format we are talking about is
>>>>> TAP13[1], correct?
>>>>
>>>> I'm not sure what the test community prefers for a format.  I'll let them
>>>> jump in and debate that question.
>>>>
>>>>
>>>>>
>>>>> My understanding is that is what kselftest is being converted to use.
>>>>>
>>>>>>>
>>>>>>> It should be pretty easy to do. I had some patches that pack up the
>>>>>>> results into a serialized format for a presubmit service; it should be
>>>>>>> pretty straightforward to take the same logic and just change the
>>>>>>> output format.
>>>>>>
>>>>>> When examining and trying out the previous versions of the patch I found
>>>>>> the wrappers useful to provide information about how to control and use
>>>>>> the tests, but I had no interest in using the scripts as they do not
>>>>>> fit in with my personal environment and workflow.
>>>>>>
>>>>>> In the previous versions of the patch, these helper scripts are optional,
>>>>>> which is good for my use case.  If the helper scripts are required to
>>>>>
>>>>> They are still optional.
>>>>>
>>>>>> get the data into the proper format then the scripts are not quite so
>>>>>> optional, they become the expected environment.  I think the proper
>>>>>> format should exist without the helper scripts.
>>>>>
>>>>> That's a good point. A couple things,
>>>>>
>>>>> First off, supporting TAP13, either in the kernel or the wrapper
>>>>> script is not hard, but I don't think that is the real issue that you
>>>>> raise.
>>>>>
>>>>> If your only concern is that you will always be able to have human
>>>>> readable KUnit results printed to the kernel log, that is a guarantee
>>>>> I feel comfortable making. Beyond that, I think it is going to take a
>>>>> long while before I wo

Re: [PATCH v2 12/17] kunit: tool: add Python wrappers for running KUnit tests

2019-05-03 Thread Frank Rowand
On 5/2/19 10:36 PM, Brendan Higgins wrote:
> On Thu, May 2, 2019 at 6:45 PM Frank Rowand  wrote:
>>
>> On 5/2/19 4:45 PM, Brendan Higgins wrote:
>>> On Thu, May 2, 2019 at 2:16 PM Frank Rowand  wrote:
>>>>
>>>> On 5/2/19 11:07 AM, Brendan Higgins wrote:
>>>>> On Thu, May 2, 2019 at 4:02 AM Greg KH  wrote:
>>>>>>
>>>>>> On Wed, May 01, 2019 at 04:01:21PM -0700, Brendan Higgins wrote:
>>>>>>> From: Felix Guo 
>>>>>>>
>>>>>>> The ultimate goal is to create minimal isolated test binaries; in the
>>>>>>> meantime we are using UML to provide the infrastructure to run tests, so
>>>>>>> define an abstract way to configure and run tests that allow us to
>>>>>>> change the context in which tests are built without affecting the user.
>>>>>>> This also makes pretty and dynamic error reporting, and a lot of other
>>>>>>> nice features easier.
>>>>>>>
>>>>>>> kunit_config.py:
>>>>>>>   - parse .config and Kconfig files.
>>>>>>>
>>>>>>> kunit_kernel.py: provides helper functions to:
>>>>>>>   - configure the kernel using kunitconfig.
>>>>>>>   - build the kernel with the appropriate configuration.
>>>>>>>   - provide function to invoke the kernel and stream the output back.
>>>>>>>
>>>>>>> Signed-off-by: Felix Guo 
>>>>>>> Signed-off-by: Brendan Higgins 
>>>>>>
>>>>>> Ah, here's probably my answer to my previous logging format question,
>>>>>> right?  What's the chance that these wrappers output stuff in a standard
>>>>>> format that test-framework-tools can already parse?  :)
>>>
>>> To be clear, the test-framework-tools format we are talking about is
>>> TAP13[1], correct?
>>
>> I'm not sure what the test community prefers for a format.  I'll let them
>> jump in and debate that question.
>>
>>
>>>
>>> My understanding is that is what kselftest is being converted to use.
>>>
>>>>>
>>>>> It should be pretty easy to do. I had some patches that pack up the
>>>>> results into a serialized format for a presubmit service; it should be
>>>>> pretty straightforward to take the same logic and just change the
>>>>> output format.
>>>>
>>>> When examining and trying out the previous versions of the patch I found
>>>> the wrappers useful to provide information about how to control and use
>>>> the tests, but I had no interest in using the scripts as they do not
>>>> fit in with my personal environment and workflow.
>>>>
>>>> In the previous versions of the patch, these helper scripts are optional,
>>>> which is good for my use case.  If the helper scripts are required to
>>>
>>> They are still optional.
>>>
>>>> get the data into the proper format then the scripts are not quite so
>>>> optional, they become the expected environment.  I think the proper
>>>> format should exist without the helper scripts.
>>>
>>> That's a good point. A couple things,
>>>
>>> First off, supporting TAP13, either in the kernel or the wrapper
>>> script is not hard, but I don't think that is the real issue that you
>>> raise.
>>>
>>> If your only concern is that you will always be able to have human
>>> readable KUnit results printed to the kernel log, that is a guarantee
>>> I feel comfortable making. Beyond that, I think it is going to take a
>>> long while before I would feel comfortable guaranteeing anything about
>>> how will KUnit work, what kind of data it will want to expose, and how
>>> it will be organized. I think the wrapper script provides a nice
>>> facade that I can maintain, can mediate between the implementation
>>> details and the user, and can mediate between the implementation
>>> details and other pieces of software that might want to consume
>>> results.
>>>
>>> [1] https://testanything.org/tap-version-13-specification.html
>>
>> My concern is based on a focus on my little part of the world
>> (which in _previous_ versions of the patch series was the devicetree
>> unittest.c tests being converted to use the kunit infrastructure).

Re: [PATCH v2 12/17] kunit: tool: add Python wrappers for running KUnit tests

2019-05-02 Thread Frank Rowand
On 5/2/19 4:45 PM, Brendan Higgins wrote:
> On Thu, May 2, 2019 at 2:16 PM Frank Rowand  wrote:
>>
>> On 5/2/19 11:07 AM, Brendan Higgins wrote:
>>> On Thu, May 2, 2019 at 4:02 AM Greg KH  wrote:
>>>>
>>>> On Wed, May 01, 2019 at 04:01:21PM -0700, Brendan Higgins wrote:
>>>>> From: Felix Guo 
>>>>>
>>>>> The ultimate goal is to create minimal isolated test binaries; in the
>>>>> meantime we are using UML to provide the infrastructure to run tests, so
>>>>> define an abstract way to configure and run tests that allow us to
>>>>> change the context in which tests are built without affecting the user.
>>>>> This also makes pretty and dynamic error reporting, and a lot of other
>>>>> nice features easier.
>>>>>
>>>>> kunit_config.py:
>>>>>   - parse .config and Kconfig files.
>>>>>
>>>>> kunit_kernel.py: provides helper functions to:
>>>>>   - configure the kernel using kunitconfig.
>>>>>   - build the kernel with the appropriate configuration.
>>>>>   - provide function to invoke the kernel and stream the output back.
>>>>>
>>>>> Signed-off-by: Felix Guo 
>>>>> Signed-off-by: Brendan Higgins 
>>>>
>>>> Ah, here's probably my answer to my previous logging format question,
>>>> right?  What's the chance that these wrappers output stuff in a standard
>>>> format that test-framework-tools can already parse?  :)
> 
> To be clear, the test-framework-tools format we are talking about is
> TAP13[1], correct?

I'm not sure what the test community prefers for a format.  I'll let them
jump in and debate that question.


> 
> My understanding is that is what kselftest is being converted to use.
> 
>>>
>>> It should be pretty easy to do. I had some patches that pack up the
>>> results into a serialized format for a presubmit service; it should be
>>> pretty straightforward to take the same logic and just change the
>>> output format.
>>
>> When examining and trying out the previous versions of the patch I found
>> the wrappers useful to provide information about how to control and use
>> the tests, but I had no interest in using the scripts as they do not
>> fit in with my personal environment and workflow.
>>
>> In the previous versions of the patch, these helper scripts are optional,
>> which is good for my use case.  If the helper scripts are required to
> 
> They are still optional.
> 
>> get the data into the proper format then the scripts are not quite so
>> optional, they become the expected environment.  I think the proper
>> format should exist without the helper scripts.
> 
> That's a good point. A couple things,
> 
> First off, supporting TAP13, either in the kernel or the wrapper
> script is not hard, but I don't think that is the real issue that you
> raise.
> 
> If your only concern is that you will always be able to have human
> readable KUnit results printed to the kernel log, that is a guarantee
> I feel comfortable making. Beyond that, I think it is going to take a
> long while before I would feel comfortable guaranteeing anything about
> how will KUnit work, what kind of data it will want to expose, and how
> it will be organized. I think the wrapper script provides a nice
> facade that I can maintain, can mediate between the implementation
> details and the user, and can mediate between the implementation
> details and other pieces of software that might want to consume
> results.
> 
> [1] https://testanything.org/tap-version-13-specification.html

My concern is based on a focus on my little part of the world
(which in _previous_ versions of the patch series was the devicetree
unittest.c tests being converted to use the kunit infrastructure).
If I step back and think of the entire kernel globally I may end
up with a different conclusion - but I'm going to remain myopic
for this email.

I want the test results to be usable by me and my fellow
developers.  I prefer that the test results be easily accessible
(current printk() implementation means that kunit messages are
just as accessible as the current unittest.c printk() output).
If the printk() output needs to be filtered through a script
to generate the actual test results then that is sub-optimal
to me.  It is one more step added to my workflow.  And
potentially with an embedded target a major pain to get a
data file (the kernel log file) transferred from a target
to my development host.
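
To be explicit about what I want: if the kernel side itself printed the
structured result lines, the raw kernel log would already be the test
report, with no wrapper script in the loop.  A hypothetical sketch of
TAP-style reporting straight from printk() (these helpers are not
existing KUnit API):

  #include <linux/printk.h>
  #include <linux/types.h>

  /* Hypothetical: emit the TAP header and plan before any test runs. */
  static void report_plan(unsigned int n_cases)
  {
          pr_info("TAP version 13\n");
          pr_info("1..%u\n", n_cases);
  }

  /* Hypothetical: emit one TAP result line per test case. */
  static void report_case(unsigned int nr, bool passed, const char *name)
  {
          pr_info("%s %u - %s\n", passed ? "ok" : "not ok", nr, name);
  }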

I want a reported test failure to be easy to trace back to th

Re: [PATCH v2 04/17] kunit: test: add kunit_stream a std::stream like logger

2019-05-02 Thread Frank Rowand
On 5/2/19 1:25 PM, Brendan Higgins wrote:
> On Thu, May 2, 2019 at 4:00 AM Greg KH  wrote:
>>
>> On Wed, May 01, 2019 at 04:01:13PM -0700, Brendan Higgins wrote:
>>> A lot of the expectation and assertion infrastructure prints out fairly
>>> complicated test failure messages, so add a C++ style log library for
>>> for logging test results.
>>
>> Ideally we would always use a standard logging format, like the
>> kselftest tests all are aiming to do.  That way the output can be easily
>> parsed by tools to see if the tests succeed/fail easily.
>>
>> Any chance of having this logging framework enforcing that format as
>> well?
> 
> I agree with your comment on the later patch that we should handle
> this at the wrapper script layer (KUnit tool).

This discussion is a little confusing, because it is spread across two
patches.

I do not agree that this should be handled in the wrapper script, as
noted in my reply to patch 12, so not repeating it here.

-Frank


Re: [PATCH v2 12/17] kunit: tool: add Python wrappers for running KUnit tests

2019-05-02 Thread Frank Rowand
On 5/2/19 11:07 AM, Brendan Higgins wrote:
> On Thu, May 2, 2019 at 4:02 AM Greg KH  wrote:
>>
>> On Wed, May 01, 2019 at 04:01:21PM -0700, Brendan Higgins wrote:
>>> From: Felix Guo 
>>>
>>> The ultimate goal is to create minimal isolated test binaries; in the
>>> meantime we are using UML to provide the infrastructure to run tests, so
>>> define an abstract way to configure and run tests that allow us to
>>> change the context in which tests are built without affecting the user.
>>> This also makes pretty and dynamic error reporting, and a lot of other
>>> nice features easier.
>>>
>>> kunit_config.py:
>>>   - parse .config and Kconfig files.
>>>
>>> kunit_kernel.py: provides helper functions to:
>>>   - configure the kernel using kunitconfig.
>>>   - build the kernel with the appropriate configuration.
>>>   - provide function to invoke the kernel and stream the output back.
>>>
>>> Signed-off-by: Felix Guo 
>>> Signed-off-by: Brendan Higgins 
>>
>> Ah, here's probably my answer to my previous logging format question,
>> right?  What's the chance that these wrappers output stuff in a standard
>> format that test-framework-tools can already parse?  :)
> 
> It should be pretty easy to do. I had some patches that pack up the
> results into a serialized format for a presubmit service; it should be
> pretty straightforward to take the same logic and just change the
> output format.

When examining and trying out the previous versions of the patch I found
the wrappers useful to provide information about how to control and use
the tests, but I had no interest in using the scripts as they do not
fit in with my personal environment and workflow.

In the previous versions of the patch, these helper scripts are optional,
which is good for my use case.  If the helper scripts are required to
get the data into the proper format then the scripts are not quite so
optional, they become the expected environment.  I think the proper
format should exist without the helper scripts.

-Frank


Re: [RFC v3 18/19] of: unittest: split out a couple of test cases from unittest

2019-03-21 Thread Frank Rowand
On 3/21/19 6:30 PM, Brendan Higgins wrote:
> On Thu, Mar 21, 2019 at 5:22 PM Frank Rowand  wrote:
>>
>> On 2/27/19 7:52 PM, Brendan Higgins wrote:

< snip >  but thanks for the comments in the snipped section.


>>
>> Thanks for leaving 18/19 and 19/19 off in v4.
> 
> Sure, no problem. It was pretty clear that it was a waste of both of
> our times to continue discussing those at this juncture. :-)
> 
> Do you still want me to try to convert the DT not-exactly-unittest to
> KUnit? I would kind of prefer (I don't feel *super* strongly about the
> matter) we don't call it that since I was intending for it to be the
> flagship initial example, but I certainly don't mind trying to clean
> this patch up to get it up to snuff. It's really just a question of
> whether it is worth it to you.

In the long term, if KUnit is adopted by the kernel, then I think it
probably makes sense for devicetree unittest to convert from using
our own unittest() function to report an individual test pass/fail
to instead use something like KUNIT_EXPECT_*() to provide more
consistent test messages to test frameworks.  That is assuming
KUNIT_EXPECT_*() provides comparable functionality.  I still have
not looked into that question since the converted tests (patch 15/17
in v4) still do not execute without throwing internal errors.

If that conversion occurred, I would also avoid the ASSERTs.
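
To show the shape of such a conversion on a single check, the first
fragment below is the current drivers/of/unittest.c style and the second
is the equivalent expectation from the quoted v4 patch (fragments only,
not a complete test function):

  /* current style: our unittest() macro counts the pass/fail */
  unittest(of_find_node_by_path("/testcase-data/") == NULL,
           "trailing '/' on /testcase-data/ should fail\n");

  /* converted style: the expectation reports through KUnit */
  KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
                      "trailing '/' on /testcase-data/ should fail\n");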

> 
> < snip >
> 
> Cheers!
> 



Re: [RFC v3 18/19] of: unittest: split out a couple of test cases from unittest

2019-03-21 Thread Frank Rowand
On 3/21/19 5:22 PM, Frank Rowand wrote:
> On 2/27/19 7:52 PM, Brendan Higgins wrote:

< snip >

>> Now I know that, hermeticity especially, but other features as well
>> (test suite summary, error on unused test case function, etc) are not
>> actually in KUnit as it is under consideration here. Maybe it would be
>> best to save these last two patches (18/19, and 19/19) until I have
>> these other features checked in and reconsider them then?
> 
> Thanks for leaving 18/19 and 19/19 off in v4.

Oops, they got into v4 but as 16/17 and 17/17, I think.  But it sounds
like you are planning to leave them out of v5.

> 
> -Frank


Re: [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-03-21 Thread Frank Rowand
On 3/4/19 3:01 PM, Brendan Higgins wrote:
> On Thu, Feb 14, 2019 at 1:38 PM Brendan Higgins
>  wrote:
>>
>> This patch set proposes KUnit, a lightweight unit testing and mocking
>> framework for the Linux kernel.
>>
> 
> 
> 
>> ## More information on KUnit
>>
>> There is a bunch of documentation near the end of this patch set that
>> describes how to use KUnit and best practices for writing unit tests.
>> For convenience I am hosting the compiled docs here:
>> https://google.github.io/kunit-docs/third_party/kernel/docs/
>> Additionally for convenience, I have applied these patches to a branch:
>> https://kunit.googlesource.com/linux/+/kunit/rfc/5.0-rc5/v4
>> The repo may be cloned with:
>> git clone https://kunit.googlesource.com/linux
>> This patchset is on the kunit/rfc/5.0-rc5/v4 branch.
>>
>> ## Changes Since Last Version
>>
>>  - Got KUnit working on (hypothetically) all architectures (tested on
>>x86), as per Rob's (and other's) request
>>  - Punting all KUnit features/patches depending on UML for now.
>>  - Broke out UML specific support into arch/um/* as per "[RFC v3 01/19]
>>kunit: test: add KUnit test runner core", as requested by Luis.
>>  - Added support to kunit_tool to allow it to build kernels in external
>>directories, as suggested by Kieran.
>>  - Added a UML defconfig, and a config fragment for KUnit as suggested
>>by Kieran and Luis.
>>  - Cleaned up, and reformatted a bunch of stuff.
>>
>> --
>> 2.21.0.rc0.258.g878e2cd30e-goog
>>
> 
> Someone suggested I should send the next revision out as "PATCH"
> instead of "RFC" since there seems to be general consensus about
> everything at a high level, with a couple exceptions.
> 
> At this time I am planning on sending the next revision out as "[PATCH
> v1 00/NN] kunit: introduce KUnit, the Linux kernel unit testing
> framework". Initially I wasn't sure if the next revision should be
> "[PATCH v1 ...]" or "[PATCH v5 ...]". Please let me know if you have a
> strong objection to the former.
> 
> In the next revision, I will be dropping the last two of three patches
> for the DT unit tests as there doesn't seem to be enough features
> currently available to justify the heavy refactoring I did; however, I

Thank you.


> will still include the patch that just converts everything over to
> KUnit without restructuring the test cases:
> https://lkml.org/lkml/2019/2/14/1133

The link doesn't work for me (don't worry about that), so I'm assuming
this is:

   [RFC v4 15/17] of: unittest: migrate tests to run on KUnit

The conversation on that patch ended after:

   >> After adding patch 15, there are a lot of "unittest internal error"
   >> messages.
   > 
   > Yeah, I meant to ask you about that. I thought it was due to a change
   > you made, but after further examination, just now, I found it was my
   > fault. Sorry for not mentioning that anywhere. I will fix it in v5.

It is not worth my time to look at patch 15 when it is that broken.  So I
have not done any review of it.

So no, I think you are still in the RFC stage unless you drop patch 15.

> 
> I should have the next revision out in a week or so.
> 



Re: [RFC v4 17/17] of: unittest: split up some super large test cases

2019-03-21 Thread Frank Rowand
On 2/14/19 1:37 PM, Brendan Higgins wrote:
> Split up the super large test cases of_unittest_find_node_by_name and
> of_unittest_dynamic into properly sized and defined test cases.

I also still object to this patch.

-Frank


> 
> Signed-off-by: Brendan Higgins 
> ---
>  drivers/of/base-test.c | 297 ++---
>  1 file changed, 249 insertions(+), 48 deletions(-)
> 
> diff --git a/drivers/of/base-test.c b/drivers/of/base-test.c
> index 3d3f4f1b74800..7b44c967ed2fd 100644
> --- a/drivers/of/base-test.c
> +++ b/drivers/of/base-test.c
> @@ -8,10 +8,10 @@
>  
>  #include "test-common.h"
>  
> -static void of_unittest_find_node_by_name(struct kunit *test)
> +static void of_test_find_node_by_name_basic(struct kunit *test)
>  {
>   struct device_node *np;
> - const char *options, *name;
> + const char *name;
>  
>   np = of_find_node_by_path("/testcase-data");
>   name = kasprintf(GFP_KERNEL, "%pOF", np);
> @@ -20,11 +20,21 @@ static void of_unittest_find_node_by_name(struct kunit *test)
>  "find /testcase-data failed\n");
>   of_node_put(np);
>   kfree(name);
> +}
>  
> +static void of_test_find_node_by_name_trailing_slash(struct kunit *test)
> +{
>   /* Test if trailing '/' works */
>   KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
>   "trailing '/' on /testcase-data/ should fail\n");
>  
> +}
> +
> +static void of_test_find_node_by_name_multiple_components(struct kunit *test)
> +{
> + struct device_node *np;
> + const char *name;
> +
>   np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
>   KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>   name = kasprintf(GFP_KERNEL, "%pOF", np);
> @@ -33,6 +43,12 @@ static void of_unittest_find_node_by_name(struct kunit *test)
>   "find /testcase-data/phandle-tests/consumer-a failed\n");
>   of_node_put(np);
>   kfree(name);
> +}
> +
> +static void of_test_find_node_by_name_with_alias(struct kunit *test)
> +{
> + struct device_node *np;
> + const char *name;
>  
>   np = of_find_node_by_path("testcase-alias");
>   KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> @@ -41,10 +57,23 @@ static void of_unittest_find_node_by_name(struct kunit *test)
>  "find testcase-alias failed\n");
>   of_node_put(np);
>   kfree(name);
> +}
>  
> +static void of_test_find_node_by_name_with_alias_and_slash(struct kunit *test)
> +{
>   /* Test if trailing '/' works on aliases */
>   KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
> - "trailing '/' on testcase-alias/ should fail\n");
> +"trailing '/' on testcase-alias/ should fail\n");
> +}
> +
> +/*
> + * TODO(brendanhigg...@google.com): This looks like a duplicate of
> + * of_test_find_node_by_name_multiple_components
> + */
> +static void of_test_find_node_by_name_multiple_components_2(struct kunit *test)
> +{
> + struct device_node *np;
> + const char *name;
>  
>   np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
>   KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> @@ -54,17 +83,33 @@ static void of_unittest_find_node_by_name(struct kunit *test)
>   "find testcase-alias/phandle-tests/consumer-a failed\n");
>   of_node_put(np);
>   kfree(name);
> +}
> +
> +static void of_test_find_node_by_name_missing_path(struct kunit *test)
> +{
> + struct device_node *np;
>  
>   KUNIT_EXPECT_EQ_MSG(
>   test,
>   np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
>   "non-existent path returned node %pOF\n", np);
>   of_node_put(np);
> +}
> +
> +static void of_test_find_node_by_name_missing_alias(struct kunit *test)
> +{
> + struct device_node *np;
>  
>   KUNIT_EXPECT_EQ_MSG(
>   test, np = of_find_node_by_path("missing-alias"), NULL,
>   "non-existent alias returned node %pOF\n", np);
>   of_node_put(np);
> +}
> +
> +static void of_test_find_node_by_name_missing_alias_with_relative_path(
> + struct kunit *test)
> +{
> + struct device_node *np;
>  
>   KUNIT_EXPECT_EQ_MSG(
>   test,
> @@ -72,12 +117,24 @@ static void of_unittest_find_node_by_name(struct kunit *test)
>   "non-existent alias with relative path returned node %pOF\n",
>   np);
>   of_node_put(np);
> +}
> +
> +static void of_test_find_node_by_name_with_option(struct kunit *test)
> +{
> + struct device_node *np;
> + const char *options;
>  
>   np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
>   KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>   KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
>  "option path test failed\n");
>   of_node_put(np);
> +}
> +
> +static void of_test_fin

Re: [RFC v4 16/17] of: unittest: split out a couple of test cases from unittest

2019-03-21 Thread Frank Rowand
On 2/14/19 1:37 PM, Brendan Higgins wrote:
> Split out a couple of test cases that these features in base.c from the
> unittest.c monolith. The intention is that we will eventually split out
> all test cases and group them together based on what portion of device
> tree they test.

I still object to this patch.  I do not want this code scattered into
additional files.

-Frank


> 
> Signed-off-by: Brendan Higgins 
> ---
>  drivers/of/Makefile  |   2 +-
>  drivers/of/base-test.c   | 214 
>  drivers/of/test-common.c | 175 
>  drivers/of/test-common.h |  16 ++
>  drivers/of/unittest.c| 345 +--
>  5 files changed, 407 insertions(+), 345 deletions(-)
>  create mode 100644 drivers/of/base-test.c
>  create mode 100644 drivers/of/test-common.c
>  create mode 100644 drivers/of/test-common.h
> 
> diff --git a/drivers/of/Makefile b/drivers/of/Makefile
> index 663a4af0cccd5..4a4bd527d586c 100644
> --- a/drivers/of/Makefile
> +++ b/drivers/of/Makefile
> @@ -8,7 +8,7 @@ obj-$(CONFIG_OF_PROMTREE) += pdt.o
>  obj-$(CONFIG_OF_ADDRESS)  += address.o
>  obj-$(CONFIG_OF_IRQ)+= irq.o
>  obj-$(CONFIG_OF_NET) += of_net.o
> -obj-$(CONFIG_OF_UNITTEST) += unittest.o
> +obj-$(CONFIG_OF_UNITTEST) += unittest.o base-test.o test-common.o
>  obj-$(CONFIG_OF_MDIO)+= of_mdio.o
>  obj-$(CONFIG_OF_RESERVED_MEM) += of_reserved_mem.o
>  obj-$(CONFIG_OF_RESOLVE)  += resolver.o
> diff --git a/drivers/of/base-test.c b/drivers/of/base-test.c
> new file mode 100644
> index 0..3d3f4f1b74800
> --- /dev/null
> +++ b/drivers/of/base-test.c
> @@ -0,0 +1,214 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Unit tests for functions defined in base.c.
> + */
> +#include 
> +
> +#include 
> +
> +#include "test-common.h"
> +
> +static void of_unittest_find_node_by_name(struct kunit *test)
> +{
> + struct device_node *np;
> + const char *options, *name;
> +
> + np = of_find_node_by_path("/testcase-data");
> + name = kasprintf(GFP_KERNEL, "%pOF", np);
> + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> + KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
> +"find /testcase-data failed\n");
> + of_node_put(np);
> + kfree(name);
> +
> + /* Test if trailing '/' works */
> + KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
> + "trailing '/' on /testcase-data/ should fail\n");
> +
> + np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
> + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> + name = kasprintf(GFP_KERNEL, "%pOF", np);
> + KUNIT_EXPECT_STREQ_MSG(
> + test, "/testcase-data/phandle-tests/consumer-a", name,
> + "find /testcase-data/phandle-tests/consumer-a failed\n");
> + of_node_put(np);
> + kfree(name);
> +
> + np = of_find_node_by_path("testcase-alias");
> + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> + name = kasprintf(GFP_KERNEL, "%pOF", np);
> + KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
> +"find testcase-alias failed\n");
> + of_node_put(np);
> + kfree(name);
> +
> + /* Test if trailing '/' works on aliases */
> + KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
> + "trailing '/' on testcase-alias/ should fail\n");
> +
> + np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
> + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> + name = kasprintf(GFP_KERNEL, "%pOF", np);
> + KUNIT_EXPECT_STREQ_MSG(
> + test, "/testcase-data/phandle-tests/consumer-a", name,
> + "find testcase-alias/phandle-tests/consumer-a failed\n");
> + of_node_put(np);
> + kfree(name);
> +
> + KUNIT_EXPECT_EQ_MSG(
> + test,
> + np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
> + "non-existent path returned node %pOF\n", np);
> + of_node_put(np);
> +
> + KUNIT_EXPECT_EQ_MSG(
> + test, np = of_find_node_by_path("missing-alias"), NULL,
> + "non-existent alias returned node %pOF\n", np);
> + of_node_put(np);
> +
> + KUNIT_EXPECT_EQ_MSG(
> + test,
> + np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
> + "non-existent alias with relative path returned node %pOF\n",
> + np);
> + of_node_put(np);
> +
> + np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
> + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> + KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
> +"option path test failed\n");
> + of_node_put(np);
> +
> + np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
> + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> + KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
> +   

Re: [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-03-21 Thread Frank Rowand
On 3/21/19 4:33 PM, Brendan Higgins wrote:
> On Thu, Mar 21, 2019 at 3:27 PM Logan Gunthorpe  wrote:
>>
>>
>>
>> On 2019-03-21 4:07 p.m., Brendan Higgins wrote:
>>> A couple of points, as for needing CONFIG_PCI; my plan to deal with
>>> that type of thing has been that we would add support for a KUnit/UML
>>> version that is just for KUnit. It would mock out the necessary bits
>>> to provide a fake hardware implementation for anything that might
>>> depend on it. I wrote a prototype for mocking/faking MMIO that I
>>> presented to the list here[1]; it is not part of the current patchset
>>> because we decided it would be best to focus on getting an MVP in, but
>>> I plan on bringing it back up at some point. Anyway, what do you
>>> generally think of this approach?
>>
>> Yes, I was wondering if that might be possible. I think that's a great
>> approach but it will unfortunately take a lot of work before larger
>> swaths of the kernel are testable in Kunit with UML. Having more common
>> mocked infrastructure will be great by-product of it though.
> 
> Yeah, it's unfortunate that the best way to do something often takes
> so much longer.
> 
>>
>>> Awesome, I looked at the code you posted and it doesn't look like you
>>> have had too many troubles. One thing that stood out to me, why did
>>> you need to put it in the kunit/ dir?
>>
>> Yeah, writing the code was super easy. Only after, did I realized I
>> couldn't get it to easily build.
> 
> Yeah, we really need to fix that; unfortunately, broadly addressing
> that problem is really hard and will most likely take a long time.
> 
>>
>> Putting it in the kunit directory was necessary because nothing in the
>> NTB tree builds unless CONFIG_NTB is set (see drivers/Makefile) and
>> CONFIG_NTB depends on CONFIG_PCI. I didn't experiment to see how hard it
>> would be to set CONFIG_NTB without CONFIG_PCI; I assumed it would be tricky.
>>
>>> I am looking forward to see what you think!
>>
>> Generally, I'm impressed and want to see this work in upstream as soon
>> as possible so I can start to make use of it!
> 
> Great to hear! I was trying to get the next revision out this week,
> but addressing some of the comments is taking a little longer than
> expected. I should have something together fairly soon though
> (hopefully next week). Good news is that next revision will be
> non-RFC; most of the feedback has settled down and I think we are
> ready to start figuring out how to merge it. Fingers crossed :-)
> 
> Cheers

I'll be out of the office next week and will not be able to review.
Please hold off on any devicetree related files until after I review.

Thanks,

Frank



Re: [RFC v4 08/17] kunit: test: add support for test abort

2019-03-21 Thread Frank Rowand
On 2/27/19 11:42 PM, Brendan Higgins wrote:
> On Tue, Feb 19, 2019 at 10:44 PM Frank Rowand  wrote:
>>
>> On 2/19/19 7:39 PM, Brendan Higgins wrote:
>>> On Mon, Feb 18, 2019 at 11:52 AM Frank Rowand  
>>> wrote:
>>>>
>>>> On 2/14/19 1:37 PM, Brendan Higgins wrote:
>>>>> Add support for aborting/bailing out of test cases. Needed for
>>>>> implementing assertions.
>>>>>
>>>>> Signed-off-by: Brendan Higgins 
>>>>> ---
>>>>> Changes Since Last Version
>>>>>  - This patch is new introducing a new cross-architecture way to abort
>>>>>out of a test case (needed for KUNIT_ASSERT_*, see next patch for
>>>>>details).
>>>>>  - On a side note, this is not a complete replacement for the UML abort
>>>>>mechanism, but covers the majority of necessary functionality. UML
>>>>>architecture specific featurs have been dropped from the initial
>>>>>patchset.
>>>>> ---
>>>>>  include/kunit/test.h |  24 +
>>>>>  kunit/Makefile   |   3 +-
>>>>>  kunit/test-test.c| 127 ++
>>>>>  kunit/test.c | 208 +--
>>>>>  4 files changed, 353 insertions(+), 9 deletions(-)
>>>>>  create mode 100644 kunit/test-test.c
>>>>
>>>> < snip >
>>>>
>>>>> diff --git a/kunit/test.c b/kunit/test.c
>>>>> index d18c50d5ed671..6e5244642ab07 100644
>>>>> --- a/kunit/test.c
>>>>> +++ b/kunit/test.c
>>>>> @@ -6,9 +6,9 @@
>>>>>   * Author: Brendan Higgins 
>>>>>   */
>>>>>
>>>>> -#include 
>>>>>  #include 
>>>>> -#include 
>>>>> +#include 
>>>>> +#include 
>>>>>  #include 
>>>>>
>>>>>  static bool kunit_get_success(struct kunit *test)
>>>>> @@ -32,6 +32,27 @@ static void kunit_set_success(struct kunit *test, bool 
>>>>> success)
>>>>>   spin_unlock_irqrestore(&test->lock, flags);
>>>>>  }
>>>>>
>>>>> +static bool kunit_get_death_test(struct kunit *test)
>>>>> +{
>>>>> + unsigned long flags;
>>>>> + bool death_test;
>>>>> +
>>>>> + spin_lock_irqsave(&test->lock, flags);
>>>>> + death_test = test->death_test;
>>>>> + spin_unlock_irqrestore(&test->lock, flags);
>>>>> +
>>>>> + return death_test;
>>>>> +}
>>>>> +
>>>>> +static void kunit_set_death_test(struct kunit *test, bool death_test)
>>>>> +{
>>>>> + unsigned long flags;
>>>>> +
>>>>> + spin_lock_irqsave(&test->lock, flags);
>>>>> + test->death_test = death_test;
>>>>> + spin_unlock_irqrestore(&test->lock, flags);
>>>>> +}
>>>>> +
>>>>>  static int kunit_vprintk_emit(const struct kunit *test,
>>>>> int level,
>>>>> const char *fmt,
>>>>> @@ -70,13 +91,29 @@ static void kunit_fail(struct kunit *test, struct 
>>>>> kunit_stream *stream)
>>>>>   stream->commit(stream);
>>>>>  }
>>>>>
>>>>> +static void __noreturn kunit_abort(struct kunit *test)
>>>>> +{
>>>>> + kunit_set_death_test(test, true);
>>>>> +
>>>>> + test->try_catch.throw(&test->try_catch);
>>>>> +
>>>>> + /*
>>>>> +  * Throw could not abort from test.
>>>>> +  */
>>>>> + kunit_err(test, "Throw could not abort from test!");
>>>>> + show_stack(NULL, NULL);
>>>>> + BUG();
>>>>
>>>> kunit_abort() is what will be call as the result of an assert failure.
>>>
>>> Yep. Does that need clarified somewhere.
>>>>
>>>> BUG(), which is a panic, which crashes the system, is not acceptable
>>>> in the Linux kernel.  You will just annoy Linus if you submit this.
>>>
>>> Sorry, I thought this was an acceptable use case since, a) this should
>>> never be compile

Re: [RFC v3 00/19] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-03-21 Thread Frank Rowand
On 12/5/18 3:10 PM, Brendan Higgins wrote:
> On Tue, Dec 4, 2018 at 5:49 AM Rob Herring  wrote:
>>
>> On Tue, Dec 4, 2018 at 5:40 AM Frank Rowand  wrote:
>>>
>>> Hi Brendan, Rob,
>>>
>>> Pulling a comment from way back in the v1 patch thread:
>>>
>>> On 10/17/18 3:22 PM, Brendan Higgins wrote:
>>>> On Wed, Oct 17, 2018 at 10:49 AM  wrote:
>>>
>>> < snip >
>>>
>>>> The test and the code under test are linked together in the same
>>>> binary and are compiled under Kbuild. Right now I am linking
>>>> everything into a UML kernel, but I would ultimately like to make
>>>> tests compile into completely independent test binaries. So each test
>>>> file would get compiled into its own test binary and would link
>>>> against only the code needed to run the test, but we are a bit of a
>>>> ways off from that.
>>>
>>> I have never used UML, so you should expect naive questions from me,
>>> exhibiting my lack of understanding.
>>>
>>> Does this mean that I have to build a UML architecture kernel to run
>>> the KUnit tests?
>>
>> In this version of the patch series, yes.
>>
>>> *** Rob, if the answer is yes, then it seems like for my workflow,
>>> which is to build for real ARM hardware, my work is doubled (or
>>> worse), because for every patch/commit that I apply, I not only have
>>> to build the ARM kernel and boot on the real hardware to test, I also
>>> have to build the UML kernel and boot in UML.  If that is correct
>>> then I see this as a major problem for me.
>>
>> I've already raised this issue elsewhere in the series. Restricting
>> the DT tests to UML is a non-starter.
> 

> I have already stated my position elsewhere on the matter, but in
> summary: Ensuring most tests can run without external dependencies
> (hardware, VM, etc) has a lot of benefits and should be supported in
> nearly all cases, but such tests should also work when compiled to run
> on real hardware/VM; the tooling might not be as good in the latter
> case, but I understand that there are good reasons to support it
> nonetheless.

And my needs are the exact opposite.  My tests must run on real hardware,
in the context of the real operating system subsystems and drivers
potentially causing issues.

It is useful if the tests can also run without that dependency.

-Frank


> 
> So I am going to try to add basic support for running tests on other
> architectures in the next version or two.

< snip >


Re: [RFC v3 18/19] of: unittest: split out a couple of test cases from unittest

2019-03-21 Thread Frank Rowand
On 2/27/19 7:52 PM, Brendan Higgins wrote:
> On Wed, Feb 20, 2019 at 12:45 PM Frank Rowand  wrote:
>>
>> On 2/18/19 2:25 PM, Frank Rowand wrote:
>>> On 2/15/19 2:56 AM, Brendan Higgins wrote:
>>>> On Thu, Feb 14, 2019 at 6:05 PM Frank Rowand  
>>>> wrote:
>>>>>
>>>>> On 2/14/19 4:56 PM, Brendan Higgins wrote:
>>>>>> On Thu, Feb 14, 2019 at 3:57 PM Frank Rowand  
>>>>>> wrote:
>>>>>>>
>>>>>>> On 12/5/18 3:54 PM, Brendan Higgins wrote:
>>>>>>>> On Tue, Dec 4, 2018 at 2:58 AM Frank Rowand  
>>>>>>>> wrote:
>>>>>>>>>
>>
>> < snip >
>>
>>>
>>> It makes it harder for me to read the source of the tests and
>>> understand the order they will execute.  It also makes it harder
>>> for me to read through the actual tests (in this example the
>>> tests that are currently grouped in of_unittest_find_node_by_name())
>>> because of all the extra function headers injected into the
>>> existing single function to break it apart into many smaller
>>> functions.
>>
>> < snip >
>>
>>>>>> This is not something I feel particularly strongly about, it is just
>>>>>> pretty atypical from my experience to have so many unrelated test
>>>>>> cases in a single file.
>>>>>>
>>>>>> Maybe you would prefer that I break up the test cases first, and then
>>>>>> we split up the file as appropriate?
>>>>>
>>>>> I prefer that the test cases not be broken up arbitrarily.  There _may_
>>
>> I expect that I created confusion by putting this in a reply to patch 18/19.
>> It is actually a comment about patch 19/19.  Sorry about that.
>>
> 
> No worries.
> 
>>
>>>>
>>>> I wasn't trying to break them up arbitrarily. I thought I was doing it
>>>> according to a pattern (breaking up the file, that is), but maybe I
>>>> just hadn't looked at enough examples.
>>>
>>> This goes back to the kunit model of putting each test into a separate
>>> function that can be a KUNIT_CASE().  That is a model that I do not agree
>>> with for devicetree.
>>
>> So now that I am actually talking about patch 19/19, let me give a concrete
>> example.  I will cut and paste (after my comments), the beginning portion
>> of base-test.c, after applying patch 19/19 (the "base version").  Then I
>> will cut and paste my alternative version which does not break the tests
>> down into individual functions (the "frank version").
> 
> Awesome, thanks for putting the comparison together!
> 
>>
>> I will also reply to this email with the base version and the frank version
>> as attachments, which will make it easier to save as separate versions
>> for easier viewing.  I'm not sure if an email with attachments will make
>> it through the list servers, but I am cautiously optimistic.
>>
>> I am using v4 of the patch series because I never got v3 to cleanly apply
>> and it is not a constructive use of my time to do so since I have v4 applied.
>>
>> One of the points I was trying to make is that readability suffers from the
>> approach taken by patches 18/19 and 19/19.
> 
> I understood that point.
> 
>>
>> The base version contains the extra text of a function header for each
>> unit test.  This is visual noise and makes the file larger.  It is also
>> one more possible location of an error (although not likely).
> 
> I don't see how it is much more visual noise than a comment.
> Admittedly, a space versus an underscore might be nice, but I think a
> function name is more likely to be kept up to date than a comment,
> even if they are both informational. It also
> forces the user to actually name all the tests. Even then, I am not
> married to doing it this exact way. The thing I really care about is
> isolating the code in each test case so that it can be executed
> separately.
> 
> A side thought, when I was proofreading this, it occurred to me that
> you might not like the function name over comment partly because you
> think about them differently. You aren't used to seeing a function
> used to frame things or communicate information in this way. Is this

No.  It is more visual clutter and it is more functional clutter that
potentially has to be validated.


> true? Admittedly, I have gotten used to a lot of unit test frameworks
&

Re: [RFC v3 18/19] of: unittest: split out a couple of test cases from unittest

2019-02-20 Thread Frank Rowand
On 2/20/19 12:44 PM, Frank Rowand wrote:
> On 2/18/19 2:25 PM, Frank Rowand wrote:
>> On 2/15/19 2:56 AM, Brendan Higgins wrote:
>>> On Thu, Feb 14, 2019 at 6:05 PM Frank Rowand  wrote:
>>>>
>>>> On 2/14/19 4:56 PM, Brendan Higgins wrote:
>>>>> On Thu, Feb 14, 2019 at 3:57 PM Frank Rowand  
>>>>> wrote:
>>>>>>
>>>>>> On 12/5/18 3:54 PM, Brendan Higgins wrote:
>>>>>>> On Tue, Dec 4, 2018 at 2:58 AM Frank Rowand  
>>>>>>> wrote:
>>>>>>>>
> 
> < snip >
> 
>>
>> It makes it harder for me to read the source of the tests and
>> understand the order they will execute.  It also makes it harder
>> for me to read through the actual tests (in this example the
>> tests that are currently grouped in of_unittest_find_node_by_name())
>> because of all the extra function headers injected into the
>> existing single function to break it apart into many smaller
>> functions.
> 
> < snip >
> 
>>>>> This is not something I feel particularly strongly about, it is just
>>>>> pretty atypical from my experience to have so many unrelated test
>>>>> cases in a single file.
>>>>>
>>>>> Maybe you would prefer that I break up the test cases first, and then
>>>>> we split up the file as appropriate?
>>>>
>>>> I prefer that the test cases not be broken up arbitrarily.  There _may_
> 
> I expect that I created confusion by putting this in a reply to patch 18/19.
> It is actually a comment about patch 19/19.  Sorry about that.
> 
> 
>>>
>>> I wasn't trying to break them up arbitrarily. I thought I was doing it
>>> according to a pattern (breaking up the file, that is), but maybe I
>>> just hadn't looked at enough examples.
>>
>> This goes back to the kunit model of putting each test into a separate
>> function that can be a KUNIT_CASE().  That is a model that I do not agree
>> with for devicetree.
> 
> So now that I am actually talking about patch 19/19, let me give a concrete
> example.  I will cut and paste (after my comments), the beginning portion
> of base-test.c, after applying patch 19/19 (the "base version").  Then I
> will cut and paste my alternative version which does not break the tests
> down into individual functions (the "frank version").
> 
> I will also reply to this email with the base version and the frank version
> as attachments, which will make it easier to save as separate versions
> for easier viewing.  I'm not sure if an email with attachments will make
> it through the list servers, but I am cautiously optimistic.

base_version and frank_version attached.

-Frank


> 
> I am using v4 of the patch series because I never got v3 to cleanly apply
> and it is not a constructive use of my time to do so since I have v4 applied.
> 
> One of the points I was trying to make is that readability suffers from the
> approach taken by patches 18/19 and 19/19.
> 
> The base version contains the extra text of a function header for each
> unit test.  This is visual noise and makes the file larger.  It is also
> one more possible location of an error (although not likely).
> 
> The frank version has converted each of the new function headers into
> a comment, using the function name with '_' converted to ' '.  The
> comments are more readable than the function headers.  Note that I added
> an extra blank line before each comment, which violates the kernel
> coding standards, but I feel this makes the code more readable.
> 
> The base version needs to declare each of the individual test functions
> in of_test_find_node_by_name_cases[]. It is possible that a test function
> could be left out of of_test_find_node_by_name_cases[], in error.  This
> will result in a compile warning (I think a warning rather than an error,
> but I have not verified that), so the error might be caught or it might be
> overlooked.
> 
> In the base version, the order of execution of the test code requires
> bouncing back and forth between the test functions and the coding of
> of_test_find_node_by_name_cases[].
> 
> In the frank version the order of execution of the test code is obvious.
> 
> It is likewise possible, in the frank version, that a test function
> could be left out of of_test_find_node_by_name_cases[], in error.  This
> will result in a compile warning (I think a warning rather than an error,
> but I have not verified that), so it might be caught or it might be
> overlooked.
> 
> The base version is 265 lines.  The frank version is 208 lines, 57 lines
> less.  

Re: [RFC v3 18/19] of: unittest: split out a couple of test cases from unittest

2019-02-20 Thread Frank Rowand
On 2/18/19 2:25 PM, Frank Rowand wrote:
> On 2/15/19 2:56 AM, Brendan Higgins wrote:
>> On Thu, Feb 14, 2019 at 6:05 PM Frank Rowand  wrote:
>>>
>>> On 2/14/19 4:56 PM, Brendan Higgins wrote:
>>>> On Thu, Feb 14, 2019 at 3:57 PM Frank Rowand  
>>>> wrote:
>>>>>
>>>>> On 12/5/18 3:54 PM, Brendan Higgins wrote:
>>>>>> On Tue, Dec 4, 2018 at 2:58 AM Frank Rowand  
>>>>>> wrote:
>>>>>>>

< snip >

>
> It makes it harder for me to read the source of the tests and
> understand the order they will execute.  It also makes it harder
> for me to read through the actual tests (in this example the
> tests that are currently grouped in of_unittest_find_node_by_name())
> because of all the extra function headers injected into the
> existing single function to break it apart into many smaller
> functions.

< snip >

>>>> This is not something I feel particularly strongly about, it is just
>>>> pretty atypical from my experience to have so many unrelated test
>>>> cases in a single file.
>>>>
>>>> Maybe you would prefer that I break up the test cases first, and then
>>>> we split up the file as appropriate?
>>>
>>> I prefer that the test cases not be broken up arbitrarily.  There _may_

I expect that I created confusion by putting this in a reply to patch 18/19.
It is actually a comment about patch 19/19.  Sorry about that.


>>
>> I wasn't trying to break them up arbitrarily. I thought I was doing it
>> according to a pattern (breaking up the file, that is), but maybe I
>> just hadn't looked at enough examples.
> 
> This goes back to the kunit model of putting each test into a separate
> function that can be a KUNIT_CASE().  That is a model that I do not agree
> with for devicetree.

So now that I am actually talking about patch 19/19, let me give a concrete
example.  I will cut and paste (after my comments), the beginning portion
of base-test.c, after applying patch 19/19 (the "base version").  Then I
will cut and paste my alternative version which does not break the tests
down into individual functions (the "frank version").

I will also reply to this email with the base version and the frank version
as attachments, which will make it easier to save as separate versions
for easier viewing.  I'm not sure if an email with attachments will make
it through the list servers, but I am cautiously optimistic.

I am using v4 of the patch series because I never got v3 to cleanly apply
and it is not a constructive use of my time to do so since I have v4 applied.

One of the points I was trying to make is that readability suffers from the
approach taken by patches 18/19 and 19/19.

The base version contains the extra text of a function header for each
unit test.  This is visual noise and makes the file larger.  It is also
one more possible location of an error (although not likely).

The frank version has converted each of the new function headers into
a comment, using the function name with '_' converted to ' '.  The
comments are more readable than the function headers.  Note that I added
an extra blank line before each comment, which violates the kernel
coding standards, but I feel this makes the code more readable.

The base version needs to declare each of the individual test functions
in of_test_find_node_by_name_cases[]. It is possible that a test function
could be left out of of_test_find_node_by_name_cases[], in error.  This
will result in a compile warning (I think a warning rather than an error,
but I have not verified that), so the error might be caught or it might be
overlooked.

In the base version, the order of execution of the test code requires
bouncing back and forth between the test functions and the coding of
of_test_find_node_by_name_cases[].

In the frank version the order of execution of the test code is obvious.

It is likewise possible, in the frank version, that a test function
could be left out of of_test_find_node_by_name_cases[], in error.  This
will result in a compile warning (I think a warning rather than an error,
but I have not verified that), so it might be caught or it might be
overlooked.

The base version is 265 lines.  The frank version is 208 lines, 57 lines
less.  Less is better.


## ==  base version  

// SPDX-License-Identifier: GPL-2.0
/*
 * Unit tests for functions defined in base.c.
 */
#include 

#include 

#include "test-common.h"

static void of_test_find_node_by_name_basic(struct kunit *test)
{
struct device_node *np;
const char *name;

np = of_find_node_by_path("/testcase-data");
name = kasprintf(GFP_KERNEL, "%pOF", np);
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
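
The archive truncates the base version listing here, and the frank version
attachment does not survive at all.  As a purely illustrative reconstruction
from the description above (function headers turned into comments, one
combined test function, RFC-era KUnit API), not Frank's actual file, the
contrast would look roughly like:

## ==  frank version (illustrative sketch)

// SPDX-License-Identifier: GPL-2.0
/*
 * Unit tests for functions defined in base.c.
 */
#include <linux/of.h>
#include <linux/slab.h>

#include <kunit/test.h>

#include "test-common.h"

static void of_test_find_node_by_name(struct kunit *test)
{
        struct device_node *np;
        const char *name;

        /* find node by name basic */
        np = of_find_node_by_path("/testcase-data");
        name = kasprintf(GFP_KERNEL, "%pOF", np);
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
        KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
                               "find /testcase-data failed\n");
        of_node_put(np);
        kfree(name);

        /* find node by name trailing slash */
        KUNIT_EXPECT_EQ_MSG(test,
                            of_find_node_by_path("/testcase-data/"), NULL,
                            "trailing '/' on /testcase-data/ should fail\n");
}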

Re: [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-02-19 Thread Frank Rowand
On 2/19/19 10:34 PM, Brendan Higgins wrote:
> On Mon, Feb 18, 2019 at 12:02 PM Frank Rowand  wrote:
> 
>> I have not read through the patches in any detail.  I have read some of
>> the code to try to understand the patches to the devicetree unit tests.
>> So that may limit how valid my comments below are.
> 
> No problem.
> 
>>
>> I found the code difficult to read in places where it should have been
>> much simpler to read.  Structuring the code in a pseudo object oriented
>> style meant that everywhere in a code path that I encountered a dynamic
>> function call, I had to go find where that dynamic function call was
>> initialized (and, being the cautious person that I am, verify that
>> nowhere else was the value of that dynamic function call changed).  With
>> primitive vi and tags, that search would have instead just been a
>> simple key press (or at worst a few keys) if hard coded function
>> calls were done instead of dynamic function calls.  In the code paths
>> that I looked at, I did not see any case of a dynamic function being
>> anything other than the value it was originally initialized as.
>> There may be such cases, I did not read the entire patch set.  There
>> may also be cases envisioned in the architect's mind of how this
>> flexibility may be of future value.  Dunno.
> 
> Yeah, a lot of it is intended to make architecture specific
> implementations and some other future work easier. Some of it is also
> for testing purposes. Admittedly some is for neither reason, but given
> the heavy usage elsewhere, I figured there was no harm since it was
> all private internal usage anyway.
> 

Increasing the cost for me (and all the other potential code readers)
to read the code is harm.


Re: [RFC v4 08/17] kunit: test: add support for test abort

2019-02-19 Thread Frank Rowand
On 2/19/19 7:39 PM, Brendan Higgins wrote:
> On Mon, Feb 18, 2019 at 11:52 AM Frank Rowand  wrote:
>>
>> On 2/14/19 1:37 PM, Brendan Higgins wrote:
>>> Add support for aborting/bailing out of test cases. Needed for
>>> implementing assertions.
>>>
>>> Signed-off-by: Brendan Higgins 
>>> ---
>>> Changes Since Last Version
>>>  - This patch is new, introducing a new cross-architecture way to abort
>>>out of a test case (needed for KUNIT_ASSERT_*, see next patch for
>>>details).
>>>  - On a side note, this is not a complete replacement for the UML abort
>>>mechanism, but covers the majority of necessary functionality. UML
>>>architecture specific features have been dropped from the initial
>>>patchset.
>>> ---
>>>  include/kunit/test.h |  24 +
>>>  kunit/Makefile   |   3 +-
>>>  kunit/test-test.c| 127 ++
>>>  kunit/test.c | 208 +--
>>>  4 files changed, 353 insertions(+), 9 deletions(-)
>>>  create mode 100644 kunit/test-test.c
>>
>> < snip >
>>
>>> diff --git a/kunit/test.c b/kunit/test.c
>>> index d18c50d5ed671..6e5244642ab07 100644
>>> --- a/kunit/test.c
>>> +++ b/kunit/test.c
>>> @@ -6,9 +6,9 @@
>>>   * Author: Brendan Higgins 
>>>   */
>>>
>>> -#include 
>>>  #include 
>>> -#include 
>>> +#include 
>>> +#include 
>>>  #include 
>>>
>>>  static bool kunit_get_success(struct kunit *test)
>>> @@ -32,6 +32,27 @@ static void kunit_set_success(struct kunit *test, bool 
>>> success)
>>>   spin_unlock_irqrestore(&test->lock, flags);
>>>  }
>>>
>>> +static bool kunit_get_death_test(struct kunit *test)
>>> +{
>>> + unsigned long flags;
>>> + bool death_test;
>>> +
>>> + spin_lock_irqsave(&test->lock, flags);
>>> + death_test = test->death_test;
>>> + spin_unlock_irqrestore(&test->lock, flags);
>>> +
>>> + return death_test;
>>> +}
>>> +
>>> +static void kunit_set_death_test(struct kunit *test, bool death_test)
>>> +{
>>> + unsigned long flags;
>>> +
>>> + spin_lock_irqsave(&test->lock, flags);
>>> + test->death_test = death_test;
>>> + spin_unlock_irqrestore(&test->lock, flags);
>>> +}
>>> +
>>>  static int kunit_vprintk_emit(const struct kunit *test,
>>> int level,
>>> const char *fmt,
>>> @@ -70,13 +91,29 @@ static void kunit_fail(struct kunit *test, struct 
>>> kunit_stream *stream)
>>>   stream->commit(stream);
>>>  }
>>>
>>> +static void __noreturn kunit_abort(struct kunit *test)
>>> +{
>>> + kunit_set_death_test(test, true);
>>> +
>>> + test->try_catch.throw(&test->try_catch);
>>> +
>>> + /*
>>> +  * Throw could not abort from test.
>>> +  */
>>> + kunit_err(test, "Throw could not abort from test!");
>>> + show_stack(NULL, NULL);
>>> + BUG();
>>
>> kunit_abort() is what will be call as the result of an assert failure.
> 
> Yep. Does that need to be clarified somewhere?
>>
>> BUG(), which is a panic, which is crashing the system is not acceptable
>> in the Linux kernel.  You will just annoy Linus if you submit this.
> 
> Sorry, I thought this was an acceptable use case since, a) this should
> never be compiled in a production kernel, b) we are in a pretty bad,
> unpredictable state if we get here and keep going. I think you might
> have said elsewhere that you think "a" is not valid? In any case, I
> can replace this with a WARN, would that be acceptable?

A WARN may or may not make sense, depending on the context.  It may
be sufficient to simply report a test failure (as in the old version
of case (2) below).

Answers to "a)" and "b)":

a) it might be in a production kernel

a') it is not acceptable in my development kernel either

b) No.  You don't crash a developer's kernel either unless it is
required to avoid data corruption.

b') And you can not do replacements like:

(1) in of_unittest_check_tree_linkage()

-  old  -

if (!of_root)
return;

-  new  -

KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_root);

(2) in of_unittest_property_string()

-  old  -

/* of_property_read_string_index() tests */
rc = of_property_read_string_index(np, "string-property", 0, strings);
unittest(rc == 0 && !strcmp(strings[0], "foobar"), 
"of_property_read_string_index() failure; rc=%i\n", rc);

-  new  -

/* of_property_read_string_index() tests */
rc = of_property_read_string_index(np, "string-property", 0, strings);
KUNIT_ASSERT_EQ(test, rc, 0);
KUNIT_EXPECT_STREQ(test, strings[0], "foobar");


If a test fails, that is no reason to abort testing.  The remainder of the unit
tests can still run.  There may be cascading failures, but that is ok.
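
That distinction maps onto the two macro families.  As a minimal sketch (the
function name is invented; the macros and calls are the ones quoted in this
thread), an EXPECT failure is recorded and execution continues, while an
ASSERT failure bails out of the current test case only, not the whole run:

static void example_string_property_test(struct kunit *test)
{
        const char *strings[4];
        struct device_node *np;
        int rc;

        /*
         * ASSERT: nothing below makes sense without a valid node, so a
         * failure here aborts this one test case.
         */
        np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);

        /*
         * EXPECT: a failure is recorded and reported, but execution
         * continues, so the remaining checks still run.
         */
        rc = of_property_read_string_index(np, "string-property", 0, strings);
        KUNIT_EXPECT_EQ(test, rc, 0);
        KUNIT_EXPECT_STREQ(test, strings[0], "foobar");

        of_node_put(np);
}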


Re: [RFC v3 17/19] of: unittest: migrate tests to run on KUnit

2019-02-18 Thread Frank Rowand
On 2/12/19 5:44 PM, Brendan Higgins wrote:
> On Wed, Nov 28, 2018 at 12:56 PM Rob Herring  wrote:
>>
>> On Wed, Nov 28, 2018 at 1:38 PM Brendan Higgins
>>  wrote:
>>>
>>> Migrate tests without any cleanup, or modifying test logic in anyway to
>>> run under KUnit using the KUnit expectation and assertion API.
>>
>> Nice! You beat me to it. This is probably going to conflict with what
>> is in the DT tree for 4.21. Also, please Cc the DT list for
>> drivers/of/ changes.
>>
>> Looks good to me, but a few mostly formatting comments below.
> 
> I just realized that we never talked about your other comments, and I
> still have some questions. (Sorry, it was the last thing I looked at
> while getting v4 ready.) No worries if you don't get to it before I
> send v4 out, I just didn't want you to think I was ignoring you.
> 
>>
>>>
>>> Signed-off-by: Brendan Higgins 
>>> ---
>>>  drivers/of/Kconfig|1 +
>>>  drivers/of/unittest.c | 1405 ++---
>>>  2 files changed, 752 insertions(+), 654 deletions(-)
>>>
> 
>>> diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
>>> index 41b49716ac75f..a5ef44730ffdb 100644
>>> --- a/drivers/of/unittest.c
>>> +++ b/drivers/of/unittest.c
> 
>>> -
>>> -static void __init of_unittest_find_node_by_name(void)
>>> +static void of_unittest_find_node_by_name(struct kunit *test)
>>
>> Why do we have to drop __init everywhere? The tests run later?
> 
> From the standpoint of a unit test, __init doesn't really make any
> sense, right? I know that right now we are running as part of a
> kernel, but the goal should be that a unit test is not part of a
> kernel and we just include what we need.
> 
> Even so, that's the future. For now, I did not put the KUnit
> infrastructure in the .init section because I didn't think it belonged
> there. In practice, KUnit only knows how to run during the init phase
> of the kernel, but I don't think it should be restricted there. You
> should be able to run tests whenever you want because you should be
> able to test anything right? I figured any restriction on that is
> misleading and will potentially get in the way at worst, and
> unnecessary at best especially since people shouldn't build a
> production kernel with all kinds of unit tests inside.
> 
>>
>>>  {
>>> struct device_node *np;
>>> const char *options, *name;
>>>
> 
>>>
>>>
>>> -   np = of_find_node_by_path("/testcase-data/missing-path");
>>> -   unittest(!np, "non-existent path returned node %pOF\n", np);
>>> +   KUNIT_EXPECT_EQ_MSG(test,
>>> +   
>>> of_find_node_by_path("/testcase-data/missing-path"),
>>> +   NULL,
>>> +   "non-existent path returned node %pOF\n", np);
>>
>> 1 tab indent would help with less vertical code (in general, not this
>> one so much).
> 
> Will do.
> 
>>
>>> of_node_put(np);
>>>
>>> -   np = of_find_node_by_path("missing-alias");
>>> -   unittest(!np, "non-existent alias returned node %pOF\n", np);
>>> +   KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("missing-alias"), 
>>> NULL,
>>> +   "non-existent alias returned node %pOF\n", np);
>>> of_node_put(np);
>>>
>>> -   np = of_find_node_by_path("testcase-alias/missing-path");
>>> -   unittest(!np, "non-existent alias with relative path returned node 
>>> %pOF\n", np);
>>> +   KUNIT_EXPECT_EQ_MSG(test,
>>> +   
>>> of_find_node_by_path("testcase-alias/missing-path"),
>>> +   NULL,
>>> +   "non-existent alias with relative path returned 
>>> node %pOF\n",
>>> +   np);
>>> of_node_put(np);
>>>
> 
>>>
>>> -static void __init of_unittest_property_string(void)
>>> +static void of_unittest_property_string(struct kunit *test)
>>>  {
>>> const char *strings[4];
>>> struct device_node *np;
>>> int rc;
>>>
>>> np = 
>>> of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
>>> -   if (!np) {
>>> -   pr_err("No testcase data in device tree\n");
>>> -   return;
>>> -   }
>>> -
>>> -   rc = of_property_match_string(np, "phandle-list-names", "first");
>>> -   unittest(rc == 0, "first expected:0 got:%i\n", rc);
>>> -   rc = of_property_match_string(np, "phandle-list-names", "second");
>>> -   unittest(rc == 1, "second expected:1 got:%i\n", rc);
>>> -   rc = of_property_match_string(np, "phandle-list-names", "third");
>>> -   unittest(rc == 2, "third expected:2 got:%i\n", rc);
>>> -   rc = of_property_match_string(np, "phandle-list-names", "fourth");
>>> -   unittest(rc == -ENODATA, "unmatched string; rc=%i\n", rc);
>>> -   rc = of_property_match_string(np, "missing-property", "blah");
>>> -   unittest(rc == -EINVAL, "missing property; rc=%i\n", rc);
>>> -   rc = of_property_match_string(np, "empty-property", "

Re: [RFC v3 18/19] of: unittest: split out a couple of test cases from unittest

2019-02-18 Thread Frank Rowand
On 2/15/19 2:56 AM, Brendan Higgins wrote:
> On Thu, Feb 14, 2019 at 6:05 PM Frank Rowand  wrote:
>>
>> On 2/14/19 4:56 PM, Brendan Higgins wrote:
>>> On Thu, Feb 14, 2019 at 3:57 PM Frank Rowand  wrote:
>>>>
>>>> On 12/5/18 3:54 PM, Brendan Higgins wrote:
>>>>> On Tue, Dec 4, 2018 at 2:58 AM Frank Rowand  
>>>>> wrote:
>>>>>>
>>>>>> Hi Brendan,
>>>>>>
>>>>>> On 11/28/18 11:36 AM, Brendan Higgins wrote:
>>>>>>> Split out a couple of test cases that test features in base.c from the
>>>>>>> unittest.c monolith. The intention is that we will eventually split out
>>>>>>> all test cases and group them together based on what portion of device
>>>>>>> tree they test.
>>>>>>
>>>>>> Why does splitting this file apart improve the implementation?
>>>>>
>>>>> This is in preparation for patch 19/19 and other hypothetical future
>>>>> patches where test cases are split up and grouped together by what
>>>>> portion of DT they test (for example the parsing tests and the
>>>>> platform/device tests would probably go separate files as well). This
>>>>> patch by itself does not do anything useful, but I figured it made
>>>>> patch 19/19 (and, if you like what I am doing, subsequent patches)
>>>>> easier to review.
>>>>
>>>> I do not see any value in splitting the devicetree tests into
>>>> multiple files.
>>>>
>>>> Please help me understand what the benefits of such a split are.
>>
>> Note that my following comments are specific to the current devicetree
>> unittests, and may not apply to the general case of unit tests in other
>> subsystems.
>>
> Note taken.
>>
>>> Sorry, I thought it made sense in context of what I am doing in the
>>> following patch. All I am trying to do is to provide an effective way
>>> of grouping test cases. To be clear, the idea, assuming you agree, is
>>
>> Looking at _just_ the first few fragments of the following patch, the
>> change is to break down a moderate size function of related tests,
>> of_unittest_find_node_by_name(), into a lot of extremely small functions.
> 
> Hmm...I wouldn't call that a moderate function. By my standards those
> functions are pretty large. In any case, I want to limit the
> discussion to specifically what a test case should look like, and the
> general consensus outside of the kernel is that unit test cases should
> be very very small. The reason is that each test case is supposed to
> test one specific property; it should be obvious what that property
> is; and it should be obvious what is needed to exercise that property.

That is a valid model and philosophy of unit test design.

It is not a model that the devicetree unit tests can be shoehorned
into.  Sort of...  In a sense, the existing devicetree unit tests
already do that, if you consider each unittest() (and sometimes a few
lines of code that create the result that unittest() checks) to be a
separate unit test.  But the kunit model does not consider the sort-of
equivalent KUNIT_EXPECT_EQ(), etc, to be a unit test; the unit test
in kunit would be KUNIT_CASE().  Although it is a little confusing to
me that the initialization and clean up on exit occur one level
higher than KUNIT_CASE(), in struct kunit_module.  I think the
confusion is just a matter of slight conflict in the documentation
(btw, the documents were very helpful for me to understand the
overall concepts and model).
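
As a sketch of the structure being referred to here, assuming the RFC-era
API (struct kunit_case, KUNIT_CASE(), struct kunit_module, module_test());
all names below are illustrative rather than taken from the patches:

static int of_test_init(struct kunit *test)
{
        /* initialization, declared one level above KUNIT_CASE() */
        return 0;
}

static void of_test_exit(struct kunit *test)
{
        /* cleanup, likewise declared at the kunit_module level */
}

static void of_test_some_case(struct kunit *test)
{
        /*
         * KUNIT_EXPECT_*() / KUNIT_ASSERT_*() checks go here; each one
         * is roughly the analogue of a single unittest() call.
         */
}

static struct kunit_case of_test_cases[] = {
        KUNIT_CASE(of_test_some_case),  /* the "unit test" in KUnit's model */
        {},
};

static struct kunit_module of_test_module = {
        .name = "of-test-example",
        .init = of_test_init,
        .exit = of_test_exit,
        .test_cases = of_test_cases,
};
module_test(of_test_module);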


>> Then to find the execution order of the many small functions requires
>> finding the array of_test_find_node_by_name_cases[].  Then I have to
> 
> Execution order shouldn't matter. Each test case should be totally
> hermetic. Obviously in this case we depend on the preceding test case
> to clean up properly, but that is something I am working on.

But the order _does_ matter for the devicetree unit tests.

That is one of the problems.  The devicetree unit tests are not small,
independent tests.  Some of the tests change state in a way that
following tests depend upon.

The design documents also mention that each unit test should have
a pre-test initialization, and a post-test cleanup to remove the
results of the initialization.

The devicetree unit tests have a large, intrusive initialization.
Once again, not a good fit for this model.

The devicetree unit tests also have an undocumented (and not at all
obvious) need to leave state changed in some cases after the test
completes.  There are cases where the way that I fully valida

Re: [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-02-18 Thread Frank Rowand
On 2/14/19 1:37 PM, Brendan Higgins wrote:
> This patch set proposes KUnit, a lightweight unit testing and mocking
> framework for the Linux kernel.
> 
> Unlike Autotest and kselftest, KUnit is a true unit testing framework;
> it does not require installing the kernel on a test machine or in a VM
> and does not require tests to be written in userspace running on a host
> kernel. Additionally, KUnit is fast: From invocation to completion KUnit
> can run several dozen tests in under a second. Currently, the entire
> KUnit test suite for KUnit runs in under a second from the initial
> invocation (build time excluded).
> 
> KUnit is heavily inspired by JUnit, Python's unittest.mock, and
> Googletest/Googlemock for C++. KUnit provides facilities for defining
> unit test cases, grouping related test cases into test suites, providing
> common infrastructure for running tests, mocking, spying, and much more.
> 
> ## What's so special about unit testing?
> 
> A unit test is supposed to test a single unit of code in isolation,
> hence the name. There should be no dependencies outside the control of
> the test; this means no external dependencies, which makes tests orders
> of magnitudes faster. Likewise, since there are no external dependencies,
> there are no hoops to jump through to run the tests. Additionally, this
> makes unit tests deterministic: a failing unit test always indicates a
> problem. Finally, because unit tests necessarily have finer granularity,
> they are able to test all code paths easily solving the classic problem
> of difficulty in exercising error handling code.
> 
> ## Is KUnit trying to replace other testing frameworks for the kernel?
> 
> No. Most existing tests for the Linux kernel are end-to-end tests, which
> have their place. A well tested system has lots of unit tests, a
> reasonable number of integration tests, and some end-to-end tests. KUnit
> is just trying to address the unit test space which is currently not
> being addressed.
> 
> ## More information on KUnit
> 
> There is a bunch of documentation near the end of this patch set that
> describes how to use KUnit and best practices for writing unit tests.
> For convenience I am hosting the compiled docs here:
> https://google.github.io/kunit-docs/third_party/kernel/docs/
> Additionally for convenience, I have applied these patches to a branch:
> https://kunit.googlesource.com/linux/+/kunit/rfc/5.0-rc5/v4
> The repo may be cloned with:
> git clone https://kunit.googlesource.com/linux
> This patchset is on the kunit/rfc/5.0-rc5/v4 branch.
> 
> ## Changes Since Last Version
> 
>  - Got KUnit working on (hypothetically) all architectures (tested on
>x86), as per Rob's (and other's) request
>  - Punting all KUnit features/patches depending on UML for now.
>  - Broke out UML specific support into arch/um/* as per "[RFC v3 01/19]
>kunit: test: add KUnit test runner core", as requested by Luis.
>  - Added support to kunit_tool to allow it to build kernels in external
>directories, as suggested by Kieran.
>  - Added a UML defconfig, and a config fragment for KUnit as suggested
>by Kieran and Luis.
>  - Cleaned up, and reformatted a bunch of stuff.
> 

I have not read through the patches in any detail.  I have read some of
the code to try to understand the patches to the devicetree unit tests.
So that may limit how valid my comments below are.

I found the code difficult to read in places where it should have been
much simpler to read.  Structuring the code in a pseudo object oriented
style meant that everywhere in a code path that I encountered a dynamic
function call, I had to go find where that dynamic function call was
initialized (and, being the cautious person that I am, verify that
nowhere else was the value of that dynamic function call changed).  With
primitive vi and tags, that search would have instead just been a
simple key press (or at worst a few keys) if hard coded function
calls were done instead of dynamic function calls.  In the code paths
that I looked at, I did not see any case of a dynamic function being
anything other than the value it was originally initialized as.
There may be such cases, I did not read the entire patch set.  There
may also be cases envisioned in the architect's mind of how this
flexibility may be of future value.  Dunno.

-Frank


Re: [RFC v4 08/17] kunit: test: add support for test abort

2019-02-18 Thread Frank Rowand
On 2/14/19 1:37 PM, Brendan Higgins wrote:
> Add support for aborting/bailing out of test cases. Needed for
> implementing assertions.
> 
> Signed-off-by: Brendan Higgins 
> ---
> Changes Since Last Version
>  - This patch is new, introducing a new cross-architecture way to abort
>out of a test case (needed for KUNIT_ASSERT_*, see next patch for
>details).
>  - On a side note, this is not a complete replacement for the UML abort
>mechanism, but covers the majority of necessary functionality. UML
>architecture specific features have been dropped from the initial
>patchset.
> ---
>  include/kunit/test.h |  24 +
>  kunit/Makefile   |   3 +-
>  kunit/test-test.c| 127 ++
>  kunit/test.c | 208 +--
>  4 files changed, 353 insertions(+), 9 deletions(-)
>  create mode 100644 kunit/test-test.c

< snip >

> diff --git a/kunit/test.c b/kunit/test.c
> index d18c50d5ed671..6e5244642ab07 100644
> --- a/kunit/test.c
> +++ b/kunit/test.c
> @@ -6,9 +6,9 @@
>   * Author: Brendan Higgins 
>   */
>  
> -#include 
>  #include 
> -#include 
> +#include 
> +#include 
>  #include 
>  
>  static bool kunit_get_success(struct kunit *test)
> @@ -32,6 +32,27 @@ static void kunit_set_success(struct kunit *test, bool 
> success)
>   spin_unlock_irqrestore(&test->lock, flags);
>  }
>  
> +static bool kunit_get_death_test(struct kunit *test)
> +{
> + unsigned long flags;
> + bool death_test;
> +
> + spin_lock_irqsave(&test->lock, flags);
> + death_test = test->death_test;
> + spin_unlock_irqrestore(&test->lock, flags);
> +
> + return death_test;
> +}
> +
> +static void kunit_set_death_test(struct kunit *test, bool death_test)
> +{
> + unsigned long flags;
> +
> + spin_lock_irqsave(&test->lock, flags);
> + test->death_test = death_test;
> + spin_unlock_irqrestore(&test->lock, flags);
> +}
> +
>  static int kunit_vprintk_emit(const struct kunit *test,
> int level,
> const char *fmt,
> @@ -70,13 +91,29 @@ static void kunit_fail(struct kunit *test, struct 
> kunit_stream *stream)
>   stream->commit(stream);
>  }
>  
> +static void __noreturn kunit_abort(struct kunit *test)
> +{
> + kunit_set_death_test(test, true);
> +
> + test->try_catch.throw(&test->try_catch);
> +
> + /*
> +  * Throw could not abort from test.
> +  */
> + kunit_err(test, "Throw could not abort from test!");
> + show_stack(NULL, NULL);
> + BUG();

kunit_abort() is what will be call as the result of an assert failure.

BUG(), which is a panic, which is crashing the system is not acceptable
in the Linux kernel.  You will just annoy Linus if you submit this.

-Frank

< snip >


Re: [RFC v4 15/17] of: unittest: migrate tests to run on KUnit

2019-02-15 Thread Frank Rowand
On 2/14/19 1:37 PM, Brendan Higgins wrote:
> Migrate tests without any cleanup, or modifying test logic in anyway to
> run under KUnit using the KUnit expectation and assertion API.
> 
> Signed-off-by: Brendan Higgins 
> ---
>  drivers/of/Kconfig|1 +
>  drivers/of/unittest.c | 1310 +
>  2 files changed, 671 insertions(+), 640 deletions(-)
> 
> diff --git a/drivers/of/Kconfig b/drivers/of/Kconfig
> index ad3fcad4d75b8..f309399deac20 100644
> --- a/drivers/of/Kconfig
> +++ b/drivers/of/Kconfig
> @@ -15,6 +15,7 @@ if OF
>  config OF_UNITTEST
>   bool "Device Tree runtime unit tests"
>   depends on !SPARC
> + depends on KUNIT
>   select IRQ_DOMAIN
>   select OF_EARLY_FLATTREE
>   select OF_RESOLVE
> diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c

These comments are from applying the patches to 5.0-rc3.

The final hunk of this patch fails to apply because it depends upon

   [PATCH v1 0/1] of: unittest: unflatten device tree on UML when testing.

If I apply that patch then I can apply patches 15 through 17.

If I apply patches 1 through 14 and boot the uml kernel then the devicetree
unittest result is:

  ### dt-test ### FAIL of_unittest_overlay_high_level():2372 overlay_base_root 
not initialized
  ### dt-test ### end of unittest - 219 passed, 1 failed

This is as expected from your previous reports, and is fixed after
applying

   [PATCH v1 0/1] of: unittest: unflatten device tree on UML when testing.

with the devicetree unittest result of:

   ### dt-test ### end of unittest - 224 passed, 0 failed

After adding patch 15, there are a lot of "unittest internal error" messages.

-Frank


> index effa4e2b9d992..96de69ccb3e63 100644
> --- a/drivers/of/unittest.c
> +++ b/drivers/of/unittest.c
> @@ -26,186 +26,189 @@
>  
>  #include 
>  
> +#include 
> +
>  #include "of_private.h"
>  
> -static struct unittest_results {
> - int passed;
> - int failed;
> -} unittest_results;
> -
> -#define unittest(result, fmt, ...) ({ \
> - bool failed = !(result); \
> - if (failed) { \
> - unittest_results.failed++; \
> - pr_err("FAIL %s():%i " fmt, __func__, __LINE__, ##__VA_ARGS__); 
> \
> - } else { \
> - unittest_results.passed++; \
> - pr_debug("pass %s():%i\n", __func__, __LINE__); \
> - } \
> - failed; \
> -})
> -
> -static void __init of_unittest_find_node_by_name(void)
> +static void of_unittest_find_node_by_name(struct kunit *test)
>  {
>   struct device_node *np;
>   const char *options, *name;
>  
>   np = of_find_node_by_path("/testcase-data");
>   name = kasprintf(GFP_KERNEL, "%pOF", np);
> - unittest(np && !strcmp("/testcase-data", name),
> - "find /testcase-data failed\n");
> + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> + KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
> +"find /testcase-data failed\n");
>   of_node_put(np);
>   kfree(name);
>  
>   /* Test if trailing '/' works */
> - np = of_find_node_by_path("/testcase-data/");
> - unittest(!np, "trailing '/' on /testcase-data/ should fail\n");
> + KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
> + "trailing '/' on /testcase-data/ should fail\n");
>  
>   np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
> + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>   name = kasprintf(GFP_KERNEL, "%pOF", np);
> - unittest(np && !strcmp("/testcase-data/phandle-tests/consumer-a", name),
> + KUNIT_EXPECT_STREQ_MSG(
> + test, "/testcase-data/phandle-tests/consumer-a", name,
>   "find /testcase-data/phandle-tests/consumer-a failed\n");
>   of_node_put(np);
>   kfree(name);
>  
>   np = of_find_node_by_path("testcase-alias");
> + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>   name = kasprintf(GFP_KERNEL, "%pOF", np);
> - unittest(np && !strcmp("/testcase-data", name),
> - "find testcase-alias failed\n");
> + KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
> +"find testcase-alias failed\n");
>   of_node_put(np);
>   kfree(name);
>  
>   /* Test if trailing '/' works on aliases */
> - np = of_find_node_by_path("testcase-alias/");
> - unittest(!np, "trailing '/' on testcase-alias/ should fail\n");
> + KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
> + "trailing '/' on testcase-alias/ should fail\n");
>  
>   np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
> + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>   name = kasprintf(GFP_KERNEL, "%pOF", np);
> - unittest(np && !strcmp("/testcase-data/phandle-tests/consumer-a", name),
> + KUNIT_EXPECT_STREQ_MSG(
> + test, "/testcase-data/phan

Re: [RFC v3 18/19] of: unittest: split out a couple of test cases from unittest

2019-02-14 Thread Frank Rowand
On 2/14/19 4:56 PM, Brendan Higgins wrote:
> On Thu, Feb 14, 2019 at 3:57 PM Frank Rowand  wrote:
>>
>> On 12/5/18 3:54 PM, Brendan Higgins wrote:
>>> On Tue, Dec 4, 2018 at 2:58 AM Frank Rowand  wrote:
>>>>
>>>> Hi Brendan,
>>>>
>>>> On 11/28/18 11:36 AM, Brendan Higgins wrote:
>>>>> Split out a couple of test cases that test features in base.c from the
>>>>> unittest.c monolith. The intention is that we will eventually split out
>>>>> all test cases and group them together based on what portion of device
>>>>> tree they test.
>>>>
>>>> Why does splitting this file apart improve the implementation?
>>>
>>> This is in preparation for patch 19/19 and other hypothetical future
>>> patches where test cases are split up and grouped together by what
>>> portion of DT they test (for example the parsing tests and the
>>> platform/device tests would probably go separate files as well). This
>>> patch by itself does not do anything useful, but I figured it made
>>> patch 19/19 (and, if you like what I am doing, subsequent patches)
>>> easier to review.
>>
>> I do not see any value in splitting the devicetree tests into
>> multiple files.
>>
>> Please help me understand what the benefits of such a split are.

Note that my following comments are specific to the current devicetree
unittests, and may not apply to the general case of unit tests in other
subsystems.


> Sorry, I thought it made sense in context of what I am doing in the
> following patch. All I am trying to do is to provide an effective way
> of grouping test cases. To be clear, the idea, assuming you agree, is

Looking at _just_ the first few fragments of the following patch, the
change is to break down a moderate size function of related tests,
of_unittest_find_node_by_name(), into a lot of extremely small functions.
Then to find the execution order of the many small functions requires
finding the array of_test_find_node_by_name_cases[].  Then I have to
chase off into the kunit test runner core, where I find that the set
of tests in of_test_find_node_by_name_cases[] is processed by a
late_initcall().  So now the order of the various test groupings,
declared via module_test(), is subject to the fragile ordering
of initcalls.
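
A hypothetical sketch of that ordering concern (names invented, API as in
the RFC): two groupings registered independently express no relative order
anywhere in the source; it falls out of late_initcall()/link ordering.

static struct kunit_module of_test_find_node_by_name_module = {
        .name = "of-test-find-node-by-name",
        .test_cases = of_test_find_node_by_name_cases, /* defined elsewhere */
};
module_test(of_test_find_node_by_name_module);  /* runs via late_initcall() */

static struct kunit_module of_test_property_module = {
        .name = "of-test-property",
        .test_cases = of_test_property_cases,          /* defined elsewhere */
};
module_test(of_test_property_module);   /* relative order not expressed here */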

There are ordering dependencies within the devicetree unittests.

I do not like breaking the test cases down into such small atoms.

I do not see any value __for devicetree unittests__ of having
such small atoms.

It makes it harder for me to read the source of the tests and
understand the order they will execute.  It also makes it harder
for me to read through the actual tests (in this example the
tests that are currently grouped in of_unittest_find_node_by_name())
because of all the extra function headers injected into the
existing single function to break it apart into many smaller
functions.

Breaking the tests into separate chunks, each chunk invoked
independently as the result of module_test() of each chunk,
loses the summary result for the devicetree unittests of
how many tests are run and how many passed.  This is the
only statistic that I need to determine whether the
unittests have detected a new fault caused by a specific
patch or commit.  I don't need to look at any individual
test result unless the overall result reports a failure.


> that we would follow up with several other patches like this one and
> the subsequent patch, one which would pull out a couple test
> functions, as I have done here, and another that splits those
> functions up into a bunch of proper test cases.
> 
> I thought that having that many unrelated test cases in a single file
> would just be a pain to sort through, deal with, review, whatever.

Having all the test cases in a single file makes it easier for me to
read, understand, modify, and maintain the tests.


> This is not something I feel particularly strongly about, it is just
> pretty atypical from my experience to have so many unrelated test
> cases in a single file.
> 
> Maybe you would prefer that I break up the test cases first, and then
> we split up the file as appropriate?

I prefer that the test cases not be broken up arbitrarily.  There _may_
be cases where the devicetree unittests are currently not well grouped
and may benefit from change, but if so that should be handled independently
of any transformation into a KUnit framework.


> I just assumed that we would agree it would be way too much stuff for
> a single file, so I went ahead and broke it up first, because I
> thought it would make it easier to review in that order rather than
> the other way around.





Re: [RFC v3 18/19] of: unittest: split out a couple of test cases from unittest

2019-02-14 Thread Frank Rowand
On 12/5/18 3:54 PM, Brendan Higgins wrote:
> On Tue, Dec 4, 2018 at 2:58 AM Frank Rowand  wrote:
>>
>> Hi Brendan,
>>
>> On 11/28/18 11:36 AM, Brendan Higgins wrote:
>>> Split out a couple of test cases that test features in base.c from the
>>> unittest.c monolith. The intention is that we will eventually split out
>>> all test cases and group them together based on what portion of device
>>> tree they test.
>>
>> Why does splitting this file apart improve the implementation?
> 
> This is in preparation for patch 19/19 and other hypothetical future
> patches where test cases are split up and grouped together by what
> portion of DT they test (for example the parsing tests and the
> platform/device tests would probably go separate files as well). This
> patch by itself does not do anything useful, but I figured it made
> patch 19/19 (and, if you like what I am doing, subsequent patches)
> easier to review.

I do not see any value in splitting the devicetree tests into
multiple files.

Please help me understand what the benefits of such a split are.

Thanks,

Frank


Re: [RFC v3 00/19] kunit: introduce KUnit, the Linux kernel unit testing framework

2018-12-04 Thread Frank Rowand
Hi Brendan, Rob,

On 11/28/18 11:36 AM, Brendan Higgins wrote:
> This patch set proposes KUnit, a lightweight unit testing and mocking
> framework for the Linux kernel.
> 
> Unlike Autotest and kselftest, KUnit is a true unit testing framework;
> it does not require installing the kernel on a test machine or in a VM
> and does not require tests to be written in userspace running on a host
> kernel. Additionally, KUnit is fast: From invocation to completion KUnit
> can run several dozen tests in under a second. Currently, the entire
> KUnit test suite for KUnit runs in under a second from the initial
> invocation (build time excluded).
> 
> KUnit is heavily inspired by JUnit, Python's unittest.mock, and
> Googletest/Googlemock for C++. KUnit provides facilities for defining
> unit test cases, grouping related test cases into test suites, providing
> common infrastructure for running tests, mocking, spying, and much more.
> 
> ## What's so special about unit testing?
> 
> A unit test is supposed to test a single unit of code in isolation,
> hence the name. There should be no dependencies outside the control of
> the test; this means no external dependencies, which makes tests orders
> of magnitudes faster. Likewise, since there are no external dependencies,
> there are no hoops to jump through to run the tests. Additionally, this
> makes unit tests deterministic: a failing unit test always indicates a
> problem. Finally, because unit tests necessarily have finer granularity,
> they are able to test all code paths easily solving the classic problem
> of difficulty in exercising error handling code.
> 
> ## Is KUnit trying to replace other testing frameworks for the kernel?
> 
> No. Most existing tests for the Linux kernel are end-to-end tests, which
> have their place. A well tested system has lots of unit tests, a
> reasonable number of integration tests, and some end-to-end tests. KUnit
> is just trying to address the unit test space which is currently not
> being addressed.
> 
> ## More information on KUnit
> 
> There is a bunch of documentation near the end of this patch set that
> describes how to use KUnit and best practices for writing unit tests.
> For convenience I am hosting the compiled docs here:
> https://google.github.io/kunit-docs/third_party/kernel/docs/
> Additionally for convenience, I have applied these patches to a branch:
> https://kunit.googlesource.com/linux/+/kunit/rfc/4.19/v3
> The repo may be cloned with:
> git clone https://kunit.googlesource.com/linux
> This patchset is on the kunit/rfc/4.19/v3 branch.
> 
> ## Changes Since Last Version
> 
>  - Changed namespace prefix from `test_*` to `kunit_*` as requested by
>Shuah.


>  - Started converting/cleaning up the device tree unittest to use KUnit.
>  - Started adding KUnit expectations with custom messages.
> 

Sorry I missed your reply to me in the v1 patch thread.  I've been
traveling a lot the last few weeks.  I'm starting to read messages
that occurred late in the v1 patch thread and the v2 patch thread,
so I'm just coming up to speed on this.

My comments below are motivated by adding the devicetree unittest to
this version of the patch series.

Pulling a comment from way back in the v1 patch thread:

On 10/17/18 3:22 PM, Brendan Higgins wrote:
> On Wed, Oct 17, 2018 at 10:49 AM  wrote:

< snip >

> The test and the code under test are linked together in the same
> binary and are compiled under Kbuild. Right now I am linking
> everything into a UML kernel, but I would ultimately like to make
> tests compile into completely independent test binaries. So each test
> file would get compiled into its own test binary and would link
> against only the code needed to run the test, but we are a bit of a
> ways off from that.

I have never used UML, so you should expect naive questions from me,
exhibiting my lack of understanding.

Does this mean that I have to build a UML architecture kernel to run
the KUnit tests?

*** Rob, if the answer is yes, then it seems like for my workflow,
which is to build for real ARM hardware, my work is doubled (or
worse), because for every patch/commit that I apply, I not only have
to build the ARM kernel and boot on the real hardware to test, I also
have to build the UML kernel and boot in UML.  If that is correct
then I see this as a major problem for me.

Brendan, in the above quote you said that in the future you would
like to make the "tests compile into completely independent test
binaries".  I am assuming those are intended to run as standalone
user space programs instead of inside UML.  Is that correct?  If
so, how will KUnit tests be able to test code that uses locking
mechanisms that require instructions that are not available to
user space execution?  (I _think_ that such instructions may be
present, depending on which locking mechanism, but I might be
mistaken.)

Another possible concern that I have for removing the devicetree
unit tests from my normal kernel build process is that I think
th

Re: [RFC v3 18/19] of: unittest: split out a couple of test cases from unittest

2018-12-04 Thread Frank Rowand
Hi Brendan,

On 11/28/18 11:36 AM, Brendan Higgins wrote:
> Split out a couple of test cases that test features in base.c from the
> unittest.c monolith. The intention is that we will eventually split out
> all test cases and group them together based on what portion of device
> tree they test.

Why does splitting this file apart improve the implementation?


> 
> Signed-off-by: Brendan Higgins 
> ---
>  drivers/of/Makefile  |   2 +-
>  drivers/of/base-test.c   | 214 ++
>  drivers/of/test-common.c | 149 ++
>  drivers/of/test-common.h |  16 ++
>  drivers/of/unittest.c| 316 +--
>  5 files changed, 381 insertions(+), 316 deletions(-)
>  create mode 100644 drivers/of/base-test.c
>  create mode 100644 drivers/of/test-common.c
>  create mode 100644 drivers/of/test-common.h
>
< snip >

-Frank


Re: [RFC v3 17/19] of: unittest: migrate tests to run on KUnit

2018-12-04 Thread Frank Rowand
On 11/28/18 11:36 AM, Brendan Higgins wrote:
> Migrate tests without any cleanup, or modifying test logic in anyway to
> run under KUnit using the KUnit expectation and assertion API.
> 
> Signed-off-by: Brendan Higgins 
> ---
>  drivers/of/Kconfig|1 +
>  drivers/of/unittest.c | 1405 ++---
>  2 files changed, 752 insertions(+), 654 deletions(-)

< snip >

I am travelling and will not have an opportunity to properly review this
patch, patch 18, or patch 19 until next week.

-Frank



Re: [RFC v3 00/19] kunit: introduce KUnit, the Linux kernel unit testing framework

2018-12-04 Thread Frank Rowand
On 11/28/18 11:36 AM, Brendan Higgins wrote:
> This patch set proposes KUnit, a lightweight unit testing and mocking
> framework for the Linux kernel.
> 
> Unlike Autotest and kselftest, KUnit is a true unit testing framework;
> it does not require installing the kernel on a test machine or in a VM
> and does not require tests to be written in userspace running on a host
> kernel. Additionally, KUnit is fast: From invocation to completion KUnit
> can run several dozen tests in under a second. Currently, the entire
> KUnit test suite for KUnit runs in under a second from the initial
> invocation (build time excluded).
> 
> KUnit is heavily inspired by JUnit, Python's unittest.mock, and
> Googletest/Googlemock for C++. KUnit provides facilities for defining
> unit test cases, grouping related test cases into test suites, providing
> common infrastructure for running tests, mocking, spying, and much more.
> 
> ## What's so special about unit testing?
> 


> A unit test is supposed to test a single unit of code in isolation,
> hence the name. There should be no dependencies outside the control of
> the test; this means no external dependencies, which makes tests orders
> of magnitudes faster. Likewise, since there are no external dependencies,
> there are no hoops to jump through to run the tests. Additionally, this

This question might be a misunderstanding of the intent of some of the
terminology in the above paragraph, so this is mostly a request for
clarification.

With my pre-conception of what unit tests are, I read "test a single unit
of code" to mean a relatively narrow piece of a subsystem.  So if I
understand correctly, taking examples from patch 17 "of: unittest:
migrate tests to run on KUnit", each function call like
KUNIT_ASSERT_NOT_ERR_OR_NULL(), KUNIT_EXPECT_STREQ_MSG(), and
KUNIT_EXPECT_EQ_MSG() are each a separate unit test, and thus the
paragraph says that each of these function calls should have no
dependencies outside the test.  Do I understand that correctly?

< snip >

-Frank



Re: [Ksummit-discuss] [RFC PATCH 2/3] MAINTAINERS, Handbook: Subsystem Profile

2018-11-16 Thread Frank Rowand
On 11/15/18 4:11 PM, Frank Rowand wrote:
> On 11/14/18 8:53 PM, Dan Williams wrote:

< snip >

>> +
>> +
>> +Time Zone / Office Hours
>> +
>> +Let contributors know the time of day when one or more maintainers are
>> +usually actively monitoring the mailing list.
> 
> I would strike "actively monitoring the mailing list".  To me, it should
> be what are the hours of the day that the maintainer might happen to poll
> (or might receive an interrupt) from the appropriate communications
> channels (could be IRC, could be email, etc).
> 
> For my area, I would want to say something like: I tend to be active
> between 17:00 UTC (18:00 UTC when daylight savings) and 25:00 (26:00),
> but often will check for urgent or brief items up until 07:00 (08:00).
> I interact with email via a poll model.  I interact with IRC via a
> pull model (and often overlook IRC activity for multiple days).

   typo, that should be "poll", not "pull"

> 
> -Frank
> 

> 



Re: [Ksummit-discuss] [RFC PATCH 2/3] MAINTAINERS, Handbook: Subsystem Profile

2018-11-15 Thread Frank Rowand
On 11/14/18 8:53 PM, Dan Williams wrote:
> As presented at the 2018 Linux Plumbers conference [1], the Subsystem
> Profile is proposed as a way to reduce friction between committers and
> maintainers and perhaps encourage conversations amongst maintainers
> about best practice policies.

Thanks for working on this.


> The profile contains short answers to some of the common policy
> questions a contributor might have, or that a maintainer might consider
> formalizing. The current list of maintenance policies is:
> 
> Overview: General introduction to maintaining the subsystem
> Core: List of source files considered core
> Leaf: List of source files that consume core functionality
> Patches or Pull requests: Simple statement of expected submission format
> Last -rc for new feature submissions: Expected lead time for submissions
> Last -rc to merge features: Deadline for merge decisions
> Non-author Ack / Review Tags Required: Patch review economics
> Test Suite: Pass this suite before requesting inclusion
> Resubmit Cadence: When to ping the maintainer
> Trusted Reviewers: Help for triaging patches
> Time Zone / Office Hours: When might a maintainer be available
> Checkpatch / Style Cleanups: Policy on pure cleanup patches
> Off-list review: Request for review gates
> TODO: Potential development tasks up for grabs, or active focus areas
> 
> The goal of the Subsystem Profile is to set expectations for
> contributors and interim or replacement maintainers for a subsystem.
> 
> See Documentation/maintainer/subsystem-profile.rst for more details, and
> a follow-on example profile for the libnvdimm subsystem.
> 
> [1]: https://linuxplumbersconf.org/event/2/contributions/59/
> 
> Cc: Jonathan Corbet 
> Cc: Thomas Gleixner 
> Cc: Mauro Carvalho Chehab 
> Cc: Steve French 
> Cc: Greg Kroah-Hartman 
> Cc: Linus Torvalds 
> Cc: Tobin C. Harding 
> Cc: Olof Johansson 
> Cc: Martin K. Petersen 
> Cc: Daniel Vetter 
> Cc: Joe Perches 
> Cc: Dmitry Vyukov 
> Signed-off-by: Dan Williams 
> ---
>  Documentation/maintainer/index.rst             |    1 +
>  Documentation/maintainer/subsystem-profile.rst |  145 ++++++++++++++++++++++
>  MAINTAINERS                                    |    4 +
>  3 files changed, 150 insertions(+)
>  create mode 100644 Documentation/maintainer/subsystem-profile.rst
> 
> diff --git a/Documentation/maintainer/index.rst b/Documentation/maintainer/index.rst
> index 2a14916930cb..1e6b1aaa6024 100644
> --- a/Documentation/maintainer/index.rst
> +++ b/Documentation/maintainer/index.rst
> @@ -11,4 +11,5 @@ additions to this manual.
>  
> configure-git
> pull-requests
> +   subsystem-profile
>  
> diff --git a/Documentation/maintainer/subsystem-profile.rst b/Documentation/maintainer/subsystem-profile.rst
> new file mode 100644
> index 000000000000..a74b624e0972
> --- /dev/null
> +++ b/Documentation/maintainer/subsystem-profile.rst
> @@ -0,0 +1,145 @@
> +.. _subsystemprofile:
> +
> +Subsystem Profile
> +=================
> +
> +The Subsystem Profile is a collection of policy positions that a
> +maintainer or maintainer team establishes for their subsystem. While
> +there is a wide range of technical nuance on maintaining disparate
> +sections of the kernel, the Subsystem Profile documents a known set of
> +major process policies that vary between subsystems.  What follows is a
> +list of policy questions a maintainer can answer and include in a document
> +in the kernel, or on an external website. It advertises to other
> +maintainers and contributors the local policy of the subsystem. Some
> +sections are optional like "Overview", "Off-list review", and "TODO".
> +The others are recommended for all subsystems to address, but no section
> +is mandatory. In addition there are no wrong answers, just document how
> +a subsystem typically operates. Note that the profile follows the
> +subsystem, not the maintainer, i.e. there is no expectation that a
> +maintainer of multiple subsystems deploys the same policy across those
> +subsystems.
> +
> +
> +Overview
> +--------
> +In this optional section of the profile, provide a free-form overview of
> +the subsystem written as a hand-off document. In other words, write a
> +note to someone who would receive the “keys to the castle” in the event
> +of extended or unexpected absence. “So, you have recently become the
> +maintainer of the XYZ subsystem, condolences, it is a thankless job,
> +here is the lay of the land.” Details to consider are the extended
> +details that are not included in MAINTAINERS, and not addressed by the
> +other profile questions below: for example, who has access
> +to the git tree, branches that are pulled into -next, relevant
> +specifications, issue trackers, and sensitive code areas. If available,
> +the Overview should link to other subsystem documentation that may
> +clarify, re-iterate, or emphasize / de-emphasize portions of the global
> +process documentation for contributors (CodingStyle, SubmittingPatches

Re: [RFC v2 00/14] kunit: introduce KUnit, the Linux kernel unit testing framework

2018-11-07 Thread Frank Rowand
On 11/6/18 5:17 PM, Brendan Higgins wrote:
> On Fri, Nov 2, 2018 at 11:23 AM Shuah Khan  wrote:
>>
>> Hi Brendan,
> 
>> Framework looks good. I think it would be helpful to include a real test
> 
> Great to hear!
> 
>> in the patch series to get a feel for how effective it is.
> 
> Alright, will do. Rob suggested converting
> https://elixir.bootlin.com/linux/v4.19/source/drivers/of/unittest.c to
> KUnit, so that might offer a good comparison.

drivers/of/unittest.c might be a bigger and more complex test than
you want to start with.
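
A rough sketch of a nearly minimal KUnit test, just to show the scale
of the boilerplate involved (the struct and macro names here follow
KUnit as later merged, and are my assumption rather than anything
taken from the v2 patches):

#include <kunit/test.h>

/* A self-contained test case with no dependencies outside the test. */
static void example_add_test(struct kunit *test)
{
	KUNIT_EXPECT_EQ(test, 3, 1 + 2);
}

static struct kunit_case example_test_cases[] = {
	KUNIT_CASE(example_add_test),
	{}
};

static struct kunit_suite example_test_suite = {
	.name = "example",
	.test_cases = example_test_cases,
};
kunit_test_suite(example_test_suite);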

-Frank

< snip >