On 11/23/2015 07:18 PM, Simon Glass wrote:
> Hi Stephen,
> 
> On 23 November 2015 at 18:45, Simon Glass <s...@chromium.org> wrote:
>> Hi Stephen,
>>
>> On 22 November 2015 at 10:30, Stephen Warren <swar...@wwwdotorg.org> wrote:
>>> On 11/21/2015 09:49 AM, Simon Glass wrote:
>>>> Hi Stephen,
>>>>
>>>> On 19 November 2015 at 12:09, Stephen Warren <swar...@wwwdotorg.org> wrote:
>>>>>
>>>>> On 11/19/2015 10:00 AM, Stephen Warren wrote:
>>>>>>
>>>>>> On 11/19/2015 07:45 AM, Simon Glass wrote:
>>>>>>>
>>>>>>> Hi Stephen,
>>>>>>>
>>>>>>> On 14 November 2015 at 23:53, Stephen Warren <swar...@wwwdotorg.org>
>>>>>>> wrote:
>>>>>>>>
>>>>>>>> This tool aims to test U-Boot by executing U-Boot shell commands
>>>>>>>> using the
>>>>>>>> console interface. A single top-level script exists to execute or 
>>>>>>>> attach
>>>>>>>> to the U-Boot console, run the entire suite of tests against it, and
>>>>>>>> summarize the results. Advantages of this approach are:
>>>>>>>>
>>>>>>>> - Testing is performed in the same way a user or script would interact
>>>>>>>>    with U-Boot; there can be no disconnect.
>>>>>>>> - There is no need to write or embed test-related code into U-Boot
>>>>>>>> itself.
>>>>>>>>    It is asserted that writing test-related code in Python is simpler
>>>>>>>>    and more flexible than writing it all in C.
>>>>>>>> - It is reasonably simple to interact with U-Boot in this way.
>>>>>>>>
>>>>>>>> A few simple tests are provided as examples. Soon, we should convert as
>>>>>>>> many of the other tests in test/* and test/cmd_ut.c as possible.
>>>>>>>
>>>>>>>
>>>>>>> It's great to see this and thank you for putting in the effort!
>>>>>>>
>>>>>>> It looks like a good way of doing functional tests. I still see a role
>>>>>>> for unit tests and things like test/dm. But if we can arrange to call
>>>>>>> all U-Boot tests (unit and functional) from one 'test.py' command that
>>>>>>> would be a win.
>>>>>>>
>>>>>>> I'll look more when I can get it to work - see below.
>>>>>
>>>>> ...
>>>>>>
>>>>>> made it print a message about checking the docs for missing
>>>>>> requirements. I can probably patch the top-level test.py to do the same.
>>>>>
>>>>>
>>>>> I've pushed such a patch to:
>>>>>
>>>>> git://github.com/swarren/u-boot.git tegra_dev
>>>>> (the separate pytests branch has now been deleted)
>>>>>
>>>>> There are also a variety of other patches there related to this testing 
>>>>> infrastructure. I guess I'll hold off sending them to the list until 
>>>>> there's been some general feedback on the patches I've already posted, 
>>>>> but feel free to pull the branch down and play with it. Note that it's 
>>>>> likely to get rebased as I work.
>>>>
>>>> OK, I got it working, thank you. It is horribly slow though - do you
>>>> know what is holding it up? For me it takes 12 seconds to run the
>>>> (very basic) tests.
>>>
>>> It looks like pexpect includes a default delay to simulate human
>>> interaction. If you edit test/py/uboot_console_base.py ensure_spawned()
>>> and add the following somewhere soon after the assignment to self.p:
>>>
>>>             self.p.delaybeforesend = 0
>>>
>>> ... that will more than halve the execution time (8.3 s -> 3.5 s on my
>>> 5-year-old laptop).
>>>
>>> That said, even your 12s or my 8.3s doesn't seem like a bad price to pay
>>> for some easy-to-use automated testing.
>>
>> Sure, but my point of comparison is the difference between a native C test
>> and this framework. As we add more and more tests the overhead will be
>> significant. If it takes 8 seconds to run the current (fairly trivial)
>> tests, it might take a minute to run a larger suite, and to me that is
>> too long (e.g. to bisect for a failing commit).
>>
>> I wonder what is causing the delay?
>>
>>>
>>>> Also please see dm_test_usb_tree() which uses a console buffer to
>>>> check command output.
>>>
>>> OK, I'll take a look.
>>>
>>>> I wonder if we should use something like that
>>>> for simple unit tests, and use python for the more complicated
>>>> functional tests?
>>>
>>> I'm not sure that's a good idea; it'd be best to settle on a single way
>>> of executing tests so that (a) people don't have to run/implement
>>> different kinds of tests in different ways (b) we can leverage test code
>>> across as many tests as possible.
>>>
>>> (Well, doing unit tests and system level tests differently might be
>>> necessary since one calls functions and the other uses the shell "user
>>> interface", but having multiple ways of doing e.g. system tests doesn't
>>> seem like a good idea.)
>>
>> As you found with some of the tests, it is convenient/necessary to be
>> able to call U-Boot C functions in some tests. So I don't see this as
>> a one-size-fits-all solution.
>>
>> I think it is perfectly reasonable for the python framework to run the
>> existing C tests - there is no need to rewrite them in Python. Also
>> for the driver model tests - we can just run the tests from some sort
>> of python wrapper and get the best of both worlds, right?
>>
>> Please don't take this to indicate any lack of enthusiasm for what you
>> are doing - it's a great development and I'm sure it will help a lot!
>> We really need to unify all the tests so we can run them all in one
>> step.
>>
>> I just think we should aim to have the automated tests run in a few
>> seconds (let's say 5-10 at the outside). We need to make sure that the
>> python framework will allow this even when running thousands of tests.
> 
> BTW I would like to see if buildman can run tests automatically on
> each commit. It's been a long-term goal for a while.

Related, I was wondering if the test script's --build could/should rely
on buildman somehow. That might save the user from having to set
CROSS_COMPILE before running test.py, assuming they'd already set up
buildman.
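
To put rough numbers on the speed concern above: pexpect's documented
default delaybeforesend is 0.05 s, so the artificial per-command overhead
grows linearly with the number of console interactions. A back-of-envelope
sketch (the command counts are hypothetical, purely for illustration):

```python
# Illustrative overhead from pexpect's default delaybeforesend (0.05 s).
# Setting child.delaybeforesend = 0, as suggested earlier in the thread,
# removes this delay entirely.
DEFAULT_DELAY = 0.05  # seconds pexpect waits before each send()

for commands in (10, 100, 1000):
    overhead = commands * DEFAULT_DELAY
    print("%4d console commands -> %5.1f s of artificial delay"
          % (commands, overhead))
```

Even with the delay zeroed, each command still pays a console round-trip,
so batching many unit tests inside a single U-Boot invocation (as the
existing C tests do) remains attractive for large suites.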
_______________________________________________
U-Boot mailing list
U-Boot@lists.denx.de
http://lists.denx.de/mailman/listinfo/u-boot
