Re: [U-Boot] [PATCH] Implement pytest-based test infrastructure

2015-12-02 Thread Simon Glass
Hi Stephen,

On 1 December 2015 at 16:24, Stephen Warren  wrote:
> On 12/01/2015 09:40 AM, Simon Glass wrote:
> ...
>>
>> At present we don't have a sensible test framework for anything other
>> than sandbox, so to me the main benefit is that with your setup, we
>> do.
>>
>> The benefit of the existing sandbox tests is that they are very fast.
>> We could bisect for a test failure in a few minutes. I'd like to make
>> sure that we can still write C tests (that are called from your
>> framework with results integrated into it) and that the Python tests
>> are also fast.
>>
>> How do we move this forward? Are you planning to resend the patch with
>> the faster approach?
>
>
> I'm tempted to squash down all/most the fixes/enhancements I've made since
> posting the original into a single commit rather than sending follow-on
> enhancements, since none of it is applied yet. I can keep the various test
> implementations etc. in separate commits as a series. Does that seem
> reasonable?

It does to me. I think ideally we should have the infrastructure in one
patch (i.e. with just a noddy/sample test). Then you can add tests in
another patch or patches.

>
> I need to do some more testing/clean-up of the version that doesn't use
> pexpect. For example, I have only tested sandbox and not real HW, and also
> haven't tested (and perhaps implemented some of) the support for matching
> unexpected error messages in the console log. Still, that all shouldn't
> take too long.

OK sounds good.

Regards,
Simon
___
U-Boot mailing list
U-Boot@lists.denx.de
http://lists.denx.de/mailman/listinfo/u-boot


Re: [U-Boot] [PATCH] Implement pytest-based test infrastructure

2015-12-01 Thread Stephen Warren

On 12/01/2015 09:40 AM, Simon Glass wrote:
...

At present we don't have a sensible test framework for anything other
than sandbox, so to me the main benefit is that with your setup, we
do.

The benefit of the existing sandbox tests is that they are very fast.
We could bisect for a test failure in a few minutes. I'd like to make
sure that we can still write C tests (that are called from your
framework with results integrated into it) and that the Python tests
are also fast.

How do we move this forward? Are you planning to resend the patch with
the faster approach?


I'm tempted to squash down all/most the fixes/enhancements I've made 
since posting the original into a single commit rather than sending 
follow-on enhancements, since none of it is applied yet. I can keep the 
various test implementations etc. in separate commits as a series. Does 
that seem reasonable?


I need to do some more testing/clean-up of the version that doesn't use 
pexpect. For example, I have only tested sandbox and not real HW, and 
also haven't tested (and perhaps implemented some of) the support for 
matching unexpected error messages in the console log. Still, that all 
shouldn't take too long.



Re: [U-Boot] [PATCH] Implement pytest-based test infrastructure

2015-12-01 Thread Simon Glass
Hi Stephen,

On 30 November 2015 at 10:13, Stephen Warren  wrote:
>
> On 11/26/2015 07:52 PM, Simon Glass wrote:
>>
>> Hi Stephen,
>>
>> On 24 November 2015 at 13:28, Stephen Warren  wrote:
>>>
>>> On 11/24/2015 12:04 PM, Simon Glass wrote:


 Hi Stephen,

 On 23 November 2015 at 21:44, Stephen Warren 
 wrote:
>
>
> On 11/23/2015 06:45 PM, Simon Glass wrote:
>>
>>
>> On 22 November 2015 at 10:30, Stephen Warren 
>> wrote:
>>>
>>>
>>> On 11/21/2015 09:49 AM, Simon Glass wrote:
>>>
>>>
>>>
 OK I got it working thank you. It is horribly slow though - do you
 know what is holding it up? For me it takes 12 seconds to run the
 (very basic) tests.
>>>
>>>
>>> ..
>>>
> I put a bit of time measurement into run_command() and found that on my
> system at work, p.send("the shell command to execute") was actually
> (marginally) slower on sandbox than on real HW, despite real HW being a
> 115200 baud serial port, and the code splitting the shell commands into
> chunks that are sent and waited for synchronously to avoid overflowing
> UART FIFOs. I'm not sure why this is. Looking at U-Boot's console, it
> seems to be non-blocking, so I don't think termios VMIN/VTIME come into
> play (setting them to 0 made no difference), and the two raw modes took
> the same time. I meant to look into pexpect's termios settings to see if
> there was anything to tweak there, but forgot today.
>
> I did do one experiment to compare expect (the Tcl version) and pexpect.
> If I do roughly the following in both:
>
> spawn u-boot (sandbox)
> wait for prompt
> 100 times:
>   send "echo $foo\n"
>   wait for "echo $foo"
>   wait for shell prompt
> send "reset"
> wait for "reset"
> send "\n"
>
> ... then Tcl is about 3x faster on my system (IIRC 0.5 vs. 1.5s). If I
> remove all the "wait"s, then IIRC Tcl was about 15x faster or more.
> That's a pity. Still, I'm sure as heck not going to rewrite all this in
> Tcl :-( I wonder if something similar to pexpect but more targeted at
> simple "interactive shell" cases would remove any of that overhead.



 It is possible that we should use sandbox in 'cooked' mode so that
 lines are entered synchronously. The -t option might help here, or we
 may need something else.
>>>
>>>
>>>
>>> I don't think cooked mode will work, since I believe cooked is
>>> line-buffered, yet when U-Boot emits the shell prompt there's no \n printed
>>> afterwards.
>>
>>
>> Do you mean we need fflush() after writing the prompt? If so, that
>> should be easy to arrange. We have a similar problem with the LCD, and
>> added lcd_sync().
>
>
> Anything U-Boot does will only affect its own buffer when sending into the 
> PTY.
>
> If the test program used cooked mode for its reading side of the PTY, then 
> even with fflush() on the sending side, I don't believe reading from the PTY 
> would return characters until a \n appeared.

It normally works for me - do you have the PTY set up correctly?

>
> FWIW, passing "-t cooked" to U-Boot (which affects data in the other 
> direction to the discussion above) (plus hacking the code to disable 
> terminal-level input echoing) doesn't make any difference to the test timing. 
> That's not particularly surprising, since the test program sends each command 
> as a single write, so it's likely that U-Boot reads each command into its 
> stdin buffers in one go anyway.

Yes, I'm not really sure what is going on. But we should try to avoid
unnecessary waits and delays in the test framework, and spend as much
effort as possible actually running tests rather than dealing with I/O,
etc.

>
>>> FWIW, I hacked out pexpect and replaced it with some custom code. That
>> reduced my sandbox execution time from ~5.1s to ~2.3s. Execution time
>>> against real HW didn't seem to be affected at all. Some features like
>>> timeouts and complete error handling are still missing, but I don't think
>>> that would affect the execution time. See my github tree for the WIP patch.
>>
>>
>> Interesting, that's a big improvement. I wonder if we should look at
>> building U-Boot with SWIG to remove all these overheads? Then the
>> U-Boot command line (and any other feature we want) could become a
>> Python class. Of course that would only work for sandbox.
>
>
> SWIG doesn't seem like a good direction; it would re-introduce different 
> paths between sandbox and non-sandbox again. One of the main benefits of the 
> test/py/ approach is that sandbox and real HW are treated the same.

At present we don't have a sensible test framework for anything other
than sandbox, so to me the main benefit is that with your setup, we
do.

The benefit of the existing sandbox tests is that they are very fast.
We could bisect for a test failure in a few minutes. I'd like to make
sure that we can still write C tests (that are called from your
framework with results integrated into it) and that the Python tests
are also fast.

Re: [U-Boot] [PATCH] Implement pytest-based test infrastructure

2015-11-30 Thread Stephen Warren

On 11/26/2015 07:52 PM, Simon Glass wrote:

Hi Stephen,

On 24 November 2015 at 13:28, Stephen Warren  wrote:

On 11/24/2015 12:04 PM, Simon Glass wrote:


Hi Stephen,

On 23 November 2015 at 21:44, Stephen Warren 
wrote:


On 11/23/2015 06:45 PM, Simon Glass wrote:


On 22 November 2015 at 10:30, Stephen Warren 
wrote:


On 11/21/2015 09:49 AM, Simon Glass wrote:




OK I got it working thank you. It is horribly slow though - do you
know what is holding it up? For me it takes 12 seconds to run the
(very basic) tests.


..


I put a bit of time measurement into run_command() and found that on my
system at work, p.send("the shell command to execute") was actually
(marginally) slower on sandbox than on real HW, despite real HW being a
115200 baud serial port, and the code splitting the shell commands into
chunks that are sent and waited for synchronously to avoid overflowing
UART FIFOs. I'm not sure why this is. Looking at U-Boot's console, it
seems to be non-blocking, so I don't think termios VMIN/VTIME come into
play (setting them to 0 made no difference), and the two raw modes took
the same time. I meant to look into pexpect's termios settings to see if
there was anything to tweak there, but forgot today.

I did do one experiment to compare expect (the Tcl version) and pexpect.
If I do roughly the following in both:

spawn u-boot (sandbox)
wait for prompt
100 times:
  send "echo $foo\n"
  wait for "echo $foo"
  wait for shell prompt
send "reset"
wait for "reset"
send "\n"

... then Tcl is about 3x faster on my system (IIRC 0.5 vs. 1.5s). If I
remove all the "wait"s, then IIRC Tcl was about 15x faster or more.
That's a pity. Still, I'm sure as heck not going to rewrite all this in
Tcl :-( I wonder if something similar to pexpect but more targeted at
simple "interactive shell" cases would remove any of that overhead.
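The round-trip experiment above can be reproduced with a short standard-library script. This is a sketch under stated assumptions: it substitutes `cat` for the U-Boot sandbox binary, on the theory that any program echoing lines back on a PTY is enough to measure the send/wait harness overhead:

```python
import os, pty, signal, time

# Spawn a child on a PTY; `cat` stands in for the U-Boot console here.
pid, fd = pty.fork()
if pid == 0:
    os.execvp("cat", ["cat"])

def wait_for(fd, pattern):
    """Read from the PTY master until `pattern` appears in the output."""
    buf = b""
    while pattern not in buf:
        buf += os.read(fd, 1024)

# 100 send/wait round trips, mirroring the expect-vs-pexpect experiment.
start = time.time()
for i in range(100):
    os.write(fd, b"echo foo%d\n" % i)
    wait_for(fd, b"foo%d" % i)   # wait for the line to come back
elapsed = time.time() - start
print("100 round trips took %.3fs" % elapsed)

os.kill(pid, signal.SIGTERM)
os.close(fd)
os.waitpid(pid, 0)
```

Swapping in the sandbox binary (or a Tcl expect equivalent of the loop) would allow a like-for-like comparison of the per-command overhead.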



It is possible that we should use sandbox in 'cooked' mode so that
lines are entered synchronously. The -t option might help here, or we
may need something else.



I don't think cooked mode will work, since I believe cooked is
line-buffered, yet when U-Boot emits the shell prompt there's no \n printed
afterwards.


Do you mean we need fflush() after writing the prompt? If so, that
should be easy to arrange. We have a similar problem with the LCD, and
added lcd_sync().


Anything U-Boot does will only affect its own buffer when sending into 
the PTY.


If the test program used cooked mode for its reading side of the PTY, 
then even with fflush() on the sending side, I don't believe reading 
from the PTY would return characters until a \n appeared.


FWIW, passing "-t cooked" to U-Boot (which affects data in the other 
direction to the discussion above) (plus hacking the code to disable 
terminal-level input echoing) doesn't make any difference to the test 
timing. That's not particularly surprising, since the test program sends 
each command as a single write, so it's likely that U-Boot reads each 
command into its stdin buffers in one go anyway.



FWIW, I hacked out pexpect and replaced it with some custom code. That
reduced my sandbox execution time from ~5.1s to ~2.3s. Execution time
against real HW didn't seem to be affected at all. Some features like
timeouts and complete error handling are still missing, but I don't think
that would affect the execution time. See my github tree for the WIP patch.


Interesting, that's a big improvement. I wonder if we should look at
building U-Boot with SWIG to remove all these overheads? Then the
U-Boot command line (and any other feature we want) could become a
Python class. Of course that would only work for sandbox.


SWIG doesn't seem like a good direction; it would re-introduce different 
paths between sandbox and non-sandbox again. One of the main benefits of 
the test/py/ approach is that sandbox and real HW are treated the same.



Re: [U-Boot] [PATCH] Implement pytest-based test infrastructure

2015-11-26 Thread Simon Glass
Hi Stephen,

On 24 November 2015 at 13:28, Stephen Warren  wrote:
> On 11/24/2015 12:04 PM, Simon Glass wrote:
>>
>> Hi Stephen,
>>
>> On 23 November 2015 at 21:44, Stephen Warren 
>> wrote:
>>>
>>> On 11/23/2015 06:45 PM, Simon Glass wrote:

 On 22 November 2015 at 10:30, Stephen Warren 
 wrote:
>
> On 11/21/2015 09:49 AM, Simon Glass wrote:
>
>
>> OK I got it working thank you. It is horribly slow though - do you
>> know what is holding it up? For me it takes 12 seconds to run the
>> (very basic) tests.
>
> ..
>
>>> I put a bit of time measurement into run_command() and found that on my
>>> system at work, p.send("the shell command to execute") was actually
>>> (marginally) slower on sandbox than on real HW, despite real HW being a
>>> 115200 baud serial port, and the code splitting the shell commands into
>>> chunks that are sent and waited for synchronously to avoid overflowing
>>> UART FIFOs. I'm not sure why this is. Looking at U-Boot's console, it
>>> seems to be non-blocking, so I don't think termios VMIN/VTIME come into
>>> play (setting them to 0 made no difference), and the two raw modes took
>>> the same time. I meant to look into pexpect's termios settings to see if
>>> there was anything to tweak there, but forgot today.
>>>
>>> I did do one experiment to compare expect (the Tcl version) and pexpect.
>>> If I do roughly the following in both:
>>>
>>> spawn u-boot (sandbox)
>>> wait for prompt
>>> 100 times:
>>>  send "echo $foo\n"
>>>  wait for "echo $foo"
>>>  wait for shell prompt
>>> send "reset"
>>> wait for "reset"
>>> send "\n"
>>>
>>> ... then Tcl is about 3x faster on my system (IIRC 0.5 vs. 1.5s). If I
>>> remove all the "wait"s, then IIRC Tcl was about 15x faster or more.
>>> That's a pity. Still, I'm sure as heck not going to rewrite all this in
>>> Tcl :-( I wonder if something similar to pexpect but more targeted at
>>> simple "interactive shell" cases would remove any of that overhead.
>>
>>
>> It is possible that we should use sandbox in 'cooked' mode so that
>> lines are entered synchronously. The -t option might help here, or we
>> may need something else.
>
>
> I don't think cooked mode will work, since I believe cooked is
> line-buffered, yet when U-Boot emits the shell prompt there's no \n printed
> afterwards.

Do you mean we need fflush() after writing the prompt? If so, that
should be easy to arrange. We have a similar problem with the LCD, and
added lcd_sync().

>
> FWIW, I hacked out pexpect and replaced it with some custom code. That
> reduced my sandbox execution time from ~5.1s to ~2.3s. Execution time
> against real HW didn't seem to be affected at all. Some features like
> timeouts and complete error handling are still missing, but I don't think
> that would affect the execution time. See my github tree for the WIP patch.

Interesting, that's a big improvement. I wonder if we should look at
building U-Boot with SWIG to remove all these overheads? Then the
U-Boot command line (and any other feature we want) could become a
Python class. Of course that would only work for sandbox.

Regards,
Simon


Re: [U-Boot] [PATCH] Implement pytest-based test infrastructure

2015-11-24 Thread Stephen Warren

On 11/24/2015 12:04 PM, Simon Glass wrote:

Hi Stephen,

On 23 November 2015 at 21:44, Stephen Warren  wrote:

On 11/23/2015 06:45 PM, Simon Glass wrote:

On 22 November 2015 at 10:30, Stephen Warren  wrote:

On 11/21/2015 09:49 AM, Simon Glass wrote:



OK I got it working thank you. It is horribly slow though - do you
know what is holding it up? For me it takes 12 seconds to run the
(very basic) tests.

..

I put a bit of time measurement into run_command() and found that on my
system at work, p.send("the shell command to execute") was actually
(marginally) slower on sandbox than on real HW, despite real HW being a
115200 baud serial port, and the code splitting the shell commands into
chunks that are sent and waited for synchronously to avoid overflowing
UART FIFOs. I'm not sure why this is. Looking at U-Boot's console, it
seems to be non-blocking, so I don't think termios VMIN/VTIME come into
play (setting them to 0 made no difference), and the two raw modes took
the same time. I meant to look into pexpect's termios settings to see if
there was anything to tweak there, but forgot today.

I did do one experiment to compare expect (the Tcl version) and pexpect.
If I do roughly the following in both:

spawn u-boot (sandbox)
wait for prompt
100 times:
 send "echo $foo\n"
 wait for "echo $foo"
 wait for shell prompt
send "reset"
wait for "reset"
send "\n"

... then Tcl is about 3x faster on my system (IIRC 0.5 vs. 1.5s). If I
remove all the "wait"s, then IIRC Tcl was about 15x faster or more.
That's a pity. Still, I'm sure as heck not going to rewrite all this in
Tcl :-( I wonder if something similar to pexpect but more targeted at
simple "interactive shell" cases would remove any of that overhead.


It is possible that we should use sandbox in 'cooked' mode so that
lines are entered synchronously. The -t option might help here, or we
may need something else.


I don't think cooked mode will work, since I believe cooked is 
line-buffered, yet when U-Boot emits the shell prompt there's no \n 
printed afterwards.


FWIW, I hacked out pexpect and replaced it with some custom code. That 
reduced my sandbox execution time from ~5.1s to ~2.3s. Execution time
against real HW didn't seem to be affected at all. Some features like 
timeouts and complete error handling are still missing, but I don't 
think that would affect the execution time. See my github tree for the 
WIP patch.
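A minimal version of such a custom pexpect replacement might look like the sketch below. This is a hypothetical illustration, not the actual WIP patch: it uses `sh` as a stand-in for the U-Boot console, avoids pexpect's artificial send delays, and deliberately omits timeouts and error handling (the missing features noted above):

```python
import os, pty, subprocess

class SimpleConsole:
    """Bare-bones expect-alike: spawn a process on a PTY and wait for
    literal strings in its output, with no artificial send delay."""
    def __init__(self, argv):
        self.master, slave = pty.openpty()
        self.child = subprocess.Popen(argv, stdin=slave, stdout=slave,
                                      stderr=slave, close_fds=True)
        os.close(slave)
        self.buf = b""

    def send(self, text):
        os.write(self.master, text.encode())

    def wait_for(self, pattern):
        """Block until `pattern` appears; return output up to the match.
        No timeout handling: a pattern that never arrives hangs forever."""
        pat = pattern.encode()
        while pat not in self.buf:
            self.buf += os.read(self.master, 4096)
        end = self.buf.index(pat) + len(pat)
        out, self.buf = self.buf[:end], self.buf[end:]
        return out.decode("utf-8", errors="replace")

    def close(self):
        self.child.kill()
        self.child.wait()
        os.close(self.master)

# Example: drive an ordinary shell the way the tests drive U-Boot.
c = SimpleConsole(["sh"])
c.send("echo hello\n")
out = c.wait_for("hello")
c.close()
```

Because the read loop does nothing but append to a buffer and search for the pattern, it sidesteps the per-send delays and generality that make pexpect slower for this simple "interactive shell" case.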



Re: [U-Boot] [PATCH] Implement pytest-based test infrastructure

2015-11-24 Thread Simon Glass
Hi Stephen,

On 23 November 2015 at 21:44, Stephen Warren  wrote:
> On 11/23/2015 06:45 PM, Simon Glass wrote:
>> Hi Stephen,
>>
>> On 22 November 2015 at 10:30, Stephen Warren  wrote:
>>> On 11/21/2015 09:49 AM, Simon Glass wrote:
 Hi Stephen,

 On 19 November 2015 at 12:09, Stephen Warren  wrote:
>
> On 11/19/2015 10:00 AM, Stephen Warren wrote:
>>
>> On 11/19/2015 07:45 AM, Simon Glass wrote:
>>>
>>> Hi Stephen,
>>>
>>> On 14 November 2015 at 23:53, Stephen Warren 
>>> wrote:

 This tool aims to test U-Boot by executing U-Boot shell commands
 using the
 console interface. A single top-level script exists to execute or 
 attach
 to the U-Boot console, run the entire script of tests against it, and
 summarize the results. Advantages of this approach are:

 - Testing is performed in the same way a user or script would interact
with U-Boot; there can be no disconnect.
 - There is no need to write or embed test-related code into U-Boot
 itself.
It is asserted that writing test-related code in Python is simpler
 and
   more flexible than writing it all in C.
 - It is reasonably simple to interact with U-Boot in this way.

 A few simple tests are provided as examples. Soon, we should convert as
 many as possible of the other tests in test/* and test/cmd_ut.c too.
>>>
>>>
>>> It's great to see this and thank you for putting in the effort!
>>>
>>> It looks like a good way of doing functional tests. I still see a role
>>> for unit tests and things like test/dm. But if we can arrange to call
>>> all U-Boot tests (unit and functional) from one 'test.py' command that
>>> would be a win.
>>>
>>> I'll look more when I can get it to work - see below.
>
> ...
>>
>> made it print a message about checking the docs for missing
>> requirements. I can probably patch the top-level test.py to do the same.
>
>
> I've pushed such a patch to:
>
> git://github.com/swarren/u-boot.git tegra_dev
> (the separate pytests branch has now been deleted)
>
> There are also a variety of other patches there related to this testing 
> infra-structure. I guess I'll hold off sending them to the list until 
> there's been some general feedback on the patches I've already posted, 
> but feel free to pull the branch down and play with it. Note that it's 
> likely to get rebased as I work.

 OK I got it working thank you. It is horribly slow though - do you
 know what is holding it up? For me it takes 12 seconds to run the
 (very basic) tests.
>>>
>>> It looks like pexpect includes a default delay to simulate human
>>> interaction. If you edit test/py/uboot_console_base.py ensure_spawned()
>>> and add the following somewhere soon after the assignment to self.p:
>>>
>>> self.p.delaybeforesend = 0
>>>
>>> ... that will more than halve the execution time. (8.3 -> 3.5s on my
>>> 5-year-old laptop).
>>>
>>> That said, even your 12s or my 8.3s doesn't seem like a bad price to pay
>>> for some easy-to-use automated testing.
>>
>> Sure, but my reference is to the difference between a native C test
>> and this framework. As we add more and more tests the overhead will be
>> significant. If it takes 8 seconds to run the current (fairly trivial)
>> tests, it might take a minute to run a larger suite, and to me that is
>> too long (e.g. to bisect for a failing commit).
>>
>> I wonder what is causing the delay?
>
> I actually hope the opposite.
>
> Most of the tests supported today are the most trivial possible tests,
> i.e. they take very little CPU time on the target to implement. I would
> naively expect that once we implement more interesting tests (USB Mass
> Storage, USB enumeration, eMMC/SD/USB data reading, Ethernet DHCP/TFTP,
> ...) the command invocation overhead will rapidly become insignificant.
> This certainly seems to be true for the UMS test I have locally, but who
> knows whether this will be more generally true.

We do have a USB enumeration and storage test including data reading.
We have some simple 'ping' Ethernet tests. These run in close to no
time (they fudge the timer).

I think you are referring to tests running on real hardware. In that
case I'm sure you are right - e.g. the USB or Ethernet PHY delays will
dwarf the framework time.

I should have been clear that I am most concerned about sandbox tests
running quickly. To me that is where we have the most to gain or lose.

>
> I put a bit of time measurement into run_command() and found that on my
> system at work, p.send("the shell command to execute") was actually
> (marginally) slower on sandbox than on real HW, despite real HW being a
> 115200 baud serial port, and the code splitting the shell commands into
> chunks that are sent and waited for synchronously to avoid overflowing
> UART FIFOs.

Re: [U-Boot] [PATCH] Implement pytest-based test infrastructure

2015-11-23 Thread Stephen Warren
On 11/23/2015 06:45 PM, Simon Glass wrote:
> Hi Stephen,
> 
> On 22 November 2015 at 10:30, Stephen Warren  wrote:
>> On 11/21/2015 09:49 AM, Simon Glass wrote:
>>> Hi Stephen,
>>>
>>> On 19 November 2015 at 12:09, Stephen Warren  wrote:

 On 11/19/2015 10:00 AM, Stephen Warren wrote:
>
> On 11/19/2015 07:45 AM, Simon Glass wrote:
>>
>> Hi Stephen,
>>
>> On 14 November 2015 at 23:53, Stephen Warren 
>> wrote:
>>>
>>> This tool aims to test U-Boot by executing U-Boot shell commands
>>> using the
>>> console interface. A single top-level script exists to execute or attach
>>> to the U-Boot console, run the entire script of tests against it, and
>>> summarize the results. Advantages of this approach are:
>>>
>>> - Testing is performed in the same way a user or script would interact
>>>with U-Boot; there can be no disconnect.
>>> - There is no need to write or embed test-related code into U-Boot
>>> itself.
>>>It is asserted that writing test-related code in Python is simpler
>>> and
>>>more flexible than writing it all in C.
>>> - It is reasonably simple to interact with U-Boot in this way.
>>>
>>> A few simple tests are provided as examples. Soon, we should convert as
>>> many as possible of the other tests in test/* and test/cmd_ut.c too.
>>
>>
>> It's great to see this and thank you for putting in the effort!
>>
>> It looks like a good way of doing functional tests. I still see a role
>> for unit tests and things like test/dm. But if we can arrange to call
>> all U-Boot tests (unit and functional) from one 'test.py' command that
>> would be a win.
>>
>> I'll look more when I can get it to work - see below.

 ...
>
> made it print a message about checking the docs for missing
> requirements. I can probably patch the top-level test.py to do the same.


 I've pushed such a patch to:

 git://github.com/swarren/u-boot.git tegra_dev
 (the separate pytests branch has now been deleted)

 There are also a variety of other patches there related to this testing 
 infra-structure. I guess I'll hold off sending them to the list until 
 there's been some general feedback on the patches I've already posted, but 
 feel free to pull the branch down and play with it. Note that it's likely 
 to get rebased as I work.
>>>
>>> OK I got it working thank you. It is horribly slow though - do you
>>> know what is holding it up? For me it takes 12 seconds to run the
>>> (very basic) tests.
>>
>> It looks like pexpect includes a default delay to simulate human
>> interaction. If you edit test/py/uboot_console_base.py ensure_spawned()
>> and add the following somewhere soon after the assignment to self.p:
>>
>> self.p.delaybeforesend = 0
>>
>> ... that will more than halve the execution time. (8.3 -> 3.5s on my
>> 5-year-old laptop).
>>
>> That said, even your 12s or my 8.3s doesn't seem like a bad price to pay
>> for some easy-to-use automated testing.
> 
> Sure, but my reference is to the difference between a native C test
> and this framework. As we add more and more tests the overhead will be
> significant. If it takes 8 seconds to run the current (fairly trivial)
> tests, it might take a minute to run a larger suite, and to me that is
> too long (e.g. to bisect for a failing commit).
> 
> I wonder what is causing the delay?

I actually hope the opposite.

Most of the tests supported today are the most trivial possible tests,
i.e. they take very little CPU time on the target to implement. I would
naively expect that once we implement more interesting tests (USB Mass
Storage, USB enumeration, eMMC/SD/USB data reading, Ethernet DHCP/TFTP,
...) the command invocation overhead will rapidly become insignificant.
This certainly seems to be true for the UMS test I have locally, but who
knows whether this will be more generally true.

I put a bit of time measurement into run_command() and found that on my
system at work, p.send("the shell command to execute") was actually
(marginally) slower on sandbox than on real HW, despite real HW being a
115200 baud serial port, and the code splitting the shell commands into
chunks that are sent and waited for synchronously to avoid overflowing
UART FIFOs. I'm not sure why this is. Looking at U-Boot's console, it
seems to be non-blocking, so I don't think termios VMIN/VTIME come into
play (setting them to 0 made no difference), and the two raw modes took
the same time. I meant to look into pexpect's termios settings to see if
there was anything to tweak there, but forgot today.
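The termios experiment described above can be sketched with the standard library alone. This is illustrative only: it opens a fresh PTY pair rather than poking at pexpect's internal file descriptor, puts the slave into raw mode (as pexpect does), and then zeroes VMIN/VTIME so reads return immediately with whatever data is available:

```python
import os, pty, termios, tty

# Open a PTY pair and switch the slave to raw mode (setraw leaves
# VMIN=1, VTIME=0, i.e. reads block until at least one byte arrives).
master, slave = pty.openpty()
tty.setraw(slave)
attrs = termios.tcgetattr(slave)
print("before: VMIN =", attrs[6][termios.VMIN],
      "VTIME =", attrs[6][termios.VTIME])

# Zero both fields, as tried in the experiment: reads now return
# immediately even when no data is buffered.
attrs[6][termios.VMIN] = 0
attrs[6][termios.VTIME] = 0
termios.tcsetattr(slave, termios.TCSANOW, attrs)

after = termios.tcgetattr(slave)[6]
print("after: VMIN =", after[termios.VMIN], "VTIME =", after[termios.VTIME])
os.close(master)
os.close(slave)
```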

I did do one experiment to compare expect (the Tcl version) and pexpect.
If I do roughly the following in both:

spawn u-boot (sandbox)
wait for prompt
100 times:
send "echo $foo\n"
wait for "echo $foo"
wait for shell prompt
send "reset"
wait for "reset"
send "\n"

... then Tcl is about 3x faster on my system (IIRC 0.5 vs. 1.5s).

Re: [U-Boot] [PATCH] Implement pytest-based test infrastructure

2015-11-23 Thread Stephen Warren
On 11/23/2015 07:18 PM, Simon Glass wrote:
> Hi Stephen,
> 
> On 23 November 2015 at 18:45, Simon Glass  wrote:
>> Hi Stephen,
>>
>> On 22 November 2015 at 10:30, Stephen Warren  wrote:
>>> On 11/21/2015 09:49 AM, Simon Glass wrote:
 Hi Stephen,

 On 19 November 2015 at 12:09, Stephen Warren  wrote:
>
> On 11/19/2015 10:00 AM, Stephen Warren wrote:
>>
>> On 11/19/2015 07:45 AM, Simon Glass wrote:
>>>
>>> Hi Stephen,
>>>
>>> On 14 November 2015 at 23:53, Stephen Warren 
>>> wrote:

 This tool aims to test U-Boot by executing U-Boot shell commands
 using the
 console interface. A single top-level script exists to execute or 
 attach
 to the U-Boot console, run the entire script of tests against it, and
 summarize the results. Advantages of this approach are:

 - Testing is performed in the same way a user or script would interact
with U-Boot; there can be no disconnect.
 - There is no need to write or embed test-related code into U-Boot
 itself.
It is asserted that writing test-related code in Python is simpler
 and
   more flexible than writing it all in C.
 - It is reasonably simple to interact with U-Boot in this way.

 A few simple tests are provided as examples. Soon, we should convert as
 many as possible of the other tests in test/* and test/cmd_ut.c too.
>>>
>>>
>>> It's great to see this and thank you for putting in the effort!
>>>
>>> It looks like a good way of doing functional tests. I still see a role
>>> for unit tests and things like test/dm. But if we can arrange to call
>>> all U-Boot tests (unit and functional) from one 'test.py' command that
>>> would be a win.
>>>
>>> I'll look more when I can get it to work - see below.
>
> ...
>>
>> made it print a message about checking the docs for missing
>> requirements. I can probably patch the top-level test.py to do the same.
>
>
> I've pushed such a patch to:
>
> git://github.com/swarren/u-boot.git tegra_dev
> (the separate pytests branch has now been deleted)
>
> There are also a variety of other patches there related to this testing 
> infra-structure. I guess I'll hold off sending them to the list until 
> there's been some general feedback on the patches I've already posted, 
> but feel free to pull the branch down and play with it. Note that it's 
> likely to get rebased as I work.

 OK I got it working thank you. It is horribly slow though - do you
 know what is holding it up? For me it takes 12 seconds to run the
 (very basic) tests.
>>>
>>> It looks like pexpect includes a default delay to simulate human
>>> interaction. If you edit test/py/uboot_console_base.py ensure_spawned()
>>> and add the following somewhere soon after the assignment to self.p:
>>>
>>> self.p.delaybeforesend = 0
>>>
>>> ... that will more than halve the execution time. (8.3 -> 3.5s on my
>>> 5-year-old laptop).
>>>
>>> That said, even your 12s or my 8.3s doesn't seem like a bad price to pay
>>> for some easy-to-use automated testing.
>>
>> Sure, but my reference is to the difference between a native C test
>> and this framework. As we add more and more tests the overhead will be
>> significant. If it takes 8 seconds to run the current (fairly trivial)
>> tests, it might take a minute to run a larger suite, and to me that is
>> too long (e.g. to bisect for a failing commit).
>>
>> I wonder what is causing the delay?
>>
>>>
 Also please see dm_test_usb_tree() which uses a console buffer to
 check command output.
>>>
>>> OK, I'll take a look.
>>>
 I wonder if we should use something like that
 for simple unit tests, and use python for the more complicated
 functional tests?
>>>
>>> I'm not sure that's a good idea; it'd be best to settle on a single way
>>> of executing tests so that (a) people don't have to run/implement
>>> different kinds of tests in different ways (b) we can leverage test code
>>> across as many tests as possible.
>>>
>>> (Well, doing unit tests and system level tests differently might be
>>> necessary since one calls functions and the other uses the shell "user
>>> interface", but having multiple ways of doing e.g. system tests doesn't
>>> seem like a good idea.)
>>
>> As you found with some of the tests, it is convenient/necessary to be
>> able to call U-Boot C functions in some tests. So I don't see this as
>> a one-size-fits-all solution.
>>
>> I think it is perfectly reasonable for the python framework to run the
>> existing C tests - there is no need to rewrite them in Python. Also
>> for the driver model tests - we can just run the tests from some sort
>> of python wrapper and get the best of both worlds, right?
>>
>> Please don't take this to indicate any lack of enthusiasm for what you
>> are doing - it's a great development and I'm sure it will help a lot!

Re: [U-Boot] [PATCH] Implement pytest-based test infrastructure

2015-11-23 Thread Simon Glass
Hi Stephen,

On 23 November 2015 at 18:45, Simon Glass  wrote:
> Hi Stephen,
>
> On 22 November 2015 at 10:30, Stephen Warren  wrote:
>> On 11/21/2015 09:49 AM, Simon Glass wrote:
>>> Hi Stephen,
>>>
>>> On 19 November 2015 at 12:09, Stephen Warren  wrote:

 On 11/19/2015 10:00 AM, Stephen Warren wrote:
>
> On 11/19/2015 07:45 AM, Simon Glass wrote:
>>
>> Hi Stephen,
>>
>> On 14 November 2015 at 23:53, Stephen Warren 
>> wrote:
>>>
>>> This tool aims to test U-Boot by executing U-Boot shell commands
>>> using the
>>> console interface. A single top-level script exists to execute or attach
>>> to the U-Boot console, run the entire script of tests against it, and
>>> summarize the results. Advantages of this approach are:
>>>
>>> - Testing is performed in the same way a user or script would interact
>>>with U-Boot; there can be no disconnect.
>>> - There is no need to write or embed test-related code into U-Boot
>>> itself.
>>>It is asserted that writing test-related code in Python is simpler
>>> and
>>>more flexible than writing it all in C.
>>> - It is reasonably simple to interact with U-Boot in this way.
>>>
>>> A few simple tests are provided as examples. Soon, we should convert as
>>> many as possible of the other tests in test/* and test/cmd_ut.c too.
>>
>>
>> It's great to see this and thank you for putting in the effort!
>>
>> It looks like a good way of doing functional tests. I still see a role
>> for unit tests and things like test/dm. But if we can arrange to call
>> all U-Boot tests (unit and functional) from one 'test.py' command that
>> would be a win.
>>
>> I'll look more when I can get it to work - see below.

 ...
>
> made it print a message about checking the docs for missing
> requirements. I can probably patch the top-level test.py to do the same.


 I've pushed such a patch to:

 git://github.com/swarren/u-boot.git tegra_dev
 (the separate pytests branch has now been deleted)

 There are also a variety of other patches there related to this testing 
 infrastructure. I guess I'll hold off sending them to the list until 
 there's been some general feedback on the patches I've already posted, but 
 feel free to pull the branch down and play with it. Note that it's likely 
 to get rebased as I work.
>>>
>>> OK I got it working thank you. It is horribly slow though - do you
>>> know what is holding it up? For me it takes 12 seconds to run the
>>> (very basic) tests.
>>
>> It looks like pexpect includes a default delay to simulate human
>> interaction. If you edit test/py/uboot_console_base.py ensure_spawned()
>> and add the following somewhere soon after the assignment to self.p:
>>
>> self.p.delaybeforesend = 0
>>
>> ... that will more than halve the execution time. (8.3 -> 3.5s on my
>> 5-year-old laptop).
>>
>> That said, even your 12s or my 8.3s doesn't seem like a bad price to pay
>> for some easy-to-use automated testing.
>
> Sure, but my reference is to the difference between a native C test
> and this framework. As we add more and more tests the overhead will be
> significant. If it takes 8 seconds to run the current (fairly trivial)
> tests, it might take a minute to run a larger suite, and to me that is
> too long (e.g. to bisect for a failing commit).
>
> I wonder what is causing the delay?
>
>>
>>> Also please see dm_test_usb_tree() which uses a console buffer to
>>> check command output.
>>
>> OK, I'll take a look.
>>
>>> I wonder if we should use something like that
>>> for simple unit tests, and use python for the more complicated
>>> functional tests?
>>
>> I'm not sure that's a good idea; it'd be best to settle on a single way
>> of executing tests so that (a) people don't have to run/implement
>> different kinds of tests in different ways (b) we can leverage test code
>> across as many tests as possible.
>>
>> (Well, doing unit tests and system level tests differently might be
>> necessary since one calls functions and the other uses the shell "user
>> interface", but having multiple ways of doing e.g. system tests doesn't
>> seem like a good idea.)
>
> As you found with some of the tests, it is convenient/necessary to be
> able to call U-Boot C functions in some tests. So I don't see this as
> a one-size-fits-all solution.
>
> I think it is perfectly reasonable for the python framework to run the
> existing C tests - there is no need to rewrite them in Python. Also
> for the driver model tests - we can just run the tests from some sort
> of python wrapper and get the best of both worlds, right?
>
> Please don't take this to indicate any lack of enthusiasm for what you
> are doing - it's a great development and I'm sure it will help a lot!
> We really need to unify all the tests so we can run them all in one
> step.
>
> I just think we should aim to have the automated tests run in a few
> seconds (let's say 5-10 at the outside).

Re: [U-Boot] [PATCH] Implement pytest-based test infrastructure

2015-11-23 Thread Simon Glass
Hi Stephen,

On 22 November 2015 at 10:30, Stephen Warren  wrote:
> On 11/21/2015 09:49 AM, Simon Glass wrote:
>> Hi Stephen,
>>
>> On 19 November 2015 at 12:09, Stephen Warren  wrote:
>>>
>>> On 11/19/2015 10:00 AM, Stephen Warren wrote:

 On 11/19/2015 07:45 AM, Simon Glass wrote:
>
> Hi Stephen,
>
> On 14 November 2015 at 23:53, Stephen Warren 
> wrote:
>>
>> This tool aims to test U-Boot by executing U-Boot shell commands
>> using the
>> console interface. A single top-level script exists to execute or attach
>> to the U-Boot console, run the entire script of tests against it, and
>> summarize the results. Advantages of this approach are:
>>
>> - Testing is performed in the same way a user or script would interact
>>with U-Boot; there can be no disconnect.
>> - There is no need to write or embed test-related code into U-Boot
>> itself.
>>It is asserted that writing test-related code in Python is simpler
>> and
>>more flexible than writing it all in C.
>> - It is reasonably simple to interact with U-Boot in this way.
>>
>> A few simple tests are provided as examples. Soon, we should convert as
>> many as possible of the other tests in test/* and test/cmd_ut.c too.
>
>
> It's great to see this and thank you for putting in the effort!
>
> It looks like a good way of doing functional tests. I still see a role
> for unit tests and things like test/dm. But if we can arrange to call
> all U-Boot tests (unit and functional) from one 'test.py' command that
> would be a win.
>
> I'll look more when I can get it to work - see below.
>>>
>>> ...

 made it print a message about checking the docs for missing
 requirements. I can probably patch the top-level test.py to do the same.
>>>
>>>
>>> I've pushed such a patch to:
>>>
>>> git://github.com/swarren/u-boot.git tegra_dev
>>> (the separate pytests branch has now been deleted)
>>>
>>> There are also a variety of other patches there related to this testing 
>>> infrastructure. I guess I'll hold off sending them to the list until 
>>> there's been some general feedback on the patches I've already posted, but 
>>> feel free to pull the branch down and play with it. Note that it's likely 
>>> to get rebased as I work.
>>
>> OK I got it working thank you. It is horribly slow though - do you
>> know what is holding it up? For me it takes 12 seconds to run the
>> (very basic) tests.
>
> It looks like pexpect includes a default delay to simulate human
> interaction. If you edit test/py/uboot_console_base.py ensure_spawned()
> and add the following somewhere soon after the assignment to self.p:
>
> self.p.delaybeforesend = 0
>
> ... that will more than halve the execution time. (8.3 -> 3.5s on my
> 5-year-old laptop).
>
> That said, even your 12s or my 8.3s doesn't seem like a bad price to pay
> for some easy-to-use automated testing.

Sure, but my reference is to the difference between a native C test
and this framework. As we add more and more tests the overhead will be
significant. If it takes 8 seconds to run the current (fairly trivial)
tests, it might take a minute to run a larger suite, and to me that is
too long (e.g. to bisect for a failing commit).

I wonder what is causing the delay?

>
>> Also please see dm_test_usb_tree() which uses a console buffer to
>> check command output.
>
> OK, I'll take a look.
>
>> I wonder if we should use something like that
>> for simple unit tests, and use python for the more complicated
>> functional tests?
>
> I'm not sure that's a good idea; it'd be best to settle on a single way
> of executing tests so that (a) people don't have to run/implement
> different kinds of tests in different ways (b) we can leverage test code
> across as many tests as possible.
>
> (Well, doing unit tests and system level tests differently might be
> necessary since one calls functions and the other uses the shell "user
> interface", but having multiple ways of doing e.g. system tests doesn't
> seem like a good idea.)

As you found with some of the tests, it is convenient/necessary to be
able to call U-Boot C functions in some tests. So I don't see this as
a one-size-fits-all solution.

I think it is perfectly reasonable for the python framework to run the
existing C tests - there is no need to rewrite them in Python. Also
for the driver model tests - we can just run the tests from some sort
of python wrapper and get the best of both worlds, right?

Please don't take this to indicate any lack of enthusiasm for what you
are doing - it's a great development and I'm sure it will help a lot!
We really need to unify all the tests so we can run them all in one
step.

I just think we should aim to have the automated tests run in a few
seconds (let's say 5-10 at the outside). We need to make sure that the
python framework will allow this even when running thousands of tests.

Regards,
Simon

Re: [U-Boot] [PATCH] Implement pytest-based test infrastructure

2015-11-23 Thread Stephen Warren

On 11/23/2015 04:44 PM, Tom Rini wrote:

On Thu, Nov 19, 2015 at 10:00:32AM -0700, Stephen Warren wrote:

...

See the following in test/py/README.md:


## Requirements

The test suite is implemented using pytest. Interaction with the U-Boot
console uses pexpect. Interaction with real hardware uses the tools of your
choice; you get to implement various "hook" scripts that are called by the
test suite at the appropriate time.

On Debian or Debian-like distributions, the following packages are required.
Similar package names should exist in other distributions.

| Package| Version tested (Ubuntu 14.04) |
| -- | - |
| python | 2.7.5-5ubuntu3|
| python-pytest  | 2.5.1-1   |
| python-pexpect | 3.1-1ubuntu0.1|


In the main Python code, I trapped at least one exception location
and made it print a message about checking the docs for missing
requirements. I can probably patch the top-level test.py to do the
same.


Isn't there some way to inject the local-to-U-Boot copy of the libraries
in? I swear I've done something like that before in Python...


It would certainly be possible to either check in the required Python 
libraries in the U-Boot source tree, or include instructions for people 
to manually create a "virtualenv" (or perhaps even automatically do this 
from test.py). However, I was hoping to avoid the need for that since 
those options are a bit more complex than "just install these 3 packages 
and run the script". (And in fact I've already mentioned 
virtualenv-based setup instructions in the README for people with 
archaic distros).


Still, if we find that varying versions of pytest/pexpect don't work 
well, we could certainly choose one of those options.
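The path-injection approach Tom mentions usually amounts to only a few lines. Here is a minimal, hypothetical sketch: it assumes vendored copies of pytest/pexpect would be checked in under `test/py/vendor/` (a layout that does not exist in the tree today), and it simply prepends that directory to `sys.path` so in-tree copies win over any system-wide installation.

```python
import os
import sys

def inject_vendor_dir(source_root):
    """Prepend an in-tree vendor directory to sys.path so that
    'import pytest' / 'import pexpect' resolve to checked-in copies
    before any system-wide installation. The test/py/vendor/ layout
    used here is hypothetical, not part of the posted patch."""
    vendor = os.path.join(source_root, "test", "py", "vendor")
    if vendor not in sys.path:
        sys.path.insert(0, vendor)
    return vendor

# Placeholder source root for illustration; test.py would derive the
# real one from its own location.
vendor = inject_vendor_dir("/path/to/u-boot")
```

After this runs, `sys.path[0]` is the vendor directory, so subsequent imports prefer it; the trade-off, as noted above, is the extra complexity of keeping vendored copies or a virtualenv in sync versus just installing three distro packages.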


BTW, I've created a ton of patches on top of all these that I haven't 
posted yet. See:


git://github.com/swarren/u-boot.git tegra_dev

I'm not sure if I should squash all that into a V2 of this patch, or 
just post them all as incremental fixes/enhancements?

___
U-Boot mailing list
U-Boot@lists.denx.de
http://lists.denx.de/mailman/listinfo/u-boot


Re: [U-Boot] [PATCH] Implement pytest-based test infrastructure

2015-11-23 Thread Tom Rini
On Thu, Nov 19, 2015 at 10:00:32AM -0700, Stephen Warren wrote:
> On 11/19/2015 07:45 AM, Simon Glass wrote:
> >Hi Stephen,
> >
> >On 14 November 2015 at 23:53, Stephen Warren  wrote:
> >>This tool aims to test U-Boot by executing U-Boot shell commands using the
> >>console interface. A single top-level script exists to execute or attach
> >>to the U-Boot console, run the entire script of tests against it, and
> >>summarize the results. Advantages of this approach are:
> >>
> >>- Testing is performed in the same way a user or script would interact
> >>   with U-Boot; there can be no disconnect.
> >>- There is no need to write or embed test-related code into U-Boot itself.
> >>   It is asserted that writing test-related code in Python is simpler and
> >>   more flexible than writing it all in C.
> >>- It is reasonably simple to interact with U-Boot in this way.
> >>
> >>A few simple tests are provided as examples. Soon, we should convert as
> >>many as possible of the other tests in test/* and test/cmd_ut.c too.
> >
> >It's great to see this and thank you for putting in the effort!
> >
> >It looks like a good way of doing functional tests. I still see a role
> >for unit tests and things like test/dm. But if we can arrange to call
> >all U-Boot tests (unit and functional) from one 'test.py' command that
> >would be a win.
> >
> >I'll look more when I can get it to work - see below.
> ...
> >I get this on my Ubuntu 64-bit machine (14.04.3)
> >
> >$ ./test/py/test.py --bd sandbox --build
> >Traceback (most recent call last):
> >   File "./test/py/test.py", line 12, in <module>
> > os.execvp("py.test", args)
> >   File "/usr/lib/python2.7/os.py", line 344, in execvp
> > _execvpe(file, args)
> >   File "/usr/lib/python2.7/os.py", line 380, in _execvpe
> > func(fullname, *argrest)
> >OSError: [Errno 2] No such file or directory
> 
> "py.test" isn't in your $PATH. Did you install it? See the following
> in test/py/README.md:
> 
> >## Requirements
> >
> >The test suite is implemented using pytest. Interaction with the U-Boot
> >console uses pexpect. Interaction with real hardware uses the tools of your
> >choice; you get to implement various "hook" scripts that are called by the
> >test suite at the appropriate time.
> >
> >On Debian or Debian-like distributions, the following packages are required.
> >Similar package names should exist in other distributions.
> >
> >| Package| Version tested (Ubuntu 14.04) |
> >| -- | - |
> >| python | 2.7.5-5ubuntu3|
> >| python-pytest  | 2.5.1-1   |
> >| python-pexpect | 3.1-1ubuntu0.1|
> 
> In the main Python code, I trapped at least one exception location
> and made it print a message about checking the docs for missing
> requirements. I can probably patch the top-level test.py to do the
> same.

Isn't there some way to inject the local-to-U-Boot copy of the libraries
in? I swear I've done something like that before in Python...


-- 
Tom




Re: [U-Boot] [PATCH] Implement pytest-based test infrastructure

2015-11-22 Thread Stephen Warren
On 11/21/2015 09:49 AM, Simon Glass wrote:
> Hi Stephen,
> 
> On 19 November 2015 at 12:09, Stephen Warren  wrote:
>>
>> On 11/19/2015 10:00 AM, Stephen Warren wrote:
>>>
>>> On 11/19/2015 07:45 AM, Simon Glass wrote:

 Hi Stephen,

 On 14 November 2015 at 23:53, Stephen Warren 
 wrote:
>
> This tool aims to test U-Boot by executing U-Boot shell commands
> using the
> console interface. A single top-level script exists to execute or attach
> to the U-Boot console, run the entire script of tests against it, and
> summarize the results. Advantages of this approach are:
>
> - Testing is performed in the same way a user or script would interact
>with U-Boot; there can be no disconnect.
> - There is no need to write or embed test-related code into U-Boot
> itself.
>It is asserted that writing test-related code in Python is simpler
> and
>more flexible than writing it all in C.
> - It is reasonably simple to interact with U-Boot in this way.
>
> A few simple tests are provided as examples. Soon, we should convert as
> many as possible of the other tests in test/* and test/cmd_ut.c too.


 It's great to see this and thank you for putting in the effort!

 It looks like a good way of doing functional tests. I still see a role
 for unit tests and things like test/dm. But if we can arrange to call
 all U-Boot tests (unit and functional) from one 'test.py' command that
 would be a win.

 I'll look more when I can get it to work - see below.
>>
>> ...
>>>
>>> made it print a message about checking the docs for missing
>>> requirements. I can probably patch the top-level test.py to do the same.
>>
>>
>> I've pushed such a patch to:
>>
>> git://github.com/swarren/u-boot.git tegra_dev
>> (the separate pytests branch has now been deleted)
>>
>> There are also a variety of other patches there related to this testing 
>> infrastructure. I guess I'll hold off sending them to the list until 
>> there's been some general feedback on the patches I've already posted, but 
>> feel free to pull the branch down and play with it. Note that it's likely to 
>> get rebased as I work.
> 
> OK I got it working thank you. It is horribly slow though - do you
> know what is holding it up? For me it takes 12 seconds to run the
> (very basic) tests.

It looks like pexpect includes a default delay to simulate human
interaction. If you edit test/py/uboot_console_base.py ensure_spawned()
and add the following somewhere soon after the assignment to self.p:

self.p.delaybeforesend = 0

... that will more than halve the execution time. (8.3 -> 3.5s on my
5-year-old laptop).

That said, even your 12s or my 8.3s doesn't seem like a bad price to pay
for some easy-to-use automated testing.
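To illustrate where the time goes, the effect of pexpect's send delay can be modelled with the standard library alone. The sketch below does not use pexpect itself; `FakeConsole` is a made-up stand-in whose `send()` sleeps for `delaybeforesend` seconds first, mimicking the human-typing delay (pexpect 3.x defaults it to 0.05s), so the per-command overhead accumulates across a test run exactly as described above.

```python
import time

class FakeConsole:
    """Hypothetical stand-in for a pexpect spawn object: each send()
    waits delaybeforesend seconds first, like pexpect's typing delay."""
    def __init__(self, delaybeforesend):
        self.delaybeforesend = delaybeforesend
        self.sent = []

    def send(self, cmd):
        time.sleep(self.delaybeforesend)  # fixed cost paid per command
        self.sent.append(cmd)

def run_commands(console, n):
    """Send n shell commands and return the elapsed wall-clock time."""
    start = time.time()
    for i in range(n):
        console.send("echo %d" % i)
    return time.time() - start

# 10 commands at the 0.05s default spend ~0.5s just waiting;
# with the delay zeroed that overhead disappears.
slow = run_commands(FakeConsole(delaybeforesend=0.05), 10)
fast = run_commands(FakeConsole(delaybeforesend=0), 10)
```

Scaled up to hundreds of commands per suite, this fixed per-send cost dominates, which is why zeroing `delaybeforesend` more than halves the run time.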

> Also please see dm_test_usb_tree() which uses a console buffer to
> check command output.

OK, I'll take a look.

> I wonder if we should use something like that
> for simple unit tests, and use python for the more complicated
> functional tests?

I'm not sure that's a good idea; it'd be best to settle on a single way
of executing tests so that (a) people don't have to run/implement
different kinds of tests in different ways (b) we can leverage test code
across as many tests as possible.

(Well, doing unit tests and system level tests differently might be
necessary since one calls functions and the other uses the shell "user
interface", but having multiple ways of doing e.g. system tests doesn't
seem like a good idea.)


Re: [U-Boot] [PATCH] Implement pytest-based test infrastructure

2015-11-21 Thread Simon Glass
Hi Stephen,

On 19 November 2015 at 12:09, Stephen Warren  wrote:
>
> On 11/19/2015 10:00 AM, Stephen Warren wrote:
>>
>> On 11/19/2015 07:45 AM, Simon Glass wrote:
>>>
>>> Hi Stephen,
>>>
>>> On 14 November 2015 at 23:53, Stephen Warren 
>>> wrote:

 This tool aims to test U-Boot by executing U-Boot shell commands
 using the
 console interface. A single top-level script exists to execute or attach
 to the U-Boot console, run the entire script of tests against it, and
 summarize the results. Advantages of this approach are:

 - Testing is performed in the same way a user or script would interact
with U-Boot; there can be no disconnect.
 - There is no need to write or embed test-related code into U-Boot
 itself.
It is asserted that writing test-related code in Python is simpler
 and
   more flexible than writing it all in C.
 - It is reasonably simple to interact with U-Boot in this way.

 A few simple tests are provided as examples. Soon, we should convert as
 many as possible of the other tests in test/* and test/cmd_ut.c too.
>>>
>>>
>>> It's great to see this and thank you for putting in the effort!
>>>
>>> It looks like a good way of doing functional tests. I still see a role
>>> for unit tests and things like test/dm. But if we can arrange to call
>>> all U-Boot tests (unit and functional) from one 'test.py' command that
>>> would be a win.
>>>
>>> I'll look more when I can get it to work - see below.
>
> ...
>>
>> made it print a message about checking the docs for missing
>> requirements. I can probably patch the top-level test.py to do the same.
>
>
> I've pushed such a patch to:
>
> git://github.com/swarren/u-boot.git tegra_dev
> (the separate pytests branch has now been deleted)
>
> There are also a variety of other patches there related to this testing 
> infra-structure. I guess I'll hold off sending them to the list until there's 
> been some general feedback on the patches I've already posted, but feel free 
> to pull the branch down and play with it. Note that it's likely to get 
> rebased as I work.

OK I got it working thank you. It is horribly slow though - do you
know what is holding it up? For me it takes 12 seconds to run the
(very basic) tests.

Also please see dm_test_usb_tree() which uses a console buffer to
check command output. I wonder if we should use something like that
for simple unit tests, and use python for the more complicated
functional tests?

Regards,
Simon


Re: [U-Boot] [PATCH] Implement pytest-based test infrastructure

2015-11-19 Thread Stephen Warren

On 11/19/2015 10:00 AM, Stephen Warren wrote:

On 11/19/2015 07:45 AM, Simon Glass wrote:

Hi Stephen,

On 14 November 2015 at 23:53, Stephen Warren 
wrote:

This tool aims to test U-Boot by executing U-Boot shell commands
using the
console interface. A single top-level script exists to execute or attach
to the U-Boot console, run the entire script of tests against it, and
summarize the results. Advantages of this approach are:

- Testing is performed in the same way a user or script would interact
   with U-Boot; there can be no disconnect.
- There is no need to write or embed test-related code into U-Boot
itself.
   It is asserted that writing test-related code in Python is simpler
and
   more flexible than writing it all in C.
- It is reasonably simple to interact with U-Boot in this way.

A few simple tests are provided as examples. Soon, we should convert as
many as possible of the other tests in test/* and test/cmd_ut.c too.


It's great to see this and thank you for putting in the effort!

It looks like a good way of doing functional tests. I still see a role
for unit tests and things like test/dm. But if we can arrange to call
all U-Boot tests (unit and functional) from one 'test.py' command that
would be a win.

I'll look more when I can get it to work - see below.

...

made it print a message about checking the docs for missing
requirements. I can probably patch the top-level test.py to do the same.


I've pushed such a patch to:

git://github.com/swarren/u-boot.git tegra_dev
(the separate pytests branch has now been deleted)

There are also a variety of other patches there related to this testing 
infrastructure. I guess I'll hold off sending them to the list until 
there's been some general feedback on the patches I've already posted, 
but feel free to pull the branch down and play with it. Note that it's 
likely to get rebased as I work.



Re: [U-Boot] [PATCH] Implement pytest-based test infrastructure

2015-11-19 Thread Stephen Warren

On 11/19/2015 07:45 AM, Simon Glass wrote:

Hi Stephen,

On 14 November 2015 at 23:53, Stephen Warren  wrote:

This tool aims to test U-Boot by executing U-Boot shell commands using the
console interface. A single top-level script exists to execute or attach
to the U-Boot console, run the entire script of tests against it, and
summarize the results. Advantages of this approach are:

- Testing is performed in the same way a user or script would interact
   with U-Boot; there can be no disconnect.
- There is no need to write or embed test-related code into U-Boot itself.
   It is asserted that writing test-related code in Python is simpler and
   more flexible than writing it all in C.
- It is reasonably simple to interact with U-Boot in this way.

A few simple tests are provided as examples. Soon, we should convert as
many as possible of the other tests in test/* and test/cmd_ut.c too.


It's great to see this and thank you for putting in the effort!

It looks like a good way of doing functional tests. I still see a role
for unit tests and things like test/dm. But if we can arrange to call
all U-Boot tests (unit and functional) from one 'test.py' command that
would be a win.

I'll look more when I can get it to work - see below.

...

I get this on my Ubuntu 64-bit machine (14.04.3)

$ ./test/py/test.py --bd sandbox --build
Traceback (most recent call last):
   File "./test/py/test.py", line 12, in <module>
 os.execvp("py.test", args)
   File "/usr/lib/python2.7/os.py", line 344, in execvp
 _execvpe(file, args)
   File "/usr/lib/python2.7/os.py", line 380, in _execvpe
 func(fullname, *argrest)
OSError: [Errno 2] No such file or directory


"py.test" isn't in your $PATH. Did you install it? See the following in 
test/py/README.md:



## Requirements

The test suite is implemented using pytest. Interaction with the U-Boot
console uses pexpect. Interaction with real hardware uses the tools of your
choice; you get to implement various "hook" scripts that are called by the
test suite at the appropriate time.

On Debian or Debian-like distributions, the following packages are required.
Similar package names should exist in other distributions.

| Package| Version tested (Ubuntu 14.04) |
| -- | - |
| python | 2.7.5-5ubuntu3|
| python-pytest  | 2.5.1-1   |
| python-pexpect | 3.1-1ubuntu0.1|


In the main Python code, I trapped at least one exception location and 
made it print a message about checking the docs for missing 
requirements. I can probably patch the top-level test.py to do the same.
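A sketch of what such a wrapper in the top-level test.py might look like; `exec_or_hint` is a hypothetical helper name, not the actual patch, and the only stdlib calls used are `os.execvp` and `sys.stderr.write`. On success `execvp` replaces the process and never returns; on `OSError` (the "No such file or directory" case above) it prints a pointer to the README instead of a raw traceback.

```python
import os
import sys

def exec_or_hint(prog, argv):
    """Replace the current process with `prog`; if the binary is missing,
    print a hint about installing the requirements instead of letting the
    bare OSError traceback reach the user."""
    try:
        os.execvp(prog, [prog] + list(argv))
    except OSError:
        sys.stderr.write(
            "exec(%s) failed; is it installed and in $PATH?\n"
            "See test/py/README.md for the required packages.\n" % prog)
        return 1  # only reached when the exec itself failed

# Demonstrate the fallback path with a binary name that cannot exist:
rc = exec_or_hint("py.test-definitely-not-installed", ["--version"])
print("fallback return code: %d" % rc)  # prints "fallback return code: 1"
```

In the real test.py the call would be `exec_or_hint("py.test", args)`, keeping the happy path identical to the current `os.execvp` while making the missing-pytest case self-explanatory.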



Re: [U-Boot] [PATCH] Implement pytest-based test infrastructure

2015-11-19 Thread Simon Glass
Hi Stephen,

On 14 November 2015 at 23:53, Stephen Warren  wrote:
> This tool aims to test U-Boot by executing U-Boot shell commands using the
> console interface. A single top-level script exists to execute or attach
> to the U-Boot console, run the entire script of tests against it, and
> summarize the results. Advantages of this approach are:
>
> - Testing is performed in the same way a user or script would interact
>   with U-Boot; there can be no disconnect.
> - There is no need to write or embed test-related code into U-Boot itself.
>   It is asserted that writing test-related code in Python is simpler and
>   more flexible than writing it all in C.
> - It is reasonably simple to interact with U-Boot in this way.
>
> A few simple tests are provided as examples. Soon, we should convert as
> many as possible of the other tests in test/* and test/cmd_ut.c too.

It's great to see this and thank you for putting in the effort!

It looks like a good way of doing functional tests. I still see a role
for unit tests and things like test/dm. But if we can arrange to call
all U-Boot tests (unit and functional) from one 'test.py' command that
would be a win.

I'll look more when I can get it to work - see below.

>
> In the future, I hope to publish (out-of-tree) the hook scripts, relay
> control utilities, and udev rules I will use for my own HW setup.
>
> See README.md for more details!
>
> Signed-off-by: Stephen Warren 
> ---
>  .gitignore   |   1 +
>  test/py/README.md| 287 +++
>  test/py/board_jetson_tk1.py  |   1 +
>  test/py/board_sandbox.py |   1 +
>  test/py/board_seaboard.py|   1 +
>  test/py/conftest.py  | 225 +++
>  test/py/multiplexed_log.css  |  70 +
>  test/py/multiplexed_log.py   | 172 +
>  test/py/pytest.ini   |   5 +
>  test/py/soc_tegra124.py  |   1 +
>  test/py/soc_tegra20.py   |   1 +
>  test/py/test.py  |  12 ++
>  test/py/test_000_version.py  |   9 ++
>  test/py/test_env.py  |  96 
>  test/py/test_help.py |   2 +
>  test/py/test_md.py   |  12 ++
>  test/py/test_sandbox_exit.py |  15 ++
>  test/py/test_unknown_cmd.py  |   4 +
>  test/py/uboot_console_base.py| 143 +
>  test/py/uboot_console_exec_attach.py |  28 
>  test/py/uboot_console_sandbox.py |  22 +++
>  21 files changed, 1108 insertions(+)
>  create mode 100644 test/py/README.md
>  create mode 100644 test/py/board_jetson_tk1.py
>  create mode 100644 test/py/board_sandbox.py
>  create mode 100644 test/py/board_seaboard.py
>  create mode 100644 test/py/conftest.py
>  create mode 100644 test/py/multiplexed_log.css
>  create mode 100644 test/py/multiplexed_log.py
>  create mode 100644 test/py/pytest.ini
>  create mode 100644 test/py/soc_tegra124.py
>  create mode 100644 test/py/soc_tegra20.py
>  create mode 100755 test/py/test.py
>  create mode 100644 test/py/test_000_version.py
>  create mode 100644 test/py/test_env.py
>  create mode 100644 test/py/test_help.py
>  create mode 100644 test/py/test_md.py
>  create mode 100644 test/py/test_sandbox_exit.py
>  create mode 100644 test/py/test_unknown_cmd.py
>  create mode 100644 test/py/uboot_console_base.py
>  create mode 100644 test/py/uboot_console_exec_attach.py
>  create mode 100644 test/py/uboot_console_sandbox.py

I get this on my Ubuntu 64-bit machine (14.04.3)

$ ./test/py/test.py --bd sandbox --build
Traceback (most recent call last):
  File "./test/py/test.py", line 12, in <module>
os.execvp("py.test", args)
  File "/usr/lib/python2.7/os.py", line 344, in execvp
_execvpe(file, args)
  File "/usr/lib/python2.7/os.py", line 380, in _execvpe
func(fullname, *argrest)
OSError: [Errno 2] No such file or directory

Regards,
Simon


[U-Boot] [PATCH] Implement pytest-based test infrastructure

2015-11-14 Thread Stephen Warren
This tool aims to test U-Boot by executing U-Boot shell commands using the
console interface. A single top-level script exists to execute or attach
to the U-Boot console, run the entire script of tests against it, and
summarize the results. Advantages of this approach are:

- Testing is performed in the same way a user or script would interact
  with U-Boot; there can be no disconnect.
- There is no need to write or embed test-related code into U-Boot itself.
  It is asserted that writing test-related code in Python is simpler and
  more flexible than writing it all in C.
- It is reasonably simple to interact with U-Boot in this way.

A few simple tests are provided as examples. Soon, we should convert as
many as possible of the other tests in test/* and test/cmd_ut.c too.

In the future, I hope to publish (out-of-tree) the hook scripts, relay
control utilities, and udev rules I will use for my own HW setup.

See README.md for more details!

Signed-off-by: Stephen Warren 
---
 .gitignore   |   1 +
 test/py/README.md| 287 +++
 test/py/board_jetson_tk1.py  |   1 +
 test/py/board_sandbox.py |   1 +
 test/py/board_seaboard.py|   1 +
 test/py/conftest.py  | 225 +++
 test/py/multiplexed_log.css  |  70 +
 test/py/multiplexed_log.py   | 172 +
 test/py/pytest.ini   |   5 +
 test/py/soc_tegra124.py  |   1 +
 test/py/soc_tegra20.py   |   1 +
 test/py/test.py  |  12 ++
 test/py/test_000_version.py  |   9 ++
 test/py/test_env.py  |  96 
 test/py/test_help.py |   2 +
 test/py/test_md.py   |  12 ++
 test/py/test_sandbox_exit.py |  15 ++
 test/py/test_unknown_cmd.py  |   4 +
 test/py/uboot_console_base.py| 143 +
 test/py/uboot_console_exec_attach.py |  28 
 test/py/uboot_console_sandbox.py |  22 +++
 21 files changed, 1108 insertions(+)
 create mode 100644 test/py/README.md
 create mode 100644 test/py/board_jetson_tk1.py
 create mode 100644 test/py/board_sandbox.py
 create mode 100644 test/py/board_seaboard.py
 create mode 100644 test/py/conftest.py
 create mode 100644 test/py/multiplexed_log.css
 create mode 100644 test/py/multiplexed_log.py
 create mode 100644 test/py/pytest.ini
 create mode 100644 test/py/soc_tegra124.py
 create mode 100644 test/py/soc_tegra20.py
 create mode 100755 test/py/test.py
 create mode 100644 test/py/test_000_version.py
 create mode 100644 test/py/test_env.py
 create mode 100644 test/py/test_help.py
 create mode 100644 test/py/test_md.py
 create mode 100644 test/py/test_sandbox_exit.py
 create mode 100644 test/py/test_unknown_cmd.py
 create mode 100644 test/py/uboot_console_base.py
 create mode 100644 test/py/uboot_console_exec_attach.py
 create mode 100644 test/py/uboot_console_sandbox.py

diff --git a/.gitignore b/.gitignore
index 33abbd3d0783..b276b3a160bb 100644
--- a/.gitignore
+++ b/.gitignore
@@ -20,6 +20,7 @@
 *.bin
 *.patch
 *.cfgtmp
+*.pyc
 
 # host programs on Cygwin
 *.exe
diff --git a/test/py/README.md b/test/py/README.md
new file mode 100644
index ..70104d2f3b5e
--- /dev/null
+++ b/test/py/README.md
@@ -0,0 +1,287 @@
+# U-Boot pytest suite
+
+## Introduction
+
+This tool aims to test U-Boot by executing U-Boot shell commands using the
+console interface. A single top-level script exists to execute or attach to the
+U-Boot console, run the entire script of tests against it, and summarize the
+results. Advantages of this approach are:
+
+- Testing is performed in the same way a user or script would interact with
+  U-Boot; there can be no disconnect.
+- There is no need to write or embed test-related code into U-Boot itself.
+  It is asserted that writing test-related code in Python is simpler and more
+  flexible than writing it all in C.
+- It is reasonably simple to interact with U-Boot in this way.
+
+## Requirements
+
+The test suite is implemented using pytest. Interaction with the U-Boot
+console uses pexpect. Interaction with real hardware uses the tools of your
+choice; you get to implement various "hook" scripts that are called by the
+test suite at the appropriate time.
+
+On Debian or Debian-like distributions, the following packages are required.
+Similar package names should exist in other distributions.
+
+| Package| Version tested (Ubuntu 14.04) |
+| -- | - |
+| python | 2.7.5-5ubuntu3|
+| python-pytest  | 2.5.1-1   |
+| python-pexpect | 3.1-1ubuntu0.1|
+
+The test script supports either:
+
+- Executing a sandbox port of U-Boot on the local machine as a sub-process,
+  and interacting with it over stdin/stdout.
+- Executing external "hook" scripts to flash a U-Boot binary onto a physical