Re: [RFC] Create test script(s?) for regression testing

2018-11-07 Thread Shuah Khan
On 11/07/2018 12:10 PM, Mauro Carvalho Chehab wrote:
> On Wed, 07 Nov 2018 12:06:55 +0200,
> Laurent Pinchart wrote:
> 
>> Hi Hans,
>>
>> On Wednesday, 7 November 2018 10:05:12 EET Hans Verkuil wrote:
>>> On 11/06/2018 08:58 PM, Laurent Pinchart wrote:  
 On Tuesday, 6 November 2018 15:56:34 EET Hans Verkuil wrote:  
> On 11/06/18 14:12, Laurent Pinchart wrote:  
>> On Tuesday, 6 November 2018 13:36:55 EET Sakari Ailus wrote:  
>>> On Tue, Nov 06, 2018 at 09:37:07AM +0100, Hans Verkuil wrote:  
 Hi all,

 After the media summit (heavy on test discussions) and the V4L2 event
 regression we just found it is clear we need to do a better job with
 testing.

 All the pieces are in place, so what is needed is to combine it and
 create a script that anyone of us as core developers can run to check
 for regressions. The same script can be run as part of the kernelci
 regression testing.  
>>>
>>> I'd say that *some* pieces are in place. Of course, the more there is,
>>> the better.
>>>
>>> The more tests there are, the more important it is that they're
>>> automated, preferably without the developer having to run them on
>>> his/her own machine.  
>>
>> From my experience with testing, it's important to have both a core set
>> of tests (a.k.a. smoke tests) that can easily be run on developers'
>> machines, and extended tests that can be offloaded to a shared testing
>> infrastructure (but possibly also run locally if desired).  
>
> That was my idea as well for the longer term. First step is to do the
> basic smoke tests (i.e. run compliance tests, do some (limited) streaming
> test).
>
> There are more extensive (and longer running) tests that can be done, but
> that's something to look at later.
>   
 We have four virtual drivers: vivid, vim2m, vimc and vicodec. The last
 one is IMHO not quite good enough yet for testing: it is not fully
 compliant to the upcoming stateful codec spec. Work for that is
 planned as part of an Outreachy project.

 My idea is to create a script that is maintained as part of v4l-utils
 that loads the drivers and runs v4l2-compliance and possibly other
 tests against the virtual drivers.  
> 
> (adding Shuah)
> 
> IMO, the best would be to have something like that as part of Kernel
> self test, as this could give a broader covering than just Kernel CI.
> 

I agree with the broader coverage benefit that comes with adding tests to
kselftest. It makes it easier to make changes to tests/tools coupled with
kernel/driver changes. Common TAP13 reporting can be taken advantage of
without any additional work in the tests, if the author chooses to do so.
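To illustrate, a TAP13-emitting smoke-test wrapper could look roughly like
this; the test names, device node, and commands below are invented for
illustration only (real kselftest scripts would likely use its shared
helpers instead):

```shell
#!/bin/sh
# Sketch of a smoke-test wrapper that reports results in TAP13 so that
# kselftest and CI rings can parse them uniformly. Test names, device
# node, and commands are placeholders.

report="TAP version 13"
n=0

run_tap() {  # run_tap <name> <command> [args...]
    n=$((n + 1))
    name=$1; shift
    if ! command -v "$1" >/dev/null 2>&1; then
        report="$report
ok $n $name # SKIP $1 not installed"
    elif "$@" >/dev/null 2>&1; then
        report="$report
ok $n $name"
    else
        report="$report
not ok $n $name"
    fi
}

run_tap vivid-compliance v4l2-compliance -d /dev/video0
run_tap vivid-streaming  v4l2-ctl -d /dev/video0 --stream-mmap --stream-count=8

# TAP allows the plan line at the end, which suits a dynamic test list.
report="$report
1..$n"
printf '%s\n' "$report"
```

Tests that cannot run (missing tool, missing hardware) degrade to SKIP
rather than failure, which keeps the same script usable on a developer
box and in a CI ring.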

Tests can be added such that they don't get run by default if there is a
reason to do so, and Kernel CI and other test rings can invoke them as a
special case if necessary.

There are very clear advantages to making these tests part of the kernel
source tree.
We can discuss at the Kernel Summit next week if you are interested.

thanks,
-- Shuah


Re: [RFC] Create test script(s?) for regression testing

2018-11-07 Thread Laurent Pinchart
Hi Mauro,

On Wednesday, 7 November 2018 21:53:20 EET Mauro Carvalho Chehab wrote:
> On Wed, 07 Nov 2018 21:35:32 +0200, Laurent Pinchart wrote:
> > On Wednesday, 7 November 2018 21:10:35 EET Mauro Carvalho Chehab wrote:

[snip]

> >> I'm with Hans on that matter: better to start with an absolute minimum
> >> of dependencies (like just: make, autotools, c, c++, bash),
> > 
> > On a side note, for a new project, we might want to move away from
> > autotools. cmake and meson are possible alternatives that are way less
> > painful.
> 
> Each toolset has its advantages and disadvantages. We all know how
> painful autotools can be.
> 
> One bad thing with cmake is that they deprecate stuff. A long-lived project
> usually requires several "backward compat" bits in its cmake files in order
> to cope with behaviors that change as cmake evolves.

I don't know how much of a problem that would be. My experience with cmake is 
good so far, but I haven't used it in a large scale project with 10+ years of 
contributions :-)

> I never used meson myself.

It's the build system we picked for libcamera; I expect to provide feedback
in the not-too-distant future.

[snip]

-- 
Regards,

Laurent Pinchart

Re: [RFC] Create test script(s?) for regression testing

2018-11-07 Thread Mauro Carvalho Chehab
On Wed, 07 Nov 2018 21:35:32 +0200,
Laurent Pinchart wrote:

> Hi Mauro,
> 
> On Wednesday, 7 November 2018 21:10:35 EET Mauro Carvalho Chehab wrote:
> > On Wed, 07 Nov 2018 12:06:55 +0200, Laurent Pinchart wrote:  
> > > On Wednesday, 7 November 2018 10:05:12 EET Hans Verkuil wrote:  
> > >> On 11/06/2018 08:58 PM, Laurent Pinchart wrote:  
> > >>> On Tuesday, 6 November 2018 15:56:34 EET Hans Verkuil wrote:  
> >  On 11/06/18 14:12, Laurent Pinchart wrote:  
> > > On Tuesday, 6 November 2018 13:36:55 EET Sakari Ailus wrote:  
> > >> On Tue, Nov 06, 2018 at 09:37:07AM +0100, Hans Verkuil wrote:  
> > >>> Hi all,
> > >>> 
> > >>> After the media summit (heavy on test discussions) and the V4L2
> > >>> event regression we just found it is clear we need to do a better
> > >>> job with testing.
> > >>> 
> > >>> All the pieces are in place, so what is needed is to combine it
> > >>> and create a script that anyone of us as core developers can run to
> > >>> check for regressions. The same script can be run as part of the
> > >>> kernelci regression testing.  
> > >> 
> > >> I'd say that *some* pieces are in place. Of course, the more there
> > >> is, the better.
> > >> 
> > >> The more tests there are, the more important it is that they're
> > >> automated, preferably without the developer having to run them on
> > >> his/her own machine.  
> > > 
> > > From my experience with testing, it's important to have both a core
> > > set of tests (a.k.a. smoke tests) that can easily be run on
> > > developers' machines, and extended tests that can be offloaded to a
> > > shared testing infrastructure (but possibly also run locally if
> > > desired).  
> >  
> >  That was my idea as well for the longer term. First step is to do the
> >  basic smoke tests (i.e. run compliance tests, do some (limited)
> >  streaming test).
> >  
> >  There are more extensive (and longer running) tests that can be done,
> >  but that's something to look at later.
> >    
> > >>> We have four virtual drivers: vivid, vim2m, vimc and vicodec. The
> > >>> last one is IMHO not quite good enough yet for testing: it is not
> > >>> fully compliant to the upcoming stateful codec spec. Work for that
> > >>> is planned as part of an Outreachy project.
> > >>> 
> > >>> My idea is to create a script that is maintained as part of
> > >>> v4l-utils that loads the drivers and runs v4l2-compliance and
> > >>> possibly other tests against the virtual drivers.  
> > 
> > (adding Shuah)
> > 
> > IMO, the best would be to have something like that as part of Kernel
> > self test, as this could give a broader covering than just Kernel CI.
> > 
> > Yeah, I know that one of the concerns is that the *-compliance stuff
> > we have is written in C++ and is easier to maintain at v4l-utils, but
> > maybe it would be acceptable at kselftest to have a test bench there
> > which would download the sources from a git tree and then build just
> > the v4l2-compliance stuff, e.g. having a Kernel self test target that
> > would do something like:
> > 
> > git clone --depth 1 git://linuxtv.org/v4l-utils.git tests && \
> > cd tests && ./autogen.sh && make tests && ./run_tests.sh  
> 
> Let me make sure I understand this properly. Are you proposing to add to 
> kselftest, which is part of the Linux kernel, and as such benefits from the 
> level of trust of Linus' tree, and which is run by a very large number of 
> machines from developer workstations to automated large-scale test 
> infrastructure, a provision to execute locally code that is downloaded at 
> runtime from the internet, with all the security issues this implies?

No, I'm not proposing to make it unsafe. The above is just a rough
example to explain the idea. The actual implementation should take
security into account: it could, for example, download a signed tarball
and run it inside a container, use a git tree hosted at kernel.org, etc.
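Purely as a sketch, the pinned-download idea could be as simple as refusing
to build anything whose digest doesn't match a value recorded in the kernel
tree. The tarball name is a placeholder, and the pinned value here is the
SHA-256 of an empty file so the self-contained stand-in below verifies:

```shell
#!/bin/sh
# Sketch: verify a pinned digest before building downloaded test code.
# Filename and digest are placeholders; the pinned digest is the SHA-256
# of an empty file, matching the empty stand-in tarball created below so
# the sketch runs without network access.

tarball=v4l-utils-tests.tar.gz
pinned=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855

# In a real selftest this file would come from a signed release download.
: > "$tarball"

actual=$(sha256sum "$tarball" | cut -d' ' -f1)
if [ "$actual" = "$pinned" ]; then
    echo "digest ok: safe to unpack and build"
else
    echo "digest mismatch: refusing to run downloaded code" >&2
    exit 1
fi
```

Updating the pinned digest would then go through normal kernel patch
review, rather than trusting whatever the remote tree serves at run time.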

> 
> > (the actual selftest target would likely be different, as it
> >  should take into account make O=)
> > 
> > If this would be acceptable upstream, then we'll need to stick with the
> > output format defined by Kernel Self Test[1].
> > 
> > [1] I guess it uses the TAP13 format:
> > https://testanything.org/tap-version-13-specification.html
> >   
> > >> How about spending a little time to pick a suitable framework for
> > >> running the tests? It could be useful to get more informative
> > >> reports than just pass / fail.  
> > > 
> > > We should keep in mind that other tests will be added later, and the
> > > test framework should make that easy.  
> >  
> >  Since we want to be able to run this on kernelci.org, I think it
> >  makes sense to let the kernelci folks (Hi Ezequiel!) decide this.  
> > >>> 
> > >>> KernelCI 

Re: [RFC] Create test script(s?) for regression testing

2018-11-07 Thread Ezequiel Garcia
On Wed, 2018-11-07 at 09:05 +0100, Hans Verkuil wrote:
> On 11/06/2018 08:58 PM, Laurent Pinchart wrote:
> > Hi Hans,
> > 
> > On Tuesday, 6 November 2018 15:56:34 EET Hans Verkuil wrote:
> > > On 11/06/18 14:12, Laurent Pinchart wrote:
> > > > On Tuesday, 6 November 2018 13:36:55 EET Sakari Ailus wrote:
> > > > > On Tue, Nov 06, 2018 at 09:37:07AM +0100, Hans Verkuil wrote:
> > > > > > Hi all,
> > > > > > 
> > > > > > After the media summit (heavy on test discussions) and the V4L2 
> > > > > > event
> > > > > > regression we just found it is clear we need to do a better job with
> > > > > > testing.
> > > > > > 
> > > > > > All the pieces are in place, so what is needed is to combine it and
> > > > > > create a script that anyone of us as core developers can run to 
> > > > > > check
> > > > > > for regressions. The same script can be run as part of the kernelci
> > > > > > regression testing.
> > > > > 
> > > > > I'd say that *some* pieces are in place. Of course, the more there is,
> > > > > the better.
> > > > > 
> > > > > The more tests there are, the more important it is that they're
> > > > > automated, preferably without the developer having to run them on
> > > > > his/her own machine.
> > > > 
> > > > From my experience with testing, it's important to have both a core set 
> > > > of
> > > > tests (a.k.a. smoke tests) that can easily be run on developers' 
> > > > machines,
> > > > and extended tests that can be offloaded to a shared testing
> > > > infrastructure (but possibly also run locally if desired).
> > > 
> > > That was my idea as well for the longer term. First step is to do the 
> > > basic
> > > smoke tests (i.e. run compliance tests, do some (limited) streaming test).
> > > 
> > > There are more extensive (and longer running) tests that can be done, but
> > > that's something to look at later.
> > > 
> > > > > > We have four virtual drivers: vivid, vim2m, vimc and vicodec. The 
> > > > > > last
> > > > > > one is IMHO not quite good enough yet for testing: it is not fully
> > > > > > compliant to the upcoming stateful codec spec. Work for that is 
> > > > > > planned
> > > > > > as part of an Outreachy project.
> > > > > > 
> > > > > > My idea is to create a script that is maintained as part of 
> > > > > > v4l-utils
> > > > > > that loads the drivers and runs v4l2-compliance and possibly other 
> > > > > > tests
> > > > > > against the virtual drivers.
> > > > > 
> > > > > How about spending a little time to pick a suitable framework for 
> > > > > running
> > > > > the tests? It could be useful to get more informative reports than 
> > > > > just
> > > > > pass / fail.
> > > > 
> > > > We should keep in mind that other tests will be added later, and the 
> > > > test
> > > > framework should make that easy.
> > > 
> > > Since we want to be able to run this on kernelci.org, I think it makes 
> > > sense
> > > to let the kernelci folks (Hi Ezequiel!) decide this.
> > 
> > KernelCI isn't the only test infrastructure out there, so let's not forget 
> > about the other ones.
> 
> True, but they are putting time and money into this, so they get to choose as
> far as I am concerned :-)
> 

Well, we are also resource-constrained, so my idea for the first iteration
is to pick the simplest yet useful setup. We plan to leverage existing
tests only. Currently the xxx-compliance tools are the ones with the most
coverage.

I believe that CI and tests should be independent components.

> If others are interested and willing to put up time and money, they should let
> themselves be known.
> 
> I'm not going to work on such an integration, although I happily accept 
> patches.
> 
> > > As a developer all I need is a script to run smoke tests so I can catch 
> > > most
> > > regressions (you never catch all).
> > > 
> > > I'm happy to work with them to make any changes to compliance tools and
> > > scripts so they fit better into their test framework.
> > > 
> > > The one key requirement to all this is that you should be able to run 
> > > these
> > > tests without dependencies to all sorts of external packages/libraries.
> > 
> > v4l-utils already has a set of dependencies, but those are largely
> > manageable. For v4l2-compliance we'll install libv4l, which depends on
> > libjpeg.
> 
> That's already too much. You can manually build v4l2-compliance with no
> dependencies whatsoever, but we're missing a Makefile target for that. It's
> been useful for embedded systems with poor cross-compile environments.
> 
> It is really very useful to be able to compile those core utilities with no
> external libraries other than glibc. You obviously will lose some
> functionality when you compile it that way.
> 
> These utilities are not like a typical application. I really don't care how
> many libraries are linked in by e.g. qv4l2, xawtv, etc. But for v4l2-ctl,
> v4l2-compliance, cec-ctl/follower/compliance (and probably a few others as
> well) you want a minimum of dependencies so you can run 

Re: [RFC] Create test script(s?) for regression testing

2018-11-07 Thread Laurent Pinchart
Hi Mauro,

On Wednesday, 7 November 2018 21:10:35 EET Mauro Carvalho Chehab wrote:
> On Wed, 07 Nov 2018 12:06:55 +0200, Laurent Pinchart wrote:
> > On Wednesday, 7 November 2018 10:05:12 EET Hans Verkuil wrote:
> >> On 11/06/2018 08:58 PM, Laurent Pinchart wrote:
> >>> On Tuesday, 6 November 2018 15:56:34 EET Hans Verkuil wrote:
>  On 11/06/18 14:12, Laurent Pinchart wrote:
> > On Tuesday, 6 November 2018 13:36:55 EET Sakari Ailus wrote:
> >> On Tue, Nov 06, 2018 at 09:37:07AM +0100, Hans Verkuil wrote:
> >>> Hi all,
> >>> 
> >>> After the media summit (heavy on test discussions) and the V4L2
> >>> event regression we just found it is clear we need to do a better
> >>> job with testing.
> >>> 
> >>> All the pieces are in place, so what is needed is to combine it
> >>> and create a script that anyone of us as core developers can run to
> >>> check for regressions. The same script can be run as part of the
> >>> kernelci regression testing.
> >> 
> >> I'd say that *some* pieces are in place. Of course, the more there
> >> is, the better.
> >> 
> >> The more tests there are, the more important it is that they're
> >> automated, preferably without the developer having to run them on
> >> his/her own machine.
> > 
> > From my experience with testing, it's important to have both a core
> > set of tests (a.k.a. smoke tests) that can easily be run on
> > developers' machines, and extended tests that can be offloaded to a
> > shared testing infrastructure (but possibly also run locally if
> > desired).
>  
>  That was my idea as well for the longer term. First step is to do the
>  basic smoke tests (i.e. run compliance tests, do some (limited)
>  streaming test).
>  
>  There are more extensive (and longer running) tests that can be done,
>  but that's something to look at later.
>  
> >>> We have four virtual drivers: vivid, vim2m, vimc and vicodec. The
> >>> last one is IMHO not quite good enough yet for testing: it is not
> >>> fully compliant to the upcoming stateful codec spec. Work for that
> >>> is planned as part of an Outreachy project.
> >>> 
> >>> My idea is to create a script that is maintained as part of
> >>> v4l-utils that loads the drivers and runs v4l2-compliance and
> >>> possibly other tests against the virtual drivers.
> 
> (adding Shuah)
> 
> IMO, the best would be to have something like that as part of Kernel
> self test, as this could give a broader covering than just Kernel CI.
> 
> Yeah, I know that one of the concerns is that the *-compliance stuff
> we have is written in C++ and is easier to maintain at v4l-utils, but
> maybe it would be acceptable at kselftest to have a test bench there
> which would download the sources from a git tree and then build just
> the v4l2-compliance stuff, e.g. having a Kernel self test target that
> would do something like:
> 
>   git clone --depth 1 git://linuxtv.org/v4l-utils.git tests && \
>   cd tests && ./autogen.sh && make tests && ./run_tests.sh

Let me make sure I understand this properly. Are you proposing to add to 
kselftest, which is part of the Linux kernel, and as such benefits from the 
level of trust of Linus' tree, and which is run by a very large number of 
machines from developer workstations to automated large-scale test 
infrastructure, a provision to execute locally code that is downloaded at 
runtime from the internet, with all the security issues this implies?

> (the actual selftest target would likely be different, as it
>  should take into account make O=)
> 
> If this would be acceptable upstream, then we'll need to stick with the
> output format defined by Kernel Self Test[1].
> 
> [1] I guess it uses the TAP13 format:
>   https://testanything.org/tap-version-13-specification.html
> 
> >> How about spending a little time to pick a suitable framework for
> >> running the tests? It could be useful to get more informative
> >> reports than just pass / fail.
> > 
> > We should keep in mind that other tests will be added later, and the
> > test framework should make that easy.
>  
>  Since we want to be able to run this on kernelci.org, I think it
>  makes sense to let the kernelci folks (Hi Ezequiel!) decide this.
> >>> 
> >>> KernelCI isn't the only test infrastructure out there, so let's not
> >>> forget about the other ones.
> >> 
> >> True, but they are putting time and money into this, so they get to
> >> choose as far as I am concerned :-)
> 
> Surely, but no matter who is paying, if one wants to merge things upstream,
> he/she has to stick to the upstream ruleset.
> 
> That said, we should try not to make life harder than it should be, but
> some things should be standardized if we want future contributions there.
> At the very minimum, from my side, I'd like it to be as compatible
> with 

Re: [RFC] Create test script(s?) for regression testing

2018-11-07 Thread Mauro Carvalho Chehab
On Wed, 07 Nov 2018 12:06:55 +0200,
Laurent Pinchart wrote:

> Hi Hans,
> 
> On Wednesday, 7 November 2018 10:05:12 EET Hans Verkuil wrote:
> > On 11/06/2018 08:58 PM, Laurent Pinchart wrote:  
> > > On Tuesday, 6 November 2018 15:56:34 EET Hans Verkuil wrote:  
> > >> On 11/06/18 14:12, Laurent Pinchart wrote:  
> > >>> On Tuesday, 6 November 2018 13:36:55 EET Sakari Ailus wrote:  
> >  On Tue, Nov 06, 2018 at 09:37:07AM +0100, Hans Verkuil wrote:  
> > > Hi all,
> > > 
> > > After the media summit (heavy on test discussions) and the V4L2 event
> > > regression we just found it is clear we need to do a better job with
> > > testing.
> > > 
> > > All the pieces are in place, so what is needed is to combine it and
> > > create a script that anyone of us as core developers can run to check
> > > for regressions. The same script can be run as part of the kernelci
> > > regression testing.  
> >  
> >  I'd say that *some* pieces are in place. Of course, the more there is,
> >  the better.
> >  
> >  The more tests there are, the more important it is that they're
> >  automated, preferably without the developer having to run them on
> >  his/her own machine.  
> > >>> 
> > >>> From my experience with testing, it's important to have both a core set
> > >>> of tests (a.k.a. smoke tests) that can easily be run on developers'
> > >>> machines, and extended tests that can be offloaded to a shared testing
> > >>> infrastructure (but possibly also run locally if desired).  
> > >> 
> > >> That was my idea as well for the longer term. First step is to do the
> > >> basic smoke tests (i.e. run compliance tests, do some (limited) streaming
> > >> test).
> > >> 
> > >> There are more extensive (and longer running) tests that can be done, but
> > >> that's something to look at later.
> > >>   
> > > We have four virtual drivers: vivid, vim2m, vimc and vicodec. The last
> > > one is IMHO not quite good enough yet for testing: it is not fully
> > > compliant to the upcoming stateful codec spec. Work for that is
> > > planned as part of an Outreachy project.
> > > 
> > > My idea is to create a script that is maintained as part of v4l-utils
> > > that loads the drivers and runs v4l2-compliance and possibly other
> > > tests against the virtual drivers.  

(adding Shuah)

IMO, the best would be to have something like that as part of Kernel
self test, as this could give a broader covering than just Kernel CI.

Yeah, I know that one of the concerns is that the *-compliance stuff
we have is written in C++ and is easier to maintain at v4l-utils, but
maybe it would be acceptable at kselftest to have a test bench there
which would download the sources from a git tree and then build just
the v4l2-compliance stuff, e.g. having a Kernel self test target that
would do something like:

git clone --depth 1 git://linuxtv.org/v4l-utils.git tests && \
cd tests && ./autogen.sh && make tests && ./run_tests.sh

(the actual selftest target would likely be different, as it 
 should take into account make O=)
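For illustration, the kselftest hookup itself could be very small, since
lib.mk provides the standard run/install targets; the directory and script
name below are invented:

```makefile
# tools/testing/selftests/media_tests/Makefile (hypothetical)
# Scripts listed in TEST_PROGS are run by the default kselftest target;
# the script itself would do the clone/build/run dance sketched above.
TEST_PROGS := v4l2_smoke_test.sh

include ../lib.mk
```

That keeps the kernel-tree side to a thin wrapper while the actual test
logic stays in v4l-utils.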

If this is acceptable upstream, then we'll need to stick to the
output format defined by Kernel Self Test [1].

[1] I guess it uses the TAP13 format:
https://testanything.org/tap-version-13-specification.html

> >  
> >  How about spending a little time to pick a suitable framework for
> >  running the tests? It could be useful to get more informative reports
> >  than just pass / fail.  
> > >>> 
> > >>> We should keep in mind that other tests will be added later, and the
> > >>> test framework should make that easy.  
> > >> 
> > >> Since we want to be able to run this on kernelci.org, I think it makes
> > >> sense to let the kernelci folks (Hi Ezequiel!) decide this.  
> > > 
> > > KernelCI isn't the only test infrastructure out there, so let's not forget
> > > about the other ones.  
> > 
> > True, but they are putting time and money into this, so they get to choose
> > as far as I am concerned :-)  

Surely, but no matter who is paying, if one wants to merge things upstream,
he/she has to stick to the upstream ruleset.

That said, we should try not to make life harder than it should be, but some
things should be standardized if we want future contributions there. At the
very minimum, from my side, I'd like it to be as compatible with the Kernel
selftest infrastructure as possible.

I would try to avoid placing KernelCI-specific stuff (like adding LAVA code)
inside the v4l-utils tree. With regards to that, one alternative would be to
split the KernelCI-specific code into a different tree and use "git subtree".

> It's still our responsibility to give V4L2 a good test framework, and to 
> drive 
> it in the right direction. We don't accept V4L2 API extensions blindly just 
> because a company happens to put time and money into it (there may have been 
> one exception, but it's not 

Re: [RFC] Create test script(s?) for regression testing

2018-11-07 Thread Laurent Pinchart
Hi Hans,

On Wednesday, 7 November 2018 10:05:12 EET Hans Verkuil wrote:
> On 11/06/2018 08:58 PM, Laurent Pinchart wrote:
> > On Tuesday, 6 November 2018 15:56:34 EET Hans Verkuil wrote:
> >> On 11/06/18 14:12, Laurent Pinchart wrote:
> >>> On Tuesday, 6 November 2018 13:36:55 EET Sakari Ailus wrote:
>  On Tue, Nov 06, 2018 at 09:37:07AM +0100, Hans Verkuil wrote:
> > Hi all,
> > 
> > After the media summit (heavy on test discussions) and the V4L2 event
> > regression we just found it is clear we need to do a better job with
> > testing.
> > 
> > All the pieces are in place, so what is needed is to combine it and
> > create a script that anyone of us as core developers can run to check
> > for regressions. The same script can be run as part of the kernelci
> > regression testing.
>  
>  I'd say that *some* pieces are in place. Of course, the more there is,
>  the better.
>  
>  The more tests there are, the more important it is that they're
>  automated, preferably without the developer having to run them on
>  his/her own machine.
> >>> 
> >>> From my experience with testing, it's important to have both a core set
> >>> of tests (a.k.a. smoke tests) that can easily be run on developers'
> >>> machines, and extended tests that can be offloaded to a shared testing
> >>> infrastructure (but possibly also run locally if desired).
> >> 
> >> That was my idea as well for the longer term. First step is to do the
> >> basic smoke tests (i.e. run compliance tests, do some (limited) streaming
> >> test).
> >> 
> >> There are more extensive (and longer running) tests that can be done, but
> >> that's something to look at later.
> >> 
> > We have four virtual drivers: vivid, vim2m, vimc and vicodec. The last
> > one is IMHO not quite good enough yet for testing: it is not fully
> > compliant to the upcoming stateful codec spec. Work for that is
> > planned as part of an Outreachy project.
> > 
> > My idea is to create a script that is maintained as part of v4l-utils
> > that loads the drivers and runs v4l2-compliance and possibly other
> > tests against the virtual drivers.
>  
>  How about spending a little time to pick a suitable framework for
>  running the tests? It could be useful to get more informative reports
>  than just pass / fail.
> >>> 
> >>> We should keep in mind that other tests will be added later, and the
> >>> test framework should make that easy.
> >> 
> >> Since we want to be able to run this on kernelci.org, I think it makes
> >> sense to let the kernelci folks (Hi Ezequiel!) decide this.
> > 
> > KernelCI isn't the only test infrastructure out there, so let's not forget
> > about the other ones.
> 
> True, but they are putting time and money into this, so they get to choose
> as far as I am concerned :-)

It's still our responsibility to give V4L2 a good test framework, and to drive 
it in the right direction. We don't accept V4L2 API extensions blindly just 
because a company happens to put time and money into it (there may have been 
one exception, but it's not the rule), we instead review all proposals 
carefully. The same should be true with tests.

> If others are interested and willing to put up time and money, they should
> let themselves be known.
> 
> I'm not going to work on such an integration, although I happily accept
> patches.
> 
> >> As a developer all I need is a script to run smoke tests so I can catch
> >> most regressions (you never catch all).
> >> 
> >> I'm happy to work with them to make any changes to compliance tools and
> >> scripts so they fit better into their test framework.
> >> 
> >> The one key requirement to all this is that you should be able to run
> >> these tests without dependencies to all sorts of external packages/
> >> libraries.
> > 
> > v4l-utils already has a set of dependencies, but those are largely
> > manageable. For v4l2-compliance we'll install libv4l, which depends on
> > libjpeg.
> 
> That's already too much. You can manually build v4l2-compliance with no
> dependencies whatsoever, but we're missing a Makefile target for that. It's
> been useful for embedded systems with poor cross-compile environments.

I don't think depending on libv4l and libjpeg would be a big issue. On the
other hand, given what v4l2-compliance does, one could also argue that it
should not use libv4l at all and go straight for the kernel API. This boils
down to the question of whether we consider libv4l part of the official V4L2
stack, or whether we want to officially deprecate it given that it hasn't
really lived up to the promises it made.

> It is really very useful to be able to compile those core utilities with no
> external libraries other than glibc. You obviously will loose some
> functionality when you compile it that way.
> 
> These utilities are not like a typical application. I really don't care how
> many libraries are linked in by e.g. 

Re: [RFC] Create test script(s?) for regression testing

2018-11-07 Thread Hans Verkuil
On 11/06/2018 08:58 PM, Laurent Pinchart wrote:
> Hi Hans,
> 
> On Tuesday, 6 November 2018 15:56:34 EET Hans Verkuil wrote:
>> On 11/06/18 14:12, Laurent Pinchart wrote:
>>> On Tuesday, 6 November 2018 13:36:55 EET Sakari Ailus wrote:
 On Tue, Nov 06, 2018 at 09:37:07AM +0100, Hans Verkuil wrote:
> Hi all,
>
> After the media summit (heavy on test discussions) and the V4L2 event
> regression we just found it is clear we need to do a better job with
> testing.
>
> All the pieces are in place, so what is needed is to combine it and
> create a script that anyone of us as core developers can run to check
> for regressions. The same script can be run as part of the kernelci
> regression testing.

 I'd say that *some* pieces are in place. Of course, the more there is,
 the better.

 The more tests there are, the more important it is that they're
 automated, preferably without the developer having to run them on
 his/her own machine.
>>>
>>> From my experience with testing, it's important to have both a core set of
>>> tests (a.k.a. smoke tests) that can easily be run on developers' machines,
>>> and extended tests that can be offloaded to a shared testing
>>> infrastructure (but possibly also run locally if desired).
>>
>> That was my idea as well for the longer term. First step is to do the basic
>> smoke tests (i.e. run compliance tests, do some (limited) streaming test).
>>
>> There are more extensive (and longer running) tests that can be done, but
>> that's something to look at later.
>>
> We have four virtual drivers: vivid, vim2m, vimc and vicodec. The last
> one is IMHO not quite good enough yet for testing: it is not fully
> compliant to the upcoming stateful codec spec. Work for that is planned
> as part of an Outreachy project.
>
> My idea is to create a script that is maintained as part of v4l-utils
> that loads the drivers and runs v4l2-compliance and possibly other tests
> against the virtual drivers.

 How about spending a little time to pick a suitable framework for running
 the tests? It could be useful to get more informative reports than just
 pass / fail.
>>>
>>> We should keep in mind that other tests will be added later, and the test
>>> framework should make that easy.
>>
>> Since we want to be able to run this on kernelci.org, I think it makes sense
>> to let the kernelci folks (Hi Ezequiel!) decide this.
> 
> KernelCI isn't the only test infrastructure out there, so let's not forget 
> about the other ones.

True, but they are putting time and money into this, so they get to choose as
far as I am concerned :-)

If others are interested and willing to put up time and money, they should let
themselves be known.

I'm not going to work on such an integration, although I happily accept patches.

> 
>> As a developer all I need is a script to run smoke tests so I can catch most
>> regressions (you never catch all).
>>
>> I'm happy to work with them to make any changes to compliance tools and
>> scripts so they fit better into their test framework.
>>
>> The one key requirement to all this is that you should be able to run these
>> tests without dependencies to all sorts of external packages/libraries.
> 
v4l-utils already has a set of dependencies, but those are largely manageable.
For v4l2-compliance we'll install libv4l, which depends on libjpeg.

That's already too much. You can manually build v4l2-compliance with no
dependencies whatsoever, but we're missing a Makefile target for that. It's
been useful for embedded systems with poor cross-compile environments.

It is really very useful to be able to compile those core utilities with no
external libraries other than glibc. You obviously will lose some functionality
when you compile it that way.

These utilities are not like a typical application. I really don't care how many
libraries are linked in by e.g. qv4l2, xawtv, etc. But for v4l2-ctl,
v4l2-compliance, cec-ctl/follower/compliance (and probably a few others as well)
you want a minimum of dependencies so you can run them everywhere, even with the
crappiest toolchains or cross-compile environments.

> 
>>> Regarding the test output, many formats exist (see
>>> https://testanything.org/ and
>>> https://chromium.googlesource.com/chromium/src/+/master/docs/testing/
>>> json_test_results_format.md for instance), we should pick one of the
>>> leading industry standards (what those standards are still needs to be
>>> researched  :-)).
>>>
 Do note that for different hardware the tests would be likely different
 as well although there are classes of devices for which the exact same
 tests would be applicable.
>>>
>>> See http://git.ideasonboard.com/renesas/vsp-tests.git for an example of
>>> device-specific tests. I think some of that could be generalized.
>>>
> It should be simple to use and require very little in the way of
> 

Re: [RFC] Create test script(s?) for regression testing

2018-11-06 Thread Laurent Pinchart
Hi Hans,

On Tuesday, 6 November 2018 15:56:34 EET Hans Verkuil wrote:
> On 11/06/18 14:12, Laurent Pinchart wrote:
> > On Tuesday, 6 November 2018 13:36:55 EET Sakari Ailus wrote:
> >> On Tue, Nov 06, 2018 at 09:37:07AM +0100, Hans Verkuil wrote:
> >>> Hi all,
> >>> 
> >>> After the media summit (heavy on test discussions) and the V4L2 event
> >>> regression we just found it is clear we need to do a better job with
> >>> testing.
> >>> 
> >>> All the pieces are in place, so what is needed is to combine it and
> >>> create a script that anyone of us as core developers can run to check
> >>> for regressions. The same script can be run as part of the kernelci
> >>> regression testing.
> >> 
> >> I'd say that *some* pieces are in place. Of course, the more there is,
> >> the better.
> >> 
> >> The more tests there are, the more important it is that they're automated,
> >> preferably without the developer having to run them on his/her own machine.
> > 
> > From my experience with testing, it's important to have both a core set of
> > tests (a.k.a. smoke tests) that can easily be run on developers' machines,
> > and extended tests that can be offloaded to a shared testing
> > infrastructure (but possibly also run locally if desired).
> 
> That was my idea as well for the longer term. First step is to do the basic
> smoke tests (i.e. run compliance tests, do some (limited) streaming test).
> 
> There are more extensive (and longer running) tests that can be done, but
> that's something to look at later.
> 
> >>> We have four virtual drivers: vivid, vim2m, vimc and vicodec. The last
> >>> one is IMHO not quite good enough yet for testing: it is not fully
> >>> compliant to the upcoming stateful codec spec. Work for that is planned
> >>> as part of an Outreachy project.
> >>> 
> >>> My idea is to create a script that is maintained as part of v4l-utils
> >>> that loads the drivers and runs v4l2-compliance and possibly other tests
> >>> against the virtual drivers.
> >> 
> >> How about spending a little time to pick a suitable framework for running
> >> the tests? It could be useful to get more informative reports than just
> >> pass / fail.
> > 
> > We should keep in mind that other tests will be added later, and the test
> > framework should make that easy.
> 
> Since we want to be able to run this on kernelci.org, I think it makes sense
> to let the kernelci folks (Hi Ezequiel!) decide this.

KernelCI isn't the only test infrastructure out there, so let's not forget 
about the other ones.

> As a developer all I need is a script to run smoke tests so I can catch most
> regressions (you never catch all).
> 
> I'm happy to work with them to make any changes to compliance tools and
> scripts so they fit better into their test framework.
> 
> The one key requirement to all this is that you should be able to run these
> tests without dependencies to all sorts of external packages/libraries.

v4l-utils already has a set of dependencies, but those are largely manageable. 
For v4l2-compliance we'll install libv4l, which depends on libjpeg.

> > Regarding the test output, many formats exist (see
> > https://testanything.org/ and
> > https://chromium.googlesource.com/chromium/src/+/master/docs/testing/
> > json_test_results_format.md for instance), we should pick one of the
> > leading industry standards (what those standards are still needs to be
> > researched  :-)).
> > 
> >> Do note that for different hardware the tests would be likely different
> >> as well although there are classes of devices for which the exact same
> >> tests would be applicable.
> > 
> > See http://git.ideasonboard.com/renesas/vsp-tests.git for an example of
> > device-specific tests. I think some of that could be generalized.
> > 
> >>> It should be simple to use and require very little in the way of
> >>> dependencies. Ideally no dependencies other than what is in v4l-utils so
> >>> it can easily be run on an embedded system as well.
> >>> 
> >>> For a 64-bit kernel it should run the tests both with 32-bit and 64-bit
> >>> applications.
> >>> 
> >>> It should also test with both single and multiplanar modes where
> >>> available.
> >>> 
> >>> Since vivid emulates CEC as well, it should run CEC tests too.
> >>> 
> >>> As core developers we should have an environment where we can easily
> >>> test our patches with this script (I use a VM for that).
> >>> 
> >>> I think maintaining the script (or perhaps scripts) in v4l-utils is best
> >>> since that keeps it in sync with the latest kernel and v4l-utils
> >>> developments.
> >> 
 Makes sense --- and that can always be changed later on if there's a need
 to.
> > 
> > I wonder whether that would be best going forward, especially if we want
> > to add more tests. Wouldn't a v4l-tests project make sense ?
> 
> Let's see what happens. The more repos you have, the harder it becomes to
> keep everything in sync with the latest kernel code.

Why is that? How would a v4l-tests repository 

Re: [RFC] Create test script(s?) for regression testing

2018-11-06 Thread Hans Verkuil
On 11/06/18 14:12, Laurent Pinchart wrote:
> Hello,
> 
> On Tuesday, 6 November 2018 13:36:55 EET Sakari Ailus wrote:
>> On Tue, Nov 06, 2018 at 09:37:07AM +0100, Hans Verkuil wrote:
>>> Hi all,
>>>
>>> After the media summit (heavy on test discussions) and the V4L2 event
>>> regression we just found it is clear we need to do a better job with
>>> testing.
>>>
>>> All the pieces are in place, so what is needed is to combine it and create
>>> a script that anyone of us as core developers can run to check for
>>> regressions. The same script can be run as part of the kernelci
>>> regression testing.
>>
>> I'd say that *some* pieces are in place. Of course, the more there is, the
>> better.
>>
> >> The more tests there are, the more important it is that they're automated,
> >> preferably without the developer having to run them on his/her own machine.
> 
> > From my experience with testing, it's important to have both a core set of
> > tests (a.k.a. smoke tests) that can easily be run on developers' machines,
> > and extended tests that can be offloaded to a shared testing infrastructure
> > (but possibly also run locally if desired).

That was my idea as well for the longer term. First step is to do the basic
smoke tests (i.e. run compliance tests, do some (limited) streaming test).

There are more extensive (and longer running) tests that can be done, but
that's something to look at later.

>>> We have four virtual drivers: vivid, vim2m, vimc and vicodec. The last one
>>> is IMHO not quite good enough yet for testing: it is not fully compliant
>>> to the upcoming stateful codec spec. Work for that is planned as part of
>>> an Outreachy project.
>>>
>>> My idea is to create a script that is maintained as part of v4l-utils that
>>> loads the drivers and runs v4l2-compliance and possibly other tests
>>> against the virtual drivers.
>>
>> How about spending a little time to pick a suitable framework for running
>> the tests? It could be useful to get more informative reports than just
>> pass / fail.
> 
> We should keep in mind that other tests will be added later, and the test 
> framework should make that easy.

Since we want to be able to run this on kernelci.org, I think it makes sense
to let the kernelci folks (Hi Ezequiel!) decide this. As a developer all I
need is a script to run smoke tests so I can catch most regressions (you never
catch all).

I'm happy to work with them to make any changes to compliance tools and scripts
so they fit better into their test framework.

The one key requirement to all this is that you should be able to run these
tests without dependencies to all sorts of external packages/libraries.

> Regarding the test output, many formats exist (see https://testanything.org/ 
> and https://chromium.googlesource.com/chromium/src/+/master/docs/testing/
> json_test_results_format.md for instance), we should pick one of the leading 
> industry standards (what those standards are still needs to be researched 
> :-)).
> 
>> Do note that for different hardware the tests would be likely different as
>> well although there are classes of devices for which the exact same tests
>> would be applicable.
> 
> See http://git.ideasonboard.com/renesas/vsp-tests.git for an example of 
> device-specific tests. I think some of that could be generalized.
> 
>>> It should be simple to use and require very little in the way of
>>> dependencies. Ideally no dependencies other than what is in v4l-utils so
>>> it can easily be run on an embedded system as well.
>>>
>>> For a 64-bit kernel it should run the tests both with 32-bit and 64-bit
>>> applications.
>>>
>>> It should also test with both single and multiplanar modes where
>>> available.
>>>
>>> Since vivid emulates CEC as well, it should run CEC tests too.
>>>
>>> As core developers we should have an environment where we can easily test
>>> our patches with this script (I use a VM for that).
>>>
>>> I think maintaining the script (or perhaps scripts) in v4l-utils is best
>>> since that keeps it in sync with the latest kernel and v4l-utils
>>> developments.
>>
>> Makes sense --- and that can always be changed later on if there's a need
>> to.
> 
> I wonder whether that would be best going forward, especially if we want to 
> add more tests. Wouldn't a v4l-tests project make sense ?
> 

Let's see what happens. The more repos you have, the harder it becomes to keep
everything in sync with the latest kernel code.

My experience is that if you want to have good tests, then writing tests should
be as easy as possible. Keep dependencies at an absolute minimum.

Let's be honest, we (well, mainly me) are doing these tests as a side job, it's
not our main focus. Anything that makes writing tests more painful is bad and
just gets in the way.

Regards,

Hans


Re: [RFC] Create test script(s?) for regression testing

2018-11-06 Thread Ezequiel Garcia
On Tue, 2018-11-06 at 09:37 +0100, Hans Verkuil wrote:
> Hi all,
> 
> After the media summit (heavy on test discussions) and the V4L2 event
> regression we just found, it is clear we need to do a better job with testing.
> 
> All the pieces are in place, so what is needed is to combine it and create a
> script that anyone of us as core developers can run to check for regressions.
> The same script can be run as part of the kernelci regression testing.
> 
> We have four virtual drivers: vivid, vim2m, vimc and vicodec. The last one
> is IMHO not quite good enough yet for testing: it is not fully compliant to
> the upcoming stateful codec spec. Work for that is planned as part of an
> Outreachy project.
> 
> My idea is to create a script that is maintained as part of v4l-utils that
> loads the drivers and runs v4l2-compliance and possibly other tests against
> the virtual drivers.
> 
> It should be simple to use and require very little in the way of dependencies.
> Ideally no dependencies other than what is in v4l-utils so it can easily be
> run on an embedded system as well.
> 
> For a 64-bit kernel it should run the tests both with 32-bit and 64-bit
> applications.
> 
> It should also test with both single and multiplanar modes where available.
> 
> Since vivid emulates CEC as well, it should run CEC tests too.
> 
> As core developers we should have an environment where we can easily test
> our patches with this script (I use a VM for that).
> 

It's quite trivial to set up a qemu environment for this, e.g. you can
use virtme [1] and set it up so that it runs a script after booting.
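For instance, assuming a kernel tree in ~/src/linux and a hypothetical
test-media.sh wrapper script (neither path nor script exists in v4l-utils
today; both are placeholders), a virtme invocation could look like:

```shell
# Boot the freshly built kernel in qemu via virtme and run the media
# smoke tests right after init; tree path and script name are assumptions.
virtme-run --kdir ~/src/linux --script-sh ./test-media.sh
```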

> I think maintaining the script (or perhaps scripts) in v4l-utils is best since
> that keeps it in sync with the latest kernel and v4l-utils developments.
> 
> Comments? Ideas?
> 

Sounds great. I think it makes a lot of sense to have a script for CIs
and developers to run.

I guess we can start simple, with just a bash script?
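Started simple, such a bash script mostly needs a way to turn v4l2-compliance
output into a pass/fail verdict. A minimal sketch of that piece (the summary
line format shown is an assumption about v4l2-compliance output; verify it
against the installed version):

```shell
#!/bin/sh
# Extract the "Failed" count from a v4l2-compliance summary line such as:
#   Total: 46, Succeeded: 46, Failed: 0, Warnings: 0
summary_failed() {
    echo "$1" | sed -n 's/.*Failed: \([0-9][0-9]*\).*/\1/p'
}

# A real run would feed the tool's last output line into the helper, e.g.:
#   failed=$(summary_failed "$(v4l2-compliance -d /dev/video0 | tail -n 1)")
summary_failed "Total: 46, Succeeded: 46, Failed: 2, Warnings: 0"  # prints 2
```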

> Regards,
> 
>   Hans

[1] 
https://www.collabora.com/news-and-blog/blog/2018/09/18/virtme-the-kernel-developers-best-friend/



Re: [RFC] Create test script(s?) for regression testing

2018-11-06 Thread Laurent Pinchart
Hello,

On Tuesday, 6 November 2018 13:36:55 EET Sakari Ailus wrote:
> On Tue, Nov 06, 2018 at 09:37:07AM +0100, Hans Verkuil wrote:
> > Hi all,
> > 
> > After the media summit (heavy on test discussions) and the V4L2 event
> > regression we just found it is clear we need to do a better job with
> > testing.
> > 
> > All the pieces are in place, so what is needed is to combine it and create
> > a script that anyone of us as core developers can run to check for
> > regressions. The same script can be run as part of the kernelci
> > regression testing.
> 
> I'd say that *some* pieces are in place. Of course, the more there is, the
> better.
> 
> The more tests there are, the more important it is that they're automated,
> preferably without the developer having to run them on his/her own machine.

From my experience with testing, it's important to have both a core set of
tests (a.k.a. smoke tests) that can easily be run on developers' machines, and
extended tests that can be offloaded to a shared testing infrastructure (but
possibly also run locally if desired).

> > We have four virtual drivers: vivid, vim2m, vimc and vicodec. The last one
> > is IMHO not quite good enough yet for testing: it is not fully compliant
> > to the upcoming stateful codec spec. Work for that is planned as part of
> > an Outreachy project.
> > 
> > My idea is to create a script that is maintained as part of v4l-utils that
> > loads the drivers and runs v4l2-compliance and possibly other tests
> > against the virtual drivers.
> 
> How about spending a little time to pick a suitable framework for running
> the tests? It could be useful to get more informative reports than just
> pass / fail.

We should keep in mind that other tests will be added later, and the test 
framework should make that easy.

Regarding the test output, many formats exist (see https://testanything.org/ 
and https://chromium.googlesource.com/chromium/src/+/master/docs/testing/
json_test_results_format.md for instance), we should pick one of the leading 
industry standards (what those standards are still needs to be researched 
:-)).
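Of the two formats mentioned, TAP is the lighter one and trivial to emit from
a plain shell script. A hypothetical sketch (the test names and counts are made
up for illustration):

```shell
#!/bin/sh
# Report per-driver compliance results in TAP (https://testanything.org/):
# a "1..N" plan line followed by one "ok"/"not ok" line per test.
tap_plan() { echo "1..$1"; }
tap_result() {
    # $1 = test number, $2 = exit status (0 = pass), $3 = description
    if [ "$2" -eq 0 ]; then
        echo "ok $1 - $3"
    else
        echo "not ok $1 - $3"
    fi
}

tap_plan 2
tap_result 1 0 "vivid: v4l2-compliance"
tap_result 2 1 "vicodec: v4l2-compliance"
```

This prints:

    1..2
    ok 1 - vivid: v4l2-compliance
    not ok 2 - vicodec: v4l2-compliance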

> Do note that for different hardware the tests would be likely different as
> well although there are classes of devices for which the exact same tests
> would be applicable.

See http://git.ideasonboard.com/renesas/vsp-tests.git for an example of 
device-specific tests. I think some of that could be generalized.

> > It should be simple to use and require very little in the way of
> > dependencies. Ideally no dependencies other than what is in v4l-utils so
> > it can easily be run on an embedded system as well.
> > 
> > For a 64-bit kernel it should run the tests both with 32-bit and 64-bit
> > applications.
> > 
> > It should also test with both single and multiplanar modes where
> > available.
> > 
> > Since vivid emulates CEC as well, it should run CEC tests too.
> > 
> > As core developers we should have an environment where we can easily test
> > our patches with this script (I use a VM for that).
> > 
> > I think maintaining the script (or perhaps scripts) in v4l-utils is best
> > since that keeps it in sync with the latest kernel and v4l-utils
> > developments.
> 
> Makes sense --- and that can be always changed later on if there's a need
> to.

I wonder whether that would be best going forward, especially if we want to 
add more tests. Wouldn't a v4l-tests project make sense ?

-- 
Regards,

Laurent Pinchart





Re: [RFC] Create test script(s?) for regression testing

2018-11-06 Thread Sakari Ailus
Hi Hans,

On Tue, Nov 06, 2018 at 09:37:07AM +0100, Hans Verkuil wrote:
> Hi all,
> 
> After the media summit (heavy on test discussions) and the V4L2 event
> regression we just found, it is clear we need to do a better job with testing.
> 
> All the pieces are in place, so what is needed is to combine it and create a
> script that anyone of us as core developers can run to check for regressions.
> The same script can be run as part of the kernelci regression testing.

I'd say that *some* pieces are in place. Of course, the more there is, the
better.

The more tests there are, the more important it is that they're automated,
preferably without the developer having to run them on his/her own machine.

> 
> We have four virtual drivers: vivid, vim2m, vimc and vicodec. The last one
> is IMHO not quite good enough yet for testing: it is not fully compliant to
> the upcoming stateful codec spec. Work for that is planned as part of an
> Outreachy project.
> 
> My idea is to create a script that is maintained as part of v4l-utils that
> loads the drivers and runs v4l2-compliance and possibly other tests against
> the virtual drivers.

How about spending a little time to pick a suitable framework for running
the tests? It could be useful to get more informative reports than just
pass / fail.

Do note that for different hardware the tests would be likely different as
well although there are classes of devices for which the exact same tests
would be applicable.

> 
> It should be simple to use and require very little in the way of dependencies.
> Ideally no dependencies other than what is in v4l-utils so it can easily be
> run on an embedded system as well.
> 
> For a 64-bit kernel it should run the tests both with 32-bit and 64-bit
> applications.
> 
> It should also test with both single and multiplanar modes where available.
> 
> Since vivid emulates CEC as well, it should run CEC tests too.
> 
> As core developers we should have an environment where we can easily test
> our patches with this script (I use a VM for that).
> 
> I think maintaining the script (or perhaps scripts) in v4l-utils is best since
> that keeps it in sync with the latest kernel and v4l-utils developments.

Makes sense --- and that can always be changed later on if there's a need
to.

-- 
Regards,

Sakari Ailus
sakari.ai...@linux.intel.com


[RFC] Create test script(s?) for regression testing

2018-11-06 Thread Hans Verkuil
Hi all,

After the media summit (heavy on test discussions) and the V4L2 event regression
we just found, it is clear we need to do a better job with testing.

All the pieces are in place, so what is needed is to combine it and create a
script that anyone of us as core developers can run to check for regressions.
The same script can be run as part of the kernelci regression testing.

We have four virtual drivers: vivid, vim2m, vimc and vicodec. The last one
is IMHO not quite good enough yet for testing: it is not fully compliant to the
upcoming stateful codec spec. Work for that is planned as part of an Outreachy
project.

My idea is to create a script that is maintained as part of v4l-utils that
loads the drivers and runs v4l2-compliance and possibly other tests against
the virtual drivers.
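Concretely, such a script could be little more than a loop over the virtual
drivers. The sketch below is a hypothetical outline, not an existing v4l-utils
tool: the driver list comes from this thread, while the --run guard and the
/dev/video0 device path are assumptions.

```shell
#!/bin/sh
set -e
DRIVERS="vivid vim2m vimc vicodec"

# Without --run the script only prints what it would do, so the logic can
# be checked without root access or the virtual drivers being present.
if [ "${1:-}" = "--run" ]; then run() { "$@"; }; else run() { echo "$@"; }; fi

for drv in $DRIVERS; do
    run modprobe "$drv"
done

# vivid registers /dev/videoN nodes; run the compliance suite against one.
run v4l2-compliance -d /dev/video0
```

Invoked without arguments it prints the five commands it would execute;
`sh test-media.sh --run` would actually load the modules and run the tests.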

It should be simple to use and require very little in the way of dependencies.
Ideally no dependencies other than what is in v4l-utils so it can easily be run
on an embedded system as well.

For a 64-bit kernel it should run the tests both with 32-bit and 64-bit
applications.

It should also test with both single and multiplanar modes where available.

Since vivid emulates CEC as well, it should run CEC tests too.

As core developers we should have an environment where we can easily test
our patches with this script (I use a VM for that).

I think maintaining the script (or perhaps scripts) in v4l-utils is best since
that keeps it in sync with the latest kernel and v4l-utils developments.

Comments? Ideas?

Regards,

Hans