On 08/21/2018 11:04 AM, Wang, Yang (Young) wrote:
> Hi All,
>
> I'm working on this ticket:
> https://bugzilla.yoctoproject.org/show_bug.cgi?id=12372

Thanks for investigating the bug/enhancement and posting your thoughts.
I'm jumping in without much expertise to try to get the ball rolling.


> As far as I know, the following are all true nowadays:
> - Ptest needs to be run on real hardware and it takes a few hours to finish
> - Ptest can be run within OEQA, and it can also be run independently
> - LAVA is a good open source test framework which:
>     - can manage both real hardware and different kinds of simulators as
>       the test devices
>     - provides a well managed logging system and test reports
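
To make the first two points concrete for anyone who hasn't run them:
ptest-runner is the on-target tool that drives all the installed test
suites, and oeqa's ptest case drives the same tool over ssh. A rough
sketch of doing it by hand (the ssh wrapping, the address and the
timeout are just my illustration):

# Sketch: run the installed ptests on a booted board over ssh.
# ptest-runner is the real on-target tool; the ssh wrapper, target
# address and timeout below are illustrative only.
import subprocess

def run_ptests(target='root@192.168.7.2', timeout=4 * 3600):
    # With no arguments, ptest-runner runs every installed ptest,
    # hence the multi-hour timeout mentioned above.
    proc = subprocess.run(['ssh', target, 'ptest-runner'],
                          capture_output=True, text=True, timeout=timeout)
    # The suites print "PASS: name" / "FAIL: name" / "SKIP: name" lines.
    failures = [l for l in proc.stdout.splitlines()
                if l.startswith('FAIL:')]
    return proc.stdout, failures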
> How can we run Ptest automatically? I think running it with LAVA is a good
> solution, but LAVA runs as a server which manages the test jobs submitted
> to it. Here is a typical LAVA job:
> https://staging.validation.linaro.org/scheduler/job/231942/definition
> As you can see, it defines the device type, the test images which will be
> used, the test cases and a lot of other parameters.

That's a good clear format.

I believe that what people are thinking is that we'd have:

device_type: x86

job_name: x86_64 oeqa
...

actions:
- deploy:
 ...

- boot:
...

- test:
    timeout:
      minutes: 2
    definitions:
      << something that makes the target and lava server wait for
         oeqa to run >>
      name: oeqa-test

> So the typical automated way to run a test through LAVA is to write a script
> which uses a LAVA job template, replaces the images with the expected ones,
> and then submits it to LAVA through a command, for example:
> $ lava-tool submit-job http://<user>@<lava-server> x86_64_job_oeqa-ptest.yaml

That would still work given the above oeqa job.

No doubt there's additional glue code that it would
be nice to write, to automatically create
the LAVA yaml that boots the system into a state where the oeqa
code takes over.
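
Something like this, say; a minimal sketch assuming a template job file
with $KERNEL/$ROOTFS placeholders (the template name, the placeholder
keys and the file names are all invented for illustration):

# Sketch: fill in a LAVA job template with the images from the current
# build and submit it with lava-tool. The template file, placeholder
# names and output name below are hypothetical.
import string
import subprocess
import sys

TEMPLATE = 'x86_64_job_oeqa-ptest.yaml.in'   # assumed template file
LAVA_SERVER = 'http://<user>@<lava-server>'  # as in the example above

def make_job(kernel, rootfs, out='x86_64_job_oeqa-ptest.yaml'):
    # The template would carry $KERNEL and $ROOTFS placeholders.
    with open(TEMPLATE) as f:
        job = string.Template(f.read()).substitute(KERNEL=kernel,
                                                   ROOTFS=rootfs)
    with open(out, 'w') as f:
        f.write(job)
    return out

def submit(job_file):
    # lava-tool prints the new job id on success.
    out = subprocess.check_output(['lava-tool', 'submit-job',
                                   LAVA_SERVER, job_file])
    return out.decode().strip()

if __name__ == '__main__':
    print(submit(make_job(sys.argv[1], sys.argv[2])))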

I've never used it and only just found the code, but
I bet that adding another controller to:

git://git.yoctoproject.org/meta-yocto

$ ls meta-yocto-bsp/lib/oeqa/controllers/
beaglebonetarget.py  edgeroutertarget.py  grubtarget.py  __init__.py

is what would make sense.
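
That is, something shaped like this; a hypothetical sketch only,
patterned on what beaglebonetarget.py appears to do (I'm assuming its
MasterImageHardwareTarget base class and _deploy()/_start() hooks;
the LavaTarget name, the LAVA_* variables and the lava-tool call are
all mine):

# Hypothetical lib/oeqa/controllers/lavatarget.py sketch. Assumes the
# same base class and hooks that beaglebonetarget.py uses; the
# lava-tool call is illustrative, not a worked-out integration.
import subprocess

from oeqa.controllers.masterimage import MasterImageHardwareTarget

class LavaTarget(MasterImageHardwareTarget):

    def __init__(self, d):
        super(LavaTarget, self).__init__(d)
        # Imaginary variables a user would set in local.conf.
        self.server = d.getVar('LAVA_SERVER')
        self.job_template = d.getVar('LAVA_JOB_TEMPLATE')
        self.job_id = None

    def _deploy(self):
        # Hand the freshly built images to LAVA rather than copying
        # them onto a master image ourselves.
        out = subprocess.check_output(['lava-tool', 'submit-job',
                                       self.server, self.job_template])
        self.job_id = out.decode().strip()

    def _start(self, params=None):
        # Wait until LAVA has booted the board and parked it where
        # oeqa can ssh in; how best to detect that is the open
        # question raised below.
        pass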

> This command will return a job id (take #231942 as an example), and then the
> script can get all logs and reports based on the LAVA server address and
> this job id, for example:
> - execution log: https://staging.validation.linaro.org/scheduler/job/231942
> - test report:
>   https://staging.validation.linaro.org/results/231942/0_smoke-tests
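
(If a script wants those programmatically rather than as web pages,
LAVA also exposes an XML-RPC interface. A rough sketch; the
scheduler.job_status method name is my reading of the LAVA docs, so
double-check it against your server version:)

# Sketch: poll a LAVA job over XML-RPC until it finishes.
# The scheduler.job_status call and the URL layout are assumptions
# taken from the LAVA documentation.
import time
import xmlrpc.client

SERVER = 'https://<user>:<token>@staging.validation.linaro.org/RPC2'

def wait_for_job(job_id, poll=60):
    proxy = xmlrpc.client.ServerProxy(SERVER)
    while True:
        status = proxy.scheduler.job_status(job_id)['job_status']
        if status not in ('Submitted', 'Running'):
            return status
        time.sleep(poll)

if __name__ == '__main__':
    print(wait_for_job(231942))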

I suspect that this is where the design intent diverges.

Usually LAVA runs the whole system, and I think we just
want it to manage the hardware and then step out of the way.
There'd likely be an API to allow oeqa and LAVA to communicate
so that, for example, oeqa could tell LAVA that the tests were done.

All LAVA would know is that an oeqa test ran, and its completion
status.
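
LAVA's test shell does ship on-target helpers such as lava-test-case
for reporting results, so the minimal version of that API might be no
more than oeqa invoking the helper when it finishes. A sketch of what
I mean, where the hook placement and the ssh plumbing are invented:

# Sketch: after oeqa finishes, report a single pass/fail back to the
# LAVA job holding the board. lava-test-case is LAVA's real on-target
# helper; calling it over ssh from the oeqa host is my invention.
import subprocess

def report_to_lava(target_ip, passed):
    result = 'pass' if passed else 'fail'
    # The board sits inside a lava-test-shell action, so the helper
    # is on its PATH; this would close out the 'oeqa-test' case that
    # LAVA is waiting on.
    subprocess.check_call(['ssh', 'root@%s' % target_ip,
                           'lava-test-case', 'oeqa-test',
                           '--result', result])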

> So, as far as I can tell, it may not be appropriate to integrate LAVA tests
> into a bitbake command run through a simple test harness; LAVA is an
> advanced test framework and it manages all jobs submitted to it well.
> Please comment if you have a better idea about this ticket.

I'm really going on a few conversations that I've had and chats
on IRC, so hopefully someone else can step up and comment on both Young's
initial email and my interpretation of where we're trying to
get to.

Thanks,

--
# Randy MacLeod
# Wind River Linux