Hello Richard,
as usual, thanks for the prompt feedback!

On 2/25/23 13:32, Richard Purdie wrote:
> On Sat, 2023-02-25 at 09:15 +0000, Richard Purdie via
> lists.openembedded.org wrote:
>> On Fri, 2023-02-24 at 18:06 +0000, Richard Purdie via
>> lists.openembedded.org wrote:
>>> Hi Alexis,
>>>
>>> Firstly, this looks very much improved, thanks. It is great to start to
>>> see some meaningful data from this.
>>>
>>> On Fri, 2023-02-24 at 17:45 +0100, Alexis Lothoré via
>>> lists.openembedded.org wrote:
>>>> After manual inspection of some entries, the remaining oe-selftest
>>>> regressions raised in the report seem valid. There are still some issues
>>>> to tackle:
>>>> - it seems that one major remaining source of noise is now in the
>>>>   "runtime" tests (comparison to tests not run in "target" results)
>>>> - when a ptest managed by oe-selftest fails, I guess the remaining tests
>>>>   are not run, so when 1 failure is logged, we have many "PASSED->None"
>>>>   transitions in the regression report; we should probably silence these.
>>>> - some transitions appear as regressions while they are in fact
>>>>   improvements (e.g.: "UNRESOLVED->PASSED")
>>>
>>> I had a quick play. Firstly, if I try "yocto_testresults_query.py
>>> regression-report 4.2_M1 4.2_M2" in an openembedded-core repository
>>> instead of poky, it breaks. That isn't surprising but we should either
>>> make it work or show a sensible error.

Oh right, I am working in a Poky build configuration, so I had assumed that
this would be the only use case.
Since the test results commits are tightly coupled to revisions in poky (not
oecore), I plan to simply log an error about the revision not being found (and
suggest that the user check that the repository is poky and not oecore).
But please let me know if I am missing a major use case here and a smarter
fallback plan (shallow-clone poky if we are running in oecore ?) is needed.
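To illustrate, here is a rough sketch of what such a check could look like
(the function name and error wording are made up for the example, this is not
the actual resulttool code):

```python
import subprocess
import sys

def resolve_revision(tag, repo_dir=".", run=subprocess.run):
    """Resolve a release tag (e.g. "4.2_M2") to a commit hash with
    `git rev-parse`, exiting with a poky-vs-oecore hint when the tag
    cannot be found. The `run` callable is injectable for testing."""
    result = run(
        ["git", "-C", repo_dir, "rev-parse", "--verify", "--quiet",
         tag + "^{commit}"],
        capture_output=True, text=True)
    if result.returncode != 0:
        # Test results tags only exist in poky, so a missing tag most
        # likely means the tool was launched from openembedded-core.
        sys.exit("Could not find revision '%s' in %s: check that the "
                 "repository is poky and not openembedded-core"
                 % (tag, repo_dir))
    return result.stdout.strip()
```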


>> I think I might be tempted to merge this series and then we can change
>> the code to improve from here as this is clearly a vast improvement on
>> where we were! Improvements can be incremental on top of these changes.

I am in favor of this :) If it is OK with you, I will just re-submit the
series with the fix for proper error logging when running the tool from oecore
rather than poky.

Next we could introduce all the improvements you have suggested, but I feel
that, with the quickly growing number of "hotfixes" needed to support issues
with older test results, and for the sake of maintainability of resulttool and
its submodules, those specific hotfixes should be properly isolated (and
documented), for instance in a "regression_quirks.py" module. What do you think?
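For concreteness, a minimal sketch of what I have in mind (the transition
names come from the examples discussed above; the module layout and function
names are hypothetical):

```python
# Hypothetical sketch of a "regression_quirks.py" module: each quirk is a
# status transition that the regression report should not flag, kept in one
# place with its rationale so workarounds for older test results stay
# documented and easy to remove later.

# Transitions that look like regressions but are actually improvements.
IMPROVEMENTS = {
    ("UNRESOLVED", "PASSED"),
}

# Transitions that are noise, e.g. tests not run after an earlier failure
# aborted the rest of a ptest run.
NOISE = {
    ("PASSED", "None"),
}

def is_real_regression(old_status, new_status):
    """Return True only for transitions not covered by a documented quirk."""
    transition = (old_status, new_status)
    return transition not in IMPROVEMENTS and transition not in NOISE
```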

Alexis

-- 
Alexis Lothoré, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com
