Hello Richard,
On 2/26/23 13:15, Richard Purdie wrote:
> On Sat, 2023-02-25 at 16:59 +0100, Alexis Lothoré wrote:
>> Hello Richard,
>> as usual, thanks for the prompt feedback !
>>
>> On 2/25/23 13:32, Richard Purdie wrote:
>>> On Sat, 2023-02-25 at 09:15 +0000, Richard Purdie via
>>> lists.openembedded.org wrote:
>>>> On Fri, 2023-02-24 at 18:06 +0000, Richard Purdie via
>>>> lists.openembedded.org wrote:
>>>>> Hi Alexis,
>>>>>
>>>>> Firstly, this looks very much improved, thanks. It is great to start to
>>>>> see some meaningful data from this.
>>>>>
>>>>> On Fri, 2023-02-24 at 17:45 +0100, Alexis Lothoré via
>>>>> lists.openembedded.org wrote:
>>>>>> After manual inspection of some entries, the remaining oe-selftest
>>>>>> regressions raised in the report seem valid. There are still some
>>>>>> issues to tackle:
>>>>>> - it seems that one major remaining source of noise is now in the
>>>>>>   "runtime" tests (comparisons to tests not run in the "target"
>>>>>>   results)
>>>>>> - when a ptest managed by oe-selftest fails, I guess the remaining
>>>>>>   tests are not run, so when one failure is logged we get many
>>>>>>   "PASSED->None" transitions in the regression report; we should
>>>>>>   probably silence those
>>>>>> - some transitions appear as regressions while they are in fact
>>>>>>   improvements (e.g. "UNRESOLVED->PASSED")
>>>>>
>>>>> I had a quick play. Firstly, if I try "yocto_testresults_query.py
>>>>> regression-report 4.2_M1 4.2_M2" in an openembedded-core repository
>>>>> instead of poky, it breaks. That isn't surprising, but we should
>>>>> either make it work or show a sensible error.
>>
>> Oh right, I am working in a Poky build configuration, so I assumed that
>> this would be the only use case.
>> Since the test results commits are tightly coupled to revisions in poky
>> (not oecore), I plan to merely log an error about the revision not being
>> found (and suggest that the user check that the repository is poky and
>> not oecore).
>> But please let me know if I am missing a major use case here and a
>> smarter fallback plan (shallow-clone poky if we are running in oecore?)
>> is needed
> 
> I'm happy for it to just give a human-readable error; someone can add
> this functionality if they need/want it.

ACK
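
For reference, here is roughly the shape I have in mind (a minimal sketch; `resolve_revision` and the message wording are placeholders of mine, not the actual resulttool code):

```python
import subprocess

def resolve_revision(repo, revision):
    """Return the commit hash for `revision` in `repo`, or None if it
    cannot be resolved (e.g. when running from oecore instead of poky)."""
    result = subprocess.run(
        ["git", "-C", repo, "rev-parse", "--verify", "--quiet",
         revision + "^{commit}"],
        capture_output=True, text=True,
    )
    return result.stdout.strip() or None

def missing_revision_message(revision, repo):
    """Build the human-readable error shown when a release tag is absent."""
    return ('Could not find revision "%s" in %s: '
            "the release tags live in poky, so please check that the "
            "repository is poky and not openembedded-core" % (revision, repo))
```

The caller would then print that message and exit non-zero instead of letting the traceback escape.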

>>>> I think I might be tempted to merge this series and then we can change
>>>> the code to improve from here as this is clearly a vast improvement on
>>>> where we were! Improvements can be incremental on top of these changes.
>>
>> I am in favor of this :) If it is OK for you, I will just re-submit the
>> series with the fix for proper error logging when running the tool from
>> oecore instead of poky.
>>
>> Next we could introduce all the improvements you have suggested, but I
>> feel that, with the quickly growing number of "hotfixes" needed to
>> support issues with older test results, and for the sake of the
>> maintainability of resulttool and its submodules, those specific
>> hotfixes should be properly isolated (and documented), e.g. in a
>> "regression_quirks.py" or something like that. What do you think?
> 
> I'm hoping we don't have many of these quirks. We have a huge history
> at this point so it would be sad if the tool can't work with it. From
> what I've seen so far, we can manage with the code in the regression
> module itself. I've tried to add some comments.
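
Understood. For the record, the transition filtering discussed earlier in the thread could still be kept together in one small, well-commented helper, along these lines (a sketch with hypothetical names of mine, not the actual regression module code):

```python
# Statuses that count as an improvement when they are the new result;
# more could be added here as we find them in the history.
IMPROVED = {"PASSED"}

def is_real_regression(old_status, new_status, is_ptest=False):
    """Decide whether an old->new status transition should appear in the
    regression report (hypothetical helper; the real resulttool
    regression module is structured differently)."""
    # Moving to a passing state (e.g. "UNRESOLVED->PASSED") is an
    # improvement, not a regression.
    if new_status in IMPROVED:
        return False
    # When a ptest run aborts early, the remaining tests report no
    # status; silence the resulting "PASSED->None" noise.
    if is_ptest and old_status == "PASSED" and new_status in (None, "None"):
        return False
    return True
```

Keeping all such special cases behind one predicate would at least document them in a single place.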
> 
> I wondered what to do with this series since I needed to get M3 built.
> As the series was available and mostly usable, it seemed better to have
> a nicer report this time, and it is a good test of the code.
> 
> In the end I've merged most of it, along with my two tweaks to handle
> LTP and the bigger ptest results issue. I couldn't take one set of the
> selftests since they simply don't work. This will give us a useful
> real-world test of the M3 report.

Ok, great. As for the failing selftests, my bad: adding proper logging was
one of those "one last changes before sending", and I obviously forgot to
re-run the tests before sending.
> 
> I'm working on the assumption you'll send a follow up series with the
> tests, the oe-core check and some of the other issues I've mentioned in
> other emails?

Absolutely. Besides the tests, the oecore check and the improvements
mentioned in this mail thread, the next thing on my list is fixing the
report generation against the "master-next" branches, which you mentioned
a few weeks ago.

Regards,
-- 
Alexis Lothoré, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com

View/Reply Online (#177737):
https://lists.openembedded.org/g/openembedded-core/message/177737