Re: [yocto] [qa-build-notification] QA notification for completed autobuilder build (yocto-4.3.rc1)

2023-10-18 Thread Richard Purdie
On Wed, 2023-10-18 at 06:16 +, Pokybuild User wrote:
> A build flagged for QA (yocto-4.3.rc1) was completed on the autobuilder 
> and is available at:
> 
> 
> https://autobuilder.yocto.io/pub/releases/yocto-4.3.rc1
> 
> 
> Build URL: 
> https://autobuilder.yoctoproject.org/typhoon/#/builders/83/builds/6062

There was one failure in the build, a serial login issue on ttyS1, i.e.
another occurrence of our annoying 6.5 issue. This is the first time
we've seen it with the workaround applied.

The question is whether to proceed with rc1 in testing, or to apply the
upstream fixes and try an rc2. I'm torn...

Cheers,

Richard

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#61386): https://lists.yoctoproject.org/g/yocto/message/61386
Mute This Topic: https://lists.yoctoproject.org/mt/102034597/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: 
https://lists.yoctoproject.org/g/yocto/leave/6691583/21656/737036229/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] [yocto-autobuilder-helper][PATCH] config.json : Fix repro-meta-oe layer dependency

2023-10-14 Thread Richard Purdie
On Fri, 2023-10-13 at 16:34 +0200, Fabien Thomas wrote:
> Fix a meta-filesystems build issue due to hidden dependencies (not set by
> LAYERDEPENDS): smbnetfs (meta-filesystems) depends on
> samba (meta-networking), which depends on python3-dnspython (meta-python).
> 
> Create a new build-st dir for each layer to avoid a chain of
> failures between steps.
> 
> Signed-off-by: Fabien Thomas 
> Reviewed-by: Yoann Congal 
> ---
>  config.json | 22 --
>  1 file changed, 12 insertions(+), 10 deletions(-)

https://autobuilder.yoctoproject.org/typhoon/#/builders/155/builds/5

Cheers,

Richard

-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#61342): https://lists.yoctoproject.org/g/yocto/message/61342
Mute This Topic: https://lists.yoctoproject.org/mt/101941142/21656
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] are there any plans for a RISC-V reference board?

2023-10-13 Thread Richard Purdie
On Fri, 2023-10-13 at 03:09 -0400, Robert P. J. Day wrote:
> On Tue, 10 Oct 2023, Josef Holzmayr wrote:
> > It has been discussed at numerous occasions. The main blocker is: we
> > need a commitment for the maintenance. So if a high-ranking member
> > decides to push forwards with this and allocate resources, or a new
> > member from the RISC-V ecosystem steps up to make it happen, then
> > the project is all ears.
> > 
> > Greetz,
> > Josef
> 
> Ask and ye shall receive. I just got the following private note
> from jiaqi.d...@starfivetech.com, who obviously saw my earlier post
> (and agreed to let me reproduce his response on the list):
> 
> > I think VisionFive 2 is an appropriate reference board. We offer
> > Long Term Support for JH7110 and VisionFive 2. The upgraded
> > VisionFive 2 brings big improvements in processor clock frequency,
> > multimedia processing capability, and scalability. Since August of
> > last year we have continued to promote the VF2 ecosystem; it has
> > successively been adapted to Deepin OS, Ubuntu OS, UEFI EDK2, OpenWrt,
> > PPSSPP… you can check out the RVspace forum. So we would like to work
> > with the developers and provide LTS if there are specific project
> > requirements.
> 
>   Sound promising?

The project has been talking to the RISC-V Foundation for a while about
this. The tricky part is that adding official support for a new
architecture isn't a cheap or easy undertaking for the project.

The autobuilder for example has hardware to pay for as well as
bandwidth and power costs. If we add a new architecture, the test
matrix is larger and the costs rise. There is also the increased human
load of more tests, more failures and more work to do.

Intel, AMD and ARM are platinum members, which partly covers the cost
of supporting those architectures. MIPS and PowerPC are on the list for
legacy reasons, and their testing scope is being scaled back.

We've discussed and agreed that adding a new architecture with all the
work it entails is at least around a platinum membership level of
impact for us. Therefore, to have official support of a new
architecture, we've asked that there be support either from existing
membership or new members of around that level.

When you look at the details of what you get, that price tag is
actually quite reasonable for the level of automated testing performed.

We've been clear about our position on this for a while. There are
discussions in progress about it with both the foundation and RISE, and
I have some reason to be optimistic, but some of this has been going
back and forth for a number of years. FWIW, I would personally love to
see the support official and in our test matrix.

Cheers,

Richard


-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#61332): https://lists.yoctoproject.org/g/yocto/message/61332
Mute This Topic: https://lists.yoctoproject.org/mt/101869785/21656
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] [yocto-autobuilder-helper][PATCH] oe-mirror: Add job to mirror openembedded sources

2023-10-11 Thread Richard Purdie
On Tue, 2023-10-10 at 16:49 +0200, David Pierret wrote:
> - Add a job that calls run-fetchuniverse
> - Add script run-fetchuniverse
>- loop over each LTS branch
>- download all the meta-openembedded layer sources
>- call oe-selftest test_yocto_source_mirror after fetching
> We are not using a second step, to keep the checkout consistent with a
> potential merge.
> 
> Signed-off-by: David Pierret 
> Reviewed-by: Yoann Congal 
> ---
>  config.json   | 13 +++
>  scripts/run-fetchuniverse | 49 +++
>  2 files changed, 62 insertions(+)
>  create mode 100755 scripts/run-fetchuniverse
> 
> diff --git a/config.json b/config.json
> index 3acb710..8d59be8 100644
> --- a/config.json
> +++ b/config.json
> @@ -1420,6 +1420,19 @@
>  "${SCRIPTSDIR}/setup-auh ${HELPERBUILDDIR}; 
> ${SCRIPTSDIR}/run-auh ${HELPERBUILDDIR} ${WEBPUBLISH_DIR}/pub/auh/"
>  ]
>  },
> +"oe-mirror" : {

This needs to be called something like meta-oe-mirror to differentiate
it from other "OE" things.


> +"SDKMACHINE" : "x86_64",
> +"MACHINE" : "qemux86-64",
> +"NEEDREPOS" : ["poky", "meta-openembedded"],
> +
> +"step1" : {
> +"shortname" : "Sources pre-fetching",
> +"EXTRAPLAINCMDS" : ["${SCRIPTSDIR}/run-fetchuniverse ${BUILDDIR} ${SCRIPTSDIR}"],
> +"extravars" : [
> +"SOURCE_MIRROR_FETCH = '1'"
> +]
> +}
> +},
>  "a-quick" : {
>  "TEMPLATE" : "trigger-build"
>  },
> diff --git a/scripts/run-fetchuniverse b/scripts/run-fetchuniverse
> new file mode 100755
> index 000..9ef82de
> --- /dev/null
> +++ b/scripts/run-fetchuniverse
> @@ -0,0 +1,49 @@
> +#!/bin/bash
> +#
> +# SPDX-License-Identifier: GPL-2.0-only
> +#
> +BUILDDIR=`realpath $1`
> +SCRIPTSDIR=`realpath $2`
> +ROOTDIR=$BUILDDIR/..
> +
> +#
> +# Fetch mirror for each LTS branch and test integrity
> +#
> +for branch in master mickledore langdale kirkstone dunfell; do

Rather than iterate here, we should add a configuration to each branch
of -helper where we want to enable this.
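To make that suggestion concrete, a minimal sketch of the per-branch shape; the conf file name and variable are assumptions, not an existing -helper convention:

```shell
# Hypothetical per-branch configuration: each yocto-autobuilder-helper
# branch carries a one-line conf naming the branch it mirrors, so the
# script needs no hardcoded LTS list. File/variable names are invented.
conf=$(mktemp)
printf 'MIRROR_BRANCH=kirkstone\n' > "$conf"   # stand-in for a committed file

. "$conf"
echo "would fetch universe for branch: $MIRROR_BRANCH"
rm -f "$conf"
```

Each -helper branch would then only ever fetch for its own branch, and enabling or disabling a branch becomes a one-line commit on that branch.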

> +  # For each layer
> +  for repo in bitbake poky meta-openembedded ; do
> +pushd $ROOTDIR/$repo || exit 1
> +  # reset branch
> +  git reset origin/$branch --hard
> +popd || exit 1
> +  done
> +  # remove config to be re-generated by oe-init-build-env
> +  rm conf/local.conf
> +  rm conf/bblayers.conf
> +  rm -f conf/templateconf.cfg
> +  rm tmp/ -rf
> +
> +  # got back to ROOTDIR
> +  pushd $ROOTDIR || exit 1
> +  # generate config for current branch
> +  . ./poky/oe-init-build-env build
> +
> +  # We need to add the meta-openembedded layers before fetch
> +  bitbake-layers add-layer ${BUILDDIR}/../meta-openembedded/meta-filesystems ${BUILDDIR}/../meta-openembedded/meta-gnome \
> +   ${BUILDDIR}/../meta-openembedded/meta-initramfs ${BUILDDIR}/../meta-openembedded/meta-multimedia \
> +   ${BUILDDIR}/../meta-openembedded/meta-networking ${BUILDDIR}/../meta-openembedded/meta-oe \
> +   ${BUILDDIR}/../meta-openembedded/meta-perl ${BUILDDIR}/../meta-openembedded/meta-python \
> +   ${BUILDDIR}/../meta-openembedded/meta-webserver ${BUILDDIR}/../meta-openembedded/meta-xfce

Then hopefully we can massively simplify the script!
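One possible simplification, sketched under the assumption that the sublayer set stays fixed: derive the long add-layer invocation from a plain list instead of ten hand-written paths.

```shell
# Build the bitbake-layers argument list from a list of sublayer names.
# The BUILDDIR default here is only for illustration; the real script
# receives it as its first argument.
OE_ROOT="${BUILDDIR:-/tmp/build}/../meta-openembedded"
layers="meta-filesystems meta-gnome meta-initramfs meta-multimedia \
meta-networking meta-oe meta-perl meta-python meta-webserver meta-xfce"

args=""
for l in $layers; do
    args="$args $OE_ROOT/$l"
done
# The real script would now run: bitbake-layers add-layer $args
echo "bitbake-layers add-layer$args"
```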

Cheers,

Richard

> +  # call bitbake to fetch universe
> +  bitbake universe -c fetch -k
> +
> +  # call oe-selftest to execute test_yocto_source_mirror
> +  # We need the selftest layer
> +  bitbake-layers add-layer $BUILDDIR/../meta-selftest
> +  # ensure no previous build-st directory exist
> +  rm -rf ${BUILDDIR}/../build-st
> +  ${SCRIPTSDIR}/checkvnc; DISPLAY=:1 oe-selftest -r buildoptions.SourceMirroring.test_yocto_source_mirror || exit 1
> +  popd || exit 1
> +done
> +
> +
> 
> 


-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#61323): https://lists.yoctoproject.org/g/yocto/message/61323
Mute This Topic: https://lists.yoctoproject.org/mt/101876565/21656
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] [auh][PATCH] weeklyjob: Add Openembedded update-helper conf

2023-10-11 Thread Richard Purdie
On Wed, 2023-10-11 at 11:02 +0200, Alexander Kanavin wrote:
> On Tue, 10 Oct 2023 at 16:59, David Pierret  wrote:
> > We've chosen to use a template configuration file rather than multiple
> > configuration files (1 per meta). The template file contains tags
> > between braces that are replaced in the script using simple `sed` calls.
> > The meta list to be analyzed is defined via the `meta_list` variable.
> > This variable matches all `meta-*` directories.
> 
> I would suggest that AUH itself is modified to accept the dynamic
> layer_* parts as command line arguments. Tweaking things with sed is
> difficult to understand or maintain, and can be easily avoided here.
> 
> Then the other commit for autobuilder-helper can as well be greatly
> simplified and can reuse existing scripts instead of
> copy-paste-tweaking them.

I agree, the goal here is to be able to use these tools with other
layers so fixing them to be more generic and configurable rather than
hacking them is much preferred.
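For context, the sed-based tag replacement described in the commit message presumably looks something like the sketch below (the template text and tag names are assumptions); the review's point is that passing these values as AUH command-line arguments would avoid this indirection.

```shell
# Illustrative sed templating: tags between braces substituted per layer.
# The template line and both tag names are invented for this example.
template='layer_path = "{layer_dir}/{layer_name}"'

layer_dir="/srv/meta-openembedded"
layer_name="meta-oe"
line=$(printf '%s\n' "$template" \
    | sed -e "s|{layer_dir}|$layer_dir|" -e "s|{layer_name}|$layer_name|")
echo "$line"
```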

Cheers,

Richard

-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#61322): https://lists.yoctoproject.org/g/yocto/message/61322
Mute This Topic: https://lists.yoctoproject.org/mt/101876809/21656
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] [yocto-autobuilder-helper][PATCH] config.json : reproducible-meta-oe build fixes

2023-10-10 Thread Richard Purdie
On Tue, 2023-10-10 at 12:07 +0200, Fabien Thomas wrote:
> - Distro has been changed to Poky, so exclude Poky's layers from world to
> limit reproducibility tests to the targeted layer only.
> - Execution on the autobuilder shows that addlayer steps often fail:
> reorder layers in the ADDLAYER list according to their dependencies.
> - Fix shortid build issues by setting USERADD_U(G)ID_TABLES to the
> passwd and group files. These files must exist in meta-openembedded.
> - Shorten the step shortnames to improve readability in the UI.
> - Add a directory level for output reports, for convenience.
> 
> Signed-off-by: Fabien Thomas 
> Reviewed-by: Yoann Congal 
> ---
>  config.json | 107 ++--
>  1 file changed, 70 insertions(+), 37 deletions(-)
> 
> diff --git a/config.json b/config.json
> index df3493b..af4ab39 100644
> --- a/config.json
> +++ b/config.json
> @@ -296,116 +296,149 @@
>  ],
>  "extravars" : [
>  "EXCLUDE_FROM_WORLD:layer-core = '1'",
> +"EXCLUDE_FROM_WORLD:layer-yocto = '1'",
> +"EXCLUDE_FROM_WORLD:layer-yoctobsp = '1'",
>  "EXCLUDE_FROM_WORLD:layer-selftest = '1'",
>  "OEQA_REPRODUCIBLE_TEST_TARGET = 'world'"
>  ],
>  "step1" : {
> -"shortname" : "Reproducible Selftest for openembedded meta-filesystems layer",
> -"EXTRACMDS" : ["${SCRIPTSDIR}/checkvnc; OEQA_DEBUGGING_SAVED_OUTPUT=${BASE_SHAREDDIR}/pub/repro-fail-openembedded-meta-filesystems/ DISPLAY=:1 oe-selftest -r reproducible"],
> +"shortname" : "Repro test OE meta-filesystems",
> +"EXTRACMDS" : ["${SCRIPTSDIR}/checkvnc; OEQA_DEBUGGING_SAVED_OUTPUT=${BASE_SHAREDDIR}/pub/repro-fail-openembedded/meta-filesystems/ DISPLAY=:1 oe-selftest -r reproducible"],
>  "ADDLAYER" : [
> -"${BUILDDIR}/../meta-openembedded/meta-filesystems",
>  "${BUILDDIR}/../meta-openembedded/meta-oe"
> +"${BUILDDIR}/../meta-openembedded/meta-networking",
> +"${BUILDDIR}/../meta-openembedded/meta-filesystems",
>  ],
>  "extravars" : [
> -"EXCLUDE_FROM_WORLD:layer-openembedded-layer = '1'"
> +"EXCLUDE_FROM_WORLD:layer-openembedded-layer = '1'",
> +"EXCLUDE_FROM_WORLD:layer-networking-layer = '1'",
> +"USERADD_UID_TABLES:append = ' files/static-passwd-meta-filesystems'",
> +"USERADD_GID_TABLES:append = ' files/static-group-meta-networking'",
> +"USERADD_UID_TABLES:append = ' files/static-passwd-meta-networking'",
> +"USERADD_GID_TABLES:append = ' files/static-group-meta-oe'",
> +"USERADD_UID_TABLES:append = ' files/static-passwd-meta-oe'"
>  ]
>  },
>  "step2" : {
> -"shortname" : "Reproducible Selftest for openembedded meta-gnome layer",
> -"EXTRACMDS" : ["${SCRIPTSDIR}/checkvnc; OEQA_DEBUGGING_SAVED_OUTPUT=${BASE_SHAREDDIR}/pub/repro-fail-openembedded-meta-gnome/ DISPLAY=:1 oe-selftest -r reproducible"],
> +"shortname" : "Repro test OE meta-gnome",
> +"EXTRACMDS" : ["${SCRIPTSDIR}/checkvnc; OEQA_DEBUGGING_SAVED_OUTPUT=${BASE_SHAREDDIR}/pub/repro-fail-openembedded/meta-gnome/ DISPLAY=:1 oe-selftest -r reproducible"],
>  "ADDLAYER" : [
> -"${BUILDDIR}/../meta-openembedded/meta-gnome",
>  "${BUILDDIR}/../meta-openembedded/meta-oe",
> -"${BUILDDIR}/../meta-openembedded/meta-networking",
>  "${BUILDDIR}/../meta-openembedded/meta-python"
> +"${BUILDDIR}/../meta-openembedded/meta-networking",
> +"${BUILDDIR}/../meta-openembedded/meta-gnome",
>  ],
>  "extravars" : [
>  "EXCLUDE_FROM_WORLD:layer-openembedded-layer = '1'",
>  "EXCLUDE_FROM_WORLD:layer-networking-layer = '1'",
> -"EXCLUDE_FROM_WORLD:layer-meta-python = '1'"
> +"EXCLUDE_FROM_WORLD:layer-meta-python = '1'",
> +"USERADD_UID_TABLES:append = ' files/static-passwd-meta-gnome'",
> +"USERADD_GID_TABLES:append = ' files/static-group-meta-networking'",
> +"USERADD_UID_TABLES:append = ' files/static-passwd-meta-networking'",
> +"USERADD_GID_TABLES:append = ' files/static-group-meta-oe'",
> +"USERADD_UID_TABLES:append = ' files/static-passwd-meta-oe'"
>  ]
>  },
>  "step3" : {
> -"shortname" : "Reproducible Selftest for openembedded meta-initramfs layer",
> -  

Re: [yocto] [yocto-autobuilder-helper][PATCH v2] config.json: add reproducible-openembedded build

2023-10-05 Thread Richard Purdie
On Thu, 2023-10-05 at 20:46 +0100, Richard Purdie via
lists.yoctoproject.org wrote:
> On Thu, 2023-10-05 at 17:42 +0100, Richard Purdie via
> lists.yoctoproject.org wrote:
> > On Mon, 2023-10-02 at 16:54 +0200, Fabien Thomas wrote:
> > > The purpose of this new builder is to report the reproducibility status
> > > of all meta-openembedded recipes, layer by layer. It uses the same
> > > reproducible selftest as OE-Core, but sets only world as the target
> > > and excludes all oecore and other openembedded layer recipes.
> > > Also, the report output directory is split by layer.
> > > 
> > > Signed-off-by: Fabien Thomas 
> > > Reviewed-by: Yoann Congal 
> > > ---
> > > 
> > > Changes v1->v2 :
> > > * Split the builder into multiple steps, one per openembedded layer.
> > > * Split the report output the same way, one per layer.
> > > * For each step, only the layer to be tested and its dependencies are added.
> > > * Every layer other than the one being tested is excluded from world.
> > 
> > Thanks, I've merged this with a tweak of "reproducible-meta-
> > openembedded" -> "reproducible-meta-oe" just to keep things more
> > readable in the UI.
> > 
> > There is a test run queued here:
> > 
> > https://autobuilder.yoctoproject.org/typhoon/#/buildrequests/454516?redirect_to_build=true
> > 
> > but things are a bit busy atm on the autobuilder so it is queued.
> 
> That broke as did the second attempt. I've pushed controller side
> tweaks and some tweaks to helper which means this is now running:
> 
> https://autobuilder.yoctoproject.org/typhoon/#/builders/155/builds/3
> 
> Obviously there are still some failures but it is doing more this time
> and at least running something in some steps.

Whilst a lot of it failed, there was one piece that succeeded and
generated diffoscope output:

https://autobuilder.yoctoproject.org/typhoon/#/builders/155/builds/3/steps/21/logs/stdio
http://autobuilder.yocto.io/pub/repro-fail-openembedded-meta-initramfs/oe-reproducible-20231005-y6zo9rio/packages/diff-html/

As well as fixing the failures, the "short" names need work to make the
UI readable.

Cheers,

Richard

-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#61251): https://lists.yoctoproject.org/g/yocto/message/61251
Mute This Topic: https://lists.yoctoproject.org/mt/101713575/21656
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] [yocto-autobuilder-helper][PATCH v2] config.json: add reproducible-openembedded build

2023-10-05 Thread Richard Purdie
On Thu, 2023-10-05 at 17:42 +0100, Richard Purdie via
lists.yoctoproject.org wrote:
> On Mon, 2023-10-02 at 16:54 +0200, Fabien Thomas wrote:
> > The purpose of this new builder is to report the reproducibility status
> > of all meta-openembedded recipes, layer by layer. It uses the same
> > reproducible selftest as OE-Core, but sets only world as the target
> > and excludes all oecore and other openembedded layer recipes.
> > Also, the report output directory is split by layer.
> > 
> > Signed-off-by: Fabien Thomas 
> > Reviewed-by: Yoann Congal 
> > ---
> > 
> > Changes v1->v2 :
> > * Split the builder into multiple steps, one per openembedded layer.
> > * Split the report output the same way, one per layer.
> > * For each step, only the layer to be tested and its dependencies are added.
> > * Every layer other than the one being tested is excluded from world.
> 
> Thanks, I've merged this with a tweak of "reproducible-meta-
> openembedded" -> "reproducible-meta-oe" just to keep things more
> readable in the UI.
> 
> There is a test run queued here:
> 
> https://autobuilder.yoctoproject.org/typhoon/#/buildrequests/454516?redirect_to_build=true
> 
> but things are a bit busy atm on the autobuilder so it is queued.

That broke as did the second attempt. I've pushed controller side
tweaks and some tweaks to helper which means this is now running:

https://autobuilder.yoctoproject.org/typhoon/#/builders/155/builds/3

Obviously there are still some failures but it is doing more this time
and at least running something in some steps.

Cheers,

Richard

-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#61250): https://lists.yoctoproject.org/g/yocto/message/61250
Mute This Topic: https://lists.yoctoproject.org/mt/101713575/21656
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] [yocto-autobuilder-helper][PATCH v2] config.json: add reproducible-openembedded build

2023-10-05 Thread Richard Purdie
On Mon, 2023-10-02 at 16:54 +0200, Fabien Thomas wrote:
> The purpose of this new builder is to report the reproducibility status
> of all meta-openembedded recipes, layer by layer. It uses the same
> reproducible selftest as OE-Core, but sets only world as the target
> and excludes all oecore and other openembedded layer recipes.
> Also, the report output directory is split by layer.
> 
> Signed-off-by: Fabien Thomas 
> Reviewed-by: Yoann Congal 
> ---
> 
> Changes v1->v2 :
> * Split the builder into multiple steps, one per openembedded layer.
> * Split the report output the same way, one per layer.
> * For each step, only the layer to be tested and its dependencies are added.
> * Every layer other than the one being tested is excluded from world.

Thanks, I've merged this with a tweak of "reproducible-meta-
openembedded" -> "reproducible-meta-oe" just to keep things more
readable in the UI.

There is a test run queued here:

https://autobuilder.yoctoproject.org/typhoon/#/buildrequests/454516?redirect_to_build=true

but things are a bit busy atm on the autobuilder so it is queued.

Cheers,

Richard

-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#61248): https://lists.yoctoproject.org/g/yocto/message/61248
Mute This Topic: https://lists.yoctoproject.org/mt/101713575/21656
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] [qa-build-notification] QA notification for completed autobuilder build (yocto-4.3_M3.rc9)

2023-10-02 Thread Richard Purdie
On Mon, 2023-10-02 at 09:41 +, Jing Hui Tham wrote:
> Hi all,
>  
> Intel and WR YP QA is planning for QA execution for YP build 
> yocto-4.3_M3.rc9. We are planning to execute following tests for this cycle:
>  
> OEQA-manual tests for following module:
> 1. OE-Core
> 2. BSP-hw
>  
> Runtime auto test for following platforms:
>   1. MinnowBoard Turbot - 32bit
>   2. Kaby Lake (7th Generation Intel(r) Core(tm) Processors)
>   3. Tiger Lake (11th Generation Intel(r) Core(tm) Processors)
>   4. Alder Lake-S (12th Generation Intel(r) Core(tm) Processors)
>   5. Raptor Lake-P (13th Generation Intel(r) Core(tm) Processors)
>   6. Beaglebone
> 
>  
> ETA for completion Friday, October 9th. Please do expect some delay due to 
> holiday in China, will proceed to submit test report once completed. 
> Sorry for any inconvenience caused.

Sorry, I thought I'd made it clear: M3 rc9 is a test build, as we've
already tested and shipped M3. The rc9 is just to test the URL
additions and QA report fixes so we're ready for M4. There is
therefore no need to QA this.

Cheers,

Richard

-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#61187): https://lists.yoctoproject.org/g/yocto/message/61187
Mute This Topic: https://lists.yoctoproject.org/mt/101708429/21656
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] [Openembedded-architecture] Core workflow: sstate for all, bblock/bbunlock, tools for why is sstate not being reused?

2023-09-29 Thread Richard Purdie
On Fri, 2023-09-29 at 14:06 +0200, Alexander Kanavin wrote:
> On Thu, 28 Sept 2023 at 18:49, Richard Purdie
>  wrote:
> 
> > I'm curious to see what you find with analysis of bitbake-whatchanged.
> 
> I've taken a look at the script. It obtains the current location of
> STAMPS_DIR, then runs this:
> 
> # Generate the new stamps dir
> print("Generating the new stamps ... (need several minutes)")
> cmdline = "STAMPS_DIR=%s bitbake -S none %s" % (new_stampsdir, args.recipe)
> 
> Then it walks both trees, matching up file names with a regex:
> 
> # Match the stamp's filename
> # group(1): PE_PV (may no PE)
> # group(2): PR
> # group(3): TASK
> # group(4): HASH
> stamp_re = re.compile("(?P<pv>.*)-(?P<pr>r\d+)\.(?P<task>do_\w+)\.(?P<hash>[^\.]*)")
> 
> Then there's some code that finds out what changed in the above
> between the two sets.
> 
> I don't see a way to make it work: messing about with STAMPS_DIR like
> that isn't supported, and will either do nothing, or remove the
> original stamps. Also stamp filenames aren't really a 'public API',
> are they?
> 
> Should the script simply be removed, or is there some better way to
> re-implement answering the 'what has changed' question in a way that
> doesn't flood the console with task hashes? I'd be glad to get
> suggestions for this.

I'd prefer to see some dedicated bitbake API used even if we need to
create/add it. Tinfoil and some of the bblock/unlock work show we can
get stamp data; the question would be how to get it without
"disturbing" the existing build.

By using dedicated API, we'd be able to control the console output.
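As an aside, the split that the quoted stamp regex performs can be sketched in shell; the regex grouping is mirrored from the script, while the sample stamp name and hash below are invented for illustration:

```shell
#!/bin/bash
# Split a stamp filename into PV, PR, task and hash, mirroring the
# bitbake-whatchanged regex. The sample values are made up.
stamp="gcc-13.2.0-r0.do_compile.1a2b3c4d"
re='^(.*)-(r[0-9]+)\.(do_[a-z0-9_]+)\.([^.]*)$'

if [[ $stamp =~ $re ]]; then
    pv=${BASH_REMATCH[1]}      # PE_PV part, e.g. gcc-13.2.0
    pr=${BASH_REMATCH[2]}      # PR, e.g. r0
    task=${BASH_REMATCH[3]}    # task name, e.g. do_compile
    hash=${BASH_REMATCH[4]}    # task hash
    echo "PV=$pv PR=$pr TASK=$task HASH=$hash"
fi
```

A dedicated bitbake API could hand these fields over directly instead of having external tools re-derive them from filenames.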

Cheers,

Richard



-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#61163): https://lists.yoctoproject.org/g/yocto/message/61163
Mute This Topic: https://lists.yoctoproject.org/mt/101503345/21656
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] [Openembedded-architecture] Core workflow: sstate for all, bblock/bbunlock, tools for why is sstate not being reused?

2023-09-28 Thread Richard Purdie
On Thu, 2023-09-28 at 18:43 +0200, Alexander Kanavin wrote:
> On Fri, 22 Sept 2023 at 12:42, Richard Purdie
>  wrote:
> 
> > Things which used to be problematic:
> > 
> > a) changes involving changes to gcc-source since it uses a shared
> > sources stamps which confused the tools (at least used to). That may
> > have been before gcc-source became a recipe?
> > b) changes to a very common component (e.g. autoconf-native's
> > do_configure) which make it hard to understand where the root cause of
> > the changes came from
> > c) changes which affect many recipes at once, e.g. the do_configure
> > function in base.bbclass
> > 
> > It might be helpful to write test cases for the scenario you showed as
> > working above and some of the ones I mention above, then we can
> > document they work and have an easier way to add tests for issues
> > if/as/when we identify the problematic scenarios in future.
> 
> I've now written down the tests for these three scenarios and got them
> to pass (in oe-selftest too \0/):
> https://git.yoctoproject.org/poky-contrib/commit/?h=akanavin/sstate-for-all
> (check the commit message too)
> 
> I am going to look closer at bitbake-whatchanged, what it aims to do
> and why it doesn't work. I have a hunch it can produce useful high
> level reports, and so shouldn't be simply thrown away. 'bitbake -S
> printdiff' is too techy and verbose for some use cases. Maybe we can
> fold that functionality into 'bitbake -S whatchanged'.

I've wondered whether we should split bitbake -S printdiff into a
separate utility? It exists from a time before we had bitbake command APIs.

I'm curious to see what you find with analysis of bitbake-whatchanged.
I'm also somewhat surprised the scenarios you're testing all work!

I'm guessing one of the commits I pointed to must have fixed them (the
removal of paths from the sig files)?

Cheers,

Richard

-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#61157): https://lists.yoctoproject.org/g/yocto/message/61157
Mute This Topic: https://lists.yoctoproject.org/mt/101503345/21656
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] [PATCH][yocto-autobuilder-helper] config.json: add reproducible-openembedded build

2023-09-28 Thread Richard Purdie
On Thu, 2023-09-28 at 15:18 +0200, Fabien Thomas wrote:
> The purpose of this new build is to report the reproducibility status of
> all meta-openembedded recipes. It uses the same reproducible selftest as
> OE-Core, but sets only world as the target and excludes all oecore
> recipes from it. Also, the report output directory for meta-openembedded
> is separate from oecore's.
> 
> Signed-off-by: Fabien Thomas 
> Reviewed-by: Yoann Congal 
> ---
>  config.json | 31 +++
>  1 file changed, 31 insertions(+)

This is a good start, thanks!

> diff --git a/config.json b/config.json
> index 05c6794..fde1b1f 100644
> --- a/config.json
> +++ b/config.json
> @@ -264,6 +264,34 @@
>  
>  }
>  },
> +"reproducible-openembedded" : {

We should call this something like reproducible-meta-oe since we need
to be clear this is meta-oe related.

> +"MACHINE" : "qemux86-64",
> +"SDKMACHINE" : "x86_64",
> +"DISTRO" : "None",
> +"NEEDREPOS" : ["oecore", "bitbake", "meta-openembedded"],
> +"extravars" : [
> +"EXCLUDE_FROM_WORLD:layer-core = '1'",
> +"EXCLUDE_FROM_WORLD:layer-selftest = '1'",
> +"OEQA_REPRODUCIBLE_TEST_TARGET = 'world'"

I like the use of the layer override :)

> +],
> +"step1" : {
> +"shortname" : "Reproducible Selftest for openembedded layers",
> +"EXTRACMDS" : ["${SCRIPTSDIR}/checkvnc; OEQA_DEBUGGING_SAVED_OUTPUT=${BASE_SHAREDDIR}/pub/repro-fail-openembedded/ DISPLAY=:1 oe-selftest -r reproducible"],
> +"ADDLAYER" : [
> +"${BUILDDIR}/../meta-selftest",
> +"${BUILDDIR}/../meta-openembedded/meta-filesystems",
> +"${BUILDDIR}/../meta-openembedded/meta-gnome",
> +"${BUILDDIR}/../meta-openembedded/meta-initramfs",
> +"${BUILDDIR}/../meta-openembedded/meta-multimedia",
> +"${BUILDDIR}/../meta-openembedded/meta-networking",
> +"${BUILDDIR}/../meta-openembedded/meta-oe",
> +"${BUILDDIR}/../meta-openembedded/meta-perl",
> +"${BUILDDIR}/../meta-openembedded/meta-python",
> +"${BUILDDIR}/../meta-openembedded/meta-webserver",
> +"${BUILDDIR}/../meta-openembedded/meta-xfce"
> +]
> +}
> +},

Would it make sense to split this into steps for each sublayer? I guess
the downside is we'd have to code the dependencies but those shouldn't
change very often at this point. It would split the build and results
into more manageable chunks?
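Coding the dependencies could be as small as a lookup table from which each step's ADDLAYER list is generated, dependencies first. Only the meta-filesystems and meta-gnome entries below come from this thread; the fallback is an assumption to be checked against each layer's layer.conf:

```shell
#!/bin/sh
# Emit an ADDLAYER-style list for one sublayer, dependencies first.
deps_for() {
    case "$1" in
        meta-filesystems) echo "meta-oe meta-networking" ;;
        meta-gnome)       echo "meta-oe meta-python meta-networking" ;;
        *)                echo "meta-oe" ;;  # assumed default dependency
    esac
}

addlayer_list() {
    for d in $(deps_for "$1") "$1"; do
        echo "\${BUILDDIR}/../meta-openembedded/$d"
    done
}

addlayer_list meta-gnome
```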

>  "trigger-build" : {
>  "SDKMACHINE" : "x86_64",
>  "MACHINE" : "qemux86-64",
> @@ -955,6 +983,9 @@
>  "reproducible" : {
>  "TEMPLATE" : "reproducible"
>  },
> +"reproducible-openembedded" : {
> +"TEMPLATE" : "reproducible-openembedded"
> +},
>  "reproducible-ubuntu" : {
>  "TEMPLATE" : "reproducible"
>  },

Otherwise looks good and definitely the right direction.

Cheers,

Richard


-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#61155): https://lists.yoctoproject.org/g/yocto/message/61155
Mute This Topic: https://lists.yoctoproject.org/mt/101637262/21656
-=-=-=-=-=-=-=-=-=-=-=-



[yocto] [yocto-autobuilder2] [PATCH 4/4] Drop wikilog plugin, we're not going back to it!

2023-09-28 Thread Richard Purdie
Signed-off-by: Richard Purdie 
---
 README.md            |  17 +-
 TODO                 |   2 -
 lib/__init__.py      |   0
 lib/wiki.py          | 210 -
 reporters/wikilog.py | 361 ---
 services.py          |   6 -
 6 files changed, 5 insertions(+), 591 deletions(-)
 delete mode 100644 lib/__init__.py
 delete mode 100644 lib/wiki.py
 delete mode 100644 reporters/wikilog.py

diff --git a/README.md b/README.md
index 1d976aa0..8b97dd5a 100644
--- a/README.md
+++ b/README.md
@@ -26,16 +26,14 @@ yocto-autobuilder-helper script parameters.
 
 ### Code layout
 - [builders.py](builders.py) -- configures the builders with minimal 
buildsteps to invoke the yocto-autobuilder-helper scripts
-- lib/
-  - [wiki.py](lib/wiki.py) -- implements some mediawiki related functionality 
as used by the wikilog plugin
 reporters/
-  - [wikilog.py](reporters/wikilog.py) -- our custom plugin to write info on 
build failures to a wiki page
+  - [swatbot.py](reporters/swatbot.py) -- our custom plugin to write info on 
build failures to a swatbot instance
 - steps/
   - [writelayerinfo.py](steps/writelayerinfo.py) -- write the user supplied 
(or default) repos to a JSON file for use by the scripts
 - [config.py](config.py) -- goal is to contain all values that might need 
changing to redeploy this code elsewhere. Goal hasn't yet been met.
 - [master.cfg](master.cfg) -- calls into other scripts to do most 
configuration. Cluster specific config still lives here (i.e. controller url).
 - [schedulers.py](schedulers.py) -- sets up the force schedulers with controls 
for modifying inputs for each builder.
-- [services.py](services.py) -- configures irc, mail and wikilog reporters.
+- [services.py](services.py) -- configures irc, mail and swatbot reporters.
 - [workers.py](workers.py) -- configures the worker objects
 - [www.py](www.py) -- sets up the web UI
 
@@ -50,14 +48,9 @@ yocto-autobuilder[2].
 custom buildset to iterate the repo_, branch_, and commit_ properties set by
 the schedulers and write a JSON file with the user's values.
 
-### WikiLog reporter
-[reporters/wikilog.py](reporters/wikilog.py) -- a buildbot service to listen
-for build failures and write some information on them to the configured wiki
-page.
-
-[lib/wiki.py](lib/wiki.py) -- some helper functions for the wiki plugin, much
-of this code can be replaced by porting the plugin to be a
-buildbot.util.service.HTTPClient implementation
+### Swatbot reporter
+[reporters/swatbot.py](reporters/swatbot.py) -- a buildbot service to listen
+for build failures and write some information on them to the swatbot instance 
configured.
 
 ## Deployment
 The following deployment steps assume that the target system has a copy of
diff --git a/TODO b/TODO
index d43bfeed..47b5f58b 100644
--- a/TODO
+++ b/TODO
@@ -1,10 +1,8 @@
 
  * Add nightly-checkuri
- * Add wikilog link on console page
  * per worker auth (workers.py & config.py)
  * Add IRC notifier (services.py) [Michael]
  * add mail notification functionality to yocto-autobuilder-helper, it already
knows how to iterate error reports. (services.py)
  * Simple script to start buildbot controller, janitor and PRServer [Michael]
  * Look into allowed_origins property of built in web server
- * switch wikilog to buildbot.util.service.HTTPClient?
diff --git a/lib/__init__.py b/lib/__init__.py
deleted file mode 100644
index e69de29b..
diff --git a/lib/wiki.py b/lib/wiki.py
deleted file mode 100644
index c9e14ae6..
--- a/lib/wiki.py
+++ /dev/null
@@ -1,210 +0,0 @@
-#
-# SPDX-License-Identifier: GPL-2.0-only
-#
-
-'''
-Created on Dec 13, 2016
-
-__author__ = "Joshua Lock"
-__copyright__ = "Copyright 2016, Intel Corp."
-__credits__ = ["Joshua Lock"]
-'''
-
-import codecs
-import hashlib
-import time
-import requests
-from twisted.python import log
-
-
-class YPWiki(object):
-MAX_TRIES = 5
-TIMEOUT = 60
-
-def __init__(self, wiki_uri, wiki_un, wiki_pass):
-self.wiki_uri = wiki_uri
-self.wiki_un = wiki_un
-self.wiki_pass = wiki_pass
-
-@staticmethod
-def retry_request(requesturl, **kwargs):
-"""
-Rather than failing when a request to a 'requesturl' throws an
-exception retry again a minute later. Perform this retry no more than
-5 times.
-
-@type   requesturl:  string
-"""
-kwargs['timeout'] = YPWiki.TIMEOUT
-
-def try_request():
-try:
-req = requests.get(requesturl, **kwargs)
-return req
-except (requests.exceptions.RequestException,
-requests.exceptions.Timeout):
-return None
-
-tries = 0
-req = None
-while not req and tries < YPWiki.MAX_TRIES:
-if tries > 0:
-time.sleep(60)
-req = try_request()
-tries = tr

[yocto] [yocto-autobuilder2] [PATCH 3/4] services: Update email notification config

2023-09-28 Thread Richard Purdie
Signed-off-by: Richard Purdie 
---
 services.py | 29 ++---
 1 file changed, 18 insertions(+), 11 deletions(-)

diff --git a/services.py b/services.py
index cc891434..bdfdbc1d 100644
--- a/services.py
+++ b/services.py
@@ -3,20 +3,27 @@
 #
 
 from buildbot.plugins import reporters
-
 from yoctoabb import config
-
+import os
 
 services = []
 
-# TODO: we'll replace this with functionality in yocto-autobuilder-helpers
-# to mail the error reports to the list
-# services.append(
-# reporters.MailNotifier(fromaddr="yocto-bui...@yoctoproject.org",
-#sendToInterestedUsers=False,
-#extraRecipients=["yocto-bui...@yoctoproject.org"],
-#mode=('failing',))
-# )
+with open(os.path.join(os.path.dirname(__file__), "default_mail.txt"), "r") as 
f:
+emailtext = "\n".join(f.readlines())
+
+formatter = reporters.MessageFormatter(template=emailtext)
+
+generator = reporters.BuildStatusGenerator(
+mode=('failing', 'warnings', 'exception', 'cancelled'),
+message_formatter=formatter,
+builders=['a-full', 'a-quick', 'buildperf-alma8', 'buildperf-debian11', 
'docs'])
+
+#services.append(
+# reporters.MailNotifier(fromaddr="control...@yoctoproject.org",
+#
extraRecipients=["yocto-bui...@lists.yoctoproject.org"],
+#generators=[generator])
+#)
+
 
 # services.append(
 # reporters.IRC(host="irc.freenode.net",
@@ -39,4 +46,4 @@ services = []
 # from yoctoabb.reporters import swatbot
 # services.append(
# swatbot.SwatBot("http://localhost:8000/", "buildbot-notifier", 
"password")
-# )
\ No newline at end of file
+# )
-- 
2.39.2


View/Reply Online (#61151): https://lists.yoctoproject.org/g/yocto/message/61151



[yocto] [yocto-autobuilder2] [PATCH 2/4] config.py: Add mickledore prev_release documentation on workers

2023-09-28 Thread Richard Purdie
Signed-off-by: Richard Purdie 
---
 config.py | 1 +
 1 file changed, 1 insertion(+)

diff --git a/config.py b/config.py
index 68c690cc..cc44ec3d 100644
--- a/config.py
+++ b/config.py
@@ -160,6 +160,7 @@ all_workers = workers + workers_bringup + workers_buildperf 
+ workers_arm
 
 # Worker filtering for older releases
 workers_prev_releases = {
+"mickledore" : ("alma8", "alma9", "debian10", "debian11", "fedora35", 
"fedora36", "fedora37", "fedora38", "opensuse153", "opensuse154", "ubuntu1804", 
"ubuntu2004","ubuntu2204", "perf-"),
 "langdale" : ("alma8", "alma9", "debian10", "debian11", "fedora35", 
"fedora36", "opensuse153", "opensuse154", "ubuntu1804", 
"ubuntu2004","ubuntu2204", "perf-"),
 "kirkstone" : ("alma8", "centos7", "centos8", "debian8", "debian9", 
"debian10", "debian11", "fedora29", "fedora30", "fedora31", "fedora32", 
"fedora33", "fedora34", "fedora35", "fedora36", "fedora37", "fedora38", 
"opensuse150", "opensuse151", "opensuse152", "opensuse153", "ubuntu1604", 
"ubuntu1804", "ubuntu1904", "ubuntu2004", "ubuntu2110", "ubuntu2204", "perf-"),
 "honister" : ("alma8", "centos7", "centos8", "debian8", "debian9", 
"debian10", "debian11", "fedora29", "fedora30", "fedora31", "fedora32", 
"fedora33", "fedora34", "fedora35", "opensuse150", "opensuse151", 
"opensuse152", "opensuse153", "ubuntu1604", "ubuntu1804", "ubuntu1904", 
"ubuntu2004", "ubuntu2110", "ubuntu2204", "perf-"),
-- 
2.39.2


View/Reply Online (#61150): https://lists.yoctoproject.org/g/yocto/message/61150



[yocto] [yocto-autobuilder2] [PATCH 1/4] config.py: Update perf workers to match autobuilder

2023-09-28 Thread Richard Purdie
Signed-off-by: Richard Purdie 
---
 config.py | 9 +
 schedulers.py | 6 ++
 2 files changed, 7 insertions(+), 8 deletions(-)

diff --git a/config.py b/config.py
index a9433be5..68c690cc 100644
--- a/config.py
+++ b/config.py
@@ -109,7 +109,7 @@ trigger_builders_wait_full_releases = {
  "oe-selftest-ubuntu", 
"oe-selftest-debian", "oe-selftest-centos"]
 }
 
-trigger_builders_wait_perf = ["buildperf-ubuntu1604", "buildperf-centos7", 
"buildperf-debian11"]
+trigger_builders_wait_perf = ["buildperf-debian11", "buildperf-alma8"]
 
 # Builders which are individually triggered
 builders_others = [
@@ -123,6 +123,8 @@ builders_others = [
 "qemuriscv64", "qemuriscv64-ptest", "qemuriscv64-tc",
 "qemuppc64", "qemuppc64-tc",
 "qemux86-ptest", "qemux86-ptest-fast",
+"buildperf-debian11",
+"buildperf-alma8",
 "auh"
 ]
 
@@ -149,8 +151,8 @@ workers = workers_ubuntu + workers_centos + workers_fedora 
+ workers_debian + wo
 workers_bringup = []
 # workers with wine on them for meta-mingw
 workers_wine = ["ubuntu1804-ty-3"]
-workers_buildperf = ["perf-ubuntu1604", "perf-centos7", "perf-debian11"]
 workers_arm = ["ubuntu1804-arm-1", "ubuntu2004-arm-1", "ubuntu2204-arm-1"]
+workers_buildperf = ["perf-debian11", "perf-alma8"]
 # workers which don't need buildtools for AUH and are able to send email to 
mailing lists
 workers_auh = ["alma8-ty-1", "alma8-ty-2"]
 
@@ -192,9 +194,8 @@ builder_to_workers = {
 "reproducible-opensuse": workers_opensuse,
 "reproducible-centos": workers_centos,
 "meta-mingw": workers_wine,
-"buildperf-ubuntu1604": ["perf-ubuntu1604"],
 "buildperf-debian11": ["perf-debian11"],
-"buildperf-centos7": ["perf-centos7"],
+"buildperf-alma8": ["perf-alma8"],
 "qemuarm-armhost": workers_arm,
 "qemuarm64-ptest": workers_arm,
 "qemuarm64-ptest-fast": workers_arm,
diff --git a/schedulers.py b/schedulers.py
index b70b5edd..4914bcf1 100644
--- a/schedulers.py
+++ b/schedulers.py
@@ -471,10 +471,8 @@ 
schedulers.append(sched.Nightly(name='nightly-check-layer-dunfell', properties=p
   builderNames=['check-layer-nightly'], dayOfWeek=[1, 4], 
hour=2, minute=0, codebases = {'' : {'branch' : 'dunfell'}}))
 
 # Run the build performance tests at 3am, 9am, 3pm and 9pm
-schedulers.append(sched.Nightly(name='nightly-buildperf-ubuntu1604', 
branch='master', properties=parent_default_props('buildperf-ubuntu1604'),
-  builderNames=['buildperf-ubuntu1604', 'buildperf-debian11'], 
hour=[3,9,15,21], minute=0))
-schedulers.append(sched.Nightly(name='nightly-buildperf-centos7', 
branch='master', properties=parent_default_props('buildperf-centos7'),
-  builderNames=['buildperf-centos7'], hour=[3,9,15,21], 
minute=0))
+schedulers.append(sched.Nightly(name='nightly-buildperf', branch='master', 
properties=parent_default_props('buildperf-debian11'),
+  builderNames=['buildperf-debian11', 'buildperf-alma8'], 
hour=[3,9,15,21], minute=0)
 
 # Run the AUH twice a month on 1st and 15th
 schedulers.append(sched.Nightly(name='nightly-auh', branch='master', 
properties=parent_default_props('auh'),
-- 
2.39.2


View/Reply Online (#61149): https://lists.yoctoproject.org/g/yocto/message/61149



[yocto] [yocto-autobuilder2] [PATCH] builders: Switch to new qa email scripts name, force updates to helper instead

2023-09-28 Thread Richard Purdie
Signed-off-by: Richard Purdie 
---
 builders.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/builders.py b/builders.py
index 70b2d066..539b7b4e 100644
--- a/builders.py
+++ b/builders.py
@@ -328,7 +328,7 @@ def create_parent_builder_factory(buildername, waitname):
 
 factory.addStep(steps.ShellCommand(
 command=[
-
util.Interpolate("%(prop:builddir)s/yocto-autobuilder-helper/scripts/send-qa-email"),
+
util.Interpolate("%(prop:builddir)s/yocto-autobuilder-helper/scripts/send_qa_email.py"),
 util.Property("send_email"),
 util.Interpolate("%(prop:builddir)s/layerinfo.json"),
 util.Interpolate("%(prop:sharedrepolocation)s"),
-- 
2.39.2


View/Reply Online (#61148): https://lists.yoctoproject.org/g/yocto/message/61148



Re: [yocto] Trouble enabling vtable-verify for gcc-cross

2023-09-27 Thread Richard Purdie
On Wed, 2023-09-27 at 09:40 -0500, Alex Roberts wrote:
> Hello All,
> 
> I'm having trouble getting gcc-cross-9.3.0 recipe to build libvtv. I
> have added “—enable-vtable-verify” to EXTRA_OECONF via a .bbappend.
> The vtable stubs are added to libstdc++ but libvtv itself is not
> compiled. This causes recipes compiling with -fstandard-verify to fail
> because libvtv doesn’t exist and cannot be linked in.
> 
> Looking at the autoconf files and the resulting Makefile, libvtv
> should be getting configured and compiled with the recipe, but I can
> not determine why it is not.
> 
> Does anyone have experience enabling vtable-verify to a cross-compiler
> toolchain?

If libvtv is a separate library (I've not looked), I suspect there are
tweaks needed to the gcc-runtime recipe to build it?

Cheers,

Richard

View/Reply Online (#61146): https://lists.yoctoproject.org/g/yocto/message/61146



Re: [yocto] [qa-build-notification] QA notification for completed autobuilder build (yocto-4.3_M3.rc9)

2023-09-26 Thread Richard Purdie
On Tue, 2023-09-26 at 21:01 +, Pokybuild User wrote:
> A build flagged for QA (yocto-4.3_M3.rc9) was completed on the 
> autobuilder and is available at:
> 
> 
> https://autobuilder.yocto.io/pub/releases/yocto-4.3_M3.rc9
> 
> 
> Build URL: 
> https://autobuilder.yoctoproject.org/typhoon/#/builders/83/builds/5949
> 
> Build hash information: 
> 
> bitbake: 612bdf09416ff96d08b22cf92b077516d23d0963
> meta-agl: 4a14356a535dcd47f16f788ac830d0d03457c9da
> meta-arm: be9bbaa16ed0d60490fb8f915fc48e40d14478a5
> meta-aws: ba6a653f0b5a4fa55e71646f0fe1fd0673d2c947
> meta-intel: cc04677a2717be485bf9cecf4a3d48f8a1e601cc
> meta-mingw: 65ef95a74f6ae815f63f636ed53e140a26a014ce
> meta-openembedded: c40aebd422e60126c3b18e35c20a1cb38a795a52
> meta-virtualization: b65fe24b8cbcf8797f31d058a79f8a21985e955d
> oecore: 1adf70729dafc9729e665986ad2e2250cbd25c5b
> poky: 6cb490713ed30f1f13d25978aa0d377ce66b1861
> 
> 
> 
> This is an automated message from the Yocto Project Autobuilder
> Git: git://git.yoctoproject.org/yocto-autobuilder2
> Email: richard.pur...@linuxfoundation.org
> 

Please ignore this email from a QA perspective. The good news is the
URL addition worked but the "failure in the step if the QA regression
report failed" changes didn't.

Cheers,

Richard

View/Reply Online (#61132): https://lists.yoctoproject.org/g/yocto/message/61132



Re: [yocto] [yocto-autobuilder-helper] [PATCH] scripts/send-qa-email: Add URL of build to QA email

2023-09-26 Thread Richard Purdie
On Tue, 2023-09-26 at 12:03 +0100, Richard Purdie via lists.yoctoproject.org 
wrote:
> Signed-off-by: Richard Purdie 
> ---
>  scripts/send_qa_email.py | 6 +-
>  1 file changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/scripts/send_qa_email.py b/scripts/send_qa_email.py
> index d113377..3d02797 100755
> --- a/scripts/send_qa_email.py
> +++ b/scripts/send_qa_email.py
> @@ -101,6 +101,9 @@ def send_qa_email():
>  parser.add_argument('-r', '--release',
>  action='store',
>  help="The build/release 'name' for release purposes 
> (optional)")
> +parser.add_argument('--url',
> +action='store',
> +help="The url for the build")
>  
>  args = parser.parse_args()
>  
> @@ -212,12 +215,13 @@ def send_qa_email():
>  email += '''\n
>  A build flagged for QA (%s) was completed on the autobuilder and is 
> available at:\n\n
>  %s\n\n
> +Build URL: %s\n
>  Build hash information: \n
>  %s
>  
>  \nThis is an automated message from the Yocto Project Autobuilder\nGit: 
> git://git.yoctoproject.org/yocto-autobuilder2\nEmail: 
> richard.pur...@linuxfoundation.org\n
>  
> -''' % (args.release, args.publish_dir.replace(web_root, web_url), 
> buildhashes)
> +''' % (args.release, args.publish_dir.replace(web_root, web_url), 
> args.url, buildhashes)
>  
>  # Store a copy of the email in case it doesn't reach the lists
>  with open(os.path.join(args.publish_dir, "qa-email"), "wb") as qa_email:

I've also backported this to other branches for builds we actively use
on the autobuilder so we get this benefit on all the QA emails and
don't have to make the code inside buildbot conditional.

Cheers,

Richard

View/Reply Online (#61128): https://lists.yoctoproject.org/g/yocto/message/61128



[yocto] [yocto-autobuilder-helper] [PATCH] scripts/send-qa-email: Add URL of build to QA email

2023-09-26 Thread Richard Purdie
Signed-off-by: Richard Purdie 
---
 scripts/send_qa_email.py | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/scripts/send_qa_email.py b/scripts/send_qa_email.py
index d113377..3d02797 100755
--- a/scripts/send_qa_email.py
+++ b/scripts/send_qa_email.py
@@ -101,6 +101,9 @@ def send_qa_email():
 parser.add_argument('-r', '--release',
 action='store',
 help="The build/release 'name' for release purposes 
(optional)")
+parser.add_argument('--url',
+action='store',
+help="The url for the build")
 
 args = parser.parse_args()
 
@@ -212,12 +215,13 @@ def send_qa_email():
 email += '''\n
 A build flagged for QA (%s) was completed on the autobuilder and is 
available at:\n\n
 %s\n\n
+Build URL: %s\n
 Build hash information: \n
 %s
 
 \nThis is an automated message from the Yocto Project Autobuilder\nGit: 
git://git.yoctoproject.org/yocto-autobuilder2\nEmail: 
richard.pur...@linuxfoundation.org\n
 
-''' % (args.release, args.publish_dir.replace(web_root, web_url), 
buildhashes)
+''' % (args.release, args.publish_dir.replace(web_root, web_url), 
args.url, buildhashes)
 
 # Store a copy of the email in case it doesn't reach the lists
 with open(os.path.join(args.publish_dir, "qa-email"), "wb") as qa_email:
-- 
2.39.2


View/Reply Online (#61126): https://lists.yoctoproject.org/g/yocto/message/61126



[yocto] [yocto-autobuilder-helper] [PATCH] qa_email: Ensure we set an error exit code for errors

2023-09-26 Thread Richard Purdie
Whilst the script needs to continue if we fail to generate a regression report,
set the exit code accordingly so our CI can flag the issue.

Signed-off-by: Richard Purdie 
---
 scripts/send_qa_email.py | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/scripts/send_qa_email.py b/scripts/send_qa_email.py
index 8f913a6..d113377 100755
--- a/scripts/send_qa_email.py
+++ b/scripts/send_qa_email.py
@@ -15,6 +15,8 @@ import logging
 
 import utils
 
+exitcode = 0
+
 def is_release_version(version):
 p = re.compile('\d{8}-\d+')
 return version is not None and p.match(version) is None
@@ -74,6 +76,7 @@ def generate_regression_report(querytool, targetrepodir, 
base, target, resultdir
f.write(regreport)
 except subprocess.CalledProcessError as e:
 error = str(e)
+exitcode = 1
 log.error(f"Error while generating report between {target} and {base} 
: {error}")
 
 def send_qa_email():
@@ -177,7 +180,7 @@ def send_qa_email():
 
 if args.send.lower() != 'true' or not args.publish_dir or not args.release:
 utils.printheader("Not sending QA email")
-sys.exit(0)
+sys.exit(exitcode)
 
 utils.printheader("Generating QA email")
 
@@ -225,6 +228,7 @@ def send_qa_email():
 # Many distros have sendmail in */sbin
 env["PATH"] = env["PATH"] + ":/usr/sbin:/sbin"
 subprocess.check_call('echo "' + email +' " | sendmail -t', shell=True, 
env=env)
+sys.exit(exitcode)
 
 if __name__ == "__main__":
 send_qa_email()
-- 
2.39.2
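One Python subtlety worth noting alongside this patch (an aside, not part of the original mail): assigning to a module-level name inside a function only rebinds it if the function declares it `global`; otherwise the assignment creates a function-local variable and the module-level value is left untouched. A minimal sketch:

```python
exitcode = 0

def fails_to_set():
    # Without a global declaration this creates a new local binding;
    # the module-level exitcode is not modified.
    exitcode = 1

def sets_module_level():
    # With the declaration, the assignment rebinds the module-level name.
    global exitcode
    exitcode = 1

fails_to_set()
assert exitcode == 0
sets_module_level()
assert exitcode == 1
```

So a helper such as `generate_regression_report()` would need a `global exitcode` statement for its assignment to be visible to the caller's `sys.exit(exitcode)`.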


View/Reply Online (#61125): https://lists.yoctoproject.org/g/yocto/message/61125



[yocto] [yocto-autobuilder-helper] [PATCH] qa_email: Add headers to the script to improve debugging

2023-09-26 Thread Richard Purdie
It is hard to tell which section of the code specific error messages
come from at present. Add more headers to the output so we can at
least tell which section the messages are from. It also adds some
timing information.

Signed-off-by: Richard Purdie 
---
 scripts/send_qa_email.py | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/scripts/send_qa_email.py b/scripts/send_qa_email.py
index fc7fccc..8f913a6 100755
--- a/scripts/send_qa_email.py
+++ b/scripts/send_qa_email.py
@@ -116,6 +116,7 @@ def send_qa_email():
 repodir = os.path.dirname(args.repojson) + "/build/repos"
 
 if 'poky' in repos and os.path.exists(resulttool) and 
os.path.exists(querytool) and args.results_dir:
+utils.printheader("Processing test report")
 # Need the finalised revisions (not 'HEAD')
 targetrepodir = "%s/poky" % (repodir)
 revision = subprocess.check_output(["git", "rev-parse", "HEAD"], 
cwd=targetrepodir).decode('utf-8').strip()
@@ -129,6 +130,7 @@ def send_qa_email():
 
 tempdir = tempfile.mkdtemp(prefix='sendqaemail.')
 try:
+utils.printheader("Importing test results repo data")
 cloneopts = []
 if comparebranch:
 cloneopts = ["--branch", comparebranch]
@@ -152,6 +154,8 @@ def send_qa_email():
 subprocess.check_call(["git", "branch", basebranch], 
cwd=tempdir)
 subprocess.check_call(["git", "checkout", basebranch], 
cwd=tempdir)
 
+utils.printheader("Storing results")
+
 subprocess.check_call([resulttool, "store", args.results_dir, 
tempdir])
 if comparebranch:
 subprocess.check_call(["git", "push", "--all", "--force"], 
cwd=tempdir)
@@ -162,6 +166,7 @@ def send_qa_email():
 elif is_release_version(args.release) and not comparebranch and 
not basebranch:
 log.warning("Test results not published on release version. 
Faulty AB configuration ?")
 
+utils.printheader("Processing regression report")
 regression_base, regression_target = 
get_regression_base_and_target(basebranch, comparebranch, args.release, 
targetrepodir)
 if regression_base and regression_target:
 generate_regression_report(querytool, targetrepodir, 
regression_base, regression_target, tempdir, args.results_dir, log)
@@ -174,6 +179,8 @@ def send_qa_email():
 utils.printheader("Not sending QA email")
 sys.exit(0)
 
+utils.printheader("Generating QA email")
+
 buildhashes = ""
 for repo in sorted(repos.keys()):
 # gplv2 is no longer built/tested in master
-- 
2.39.2


View/Reply Online (#61124): https://lists.yoctoproject.org/g/yocto/message/61124



Re: [yocto] [Openembedded-architecture] Core workflow: sstate for all, bblock/bbunlock, tools for why is sstate not being reused?

2023-09-22 Thread Richard Purdie
On Fri, 2023-09-22 at 11:17 +0200, Alexander Kanavin wrote:
> On Thu, 21 Sept 2023 at 16:39, chris.lapla...@agilent.com
>  wrote:
> 
> > That is very impressive and I'd also love to hear about what heuristics it 
> > uses.
> 
> It's actually rather simple. It uses glob.glob on stamps in tmp/, then
> on local sstate to find possible matches, then sorts them by mtime and
> takes the most recent. It's what would work most of the time, but we
> could add printdiff-all (print difference with all sstate matches) or
> printdiff-N (N most recent). It also could abstain from dumping
> locked-sigs.inc into cwd with both -S none and -S printdiff, unless
> explicitly asked
> 
> I just discovered there's also scripts/bitbake-whatchanged (that
> hasn't seen activity in years and is neither documented nor tested).
> Unsurprisingly then, it doesn't work in the same scenario:
> 
> 
> alex@Zen2:/srv/storage/alex/yocto/build-sstate$ bitbake-whatchanged
> libsolv-native
> Figuring out the STAMPS_DIR ...
> Generating the new stamps ... (need several minutes)
> 
> === Summary: (0 changed, 0 unchanged)
> Newly added: 0
> PV changed: 0
> PR changed: 0
> Dependencies changed: 0
> 
> Removing the newly generated stamps dir ...
> 
> 
> Maybe this is what RP was referring to when he said the tools don't
> work properly?

No, I've believed that should probably be removed. I think there was a
recent change to it.

I think we had a major step change in this functionality working when
this was fixed:

https://git.yoctoproject.org/poky/commit/?id=84a7485025dd4473403b8da36a0c979a3afd5e93

and this test case was added:

https://git.yoctoproject.org/poky/commit/?id=1bdcd76d2968c3cc6ec2815afceba1cf98efd6d5

Things which used to be problematic:

a) changes involving changes to gcc-source since it uses a shared
sources stamps which confused the tools (at least used to). That may
have been before gcc-source became a recipe?
b) changes to a very common component (e.g. autoconf-native's
do_configure) which make it hard to understand where the root cause of
the changes came from
c) changes which affect many recipes at once, e.g. the do_configure
function in base.bbclass

It might be helpful to write test cases for the scenario you showed as
working above and some of the ones I mention above, then we can
document they work and have an easier way to add tests for issues
if/as/when we identify the problematic scenarios in future.

As you mention, it also uses mtime so perhaps issues happen if you run
a different build, then try and go back to the other config? I suspect
once you understand the algorithm the code uses, you can pick holes in
it.
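To make the mtime weakness concrete, here is a simplified sketch of the candidate-selection heuristic described earlier in the thread (glob for plausible matches, sort by mtime, take the most recent); the paths, patterns and function name are illustrative only, not the actual bitbake code:

```python
import glob
import os

def most_recent_match(stampdir, taskname):
    """Pick the most plausible previous stamp/sstate match for a task.

    Simplified model of the printdiff heuristic: glob for candidates
    containing the task name, then take the one with the newest mtime.
    """
    candidates = glob.glob(os.path.join(stampdir, "*%s*" % taskname))
    if not candidates:
        return None
    # mtime ordering is the weak point: a build for a different
    # configuration can leave newer stamps that win this sort.
    return max(candidates, key=os.path.getmtime)
```

Running a different build in between would refresh the mtimes of unrelated stamps, which is exactly the scenario where this selection could pick the wrong baseline.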

Cheers,

Richard




View/Reply Online (#61078): https://lists.yoctoproject.org/g/yocto/message/61078



Re: [yocto] Forcing uninative?

2023-09-20 Thread Richard Purdie
On Wed, 2023-09-20 at 14:39 -0700, Rudolf J Streif wrote:
> Thank you, Alexandre.
> 
> On 9/20/23 14:16, Alexandre Belloni wrote:
> > Hello,
> > 
> > On 20/09/2023 13:59:54-0700, Rudolf J Streif wrote:
> > > I need to resurrect a Yocto Project build environment based on honister. 
> > > My
> > > dev system has since moved on to a newer glibc etc. As expected, I am
> > > getting this warning
> > > 
> > > WARNING: Your host glibc version (2.37) is newer than that in uninative
> > > (2.34). Disabling uninative so that sstate is not corrupted.
> > > 
> > > The distro's gcc now is 13.2.1.
> > > 
> > > Now the following packages do not compile anymore:
> > > 
> > >   * rust-llvm-native : which there is a patch (applied to kirkstone):
> > > 
> > > https://lore.kernel.org/openembedded-core/CANPvuR=G1NxfJb67xD19FoNh4eTDsTM4TDyF+vDbh6crNH=d...@mail.gmail.com/T/
> > >   * libdnf-native : because std::uint32_t does not exist anymore
> > > 
> > > I don't care about the sstate as I am creating a new one. However, how 
> > > can I
> > > force uninative to be used and would it even solve the problem (I would
> > > think so as the idea of uninative is to isolate the build from the host
> > > libraries)?
> > > 
> > You should rather use buildtools. You can install those with
> > scripts/install-buildtools. The -r option is there to allow you to
> > select a target release. You can try something like:
> > 
> > poky/scripts/install-buildtools -d ~/YP/buildtools -r yocto-3.4.4 
> > --installer-version 3.4.4
> > 
> > This will then tell you to source 
> > buildtools/environment-setup-x86_64-pokysdk-linux
> OK, I did that. The environment only adds the paths. Then I sourced the 
> build environment as usual. Unfortunately, it still fails with the same 
> error messages.
> 

By using an older buildtools-extended tarball you should be able to
gain an older host compiler version which will avoid the compile
failures you mention. I suspect uninative will need to be disabled but
that shouldn't matter for your use case as you're not sharing the
sstate.

Cheers,

Richard



View/Reply Online (#61060): https://lists.yoctoproject.org/g/yocto/message/61060



Re: [yocto] [RFC] recipetool create to support pypi project

2023-09-20 Thread Richard Purdie
On Wed, 2023-09-20 at 18:48 +0200, Julien Stephan wrote:
> Hi all,
> 
> In the scope of the RFQ, I am currently looking at bug #14737 (adding
> support of PEP-517 in recipetool/devtool).
> 
> I am quite new to the recipetool/devtool tools and I have several
> ideas I would like to discuss before going into development that may
> not be appropriate/necessary for the tool.
> 
> As of today, from what I understand (please correct me if I am wrong)
> 'devtool add' supports the creation of recipes for python modules from
> their download tarball so something like: devtool add
> https://files.pythonhosted.org/packages/e4/29/f1e7113ca4632283a76cdbe40b62be860d268fdc0e8eab7216ffaf2aa0ef/subunit-0.0.0a0.tar.gz
> works, but:
> - the created recipe does NOT takes advantage of the pypi class: the
> created recipe will have the SRC_URI variable instead of using
> 'inherit pypi'
> - user has to find the url of the tarball
> 
> I think it would be really easier for the user to just use: 'devtool
> add https://pypi.org/project/subunit/' This will have several
> advantages:
> - easier to use
> - fetch latest version automatically
> - easy to fetch specific version with --version
> - can easily detect 'pypi.org' on the url and inherit pypi
> - can easily parse the webpage for several informations such as
> author, summary, homepage, bugtracker page .. (I know some of them can
> be extracted from  setup.py / pyproject.toml, but not always)
> 
> Looking at the create_recipe function of
> scripts/lib/recipetool/create.py, we can see that the code is really
> git-oriented: even if you use a tarball, you go through a bunch of
> 'if's that don't apply.
> 
> I think we can modernize this part of the code. This would also be
> useful for perl modules and cpan or rust modules and cargo...
> 
> What do you think? Is this something that is worth investigating and
> implementing?

Basically, yes, it sounds like the right direction to me. Even nicer
might be:

devtool add pypi subunit

which might match well other possible options like:

devtool add cpan <module>

and other such usages.

This would take a bit more work up front to implement since it would be
in the form "<type> <name>" rather than the existing "<url>", but that
is probably worth investing in.
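As a rough sketch of what such a pypi backend might generate (the
make_pypi_recipe() helper is hypothetical; PYPI_PACKAGE and "inherit pypi"
are existing OE-Core conventions, but the exact variables and classes a
real tool would emit may differ):

```python
# Hypothetical sketch of a pypi-aware recipetool backend's output.
# PYPI_PACKAGE and "inherit pypi" are real OE-Core conventions; the
# helper function itself is illustrative, not part of any tool.

def make_pypi_recipe(name: str, version: str,
                     summary: str = "", license_: str = "CLOSED") -> str:
    """Render a minimal recipe skeleton for a PyPI project."""
    lines = [
        f'SUMMARY = "{summary}"',
        f'LICENSE = "{license_}"',
        "",
        f'PYPI_PACKAGE = "{name}"',
        "",
        "inherit pypi python_setuptools_build_meta",
        "",
        f'PV = "{version}"',
    ]
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    print(make_pypi_recipe("subunit", "0.0.0a0",
                           summary="Python subunit module"))
```

A real implementation would pull the summary, licence and checksums from
the PyPI JSON API rather than take them as arguments.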

As Alex mentioned, unit tests are key to keeping this maintainable. We
do have really good test suites for recipetool and devtool already so
we need to ensure new functionality has good coverage.

Cheers,

Richard





-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#61056): https://lists.yoctoproject.org/g/yocto/message/61056
Mute This Topic: https://lists.yoctoproject.org/mt/101482623/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: 
https://lists.yoctoproject.org/g/yocto/leave/6691583/21656/737036229/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] Core workflow: sstate for all, bblock/bbunlock, tools for why is sstate not being reused?

2023-09-14 Thread Richard Purdie
On Thu, 2023-09-14 at 20:51 +0200, Alexander Kanavin wrote:
> On Thu, 14 Sept 2023 at 14:56, Richard Purdie
>  wrote:
> > For the task signatures, we need to think about some questions. If I
> > make a change locally, can I query how much will rebuild and how much
> will be reused? There is bitbake --dry-run but perhaps it is time for
> an option (or dedicated separate command?) to give some statistics
> > about what bitbake would do? How much sstate would be reused?
> > 
> > That then logically leads into the questions, can we tell what has
> > changed? Why isn't my sstate being reused? For that we perhaps should
> > define some existing scenarios where it is currently very difficult to
> > work this out and then work out how we can report that information to
> > the user. These could become test cases?
> 
> So I think there are two questions here that the tools should answer:
> 
> 1. If I would run a build, what would be missing in the cache and need
> to be built? The missing cache objects are in a dependency hierarchy,
> so only those missing objects with no dependecies on other missing
> objects would be printed. That should be comparatively easy to add as
> bitbake already does those checks all the time.

Right, what we lack is a way for the user to ask this and see the
result easily. As you say, bitbake can do this already.

>  Is there something else that's easily done and useful to print?

I think there is also the scenario of:
"I've run a build and have an existing TMPDIR and stamp info. I've now
pulled in a change. How much is going to rebuild and more importantly
*why*?"

This is different to a remote sstate situation as you have the stamp
info of the previous build already there to compare against.

> 2. Then there's the question of *why* they are missing, which is
> harder to answer. If, say, curl:do_package is not in the cache, then
> the tool would have to walk the cache tree (I/O heavy operation as
> there is no index), make a list of all curl:do_package objects that
> are there, and do a recursive bitbake-diffsig (going up the task tree)
> on them vs the one we want. Then print them starting with the newest.
> Something like:
> 
> Existing cache objects are not suitable because:
> <object> was built on <date> and has a mismatching SRCREV
> <object> was built on <date> and has a different do_compile()

In theory you can do an:

ls sstate/*/*/sstate:curl:corei7-64-poky-linux:::corei7-64:10:*_package.tar.zst

and get a list of possible objects. Some key information was put into
the cache file names for this reason. Obviously this becomes much
trickier when the sstate is remote over http though.
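As an illustration, the colon-separated fields in those names can be
split apart to shortlist candidate objects for a recipe/task pair. The
parser below is a sketch based on the example filename above; treat the
exact field order as an assumption, not a spec:

```python
# Sketch: split an sstate archive name into its colon-separated fields.
# Field layout is inferred from the ls example above (assumption, not spec).

def parse_sstate_name(filename: str) -> dict:
    """Extract recipe name, arch, signature hash and task from a name."""
    stem = filename.rsplit("/", 1)[-1]          # drop any directory part
    if stem.endswith(".tar.zst"):
        stem = stem[: -len(".tar.zst")]
    fields = stem.split(":")
    pn, package_arch = fields[1], fields[2]
    sighash, _, task = fields[-1].partition("_")  # "<sighash>_<task>"
    return {"pn": pn, "package_arch": package_arch,
            "sighash": sighash, "task": task}

candidates = [
    "sstate/ab/cd/sstate:curl:corei7-64-poky-linux:::corei7-64:10:"
    "abcdef123456_package.tar.zst",
]
for c in candidates:
    info = parse_sstate_name(c)
    print(info["pn"], info["task"], info["sighash"])
```

With a list of candidates in hand, each one could then be fed to a
diffsig-style comparison against the wanted signature.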

"newest" is problematic in this context as you really want the closest
match. There is no concept of build date in the cache as it often isn't
relevant (building an old release for example). The only point a
date/time is used is for cleaning out the cache for files which haven't
been accessed in a long time.

The tools are already supposed to support doing this with local file
sstate sources, they just do a bad job at getting the diffs right. One
intent of this work item was to try and understand why they don't work
and address that so at least for filesystem sstate mirrors, you can get
better results. I don't know how we solve the remote http issue as yet.

> > One of the big problems in the past was that we lost much of the hash
> > information after parsing completed. This meant that if the hashes then
> > didn't match, we couldn't tell why as the original computation was
> > lost. I did some work on allowing us to retain more of the information
> > so that we didn't have to recompute it every time to be able to do
> > processing with it. I have to admit I've totally lost track of where I
> > got to with that.
> 
> Here's an idea I can't get out of my head. Right now, the cache is
> simply an amorphous mass of objects, with no information regarding how
> they were created. How about storing complete build confgurations as
> well into the same directory? There would be a dedicated, separate
> area for each configuration that placed objects into the cache,
> containing:
> - list of layers and revisions
> - config template used
> - complete content of build/conf
> - bitbake invocation (e.g. targets and prefixed variables like MACHINE etc.)
> - complete list of sstate objects that were produced as a result, so
> they can be checked for existence
> 
> This would be written into the cache dir at the very end of the build
> when everything else is already there.

I'm not sure this helps as much as you'd like. For example I build
core-image-sato-sdk on the autobuilder and populate this but you want
to build core-image-sato locally. There would be no info here that
would help with informa

Re: [yocto] Errno=No space left on device

2023-09-14 Thread Richard Purdie
On Thu, 2023-09-14 at 12:35 +, Dave Hitchman wrote:
> 
> Does anyone know what IS enough? Watching the video series
> around https://www.youtube.com/watch?v=bTEdfwtPtNY he doesn't mention
> changing this size. I don't actually know what this watch manager
> thing is supposed to be doing; there is one command prompt doing one
> thing. I can't believe that it would need that many things being
> watched. What is it even watching for? 

There is a "memory resident" mode for bitbake (BB_SERVER_TIMEOUT = "X")
and for that to work reliably, bitbake needs to know when to reparse
metadata. For that reason it does track the metadata changing. In
theory it does this even when not staying resident so it can detect
changes in the metadata during builds and inform the user not to do
that, and also to reduce differences between the memory-resident and
non-resident codepaths.

You might think 65k sounds like a lot, but in a simple OE-Core checkout,
there are 2.5k directories and 19k files.

In order to reduce system impact, bitbake only monitors directories,
not files. I'd imagine somehow you have a lot of directories being
included in the watch paths.

FWIW in a given build there are millions of files being processed. We
should only be monitoring the recipe/config files though.

So whilst that doesn't answer your question fully and I don't know
where the files are coming from, 65k is a relatively easy number to
reach in the kinds of work bitbake does.
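For anyone hitting the limit, the usual workaround is to raise the
inotify sysctls. The values below are common choices, not anything
bitbake itself mandates:

```ini
# /etc/sysctl.d/99-inotify.conf -- values are common choices, not mandated
fs.inotify.max_user_watches = 524288
fs.inotify.max_user_instances = 512
```

Apply with `sysctl --system` (or a reboot) and re-run the build.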

Cheers,

Richard





-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#61007): https://lists.yoctoproject.org/g/yocto/message/61007
Mute This Topic: https://lists.yoctoproject.org/mt/101356635/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: 
https://lists.yoctoproject.org/g/yocto/leave/6691583/21656/737036229/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] Core workflow: sstate for all, bblock/bbunlock, tools for why is sstate not being reused?

2023-09-14 Thread Richard Purdie
On Thu, 2023-09-14 at 13:52 +0200, Alexander Kanavin wrote:
> On Tue, 12 Sept 2023 at 16:44, Stephen Jolley  wrote:
> > Alexander Kanavin will be working on the core workflow topic
> 
> I am now ready to start doing this, but before I do, I'd like to
> decompose the subject into manageable tasks with a bit of help from RP
> and the community:
> 
> https://www.yoctoproject.org/community/yocto-project-engineering-request-for-quotation/
> 
> 
> Core Workflow – Process Improvements
> 
> Background
> 
> The project builds everything from source by default. This means it
> has a reputation for being slow and heavy. There are ways the project
> can accelerate this which means faster workflows and improved
> developer experience but these are currently cumbersome to use.
> 
> Rationale
> 
> The project aims to defragment customised embedded Linux. This is
> important as if we succeed at this, it gives benefits to the wider
> ecosystem through making it easier to inject security fixes and an
> ability to share and collaborate without re-inventing the wheel.
> 
> To do this, we need to provide best-in-class support and compete
> against binary distributions for usability and speed. One way we can
> do this is provide better support for binary artifacts via our sstate
> mechanism. We do already have some of this functionality in our
> “extensible SDK” or “eSDK”.
> 
> Deliverables
> 
> Enable a public sstate mirror via a content delivery network (CDN) and
> populate using the autobuilder
> Ensure CDN sstate is reused under normal use case scenarios,
> particularly for slow components like rust-native. Identify any common
> sstate mismatch causes. Ensure test cases are added to cover the use
> cases and prevent regressions.
> Add lock and unlock commands to allow specific components to be locked
> down to specific sstate checksums or allow them to vary
> Allow switching between eSDK and non-eSDK modes
> Add tooling so we can understand why something is rebuilding when it
> isn’t expected to.
> ===
> 
> So: where to start? Do we need to 'design' something, or maybe time
> should go directly into addressing specific sore points? All feedback
> welcome.
> 
> There's also an unfinished patchset for adding bblock/bbunlock, which
> I would prefer to *not* 'take over and finish' but rather see the
> author get it merged:
> https://patchwork.yoctoproject.org/project/oe-core/list/?series=15276

To start with I'll try and write down and give a random walk through my
thoughts.

Certainly we need to get something like that patchset over the line. I
think it was blocked on a reply to this email:

https://lists.openembedded.org/g/openembedded-core/message/186497

which has suffered whilst I was pulled into the qemuppc mess. I feel
bad for not getting to a reply to that.

There are design elements to this work. We need to work out how we can
make eSDK and "normal" builds more similar and less of an overhead to
switch between one and the other. A "bblock all" command does partly
get you to an eSDK effectively so part of this may be switching eSDK to
use the new lock command. What other differences are there? What other
differences are necessary or make sense for the use cases eSDK was
designed for? How would you turn an existing build into an eSDK like
one? Could you provide a copy of a local build to someone else easily
using something like eSDK's tooling? What does the eSDK look like at
the end of this? One section we don't have good answers to yet is setup
and configuration although I know you've started on some of that.

For the task signatures, we need to think about some questions. If I
make a change locally, can I query how much will rebuild and how much
will be reused? There is bitbake --dry-run but perhaps it is time for
an option (or dedicated separate command?) to give some statistics
about what bitbake would do? How much sstate would be reused?

That then logically leads into the questions, can we tell what has
changed? Why isn't my sstate being reused? For that we perhaps should
define some existing scenarios where it is currently very difficult to
work this out and then work out how we can report that information to
the user. These could become test cases?

One of the big problems in the past was that we lost much of the hash
information after parsing completed. This meant that if the hashes then
didn't match, we couldn't tell why as the original computation was
lost. I did some work on allowing us to retain more of the information
so that we didn't have to recompute it every time to be able to do
processing with it. I have to admit I've totally lost track of where I
got to with that.

Michael Halstead will be working on setting the CDN up so I'll let him
comment on when we'll have things ready for testing with that. We do
already have sstate shared from the autobuilder so some basic tests to
make sure our "base" shared files do work as expected is something
which can happen there already.

Another interesting 

[yocto] [yocto-autobuilder-helper] [PATCH 5/5] config.json: Drop most core-image-minimal-dev image tests and add core-image-full-cmdline

2023-09-07 Thread Richard Purdie
Signed-off-by: Richard Purdie 
---
 config.json | 14 +++---
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/config.json b/config.json
index 685e957..59944f2 100644
--- a/config.json
+++ b/config.json
@@ -86,7 +86,7 @@
 ],
 "step1" : {
 "SDKMACHINE" : "x86_64",
-"BBTARGETS" : "core-image-sato core-image-sato-sdk 
core-image-minimal core-image-minimal-dev core-image-sato:do_populate_sdk 
core-image-minimal:do_populate_sdk_ext core-image-sato:do_populate_sdk_ext",
+"BBTARGETS" : "core-image-sato core-image-sato-sdk 
core-image-minimal core-image-minimal-dev core-image-full-cmdline 
core-image-sato:do_populate_sdk core-image-minimal:do_populate_sdk_ext 
core-image-sato:do_populate_sdk_ext",
 "SANITYTARGETS" : "core-image-minimal:do_testimage 
core-image-sato:do_testimage core-image-sato-sdk:do_testimage 
core-image-sato:do_testsdk core-image-minimal:do_testsdkext 
core-image-sato:do_testsdkext"
 },
 "step2" : {
@@ -172,7 +172,7 @@
 "arch-hw" : {
 "BUILDINFO" : true,
 "step1" : {
-"BBTARGETS" : "core-image-sato core-image-sato-sdk 
core-image-minimal core-image-minimal-dev core-image-ptest-all 
core-image-sato:do_populate_sdk",
+"BBTARGETS" : "core-image-sato core-image-sato-sdk 
core-image-minimal core-image-ptest-all core-image-sato:do_populate_sdk",
 "SANITYTARGETS" : "core-image-sato:do_testsdk"
 }
 },
@@ -201,13 +201,13 @@
 "MACHINE" : "qemuarm64",
  "step1" : {
  "SDKMACHINE" : "x86_64",
- "BBTARGETS" : "core-image-sato core-image-sato-sdk 
core-image-minimal core-image-minimal-dev core-image-sato:do_populate_sdk 
core-image-minimal:do_populate_sdk_ext core-image-sato:do_populate_sdk_ext",
+ "BBTARGETS" : "core-image-sato core-image-sato-sdk 
core-image-minimal core-image-sato:do_populate_sdk 
core-image-minimal:do_populate_sdk_ext core-image-sato:do_populate_sdk_ext",
  "SANITYTARGETS" : "core-image-minimal:do_testimage 
core-image-sato:do_testimage core-image-sato-sdk:do_testimage 
core-image-sato:do_testsdk core-image-minimal:do_testsdkext 
core-image-sato:do_testsdkext"
  },
 "step2" : {
 "SDKMACHINE" : "x86_64",
 "MACHINE" : "qemux86-64",
-"BBTARGETS" : "core-image-sato core-image-sato-sdk 
core-image-minimal core-image-minimal-dev core-image-ptest-all 
core-image-sato:do_populate_sdk core-image-minimal:do_populate_sdk_ext",
+"BBTARGETS" : "core-image-sato core-image-sato-sdk 
core-image-minimal core-image-ptest-all core-image-sato:do_populate_sdk 
core-image-minimal:do_populate_sdk_ext",
 "SANITYTARGETS" : "core-image-sato:do_testsdk"
 
 },
@@ -341,7 +341,7 @@
 "BUILDINFO" : true,
 "step1" : {
 "SDKMACHINE" : "aarch64",
-"BBTARGETS" : "core-image-sato core-image-sato-sdk 
core-image-minimal core-image-minimal-dev core-image-sato:do_populate_sdk 
core-image-minimal:do_populate_sdk_ext core-image-sato:do_populate_sdk_ext",
+"BBTARGETS" : "core-image-sato core-image-sato-sdk 
core-image-minimal core-image-sato:do_populate_sdk 
core-image-minimal:do_populate_sdk_ext core-image-sato:do_populate_sdk_ext",
 "SANITYTARGETS" : "core-image-minimal:do_testimage 
core-image-sato:do_testimage core-image-sato-sdk:do_testimage 
core-image-sato:do_testsdk  core-image-minimal:do_testsdkext 
core-image-sato:do_testsdkext"
 }
 },
@@ -385,7 +385,7 @@
 },
 "step2" : {
 "SDKMACHINE" : "aarch64",
-"BBTARGETS" : "core-image-sato core-image-sato-sdk 
core-image-minimal core-image-minimal-dev core-image-sato:do_populate_sdk 
core-image-minimal:do_populate_sdk_ext core-image-sato:do_populate_sdk_ext",
+"BBTARGETS" : "core-image-sato core-image-sato-sdk 
core-image-minimal core-image-sato:do_populate_sdk 
core-image-minimal:do_populate_sdk_ext core-image-sato:do_populate_sdk_ext",
 "SANITYTARGETS" : "core-image-minimal:do_testimage 
core-image-sato:do_testimage core-image-sato-sdk:do_testimage 
core-image-sato:do_tests

[yocto] [yocto-autobuilder-helper] [PATCH 3/5] config.json: Drop now unneeded SDKMACHINE settings

2023-09-07 Thread Richard Purdie
Signed-off-by: Richard Purdie 
---
 config.json | 28 
 1 file changed, 28 deletions(-)

diff --git a/config.json b/config.json
index df83bdb..bc73e93 100644
--- a/config.json
+++ b/config.json
@@ -102,7 +102,6 @@
 "IMAGE_INSTALL:append = ' ssh-pregen-hostkeys'"
 ],
 "step1" : {
-"SDKMACHINE" : "x86_64",
 "BBTARGETS" : "core-image-sato core-image-minimal 
core-image-sato:do_populate_sdk core-image-minimal:do_populate_sdk_ext 
core-image-sato:do_populate_sdk_ext",
 "SANITYTARGETS" : "core-image-minimal:do_testimage 
core-image-sato:do_testimage core-image-sato:do_testsdk 
core-image-minimal:do_testsdkext core-image-sato:do_testsdkext"
 }
@@ -173,7 +172,6 @@
 "arch-hw" : {
 "BUILDINFO" : true,
 "step1" : {
-"SDKMACHINE" : "x86_64",
 "BBTARGETS" : "core-image-sato core-image-sato-sdk 
core-image-minimal core-image-minimal-dev core-image-ptest-all 
core-image-sato:do_populate_sdk",
 "SANITYTARGETS" : "core-image-sato:do_testsdk"
 }
@@ -181,7 +179,6 @@
 "arch-hw-qemu" : {
 "BUILDINFO" : true,
 "step1" : {
-"SDKMACHINE" : "x86_64",
 "BBTARGETS" : "core-image-minimal core-image-sato 
core-image-sato-sdk core-image-sato:do_populate_sdk 
core-image-sato:do_populate_sdk_ext",
 "SANITYTARGETS" : "core-image-minimal:do_testimage 
core-image-sato:do_testimage core-image-sato-sdk:do_testimage 
core-image-sato:do_testsdk core-image-sato:do_testsdkext"
 },
@@ -239,7 +236,6 @@
 },
 "buildperf" : {
 "MACHINE" : "qemux86",
-"SDKMACHINE" : "x86_64",
 "SSTATEDIR" : ["SSTATE_DIR ?= '${HELPERBUILDDIR}/sstate'"],
 "SSTATEDIR_RELEASE" : ["SSTATE_DIR ?= '${HELPERBUILDDIR}/sstate'"],
 "PACKAGE_CLASSES" : "package_rpm",
@@ -259,7 +255,6 @@
 },
 "selftest" : {
 "MACHINE" : "qemux86-64",
-"SDKMACHINE" : "x86_64",
 "PACKAGE_CLASSES" : "package_rpm",
 "extravars" : [
 "RPM_GPG_SIGN_CHUNK = '1'",
@@ -285,7 +280,6 @@
 },
 "reproducible" : {
 "MACHINE" : "qemux86-64",
-"SDKMACHINE" : "x86_64",
 "step1" : {
 "shortname" : "Reproducible Selftest",
 "EXTRACMDS" : ["${SCRIPTSDIR}/checkvnc; 
OEQA_DEBUGGING_SAVED_OUTPUT=${BASE_SHAREDDIR}/pub/repro-fail/ DISPLAY=:1 
oe-selftest -r reproducible"],
@@ -294,7 +288,6 @@
 }
 },
 "trigger-build" : {
-"SDKMACHINE" : "x86_64",
 "MACHINE" : "qemux86-64",
 "step1" : {
 "shortname" : "Sources pre-fetching",
@@ -306,7 +299,6 @@
 }
 },
 "trigger-build-posttrigger" : {
-"SDKMACHINE" : "x86_64",
 "MACHINE" : "qemux86-64",
 "step1" : {
 "shortname" : "Source Mirror Selftest",
@@ -336,7 +328,6 @@
 "SANITYTARGETS" : "core-image-sato:do_testsdk"
 },
 "step3" : {
-"SDKMACHINE" : "x86_64",
 "BBTARGETS" : "core-image-sato:do_populate_sdk 
core-image-minimal:do_populate_sdk_ext",
 "SANITYTARGETS" : "core-image-sato:do_testsdk"
 }
@@ -366,7 +357,6 @@
 "MACHINE" : "beaglebone-yocto",
 "TEMPLATE" : "arch-hw",
 "step2" : {
-"SDKMACHINE" : "x86_64",
 "BBTARGETS" : "core-image-minimal:do_populate_sdk_ext 
core-image-sato:do_populate_sdk"
 }
 },
@@ -455,14 +445,12 @@
 "step1": {
 "shortname": "Build for fvp-base",
 "MACHINE": "fvp-base",
-"SDKMACHINE" : "x86_64",
 "BBTARGETS": "core-image-minimal cor

[yocto] [yocto-autobuilder-helper] [PATCH 4/5] config.json: Consolidate oecore test steps

2023-09-07 Thread Richard Purdie
Signed-off-by: Richard Purdie 
---
 config.json | 8 ++--
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/config.json b/config.json
index bc73e93..685e957 100644
--- a/config.json
+++ b/config.json
@@ -319,17 +319,13 @@
 "BB_SIGNATURE_HANDLER = 'OEEquivHash'"
 ],
 "step1" : {
-"BBTARGETS" : "core-image-sato",
-"SANITYTARGETS" : "core-image-sato:do_testimage"
+"BBTARGETS" : "core-image-sato core-image-sato:do_populate_sdk 
core-image-minimal:do_populate_sdk_ext",
+"SANITYTARGETS" : "core-image-sato:do_testimage 
core-image-sato:do_testsdk"
 },
 "step2" : {
 "SDKMACHINE" : "i686",
 "BBTARGETS" : "core-image-sato:do_populate_sdk",
 "SANITYTARGETS" : "core-image-sato:do_testsdk"
-},
-"step3" : {
-"BBTARGETS" : "core-image-sato:do_populate_sdk 
core-image-minimal:do_populate_sdk_ext",
-"SANITYTARGETS" : "core-image-sato:do_testsdk"
 }
 },
 "qemuarm" : {
-- 
2.39.2


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#60938): https://lists.yoctoproject.org/g/yocto/message/60938
Mute This Topic: https://lists.yoctoproject.org/mt/101218894/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: 
https://lists.yoctoproject.org/g/yocto/leave/6691583/21656/737036229/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[yocto] [yocto-autobuilder-helper] [PATCH 2/5] config.json: Prefer x86_64 SDKMACHINE to i686

2023-09-07 Thread Richard Purdie
The i686 SDKMACHINE target makes little sense now; reduce its usage 
substantially to
match actual use and make x86_64 the default.

Signed-off-by: Richard Purdie 
---
 config.json | 49 +
 1 file changed, 25 insertions(+), 24 deletions(-)

diff --git a/config.json b/config.json
index 6d1076b..df83bdb 100644
--- a/config.json
+++ b/config.json
@@ -31,7 +31,7 @@
 "NEEDREPOS" : ["poky"],
 "DISTRO" : "poky",
 "MACHINE" : "qemux86-64",
-"SDKMACHINE" : "i686",
+"SDKMACHINE" : "x86_64",
 "PACKAGE_CLASSES" : "package_rpm package_deb package_ipk",
 "DLDIR" : "DL_DIR = '${BASE_SHAREDDIR}/current_sources'",
 "SSTATEDIR" : ["SSTATE_DIR ?= '${BASE_SHAREDDIR}/pub/sstate'"],
@@ -85,14 +85,14 @@
 "IMAGE_INSTALL:append = ' ssh-pregen-hostkeys'"
 ],
 "step1" : {
-"SDKMACHINE" : "i686",
-"BBTARGETS" : "core-image-sato core-image-sato-sdk 
core-image-minimal core-image-minimal-dev core-image-sato:do_populate_sdk",
-"SANITYTARGETS" : "core-image-minimal:do_testimage 
core-image-sato:do_testimage core-image-sato-sdk:do_testimage 
core-image-sato:do_testsdk"
+"SDKMACHINE" : "x86_64",
+"BBTARGETS" : "core-image-sato core-image-sato-sdk 
core-image-minimal core-image-minimal-dev core-image-sato:do_populate_sdk 
core-image-minimal:do_populate_sdk_ext core-image-sato:do_populate_sdk_ext",
+"SANITYTARGETS" : "core-image-minimal:do_testimage 
core-image-sato:do_testimage core-image-sato-sdk:do_testimage 
core-image-sato:do_testsdk core-image-minimal:do_testsdkext 
core-image-sato:do_testsdkext"
 },
 "step2" : {
-"SDKMACHINE" : "x86_64",
-"BBTARGETS" : "core-image-sato:do_populate_sdk 
core-image-minimal:do_populate_sdk_ext core-image-sato:do_populate_sdk_ext",
-"SANITYTARGETS" : "core-image-sato:do_testsdk 
core-image-minimal:do_testsdkext core-image-sato:do_testsdkext"
+"SDKMACHINE" : "i686",
+"BBTARGETS" : "core-image-sato:do_populate_sdk",
+"SANITYTARGETS" : "core-image-sato:do_testsdk"
 }
 },
 "arch-qemu-lite" : {
@@ -173,7 +173,7 @@
 "arch-hw" : {
 "BUILDINFO" : true,
 "step1" : {
-"SDKMACHINE" : "i686",
+"SDKMACHINE" : "x86_64",
 "BBTARGETS" : "core-image-sato core-image-sato-sdk 
core-image-minimal core-image-minimal-dev core-image-ptest-all 
core-image-sato:do_populate_sdk",
 "SANITYTARGETS" : "core-image-sato:do_testsdk"
 }
@@ -203,26 +203,27 @@
 "SSTATEDIR" : ["SSTATE_DIR ?= '${HELPERBUILDDIR}/sstate'"],
 "MACHINE" : "qemuarm64",
  "step1" : {
- "SDKMACHINE" : "i686",
- "BBTARGETS" : "core-image-sato core-image-sato-sdk 
core-image-minimal core-image-minimal-dev core-image-sato:do_populate_sdk",
- "SANITYTARGETS" : "core-image-minimal:do_testimage 
core-image-sato:do_testimage core-image-sato-sdk:do_testimage 
core-image-sato:do_testsdk"
+ "SDKMACHINE" : "x86_64",
+ "BBTARGETS" : "core-image-sato core-image-sato-sdk 
core-image-minimal core-image-minimal-dev core-image-sato:do_populate_sdk 
core-image-minimal:do_populate_sdk_ext core-image-sato:do_populate_sdk_ext",
+ "SANITYTARGETS" : "core-image-minimal:do_testimage 
core-image-sato:do_testimage core-image-sato-sdk:do_testimage 
core-image-sato:do_testsdk core-image-minimal:do_testsdkext 
core-image-sato:do_testsdkext"
  },
 "step2" : {
-"SDKMACHINE" : "i686",
+"SDKMACHINE" : "x86_64",
 "MACHINE" : "qemux86-64",
-"BBTARGETS" : "core-image-sato core-image-sato-sdk 
core-image-minimal core-image-minimal-dev core-image-ptest-all 
core-image-sato:do_populate_sdk",
+"BBTARGETS" : "core-image-sato core-image-sato-sdk 
core-image-minimal c

[yocto] [yocto-autobuilder-helper] [PATCH 1/5] config.json: Set SDKMACHINE explicitly

2023-09-07 Thread Richard Purdie
SDKMACHINE defaults to i686 for historical reasons but it should no longer be 
the main
thing we test. Start to improve things by setting it explicitly 
everywhere so
we can then rationalise to what makes sense.

Signed-off-by: Richard Purdie 
---
 config.json | 10 ++
 1 file changed, 10 insertions(+)

diff --git a/config.json b/config.json
index ae5e4ea..6d1076b 100644
--- a/config.json
+++ b/config.json
@@ -85,6 +85,7 @@
 "IMAGE_INSTALL:append = ' ssh-pregen-hostkeys'"
 ],
 "step1" : {
+"SDKMACHINE" : "i686",
 "BBTARGETS" : "core-image-sato core-image-sato-sdk 
core-image-minimal core-image-minimal-dev core-image-sato:do_populate_sdk",
 "SANITYTARGETS" : "core-image-minimal:do_testimage 
core-image-sato:do_testimage core-image-sato-sdk:do_testimage 
core-image-sato:do_testsdk"
 },
@@ -172,6 +173,7 @@
 "arch-hw" : {
 "BUILDINFO" : true,
 "step1" : {
+"SDKMACHINE" : "i686",
 "BBTARGETS" : "core-image-sato core-image-sato-sdk 
core-image-minimal core-image-minimal-dev core-image-ptest-all 
core-image-sato:do_populate_sdk",
 "SANITYTARGETS" : "core-image-sato:do_testsdk"
 }
@@ -201,10 +203,12 @@
 "SSTATEDIR" : ["SSTATE_DIR ?= '${HELPERBUILDDIR}/sstate'"],
 "MACHINE" : "qemuarm64",
  "step1" : {
+ "SDKMACHINE" : "i686",
  "BBTARGETS" : "core-image-sato core-image-sato-sdk 
core-image-minimal core-image-minimal-dev core-image-sato:do_populate_sdk",
  "SANITYTARGETS" : "core-image-minimal:do_testimage 
core-image-sato:do_testimage core-image-sato-sdk:do_testimage 
core-image-sato:do_testsdk"
  },
 "step2" : {
+"SDKMACHINE" : "i686",
 "MACHINE" : "qemux86-64",
 "BBTARGETS" : "core-image-sato core-image-sato-sdk 
core-image-minimal core-image-minimal-dev core-image-ptest-all 
core-image-sato:do_populate_sdk",
 "SANITYTARGETS" : "core-image-sato:do_testsdk"
@@ -326,6 +330,7 @@
 "SANITYTARGETS" : "core-image-sato:do_testimage"
 },
 "step2" : {
+"SDKMACHINE" : "i686",
 "BBTARGETS" : "core-image-sato:do_populate_sdk",
 "SANITYTARGETS" : "core-image-sato:do_testsdk"
 },
@@ -449,12 +454,14 @@
 "step1": {
 "shortname": "Build for fvp-base",
 "MACHINE": "fvp-base",
+"SDKMACHINE" : "i686",
 "BBTARGETS": "core-image-minimal core-image-sato 
core-image-sato:do_populate_sdk",
 "SANITYTARGETS" : "core-image-sato:do_testsdk"
 },
 "step2": {
 "shortname": "Build for generic-arm64 (poky)",
 "MACHINE": "generic-arm64",
+"SDKMACHINE" : "i686",
 "BBTARGETS": "core-image-minimal core-image-sato 
core-image-sato:do_populate_sdk",
 "SANITYTARGETS" : "core-image-sato:do_testsdk"
 },
@@ -462,6 +469,7 @@
 "shortname": "Build for generic-arm64 (poky-altcfg)",
 "DISTRO": "poky-altcfg",
 "MACHINE": "generic-arm64",
+"SDKMACHINE" : "i686",
 "BBTARGETS": "core-image-minimal core-image-sato 
core-image-sato:do_populate_sdk",
 "SANITYTARGETS" : "core-image-sato:do_testsdk"
 }
@@ -759,6 +767,7 @@
 "shortname" : "mip64 n32",
 "description" : "qemumips64 image using n32 as default",
 "MACHINE" : "qemumips64",
+"SDKMACHINE" : "i686",
 "BBTARGETS" : "core-image-minimal 
core-image-minimal:do_populate_sdk",
 "SANITYTARGETS" : "core-image-minimal:do_testimage 
core-image-minimal:do_testsdk",
 "extravars" : [
@@ -773,6 +782,7 @@
 "pkgman-non-rpm" : {
 "BBTARGETS" : "core-image-sato core-image-sato-sdk 
core-image-minimal core-image-minimal-dev core-image-minimal:do_populate_sdk",
 "SANITYTARGETS" : "core-image-minimal:do_testimage 
core-image-sato:do_testimage core-image-sato-sdk:do_testimage 
core-image-minimal:do_testsdk",
+"SDKMACHINE" : "i686",
 "step1" : {
 "shortname" : "x86 ipk",
 "MACHINE" : "qemux86-64",
-- 
2.39.2


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#60935): https://lists.yoctoproject.org/g/yocto/message/60935
Mute This Topic: https://lists.yoctoproject.org/mt/101218891/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: 
https://lists.yoctoproject.org/g/yocto/leave/6691583/21656/737036229/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[yocto] [yocto-autobuilder-helper] [PATCH] config.json: Reduce testing on qemumips/qemumips64/qemuppc

2023-09-07 Thread Richard Purdie
Introduce the concept of "qemu lite" and reduce the testing on these platforms,
mainly with the removal of core-image-sato-sdk.

Signed-off-by: Richard Purdie 
---
 config.json | 34 --
 1 file changed, 28 insertions(+), 6 deletions(-)

diff --git a/config.json b/config.json
index 05c6794..fe95d2a 100644
--- a/config.json
+++ b/config.json
@@ -94,6 +94,17 @@
 "SANITYTARGETS" : "core-image-sato:do_testsdk 
core-image-minimal:do_testsdkext core-image-sato:do_testsdkext"
 }
 },
+"arch-qemu-lite" : {
+"BUILDINFO" : true,
+"BUILDHISTORY" : true,
+"extravars" : [
+"IMAGE_INSTALL:append = ' ssh-pregen-hostkeys'"
+],
+"step1" : {
+"BBTARGETS" : "core-image-sato core-image-minimal 
core-image-sato:do_populate_sdk core-image-minimal:do_populate_sdk_ext 
core-image-sato:do_populate_sdk_ext",
+"SANITYTARGETS" : "core-image-minimal:do_testimage 
core-image-sato:do_testimage core-image-sato:do_testsdk 
core-image-minimal:do_testsdkext core-image-sato:do_testsdkext"
+}
+},
 "toolchain-qemu" : {
 "BUILDINFO" : true,
 "extravars" : [
@@ -117,6 +128,18 @@
 "SANITYTARGETS" : "core-image-full-cmdline:do_testimage 
core-image-sato:do_testimage core-image-sato-sdk:do_testimage"
 }
 },
+"altcfg-qemu-lite" : {
+"DISTRO" : "poky-altcfg",
+"BUILDINFO" : true,
+"BUILDHISTORY" : true,
+"extravars" : [
+"IMAGE_INSTALL:append = ' ssh-pregen-hostkeys'"
+],
+"step1" : {
+"BBTARGETS" : "core-image-full-cmdline core-image-sato",
+"SANITYTARGETS" : "core-image-full-cmdline:do_testimage 
core-image-sato:do_testimage"
+}
+},
 "ptest-qemu" : {
 "BUILDINFO" : true,
 "BBTARGETS" : "core-image-ptest-all",
@@ -504,24 +527,23 @@
 },
 "qemumips" : {
 "MACHINE" : "qemumips",
-"TEMPLATE" : "arch-qemu"
+"TEMPLATE" : "arch-qemu-lite"
 },
 "qemumips-tc" : {
 "MACHINE" : "qemumips",
 "TEMPLATE" : "toolchain-qemu"
 },
-
 "qemumips-alt" : {
 "MACHINE" : "qemumips",
-"TEMPLATE" : "altcfg-qemu"
+"TEMPLATE" : "altcfg-qemu-lite"
 },
 "qemumips64" : {
 "MACHINE" : "qemumips64",
-"TEMPLATE" : "arch-qemu"
+"TEMPLATE" : "arch-qemu-lite"
 },
 "qemuppc" : {
 "MACHINE" : "qemuppc",
-"TEMPLATE" : "arch-qemu"
+"TEMPLATE" : "arch-qemu-lite"
 },
 "qemumips64-tc" : {
 "MACHINE" : "qemumips64",
@@ -533,7 +555,7 @@
 },
 "qemuppc-alt" : {
 "MACHINE" : "qemuppc",
-"TEMPLATE" : "altcfg-qemu"
+"TEMPLATE" : "altcfg-qemu-lite"
 },
 "qemux86" : {
 "MACHINE" : "qemux86",
-- 
2.39.2


View/Reply Online (#60934): https://lists.yoctoproject.org/g/yocto/message/60934



Re: [yocto] SDK build generally broken?

2023-08-22 Thread Richard Purdie
On Tue, 2023-08-22 at 06:58 +, David Daniel wrote:
> Thanks for your reply! I cannot think of how I could mix the two but I
> checked out 4.2.2, added my machine, sourced oe-init-build-env and
> started the build. So the bitbake I use comes from the poky repository
> and the checkout I use.
> 
> Is there a good guideline what a machine must define and do in order to
> be fully compliant and compatible? I have gone through the BSP
> Developer Guide and provided all listed necessary items. Still it seems
> that I am missing something or that I am doing wrong. Since I can
> successfully build the image though, even when I include my own distro
> layer and use it all that SDK build problems seem awkward to me.
> 
> Or is there any particular thing an SDK build needs from a BSP layer
> that is not listed explicitely in the BSP Developer Guide?
> 
> Thanks for any help!
> Greets
> 
> Am Montag, dem 21.08.2023 um 23:53 +0100 schrieb Richard Purdie:
> > On Mon, 2023-08-21 at 12:49 +, David Daniel wrote:
> > > Hello everybody
> > > 
> > > Has anyone successfully built an SDK lately? I cannot build any
> > > SDK,
> > > neither for my image nor for core-image-minimal.
> > > 
> > > If I define my own image I get the error (I already wrote - nobody
> > > any
> > > idea?) that the dummy-sdk package is to be built for a different
> > > architecture:
> > > 
> > > 
> > > x86_64_nativesdk   3.1.2-r0    oe-repo  330 k
> > >  nativesdk-openssl-conf x86_64_nativesdk  
> > > 3.1.2-r0    oe-repo   12 k
> > >  target-sdk-provides-dummy  sdk_provides_dummy_target 
> > > 1.0-r0  oe-repo  9.0 k
> > > 
> > > THIS ERROR ONLY APPEARS IF I DEFINE MY OWN IMAGE (though the image
> > > can be built and works).
> > > 
> > > I guess this is due to the following code in the sdk-dummy recipe:
> > > 
> > > python() {
> > >     # Put the package somewhere separate to ensure it's never used
> > > except
> > >     # when we want it
> > >     # (note that we have to do this in anonymous python here to
> > > avoid
> > >     # allarch.bbclass disabling itself)
> > >     d.setVar('PACKAGE_ARCH', '${DUMMYARCH}')
> > > }
> > > 
> > > But furthermore, I am completely unable to build the SDK with my
> > > machine - I took the machine from the working image sources I am
> > > working with and took a plain master poky checkout (bleeding
> > > master).
> > > When I want to build the SDK for the core-image-minimal I get the
> > > following error:
> > > 
> > > ERROR: m4-native-1.4.19-r0 do_collect_spdx_deps: Error executing a
> > > python function in exec_func_python() autogenerated:
> > > 
> > > The stack trace of python calls that resulted in this
> > > exception/failure was:
> > > File: 'exec_func_python() autogenerated', lineno: 2, function:
> > > 
> > >  0001:
> > >  *** 0002:do_collect_spdx_deps(d)
> > >  0003:
> > > File: '/home/david/yocto/bf133/poky/meta/classes/create-spdx-
> > > 2.2.bbclass', lineno: 494, function: do_collect_spdx_deps
> > >  0490:    from pathlib import Path
> > >  0491:
> > >  0492:    spdx_deps_file = Path(d.getVar("SPDXDEPS"))
> > >  0493:
> > >  *** 0494:    deps = collect_direct_deps(d, "do_create_spdx")
> > >  0495:
> > >  0496:    with spdx_deps_file.open("w") as f:
> > >  0497:    json.dump(deps, f)
> > >  0498:}
> > > File: '/home/david/yocto/bf133/poky/meta/classes/create-spdx-
> > > 2.2.bbclass', lineno: 475, function: collect_direct_deps
> > >  0471:    deps = set()
> > >  0472:    for dep_name in this_dep[3]:
> > >  0473:    dep_data = taskdepdata[dep_name]
> > >  0474:    if dep_data[1] == dep_task and dep_data[0] != pn:
> > >  *** 0475:    deps.add((dep_data[0], dep_data[7]))
> > >  0476:
> > >  0477:    return sorted(deps)
> > >  0478:
> > >  0479:collect_direct_deps[vardepsexclude] += "BB_TASKDEPDATA"
> > > Exception: IndexError: list index out of range
> > > 
> > > ERROR: Logfile of failure stored in:
> > > /mnt/data/yocto/bf133/sdk/build/tmp/work/x86_64-linux/m4-
> > > native/1.4.19/temp/log.do_collect_spdx_deps.731506
> > > ERROR: Task (/home/david/yocto/bf133/poky/meta/r

Re: [yocto] SDK build generally broken?

2023-08-21 Thread Richard Purdie
On Mon, 2023-08-21 at 12:49 +, David Daniel wrote:
> Hello everybody
> 
> Has anyone successfully built an SDK lately? I cannot build any SDK,
> neither for my image nor for core-image-minimal.
> 
> If I define my own image I get the error (I already wrote - nobody any
> idea?) that the dummy-sdk package is to be built for a different
> architecture:
> 
> 
> x86_64_nativesdk   3.1.2-r0oe-repo  330 k
>  nativesdk-openssl-conf x86_64_nativesdk   3.1.2-r0   
>  oe-repo   12 k
>  target-sdk-provides-dummy  sdk_provides_dummy_target  1.0-r0 
>  oe-repo  9.0 k
> 
> THIS ERROR ONLY APPEARS IF I DEFINE MY OWN IMAGE (though the image can be 
> built and works).
> 
> I guess this is due to the following code in the sdk-dummy recipe:
> 
> python() {
> # Put the package somewhere separate to ensure it's never used except
> # when we want it
> # (note that we have to do this in anonymous python here to avoid
> # allarch.bbclass disabling itself)
> d.setVar('PACKAGE_ARCH', '${DUMMYARCH}')
> }
> 
> But furthermore, I am completely unable to build the SDK with my
> machine - I took the machine from the working image sources I am
> working with and took a plain master poky checkout (bleeding master).
> When I want to build the SDK for the core-image-minimal I get the
> following error:
> 
> ERROR: m4-native-1.4.19-r0 do_collect_spdx_deps: Error executing a python 
> function in exec_func_python() autogenerated:
> 
> The stack trace of python calls that resulted in this exception/failure was:
> File: 'exec_func_python() autogenerated', lineno: 2, function: 
>  0001:
>  *** 0002:do_collect_spdx_deps(d)
>  0003:
> File: '/home/david/yocto/bf133/poky/meta/classes/create-spdx-2.2.bbclass', 
> lineno: 494, function: do_collect_spdx_deps
>  0490:from pathlib import Path
>  0491:
>  0492:spdx_deps_file = Path(d.getVar("SPDXDEPS"))
>  0493:
>  *** 0494:deps = collect_direct_deps(d, "do_create_spdx")
>  0495:
>  0496:with spdx_deps_file.open("w") as f:
>  0497:json.dump(deps, f)
>  0498:}
> File: '/home/david/yocto/bf133/poky/meta/classes/create-spdx-2.2.bbclass', 
> lineno: 475, function: collect_direct_deps
>  0471:deps = set()
>  0472:for dep_name in this_dep[3]:
>  0473:dep_data = taskdepdata[dep_name]
>  0474:if dep_data[1] == dep_task and dep_data[0] != pn:
>  *** 0475:deps.add((dep_data[0], dep_data[7]))
>  0476:
>  0477:return sorted(deps)
>  0478:
>  0479:collect_direct_deps[vardepsexclude] += "BB_TASKDEPDATA"
> Exception: IndexError: list index out of range
> 
> ERROR: Logfile of failure stored in: 
> /mnt/data/yocto/bf133/sdk/build/tmp/work/x86_64-linux/m4-native/1.4.19/temp/log.do_collect_spdx_deps.731506
> ERROR: Task 
> (/home/david/yocto/bf133/poky/meta/recipes-devtools/m4/m4-native_1.4.19.bb:do_collect_spdx_deps)
>  failed with exit code '1'
> 
> Is the generation of an SDK outdated, not recommended, not maintained
> anymore? Or do you have any recommendations to troubleshoot these
> issues?

SDK builds are tested on the autobuilder so it does work, at least
there.

Are you using the right version of bitbake with that metadata? That
index issue looks a bit to me like it is the wrong version of bitbake.

Cheers,

Richard


View/Reply Online (#60841): https://lists.yoctoproject.org/g/yocto/message/60841



Re: [yocto] [OE-core] Yocto Project Status 15 August 2023 (WW33)

2023-08-15 Thread Richard Purdie
On Tue, 2023-08-15 at 21:13 +0200, Alexander Kanavin wrote:
> On Tue, 15 Aug 2023 at 16:52, Stephen Jolley  wrote:
> > The various systemd issues have mostly been resolved although the AUH did 
> > break as a result and will have to be fixed and rerun.
> 
> I sent the patch now - it's for yocto-autobuilder-helper, not AUH, so
> when that's applied, AUH job can re-run.

Thanks, merged and running!

Cheers,

Richard

View/Reply Online (#60804): https://lists.yoctoproject.org/g/yocto/message/60804



Re: [yocto] Intermittent build failure where dwarfsrcfiles doesn't exist

2023-08-11 Thread Richard Purdie
On Sat, 2023-08-12 at 07:50 +1200, Tom Isaacson wrote:
> My bad. Ok, so:
> * What function is responsible for copying the dwarfsrcfiles binary to
> poky/build/tmp/work/oclea_cv25-oclea-linux/my-app/1.0.53-8c74bcd/recipe-sysroot-native/usr/bin/dwarfsrcfiles?

It would come from the extend_recipe_sysroot function in
staging.bbclass.

You will see this function being called in various tasks as it will
show in logs in WORKDIR/temp/log.do_*
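
A quick way to check is to grep the per-task logs. The sketch below uses a stand-in directory and a made-up log line; in a real build the logs live under tmp/work/<arch>/<recipe>/<version>/temp/ and the exact wording bitbake uses differs:

```shell
# Stand-in for WORKDIR/temp; a real build writes one log file per task here
mkdir -p demo/temp
echo "extend_recipe_sysroot: staging dwarfsrcfiles-native" \
    > demo/temp/log.do_prepare_recipe_sysroot
echo "nothing relevant here" > demo/temp/log.do_compile

# Which task logs mention the tool being staged into the sysroot?
grep -l 'dwarfsrcfiles' demo/temp/log.do_*
# prints demo/temp/log.do_prepare_recipe_sysroot
```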

> * When the build completes the dwarfsrcfiles binary is present (even
> if it got there too late). Why, if I rerun the build with no changes,
> does it get removed and recopied?

There are a number of reasons; the example that comes to mind readily
is code which removes things it thinks are now invalid and then
installs new versions of them.

For example, if the taskhash of dwarfsrcfiles changes (e.g. its code
changed), it would rebuild it, then update copies in any recipe's
sysroot that depended on it.

> * I asked before, but I've just been doing this by adding debug
> output, but is there a guide on how to debug this directly? I can't
> figure out how to step into the Python code directly. I use VSCode.

I'm not ignoring the question, I simply don't know. I don't use VSCode.
I'm an old school C/kernel person who uses printk (or bb.warn() in
the bitbake case) to debug things.

Personally, I'd start by looking at the logs for my-app to see when
dwarfsrcfiles is being added to or removed from the sysroot. The logs
should indicate that.

Removal could also happen at build setup time, where bitbake prints
messages about removing things; in that case the removal wouldn't be in
the recipe's logs and isn't well logged, as that code is central.

I'm doing my best to at least give you some answers; I'm afraid I don't
have them all, or tons of time to spend on this. I'm assuming some
answers are better than none, particularly if you're looking at
something you think is abnormal but really isn't.

I hope you can figure it out! If/as/when you do, documenting it so the
next person can find what you wish had been documented would be
helpful, btw!

Cheers,

Richard




View/Reply Online (#60785): https://lists.yoctoproject.org/g/yocto/message/60785



Re: [yocto] Intermittent build failure where dwarfsrcfiles doesn't exist

2023-08-11 Thread Richard Purdie
On Fri, 2023-08-11 at 09:22 +1200, Tom Isaacson wrote:
> DEBUG: Checking
> /home/tom/Documents/Bitbucket/my-app/my-yocto-sdk/poky/build/tmp/work/x86_64-linux/dwarfsrcfiles-native/1.0-r0/sysroot-destdir//home/tom/Documents/Bitbucket/my-app/my-yocto-sdk/poky/build/tmp/work/x86_64-linux/dwarfsrcfiles-native/1.0-r0/recipe-sysroot-native/usr/libexec
> for binaries to process
> DEBUG: Python function relocatable_binaries_preprocess finished
> 
> This seems totally wrong - two full paths are being combined to make a
> path that can't possibly exist. Is this something to do with my setup?

No, those paths look correct. When it builds native sstate objects,
you'd expect to see path duplication like that.

https://docs.yoctoproject.org/ref-manual/faq.html#why-do-bindir-and-libdir-have-strange-values-for-native-recipes

Cheers,

Richard

View/Reply Online (#60778): https://lists.yoctoproject.org/g/yocto/message/60778



Re: [yocto] sstate-cache miss analyzation

2023-08-09 Thread Richard Purdie
On Wed, 2023-08-09 at 13:31 -0600, Rusty Howell wrote:
> Hi, I was reading this very old mailing list post about SSTATE and
> cache hits.
> 
> https://www.openembedded.org/pipermail/openembedded-core/2014-June/211563.html
> 
> Is the information in that message still accurate?  Are there any
> newer tools available to help understand why a recipe got a cache
> miss?

The basics haven't changed and bitbake-diffsigs is still the tool which
can compare two sig files.

The basic idea is to take the siginfo file for the recipe that missed
(from the stamps directory) and compare it to the one you think should
have matched from sstate.
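
As a concrete sketch of that workflow (the tree below is a fabricated
stand-in; real stamps live under tmp/stamps/<package-arch>/<recipe>/
and the hashes will differ):

```shell
# Fake stamps layout standing in for a real build tree
mkdir -p stamps-demo/core2-64-poky-linux/zlib
touch stamps-demo/core2-64-poky-linux/zlib/1.3-r0.do_configure.sigdata.deadbeef

# 1) Locate the sigdata/siginfo file for the task that missed the cache
find stamps-demo -name '*.do_configure.sigdata.*'

# 2) In a real build, compare it against the sstate siginfo you expected to hit:
#    bitbake-diffsigs <stamp sigdata file> <sstate-cache siginfo file>
```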

Cheers,

Richard

View/Reply Online (#60764): https://lists.yoctoproject.org/g/yocto/message/60764



Re: [yocto] Intermittent build failure where dwarfsrcfiles doesn't exist

2023-08-08 Thread Richard Purdie
On Tue, 2023-08-08 at 18:10 +1200, Tom Isaacson wrote:
> We've been using Dunfell for a while but recently I've started getting
> an intermittent build failure from an app:
> ERROR: my-app-1.0.52-213f116 do_package: Fatal errors occurred in
> subprocesses:
> [Errno 2] No such file or directory: 'dwarfsrcfiles':
> 'dwarfsrcfiles': Traceback (most recent call last):
>  File 
> "/var/lib/jenkins/workspace/la_meta-my-layer_check-build/my-yocto-sdk/poky/meta/lib/oe/utils.py",
> line 280, in run
>ret = self._target(*self._args, **self._kwargs)
>  File 
> "/var/lib/jenkins/workspace/la_meta-my-layer_check-build/my-yocto-sdk/poky/meta/classes/package.bbclass",
> line 405, in splitdebuginfo
> sources = source_info(file, d)
>  File 
> "/var/lib/jenkins/workspace/la_meta-my-layer_check-build/my-yocto-sdk/poky/meta/classes/package.bbclass",
> line 353, in source_info
>output = subprocess.check_output(cmd, universal_newlines=True,
> stderr=subprocess.STDOUT)
>  File "/usr/lib/python3.7/subprocess.py", line 395, in check_output
>**kwargs).stdout
>  File "/usr/lib/python3.7/subprocess.py", line 472, in run
>with Popen(*popenargs, **kwargs) as process:
>  File "/usr/lib/python3.7/subprocess.py", line 775, in __init__
>restore_signals, start_new_session)
>  File "/usr/lib/python3.7/subprocess.py", line 1522, in _execute_child
>raise child_exception_type(errno_num, err_msg, err_filename)
> FileNotFoundError: [Errno 2] No such file or directory:
> 'dwarfsrcfiles': 'dwarfsrcfiles'
> 
> I assumed this was a race condition, so following
> https://docs.yoctoproject.org/3.1.26/dev-manual/dev-manual-common-tasks.html?highlight=parallel_make#debugging-parallel-make-races
> I added:
> PARALLEL_MAKE = ""
> to the app recipe, but it didn't seem to help. However, setting:
> PARALLEL_MAKE = "-j 20"
> does seem to make it fail every time (rather than intermittently).
> 
> dwarfsrcfiles is being built and I can run it after the build fails.
> I've upgraded to Kirkstone but I'm seeing the same problem. The recipe
> isn't doing anything to the package step so I'm not sure why it's
> running before dwarfsrcfiles is built. I can see it's included in
> poky/meta/classes/package.bbclass but since it's a dependency I would
> have thought it has to be built before the packaging step can be done?

Rather than a make race, it is probably a task race and something like
a missing task dependency.

Run something like "bitbake my-app -c package -g" and have a look at
task-depends.dot that is generated, see if there is a dependency
between my-app:do_package and dwarfsrcfiles-native:do_populate_sysroot.
It should come from PACKAGE_DEPENDS in package.bbclass.
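
For example, after generating the graph you can check for the expected
edge directly. The node below is a hand-written stand-in; a real
task-depends.dot from "bitbake my-app -c package -g" is much larger,
and the exact node naming can vary by release:

```shell
# Stand-in task-depends.dot containing the edge we hope to find
cat > task-depends.dot <<'EOF'
"my-app.do_package" -> "dwarfsrcfiles-native.do_populate_sysroot"
EOF

if grep -q '"my-app.do_package" -> "dwarfsrcfiles-native.do_populate_sysroot"' task-depends.dot; then
    echo "dependency present"
else
    echo "dependency missing - check PACKAGE_DEPENDS"
fi
# prints "dependency present"
```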

Cheers,

Richard



View/Reply Online (#60734): https://lists.yoctoproject.org/g/yocto/message/60734



Re: [yocto] Questions on multiconfig

2023-08-05 Thread Richard Purdie
On Sat, 2023-08-05 at 16:04 +0200, Stéphane Veyret wrote:
> > First of all, I am spammed with messages like:
> > 
> > NOTE: Deferred task
> > mc:initramfs:/home/stephane/Developpements/yocto/sources/poky/meta/
> > recipes-devtools/qemu/qemuwrapper-cross_1.0.bb:do_package_qa now
> > buildable
> > NOTE: Deferred task
> > mc:initramfs:virtual:native:/home/stephane/Developpements/yocto/sou
> > rces/poky/meta/recipes-
> > devtools/opkg/opkg_0.6.1.bb:do_populate_sysroot now buildable
> > 
> 
> 
> I am still spammed by all these messages. I’d like to understand why.
> And if possible, I’d also like to mute the messages I’ve seen and
> decided they are not important. It doesn’t seem possible, so I may
> send a patch in the future for that case.

https://git.yoctoproject.org/poky/commit/bitbake?id=ebb8b39463cef3c3d0f90f054c433b2f5256cb1a

It is also possible to write specific log configuration files to
control the different messages.
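
For instance, pointing BB_LOGCONFIG at a JSON file along these lines
raises the threshold for one logger. The logger name here is an
assumption for illustration; check which logger actually emits the
messages you want to quiet:

```json
{
    "version": 1,
    "loggers": {
        "BitBake.RunQueue": {
            "level": "WARNING"
        }
    }
}
```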

That said, see the discussion on the bitbake-devel list about this
patch and why merging it will probably prevent us debugging some kinds
of build issues in future :/.

Cheers,

Richard

View/Reply Online (#60716): https://lists.yoctoproject.org/g/yocto/message/60716



Re: [yocto] [qa-build-notification] QA notification for completed autobuilder build (yocto-3.1.27.rc2)

2023-08-05 Thread Richard Purdie
On Sat, 2023-08-05 at 03:06 +, Pokybuild User wrote:
> A build flagged for QA (yocto-3.1.27.rc2) was completed on the autobuilder 
> and is available at:
> 
> 
> https://autobuilder.yocto.io/pub/releases/yocto-3.1.27.rc2
> 
> 
> Build hash information: 
> 
> bitbake: b60c7085ec370473bea9b3b4b65826a17638837f
> meta-agl: 583f80f8e9d085a98f1cbab3a9d8082503ab739e
> meta-arm: b1fe8443a7a72c65fa0fc3371f607c6671b3a882
> meta-aws: d72bb6b38de88caab73047feba16092ca38959c1
> meta-gplv2: 60b251c25ba87e946a0ca4cdc8d17b1cb09292ac
> meta-intel: 488af577a3f21f038c551612bb0af077fa2b743d
> meta-mingw: 524de686205b5d6736661d4532f5f98fee8589b7
> meta-openembedded: b8b0b06821d4d4df0cce4f07fa31a8ca1dd38f46
> meta-virtualization: 521459bf588435e847d981657485bae8d6f003b5
> oecore: 6dd64ca2d726d0b222a7608c65eb0a20454c3f99
> poky: 32da5ee9519119a16392acba29f9df4720a55c0f
> 
> 
> 
> This is an automated message from the Yocto Project Autobuilder
> Git: git://git.yoctoproject.org/yocto-autobuilder2
> Email: richard.pur...@linuxfoundation.org

Counts look much better with those failures fixed!

https://autobuilder.yocto.io/pub/releases/yocto-3.1.27.rc2/testresults/testresult-report.txt

I'd note you probably could do with the recent ltp module dependency
fix and the increased timeout to make the ltp results consistent. That
can wait until the next round though.

Cheers,

Richard




View/Reply Online (#60714): https://lists.yoctoproject.org/g/yocto/message/60714



Re: [yocto] [docs] Updating the list of sanity tested distros?

2023-07-28 Thread Richard Purdie
On Fri, 2023-07-28 at 20:41 +0200, Michael Opdenacker via
lists.yoctoproject.org wrote:
> Greetings,
> 
> As far as I understand, the list of distros that the Yocto Project 
> supports (or should claim it supports) is the intersection of the list 
> of sanity tested distros (SANITY_TESTED_DISTROS in 
> meta-poky/conf/distro/poky.conf) and the distribution versions that are 
> still publicly supported by their vendors (as checked on Wikipedia). 
> There's no point in claiming to support a distribution for which package 
> updates are no longer available.
> 
> Currently for master, this intersection is only:
> 
>   * Ubuntu 20.04 (LTS)
>   * Ubuntu 22.04 (LTS)
>   * Fedora 37
>   * Debian GNU/Linux 11.x (Bullseye)
>   * OpenSUSE Leap 15.4
> 
> Here are a few obvious absents here:
> 
>   * AlmaLinux: version 8.7 and 9.1 are still in SANITY_TESTED_DISTROS
> but they are obsolete. I see an AlmaLinux9 worker on
> https://autobuilder.yoctoproject.org/typhoon/#/workers, but I don't
> know whether it's 9.1 or 9.2 (not obsolete unlike 9.1).
>   * Fedora 38: important as Fedora 37 ends in November. I see a Fedora38
> worker, but there are still failed jobs, even though the majority
> are successful. When is a distro version ready to be added to
> SANITY_TESTED_DISTROS?.
>   * Debian 12: we also have such a worker, with a mix of successful and
> failed builds. Is this version eligible too?
>   * OpenSuse Leap 15.5, as OpenSuse Leap 15.4 expires in November too. I
> see a worker, but with only failed builds.
> 
> I'd say that the most urgent one is AlmaLinux 9.2 as otherwise we don't 
> support AlmaLinux any more. The next ones for the end of the year are 
> Fedora 38 and OpenSuse Leap 15.5.
> 
> Would you have any thoughts?
> 
> I was about to send a documentation update to declare that master (and 
> probably Mickledore and Kirkstone) only support 5 different distribution 
> versions, but a longer list would look better...

We keep seeming to discuss this, but it is really quite simple: we
document what we test on.

We are currently testing on alma8 and alma9. I don't know which version
of alma9 but I'd suspect the most recent one and we can check. We can
list those until the workers are retired.

With regard to the failed builds, the question is whether the failed
builds are due to the patches under test or some underlying issue with
those systems. They would not be in general use in the pool if they
weren't able to build master so it is fine to document all of those
there for master. I'm sure the failures are due to failures in patches
under test at this point.

The exclusion list for older releases can be and is different as some
workers are not enabled for some releases. You can see the mappings
here:

https://git.yoctoproject.org/yocto-autobuilder2/tree/config.py#n160

i.e. mickledore is matching master so far and kirkstone isn't using
fedora38 yet (for example).

Cheers,

Richard


View/Reply Online (#60677): https://lists.yoctoproject.org/g/yocto/message/60677



Re: [yocto] [qa-build-notification] QA notification for completed autobuilder build (yocto-4.3_M2.rc2)

2023-07-27 Thread Richard Purdie
On Thu, 2023-07-27 at 12:01 +0100, Richard Purdie via
lists.yoctoproject.org wrote:
> On Thu, 2023-07-27 at 10:53 +0100, Richard Purdie via
> lists.yoctoproject.org wrote:
> > On Thu, 2023-07-27 at 00:39 +, Pokybuild User wrote:
> > > A build flagged for QA (yocto-4.3_M2.rc2) was completed on the 
> > > autobuilder and is available at:
> > > 
> > > 
> > > https://autobuilder.yocto.io/pub/releases/yocto-4.3_M2.rc2
> > > 
> > > 
> > > Build hash information: 
> > > 
> > > bitbake: 64bc00a46d1aacc23fe7e8d9a46a126f3a4bc318
> > > meta-agl: 25410a640b879405d4f93585f3d74c328f487cad
> > > meta-arm: cfcc8a4d13c26793ecab2d8da9274eb98b8cb4ce
> > > meta-aws: 86966086b2dfabd7113253bc5647f87fd6d42423
> > > meta-intel: f2bcad2943b11b8c9d9ecdd25551a841ba98eda0
> > > meta-mingw: 92258028e1b5664a9f832541d5c4f6de0bd05e07
> > > meta-openembedded: 74e70284acb2eb2f2a47a1ab1aa5ee0928d46344
> > > meta-virtualization: c5079557091b215663458807a8f617752614a6bd
> > > oecore: 6ac110c8954e5fdb71d5495e3eebc1ca3958dc19
> > > poky: 3eff0eb5ea77de20d85a2ffc64652579cbd7755c
> > > 
> > > 
> > > 
> > > This is an automated message from the Yocto Project Autobuilder
> > > Git: git://git.yoctoproject.org/yocto-autobuilder2
> > > Email: richard.pur...@linuxfoundation.org
> > > 
> > 
> > The regression report for rc2 looks better!
> > 
> > https://autobuilder.yocto.io/pub/releases/yocto-4.3_M2.rc2/testresults/testresult-regressions-report.txt
> > 
> > I particularly liked:
> > 
> > """
> >   Additionally, 124114 previously failing test(s) is/are now passing
> > """
> > 
> > :)
> > 
> > It does more clearly show where changes are occurring, which is really
> > helpful.
> > 
> > Not sure why the duplicates appeared in 
> > 
> > https://autobuilder.yocto.io/pub/releases/yocto-4.3_M2.rc2/testresults/testresult-report.txt
> > 
> > it appears the arm build was triggered twice from the same a-full:
> > 
> > https://autobuilder.yoctoproject.org/typhoon/#/builders/82/builds/5203
> > https://autobuilder.yoctoproject.org/typhoon/#/builders/82/builds/5204
> > 
> > and I have no idea why it did that. I'd suggest we just filter the
> > duplicates messages out the final report.
> > 
> > I am a bit puzzled by the ltp test counts and need to talk to Ross
> > about that.

Just to further update, it looks like which set of test results we get
depends on which worker the arm builds ran on. If it was
ubuntu2204-arm-1 there are ~1000 syscall tests; ubuntu1804-arm-1 has
~250. If I increase the no-output command timeout from 300s to 600s,
that changes to ~500, but some tests still time out.

We really need to warn if the timeout occurs so I'm working on changes
to do that as well as trying to work out how long the tests really
take.

Ultimately we may decide to disable some tests.

Cheers,

Richard



View/Reply Online (#60669): https://lists.yoctoproject.org/g/yocto/message/60669



Re: [yocto] [qa-build-notification] QA notification for completed autobuilder build (yocto-4.3_M2.rc2)

2023-07-27 Thread Richard Purdie
On Thu, 2023-07-27 at 10:53 +0100, Richard Purdie via
lists.yoctoproject.org wrote:
> On Thu, 2023-07-27 at 00:39 +, Pokybuild User wrote:
> > A build flagged for QA (yocto-4.3_M2.rc2) was completed on the 
> > autobuilder and is available at:
> > 
> > 
> > https://autobuilder.yocto.io/pub/releases/yocto-4.3_M2.rc2
> > 
> > 
> > Build hash information: 
> > 
> > bitbake: 64bc00a46d1aacc23fe7e8d9a46a126f3a4bc318
> > meta-agl: 25410a640b879405d4f93585f3d74c328f487cad
> > meta-arm: cfcc8a4d13c26793ecab2d8da9274eb98b8cb4ce
> > meta-aws: 86966086b2dfabd7113253bc5647f87fd6d42423
> > meta-intel: f2bcad2943b11b8c9d9ecdd25551a841ba98eda0
> > meta-mingw: 92258028e1b5664a9f832541d5c4f6de0bd05e07
> > meta-openembedded: 74e70284acb2eb2f2a47a1ab1aa5ee0928d46344
> > meta-virtualization: c5079557091b215663458807a8f617752614a6bd
> > oecore: 6ac110c8954e5fdb71d5495e3eebc1ca3958dc19
> > poky: 3eff0eb5ea77de20d85a2ffc64652579cbd7755c
> > 
> > 
> > 
> > This is an automated message from the Yocto Project Autobuilder
> > Git: git://git.yoctoproject.org/yocto-autobuilder2
> > Email: richard.pur...@linuxfoundation.org
> > 
> 
> The regression report for rc2 looks better!
> 
> https://autobuilder.yocto.io/pub/releases/yocto-4.3_M2.rc2/testresults/testresult-regressions-report.txt
> 
> I particularly liked:
> 
> """
>   Additionally, 124114 previously failing test(s) is/are now passing
> """
> 
> :)
> 
> It does more clearly show where changes are occurring, which is really
> helpful.
> 
> Not sure why the duplicates appeared in 
> 
> https://autobuilder.yocto.io/pub/releases/yocto-4.3_M2.rc2/testresults/testresult-report.txt
> 
> it appears the arm build was triggered twice from the same a-full:
> 
> https://autobuilder.yoctoproject.org/typhoon/#/builders/82/builds/5203
> https://autobuilder.yoctoproject.org/typhoon/#/builders/82/builds/5204
> 
> and I have no idea why it did that. I'd suggest we just filter the
> duplicates messages out the final report.
> 
> I am a bit puzzled by the ltp test counts and need to talk to Ross
> about that.

Curiously, downloading:

https://autobuilder.yocto.io/pub/non-release/20230726-18/testresults/qemuarm64-ltp/testresults.json

shows:

==
qemuarm64 Ltp Test Result Summary
==
--
Recipe  | Passed| Failed  | Skipped   | Time(s)   
--
cap_bounds  | 1 | 0   | 0 | 4
commands| 17| 13  | 7 | 8
connectors  | 0 | 0   | 1 | 4
containers  | 65| 0   | 17| 3
controllers | 227   | 4   | 55| 2404
cve | 46| 2   | 31| 1132
dio | 30| 0   | 0 | 981
fcntl-locktests | 1 | 0   | 0 | 10
filecaps| 1 | 0   | 0 | 17
fs  | 61| 1   | 5 | 947
fs_bind | 84| 11  | 0 | 26
fs_perms_simple | 18| 0   | 0 | 0
fsx | 1 | 0   | 0 | 1
input   | 5 | 1   | 0 | 2
io  | 2 | 0   | 0 | 1
ipc | 6 | 0   | 0 | 5
math| 10| 0   | 0 | 9
mm  | 44| 1   | 25| 485
net.ipv6_lib| 6 | 0   | 0 | 1
nptl| 1 | 0   | 0 | 8
pty | 6 | 0   | 3 | 312
sched   | 9 | 0   | 2 | 118
syscalls| 281   | 11  | 19| 701
--

which is different from the main test report. Something odd is going on.

Cheers,

Richard


View/Reply Online (#60662): https://lists.yoctoproject.org/g/yocto/message/60662



Re: [yocto] [qa-build-notification] QA notification for completed autobuilder build (yocto-4.3_M2.rc2)

2023-07-27 Thread Richard Purdie
On Thu, 2023-07-27 at 00:39 +, Pokybuild User wrote:
> A build flagged for QA (yocto-4.3_M2.rc2) was completed on the 
> autobuilder and is available at:
> 
> 
> https://autobuilder.yocto.io/pub/releases/yocto-4.3_M2.rc2
> 
> 
> Build hash information: 
> 
> bitbake: 64bc00a46d1aacc23fe7e8d9a46a126f3a4bc318
> meta-agl: 25410a640b879405d4f93585f3d74c328f487cad
> meta-arm: cfcc8a4d13c26793ecab2d8da9274eb98b8cb4ce
> meta-aws: 86966086b2dfabd7113253bc5647f87fd6d42423
> meta-intel: f2bcad2943b11b8c9d9ecdd25551a841ba98eda0
> meta-mingw: 92258028e1b5664a9f832541d5c4f6de0bd05e07
> meta-openembedded: 74e70284acb2eb2f2a47a1ab1aa5ee0928d46344
> meta-virtualization: c5079557091b215663458807a8f617752614a6bd
> oecore: 6ac110c8954e5fdb71d5495e3eebc1ca3958dc19
> poky: 3eff0eb5ea77de20d85a2ffc64652579cbd7755c
> 
> 
> 
> This is an automated message from the Yocto Project Autobuilder
> Git: git://git.yoctoproject.org/yocto-autobuilder2
> Email: richard.pur...@linuxfoundation.org
> 

The regression report for rc2 looks better!

https://autobuilder.yocto.io/pub/releases/yocto-4.3_M2.rc2/testresults/testresult-regressions-report.txt

I particularly liked:

"""
  Additionally, 124114 previously failing test(s) is/are now passing
"""

:)

It does show more clearly where changes are occurring, which is really
helpful.

Not sure why the duplicates appeared in 

https://autobuilder.yocto.io/pub/releases/yocto-4.3_M2.rc2/testresults/testresult-report.txt

it appears the arm build was triggered twice from the same a-full:

https://autobuilder.yoctoproject.org/typhoon/#/builders/82/builds/5203
https://autobuilder.yoctoproject.org/typhoon/#/builders/82/builds/5204

and I have no idea why it did that. I'd suggest we just filter the
duplicate messages out of the final report.
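Filtering such duplicates could amount to keeping only the first result set seen per key — a sketch of the idea, not resulttool's actual code, with illustrative field names:

```python
def dedupe_results(entries):
    """Keep the first entry per (machine, testseries) key, dropping
    duplicates produced by a build being triggered twice."""
    seen = set()
    unique = []
    for entry in entries:
        key = (entry["machine"], entry["testseries"])
        if key not in seen:
            seen.add(key)
            unique.append(entry)
    return unique

# Two result sets from the same double-triggered arm build, plus one other.
entries = [
    {"machine": "qemuarm64", "testseries": "ltp", "build": 5203},
    {"machine": "qemuarm64", "testseries": "ltp", "build": 5204},  # duplicate
    {"machine": "qemuppc", "testseries": "ltp", "build": 5203},
]
print(dedupe_results(entries))
```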

I am a bit puzzled by the ltp test counts and need to talk to Ross
about that.

Cheers,

Richard



View/Reply Online (#60661): https://lists.yoctoproject.org/g/yocto/message/60661



Re: [yocto] File magic/sdk relocation

2023-07-25 Thread Richard Purdie
On Sun, 2023-07-23 at 14:03 +0200, Alexander Kanavin wrote:
> On Fri, 21 Jul 2023 at 19:11, Oleksandr Hnatiuk via
> lists.yoctoproject.org 
> wrote:
> > 
> > On Mon, Jun 19, 2023 at 08:27 PM, Alexander Kanavin wrote:
> > 
> > On closer look, it should even say:
> > export MAGIC="\$OECORE_NATIVE_SYSROOT/usr/share/misc/magic.mgc"
> > 
> > Hi Alexander,
> > 
> > We have submitted the patch[1] which fixes the build for our case but 
> > doesn't change the contents of file.sh which is a separate issue.
> > You suggested changing MAGIC to 
> > "\$OECORE_NATIVE_SYSROOT/usr/share/misc/magic.mgc".
> > We can also get the same result with "${datadir}/misc/magic.mgc" which is 
> > what was used for create_cmdline_wrapper in this recipe.
> > Which one is preferred?
> 
> I don't have a preference. If both options work, there's clearly more
> sdk relocation magic in various places than I thought was happening,
> so I would pick the option that is less likely to quietly regress, but
> right now I can't say which one that would be.

Personally, I'd really like to patch libmagic to move the path to its
own section; then we can use the relocation technique we use with gcc
to change the path, and most of these issues go away...
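For context, that relocation technique works by overwriting a NUL-padded path embedded in the binary in place, so no file offsets change. A simplified, hypothetical sketch of the rewrite (the real SDK relocation code handles ELF sections and is more involved):

```python
def relocate_path(blob, old_path, new_path):
    """Overwrite a NUL-terminated path embedded in a binary blob in
    place. The new path must not be longer than the old one, so the
    file layout (and every offset in it) stays unchanged."""
    old = old_path.encode()
    new = new_path.encode()
    if len(new) > len(old):
        raise ValueError("new path must not be longer than the old one")
    offset = blob.find(old + b"\x00")
    if offset < 0:
        raise ValueError("old path not found")
    padded = new + b"\x00" * (len(old) + 1 - len(new))
    return blob[:offset] + padded + blob[offset + len(padded):]

# Illustrative blob and paths only.
blob = b"HDR\x00/opt/poky/sysroots/share/misc/magic.mgc\x00TRAILER"
fixed = relocate_path(blob, "/opt/poky/sysroots/share/misc/magic.mgc",
                      "/home/user/sdk/share/misc/magic.mgc")
print(fixed)
```

Keeping the path in its own section would make the search step unnecessary: the relocator could go straight to the section's offset.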

Cheers,

Richard

View/Reply Online (#60647): https://lists.yoctoproject.org/g/yocto/message/60647



[yocto] Test regression report for 4.3 M2

2023-07-20 Thread Richard Purdie
Hi,

We have the 4.3 M2 build so I was curious how the regression report
looked this time:

https://autobuilder.yocto.io/pub/releases/yocto-4.3_M2.rc1/testresults/testresult-regressions-report.txt

The good news is I think this is improving and we're getting there,
this one does have useful data and the comparison looks much improved.
I'm pleased we had the comparison data there, thanks Michael and
Alexis.

I think the biggest missing piece is that when we show regressions with
PASS -> None, we should also note "Additionally there were X new test
results" if that is the case, as it would suggest tests were replaced. I
suspect that may be the case here but I can't know for sure.
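The kind of summary being asked for is essentially a set comparison between the two result sets. A rough sketch of the idea (not resulttool's actual implementation, with illustrative status names):

```python
def compare(base, target):
    """Classify test outcomes between a base and a target result set,
    each a dict of test name -> status string."""
    # PASS in base, present but no longer PASS in target: a regression.
    regressions = [t for t, s in base.items()
                   if s == "PASS" and target.get(t) not in (None, "PASS")]
    # In base but absent from target: the PASS -> None case.
    vanished = [t for t in base if t not in target]
    # In target but absent from base: newly added tests.
    new = [t for t in target if t not in base]
    # FAIL in base, PASS in target: an improvement worth summarizing too.
    fixed = [t for t, s in base.items()
             if s == "FAIL" and target.get(t) == "PASS"]
    return {"regressions": regressions, "vanished": vanished,
            "new": new, "fixed": fixed}

base = {"gcc.test1": "PASS", "gcc.test2": "FAIL", "old.test": "PASS"}
target = {"gcc.test1": "FAIL", "gcc.test2": "PASS", "rust.test": "PASS"}
print(compare(base, target))
```

Reporting the "vanished" and "new" buckets together is what would make a tests-were-renamed situation obvious at a glance.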

In this build I happen to know that:

* We added rust tests for many machines but there is no mention of this
(it isn't a regression but it would be nice to highlight the change in
summary)

* We fixed ~125,000 gcc tests for qemuarm64

* We fixed ~60,000 gcc tests for qemuppc

so perhaps we need some kind of summary about improvements too?

Ross is looking into some issues with flaky ltp test reporting. I do
have a fix queued to improve the glibc test results too for the next
milestone.

It is nice to see things starting to work better!

Cheers,

Richard



View/Reply Online (#60621): https://lists.yoctoproject.org/g/yocto/message/60621



Re: [yocto] [qa-build-notification] QA notification for completed autobuilder build (yocto-4.3_M2.rc1)

2023-07-20 Thread Richard Purdie
On Thu, 2023-07-20 at 10:54 +, Pokybuild User wrote:
> A build flagged for QA (yocto-4.3_M2.rc1) was completed on the 
> autobuilder and is available at:
> 
> 
> https://autobuilder.yocto.io/pub/releases/yocto-4.3_M2.rc1
> 
> 
> Build hash information: 
> 
> bitbake: 64bc00a46d1aacc23fe7e8d9a46a126f3a4bc318
> meta-agl: f1bb0ee68b18a39917e06bfbb08d677bbf8bdf25
> meta-arm: e67d9c4dbc892ef688ff960d68e02627fc99bb53
> meta-aws: 2b834db25df1dd90e2b05f89cf1ddf3790bfc220
> meta-intel: f96c815a14dab58abce5d4ce69e8fa76c9f6de3a
> meta-mingw: 4608d0bb7e47c52b8f6e9be259bfb1716fda9fd6
> meta-openembedded: d57c4655623a7271dc93cfb741ffdbf1b511a5c2
> meta-virtualization: 680f56d9e432c0cb7de41fe50610f218bf53ba1f
> oecore: 8bb047ad3bd93fcf655eeec53e6d1de1e7747140
> poky: f73ae292bc0b92df3cb76c6e8b220f18630f6bc7
> 
> 
> 
> This is an automated message from the Yocto Project Autobuilder
> Git: git://git.yoctoproject.org/yocto-autobuilder2
> Email: richard.pur...@linuxfoundation.org

There were two failures in this build. One was a meta-mingw issue and a
patch merged for that now. The other is a meta-agl with a patch needing
to be rebased. Both issues are in the other layers and not in the core
so the milestone release should be unaffected.

Cheers,

Richard

View/Reply Online (#60619): https://lists.yoctoproject.org/g/yocto/message/60619



Re: [yocto] Request for Quotations for development work

2023-07-17 Thread Richard Purdie
On Tue, 2023-07-18 at 09:00 +1200, Tom Isaacson wrote:
> I read this and the linked articles and one piece in
> https://www.linux.com/audience/maintainer-confidential-opportunities-and-challenges-of-the-ubiquitous-but-under-resourced-yocto-project/
>   caught my eye:
> > One question that comes up a lot is the project’s development
> > model. We’re an “old school” patch on a mailing list, similar to
> > the kernel. New developers complain that we should have GitHub
> > workflows so they can make point-and-click patch submissions. I
> > have made submissions to other projects that way, and I can see the
> > attraction of it. Equally, it does depend a lot on your review
> > requirements. We want many people to see our patches, not just one
> > person, and we greatly benefit from that comprehensive peer review.
> 
> I'm an old school engineer but I've never used the "patch to mailing
> list" model so I find it quite difficult. It seemed like someone must
> have solved this problem already so I've been looking around and
> found https://gitgitgadget.github.io/ This is what the git mailing
> list use - you create a PR in Github and it sends the patch to the
> mailing list. You then respond to comments and update the PR. Finally
> someone merges your patch.
> It seems like this could be implemented alongside the existing
> Patchwork solution relatively easily and provide an alternate route
> to those of us who prefer the modern tooling.
> 
> For a complete solution you could add the same functionality to
> Patchwork (I assume you
> use https://github.com/getpatchwork/patchwork) to make it bi-
> directional - patches from the mailing list are created as PRs in
> Github, and PRs in Github have their patches emailed to the mailing
> list. This then solves your Patchtest problem - the tests could
> instead be implemented as CI builds in Github Actions (which is more
> maintainable) and the results fed back to the mailing list.
> Alternatives to Github are of course available.

I think it probably could be done; I've wondered about this myself. Even
getting patchwork working reliably, and then getting patchtest back on
top of it and operational, has been a challenge. The next step is to get
our testing back and see if we can keep that going this time. We still
don't have great automation around updating patchwork patch status
either, for example.

If patchtest is successful, there are things we may be able to build on
top of that. I suspect the challenge will be "permissions" both on the
mailing list side and on the github/whatever side and whether it can
show/link the "real" users or not.

> Is the Patchtest source code available anywhere?

https://git.yoctoproject.org/patchtest/
https://git.yoctoproject.org/patchtest-oe/

Cheers,

Richard



View/Reply Online (#60596): https://lists.yoctoproject.org/g/yocto/message/60596



Re: [yocto] Request for Quotations for development work

2023-07-12 Thread Richard Purdie
On Tue, 2023-06-20 at 15:09 +0100, Richard Purdie via
lists.yoctoproject.org wrote:
> The project has some exciting plans in progress and I'm happy to say
> we're now in the fortunate position that we're able to seek quotations
> for some development work. For more details please see this page on
> website:
> 
> https://www.yoctoproject.org/community/yocto-project-engineering-request-for-quotation/
> 
> Please feel free to reach out to me or the Yocto Project TSC if there
> are any questions.

We've added an update to the end of the above link with a deadline for
quotations and some other answers to commonly asked questions.

Cheers,

Richard


View/Reply Online (#60554): https://lists.yoctoproject.org/g/yocto/message/60554



Re: [yocto] [meta-security][PATCH] layer: add more memory for Qemu machines

2023-06-29 Thread Richard Purdie
On Thu, 2023-06-29 at 13:45 +0300, Mikko Rapeli wrote:
> Hi,
> 
> On Wed, Jun 28, 2023 at 05:47:21PM +0100, Richard Purdie wrote:
> > On Wed, 2023-06-28 at 08:56 -0400, Armin Kuster wrote:
> > > Signed-off-by: Armin Kuster 
> > > ---
> > >  conf/layer.conf | 3 +++
> > >  1 file changed, 3 insertions(+)
> > > 
> > > diff --git a/conf/layer.conf b/conf/layer.conf
> > > index 334a945..5f289cb 100644
> > > --- a/conf/layer.conf
> > > +++ b/conf/layer.conf
> > > @@ -28,4 +28,7 @@ INHERIT += "sanity-meta-security"
> > >  
> > >  QB_KERNEL_CMDLINE_APPEND = " ${@bb.utils.contains('DISTRO_FEATURES', 
> > > 'apparmor', 'apparmor=1 security=apparmor', '', d)}"
> > >  
> > > +# We need more mem to run many apps in this layer
> > > +QB_MEM="-m 2048"
> > > +
> > >  addpylib ${LAYERDIR}/lib oeqa
> > 
> > Putting that unconditionally in a layer.conf is a pretty poor thing to
> > do for usability IMO as it effectively forces that decision on anyone
> > including the layer. There has to be a better way to handle that, at
> > least conditionally on some override?
> 
> Currently these belong logically to machine config but what about generic
> machine targets. Could an image config define how much memory is needed
> to run the image on qemu?

I believe Armin did move these to the image recipe which is what we do
in core in some cases for this too. I think that makes more sense.

Cheers,

Richard

View/Reply Online (#60474): https://lists.yoctoproject.org/g/yocto/message/60474



Re: [yocto] report one objcopy bug and fix

2023-06-29 Thread Richard Purdie
On Wed, 2023-06-28 at 19:48 -0700, cdd...@ambarella.com wrote:
> Hi Richard,
>     the tar version is
>  tar --version
> tar (GNU tar) 1.30
> Copyright (C) 2017 Free Software Foundation, Inc.
> License GPLv3+: GNU GPL version 3 or later
> .
> This is free software: you are free to change and redistribute it.
> There is NO WARRANTY, to the extent permitted by law.
> 
> Written by John Gilmore and Jay Fenlason.
> 
> both of the two distro are the same.


That is rather unfortunate as we can't tell them apart :( I'm not sure
what we can do about this other than warn that 20.04.3 isn't supported
and/or has a bug?

Cheers,

Richard


View/Reply Online (#60472): https://lists.yoctoproject.org/g/yocto/message/60472



Re: [yocto] report one objcopy bug and fix

2023-06-28 Thread Richard Purdie
On Wed, 2023-06-28 at 00:55 -0700, cdd...@ambarella.com wrote:
> Hi,
>     A bug is triggered when using docker + Ubuntu 20.04.3 LTS. The
> bug message is :
> 
> File: 'xxx/build/poky/meta/classes/kernel-module-split.bbclass',
> lineno: 68, function: extract_modinfo
>  0064:    raise msg
>  0065:    cmd = "%sobjcopy -j .modinfo -O binary %s %s" %
> (d.getVar("HOST_PREFIX") or "", tmpkofile, tmpfile)
>  0066:    else:
>  0067:    cmd = "%sobjcopy -j .modinfo -O binary %s %s" %
> (d.getVar("HOST_PREFIX") or "", file, tmpfile)
>  *** 0068:    subprocess.check_call(cmd, shell=True)
>  Exception: subprocess.CalledProcessError: Command 'aarch64-poky-
> linux-objcopy -j .modinfo -O binary ../keymaps/rc-alink-dtu-m.ko  
> ../tmpwhs5uwlt' returned non-zero exit status 1.
>  
>     The  "rc-alink-dtu-m.ko" file is damaged, and can't  analysis by
> file command :
>    $ file -b rc-alink-dtu-m.ko
>   data
>     and the damaged is caused by 
> build/poky/meta/classes/package.bbclass  perform_packagecopy()
>         cmd = 'tar --exclude=./sysroot-only -cf - -C %s -p -S . | tar
> -xf - -C %s' % (dest, dvar)
> 
>     remove the "-S " can fix the bug.
>     notes: this bug only appears in Ubuntu 20.04.3 LTS, and Ubuntu
> 20.04.6 LTS do not have the issue.

Do you have the tar versions for the two distro versions? Was this a
bug that was fixed in tar?

I'm not sure we can easily work around this, it may be better to error
if a build is attempted with a known broken tar version?
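A sanity check could refuse known-broken tar versions up front — though, as noted elsewhere in this thread, both the broken and the fixed Ubuntu images report tar 1.30, so a plain version check cannot always tell them apart. An illustrative sketch only (the threshold is hypothetical, not oe-core's actual check):

```python
def parse_tar_version(version_output):
    """Extract (major, minor) from `tar --version` output,
    e.g. "tar (GNU tar) 1.30" -> (1, 30)."""
    first_line = version_output.splitlines()[0]
    version = first_line.rsplit(" ", 1)[-1]
    return tuple(int(part) for part in version.split(".")[:2])

MINIMUM_TAR = (1, 28)  # illustrative threshold

def check_tar(version_output):
    found = parse_tar_version(version_output)
    if found < MINIMUM_TAR:
        raise RuntimeError(f"tar {found} is too old; need >= {MINIMUM_TAR}")
    return found

print(check_tar("tar (GNU tar) 1.30\nCopyright (C) 2017"))  # prints (1, 30)
```

A behavioural test (actually creating and extracting a sparse file) would be the only reliable way to catch a broken build of an otherwise-identical version.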

Cheers,

Richard



View/Reply Online (#60467): https://lists.yoctoproject.org/g/yocto/message/60467



Re: [yocto] [meta-security][PATCH] layer: add more memory for Qemu machines

2023-06-28 Thread Richard Purdie
On Wed, 2023-06-28 at 08:56 -0400, Armin Kuster wrote:
> Signed-off-by: Armin Kuster 
> ---
>  conf/layer.conf | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/conf/layer.conf b/conf/layer.conf
> index 334a945..5f289cb 100644
> --- a/conf/layer.conf
> +++ b/conf/layer.conf
> @@ -28,4 +28,7 @@ INHERIT += "sanity-meta-security"
>  
>  QB_KERNEL_CMDLINE_APPEND = " ${@bb.utils.contains('DISTRO_FEATURES', 
> 'apparmor', 'apparmor=1 security=apparmor', '', d)}"
>  
> +# We need more mem to run many apps in this layer
> +QB_MEM="-m 2048"
> +
>  addpylib ${LAYERDIR}/lib oeqa

Putting that unconditionally in a layer.conf is a pretty poor thing to
do for usability IMO as it effectively forces that decision on anyone
including the layer. There has to be a better way to handle that, at
least conditionally on some override?

Cheers,

Richard



View/Reply Online (#60466): https://lists.yoctoproject.org/g/yocto/message/60466



Re: [yocto] python3-speechrecognition building using yocto recipe #yocto

2023-06-21 Thread Richard Purdie
On Wed, 2023-06-21 at 04:10 -0700, lavkhush2...@gmail.com wrote:
> Hi all, 
> 
> I want to create .deb file of package python3-speechrecognition  , I 
> successfully builded natively on target board riscv platform and i
> copied binary from target board and with using  recipe i am copying
> binary in path: /usr/lib/python3.8/site-packages and trying  to
> create .deb file from using yocto environment.
> 
> speechrecognition recipe is-
> 
> DESCRIPTION = "speechrecognition Packages with dependencies library's
> "
>  
> LICENSE = "CLOSED"
>  
> SRC_URI += "file://speechrecognition"
>  
> S = "${WORKDIR}/speechrecognition"
>  
> do_install() {
>         install -d  ${D}${libdir}/
>         cp -r ${S}/lib/* ${D}${libdir}/
> }
>  
> FILES_${PN} += "${libdir}/*"
> 
> 
> I am facing one issue here:-
> 
> ERROR: speechrecognition-3.9.0-r0 do_populate_sysroot: Fatal errors occurred 
> in subprocesses:
> Command '['riscv64-oe-linux-strip', '--remove-section=.comment', 
> '--remove-section=.note', 
> '/home/kush/khu/sources/khu-build/tmp-glibc/work/riscv64-oe-linux/speechrecognition/3.9.0-r0/sysroot-destdir/usr/lib/python3.8/site-packages/flac-linux-x86']'
>  returned non-zero exit status 1.
> > Subprocess output:riscv64-oe-linux-strip: Unable to recognise the
> > format of the input file `/home/integration-
> > team/kush/khu/sources/khu-build/tmp-glibc/work/riscv64-oe-
> > linux/speechrecognition/3.9.0-r0/sysroot-
> > destdir/usr/lib/python3.8/site-packages/flac-linux-x86'


I'll make a wild guess that the binary in "flac-linux-x86" is an x86
one and the riscv strip doesn't like x86 binaries.

I suspect your riscv target device doesn't like them much either.
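That guess is easy to confirm without file(1): an ELF file records its target machine at byte offset 18 of the header. A small sketch (a handful of e_machine values only, and e_machine is read little-endian here for simplicity):

```python
import struct

# A few e_machine values from the ELF specification.
ELF_MACHINES = {0x03: "x86", 0x28: "arm", 0x3e: "x86-64",
                0xb7: "aarch64", 0xf3: "riscv"}

def elf_machine(header):
    """Return the target architecture from the first 20 bytes of an
    ELF header, or None if it is not ELF at all (which file(1) would
    report as plain 'data')."""
    if len(header) < 20 or header[:4] != b"\x7fELF":
        return None
    (e_machine,) = struct.unpack_from("<H", header, 18)
    return ELF_MACHINES.get(e_machine, hex(e_machine))

# Minimal fake header: ELF magic, 14 bytes of identification/type,
# then e_machine = 0x3e (x86-64) at offset 18.
fake_x86_64 = b"\x7fELF" + b"\x00" * 14 + b"\x3e\x00"
print(elf_machine(fake_x86_64))  # prints x86-64
```

Running a check like this over `${D}` before packaging would flag foreign-architecture binaries before strip ever sees them.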

Cheers,

Richard

View/Reply Online (#60374): https://lists.yoctoproject.org/g/yocto/message/60374



[yocto] Request for Quotations for development work

2023-06-20 Thread Richard Purdie
Hi All,

The project has some exciting plans in progress and I'm happy to say
we're now in the fortunate position that we're able to seek quotations
for some development work. For more details please see this page on
website:

https://www.yoctoproject.org/community/yocto-project-engineering-request-for-quotation/

Please feel free to reach out to me or the Yocto Project TSC if there
are any questions.

Regards,

Richard


View/Reply Online (#60365): https://lists.yoctoproject.org/g/yocto/message/60365



[yocto] [ptest-runner2] [PATCH] utils: Ensure data is only written with a mutex held

2023-06-19 Thread Richard Purdie
Currently the code can race as there is a read/write thread handling the
stdio but there is no guarantee that when the process exits, the thread
has handled all the data. This results in output where "END:" isn't
actually at the end of the logs but somewhere in the middle of the output.

Synchronisation is hard. The easiest way I can see to fix this is to have
a mutex for the output and then, in the main thread after the child exits,
read any remaining data. This avoids concurrent writes corrupting the
output and ensures END: is actually at the end of the test data.

Signed-off-by: Richard Purdie 
---
 utils.c | 25 +
 1 file changed, 21 insertions(+), 4 deletions(-)

diff --git a/utils.c b/utils.c
index ec57fa4..65b1df3 100644
--- a/utils.c
+++ b/utils.c
@@ -63,6 +63,7 @@ static struct {
 	int timeouted;
 	pid_t pid;
 	int padding1;
+	pthread_mutex_t fd_lock;
 } _child_reader;
 
 static inline char *
@@ -317,12 +318,13 @@ read_child(void *arg)
 
 	do {
 		r = poll(pfds, 2, _child_reader.timeout*1000);
+		pthread_mutex_lock(&_child_reader.fd_lock);
 		if (r > 0) {
 			char buf[WAIT_CHILD_BUF_MAX_SIZE];
 			ssize_t n;
 
 			if (pfds[0].revents != 0) {
-				n = read(_child_reader.fds[0], buf, WAIT_CHILD_BUF_MAX_SIZE);
+				n = read(_child_reader.fds[0], buf, WAIT_CHILD_BUF_MAX_SIZE);
 				if (n > 0)
 					fwrite(buf, (size_t)n, 1, _child_reader.fps[0]);
 			}
@@ -338,11 +340,13 @@ read_child(void *arg)
 			// as much data from the system as possible and kill the test
 			collect_system_state(_child_reader.fps[0]);
 			_child_reader.timeouted = 1;
+			pthread_mutex_unlock(&_child_reader.fd_lock);
 			kill(-_child_reader.pid, SIGKILL);
 		}
 
 		fflush(_child_reader.fps[0]);
 		fflush(_child_reader.fps[1]);
+		pthread_mutex_unlock(&_child_reader.fd_lock);
 	} while (1);
 
 	return NULL;
@@ -444,6 +448,8 @@ run_ptests(struct ptest_list *head, const struct ptest_options opts,
 	int slave;
 	int pgid = -1;
 	pthread_t tid;
+	ssize_t n;
+	char buf[WAIT_CHILD_BUF_MAX_SIZE];
 
 	if (opts.xml_filename) {
 		xh = xml_create(ptest_list_length(head), opts.xml_filename);
@@ -453,10 +459,10 @@ run_ptests(struct ptest_list *head, const struct ptest_options opts,
 
 	do
 	{
-		if ((rc = pipe(pipefd_stdout)) == -1)
+		if ((rc = pipe2(pipefd_stdout, O_NONBLOCK)) == -1)
 			break;
 
-		if ((rc = pipe(pipefd_stderr)) == -1) {
+		if ((rc = pipe2(pipefd_stderr, O_NONBLOCK)) == -1) {
 			close(pipefd_stdout[0]);
 			close(pipefd_stdout[1]);
 			break;
@@ -466,6 +472,11 @@ run_ptests(struct ptest_list *head, const struct ptest_options opts,
 			fprintf(fp, "ERROR: Unable to detach from controlling tty, %s\n", strerror(errno));
 		}
 
+		if (pthread_mutex_init(&_child_reader.fd_lock, NULL) != 0) {
+			printf("Failed to init mutex\n");
+			exit(EXIT_FAILURE);
+		}
+
 		_child_reader.fds[0] = pipefd_stdout[0];
 		_child_reader.fds[1] = pipefd_stderr[0];
 		_child_reader.fps[0] = fp;
@@ -535,8 +546,13 @@ run_ptests(struct ptest_list *head, const struct ptest_options opts,
 			entime = time(NULL);
 			duration = entime - sttime;
 
-			/* Now the child has exited, ensure buffers are in sync before writing */
+			pthread_mutex_lock(&_child_reader.fd_lock);
+			while ((n = read(_child_reader.fds[0], buf, WAIT_CHILD_BUF_MAX_SIZE)) > 0)
+				fwrite(buf, (size_t)n, 1, _child_reader.fps[0]);
+			while ((n = read(_child_reader.fds[1], buf, WAIT_CHILD_BUF_MAX_SIZE)) > 0)
+				fwrite(buf, (size_t)n, 1, _child_reader.fps[1]);
 			fflush(NULL);
+			pthread_mutex_unlock(&_child_reader.fd_lock);
 
 			if (status) {
 				fprintf(fp, "\nERROR: Exit status is %d\n", status);
@@ -558,6 +574,7 @@ run_ptests(struct ptest_list *head, const struct ptest_options opts,
 
 		pthread_cancel(tid);
 		pthread_j
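The synchronisation pattern the patch implements — a background reader and a final drain that serialise on one lock, so END: really lands last — can be illustrated with a small Python analogue (a sketch of the pattern only, not of ptest-runner itself):

```python
import os
import threading

lock = threading.Lock()
output = []

r, w = os.pipe()
os.set_blocking(r, False)  # like pipe2(..., O_NONBLOCK) in the patch

done = threading.Event()

def reader():
    # Background reader: only ever appends while holding the lock.
    while True:
        with lock:
            try:
                chunk = os.read(r, 4096)
            except BlockingIOError:
                chunk = b""
            if chunk:
                output.append(chunk)
        if done.is_set():
            return

t = threading.Thread(target=reader)
t.start()

os.write(w, b"test output\n")
os.close(w)          # the "child" has exited
done.set()
t.join()

# Final drain under the same lock, then the END marker really is last.
with lock:
    try:
        while (chunk := os.read(r, 4096)):
            output.append(chunk)
    except BlockingIOError:
        pass
    output.append(b"END: done\n")
os.close(r)
print(b"".join(output))
```

Whether the reader thread or the final drain picks up the last chunk, the lock ensures the two never interleave and the END marker is appended after all data.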

Re: [yocto] Going on supporting Ubuntu 18.04?

2023-06-19 Thread Richard Purdie
On Mon, 2023-06-19 at 09:53 +, Ross Burton wrote:
> On 19 Jun 2023, at 10:48, Alex Kiernan via lists.yoctoproject.org 
>  wrote:
> > FWIW we swapped out our x86_64 build machines for Aarch64 build
> > machines (both 20.04) in AWS with pretty much zero pain - it went way
> > better than I was expecting. That said we don't run ptests, or qemu,
> > or …
> 
> Sure, our meta-arm CI is 90% running on graviton3, this isn’t an “arm
> is inherently unstable” thing but rather that some of the current
> physical arm workers were unstable: reporting core temperatures of
> several hundred degrees celsius and shutting down, etc. Halstead
> resolved the problems in the end but it wasn’t trivial.
> 
> The worker running 1804 is the oldest and by far the slowest, but
> it’s also been - so far - the most reliable.

To be clear, I'm not saying there is any problem in general with ARM
machines! We just know that particular hardware; we had a lot of issues
when we first set it up. Since we got the config right it has behaved a
lot more reliably than the others. Changing the OS introduces risk, and
my question was whether we wanted to rush into that, particularly as it
risks the machine being written off if things go badly. I was already
under pressure about that as it is getting old. Things have got better
since, and nobody is interested in supporting that particular older
system.

Cheers,

Richard





View/Reply Online (#60346): https://lists.yoctoproject.org/g/yocto/message/60346



Re: [yocto] Going on supporting Ubuntu 18.04?

2023-06-19 Thread Richard Purdie
On Mon, 2023-06-19 at 09:44 +, Ross Burton wrote:
> On 19 Jun 2023, at 10:34, Richard Purdie via lists.yoctoproject.org 
>  wrote:
> > The ARM worker worries me a lot more. The 1804 worker is currently
> > stable but I do worry a bit what will happen when we change the OS on
> > that machine. In theory it should be fine and it could well be but that
> > hardware was very painful in the past.
> 
> Each Arm worker runs a different release of Ubuntu, so I’ve no
> problem with asking Michael to reimage the 1804 machine with
> something newer (maybe even something not-Debian for coverage).  If
> it suddenly becomes less stable that would be a very interesting
> datapoint!

Interesting, yes. If it breaks, though, I suspect the official advice
will be to scrap that machine rather than fix it, and we'll then be down
on build power :/.

Cheers,

Richard

View/Reply Online (#60343): https://lists.yoctoproject.org/g/yocto/message/60343



Re: [yocto] Going on supporting Ubuntu 18.04?

2023-06-19 Thread Richard Purdie
On Mon, 2023-06-19 at 11:02 +0200, Alexander Kanavin wrote:
> Even if we would have a subscription with Canonical, it would not be
> fair to ask contributors to fix issues that occur only on distribution
> available through commercial channels. So yes, it should be dropped
> ASAP.
> 
> Alex
> 
> On Mon, 19 Jun 2023 at 10:35, Michael Opdenacker via
> lists.yoctoproject.org
>  wrote:
> > 
> > Greetings,
> > 
> > I know that we are still testing and supporting Ubuntu 18.04, but should
> > we go on doing it?
> > 
> > This version no longer has public updates, so unless we have a
> > subscription with Canonical, we are going to be out of sync with the
> > updates that Ubuntu 18.04 subscribers get.

This isn't as simple as you'd think.

The x86 worker will be dropped when maintenance comes around to that
point on the autobuilder, it is in the queue.

The ARM worker worries me a lot more. The 1804 worker is currently
stable but I do worry a bit what will happen when we change the OS on
that machine. In theory it should be fine and it could well be but that
hardware was very painful in the past.

I appreciate the idealistic "let's drop it ASAP" view, but there are
other things to consider. Do we want to lose Michael Halstead and myself
for a couple of weeks trying to fix a new OS on it? Do we want to lose a
third of our ARM build power?

Cheers,

Richard


View/Reply Online (#60340): https://lists.yoctoproject.org/g/yocto/message/60340



[yocto] [ptest-runner2] [PATCH] utils: Ensure buffers are flushed after child exits

2023-06-18 Thread Richard Purdie
We currently wait for the child to exit but we don't flush the buffers.
This can mean the output ends up out of sync and the END: line isn't at
the end of the logs.

We've recently seen a lot of issues related to this on the autobuilder.
Add in a flush call for all fds to ensure the buffers are in sync. This
does appear to improve the warnings on the autobuilder now that we have
started detecting the issue.

Signed-off-by: Richard Purdie 
---
 README.md | 2 +-
 utils.c   | 3 +++
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 85ef58a..e6f548e 100644
--- a/README.md
+++ b/README.md
@@ -49,7 +49,7 @@ $ mtrace ./ptest-runner $MALLOC_TRACE
 ## Contributions
 
 For contribute please send a patch with subject prefix "[ptest-runner]" to 
-yo...@yoctoproject.org.
+yocto@lists.yoctoproject.org.
 
 ## Links
 
diff --git a/utils.c b/utils.c
index a67ac11..ec57fa4 100644
--- a/utils.c
+++ b/utils.c
@@ -535,6 +535,9 @@ run_ptests(struct ptest_list *head, const struct ptest_options opts,
 			entime = time(NULL);
 			duration = entime - sttime;
 
+			/* Now the child has exited, ensure buffers are in sync before writing */
+			fflush(NULL);
+
 			if (status) {
 				fprintf(fp, "\nERROR: Exit status is %d\n", status);
 				rc += 1;
-- 
2.39.2


View/Reply Online (#60333): https://lists.yoctoproject.org/g/yocto/message/60333



Re: [yocto] [yocto-autobuilder-helper][PATCH 0/3] fix test results storage for mickledore

2023-06-16 Thread Richard Purdie
On Fri, 2023-06-16 at 16:58 +0200, Alexis Lothoré wrote:
> On 6/15/23 22:34, Alexis Lothoré wrote:
> > Hello Richard, Michael,
> > On 6/15/23 15:41, Richard Purdie wrote:
> > > On Wed, 2023-06-14 at 10:56 +0200, Alexis Lothoré via 
> > > lists.yoctoproject.org wrote:
> > > > From: Alexis Lothoré 
> > > > 
> > > > There must be a more robust rework to do (because the issue will likely
> > > > happen on each major delivery), but I aimed for the quick and small fix 
> > > > to
> > > > quickly bring back tests results storage without breaking other things 
> > > > in
> > > > the process
> > > 
> > > Thanks, I've merged this as it is a good first set of steps.
> > > 
> > > As I mentioned, I think we should hardcode poky + "not ending with -
> > > next" as the test, then we shouldn't run into this issue again.
> > 
> > ACK, will do the fix
> > > 
> > > I'd also like to retroactively push the test results for 4.2 since we
> > > have them and should be able to merge them onto the branch. I'd then
> > > like to see what the revised 4.3 M1 report looks like.
> > 
> > I have started importing the archive kindly prepared by Michael in 
> > poky-contrib
> > test-results repository, but I am struggling a bit regarding regression 
> > report
> > generation with freshly imported result. I still have to confirm if it is 
> > the
> > generated tag that is faulty or if it is a kind of an edge case in 
> > resulttool
> 
> So, I have managed to generate the regression report locally (there's likely a
> tag issue for older tests stored in test-results to be circumvented in
> resulttool), and it is a bit disappointing. The report is 13MB large, and is
> filled once again with false positives, likely due to non-static ptest names,
> likely due to leaky build logs. Here's a sample
> 
> ptestresult.gcc-g++-user.c-c++-common/Wbidi-chars-ranges.c  -std=gnu++14
> expected multiline pattern lines 13-17 was found: "\s*/\* \}
> if \(isAdmin\)  begin admins only \*/[^\n\r]*\n
> \^\n
> \|   \|
> \|[^\n\r]*\n   \|   \|
> end of bidirectional context[^\n\r]*\n   U\+202E \(RIGHT-TO-LEFT
> OVERRIDE\) U\+2066 \(LEFT-TO-RIGHT ISOLATE\)[^\n\r]*\n": PASS -> None
> ptestresult.gcc-g++-user.c-c++-common/Wbidi-chars-ranges.c  -std=gnu++14
> expected multiline pattern lines 26-31 was found: " /\* end admins only
>  \{ \*/[^\n\r]*\n   
> \^\n\|  \|\|[^\n\r]*\n
>  \|  \|end of bidirectional context[^\n\r]*\n
> \|  U\+2066 \(LEFT-TO-RIGHT ISOLATE\)[^\n\r]*\n
>   U\+202E \(RIGHT-TO-LEFT OVERRIDE\)[^\n\r]*\n": PASS -> None
> 
> Most of this noise is about gcc ptests, there is also a bit about python3 and
> ltp. I manually trimmed gcc false positive to reach a reasonable size, here 
> it is:
> https://pastebin.com/rYZ3qYMK

Thanks for getting us the diff!

Going through the details there, most of it is "expected" due to
changes in version of the components. I did wonder if we could somehow
show that version change?

I'm starting to wonder if we should:

a) file two bugs for cleaning up the python3 and gcc test results
b) summarise the python3 and gcc test results in the processing rather
than printing in full if the differences exceed some threshold (40
changes?)

Basically we need to make this report useful somehow, even if we have
to exclude some data for now until we can better process it.

I'm open to other ideas...
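Option (b) could be prototyped in a few lines. A minimal sketch, assuming the report is reduced to (component, test, change) tuples and using the 40-change threshold floated above as an assumed knob:

```python
from collections import defaultdict

THRESHOLD = 40  # the cut-off suggested above; purely an assumed knob

def summarise(changes):
    """changes: iterable of (component, test_name, change_text) tuples."""
    per_component = defaultdict(list)
    for component, test, change in changes:
        per_component[component].append((test, change))
    report = []
    for component, items in sorted(per_component.items()):
        if len(items) > THRESHOLD:
            # Too noisy to print in full: collapse to a one-line summary.
            report.append("%s: %d changed results (summarised)" % (component, len(items)))
        else:
            report.extend("%s: %s %s" % (component, t, c) for t, c in items)
    return report
```

With this, a gcc block of thousands of `PASS -> None` lines collapses to a single count, while small deltas (e.g. a handful of ltp changes) still print in full.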

Cheers,

Richard


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#60328): https://lists.yoctoproject.org/g/yocto/message/60328
Mute This Topic: https://lists.yoctoproject.org/mt/99523809/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: 
https://lists.yoctoproject.org/g/yocto/leave/6691583/21656/737036229/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] [yocto-autobuilder-helper][PATCH 0/3] fix test results storage for mickledore

2023-06-15 Thread Richard Purdie
On Wed, 2023-06-14 at 10:56 +0200, Alexis Lothoré via lists.yoctoproject.org 
wrote:
> From: Alexis Lothoré 
> 
> This series is a follow-up for the 4.3_M1.rc1 regression report issue.
> 
> It has been observed that the report is empty. This issue is linked to
> configuration description in yocto-autobuilder-helper, and has been
> identified through the following steps:
> - empty report is supposed to be a comparison between yocto-4.2 (4.2.rc3)
>   and 4.3_M1.rc1
> - yocto-4.2 results are almost empty: we only find test results from Intel
>   QA (pushed _after_ the AB build) and not the AB test results
> - test results are managed by send-qa-email. send-qa-email uses resulttool
>   to systematically gather and store test results in local git directory
> - however, it looks for basebranch/comparebranch to know if those results
>   can be pushed onto git server, and those variables depend on config.json
>   content
> - yocto-4.2 (4.2.rc3) has been built on release branch mickledore
>   (https://autobuilder.yoctoproject.org/typhoon/#/builders/83/builds/5212)
> - since mickledore is not yet described in config.json, send-qa-email
>   considers it as a "work" branch (contrary to a "release" branch) and does
>   not push test results
> 
> As a consequence:
> - first commit brings in python logger
> - second commit adds a warning when such a case happens, since we are able to
>   detect it
> - third fix actually adds mickledore as a release branch to properly store
>   again test results
> 
> There must be a more robust rework to do (because the issue will likely
> happen on each major delivery), but I aimed for the quick and small fix to
> quickly bring back tests results storage without breaking other things in
> the process

Thanks, I've merged this as it is a good first set of steps.

As I mentioned, I think we should hardcode poky + "not ending with -
next" as the test, then we shouldn't run into this issue again.

I'd also like to retroactively push the test results for 4.2 since we
have them and should be able to merge them onto the branch. I'd then
like to see what the revised 4.3 M1 report looks like.

Cheers,

Richard

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#60309): https://lists.yoctoproject.org/g/yocto/message/60309
Mute This Topic: https://lists.yoctoproject.org/mt/99523809/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: 
https://lists.yoctoproject.org/g/yocto/leave/6691583/21656/737036229/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] [yocto-autobuilder-helper][PATCH] ab-janitor: only rm trashdir files over 60s old

2023-06-15 Thread Richard Purdie
On Tue, 2023-06-13 at 13:05 -0400, Trevor Gamblin wrote:
> [YOCTO #14952] -- https://bugzilla.yoctoproject.org/show_bug.cgi?id=14952
> 
> Help avoid contention by adding a check to ensure files and directories
> are at least 60s old when attempting to delete them from the trashdir,
> in case bitbake or another process is still actively using them.
> 
> Signed-off-by: Trevor Gamblin 
> ---
>  janitor/ab-janitor | 8 +++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/janitor/ab-janitor b/janitor/ab-janitor
> index 080b598..659817e 100755
> --- a/janitor/ab-janitor
> +++ b/janitor/ab-janitor
> @@ -43,7 +43,13 @@ def trash_processor(trashdir):
>  try:
>  files = os.listdir(trashdir)
>  if files:
> -os.system("nice -n 10 ionice -c 3 rm %s/* -rf" % trashdir)
> +for file in files:
> +file_path = trashdir + "/" + file
> +file_age = time.time() - os.path.getmtime(file_path)
> +if file_age >= 60:
> +os.system("nice -n 10 ionice -c 3 rm %s -rf" % file_path)
> +else:
> +print("Not removing '%s' - age is only %s seconds. There may be another process using it" % (file_path, str(int(file_age))))
>  else:
>  time.sleep(120*60) # 2 hours
>  except Exception as e:

Looks good, merged, thanks!

Cheers,

Richard

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#60308): https://lists.yoctoproject.org/g/yocto/message/60308
Mute This Topic: https://lists.yoctoproject.org/mt/99510262/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: 
https://lists.yoctoproject.org/g/yocto/leave/6691583/21656/737036229/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] #fakeroot #pseudo-native #xattr #attr-native #yocto

2023-06-14 Thread Richard Purdie
On Wed, 2023-06-14 at 08:48 -0700, jbarnesweb via lists.yoctoproject.org wrote:
> Thanks for your reply, Richard.
> 
> The  task was elided above. Here's what the recipe looks like now. (I removed 
> the fakeroot decorator from the function signature). It still stops with 
> Operation not permitted.
> 
> label_sign_verity() {
> ...
>     ${DUMPIMAGE} -T flat_dt -p 0 -o ${SOCOS_IMAGEDIR}/Image ${FITIMAGE_PATH} 
> #Does this contribute to the error? 
> ...
>     echo ${WHOAMI} >>${WORKDIR}/socosupdate_sign_and_package.txt # prints root
> ...
>     ${SETFATTR} -n security.ima -v 
> 0x${IMA_HEADER}${HASH_ALGO}${KEY_ID}${SIG_LENGTH}${DTB_SIG} 
> ${SOCOS_IMAGEDIR}/Image # Operation not permitted.
> }
> ...
> fakeroot do_sign_and_package() {
> ...
>     label_sign_verity ${SOCOSUPDATEDIR}/ ${FITIMAGE_PATH} 
> >>${WORKDIR}/socosupdate_sign_and_package.txt
> }
> ...
> addtask sign_and_package after do_deploy before do_build


Extended attributes aren't something we've done a lot of work with. In
theory pseudo does have support for them but it does get complicated as
there are things we need to emulate and things we shouldn't as they
relate to the host system.

It looks like pseudo does have xattr enabled by default. it is possible
setfattr is calling some glibc function we don't intercept, or it is
calling a syscall directly and bypassing glibc.

The code in pseudo is here:

https://git.yoctoproject.org/pseudo/tree/ports/linux/xattr/pseudo_wrappers.c#n155

called from here:

https://git.yoctoproject.org/pseudo/tree/ports/linux/xattr/guts/fsetxattr.c


You could put:

export PSEUDO_DEBUG = "nfoPdeViDxywc"

into the recipe in question and have a look at what the pseudo logfile
says (${WORKDIR}/pseudo/pseudo.log), see if it is catching this at all.
It will log a lot of info with that.

Cheers,

Richard


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#60307): https://lists.yoctoproject.org/g/yocto/message/60307
Mute This Topic: https://lists.yoctoproject.org/mt/99511297/21656
Mute #yocto:https://lists.yoctoproject.org/g/yocto/mutehashtag/yocto
Mute #fakeroot:https://lists.yoctoproject.org/g/yocto/mutehashtag/fakeroot
Mute 
#pseudo-native:https://lists.yoctoproject.org/g/yocto/mutehashtag/pseudo-native
Mute #xattr:https://lists.yoctoproject.org/g/yocto/mutehashtag/xattr
Mute #attr-native:https://lists.yoctoproject.org/g/yocto/mutehashtag/attr-native
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: 
https://lists.yoctoproject.org/g/yocto/leave/6691583/21656/737036229/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] [yocto-autobuilder-helper][PATCH 0/3] fix test results storage for mickledore

2023-06-14 Thread Richard Purdie
On Wed, 2023-06-14 at 16:29 +0200, Alexandre Belloni wrote:
> On 14/06/2023 14:15:54+0200, Alexis Lothoré wrote:
> > On 6/14/23 12:31, Richard Purdie wrote:
> > > On Wed, 2023-06-14 at 10:56 +0200, Alexis Lothoré via
> > > lists.yoctoproject.org wrote:
> > > > From: Alexis Lothoré 
> > > > 
> > > > This series is a follow-up for the 4.3_M1.rc1 regression report issue.
> > > > 
> > > > It has been observed that the report is empty. This issue is linked to
> > > > configuration description in yocto-autobuilder-helper, and has been
> > > > identified through the following steps:
> > > > - empty report is supposed to be a comparison between yocto-4.2 
> > > > (4.2.rc3)
> > > >   and 4.3_M1.rc1
> > > > - yocto-4.2 results are almost empty: we only find test results from 
> > > > Intel
> > > >   QA (pushed _after_ the AB build) and not the AB test results
> > > > - test results are managed by send-qa-email. send-qa-email uses 
> > > > resulttool
> > > >   to systematically gather and store test results in local git directory
> > > > - however, it looks for basebranch/comparebranch to know if those 
> > > > results
> > > >   can be pushed onto git server, and those variables depend on 
> > > > config.json
> > > >   content
> > > > - yocto-4.2 (4.2.rc3) has been built on release branch mickledore
> > > >   
> > > > (https://autobuilder.yoctoproject.org/typhoon/#/builders/83/builds/5212)
> > > > - since mickledore is not yet described in config.json, send-qa-email
> > > >   considers it as a "work" branch (contrary to a "release" branch) and 
> > > > does
> > > >   not push test results
> > > > 
> > > > As a consequence:
> > > > - first commit brings in python logger
> > > > - second commit adds a warning when such case happen, since we are able 
> > > > to
> > > >   detect it
> > > > - third fix actually adds mickledore as a release branch to properly 
> > > > store
> > > >   again test results
> > > > 
> > > > There must be a more robust rework to do (because the issue will likely
> > > > happen on each major delivery), but I aimed for the quick and small fix 
> > > > to
> > > > quickly bring back tests results storage without breaking other things 
> > > > in
> > > > the process
> > > > 
> > > > Alexis Lothoré (3):
> > > >   scripts/send-qa-email: use logger instead of raw prints
> > > >   scripts/send-qa-email: print warning when test results are not stored
> > > >   config.json: add mickledore as direct push branch for test results
> > > 
> > > Thanks for the analysis. I agree we need to somehow fix this properly.
> > > One solution might be to always push for poky if the branch name
> > > doesn't end with -next?
> > 
> > That might work indeed. If we are sure enough that no custom/feature branch 
> > will
> > be used in poky with send-qa-email (ie, only in poky-contrib), I can do the 
> > fix
> > this way
> 
> I sometimes use a different branch name when testing things out (like 64
> bit time) but as long as we all know, we can probably ensure this ends in
> -next.

That would always be in poky-contrib though?

Cheers,

Richard

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#60304): https://lists.yoctoproject.org/g/yocto/message/60304
Mute This Topic: https://lists.yoctoproject.org/mt/99523809/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: 
https://lists.yoctoproject.org/g/yocto/leave/6691583/21656/737036229/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] [yocto-autobuilder-helper][PATCH 0/3] fix test results storage for mickledore

2023-06-14 Thread Richard Purdie
On Wed, 2023-06-14 at 10:56 +0200, Alexis Lothoré via
lists.yoctoproject.org wrote:
> From: Alexis Lothoré 
> 
> This series is a follow-up for the 4.3_M1.rc1 regression report issue.
> 
> It has been observed that the report is empty. This issue is linked to
> configuration description in yocto-autobuilder-helper, and has been
> identified through the following steps:
> - empty report is supposed to be a comparison between yocto-4.2 (4.2.rc3)
>   and 4.3_M1.rc1
> - yocto-4.2 results are almost empty: we only find test results from Intel
>   QA (pushed _after_ the AB build) and not the AB test results
> - test results are managed by send-qa-email. send-qa-email uses resulttool
>   to systematically gather and store test results in local git directory
> - however, it looks for basebranch/comparebranch to know if those results
>   can be pushed onto git server, and those variables depend on config.json
>   content
> - yocto-4.2 (4.2.rc3) has been built on release branch mickledore
>   (https://autobuilder.yoctoproject.org/typhoon/#/builders/83/builds/5212)
> - since mickledore is not yet described in config.json, send-qa-email
>   considers it as a "work" branch (contrary to a "release" branch) and does
>   not push test results
> 
> As a consequence:
> - first commit brings in python logger
> - second commit adds a warning when such a case happens, since we are able to
>   detect it
> - third fix actually adds mickledore as a release branch to properly store
>   again test results
> 
> There must be a more robust rework to do (because the issue will likely
> happen on each major delivery), but I aimed for the quick and small fix to
> quickly bring back tests results storage without breaking other things in
> the process
> 
> Alexis Lothoré (3):
>   scripts/send-qa-email: use logger instead of raw prints
>   scripts/send-qa-email: print warning when test results are not stored
>   config.json: add mickledore as direct push branch for test results

Thanks for the analysis. I agree we need to somehow fix this properly.
One solution might be to always push for poky if the branch name
doesn't end with -next?

Since we have the release artefacts for the release, could we add the
test results after the fact now?

I'd be interested to see the 4.3 M1 to 4.2 comparison rerun with that
added.

Cheers,

Richard

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#60297): https://lists.yoctoproject.org/g/yocto/message/60297
Mute This Topic: https://lists.yoctoproject.org/mt/99523809/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: 
https://lists.yoctoproject.org/g/yocto/leave/6691583/21656/737036229/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] #fakeroot #pseudo-native #xattr #attr-native #yocto

2023-06-13 Thread Richard Purdie
On Tue, 2023-06-13 at 10:55 -0700, jbarnesweb via lists.yoctoproject.org wrote:
> I'm trying to understand why I get an Operation not permitted error (errno 
> EPERM) with a fakeroot function that calls setfattr.
> Using yocto packages pseudo-native/attr-native
>  
> In my recipe, this function is called
>  
> fakeroot label_sign_verity() {
> ...
>     ls -l  ${FITIMAGE_PATH} >>${WORKDIR}/socosupdate_sign_and_package.txt
> #prints -rw-r--r-- 2 1000 1000 55097242 Jun 12 20:35 
> tmp/deploy/images/fitImage-initramfs-nested-rootfs-overlake-overlake
> #1000 is the $USER uid
>  
>     ${DUMPIMAGE} -T flat_dt -p 0 -o ${SOCOS_IMAGEDIR}/Image ${FITIMAGE_PATH}
>  
> ...
>  
>     echo $(ls -l ${SOCOS_IMAGEDIR}/Image) 
> >>${WORKDIR}/socosupdate_sign_and_package.txt
> #prints -rw-r--r-- 2 1000 1000 55097242 Jun 12 20:35 tmp/deploy/images/Image
>  
> echo PSEUDO_PREFIX=${PSEUDO_PREFIX} 
> PSEUDO_LOCALSTATEDIR=${PSEUDO_LOCALSTATEDIR} ${FAKEROOTCMD} ${SETFATTR} -n 
> security.ima -v ${IMA_HEADER}${HASH_ALGO}${KEY_ID}${SIG_LENGTH}${IMAGE_SIG} 
> ${SOCOS_IMAGEDIR}/Image >>${WORKDIR}/socosupdate_sign_and_package.txt
>  
> #prints 
> PSEUDO_PREFIX=/datadisk/LSG/main/LSG-linux-yocto/tmp/sysroots-components/x86_64/pseudo-native/usr
>  
> PSEUDO_LOCALSTATEDIR=/datadisk/LSG/main/LSG-linux-yocto/tmp/work/aarch64-msft-linux/socosupdate/1.0-r0/pseudo/
>  
> /datadisk/LSG/main/LSG-linux-yocto/tmp/sysroots-components/x86_64/pseudo-native/usr/bin/pseudo
>  
> /datadisk/LSG/main/LSG-linux-yocto/tmp/sysroots-components/x86_64/attr-native/usr/bin/setfattr
>  -n security.ima -v 
> 03020462ff80020100077DC6A2B32D1313FE072B1A44C10CC94C5FDE507A1B8A16ED5836E49A95EB4C1AC53D7A2E2D5E74F2C972B033F83FD405A5AD68DF6B8954EBB8803AF7A774D4573283A1AA0694B38379E8BF94E0B4378EF865E6606BB3E9534A0B916813F6C64DE53AB2B1B7CE579B54ADBA702BA7CF3F60B17769318642804950EADB2E6417F24FFA88F8684B5F319D9731AC07B9B3BD82A52BFDFAD4D6212C5E5D9BA3F4249385F3DC67E21DEA99B00FDECE224B99142FCB81840D93A6AE007F799484B70B1399A674378C4001D03F7529A69F851E642374118E53AC24F1E8849ECD1BC69E75A120E5408E3B73C26D47C658DEA3A3BC1EF7B7738AB75B94B3B6DC06155EEF
>  /datadisk/LSG/main/LSG-linux-y
> octo/tmp/work/aarch64-msft-linux/socosupdate/1.0-r0/deploy-socosupdate/socosupdate1908//squashfs-root/opt/msft/ap/app/SocOsUpdate/images/Image
> #If I run the above in the same shell that I ran bitbake, it succeeds.
>  
> ${SETFATTR} -n security.ima -v 
> ${IMA_HEADER}${HASH_ALGO}${KEY_ID}${SIG_LENGTH}${INITRD_SIG} 
> ${SOCOS_IMAGEDIR}/Image 
> # above fails with Operation not permitted, with or without fakeroot 
> decoration in function signature.
>  
> I've also tried running the command with the pseudo preamble directly
> inside the recipe with and without the fakeroot decoration in the
> function signature. Fails the same way. When I echo $(whoami), it
> says root when fakeroot is used and $USER when not.

fakeroot context is a task level operation, not a function specific
one. Have you tried running the task calling this function under
fakeroot?

Cheers,

Richard

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#60282): https://lists.yoctoproject.org/g/yocto/message/60282
Mute This Topic: https://lists.yoctoproject.org/mt/99511297/21656
Mute #yocto:https://lists.yoctoproject.org/g/yocto/mutehashtag/yocto
Mute #fakeroot:https://lists.yoctoproject.org/g/yocto/mutehashtag/fakeroot
Mute 
#pseudo-native:https://lists.yoctoproject.org/g/yocto/mutehashtag/pseudo-native
Mute #xattr:https://lists.yoctoproject.org/g/yocto/mutehashtag/xattr
Mute #attr-native:https://lists.yoctoproject.org/g/yocto/mutehashtag/attr-native
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: 
https://lists.yoctoproject.org/g/yocto/leave/6691583/21656/737036229/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] [yocto-autobuilder-helper][PATCH] clobberdir: only move dirs more than 60s old

2023-06-13 Thread Richard Purdie
On Wed, 2023-06-07 at 15:41 -0400, Trevor Gamblin wrote:
> See: https://bugzilla.yoctoproject.org/show_bug.cgi?id=14952
> 
> Help avoid contention by adding a check to ensure targets are at
> least 60s old when attempting to move them into a trash directory.
> 
> Signed-off-by: Trevor Gamblin 
> ---
>  janitor/clobberdir | 10 +++---
>  1 file changed, 7 insertions(+), 3 deletions(-)
> 
> diff --git a/janitor/clobberdir b/janitor/clobberdir
> index 16b019e..832173c 100755
> --- a/janitor/clobberdir
> +++ b/janitor/clobberdir
> @@ -52,8 +52,12 @@ trashdir = utils.getconfig("TRASH_DIR", ourconfig)
>  for x in [clobberdir]:
>  if os.path.exists(x) and os.path.exists(trashdir):
>  if (os.stat(trashdir).st_dev == os.stat(x).st_dev):
> -trashdest = trashdir + "/" + str(int(time.time())) + '-'  + str(random.randrange(100, 10, 2))
> -mkdir(trashdest)
> -subprocess.check_call(['mv', x, trashdest])
> +x_age = time.time() - os.path.getmtime(x)
> +if x_age > 60:
> +trashdest = trashdir + "/" + str(int(time.time())) + '-'  + str(random.randrange(100, 10, 2))
> +mkdir(trashdest)
> +subprocess.check_call(['mv', x, trashdest])
> +else:
> +print("Not moving '%s' - age is only %s seconds. There may be another process using it" % (x, str(int(x_age))))
>  else:
>  subprocess.check_call(['rm', "-rf", x])

I think we need to do this on the other side of the setup.

If we ask to run clobberdir, it needs to succeed and I don't think the
move causes us issues. What does cause problems is if a bitbake server
was still active on a build directory and shutting down. If we move the
directory from under it, that is ok, what causes problems is if we
actually delete it.

The age check therefore needs to be in trash_processor() in janitor/ab-
janitor.
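The relocated check could be little more than an mtime guard inside trash_processor(). A sketch, assuming 60 seconds remains the chosen threshold (helper name is illustrative):

```python
import os
import time

MIN_AGE_S = 60  # assumed threshold for "safe to delete"

def old_enough(path, now=None):
    """True if path was last modified at least MIN_AGE_S seconds ago,
    i.e. no bitbake server is likely still writing into it."""
    now = time.time() if now is None else now
    return (now - os.path.getmtime(path)) >= MIN_AGE_S
```

clobberdir can then always succeed at moving directories into the trash, and only the janitor's delete pass consults `old_enough()` before removing them.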

Cheers,

Richard


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#60270): https://lists.yoctoproject.org/g/yocto/message/60270
Mute This Topic: https://lists.yoctoproject.org/mt/99392420/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: 
https://lists.yoctoproject.org/g/yocto/leave/6691583/21656/737036229/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] sstate-cache issue with SPDX enabled

2023-06-08 Thread Richard Purdie
On Wed, 2023-06-07 at 08:32 -0700, m...@ambarella.com wrote:
> We’re using Poky Kirkstone(LTS) with the latest version for our
> SDK. Recently, we're trying to enable the feature of creating SBOM in
> SPDX. But we met a sstate-cache checking failure issue when enabling
> SPDX to generate SBOM. It will be appreciated if you have any idea
> for the root cause. Refer to below details for the issue.
> https://docs.yoctoproject.org/4.0.10/singleindex.html#creating-a-software-bill-of-materials

We did work on this in master recently and there have been fixes merged
there, after which we enabled SPDX by default. 

Unfortunately they were not straightforward and this may make it tricky
to backport into the older releases. We're still testing them in master
before considering that.

Cheers,

Richard

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#60235): https://lists.yoctoproject.org/g/yocto/message/60235
Mute This Topic: https://lists.yoctoproject.org/mt/99403018/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: 
https://lists.yoctoproject.org/g/yocto/leave/6691583/21656/737036229/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] [meta-mingw] [PATCH] Ignore WINDRES from hash

2023-06-08 Thread Richard Purdie
On Thu, 2023-06-08 at 07:03 +, Samuli Piippo wrote:
> Amend 6badedaa3e4df5b00b5aa992c5c174e315b09972 and add WINDRES
> to be ignored from hash, otherwise it contaminates sstate cache
> for every recipe.
> 
> Signed-off-by: Samuli Piippo 
> ---
>  conf/machine-sdk/include/mingw32-common.inc | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/conf/machine-sdk/include/mingw32-common.inc 
> b/conf/machine-sdk/include/mingw32-common.inc
> index 9ec6e07..f2806d8 100644
> --- a/conf/machine-sdk/include/mingw32-common.inc
> +++ b/conf/machine-sdk/include/mingw32-common.inc
> @@ -46,7 +46,7 @@ export WINDMC
>  export WINDRES
>  export RC
>  
> -BB_BASEHASH_IGNORE_VARS:append = " WINDRES RC"
> +BB_BASEHASH_IGNORE_VARS:append = " WINDRES RC WINDRES"

Isn't WINDRES already listed there?

Cheers,

Richard

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#60225): https://lists.yoctoproject.org/g/yocto/message/60225
Mute This Topic: https://lists.yoctoproject.org/mt/99401942/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: 
https://lists.yoctoproject.org/g/yocto/leave/6691583/21656/737036229/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] Register out-of-tree fetcher with devtool

2023-06-04 Thread Richard Purdie
On Fri, 2023-06-02 at 04:47 +, Weihmann, Konrad (Avnet Embedded)
wrote:
> 
> 
> 
> Hi all,
>  
> we do have an out-of-tree fetcher that I would like to make use of
> with devtool, for instance for upgrade checking.
> Within our recipes the fetcher is registered by this workaround
> 
> python () {
>     import foo
>     bb.fetch2.methods.append(foo.FooFetcher())
> }
>  
> Which isn’t the nicest possible solution still does the trick.
>  
> But that doesn’t work for devtool, as it seems to not take any but
> oe-core lib-paths into consideration.
> 
> For devtool check-upgrade-status 
> 
> I get something like “no handler for foo://… found”.
> 
> Is there any way to register this out-of-tree fetcher module so
> tinfoil/devtool can “see” them?

Currently, no, there isn't a good API for doing this. It is something
I've wondered how to fix for a while.

The addpylib directive we recently added does go some way to allowing
layers to programmatically add python modules. There might be options to
build upon that to extend the fetcher from there?

Cheers,

Richard

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#60174): https://lists.yoctoproject.org/g/yocto/message/60174
Mute This Topic: https://lists.yoctoproject.org/mt/99280594/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: 
https://lists.yoctoproject.org/g/yocto/leave/6691583/21656/737036229/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] #Yocto #GitHub mirror - tags

2023-06-02 Thread Richard Purdie
On Fri, 2023-06-02 at 01:03 -0700, Nemo wrote:
> Hi,
> I was using the GitHub mirror for shallow clones, and noticed that
> while it's syncing commits, the tags aren't being pushed there.
> The last sync with tags happened in February, so it's missing all tags
> since.

I've been trying to get this automated but I've resolved it for now.

Cheers,

Richard


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#60164): https://lists.yoctoproject.org/g/yocto/message/60164
Mute This Topic: https://lists.yoctoproject.org/mt/99281996/21656
Mute #yocto:https://lists.yoctoproject.org/g/yocto/mutehashtag/yocto
Mute #github:https://lists.yoctoproject.org/g/yocto/mutehashtag/github
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: 
https://lists.yoctoproject.org/g/yocto/leave/6691583/21656/737036229/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] libstdc++.so.6: version `GLIBCXX_3.4.30' not found (required by bin/cmake) #hardknott

2023-05-24 Thread Richard Purdie
On Wed, 2023-05-24 at 05:04 -0700, Mahendra Sondagar wrote:
> Hi.. community 
> Hope all are doing well 
>  
> I'm dealing with the Yocto hardknott with my Ubuntu host 
> The Ubuntu version is Ubuntu 22.04.2 LTS
>  
> During the building of the custom image, everything goes well until
> the build process hits the below error message 
>  
> | make  -f CMakeFiles/Makefile2 preinstall
> | make[1]: Entering directory '/home/aio/AIO_Systems_Yocto/var-fslc-
> yocto/build_aio/tmp/work/x86_64-linux/cmake-native/3.19.5-r0/build'
> | make[1]: Nothing to be done for 'preinstall'.
> | make[1]: Leaving directory '/home/aio/AIO_Systems_Yocto/var-fslc-
> yocto/build_aio/tmp/work/x86_64-linux/cmake-native/3.19.5-r0/build'
> | ERROR: oe_runmake failed
> | Install the project...
> | bin/cmake -P cmake_install.cmake
> | WARNING: exit code 1 from a shell command.
> | bin/cmake: /home/aio/AIO_Systems_Yocto/var-fslc-
> yocto/build_aio/tmp/sysroots-uninative/x86_64-
> linux/usr/lib/libstdc++.so.6: version `GLIBCXX_3.4.30' not found
> (required by bin/cmake)
> | make: *** [Makefile:162: install] Error 1
> | ERROR: ExecutionError('/home/aio/AIO_Systems_Yocto/var-fslc-
> yocto/build_aio/tmp/work/x86_64-linux/cmake-native/3.19.5-
> r0/temp/run.do_install.194605', 1, None, None)
>  
> for instance, I have also verified the installed version of the GLIBC
> via the below command 
> strings /usr/lib/x86_64-linux-gnu/libstdc++.so.6 | grep GLIBCXX
>  
> From where I got to know that, it's already installed!
>  
> Can anyone help me here to resolve the issue?
> All comments and suggestions welcome  

Upgrading uninative to a version from master will probably help fix
that. As Khem mentions, hardknott is no longer supported though.
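The `strings | grep GLIBCXX` check the reporter ran inspects the host library, but the error is about the copy under `tmp/sysroots-uninative`; comparing the two outputs shows whether the uninative libstdc++ is the one missing the tag. A small helper for that comparison (regex and function names are illustrative):

```python
import re

def glibcxx_versions(strings_output):
    """Collect GLIBCXX_* version tags from `strings libstdc++.so.6` output.
    The sort is plain lexicographic, which is fine for display purposes."""
    return sorted(set(re.findall(r"GLIBCXX_\d+\.\d+(?:\.\d+)?", strings_output)))

def satisfies(strings_output, needed="GLIBCXX_3.4.30"):
    # True if the library advertises the version tag cmake complained about.
    return needed in glibcxx_versions(strings_output)
```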

Cheers,

Richard

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#60116): https://lists.yoctoproject.org/g/yocto/message/60116
Mute This Topic: https://lists.yoctoproject.org/mt/99106807/21656
Mute #hardknott:https://lists.yoctoproject.org/g/yocto/mutehashtag/hardknott
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: 
https://lists.yoctoproject.org/g/yocto/leave/6691583/21656/737036229/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] [meta-darwin][PATCH 1/2] Add new maintainers

2023-05-17 Thread Richard Purdie
On Wed, 2023-05-17 at 08:15 +0200, Etienne Cordonnier via
lists.yoctoproject.org wrote:
> From: Etienne Cordonnier 
> 
> Signed-off-by: Etienne Cordonnier 
> ---
>  README | 5 -
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/README b/README
> index a58ab39..e4ac9bc 100644
> --- a/README
> +++ b/README
> @@ -48,7 +48,10 @@ This layer depends on:
>   layers: meta-clang
>   branch: kirkstone
>  
> -Layer Maintainer: Richard Purdie 
> +Layer Maintainers: Richard Purdie 
> +   Dominik Schnitzer 
> +   Etienne Cordonnier 
> +
>  
>  Please send changes to the yocto mailing list with [meta-darwin] in the 
> subject line,
>  cc'ing the maintainer.

Thanks, I've merged these. It is great to see the layer being used!

Cheers,

Richard

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#60040): https://lists.yoctoproject.org/g/yocto/message/60040
Mute This Topic: https://lists.yoctoproject.org/mt/98943261/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: 
https://lists.yoctoproject.org/g/yocto/leave/6691583/21656/737036229/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [qa-build-notification] [yocto] QA notification for completed autobuilder build (yocto-3.1.25.rc1)

2023-05-11 Thread Richard Purdie
On Thu, 2023-05-11 at 03:38 +, Jing Hui Tham wrote:
> Hi All,
>  
> QA for yocto-3.1.25.rc1 is completed. This is the full report for this 
> release:  
> https://git.yoctoproject.org/cgit/cgit.cgi/yocto-testresults-contrib/tree/?h=intel-yocto-testresults
>  
> === Summary 
> No high milestone defects.
>  
> No new issue found. 

Since this was a green build with no issues found, I can approve this,
thanks!

Cheers,

Richard

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#59980): https://lists.yoctoproject.org/g/yocto/message/59980
Mute This Topic: https://lists.yoctoproject.org/mt/98822460/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: 
https://lists.yoctoproject.org/g/yocto/leave/6691583/21656/737036229/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [qa-build-notification] [yocto] QA notification for completed autobuilder build (yocto-4.1.4.rc1)

2023-05-11 Thread Richard Purdie
On Mon, 2023-05-08 at 02:55 +, Jing Hui Tham wrote:
> Hi All,
>  
> QA for yocto-4.1.4.rc1 is completed. This is the full report for this 
> release:  
> https://git.yoctoproject.org/cgit/cgit.cgi/yocto-testresults-contrib/tree/?h=intel-yocto-testresults
>  
> === Summary 
> No high milestone defects.
>  
> No new issue found. 

Approved from the TSC, thanks!

Cheers,

Richard

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#59979): https://lists.yoctoproject.org/g/yocto/message/59979
Mute This Topic: https://lists.yoctoproject.org/mt/98822458/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: 
https://lists.yoctoproject.org/g/yocto/leave/6691583/21656/737036229/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] Yocto Project Community Manager updates

2023-05-04 Thread Richard Purdie
On Thu, 2023-05-04 at 16:40 +0200, Nicolas Dechesne wrote:
> After five years, I have decided to resign from my position as the
> Yocto Project Community Manager. I joined the OpenEmbedded community
> around 2008. I have fond memories of my early days, and still
> remember some of my first interactions on IRC and mailing lists! This
> is a very welcoming community, always helping new people with
> patience and kindness. Serving the project during the last five years
> is something I am very proud of, and I will be forever grateful to
> all of you for accepting me!

I know we've talked but I did want to publicly say thanks for
everything you've done in the role! 

We've had some good times and overcome some challenges in the last few
years, the project has definitely benefited from your involvement and
you'll be sorely missed.

Welcome to the role Josef, no pressure! :)

Cheers,

Richard

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#59872): https://lists.yoctoproject.org/g/yocto/message/59872
Mute This Topic: https://lists.yoctoproject.org/mt/98685138/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: 
https://lists.yoctoproject.org/g/yocto/leave/6691583/21656/737036229/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[yocto] 4.2 release going to rc3

2023-04-22 Thread Richard Purdie
Hi,

Whilst we had a green QA report and were thinking rc2 would release, we
did get a user report of problems with graphics on qemu and sato was
inoperable (bug 15100).

Sadly our QA automation doesn't catch an issue like this, and it must
not have shown up on real hardware.

Due to the nature of the bug and the fact we have a fix (kernel upgrade
from 6.1.21 to 6.1.24), we're going to build an rc3. I was able to
include many of the migration guide and release note changes too.

It is building on the autobuilder at the moment. I have chosen to include
some CVE fixes and fairly critical bitbake fixes too since we are
rebuilding and they were available.

Cheers,

Richard





-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#59760): https://lists.yoctoproject.org/g/yocto/message/59760
Mute This Topic: https://lists.yoctoproject.org/mt/98431099/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: 
https://lists.yoctoproject.org/g/yocto/leave/6691583/21656/737036229/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] [qa-build-notification] QA notification for completed autobuilder build (yocto-4.2.rc1)

2023-04-06 Thread Richard Purdie
Hi Jing Hui,

Unfortunately there is an issue that has come up regarding the recent
crate fetcher changes. This means we're going to need to move to an rc2
build so we should probably stop testing rc1.

I'll hopefully have that ready soon. Sorry about the churn.

Cheers,

Richard


On Thu, 2023-04-06 at 09:06 +, Jing Hui Tham wrote:
> Hi all,
>  
> Intel and WR YP QA is planning for QA execution for YP build yocto-4.2.rc1. 
> We are planning to execute following tests for this cycle:
>  
> OEQA-manual tests for following module:
> 1. OE-Core
> 2. BSP-hw
>  
> Runtime auto test for following platforms:
> 1. MinnowTurbot 32-bit
> 2. NUC 7
> 3. ADL
> 4. TGL NUC 11
> 5. Edgerouter
> 6. Beaglebone
>  
> ETA for completion next Tuesday, April 11.
>  
> Best regards,
> Jing Hui
> 
> 
> 
> > -Original Message-
> > From: qa-build-notificat...@lists.yoctoproject.org  > notificat...@lists.yoctoproject.org> On Behalf Of Richard Purdie
> > Sent: Thursday, April 6, 2023 3:57 PM
> > To:  
> > Cc: qa-build-notification 
> > Subject: [qa-build-notification] QA notification for completed autobuilder
> > build (yocto-4.2.rc1)
> > 
> > A build flagged for QA (yocto-4.2.rc1) was completed on the autobuilder and
> > is available at:
> > 
> > 
> > https://autobuilder.yocto.io/pub/releases/yocto-4.2.rc1
> > 
> > 
> > Build hash information:
> > 
> > bitbake: 46e1ea2e9a203992bb4de48ea21a8e736419ada2
> > meta-agl: 09135164a21a216c6e3e75d7decce896b92962f0
> > meta-arm: 3b4ab43d2f82b9db682a88dff4665807b62d1870
> > meta-aws: d243fc1ad45dde57b4caa182a92b685540475cb2
> > meta-intel: ec3147abdd0727fedc707330d58593097357f944
> > meta-mingw: 250617ffa524c082b848487359b9d045703d59c2
> > meta-openembedded: 60afa577e5656a7c82c902da11adc031d71ef8db
> > meta-virtualization: 48c07a48a0e7d56266f20a01bb0c8ebc1085453b
> > oecore: 5a96847e7ef2ff72958d739a91c90e2085c04bc6
> > poky: 08d564bd92fc476d7f0b4a6201b752e51cf93776
> > 
> > 
> > This is an automated message from the Yocto Project Autobuilder
> > Git: git://git.yoctoproject.org/yocto-autobuilder2
> > Email: richard.pur...@linuxfoundation.org
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> 
> 
> 
> 
> 
> 
> 


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#59627): https://lists.yoctoproject.org/g/yocto/message/59627
Mute This Topic: https://lists.yoctoproject.org/mt/98101280/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: 
https://lists.yoctoproject.org/g/yocto/leave/6691583/21656/737036229/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[yocto] QA notification for completed autobuilder build (yocto-4.2.rc1)

2023-04-06 Thread Richard Purdie
A build flagged for QA (yocto-4.2.rc1) was completed on the autobuilder and is 
available at:


https://autobuilder.yocto.io/pub/releases/yocto-4.2.rc1


Build hash information: 

bitbake: 46e1ea2e9a203992bb4de48ea21a8e736419ada2
meta-agl: 09135164a21a216c6e3e75d7decce896b92962f0
meta-arm: 3b4ab43d2f82b9db682a88dff4665807b62d1870
meta-aws: d243fc1ad45dde57b4caa182a92b685540475cb2
meta-intel: ec3147abdd0727fedc707330d58593097357f944
meta-mingw: 250617ffa524c082b848487359b9d045703d59c2
meta-openembedded: 60afa577e5656a7c82c902da11adc031d71ef8db
meta-virtualization: 48c07a48a0e7d56266f20a01bb0c8ebc1085453b
oecore: 5a96847e7ef2ff72958d739a91c90e2085c04bc6
poky: 08d564bd92fc476d7f0b4a6201b752e51cf93776


This is an automated message from the Yocto Project Autobuilder
Git: git://git.yoctoproject.org/yocto-autobuilder2
Email: richard.pur...@linuxfoundation.org




-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#59625): https://lists.yoctoproject.org/g/yocto/message/59625
Mute This Topic: https://lists.yoctoproject.org/mt/98100539/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: 
https://lists.yoctoproject.org/g/yocto/leave/6691583/21656/737036229/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] [yocto-autobuilder-helper][PATCH 2/2] scripts/generate-testresult-index.py: expose regression reports on web page

2023-03-24 Thread Richard Purdie
On Fri, 2023-03-24 at 10:00 +0100, Alexis Lothoré via
lists.yoctoproject.org wrote:
> From: Alexis Lothoré 
> 
> When available, expose tesresult-regressions-report.txt on non-release web 
> page,
> as it is done for many other artifacts currently
> 
> Signed-off-by: Alexis Lothoré 
> ---
>  scripts/generate-testresult-index.py | 8 +++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/scripts/generate-testresult-index.py 
> b/scripts/generate-testresult-index.py
> index 09d2edb..122bac1 100755
> --- a/scripts/generate-testresult-index.py
> +++ b/scripts/generate-testresult-index.py
> @@ -30,6 +30,7 @@ index_template = """
>Type
>Branch
>Test Results Report
> +  Regressions Report
>Performance Reports
>ptest Logs
>Buildhistory
> @@ -43,6 +44,7 @@ index_template = """
> {% if entry[2] %} {{entry[2]}}{% endif %}
> {% if entry[4] %} {{entry[4]}}{% endif %}
>  {% if entry[3] %}Report{% endif %} 
> +{% if entry[9] %}Report{% endif %} 
> 
> {% for perfrep in entry[6] %}
>   {{perfrep[1]}}
> @@ -129,6 +131,10 @@ for build in sorted(os.listdir(path), key=keygen, 
> reverse=True):
>  if os.path.exists(buildpath + "/testresult-report.txt"):
>  testreport = reldir + "testresults/testresult-report.txt"
>  
> +regressionreport = ""
> +if os.path.exists(buildpath + "/testresult-regressions-report.txt"):
> +regressionreport = reldir + 
> "testresults/testresult-regressions-report.txt"
> +
>  ptestlogs = []
>  ptestseen = []
>  for p in glob.glob(buildpath + "/*-ptest/*.log"):
> @@ -165,7 +171,7 @@ for build in sorted(os.listdir(path), key=keygen, 
> reverse=True):
>  
>  branch = get_build_branch(buildpath)
>  
> -entries.append((build, reldir, btype, testreport, branch, buildhistory, 
> perfreports, ptestlogs, hd))
> +entries.append((build, reldir, btype, testreport, branch, buildhistory, 
> perfreports, ptestlogs, hd, regressionreport))
>  

In the interests of keeping that index page a manageable size, instead
of a new data column, I'd suggest we just add the link in the same TD
cell with the name "Regression"?
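
Something like this, perhaps (a hypothetical sketch only — the archive
has stripped the HTML tags from the quoted patch, so the anchor markup
here is reconstructed from context rather than taken from the file):

```
<td>
  {% if entry[3] %}<a href="{{entry[3]}}">Report</a>{% endif %}
  {% if entry[9] %}<a href="{{entry[9]}}">Regression</a>{% endif %}
</td>
```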

Cheers,

Richard


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#59508): https://lists.yoctoproject.org/g/yocto/message/59508
Mute This Topic: https://lists.yoctoproject.org/mt/97819687/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: 
https://lists.yoctoproject.org/g/yocto/leave/6691583/21656/737036229/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] [yocto-autobuilder-helper][PATCH 0/8] fix regression reports generation on "master-next" branches

2023-03-22 Thread Richard Purdie
On Mon, 2023-03-13 at 15:51 +0100, Alexis Lothoré via
lists.yoctoproject.org wrote:
> From: Alexis Lothoré 
> 
> This series fixes regression report generation on "next" branches, as raised 
> in
> [1].
> 
> The first five patches are preparatory updates for the real fix, being either
> refactoring, cleanup or unit tests addition to better understand how 
> integration
> branches are used in send-qa-email.
> The proper fix is in 6th commit, followed by corresponding tests
> Finally, the last commit add Alexandre's "next" branch as "fork" branches to
> enable regression reports generation when testing patches, as suggested in [1]
> too.
> 
> Since patch testing branches are force-pushed on a regular basis, it is quite
> difficult to get a relevant testing scenario, so this series has been tested 
> by
> faking SHA1 in yocto_testresults_query to match some master-next results in
> yocto-testresults at the time of testing this series. I would gladly take
> feedback about this series running for real in a master-next branch
> 
> [1] https://lists.yoctoproject.org/g/yocto/message/59067
> 
> Alexis Lothoré (8):
>   scripts/utils: add unit tests for getcomparisonbranch
>   scripts/send-qa-email: remove unused variable
>   scripts/send-qa-email: invert boolean logic for release check
>   scripts/send-qa-email: protect is_release_version from None value
>   scripts/send-qa-email: add tests for is_release_version
>   scripts/send-qa-email: fix testing branches regression reporting
>   scripts/test_send_qa_email.py: add tests for base/target pair guessing
>   config: flag A. Belloni master-next branch as testing branch

I think there is a regression somewhere:

https://autobuilder.yoctoproject.org/typhoon/#/builders/85/builds/2085/steps/29/logs/stdio

Cheers,

Richard

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#59489): https://lists.yoctoproject.org/g/yocto/message/59489
Mute This Topic: https://lists.yoctoproject.org/mt/97582163/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: 
https://lists.yoctoproject.org/g/yocto/leave/6691583/21656/737036229/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [linux-yocto] [kernel-cache][PATCH 08/11] bsp/intel-corei7-64-preempt-rt: Add support for Time-Sensitive Network

2023-03-07 Thread Richard Purdie
On Tue, 2023-03-07 at 21:12 +0100, Alexander Kanavin wrote:
> On Tue, 7 Mar 2023 at 19:23, Paul Gortmaker
>  wrote:
> > I won't repeat what others have already said, but just to be clear, the
> > kernel config for a platform is 100% independent from having "a rich
> > command line experience"
> > 
> > That is rootfs choice: "core-image-minimal" vs "core-image-full-cmdline".
> 
> There's also the question of what the BSP adds to the images from the
> machine definitions. Meta-intel in particular, adds *all* of
> linux-firmware, which takes up a cool 900Mb on target. Compare that to
> the modules (59Mb), or bzImage (11Mb). If you want to optimize the
> disk footprint for a specific board, it has to start with installing
> only the useful firmware; if you want to use the generic machine, then
> optimizing the kernel footprint makes no sense in light of what the
> complete set of firmwares consumes. No?

meta-intel may do that; it can do what it wants. OE-Core does not and
has no plans to. We install what is needed for a given platform, and my
advice has always been that machine definitions should specify what they
need, which includes firmware.
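
For example, a machine configuration can pull in just the firmware
packages its hardware actually needs (a sketch; the package names below
are illustrative, not tied to any particular board):

```
# Hypothetical machine .conf fragment: install only the firmware this
# board requires, rather than all of linux-firmware.
MACHINE_EXTRA_RRECOMMENDS += "linux-firmware-iwlwifi linux-firmware-bcm43430"
```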

That excess is the price meta-intel pays for trying to support
"everything" and behaving more like a desktop distro would. I don't
believe such products/devices are the ultimate target for most YP uses.
I agree they can be useful for some PoC/testing work.

Cheers,

Richard

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#12237): 
https://lists.yoctoproject.org/g/linux-yocto/message/12237
Mute This Topic: https://lists.yoctoproject.org/mt/97420986/21656
Group Owner: linux-yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/linux-yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [linux-yocto] [kernel-cache][PATCH 08/11] bsp/intel-corei7-64-preempt-rt: Add support for Time-Sensitive Network

2023-03-07 Thread Richard Purdie
On Tue, 2023-03-07 at 16:30 +0100, Alexander Kanavin wrote:
> On Tue, 7 Mar 2023 at 15:58, Paul Gortmaker
>  wrote:
> > Unfortunately, not everything is a module.  As to your "so what if the
> > existing machines enable everything under the sun and install it onto
> > the image" -- well obviously if people are evaluating Yocto for use in
> > a system with limited storage, and the initrd is 2x bigger than their
> > on-board storage size, then they are likely to just look elsewhere.
> 
> Forgot to mention: exactly for this purpose we have poky-tiny,
> linux-yocto-tiny, and core-image-initramfs-tiny. When those are
> available, I'd rather have the default choices approximate binary
> desktop distros as closely as possible - give me a rich command line
> experience with fullest support for whatever hardware there is.

I think it is worth being clear that Yocto Project is not a desktop
distro. We're interested in customisable targets and one of the
customisations is being able to minimise the kernel footprint.

Yes, there are cases you want *everything* but there are also cases
where you do not. The desktop kernel one binary solution is fine for
their target market but that isn't ours.

Tiny is an extreme of that but it doesn't mean "standard" YP looks like
a desktop. If you wanted a desktop distro, you can use a desktop
distro.
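
For anyone wanting the minimal end of that spectrum, the reference
configuration Alexander mentioned can be selected in local.conf (a
sketch; see the poky-tiny distro config itself for its constraints):

```
# local.conf: build against the size-optimised reference configuration
DISTRO = "poky-tiny"
```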

Cheers,

Richard

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#12234): 
https://lists.yoctoproject.org/g/linux-yocto/message/12234
Mute This Topic: https://lists.yoctoproject.org/mt/97420986/21656
Group Owner: linux-yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/linux-yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] Question about psuedo abort errors

2023-03-05 Thread Richard Purdie
I saw you're still having problems even with the latest pseudo
revisions. I'll make some comments below.

On Thu, 2023-02-23 at 02:27 -0800, Tamis wrote:
> Install error:
> 
> + install -m 0644 
> WORKDIR/lib32-las/M-WBS0.1-r0/git/lasiflib/include/lasif_list.h 
> WORKDIR/lib32-las/M-WBS0.1-r0/git/lasiflib/include/lasif_pdinc.h 
> WORKDIR/lib32-las/M-WBS0.1-r0/git/lasiflib/include/lasif_util.h 
> WORKDIR/lib32-las/M-WBS0.1-r0/git/lasiflib/include/lasif_varmsg.h 
> WORKDIR/lib32-las/M-WBS0.1-r0/image/usr/include
>  
> abort()ing pseudo client by server request. See 
> https://wiki.yoctoproject.org/wiki/Pseudo_Abort for more details on this.
> Check logfile: WORDIR/lib32-las/M-WBS0.1-r0/pseudo//pseudo.log
>  
> Aborted (core dumped)
> + bb_exit_handler
> + ret=134
> + echo WARNING: exit code 134 from a shell command.
> WARNING: exit code 134 from a shell command.
> + exit 134
> -
> And pseudo log error output:
> --
> pid 26345 [parent 26344], doing new pid setup and server start
> Setup complete, sending SIGUSR1 to pid 26344.
> path mismatch [3 links]: ino 7868999 db 
> 'WORKDIR/lib32-las/M-WBS0.1-r0/package/usr/src/debug/lib32-las/M-WBS0.1-r0/git/lasiflib/include/lasif_list.h'
>  req 'WORKDIR/lib32-las/M-WBS0.1-r0/git/lasiflib/include/lasif_list.h'.

This is saying the file on disk with inode 7868999 is:
WORKDIR/lib32-las/M-WBS0.1-r0/git/lasiflib/include/lasif_list.h 

but the pseudo database thinks this is:

WORKDIR/lib32-las/M-WBS0.1-r0/package/usr/src/debug/lib32-las/M-WBS0.1-r0/git/lasiflib/include/lasif_list.h

so the question is: when was this file created? It would have been
created by do_package, since it is in the /package/ directory, and it
seems to have been deleted outside of pseudo's context.
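
To make the failure mode concrete, here is a toy sketch (the behaviour
is inferred from the log format above, not pseudo's actual code) of the
inode-to-path consistency check:

```shell
# Toy illustration: pseudo's database maps inode numbers to paths; if a
# request names the same inode under a different path, the server
# reports a mismatch like the one in the log.
tmp=$(mktemp -d)
touch "$tmp/lasif_list.h"
ino=$(stat -c %i "$tmp/lasif_list.h")
# Simulate a stale database entry: the inode is still recorded under a
# path created by do_package that was removed behind pseudo's back.
echo "$ino $tmp/package/usr/src/debug/lasif_list.h" > "$tmp/db"
reqpath="$tmp/lasif_list.h"
dbpath=$(awk -v i="$ino" '$1 == i { print $2 }' "$tmp/db")
if [ -n "$dbpath" ] && [ "$dbpath" != "$reqpath" ]; then
    msg="path mismatch: ino $ino db '$dbpath' req '$reqpath'"
    echo "$msg"
fi
rm -rf "$tmp"
```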

> db cleanup for server shutdown, 08:38:52.482
> memory-to-file backup complete, 08:38:52.482.
> db cleanup finished, 08:38:52.482
> debug_logfile: fd 2
> pid 6423 [parent 6422], doing new pid setup and server start
> Setup complete, sending SIGUSR1 to pid 6422.
> path mismatch [3 links]: ino 7868999 db 
> 'WORKDIR/lib32-las/M-WBS0.1-r0/package/usr/src/debug/lib32-las/M-WBS0.1-r0/git/lasiflib/include/lasif_list.h'
>  req 'WORKDIR/lib32-las/M-WBS0.1-r0/git/lasiflib/include/lasif_list.h'.
> db cleanup for server shutdown, 09:13:45.899
> memory-to-file backup complete, 09:13:45.899.
> db cleanup finished, 09:13:45.899
> ---
> 
> I added the export PSEUDO_DEBUG = "nfoPcvdDyerpswikVx" in the recipe but I 
> couldn't understand much.
> In the whole build we have the RM_WORK enabled but for that current recipe we 
> have excluded it.

How are you excluding it? I have to wonder if that is somehow breaking
things due to dependency changes.
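
For reference, the usual way to exclude a single recipe is the
RM_WORK_EXCLUDE variable from rm_work.bbclass (a sketch; the recipe name
is taken from the log above):

```
# e.g. in local.conf
RM_WORK_EXCLUDE += "lib32-las"
```

If the exclusion was done some other way, that could be where the
unexpected dependency changes come from.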

> Also the task order is the following:
> do_cleansstate (20698): log.do_cleansstate.20698
> do_cleanall (22404): log.do_cleanall.22404
> do_rm_work (24346): log.do_rm_work.24346
> do_fetch (24434): log.do_fetch.24434
> do_prepare_recipe_sysroot (24465): log.do_prepare_recipe_sysroot.24465
> do_unpack (24466): log.do_unpack.24466
> do_patch (24493): log.do_patch.24493
> do_populate_lic (24665): log.do_populate_lic.24665
> do_deploy_source_date_epoch (24666): log.do_deploy_source_date_epoch.24666
> do_configure (24765): log.do_configure.24765
> do_compile (24809): log.do_compile.24809
> do_install (27477): log.do_install.27477
> do_populate_sysroot (27510): log.do_populate_sysroot.27510
> do_package (27509): log.do_package.27509
> do_packagedata (29544): log.do_packagedata.29544
> do_package_qa (29592): log.do_package_qa.2959
> do_package_write_ipk (29593): log.do_package_write_ipk.29593
> 
> Ok build till now.
> Next invocation
> do_prepare_recipe_sysroot (23311): log.do_prepare_recipe_sysroot.23311
> do_configure (23769): log.do_configure.23769
> do_compile (24132): log.do_compile.24132
> do_install (26628): log.do_install.26628

Are you saying the first failure is here in the do_install above?

> do_install (32535): log.do_install.32535
> do_install (4454): log.do_install.4454
> do_install (26343): log.do_install.26343
> do_rm_work (4905): log.do_rm_work.4905

This is odd. Why would an rm_work run here?

> do_deploy_source_date_epoch (4919): log.do_deploy_source_date_epoch.4919
> do_configure (4954): log.do_configure.4954
> do_compile (5003): log.do_compile.5003
> do_install (6421): log.do_install.6421

This is also odd since if rm_work did run, shouldn't the unpack/patch
need to run again? Didn't you say you'd disabled rm_work for this
recipe?

> All above are failures in do_install phase. You see multiple
> invocations trying to fix the error and adding more debug.
> If I don't do a cleansstate or cleanall the recipe is not building.
> We do not delete or manipulate any 

Re: [yocto] Question about psuedo abort errors

2023-02-24 Thread Richard Purdie
Hi,

On Thu, 2023-02-23 at 02:27 -0800, Tamis wrote:
> I know that this might be an old thread but we seem to deal the exact
> same issue in our yocto builds, that's why I am writing here.
> Recently we switched from sumo yocto branch to dunfel.
> Now some recipes with nothing special in it, just compile and
> install, are failing in the install phase because of pseudo abort.
> Those recipes are installing via the do_install some headers files
> into the usr/include folder and later those headers are share via the
> sysroot to other recipes.
> I read what has been written above but I didn't see a final solution.
> So I am also trying to understand the error in order to find a
> solution.
> If we do a clean build  (bitbake lib32-las -c cleansstate) then the
> recipe compiles fines. But if we invoke the building without the
> cleasstate the recipe fails in install phase due to pseudo.

I'm curious if you upgrade to the latest pseudo revision whether this
helps at all. There was a fix that was partially related to issues like
this.

The error means either that the file was deleted on disk without pseudo
seeing it and then created again, or that there is some kind of race in
file creation confusing pseudo. The latest commits to pseudo try to fix
the latter.

Cheers,

Richard


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#59296): https://lists.yoctoproject.org/g/yocto/message/59296
Mute This Topic: https://lists.yoctoproject.org/mt/91650136/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: 
https://lists.yoctoproject.org/g/yocto/leave/6691583/21656/737036229/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] AlmaLinux 8.5 support in Langdale

2023-02-22 Thread Richard Purdie
On Mon, 2023-02-20 at 13:09 +, Chaplin, Kris via
lists.yoctoproject.org wrote:
> Thanks Alex,
> 
> > Note that the list of supported host distros is set only for the poky
> > reference distro with a weak assignment, and not for oe-core or yocto
> > as a whole. If you do your own tests, and they don't show issues, you
> > can mark any distro as compatible, or install buildtools tarball to
> > insulate your build from problematic host tools (such as native gcc
> > compiler).
> That's understood - thank you.  As there were some changes upstream, I 
> wanted to ensure that we have as much overlap as possible, so customers 
> looking to integrate our meta layers together with others would have a 
> tried and tested path.
> > AlmaLinux in particular is a spinoff of RHEL and thus extremely
> > conservative within the same major version.
> 
> Yes - my assumption is that we should be ok with 8.x, but we always test 
> against a specific version / versions and document these, to save any 
> corner cases causing unexpected failures.
> 
> Appreciate the help.

Just to loop back, we've updated all the branches to mention 8.7 now
and tweaked some of the other entries too (thanks Steve!).

Cheers,

Richard

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#59275): https://lists.yoctoproject.org/g/yocto/message/59275
Mute This Topic: https://lists.yoctoproject.org/mt/97084127/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: 
https://lists.yoctoproject.org/g/yocto/leave/6691583/21656/737036229/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] AlmaLinux 8.5 support in Langdale

2023-02-20 Thread Richard Purdie
On Mon, 2023-02-20 at 09:12 +, Chaplin, Kris via
lists.yoctoproject.org wrote:
> Hello,
> 
> I'm working with our team to test supported development OS for our 
> AMD/Xilinx PetaLinux distribution built on top of Langdale. The official 
> supported Linux distributions are Ubuntu, Fedora, AlmaLinux, Debian and 
> OpenSUSE, so we regtest our flow on each of these prior to advertising 
> support.
> 
> However, the supported version of AlmaLinux for Langdale is 8.5, and 
> this is not available for download any more 
> (https://mirrors.almalinux.org/isos.html).  The earliest version is 8.6, 
> with 8.7 being most current.  The 8.5 isos are stripped from the 
> mirrors, so url-games dont work here either.
> 
> What is the recommended verison of AlmaLinux to test against Langdale?  
> Is there a plan to update the Yocto release notes to reflect this?

We're testing against AlmaLinux 8.7 so I think this is a documentation
issue and the value hasn't been updated to match what we're now testing
against. I've updated master and I suspect we'll follow up with changes
to the other release branches.

Thanks for mentioning it.

Cheers,

Richard

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#59251): https://lists.yoctoproject.org/g/yocto/message/59251
Mute This Topic: https://lists.yoctoproject.org/mt/97084127/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: 
https://lists.yoctoproject.org/g/yocto/leave/6691583/21656/737036229/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] [meta-oe][dunfell]nativesdk-postgresql - causing chown: invalid user: 'postgres:postgres'

2023-02-10 Thread Richard Purdie
On Thu, 2023-02-09 at 23:16 -0800, alexander.roda...@brose.com wrote:
> I integrated the changes of the commit you posted, however i still
> have the same error message coming up. Do i need to make changes in
> the recipe to make the  intercept work?

No, it should have changed PATH in the recipe and used these intercept
scripts. The intercept scripts should have changed the user/group to
"root", which would work under pseudo in the SDK context.

You'll probably have to debug it a little: I'd check to see if the
intercept scripts are being called; if so, then check the
substitutions. If not, work out what PATH looks like and why they're
not being called.
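
For illustration, such an intercept amounts to a small wrapper earlier
in PATH than the real tool (a hypothetical sketch modelled on the idea,
not the actual script in OE-Core):

```shell
# Sketch of a chown intercept: rewrite an owner the SDK host doesn't
# know about to root:root, which pseudo can honour, before delegating
# to the real chown.
rewrite_owner() {
    case "$1" in
        *:*) echo "root:root" ;;   # any user:group spec becomes root:root
        *)   echo "$1" ;;          # non-owner arguments pass through
    esac
}
owner=$(rewrite_owner "postgres:postgres")
# A real intercept would now exec the host tool, e.g.:
#   exec /bin/chown "$owner" "$@"
echo "chown $owner ..."
```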

That patch is how we fixed the issue on master though so it should be
close, I'm surprised it doesn't work.

Cheers,

Richard

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#59208): https://lists.yoctoproject.org/g/yocto/message/59208
Mute This Topic: https://lists.yoctoproject.org/mt/96849110/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: 
https://lists.yoctoproject.org/g/yocto/leave/6691583/21656/737036229/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] [meta-oe][dunfell]nativesdk-postgresql - causing chown: invalid user: 'postgres:postgres'

2023-02-09 Thread Richard Purdie
On Thu, 2023-02-09 at 00:05 -0800, alexander.roda...@brose.com wrote:
> Hello,
> 
> i am trying to install the PostgreSQL and the development package
> (postgresql-server-dev) as part of the SDK. 
> 
> I added via bbappend BBCLASSEXTED_append = "nativesdk" to the
> postgresql_12.9.bb. 
> 
> I can build an image for a raspberry 4 pi. 
> However when i try to populate the sdk it stops with an error during
> the do_install() for PostgreSQL:
> | chown: invalid user: ‘postgres:postgres’
> | WARNING: exit code 1 from a shell command.
> 
> This happens when the owner for pgsql libs is changed to the user
> postgres. 
> I did create a the user manually on the host system but this didn't
> help with the problem. 
> 
> If i build the image itself it works flawlessly. 
>  
> Any ideas what I'm doing wrong?

I suspect this was solved with 

https://git.yoctoproject.org/poky/commit/?id=8dd0a45c858290288e1a1fef046b5cfcf7ff6257

Cheers,

Richard


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#59202): https://lists.yoctoproject.org/g/yocto/message/59202
Mute This Topic: https://lists.yoctoproject.org/mt/96849110/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: 
https://lists.yoctoproject.org/g/yocto/leave/6691583/21656/737036229/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] [autobuilder][PATCH 0/3] ease send-qa-email usage with older history

2023-02-07 Thread Richard Purdie
On Tue, 2023-02-07 at 15:52 +0100, Alexis Lothoré wrote:
> Hello Richard,
> 
> On 2/7/23 15:42, Richard Purdie wrote:
> > On Tue, 2023-02-07 at 15:12 +0100, Alexis Lothoré via
> > lists.yoctoproject.org wrote:
> > > I assumed that no tool is currently trying to parse the regression 
> > > reports, so
> > > I chose arbitrarily the metadata format.
> > 
> > The reports have never been useful enough to parse so that should be
> > fine.
> 
> Good
> 
> > > @Richard : on a side note, the broken From field in my previous emails 
> > > was probably
> > > due to an accent in my name, it should be fixed now. Please let me know 
> > > if you
> > > still have issues with this, in this case I will force the --from field.
> > 
> > It still shows incorrectly for me. It is an issue with the bootlin mail
> > setup and DNS, not to do with the character encoding. My changing the
> > accent, it is now duplicating the signed off by lines so I'd avoid
> > doing that!
> Well, that's a shame, sorry for the annoyance. Once the patches are approved 
> or
> lead to a v2, I will resend with "From:" forced, except if you want me to
> re-send it right now.

Patches look otherwise ok to me so please resend!

Cheers,

Richard




Re: [yocto] [autobuilder][PATCH 0/3] ease send-qa-email usage with older history

2023-02-07 Thread Richard Purdie
On Tue, 2023-02-07 at 15:12 +0100, Alexis Lothoré via
lists.yoctoproject.org wrote:
> Hello,
> this minor patch series brings minor fixes to send-qa-email to ease
> development and debugging:
> - fetching the current build's test results works well in CI builds because
>   we are sure that the build results will be in the top 5 commits of the
>   testresults repository shallow clone (because they were pushed just before
>   preparing the regression report). This is not true when trying to run
>   send-qa-email offline on older revisions (for development, debugging or
>   improvement of the scripting), so we need to force-fetch the target
>   revision, as is done currently for the base revision. As a consequence,
>   this new behaviour allows making even more "shallow" clones of testresults,
>   since all needed data will be force-fetched
> - When downloading a test regression report from the official release
>   webserver, we do not have information about which base was used or what the
>   exact tested revision is (at least, not without checking the corresponding
>   build logs on the autobuilder). So I propose adding this information at the
>   top of each regression report, making it look like this:
> 
> == Regression report ==
> => Target:  master  (5e249ec855517765f4b99e8039cb888ffa09c211)
> => Base:4.2_M1  (4d19594b8bdacde6d809d3f2a25cff7c5a42295e)
> ===
> 
> Match: oeselftest_almalinux-9.1_qemux86-64_20221208113119
>oeselftest_centos-8_qemux86-64_20230126230327
> Match: oeselftest_debian-11_qemux86-64_20221208033325
>oeselftest_debian-11_qemux86-64_20230126150229
> [...]
> 
> I assumed that no tool is currently trying to parse the regression reports, so
> I chose the metadata format arbitrarily.
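[Editor's note: as background to the first point above, force-fetching an
arbitrary revision into a shallow clone can be sketched roughly as below. The
repository layout and names are purely illustrative, not the real testresults
setup.]

```shell
#!/bin/sh
# Hedged sketch: why an offline send-qa-email run must force-fetch old
# revisions. A depth-1 shallow clone only contains the newest commit; any
# older revision has to be fetched explicitly before it can be inspected.
set -e
work=$(mktemp -d)
cd "$work"

# Build a small "upstream" history of five commits.
git init -q upstream
git -C upstream config user.email demo@example.com
git -C upstream config user.name demo
# Allow clients to fetch arbitrary commits by hash.
git -C upstream config uploadpack.allowAnySHA1InWant true
for i in 1 2 3 4 5; do
    echo "results $i" > upstream/results.txt
    git -C upstream add results.txt
    git -C upstream commit -qm "testresults $i"
done
old_rev=$(git -C upstream rev-parse HEAD~4)   # outside a depth-1 clone

# Shallow clone: old_rev is not present...
git clone -q --depth 1 "file://$work/upstream" clone
git -C clone cat-file -e "$old_rev" 2>/dev/null || echo "old revision missing"

# ...until we force-fetch it by hash.
git -C clone fetch -q --depth 1 origin "$old_rev"
git -C clone cat-file -e "$old_rev" && echo "old revision fetched"
```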

The reports have never been useful enough to parse so that should be
fine.

> @Richard : on a side note, the broken From field in my previous emails was 
> probably
> due to an accent in my name; it should be fixed now. Please let me know if you
> still have issues with this; in that case, I will force the --from field.

It still shows incorrectly for me. It is an issue with the bootlin mail
setup and DNS, not the character encoding. By changing the
accent, it is now duplicating the Signed-off-by lines, so I'd avoid
doing that!

Cheers,

Richard





Re: [yocto] error when try to use sudo command in recipe

2023-02-05 Thread Richard Purdie
On Sun, 2023-02-05 at 16:00 -0800, SIMON BABY wrote:
> Hi Richard,
> 
> I added extra code in the recipe to print the sudo permissions to
> compare with actual permissions.

I'll ask for the third time.

Are you running bitbake directly?

I'd like to understand the environment bitbake is being run in.
Something is breaking the ownership/permissions and I guess it is some
kind of container.


Cheers,

Richard




Re: [yocto] error when try to use sudo command in recipe

2023-02-05 Thread Richard Purdie
On Sun, 2023-02-05 at 08:44 -0800, SIMON BABY wrote:
> On Sun, Feb 5, 2023 at 01:54 AM, Richard Purdie wrote:
> > something
> Hello Richard,
> I am building this layer to add container images into the root
> file system and load them on the target as part of the image download. The
> sudo command works fine when I run it outside the Yocto build directory,
> but when I tried using bitbake it always fails. I see
> the user and group IDs are different when I printed them inside the
> bitbake recipe.
> 
>  -rwsr-xr-x 1 nobody nogroup 166056 Jan 16 13:06 /usr/bin/sudo
> 
> it should look like below:
> tdydev@eadlng-build-machine:~$ ls -ll /usr/bin/sudo
> -rwsr-xr-x 1 root root 166056 Jan 16 05:06 /usr/bin/sudo
> 

Are you running bitbake directly? How did you dump those sudo
permissions from inside bitbake?

Cheers,

Richard




Re: [yocto] error when try to use sudo command in recipe

2023-02-05 Thread Richard Purdie
On Sat, 2023-02-04 at 20:15 -0800, SIMON BABY wrote:
> Hi Richard,
> Thank you for the reply.
>  I was trying to add the
> layer https://github.com/savoirfairelinux/meta-embedded-containers in
> my Yocto build. The recipe below uses sudo commands. 
> 
> https://github.com/savoirfairelinux/meta-embedded-containers/blob/b131baa90291c159d3a725b8d66786c85964556c/recipes-support/container-image/container-image.bb
> 
> while building it fails at sudo command and getting the error below.
> 
> sudo: /usr/bin/sudo must be owned by uid 0 and have the setuid bit
> set

I don't see anything in there which runs that task under
fakeroot/pseudo. Are you building this within some kind of container
and/or under something like kas? The error suggests that sudo has been
restricted somehow but I don't think it is bitbake doing it.
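
[Editor's note: for context, the self-check that produces this error verifies
both the owner (uid 0) and the setuid bit on the binary; under pseudo/fakeroot
or a container with remapped IDs, stat() can report different ownership than
the host sees. Below is a rough, hedged sketch of an equivalent check,
demonstrated on a scratch file rather than the real /usr/bin/sudo.]

```shell
#!/bin/sh
# Hedged sketch of the ownership/setuid check sudo performs on itself.
# Uses a scratch file so it is safe to run anywhere; the real binary
# would be /usr/bin/sudo.
check_setuid_root() {
    f=$1
    owner=$(stat -c '%u' "$f")          # numeric owner uid
    if [ "$owner" -eq 0 ] && [ -u "$f" ]; then
        echo "$f: ok (uid 0, setuid)"
    else
        s=no; [ -u "$f" ] && s=yes      # [ -u ] tests the setuid bit
        echo "$f: would fail: owned by uid $owner, setuid=$s"
    fi
}

tmp=$(mktemp)
chmod 0755 "$tmp"
first=$(check_setuid_root "$tmp")   # no setuid bit: always "would fail"
echo "$first"
chmod u+s "$tmp"
second=$(check_setuid_root "$tmp")  # passes only when owned by uid 0
echo "$second"
rm -f "$tmp"
```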

I'd point out that the do_pull_image task has at least three problems I
spotted at a quick glance:

a) It will kill *any* dockerd process on the system so it can't run in
parallel with any other build

b) The "sleep 5" tells you it is likely to suffer races

c) Using sudo within recipes is considered a bad practice and will
require setting up the right privileges to work properly
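
[Editor's note: on point (b), one hedged alternative to a fixed "sleep 5" is
to poll for a readiness condition with a timeout. The sketch below is generic;
the docker command shown in the comment is only illustrative.]

```shell
#!/bin/sh
# Hedged sketch: wait for a condition instead of sleeping a fixed time.
# wait_for polls a shell command once per second up to a timeout.
wait_for() {
    cond=$1; timeout=$2; elapsed=0
    until sh -c "$cond" >/dev/null 2>&1; do
        elapsed=$((elapsed + 1))
        if [ "$elapsed" -gt "$timeout" ]; then
            echo "timed out waiting for: $cond" >&2
            return 1
        fi
        sleep 1
    done
    return 0
}

# In a recipe this might look like (illustrative):
#   wait_for 'docker info' 30
# Self-contained demo: a marker file that appears after ~2 seconds.
marker=$(mktemp -u)
( sleep 2; : > "$marker" ) &
wait_for "test -e $marker" 10
rc=$?
[ "$rc" -eq 0 ] && echo "ready"
rm -f "$marker"
```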

Cheers,

Richard







