[OE-core] [PATCH 3/3] linux-firmware: Create bnx2x subpackage

2023-12-05 Thread Jason Andryuk
bnx2x is another Broadcom Ethernet adapter with its own firmware.  Place
it into its own subpackage.

Signed-off-by: Jason Andryuk 
---
 .../linux-firmware/linux-firmware_20231030.bb  | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/meta/recipes-kernel/linux-firmware/linux-firmware_20231030.bb b/meta/recipes-kernel/linux-firmware/linux-firmware_20231030.bb
index a1229e4827..6667f00612 100644
--- a/meta/recipes-kernel/linux-firmware/linux-firmware_20231030.bb
+++ b/meta/recipes-kernel/linux-firmware/linux-firmware_20231030.bb
@@ -341,6 +341,7 @@ PACKAGES =+ "${PN}-amphion-vpu-license ${PN}-amphion-vpu \
  ${PN}-ice-enhanced-license ${PN}-ice-enhanced \
  ${PN}-adsp-sst-license ${PN}-adsp-sst \
  ${PN}-bnx2 \
+ ${PN}-bnx2x \
  ${PN}-liquidio \
  ${PN}-nvidia-license \
  ${PN}-nvidia-tegra-k1 ${PN}-nvidia-tegra \
@@ -1104,6 +1105,12 @@ FILES:${PN}-whence-license = "${nonarch_base_libdir}/firmware/WHENCE"
 RDEPENDS:${PN}-bnx2 += "${PN}-whence-license"
 RPROVIDES:${PN}-bnx2 = "${PN}-bnx2-mips"
 
+LICENSE:${PN}-bnx2x = "WHENCE"
+
+FILES:${PN}-bnx2x = "${nonarch_base_libdir}/firmware/bnx2x/bnx2x*.fw"
+
+RDEPENDS:${PN}-bnx2x += "${PN}-whence-license"
+
 # For cirrus
 LICENSE:${PN}-cirrus = "Firmware-cirrus"
 LICENSE:${PN}-cirrus-license = "Firmware-cirrus"
-- 
2.43.0





[OE-core] [PATCH 2/3] linux-firmware: Change bnx2 packaging

2023-12-05 Thread Jason Andryuk
The bnx2 module uses both the mips and rv2p files, so package them all
together.  Remove -mips from the package name, but add an RPROVIDES for
compatibility.

Signed-off-by: Jason Andryuk 
---
 .../linux-firmware/linux-firmware_20231030.bb  | 14 +-
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/meta/recipes-kernel/linux-firmware/linux-firmware_20231030.bb b/meta/recipes-kernel/linux-firmware/linux-firmware_20231030.bb
index b849c086b6..a1229e4827 100644
--- a/meta/recipes-kernel/linux-firmware/linux-firmware_20231030.bb
+++ b/meta/recipes-kernel/linux-firmware/linux-firmware_20231030.bb
@@ -340,7 +340,7 @@ PACKAGES =+ "${PN}-amphion-vpu-license ${PN}-amphion-vpu \
  ${PN}-ice-license ${PN}-ice \
  ${PN}-ice-enhanced-license ${PN}-ice-enhanced \
  ${PN}-adsp-sst-license ${PN}-adsp-sst \
- ${PN}-bnx2-mips \
+ ${PN}-bnx2 \
  ${PN}-liquidio \
  ${PN}-nvidia-license \
  ${PN}-nvidia-tegra-k1 ${PN}-nvidia-tegra \
@@ -1087,18 +1087,22 @@ RDEPENDS:${PN}-bcm4356-pcie += "${PN}-cypress-license"
 LICENSE:${PN}-bcm4373 = "Firmware-cypress"
 RDEPENDS:${PN}-bcm4373 += "${PN}-cypress-license"
 
-# For Broadcom bnx2-mips
+# For Broadcom bnx2
 #
 # which is a separate case to the other Broadcom firmwares since its
 # license is contained in the shared WHENCE file.
 
-LICENSE:${PN}-bnx2-mips = "WHENCE"
+LICENSE:${PN}-bnx2 = "WHENCE"
 LICENSE:${PN}-whence-license = "WHENCE"
 
-FILES:${PN}-bnx2-mips = "${nonarch_base_libdir}/firmware/bnx2/bnx2-mips-09-6.2.1b.fw"
+FILES:${PN}-bnx2 = " \
+${nonarch_base_libdir}/firmware/bnx2/bnx2-mips*.fw \
+${nonarch_base_libdir}/firmware/bnx2/bnx2-rv2p*.fw \
+"
 FILES:${PN}-whence-license = "${nonarch_base_libdir}/firmware/WHENCE"
 
-RDEPENDS:${PN}-bnx2-mips += "${PN}-whence-license"
+RDEPENDS:${PN}-bnx2 += "${PN}-whence-license"
+RPROVIDES:${PN}-bnx2 = "${PN}-bnx2-mips"
 
 # For cirrus
 LICENSE:${PN}-cirrus = "Firmware-cirrus"
-- 
2.43.0





[OE-core] [PATCH 1/3] linux-firmware: Package iwlwifi .pnvm files

2023-12-05 Thread Jason Andryuk
The iwlwifi driver uses .pnvm files for newer AX210+ cards, so package them
into the iwlwifi-misc subpackage.

Signed-off-by: Jason Andryuk 
---
 .../recipes-kernel/linux-firmware/linux-firmware_20231030.bb | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/meta/recipes-kernel/linux-firmware/linux-firmware_20231030.bb b/meta/recipes-kernel/linux-firmware/linux-firmware_20231030.bb
index c0394b9b3b..b849c086b6 100644
--- a/meta/recipes-kernel/linux-firmware/linux-firmware_20231030.bb
+++ b/meta/recipes-kernel/linux-firmware/linux-firmware_20231030.bb
@@ -1187,7 +1187,10 @@ FILES:${PN}-iwlwifi-7265d   = "${nonarch_base_libdir}/firmware/iwlwifi-7265D-*.u
 FILES:${PN}-iwlwifi-8000c   = "${nonarch_base_libdir}/firmware/iwlwifi-8000C-*.ucode"
 FILES:${PN}-iwlwifi-8265   = "${nonarch_base_libdir}/firmware/iwlwifi-8265-*.ucode"
 FILES:${PN}-iwlwifi-9000   = "${nonarch_base_libdir}/firmware/iwlwifi-9000-*.ucode"
-FILES:${PN}-iwlwifi-misc   = "${nonarch_base_libdir}/firmware/iwlwifi-*.ucode"
+FILES:${PN}-iwlwifi-misc   = " \
+${nonarch_base_libdir}/firmware/iwlwifi-*.ucode \
+${nonarch_base_libdir}/firmware/iwlwifi-*.pnvm \
+"
 
 RDEPENDS:${PN}-iwlwifi-135-6 = "${PN}-iwlwifi-license"
 RDEPENDS:${PN}-iwlwifi-3160-7= "${PN}-iwlwifi-license"
-- 
2.43.0





[OE-core] [PATCH 0/3] linux-firmware:

2023-12-05 Thread Jason Andryuk
This is a small series to fix up linux-firmware packaging:
The first adds the .pnvm files that newer iwlwifi cards need alongside the
.ucode files to the linux-firmware-iwlwifi-misc package.

The second renames s/bnx2-mips/bnx2/ and includes more files needed by
the driver.

The third creates a new bnx2x package.

Jason Andryuk (3):
  linux-firmware: Package iwlwifi .pnvm files
  linux-firmware: Change bnx2 packaging
  linux-firmware: Create bnx2x subpackage

 .../linux-firmware/linux-firmware_20231030.bb | 26 ++-
 1 file changed, 20 insertions(+), 6 deletions(-)

-- 
2.43.0





[OE-core] do_image_qa and IMAGE_QA_COMMANDS

2021-04-05 Thread Jason Andryuk
Hi,

I noticed what may be a bug with do_image_qa and IMAGE_QA_COMMANDS.
My terminology may be a little off, but hopefully you get my point.

It seems the script content hash of an IMAGE_QA_COMMANDS command is not
added to the do_image_qa task hash.

For example, I have:
IMAGE_QA_COMMANDS += " \
do_image_check_module_sigs \
"

If I change do_image_check_module_sigs and re-run the image recipe,
bitbake does not execute do_image_qa with the new
do_image_check_module_sigs.

It's not a big deal, but it made it tricky to test my qa command.
When it was a standalone task (before I knew about IMAGE_QA_COMMANDS)
the task was re-executed with every change.
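
For now, a workaround I may try (untested sketch) is to also list the
function in do_image_qa's vardeps, which should pull the function body
into the task signature:

# Sketch only: in addition to the IMAGE_QA_COMMANDS entry above,
# declare the function as an explicit signature dependency of the task.
do_image_qa[vardeps] += "do_image_check_module_sigs"

do_image_check_module_sigs() {
    # placeholder body, for illustration only
    bbnote "checking module signatures in ${IMAGE_ROOTFS}"
}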

Regards,
Jason




Re: [OE-core] Mis-generation of shell script (run.do_install)?

2019-01-17 Thread Jason Andryuk
On Wed, Jan 16, 2019 at 3:28 PM Richard Purdie
 wrote:
> The data is in the codeparser cache which is first populated at parse
> time so its enough just to parse the machine+recipe in question, not
> build it. I think that explains the answer to a few of your questions
> above.

Yes, thanks.

> Sorry for asking so many questions btw, I'd just really love to be able
> to reproduce this issue! Thanks for trying to help answer them too!
>
> Is the bitbake-cookerdeamon.log file still there for this build (in the
> top level build directory)?

I don't seem to have this file in any of my OpenXT builds.

I still have the "bad" bb_codeparser.dat file.  It is 30MB whereas the
new one is only 6.5MB.  I thought it may be excessively large, but I
actually have an 80MB one in a different build directory.

Anyway, it has 4 different entries that look like core2-32
python-async do_install() variants:

3c6fe664c51d2f793f8fd0eb103d68cb - reproduces currently
3df9018676de219bb3e46e88eea09c98 - one matching binutils core2-64 do_install
382871fb17743ba9635d7efc4db7d993
ee6850bdcf70ba63dea37e09c78c599f

They all have
frozenset({'[', 'mv', 'test',
'/home/build/openxt-compartments/build/tmp-glibc/work/core2-32-oe-linux/python-async/0.6.2-r0/recipe-sysroot-native/usr/bin/python-native/python',
'sed', 'install', 'bbfatal_log', 'find', 'rm', 'rmdir'})

Eyeballing distutils_do_install, I don't see what could produce so
many variations.

Going into the new, clean build container, I can see those last two
hashes with different entries:
>>> d['ee6850bdcf70ba63dea37e09c78c599f']
frozenset({'tr', 'rm', 'sed', 'ln', 'cd', 'oe_multilib_header',
'autotools_do_install', 'echo', 'basename', 'install'})
>>> d['382871fb17743ba9635d7efc4db7d993']
frozenset({'tr', 'rm', 'sed', 'ln', 'cd', 'oe_multilib_header',
'autotools_do_install', 'echo', 'basename', 'install'})

and the expected core2-32 python-async do_install
>>> d['3c6fe664c51d2f793f8fd0eb103d68cb']
frozenset({'bbfatal_log', 'rm', 'test', 'sed', '[', 'rmdir',
'/home/build/openxt-compartments/build/tmp-glibc/work/core2-32-oe-linux/python-async/0.6.2-r0/recipe-sysroot-native/usr/bin/python-native/python',
'find', 'install', 'mv'})

I've only run one core2-32 build in the fresh container, so there is no
64-bit binutils entry at the original collision hash
3df9018676de219bb3e46e88eea09c98.

Ok, I hacked up a script to check two bb_codeparser.dat files for
collisions.  Compare the current one with the "bad" one:
$ ./pickle-cmp.py cache/bb_codeparser.dat cache/bb_codeparser.dat.old-bad-one
Collision ee6850bdcf70ba63dea37e09c78c599f
frozenset({'echo', 'rm', 'autotools_do_install', 'tr',
'oe_multilib_header', 'cd', 'basename', 'sed', 'ln', 'install'})
frozenset({'find', 'test', 'rm', 'bbfatal_log', '[', 'sed', 'mv',
'/home/build/openxt-compartments/build/tmp-glibc/work/core2-32-oe-linux/python-async/0.6.2-r0/recipe-sysroot-native/usr/bin/python-native/python',
'rmdir', 'install'})
Collision 382871fb17743ba9635d7efc4db7d993
frozenset({'echo', 'rm', 'autotools_do_install', 'tr',
'oe_multilib_header', 'cd', 'basename', 'sed', 'ln', 'install'})
frozenset({'find', 'test', 'rm', 'bbfatal_log', '[', 'sed', 'mv',
'/home/build/openxt-compartments/build/tmp-glibc/work/core2-32-oe-linux/python-async/0.6.2-r0/recipe-sysroot-native/usr/bin/python-native/python',
'rmdir', 'install'})
Collision 5254083eac08e32fc68bc9421d7df287
frozenset({'autotools_do_install', 'rm', 'sed', 'touch', 'install'})
frozenset({'/etc/init.d/xenclient-boot-sound', 'true', ':', '['})
Collision d0701fd5c05175aeafc06d8ce34d3532
frozenset({'create-cracklib-dict', 'autotools_do_install'})
frozenset({'/etc/init.d/gateone', 'true', ':', '['})
Collision ec332415bd96520823ba383494e7a9a7
frozenset({'ln', 'popd', ':', 'pushd'})
frozenset({'DEPLOY_DIR', 'useradd_preinst', 'perform_useradd', 'PKGD',
'PKGDEST', 'pkg_preinst', 'MLPREFIX', 'perform_groupadd', 'PN',
'perform_groupmems', 'PACKAGES', 'NOAUTOPACKAGEDEBUG',
'USERADD_PACKAGES', 'WORKDIR'})
Collision 3df9018676de219bb3e46e88eea09c98
frozenset({'echo', 'rm', 'autotools_do_install', 'tr',
'oe_multilib_header', 'cd', 'basename', 'sed', 'ln', 'install'})
frozenset({'find', 'test', 'rm', 'bbfatal_log', '[', 'sed', 'mv',
'/home/build/openxt-compartments/build/tmp-glibc/work/core2-32-oe-linux/python-async/0.6.2-r0/recipe-sysroot-native/usr/bin/python-native/python',
'rmdir', 'install'})
Collision 0aa15eb469ad8854cda0b0675217b8f6
frozenset({'find', 'test', 'rm', 'bbfatal_log', '[', 'sed', 'mv',
'/home/build/openxt-compartments/build/tmp-glibc/work/core2-32-oe-linux/python-mock/2.0.0-r0/recipe-sysroot-native/usr/bin/python-native/python',
'rmdir', 'install'})
frozenset({'oe_runmake', 'find', 'true', 'test', 'echo', 'chmod',
'rm', 'mkdir', '[', 'oe_multilib_header', 'cd', 'lnr', 'basename',
'continue', 'mv', 'ln', 'local', 'install'})

Compare the current one with the fresh one from the other container (build4):
$ ./pickle-cmp.py cache/bb_codeparser.dat build4-codeparser.dat
Collision d0701fd5c05175aeafc06d8ce34d3532
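
(For reference, pickle-cmp.py is roughly along these lines -- a rough
sketch rather than the exact script, and it assumes bitbake's lib/
directory is on sys.path so the pickled cache classes resolve, and that
the cache file unpickles to a [data, version] pair the way
MultiProcessCache writes it:)

#!/usr/bin/env python3
# pickle-cmp.py <cache1> <cache2>
# Rough sketch: report hashes present in both codeparser caches whose
# cached 'execs' sets differ.
import pickle
import sys

sys.path.insert(0, "bitbake/lib")  # assumption: so pickled bb classes resolve

def load_shellcache(path):
    with open(path, "rb") as f:
        data, version = pickle.load(f)  # MultiProcessCache stores [data, version]
    # CodeParserCache data should be [pythoncache, shellcache]
    shellcache = data[1] if isinstance(data, (list, tuple)) else data
    # entries may be shellCacheLine objects (with .execs) or bare frozensets
    return {k: frozenset(getattr(v, "execs", v)) for k, v in shellcache.items()}

a = load_shellcache(sys.argv[1])
b = load_shellcache(sys.argv[2])

for h in sorted(set(a) & set(b)):
    if a[h] != b[h]:
        print("Collision", h)
        print(a[h])
        print(b[h])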

Re: [OE-core] Mis-generation of shell script (run.do_install)?

2019-01-16 Thread Jason Andryuk
On Wed, Jan 16, 2019 at 9:02 AM Richard Purdie
 wrote:
>
> On Wed, 2019-01-16 at 08:55 -0500, Jason Andryuk wrote:
> > On Tue, Jan 8, 2019 at 1:26 PM 
> > wrote:
> > > On Tue, 2018-12-18 at 12:45 -0500, Jason Andryuk wrote:
> > > > I can definitively state I have a hash in bb_codeparser.dat with
> > > > an
> > > > incorrect shellCacheLine entry and I don't know how it got there.
> > > >
> > > > The bad hash is 3df9018676de219bb3e46e88eea09c98.  I've attached
> > > > a
> > > > file with the binutils do_install() contents which hash to that
> > > > value.
> > > >
> > > > The bad 3df9018676de219bb3e46e88eea09c98 entry in the
> > > > bb_codeparser.dat returned
> > > > DEBUG: execs [
> > > > DEBUG: execs rm
> > > > DEBUG: execs install
> > > > DEBUG: execs test
> > > > DEBUG: execs sed
> > > > DEBUG: execs rmdir
> > > > DEBUG: execs bbfatal_log
> > > > DEBUG: execs mv
> > > > DEBUG: execs /home/build/openxt-compartments/build/tmp-
> > > > glibc/work/core2-32-oe-linux/python-async/0.6.2-r0/recipe-
> > > > sysroot-
> > > > native/usr/bin/python-native/python
> > > > DEBUG: execs find
> > >
> > > This is useful data (along with the attachment), thanks.
> > >
> > > I agree that this looks likely to have come from a core2-32 tuned
> > > machine (e.g. genericx86) from python-async do_install.
> > >
> > > How old was this build directory? Can you remember any details of
> > > the
> > > update history for it?
> >
> > I think the build directory was from the beginning of October 30th,
> > and I guess I hit the collision December 10th or so.
> >
> > > I'd be very interested to try and reproduce that hash. I locally
> > > blacklisted your collision from my cache and tried to reproduce
> > > this. I
> > > can generate a matching hash for the binutils do_install but I
> > > can't
> > > produce one matching the above.
> >
> > I tried around December 18th to generate the collision again.  I set
> > up a new container with an identical openxt path.  There, python-
> > async was built, but it did not have the colliding hash.  When core2-
> > 64 binutils was built, it had the expected hash.
> >
> > > Can you remember the history of this build directory and which
> > > updates
> > > it may have had? The python-async recipe is confined to OE-Core so
> > > its
> > > probably the revision history for the oe-core repo which is most
> > > interesting. Anything in the .git/logs directory for that which
> > > would
> > > help us replay the different versions you might have built?
> >
> > oe-core is checked out at 819aa151bd634122a46ffdd822064313c67f5ba5
> > It's a git submodule locked at a fixed revision, and it had not
> > changed in the build directory.
> >
> > OpenXT builds 8 or 9 different MACHINEs and images in sequence in the
> > same build directory.  Maybe 6 are core2-32 and two are core2-64. The
> > 32bit ones run first.
>
> The hash we don't have is from a core2-32 MACHINE. I'm wondering which
> configurations you might have parsed for a core2-32 MACHINE between
> October and December?

Which "configurations" are you asking about?

The standard OpenXT build loops through building all 8 images and
packaging them up into an installer iso.  Often I run that build
script, but sometimes I just build individual machines manually.

I was mainly working on the core2-64 machines immediately prior to
this event.  I was very surprised when it occurred since 1) I didn't
expect binutils to be re-built and 2) I wasn't working on the
openxt-installer machine which failed.

> Was TMPDIR ever cleaned? If not, do you have the python-async WORKDIR
> for core2-32? The TMPDIR/logs directory may also have useful hints
> about the configurations built...

Unfortunately, yes, I cleaned TMPDIR when I hit the build error.  Same
with the sstate-cache.

In general, I don't see python-async in TMPDIR after running through
the OpenXT build.  Would that be because an early machine builds
python-async, but then it gets cleared out of TMPDIR when a later
machine/image is built?

> > I think the problem first manifested after I added an additional local
> > layer to BBLAYERS.  At that time, I started building an additional
> > MACHINE.  Along with the mis-generated run.do_install script, bitbake
> > was complaining about the binutils base hash mismatch which triggered
> > the re-build.

Re: [OE-core] Mis-generation of shell script (run.do_install)?

2019-01-16 Thread Jason Andryuk
On Tue, Jan 8, 2019 at 1:26 PM  wrote:
>
> On Tue, 2018-12-18 at 12:45 -0500, Jason Andryuk wrote:
> > I can definitively state I have a hash in bb_codeparser.dat with an
> > incorrect shellCacheLine entry and I don't know how it got there.
> >
> > The bad hash is 3df9018676de219bb3e46e88eea09c98.  I've attached a
> > file with the binutils do_install() contents which hash to that
> > value.
> >
> > The bad 3df9018676de219bb3e46e88eea09c98 entry in the
> > bb_codeparser.dat returned
> > DEBUG: execs [
> > DEBUG: execs rm
> > DEBUG: execs install
> > DEBUG: execs test
> > DEBUG: execs sed
> > DEBUG: execs rmdir
> > DEBUG: execs bbfatal_log
> > DEBUG: execs mv
> > DEBUG: execs /home/build/openxt-compartments/build/tmp-
> > glibc/work/core2-32-oe-linux/python-async/0.6.2-r0/recipe-sysroot-
> > native/usr/bin/python-native/python
> > DEBUG: execs find
>
> This is useful data (along with the attachment), thanks.
>
> I agree that this looks likely to have come from a core2-32 tuned
> machine (e.g. genericx86) from python-async do_install.
>
> How old was this build directory? Can you remember any details of the
> update history for it?

I think the build directory was from the beginning of October 30th,
and I guess I hit the collision December 10th or so.

> I'd be very interested to try and reproduce that hash. I locally
> blacklisted your collision from my cache and tried to reproduce this. I
> can generate a matching hash for the binutils do_install but I can't
> produce one matching the above.

I tried around December 18th to generate the collision again.  I set
up a new container with an identical openxt path.  There, python-async
was built, but it did not have the colliding hash.  When core2-64
binutils was built, it had the expected hash.

> Can you remember the history of this build directory and which updates
> it may have had? The python-async recipe is confined to OE-Core so its
> probably the revision history for the oe-core repo which is most
> interesting. Anything in the .git/logs directory for that which would
> help us replay the different versions you might have built?

oe-core is checked out at 819aa151bd634122a46ffdd822064313c67f5ba5
It's a git submodule locked at a fixed revision, and it had not
changed in the build directory.

OpenXT builds 8 or 9 different MACHINEs and images in sequence in the
same build directory.  Maybe 6 are core2-32 and two are core2-64. The
32bit ones run first.

I think the problem first manifested after I added an additional local
layer to BBLAYERS.  At that time, I started building an additional
MACHINE.  Along with the mis-generated run.do_install script, bitbake
was complaining about the binutils base hash mismatch which triggered
the re-build.  The first 64-bit MACHINE included TUNE_CCARGS +=
"-mstackrealign" while the second did not.  Could that be a reason why
bitbake complained about the base hash mismatch?

Without reproducing the hash, I'm more puzzled.

Regards,
Jason


Re: [OE-core] Mis-generation of shell script (run.do_install)?

2018-12-18 Thread Jason Andryuk
On Mon, Dec 17, 2018 at 4:24 PM  wrote:
>
> On Mon, 2018-12-17 at 12:21 -0800, Andre McCurdy wrote:
> > On Mon, Dec 17, 2018 at 6:44 AM 
> > wrote:
> > > On Sat, 2018-12-15 at 20:19 -0500, Jason Andryuk wrote:
> > > > As far as I can tell, pysh is working properly - it's just the
> > > > bb_codeparser.dat which is returning the incorrect shellCacheLine
> > > > entry.  It seems like I have an md5 collision between a pyro
> > > > core2-64
> > > > binutils do_install and core2-32 python-async
> > > > distutils_do_install in
> > > > the shellCacheLine.  python-async's entry got in first, so that's
> > > > why
> > > > binutils run.do_install doesn't include autotools_do_install -
> > > > the
> > > > shellCacheLine `execs` entry doesn't include it.  Or somehow the
> > > > `bb_codeparser.dat` file was corrupted to have an incorrect
> > > > `execs`
> > > > for the binutils do_install hash.
> > >
> > > That is rather worrying. Looking at the known issues with md5, I
> > > can
> > > see how this could happen though.
> >
> > How do you see this could happen? By random bad luck?
> >
> > Despite md5 now being susceptible to targeted attacks, the chances of
> > accidentally hitting a collision between two 128bit hashes is as
> > unlikely as it's always been.
> >
> >   http://big.info/2013/04/md5-hash-collision-probability-using.html
> >
> > "It is not that easy to get hash collisions when using MD5 algorithm.
> > Even after you have generated 26 trillion hash values, the
> > probability of the next generated hash value to be the same as one of
> > those 26 trillion previously generated hash values is 1/1trillion (1
> > out of 1 trillion)."
> >
> > It seems much more likely that there's a bug somewhere in the way the
> > hashes are used. Unless we understand that then switching to a longer
> > hash might not solve anything.
>
> The md5 collision generators have demonstrated its possible to get
> checksums where there is a block of contiguous fixed data and a block
> of arbitrary data in ratios of up to about 75% to 25%.
>
> That pattern nearly exactly matches our function templating mechanism
> where two functions may be nearly identical except for a name or a
> small subset of it.
>
> Two random hashes colliding are less interesting than the chances of
> two very similar but subtly different pieces of code getting the same
> hash. I don't have a mathematical level proof of it but looking at the
> way you can generate collisions, I suspect our data is susceptible and
> the fact you can do it at all with such large blocks is concerning.
>
> I would love to have definitive proof. I'd be really interested if
> Jason has the "bad" checksum and one of the inputs which matches it as
> I'd probably see if we could brute force the other. I've read enough to
> lose faith in our current code though.
>
> Also though, there is the human factor. What I don't want to have is
> people put off the project deeming it "insecure". I already get raised
> eyebrows at the use of md5. Its probably time to switch and be done
> with any perception anyway, particularly now questions are being asked,
> valid or not as the performance hit, whilst noticeable on a profile is
> not earth shattering.
>
> Finally, by all means please do audit the codepaths and see if there is
> another explanation. Our hash use is fairly simple but its possible
> there is some other logic error and if there is we should fix it.

I can definitively state I have a hash in bb_codeparser.dat with an
incorrect shellCacheLine entry and I don't know how it got there.

The bad hash is 3df9018676de219bb3e46e88eea09c98.  I've attached a
file with the binutils do_install() contents which hash to that value.
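
(To double-check which cache key a candidate function body maps to, a
small sketch like this works -- assuming the shellCacheLine key is just
the md5 hexdigest of the exact function text, so whitespace differences
change the result:)

# check-key.py <file-with-function-body>
import hashlib
import sys

with open(sys.argv[1], "r") as f:
    body = f.read()

# Assumption: bitbake keys the shell cache on md5 over the exact text.
print(hashlib.md5(body.encode("utf-8")).hexdigest())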

The bad 3df9018676de219bb3e46e88eea09c98 entry in the bb_codeparser.dat returned
DEBUG: execs [
DEBUG: execs rm
DEBUG: execs install
DEBUG: execs test
DEBUG: execs sed
DEBUG: execs rmdir
DEBUG: execs bbfatal_log
DEBUG: execs mv
DEBUG: execs 
/home/build/openxt-compartments/build/tmp-glibc/work/core2-32-oe-linux/python-async/0.6.2-r0/recipe-sysroot-native/usr/bin/python-native/python
DEBUG: execs find

These execs look like they could be from a distutils_do_install(),
but that's just a guess.  python-async was not in my tmp-glibc
directory when I started this investigation.  I don't know how it got
there.  I built it manually, but the resulting distutils_do_install
has a different hash :(

The correct shellCacheLine entry for core2-64 binutils do_install returns:
DEBUG: execs basename
DEBUG: execs rm
DEBUG: execs oe_mul

Re: [OE-core] Mis-generation of shell script (run.do_install)?

2018-12-15 Thread Jason Andryuk
On Sat, Dec 15, 2018, 5:51 AM 
> On Fri, 2018-12-14 at 14:30 -0500, Jason Andryuk wrote:
> > I wasn't sure how to tie into the pysh stuff, but that got me poking
> > around in bitbake/lib/bb/codeparser.py .  Adding debug messages to
> > parse_shell(), I see that do_install is found in the CodeParserCache,
> > bb_codeparser.dat, but the returned `execs` do not include
> > autotools_do_install.  Strangely, it includes a path to python -
> > ...core2-32-oe-linux/python-async/0.6.2-r0/recipe-sysroot-
> > native/usr/bin/python-native/python.
> > It looks like `execs` could be for `distutils_do_install`.  And
> > again,
> > strangely, python-async is not in my tmp-glibc.  It must have been
> > built at some point which left the entry in bb_codeparser.dat.
> >
> > I built python-async, but its distutils_do_install hash value does
> > not
> > match the one in the cache.
> >
> > Moving cache/bb_codeparser.dat out of the way, bitbake complains:
> > ERROR: When reparsing
> > /home/build/openxt/build/repos/openembedded-core/meta/recipes-
> > devtools/binutils/binutils_2.28.bb.do_install,
> > the basehash value changed from 80812e0772cf901b51790c205564070d to
> > 493152cd3740c5420d0bf7a5d09df001. The metadata is not deterministic
> > and this needs to be fixed.
> >
> > `cleanall` does not clear out the message, but the package builds.
>
> Its a little hard to make sense of this. If you move the cache out the
> way it should simply get regenerated. It is long lived so things from
> old builds in there is expected.
>
> Were you able to isolate this into a smaller test case someone else
> could reproduce?

As far as I can tell, pysh is working properly - it's just the
bb_codeparser.dat which is returning the incorrect shellCacheLine
entry.  It seems like I have an md5 collision between a pyro core2-64
binutils do_install and core2-32 python-async distutils_do_install in
the shellCacheLine.  python-async's entry got in first, so that's why
binutils run.do_install doesn't include autotools_do_install - the
shellCacheLine `execs` entry doesn't include it.  Or somehow the
`bb_codeparser.dat` file was corrupted to have an incorrect `execs`
for the binutils do_install hash.

I briefly tried to reproduce the python-async distutils_do_install
with the same hash, but could not get it to match.  Also I tried to
manually unpickle bb_codeparser.dat, but it threw a stack underflow
error - maybe I just didn't have all the necessary imports?

I'm not sure where the basehash/"metadata is not deterministic"
message comes from.  I am using two different x86-64 machine types
that both fall back to the core2-64 binutils.  Could that be an issue?

Regards,
Jason


Re: [OE-core] Mis-generation of shell script (run.do_install)?

2018-12-14 Thread Jason Andryuk
On Tue, Dec 11, 2018 at 10:02 AM Richard Purdie
 wrote:
>
> On Tue, 2018-12-11 at 08:42 -0500, Jason Andryuk wrote:
> > Has anyone ever seen a generated shell script missing functions?
> >
> > I have an OpenXT/OpenEmbedded setup where I had run many successful
> > builds.  I made a change and then re-ran the build - it failed in
> > binutils' do_install with autotools_do_install command not found.
> >
> > core2-64-oe-linux/binutils/2.28-r0/temp/run.do_install.11776: line
> > 124: autotools_do_install: command not found
> >
> > Sure enough, autotools_do_install is not in run.do_install.
> >
> > I had not changed binutils or any relevant variable, as far as I can
> > tell.  If I run with '-e' I see the full autotools_do_install
> > function in the output.  For some reason, the generated script wasn't
> > including autotools_do_install.
> >
> > I tried binutils -c cleansstate, but that didn't work.  I tried
> > pruning the sstate-cache dir, but that didn't work.  I tried deleting
> > tmp-glibc and sstate-cache, but it had the same error when I rebuilt.
> >
> > Modifying binutils do_install by adding a comment and `true` lets it
> > build.
> >
> > I saw something similar one other time where the generated script was
> > missing a function.  I can't recall the details, but it was a
> > different package and MACHINE.
> >
> > Any suggestions on debugging this?
>
> It sounds like pysh in bitbake wasn't able to see a dependency on the
> function in question. Creating a small/reproducible test case would be
> how I'd approach it, there are tests on the pysh code in bitbake-
> selftest for example.
>
> Once I had a test case which failed, I'd then use that to debug and see
> if I could figure out a fix.

Thanks, Richard.

I wasn't sure how to tie into the pysh stuff, but that got me poking
around in bitbake/lib/bb/codeparser.py .  Adding debug messages to
parse_shell(), I see that do_install is found in the CodeParserCache,
bb_codeparser.dat, but the returned `execs` do not include
autotools_do_install.  Strangely, it includes a path to python -
...core2-32-oe-linux/python-async/0.6.2-r0/recipe-sysroot-native/usr/bin/python-native/python.
It looks like `execs` could be for `distutils_do_install`.  And again,
strangely, python-async is not in my tmp-glibc.  It must have been
built at some point which left the entry in bb_codeparser.dat.
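
(The debug messages were added around the shell cache lookup and looked
roughly like this -- a sketch based on my reading of bb/codeparser.py,
not the verbatim upstream code, so attribute names may differ between
bitbake versions:)

import hashlib

def debug_shellcache_lookup(value, shellcache, logger):
    # bitbake appears to key the shell cache on the md5 of the function text
    h = hashlib.md5(value.encode("utf-8")).hexdigest()
    if h in shellcache:
        entry = shellcache[h]
        logger.debug("shellcache hit for %s", h)
        for e in getattr(entry, "execs", entry):
            logger.debug("execs %s", e)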

I built python-async, but its distutils_do_install hash value does not
match the one in the cache.

Moving cache/bb_codeparser.dat out of the way, bitbake complains:
ERROR: When reparsing
/home/build/openxt/build/repos/openembedded-core/meta/recipes-devtools/binutils/binutils_2.28.bb.do_install,
the basehash value changed from 80812e0772cf901b51790c205564070d to
493152cd3740c5420d0bf7a5d09df001. The metadata is not deterministic
and this needs to be fixed.

`cleanall` does not clear out the message, but the package builds.

Regards,
Jason


[OE-core] Mis-generation of shell script (run.do_install)?

2018-12-11 Thread Jason Andryuk
Hi,

Has anyone ever seen a generated shell script missing functions?

I have an OpenXT/OpenEmbedded setup where I had run many successful
builds.  I made a change and then re-ran the build - it failed in
binutils' do_install with autotools_do_install command not found.

core2-64-oe-linux/binutils/2.28-r0/temp/run.do_install.11776: line
124: autotools_do_install: command not found

Sure enough, autotools_do_install is not in run.do_install.

I had not changed binutils or any relevant variable, as far as I can
tell.  If I run with '-e' I see the full autotools_do_install
function in the output.  For some reason, the generated script wasn't
including autotools_do_install.

I tried binutils -c cleansstate, but that didn't work.  I tried
pruning the sstate-cache dir, but that didn't work.  I tried deleting
tmp-glibc and sstate-cache, but it had the same error when I rebuilt.

Modifying binutils do_install by adding a comment and `true` lets it build.

I saw something similar one other time where the generated script was
missing a function.  I can't recall the details, but it was a
different package and MACHINE.

Any suggestions on debugging this?

Thanks,
Jason