[yocto] Missing RDEPENDS on libldb, libtevent
I am doing Yocto builds with Yocto sumo, using the meta-oe layer. I recently updated Yocto poky from eebbc00b to 64a257fa, and updated meta-oe from 2bb21ef2 to 8760facb. After that, I got build errors:

ERROR: libldb-1.3.1-r0 do_package_qa: QA Issue: /usr/lib/python2.7/site-packages/ldb.so contained in package pyldb requires libldb.so.1(LDB_0.9.10), but no providers found in RDEPENDS_pyldb? [file-rdeps]
ERROR: libldb-1.3.1-r0 do_package_qa: QA Issue: /usr/lib/python2.7/site-packages/ldb.so contained in package pyldb requires libldb.so.1, but no providers found in RDEPENDS_pyldb? [file-rdeps]
...
ERROR: libldb-1.3.1-r0 do_package_qa: QA Issue: /usr/lib/libldb.so.1.3.1 contained in package libldb requires libtevent.so.0(TEVENT_0.9.9), but no providers found in RDEPENDS_libldb? [file-rdeps]
ERROR: libldb-1.3.1-r0 do_package_qa: QA Issue: /usr/lib/libldb.so.1.3.1 contained in package libldb requires libtevent.so.0, but no providers found in RDEPENDS_libldb? [file-rdeps]
...
ERROR: libtevent-0.9.36-r0 do_package_qa: QA Issue: /usr/lib/python2.7/site-packages/_tevent.so contained in package python-tevent requires libtevent.so.0(TEVENT_0.9.9), but no providers found in RDEPENDS_python-tevent? [file-rdeps]
ERROR: libtevent-0.9.36-r0 do_package_qa: QA Issue: /usr/lib/python2.7/site-packages/_tevent.so contained in package python-tevent requires libtevent.so.0, but no providers found in RDEPENDS_python-tevent? [file-rdeps]

I'm not sure what changed to make these errors appear, but I have added RDEPENDS lines in a bbappend as a work-around for now.

--
Craig McQueen

___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto
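For reference, a minimal sketch of the kind of work-around bbappend described above. The exact lines are an assumption inferred from the QA errors, not the poster's actual files:

```conf
# libldb_%.bbappend (hypothetical sketch)
RDEPENDS_libldb += "libtevent"
RDEPENDS_pyldb += "libldb"

# libtevent_%.bbappend (hypothetical sketch)
RDEPENDS_python-tevent += "libtevent"
```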
Re: [yocto] mono-native is trying to install files into a shared area...
From: Alex Lennon

On 12/06/2018 05:43, Khem Raj wrote:
> On Mon, Jun 11, 2018 at 8:36 PM Craig McQueen <craig.mcqu...@innerrange.com> wrote:
> > [Quoted error output and diagnostics trimmed; the full text is in the original "mono-native is trying to install files into a shared area" post below.]
Re: [yocto] mono-native is trying to install files into a shared area...
I wrote:
> > [Quoted error output trimmed; the full text is in the original post below.]
> >
> > It's puzzling why I'm getting these errors, because it says "Matched in b''", so the files are not clashing with another recipe. It seems to be somehow trying to install its own files twice, or something like that. If I look under tmp/work/x86_64-linux/mono-native/5.4.1.6-r0/, then I see the files present in both:
> >
> > sysroot-destdir/home/craigm/yocto/poky/build/tmp/sysroots/x86_64-linux/
> > and image/home/crai
Re: [yocto] mono-native is trying to install files into a shared area...
I wrote:
> [Quoted error output trimmed; the full text is in the original post below.]
>
> It's puzzling why I'm getting these errors, because it says "Matched in b''", so the files are not clashing with another recipe. It seems to be somehow trying to install its own files twice, or something like that. If I look under tmp/work/x86_64-linux/mono-native/5.4.1.6-r0/, then I see the files present in both:
>
> sysroot-destdir/home/craigm/yocto/poky/build/tmp/sysroots/x86_64-linux/
> and image/home/craigm/yocto/poky/build/tmp/sysroots/x86_64-linux/
>
> Is that part of the problem?

I haven't had any success figuring out what is going on. I tried doing a new clean build, and got the same error. Does anyone else have this problem? Is it an incompatibility with Yocto morty, which I'm using? Any pointers on how to narrow down the cause?

--
Craig McQueen
[yocto] mono-native is trying to install files into a shared area...
Lately, I'm trying to upgrade to a later version of mono, 5.4.1.6. When I try to do a build of my Yocto image, bitbake gets to the end of building mono-native, and then gets an error:

ERROR: mono-native-5.4.1.6-r0 do_populate_sysroot: The recipe mono-native is trying to install files into a shared area when those files already exist. Those files and their manifest location are:
/home/craigm/yocto/poky/build/tmp/sysroots/x86_64-linux/usr/lib/mono/lldb/mono.py
Matched in b''
/home/craigm/yocto/poky/build/tmp/sysroots/x86_64-linux/usr/lib/mono/4.6.1-api/System.Web.Http.SelfHost.dll
Matched in b''
...
/home/craigm/yocto/poky/build/tmp/sysroots/x86_64-linux/usr/lib/mono/xbuild/14.0/bin/MSBuild/Microsoft.Build.CommonTypes.xsd
Matched in b''
/home/craigm/yocto/poky/build/tmp/sysroots/x86_64-linux/usr/lib/mono/xbuild/14.0/bin/MSBuild/Microsoft.Build.Core.xsd
Matched in b''
/home/craigm/yocto/poky/build/tmp/sysroots/x86_64-linux/usr/lib/mono/xbuild/14.0/Microsoft.Common.targets/ImportAfter/Microsoft.NuGet.ImportAfter.targets
Matched in b''
Please verify which recipe should provide the above files.
The build has stopped as continuing in this scenario WILL break things, if not now, possibly in the future (we've seen builds fail several months later). If the system knew how to recover from this automatically it would however there are several different scenarios which can result in this and we don't know which one this is. It may be you have switched providers of something like virtual/kernel (e.g. from linux-yocto to linux-yocto-dev), in that case you need to execute the clean task for both recipes and it will resolve this error. It may be you changed DISTRO_FEATURES from systemd to udev or vice versa. Cleaning those recipes should again resolve this error however switching DISTRO_FEATURES on an existing build directory is not supported, you should really clean out tmp and rebuild (reusing sstate should be safe). It could be the overlapping files detected are harmless in which case adding them to SSTATE_DUPWHITELIST may be the correct solution. It could also be your build is including two different conflicting versions of things (e.g. bluez 4 and bluez 5) and the correct solution for that would be to resolve the conflict. If in doubt, please ask on the mailing list, sharing the error and filelist above.
ERROR: mono-native-5.4.1.6-r0 do_populate_sysroot: If the above message is too much, the simpler version is you're advised to wipe out tmp and rebuild (reusing sstate is fine). That will likely fix things in most (but not all) cases.
ERROR: mono-native-5.4.1.6-r0 do_populate_sysroot: Function failed: sstate_task_postfunc
ERROR: Logfile of failure stored in: /home/craigm/yocto/poky/build/tmp/work/x86_64-linux/mono-native/5.4.1.6-r0/temp/log.do_populate_sysroot.108358
ERROR: Task (/home/craigm/yocto/poky/build/../../meta-mono/recipes-mono/mono/mono-native_5.4.1.6.bb:do_populate_sysroot) failed with exit code '1'
NOTE: Tasks Summary: Attempted 670 tasks of which 662 didn't need to be rerun and 1 failed.

Summary: 1 task failed:
/home/craigm/yocto/poky/build/../../meta-mono/recipes-mono/mono/mono-native_5.4.1.6.bb:do_populate_sysroot
Summary: There were 3 ERROR messages shown, returning a non-zero exit code.

I'm building with Yocto poky morty branch (currently commit 0e730770a9), meta-mono master (commit dced6635ca). I'm building on Ubuntu 16.04.4.

I have tried deleting the tmp directory, deleting all mono and mono-native from sstate, cleaning mono and meta-mono, etc., to no avail.

It's puzzling why I'm getting these errors, because it says "Matched in b''", so the files are not clashing with another recipe. It seems to be somehow trying to install its own files twice, or something like that. If I look under tmp/work/x86_64-linux/mono-native/5.4.1.6-r0/, then I see the files present in both:

sysroot-destdir/home/craigm/yocto/poky/build/tmp/sysroots/x86_64-linux/
and image/home/craigm/yocto/poky/build/tmp/sysroots/x86_64-linux/

Is that part of the problem?

--
Craig McQueen
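The error text mentions SSTATE_DUPWHITELIST; for reference, a hypothetical sketch of how it is typically extended in a bbappend. This is only appropriate if the overlapping files are confirmed harmless (which was not established in this thread), and the path used here is an assumption, not taken from the thread:

```conf
# mono-native_%.bbappend (hypothetical sketch; only if the overlap is
# confirmed harmless -- the path below is an illustrative guess)
SSTATE_DUPWHITELIST += "${STAGING_DIR_HOST}${libdir}/mono"
```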
[yocto] Yocto git repositories, shallow clone, uploadpack.allowReachableSHA1InWant
I'm using git submodules to set up a Yocto build with the various layers required for my project. I see that recent git can specify that a submodule repository should be a "shallow" clone. However, when I tried it with meta-mono, I got an error: "server does not allow request for unadvertised object". It appears that for such a shallow clone of a specified commit to work, the server must enable uploadpack.allowReachableSHA1InWant. Could this option be enabled for all Yocto git repositories? It would potentially save a lot of time and download bandwidth for everyone.

--
Craig McQueen
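For reference, a sketch of the two halves of this setup: the server-side option requested above (demonstrated on a throwaway bare repository), and the client-side shallow-submodule commands (shown commented out, since they need a real superproject and network access). The repository name meta-mono is from the post; the paths are illustrative:

```shell
# Server side: without this option, fetching an arbitrary (but reachable)
# commit by SHA-1 -- which a shallow submodule clone does -- is refused
# with "server does not allow request for unadvertised object".
repo=$(mktemp -d)/meta-mono.git
git init --bare -q "$repo"
git -C "$repo" config uploadpack.allowReachableSHA1InWant true

# Client side (illustrative only):
#   git config -f .gitmodules submodule.meta-mono.shallow true
#   git submodule update --init --depth 1 meta-mono

git -C "$repo" config uploadpack.allowReachableSHA1InWant   # prints "true"
```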
Re: [yocto] linux-firmware: build failure
Alexander Kanavin wrote:
> On 10/31/2017 09:04 AM, Craig McQueen wrote:
> > Would the following be a suitable fix?
>
> This needs to go to the oe-core list. Just one comment...
>
> >  # Remove python script used to check the WHENCE file
> > -	rm ${D}${nonarch_base_libdir}/firmware/check_whence.py
> > +	rm ${D}${nonarch_base_libdir}/firmware/check_whence.py -rf
>
> You need to either remove the file, or remove the line that does the removal. Don't do a 'remove if present'; it's better to be deterministic.

Sorry, I made a mistake. I had a bbappend for linux-firmware in my layer, which retrieved an older revision from a year ago (at the time I wrote the bbappend, it was to retrieve a newer version). So, false alarm -- my apologies.

--
Craig McQueen
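The determinism point can be illustrated with a plain shell sketch (an illustration, not from the thread): plain 'rm' fails loudly when the file is gone, forcing the recipe to be updated when upstream changes, whereas 'rm -f' silently succeeds either way:

```shell
tmp=$(mktemp -d)

# Plain 'rm' on a file that does not exist fails with a non-zero exit
# status -- in a recipe this surfaces as a hard do_install failure.
rm "$tmp/check_whence.py" 2>/dev/null
status=$?

# 'rm -f' is "remove if present": it succeeds whether or not the file
# exists, which hides the upstream change instead of flagging it.
rm -f "$tmp/check_whence.py"

echo "plain rm exit status: $status; rm -f exit status: $?"
```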
[yocto] linux-firmware: build failure
I'm trying to upgrade to rocko (from morty). I'm getting a build failure for linux-firmware:

DEBUG: Executing python function extend_recipe_sysroot
NOTE: Direct dependencies are ['virtual:native:/home/craigm/yocto/poky/build/../meta/recipes-devtools/pseudo/pseudo_1.8.2.bb:do_populate_sysroot', '/home/craigm/yocto/poky/build/../meta/recipes-devtools/quilt/quilt-native_0.65.bb:do_populate_sysroot']
NOTE: Installed into sysroot: ['pseudo-native']
NOTE: Skipping as already exists in sysroot: ['quilt-native']
DEBUG: Python function extend_recipe_sysroot finished
DEBUG: Executing shell function do_install
rm: cannot remove '/home/craigm/yocto/poky/build/tmp/work/all-poky-linux/linux-firmware/1_0.0+gitAUTOINC+80d463be82-r0/image/lib/firmware/check_whence.py': No such file or directory
WARNING: exit code 1 from a shell command.
ERROR: Function failed: do_install (log file is located at /home/craigm/yocto/poky/build/tmp/work/all-poky-linux/linux-firmware/1_0.0+gitAUTOINC+80d463be82-r0/temp/log.do_install.120558)

It seems that the latest git doesn't contain check_whence.py... or something like that. But I'm puzzled as to why the previous committer (17a772775c2559897e49236ae352444065e1b1ff perhaps) would not have encountered this. Would the following be a suitable fix?
--
diff --git a/meta/recipes-kernel/linux-firmware/linux-firmware_git.bb b/meta/recipes-kernel/linux-firmware/linux-firmware_git.bb
index 0338ba8ac2..b8f2865e3b 100644
--- a/meta/recipes-kernel/linux-firmware/linux-firmware_git.bb
+++ b/meta/recipes-kernel/linux-firmware/linux-firmware_git.bb
@@ -215,7 +215,7 @@ do_install() {
 	rm ${D}${nonarch_base_libdir}/firmware/configure
 
 	# Remove python script used to check the WHENCE file
-	rm ${D}${nonarch_base_libdir}/firmware/check_whence.py
+	rm ${D}${nonarch_base_libdir}/firmware/check_whence.py -rf
 
 	# Libertas sd8686
 	ln -sf libertas/sd8686_v9.bin ${D}${nonarch_base_libdir}/firmware/sd8686.bin
--

--
Craig McQueen
[yocto] fontconfig-native build error
I've been building a project with 'morty' with no problems for a few months, using Ubuntu 16.04. Today, I've just tried doing a build on a fresh Ubuntu 17.10 VM. But I'm getting a build error for fontconfig-native:

x86_64-linux-libtool: compile: gcc -DHAVE_CONFIG_H -I. -I../../fontconfig-2.12.1/src -I.. -I../../fontconfig-2.12.1 -I../../fontconfig-2.12.1/src -I/home/craigm/yocto/poky/build/tmp/sysroots/x86_64-linux/usr/include/freetype2 -I/home/craigm/yocto/poky/build/tmp/sysroots/x86_64-linux/usr/include -I/home/craigm/yocto/poky/build/tmp/sysroots/x86_64-linux/usr/include -Wall -Wpointer-arith -Wstrict-prototypes -Wmissing-prototypes -Wmissing-declarations -Wnested-externs -fno-strict-aliasing -DFC_CACHEDIR=\"/home/craigm/yocto/poky/build/tmp/sysroots/x86_64-linux/var/cache/fontconfig\" -DFONTCONFIG_PATH=\"/home/craigm/yocto/poky/build/tmp/sysroots/x86_64-linux/etc/fonts\" -isystem/home/craigm/yocto/poky/build/tmp/sysroots/x86_64-linux/usr/include -isystem/home/craigm/yocto/poky/build/tmp/sysroots/x86_64-linux/usr/include -O2 -pipe -pthread -c ../../fontconfig-2.12.1/src/fcobjs.c -fPIC -DPIC -o .libs/fcobjs.o
ERROR: oe_runmake failed
../../fontconfig-2.12.1/src/fcmatch.c:324:63: error: 'PRI_CHAR_WIDTH_STRONG' undeclared here (not in a function); did you mean 'PRI_WIDTH_STRONG'?
 #define FC_OBJECT(NAME, Type, Cmp) { FC_##NAME##_OBJECT, Cmp, PRI_##NAME##_STRONG, PRI_##NAME##_WEAK },
../../fontconfig-2.12.1/src/fcobjs.h:54:1: note: in expansion of macro 'FC_OBJECT'
 FC_OBJECT (CHAR_WIDTH, FcTypeInteger, NULL)
../../fontconfig-2.12.1/src/fcmatch.c:324:84: error: 'PRI_CHAR_WIDTH_WEAK' undeclared here (not in a function); did you mean 'PRI_WIDTH_WEAK'?
 #define FC_OBJECT(NAME, Type, Cmp) { FC_##NAME##_OBJECT, Cmp, PRI_##NAME##_STRONG, PRI_##NAME##_WEAK },
../../fontconfig-2.12.1/src/fcobjs.h:54:1: note: in expansion of macro 'FC_OBJECT'
 FC_OBJECT (CHAR_WIDTH, FcTypeInteger, NULL)
Makefile:632: recipe for target 'fcmatch.lo' failed
make[3]: *** [fcmatch.lo] Error 1

Funnily enough, fontconfig (not native) builds fine; it's only fontconfig-native that gives the above error. Any advice on this? I would consider going back to Ubuntu 16.04; I would also consider moving from morty to pyro or rocko (it's on my to-do list).

--
Craig McQueen
Re: [yocto] meta-mono 5.2.x recipe and pdb files
I wrote:
> I'm trying to upgrade from mono 4.6.x to 5.2.x. I see the resulting image size increase by about 10 MB in my usage. It appears that a significant contributing factor is the presence of *.pdb files in 5.2.x which weren't in 4.6.x.
>
> * Are the *.pdb files necessary?
> * What can be done to exclude them?

Seeing how the recipe handles *.mdb files, I made a bbappend file containing:

FILES_${PN}-dbg += "${libdir}/mono/*/*.pdb ${libdir}/mono/*/*/*.pdb ${libdir}/mono/gac/*/*/*.pdb"

That seems to do it. This should probably be added to the mono recipe files for mono >= 5.0.x.

--
Craig McQueen
[yocto] meta-mono 5.2.x recipe and pdb files
I'm trying to upgrade from mono 4.6.x to 5.2.x. I see the resulting image size increase by about 10 MB in my usage. It appears that a significant contributing factor is the presence of *.pdb files in 5.2.x which weren't in 4.6.x.

* Are the *.pdb files necessary?
* What can be done to exclude them?

--
Craig McQueen
Re: [yocto] Python module and build_ext args
I wrote:
> Khem Raj wrote:
> > [Earlier quoted discussion of DISTUTILS_BUILD_EXT_ARGS and the original patch trimmed; the full exchange is in the messages below.]
> >
> > IIRC there were some breakages with this but it was a long time ago. Can you apply this change and test it out?
>
> I tried applying the change manually. The Yocto do_compile step succeeded. But then the do_install step still failed. It seems that the do_install step (which runs setup.py install) still runs the setup.py build_ext step a second time, without the extra option I was trying to use.
>
> I'm not familiar enough with the workings of python3 setup.py to know why the install step re-runs the build_ext step a second time.

I wasn't able to find an elegant solution for DISTUTILS_BUILD_EXT_ARGS. I ended up getting a working python3-uvloop recipe a different way, using two patches to the setup.py. I have posted to the uvloop project regarding this: https://github.com/MagicStack/uvloop/issues/104

--
Craig McQueen
[yocto] Python recipes wanted?
I have made a number of recipes for Python packages, for my own use. Does meta-oe have an interest in any/all Python package recipe contributions, or only ones that are considered more popular? I have:

python3-aiohttp_2.2.3.bb
python3-aiohttp-jinja2_0.13.0.bb
python3-aiohttp-session_1.0.0.bb
python3-async-timeout_1.2.1.bb
python3-cobs_1.1.3.bb
python3-colorlog_2.10.0.bb
python3-configargparse_0.12.0.bb
python3-lru-dict_1.1.6.bb
python3-pycrypto_2.6.1.bb
python3-pyserial-asyncio_0.4.bb
python3-uvloop_0.8.0.bb
python3-yarl_0.11.0.bb
python-cobs_1.1.3.bb
python-lru-dict_1.1.6.bb

--
Craig McQueen
Re: [yocto] Python module and build_ext args
Khem Raj wrote:
> On Thu, Aug 24, 2017 at 9:42 PM, Craig McQueen wrote:
> > [Earlier quoted discussion of DISTUTILS_BUILD_EXT_ARGS and the original patch (https://patchwork.openembedded.org/patch/66071/) trimmed; the full text is in the message below.]
>
> IIRC there were some breakages with this but it was a long time ago. Can you apply this change and test it out?

I tried applying the change manually. The Yocto do_compile step succeeded. But then the do_install step still failed. It seems that the do_install step (which runs setup.py install) still runs the setup.py build_ext step a second time, without the extra option I was trying to use.

I'm not familiar enough with the workings of python3 setup.py to know why the install step re-runs the build_ext step a second time.

--
Craig McQueen
Re: [yocto] Python module and build_ext args
Khem Raj wrote: On Thu, Aug 24, 2017 at 6:48 PM Craig McQueen mailto:craig.mcqu...@innerrange.com>> wrote: I wrote: > I'm trying to make a recipe for python3-uvloop, using setuptools3. > > The Python 3 uvloop module depends on libuv. It bundles a version of libuv, > and setup.py tries to build it, but it doesn't work well for > cross-compilation. > However, it also provides a build_ext parameter "--use-system-libuv", which > seems to work when I try running it manually in devshell. (I have made a > suitable recipe for libuv and added libuv to DEPENDS.) > > How can I specify the "--use-system-libuv" parameter for build_ext in the > python3-uvloop recipe? I see a reference to DISTUTILS_BUILD_EXT_ARGS, > but it doesn't seem to be functional. I see DISTUTILS_BUILD_EXT_ARGS was submitted in this patch: https://patchwork.openembedded.org/patch/66071/ However, in the Yocto poky repository, I see commit 0221af0f4ee9e8bfb8796841bdf806e38bc600c6 which appears to be a broken version of the above patch with the separate build_ext step not actually executed with the DISTUTILS_BUILD_EXT_ARGS parameters. You did not explain broken in which sense ? It is broken in the sense that: The separate build_ext step is not actually executed with the DISTUTILS_BUILD_EXT_ARGS parameters. The original patch submission contained: STAGING_INCDIR=${STAGING_INCDIR} \ STAGING_LIBDIR=${STAGING_LIBDIR} \ BUILD_SYS=${BUILD_SYS} HOST_SYS=${HOST_SYS} \ - ${STAGING_BINDIR_NATIVE}/${PYTHON_PN}-native/${PYTHON_PN} setup.py build ${DISTUTILS_BUILD_ARGS} || \ + ${STAGING_BINDIR_NATIVE}/${PYTHON_PN}-native/${PYTHON_PN} setup.py \ + build_ext --include-dirs ${STAGING_INCDIR}/${PYTHON_DIR}${PYTHON_ABI} \ + --library-dirs ${STAGING_LIBCDIR}/${PYTHON_DIR} \ + ${DISTUTILS_BUILD_EXT_ARGS} \ + build ${DISTUTILS_BUILD_ARGS} || \ + ${STAGING_BINDIR_NATIVE}/${PYTHON_PN}-native/${PYTHON_PN} setup.py build_ext ${DISTUTILS_BUILD_ARGS} || \ bbfatal "${PYTHON_PN} setup.py build_ext execution failed." 
}

However that part of the patch is not present in commit 0221af0f4ee9e8bfb8796841bdf806e38bc600c6.

-- Craig McQueen
Re: [yocto] Python module and build_ext args
I wrote: > I'm trying to make a recipe for python3-uvloop, using setuptools3. > > The Python 3 uvloop module depends on libuv. It bundles a version of libuv, > and setup.py tries to build it, but it doesn't work well for > cross-compilation. > However, it also provides a build_ext parameter "--use-system-libuv", which > seems to work when I try running it manually in devshell. (I have made a > suitable recipe for libuv and added libuv to DEPENDS.) > > How can I specify the "--use-system-libuv" parameter for build_ext in the > python3-uvloop recipe? I see a reference to DISTUTILS_BUILD_EXT_ARGS, > but it doesn't seem to be functional. I see DISTUTILS_BUILD_EXT_ARGS was submitted in this patch: https://patchwork.openembedded.org/patch/66071/ However, in the Yocto poky repository, I see commit 0221af0f4ee9e8bfb8796841bdf806e38bc600c6 which appears to be a broken version of the above patch with the separate build_ext step not actually executed with the DISTUTILS_BUILD_EXT_ARGS parameters. -- Craig McQueen -- ___ yocto mailing list yocto@yoctoproject.org https://lists.yoctoproject.org/listinfo/yocto
[yocto] Python module and build_ext args
I'm trying to make a recipe for python3-uvloop, using setuptools3.

The Python 3 uvloop module depends on libuv. It bundles a version of libuv, and setup.py tries to build it, but it doesn't work well for cross-compilation. However, it also provides a build_ext parameter "--use-system-libuv", which seems to work when I try running it manually in devshell. (I have made a suitable recipe for libuv and added libuv to DEPENDS.)

How can I specify the "--use-system-libuv" parameter for build_ext in the python3-uvloop recipe? I see a reference to DISTUTILS_BUILD_EXT_ARGS, but it doesn't seem to be functional.

-- Craig McQueen
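For concreteness, the recipe fragment I was hoping would work is roughly this (a sketch, assuming DISTUTILS_BUILD_EXT_ARGS behaved as its name suggests):

```
inherit setuptools3

DEPENDS += "libuv"
DISTUTILS_BUILD_EXT_ARGS = "--use-system-libuv"
```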
[yocto] python3-jinja2 missing package dependency
For the python3-jinja2 package in meta-openembedded meta-python, it should also have a dependency on python3-numbers. I'm not sure whether this dependency also applies to the python-jinja2 package; I'm only using the Python 3 package.

-- Craig McQueen
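As a stop-gap until the recipe is fixed, something like this bbappend in your own layer works for me (a sketch; the file name pattern is whatever matches your meta-python version):

```
# python3-jinja2_%.bbappend
RDEPENDS_${PN} += "python3-numbers"
```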
Re: [yocto] dllmap config for a Mono app
Joshua Watt wrote: > On Thu, 2017-06-08 at 16:41 +1000, Craig McQueen wrote: > > I have a Mono app that uses DllImport to load some functions from > > libraries. The app can't find the libraries unless I create a Mono > > config file containing dllmap lines to map the library's plain "DLL" > > name (e.g. "foo") to the real name of the .so file (e.g. > > "libfoo.so.4"). > > I'm not familiar with Mono, is libfoo a plain old library that anyone can > use, or > it is some special "Mono Library" It's a plain old library. It just so happens that I wrote it, but other than that it's a typical library. > > * What would be a good way in the .bb recipe to automatically create > > the required Mono config file? It would need to find the "real" names > > of the library files from the library(s) build output. So it would > > DEPENDS on the library(s). > > > > * Is it better to create a local config file, or write it into > > /etc/mono/config? (My Mono app has the DllImports in a compiled DLL, > > and so far I've found that I need to create a local MyDll.dll.config > > file; making a MyApp.exe.config file doesn't work.) > > Packaging rules require that every file belong to one and only one package. If > your application writes /etc/mono/config, you wouldn't be able to write that > file from any other package, including some other Mono application. The > MyApp.exe.config file (unfortunately) sounds like the ideal solution, because > you know that no other package is going to try and write that file. Writing > MyDll.dll.config runs into the same problem as /etc/mono/config (at least if > you are trying to write it from your MyApp recipe), as another app might > want to write the same file because it uses the same library but that isn't > allowed by the packaging rules. If you want to write MyDll.dll.config, it > should > be done in the recipe that actually creates the dll. 
/etc/mono/config is the "standard" system file for Mono, and an initial file is created by the mono recipe so that Mono can load various standard system libraries such as libc.so.6. Modifying it would be comparable to modifying other system config files such as /etc/modules, where it would be better to add a file to /etc/modprobe.d/. But unfortunately Mono doesn't provide for loading config from files in a directory like /etc/mono/config.d/.

I guess I need to write MyDll.dll.config in the recipe for my Mono app then. Thanks for the advice on this, much appreciated.

-- Craig McQueen
[yocto] Python 3 internal package dependencies
I found that the Python 3 internal package dependencies are incompletely specified. For example, I wanted to install python3-asyncio for my Python program to use. But when I tried to run it, it was missing some other Python 3 packages that asyncio depended on. I ended up making a Python 3 bbappend file with the following:

RDEPENDS_${PN}-asyncio += "${PN}-selectors"
RDEPENDS_${PN}-misc += "${PN}-multiprocessing"
RDEPENDS_${PN}-io += "${PN}-shell"
RDEPENDS_${PN}-shell += "${PN}-compression"

This just solved my immediate needs; I'm sure it's incomplete. Is it worth trying to improve the internal package RDEPENDS in the Python 3 recipe, or is it better for recipes for Python apps to just manually list all the sub-dependencies as needed?

-- Craig McQueen
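As a quick illustration of the first missing dependency (run on a desktop CPython interpreter, not on the target -- just to show that importing asyncio drags in the selectors module):

```python
import sys

import asyncio  # noqa: F401 -- imported only for its import-time side effects

# asyncio's event loop implementation imports the stdlib 'selectors'
# module, which is why the python3-asyncio package needs
# python3-selectors at runtime on the target.
print('selectors' in sys.modules)  # prints: True
```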
[yocto] dllmap config for a Mono app
I have a Mono app that uses DllImport to load some functions from libraries. The app can't find the libraries unless I create a Mono config file containing dllmap lines to map the library's plain "DLL" name (e.g. "foo") to the real name of the .so file (e.g. "libfoo.so.4").

* What would be a good way in the .bb recipe to automatically create the required Mono config file? It would need to find the "real" names of the library files from the library(s) build output. So it would DEPENDS on the library(s).

* Is it better to create a local config file, or write it into /etc/mono/config? (My Mono app has the DllImports in a compiled DLL, and so far I've found that I need to create a local MyDll.dll.config file; making a MyApp.exe.config file doesn't work.)

* Alternatively, the Mono app can find the libraries if a plain libfoo.so symbolic link to libfoo.so.4 exists in the rootfs. But this symbolic link normally is only in the libfoo-dev package, not the libfoo package. Is there some reasonable way to create and install the libfoo.so symbolic links on the rootfs?

-- Craig McQueen
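For reference, the dllmap config files I'm talking about use Mono's standard format, something like this (the "foo" library names here are the hypothetical example from above):

```xml
<configuration>
    <dllmap dll="foo" target="libfoo.so.4" />
</configuration>
```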
Re: [yocto] arm-poky-linux-gnueabi-gcc: Command not found
ravikiran j wrote: > > I am trying to build the linux kernel for odorid-xu3 board but i am getting > following errors for > bitbake linux-stable > > > DEBUG: Executing shell function do_compile > NOTE: make -j 4 HOSTCC=gcc HOSTCPP=gcc -E zImage > CC=arm-poky-linux-gnueabi-gcc -fuse-ld=bfd LD=arm-poky-linux-gnueabi-ld.bfd > ERROR: oe_runmake failed > /home/mistral/yocto1/poky/build-xfce/tmp/work-shared/odroid-xu3/kernel-source/scripts/gcc-version.sh: > line 25: arm-poky-linux-gnueabi-gcc: command not found > /home/mistral/yocto1/poky/build-xfce/tmp/work-shared/odroid-xu3/kernel-source/scripts/gcc-version.sh: > line 26: arm-poky-linux-gnueabi-gcc: command not found > make[2]: arm-poky-linux-gnueabi-gcc: Command not found > make[2]: arm-poky-linux-gnueabi-gcc: Command not found > GEN ./Makefile > scripts/kconfig/conf --silentoldconfig Kconfig > /home/mistral/yocto1/poky/build-xfce/tmp/work-shared/odroid-xu3/kernel-source/scripts/gcc-version.sh: > line 25: arm-poky-linux-gnueabi-gcc: command not found > /home/mistral/yocto1/poky/build-xfce/tmp/work-shared/odroid-xu3/kernel-source/scripts/gcc-version.sh: > line 26: arm-poky-linux-gnueabi-gcc: command not found > make[2]: arm-poky-linux-gnueabi-gcc: Command not found > make[2]: arm-poky-linux-gnueabi-gcc: Command not found > CHK include/config/kernel.release > make[2]: arm-poky-linux-gnueabi-gcc: Command not found > make[2]: arm-poky-linux-gnueabi-gcc: Command not found > GEN ./Makefile > > > Using > /home/mistral/yocto1/poky/build-xfce/tmp/work-shared/odroid-xu3/kernel-source > as source for kernel > make[2]: arm-poky-linux-gnueabi-gcc: Command not found > make[2]: arm-poky-linux-gnueabi-gcc: Command not found > HOSTCC scripts/pnmtologo > HOSTCC scripts/kallsyms > CC scripts/mod/empty.o > /bin/sh: 1: arm-poky-linux-gnueabi-gcc: not found > /home/mistral/yocto1/poky/build-xfce/tmp/work-shared/odroid-xu3/kernel-source/scripts/Makefile.build:293: > recipe for target 'scripts/mod/empty.o' failed > make[4]: *** 
[scripts/mod/empty.o] Error 127 > /home/mistral/yocto1/poky/build-xfce/tmp/work-shared/odroid-xu3/kernel-source/scripts/Makefile.build:544: > recipe for target 'scripts/mod' failed > make[3]: *** [scripts/mod] Error 2 > make[3]: *** Waiting for unfinished jobs > HOSTCC scripts/dtc/dtc.o > HOSTCC scripts/dtc/flattree.o > HOSTCC scripts/dtc/fstree.o > HOSTCC scripts/dtc/data.o > HOSTCC scripts/dtc/livetree.o > HOSTCC scripts/dtc/treesource.o > HOSTCC scripts/dtc/srcpos.o > HOSTCC scripts/dtc/checks.o > HOSTCC scripts/dtc/util.o > SHIPPED scripts/dtc/dtc-lexer.lex.c > SHIPPED scripts/dtc/dtc-parser.tab.h > SHIPPED scripts/dtc/dtc-parser.tab.c > HOSTCC scripts/dtc/dtc-lexer.lex.o > HOSTCC scripts/dtc/dtc-parser.tab.o > HOSTLD scripts/dtc/dtc > /home/mistral/yocto1/poky/build-xfce/tmp/work-shared/odroid-xu3/kernel-source/Makefile:560: > recipe for target 'scripts' failed > make[2]: *** [scripts] Error 2 > Makefile:150: recipe for target 'sub-make' failed > make[1]: *** [sub-make] Error 2 > Makefile:24: recipe for target '__sub-make' failed > make: *** [__sub-make] Error 2 > WARNING: exit code 1 from a shell command. > ERROR: Function failed: do_compile (log file is located at > /home/mistral/yocto1/poky/build-xfce/tmp/work/odroid_xu3-poky-linux-gnueabi/linux-stable/4.9.13+gitAUTOINC+f3329efb7f-r0/temp/log.do_compile.15577) > what is the problem and how to solve this problem ? I had a similar problem. I was building a kernel for BeagleBone Black, using meta-ti for kernel. I'm currently using Yocto poky morty branch. In my case, the problem was that I had a .bbappend file for the kernel, and in the .bbappend file was mistakenly overwriting DEPENDS. It had a line such as: DEPENDS_beaglebone += "my-initramfs-image" But that is an incorrect way to append to DEPENDS for the specific machine. Using DEPENDS_machinename += "..." is the wrong way to do it, because the += operation doesn't pull in the existing DEPENDS values, but an empty DEPENDS_machinename value. 
Then after that, the DEPENDS_machinename value overrides the kernel recipe's plain DEPENDS, so the kernel DEPENDS ends up missing all the build dependencies. It should instead be one of:

DEPENDS += "my-initramfs-image"

Or:

DEPENDS_append_beaglebone = " my-initramfs-image"

I don't know if your problem has the same cause, but I thought I should describe my problem in case it's related. You can check the DEPENDS definition that bitbake is using by running bitbake on your kernel recipe with the -e option.

-- Craig McQueen
Re: [yocto] Problem building glibc-locale
> From: yocto-boun...@yoctoproject.org [mailto:yocto-boun...@yoctoproject.org]
> On Behalf Of Burton, Ross
>
> So you did:
>
> echo "DISTRO_FEATURES += \"usbhost\"" >> conf/local.conf
>
> But poky.conf does:
>
> DISTRO_FEATURES ?= "${DISTRO_FEATURES_DEFAULT} ${DISTRO_FEATURES_LIBC} ${POKY_DEFAULT_DISTRO_FEATURES}"
>
> Which means that the final value of DISTRO_FEATURES is "usbhost", so none of
> the locale support is enabled, and glibc packages badly.
>
> If you just want to add a feature, use DISTRO_FEATURES_append = " usbhost"
> (leading whitespace in the string is critical). Note that usbhost is a
> default distro feature:
>
> meta/conf/distro/include/default-distrovars.inc:DISTRO_FEATURES_DEFAULT ?= "acl alsa argp bluetooth ext2 irda largefile pcmcia usbgadget usbhost wifi xattr nfs zeroconf pci 3g nfc x11"
>
> So you don't need to specify it unless you are defining a distribution from
> scratch.
>
> Ross

I'm having this issue, when trying to use a modification of the poky-tiny distro (currently using morty). Does the poky-tiny distro do the right thing?

-- Craig McQueen
Re: [yocto] Package mono-libs-4.5 size increase in Mono version 4.4.x
I wrote:
> I've just had to upgrade from Mono 4.2.x to Mono 4.4.x, to get a fix for SMTP
> SSL/TLS.
>
> I'm using the mono-libs-4.5 package. I see that the size of it has increased
> quite a lot (several MB) due to the upgrade. It looks as though it's now
> putting a bunch of files in /usr/lib/mono/4.5-api in addition to the old
> /usr/lib/mono/4.5.
>
> I can see this mentioned in the Mono 4.4.0 release notes:
> http://www.mono-project.com/docs/about-mono/releases/4.4.0/
>
> But the rationale is not entirely clear to me. Is it possible to cut down the
> Yocto image size by removing one of /usr/lib/mono/4.5-api and
> /usr/lib/mono/4.5, or some other refactoring?

It looks as though, in my image for the device, I only need /usr/lib/mono/4.5. In my custom layer, I've made a mono_4.4.%.bbappend file which contains:

# Split /usr/lib/mono/4.5-api off into a separate package.
PACKAGES += "${PN}-libs-4.5-api"
FILES_${PN}-libs-4.5-api = "${libdir}/mono/4.5-api/*"
FILES_${PN}-libs-4.5 = "${libdir}/mono/4.5/*"

My image includes the package mono-libs-4.5 (as it did before when I was using Mono 4.2.x). Now the image size is back to near what it was for Mono 4.2.x, and everything on the target device seems to be running fine.

-- Craig McQueen
[yocto] Package mono-libs-4.5 size increase in Mono version 4.4.x
I've just had to upgrade from Mono 4.2.x to Mono 4.4.x, to get a fix for SMTP SSL/TLS.

I'm using the mono-libs-4.5 package. I see that the size of it has increased quite a lot (several MB) due to the upgrade. It looks as though it's now putting a bunch of files in /usr/lib/mono/4.5-api in addition to the old /usr/lib/mono/4.5.

I can see this mentioned in the Mono 4.4.0 release notes: http://www.mono-project.com/docs/about-mono/releases/4.4.0/

But the rationale is not entirely clear to me. Is it possible to cut down the Yocto image size by removing one of /usr/lib/mono/4.5-api and /usr/lib/mono/4.5, or some other refactoring?

-- Craig McQueen
Re: [yocto] [PATCH] os-release: put double-quotes around variable contents
Sorry, my mistake--it looks as though it was applied. I was looking at an old local "master" that wasn't updated to upstream master. Regards, Craig McQueen > -Original Message- > From: yocto-boun...@yoctoproject.org [mailto:yocto- > boun...@yoctoproject.org] On Behalf Of Craig McQueen > Sent: Wednesday, 27 April 2016 1:51 PM > To: yocto@yoctoproject.org > Subject: Re: [yocto] [PATCH] os-release: put double-quotes around variable > contents > > I notice this patch hasn't been applied. Is there anything I can do to improve > it? > > Regards, > Craig McQueen > > > > -Original Message- > > From: Craig McQueen > > Sent: Thursday, 15 October 2015 4:13 PM > > To: yocto@yoctoproject.org > > Cc: Craig McQueen > > Subject: [PATCH] os-release: put double-quotes around variable > > contents > > > > This makes the resulting /etc/os-release file have valid shell > > assignment syntax. This makes it loadable by a shell script, using the > 'source' command: > > > > source /etc/os-release > > --- > > meta/recipes-core/os-release/os-release.bb | 2 +- > > 1 file changed, 1 insertion(+), 1 deletion(-) > > > > diff --git a/meta/recipes-core/os-release/os-release.bb > > b/meta/recipes- core/os-release/os-release.bb index db82760..2fda674 > > 100644 > > --- a/meta/recipes-core/os-release/os-release.bb > > +++ b/meta/recipes-core/os-release/os-release.bb > > @@ -29,7 +29,7 @@ python do_compile () { > > for field in d.getVar('OS_RELEASE_FIELDS', True).split(): > > value = d.getVar(field, True) > > if value: > > -f.write('{0}={1}\n'.format(field, value)) > > +f.write('{0}="{1}"\n'.format(field, value)) > > if d.getVar('RPM_SIGN_PACKAGES', True) == '1': > > rpm_gpg_pubkey = d.getVar('RPM_GPG_PUBKEY', True) > > bb.utils.mkdirhier('${B}/rpm-gpg') > > -- > > 2.1.4 > > -- > ___ > yocto mailing list > yocto@yoctoproject.org > https://lists.yoctoproject.org/listinfo/yocto -- ___ yocto mailing list yocto@yoctoproject.org https://lists.yoctoproject.org/listinfo/yocto
Re: [yocto] [PATCH] os-release: put double-quotes around variable contents
I notice this patch hasn't been applied. Is there anything I can do to improve it?

Regards,
Craig McQueen

> -----Original Message-----
> From: Craig McQueen
> Sent: Thursday, 15 October 2015 4:13 PM
> To: yocto@yoctoproject.org
> Cc: Craig McQueen
> Subject: [PATCH] os-release: put double-quotes around variable contents
>
> This makes the resulting /etc/os-release file have valid shell assignment
> syntax. This makes it loadable by a shell script, using the 'source' command:
>
> source /etc/os-release
> ---
>  meta/recipes-core/os-release/os-release.bb | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/meta/recipes-core/os-release/os-release.bb b/meta/recipes-core/os-release/os-release.bb
> index db82760..2fda674 100644
> --- a/meta/recipes-core/os-release/os-release.bb
> +++ b/meta/recipes-core/os-release/os-release.bb
> @@ -29,7 +29,7 @@ python do_compile () {
>      for field in d.getVar('OS_RELEASE_FIELDS', True).split():
>          value = d.getVar(field, True)
>          if value:
> -            f.write('{0}={1}\n'.format(field, value))
> +            f.write('{0}="{1}"\n'.format(field, value))
>      if d.getVar('RPM_SIGN_PACKAGES', True) == '1':
>          rpm_gpg_pubkey = d.getVar('RPM_GPG_PUBKEY', True)
>          bb.utils.mkdirhier('${B}/rpm-gpg')
> --
> 2.1.4
Re: [yocto] Making /etc/os-release loadable by shell script
Ross Burton wrote:
> On 2 July 2015 at 09:00, Craig McQueen wrote:
> > It would be good if the /etc/os-release file (created by the core
> > os-release package) could be loadable by a shell script. That is, a shell
> > script can do:
> >
> > source /etc/os-release
> > or
> > . /etc/os-release
> >
> > But it's currently not possible, because the file lacks double-quotes
> > around values that would make it valid shell assignment syntax.
>
> Could you transform your bbappend to a patch against oe-core and submit it?

Sure, done. (It took me a long time to get around to setting up git for send-email functionality.) I did the patch for master, although I'm still using dizzy myself.

-- Craig McQueen
[yocto] [PATCH] os-release: put double-quotes around variable contents
This makes the resulting /etc/os-release file have valid shell assignment syntax. This makes it loadable by a shell script, using the 'source' command:

source /etc/os-release
---
 meta/recipes-core/os-release/os-release.bb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/meta/recipes-core/os-release/os-release.bb b/meta/recipes-core/os-release/os-release.bb
index db82760..2fda674 100644
--- a/meta/recipes-core/os-release/os-release.bb
+++ b/meta/recipes-core/os-release/os-release.bb
@@ -29,7 +29,7 @@ python do_compile () {
     for field in d.getVar('OS_RELEASE_FIELDS', True).split():
         value = d.getVar(field, True)
         if value:
-            f.write('{0}={1}\n'.format(field, value))
+            f.write('{0}="{1}"\n'.format(field, value))
     if d.getVar('RPM_SIGN_PACKAGES', True) == '1':
         rpm_gpg_pubkey = d.getVar('RPM_GPG_PUBKEY', True)
         bb.utils.mkdirhier('${B}/rpm-gpg')
--
2.1.4
Re: [yocto] Busybox hwclock.sh initscript issues
On 2015-09-18 21:31, Gary Thomas wrote:
> On 2015-09-17 23:28, Craig McQueen wrote:
> > I'm using Yocto dizzy. I've found a couple of issues with the Busybox
> > hwclock.sh initscript.
> >
> > 1) The script checks that /sbin/hwclock exists at the start. But after that
> > it runs hwclock without an explicit /sbin/hwclock path. So it only works if
> > /sbin/ is in the PATH. Thus it doesn't run properly when called from e.g.
> > cronie, which doesn't run with /sbin/ in the PATH.
> >
> > 2) The bootmisc.sh initscript uses the time from /etc/timestamp if the
> > hwclock time is older. That's good. But then by default, hwclock.sh runs
> > after bootmisc.sh, and unconditionally overwrites the system time from the
> > hwclock. So on a system without a functional hwclock, the /etc/timestamp
> > feature basically doesn't work. One solution is to modify
> > INITSCRIPT_PARAMS_${PN}-hwclock so it doesn't run at start-up (I am doing
> > that in a busybox bbappend).
>
> Why do you think it doesn't work? On a system without a functioning
> hardware clock, at least the time stamp moves forward on every
> boot/shutdown.

It doesn't work because after bootmisc.sh runs, which sets the time according to /etc/timestamp, hwclock.sh then runs and sets the time to that of the non-functional hwclock (2000-01-01 00:00 in my case).

To fix it, I've made a busybox bbappend file with the following:

INITSCRIPT_PARAMS_${PN}-hwclock = "stop 20 0 1 6 ."

That means hwclock.sh isn't run at start-up (but bootmisc.sh still runs it as part of its handling of /etc/timestamp). That makes it work the way I want, and how I suspect the bootmisc.sh author intended it.

-- Craig McQueen
[yocto] Busybox hwclock.sh initscript issues
I'm using Yocto dizzy. I've found a couple of issues with the Busybox hwclock.sh initscript.

1) The script checks that /sbin/hwclock exists at the start, but after that it runs hwclock without an explicit /sbin/hwclock path. So it only works if /sbin/ is in the PATH, and thus it doesn't run properly when called from e.g. cronie, which doesn't run with /sbin/ in the PATH.

2) The bootmisc.sh initscript uses the time from /etc/timestamp if the hwclock time is older. That's good. But then by default, hwclock.sh runs after bootmisc.sh, and unconditionally overwrites the system time from the hwclock. So on a system without a functional hwclock, the /etc/timestamp feature basically doesn't work. One solution is to modify INITSCRIPT_PARAMS_${PN}-hwclock so it doesn't run at start-up (I am doing that in a busybox bbappend).

-- Craig McQueen
[yocto] Recipe with dynamic variable in do_compile not rebuilding
I want to make an 'os-build-revisions' package that will create /etc/os-build-revisions containing a list of git revisions in all layers. I've quoted the recipe below. It mostly works. However, it's not rebuilding if a layer changes its git revision.

I've used := on OS_BUILD_REVISIONS, and I've used do_compile[vardeps] and tried do_compile[nostamp], but it doesn't seem to help. I've also run 'bitbake os-build-revisions -e' and confirmed that OS_BUILD_REVISIONS contains the updated git revision value. But it still doesn't trigger a rebuild of the package.

What do I need to add to ensure that the package rebuilds whenever OS_BUILD_REVISIONS changes?

os-build-revisions.bb:
-
inherit allarch

SUMMARY = "Yocto build revisions"
DESCRIPTION = "The /etc/os-build-revisions file contains data about git revisions of layers used in the build."
LICENSE = "MIT"

INHIBIT_DEFAULT_DEPS = "1"

do_fetch[noexec] = "1"
do_unpack[noexec] = "1"
do_patch[noexec] = "1"
do_configure[noexec] = "1"

def buildhistory_get_metadata_revs(d):
    # We want an easily machine-readable format here, so get_layers_branch_rev isn't quite what we want
    layers = (d.getVar("BBLAYERS", True) or "").split()
    metadata_revs = []
    for layer in layers:
        name = os.path.basename(layer)
        branch = base_get_metadata_git_branch(layer, None).strip()
        revision = base_get_metadata_git_revision(layer, None)
        metadata_revs.append('{0:17} = {1}:{2}'.format(name, branch, revision))
    return '\n'.join(metadata_revs)

OS_BUILD_REVISIONS := "${@buildhistory_get_metadata_revs(d)}"

do_compile() {
    echo "${OS_BUILD_REVISIONS}" > ${B}/os-build-revisions
}
do_compile[vardeps] += "OS_BUILD_REVISIONS"
do_compile[nostamp] += "1"

do_install () {
    install -d ${D}${sysconfdir}
    install -m 0644 os-build-revisions ${D}${sysconfdir}/
}
[yocto] bash getcwd errors when running under OverlayFS
I am using an OverlayFS patch for kernel 3.14.x obtained from OpenWRT. Once I started using OverlayFS, I found that bash shell scripts would give a lot of errors, something like:

shell-init: error retrieving current directory: getcwd: cannot access parent directories: Success
job-working-directory: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
chdir: error retrieving current directory: getcwd: cannot access parent directories: Bad file descriptor

It seems that this can be traced to an internal implementation of getcwd() in bash. When cross-compiled, it can't check for getcwd() use of malloc, so it is cautious and sets GETCWD_BROKEN and uses an internal implementation of getcwd(). This internal implementation doesn't seem to work well with OverlayFS.

A simple fix in Yocto is to put the following in a recipes-extended/bash/bash_4.3.bbappend in your own layer:

EXTRA_OECONF += "bash_cv_getcwd_malloc=yes"

-- Craig McQueen
[yocto] Making /etc/os-release loadable by shell script
It would be good if the /etc/os-release file (created by the core os-release package) could be loadable by a shell script. That is, a shell script can do:

source /etc/os-release

or

. /etc/os-release

But it's currently not possible, because the file lacks double-quotes around values that would make it valid shell assignment syntax. Compare to a Debian or Ubuntu /etc/os-release, which has double-quotes around values, so it _can_ be loaded by a shell script.

To achieve this, I am using the following in an os-release.bbappend. It just adds double-quotes around the {1} in the 2nd-last line.

# Ensure all variables' data are in quotes
python do_compile () {
    with open(d.expand('${B}/os-release'), 'w') as f:
        for field in d.getVar('OS_RELEASE_FIELDS', True).split():
            value = d.getVar(field, True)
            if value:
                f.write('{0}="{1}"\n'.format(field, value))
}

-- Craig McQueen
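To illustrate why the quoting matters, here is a small self-contained sketch (using a temporary file with made-up values, not the real /etc/os-release):

```shell
# Values containing spaces or parentheses only survive shell 'source'
# if they are double-quoted, as in Debian/Ubuntu os-release files.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
ID="poky"
PRETTY_NAME="Poky (Yocto Project Reference Distro) 1.7"
EOF
. "$tmp"
echo "$PRETTY_NAME"   # prints: Poky (Yocto Project Reference Distro) 1.7
rm -f "$tmp"
```

Without the quotes, sourcing a line like PRETTY_NAME=Poky (Yocto Project Reference Distro) 1.7 is a shell syntax error.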
[yocto] Unset a variable
I'd like to make a distro config file, derived from conf/distro/poky.conf. But in my case, DISTRO_CODENAME is not really something that can contain a meaningful value, so I'd like to unset it. How can I unset the variable?

(This affects the format of VERSION in /etc/os-release. I guess alternatively I could modify it in an os-release.bbappend.)

-- Craig McQueen
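For what it's worth, later BitBake releases grew an explicit directive for exactly this (check the BitBake manual for your release to confirm it's available); on older releases the closest I know of is setting the variable empty:

```
# Newer BitBake: remove the variable entirely
unset DISTRO_CODENAME

# Older BitBake: the closest equivalent
DISTRO_CODENAME = ""
```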
Re: [yocto] cryptsetup in initramfs causes ~4 MB image size increase
I earlier wrote:
> I'm interested to use an encrypted root filesystem, by using cryptsetup in
> initramfs.
>
> I'm finding that adding cryptsetup to an initramfs image increases its size by
> about 4 MB. It seems that cryptsetup depends on openssl and lvm2, and
> lvm2 depends on bash, and the result of that is that a lot of extra files get
> dragged in.
>
> Is this all strictly necessary? Perhaps cryptsetup really only needs libraries,
> not all of openssl and lvm2.
>
> What would be a good way to go about reducing the dependencies that get
> pulled in for cryptsetup?
>
> I also noticed that libgcrypt could possibly be used instead of openssl (by
> putting in bbappend, PACKAGECONFIG = ""), saving about 0.5 MB. However
> libgcrypt isn't used, according to the cryptsetup bb file, because it drops root
> privileges if it is linked with libcap support. That gives the obscure cryptsetup
> error "Cannot initialize device-mapper. Is dm_mod kernel module loaded?"
> when trying to use cryptsetup with libgcrypt. Is there any reasonable
> work-around for this?

I found that I can cut it down significantly, using the following lvm2_2.%.bbappend:

---
PACKAGES =+ "lvm2-libdevmapper"

# ${base_libdir}/udev ${sbindir}/dmsetup are to get device mapper udev rules,
# to avoid cryptsetup luksOpen hanging.
FILES_lvm2-libdevmapper = "${libdir}/libdevmapper.so.* ${base_libdir}/udev ${sbindir}/dmsetup"
RDEPENDS_lvm2-libdevmapper = "bash"

RDEPENDS_${PN} += " lvm2-libdevmapper"
RPROVIDES_${PN}-dev = "lvm2-libdevmapper-dev"
---

That cuts out a bunch of unneeded lvm files. I'm not sure why there needs to be a bash dependency, but it didn't work without it. I'd like to get rid of bash if it's possible.

(After reading more about libgcrypt, I think I'll just stick with openssl. It seems questionable design for the library to drop an application's capabilities.)

-- Craig McQueen
[yocto] cryptsetup in initramfs causes ~4 MB image size increase
I'm interested to use an encrypted root filesystem, by using cryptsetup in initramfs.

I'm finding that adding cryptsetup to an initramfs image increases its size by about 4 MB. It seems that cryptsetup depends on openssl and lvm2, and lvm2 depends on bash, and the result of that is that a lot of extra files get dragged in.

Is this all strictly necessary? Perhaps cryptsetup really only needs libraries, not all of openssl and lvm2.

What would be a good way to go about reducing the dependencies that get pulled in for cryptsetup?

I also noticed that libgcrypt could possibly be used instead of openssl (by putting in bbappend, PACKAGECONFIG = ""), saving about 0.5 MB. However libgcrypt isn't used, according to the cryptsetup bb file, because it drops root privileges if it is linked with libcap support. That gives the obscure cryptsetup error "Cannot initialize device-mapper. Is dm_mod kernel module loaded?" when trying to use cryptsetup with libgcrypt. Is there any reasonable work-around for this?

--
Craig McQueen
Re: [yocto] setcap program
> armin wrote:
> On 05/12/2015 10:42 PM, Craig McQueen wrote:
> > I'd like to try using 'setcap' on a Yocto-built system. But I'm having trouble figuring out what Yocto or meta-oe package might provide 'setcap' and related programs.
>
> try "libcap" or "libcap-ng"

libcap 2.22-r6 is in my image, but only the library, not any of the command line programs. On a closer look, I see it is building a setcap executable, but not installing it.

Now I see -- I need to include the libcap-bin package in my image.

--
Craig McQueen
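For anyone finding this thread in the archives: the fix described above amounts to one line in the image recipe or local.conf. A minimal sketch (assuming the usual libcap packaging, where the command-line tools are split into libcap-bin):

```bitbake
# Pull in the setcap/getcap command-line tools, which libcap builds
# but packages separately from the library.
IMAGE_INSTALL_append = " libcap-bin"
```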
[yocto] setcap program
I'd like to try using 'setcap' on a Yocto-built system. But I'm having trouble figuring out what Yocto or meta-oe package might provide 'setcap' and related programs.

--
Craig McQueen
Re: [yocto] Yocto-ish upgrade-in-place strategy
I’m working on this for a BeagleBone Black type system, which uses eMMC (i.e. disk partitions). I’m considering:

Partition 1: FAT16 “BOOT”, with MLO, u-boot.img, and custom uEnv.txt (U-Boot rules to append)
Partition 2: ext4 “KERNEL1”, which contains a zImage with attached initramfs, and device tree
Partition 3: ext4 “KERNEL2”, which contains a zImage with attached initramfs, and device tree
Partition 4: ext4 “DATA”, a read/write filesystem

The DATA partition should contain a SquashFS file named /lib/firmware/rootro1 and/or rootro2.

At boot up, U-Boot loads the custom rules from uEnv.txt. That checks for the presence of a BOOT2 file on the DATA partition. If it exists, it boots the kernel from KERNEL2, otherwise from KERNEL1. It passes kernel arguments:

rootrw=/dev/mmcblk1p4 rootro=/mnt/rootrw/lib/firmware/rootro1

-- or rootro2, depending on whether booting KERNEL1 or KERNEL2.

The kernel contains an initramfs (using initramfs-framework) which mounts the DATA partition as /mnt/rootrw. Then it mounts a SquashFS partition /mnt/rootrw/lib/firmware/rootro1, according to the passed kernel argument ‘rootro’, as /mnt/rootro. Then it mounts an OverlayFS with the rootrw mount over the rootro mount. This is development in-progress, but it seems to be working well for me so far.

Then, I need to have an upgrade image which is an archive of:

· SquashFS rootro image
· Kernel with attached initramfs
· Device tree
· Any metadata for the upgrade, README, etc

The user can upload it onto the device through a web interface, or something like that. Then it gets processed after upload:

· The integrity is verified somehow (e.g. hash)
· The kernel and device tree are copied to the KERNEL1 or KERNEL2 partition that’s not currently in-use.
· The SquashFS rootro gets copied to /lib/firmware/rootro1 or rootro2, whichever is not currently in-use.
· The partition 4 file BOOT2 is created or deleted, as needed, to cause U-Boot to boot the “other image”.
· Reboot

The BeagleBone Black U-Boot implements an incrementing ‘bootcount’, stored in RTC scratch, I believe. A Linux kernel driver could be written which allows for this to be reset to 0 by the kernel or a userspace app. Then, U-Boot could do some alternative action if bootcount gets too big (meaning it’s not successfully booting)—such as revert to the other, older image, if present.

I should also mention that I used a kernel bbappend file with:

RDEPENDS_kernel-base = ""

That results in my rootfs image (which is the SquashFS rootfs) NOT containing the kernel and device tree in its /boot directory—since in this setup the kernel and device tree are in a different location, the KERNELx partition.

Currently, I’m wondering how to get Yocto to build the upgrade image for me. I am not sure whether I can use a custom “image” which has just 4 or 5 files in it. Or maybe if the “wic” tool is suitable for this purpose. Or if I should just use my own script.

One possible complication is if I want to use encryption. If so, I probably need to encrypt the archive except for the metadata, README. And I would need to save the encryption keys somehow securely in my build system.

Craig McQueen
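A rough sketch of what the uEnv.txt failover logic described above could look like. This is an untested illustration, not the actual file: the BOOT2 file name, partition layout, and rootro names come from my description above, while the load addresses and console settings are typical AM335x assumptions that would need adjusting per board:

```shell
# uEnv.txt sketch -- assumes the partition layout described above.
# If the file BOOT2 exists on the DATA partition (mmc 1:4), boot from
# KERNEL2 (mmc 1:3) with rootro2; otherwise KERNEL1 (mmc 1:2) with rootro1.
loadaddr=0x82000000
fdtaddr=0x88000000
pick_kernel=if test -e mmc 1:4 /BOOT2; then setenv kpart 3; setenv rootro_name rootro2; else setenv kpart 2; setenv rootro_name rootro1; fi
set_bootargs=setenv bootargs console=ttyO0,115200n8 rootrw=/dev/mmcblk1p4 rootro=/mnt/rootrw/lib/firmware/${rootro_name}
uenvcmd=run pick_kernel; run set_bootargs; load mmc 1:${kpart} ${loadaddr} /zImage; load mmc 1:${kpart} ${fdtaddr} /am335x-boneblack.dtb; bootz ${loadaddr} - ${fdtaddr}
```

The key piece is U-Boot's `test -e <interface> <dev:part> <file>`, which makes the presence of a marker file on a filesystem drive the branch, so switching images from Linux is just creating or deleting BOOT2.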
[yocto] wic and SD card size
When using wic to make an SD card image, such as:

wic create sdimage-bootpart -e core-image

How do you tell wic the size of the target SD card?

--
Craig McQueen
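For later readers: as I understand it, wic does not target a card size directly. Each partition's size comes from the .wks kickstart file, and the resulting image is then written (e.g. with dd) to any card at least that large. A sketch of a customized copy of a canned .wks file, with made-up example sizes; check the actual canned file in poky's scripts/lib/wic/canned-wks/ for the real source plugins and options:

```shell
# custom-sdimage.wks sketch -- sizes here are illustrative only.
part /boot --source bootimg-partition --ondisk mmcblk --fstype=vfat --label boot --active --align 4 --size 16M
part / --source rootfs --ondisk mmcblk --fstype=ext4 --label root --align 4 --size 3600M
```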
[yocto] Deploying linux-ti-staging initramfs kernel into /boot
I'm using linux-ti-staging kernel 3.14.x with Yocto dizzy, for a build for a BeagleBone Black based system. I'd like to use initramfs in my kernel. So I've set INITRAMFS_IMAGE in my linux-ti-staging_3.14.bbappend, and INITRAMFS_IMAGE_BUNDLE = "1" in local.conf.

Now in the build dir tmp/deploy/images/ I'm getting zImage-initramfs-.bin being built fine. However, I'm not getting an initramfs image being deployed to /boot, but just a plain zImage without an initramfs.

Looking in the build directory tmp/work/-poky-linux-gnueabi/linux-ti-staging/3.14.39-r22b+gitrAUTOINC+ba552b4bc7/packages-split/kernel-image/boot I see there is:

zImage-3.14.39
zImage-initramfs-.bin

-- however the zImage-initramfs-.bin file isn't being included in deploy-rpms//kernel-image-3.14.39-3.14.39-r22b+gitrAUTOINC+ba552b4bc7..rpm for some reason.

But, if I change my linux-ti-staging_3.14.bbappend to include any one of the following lines, then the kernel-image .rpm file _does_ include both zImage files:

RDEPENDS_kernel-base = ""
RDEPENDS_kernel-base = "kernel-devicetree"
RDEPENDS_kernel-base = "kernel-image"

But if I put:

RDEPENDS_kernel-base = "kernel-image kernel-devicetree"

-- then there is no zImage-initramfs-.bin in the kernel-image .rpm.

This seems very confusing. Why would zImage-initramfs-.bin disappear from the kernel-image package when the linux-ti-staging RDEPENDS_kernel-base specifies both kernel-image and kernel-devicetree, but not in the other cases?

--
Craig McQueen
[yocto] initramfs-framework ERROR: There's no '/dev' on rootfs.
I'm trying to use initramfs-framework to mount an overlayfs, and I've got it working mostly. However, I found that when it boots, I get an error:

ERROR: There's no '/dev' on rootfs.

That comes from initramfs-framework's 'finish' script. I got around it by creating the /dev directory on my writable partition. I also created /proc and /sys since it looks as though those would be needed too.

Would it be better to change the 'finish' script to just create the required directory(s)? E.g.:

-    if [ ! -d $ROOTFS_DIR/dev ]; then
-        fatal "ERROR: There's no '/dev' on rootfs."
-    fi
+    mkdir -p $ROOTFS_DIR/dev
+    mkdir -p $ROOTFS_DIR/proc
+    mkdir -p $ROOTFS_DIR/sys

--
Craig McQueen
Re: [yocto] [poky] Initramfs not appended to kernel image
> > > On 2015-04-20 05:09 AM, Wouter van Rooy wrote:
> > > Hi Bruce,
> > >
> > > First of all, thanks for your answer. It would be a comforting idea to get this initramfs implemented cleanly in my project.
> > >
> > > On 16-04-15 16:22, Bruce Ashfield wrote:
> > >> I'm the proud owner of the bugzilla to document this process better, so let's work through the issues and see if there's a bug, or something that just isn't clearly described. We are talking about the 1.6 release here .. so at least the recent changes in kernel.bbclass processing won't be the cause of the breakage.
> > > Correct, to be even more precise I am using the daisy-11.0.0 tag for Poky.
> > >> Correct, and this definitely used to work. I can't see anything wrong by inspection alone, but will launch some builds to see if I can confirm the behaviour and that variable not making it down into the function call. Bruce
> > > Thanks, I would love to hear the results of your test builds. Just drop me a line if you need anything else from my build environment for reproduction, like log files and such.
> >
> > I was traveling last week and am just getting back to this now. I wanted to check in to see if your issues are still persisting, to see if you worked it out in the meantime.
>
> I'm in the process of using the INITRAMFS_IMAGE and INITRAMFS_IMAGE_BUNDLE feature with the latest from the 'dizzy' branch for Poky, and it seems to be working for me (on BeagleBone Black).
>
> It's been quite a slow process to understand the initramfs mechanism and how to use it in Poky. That's including the initramfs-framework. It would be great to see some documentation for these features.

On further inspection...

When I build my image, it creates a kernel with initramfs in the deploy directory:

zImage-initramfs-3.14.39-r22b+gitrAUTOINC+ba552b4bc7-machine-20150428061518.bin

However, the rootfs image includes the kernel _without_ the initramfs in its /boot directory. I'll investigate to see why.

--
Craig McQueen
Re: [yocto] [poky] Initramfs not appended to kernel image
> > On 2015-04-20 05:09 AM, Wouter van Rooy wrote:
> > Hi Bruce,
> >
> > First of all, thanks for your answer. It would be a comforting idea to get this initramfs implemented cleanly in my project.
> >
> > On 16-04-15 16:22, Bruce Ashfield wrote:
> >> I'm the proud owner of the bugzilla to document this process better, so let's work through the issues and see if there's a bug, or something that just isn't clearly described. We are talking about the 1.6 release here .. so at least the recent changes in kernel.bbclass processing won't be the cause of the breakage.
> > Correct, to be even more precise I am using the daisy-11.0.0 tag for Poky.
> >> Correct, and this definitely used to work. I can't see anything wrong by inspection alone, but will launch some builds to see if I can confirm the behaviour and that variable not making it down into the function call. Bruce
> > Thanks, I would love to hear the results of your test builds. Just drop me a line if you need anything else from my build environment for reproduction, like log files and such.
>
> I was traveling last week and am just getting back to this now. I wanted to check in to see if your issues are still persisting, to see if you worked it out in the meantime.

I'm in the process of using the INITRAMFS_IMAGE and INITRAMFS_IMAGE_BUNDLE feature with the latest from the 'dizzy' branch for Poky, and it seems to be working for me (on BeagleBone Black).

It's been quite a slow process to understand the initramfs mechanism and how to use it in Poky. That's including the initramfs-framework. It would be great to see some documentation for these features.

--
Craig McQueen
[yocto] Deploy rootfs image within another rootfs image
I've got two scenarios where I want to put a rootfs image within another rootfs image.

One scenario is for factory programming: My target is a BeagleBone Black type of system. I want to make a simple programmer to boot from SD card, which will partition and format the on-board eMMC and then write the bootloaders onto one partition, and a rootfs image onto the other partition. So I want to make a recipe to build the programmer image, which contains another previously-built rootfs image at, say, /lib/firmware/rootfs.tar.gz.

Another scenario is to implement firmware upgrade in-the-field. I am considering making a rootfs as a read-only SquashFS image. That image would then be put into a writable ext4 filesystem. An initramfs would use OverlayFS to mount the writable ext4 filesystem over the SquashFS image that is loop-back mounted.

What is necessary to make a Yocto recipe for an image that will build another image, and then copy its deployed .tar.gz image into its own rootfs image?

--
Craig McQueen
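For later readers, the usual shape of this is an image recipe that declares a build dependency on the inner image and copies its deployed tarball in via a rootfs postprocess hook. An untested sketch, where "inner-image" is a placeholder recipe name and the base image and task names may need adjusting per release (older releases hang the dependency off do_rootfs rather than do_image_complete):

```bitbake
# programmer-image.bb -- sketch only; "inner-image" is hypothetical.
require recipes-core/images/core-image-minimal.bb

# Ensure the inner image's rootfs tarball is built and deployed first.
do_rootfs[depends] += "inner-image:do_image_complete"

install_inner_rootfs() {
    install -d ${IMAGE_ROOTFS}/lib/firmware
    install -m 0644 ${DEPLOY_DIR_IMAGE}/inner-image-${MACHINE}.tar.gz \
        ${IMAGE_ROOTFS}/lib/firmware/rootfs.tar.gz
}
ROOTFS_POSTPROCESS_COMMAND += "install_inner_rootfs; "
```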
[yocto] Location of U-Boot environment for BeagleBone Black
I'm building Yocto for BeagleBone using the meta-ti layer and the kernel and U-Boot provided by that layer. I'm having some trouble understanding the location where U-Boot saves its environment for BeagleBone Black.

After doing some reading, it looks as though it might be saving the environment on a "special" eMMC boot partition (which is not the same as the FAT16 partition 1 I created to store MLO and u-boot.img). However, this is only when booting from the on-board eMMC, I think. When booting from SD card, where is the environment stored? I can do the "saveenv" command in U-Boot, and it says it's saving it to MMC:

U-Boot# saveenv
Saving Environment to MMC...
Writing to MMC(1)... done

But then after a reboot, it says:

MMC: block number 0x100 exceeds max(0x0)
MMC: block number 0x200 exceeds max(0x0)
*** Error - No Valid Environment Area found
Using default environment

So does that mean that U-Boot can't use a saved environment when booting from SD card? Actually, based on some simple testing, it seems that it _is_ writing to the boot partition of the on-board eMMC, and loading it when it reboots from SD card, despite the error message. This is confusing.

What about the case of a board with no eMMC, such as BeagleBone White? Where would it save the environment then?

How can I erase the saved environment when booting from eMMC, to ensure that U-Boot defaults will be used? I see that when Linux boots, there are block devices /dev/mmcblk0boot0 and /dev/mmcblk0boot1. But it seems I can't write to them:

# dd if=/dev/zero of=/dev/mmcblk0boot1 bs=1k count=1k
dd: writing '/dev/mmcblk0boot1': Operation not permitted
1+0 records in
0+0 records out

--
Craig McQueen
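On the last point, for anyone who hits the same "Operation not permitted": the kernel write-protects the eMMC hardware boot partitions by default, and exposes a force_ro sysfs attribute to lift that. A sketch, assuming the on-board eMMC shows up as mmcblk0 (adjust device names and the partition holding the environment for your board before zeroing anything):

```shell
# The eMMC hardware boot partitions are read-only by default;
# clear force_ro, then the dd is permitted.
echo 0 > /sys/block/mmcblk0boot1/force_ro
dd if=/dev/zero of=/dev/mmcblk0boot1 bs=1k count=1k
# Re-enable write protection afterwards.
echo 1 > /sys/block/mmcblk0boot1/force_ro
```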
[yocto] U-Boot with a different default device tree, for BeagleBone-similar system
I'm working on a system based on the TI Sitara AM335x, similar to the BeagleBone Black, but with various custom hardware. I've got a custom device tree for it, and it's working well. In Yocto, I've defined a custom machine in a custom layer under conf/machine, derived from the BeagleBone Black machine with modifications. The kernel is compiling my device tree, named something like am335x-myfantasticdevice.dtb, due to a KERNEL_DEVICETREE... line.

However, U-Boot is still looking for the stock BeagleBone Black device tree named am335x-boneblack.dtb. How can I modify my Yocto build so that U-Boot by default will expect a device tree with the name am335x-myfantasticdevice.dtb? I can see UBOOT_MACHINE but I'm not sure what I can do with it.

--
Craig McQueen
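One note for archive readers: UBOOT_MACHINE selects the U-Boot board configuration, not the .dtb name. On AM335x boards the .dtb name typically comes from U-Boot's 'findfdt' environment script, so a low-effort workaround (as opposed to patching U-Boot's board file) may be to override it from uEnv.txt. A sketch, assuming the stock boot command imports uEnv.txt before it runs 'findfdt':

```shell
# uEnv.txt sketch: redefine findfdt so the stock boot scripts,
# which do "run findfdt" to set ${fdtfile}, pick the custom dtb.
findfdt=setenv fdtfile am335x-myfantasticdevice.dtb
```

The cleaner long-term fix is a U-Boot bbappend that patches the board's default environment, but the uEnv.txt override is easy to test first.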
Re: [yocto] Installing Linux kernel headers including custom UAPI header file
I think I've answered my own question.

The existing files installed in the build root /usr/include/linux/... are put there by the linux-libc-headers package. According to a prominent comment in meta/recipes-kernel/linux-libc-headers/linux-libc-headers.inc, I shouldn't try to fiddle with those, but instead point my application/library code to the STAGING_KERNEL_DIR. This previous post gave some useful hints:

https://lists.yoctoproject.org/pipermail/linux-yocto/2014-April/002178.html

But according to http://kernelnewbies.org/KernelHeaders I should just copy the specific header file I need from the kernel into my application/library source code, with a note to keep it synced with the latest kernel versions.

I ended up doing the latter.

--
Regards,
Craig McQueen

-----Original Message-----
From: yocto-boun...@yoctoproject.org [mailto:yocto-boun...@yoctoproject.org] On Behalf Of Craig McQueen
Sent: Thursday, 19 March 2015 10:59 AM
To: yocto@yoctoproject.org
Subject: [yocto] Installing Linux kernel headers including custom UAPI header file

Hello,

I'm using Yocto (dizzy) with meta-ti and the linux-ti-staging kernel 3.14.x to build for BeagleBone Black. I've written a custom char driver with several ioctl() calls, and I'm patching that into the linux-ti-staging kernel. The driver adds a new UAPI header file that goes in the kernel tree under include/uapi/linux/mydriver.h, to define the ioctl commands.

I've got a test application to use that custom char driver. I've also written a library libmydriver, to provide a more user-friendly interface to the ioctl calls. Both want to include the mydriver.h from the Linux headers. The application and library are using autotools to build. I've written a Yocto recipe to build them via autotools.

My problem is this: my application and library fail to build under Yocto, due to missing the Linux kernel header file mydriver.h (/usr/include/linux/mydriver.h is where I'd expect it to appear in the build environment root filesystem).

What do I need to change to cause the custom Linux kernel build to install the kernel header files, including mydriver.h, into the Yocto build for other packages to use during build? (I don't want it installed on the target, only in the build sysroot, although I'm not familiar enough with Yocto to understand how it installs include files to the build sysroot, but not the target sysroot.)

I can see that Linux kernel headers are being installed to tmp/sysroots/beaglebone/usr/include/linux/... but not including mydriver.h; I'm not sure what package is installing these headers. (I can also see that my header is installed at tmp/sysroots/beaglebone/usr/src/kernel/include/uapi/linux/mydriver.h, but that's not the same.)

--
Craig McQueen
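For completeness, the STAGING_KERNEL_DIR route mentioned above looks roughly like this in the application recipe. This is an untested sketch: the exact include paths and the task the dependency hangs off differ between kernel versions and Yocto releases, and raw include/uapi headers may still need the usual header scrubbing:

```bitbake
# mydriver-app recipe fragment -- sketch only.
# Build against the staged kernel source instead of linux-libc-headers.
DEPENDS += "virtual/kernel"

# Point the compiler at the custom UAPI header in the staged kernel tree.
CFLAGS += "-I${STAGING_KERNEL_DIR}/include/uapi -I${STAGING_KERNEL_DIR}/include"
```

In practice, copying the single mydriver.h into the application source (as the kernelnewbies page recommends, and as I ended up doing) avoids these cross-release differences entirely.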
[yocto] Installing Linux kernel headers including custom UAPI header file
Hello,

I'm using Yocto (dizzy) with meta-ti and the linux-ti-staging kernel 3.14.x to build for BeagleBone Black. I've written a custom char driver with several ioctl() calls, and I'm patching that into the linux-ti-staging kernel. The driver adds a new UAPI header file that goes in the kernel tree under include/uapi/linux/mydriver.h, to define the ioctl commands.

I've got a test application to use that custom char driver. I've also written a library libmydriver, to provide a more user-friendly interface to the ioctl calls. Both want to include the mydriver.h from the Linux headers. The application and library are using autotools to build. I've written a Yocto recipe to build them via autotools.

My problem is this: my application and library fail to build under Yocto, due to missing the Linux kernel header file mydriver.h (/usr/include/linux/mydriver.h is where I'd expect it to appear in the build environment root filesystem).

What do I need to change to cause the custom Linux kernel build to install the kernel header files, including mydriver.h, into the Yocto build for other packages to use during build? (I don't want it installed on the target, only in the build sysroot, although I'm not familiar enough with Yocto to understand how it installs include files to the build sysroot, but not the target sysroot.)

I can see that Linux kernel headers are being installed to tmp/sysroots/beaglebone/usr/include/linux/... but not including mydriver.h; I'm not sure what package is installing these headers. (I can also see that my header is installed at tmp/sysroots/beaglebone/usr/src/kernel/include/uapi/linux/mydriver.h, but that's not the same.)

--
Craig McQueen