Re: [yocto] Set linux capabilities on binary on a recipe in meta-oe layer
On Mon, Nov 12, 2018 at 3:09 PM Markus W markus4dev-at-gmail.com wrote:
>
> Thanks Uwe!
>
> I tried the global approach by adding the following to my local.conf file:
>
> ROOTFS_POSTPROCESS_COMMAND += "my_setcap_function"
>
> my_setcap_function() {
>     setcap cap_net_raw+eip ${IMAGE_ROOTFS}/usr/bin/node
> }
>
> But got the following warning:
> WARNING: core-image-full-cmdline-1.0-r0 do_rootfs: Function
> my_setcap_function doesn't exist
>
> I have tried to add the function into a recipe but this doesn't work either.
> Where should the function be defined?

As I wrote:

>> This is done in your image recipe. At least this is where I do it and it works for me.

My use of "global" was probably not well chosen; I did not mean the bitbake configuration. I just meant that it is done in the image recipe and not in the node recipe.

You can always use "bitbake -e" to check variable and recipe function definitions if you are wondering what is really going on. I often understand problems when looking at the listing produced by this command. In your case: "bitbake -e core-image-full-cmdline"

There are also other functions in the value of ROOTFS_POSTPROCESS_COMMAND, so the separator matters, as

On Tue, Nov 13, 2018 at 4:09 AM Mike Looijmans mike.looijmans-at-topic.nl wrote:
>
> Also, there's a semicolon missing:
> ROOTFS_POSTPROCESS_COMMAND += "my_setcap_function;"

The final value of ROOTFS_POSTPROCESS_COMMAND as shown by "bitbake -e" should be a list of function names separated by semicolons.

My previous link was broken, hopefully this time it survives intact...
https://www.yoctoproject.org/docs/2.5.1/mega-manual/mega-manual.html#var-ROOTFS_POSTPROCESS_COMMAND

> Sometimes the problem is that parts of the underscored function name are seen
> as overrides, so you should try using "mysetcapfunction" instead as a name.

That has never happened to me and I use underscores in my function names. But I could imagine that it's possible in some case.
I would strongly expect to see such an issue in the output of "bitbake -e", too.

Regards,
Uwe Geuder
Neuro Event Labs Oy
Tampere, Finland
uwe.gex...@neuroeventlabs.com
--
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto
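Putting the two corrections from this thread together, a working version of the original attempt might look roughly like this in the image recipe (a sketch under the thread's assumptions: your own image recipe is the right place, /usr/bin/node is the binary to modify, and the function name is just an example):

```bitbake
# In the image recipe (not in local.conf, not in the node recipe).
# Note the trailing semicolon: ROOTFS_POSTPROCESS_COMMAND is a
# semicolon-separated list of shell function names.
ROOTFS_POSTPROCESS_COMMAND += "my_setcap_function; "

my_setcap_function() {
    # ${IMAGE_ROOTFS} is the rootfs being assembled on the build host
    setcap cap_net_raw+eip ${IMAGE_ROOTFS}/usr/bin/node
}
```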
Re: [yocto] so how does PACKAGECONFIG_remove really work?
On Fri, Nov 9, 2018 at 4:41 PM Burton, Ross ross.burton-at-intel.com wrote:
>
> On Fri, 9 Nov 2018 at 14:36, Aditya Tayade wrote:
> > Can any one please help me to understand the PACKAGECONFIG_remove feature.
> >
> > Let's take an example of the systemd recipe as follows:
> >
> > PACKAGECONFIG ??= "vconsole"
> >
> > PACKAGECONFIG[vconsole] =
> > "-Dvconsole=true,-Dvconsole=false,,${PN}-vconsole-setup"
> >
> > Now how should we disable features set in PACKAGECONFIG[vconsole] using
> > PACKAGECONFIG_remove from its bbappend file?
>
> The _remove override removes a value from a variable. So this will
> remove 'vconsole' from the value of PACKAGECONFIG:
>
> PACKAGECONFIG_remove = "vconsole"

I also wondered quite a while in the past why such questions are not covered by the Mega Manual.
https://www.yoctoproject.org/docs/2.5.1/mega-manual/mega-manual.html

Once I had been told that the mega manual contains "everything". As a matter of fact that is not true. Basic syntax for bitbake (recipes and conf files) is documented only in a separate Bitbake User Manual.
https://www.yoctoproject.org/docs/2.5.1/bitbake-user-manual/bitbake-user-manual.html#removing-override-style-syntax

Regards,
Uwe Geuder
Neuro Event Labs Oy
Tampere, Finland
uwe.gex...@neuroeventlabs.com
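As a concrete sketch of the bbappend (the file name systemd_%.bbappend is an assumption; the wildcard must match the systemd recipe version in your layers):

```bitbake
# systemd_%.bbappend in your own layer.
# _remove is applied after all other assignments, so this disables the
# vconsole feature even if a distro config or local.conf enables it.
PACKAGECONFIG_remove = "vconsole"
```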
Re: [yocto] Set linux capabilities on binary on a recipe in meta-oe layer
Hi!

On Fri, Nov 9, 2018 at 12:16 PM Markus W markus4dev-at-gmail.com wrote:
> On Thu, 8 Nov 2018 at 22:53, Piotr Tworek wrote:
...
>> pkg_postinst_ontarget_${PN} () {
>>     setcap cap_net_raw+eip $D${bindir}/node
>> }
...
> How can this be achieved when the rootfs is created and not on first
> boot? I would like not to ship libcap binaries with the target in
> production.

Ideally I would do it "locally" in do_install of the node recipe (you can append extra statements to the task in a .bbappend in your own layer; don't edit existing recipes). That of course requires that the package manager preserves the capabilities. I have no experience which package manager would or would not do that.

"Globally" you can do it by appending a new function to ROOTFS_POSTPROCESS_COMMAND
https://www.yoctoproject.org/docs/2.5.1/mega-manual/mega-manual.html#var-ROOTFS_POSTPROCESS_COMMAND

This is done in your image recipe.

Regards,
Uwe Geuder
Neuro Event Labs Oy
Tampere, Finland
uwe.gex...@neuroeventlabs.com
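The "local" do_install variant could be sketched like this (untested; the recipe name node and the libcap-native dependency providing the setcap tool on the build host are assumptions, and whether the capability xattr survives packaging depends on the package manager, as noted above):

```bitbake
# node_%.bbappend in your own layer (hypothetical file name)
# libcap-native provides the setcap tool on the build host
DEPENDS += "libcap-native"

do_install_append() {
    # Runs under pseudo; the capability xattr is recorded at install time
    setcap cap_net_raw+eip ${D}${bindir}/node
}
```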
[yocto] Busybox_1.23.2 fails at do_compile on Poky-Sumo
On Thu, Oct 18, 2018 at 11:03 AM Dhanush K.S dhanush.ks-at-gmail.com wrote:
>
> Hello Yocto,
>
> I'm currently building an image with the busybox_1.23.2.bb recipe
> included, using Yocto Sumo 2.5 with Bitbake version 1.37.0. It is
> running on a CentOS host building images for the target ARM
> cortexa8. This busybox recipe is placed in another custom
> layer. Though this recipe compiles without errors in Poky-Fido, in
> Poky-Sumo I get compilation errors due to missing header files. I have
> pasted a part of the log.do_compile file with the error.

Well, Fido sounds very old, long before I started to use Yocto :) It was probably before recipe-specific sysroots, so a lot has changed. On the other hand byteswap.h, which it complains about, comes from glibc, and that you should have. Maybe it's complicated in your case by a platform-specific implementation I don't happen to have here right now.

Do you have a reason to build such an old busybox recipe? I get version 1.27.2 from meta (i.e. oe-core) and it builds.

Also, your bitbake 1.37.0 indicates that you are not running the latest Sumo 2.5.1, which has bitbake 1.38.0. Although I don't really expect that it would solve your issue here, updating to 2.5.1 should be painless if you are on 2.5 already.

Uwe Geuder
Neuro Event Labs Oy
Tampere, Finland
uwe.gexder at neuroeventlabs.com
Re: [yocto] Kernel development using externalsrc
On Fri, Nov 2, 2018 at 5:47 PM CHMIELARZ Radoslaw radoslaw.chmielarz-at-leica-geosystems.com wrote:
> The company I work for uses yocto to build an image with a custom
> linux kernel. The documentation
> (https://www.yoctoproject.org/docs/2.5.1/kernel-dev/kernel-dev.html)
> suggests a setup where the kernel is under git and the kernel recipe
> specifies this git repository, hence the workflow is the following:
>
> Make changes in local kernel directory
> Submit changes to git (and presumably push to shared space)
> Run yocto build which fetches the changes
>
> However since I don't want to push the changes to the git server before I
> test locally I have added the kernel sources using
> externalsrc.

Devtool is of course what they recommend, but you can always change the git repo to a local one in a .bbappend, so you save the push step and the Yocto build saves fetching over the network. (Making a git commit for each build attempt is very little overhead and I would recommend it anyway as a good working practice. You can clean up using "git rebase -i" once you are happy with your code and before you publish it.)

In one of my development branches I find

SRC_URI = "\
    git:///home/myname/projects/yoctobuild/pkgsrc/somerepo/;protocol=file;branch=debug/#675-v2.16.0 \
    file://some.patch \
    file://other.patch \
    "
SRCREV = "${AUTOREV}"

I have not done it for the kernel, but I see no reason why it would not work there. (Obviously the absolute path is nothing you would be able to share unmodified with others. Maybe it can be made a bit more portable using ${THISDIR}, but I have not had any need to try it.)

> Unfortunately this has the drawback that
> linux-libc-headers runs the do_configure and do_install phases, which
> invalidates glibc-initial and requires a recompilation of a couple of
> hundred packages. I would like to avoid it.
How editing the kernel source would cause rebuilds trailing linux-libc-headers I cannot see in my build here:

$ bitbake-diffsigs -t linux-libc-headers configure
NOTE: Starting bitbake server...
ERROR: Only one matching sigdata file found for the specified task (linux-libc-headers do_configure)

Well, maybe I have never changed the kernel in this build area, so let's check what it depends on:

$ bitbake-dumpsig -t linux-libc-headers configure
NOTE: Starting bitbake server...
ERROR: Metadata does not support finding signature data files

Hmm, no idea what that means. Until someone tells us what's wrong, let's use stamp files instead:

$ bitbake-dumpsig tmp/stamps/corei7-64-poky-linux/linux-libc-headers/4.15.7-r0.do_configure.sigdata...
This task depends on the checksums of files: []
Hash for dependent task /home/geuder/projects/yoctobuild/meta-nel/poky/meta/recipes-kernel/linux-libc-headers/linux-libc-headers_4.15.7.bb.do_prepare_recipe_sysroot is 5ac6adcdd9edfe2705fd461a42942613

$ bitbake-dumpsig tmp/stamps/corei7-64-poky-linux/linux-libc-headers/4.15.7-r0.do_prepare_recipe_sysroot.sigdata...
This task depends on the checksums of files: []
Hash for dependent task /home/geuder/projects/yoctobuild/meta-nel/poky/meta/recipes-kernel/linux-libc-headers/linux-libc-headers_4.15.7.bb.do_fetch is 362b694aea2202ce72ad259579d001ad

$ bitbake-dumpsig tmp/stamps/corei7-64-poky-linux/linux-libc-headers/4.15.7-r0.do_fetch.sigdata...
Variable SRC_URI value is ${KERNELORG_MIRROR}/linux/kernel/v${HEADER_FETCH_VER}/linux-${PV}.tar.${KORG_ARCHIVE_COMPRESSION}
Tasks this task depends on: []
This task depends on the checksums of files: []

So my build gets the kernel headers from kernel.org (mirror); it does not use the kernel source I use to build my kernel. Maybe that's different for you?

Of course the output of the bitbake-dumpsig commands is massively shortened; I just picked those lines that seemed likely candidates to cause rebuilds. To get the whole truth in your build call...
$ bitbake-diffsigs -t linux-libc-headers configure

... after you made a small kernel edit and rebuilt.

Regards,
Uwe

Uwe Geuder
Neuro Event Labs Oy
Tampere, Finland
uwe.gexder at neuroeventlabs.com
Re: [yocto] Removing hostapd init file
On Tue, Oct 23, 2018 at 9:14 AM Kristupas Savickas kristupas.savickas-at-teltonika.lt wrote:
>
> Greetings,
>
> I'm trying to remove the hostapd init file as I'll handle startup of it on my
> own. I've created a .bbappend file in my layer to do this:
>
> do_install_append() {
>     rm -rf ${D}${sysconfdir}/init.d
> }
>
> However, this results in a compilation warning when building my image
> and the ipk-postinst is still being run on my target: [...]
> I tried setting INITSCRIPT_NAME to "" in my .bbappend, but it fails
> with an error during the do_rootfs task of my image: [...]

I guess the problem is that the recipe inherits the update-rc.d class and that class wants to work with the init script. Unfortunately, uninheriting a class in a bbappend is not possible AFAIK. I am not familiar enough with the details of the class to say whether there is a way to tell it to do nothing.

> Any pointers of how I could accomplish my goal?

What about

do_install_append() {
    (echo "#!/bin/sh" ; echo true) > ${D}${sysconfdir}/init.d/hostapd
}

Just guessing. Untested, and I have not worked with sysv init for years... Maybe update-rc.d requires the typical comment section at the beginning. If so, you just add it. Or actually, if update-rc.d refers to the comment section at the beginning of the script, you could add a patch that changes the Default-Start: value to empty (or some runlevel you do not use).

Regards,
Uwe Geuder
Neuro Event Labs Oy
Tampere, Finland
uwe.gexder at neuroeventlabs.com
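If update-rc.d does insist on the LSB comment block, the stub idea from the mail could be extended along these lines (an untested sketch following the same guess; the Default-Start: line is deliberately left empty so the script is never enabled):

```bitbake
do_install_append() {
    # Replace the shipped init script with a do-nothing stub that still
    # carries the LSB header update-rc.d may look for
    cat > ${D}${sysconfdir}/init.d/hostapd <<'EOF'
#!/bin/sh
### BEGIN INIT INFO
# Provides:          hostapd
# Default-Start:
# Default-Stop:
### END INIT INFO
exit 0
EOF
    chmod 0755 ${D}${sysconfdir}/init.d/hostapd
}
```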
Re: [yocto] Moving to Yocto 2.3 - recipe-specific sysroots - improving the documentation
On Tue, Oct 30, 2018 at 9:05 AM CHMIELARZ Radoslaw radoslaw.chmielarz-at-leica-geosystems.com wrote:
>
> Hi all,
>
> I think the documentation on migrating between Yocto 2.2 and 2.3 needs
> more clarification as to how the sysroots have changed.
> [...]
>
> Now when trying to migrate I have noticed that STAGING_DIR is not
> populated any more, and this is clearly explained by the
> documentation. What is not explained however is how to get the files
> from other recipes. The documentation simply states that each recipe
> has its own staging directory. [...]

Have you seen this chapter?
https://www.yoctoproject.org/docs/2.5.1/mega-manual/mega-manual.html#new-sharing-files-between-recipes

(Well, that's not 2.3 but the newest one, but I am not aware that anything significant has changed once more.)

The headline promises to answer the question, but at least I don't fully follow the explanations. They are talking about *the* sysroot. As we know, sysroots are recipe-specific now. So if we are talking about sharing files between recipes, we have an "exporting" recipe and an "importing" recipe, so at least in theory 2 sysroots could be involved. From the text in the manual I do not understand which one they mean and what they try to explain.

So yes, I agree that the documentation should be clarified. Unfortunately I am not in the situation to suggest the solution.

Regards,
Uwe Geuder
Neuro Event Labs Oy
Tampere, Finland
uwe.gexder at neuroeventlabs.com
Re: [yocto] adding own shell script to the image
On Mon, Oct 29, 2018 at 3:28 PM Zolee K zoleelikesdebian-at-gmail.com wrote:
> The error message:
>
> /home//recipes-z/testing/testing.bb: Unable to get checksum for testing
> SRC_URI entry testscript: *file could not be found*
>
> Which I don't understand, since the file is definitely there :-(
> Could you please help me out with this?

Bitbake uses a search path to locate the files, see
https://www.yoctoproject.org/docs/2.5.1/mega-manual/mega-manual.html#var-FILESEXTRAPATHS

Either extend the path or move your file to a location on the path. If you are unsure what your current search path is, you can always check the output of "bitbake -e".

Regards,
Uwe Geuder
Neuro Event Labs Oy
Tampere, Finland
uwe.gexder at neuroeventlabs.com
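Extending the path could be sketched like this (an assumption-laden example: "files" is a hypothetical directory name next to the recipe; note that for a plain .bb file bitbake already searches a few default subdirectories such as files/ and ${PN}/, so moving the file there is often enough):

```bitbake
# In testing.bb or a testing.bbappend: also search a "files" directory
# located next to this recipe file for SRC_URI entries.
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
```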
Re: [yocto] adding own shell script
On Mon, Oct 29, 2018 at 11:51 AM Zolee K zoleelikesdebian-at-gmail.com wrote:
>
> I'd like to install my own test.sh script into the image. I created this
> recipe file, however I don't know how to have bitbake put my test.sh
> into the /etc folder:
>
> SUMMARY = "test.sh"
> LICENSE = "Z"
>
> SRC_URI = "\
>     file://test.sh \
>     "
>
> do_configure() {
>     :
> }
>
> do_compile() {
>     :
> }
>
> do_install() {
>     install -d ${D}${sysconfdir}
>     install -m 0770 ${WORKDIR}/test.sh ${D}${sysconfdir}/test.sh
> }

At least at first glimpse this recipe looks to me like it might work. (Executable code does not belong under /etc, but that is a different issue.)

However, bitbake will execute the tasks in the recipe only if they are pulled in, i.e. some part of your image depends on the package created. There are always many recipes in the layers of a Yocto based project, but only those tasks which are really needed will be executed.

There are 2 ways to create the dependency:

a) directly into the image
https://www.yoctoproject.org/docs/2.5.1/mega-manual/mega-manual.html#var-IMAGE_INSTALL
https://www.yoctoproject.org/docs/2.5.1/mega-manual/mega-manual.html#var-PACKAGE_INSTALL

b) indirectly, by making some package already in your image runtime-depend on your new package
https://www.yoctoproject.org/docs/2.5.1/mega-manual/mega-manual.html#var-RDEPENDS

Regards,
Uwe Geuder
Neuro Event Labs Oy
Tampere, Finland
uwe.gexder at neuroeventlabs.com
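Option a) could be sketched like this (assuming the recipe file above is named testing.bb, so its main package is called "testing"; the recipe and package names are assumptions):

```bitbake
# In your image recipe (or, for a quick test, in local.conf).
# The leading space matters when using _append, since _append does
# not add a separator by itself.
IMAGE_INSTALL_append = " testing"
```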
Re: [yocto] How to write a distro layer?
Thanks Vincent for your quick reply!

On Fri, Aug 3, 2018 at 12:44 PM, Vincent Prince wrote:
> 2018-08-03 10:39 GMT+02:00 Uwe Geuder :
>> One of the few pieces of information I could find about how to create my own
>> layer was something like
>> https://www.openembedded.org/wiki/Creating_a_new_Layer
>
> Did you check
> https://www.yoctoproject.org/docs/latest/mega-manual/mega-manual.html#creating-a-general-layer-using-the-bitbake-layers-script

Yes. The template generated by that script is very minimal and generic. I don't see any guidance for distro layers in particular. As a matter of fact I believe our layer has been created by that script (although that was long before my time).

>> The yocto reference manual says for many variables: "Set it in your
>> local.conf". For me they are policy, so why shouldn't I set them in
>> distro/.conf?
>
> Did you also check
> https://www.yoctoproject.org/docs/latest/mega-manual/mega-manual.html#creating-your-own-distribution

I had read that many times before, but not recently, so let's check again...

| Note
| The DISTRO variable in your local.conf file determines the name of
| your distribution [...]
| Point to Your distribution configuration file: In your local.conf
| file in the Build Directory, set your DISTRO variable to point to
| your distribution's configuration file. For example, if your
| distribution's configuration file is named mydistro.conf, then you
| point to it as follows:
|
| DISTRO = "mydistro"

Correct, that's what I remembered. But if I set it in local.conf before adding my layer, the bitbake sanity check will complain that there is no such layer. And if I modify local.conf after adding the layer, all recipes will be re-parsed when building the first time. I don't see any guidance in what order the steps should be carried out when setting up a new build area.

They do indicate that you should copy another distro's conf file; that's why I asked whether using "require" as we do is a bad/dangerous idea.
> You can look some Freescale example here:
> https://github.com/Freescale/meta-freescale-distro

Good, I hadn't found those. At least the distro/.conf files look pretty much like I would expect. But all of them (I understand the layer defines 8 different distros) do set the DISTRO variable. I haven't run yocto-check-layer for a while, but the last time I did, it complained that a distro layer MUST NOT set its own DISTRO variable. Do you happen to know whether meta-freescale-distro passes the yocto-check-layer script without complaints? (I know I could test myself...)

> In fact, when the Yocto manual refers to adding something to local.conf,
> it is the minimal way to do it, but the cleanest is indeed to set it
> in the machine/distro config file.

That has also been my thinking. If that's generally accepted, we probably should send a patch for the manual. It's pretty misleading if the manual without any further explanation says "do it way A" if "way B" often is actually the better one.

> 1.) DISTRO is local to build folder,

By "local to build folder", do you mean that within the same build folder one cannot build more than 1 distro? Not relevant for me, because I have only one for the time being, but just to understand the message.

> so it should be in local.conf, and you can use a wrapper script to help
> creating this local.conf file,

Yes, that's what we do. Without a wrapper script builds were unlikely to be reproducible by any measure...

> https://github.com/Freescale/fsl-community-bsp-base/blob/master/setup-environment

Interesting. I cannot spot how that script handles layers. My own script

1) sources oe-init-build-env
2) modifies the generated conf/local.conf
3) runs bitbake-layers add-layer for all layers we need in our distro

>> In an attempt to make our code more in line with what I understand to be
>> correct style, I tried to move all policy settings to
>> distro/.conf and the DISTRO setting to local.conf.
>>
>> However that created a couple of problems:
>>
>> 1. If I set DISTRO in local.conf before adding all needed layers, my own
>> distro layer last, bitbake-layers fails in a sanity check that the
>> distro does not exist. Fair enough, it doesn't know about it before it has
>> been added.
>>
>> 2. If I first modify my local.conf with all my settings except DISTRO,
>> then run all the add-layer operations and finally modify local.conf a
>> 2nd time to set DISTRO correctly, the build works. However, all recipes
>> are reparsed when the build starts. I guess this is caused by the config
>> change.
>>
>> So in the end having the DISTRO setting in layer.conf seems to work
>> best. All add-layer commands succeed and recipes can be used from the
Re: [yocto] Why can diffsigs take sometimes really looooong?
Thanks for your reply!

On Sun, Jul 15, 2018 at 1:38 PM, Richard Purdie wrote:
> On Fri, 2018-07-13 at 19:18 +0300, Uwe Geuder wrote:
>> At times I find the diffsigs command useful/educational to understand
>> what is going on in my build.
>>
>> $ bitbake-diffsigs -t myimage do_image
>>
>> Often the result is shown in no time. However, recently I got some
>> cases where it takes 150 (!) minutes to show a simple difference (1 line
>> changed in do_install of systemd).
>>
>> In comparison, building after that change (with sstate) takes some 10
>> minutes. And building everything from scratch (no sstate) takes just a
>> bit over 50 minutes on the same machine.
>>
>> Of course the build can make good use of my 8 cores / 16 threads,
>> whereas diffsigs seems to run on only 1 core. Still, shouldn't every
>> build calculate all the dependencies, so running diffsigs should only be
>> a small fraction of that work?
>>
>> Is there a natural explanation why diffsigs can sometimes be so slow?
>> Just curious to understand what is going on there.
>>
>> I am on Rocko 2.4.3 if that makes a difference.
>
> Is your sstate directory on something slow like NFS?

No, it is on the same ext4 as the rest of my build area, backed by an Intel NVMe flash drive, so hopefully that is of reasonable quality...

Here is the output of the time command from a new "record" diffsigs run shortly after my first posting:

> real    244m8.594s
> user    137m21.897s
> sys     8m17.606s

So there are already more than 2 hours of CPU usage. Because diffsigs seems to be single threaded, that is a lower bound even with the best filesystem/disk.

This build environment has been in use in development work for a couple of days. So there are many different signatures. But that should not make any significant difference if I compare just the 2 last ones, should it?

When I had such a long run for the first time, I first thought I had run into an endless loop. strace showed that the same files were opened again and again.
However, in the end the command completed. I haven't made an effort yet to understand the pattern of how it opens these files.

For clean builds with no previous sstate I need "only" something like

> real   54m53.939s
> user   3268446 - 103936 = 3164510 = 527m25.10s
> system 820431 - 1345 = 136m30.86s

(These numbers are from overlayfs over ext4 on the same disk. I don't have any numbers without the additional overlayfs handy right now. According to experience the additional overlayfs leads to very similar wall times, just significant extra system CPU time.)

Regards,
Uwe Geuder
Neuro Event Labs Oy
Tampere, Finland
uwe.gex...@neuroeventlabs.com
[yocto] Why can diffsigs take sometimes really looooong?
Hi!

At times I find the diffsigs command useful/educational to understand what is going on in my build.

$ bitbake-diffsigs -t myimage do_image

Often the result is shown in no time. However, recently I got some cases where it takes 150 (!) minutes to show a simple difference (1 line changed in do_install of systemd).

In comparison, building after that change (with sstate) takes some 10 minutes. And building everything from scratch (no sstate) takes just a bit over 50 minutes on the same machine.

Of course the build can make good use of my 8 cores / 16 threads, whereas diffsigs seems to run on only 1 core. Still, shouldn't every build calculate all the dependencies, so running diffsigs should only be a small fraction of that work?

Is there a natural explanation why diffsigs can sometimes be so slow? Just curious to understand what is going on there.

I am on Rocko 2.4.3 if that makes a difference.

Regards,
Uwe Geuder
Neuro Event Labs Oy
Tampere, Finland
uwe.gex...@neuroeventlabs.com
Re: [yocto] ldd
Hi!

When searching for "yocto ldd" Google brings up this old posting.

On Mon Nov 11 05:25:08 PST 2013 Chris Hallinan challinan at gmail.com wrote:
> On Sat, Nov 9, 2013 at 9:57 AM, Gary Thomas wrote:
>
>> On 2013-11-09 06:42, Seth Bollinger wrote:
>>
>>> Hello All,
>>>
>>> I don't see that ldd is part of the cross toolchain build anywhere. Am I
>>> missing something?
>>
>> It's built with the eglibc recipe and is packaged separately.

Right, and the command to locate it is

$ oe-pkgdata-util find-path /usr/bin/ldd
ldd: /usr/bin/ldd

So the package name is indeed ldd.

> Gosh, I thought ldd, originally a hard-to-maintain script, was obsoleted a
> long time ago. This command:
>
> $ readelf --dynamic
>
> provides the same information, and readelf can be found in every cross
> toolchain.
>
> -Chris

I don't think that is correct. readelf only shows static information embedded in the binary. ldd does a runtime lookup obeying e.g. the LD_LIBRARY_PATH environment variable. It also has options -r and -u for actually doing the relocations, and it will report errors if they don't succeed. So ldd is more useful than readelf for some use cases.

Regards,
Uwe Geuder
Neuro Event Labs Oy
Tampere, Finland
uwe.gex...@neuroeventlabs.com
Re: [yocto] Wrong file's ownership in rootfs.
Hi!

On Fri, May 11, 2018 at 2:42 PM, Grzegorz Mierzejewski <mierzejewskigrzeg...@o2.pl> wrote:
> Hello all,
>
> I have the following problem concerning file ownership.
> In my recipe I install a new file to the rootfs and change its ownership in
> the do_install function:
>
> do_install () {
>     install -p -m 644 file1 ${D}/
>     chmod 777 ${D}/file1
>     chown ${USER_DUMMY} ${D}/file1
> }
>
> USER_DUMMY is properly created with the useradd class.
> Thing is, that file1 in the rootfs does not have the proper ownership - it is
> instead owned by root.
>
> I've checked the pseudo/files.db in the recipe's temp folder and the ownership is
> proper.
> Also, "bitbake my_package -c devshell" shows the proper ownership (as it uses
> files.db).
> But the same thing done in the image's temp folder results in bad (root)
> ownership.
>
> Of course, it happens in a modified Jethro delivered by a vendor.
> Everything works fine on official Jethro for Wandboard.
>
> Could anyone please give me some hints on what to look for as a root cause
> of such behavior?
> Or at least describe the process of generating the files.db for an image?
> Is it generated based on each package's files.db?

I don't know the exact answer. I hope somebody can tell us how to debug the problem systematically. I have debugged similar issues before without full success, and here is what I happen to remember from the top of my head:

1) Check whether the ipk contains the desired ownership.

1a) An ipk is an "ar" file containing 3 files. Extract it like this:

$ ar -x tmp/work/corei7-64-poky-linux/openssh/7.5p1-r0/deploy-ipks/corei7-64/openssh-foo_7.5p1-r0_corei7-64.ipk

1b) The files are contained in data.tar.gz. See their owners in textual form using

$ tar tvf data.tar.gz

(I'm not 100% sure how this relates to checking pseudo as you mention it. It might lead to the same result. But I feel checking the ipk contents is less dependent on low level implementation details.)

2) Check out the file poky/meta/files/fs-perms.txt. It's documented in the mega manual.
3) Could it be some postinst command that changes it? No detailed commands from the top of my head :(

4) Debug what you have in ROOTFS_POSTPROCESS_COMMAND:

$ bitbake -e my-image-recipe

Maybe something there calls chown?

5) If you cannot find/fix the root cause, consider something like

ROOTFS_POSTPROCESS_COMMAND_append = "; hack_protections"

hack_protections () {
    chown 42:42 ${IMAGE_ROOTFS}/foo/bar
}

in your image recipe. This is untested pseudo code, modify until it works :) (Note that the path should be prefixed with ${IMAGE_ROOTFS}, because the function runs against the rootfs being assembled on the build host.) I have successfully used ROOTFS_POSTPROCESS_COMMAND, but have never tried it for chown.

I am not sure whether you have access to symbolic user/group names in your recipe. Depending on what you have done with useradd, the numerical id might not be fixed. I vaguely remember that I have used

DEPENDS += "my-useradd"

in a recipe. I think that should make the symbolic ids useradd has created available to another recipe. I have not done it in an image recipe.

Regards,
Uwe Geuder
Neuro Event Labs Oy
Tampere, Finland
uwe.gex...@neuroeventlabs.com
Re: [yocto] building recipe with buildtools, causes sanity error: Your system needs to support the en_US.UTF-8 locale.
On Wed, Apr 25, 2018 at 1:48 PM, Burton, Ross <ross.bur...@intel.com> wrote:
> So my hunch is that you've a modern system with glibc 2.27 on the
> host, so that old buildtools won't work. Try this buildtools:
>
> http://downloads.yoctoproject.org/releases/yocto/milestones/yocto-2.5_M3/buildtools/

Ah, buildtools come with their own locale. So my previous message was probably not applicable for users of buildtools. Sorry about the noise!

Regards,
Uwe Geuder
Neuro Event Labs Oy
Tampere, Finland
uwe.gexder at neuroeventlabs.com
Re: [yocto] building recipe with buildtools, causes sanity error: Your system needs to support the en_US.UTF-8 locale.
Correcting myself...

On Wed, Apr 25, 2018 at 10:41 AM, Uwe Geuder <jrswdna...@snkmail.com> wrote:
[...]
>
> Bitbake uses Python. To make things more confusing, the locales in Python
> are not necessarily named the same way as in your underlying distro and
> there can be aliases.
>
> In the end this code needs to run without returning an error
>
> $ python
> >>> import locale
> >>> locale.setlocale(locale.LC_ALL, "en_US.UTF-8")
> 'en_US.UTF-8'

The command should have been python3. It should not make a difference here, but that's what we have said before...

Regards,
Uwe Geuder
Neuro Event Labs Oy
Tampere, Finland
uwe.gexder at neuroeventlabs.com
Re: [yocto] building recipe with buildtools, causes sanity error: Your system needs to support the en_US.UTF-8 locale.
On Tue, Apr 24, 2018 at 9:04 PM, John Smith <john-s...@gmx.net>:
> bitbake stops with this error message: "Your system needs to support the
> en_US.UTF-8 locale."
>
> export LC_ALL=en_US.UTF-8 has no effect. Do you know how to fix that
> error?

Use "locale -a" to show what locales your system supports at the moment. It appears that en_US.UTF-8 is not on the list. Installing/building additional locales is distro specific and out of scope for this list. Please use your favorite search engine to find out how to do it for your distro, or ask in your friendly distro support forum if necessary.

Exporting the environment variable just tells other programs what to use. It does not help to tell them to use something that is not installed. For bitbake you don't have to tell it anything; it insists on using en_US.UTF-8 anyway, as the error message shows.

Bitbake uses Python. To make things more confusing, the locales in Python are not necessarily named the same way as in your underlying distro and there can be aliases.

In the end this code needs to run without returning an error:

$ python
>>> import locale
>>> locale.setlocale(locale.LC_ALL, "en_US.UTF-8")
'en_US.UTF-8'

E.g. in Ubuntu the locale is listed by "locale -a" as en_US.utf8, but that works fine.

Regards,
Uwe Geuder
Neuro Event Labs Oy
Tampere, Finland
uwe.gexder at neuroeventlabs.com
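The same check can be wrapped in a small helper for scripting (a sketch along the lines of the interactive session above, not bitbake's actual sanity-check code; the function name is made up):

```python
import locale

def has_locale(name):
    """Return True if the Python runtime can switch to the given locale."""
    try:
        locale.setlocale(locale.LC_ALL, name)
        return True
    except locale.Error:
        return False
    finally:
        # restore a locale that is always available
        locale.setlocale(locale.LC_ALL, "C")

# bitbake's sanity check effectively requires this to be True:
print(has_locale("en_US.UTF-8"))
```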
Re: [yocto] Usage of yocto on different (production vs debug) scenarios
Thanks Ross for your answers. I'm still working on fully understanding shared state, so I appreciate your help. Let me follow up on both your answers in a single message.

On Mon, Apr 23, 2018 at 6:38 PM, Burton, Ross <ross.bur...@intel.com> wrote: > On 23 April 2018 at 16:23, Iván Castell <icast...@nayarsystems.com> wrote: >> [...] >> >> Related to that shared state cache, I found some information in an e-book >> (search the text on google to find the source): >> >> "Sharing a shared state cache is possible; however, it needs to be >> approached with care. Not all changes are detected by the shared state cache >> implementation, and when this happens, some or all of the cache needs to be >> invalidated. This can cause problems when the state cache is being shared. [...] > Very curious as to what book said that, because *any* example of that > happening is a bug in the recipe itself. I wouldn't listen to it: the YP > autobuilder has a shared sstate for three distributions * four architectures > * two libc implementations and doesn't have problems. > > Ross

I think when talking about shared state there can be two aspects of sharing: 1.) sharing over time: several invocations of bitbake in the same environment (DISTRO, MACHINE etc.) 2.) sharing over distros etc.

Number 1 is an incremental build (as opposed to a clean or full build). Without any doubt that is useful (or really mandatory) for development work. You build, you make little changes, you build, you test, you make more changes... You share state from the previous build, because making a full clean build every time would lead to completely unacceptable cycle times.

But when you are "done" with your changes you make an integration build, and in my experience that should be a clean (aka full) build. You intentionally do not share any state from a previous build, you don't make an incremental build, because there is always the risk of broken recipes.
I mean, if there were no broken software, we would all be unemployed by now... Let me give a simple example I experienced myself recently: I added creation of a new file in some task. In my incremental developer build it all worked fine. When I put it into a clean integration build the task failed. It turned out that the directory I wrote my new file to in task A was created by a different task B. In my development build task B had already been executed long before I even started to make the recipe modification to create a new file, so task A always succeeded. In the clean build task A happened to run first and failed.

A simple missing dependency / incorrect task ordering. With parallel clean builds you might not always find it, but with incremental builds it can go undetected forever.

It's not only "some book" that mentions the risk of "bad" state, but also the Yocto documentation: https://www.yoctoproject.org/docs/2.5/mega-manual/mega-manual.html#concepts-tips-and-tricks (Especially 4.5.4.2. Invalidating Shared State. Of course the invalidating is easy, but finding what you need to invalidate is hard.)

Of course, the less something is tested, the bigger the risk. So the bitbake system should be rather fine, and commonly used Poky recipes probably too. But when it comes to random layers and our own local recipes and bbappends, I would expect the risk to grow a lot. So are you really telling us in the message above that the Yocto project runs incremental integration builds?

Another way to think about it: shared state is used to avoid rebuilds. So how could I benefit from having state information in some SSTATE_DIR already? I have to build it anyway, because my build area is empty to begin with.

The 2nd aspect, sharing over distros, was covered in the other message. On Mon, Apr 23, 2018 at 6:10 PM, Burton, Ross <ross.bur...@intel.com> wrote: > On 20 April 2018 at 11:47, Uwe Geuder <jrswdna...@snkmail.com> wrote: >> But can you share state between distros?
Isn't the purpose of distros to >> use different options (variable settings) so the state would always be >> different? > > If the input to the recipe is different then the hashes would be > different so the content won't conflict. You can definitely share > sstate between DISTROs. > > Ross

Under the assumption that all recipes were perfect (which they probably aren't at every point in history, see above) I fully believe that the build system would work perfectly when I share the SSTATE_DIR over DISTRO borders (or different MACHINE settings). But do I really have any common data that would lead to any saving? As you say, the hashes would be different. In my thinking all "expensive" state info should be different, because a different DISTRO (or different MACHINE) would lead to different hashes for basically every item. So in the best case I
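[Editorial sketch] The missing-dependency anecdote above (task A writing into a directory that only task B creates) can be sketched as a hypothetical recipe fragment; the task and file names here are illustrative, not from the original recipe. The fix is to make the ordering explicit:

```
# Hypothetical recipe fragment illustrating the bug class described above.
# do_write_extra writes into a directory that only exists after do_install
# has run. Without the explicit "after do_install", the relative order of
# the two tasks is undefined, so an incremental build can succeed while a
# clean build fails.
do_write_extra() {
    # ${D}${sysconfdir} is created by do_install
    echo "example" > ${D}${sysconfdir}/extra.conf
}
addtask write_extra after do_install before do_package
```

With the `after do_install` clause, a clean build schedules the tasks in the same order the incremental build happened to use.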
[yocto] creating your own distro layer
Hi! I have inherited our custom distro layer (heavily based on poky), but I am not sure whether it has been coded correctly. It produces working images, but certain weird symptoms when working with bitbake made me double-check the code.

I'm reading https://www.yoctoproject.org/docs/2.4.2/mega-manual/mega-manual.html#creating-your-own-distribution It says: > Create the distribution configuration file: The distribution > configuration file needs to be created in the conf/distro directory > of your layer. You need to name it using your distribution name > (e.g. mydistro.conf). That seems clear to me. It continues: > Note: The DISTRO variable in your local.conf file determines the > name of your distribution.

I find that statement somewhat ambiguous. Didn't we just read above that the name of the distro is determined by the conf/distro/<name>.conf file? And in the general case you have several of them. So at least I would find it easier to understand if the note read something like: "The DISTRO variable in your local.conf file selects which distro to build." The wording in the manual might make sense if you think of the distribution as a build product. But it doesn't make any sense to me when thinking about distro metadata, and that's what the reader of that section is likely to be looking at/editing at this moment. Or am I missing something here?

The confusion continues. The manual says: > Your configuration file needs to set the following required variables: > DISTRO_NAME > DISTRO_VERSION If I look into https://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/meta-poky/conf/distro/poky-tiny.conf?h=rocko it does not set a DISTRO_NAME. Well, it gets a DISTRO_NAME from > require conf/distro/poky.conf So assuming that poky-tiny and poky are different distros, should they really share the same name? Maybe there is something like a distro and a distro variant, but I could not find any such concept in the manual. Sometimes I see references to a poky-based distro, though.
I assume the DISTRO_NAME is mostly cosmetic / for humans. So not that much to worry about? But the DISTRO variable seems to be central for the build process. There is a test in the script yocto-check-layer that the distro config file does not set the DISTRO variable. When running yocto-check-layer on our own distro layer I got a failure for setting DISTRO. So I tried to remove the variable setting. But that results in my DISTRO variable ending up being set to "yocto".

I think it goes like this: 1.) my local.conf sets DISTRO to my own one 2.) in consequence my own distro conf file gets processed 3.) it does "require" of yocto.conf and that resets the DISTRO value to "yocto" 4.) (originally our own distro config file reset the value to our own name after the "require", so the name was "correct" in the end. But yocto-check-layer did not like that.)

So what is wrong here? Why does the poky code violate the test condition of not setting DISTRO? Or is nobody supposed to "require" poky.conf? I could of course copy the contents, but copying code is usually not a good idea.

Thanks for reading until here, I couldn't formulate it any shorter... Uwe Geuder Neuro Event Labs Oy Tampere, Finland uwe.gex...@neuroeventlabs.com (Bot check: fix one obvious typo)
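[Editorial sketch] The sequence described above boils down to a conf fragment roughly like this (file and distro names are hypothetical, not the poster's actual layer):

```
# conf/distro/mydistro.conf -- hypothetical sketch of the pattern discussed
require conf/distro/poky.conf   # poky.conf sets DISTRO itself, clobbering
                                # whatever local.conf selected

DISTRO = "mydistro"             # restores the intended name, but
                                # yocto-check-layer flags setting DISTRO
                                # inside a layer's distro config file
DISTRO_NAME = "My Distro"
DISTRO_VERSION = "1.0"
```

Removing the `DISTRO = "mydistro"` line to satisfy yocto-check-layer leaves DISTRO at whatever value the required file set, which is exactly the symptom described above.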
Re: [yocto] Usage of yocto on different (production vs debug) scenarios
Following up on my own message... On Fri, Apr 20, 2018 at 1:47 PM, Uwe Geuder wrote: > On Fri, Apr 20, 2018 at 10:59 AM, Iván Castell <icast...@nayarsystems.com> > wrote: > [...] >> However, things are getting complicated, because there is no way to >> exclude some recipes easily. For example, we don't want iptables >> installed on the debug image, but dependency chains include iptables >> by default > > Doesn't blacklist do what you want? > > E.g. in your distro-production.conf > > PNBLACKLIST[iptables] = "we don't want iptables in product" >

As recently mentioned in another thread on this list, there is also BBMASK. https://lists.yoctoproject.org/pipermail/yocto/2018-April/040861.html I don't know what the suitable/most idiomatic use cases for PNBLACKLIST vs. BBMASK would be.

Regards, Uwe Geuder Neuro Event Labs Oy Tampere, Finland uwe.gex...@neuroeventlabs.com (Bot check: fix one obvious typo)
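[Editorial sketch] For comparison, the two mechanisms mentioned above look roughly like this in a conf file (the layer path in the BBMASK line is hypothetical). PNBLACKLIST keeps the recipe parsed but refuses to build it with the given message, while BBMASK hides matching recipe files from bitbake entirely:

```
# local.conf or distro .conf -- illustrative sketch

# Option 1: recipe stays visible to bitbake, but any attempt to build it
# fails with this message
PNBLACKLIST[iptables] = "we don't want iptables in product"

# Option 2: regex on recipe file paths; matching .bb/.bbappend files are
# never parsed at all
BBMASK += "meta-example/recipes-filter/iptables/"
```

A practical consequence of the difference: with BBMASK, nothing can even reference the recipe, while PNBLACKLIST produces an explicit error if a dependency chain still pulls it in, which makes the offending dependency easier to find.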
Re: [yocto] Usage of yocto on different (production vs debug) scenarios
Hi! On Fri, Apr 20, 2018 at 10:59 AM, Iván Castell <icast...@nayarsystems.com> wrote: > We are trying to use yocto in a continuous integration environment with > different (production vs debug) scenarios. > > To setup a given scenario (production vs debug) we are using something like > this: > > $ SCENARIO=debug > $ MACHINE=<machine> DISTRO=<distro>-${SCENARIO} source <setup-script> > ../../build-<machine>-${SCENARIO} > $ bitbake <image>-${SCENARIO} > > So we have different image recipes: > > * image-production.bb > * image-debug.bb > > Different distros: > > * distro-production.conf > * distro-debug.conf > > And different build directories: > > * build-<machine>-production > * build-<machine>-debug > > To optimize space usage and compilation time, we setup a shared sstate > cache and a shared directory for downloads. This seems a good starting > point.

Shared download, yes. But can you share state between distros? Isn't the purpose of distros to use different options (variable settings), so the state would always be different? (Please note: this is really a follow-up question, not me knowing better. I am just trying to fully understand these concepts.)

> However, things are getting complicated, because there is no way to > exclude some recipes easily. For example, we don't want iptables > installed on the debug image, but dependency chains include iptables > by default

Doesn't blacklist do what you want? E.g. in your distro-production.conf: PNBLACKLIST[iptables] = "we don't want iptables in product" Of course, if something has a hard dependency on iptables, that something might need blacklisting instead, or too.

Regards, Uwe Geuder Neuro Event Labs Oy Tampere, Finland uwe.gex...@neuroeventlabs.com (Bot check: fix one obvious typo)

> even when declaring IMAGE_INSTALL_remove explicitly. In this case we decided > to check the scenario in a <recipe>_%.bbappend to decide what rules are > installed (rules.production vs rules.debug). > > But this method is poisoning all our recipes with that kind of check. > > Is this the right way to manage this? Can you suggest a better > way to deal with this?