Re: [yocto] Build time data
On Thu, Apr 12, 2012 at 11:08:19PM -0700, Darren Hart wrote:
> On 04/12/2012 10:51 PM, Martin Jansa wrote:
>
> > And my system is very slow compared to yours, I've found my
> > measurement of core-image-minimal-with-mtdutils around 95 mins
> > http://patchwork.openembedded.org/patch/17039/ but this was with
> > Phenom II X4 965, 4GB RAM, RAID0 (3 SATA2 disks) for WORKDIR, RAID5
> > (the same 3 SATA2 disks) BUILDDIR (raid as mdraid), now I have
> > Bulldozer AMD FX(tm)-8120, 16GB RAM, still the same RAID0 but
> > different motherboard..
>
> Why RAID5 for BUILDDIR? The write overhead of RAID5 is very high. The
> savings RAID5 allots you are more significant with more disks, but with
> 3 disks it's only 1 disk better than RAID10, with a lot more overhead.

Because RAID10 needs at least 4 drives and all my SATA ports are already
used, and it's also on my /home partition.. please note that this is not
some company build server, just my desktop where it happens I do a lot of
builds for a community distribution for smartphones, http://shr-project.org

The server we have available for builds is _much_ slower than this,
especially in IO (some virtualized host on a busy server), but it has much
better network bandwidth.. :).

Cheers,

> I spent some time outlining all this a while back:
> http://www.dvhart.com/2011/03/qnap_ts419p_configuration_raid_levels_and_throughput/
>
> Here's the relevant bit:
>
> "RAID 5 distributes parity across all the drives in the array, this
> parity calculation is both compute intensive and IO intensive. Every
> write requires the parity calculation, and data must be written to
> every drive."

--
Martin 'JaMa' Jansa     jabber: martin.ja...@gmail.com
Re: [yocto] Build time data
On 04/12/2012 10:51 PM, Martin Jansa wrote:

> And my system is very slow compared to yours, I've found my
> measurement of core-image-minimal-with-mtdutils around 95 mins
> http://patchwork.openembedded.org/patch/17039/ but this was with
> Phenom II X4 965, 4GB RAM, RAID0 (3 SATA2 disks) for WORKDIR, RAID5
> (the same 3 SATA2 disks) BUILDDIR (raid as mdraid), now I have
> Bulldozer AMD FX(tm)-8120, 16GB RAM, still the same RAID0 but
> different motherboard..

Why RAID5 for BUILDDIR? The write overhead of RAID5 is very high. The
savings RAID5 allots you are more significant with more disks, but with
3 disks it's only 1 disk better than RAID10, with a lot more overhead.

I spent some time outlining all this a while back:
http://www.dvhart.com/2011/03/qnap_ts419p_configuration_raid_levels_and_throughput/

Here's the relevant bit:

"RAID 5 distributes parity across all the drives in the array, this
parity calculation is both compute intensive and IO intensive. Every
write requires the parity calculation, and data must be written to
every drive."

--
Darren Hart
Intel Open Source Technology Center
Yocto Project - Linux Kernel
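The RAID levels being compared can be reproduced with mdadm; the commands below are only an orienting sketch -- the device names and mount point are examples, not either poster's actual layout:

  # three-disk stripe (RAID0) for a disposable build area
  mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1

  # the same three disks as RAID5: every write also pays the parity calculation
  mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1

  mkfs.ext4 /dev/md0
  mount -o noatime,commit=6000 /dev/md0 /build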
Re: [yocto] Build time data
On Thu, Apr 12, 2012 at 04:37:00PM -0700, Flanagan, Elizabeth wrote:
> On Thu, Apr 12, 2012 at 7:12 AM, Darren Hart wrote:
> >
> > On 04/12/2012 01:00 AM, Martin Jansa wrote:
> > > On Thu, Apr 12, 2012 at 01:05:00PM +0530, Joshua Immanuel wrote:
> > >> Darren,
> > >>
> > >> On Wed, 2012-04-11 at 17:30 -0700, Darren Hart wrote:
> > >>> I run on a beast with 12 cores, 48GB of RAM, OS and sources on
> > >>> a G2 Intel SSD, with two Seagate Barracudas in a RAID0 array
> > >>> for my /build partition. I run a headless Ubuntu 11.10 (x86_64)
> > >>> installation running the 3.0.0-16-server kernel. I can build
> > >>> core-image-minimal in < 30 minutes and core-image-sato in < 50
> > >>> minutes from scratch.
> > >
> > > why not use so much RAM for WORKDIR in tmpfs? I bought 16GB just to
> > > be able to do my builds in tmpfs and keep only more permanent data
> > > on RAID.
> >
> > We've done some experiments with tmpfs, adding Beth on CC. If I recall
> > correctly, my RAID0 array with the mount options I specified
> > accomplishes much of what tmpfs does for me without the added setup.
>
> This should be the case in general. For the most part, if you have a decent
> RAID setup (We're using RAID10 on the ab) with fast disks you should be
> able to hit tmpfs speed (or close to it). I've done some experiments with
> this and what I found was maybe a 5 minute difference, sometimes, from a
> clean build between tmpfs and RAID10.

5 minutes on a very small image like core-image-minimal (30 min) is 1/6 of
that time :).. I have much bigger images and an even bigger ipk feed, so a
rebuild from scratch takes about 24 hours for one architecture..

And my system is very slow compared to yours, I've found my measurement of
core-image-minimal-with-mtdutils around 95 mins
http://patchwork.openembedded.org/patch/17039/ but this was with a
Phenom II X4 965, 4GB RAM, RAID0 (3 SATA2 disks) for WORKDIR, RAID5 (the
same 3 SATA2 disks) for BUILDDIR (raid as mdraid); now I have a Bulldozer
AMD FX(tm)-8120, 16GB RAM, still the same RAID0 but a different
motherboard..

The problem with tmpfs is that no RAM is big enough to build the whole feed
in one go, so I have to build in steps (e.g. bitbake gcc for all machines
with the same architecture, then clean up WORKDIR and switch to another
arch, then bitbake small-image, bigger-image, qt4-x11-free, ...).
qt4-x11-free alone is able to eat a 15GB tmpfs almost completely.

> I discussed this during Yocto Developer Day. Let me boil it down a bit to
> explain some of what I did on the autobuilders.
>
> Caveat first though. I would avoid using autobuilder time as representative
> of prime yocto build time. The autobuilder hosts a lot of different
> services that sometimes impact build time and this can vary depending on
> what else is going on on the machine.
>
> There are four places, in general, where you want to look at optimizing
> outside of dependency issues: CPU, disk, memory, build process. What I
> found was that the most useful of these in getting the autobuilder time
> down was disk and build process.
>
> With disk, spreading it across the RAID saved us not only a bit of time,
> but also helped us avoid trashed disks. More disk thrash == higher failure
> rate. So far this year we've seen two disk failures that have resulted in
> almost zero autobuilder downtime.

True for RAID10, but for WORKDIR itself RAID0 is cheaper, and even with a
higher failure rate that's not a big issue for WORKDIR.. you just have to
cleansstate the tasks which were hit in the middle of the build.

> The real time saver however ended up being maintaining sstate across build
> runs. Even with our sstate on nfs, we're still seeing a dramatic decrease
> in build time.
>
> I would be interested in seeing what times you get with tmpfs. I've done
> tmpfs builds before and have seen good results, but bang for the buck did
> end up being a RAID array.

I'll check if core-image-minimal can be built with just a 15GB tmpfs,
otherwise I would have to build it in 2 steps and the time won't be
precise.

> > With a higher commit interval, the kernel doesn't try to sync the
> > dcache with the disks as frequently (eg not even once during a build),
> > so it's effectively writing to memory (although there is still plenty
> > of IO occurring).
> >
> > The other reason is that while 48GB is plenty for a single build, I
> > often run many builds in parallel, sometimes in virtual machines when
> > I need to reproduce or test something on different hosts.
> >
> > For example:
> >
> > https://picasaweb.google.com/lh/photo/7PCrqXQqxL98SAY1ecNzDdMTjNZETYmyPJy0liipFm0?feat=directlink

--
Martin 'JaMa' Jansa     jabber: martin.ja...@gmail.com
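To make the tmpfs approach discussed above concrete, a minimal sketch -- the size, mount point and paths are examples only, not the configuration actually used in this thread:

  # /etc/fstab: a 15GB tmpfs to host the build's tmp/work area
  tmpfs   /home/builder/tmpfs-build   tmpfs   size=15g,noatime   0 0

  # conf/local.conf: put TMPDIR (and hence WORKDIR) on the tmpfs, but keep
  # downloads and sstate on persistent storage so they survive a reboot
  TMPDIR = "/home/builder/tmpfs-build"
  DL_DIR = "/home/builder/downloads"
  SSTATE_DIR = "/home/builder/sstate-cache"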
[yocto] Unable to Load Kernel Module
Hi,

I am facing a problem loading a kernel module at startup. I have added
"kernel-modules" to the rootfs image file. I am getting the error mentioned
below:

  modprobe: FATAL: Could not load
  /lib/modules/2.6.35.3-433-g0fae922/modules.dep: No such file or directory
  flexcan flexcan.0: writing ctrl=0x25292004
  enable

  Yocto (Built by Poky 5.0) 1.0 EBboard ttymxc0

I noticed that I am getting a "2.6.35.3" directory under /lib/modules/. As
per my understanding, Poky is supposed to make a soft link of "2.6.35.3" to
"2.6.35.3-433-g0fae922". I tried making this soft link manually and it
worked.

Can anyone please suggest which changes I need to make in the recipes to
create this soft link?

Thanks and Regards
Navani Srivastava
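One way to script the manual workaround described above is a rootfs post-processing hook; this is only a sketch under the assumption that the version mismatch itself is tolerable (the cleaner fix is usually to make the kernel recipe's reported version match the installed modules directory). The function name is made up; the version strings come from the error output:

  # in the image recipe (or a .bbappend for it)
  fix_modules_dir() {
      ln -sf 2.6.35.3 ${IMAGE_ROOTFS}/lib/modules/2.6.35.3-433-g0fae922
  }
  ROOTFS_POSTPROCESS_COMMAND += "fix_modules_dir; "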
[yocto] [PATCH 4/4] yocto-bsp: fix x86_64 tuning for qemu arch
From: Tom Zanussi

While testing the fix for [YOCTO #] I noticed that the tuning for the
qemu x86_64 target was using the wrong tuning file - it should be x86_64
instead of i586. Change the template to match.

Signed-off-by: Tom Zanussi
---
 .../arch/qemu/conf/machine/{{=machine}}.conf       |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/scripts/lib/bsp/substrate/target/arch/qemu/conf/machine/{{=machine}}.conf b/scripts/lib/bsp/substrate/target/arch/qemu/conf/machine/{{=machine}}.conf
index 61ba842..0e4ecad 100644
--- a/scripts/lib/bsp/substrate/target/arch/qemu/conf/machine/{{=machine}}.conf
+++ b/scripts/lib/bsp/substrate/target/arch/qemu/conf/machine/{{=machine}}.conf
@@ -20,7 +20,7 @@ PREFERRED_PROVIDER_virtual/libgl ?= "mesa-dri"
 {{ if qemuarch == "i386": }}
 require conf/machine/include/tune-i586.inc
 {{ if qemuarch == "x86_64": }}
-require conf/machine/include/tune-i586.inc
+require conf/machine/include/tune-x86_64.inc
 {{ if qemuarch == "arm": }}
 require conf/machine/include/tune-arm926ejs.inc
 {{ if qemuarch == "powerpc": }}
--
1.7.0.4
[yocto] [PATCH 3/4] yocto-bsp: fix multi-provider error in qemu arch template
From: Tom Zanussi

While testing the fix for [YOCTO #] I noticed a new build error that
wasn't there in previous testing:

  ERROR: Multiple .bb files are due to be built which each provide
  virtual/libgl

The build still completed and produced a good image, but an error message
was displayed, which this patch removes.

Signed-off-by: Tom Zanussi
---
 .../arch/qemu/conf/machine/{{=machine}}.conf       |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/scripts/lib/bsp/substrate/target/arch/qemu/conf/machine/{{=machine}}.conf b/scripts/lib/bsp/substrate/target/arch/qemu/conf/machine/{{=machine}}.conf
index 003ead1..61ba842 100644
--- a/scripts/lib/bsp/substrate/target/arch/qemu/conf/machine/{{=machine}}.conf
+++ b/scripts/lib/bsp/substrate/target/arch/qemu/conf/machine/{{=machine}}.conf
@@ -9,6 +9,7 @@ PREFERRED_PROVIDER_virtual/kernel ?= "{{=preferred_kernel}}"
 PREFERRED_VERSION_{{=preferred_kernel}} ?= "{{=preferred_kernel_version}}%"
 
 PREFERRED_PROVIDER_virtual/xserver ?= "xserver-xorg"
+PREFERRED_PROVIDER_virtual/libgl ?= "mesa-dri"
 
 {{ input type:"choicelist" name:"qemuarch" prio:"5" msg:"Which qemu architecture would you like to use?" default:"i386" }}
 {{ input type:"choice" val:"i386" msg:"i386(32-bit)" }}
--
1.7.0.4
[yocto] [PATCH 2/4] yocto-bsp: fix qemuarch test for xserver-xf86-config.bbappend
From: Tom Zanussi

While testing the fix for [YOCTO #] I noticed that the qemuarch test was
wrong - there is no 'x86' qemuarch, just 'i386'. Change the test to match.

Signed-off-by: Tom Zanussi
---
 ..."x86_64\": }} xserver-xf86-config_0.1.bbappend" |    0
 1 files changed, 0 insertions(+), 0 deletions(-)
 rename "scripts/lib/bsp/substrate/target/arch/qemu/recipes-graphics/xorg-xserver/{{ if qemuarch == \"x86\" or qemuarch == \"x86_64\": }} xserver-xf86-config_0.1.bbappend" => "scripts/lib/bsp/substrate/target/arch/qemu/recipes-graphics/xorg-xserver/{{ if qemuarch == \"i386\" or qemuarch == \"x86_64\": }} xserver-xf86-config_0.1.bbappend" (100%)

diff --git "a/scripts/lib/bsp/substrate/target/arch/qemu/recipes-graphics/xorg-xserver/{{ if qemuarch == \"x86\" or qemuarch == \"x86_64\": }} xserver-xf86-config_0.1.bbappend" "b/scripts/lib/bsp/substrate/target/arch/qemu/recipes-graphics/xorg-xserver/{{ if qemuarch == \"i386\" or qemuarch == \"x86_64\": }} xserver-xf86-config_0.1.bbappend"
similarity index 100%
rename from "scripts/lib/bsp/substrate/target/arch/qemu/recipes-graphics/xorg-xserver/{{ if qemuarch == \"x86\" or qemuarch == \"x86_64\": }} xserver-xf86-config_0.1.bbappend"
rename to "scripts/lib/bsp/substrate/target/arch/qemu/recipes-graphics/xorg-xserver/{{ if qemuarch == \"i386\" or qemuarch == \"x86_64\": }} xserver-xf86-config_0.1.bbappend"
--
1.7.0.4
[yocto] [PATCH 1/4] yocto-bsp: enable property value display of nested properties
From: Tom Zanussi

Previous versions of yocto-bsp mapped every input element to a unique
variable name, which is what the current property value display code
expects. When that was changed to a nested form, the display code wasn't
updated to match - this update does that.

Fixes [YOCTO #]

Signed-off-by: Tom Zanussi
---
 scripts/lib/bsp/engine.py |   56 +++-
 scripts/lib/bsp/kernel.py |    2 +-
 2 files changed, 55 insertions(+), 3 deletions(-)

diff --git a/scripts/lib/bsp/engine.py b/scripts/lib/bsp/engine.py
index d2f0735..8e53f00 100644
--- a/scripts/lib/bsp/engine.py
+++ b/scripts/lib/bsp/engine.py
@@ -1222,8 +1222,7 @@ def yocto_bsp_create(machine, arch, scripts_path, bsp_output_dir, codedump, prop
     context = create_context(machine, arch, scripts_path)
     target_files = expand_targets(context, bsp_output_dir)
 
-    if not properties:
-        input_lines = gather_inputlines(target_files)
+    input_lines = gather_inputlines(target_files)
 
     program_lines = []
 
@@ -1316,6 +1315,44 @@ def yocto_bsp_list_properties(arch, scripts_path, properties_file):
 
     print_dict(properties)
 
+
+def split_nested_property(property):
+    """
+    A property name of the form x.y describes a nested property
+    i.e. the property y is contained within x and can be addressed
+    using standard JSON syntax for nested properties.  Note that if a
+    property name itself contains '.', it should be contained in
+    double quotes.
+    """
+    splittable_property = ""
+    in_quotes = False
+    for c in property:
+        if c == '.' and not in_quotes:
+            splittable_property += '\n'
+            continue
+        if c == '"':
+            in_quotes = not in_quotes
+        splittable_property += c
+
+    split_properties = splittable_property.split('\n')
+
+    if len(split_properties) > 1:
+        return split_properties
+
+    return None
+
+
+def find_input_line_group(substring, input_lines):
+    """
+    Find and return the InputLineGroup containing the specified substring.
+    """
+    for line in input_lines:
+        if isinstance(line, InputLineGroup):
+            if substring in line.group[0].line:
+                return line
+
+    return None
+
+
 def find_input_line(name, input_lines):
     """
     Find the input line with the specified name.
@@ -1330,6 +1367,8 @@ def find_input_line(name, input_lines):
         try:
             if line.props["name"] == name:
                 return line
+            if line.props["name"] + "_" + line.props["nameappend"] == name:
+                return line
         except KeyError:
             pass
 
@@ -1363,6 +1402,17 @@ def yocto_bsp_list_property_values(arch, property, scripts_path, properties_file
 
     properties = get_properties(input_lines)
 
+    nested_properties = split_nested_property(property)
+    if nested_properties:
+        # currently the outer property of a nested property always
+        # corresponds to an input line group
+        input_line_group = find_input_line_group(nested_properties[0], input_lines)
+        if input_line_group:
+            input_lines[:] = input_line_group.group[1:]
+        # The inner property of a nested property name is the
+        # actual property name we want, so reset to that
+        property = nested_properties[1]
+
     input_line = find_input_line(property, input_lines)
     if not input_line:
         print "Couldn't find values for property %s" % property
@@ -1376,6 +1426,8 @@ def yocto_bsp_list_property_values(arch, property, scripts_path, properties_file
     elif type == "choicelist" or type == "checklist":
         try:
             gen_fn = input_line.props["gen"]
+            if nested_properties:
+                context["filename"] = nested_properties[0]
             values_list = input_line.gen_choices_list(context, False)
         except KeyError:
             for choice in input_line.choices:
diff --git a/scripts/lib/bsp/kernel.py b/scripts/lib/bsp/kernel.py
index b4e7fbf..360851b 100644
--- a/scripts/lib/bsp/kernel.py
+++ b/scripts/lib/bsp/kernel.py
@@ -657,7 +657,7 @@ def find_giturl(context):
     bbs = glob.glob(bbglob)
     for kernel in bbs:
         filename = os.path.splitext(os.path.basename(kernel))[0]
-        if filename == filebase:
+        if filename in filebase:
             giturl = extract_giturl(kernel)
             return giturl
--
1.7.0.4
[yocto] [PATCH 0/4] yocto-bsp fixes
From: Tom Zanussi

This patchset fixes [YOCTO #], updating the property value listing code to
match the current representation.

The following combinations were tested (BSPs generated by yocto-bsp and
sato images built and boot-tested):

- qemu i386 generated from JSON using the 3.2 kernel
- qemu arm generated from the text UI using the 3.2 kernel
- qemu x86-64 generated from JSON using the 3.0 kernel
- x86-64 generated from JSON using the 3.2 kernel
- i386 generated from the text UI using the 3.0 kernel

Jessica Zhang has also verified that the property value listing and BSP
generation from JSON work for her in the Eclipse plugin.

There are also a few minor patches to the template files here that fix
problems encountered when testing.

Please pull into poky/master.

Thanks,

Tom

The following changes since commit 6703173449ad21e1623ac75a66535cb2ed52aeeb:

  Richard Purdie (1):
        package_rpm.bbclass: Set tmppath for rpm to somewhere which won't
        conflict with the rootfs

are available in the git repository at:

  git://git.yoctoproject.org/poky-contrib.git tzanussi/-fix
  http://git.yoctoproject.org/cgit/cgit.cgi/poky-contrib/log/?h=tzanussi/-fix

Tom Zanussi (4):
  yocto-bsp: enable property value display of nested properties
  yocto-bsp: fix qemuarch test for xserver-xf86-config.bbappend
  yocto-bsp: fix multi-provider error in qemu arch template
  yocto-bsp: fix x86_64 tuning for qemu arch

 scripts/lib/bsp/engine.py                          |   56 +++-
 scripts/lib/bsp/kernel.py                          |    2 +-
 .../arch/qemu/conf/machine/{{=machine}}.conf       |    3 +-
 ..."x86_64\": }} xserver-xf86-config_0.1.bbappend" |    0
 4 files changed, 57 insertions(+), 4 deletions(-)
 rename "scripts/lib/bsp/substrate/target/arch/qemu/recipes-graphics/xorg-xserver/{{ if qemuarch == \"x86\" or qemuarch == \"x86_64\": }} xserver-xf86-config_0.1.bbappend" => "scripts/lib/bsp/substrate/target/arch/qemu/recipes-graphics/xorg-xserver/{{ if qemuarch == \"i386\" or qemuarch == \"x86_64\": }} xserver-xf86-config_0.1.bbappend" (100%)
Re: [yocto] QA for 1.2_M4.rc3
Hi Beth,

We found that the rc3 branch has been updated and rebuilt on the
autobuilder. The images are available at
http://autobuilder.pokylinux.org/nightly/20120412-2/

We should use it rather than 20120411-2 for our rc3 testing, right?

Thanks,
Yi

On 2012-04-12 09:31, Flanagan, Elizabeth wrote:
> Sorry for the lateness on this, we were waiting for a few patches. We
> began running the tagged rc3 build today at around 4pm PST. Images
> should be available in about 10 hours (we are building from a clean
> sstate).
>
> You can find images in a few hours at:
> http://autobuilder.yoctoproject.org/pub/nightly/20120411-2
>
> --
> Elizabeth Flanagan
> Yocto Project
> Build and Release
Re: [yocto] confusion about IMAGE_INSTALL and getting packages into rootfs
On 2012-04-12 18:10, Peter Tornel wrote:
> "Keep in mind that you should be using package names here, not recipe
> names."
>
> Thank you for the reply Darren. I'm not sure I understand the difference
> yet, but I'm working on it.

Perhaps you could send the exact setup so we might understand better what
your problem(s) are.

--
Gary Thomas                 |  Consulting for the
MLB Associates              |    Embedded world
Re: [yocto] confusion about IMAGE_INSTALL and getting packages into rootfs
"Keep in mind that you should be using package names here, not recipe
names."

Thank you for the reply Darren. I'm not sure I understand the difference
yet, but I'm working on it.
Re: [yocto] native recipe and sysroot-destdir troubles
More / better info:

On 04/12/2012 10:44 AM, Philip Tricca wrote:
> I'm working on two new recipes and both are working quite well. Now I
> need native variants and online sources indicate this should be done
> through BBCLASSEXTEND = "native". For one of my recipes this works
> fine, for the other not so much.
>
> The error I'm seeing seems to be in the staging of the sysroot-destdir
> which ends up being empty though the source code builds fine (image
> directory has everything expected). do_populate_sysroot seems to assume
> there's a directory structure present which ends up being empty causing
> an error when it tries to tar the directory up. If I create the
> directories do_populate_sysroot expects the recipe runs to completion
> but sysroot-destdir still ends up being empty and no packages are built.

The image directory is populated as expected (it has lib, usr/include, etc.
with the expected files). The build is failing in populate_sysroot:

CalledProcessError: Command 'tar -cf - -C /home/build/poky-edison-6.0/build/tmp/work/i686-linux/libmylib-native-2.1.4-r0/sysroot-destdir///home/build/poky-edison-6.0/build/tmp/sysroots/i686-linux -ps . | tar -xf - -C /home/build/poky-edison-6.0/build/tmp/sysroots/i686-linux' returned non-zero exit status 2 with output
tar: /home/build/poky-edison-6.0/build/tmp/work/i686-linux/libmylib-native-2.1.4-r0/sysroot-destdir///home/build/poky-edison-6.0/build/tmp/sysroots/i686-linux: Cannot chdir: No such file or directory
tar: Error is not recoverable: exiting now

The failure is obviously due to sysroot-destdir being empty. The question
is why this directory is populated for libmylib but not for
libmylib-native... they're built from the same recipe.

Stack trace:

ERROR: The stack trace of python calls that resulted in this exception/failure was:
ERROR:   File "sstate_task_postfunc", line 10, in <module>
ERROR:
ERROR:   File "sstate_task_postfunc", line 4, in sstate_task_postfunc
ERROR:
ERROR:   File "sstate.bbclass", line 19, in sstate_install
ERROR:
ERROR:   File "/home/build/poky-edison-6.0/meta/lib/oe/path.py", line 59, in copytree
ERROR:     check_output(cmd, shell=True, stderr=subprocess.STDOUT)
ERROR:
ERROR:   File "/home/build/poky-edison-6.0/meta/lib/oe/path.py", line 121, in check_output
ERROR:     raise CalledProcessError(retcode, cmd, output=output)
ERROR:
ERROR: The code that was being executed was:
ERROR:      0006:        bb.build.exec_func(intercept, d)
ERROR:      0007:    sstate_package(shared_state, d)
ERROR:      0008:
ERROR:      0009:
ERROR:  *** 0010:sstate_task_postfunc(d)
ERROR:      0011:
ERROR: (file: 'sstate_task_postfunc', lineno: 10, function: <module>)
ERROR:      0001:
ERROR:      0002:def sstate_task_postfunc(d):
ERROR:      0003:    shared_state = sstate_state_fromvars(d)
ERROR:  *** 0004:    sstate_install(shared_state, d)
ERROR:      0005:    for intercept in shared_state['interceptfuncs']:
ERROR:      0006:        bb.build.exec_func(intercept, d)
ERROR:      0007:    sstate_package(shared_state, d)
ERROR:      0008:
ERROR: (file: 'sstate_task_postfunc', lineno: 4, function: sstate_task_postfunc)
ERROR: Function 'sstate_task_postfunc' failed

Thanks,
- Philip
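Since the suspicion is that the offending Makefile, not the recipe, is at fault, a minimal recipe sketch of the usual install pattern may help when comparing the two recipes -- the recipe name follows the error output above, everything else is illustrative:

  # libmylib_2.1.4.bb (sketch)
  do_install() {
      # everything must be installed relative to ${D}; Makefiles that
      # hard-code install paths instead of honouring DESTDIR/prefix are a
      # common reason the native variant ends up staging nothing
      oe_runmake install DESTDIR=${D}
  }

  BBCLASSEXTEND = "native"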
Re: [yocto] Build time data
On Thu, Apr 12, 2012 at 7:12 AM, Darren Hart wrote:

> On 04/12/2012 01:00 AM, Martin Jansa wrote:
> > On Thu, Apr 12, 2012 at 01:05:00PM +0530, Joshua Immanuel wrote:
> >> Darren,
> >>
> >> On Wed, 2012-04-11 at 17:30 -0700, Darren Hart wrote:
> >>> I run on a beast with 12 cores, 48GB of RAM, OS and sources on
> >>> a G2 Intel SSD, with two Seagate Barracudas in a RAID0 array
> >>> for my /build partition. I run a headless Ubuntu 11.10 (x86_64)
> >>> installation running the 3.0.0-16-server kernel. I can build
> >>> core-image-minimal in < 30 minutes and core-image-sato in < 50
> >>> minutes from scratch.
> >
> > why not use so much RAM for WORKDIR in tmpfs? I bought 16GB just to
> > be able to do my builds in tmpfs and keep only more permanent data
> > on RAID.
>
> We've done some experiments with tmpfs, adding Beth on CC. If I recall
> correctly, my RAID0 array with the mount options I specified
> accomplishes much of what tmpfs does for me without the added setup.

This should be the case in general. For the most part, if you have a decent
RAID setup (We're using RAID10 on the ab) with fast disks you should be
able to hit tmpfs speed (or close to it). I've done some experiments with
this and what I found was maybe a 5 minute difference, sometimes, from a
clean build between tmpfs and RAID10.

I discussed this during Yocto Developer Day. Let me boil it down a bit to
explain some of what I did on the autobuilders.

Caveat first though. I would avoid using autobuilder time as representative
of prime yocto build time. The autobuilder hosts a lot of different
services that sometimes impact build time and this can vary depending on
what else is going on on the machine.

There are four places, in general, where you want to look at optimizing
outside of dependency issues: CPU, disk, memory, build process. What I
found was that the most useful of these in getting the autobuilder time
down was disk and build process.

With disk, spreading it across the RAID saved us not only a bit of time,
but also helped us avoid trashed disks. More disk thrash == higher failure
rate. So far this year we've seen two disk failures that have resulted in
almost zero autobuilder downtime.

The real time saver however ended up being maintaining sstate across build
runs. Even with our sstate on nfs, we're still seeing a dramatic decrease
in build time.

I would be interested in seeing what times you get with tmpfs. I've done
tmpfs builds before and have seen good results, but bang for the buck did
end up being a RAID array.

> With a higher commit interval, the kernel doesn't try to sync the
> dcache with the disks as frequently (eg not even once during a build),
> so it's effectively writing to memory (although there is still plenty
> of IO occurring).
>
> The other reason is that while 48GB is plenty for a single build, I
> often run many builds in parallel, sometimes in virtual machines when
> I need to reproduce or test something on different hosts.
>
> For example:
>
> https://picasaweb.google.com/lh/photo/7PCrqXQqxL98SAY1ecNzDdMTjNZETYmyPJy0liipFm0?feat=directlink

--
Elizabeth Flanagan
Yocto Project
Build and Release
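The sstate reuse described above is a local.conf setting; a minimal sketch, assuming a shared (e.g. NFS-mounted) cache at /nfs/sstate-cache -- the path is an example:

  # conf/local.conf
  SSTATE_DIR = "/nfs/sstate-cache"

  # or keep a local cache and treat the shared one as a read-only mirror
  # ('PATH' is literal and expanded by bitbake)
  SSTATE_MIRRORS ?= "file://.* file:///nfs/sstate-cache/PATH"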
Re: [yocto] Build time data
On 04/12/2012 03:43 PM, Chris Tapp wrote:
> On 12 Apr 2012, at 15:34, Darren Hart wrote:
>> On 04/12/2012 07:08 AM, Björn Stenberg wrote:
>>> Darren Hart wrote:
>>>> /dev/md0    /build    ext4
>>>> noauto,noatime,nodiratime,commit=6000
>>>
>>> A minor detail: 'nodiratime' is a subset of 'noatime', so there is no
>>> need to specify both.
>>
>> Excellent, thanks for the tip.
>>
>>> I'm guessing those are rather fast cores?
>>
>> They are:
>> model name : Intel(R) Xeon(R) CPU X5680 @ 3.33GHz
>
> Nice, but well out of my budget - I've got to make do with what one of
> your CPUs costs for the whole system ;-)
>
>>> I build on a different type of beast: 64 cores at 2.1GHz and 128 GB
>>> ram. The OS is on a single SSD and the build dir (and sources) is on a
>>> RAID0 array of Intel 520 SSDs. Kernel is the same ubuntu
>>> 3.0.0-16-server as yours.
>>
>> Now that I think about it, my downloads are on the RAID0 array too.
>>
>> One thing that comes to mind is the parallel settings, BB_NUMBER_THREADS
>> and PARALLEL_MAKE. I noticed a negative impact if I increased these
>> beyond 12 and 14 respectively. I tested this with bb-matrix
>> (scripts/contrib/bb-perf/bb-matrix.sh). The script is a bit fickle, but
>> can provide useful results and killer 3D surface plots of build time
>> with BB and PM on the axis. Can't seem to find a plot image at the
>> moment for some reason...
>>
>>> Yet for all the combined horsepower, I am unable to match your time
>>> of 30 minutes for core-image-minimal. I clock in at around 37 minutes
>>> for a qemux86-64 build with ipk output:
>>>
>>> --
>>> NOTE: Tasks Summary: Attempted 1363 tasks of which 290 didn't
>>> need to be rerun and all succeeded.
>>>
>>> real    36m32.118s
>>> user    214m39.697s
>>> sys     108m49.152s
>>> --
>>>
>>> These numbers also show that my build is running less than 9x
>>> realtime, indicating that 80% of my cores sit idle most of the time.
>>
>> Yup, that sounds about right. The build has a linear component to it,
>> and anything above about 12 just doesn't help. In fact the added
>> scheduling overhead seems to hurt.
>>
>>> This confirms what "ps xf" says during the builds: Only rarely is
>>> bitbake running more than a handful of tasks at once, even with
>>> BB_NUMBER_THREADS at 64. And many of these tasks are in turn running
>>> sequential loops on a single core.
>>>
>>> I'm hoping to find time soon to look deeper into this issue and
>>> suggest remedies. It is my distinct feeling that we should be able to
>>> build significantly faster on powerful machines.
>>
>> Reducing the dependency chains that result in the linear component of
>> the build (forcing serialized execution) is one place we've focused, and
>> could probably still use some attention. CC'ing RP as he's done a lot
>> there.
>
> Current plan for a 'budget' system is:
>
> DX79TO motherboard, i7 3820, 16GB RAM, a pair of 60GB OCZ Vertex III's in
> RAID-0 for downloads / build, SATA HD for OS (Ubuntu 11.10 x86_64).
>
> That'll give me a 2.7x boost just on CPU and the SSDs (and maybe some
> over-clocking) will give some more.
>
> Not sure if SSDs in RAID-0 will give any boost, so I'll run some tests.
>
> Thanks to all for the comments in this thread.

Get back to us with times, and we'll build up a wiki page.

> Chris Tapp
>
> opensou...@keylevel.com
> www.keylevel.com

--
Darren Hart
Intel Open Source Technology Center
Yocto Project - Linux Kernel
Re: [yocto] Build time data
On 12 Apr 2012, at 15:34, Darren Hart wrote:
> On 04/12/2012 07:08 AM, Björn Stenberg wrote:
>> Darren Hart wrote:
>>> /dev/md0    /build    ext4
>>> noauto,noatime,nodiratime,commit=6000
>>
>> A minor detail: 'nodiratime' is a subset of 'noatime', so there is no
>> need to specify both.
>
> Excellent, thanks for the tip.
>
>> I'm guessing those are rather fast cores?
>
> They are:
> model name : Intel(R) Xeon(R) CPU X5680 @ 3.33GHz

Nice, but well out of my budget - I've got to make do with what one of your
CPUs costs for the whole system ;-)

>> I build on a different type of beast: 64 cores at 2.1GHz and 128 GB ram.
>> The OS is on a single SSD and the build dir (and sources) is on a RAID0
>> array of Intel 520 SSDs. Kernel is the same ubuntu 3.0.0-16-server as
>> yours.
>
> Now that I think about it, my downloads are on the RAID0 array too.
>
> One thing that comes to mind is the parallel settings, BB_NUMBER_THREADS
> and PARALLEL_MAKE. I noticed a negative impact if I increased these
> beyond 12 and 14 respectively. I tested this with bb-matrix
> (scripts/contrib/bb-perf/bb-matrix.sh). The script is a bit fickle, but
> can provide useful results and killer 3D surface plots of build time
> with BB and PM on the axis. Can't seem to find a plot image at the
> moment for some reason...
>
>> Yet for all the combined horsepower, I am unable to match your time
>> of 30 minutes for core-image-minimal. I clock in at around 37 minutes
>> for a qemux86-64 build with ipk output:
>>
>> --
>> NOTE: Tasks Summary: Attempted 1363 tasks of which 290 didn't
>> need to be rerun and all succeeded.
>>
>> real    36m32.118s
>> user    214m39.697s
>> sys     108m49.152s
>> --
>>
>> These numbers also show that my build is running less than 9x
>> realtime, indicating that 80% of my cores sit idle most of the time.
>
> Yup, that sounds about right. The build has a linear component to it,
> and anything above about 12 just doesn't help. In fact the added
> scheduling overhead seems to hurt.
>
>> This confirms what "ps xf" says during the builds: Only rarely is
>> bitbake running more than a handful of tasks at once, even with
>> BB_NUMBER_THREADS at 64. And many of these tasks are in turn running
>> sequential loops on a single core.
>>
>> I'm hoping to find time soon to look deeper into this issue and
>> suggest remedies. It is my distinct feeling that we should be able to
>> build significantly faster on powerful machines.
>
> Reducing the dependency chains that result in the linear component of
> the build (forcing serialized execution) is one place we've focused, and
> could probably still use some attention. CC'ing RP as he's done a lot
> there.

Current plan for a 'budget' system is:

DX79TO motherboard, i7 3820, 16GB RAM, a pair of 60GB OCZ Vertex III's in
RAID-0 for downloads / build, SATA HD for OS (Ubuntu 11.10 x86_64).

That'll give me a 2.7x boost just on CPU and the SSDs (and maybe some
over-clocking) will give some more.

Not sure if SSDs in RAID-0 will give any boost, so I'll run some tests.

Thanks to all for the comments in this thread.

Chris Tapp

opensou...@keylevel.com
www.keylevel.com
[yocto] native recipe and sysroot-destdir troubles
Yocto,

I'm working on two new recipes and both are working quite well. Now I need
native variants, and online sources indicate this should be done through
BBCLASSEXTEND = "native". For one of my recipes this works fine, for the
other not so much.

The error I'm seeing seems to be in the staging of the sysroot-destdir,
which ends up being empty even though the source code builds fine (the
image directory has everything expected). do_populate_sysroot seems to
assume there's a directory structure present which ends up being empty,
causing an error when it tries to tar the directory up. If I create the
directories do_populate_sysroot expects, the recipe runs to completion but
sysroot-destdir still ends up being empty and no packages are built.

Given that one of my packages works fine, I'd guess the problem lies in the
Makefiles for the offending source tree and not in yocto, though I'm still
unsure of how to move forward. Any help would be appreciated. I can
pastebin the recipes if that will help.

TIA,
- Philip

P.S. OE Build Configuration:
BB_VERSION        = "1.13.3"
TARGET_ARCH       = "i586"
TARGET_OS         = "linux"
MACHINE           = "qemux86"
DISTRO            = "poky"
DISTRO_VERSION    = "1.1"
TUNE_FEATURES     = "m32 i586"
TARGET_FPU        = ""
meta
meta-yocto        = ":"
Re: [yocto] Build time data
On 04/12/2012 07:08 AM, Björn Stenberg wrote:
> Darren Hart wrote:
>> /dev/md0    /build    ext4
>> noauto,noatime,nodiratime,commit=6000
>
> A minor detail: 'nodiratime' is a subset of 'noatime', so there is no
> need to specify both.

Excellent, thanks for the tip.

> I'm guessing those are rather fast cores?

They are:
model name : Intel(R) Xeon(R) CPU X5680 @ 3.33GHz

> I build on a different type of beast: 64 cores at 2.1GHz and 128 GB ram.
> The OS is on a single SSD and the build dir (and sources) is on a RAID0
> array of Intel 520 SSDs. Kernel is the same ubuntu 3.0.0-16-server as
> yours.

Now that I think about it, my downloads are on the RAID0 array too.

One thing that comes to mind is the parallel settings, BB_NUMBER_THREADS
and PARALLEL_MAKE. I noticed a negative impact if I increased these beyond
12 and 14 respectively. I tested this with bb-matrix
(scripts/contrib/bb-perf/bb-matrix.sh). The script is a bit fickle, but can
provide useful results and killer 3D surface plots of build time with BB
and PM on the axis. Can't seem to find a plot image at the moment for some
reason...

> Yet for all the combined horsepower, I am unable to match your time
> of 30 minutes for core-image-minimal. I clock in at around 37 minutes
> for a qemux86-64 build with ipk output:
>
> --
> NOTE: Tasks Summary: Attempted 1363 tasks of which 290 didn't
> need to be rerun and all succeeded.
>
> real    36m32.118s
> user    214m39.697s
> sys     108m49.152s
> --
>
> These numbers also show that my build is running less than 9x
> realtime, indicating that 80% of my cores sit idle most of the time.

Yup, that sounds about right. The build has a linear component to it, and
anything above about 12 just doesn't help. In fact the added scheduling
overhead seems to hurt.

> This confirms what "ps xf" says during the builds: Only rarely is
> bitbake running more than a handful of tasks at once, even with
> BB_NUMBER_THREADS at 64. And many of these tasks are in turn running
> sequential loops on a single core.
>
> I'm hoping to find time soon to look deeper into this issue and
> suggest remedies. It is my distinct feeling that we should be able to
> build significantly faster on powerful machines.

Reducing the dependency chains that result in the linear component of the
build (forcing serialized execution) is one place we've focused, and could
probably still use some attention. CC'ing RP as he's done a lot there.

--
Darren Hart
Intel Open Source Technology Center
Yocto Project - Linux Kernel
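For reference, the two parallelism knobs mentioned above are set in conf/local.conf; the values below simply mirror the 12/14 sweet spot reported here and are not a general recommendation:

  # conf/local.conf
  BB_NUMBER_THREADS = "12"
  PARALLEL_MAKE = "-j 14"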
Re: [yocto] Build time data
On 04/12/2012 01:00 AM, Martin Jansa wrote:
> On Thu, Apr 12, 2012 at 01:05:00PM +0530, Joshua Immanuel wrote:
>> Darren,
>>
>> On Wed, 2012-04-11 at 17:30 -0700, Darren Hart wrote:
>>> I run on a beast with 12 cores, 48GB of RAM, OS and sources on
>>> a G2 Intel SSD, with two Seagate Barracudas in a RAID0 array
>>> for my /build partition. I run a headless Ubuntu 11.10 (x86_64)
>>> installation running the 3.0.0-16-server kernel. I can build
>>> core-image-minimal in < 30 minutes and core-image-sato in < 50
>>> minutes from scratch.
>
> why not use so much RAM for WORKDIR in tmpfs? I bought 16GB just to
> be able to do my builds in tmpfs and keep only more permanent data
> on RAID.

We've done some experiments with tmpfs, adding Beth on CC. If I recall
correctly, my RAID0 array with the mount options I specified accomplishes
much of what tmpfs does for me without the added setup.

With a higher commit interval, the kernel doesn't try to sync the dcache
with the disks as frequently (eg not even once during a build), so it's
effectively writing to memory (although there is still plenty of IO
occurring).

The other reason is that while 48GB is plenty for a single build, I often
run many builds in parallel, sometimes in virtual machines when I need to
reproduce or test something on different hosts.

For example:

https://picasaweb.google.com/lh/photo/7PCrqXQqxL98SAY1ecNzDdMTjNZETYmyPJy0liipFm0?feat=directlink

--
Darren Hart
Intel Open Source Technology Center
Yocto Project - Linux Kernel
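The mount options being discussed correspond to an /etc/fstab line along these lines (the trailing dump/pass fields are assumed; commit=6000 stretches the ext4 journal commit interval to 6000 seconds):

  /dev/md0   /build   ext4   noauto,noatime,nodiratime,commit=6000   0 0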
Re: [yocto] Build time data
Darren Hart wrote:
> /dev/md0    /build    ext4
> noauto,noatime,nodiratime,commit=6000

A minor detail: 'nodiratime' is a subset of 'noatime', so there is no need
to specify both.

> I run on a beast with 12 cores, 48GB of RAM, OS and sources on a G2
> Intel SSD, with two Seagate Barracudas in a RAID0 array for my /build
> partition. I run a headless Ubuntu 11.10 (x86_64) installation running
> the 3.0.0-16-server kernel. I can build core-image-minimal in < 30
> minutes and core-image-sato in < 50 minutes from scratch.

I'm guessing those are rather fast cores? I build on a different type of
beast: 64 cores at 2.1GHz and 128 GB ram. The OS is on a single SSD and the
build dir (and sources) is on a RAID0 array of Intel 520 SSDs. Kernel is
the same ubuntu 3.0.0-16-server as yours.

Yet for all the combined horsepower, I am unable to match your time of 30
minutes for core-image-minimal. I clock in at around 37 minutes for a
qemux86-64 build with ipk output:

--
NOTE: Tasks Summary: Attempted 1363 tasks of which 290 didn't need to be
rerun and all succeeded.

real    36m32.118s
user    214m39.697s
sys     108m49.152s
--

These numbers also show that my build is running less than 9x realtime,
indicating that 80% of my cores sit idle most of the time. This confirms
what "ps xf" says during the builds: only rarely is bitbake running more
than a handful of tasks at once, even with BB_NUMBER_THREADS at 64. And
many of these tasks are in turn running sequential loops on a single core.

I'm hoping to find time soon to look deeper into this issue and suggest
remedies. It is my distinct feeling that we should be able to build
significantly faster on powerful machines.

--
Björn
Re: [yocto] Build time data
On Thu, 2012-04-12 at 10:00 +0200, Martin Jansa wrote:
> > On Wed, 2012-04-11 at 17:30 -0700, Darren Hart wrote:
> > > I run on a beast with 12 cores, 48GB of RAM, OS and sources on
> > > a G2 Intel SSD, with two Seagate Barracudas in a RAID0 array for
> > > my /build partition. I run a headless Ubuntu 11.10 (x86_64)
> > > installation running the 3.0.0-16-server kernel. I can build
> > > core-image-minimal in < 30 minutes and core-image-sato in < 50
> > > minutes from scratch.
>
> why not use so much RAM for WORKDIR in tmpfs? I bought 16GB just to be
> able to do my builds in tmpfs and keep only more permanent data on
> RAID.

+1

I tried using tmpfs for WORKDIR on my T420, which has 8GB of RAM. (In
India, the maximum single-slot DDR3 RAM we can get is 4GB.) Obviously, this
is not sufficient :( Maybe I shouldn't use the laptop for build purposes.
Moreover, every time I build an image in yocto, the temperature peaks at 87
degrees Celsius. Hoping that my HDD doesn't die.

--
Joshua Immanuel
HiPro IT Solutions Private Limited
http://hipro.co.in
Re: [yocto] Build time data
On Thu, Apr 12, 2012 at 01:05:00PM +0530, Joshua Immanuel wrote:
> Darren,
>
> On Wed, 2012-04-11 at 17:30 -0700, Darren Hart wrote:
> > I run on a beast with 12 cores, 48GB of RAM, OS and sources on a G2
> > Intel SSD, with two Seagate Barracudas in a RAID0 array for my /build
> > partition. I run a headless Ubuntu 11.10 (x86_64) installation running
> > the 3.0.0-16-server kernel. I can build core-image-minimal in < 30
> > minutes and core-image-sato in < 50 minutes from scratch.

why not use so much RAM for WORKDIR in tmpfs? I bought 16GB just to be able
to do my builds in tmpfs and keep only more permanent data on RAID.

Cheers,

--
Martin 'JaMa' Jansa     jabber: martin.ja...@gmail.com
Re: [yocto] Build time data
Darren,

On Wed, 2012-04-11 at 17:30 -0700, Darren Hart wrote:
> I run on a beast with 12 cores, 48GB of RAM, OS and sources on a G2
> Intel SSD, with two Seagate Barracudas in a RAID0 array for my /build
> partition. I run a headless Ubuntu 11.10 (x86_64) installation running
> the 3.0.0-16-server kernel. I can build core-image-minimal in < 30
> minutes and core-image-sato in < 50 minutes from scratch.

wow. Can I get a shell? :D

--
Joshua Immanuel
HiPro IT Solutions Private Limited
http://hipro.co.in
Re: [yocto] confusion about IMAGE_INSTALL and getting packages into rootfs
On 04/11/2012 11:02 AM, Peter Tornel wrote:
> I'm working with the poky tree, master branch. I added a few packages
> to IMAGE_INSTALL in core-image-minimal.bb. Everything builds fine, and
> I can see the built images in each tmp/work subdirectory for my
> IMAGE_INSTALL packages.
>
> However, I find that some of the image executables that were part of the
> packages specified in IMAGE_INSTALL wind up in my core-image-minimal
> rootfs and some don't.
>
> Is there a switch or variable that I need to set to direct all packages
> specified in IMAGE_INSTALL to be installed into my rootfs?

The preferred way to modify the packages that get installed is via the
CORE_IMAGE_EXTRA_INSTALL (POKY_EXTRA_INSTALL in the 1.1 edison release and
earlier) variable in your local.conf. However, modifying the recipe itself
- or writing a new image recipe that sets IMAGE_INSTALL - should work.

Can you provide an example of what you have tried that doesn't work? Keep
in mind that you should be using package names here, not recipe names.

--
Darren Hart
Intel Open Source Technology Center
Yocto Project - Linux Kernel
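A minimal local.conf sketch of the approach described above -- the package names are only examples:

  # conf/local.conf
  CORE_IMAGE_EXTRA_INSTALL += "strace dropbear"

  # on 1.1 (edison) and earlier the variable is:
  POKY_EXTRA_INSTALL += "strace dropbear"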
Re: [yocto] Build time data
On 04/11/2012 09:39 PM, Bob Cochran wrote:
> On 04/11/2012 08:30 PM, Darren Hart wrote:
>> SSDs are one way to go, but we've been known to chew through them and
>> they aren't priced as consumables.
>
> Hi Darren,
>
> Could you please elaborate on "been known to chew through them"?
>
> Are you running into an upper limit on write / erase cycles? Are you
> encountering hard (or soft) failures?

Some have reported early physical disk failure. Due to the cost of SSDs,
not a lot of people seem to be trying it out. I *believe* the current
generation of SSDs would perform admirably, but I haven't tested that.

I know Deny builds with SSDs, perhaps he would care to comment?

--
Darren Hart
Intel Open Source Technology Center
Yocto Project - Linux Kernel
Re: [yocto] [Yocto] How to remove a particular package or packages
Tomas,

> The basic package selection in the Poky/Yocto images is done on the basis
> of the DISTRO_FEATURES variable, so you need to set this somewhere
> suitable to only include the components you need (iirc, the default
> package selection is quite rich).

Thanks for pointing this out. I am using this variable in my custom image
file now. I think I will get the desired output with this. The need arises
because some of the packages are never going to be used, and I was
explicitly asked to remove them.

> Other than that, bitbake -g generates .dot files that show
> package dependencies, so you can work out why a particular package is
> being included.

That is helpful.

Thanks,
-JC
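To make both suggestions above concrete, a brief sketch -- the feature list and image name are examples only:

  # distro configuration or local.conf: trim the default feature set
  DISTRO_FEATURES = "alsa usbhost wifi"

  # dump .dot dependency graphs to see why a package gets pulled in
  bitbake -g core-image-minimal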