Re: [yocto] "(-)"??

2017-09-20 Thread Khem Raj


On 9/20/17 8:18 PM, Takashi Matsuzawa wrote:
> Hello.
> I am seeing that some of the recipes contain lines like the one below.
> 
>> COMPATIBLE_MACHINE = "(-)"
> 
> Sorry for being a novice, but what is the intended effect of this line?
> I can see commit comments saying that this is for blacklisting, but I am
> not sure how it works.  Does it simply mean a '-' character?

COMPATIBLE_MACHINE uses regexp syntax.
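A minimal Python sketch of roughly how that check works (simplified; the real code in base.bbclass also handles an unset variable and raises SkipRecipe on failure): the value is used as a regular expression and matched, anchored at the start, against each MACHINEOVERRIDES entry.

```python
import re

# Rough, simplified sketch of the COMPATIBLE_MACHINE check in base.bbclass:
# the value is a regex matched (anchored at the start) against each entry
# in MACHINEOVERRIDES.
def is_compatible(compatible_machine, machineoverrides):
    return any(re.match(compatible_machine, m) for m in machineoverrides)

overrides = ["qemux86-64", "qemuall", "x86-64"]  # example MACHINEOVERRIDES entries

print(is_compatible("qemux86", overrides))  # True: "qemux86-64" starts with "qemux86"
# "(-)" can never match: re.match() is anchored at the start, and no machine
# or override name *starts* with '-', so the recipe is skipped for every
# machine -- which is why it works as a blacklist.
print(is_compatible("(-)", overrides))      # False
```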
-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


[yocto] "(-)"??

2017-09-20 Thread Takashi Matsuzawa
Hello.
I am seeing that some of the recipes contain lines like the one below.

> COMPATIBLE_MACHINE = "(-)"

Sorry for being a novice, but what is the intended effect of this line?
I can see commit comments saying that this is for blacklisting, but I am
not sure how it works.  Does it simply mean a '-' character?




Re: [yocto] Sysroot bug in bitbake or wrong configuration?

2017-09-20 Thread Andre McCurdy
On Tue, Sep 19, 2017 at 11:43 PM, Svein Seldal  wrote:
>
> I have the spu-image.bb recipe below, and running on Pyro, the recipe
> behaves differently if the recipe is run on a fresh system with no sstate
> elements, compared to a system that has a sstate cache present.
>
> The failure is that spu-image requires the host tool "uuidgen", and thus
> has DEPENDS on "util-linux-native".

DEPENDS is basically a shorthand for saying that the
do_populate_sysroot task for the recipe(s) listed in DEPENDS should be
run before the do_configure task of the current recipe.

Since image recipes don't have a do_configure task (or at least, they
do their work in tasks such as do_rootfs which don't depend on
do_configure), using the DEPENDS shorthand for setting dependencies
for the do_configure task doesn't work.

If an image recipe's do_rootfs or do_image tasks have dependencies
then they need to be expressed using the "longhand" format, for
example:

  do_rootfs[depends] += "util-linux-native:do_populate_sysroot"

Unfortunately, trying to use DEPENDS in an image recipe seems to be
quite a common mistake. Maybe we should try to make things a little
more user-friendly by adding a sanity test to catch the problem? Or
perhaps do_rootfs should depend on a dummy do_configure task (and so
ensure that do_rootfs effectively sees dependencies expressed via
DEPENDS)?
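To make that concrete, here is a hedged sketch of how the longhand form sits in an image recipe (the recipe name is hypothetical; the dependency line is the one from the example above):

```
# example-image.bb -- illustrative fragment only, not a complete recipe
inherit image

# DEPENDS would only order dependencies before do_configure, which image
# recipes effectively never run, so attach the dependency to do_rootfs:
do_rootfs[depends] += "util-linux-native:do_populate_sysroot"
```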

> When the -c cleanall spu-image is run
> prior to building spu-image, the recipe sysroot is properly initialized with
> util-linux-native and uuidgen is available in the task functions.
>
> If -c clean is run prior to the build, or simply after deleting tmp, the
> sysroot will not be properly initialized, uuidgen is not available, and the
> recipe fails.
>
> Is this a bug in bitbake or am I missing something in my recipe?
>
> Best regards,
> Svein Seldal
>


[yocto] [yocto-autobuilder][PATCH] CheckYoctoCompat.py: rename yocto-compat-layer to yocto-check-layer

2017-09-20 Thread Stephano Cetola
This script name was changed in the following commit:

b46e05677b342df44829ffe8bcfbfc954e906030

This patch updates the script name to match.

[YOCTO #12110]

Signed-off-by: Stephano Cetola 
---
 lib/python2.7/site-packages/autobuilder/buildsteps/CheckYoctoCompat.py | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/lib/python2.7/site-packages/autobuilder/buildsteps/CheckYoctoCompat.py b/lib/python2.7/site-packages/autobuilder/buildsteps/CheckYoctoCompat.py
index 134adaa51..62eddae50 100644
--- a/lib/python2.7/site-packages/autobuilder/buildsteps/CheckYoctoCompat.py
+++ b/lib/python2.7/site-packages/autobuilder/buildsteps/CheckYoctoCompat.py
@@ -41,11 +41,12 @@ class CheckYoctoCompat(BitbakeShellCommand):
 
 layerversioncore = int(self.getProperty("layerversion_core", "0"))
 # yocto-compat-layer-wrapper was introduced in Pyro
+# it was renamed to yocto-check-layer-wrapper in Rocko
 if layerversioncore >= 10:
 command = ". ./oe-init-build-env;"
 for layer in self.layers:
 layerpath = os.path.join(builddir, layer)
-cmd = "yocto-compat-layer-wrapper {}".format(layerpath)
+cmd = "yocto-check-layer-wrapper {}".format(layerpath)
 cmd = cmd + " || export CL_FAIL=1;"
 command = command + cmd
 command = command + 'if [ "$CL_FAIL" = "1" ]; then exit 1; fi;'
-- 
2.14.1



Re: [yocto] eSDK install script failure

2017-09-20 Thread Andrea Galbusera
Hi Paul,
thanks for explaining and helping sort this out.

On Wed, Sep 20, 2017 at 11:54 AM, Paul Eggleton <
paul.eggle...@linux.intel.com> wrote:

> Hi Andrea,
>
> On Wednesday, 20 September 2017 8:44:22 PM NZST Andrea Galbusera wrote:
> > Seeing the errors below while installing an eSDK. This is a routinely
> > generated VM that installs the eSDK from the installation script. The
> > errors appeared with the latest iteration of the eSDK script, which is
> > generated with almost up-to-date revisions from master. Of course I have
> > extra layers in the mix, but none of them apparently had relevant changes
> > since the last (working) iteration: mostly syncing to master branches
> > happened. Can anyone help suggest how to investigate this further? What
> > do those unexpected tasks mean? I'm blocked on releasing this SDK to
> > developers and clues from experts would be very appreciated...
> >
> > ==> default: Checking sstate mirror object availability...
> > ==> default: done.
> > ==> default: ERROR: Task python-native.do_fetch attempted to execute
> > unexpectedly
> > ==> default: ERROR: Task python-native.do_prepare_recipe_sysroot
> > attempted to execute unexpectedly
> > ==> default: ERROR: Task python-native.do_unpack attempted to execute
> > unexpectedly
> > ==> default: ERROR: Task python-native.do_patch attempted to execute
> > unexpectedly
> > ==> default: ERROR: Task python-native.do_populate_lic attempted to
> > execute unexpectedly and should have been setscened
> > ==> default: ERROR: Task python-native.do_configure attempted to execute
> > unexpectedly
> > ==> default: ERROR: Task python-native.do_compile attempted to execute
> > unexpectedly
> > ==> default: ERROR: Task python-native.do_install attempted to execute
> > unexpectedly
> > ==> default: ERROR: Task python-native.do_populate_sysroot attempted to
> > execute unexpectedly and should have been setscened
> > ==> default: ERROR: SDK preparation failed: error log written to
> > /home/vagrant/poky_sdk/preparing_build_system.log
> >
>
> Basically this means that these tasks tried to execute when really the
> results should have been restorable from sstate.
>
> The cause of this type of error is one of three things:
>
> 1) The sstate archive corresponding to a task wasn't able to be fetched
> from the server (for a minimal eSDK) or wasn't present in the installer
> (for a full eSDK - less likely, as we basically do a trial run as part of
> building the eSDK in the first place)
>
> 2) The signature was somehow different to what it should have been. (Locked
> signatures are supposed to guard against this.)
>
> 3) A task that wasn't expected to execute did execute and thus the sstate
> wasn't available.
>
> Given that this was python-native, which I would expect to be a normal
> part of the SDK, I would suspect #1. Is this a minimal or full eSDK
> (i.e. what is SDK_EXT_TYPE set to)?
>

That was a "full" eSDK. I noticed that the "same" eSDK installer from
another build host was not affected, and I'm trying to rebuild on the
original one with an even more recent revision to see if it still happens.
The failure with the first one was repeatable, hence I suspect an issue at
the SDK population stage, not during installation.


Re: [yocto] pyro openembedded gpsd update-rc.d problems with read-only-rootfs

2017-09-20 Thread Dan Walkes
On Wed, Sep 13, 2017 at 10:56 AM, Dan Walkes
 wrote:
> On Mon, Sep 11, 2017 at 5:01 AM, Burton, Ross  wrote:
>> On 10 September 2017 at 21:35, Dan Walkes 
>> wrote:
>>>
>>> It looks like because the update-rc.d step fails this setup gets moved
>>> into a gpsd post install script, which won’t work because I’m
>>> configured to use a read only root filesystem.  So I need to find a
>>> way to keep the update-rc.d step from failing.
>>
>>
>> The recipe shouldn't invoke update-alternatives directly, but use the
>> update-alternatives class instead.
>>
>
> Thanks for the suggestion Ross.
>
> I didn't mention it before but I had already attempted to make this
> change after I initially noticed the problem.  See this commit:
> https://github.com/Trellis-Logic/meta-openembedded/commit/ddf008dbdae602dbe722f1fcb231f5549e75a586
>
> I didn't see any difference when I updated to use update-alternatives
> instead of invoking directly.
>
> Since the error message was related to update-rc.d, I've also attempted
> to use the multi-update form of update-rc.d in the above commit.  I
> thought that might be required when multiple packages were built from
> the same .bb file.  However, I don't see a difference in the result with
> these changes either.

The fix was to specify INITSCRIPT_PACKAGES = "gpsd-conf", since the
gpsd-conf package is where the /etc/init.d/gpsd file is installed, per
inspection of the rpm files.  See the patch at
https://github.com/Trellis-Logic/meta-openembedded/commit/d91bab137dfc4f3ce6526bd8a6e95e5de7658fd5

I will submit this patch to the Openembedded-devel list unless anyone
has other/different recommended changes.
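Sketched as a bbappend, the fix described above would look roughly like this (the init script name and parameters are illustrative, and this is an untested sketch, not the exact submitted patch):

```
# gpsd_%.bbappend -- untested sketch of the fix described above.
# The init script ends up in the gpsd-conf package, so point
# update-rc.d.bbclass at that package instead of the default ${PN}:
INITSCRIPT_PACKAGES = "gpsd-conf"
INITSCRIPT_NAME_gpsd-conf = "gpsd"
INITSCRIPT_PARAMS_gpsd-conf = "defaults"
```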


Re: [yocto] Kernel Build Failures with Shared SSTATE

2017-09-20 Thread Manjukumar Harthikote Matha
Hi Richard,

> -----Original Message-----
> From: yocto-boun...@yoctoproject.org [mailto:yocto-boun...@yoctoproject.org]
> On Behalf Of Schmitt, Richard
> Sent: Friday, July 14, 2017 8:23 AM
> To: yocto@yoctoproject.org
> Subject: [yocto] Kernel Build Failures with Shared SSTATE
> 
> Hi,
>
> I had been running into kernel build failures on the morty branch when
> using a shared sstate cache.  First I'll describe the error, and then my
> solution.
>
> The first build that initializes the sstate cache works fine.  Subsequent
> clean builds will fail.  The failure would occur in the
> do_compile_kernelmodules task.  The error would indicate a failure because
> tmp/work-shared//kernel-build-artifacts was missing.
>
> My analysis concluded that the kernel build was restored from the cache,
> but it did not restore the kernel-build-artifacts needed by the
> do_compile_kernelmodules task.
>
> My solution was to include the following in a bbappend file for the kernel:
>
> SSTATETASKS += "do_shared_workdir"
>
> do_shared_workdir[sstate-plaindirs] = "${STAGING_KERNEL_BUILDDIR}"
>
> python do_shared_workdir_setscene () {
>     sstate_setscene(d)
> }
>
> I assume the correct way to fix this would be to update
> meta/classes/kernel.bbclass.  It looks like there was some attempt to do
> something with the shared_workdir, because there is a
> do_shared_workdir_setscene routine, but right now it just returns 1.
> Is that intentional?  It seems wrong.
> 

I am facing the same issue too, but have seen only a few instances of the
failure, and have not been able to concretely figure out exact steps to
reproduce it.
Is it better to remove the addtask for shared_workdir_setscene?
If you look at the do_deploy task in kernel.bbclass, it doesn't handle the
setscene task either.
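For reference, the workaround quoted above, consolidated into one bbappend sketch (variable and class names exactly as quoted; untested here):

```
# linux-%.bbappend -- Richard's workaround from the quoted message, untested.
# Register do_shared_workdir as an sstate task so the kernel build
# artifacts directory is captured and restored from the sstate cache:
SSTATETASKS += "do_shared_workdir"
do_shared_workdir[sstate-plaindirs] = "${STAGING_KERNEL_BUILDDIR}"

python do_shared_workdir_setscene () {
    sstate_setscene(d)
}
```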

Thanks,
Manju



[yocto] meta-security / meta-tpm functionality

2017-09-20 Thread Patrick Vacek
Hello,

The meta-tpm layer inside the meta-security layer looks like exactly
what I want in order to get TPM 2.0 support on my devices. I'm currently
using qemu and trying to create a swtpm device on my host and pass it
through to the device. Despite my best attempts, I have not yet been
able to get a TPM device to be recognized in qemu.

To use the meta-tpm layer, is there anything I should be doing beyond
adding meta-tpm to my bblayers.conf and adding
packagegroup-security-tpm2 to IMAGE_INSTALL_append? If not, is there a
way to debug what might be going wrong in this process?
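One way to sanity-check the QEMU side independently of the layer is to run a software TPM with swtpm and hand its socket to QEMU. This is a hedged, untested sketch: the paths are examples, and the `-tpmdev emulator` backend needs a sufficiently recent QEMU (it is not in older releases, where only passthrough exists):

```
# Start a software TPM 2.0 whose state lives in /tmp/mytpm (example path)
mkdir -p /tmp/mytpm
swtpm socket --tpm2 --tpmstate dir=/tmp/mytpm \
    --ctrl type=unixio,path=/tmp/mytpm/swtpm-sock &

# Point QEMU at the swtpm control socket; the guest should then see a
# TPM TIS device (check for /dev/tpm0 inside the guest)
qemu-system-x86_64 \
    -chardev socket,id=chrtpm,path=/tmp/mytpm/swtpm-sock \
    -tpmdev emulator,id=tpm0,chardev=chrtpm \
    -device tpm-tis,tpmdev=tpm0 \
    ...   # plus the usual image/kernel options
```

If the device shows up in the guest this way, the remaining problem is likely in how runqemu passes its extra parameters rather than in meta-tpm itself.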

Thank you,
Patrick


-- 
Patrick Vacek
ATS Advanced Telematic Systems GmbH
Kantstraße 162, 10623 Berlin
HRB 151501 B, Amtsgericht Charlottenburg
Vertreten durch die Geschäftsführer
Dirk Pöschl, Armin G. Schmidt






Re: [yocto] eSDK install script failure

2017-09-20 Thread Paul Eggleton
Hi Andrea,

On Wednesday, 20 September 2017 8:44:22 PM NZST Andrea Galbusera wrote:
> Seeing the errors below while installing an eSDK. This is a routinely
> generated VM that installs the eSDK from the installation script. The errors
> appeared with the latest iteration of the eSDK script, which is generated
> with almost up-to-date revisions from master. Of course I have extra layers
> in the mix, but none of them apparently had relevant changes since the last
> (working) iteration: mostly syncing to master branches happened. Can anyone
> help suggest how to investigate this further? What do those unexpected
> tasks mean? I'm blocked on releasing this SDK to developers and clues from
> experts would be very appreciated...
> 
> ==> default: Checking sstate mirror object availability...
> ==> default: done.
> ==> default: ERROR: Task python-native.do_fetch attempted to execute
> unexpectedly
> ==> default: ERROR: Task python-native.do_prepare_recipe_sysroot attempted
> to execute unexpectedly
> ==> default: ERROR: Task python-native.do_unpack attempted to execute
> unexpectedly
> ==> default: ERROR: Task python-native.do_patch attempted to execute
> unexpectedly
> ==> default: ERROR: Task python-native.do_populate_lic attempted to execute
> unexpectedly and should have been setscened
> ==> default: ERROR: Task python-native.do_configure attempted to execute
> unexpectedly
> ==> default: ERROR: Task python-native.do_compile attempted to execute
> unexpectedly
> ==> default: ERROR: Task python-native.do_install attempted to execute
> unexpectedly
> ==> default: ERROR: Task python-native.do_populate_sysroot attempted to
> execute unexpectedly and should have been setscened
> ==> default: ERROR: SDK preparation failed: error log written to
> /home/vagrant/poky_sdk/preparing_build_system.log
> 

Basically this means that these tasks tried to execute when really the
results should have been restorable from sstate.

The cause of this type of error is one of three things: 

1) The sstate archive corresponding to a task wasn't able to be fetched from
the server (for a minimal eSDK) or wasn't present in the installer (for a full
eSDK - less likely, as we basically do a trial run as part of building the
eSDK in the first place)

2) The signature was somehow different to what it should have been. (Locked 
signatures are supposed to guard against this.)

3) A task that wasn't expected to execute did execute and thus the sstate 
wasn't available.

Given that this was python-native, which I would expect to be a normal part
of the SDK, I would suspect #1. Is this a minimal or full eSDK (i.e. what is
SDK_EXT_TYPE set to)?
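For reference, SDK_EXT_TYPE is set in the configuration of the build that produces the installer, e.g. local.conf ("full" is the default, to the best of my knowledge):

```
# local.conf of the build that generates the eSDK installer
SDK_EXT_TYPE = "full"      # bundle all needed sstate into the installer
#SDK_EXT_TYPE = "minimal"  # fetch sstate from SSTATE_MIRRORS at install time
```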

Cheers,
Paul

-- 

Paul Eggleton
Intel Open Source Technology Centre


[yocto] [meta-selinux][PATCH] selinux-python: fix installed-vs-shipped warnings

2017-09-20 Thread wenzong.fan
From: Wenzong Fan 

Fix the warnings if ${libdir} = '/usr/lib64':
  WARNING: selinux-python-2.7-r0 do_package: QA Issue: selinux-python: \
Files/directories were installed but not shipped in any package:
/usr/lib/python2.7/site-packages/sepolicy-1.1.egg-info
/usr/lib/python2.7/site-packages/sepolicy/__init__.py

Signed-off-by: Wenzong Fan 
---
 recipes-security/selinux/selinux-python.inc | 1 +
 1 file changed, 1 insertion(+)

diff --git a/recipes-security/selinux/selinux-python.inc b/recipes-security/selinux/selinux-python.inc
index 55060e3..4bc5cb5 100644
--- a/recipes-security/selinux/selinux-python.inc
+++ b/recipes-security/selinux/selinux-python.inc
@@ -102,6 +102,7 @@ FILES_${PN} += "\
 EXTRA_OEMAKE += "LIBSEPOLA=${STAGING_LIBDIR}/libsepol.a"
 do_install() {
     oe_runmake DESTDIR=${D} \
+        LIBDIR='${D}${libdir}' \
         PYTHONLIBDIR='${libdir}/python${PYTHON_BASEVERSION}/site-packages' \
         install
 }
-- 
2.13.0



[yocto] eSDK install script failure

2017-09-20 Thread Andrea Galbusera
Seeing the errors below while installing an eSDK. This is a routinely
generated VM that installs the eSDK from the installation script. The errors
appeared with the latest iteration of the eSDK script, which is generated
with almost up-to-date revisions from master. Of course I have extra layers
in the mix, but none of them apparently had relevant changes since the last
(working) iteration: mostly syncing to master branches happened. Can anyone
help suggest how to investigate this further? What do those unexpected
tasks mean? I'm blocked on releasing this SDK to developers and clues from
experts would be very appreciated...

==> default: Checking sstate mirror object availability...
==> default: done.
==> default: ERROR: Task python-native.do_fetch attempted to execute
unexpectedly
==> default: ERROR: Task python-native.do_prepare_recipe_sysroot attempted
to execute unexpectedly
==> default: ERROR: Task python-native.do_unpack attempted to execute
unexpectedly
==> default: ERROR: Task python-native.do_patch attempted to execute
unexpectedly
==> default: ERROR: Task python-native.do_populate_lic attempted to execute
unexpectedly and should have been setscened
==> default: ERROR: Task python-native.do_configure attempted to execute
unexpectedly
==> default: ERROR: Task python-native.do_compile attempted to execute
unexpectedly
==> default: ERROR: Task python-native.do_install attempted to execute
unexpectedly
==> default: ERROR: Task python-native.do_populate_sysroot attempted to
execute unexpectedly and should have been setscened
==> default: ERROR: SDK preparation failed: error log written to
/home/vagrant/poky_sdk/preparing_build_system.log


[yocto] devtool/sdk: multiple issues with sdk-update (and after update)

2017-09-20 Thread Krzysztof Kozlowski
Hi all,

I am using Yocto Poky 2.3 (yocto-2.3-65-gcc48789276e0) and its
extensible SDK. Host is Ubuntu 16.04.3 LTS. I have multiple issues
with sdk-update:


1. $ devtool sdk-update
Fetching origin
fatal: unable to access
'https://foobar.com/~builder/releases/yocto-2.3/toolchain/updates/layers/.git/':
Problem with the SSL CA cert (path? access rights?)
error: Could not fetch origin

A workaround is running GIT_SSL_CAINFO="/etc/ssl/certs/ca-certificates.crt"
devtool sdk-update, but that is not very convenient.
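If the CA bundle path works via the environment variable, it can also be persisted in git's own user-level configuration, so every fetch picks it up without the variable (the bundle path is the one from the workaround above; whether devtool's internal fetch honours this setting is an assumption I have not verified):

```shell
# Persist the CA bundle path in git's user-level config (example path);
# git then uses it for HTTPS fetches without the environment variable.
git config --global http.sslCAInfo /etc/ssl/certs/ca-certificates.crt
git config --global http.sslCAInfo   # prints the configured path
```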


2. SDK update partially succeeds but not all tasks are apparently executed:

HEAD is now at 8189dd22fed5 init repo
NOTE: Preparing build system... (This may take some time.)
ERROR: Unexecuted tasks found in preparation log:
  NOTE: Running task 1065 of 2619
(/home/krzk/proceq17_sdk/layers/poky/meta/recipes-graphics/freetype/freetype_2.7.1.bb:do_fetch)
  NOTE: Running task 1077 of 2619
(/home/krzk/proceq17_sdk/layers/poky/meta/recipes-multimedia/libpng/libpng_1.6.28.bb:do_fetch)
...
...
...
 NOTE: Running task 2619 of 2619
(/home/krzk/proceq17_sdk/layers/meta-oe/recipes-support/opencv/opencv_3.2.bb:do_packagedata)


It seems that update works... but not entirely.


3. devtool sdk-update sees new commits in the layers repository on the
origin remote, but it runs just "git reset --hard", so it does not switch
to them.

Fetching origin
From https://foobar.com/~builder/releases/yocto-2.3/toolchain/updates/layers/
   f392bd369685..af2013cdfa56  master -> origin/master
HEAD is now at f392bd369685 init repo

Running git reset --hard will obviously not switch the HEAD from
f392bd369685 to af2013cdfa56.

My workaround here is to manually reset --hard origin/master and then
re-run the sdk-update.


4. An SDK updated this way (with all the workarounds above) is different
from a fresh install. For example, some files in the sysroots are missing -
parts introduced at some point by changes in my software. Here, json-c.pc
is missing:

$ cd UPDATED_SDK/workspace/sources
$ mkdir lib-eclipse
$ cmake -G"Eclipse CDT4 - Unix Makefiles" -D CMAKE_BUILD_TYPE=Debug ../lib
-- Checking for module 'json-c'
--   No package 'json-c' found
$ find UPDATED_SDK/ -name 'json-c.pc'
./tmp/work/cortexa5hf-neon-poky-linux-gnueabi/json-c/0.12-r0/image/usr/lib/pkgconfig/json-c.pc
./tmp/work/cortexa5hf-neon-poky-linux-gnueabi/json-c/0.12-r0/package/usr/lib/pkgconfig/json-c.pc
./tmp/work/cortexa5hf-neon-poky-linux-gnueabi/json-c/0.12-r0/sysroot-destdir/usr/lib/pkgconfig/json-c.pc
./tmp/work/cortexa5hf-neon-poky-linux-gnueabi/json-c/0.12-r0/packages-split/json-c-dev/usr/lib/pkgconfig/json-c.pc
./tmp/work/cortexa5hf-neon-poky-linux-gnueabi/json-c/0.12-r0/build/json-c.pc
./tmp/sysroots-components/cortexa5hf-neon/json-c/usr/lib/pkgconfig/json-c.pc

$ find FRESH_INSTALL/ -name 'json-c.pc'
./tmp/sysroots/col-vf50-proceq/usr/lib/pkgconfig/json-c.pc
./tmp/work/cortexa5hf-neon-poky-linux-gnueabi/json-c/0.12-r0/sysroot-destdir/usr/lib/pkgconfig/json-c.pc
./tmp/sysroots-components/cortexa5hf-neon/json-c/usr/lib/pkgconfig/json-c.pc

The devtool build works okay for both cases.


Any hints on these issues?

Best regards,
Krzysztof

P.S. I could not find a discussion list for the extensible SDK or devtool,
so I hope this is the right place...


[yocto] Sysroot bug in bitbake or wrong configuration?

2017-09-20 Thread Svein Seldal


I have the spu-image.bb recipe below, and running on Pyro, the recipe
behaves differently when run on a fresh system with no sstate elements
compared to a system that has an sstate cache present.


The failure is that spu-image requires the host tool "uuidgen", and
thus has DEPENDS on "util-linux-native". When -c cleanall spu-image
is run prior to building spu-image, the recipe sysroot is properly
initialized with util-linux-native and uuidgen is available in the task
functions.


If -c clean is run prior to the build, or simply after deleting tmp, the
sysroot will not be properly initialized, uuidgen is not available, and
the recipe fails.


Is this a bug in bitbake or am I missing something in my recipe?


Best regards,
Svein Seldal


# spu-image.bb
DESCRIPTION = "Upgrade Image"

LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://${COREBASE}/LICENSE;md5=4d92cd373abda3937c2bc47fbc49d690"


DEPENDS = "util-linux-native"
INHIBIT_DEFAULT_DEPS = "1"

fakeroot do_spu_rootfs() {
uuidgen
}
addtask do_spu_rootfs before do_build

fakeroot do_spu_image () {
uuidgen
}
addtask do_spu_image after do_spu_rootfs before do_build

# It does not matter if these are noexec-ed or not
#do_fetch[noexec] = "1"
#do_unpack[noexec] = "1"
#do_patch[noexec] = "1"
#do_configure[noexec] = "1"
#do_compile[noexec] = "1"
#do_install[noexec] = "1"
#do_package[noexec] = "1"
#do_package_qa[noexec] = "1"
#do_packagedata[noexec] = "1"
#do_package_write_ipk[noexec] = "1"
#do_package_write_deb[noexec] = "1"
#do_package_write_rpm[noexec] = "1"


# 1) Running works fine
#   bitbake -v spu-image |tee log1.txt
#   cat log1.txt | grep -2 uuidgen
#
# 2) Cleaning
#   bitbake -c clean spu-image
#
# 3) Rebuilding -- now fails
#   bitbake -v spu-image |tee log2.txt
#   cat log2.txt | grep -2 uuidgen
#
# 4) Sstate cleaning
#   bitbake -c cleanall spu-image
#
# 5) Works again:
#   bitbake -v spu-image |tee log3.txt
#   cat log3.txt | grep -2 uuidgen
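One possible fix, assuming the custom tasks simply never gained a dependency on the native sysroot: attach the dependency to the tasks that actually call uuidgen, rather than relying on DEPENDS alone (untested sketch):

```
# Longhand per-task dependencies, instead of (or in addition to) DEPENDS,
# so the tasks that run uuidgen explicitly pull in the native tool:
do_spu_rootfs[depends] += "util-linux-native:do_populate_sysroot"
do_spu_image[depends] += "util-linux-native:do_populate_sysroot"
```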
