Re: [yocto] [meta-security][PATCH 1/2] oe-selftest: add running cve checker

2019-06-14 Thread akuster808
Chen,

On 6/14/19 1:13 AM, ChenQi wrote:
> Hi Armin,
>
> I just noticed this selftest case.
> Have you considered putting it into oe-core?
Yes, I have. That was the first place I wanted to put it, but Richard and
Ross have reservations about doing that, so it sits in meta-security
until we can get it into core.

Regards,
armin


>
> Best Regards,
> Chen Qi
>
> On 05/10/2019 11:09 AM, Armin Kuster wrote:
>> Signed-off-by: Armin Kuster 
>> ---
>>   lib/oeqa/selftest/cases/cvechecker.py | 27 +++
>>   1 file changed, 27 insertions(+)
>>   create mode 100644 lib/oeqa/selftest/cases/cvechecker.py
>>
>> diff --git a/lib/oeqa/selftest/cases/cvechecker.py b/lib/oeqa/selftest/cases/cvechecker.py
>> new file mode 100644
>> index 000..23ca7d2
>> --- /dev/null
>> +++ b/lib/oeqa/selftest/cases/cvechecker.py
>> @@ -0,0 +1,27 @@
>> +import os
>> +import re
>> +
>> +from oeqa.selftest.case import OESelftestTestCase
>> +from oeqa.utils.commands import bitbake, get_bb_var
>> +
>> +class CveCheckerTests(OESelftestTestCase):
>> +    def test_cve_checker(self):
>> +        image = "core-image-sato"
>> +
>> +        deploy_dir = get_bb_var("DEPLOY_DIR_IMAGE")
>> +        image_link_name = get_bb_var('IMAGE_LINK_NAME', image)
>> +
>> +        manifest_link = os.path.join(deploy_dir, "%s.cve" % image_link_name)
>> +
>> +        self.logger.info('CVE_CHECK_MANIFEST = "%s"' % manifest_link)
>> +        if 'cve-check' not in get_bb_var('INHERIT'):
>> +            add_cve_check_config = 'INHERIT += "cve-check"'
>> +            self.append_config(add_cve_check_config)
>> +        self.append_config('CVE_CHECK_MANIFEST = "%s"' % manifest_link)
>> +        result = bitbake("-k -c cve_check %s" % image, ignore_status=True)
>> +        if 'cve-check' not in get_bb_var('INHERIT'):
>> +            self.remove_config(add_cve_check_config)
>> +
>> +        isfile = os.path.isfile(manifest_link)
>> +        self.assertEqual(True, isfile, 'Failed to create cve data file: %s' % manifest_link)
>> +
>
>
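For anyone wanting to try the checker outside the selftest, the test above boils down to a local.conf fragment like the following (illustrative sketch; `CVE_CHECK_MANIFEST` is optional and only overrides where the aggregated report is written):

```
INHERIT += "cve-check"
# Optional: control where the aggregated CVE report for an image lands
CVE_CHECK_MANIFEST = "${DEPLOY_DIR_IMAGE}/${IMAGE_LINK_NAME}.cve"
```

With that in place, `bitbake -k -c cve_check core-image-sato` mirrors what the selftest runs.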

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


[yocto] Early Registration Deadline: Yocto Project DevDay NA 2019

2019-06-14 Thread Volosincu, Andreea S
Time is almost up for the Yocto Project DevDay NA 2019 early registration. 
Register separately or add it to your OSS NA registration.

https://www.cvent.com/events/yocto-dev-day-north-america-2019/registration-cfa8da9b8d0b43bdb92f3e81493ca2ce.aspx?fqp=true

Thanks!
Yocto Project Advocacy Team


Re: [yocto] dnf causing build failure

2019-06-14 Thread Larry Brown
So I got the ${B} from the original openssh recipe, so I'm not sure what
that variable equates to now.  But I changed it to ${D} and had to change
some other aspects as well.  What I hadn't understood is that the recipe
installs the files into a folder (the package folder) that is ultimately
packaged and then installed into the rootfs, rather than installing the
files from a package folder into the rootfs directly by script.  I don't
understand how the build system knows that the files in the install
directory need to be installed into the rootfs, but I can only assume that
anything installed into a structure in the package directory will be
overlaid into the rootfs in the corresponding locations.  I'm guessing...

Well, with the current layout working, the DNF error went away.  Maybe the
file copy was failing to begin with, but quietly, and only the DNF error
was displayed?  There is definitely a learning curve here...
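For reference, the convention described here can be sketched as a recipe fragment (paths are illustrative): do_install stages files into ${D}, packaging picks them up via FILES, and the package manager lays them into the rootfs.

```
# Hypothetical recipe fragment: stage into ${D} (the install/staging
# root), never ${B} (the build directory).
do_install () {
    install -d ${D}${sysconfdir}/path
    install -m 0644 ${WORKDIR}/path/fileA ${D}${sysconfdir}/path/
    install -m 0644 ${WORKDIR}/path/fileB ${D}${sysconfdir}/path/
}

# Packaging then maps these staged paths into the package, and from
# there into the rootfs.
FILES_${PN} = "${sysconfdir}/path/fileA ${sysconfdir}/path/fileB"
```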

Thanks so much,

Larry

On Fri, Jun 14, 2019 at 10:11 AM Alexander Kanavin 
wrote:

> Should be ${D}, not ${B} in do_install.
>
> Alex
>
> On Fri, 14 Jun 2019 at 15:53, Larry Brown 
> wrote:
>
>> I've created a recipe that simply copies files into a folder of the
>> image.  Basically in this form:
>>
>> --
>>
>> SUMMARY = "Some text"
>> HOMEPAGE = ""
>> LICENSE = "CLOSED"
>> LIC_FILES_CHKSUM = ""
>>
>> DEPENDS = "openssl"
>>
>> do_install () {
>>  install -m 0644 ${WORKDIR}/path/fileA ${B}/
>>  install -m 0644 ${WORKDIR}/path/fileB ${B}/
>> }
>>
>> FILES_${PN}-ssh = "${sysconfdir}/path/fileA ${sysconfdir}/path/fileB"
>>
>> -
>>
>> When I build this I only want the files in the target image.  I don't
>> need an rpm built that could be used to deliver these files.  Now there
>> might be something I could do that would enable DNF to succeed, however I
>> would like to learn how to get the package installer to ignore/skip a
>> recipe (openssh-keys-install.bb in this case) that is only used during
>> the initial build.
>>
>> Can someone shed some light here?
>>
>> Also, if there is already a way built into the build process to retain a
>> set of keys for each device so they don't keep creating new keys, that
>> would be cool and appreciated, but I'm wanting to learn how to accomplish
>> this so it does not stop me when wanting to include other files elsewhere.
>>
>> I'm assuming here that there is someone here that has dealt with this;
>> however, if I'm mistaken the following is the error that is generated when
>> building.  And by the way, the files do get placed into the correct folder
>> (after some bit of tweaking of the code).  But the error persists:
>>
>> ERROR: core-image-minimal-1.0-r0 do_rootfs: Could not invoke dnf. Command
>> '...--nogpgcheck install base-passwd iptables openssh openssh-keys-install
>> packagegroup-core-boot run-postinsts shadow' returned 1:
>> DNF version:4.2.2
>> cachedir:
>> /stor/development/yocto/poky/trident-build/tmp/work/raspberrypi3_64-poky-linux/core-image-minimal/1.0-r0/rootfs/var/cache/dnf
>> Added oe-repo repo from
>> /stor/development/yocto/poky/trident-build/tmp/work/raspberrypi3_64-poky-linux/core-image-minimal/1.0-r0/oe-rootfs-repo
>> repo: using cache for: oe-repo
>> not found other for:
>> not found modules for:
>> not found deltainfo for:
>> not found updateinfo for:
>> oe-repo: using metadata from Thu 13 Jun 2019 09:13:54 PM UTC.
>> No module defaults found
>> No match for argument: openssh-keys-install
>> Error: Unable to find a match
>>
>> ERROR: core-image-minimal-1.0-r0 do_rootfs:
>> ERROR: core-image-minimal-1.0-r0 do_rootfs: Function failed: do_rootfs
>> 
>>
>> TIA
>>
>>
>>
>> --
>> Larry Brown
>> S/V Trident
>> Palm Harbor, FL
>> ~_/)
>> ~ ~  ~~   ~
>> ~   ~~_/)~  ~ ~~
>>  ~  _/)  ~
>

-- 
Larry Brown
S/V Trident
Palm Harbor, FL
~_/)
~ ~  ~~   ~
~   ~~_/)~  ~ ~~
 ~  _/)  ~


[yocto] nativesdk-expect configure error

2019-06-14 Thread Martin Townsend
Hi,

I'm seeing the following do_configure error when building
nativesdk-expect, this is in Rocko but the recipe doesn't look like
it's changed much in master.

 checking for Tcl public headers... configure: error: tcl.h not found.
Please specify its location with --with-tclinclude
| NOTE: The following config.log files may provide further information.
| NOTE: 
/ws/rufilla/curtisswright/yocto-rocko/build/tmp/work/x86_64-nativesdk-cwrsdk-linux/nativesdk-expect/5.45-r1/build/config.log
| ERROR: configure failed
| WARNING: exit code 1 from a shell command.
| ERROR: Function failed: do_configure (log file is located at
/ws/rufilla/curtisswright/yocto-rocko/build/tmp/work/x86_64-nativesdk-cwrsdk-linux/nativesdk-expect/5.45-r1/temp/log.do_configure.31445)
ERROR: Task 
(virtual:nativesdk:/ws/rufilla/curtisswright/yocto-rocko/sources/poky/meta/recipes-devtools/expect/expect_5.45.bb:do_configure)
failed with exit code '1'

To fix it I added

TCL_INCLUDE_PATH_class-nativesdk = "--with-tclinclude=${STAGING_INCDIR}/tcl8.6"

Is anyone else seeing this with newer Yocto versions? If so, shall I
submit a patch for this?

Best Regards,
Martin.


[yocto] Build failure on kernel-devsrc

2019-06-14 Thread Adam Lee
At some point I noticed I *must* clean kernel-devsrc before I can
build my image with kernel changes.
I suppose this isn't normal and would expect it to rebuild what's
necessary as part of the image build. Has anyone seen this before?

Build Configuration:
BB_VERSION   = "1.36.0"
BUILD_SYS= "x86_64-linux"
NATIVELSBSTRING  = "ubuntu-18.04"
TARGET_SYS   = "arm-oe-linux-gnueabi"
MACHINE  = "solix"
DISTRO   = "jome"
DISTRO_VERSION   = "2018.06"
TUNE_FEATURES= "arm armv7a vfp thumb neon callconvention-hard"
TARGET_FPU   = "hard"
meta-arago-distro
meta-arago-extras= "HEAD:61f5c7b578255a0ddff8046b060f8488157c0a0d"
meta-browser = "HEAD:4c1d135a085fe4011fadd73bf1f7876d26eac443"
meta-qt5 = "HEAD:682ad61c071a9710e9f9d8a32ab1b5f3c14953d1"
meta-networking
meta-python
meta-oe
meta-gnome   = "HEAD:352531015014d1957d6444d114f4451e241c4d23"
meta-ti  = "HEAD:6f1740c8121daafc497e26834bee67e3eeae322c"
meta-jo  = "dev:914fe3bb19ec4ddad0c7286b6908eb82a5be"
meta-linaro-toolchain
meta-optee   = "HEAD:75dfb67bbb14a70cd47afda9726e2e1c76731885"
meta = "HEAD:931a52e8698f684ccbb26ddec18764ad9d9a3e8f"
workspace= ":"

Initialising tasks: 100%
|###|
Time: 0:00:06
NOTE: Executing SetScene Tasks
NOTE: Executing RunQueue Tasks
NOTE: redpine: compiling from external source tree
/home/adam/test/build/workspace/sources/redpine/source/host
NOTE: lp8863-mod: compiling from external source tree
/home/adam/test/build/workspace/sources/lp8863-mod
ERROR: kernel-devsrc-1.0-r0 do_install: Function failed: do_install
(log file is located at
/home/adam/test/build/build/tmp-glibc/work/solix-oe-linux-gnueabi/kernel-devsrc/1.0-r0/temp/log.do_install.70443)
ERROR: Logfile of failure stored in:
/home/adam/test/build/build/tmp-glibc/work/solix-oe-linux-gnueabi/kernel-devsrc/1.0-r0/temp/log.do_install.70443
Log data follows:
| DEBUG: Executing python function extend_recipe_sysroot
| NOTE: Direct dependencies are
['virtual:native:/home/adam/test/poky/meta/recipes-devtools/pseudo/pseudo_1.8.2.bb:do_populate_sysroot',
'/home/adam/test/poky/meta/recipes-devtools/gcc/gcc-runtime_7.3.bb:do_populate_sysroot',
'/home/adam/test/poky/meta/recipes-devtools/quilt/quilt-native_0.65.bb:do_populate_sysroot',
'/home/adam/test/poky/meta/recipes-core/glibc/glibc_2.26.bb:do_populate_sysroot',
'virtual:native:/home/adam/test/poky/meta/recipes-bsp/u-boot/u-boot-mkimage_2017.09.bb:do_populate_sysroot',
'virtual:native:/home/adam/test/poky/meta/recipes-extended/bc/bc_1.06.bb:do_populate_sysroot',
'/home/adam/test/poky/meta/recipes-kernel/kmod/kmod-native_git.bb:do_populate_sysroot',
'virtual:native:/home/adam/test/poky/meta/recipes-support/lzop/lzop_1.03.bb:do_populate_sysroot',
'/home/adam/test/poky/meta/recipes-devtools/binutils/binutils-cross_2.29.1.bb:do_populate_sysroot',
'/home/adam/test/poky/meta/recipes-devtools/gcc/gcc-cross_7.3.bb:do_populate_sysroot']
| NOTE: Installed into sysroot: []
| NOTE: Skipping as already exists in sysroot: ['pseudo-native',
'gcc-runtime', 'quilt-native', 'glibc', 'u-boot-mkimage-native',
'bc-native', 'kmod-native', 'lzop-native', 'binutils-cross-arm',
'gcc-cross-arm', 'linux-libc-headers', 'libgcc', 'openssl-native',
'gnu-config-native', 'gtk-doc-native', 'libtool-native',
'zlib-native', 'pkgconfig-native', 'autoconf-native',
'automake-native', 'texinfo-dummy-native', 'bison-native',
'flex-native', 'mpfr-native', 'xz-native', 'gmp-native',
'libmpc-native', 'lzo-native', 'makedepend-native',
'cryptodev-linux-native', 'm4-native', 'gettext-minimal-native',
'xproto-native', 'util-macros-native']
| DEBUG: Python function extend_recipe_sysroot finished
| DEBUG: Executing shell function do_install
| 0 blocks
| cpio: ./singletask.lock: Cannot stat: No such file or directory
| 0 blocks
| WARNING: exit code 2 from a shell command.
| ERROR: Function failed: do_install (log file is located at
/home/adam/test/build/build/tmp-glibc/work/solix-oe-linux-gnueabi/kernel-devsrc/1.0-r0/temp/log.do_install.70443)
ERROR: Task 
(/home/adam/test/poky/meta/recipes-kernel/linux/kernel-devsrc.bb:do_install)
failed with exit code '1'
NOTE: Tasks Summary: Attempted 5488 tasks of which 5240 didn't need to
be rerun and 1 failed.
NOTE: Writing buildhistory

Summary: 1 task failed:
  /home/adam/test/poky/meta/recipes-kernel/linux/kernel-devsrc.bb:do_install
Summary: There was 1 WARNING message shown.
Summary: There was 1 ERROR message shown, returning a non-zero exit code.


[yocto] [ptest-runner][PATCH v2 3/4] utils: Ensure pipes are read after exit

2019-06-14 Thread Randy MacLeod
From: Richard Purdie 

There was a race in the code where the pipes may not be read after the
process has exited and data may be left behind in them. This change to
ordering ensures the pipes are read after the exit code has been read,
meaning no data can be left behind and the logs should be complete.

Signed-off-by: Richard Purdie 
Upstream-Status: Pending [code being tested]
---
 utils.c | 29 -
 1 file changed, 16 insertions(+), 13 deletions(-)

diff --git a/utils.c b/utils.c
index 86dcdad..ad737c2 100644
--- a/utils.c
+++ b/utils.c
@@ -285,6 +285,7 @@ wait_child(const char *ptest_dir, const char *run_ptest, 
pid_t pid,
struct pollfd pfds[2];
struct timespec sentinel;
clockid_t clock = CLOCK_MONOTONIC;
+   int looping = 1;
int r;
 
int status;
@@ -302,9 +303,23 @@ wait_child(const char *ptest_dir, const char *run_ptest, 
pid_t pid,
 
*timeouted = 0;
 
-   while (1) {
+   while (looping) {
waitflags = WNOHANG;
 
+   if (timeout >= 0) {
+   struct timespec time;
+
+   clock_gettime(clock, &time);
+   if ((time.tv_sec - sentinel.tv_sec) > timeout) {
+   *timeouted = 1;
+   kill(-pid, SIGKILL);
+   waitflags = 0;
+   }
+   }
+
+   if (waitpid(pid, &status, waitflags) == pid)
+   looping = 0;
+
r = poll(pfds, 2, WAIT_CHILD_POLL_TIMEOUT_MS);
if (r > 0) {
char buf[WAIT_CHILD_BUF_MAX_SIZE];
@@ -324,19 +339,7 @@ wait_child(const char *ptest_dir, const char *run_ptest, 
pid_t pid,
}
 
clock_gettime(clock, &sentinel);
-   } else if (timeout >= 0) {
-   struct timespec time;
-
-   clock_gettime(clock, &time);
-   if ((time.tv_sec - sentinel.tv_sec) > timeout) {
-   *timeouted = 1;
-   kill(-pid, SIGKILL);
-   waitflags = 0;
-   }
}
-
-   if (waitpid(pid, &status, waitflags) == pid)
-   break;
}
 
fflush(fps[0]);
-- 
2.17.0



[yocto] [ptest-runner][PATCH v2 4/4] utils: ensure child can be session leader

2019-06-14 Thread Randy MacLeod
When running the run-execscript bash ptest as a user rather than root, a warning:
  bash: cannot set terminal process group (16036): Inappropriate ioctl for device
  bash: no job control in this shell
contaminates the bash log files causing the test to fail. This happens only
when run under ptest-runner and not when interactively testing!

The changes made to fix this include:
1. Get the process group id (pgid) before forking,
2. Set the pgid in both the parent and child to avoid a race,
3. Find, open and set permission on the child tty, and
4. Allow the child to attach to controlling tty.

Also add '-lutil' to the Makefile; this library ships with glibc and provides openpty().

Signed-off-by: Sakib Sajal 
Signed-off-by: Randy MacLeod 
---
 Makefile |   2 +-
 utils.c  | 102 +--
 2 files changed, 92 insertions(+), 12 deletions(-)

diff --git a/Makefile b/Makefile
index 1bde7be..439eb79 100644
--- a/Makefile
+++ b/Makefile
@@ -29,7 +29,7 @@ TEST_DATA=$(shell echo `pwd`/tests/data)
 all: $(SOURCES) $(EXECUTABLE)
 
 $(EXECUTABLE): $(OBJECTS)
-   $(CC) $(LDFLAGS) $(OBJECTS) -o $@
+   $(CC) $(LDFLAGS) $(OBJECTS) -lutil -o $@
 
 tests: $(TEST_SOURCES) $(TEST_EXECUTABLE)
 
diff --git a/utils.c b/utils.c
index ad737c2..f11ce39 100644
--- a/utils.c
+++ b/utils.c
@@ -1,5 +1,6 @@
 /**
  * Copyright (c) 2016 Intel Corporation
+ * Copyright (C) 2019 Wind River Systems, Inc.
  *
  * This program is free software; you can redistribute it and/or
  * modify it under the terms of the GNU General Public License
@@ -22,23 +23,27 @@
  */
 
 #define _GNU_SOURCE 
+
 #include 
 
+#include 
+#include 
+#include 
+#include 
 #include 
-#include 
 #include 
-#include 
+#include 
+#include 
+#include 
+#include 
 #include 
-#include 
+#include 
+
+#include 
 #include 
+#include 
 #include 
 #include 
-#include 
-#include 
-#include 
-#include 
-
-#include 
 
 #include "ptest_list.h"
 #include "utils.h"
@@ -346,6 +351,53 @@ wait_child(const char *ptest_dir, const char *run_ptest, pid_t pid,
return status;
 }
 
+/* Returns an integer file descriptor.
+ * If it returns < 0, an error has occurred.
+ * Otherwise, it has returned the slave pty file descriptor.
+ * fp should be writable, likely stdout/err.
+ */
+static int
+setup_slave_pty(FILE *fp) { 
+   int pty_master = -1;
+   int pty_slave = -1;
+   char pty_name[256];
+   struct group *gptr;
+   gid_t gid;
+   int slave = -1;
+
+   if (openpty(_master, _slave, pty_name, NULL, NULL) < 0) {
+   fprintf(fp, "ERROR: openpty() failed with: %s.\n", 
strerror(errno));
+   return -1;
+   }
+
+   if ((gptr = getgrnam(pty_name)) != 0) {
+   gid = gptr->gr_gid;
+   } else {
+   /* If the tty group does not exist, don't change the
+* group on the slave pty, only the owner
+*/
+   gid = -1;
+   }
+
+   /* chown/chmod the corresponding pty, if possible.
+* This will only work if the process has root permissions.
+*/
+   if (chown(pty_name, getuid(), gid) != 0) {
+   fprintf(fp, "ERROR: chown() failed with: %s.\n", strerror(errno));
+   }
+
+   /* Makes the slave read/writeable for the user. */
+   if (chmod(pty_name, S_IRUSR|S_IWUSR) != 0) {
+   fprintf(fp, "ERROR: chmod() failed with: %s.\n", 
strerror(errno));
+   }
+
+   if ((slave = open(pty_name, O_RDWR)) == -1) {
+   fprintf(fp, "ERROR: open() failed with: %s.\n", strerror(errno));
+   }
+   return (slave);
+}
+
+
 int
 run_ptests(struct ptest_list *head, const struct ptest_options opts,
const char *progname, FILE *fp, FILE *fp_stderr)
@@ -362,6 +414,8 @@ run_ptests(struct ptest_list *head, const struct ptest_options opts,
int timeouted;
time_t sttime, entime;
int duration;
+   int slave;
+   int pgid = -1;
 
if (opts.xml_filename) {
xh = xml_create(ptest_list_length(head), opts.xml_filename);
@@ -379,7 +433,6 @@ run_ptests(struct ptest_list *head, const struct ptest_options opts,
close(pipefd_stdout[1]);
break;
}
-
fprintf(fp, "START: %s\n", progname);
PTEST_LIST_ITERATE_START(head, p);
char *ptest_dir = strdup(p->run_ptest);
@@ -388,6 +441,13 @@ run_ptests(struct ptest_list *head, const struct ptest_options opts,
break;
}
dirname(ptest_dir);
+   if (ioctl(0, TIOCNOTTY) == -1) {
fprintf(fp, "ERROR: Unable to detach from controlling tty, %s\n", strerror(errno));
+   }
+
+   if ((pgid = getpgid(0)) == -1) {
fprintf(fp, "ERROR: getpgid() failed, %s\n", strerror(errno));
+   

[yocto] [ptest-runner][PATCH v2] 3 old patches + utils: ensure child can be session leader

2019-06-14 Thread Randy MacLeod
My patch needs Richard's previous 3 patches so I've added them here.

I've cleaned up the patch a bit since v1; mostly it's indentation and
other cosmetic changes. I have added a bit more error handling and
I've renamed get_slave_pty to setup_slave_pty to more accurately
reflect what the function does. Also I've cleaned up the code to only
use openpty() to get the tty name.

Care to release an update now that oe-core 2.8-M1 is in QA?

../Randy




[yocto] [ptest-runner][PATCH v2 2/4] use process groups when spawning

2019-06-14 Thread Randy MacLeod
From: Richard Purdie 

Rather than just killing the process we've spawned, set the process group
for spawned children and then kill the group of processes.

Signed-off-by: Richard Purdie 
Upstream-Status: Pending [code being tested]
---
 utils.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/utils.c b/utils.c
index 9fab6f2..86dcdad 100644
--- a/utils.c
+++ b/utils.c
@@ -330,7 +330,7 @@ wait_child(const char *ptest_dir, const char *run_ptest, pid_t pid,
clock_gettime(clock, &time);
if ((time.tv_sec - sentinel.tv_sec) > timeout) {
*timeouted = 1;
-   kill(pid, SIGKILL);
+   kill(-pid, SIGKILL);
waitflags = 0;
}
}
@@ -392,6 +392,7 @@ run_ptests(struct ptest_list *head, const struct ptest_options opts,
rc = -1;
break;
} else if (child == 0) {
+   setsid();
run_child(p->run_ptest, pipefd_stdout[1], pipefd_stderr[1]);
} else {
int status;
-- 
2.17.0



[yocto] [ptest-runner][PATCH v2 1/4] utils: Ensure stdout/stderr are flushed

2019-06-14 Thread Randy MacLeod
From: Richard Purdie 

There is no guarantee that the data written with fwrite will be flushed to the
buffer. If stdout and stderr are the same thing, this could lead to interleaved
writes. The common case is stdout output so flush the output pipes when writing
to stderr. Also flush stdout before the function returns.

Signed-off-by: Richard Purdie 
Upstream-Status: Pending [code being tested]
---
 utils.c | 7 +--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/utils.c b/utils.c
index 6e453a1..9fab6f2 100644
--- a/utils.c
+++ b/utils.c
@@ -316,8 +316,11 @@ wait_child(const char *ptest_dir, const char *run_ptest, pid_t pid,
}
 
if (pfds[1].revents != 0) {
-   while ((n = read(fds[1], buf, WAIT_CHILD_BUF_MAX_SIZE)) > 0)
+   while ((n = read(fds[1], buf, WAIT_CHILD_BUF_MAX_SIZE)) > 0) {
+   fflush(fps[0]);
fwrite(buf, n, 1, fps[1]);
+   fflush(fps[1]);
+   }
}
 
clock_gettime(clock, );
@@ -336,7 +339,7 @@ wait_child(const char *ptest_dir, const char *run_ptest, pid_t pid,
break;
}
 
-
+   fflush(fps[0]);
return status;
 }
 
-- 
2.17.0



Re: [yocto] dnf causing build failure

2019-06-14 Thread Alexander Kanavin
Should be ${D}, not ${B} in do_install.

Alex

On Fri, 14 Jun 2019 at 15:53, Larry Brown  wrote:

> I've created a recipe that simply copies files into a folder of the
> image.  Basically in this form:
>
> --
>
> SUMMARY = "Some text"
> HOMEPAGE = ""
> LICENSE = "CLOSED"
> LIC_FILES_CHKSUM = ""
>
> DEPENDS = "openssl"
>
> do_install () {
>  install -m 0644 ${WORKDIR}/path/fileA ${B}/
>  install -m 0644 ${WORKDIR}/path/fileB ${B}/
> }
>
> FILES_${PN}-ssh = "${sysconfdir}/path/fileA ${sysconfdir}/path/fileB"
>
> -
>
> When I build this I only want the files in the target image.  I don't need
> an rpm built that could be used to deliver these files.  Now there might be
> something I could do that would enable DNF to succeed, however I would like
> to learn how to get the package installer to ignore/skip a recipe (
> openssh-keys-install.bb in this case) that is only used during the
> initial build.
>
> Can someone shed some light here?
>
> Also, if there is already a way built into the build process to retain a
> set of keys for each device so they don't keep creating new keys, that
> would be cool and appreciated, but I'm wanting to learn how to accomplish
> this so it does not stop me when wanting to include other files elsewhere.
>
> I'm assuming here that there is someone here that has dealt with this;
> however, if I'm mistaken the following is the error that is generated when
> building.  And by the way, the files do get placed into the correct folder
> (after some bit of tweaking of the code).  But the error persists:
>
> ERROR: core-image-minimal-1.0-r0 do_rootfs: Could not invoke dnf. Command
> '...--nogpgcheck install base-passwd iptables openssh openssh-keys-install
> packagegroup-core-boot run-postinsts shadow' returned 1:
> DNF version:4.2.2
> cachedir:
> /stor/development/yocto/poky/trident-build/tmp/work/raspberrypi3_64-poky-linux/core-image-minimal/1.0-r0/rootfs/var/cache/dnf
> Added oe-repo repo from
> /stor/development/yocto/poky/trident-build/tmp/work/raspberrypi3_64-poky-linux/core-image-minimal/1.0-r0/oe-rootfs-repo
> repo: using cache for: oe-repo
> not found other for:
> not found modules for:
> not found deltainfo for:
> not found updateinfo for:
> oe-repo: using metadata from Thu 13 Jun 2019 09:13:54 PM UTC.
> No module defaults found
> No match for argument: openssh-keys-install
> Error: Unable to find a match
>
> ERROR: core-image-minimal-1.0-r0 do_rootfs:
> ERROR: core-image-minimal-1.0-r0 do_rootfs: Function failed: do_rootfs
> 
>
> TIA
>
>
>
> --
> Larry Brown
> S/V Trident
> Palm Harbor, FL
> ~_/)
> ~ ~  ~~   ~
> ~   ~~_/)~  ~ ~~
>  ~  _/)  ~


[yocto] dnf causing build failure

2019-06-14 Thread Larry Brown
I've created a recipe that simply copies files into a folder of the image.
Basically in this form:

--

SUMMARY = "Some text"
HOMEPAGE = ""
LICENSE = "CLOSED"
LIC_FILES_CHKSUM = ""

DEPENDS = "openssl"

do_install () {
 install -m 0644 ${WORKDIR}/path/fileA ${B}/
 install -m 0644 ${WORKDIR}/path/fileB ${B}/
}

FILES_${PN}-ssh = "${sysconfdir}/path/fileA ${sysconfdir}/path/fileB"

-

When I build this I only want the files in the target image.  I don't need
an rpm built that could be used to deliver these files.  Now there might be
something I could do that would enable DNF to succeed, however I would like
to learn how to get the package installer to ignore/skip a recipe (
openssh-keys-install.bb in this case) that is only used during the initial
build.

Can someone shed some light here?

Also, if there is already a way built into the build process to retain a
set of keys for each device so they don't keep creating new keys, that
would be cool and appreciated, but I'm wanting to learn how to accomplish
this so it does not stop me when wanting to include other files elsewhere.

I'm assuming here that there is someone here that has dealt with this;
however, if I'm mistaken the following is the error that is generated when
building.  And by the way, the files do get placed into the correct folder
(after some bit of tweaking of the code).  But the error persists:

ERROR: core-image-minimal-1.0-r0 do_rootfs: Could not invoke dnf. Command
'...--nogpgcheck install base-passwd iptables openssh openssh-keys-install
packagegroup-core-boot run-postinsts shadow' returned 1:
DNF version:4.2.2
cachedir:
/stor/development/yocto/poky/trident-build/tmp/work/raspberrypi3_64-poky-linux/core-image-minimal/1.0-r0/rootfs/var/cache/dnf
Added oe-repo repo from
/stor/development/yocto/poky/trident-build/tmp/work/raspberrypi3_64-poky-linux/core-image-minimal/1.0-r0/oe-rootfs-repo
repo: using cache for: oe-repo
not found other for:
not found modules for:
not found deltainfo for:
not found updateinfo for:
oe-repo: using metadata from Thu 13 Jun 2019 09:13:54 PM UTC.
No module defaults found
No match for argument: openssh-keys-install
Error: Unable to find a match

ERROR: core-image-minimal-1.0-r0 do_rootfs:
ERROR: core-image-minimal-1.0-r0 do_rootfs: Function failed: do_rootfs


TIA



-- 
Larry Brown
S/V Trident
Palm Harbor, FL
~_/)
~ ~  ~~   ~
~   ~~_/)~  ~ ~~
 ~  _/)  ~


Re: [yocto] [meta-swupdate] failed update leads to kernel panic

2019-06-14 Thread Stefano Babic
Hi Moritz,

On 14/06/19 12:19, Moritz Porst wrote:
> (Sorry, the answer should go to everyone)
> Thanks for your response, unfortunately it didn't work out for me.
> Because I am not 100% sure whether I did the correct thing I describe it:
> In my layer I went to recipes-kernel/kernel/files, here I added a file
> defconfig which I filled with the content of your config-initramfs
> my linux-yocto_%.bbappend, residing one level lower (kernel), got this
> file added so it reads (in its entirety):
>  
> FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
> SRC_URI += "file://0001-sdcard-power-management-off.patch"
> SRC_URI += "file://defconfig"
>  
> However I *also* have a file called defconfig in meta-swupdate which is
> basically the swupdate config that comes from menuconfig. I let this
> untouched but since this is not the kernel defconfig I guess you didn't
> mean this file ?
>  
> Additionally I can't mount the rootfs with the failed boot so I cannot
> access the dmesg log file. Any advice here ?
>  
> Best regards
> Moritz
>  
>  
>  
> *Sent:* Friday, 14 June 2019, 11:00
> *From:* "Zoran Stojsavljevic" 
> *To:* "Moritz Porst" 
> *Cc:* "Yocto Project" 
> *Subject:* Re: [yocto] [meta-swupdate] failed update leads to kernel panic
>> However if I abort the update while running (i.e. simulating a power cut)
>> and then reboot I end up in a kernel panic: "Unable to mount rootfs on
>> unknown block". My understanding is that I should at least end up in a
>> busybox or that the update is retried, but this does not happen.
> 
> You missed to attach failed dmesg while the system booted and aborted.
> Nevertheless, I expect problem with your defconfig, which does NOT
> have options set for initramfs. These ones:
> https://github.com/ZoranStojsavljevic/bbb-yocto/blob/master/custom/config-initramfs
> 
> Please, could you add these one to your current .config, this might
> very well solve your problem.
> 
> Best Regards,
> Zoran
> ___
> 
> On Fri, Jun 14, 2019 at 10:15 AM Moritz Porst  wrote:
>>
>> Hello,
>>
>> I am currently having trouble to get swupdate working properly.
>> My problem:
>> I can execute swupdate -i normally and if the update fits I have no
> problem. However if I abort the update while running (i.e. simulating a
> power cut) and then reboot I end up in a kernel panic: "Unable to mount
> rootfs on unknown block". My understanding is that I should at least end
> up in a busybox or that the update is retried, but this does not happen.
>> I receive 1 error when updating which does not stop the update: "Could
> not find MTD".
>>
>> My configuration:
>> I have a single rootfs and a separate boot partition. I build the
> initramfs using:

You have a *single* rootfs, you stop an upgrade (resulting of course in
a corrupted rootfs or worse), and you wonder that the kernel cannot mount
it... your update concept is broken.

You *must* check the bootloader marker in U-Boot (default is the
"recovery_status" variable) and you *must* start the updater (the
ramdisk) again as long as the marker (a transaction flag) is set. It is
erased by SWUpdate only after a successful update. You are now starting
your board with a half-completed update.
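The transaction check described here can be sketched in U-Boot script form (the boot target names are illustrative; only recovery_status is SWUpdate's default):

```
# If recovery_status is still set, the last update never completed:
# boot the updater ramdisk again instead of the (possibly corrupt) rootfs.
if env exists recovery_status; then
    run bootcmd_recovery
else
    run bootcmd_normal
fi
```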

Best regards,
Stefano Babic


>> ---
>> IMAGE_INSTALL = "base-files \
>> base-passwd \
>> swupdate \
>> busybox \
>> libconfig \
>> util-linux-sfdisk \
>> mtd-utils \
>> mtd-utils-ubifs \
>> ${@bb.utils.contains('SWUPDATE_INIT', 'tiny',
> 'virtual/initscripts-swupdate', 'initscripts sysvinit', d)} \
>> "
>> ---
>> This is taken from swupdate-image. I include the same packages (via
> _append) in my base image, is this necessary ?
>> I bundle the initramfs with my image using INITRAMFS_IMAGE_BUNDLE = "1"
>>
>> Can you see a mistake I made ?
>>
>> Best regards
>> Moritz
> 

-- 
=
DENX Software Engineering GmbH,  Managing Director: Wolfgang Denk
HRB 165235 Munich, Office: Kirchenstr.5, D-82194 Groebenzell, Germany
Phone: +49-8142-66989-53 Fax: +49-8142-66989-80 Email: sba...@denx.de
=


Re: [yocto] [meta-swupdate] failed update leads to kernel panic

2019-06-14 Thread Moritz Porst

(Sorry, the answer should go to everyone)

Thanks for your response; unfortunately it didn't work out for me.

Because I am not 100% sure whether I did the correct thing, I will describe what I did:

In my layer I went to recipes-kernel/kernel/files, where I added a file named defconfig which I filled with the content of your config-initramfs.

My linux-yocto_%.bbappend, residing one level up (in recipes-kernel/kernel), got this file added, so it now reads (in its entirety):

FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"

SRC_URI += "file://0001-sdcard-power-management-off.patch"
SRC_URI += "file://defconfig"

However I *also* have a file called defconfig in meta-swupdate, which is basically the SWUpdate config that comes from menuconfig. I left it untouched, but since this is not the kernel defconfig, I guess you didn't mean this file?

Additionally, I can't mount the rootfs after the failed boot, so I cannot access the dmesg log file. Any advice here?

Best regards

Moritz

Sent: Friday, 14 June 2019 at 11:00
From: "Zoran Stojsavljevic" 
To: "Moritz Porst" 
Cc: "Yocto Project" 
Subject: Re: [yocto] [meta-swupdate] failed update leads to kernel panic

> However if I abort the update while running (i.e. simulating a power cut)
> and then reboot I end up in a kernel panic: "Unable to mount rootfs on
> unknown block". My understanding is that I should at least end up in a
> busybox or that the update is retried, but this does not happen.

You forgot to attach the dmesg output from the failed, aborted boot.
Nevertheless, I suspect a problem with your defconfig, which does NOT
have the options needed for initramfs set. These ones:
https://github.com/ZoranStojsavljevic/bbb-yocto/blob/master/custom/config-initramfs

Could you please add these to your current .config? It might very
well solve your problem.

Best Regards,
Zoran
___

On Fri, Jun 14, 2019 at 10:15 AM Moritz Porst  wrote:
>
> Hello,
>
> I am currently having trouble to get swupdate working properly.
> My problem:
> I can execute swupdate -i normally and if the update fits I have no problem. However if I abort the update while running (i.e. simulating a power cut) and then reboot I end up in a kernel panic: "Unable to mount rootfs on unknown block". My understanding is that I should at least end up in a busybox or that the update is retried, but this does not happen.
> I receive 1 error when updating which does not stop the update: "Could not find MTD".
>
> My configuration:
> I have a single rootfs and a separate boot partition. I build the initramfs using:
> ---
> IMAGE_INSTALL = "base-files \
> base-passwd \
> swupdate \
> busybox \
> libconfig \
> util-linux-sfdisk \
> mtd-utils \
> mtd-utils-ubifs \
> ${@bb.utils.contains('SWUPDATE_INIT', 'tiny', 'virtual/initscripts-swupdate', 'initscripts sysvinit', d)} \
> "
> ---
> This is taken from swupdate-image. I include the same packages (via _append) in my base image, is this necessary ?
> I bundle the initramfs with my image using INITRAMFS_IMAGE_BUNDLE = "1"
>
> Can you see a mistake I made ?
>
> Best regards
> Moritz


[yocto] Can't build SDK while shrinking shadow tools

2019-06-14 Thread Jérémy Singy
Hi,

I'm facing a problem while trying to shrink the size of our root filesystem.
To avoid installing some unneeded tools such as adduser, groupadd,
nologin, etc., I created, as usual, a shadow_%.bbappend in our layer
which removes the content using do_install_append:

# do not install alternatives
ALTERNATIVE_${PN} = ""
ALTERNATIVE_${PN}-base = ""

# remove unwanted files on target
do_install_append_class-target () {
  rm -r ${D}${base_bindir}
  rm -r ${D}${base_sbindir}
  rm -r ${D}${bindir}
  rm -r ${D}${sbindir}
}

Building an image with this works well and I got rid of the unwanted
tools, but I get an error whenever I try to build the SDK for this image
with do_populate_sdk:

WARNING: ceres-image-1.0-r0 do_populate_sdk:
nativesdk-util-linux.postinst returned 1, marking as unpacked only,
configuration required on target.
ERROR: ceres-image-1.0-r0 do_populate_sdk: Postinstall scriptlets
of ['nativesdk-util-linux'] have failed. If the intention is to defer
them to first boot,
then please place them into pkg_postinst_ontarget_${PN} ().
Deferring to first boot via 'exit 1' is no longer supported.
Details of the failure are in
/home/chsingj/yocto-thud/tmp/work/ceres_con-oe-linux-gnueabi/ceres-image/1.0-r0/temp/log.do_populate_sdk.
ERROR: ceres-image-1.0-r0 do_populate_sdk: Function failed: do_populate_sdk
ERROR: Logfile of failure stored in:
/home/chsingj/yocto-thud/tmp/work/ceres_con-oe-linux-gnueabi/ceres-image/1.0-r0/temp/log.do_populate_sdk.28331
ERROR: Task
(/home/chsingj/meta-delta/recipes-core/images/ceres-image.bb:do_populate_sdk)
failed with exit code '1'
NOTE: Tasks Summary: Attempted 2730 tasks of which 2729 didn't
need to be rerun and 1 failed.

The log file shows an error with update-alternatives (probably a
failing ln call):

update-alternatives: Error: not linking
[BUILDPATH]/tmp/work/ceres_con-oe-linux-gnueabi/ceres-image/1.0-r0/sdk/image/opt/ceres/sdk-2019.05-r1/sysroots/x86_64-delta-linux/sbin/nologin
to /opt/ceres/sdk-2019.05-r1/sysroots/x86_64-delta-linux/sbin/nologin.util-linux
since 
[BUILDPATH]/tmp/work/ceres_con-oe-linux-gnueabi/ceres-image/1.0-r0/sdk/image/opt/ceres/sdk-2019.05-r1/sysroots/x86_64-delta-linux/sbin/nologin
exists and is not a link

If I remove the bbappend, the SDK builds fine but of course my image
contains the unwanted files. I tried many other recipe tweaks but I
can't get past some build errors. I'd like to avoid a solution using
ROOTFS_POSTPROCESS_COMMAND in the image recipe, as I would have to
explicitly list all the shadow tools and it would blow up my recipe. Is
there any solution here? Is it maybe a bug that can be fixed? Thanks!

Regards,
Jeremy


Re: [yocto] [meta-swupdate] failed update leads to kernel panic

2019-06-14 Thread Zoran Stojsavljevic
> However if I abort the update while running (i.e. simulating a power cut)
> and then reboot I end up in a kernel panic: "Unable to mount rootfs on
> unknown block". My understanding is that I should at least end up in a
> busybox or that the update is retried, but this does not happen.

You forgot to attach the dmesg output from the failed, aborted boot.
Nevertheless, I suspect a problem with your defconfig, which does NOT
have the options needed for initramfs set. These ones:
https://github.com/ZoranStojsavljevic/bbb-yocto/blob/master/custom/config-initramfs

Could you please add these to your current .config? It might very
well solve your problem.

Best Regards,
Zoran
___

On Fri, Jun 14, 2019 at 10:15 AM Moritz Porst  wrote:
>
> Hello,
>
> I am currently having trouble to get swupdate working properly.
> My problem:
> I can execute swupdate -i normally and if the update fits I have no problem. 
> However if I abort the update while running (i.e. simulating a power cut) and 
> then reboot I end up in a kernel panic: "Unable to mount rootfs on unknown 
> block". My understanding is that I should at least end up in a busybox or 
> that the update is retried, but this does not happen.
> I receive 1 error when updating which does not stop the update: "Could not 
> find MTD".
>
> My configuration:
> I have a single rootfs and a separate boot partition. I build the initramfs 
> using:
> ---
> IMAGE_INSTALL = "base-files \
>base-passwd \
>swupdate \
>busybox \
>libconfig \
>util-linux-sfdisk \
>mtd-utils \
>mtd-utils-ubifs \
>${@bb.utils.contains('SWUPDATE_INIT', 'tiny', 
> 'virtual/initscripts-swupdate', 'initscripts sysvinit', d)} \
> "
> ---
> This is taken from swupdate-image. I include the same packages (via _append) 
> in my base image, is this necessary ?
> I bundle the initramfs with my image using INITRAMFS_IMAGE_BUNDLE = "1"
>
> Can you see a mistake I made ?
>
> Best regards
> Moritz


[yocto] [meta-swupdate] failed update leads to kernel panic

2019-06-14 Thread Moritz Porst
Hello,

I am currently having trouble getting swupdate to work properly.

My problem:

I can execute swupdate -i normally, and if the update fits I have no problem. However, if I abort the update while it is running (i.e. simulating a power cut) and then reboot, I end up in a kernel panic: "Unable to mount rootfs on unknown block". My understanding is that I should at least end up in a busybox shell, or that the update would be retried, but this does not happen.

I receive one error when updating, which does not stop the update: "Could not find MTD".

My configuration:

I have a single rootfs and a separate boot partition. I build the initramfs using:

---

IMAGE_INSTALL = "base-files \
    base-passwd \
    swupdate \
    busybox \
    libconfig \
    util-linux-sfdisk \
    mtd-utils \
    mtd-utils-ubifs \
    ${@bb.utils.contains('SWUPDATE_INIT', 'tiny', 'virtual/initscripts-swupdate', 'initscripts sysvinit', d)} \
    "

---
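The bb.utils.contains() call in the IMAGE_INSTALL above picks between two package lists depending on whether SWUPDATE_INIT contains "tiny". A minimal stand-alone sketch of those semantics (not BitBake's actual implementation, which reads the variable from the datastore):

```python
def contains(variable_value, checkvalues, truevalue, falsevalue):
    """Return truevalue if every whitespace-separated item in checkvalues
    is present in variable_value, else falsevalue (a sketch of the
    bb.utils.contains() semantics, minus the datastore lookup)."""
    present = set(variable_value.split())
    wanted = set(checkvalues.split())
    return truevalue if wanted.issubset(present) else falsevalue

# With SWUPDATE_INIT = "tiny", the tiny init scripts are selected:
print(contains("tiny", "tiny",
               "virtual/initscripts-swupdate", "initscripts sysvinit"))
# With any other value, the default init scripts are selected:
print(contains("sysvinit", "tiny",
               "virtual/initscripts-swupdate", "initscripts sysvinit"))
```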

This is taken from swupdate-image. I include the same packages (via _append) in my base image; is this necessary?

I bundle the initramfs with my image using INITRAMFS_IMAGE_BUNDLE = "1"
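For context, bundling the initramfs into the kernel image is driven by two variables. A minimal configuration sketch follows; the image name "swupdate-image" is an assumption here, so substitute your own initramfs image recipe:

```
# local.conf / machine config sketch: build the named initramfs image
# and link it into the kernel image itself (a "bundled" initramfs).
INITRAMFS_IMAGE = "swupdate-image"
INITRAMFS_IMAGE_BUNDLE = "1"
```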

Can you see a mistake I made?

Best regards

Moritz



Re: [yocto] [meta-security][PATCH 1/2] oe-selftest: add running cve checker

2019-06-14 Thread ChenQi

Hi Armin,

I just noticed this selftest case.
Have you considered putting it into oe-core?

Best Regards,
Chen Qi

On 05/10/2019 11:09 AM, Armin Kuster wrote:

Signed-off-by: Armin Kuster 
---
  lib/oeqa/selftest/cases/cvechecker.py | 27 +++
  1 file changed, 27 insertions(+)
  create mode 100644 lib/oeqa/selftest/cases/cvechecker.py

diff --git a/lib/oeqa/selftest/cases/cvechecker.py 
b/lib/oeqa/selftest/cases/cvechecker.py
new file mode 100644
index 000..23ca7d2
--- /dev/null
+++ b/lib/oeqa/selftest/cases/cvechecker.py
@@ -0,0 +1,27 @@
+import os
+import re
+
+from oeqa.selftest.case import OESelftestTestCase
+from oeqa.utils.commands import bitbake, get_bb_var
+
+class CveCheckerTests(OESelftestTestCase):
+    def test_cve_checker(self):
+        image = "core-image-sato"
+
+        deploy_dir = get_bb_var("DEPLOY_DIR_IMAGE")
+        image_link_name = get_bb_var('IMAGE_LINK_NAME', image)
+
+        manifest_link = os.path.join(deploy_dir, "%s.cve" % image_link_name)
+
+        self.logger.info('CVE_CHECK_MANIFEST = "%s"' % manifest_link)
+        if (not 'cve-check' in get_bb_var('INHERIT')):
+            add_cve_check_config = 'INHERIT += "cve-check"'
+            self.append_config(add_cve_check_config)
+        self.append_config('CVE_CHECK_MANIFEST = "%s"' % manifest_link)
+        result = bitbake("-k -c cve_check %s" % image, ignore_status=True)
+        if (not 'cve-check' in get_bb_var('INHERIT')):
+            self.remove_config(add_cve_check_config)
+
+        isfile = os.path.isfile(manifest_link)
+        self.assertEqual(True, isfile, 'Failed to create cve data file : %s' % manifest_link)
+





Re: [yocto] QA notification for completed autobuilder build (yocto-2.8_M1.rc2)

2019-06-14 Thread Jain, Sangeeta
QA cycle report for 2.8 M1 RC2:

1. No high milestone defects.
2. Test results are available at the following locations:
   - For results of all automated tests, please refer to the results at the public AB [1].
   - For other test results, refer to the git repo "yocto-testresults-contrib" [2].
   - For the test report for test cases run by the Intel and WR teams, refer to the git repo "yocto-testresults-contrib" [2].
3. 3 new defects were found in this cycle: BSP HW parselogs issue [3], BSP HW audio issue [4], and a qemux86 oe-selftest failure on the AB [5].
4. ptests failing in this release which were passing in the previous release: acl [6]. 3 timeout issues were observed in ptests: acl [7], bluez5 [8] and mdadm [9].
   Richard's comments on the ptest failures: There are three timeouts in ptest: acl, bluez5 (#13366) and mdadm (#13368). These are also not going to block the milestone release. The bluez5 issue is gcc9 related; mdadm was exposed by wider testing in master. The acl issue needs a bug opening and investigation, as that is a regression.
Summary of ptest in this build:
qemuarm64-ptest:
  Total: 435597
  Pass:  433344
  Fail:  65
  Skip:  2188
  Timeout issues in bluez5 and mdadm
qemux86-64-ptest:
  Total: 45346
  Pass:  43235
  Fail:  56
  Skip:  2055
  Timeout issues in acl, bluez5 and mdadm

5. Summary of the ltp test run in this build:
qemuarm64-ltp:
  Total: 1886
  Pass:  1759
  Fail:  127
qemux86-64-ltp:
  Total: 1879
  Pass:  1825
  Fail:  54

=== Links 

[1] - https://autobuilder.yocto.io/pub/releases/yocto-2.8_M1.rc2/testresults/

[2] - git clone g...@push.yoctoproject.org:yocto-testresults-contrib -b zeus

[3] - [QA 2.8 M1 RC1][BSP HW] parselogs.ParseLogsTest.test_parselogs failure on coffeelake
https://bugzilla.yoctoproject.org/show_bug.cgi?id=13396

[4] - [QA 2.8 M1 RC1][BSP HW] audio is not playing on coffeelake and nuc7
https://bugzilla.yoctoproject.org/show_bug.cgi?id=13397

[5] - [QA 2.8 M1 RC1] test_boot_machine_slirp is showing error for qemux86
https://bugzilla.yoctoproject.org/show_bug.cgi?id=13399

[6] - [QA 2.8 M1 RC1] acl ptest failure
https://bugzilla.yoctoproject.org/show_bug.cgi?id=13395

[7] - acl ptest timeout due to perl update
https://bugzilla.yoctoproject.org/show_bug.cgi?id=13391

[8] - bluez5 ptest hangs with gcc 9
https://bugzilla.yoctoproject.org/show_bug.cgi?id=13366

[9] - mdadm ptest times out
https://bugzilla.yoctoproject.org/show_bug.cgi?id=13368


Thanks & Regards,
Sangeeta Jain
