Re: [yocto] [RFC] CVEs on sumo branch

2018-09-24 Thread Sinan Kaya

On 9/24/2018 10:27 AM, Sinan Kaya wrote:

Here is another one.

https://patches.openembedded.org/patch/154290/


I see you pulled this into rocko-nmut. It is also needed for the sumo
branch.
--
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] How to debug unbuildable module error?

2018-09-24 Thread Paul Eggleton
Hi Clay

Somewhat late, but I stumbled across this thread this morning.

On Monday, 16 July 2018 9:29:07 AM NZST Clay D. Montgomery wrote:
> 
> Is there a way to get bitbake to provide more information about why a 
> module is unbuildable?
> 
> I am trying to build this example:
> 
>   poky/meta-skeleton/recipes-kernel/hello-mod/hello-mod_0.1.bb
> 
> However, bitbake always reports:
> 
>   "Runtime target 'hello-mod' is unbuildable, removing..."

The actual issue will be printed above this message. Usually it's because the 
recipe depends upon something else that could not be built (and the reason for 
that will be in the message above).
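
If the real error has already scrolled past, one way to dig further (a
sketch - the exact graph file names can vary between bitbake releases) is:

  # capture the full output and look just above the "unbuildable" line:
  bitbake hello-mod 2>&1 | tee build.log
  grep -B 10 "unbuildable" build.log

  # or dump the dependency graph and see what hello-mod pulls in:
  bitbake -g hello-mod && grep hello-mod task-depends.dot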

Cheers,
Paul

-- 

Paul Eggleton
Intel Open Source Technology Centre


-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] [PATCH] runtime: add BSP test case for usb storage

2018-09-24 Thread Paul Eggleton
On Monday, 24 September 2018 3:02:28 PM NZST Hussin, Mohamad Noor Alim wrote:
> This patch supersedes my earlier "manualbsp" patch, which I split into two
> parts: a usb test and a microsd test.

OK thanks.

> I would use MACHINE_FEATURES = "usbhost" instead of introducing a new
> variable, as it would eliminate the problem in case someone tries to execute
> the usb test case without enabling "HARDWARE_TEST = 1"

Sorry, I don't understand - do you mean you will use it, or that you can't?

> or download the image from the public autobuilder.

The test won't do anything in an image by default, only when you're running 
the test. In any case if the test isn't writing to the device there shouldn't 
be any major risk.

> > Otherwise I agree with Mike's reply, we should avoid writing to the storage 
> > device as part of the test.
> 
> Does that mean we should just do tests like mount and unmount only? To read
> something from the storage device we need to write it in the first place?

That's true - but you can still do a read test if you make it a precondition
of the test that some known file has been written to the storage device
beforehand (as part of setup, just as you need to set up the board/device
before running any tests - you just need to ensure this gets documented
somewhere).
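
For example, something along these lines (just a sketch; the device node
/dev/sda1 and the mount point are assumptions that depend on the board):

  # setup, done once when preparing the device (writes the known file):
  mount /dev/sda1 /mnt
  echo "usb-test-data" > /mnt/testfile
  umount /mnt

  # the test itself then stays read-only:
  mount /dev/sda1 /mnt
  [ "$(cat /mnt/testfile)" = "usb-test-data" ] && echo PASS || echo FAIL
  umount /mnt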

Cheers,
Paul

-- 

Paul Eggleton
Intel Open Source Technology Centre


-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


[yocto] Python3 sqlite does not work

2018-09-24 Thread Fabian Sturm
Hi, 

I added python3 to my yocto build and it seems python3-sqlite3 is not
working due to a missing file. If I change python3-manifest.json from here:

  http://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/meta/recipes-devtools/python/python3/python3-manifest.json?h=sumo

to also include the __init__.py file, then it seems to be working fine.


diff --git a/meta/recipes-devtools/python/python3/python3-manifest.json 
b/meta/recipes-devtools/python/python3/python3-manifest.json
index 2491f36..607eb2b 100644
--- a/meta/recipes-devtools/python/python3/python3-manifest.json
+++ b/meta/recipes-devtools/python/python3/python3-manifest.json
@@ -979,12 +979,14 @@
 "cached": [
 "${libdir}/python3.5/sqlite3/__pycache__",
 "${libdir}/python3.5/sqlite3/__pycache__/dbapi2.*.pyc",
-"${libdir}/python3.5/sqlite3/__pycache__/dump.*.pyc"
+"${libdir}/python3.5/sqlite3/__pycache__/dump.*.pyc",
+"${libdir}/python3.5/sqlite3/__pycache__/__init__.pyc"
 ],
 "files": [
 "${libdir}/python3.5/lib-dynload/_sqlite3.*.so",
 "${libdir}/python3.5/sqlite3/dbapi2.py",
-"${libdir}/python3.5/sqlite3/dump.py"
+"${libdir}/python3.5/sqlite3/dump.py",
+"${libdir}/python3.5/sqlite3/__init__.py"
 ],

To test it, it is enough to run python3 and do:

  import sqlite3
  help(sqlite3)

You should see many functions (e.g. connect) instead of only two.
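
A slightly stronger functional check (a sketch using only the standard
library; it should print [(1,)] when the module is packaged correctly):

  python3 -c "import sqlite3; c = sqlite3.connect(':memory:'); \
  c.execute('CREATE TABLE t (x)'); c.execute('INSERT INTO t VALUES (1)'); \
  print(c.execute('SELECT x FROM t').fetchall())"

With __init__.py missing, sqlite3 typically imports as an empty namespace
package, so the connect call fails with an AttributeError - which matches
the near-empty help() output.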

Is this a bug, or am I doing something wrong?

Kind regards,
Fabian

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


[yocto] QA notification for completed autobuilder build (yocto-2.6_M3.rc1)

2018-09-24 Thread Michael Halstead
A build flagged for QA (yocto-2.6_M3.rc1) was completed on the

autobuilder and is available at:
https://autobuilder.yocto.io/pub/releases/yocto-2.6_M3.rc1

Build hash information:

bitbake: e72be96cfa9f05fda5f420c7cfa8bcfa9304b884
eclipse-poky-neon: 303e46a6848f1937d12541a7fd58e61aa1361225
eclipse-poky-oxygen: f1a20dc6a5a252a4ed4484b618d579cbbc7d146e
meta-gplv2: 813b7d2b5573d8591c6cd8087b326f0a0703d6b9
meta-intel: ba19605153725d66dc55e0a16962ae684e66c9e4
meta-mingw: 17887019257143f649a0f2e9caa9f814f7fb0b2a
meta-qt3: 02f273cba6c25f5cf20cb66d8a417a83772c3179
meta-qt4: 8e791c40140460825956430ba86b6266fdec0a93
oecore: e3f5ceec690379d64bd38330111a32f6c8fc6353
poky: c6edf2f8bc5778b267e3a7f4e8875d22f359fb8d

This is an automated message from the Yocto Project Autobuilder
Git: git://git.yoctoproject.org/yocto-autobuilder2
Email: richard.pur...@linuxfoundation.org

-- 
Michael Halstead
Linux Foundation / SysAdmin
-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


[yocto] Yocto Project Unassigned Bugs - Help Needed

2018-09-24 Thread Jolley, Stephen K
All,



The triage team meets weekly and does its best to handle the bugs reported
into Bugzilla. The number of people attending that meeting has fallen, as has
the number of people available to help fix bugs. One of the things we hear
users report is that they don't know how to help. We (the triage team) are
therefore going to start reporting out the 308 currently unassigned bugs.



We're hoping people may be able to spare some time now and again to help out 
with these.



Bugs are split into two types: "true bugs", where things don't work as they
should, and "enhancements", which are features we'd want to add to the system.



There are also five different "priority" classes right now - "2.6", "2.7",
"2.8", "2.99" and "Future" - with the more pressing/urgent issues being in
"2.6" and then "2.7".



Please review the list linked below, and if a bug is something you would be
able to help with, either take ownership of the bug or send me
(stephen.k.jol...@intel.com) an e-mail with the bug number you would like, and
I will assign it to you (please make sure you have a Bugzilla account).



The list is at: https://wiki.yoctoproject.org/wiki/Bug_Triage#Unassigned_Bugs


Thanks,


Stephen K. Jolley
Yocto Project Program Manager
INTEL, MS JF1-255, 2111 N.E. 25th Avenue, Hillsboro, OR 97124
Cell: (208) 244-4460
Email: stephen.k.jol...@intel.com

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] [yocto-docs][PATCH] documentation: Document debug-with-srcpkg

2018-09-24 Thread Scott Rifenbark
Joshua,

Thanks for the patch. I have applied it. You can see it at:
https://yoctoproject.org/docs/2.6/ref-manual/ref-manual.html#var-PACKAGE_DEBUG_SPLIT_STYLE

Scott

On Sun, Sep 23, 2018 at 8:24 PM Joshua Watt  wrote:

> Adds documentation for the "debug-with-srcpkg" option for the
> PACKAGE_DEBUG_SPLIT_STYLE variable
>
> Signed-off-by: Joshua Watt 
> ---
>  documentation/ref-manual/ref-variables.xml | 6 ++
>  1 file changed, 6 insertions(+)
>
> diff --git a/documentation/ref-manual/ref-variables.xml
> b/documentation/ref-manual/ref-variables.xml
> index 05487cf83f6..1bc6e2b77d5 100644
> --- a/documentation/ref-manual/ref-variables.xml
> +++ b/documentation/ref-manual/ref-variables.xml
> @@ -9646,6 +9646,12 @@ recipes-graphics/xorg-font/font-alias_1.0.3.bb:PR
> = "${INC_PR}.3"
>  ".debug" previously described with the
> exception
>  that no source files are installed.
>  .
> +
> +"debug-with-srcpkg": The same behavior as
> +".debug" previously described with the
> exception
> +that all source files are placed in a separate
> +*-src pkg.
> +
>  
>  
>
> --
> 2.17.1
>
>
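
For anyone wanting to try the new option, it can be enabled in the usual way,
e.g. in local.conf:

  PACKAGE_DEBUG_SPLIT_STYLE = "debug-with-srcpkg"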
-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] [RFC] CVEs on sumo branch

2018-09-24 Thread Sinan Kaya

On 9/22/2018 10:46 AM, akuster wrote:

Signed-off-by: Zheng Ruoqin
Signed-off-by: Richard Purdie

What does it take to move this in the right direction?

Bring it to my attention like you have.

I backported it for the next build round.


Here is another one.

https://patches.openembedded.org/patch/154290/

I'll post

gnupg: CVE-2018-9234

shortly.
--
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


[yocto] [PATCH] msbuild: add ca-certificates for curl

2018-09-24 Thread Jef Driesen
The msbuild bootstrap script uses curl to download some pre-built files. The
ca-certificates are required in order to support https. CURL_CA_BUNDLE is used
to override the built-in path, because that path is hardcoded to curl's own
sysroot, which may already have been removed when building msbuild.

Signed-off-by: Jef Driesen 
---
 recipes-mono/msbuild/msbuild_git.bb | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/recipes-mono/msbuild/msbuild_git.bb 
b/recipes-mono/msbuild/msbuild_git.bb
index 8a89543..6a6761a 100644
--- a/recipes-mono/msbuild/msbuild_git.bb
+++ b/recipes-mono/msbuild/msbuild_git.bb
@@ -4,7 +4,7 @@ SECTION = "console/apps"
 LICENSE = "MIT"
 DEPENDS = " \
coreutils-native findutils-native curl-native \
-   unzip-native mono-native \
+   unzip-native mono-native ca-certificates-native \
"
 
 LIC_FILES_CHKSUM = "file://LICENSE;md5=768c552053144071f8ef7e5621714b0a"
@@ -23,7 +23,7 @@ SRC_URI = " \
 S = "${WORKDIR}/git"
 
 do_compile () {
-   ./cibuild.sh --scope Compile --host Mono --target Mono --config Release --bootstrap-only
+   CURL_CA_BUNDLE=${STAGING_DIR_NATIVE}/etc/ssl/certs/ca-certificates.crt ./cibuild.sh --scope Compile --host Mono --target Mono --config Release --bootstrap-only
 }
 
 do_install () {
-- 
2.7.4
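
For reference, CURL_CA_BUNDLE is curl's standard environment variable for
pointing at a CA bundle, so the override can be sanity-checked on the host
(a sketch; the bundle path below is the usual host location but may differ):

  CURL_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt curl -sI https://github.com | head -n 1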

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] liblzma: memory allocation failed

2018-09-24 Thread Andrea Adami
On Mon, Sep 24, 2018 at 9:08 AM Peter Bergin  wrote:
>
> On 2018-09-24 00:06, Andrea Adami wrote:
> > On Fri, Sep 21, 2018 at 1:04 AM Andrea Adami  wrote:
> >> On Thu, Sep 20, 2018 at 1:03 PM Peter Bergin  
> >> wrote:
> >>> On 2018-09-17 10:27, Burton, Ross wrote:
> >>>
> >>> On Mon, 17 Sep 2018 at 08:13, Peter Bergin  wrote:
> >>>
> >>> I'm pretty sure I have narrowed down the root cause to the restriction
> >>> of virtual memory and that liblzma bases its memory calculations on
> >>> physical RAM.
> >>>
> >>> To prove this I added a printout in rpm-native/rpmio/rpmio.c and the
> >>> function lzopen_internal.
> >>>
> >>>   uint64_t memory_usage = lzma_stream_encoder_mt_memusage(&mt_options);
> >>>   rpmlog(RPMLOG_NOTICE, "DBG: memory_usage %lu\n", memory_usage);
> >>>
> >>>
> >>> The value of memory_usage is the same regardless of which 'ulimit -v'
> >>> value I set. On the host with 256GB of physical RAM and 32GB of virtual
> >>> memory, memory_usage is ~5.1GB. On another host with 16GB of physical
> >>> RAM I get memory_usage of ~660MB.
> >>>
> >>> I guess you have not seen this kind of failure if you have not
> >>> restricted virtual memory on your host. If you want to try to reproduce
> >>> this set 'ulimit -v 8388608' (8GB) in your shell and then 'bitbake
> >>> glibc-locale -c package_write_rpm -f'.
> >>>
> >>> Wouldn't a solution be to change lzma to look at free memory, not
> >>> total physical memory?
> >>>
> >>> Ross
> >>>
> >>> I have been in contact with the maintainer of liblzma. There is currently
> >>> no way to restrict the memory usage in liblzma during multi-threaded
> >>> compression. He recommended adjusting the number of threads used during
> >>> compression. This kind of check is done for 32-bit but not when running
> >>> 64-bit (in rpmio/rpmio.c lzopen_internal). Rewriting liblzma is another
> >>> option, but I took an easier way out.
> >>>
> >>> I have come up with a patch
> >>> (https://patchwork.openembedded.org/patch/155017/) that solves my problem
> >>> and restricts memory usage when 'ulimit -v' is set. The calculation is
> >>> based on the assumption that lzopen_internal runs in parallel with as
> >>> many instances as there are cpu threads, since '#pragma omp parallel' is
> >>> used in build/pack.c.
> >>>
> >>> When running a test on my machine with 4 cores, 16GB of physical RAM and
> >>> 'ulimit -v 2097152' (2GB), it works well and the log shows:
> >>>
> >>> XZ: virtual memory restricted to 2147483648 and per CPU thread 536870912
> >>> XZ: Adjusted the number of threads from 4 to 3 to not exceed the memory 
> >>> usage limit of 2147483648 bytes
> >>>
> >>> I didn't get a clear answer whether this is something Yocto/OE should
> >>> support, but I hope my patch solves the issue and it will not affect
> >>> normal environments where 'ulimit -v' is not set.
> >>>
> >>> /Peter
> >>> --
> >> Peter,
> >> first of all, sorry for butting in again. My bad, I misunderstood the point.
> >>
> >> Looking properly at it, I see the sources restrict this only under
> >> #if __WORDSIZE == 32.
> >>
> >> Again, there is another way to reduce the total memory footprint: just
> >> use a lower preset in mt_options.
> >> As far as I can see the default is #define LZMA_PRESET_DEFAULT UINT32_C(6),
> >> so you could try to modify mt_options.preset to your needs.
> >> I think more threads is normally better.
> >>
> >> Cheers
> >> Andrea
> >>
> >>> ___
> >>> yocto mailing list
> >>> yocto@yoctoproject.org
> >>> https://lists.yoctoproject.org/listinfo/yocto
> >
> > Well,
> > call it luck, I now get similar problems with
> > -  opkg-utils: use multithreaded xz when building packages
> > 369c43cca4f0
> >
> > I build in tmpfs (8GB) and have 8 cores.
> > Strangely enough, it happens with only one machine of our bsp (akita).
> >
> > Build Configuration:
> > BB_VERSION   = "1.39.1"
> > BUILD_SYS= "x86_64-linux"
> > NATIVELSBSTRING  = "ubuntu-16.04"
> > TARGET_SYS   = "arm-oe-linux-gnueabi"
> > MACHINE  = "akita"
> > DISTRO   = "nodistro"
> > DISTRO_VERSION   = "nodistro.0"
> > TUNE_FEATURES= "arm armv5 thumb dsp"
> > TARGET_FPU   = "soft"
> > meta = "master:8cc08e44cf3e969c124456d459c6c06a76cad018"
> > meta-handheld= "master:61e16e253ea870b87592075efd87727e8b5c574d"
> > meta-initramfs
> > meta-oe  = "master:7ac8d82fec470a8c8e66cfc4216381e2364cb4b2"
> >
> > ...
> > Installing kernel-module-nls-cp1251-4.14.62-handheld (4.14) on root
> > Downloading 
> > file:/tmp/build/tmp-glibc/work/akita-oe-linux-gnueabi/core-image-base/1.0-r0/oe-rootfs-repo/akita/kernel-module-nls-cp1251-4.14.62-handheld_4.14-r0_akita.ipk.
> > Installing kernel-mo
> > ERROR: core-image-base-1.0-r0 do_rootfs: Function failed: do_rootfs
> > ERROR: Logfile of failure stored in:
> > 

Re: [yocto] liblzma: memory allocation failed

2018-09-24 Thread Peter Bergin

On 2018-09-24 00:06, Andrea Adami wrote:

On Fri, Sep 21, 2018 at 1:04 AM Andrea Adami  wrote:

On Thu, Sep 20, 2018 at 1:03 PM Peter Bergin  wrote:

On 2018-09-17 10:27, Burton, Ross wrote:

On Mon, 17 Sep 2018 at 08:13, Peter Bergin  wrote:

I'm pretty sure I have narrowed down the root cause to the restriction
of virtual memory and that liblzma bases its memory calculations on
physical RAM.

To prove this I added a printout in rpm-native/rpmio/rpmio.c and the
function lzopen_internal.

  uint64_t memory_usage = lzma_stream_encoder_mt_memusage(&mt_options);
  rpmlog(RPMLOG_NOTICE, "DBG: memory_usage %lu\n", memory_usage);


The value of memory_usage is the same regardless of which 'ulimit -v'
value I set. On the host with 256GB of physical RAM and 32GB of virtual
memory, memory_usage is ~5.1GB. On another host with 16GB of physical
RAM I get memory_usage of ~660MB.

I guess you have not seen this kind of failure if you have not
restricted virtual memory on your host. If you want to try to reproduce
this set 'ulimit -v 8388608' (8GB) in your shell and then 'bitbake
glibc-locale -c package_write_rpm -f'.

Wouldn't a solution be to change lzma to look at free memory, not
total physical memory?

Ross

I have been in contact with the maintainer of liblzma. There is currently no
way to restrict the memory usage in liblzma during multi-threaded compression.
He recommended adjusting the number of threads used during compression. This
kind of check is done for 32-bit but not when running 64-bit (in
rpmio/rpmio.c lzopen_internal). Rewriting liblzma is another option, but I
took an easier way out.

I have come up with a patch (https://patchwork.openembedded.org/patch/155017/)
that solves my problem and restricts memory usage when 'ulimit -v' is set. The
calculation is based on the assumption that lzopen_internal runs in parallel
with as many instances as there are cpu threads, since '#pragma omp parallel'
is used in build/pack.c.

When running a test on my machine with 4 cores, 16GB of physical RAM and
'ulimit -v 2097152' (2GB), it works well and the log shows:

XZ: virtual memory restricted to 2147483648 and per CPU thread 536870912
XZ: Adjusted the number of threads from 4 to 3 to not exceed the memory usage 
limit of 2147483648 bytes
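
In shell terms, the arithmetic behind those two log lines is roughly the
following (a sketch of the idea, not the actual patch code):

  # assumes 'ulimit -v' is actually set (it prints "unlimited" otherwise)
  limit_kib=$(ulimit -v)                        # virtual memory limit in KiB
  threads=$(nproc)
  per_thread=$(( limit_kib * 1024 / threads ))  # bytes per lzopen_internal instance
  echo "XZ: virtual memory restricted to $(( limit_kib * 1024 )) and per CPU thread $per_thread"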

I didn't get a clear answer whether this is something Yocto/OE should support,
but I hope my patch solves the issue and it will not affect normal environments
where 'ulimit -v' is not set.

/Peter
--

Peter,
first of all, sorry for butting in again. My bad, I misunderstood the point.

Looking properly at it, I see the sources restrict this only under #if __WORDSIZE == 32.

Again, there is another way to reduce the total memory footprint: just use
a lower preset in mt_options.
As far as I can see the default is #define LZMA_PRESET_DEFAULT UINT32_C(6),
so you could try to modify mt_options.preset to your needs.
I think more threads is normally better.
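
For reference, the same trade-off is visible with the xz command line tool,
where the preset drives the dictionary size and thus the per-thread memory
(big.tar is just a placeholder input):

  xz -c -T4 -6 big.tar > /dev/null   # default preset 6
  xz -c -T4 -3 big.tar > /dev/null   # lower preset: smaller dictionary, less memory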

Cheers
Andrea


___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Well,
call it luck, I now get similar problems with
-  opkg-utils: use multithreaded xz when building packages
369c43cca4f0

I build in tmpfs (8GB) and have 8 cores.
Strangely enough, it happens with only one machine of our bsp (akita).

Build Configuration:
BB_VERSION   = "1.39.1"
BUILD_SYS= "x86_64-linux"
NATIVELSBSTRING  = "ubuntu-16.04"
TARGET_SYS   = "arm-oe-linux-gnueabi"
MACHINE  = "akita"
DISTRO   = "nodistro"
DISTRO_VERSION   = "nodistro.0"
TUNE_FEATURES= "arm armv5 thumb dsp"
TARGET_FPU   = "soft"
meta = "master:8cc08e44cf3e969c124456d459c6c06a76cad018"
meta-handheld= "master:61e16e253ea870b87592075efd87727e8b5c574d"
meta-initramfs
meta-oe  = "master:7ac8d82fec470a8c8e66cfc4216381e2364cb4b2"

...
Installing kernel-module-nls-cp1251-4.14.62-handheld (4.14) on root
Downloading 
file:/tmp/build/tmp-glibc/work/akita-oe-linux-gnueabi/core-image-base/1.0-r0/oe-rootfs-repo/akita/kernel-module-nls-cp1251-4.14.62-handheld_4.14-r0_akita.ipk.
Installing kernel-mo
ERROR: core-image-base-1.0-r0 do_rootfs: Function failed: do_rootfs
ERROR: Logfile of failure stored in:
/tmp/build/tmp-glibc/work/akita-oe-linux-gnueabi/core-image-base/1.0-r0/temp/log.do_rootfs.28481
ERROR: Task (/oe/oe-core/meta/recipes-core/images/core-image-base.bb:do_rootfs)
failed with exit code '1'
...

and in dmesg you see opkg segfaulting and OOM kicking in.
It is the same issue... it went unnoticed when building for other similar
devices (spitz). Maybe it did not happen when using musl as TC_LIBC.

Reverting the patch fixes that.
Ross, worth opening a bug?
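
One thing that might be worth experimenting with in the meantime (untested
here) is capping xz's compression memory via its standard XZ_DEFAULTS
environment variable, which makes a multithreaded xz scale down its thread
count instead of overcommitting:

  export XZ_DEFAULTS="--memlimit-compress=50%"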

Cheers
Andrea

Andrea,

do you have any virtual memory restrictions (ulimit -v) or overcommit
settings (vm.overcommit_memory) on your build host?


Regarding my problem with rpm and liblzma when virtual memory is
restricted, I have supplied a patch to