Re: [yocto] Building, Using SDK

2018-09-17 Thread Dudziak Krzysztof
Hi,
Thanks for your feedback.
Building packages and creating images natively on the embedded system itself is probably
not the common case, as embedded systems frequently do not provide the compute power
needed for that.
So let's concentrate on cross-compilation.

One first needs a working toolchain.
In the next steps one can build the BSP, user space, and image(s).
Is the sequence of steps as presented previously the right one?
step 1: bitbake <image> -c populate_sdk
step 2: build bootloader, kernel
step 3: bitbake <image>
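
For illustration only (machine and image names are hypothetical - substitute your own),
with MACHINE already set in local.conf and core-image-minimal as the image, that would be:

  $ bitbake core-image-minimal -c populate_sdk
  $ bitbake virtual/bootloader virtual/kernel
  $ bitbake core-image-minimal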

krzysiek

From: ChenQi [mailto:qi.c...@windriver.com]
Sent: Monday, 17 September 2018 05:50
To: Dudziak Krzysztof ; yocto@yoctoproject.org
Subject: Re: [yocto] Building, Using SDK

On 09/13/2018 10:18 PM, Dudziak Krzysztof wrote:
Hi,

In his book Embedded Linux Development Using Yocto Project (2nd Edition), Alex González
elaborates on SDK-related questions - basics, building, usage (chapter 4).

1.
According to Alex, downloading and then installing a precompiled SDK is one of the
available options.
He elaborates on how to find it on a server on the Internet, how to select the one
you need, and how to install it.
I wonder how to integrate a downloaded and installed precompiled SDK
with the Poky release used on the build system?


I'd suggest not using a precompiled SDK unless you are just doing some
simple cross-compilation that requires no additional libs, header files, etc.


2.
Preparing/building the SDK oneself is a further option, with the image target's
'populate_sdk' BitBake task
being the recommended way (according to Alex in a certain section of the chapter).
One would only need to start the populate_sdk task for the image which matches the
root file system of the system under development.
As soon as the task completes, the SDK can be installed using the generated script.
But how does it work for a first build, where the rootfs has not been built before?
Is the following procedure the proper one in that case?
step 1: bitbake <image> -c populate_sdk
step 2: bitbake <image>


The populate_sdk task directly installs rpm packages (nativesdk ones and target
ones) to form an SDK.
It does not need the rootfs to be generated first.

Normally you use an SDK for a specific target.
So `bitbake IMAGE' is used to generate the image, and `bitbake IMAGE -c
populate_sdk' is used to generate the SDK for the image.
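
As a concrete illustration (image name hypothetical - substitute your own):

  $ bitbake core-image-minimal -c populate_sdk
  $ ./tmp/deploy/sdk/poky-*-toolchain-*.sh     # run the generated installer script
  $ bitbake core-image-minimal                 # the image itself, in either order

The installer script lands under tmp/deploy/sdk/ in the build directory; sourcing the
environment-setup-* file it installs sets up the cross-toolchain in your shell.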

Best Regards,
Chen Qi


krzysiek







Re: [yocto] liblzma: memory allocation failed

2018-09-17 Thread Andrea Adami
On Mon, Sep 17, 2018 at 10:28 AM Burton, Ross  wrote:
>
> On Mon, 17 Sep 2018 at 08:13, Peter Bergin  wrote:
> > I'm pretty sure I have narrowed down the root cause to the restriction
> > of virtual memory and the fact that liblzma bases its memory calculations on
> > physical RAM.


Hello,

well, not only that.
You can also set the memory footprint for compression/decompression.

In OE, for legacy kernels, we use this in our BSP:
# sane defaults for devices with only 32MB RAM (see man xz)
XZ_COMPRESSION_LEVEL = "-2e"

The default is -3; -2 uses roughly half the RAM for compressing.
Please check man xz.
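
To see the numbers concretely, here is a minimal sketch of mine (not from this
thread) that uses liblzma's public API to print the single-threaded encoder
memory requirement for each preset:

    /* build with: gcc xz-memusage.c -llzma -o xz-memusage */
    #include <stdio.h>
    #include <stdint.h>
    #include <lzma.h>

    int main(void)
    {
        for (uint32_t preset = 0; preset <= 9; preset++) {
            /* memory (in bytes) a single-threaded encoder needs at this preset */
            uint64_t usage = lzma_easy_encoder_memusage(preset);
            printf("preset -%u: %llu MiB\n", preset,
                   (unsigned long long)(usage / (1024 * 1024)));
        }
        return 0;
    }

The multi-threaded variant (lzma_stream_encoder_mt_memusage, used by rpm below)
grows roughly with the thread count on top of that.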

Cheers
Andrea


> >
> > To prove this I added a printout in rpm-native/rpmio/rpmio.c and the
> > function lzopen_internal.
> >
> > uint64_t memory_usage = lzma_stream_encoder_mt_memusage(&mt_options);
> > rpmlog(RPMLOG_NOTICE, "DBG: memory_usage %lu\n", memory_usage);
> >
> >
> > The value of memory_usage is the same regardless of which 'ulimit -v'
> > value I set. On the host with 256GB of physical RAM and 32GB of virtual
> > memory, memory_usage is ~5.1GB. On another host with 16GB of physical
> > RAM I get memory_usage of ~660MB.
> >
> > I guess you have not seen this kind of failure if you have not
> > restricted virtual memory on your host. If you want to try to reproduce
> > this, set 'ulimit -v 8388608' (8GB) in your shell and then run 'bitbake
> > glibc-locale -c package_write_rpm -f'.
>
> Wouldn't a solution be to change lzma to look at free memory, not
> total physical memory?
>
> Ross


Re: [yocto] liblzma: memory allocation failed

2018-09-17 Thread Burton, Ross
On Mon, 17 Sep 2018 at 08:13, Peter Bergin  wrote:
> I'm pretty sure I have narrowed down the root cause to the restriction
> of virtual memory and the fact that liblzma bases its memory calculations on
> physical RAM.
>
> To prove this I added a printout in rpm-native/rpmio/rpmio.c and the
> function lzopen_internal.
>
> uint64_t memory_usage = lzma_stream_encoder_mt_memusage(&mt_options);
> rpmlog(RPMLOG_NOTICE, "DBG: memory_usage %lu\n", memory_usage);
>
>
> The value of memory_usage is the same regardless of which 'ulimit -v'
> value I set. On the host with 256GB of physical RAM and 32GB of virtual
> memory, memory_usage is ~5.1GB. On another host with 16GB of physical
> RAM I get memory_usage of ~660MB.
>
> I guess you have not seen this kind of failure if you have not
> restricted virtual memory on your host. If you want to try to reproduce
> this, set 'ulimit -v 8388608' (8GB) in your shell and then run 'bitbake
> glibc-locale -c package_write_rpm -f'.

Wouldn't a solution be to change lzma to look at free memory, not
total physical memory?
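
Sketching what that could look like (my sketch, not actual liblzma or rpm code):
the encoder sizing in lzopen_internal could be capped by both currently free
memory and the address-space limit that 'ulimit -v' sets:

    #include <stdint.h>
    #include <unistd.h>
    #include <sys/resource.h>

    /* Upper bound for compressor memory: free physical RAM, further
     * capped by RLIMIT_AS ('ulimit -v'), which liblzma ignores today. */
    static uint64_t usable_memory(void)
    {
        uint64_t page  = (uint64_t)sysconf(_SC_PAGESIZE);
        uint64_t avail = (uint64_t)sysconf(_SC_AVPHYS_PAGES) * page;

        struct rlimit rl;
        if (getrlimit(RLIMIT_AS, &rl) == 0 && rl.rlim_cur != RLIM_INFINITY &&
            (uint64_t)rl.rlim_cur < avail)
            avail = (uint64_t)rl.rlim_cur;

        return avail;
    }

lzopen_internal could then compare lzma_stream_encoder_mt_memusage() against
this bound and reduce the thread count (or preset) until it fits.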

Ross


Re: [yocto] liblzma: memory allocation failed

2018-09-17 Thread Peter Bergin

Hi Randy,

On 2018-09-17 06:25, Randy MacLeod wrote:

On 09/16/2018 04:40 PM, Peter Bergin wrote:

Hi,

during the task do_package_write_rpm I get the error "liblzma: Memory
allocation failed". It happens during packaging of binary RPM
packages. The root cause seems to be the host environment that is
used in our project. We run our builds on a big server with 32 cores
and 256GB of physical RAM, but each user is limited to 32GB of virtual
memory usage (ulimit -v). The packaging in rpm-native was parallelized
in the commit
http://git.yoctoproject.org/cgit/cgit.cgi/poky/commit/meta/recipes-devtools/rpm?id=84e0bb8d936f1b9094c9d5a92825e9d22e1bc7e3.
What seems to happen is that rpm-native puts up 32 parallel tasks with
'#pragma omp', and each task uses liblzma, which also puts up 32 threads for


#pragma omp

That's OpenMP, right? I haven't played with that at all, but
it looks like you can limit the number of threads using an
environment variable:
   OMP_NUM_THREADS=num
https://www.openmp.org/wp-content/uploads/OpenMP3.0-SummarySpec.pdf

Doing that would be a little ugly, but for now at least, there don't
seem to be that many packages using such a pragma.

Does that work for your case?

Yes, it's OpenMP. I tried '#pragma omp parallel num_threads(4)' and it
worked as a workaround. On the failing server the build succeeded. The
problem is turning this into a generic solution based on the host settings,
because the #pragma is a compile-time directive. But for sure we can make a
bbappend on this to get it working on our host.
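
One possible direction (a sketch of mine, not a tested patch): OpenMP also has a
runtime API, so the thread count could be chosen at run time from the host's
limits instead of being hard-coded in the pragma:

    #include <omp.h>

    /* Clamp the number of OpenMP threads used by later parallel regions.
     * max_threads could be derived from e.g. RLIMIT_AS divided by the
     * per-thread lzma memory requirement. */
    static void limit_pack_threads(int max_threads)
    {
        int n = omp_get_max_threads();  /* honours OMP_NUM_THREADS if set */
        if (n > max_threads)
            n = max_threads;
        omp_set_num_threads(n);
    }

Called before the '#pragma omp parallel' region in pack.c, this would keep the
pragma itself untouched while still bounding the fan-out.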


the compression work. The memory calculations in liblzma are based on
the amount of physical RAM, but as the user is limited by 'ulimit -v',
we get into an OOM situation in liblzma.


Here is the code snippet from rpm-native/build/pack.c where it happens:

    #pragma omp parallel
    #pragma omp single
    // re-declaring task variable is necessary, or older gcc versions
    // will produce code that segfaults
    for (struct binaryPackageTaskData *task = tasks; task != NULL;
         task = task->next) {
        if (task != tasks)
        #pragma omp task
        {
            task->result = packageBinary(spec, task->pkg, cookie, cheating,
                                         &(task->filename), buildTime, buildHost);
            rpmlog(RPMLOG_NOTICE, _("Finished binary package job, result %d, filename %s\n"),
                   task->result, task->filename);
        }
    }


Steps to reproduce: set 'ulimit -v' in your shell to, for
example, 1/8 of the amount of physical RAM, and then build, for example,
glibc-locale. I have tested this with rocko. If the '#pragma omp'
statements in the code snippet above are removed, the problem is solved,
but that is not good, as the parallel processing speeds things up.


Is the host environment used here, with restricted virtual memory,
supported by Yocto? If it is, does anyone have a suggestion for a
solution to this issue?



This is a little tricky.

From bitbake's point of view, it's almost as if you are building
on a 32 core, 32 GB box and running out of RAM/swap.
Clearly we would not fix a build that OOMs in that case
(it does seem odd that 32 GB isn't enough ...).

Are you sure that there isn't something else going on?
I have a 24 core machine with 64 GB RAM that never comes
close to such a problem (so I haven't paid attention to RAM usage).

I'm pretty sure I have narrowed down the root cause to the restriction
of virtual memory and the fact that liblzma bases its memory calculations on
physical RAM.


To prove this I added a printout in rpm-native/rpmio/rpmio.c and the 
function lzopen_internal.


uint64_t memory_usage = lzma_stream_encoder_mt_memusage(&mt_options);
rpmlog(RPMLOG_NOTICE, "DBG: memory_usage %lu\n", memory_usage);


The value of memory_usage is the same regardless of which 'ulimit -v' 
value I set. On the host with 256GB of physical RAM and 32GB of virtual 
memory, memory_usage is ~5.1GB. On another host with 16GB of physical 
RAM I get memory_usage of ~660MB.


I guess you have not seen this kind of failure if you have not
restricted virtual memory on your host. If you want to try to reproduce
this, set 'ulimit -v 8388608' (8GB) in your shell and then run 'bitbake
glibc-locale -c package_write_rpm -f'.


Best regards,
/Peter



../Randy



Best regards,
/Peter

--
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto