Re: [yocto] what's the proper value for BB_NUMBER_THREADS?

2011-10-30 Thread Christian Gagneraud

On 30/10/11 15:32, Robert P. J. Day wrote:


   All the docs recommend twice the number of cores (AFAICT), yet the
template local.conf file suggests that, for a quad core, a value of
4 would be appropriate.  Shouldn't that say 8?  Same with
PARALLEL_MAKE?


Hi Robert,

The Poky ref manual says (rule of thumb) x2 for BB_NUMBER_THREADS, and 
x1.5 for PARALLEL_MAKE.
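For a quad-core machine, the rule of thumb above would translate into a local.conf fragment like the following (the exact values are illustrative; tune them for your own hardware):

```conf
# local.conf sketch for a quad-core machine, following the rule of thumb:
# BB_NUMBER_THREADS ~ 2x cores, PARALLEL_MAKE ~ 1.5x cores.
BB_NUMBER_THREADS = "8"   # 2 x 4 cores: parallel BitBake tasks
PARALLEL_MAKE = "-j 6"    # 1.5 x 4 cores: passed to make as -j
```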


Chris




rday




--
Chris
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] Should I be worried about this nativesdk-packagegroup-sdk-host warning while building meta-toolchain

2013-07-23 Thread Christian Gagneraud

On 24/07/13 03:34, Denys Dmytriyenko wrote:

On Tue, Jul 23, 2013 at 04:13:18PM +0100, Paul Eggleton wrote:

On Tuesday 23 July 2013 10:13:10 Brian Hutchinson wrote:

I'm using master branch of Yocto and meta-ti (and meta-openembedded).

I built meta-toolchain for beaglebone machine type and then I went into my
local.conf and changed sdkmachine to i686 as I need both 64bit & 32bit
toolchains.  After modifying my local.conf I ran bitbake meta-toolchain
again and received these warnings:

hutch@neo:~/yocto_master_beaglebone/poky/build$ bitbake meta-toolchain
WARNING: Host distribution "Debian-7.1" has not been validated with this
version of the build system; you may possibly experience unexpected
failures. It is recommended that you use a tested distribution.
WARNING: Variable key FILES_${PN}-dev (${includedir} ${FILES_SOLIBSDEV}
${libdir}/*.la ${libdir}/*.o ${libdir}/pkgconfig ${datadir}/pkgconfig
${datadir}/aclocal ${base_libdir}/*.o ${libdir}/${BPN}/*.la
${base_libdir}/*.la) replaces original key FILES_ti-ipc-dev (${libdir}/*).

WARNING: Variable key FILES_${PN}-dev (${includedir} ${FILES_SOLIBSDEV}
${libdir}/*.la ${libdir}/*.o ${libdir}/pkgconfig ${datadir}/pkgconfig
${datadir}/aclocal ${base_libdir}/*.o ${libdir}/${BPN}/*.la
${base_libdir}/*.la) replaces original key FILES_ti-syslink-dev
(${libdir}/*).


These FILES_${PN}-dev warnings indeed come from meta-ti and are so far harmless -
I haven't had the bandwidth to fix them, as they seem low priority.


I got these ones as well (using poky, meta-oe, meta-ti and meta-qt5 all 
on Dylan), good to hear they are harmless.


Thanks,
Chris





Parsing recipes: 100%

NOTE: Resolving any missing task queue dependencies
NOTE: Preparing runqueue
NOTE: Executing SetScene Tasks
NOTE: Executing RunQueue Tasks
WARNING: The recipe nativesdk-packagegroup-sdk-host is trying to install
files into a shared area when those files already exist. Those files and
their manifest location are:

  /home/hutch/yocto_master_beaglebone/poky/build/tmp/pkgdata/all-pokysdk-linux/nativesdk-packagegroup-sdk-host
Matched in manifest-x86_64-nativesdk-packagegroup-sdk-host.packagedata

  /home/hutch/yocto_master_beaglebone/poky/build/tmp/pkgdata/all-pokysdk-linux/runtime/nativesdk-packagegroup-sdk-host.packaged
Matched in manifest-x86_64-nativesdk-packagegroup-sdk-host.packagedata

  /home/hutch/yocto_master_beaglebone/poky/build/tmp/pkgdata/all-pokysdk-linux/runtime/nativesdk-packagegroup-sdk-host
Matched in manifest-x86_64-nativesdk-packagegroup-sdk-host.packagedata

  /home/hutch/yocto_master_beaglebone/poky/build/tmp/pkgdata/all-pokysdk-linux/runtime-reverse/nativesdk-packagegroup-sdk-host
Matched in manifest-x86_64-nativesdk-packagegroup-sdk-host.packagedata
Please verify which package should provide the above files.


Should I be worried or do I need to do something about the
nativesdk-packagegroup-sdk-host warnings?

I've never seen them before and I couldn't find anyone else talking about
them so I thought I'd ask.


Depends. Without looking at the recipe and output closely it's hard to tell
whether this is indicative of a problem or not, but it should be resolved by
the meta-ti maintainers. Denys, you might want to take a closer look at this.


And the warnings in question about nativesdk-packagegroup-sdk-host have
absolutely nothing to do with meta-ti - there are no SDK packagegroup recipes
in meta-ti.





[yocto] Generating an image with systemd and connman

2013-07-23 Thread Christian Gagneraud

Hi there,

I have successfully generated Dylan core-image-minimal with the meta-ti 
layer. I would like to know what is the procedure to select systemd for 
the init process and connman as the network manager.


It seems to me that systemd can be selected using EXTRA_IMAGE_FEATURES 
from local.conf (at least with the poky distro).
The Yocto dev manual mentions using DISTRO_FEATURES_append, but that is 
only usable when creating a custom distro.


Concerning connman, I found references only in sato and self-hosted images.

As well, I would like to know how I can make the Qt build depend on 
connman, so that I get the right bearer plugin.


Any help or pointers greatly appreciated.

Regards,
Chris




Re: [yocto] External toolchain (sourcery)

2013-07-24 Thread Christian Gagneraud

On 25/07/13 02:49, Laszlo Papp wrote:

Hi,

is this officially supported by the Yocto project? I would not like to
use Yocto for my own purposes if it is something unsupported, as I
would need to put a significant investment into it to make the
releases buildable, et cetera.


FYI, The arago project uses an external Linaro toolchain [1]

Chris

[1] http://arago-project.org/wiki/index.php/Setting_Up_Build_Environment



Many thanks,
Laszlo







Re: [yocto] Generating an image with systemd and connman

2013-07-24 Thread Christian Gagneraud

On 24/07/13 19:31, Jukka Rissanen wrote:

Hi Christian,

On 24.07.2013 05:51, Christian Gagneraud wrote:

Hi there,

I have successfully generated Dylan core-image-minimal with the meta-ti
layer. I would like to know what is the procedure to select systemd for
the init process and connman as the network manager.

It seems to me that systemd can be selected using EXTRA_IMAGE_FEATURES
from local.conf (at least with the poky distro)
The Yocto dev manual mention using DISTRO_FEATURES_append but it is
usable when creating a custom distro.


I have

VIRTUAL-RUNTIME_init_manager = "systemd"
VIRTUAL-RUNTIME_initscripts = ""
DISTRO_FEATURES_append = " systemd"
DISTRO_FEATURES_BACKFILL_CONSIDERED="sysvinit"

in my distro conf file, and after that the generated system runs only
systemd. But I am creating a custom distro, so I am not sure if these
settings are of any help to you.


Thanks, I might have to go this way at some point anyway.
Are you using the meta-systemd layer? If so, could you give me more 
details about your setup please?


Chris




Cheers,
Jukka





[yocto] Detecting build type within recipe (target, native or nativesdk)

2013-07-25 Thread Christian Gagneraud

Hi there,

Is there a way to detect what kind of build is going on (target, native 
or nativesdk) from within a recipe or a class?


Basically, when using 'BBCLASSEXTEND = "native nativesdk"', how can I 
tweak the build behaviour depending on the build type?


Does anyone know of an example recipe I could use as a reference for doing 
this kind of thing?
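Not part of the original thread, but a hedged sketch of two approaches that were idiomatic at the time (recipe syntax of that era; the EXTRA_OECONF values and task name are purely illustrative):

```conf
# 1. BBCLASSEXTEND adds class-native / class-nativesdk overrides, so
#    per-variant tweaks can be expressed directly on variables
#    (class-target applies to the plain target build):
EXTRA_OECONF_append_class-native = " --disable-docs"
EXTRA_OECONF_append_class-nativesdk = " --disable-docs"

# 2. From Python code, bb.data.inherits_class() reports which class
#    variant is currently being processed:
python do_report_variant () {
    if bb.data.inherits_class('native', d):
        bb.note("building the -native variant")
    elif bb.data.inherits_class('nativesdk', d):
        bb.note("building the -nativesdk variant")
    else:
        bb.note("building for the target")
}
```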


Regards,
Chris


[yocto] GCC SVN repo too slow?

2013-08-18 Thread Christian Gagneraud

Hi all,

I'm having problems with the GCC SVN repo being so slow to check out 
that svn fails (and so does my build).
I first suspected the proxy we have here, but I tried a checkout from an 
outside machine (still in New Zealand) and it seems the problem 
comes from gcc.gnu.org.


I am using the arago layer, which requires 
svn://gcc.gnu.org/svn/gcc/branches;module=gcc-4_5-branch;protocol=http 
for gcc-crosssdk-intermediate.


By slow, I really mean slow: only 1 to 2 files are checked out per second!?!

Has anyone encountered the same kind of problems?

Regards,
Chris


Re: [yocto] GCC SVN repo too slow?

2013-08-18 Thread Christian Gagneraud

CC: meta-ar...@arago-project.org


On 19/08/13 17:16, Khem Raj wrote:


On Aug 18, 2013, at 7:34 PM, Christian Gagneraud  wrote:


Hi all,

I'm having problems with the GCC SVN repo being so slow to checkout, that svn 
failed (and so my build too).
I first suspected the proxy we have here, but i tried a checkout from an 
outside machine (but still in New Zealand) and it seems the problem comes from 
gcc.gnu.org.

I am using the arago layer, which requires 
svn://gcc.gnu.org/svn/gcc/branches;module=gcc-4_5-branch;protocol=http for 
gcc-crosssdk-intermediate.

By slow, I really mean slow: 1 to 2 files are checkout per second!?!

Has anyone encountered the same kind of problems?



That's a known issue and one of the pressing reasons not to use the svn 
version; you'll see new recipes in later releases.
Best case, you could hit a mirror from the Yocto project and download 
the source from the mirror.



From the logs, I can see that bitbake is trying (among others):
downloads.yoctoproject.org
sources.openembedded.org

but hits a 404, so it falls back to GCC's SVN repo directly.
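One possible workaround (an assumption on my part, not something the thread confirms works for this particular gcc snapshot) is to put your own mirror ahead of the svn:// fetch; http://mirror.example.com/sources/ below is a placeholder, e.g. an internal server populated from another machine's DL_DIR:

```conf
# local.conf sketch: consult an internal source mirror before falling
# back to the slow svn checkout from gcc.gnu.org.
PREMIRRORS_prepend = "\
svn://.*/.* http://mirror.example.com/sources/ \n \
"
```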

On the side, from the bitbake logs, I end up with a rather obscure error:
svn: E175002: REPORT of '/svn/gcc/!svn/vcc/default': 200 OK 
(http://gcc.gnu.org)


When trying a checkout manually from the outside machine, it fails too, 
but the error message is a bit more friendly:
svn: REPORT of '/svn/gcc/!svn/vcc/default': Could not read response 
body: connection was closed by server (http://gcc.gnu.org)


I found plenty of "black magic" workarounds on the web while looking for 
the strange "Error: [...] 200 OK"; among them was increasing the Apache 
timeout! That won't really solve my problem anyway, because the checkout 
would take hours, if not days!


Right now, I'm really stuck. The only solutions I can think of are:
- Kindly ask Arago not to use GCC code from SVN, or maybe to upload an 
archive to the yocto/oe mirrors.
- Check with the GNU/GCC folks whether they can do something about it 
(maybe it's a temporary issue with their server)

- Wait and see, cross my fingers. :(

Thanks,
Chris






Regards,
Chris






[yocto] Server specs for a continuous integration system

2013-09-01 Thread Christian Gagneraud

Hi all,

I'm currently looking at server specs for a "good" continuous 
integration server to be used for a project using Yocto and other things 
as well.


The definition of "good" in my context is something that allows me to:
- Build 2 images for 2 different products that will interact with each 
other (kind of client/server architecture), so likely based on the same 
custom Yocto distro, but likely running on 2 different SoCs (and 
different vendors).
- Each image will need to have two build flavours: production and 
engineering

- Very likely other demo images as well
- The client is a "lightweight" measurement system, so I need a small 
base (connman, systemd, wifi, Qt, ntp client), the application layer (Qt 
based but no GUI), and a couple of firmwares to run on auxiliary devices
- The server is still fairly lightweight: same base as above, plus 
sqlite, a light http server, an ntp server and of course its own application 
layer (Qt based again, with a GUI for a wide "session screen").


On top of that:
- The server will be part of a continuous integration system with 
nightly builds and test suite runners/controllers.
- The CI will have to build a couple of "engineering tools" as well (Qt 
based again) that need to be compiled for GNU/Linux and cross-compiled 
for Windows.

- The CI will have to build a couple of firmwares that run on embedded RTOSes.

The last thing is that I would like to run all build/test activities on a 
single Linux server.


Having a quad-core i7/3GHz workstation myself, I still find the Yocto 
build very (very) long, and this server will have way more work to do 
than mine does when I'm playing around with Yocto.


So right now, I'm thinking about:
- CPU: Xeon E5, maybe 2 x E5-2670/90, for a total of 16 cores (32 threads)
- Hard drives: 500GB, 1 TB or 2 TB (ideally with RAID if it can speed up 
the builds)

- RAM: I don't really know; maybe 8 or 16 GB, or more?

Budget-wise, my feeling is that 10k US$ should be enough...

I'm coming here to see if anyone would have feedback on choosing the 
right "good enough" specs for a continuous integration server.


Best regards,
Chris



Re: [yocto] Server specs for a continuous integration system

2013-09-02 Thread Christian Gagneraud


On 03/09/13 00:35, Burton, Ross wrote:

Hi Ross,


On 2 September 2013 06:05, Christian Gagneraud  wrote:

So right now, I'm thinking about:
- CPU: Xeon E5, maybe 2 x E5-2670/90, for a total of 16 cores (32 threads)
- Hard drives: 500GB, 1 TB or 2 TB (ideally with RAID if it can speed up the
builds)


RAID-5 seems to be what I am after.


- RAM: i don't really know, maybe 8 or 16 GB or more?


At least 16GB of RAM for the vast amount of disk cache that will give
you.  32GB or more will mean you can easily put the TMPDIR or WORKDIR
into a tmpfs (there's been discussion about this a few weeks ago).


Yes, I remember that one now, well spotted!


I've 16GB of RAM and a 8GB tmpfs with rm_work was sufficient for
WORKDIR which gave a 10% speedup (and massive reduction on disk wear).


I'm a bit surprised to see only a 10% speedup.


  Others have machines with 64GB RAM and use it for all of TMPDIR, at
which point you'll be almost entirely CPU-bound.


OK, so 16GB sounds like a minimum, 32GB or 64GB being even better, at 
that size, this is not that cheap...


Thanks,

Chris



Ross





Re: [yocto] Server specs for a continuous integration system

2013-09-02 Thread Christian Gagneraud

On 03/09/13 10:16, Chris Tapp wrote:

On 2 Sep 2013, at 22:45, Christian Gagneraud wrote:



On 03/09/13 00:35, Burton, Ross wrote:

Hi Ross,


On 2 September 2013 06:05, Christian Gagneraud  wrote:

So right now, I'm thinking about:
- CPU: Xeon E5, maybe 2 x E5-2670/90, for a total of 16 cores (32 threads)
- Hard drives: 500GB, 1 TB or 2 TB (ideally with RAID if it can speed up the
builds)


RAID-5 seems to be what i am after.


Hi Chris,


Isn't RAID-5 going to be slower, especially if it's software? RAID 1
is probably better as you'll potentially double the write speed to disk.
I use a couple of Vertex SSDs in RAID 1 giving a theoretical write speed
near to 1GBs. Write endurance is possibly a concern, but I've not had
any issues using them on a local build machine. I would probably look at
some higher end models if I was going to run a lot of builds. A lot less
noise than hard drives ;-)


Thanks for the info, I will have a look at RAID-1; as you can see, I 
know absolutely nothing about RAID! ;)


Does SSD really help with disk throughput? And then what's the point of 
using a ramdisk for TMPDIR/WORKDIR? If you "fully" work in RAM, the disk 
bottleneck shouldn't be such a problem anymore (basically, on disk, you 
should only have your yocto source tree and your download directory?).





- RAM: i don't really know, maybe 8 or 16 GB or more?


At least 16GB of RAM for the vast amount of disk cache that will give
you.  32GB or more will mean you can easily put the TMPDIR or WORKDIR
into a tmpfs (there's been discussion about this a few weeks ago).


Yes, I remember that one now, well spotted!


I've 16GB of RAM and a 8GB tmpfs with rm_work was sufficient for
WORKDIR which gave a 10% speedup (and massive reduction on disk wear).


I'm a bit surprise to see only a 10% speedup.


I looked at this a while back on a quad core + hyper-threading
system(so 8 cores). Depending on what you're building, there are significant
periods of the build where even 8 cores aren't maxed out as there's not
enough on the ready list to feed to them - basically there are times
when you're not CPU, memory or I/O bound. I've estimated that being able
to max out the CPUs would cut 20-25% of the build time, but the
build-time dependencies mean this isn't easy/possible. At one point I
inverted the priority scheme used by the bitbake scheduler and it (very
surprisingly) made no difference to the overall build time!


I have the same configuration here (4 cores, 8 threads); although I 
didn't try to tweak bitbake, I've noticed the same phenomenon as 
you: even with "aggressive" parallelism settings, the machine wasn't 
optimally loaded over time.




I ran builds with 16 threads and 16 parallel makes and the peak
memory usage I see is something like 8GB during intensive
compile/link phases, so 16GB for RAM and tmpfs sounds like a
reasonable minimum. The tmpfs would reduce SSD wear quite a bit ;-)


The quantity of RAM boils down to the budget; after a (very) quick 
search, I estimated the cost of 64GB of RAM at 1500 to 2000 US$.


Thanks.
Chris







  Others have machines with 64GB RAM and use it for all of TMPDIR, at
which point you'll be almost entirely CPU-bound.


OK, so 16GB sounds like a minimum, 32GB or 64GB being even better, at that 
size, this is not that cheap...

Thanks,

Chris



Ross





Chris Tapp

opensou...@keylevel.com
www.keylevel.com







Re: [yocto] Server specs for a continuous integration system

2013-09-02 Thread Christian Gagneraud

On 03/09/13 13:04, Elvis Dowson wrote:

Hi,

On Sep 3, 2013, at 3:29 AM, Christian Gagneraud <chg...@gna.org> wrote:


Isn't RAID-5 going to be slower, especially if it's software? RAID 1
is probably better as you'll potentially double the write speed to disk.
I use a couple of Vertex SSDs in RAID 1 giving a theoretical write speed
near to 1GBs. Write endurance is possibly a concern, but I've not had
any issues using them on a local build machine. I would probably look at
some higher end models if I was going to run a lot of builds. A lot less
noise than hard drives ;-)


Thanks for the info, i will have a look at RAID-1, as you can see, I
know absolutely nothing about RAID! ;)

Does SSD really help with disk throughput? Then what's the point of
using ramdisk for TMPDIR/WORKDIR? If you "fully" work in RAM, the disk
bottleneck shouldn't be such a problem anymore (basically, on disk,
you should only have your yocto source tree and your download directory?).


I use a Gigabyte Z77X-UP5TH motherboard

http://www.gigabyte.us/press-center/news-page.aspx?nid=1166

which has support for RAID in BIOS, at boot up, and Thunderbolt
connected to an Apple 27" Thunderbolt display. I've got two SSDs in a
RAID1 configuration (mirrored).

If you can wait for some more time, they'll be releasing a version of
the motherboard for the new haswell chips as well, but it's not probably
going to increase performance.


Right now, I'm just proposing an infrastructure solution; I'm not even 
sure it will be accepted, and the final hardware choice (if accepted) 
might not even be in my hands...




I use a 3770K i7 quad-core processor, 16GB RAM, with a liquid cooled
solution running at 3.8GHz. I've overclocked the CPU to 4.5GHz, but I
end up shaving only 2 minutes off build times, so I just run it at 3.8GHz.

A core-image-minimal build takes around 22 minutes for me, for a Xilinx
ZC702 machine configuration (Dual ARM Cortex A9 processor + FPGA).


Is it a full build from scratch (cross-toolchain, native stuff, etc...)? 
If so, it's quite impressive to me!


Chris



Here are the modifications that I've done to my system, to tweak SSD
performance, for Ubuntu-12.10, for a RAID1 array.

*SSD performance tweaks (for non RAID0 arrays)*

Step 01.01: Modify /etc/fstab.

$ sudo gedit /etc/fstab

Increase the life of the SSD by reducing how much the OS writes to the
disk. If you don't need to know when each file or directory was last
accessed, add the following two options to the /etc/fstab file:

noatime,nodiratime

To enable TRIM support to help manage disk performance over the long
term, add the following option to the /etc/fstab file:

discard

The /etc/fstab file should look like this:

# / was on /dev/sda1 during installation
UUID=---- /   ext4
discard,noatime,nodiratime,errors=remount-ro 0   1

Move /tmp to RAM

# Move /tmp to RAM
none/tmptmpfs
defaults,noatime,nodiratime,noexec,nodev,nosuid 0  0

See: Guide software RAID/LVM TRIM support on Linux
<http://www.ocztechnologyforum.com/forum/showthread.php?82648-software-RAID-LVM-TRIM-support-on-Linux>
for more details.

Step 01.02: Move the browser's cache to a tmpfs in RAM

Launch firefox and type the following in the location bar:

about:config

Right click and enter a new preference configuration by selecting the
New->String option.

Preference name: browser.cache.disk.parent_directory
String value: /tmp/firefox-cache

See: Running Ubuntu and other Linux flavors on an SSD « Brizoma
<http://brizoma.wordpress.com/2012/08/04/running-ubuntu-and-other-linux-flavors-on-an-ssd/>.


Best regards,

Elvis Dowson




Re: [yocto] Embedded Linux with Xenomai support

2013-09-02 Thread Christian Gagneraud

On 02/09/13 03:07, Robert Berger wrote:

Hi,

On 08/30/2013 07:56 PM, Darren Hart wrote:


Is there any Linux distribution based on the Yocto project that lets
me configure my embedded kernel with Xenomai? If not, has anybody got
any experinece in adding Xenomai to the Yocto project?


I am not aware of anyone using Xenomai with Yocto to date (although that
doesn't mean nobody is). Our Real-Time focus has been on the PREEMPT_RT
Linux kernel, which we do have recipes for.


googling for meta-xenomai reveals:

[1][2]



It appears as though Xenomai has changed quite a bit over the years. If
my quick re-reading of their material is correct, the Xenomai core is
implemented as a Linux kernel module which can built in to a standard
Linux kernel?


... kind of ...

kernel space:

You need to apply a patch to a certain kernel version and configure the
kernel afterwards.

So for an ARM architecture there is a patch for the 3.8 kernel[3]

user land:

But unlike with preempt-rt you also need to build the Xenomai userland
stuff.



Out of curiosity, what sort of real-time requirements do you have?



That's a good point. Shameless self promotion [4].


Do your RT threads have access to the full Linux userland?
I'm thinking about D-Bus comms, the Qt framework, networking stuff, ...
Or do they live in their own shell and communicate with the non-RT world 
via a dedicated system?
Also, have you tried/used it in a multi-core context, where one core 
runs Linux and the other an RT kernel?


Regards,
Chris



Regards,

Robert

[1] https://github.com/nojgosu/meta-xenomai
[2] https://github.com/DrunkenInfant/beaglebone-xenomai
[3]
http://git.xenomai.org/?p=xenomai-head.git;a=tree;f=ksrc/arch/arm/patches;h=c6045f00819318970d6bba65c397609052c9414e;hb=HEAD
[4] http://www.reliableembeddedsystems.com/pdfs/2010_03_04_rt_linux.pdf


..."A language that doesn't affect the way you think about programming
is not worth knowing." - Anonymous

My public pgp key is available,at:
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0x90320BF1







Re: [yocto] Server specs for a continuous integration system

2013-09-03 Thread Christian Gagneraud

On 04/09/13 07:22, Chris Tapp wrote:


On 3 Sep 2013, at 00:29, Christian Gagneraud wrote:


On 03/09/13 10:16, Chris Tapp wrote:

On 2 Sep 2013, at 22:45, Christian Gagneraud wrote:



On 03/09/13 00:35, Burton, Ross wrote:

Hi Ross,


On 2 September 2013 06:05, Christian Gagneraud  wrote:

So right now, I'm thinking about:
- CPU: Xeon E5, maybe 2 x E5-2670/90, for a total of 16 cores (32 threads)
- Hard drives: 500GB, 1 TB or 2 TB (ideally with RAID if it can speed up the
builds)


RAID-5 seems to be what i am after.


Hi Chris,


Isn't RAID-5 going to be slower, especially if it's software? RAID 1
is probably better as you'll potentially double the write speed to disk.
I use a couple of Vertex SSDs in RAID 1 giving a theoretical write speed
near to 1GBs. Write endurance is possibly a concern, but I've not had
any issues using them on a local build machine. I would probably look at
some higher end models if I was going to run a lot of builds. A lot less
noise than hard drives ;-)


Thanks for the info, i will have a look at RAID-1, as you can see, I know 
absolutely nothing about RAID! ;)


Did you see my correction to this? I meant to say RAID 0. Sorry for the 
confusion.


No problem, at least it forces me to look at RAID-5, RAID-1 and now 
RAID-0, thanks! ;)





Does SSD really help with disk throughput? Then what's the point of using ramdisk for 
TMPDIR/WORKDIR? If you "fully" work in RAM, the disk bottleneck shouldn't be 
such a problem anymore (basically, on disk, you should only have your yocto source tree 
and your download directory?).


Running from RAM would be fastest - it really comes down to how much you have 
and how much you want to keep.






- RAM: i don't really know, maybe 8 or 16 GB or more?


At least 16GB of RAM for the vast amount of disk cache that will give
you.  32GB or more will mean you can easily put the TMPDIR or WORKDIR
into a tmpfs (there's been discussion about this a few weeks ago).


Yes, I remember that one now, well spotted!


I've 16GB of RAM and a 8GB tmpfs with rm_work was sufficient for
WORKDIR which gave a 10% speedup (and massive reduction on disk wear).


I'm a bit surprise to see only a 10% speedup.


I looked at this a while back on a quad core + hyper-threading
system(so 8 cores). Depending on what you're building, there are significant
periods of the build where even 8 cores aren't maxed out as there's not
enough on the ready list to feed to them - basically there are times
when you're not CPU, memory or I/O bound. I've estimated that being able
to max out the CPUs would cut 20-25% of the build time, but the
build-time dependencies mean this isn't easy/possible. At one point I
inverted the priority scheme used by the bitbake scheduler and it (very
surprisingly) made no difference to the overall build time!


I have the same configuration here (4 cores, 8 threads), although, i didn't try to tweak 
bitbake, but i've noticed the same phenomenon as you: even with "aggressive" 
parallelism settings, the machine wasn't optimally loaded over time.



I ran builds with 16 threads and 16 parallel makes and the peak
memory usage I see is something like 8GB during intensive
compile/link phases, so 16GB for RAM and tmpfs sounds like a
reasonable minimum. The tmpfs would reduce SSD wear quite a bit ;-)


The quantity of RAM boils down to the budget, after a (very) quick search, i 
have estimated the cost of 64GB of RAM to be 1500 to 2000 US$.


That sounds high - I generally get 16GB DDR 1600 for less than 150 US$ - it was 
quite a bit lower a year back!



Thanks.
Chris







  Others have machines with 64GB RAM and use it for all of TMPDIR, at
which point you'll be almost entirely CPU-bound.


OK, so 16GB sounds like a minimum, 32GB or 64GB being even better, at that 
size, this is not that cheap...

Thanks,

Chris



Ross





Chris Tapp

opensou...@keylevel.com
www.keylevel.com







Chris Tapp

opensou...@keylevel.com
www.keylevel.com







Re: [yocto] Server specs for a continuous integration system

2013-09-03 Thread Christian Gagneraud

On 03/09/13 20:55, Elvis Dowson wrote:




I use a 3770K i7 quad-core processor, 16GB RAM, with a liquid cooled
solution running at 3.8GHz. I've overclocked the CPU to 4.5GHz, but I
end up shaving only 2 minutes off build times, so I just run it at 3.8GHz.

A core-image-minimal build takes around 22 minutes for me, for a Xilinx
ZC702 machine configuration (Dual ARM Cortex A9 processor + FPGA).


Is it a full build from scratch (cross-toolchain, native stuff, etc...)? If so, 
it's quite impressive to me!


Yes, it is a full build from scratch, and core-image-minimal
builds the cross-toolchain, kernel and root file system. A full kernel
build from scratch completes in under 1 or 2 minutes, can't remember
exactly; I'll let you know in a while. This represents approximately 1600
tasks. A full meta-toolchain-sdk build takes about 40 minutes and
executes around 3600 tasks. I'll send some precise figures later on.


Very interesting figures indeed. Not sure about the water cooling stuff; 
does that come as "standard", or did you build and tweak your server 
yourself? Up to now, I was thinking about an off-the-shelf server; maybe 
the best approach is to build it myself (actually, to get some people here 
to build it themselves!)


I have another question concerning the CPU: does anyone know how 
the Xeon E5 compares to the i7 in this context (build server and Yocto)?


Regards,
Chris



Best regards,

Elvis Dowson





Re: [yocto] Yocto Project 1.5_M5.rc6

2013-10-10 Thread Christian Gagneraud

On 09/10/13 11:35, Flanagan, Elizabeth wrote:

All,

The final release candidate for the upcoming Yocto Project 1.5 (dora 10.0.0)
release is now available for testing at:

http://autobuilder.yoctoproject.org/pub/releases/1.5_M5.rc6

poky 102bf5e0f640fe85068452a42b85077f1c81e0c9
eclipse-poky-juno 2fa1c58940141a3c547c8790b8a6832167e8eb66
eclipse-poky-kepler ad74249895f882a8f00bdeef7a0f7c18998cc43e
meta-fsl-arm d334d3c7388e41d720dd0c8e0cb76fab51a6e46a
meta-fsl-ppc 25ee629b826a138ef407c113f776830e5d822c01
meta-intel a1138af21f6ba85e79d1422a3f0a3e0c4f658a90
meta-minnow ad469d78a1f56abd3b7d5103f1e0344cb10684b1
meta-qt3 b73552fb998fd30a01dbee5a172e432a16078222


   ^^^

Qt3, really!?! It's so unbelievable that I had to check whether it really 
was Qt3, and yes it is!!! The commit log [1] is very interesting.


Qt 3.0 was released in 2001 (the Linux 2.4 era), Qt 4.0 in 2005 
(the Linux 2.6 era) and Qt 5.0 in Dec. 2012 (the Linux 3.7 release)...


Anyone fancy a meta-linux24 layer? Or maybe a meta-gcc2, or 
meta-xfree86, or ...


Please excuse my sarcasm, but honestly I would prefer to see a meta-qt5 
here!


Best regards,
Chris

[1] http://git.yoctoproject.org/cgit/cgit.cgi/meta-qt3/log/



Please begin the QA cycle. Thanks!





Re: [yocto] Yocto Project 1.5_M5.rc6

2013-10-10 Thread Christian Gagneraud

On 11/10/13 11:02, Jeff Osier-Mixon wrote:

Hi Chris - well-intended sarcasm isn't usually a problem, but I want
to make sure to address your concerns here. We would like to see a
meta-qt5 as well but so far one has not been submitted. If you would
like to submit one, I would be happy to put you in touch with the
right maintainers who can guide you through the process.


Hi Jeff,

Well-intended answer! ;) But I want to make sure to address your 
concerns here. I would like to see the meta-qt5 [1] in Yocto's QA 
process. If you would like to integrate it, I would be happy to put you 
in touch with the right maintainers who can guide you through the process.


Chris.

[1] https://github.com/meta-qt5

--
Cheeky Chris
Random community dude @nocompany
http://catb.org/jargon/html/T/top-post.html











Re: [yocto] Yocto Project 1.5_M5.rc6

2013-10-10 Thread Christian Gagneraud

On 11/10/13 14:45, Saul Wold wrote:


Sorry for the top post here. The real reason we still have Qt3 is
because it's a required part of the LSB spec. OE-Core contains Qt4, and we
know that meta-qt5 exists from Otavio and Martin.


Thanks for the explanation.



I know that Richard and Paul have been watching what's going on with
meta-qt5.


Good to hear.

Chris



Sau!

On 10/10/2013 05:31 PM, Jeff Osier-Mixon wrote:

Game, set, and match!  This is also why I don't play golf!

There are many layers I would love to see in the YP QA process and I
would count meta-qt5 in the top priority. However, we only have so
many resources, so we have to focus on the core right now. I'd be
happy to talk offline about finding ways to locate new resources.

On Thu, Oct 10, 2013 at 5:01 PM, Otavio Salvador
 wrote:

On Thu, Oct 10, 2013 at 7:26 PM, Christian Gagneraud 
wrote:

On 11/10/13 11:02, Jeff Osier-Mixon wrote:


Hi Chris - well-intended sarcasm isn't usually a problem, but I want
to make sure to address your concerns here. We would like to see a
meta-qt5 as well but so far one has not been submitted. If you would
like to submit one, I would be happy to put you in touch with the
right maintainers who can guide you through the process.



Hi Jeff,

Well-intended answer! ;) But I want to make sure to address your
concerns
here. I would like to see the meta-qt5 [1] in Yocto's QA process. If
you
would like to integrate it, I would be happy to put you in touch
with the
right maintainers who can guide you through the process.

Chris.

[1] https://github.com/meta-qt5


Hey, this was even easier.

I am one of meta-qt5 maintainers and I am sure Martin will be more
than happy to get it into the QA process. Jeff ... this was easy :-)

--
Otavio Salvador O.S. Systems
http://www.ossystems.com.br  http://code.ossystems.com.br
Mobile: +55 (53) 9981-7854  Mobile: +1 (347) 903-9750







Re: [yocto] Yocto Project 1.5_M5.rc6

2013-10-10 Thread Christian Gagneraud

On 11/10/13 14:55, Otavio Salvador wrote:

On Thu, Oct 10, 2013 at 10:45 PM, Saul Wold  wrote:


Sorry for the top post here. The real reason we still have Qt3 is because
it's required part of the LSB spec.  OE-Core contains Qt4 and we know that
meta-qt5 exists from Otavio and Martin.

I know that Richard and Paul have been watching what's going on with
meta-qt5.


The biggest missing feature in meta-qt5 is toolchain support; I am
still waiting for someone to sponsor me to work on it (it is not a
small task), for a critical need of my own to arise, or for someone to
step up and do it :)


Yes, and ideally it would be nice if this toolchain were easy to use 
with QtCreator.
Some weeks ago, I had a look at writing a Yocto plugin for QtCreator 
(using Danny/Qt4), and the biggest problem was how you "activate" the 
toolchain: sourcing the environment setup script works well when you 
work in a shell, but it would be better if the toolchain came with a 
machine-friendly manifest file. That would also make it possible to use 
several Yocto toolchains (e.g. for different architectures/machines) 
from within QtCreator.


My 2 cents.

Chris


[yocto] [meta-qt5] SDK missing python-json (QtCreator)

2017-07-20 Thread Christian Gagneraud
Hi there,

Since v3.6, QtCreator requires python-json for use with GDB.
The Qt5 SDK doesn't ship this Python package (tested with Pyro), so it
is impossible to debug a program with GDB.
(QtC 4.4, to be released, should contain a workaround.)

Can this dependency be added to Qt5 SDK?

Original QtC bug report can be found at:
https://bugreports.qt.io/browse/QTCREATORBUG-18577
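Until the dependency is added to the SDK's host packagegroup, one
possible local workaround (an untested sketch; it assumes the usual OE
Python packaging, where the json module is split out into its own
python-json package with a nativesdk variant) is to pull the package
into the toolchain from local.conf:

    TOOLCHAIN_HOST_TASK_append = " nativesdk-python-json"

and then rebuild and reinstall the SDK (e.g. with bitbake
meta-toolchain-qt5).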

Thanks,
Chris