Re: [yocto] Building on MacOS X

2017-01-16 Thread Brian Avery
Hi,

A couple of comments even though I'm coming late to the discussion.

So, from what I've understood from the above, the main issue with the
docker approach to building a Yocto/OE image on the Mac is that it is much
slower...  Here are some numbers that don't quite agree with that assertion:

I ran a couple of tests, some on my Linux box and some on my Mac laptop.
The tests involved building core-image-minimal for a qemux86 target. All
the downloads were in place. There was no sstate cache, no parse cache, and
no existing tmp directory before the tests were run.
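
For reference, wiping those between runs amounts to something like this from
the top of the build directory (a sketch assuming the default poky build
layout; sstate-cache may live elsewhere if SSTATE_DIR is customized):

$ rm -rf tmp/ sstate-cache/ cache/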

My Linux box has an Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz, 36 cores.
My Mac laptop has an Intel(R) Core(TM) i7-4870HQ CPU @ 2.50GHz.

The test is "$ time bitbake core-image-minimal"

Here are the results of the tests:

Linux box Ubuntu 14.04

clock : 30m24.024

user  : 229m12.324

sys: 20m10.188

—-

Linux box Ubuntu 14.04 running inside docker 1.12.3 using crops/poky:latest

clock : 30m37.66

user  : 231m35.984

sys: 31m20.204


——

Linux box Ubuntu 14.04 running inside docker 1.12.3 using crops/poky:latest

I set the following in my local.conf file:

BB_NUMBER_THREADS = "2"

BB_NUMBER_PARSE_THREADS = "2"

PARALLEL_MAKE = "-j 2"

I also constrained Docker to 2 CPU cores and 8 GB of RAM (this is what my
Mac laptop is set to).
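
For reference, constraining a container to 2 CPUs and 8 GB with Docker
1.12-era flags looks roughly like this (a sketch; my exact invocation may
have differed):

$ docker run -it --cpuset-cpus="0,1" --memory=8g crops/poky:latest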

clock : 127m8.523

user  : 201m32.468

sys: 19m54.052


—

Mac OS X laptop - running Docker 1.12.5, 2 CPUs, 8 GB RAM

I set the following in my local.conf file:

BB_NUMBER_THREADS = "2"

BB_NUMBER_PARSE_THREADS = "2"

PARALLEL_MAKE = "-j 2"


clock : 99m31.190

user  : 137m40.400

sys: 18m38.650


——


So, from the above, it looks like my Mac is actually faster when it has the
same number of cores and memory.  I haven't seen any particular slowdown
running Linux programs under Docker on the Mac, other than what is caused by
the difference in horsepower between my build server and my laptop.


Thanks,

Brian Avery

an Intel employee


p.s. Andrea, would you mind replying with how you changed the docker run to
make loopback work? I’d like to add it to the docs. Also, if you could
point me at what layers you used to make resin, I’d like to give that a try
as well.  Right now, we are providing a bare bones environment but I’d be
happy to write up a howto for inheriting from our images to customize your
own for special purposes (like doing builds that require loopback mounts,
for instance).







On Mon, Jan 16, 2017 at 3:19 AM, Burton, Ross <ross.bur...@intel.com> wrote:

>
> On 14 January 2017 at 19:45, Roger Smith <ro...@sentientblue.com> wrote:
>
>> Is Building Yocto project on a POSIX system, a desire for the Yocto
>> project? It would allow support on all bsd UNIX’s including macOS
>>
>
> Making OE itself work isn't rocket science - fix a few Linuxisms in
> bitbake, port pseudo to macOS.
>
> The hard bit is then convincing the hundred-odd recipes that are often
> Linux-centric if not Linux specific to build under something that isn't
> Linux.  My ross/darwin branch (from before the security changes) has a
> patch to gmp as 'echo' has different semantics. unlink() has different
> error codes between macOS and Linux.  There's a very long tail of
> differences that will need patching and testing.
>
> But if this is something you care about, patches welcome!
>
> Ross
>
>
>


Re: [yocto] [Openembedded-architecture] RFC BuildAppliance future

2016-12-05 Thread Brian Avery
Excellent Questions!



Dogfooding – Good point.  The
https://bugzilla.yoctoproject.org/show_bug.cgi?id=9502 bug is intended to
address that: with it, we can build and test container images just as we do
with the Build Appliance. As an added win, folks building BusyBox-based,
minimal-size containers would have another way to do that...



OS availability: Docker is available for Windows, Mac, and Linux.  Were
there fears about specific Linux distributions?



It does not seem to be much harder to try Docker on Windows than it was to
try VirtualBox.  That being said, Docker is moving/changing more quickly, so
it may have more bumps in the road, but more features too…



Windows, explained in more depth (or: one of those bumps…):

  The answer is a little complicated, so please bear with me.  There are 2
different versions of Docker for Windows.  The original version is known as
“Docker Toolbox” https://www.docker.com/products/docker-toolbox.  This is
basically Docker running in a minimal VirtualBox Linux VM.  They made a
separate host binary called docker-machine to help manage the images. This
is, in my opinion, a little clunky and does add some complexity when
switching from Docker running on Linux to Docker Toolbox running on
Windows. The clunkiness is largely due to the VirtualBox VM being “between”
the host and the Docker Engine. This version is targeted to people whose
Windows version does not meet the requirements specified below.



  If you have 64-bit Windows 10 Pro, Enterprise, or Education (1511 November
update, Build 10586 or later) and Microsoft Hyper-V, then you can download
“Docker for Windows” (https://docs.docker.com/docker-for-windows/).  This is
a Docker implementation for Windows that relies on Hyper-V and a much
smaller VM.  It is also better integrated into the host environment, so the
discontinuity between the host side and the Docker Engine is no longer as
jarring.  For example, you get a system tray icon for Docker, can set up
http proxies for it, and can manage how much memory and how many CPUs it
will get…


This (https://docs.docker.com/docker-for-mac/docker-toolbox/) is a nice page
that addresses the difference between the newer hypervisor approach and the
older “Docker Toolbox aka VirtualBox” approach.  The page discusses it from
a Mac perspective, but if you replace xhyve with Windows 10 Hyper-V, it is
relevant to Windows as well.



The Mac implementation is functionally equivalent to “Docker for Windows”
and is called “Docker for Mac”
https://docs.docker.com/engine/installation/mac/.  It runs on a modified
xhyve hypervisor.



The Linux implementation runs as a container and uses the same kernel as
the host.
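
For anyone who wants to compare, this is roughly how the crops/poky
container is meant to be launched on any of the three platforms (a sketch;
check the Docker Hub page for the exact arguments - the host path here is
illustrative):

$ docker run --rm -it -v ~/yocto/workdir:/workdir crops/poky --workdir=/workdir

From there you source oe-init-build-env and run bitbake as usual.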



Thanks for the feedback and if I missed or shortchanged something please
let me know,

-Brian


an intel employee

On Mon, Dec 5, 2016 at 10:42 AM, Khem Raj <raj.k...@gmail.com> wrote:

> On Mon, Dec 5, 2016 at 10:04 AM, Brian Avery <avery.br...@gmail.com>
> wrote:
> >   Please note, this is going out to 3 lists in an attempt to insure that
> no
> > one who would be impacted by this change misses it. Implied spam apology
> > included.
> >
> >   The Yocto Project currently provides a virtual machine image called the
> > Build Appliance
> > (https://downloads.yoctoproject.org/releases/yocto/yocto-2.2/build-
> appliance/).
> > This image can be run using either VirtualBox or VMware. It enables you
> to
> > build and boot a custom embedded Linux image with the Yocto Project on a
> > non-Linux development system.  It is not intended for day to day
> development
> > use, but instead, is intended to allow users to “try out” the tools even
> if
> > they do not have access to a Linux system.
> >
> >   We are considering replacing the VM based Build Appliance with a set of
> > containers (https://hub.docker.com/r/crops/poky/, or if you want a user
> > interface (https://hub.docker.com/r/crops/toaster-master/ ).  These
> > containers currently provide most of the functionality of the Build
> > Appliance and should shortly be at feature parity with the Build
> Appliance.
> > We are actively adding to and supporting the containers as some of our
> > developers are using them in their day to day development in order to
> > benefit from the host isolation and ease with which other distributions
> can
> > be tested.
> >
> >   This is an RFC to see what features in the Build Appliance are
> important
> > to you but would not be provided by the container solutions.  If the
> > community would be just as content using the container approach as using
> the
> > VM based Build Appliance image, then we’d be better off deprecating the
> > Build Appliance and applying those resources elsewhere.  If there are
> > important features the Build Appliance provides which the container
> > solution does not, or cannot provide, we'd love to hear what they are!

[yocto] RFC BuildAppliance future

2016-12-05 Thread Brian Avery
  Please note, this is going out to 3 lists in an attempt to ensure that no
one who would be impacted by this change misses it. Implied spam apology
included.

  The Yocto Project currently provides a virtual machine image called the
Build Appliance (
https://downloads.yoctoproject.org/releases/yocto/yocto-2.2/build-appliance/).
This image can be run using either VirtualBox or VMware. It enables you to
build and boot a custom embedded Linux image with the Yocto Project on a
non-Linux development system.  It is not intended for day to day
development use, but instead, is intended to allow users to “try out” the
tools even if they do not have access to a Linux system.

  We are considering replacing the VM-based Build Appliance with a set of
containers (https://hub.docker.com/r/crops/poky/, or, if you want a user
interface, https://hub.docker.com/r/crops/toaster-master/).  These
containers currently provide most of the functionality of the Build
Appliance and should shortly be at feature parity with the Build
Appliance.  We are actively adding to and supporting the containers as some
of our developers are using them in their day to day development in order
to benefit from the host isolation and ease with which other distributions
can be tested.

  This is an RFC to see what features in the Build Appliance are important
to you but would not be provided by the container solutions.  If the
community would be just as content using the container approach as using
the VM based Build Appliance image, then we’d be better off deprecating the
Build Appliance and applying those resources elsewhere.  If there are
important features the Build Appliance provides which the container
solution does not, or cannot provide, we’d love to hear what they are!

Thanks in advance for any feedback,

-Brian

an Intel Employee


Re: [yocto] [PATCH 1/1] runqemu: add user mode (SLIRP) support to x86 QEMU targets

2016-10-25 Thread Brian Avery
Hi,

This doesn't default to SLIRP; it just allows it as an option.

In particular, in CROPS containers on Windows 10 and Mac OS X the hypervisor
is *not* a Linux kernel. This means that the only way we can run qemu in the
container for testing/debugging is to run it in SLIRP mode, since there is
no tun/tap device or driver.
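
For anyone trying this, the invocation is just the extra 'slirp' argument
(nographic is optional here; the host port you ssh to is whatever
QB_SLIRP_OPT forwards):

$ runqemu qemux86 slirp nographic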

So, given that we
1) are not defaulting to slirp and are merely enabling it
2) need slirp mode to be able to run in a CROPS container setting

I think this makes sense.

-Brian
an intel employee


On Mon, Oct 24, 2016 at 5:16 PM, Mark Hatle 
wrote:

> On 10/24/16 6:46 PM, Todor Minchev wrote:
> > On Mon, 2016-10-24 at 18:15 -0500, Mark Hatle wrote:
> >> On 10/24/16 5:19 PM, Todor Minchev wrote:
> >>> Using 'slirp' as a command line option to runqemu will start QEMU
> >>> with user mode networking instead of creating tun/tap devices.
> >>> SLIRP does not require root access. By default port  on the
> >>> host will be mapped to port 22 in the guest. The default port
> >>> mapping can be overwritten with the QB_SLIRP_OPT variable e.g.
> >>>
> >>> QB_SLIRP_OPT = "-net nic,model=e1000 -net user,hostfwd=tcp::-:22"
> >>
> >> In the past patches like this have been rejected for performance and
> other
> >> reasons.
> >
> > Performance is not ideal with SLIRP, but should be sufficient for basic
> > ssh access to the guest. Could you please elaborate on the other reasons
> > why having this is a bad idea?
>
> You'd have to go back in the archives.  Performance was the main reason I
> remember being discussed at the time.
>
> I don't remember the others, and as I said, I'm not a fan of requiring
> root, but
> the community decided sudo approach was better as long as there was a way
> to
> avoid it.
>
> Changing the default will likely require email to the oe-core list and/or
> oe-architecture list...
>
> >> While I don't particularly like requiring sudo access for runqemu,
> >> adding the 'slirp' option will allow someone w/o root/sudo to be able
> to run it.
> >
> > The goal was to be able to run a QEMU image and provide basic networking
> > without requiring root access. Is there any reason why running runqemu
> > w/o root/sudo access should be discouraged?
>
> As long as you can pass in slirp (and it's not the default) then I believe
> your
> requirement and the community direction is preserved.
>
> >> Otherwise, if there is something you can do to make slirp work better
> (when
> >> enabled), that is more likely to be accepted.
> >
> > Similar SLIRP support has been already accepted for qemuarm64 with
> > commit 9b0a94cb
>
> Support for slirp is good, it's defaulting to slirp that has been rejected
> in
> the past by the community.
>
> > --Todor
> >
> > Intel Open Source Technology Center
> >
> >> --Mark
> >>
> >>> Signed-off-by: Todor Minchev 
> >>> ---
> >>>  meta/conf/machine/include/qemuboot-x86.inc | 1 +
> >>>  scripts/runqemu                            | 3 ++-
> >>>  2 files changed, 3 insertions(+), 1 deletion(-)
> >>>
> >>> diff --git a/meta/conf/machine/include/qemuboot-x86.inc
> b/meta/conf/machine/include/qemuboot-x86.inc
> >>> index 06ac983..0870294 100644
> >>> --- a/meta/conf/machine/include/qemuboot-x86.inc
> >>> +++ b/meta/conf/machine/include/qemuboot-x86.inc
> >>> @@ -13,3 +13,4 @@ QB_AUDIO_OPT = "-soundhw ac97,es1370"
> >>>  QB_KERNEL_CMDLINE_APPEND = "vga=0 uvesafb.mode_option=640x480-32
> oprofile.timer=1 uvesafb.task_timeout=-1"
> >>>  # Add the 'virtio-rng-pci' device otherwise the guest may run out of
> entropy
> >>>  QB_OPT_APPEND = "-vga vmware -show-cursor -usb -usbdevice tablet
> -device virtio-rng-pci"
> >>> +QB_SLIRP_OPT = "-net nic,model=e1000 -net user,hostfwd=tcp::-:22"
> >>> diff --git a/scripts/runqemu b/scripts/runqemu
> >>> index dbe17ab..6952f32 100755
> >>> --- a/scripts/runqemu
> >>> +++ b/scripts/runqemu
> >>> @@ -542,7 +542,8 @@ class BaseConfig(object):
> >>>  def check_and_set(self):
> >>>  """Check configs sanity and set when needed"""
> >>>  self.validate_paths()
> >>> -check_tun()
> >>> +if not self.slirp_enabled:
> >>> +check_tun()
> >>>  # Check audio
> >>>  if self.audio_enabled:
> >>>  if not self.get('QB_AUDIO_DRV'):
> >>>
> >>
> >
> >
>


Re: [yocto] How to build yocto as a root user?

2016-08-09 Thread Brian Avery
Yocto includes pseudo, which handles the things fakeroot does and a bit more:
https://www.yoctoproject.org/tools-resources/projects/pseudo.
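
For anyone curious, pseudo can be exercised by hand much like fakeroot (a
rough sketch; it assumes a pseudo binary on your PATH, e.g. pseudo-native
from a build's sysroot, and the exact environment setup may vary):

$ pseudo id -u
0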

-brian
an Intel employee

On Tue, Aug 9, 2016 at 7:27 AM, Jan Lindåker  wrote:

>
> 
> From: yocto-boun...@yoctoproject.org  on
> behalf of Philip Balister 
> Sent: Tuesday, August 9, 2016 15:39
> To: jags gediya; yocto@yoctoproject.org; meta-freesc...@yoctoproject.org
> Subject: Re: [yocto] How to build yocto as a root user?
>
> On 08/09/2016 09:32 AM, jags gediya wrote:
> > > I am facing issues while build yocto as root user on ubuntu 14.04.
> > > How can i build as a root user?
> >
> > Do not do this, it is a terrible idea. Create a user account and build
> > there.
> >
> > Philip
>
> You are right Philip, for building with yocto/bitbake it is a bad idea. I
> only read the part with becoming root, and forgot that I do not build as
> root.
>
> However,  to be able to build a usable file system that installs on target
> I think installing fakeroot is necessary:
> $ sudo apt-get install fakeroot
> $ man fakeroot
>
> /BR Jan
>
> >
> >
> >


Re: [yocto] Minutes: Yocto Project Technical Team Meeting - Tuesday, May 3, 2016 8:00 AM US Pacific Time

2016-05-03 Thread Brian Avery
Hi,
The question about the number of machines we currently have (from Armin?)
made me wonder.

Here's the toaster/layer index's current view of machines:
jethro - 226
krogoth - 148
master - 310

I'll go ahead and attach the lists in case anyone is curious; it took a
long SQL statement to make them :).

Note that toaster will have slightly higher numbers, since machines like
beaglebone, for instance, can come from 4 different layers: meta-ti,
meta-yocto-bsp, meta-beagleboard, and meta-mender.  When machines come from
different layers, toaster will let you pick.


-b
an intel employee

On Tue, May 3, 2016 at 8:29 AM, Jolley, Stephen K
 wrote:
> Attendees: Joshua, Stephen, Sona, Armin, Richard, Mark,
>
>
>
> Agenda:
>
>
>
> * Opens collection - 5 min (Stephen)
>
> * Yocto Project status - 5 min (Stephen/team)
>
> ·YP 2.1 was released.
>
> ·YP 1.8.2 should build this week.
>
> ·YP 2.0.2 should build in a couple of weeks.
>
> ·YP 2.2 Planning is almost done and execution has begun.
>
> https://wiki.yoctoproject.org/wiki/Yocto_Project_v2.2_Status
>
> https://wiki.yoctoproject.org/wiki/Yocto_2.2_Schedule
>
> https://wiki.yoctoproject.org/wiki/Yocto_2.2_Features
>
> * Opens - 10 min
>
> ·Armin – LTS discussion – We discussed if we want to support 1.8.x
> beyond 1.8.2. We have no problem with community driven LTS for YP releases.
>
> ·Armin – Layer Index – Toaster demo showed Layer Index is critical
> to Toaster. It was agreed this was an area we need to work, but more
> discussion is needed.  If you have ideas how to update and support this;
> work with Armin.
>
> * Team Sharing - 10 min
>
> ·Mark – Pre-linker – This has been disabled in YP 2.1, and is under
> investigation.  Work is progressing to fix this issue.
>
>
>
>
>
> Thanks,
>
>
>
> Stephen K. Jolley
>
> Yocto Project Program Manager
>
> INTEL, MS JF1-255, 2111 N.E. 25th Avenue, Hillsboro, OR 97124
>
> Work Telephone: (503) 712-0534
>
> Cell: (208) 244-4460
>
> Email: stephen.k.jol...@intel.com
>
>
 1  a500
 2  apalis-imx6
 3  at91sam9m10g45ek
 4  at91sam9rlek
 5  at91sam9x5ek
 6  b4420qds
 7  b4420qds-64b
 8  b4860qds
 9  b4860qds-64b
10  beaglebone
11  bsc9131rdb
12  bsc9132qds
13  c293pcie
14  cfa10036
15  cfa10037
16  cfa10049
17  cfa10055
18  cfa10056
19  cfa10057
20  cfa10058
21  cgtqmx6
22  cm-fx6
23  colibri-imx6
24  colibri-vf
25  crespo
26  cubox-i
27  dragonboard-410c
28  edgerouter
29  geeksphone-one
30  genericx86
31  genericx86-64
32  grouper
33  hpveer
34  htcdream
35  htcleo
36  i9300
37  ifc6410
38  imx233-olinuxino-maxi
39  imx233-olinuxino-micro
40  imx233-olinuxino-mini
41  imx233-olinuxino-nano
42  imx23evk
43  imx28evk
44  imx51evk
45  imx53ard
46  imx53qsb
47  imx6dl-riotboard
48  imx6dlsabreauto
49  imx6dlsabresd
50  imx6qdl-variscite-som
51  imx6q-elo
52  imx6qpsabreauto
53  imx6qpsabresd
54  imx6qsabreauto
55  imx6qsabrelite
56  imx6qsabresd
57  imx6slevk
58  imx6sl-warp
59  imx6solosabreauto
60  imx6solosabresd
61  imx6sxsabreauto
62  imx6sxsabresd
63  imx6ulevk
64  imx7dsabresd
65  imx7d-warp7
66  intel-core2-32
67  intel-corei7-64
68  intel-quark
69  ls1021atwr
70  m28evk
71  m53evk
72  maguro
73  mpc8315e-rdb
74  nexusone
75  nitrogen6sx
76  nitrogen6x
77  nitrogen6x-lite
78  nitrogen7
79  nokia900
80  odroid-c1
81  odroid-c2
82  odroid-xu3
83  om-gta01
84  om-gta02
85  om-gta04
86  p1010rdb
87  p1020rdb
88  p1021rdb
89  p1022ds
90  p1023rdb
91  p1025twr
92  p2020rdb
93  p2041rdb
94  p3041ds
95  p4080ds
96  p5020ds
97  p5020ds-64b
98  p5040ds
99  p5040ds-64b
   100  palmpre
   101  palmpre2
   102  pcm052
   103  qemuarm
   104  qemuarm64
   105  qemumips
   106  qemumips64
   107  qemuppc
   108  qemux86
   109  qemux86-64
   110  sama5d2-xplained
   111  sama5d2-xplained-sd
   112  sama5d3xek
   113  sama5d3-xplained
   114  sama5d3-xplained-sd
   115  sama5d4ek
   116  sama5d4-xplained
   117  sama5d4-xplained-sd
   118  t1023rdb
   119  t1023rdb-64b
   120  t1024rdb
   121  t1024rdb-64b
   122  t1040d4rdb
   123  t1040d4rdb-64b
   124  t1042d4rdb
   125  t1042d4rdb-64b
   126  t2080qds
   127  t2080qds-64b
   128  t2080rdb
   129  t2080rdb-64b
   130  t4160qds
   131  t4160qds-64b
   132  t4240qds
   133  t4240qds-64b
   134  t4240rdb
   135  t4240rdb-64b
   136  tilapia
   137  toro
   138  toroplus
   139  twr-vf65gs10
   140  

[yocto] [layerindex-web]

2015-07-09 Thread Brian Avery
For the update script you also need python-git. Since we are already
using a virtualenv, why not add it to requirements.txt :).

yolk lists the following as being available, so I fairly arbitrarily
picked 0.3.7 on the assumption it was the best of the 0.3 line...

GitPython 1.0.1
GitPython 1.0.0
GitPython 0.3.7
GitPython 0.3.6


-bavery

diff --git a/requirements.txt b/requirements.txt
index b88ab2b..ea52994 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -17,3 +17,4 @@ python-nvd3==0.12.2
 regex==2014.06.28
 six==1.7.3
 wsgiref==0.1.2
+gitpython==0.3.7
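
With that applied, re-running pip inside the activated virtualenv picks up
the new dependency (assuming the usual pip-based setup for layerindex-web):

$ pip install -r requirements.txt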