Re: [yocto] How to add openmp to the target image

2015-05-03 Thread Christophe Coutand cc


On 5/1/2015 11:11 PM, Khem Raj wrote:

On May 1, 2015, at 12:48 AM, christophe coutand ccout...@widenorth.no wrote:

Hi Paul,

Thank you for your help.

replacing
IMAGE_INSTALL += libgomp libgomp-dev libgomp-staticdev
with
IMAGE_INSTALL_append = " libgomp libgomp-dev libgomp-staticdev"


unless you are doing development on the device, you don’t need -dev and 
-staticdev packages to be there



noted. in this case we are doing some development on the device.
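
For reference, a minimal local.conf sketch of the working syntax (the leading space inside the quotes matters, because _append inserts no separator of its own):

```
# Runtime OpenMP support only:
IMAGE_INSTALL_append = " libgomp"

# Headers and static library, only needed when developing on the target:
IMAGE_INSTALL_append = " libgomp-dev libgomp-staticdev"
```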
--
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


[yocto] QEMU support for SD/MMC card

2015-05-03 Thread Patrick Doyle
Can anybody point me in the direction of running an arm based QEMU
with a (simulated) SD/MMC card?

From the documentation for QEMU, I see that I can pass a -drive
parameter to QEMU with an if=sd option, which sounds (reads?) an awful
lot like I can simulate an SD/MMC card.

But looking through the Yocto-supplied runqemu script (actually,
runqemu-internal), it appears that it only supports -hda or -virtio
disks.
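
For what it's worth, a bare QEMU invocation along these lines should expose an SD card to the guest, bypassing runqemu entirely (the machine model, file names, and kernel arguments here are illustrative guesses, not something tested against the Yocto images):

```
qemu-system-arm -M vexpress-a9 -m 512 \
    -kernel zImage -dtb vexpress-v2p-ca9.dtb \
    -drive file=sdcard.img,if=sd,format=raw \
    -append "root=/dev/mmcblk0p2 rootwait console=ttyAMA0"
```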

--wpd


Re: [yocto] Yocto-ish upgrade-in-place strategy

2015-05-03 Thread Patrick Doyle
On Sun, May 3, 2015 at 3:58 PM, Chris Morgan chmor...@gmail.com wrote:
 We thought about the rescue/recovery approach as well. The issue was how the
 system would recover in the field and how updates were handled.

 Consider that to update the full system image you'd have to download and
 store that image to flash, unless you stored it somewhere, and if you were
 going to store it you'd have to have extra space for that. For recovery
 you'd have the similar approach of booting recovery but you could write
 directly to the primary fs since you're booted to recovery. The trouble
 there is download speed and connectivity. The system is down until you
 download the update diff/full image and that could take some time. And the
 system is down for that period.

 So we didn't see a path forward that would support slow connections plus no
 down time. And we would still need some place to put the new image that was
 guaranteed to be available which means it couldn't be used for application
 or other data that had to be preserved across updates. With two rootfs
 partitions we can download in place and take our time doing so. Plus no
 special recovery image and no reserved area that is unused except during
 update. We do have a full duplicate rootfs though.

 I see where you are going, for other use cases it may be better to have a
 primary and a recovery. Our rootfs is around 200MB at this point.
I don't expect my rootfs will be anywhere near 200MB.  It's a deeply
embedded, single function application.  So the issues of memory size
and bandwidth don't really apply for my particular application.

As I have thought about this more (driving to and from baseball games
for my kids), I think what I was really wishing for was a Yocto
standard/recommended Poky core-image-bootstrap image with appropriate
hooks and configuration items to allow me to tailor it to my
particular application.

So far, I'm not sure how core-image-bootstrap differs from
core-image-minimal, except for the additional packages and a custom
rc.local I mentioned previously.  So perhaps there isn't a need for
such a beast, and I'll go roll my own bootstrap/rescue upgrade
strategy myself.

I'll go start a different thread with a specific question along that line now...

--wpd


[yocto] How to maintain my own git repo for the whole of poky without affecting the yocto git history

2015-05-03 Thread Raghavendra Kakarla
Hi All,



We want to maintain the whole poky source code (with all required
packages already built) in my local GitLab. When I do this, the yocto
git history is replaced by my local git history. After checking out poky
from my local git and trying to build it with a new package enabled, I
get errors such as being unable to fetch the source code. With the
original yocto source code it builds successfully.
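
In case it helps: a mirror clone copies every ref and the full upstream history, rather than replacing it. A self-contained sketch with throwaway local repositories standing in for the real ones (substitute the actual poky URL and your GitLab URL in practice):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Stand-in for the upstream poky repository
git init -q upstream
git -C upstream -c user.email=you@example.com -c user.name=you \
    commit -q --allow-empty -m "upstream history"

# Mirror clone: copies every ref and the full history
git clone -q --mirror upstream poky-mirror.git

# Push the mirror to your own server (here: a bare local repo
# standing in for your GitLab instance)
git init -q --bare gitlab.git
git -C poky-mirror.git push -q --mirror "$tmp/gitlab.git"

# A fresh clone from "gitlab" still has the original history
git clone -q "$tmp/gitlab.git" check
git -C check log --oneline
```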


Could you please help me resolve this issue?


Thanks in advance.


Regards,


Raghavendra K.




Re: [yocto] Yocto-ish upgrade-in-place strategy

2015-05-03 Thread Chris Morgan
On Saturday, May 2, 2015, Patrick Doyle wpds...@gmail.com wrote:

 Rather than inventing this from scratch yet again, I would like to ask
 what the Yocto-ish best practice is for deploying an embedded system
 that has to support in-place upgrades.

 It seems to me that this should be a fairly common scenario:
 I have (or, rather am in the process of developing yet another) an
 embedded application that will be running on a device whose power
 supply is uncertain at best.  Consequently, I want to run from a
 read-only squashfs rootfs, with some small amount of seldom changed
 configuration data (most likely stored in a JFFS partition).

 But I need a strategy to upgrade this system in place.  Since I am
 running from a read-only squashfs, I can't apt-get or rpm upgrade
 individual packages.  I must redeploy the entire image.

 I can divvy up the flash however I want, so I am thinking that I would
 like to use u-boot to boot a rescue image from one partition, that
 would kexec the deployed image from a different partition.

 Are there Yocto recipes, blogs, community experience with this sort of
 thing, or should I invent my own solution?

 Again, this feels like a common problem that others should have already
 solved, and I would rather solve uncommon problems than re-solve
 common ones.

 --wpd


Is there a standard way? We've seen a few different approaches between
android systems (phones), Linux distributions, and things like chromebook.

In our case we are using two uboot, two kernel, and two root file system
partitions with uboot environment controlling the active one. Squashfs for
the root fs. Raw images for uboot and kernel. Overlayfs in another rw
partition that we put on top of the rootfs where we keep system
configuration. Media and other stuff goes into yet another btrfs partition
that, like the overlayfs, isn't managed by the update system.

The approach is to update the second rootfs while the first rootfs is
running, swap the env in the uboot environment to point at the other
rootfs, and then reboot when appropriate. This lets us avoid downtime
while downloading the update; we download in the background.

We build everything with yocto but afaik we don't have much specific stuff
for the update system because we don't have an upgrade partition but rather
two sets of partitions.
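
A hypothetical sketch of what the U-Boot side of such a scheme can look like (variable names, partition numbers, and load commands invented for illustration, not taken from the setup described above):

```
# "slot" picks which kernel/rootfs pair boots; the updater flips it
# (and re-runs saveenv) only after the inactive pair has been written
# and verified.
setenv slot 2
setenv bootcmd 'setenv bootargs root=/dev/mmcblk0p${slot} rootfstype=squashfs ro; ext4load mmc 0:1 ${loadaddr} zImage-${slot}; bootz ${loadaddr}'
saveenv
```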

Thoughts?

Chris


Re: [yocto] Yocto-ish upgrade-in-place strategy

2015-05-03 Thread Patrick Doyle
On Sun, May 3, 2015 at 5:51 AM, Chris Morgan chmor...@gmail.com wrote:
 Is there a standard way? We've seen a few different approaches between
 android systems (phones), Linux distributions, and things like chromebook.

 In our case we are using two uboot, two kernel, and two root file system
 partitions with uboot environment controlling the active one. Squashfs for
 the root fs. Raw images for uboot and kernel. Overlayfs in another rw
 partition that we put on top of the rootfs where we keep system
 configuration. Media and other stuff goes into yet another btrfs partition
 that, like the overlayfs, isn't managed by the update system.

 Approach is to update the second rootfs while one rootfs is running. Swap
 env in uboot environment to point at the other rootfs and then when
 appropriate reboot. This lets us avoid downtime while downloading the
 update, we download in the background.

 We build everything with yocto but afaik we don't have much specific stuff
 for the update system because we don't have an upgrade partition but rather
 two sets of partitions.

 Thoughts?

Hi Chris,
Thanks for the description.  We do something very similar to that
on one of our systems (except for the part about downloading in the
background).

As I have been thinking about this, I have been thinking about
something along the lines of creating a rescue/boot partition with the
following:

core-image-minimal
squashfs
ssh (for rescue operations)
kexec-tools

U-boot would boot that image.  That image would contain an rc.local
that would mount the active partition and kexec the kernel it found
there.  The active partition would contain our deployed application,
which would likely also have a squashfs rootfs (because I don't want
to trash the flash when the power goes out).  It would also have
whatever hooks we needed to upgrade images in place.
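
That rc.local could be as small as something like this (an untested sketch; the device nodes, file names, and command line are placeholders, not a tested recipe):

```
#!/bin/sh
# Rescue image boot hook: mount the active partition read-only and
# kexec into the kernel deployed there. If anything fails, fall
# through and stay in the rescue image, where ssh is available.
mkdir -p /mnt/active
mount -o ro /dev/mmcblk0p2 /mnt/active || exit 0   # placeholder device

if [ -f /mnt/active/zImage ]; then
    kexec -l /mnt/active/zImage \
        --command-line="root=/dev/mmcblk0p2 rootfstype=squashfs ro"
    kexec -e    # does not return on success
fi
```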

My thinking is that the boot/rescue image would be a factory/expert
image.  It (of necessity) needs special tools and/or access to the
board (JTAG header, USB rescue port, etc...) to be installed.  But
once it is installed, it is capable of booting whatever Linux I place
in the active partition.  I might make it smart enough to support an
active partition and a fallback partition, but I kinda feel like I
already have a fallback mechanism in place with the boot/rescue
image.

At some point, I begin to think that I am overthinking this.  I also
begin to think that everybody (in this space) has these same issues.
Why do we keep reinventing the wheel?

--wpd


[yocto] fido: out-of-tree module build fails due to wrong Module.symvers reference

2015-05-03 Thread Timo Korthals

Dear yocto developers,

with yocto project releases < 1.8 we were able to build our out-of-tree 
module without any linking failures, but since the upgrade to 1.8 there 
seems to be something wrong with the Module.symvers.
For our external module we use exactly the same Makefile as proposed 
in hello-mod (but of course with a different obj-m argument).
When we build the module, the do_compile stage prints what you can 
find under [1].
I think the problem is a wrong Module.symvers being used by the 
MODPOST stage.


I just searched for all Module.symvers files in my fido work folder, 
after a full build:
-rw-r--r-- 1 myName staff 528483 Apr 29 17:59 
/home/myName/fido/tmp/work/amiro-poky-linux-gnueabi/linux-yocto/3.19.2+gitAUTOINC+9e70b482d3_31b35da6a5-r0/linux-amiro-standard-build/Module.symvers
-rw-r--r-- 1 myName staff  0 May  3 16:22 
/home/myName/fido/tmp/work/amiro-poky-linux-gnueabi/ov5647/0.0-r0/Module.symvers
-rw-r--r-- 1 myName staff 386012 Apr 29 17:58 
/home/myName/fido/tmp/work-shared/amiro/kernel-build-artifacts/Module.symvers


In my old (releases < 1.8) work folder, the Module.symvers files were 
distributed as follows:
-rw-r--r-- 4 myName staff 528483 Mar 11 13:48 
/home/myName/tmp/sysroots/amiro/usr/src/kernel/Module.symvers
-rw-r--r-- 4 myName staff 528483 Mar 11 13:48 
/home/myName/tmp/work/amiro-poky-linux-gnueabi/linux-yocto/3.19+gitAUTOINC+8897ef68b3_bfa76d4957-r0/image/usr/src/kernel/Module.symvers
-rw-r--r-- 4 myName staff 528483 Mar 11 13:48 
/home/myName/tmp/work/amiro-poky-linux-gnueabi/linux-yocto/3.19+gitAUTOINC+8897ef68b3_bfa76d4957-r0/linux-amiro-standard-build/Module.symvers
-rw-r--r-- 2 myName staff 528483 Mar 11 13:48 
/home/myName/tmp/work/amiro-poky-linux-gnueabi/linux-yocto/3.19+gitAUTOINC+8897ef68b3_bfa76d4957-r0/packages-split/kernel-dev/usr/src/kernel/Module.symvers
-rw-r--r-- 2 myName staff 528483 Mar 11 13:48 
/home/myName/tmp/work/amiro-poky-linux-gnueabi/linux-yocto/3.19+gitAUTOINC+8897ef68b3_bfa76d4957-r0/package/usr/src/kernel/Module.symvers
-rw-r--r-- 4 myName staff 528483 Mar 11 13:48 
/home/myName/tmp/work/amiro-poky-linux-gnueabi/linux-yocto/3.19+gitAUTOINC+8897ef68b3_bfa76d4957-r0/sysroot-destdir/usr/src/kernel/Module.symvers
-rw-r--r-- 1 myName staff  0 Mar  3 10:08 
/home/myName/tmp/work/amiro-poky-linux-gnueabi/ov5647/0.0-r0/ov5647/Module.symvers


A small hotfix that solves my problem is simply copying the 
Module.symvers from the kernel build into the module's source folder, 
via the following extension in my ov5647.bb:

# Copy the file containing the exported kernel symbols
do_patchappend() {
    cp -f /home/myName/fido/tmp/work/amiro-poky-linux-gnueabi/linux-yocto/3.19.2+gitAUTOINC+9e70b482d3_31b35da6a5-r0/linux-amiro-standard-build/Module.symvers ${S}
}
addtask patchappend after do_configure before do_compile

If I take the Module.symvers from 
kernel-build-artifacts/ instead, linking fails because all the 
symbols that matter for the module are missing.
Do you have any idea whether this is a bug in the new 1.8 build 
environment, or whether I am doing something wrong?


Greetings,
Timo



[1]

+ module_do_compile
+ unset CFLAGS CPPFLAGS CXXFLAGS LDFLAGS
+ oe_runmake 
KERNEL_PATH=/home/myHome/fido/tmp/work-shared/amiro/kernel-source 
KERNEL_VERSION=3.19.2-yocto-standard 'CC=arm-poky-linux-gnueabi-gcc  ' 
'LD=arm-poky-linux-gnueabi-ld.bfd  ' 'AR=arm-poky-linux-gnueabi-ar ' O=/mnt$
+ oe_runmake_call 
KERNEL_PATH=/home/myHome/fido/tmp/work-shared/amiro/kernel-source 
KERNEL_VERSION=3.19.2-yocto-standard 'CC=arm-poky-linux-gnueabi-gcc  ' 
'LD=arm-poky-linux-gnueabi-ld.bfd  ' 'AR=arm-poky-linux-gnueabi-ar ' O$
+ bbnote make -j 8 -e MAKEFLAGS= 
KERNEL_SRC=/home/myHome/fido/tmp/work-shared/amiro/kernel-source 
KERNEL_PATH=/home/myHome/fido/tmp/work-shared/amiro/kernel-source 
KERNEL_VERSION=3.19.2-yocto-standard 'CC=arm-poky-$
+ echo 'NOTE: make -j 8 -e MAKEFLAGS= 
KERNEL_SRC=/home/myHome/fido/tmp/work-shared/amiro/kernel-source 
KERNEL_PATH=/home/myHome/fido/tmp/work-shared/amiro/kernel-source 
KERNEL_VERSION=3.19.2-yocto-standard CC=arm-p$
NOTE: make -j 8 -e MAKEFLAGS= 
KERNEL_SRC=/home/myHome/fido/tmp/work-shared/amiro/kernel-source 
KERNEL_PATH=/home/myHome/fido/tmp/work-shared/amiro/kernel-source 
KERNEL_VERSION=3.19.2-yocto-standard CC=arm-poky-linu$
+ make -j 8 -e MAKEFLAGS= 
KERNEL_SRC=/home/myHome/fido/tmp/work-shared/amiro/kernel-source 
KERNEL_PATH=/home/myHome/fido/tmp/work-shared/amiro/kernel-source 
KERNEL_VERSION=3.19.2-yocto-standard 'CC=arm-poky-linux-g$


make[1]: Entering directory 
`/home/myHome/fido/tmp/work-shared/amiro/kernel-source'


make[2]: Entering directory 
`/home/myHome/fido/tmp/work-shared/amiro/kernel-build-artifacts'


  LD 
/home/myHome/fido/tmp/work/amiro-poky-linux-gnueabi/ov5647/0.0-r0/built-in.o


  CC [M] 

Re: [yocto] fido: out-of-tree module build fails due to wrong Module.symvers reference

2015-05-03 Thread Timo Korthals

Dear yocto developers,

the following more generic workaround also worked, which lets me 
assume that the Module.symvers in ${S} is somehow broken?


Content of linux-yocto_3.19.bbappend:
do_install_append() {
cp -f ${KBUILD_OUTPUT}/Module.symvers ${STAGING_KERNEL_BUILDDIR}
}

Greetings,
Timo


On 03.05.2015 17:26, Timo Korthals wrote:

[original message quoted in full; trimmed]

Re: [yocto] Yocto-ish upgrade-in-place strategy

2015-05-03 Thread Chris Morgan
On Sunday, May 3, 2015, Patrick Doyle wpds...@gmail.com wrote:

 [Patrick's message quoted in full; trimmed]



We thought about the rescue/recovery approach as well. The issue was how
the system would recover in the field and how updates were handled.

Consider that to update the full system image you'd have to download and
store that image to flash, which means reserving extra space for it. For
recovery you'd take a similar approach: boot into recovery, and from
there you could write directly to the primary fs. The trouble there is
download speed and connectivity: the system is down until you download
the update diff/full image, and that could take some time.

So we didn't see a path forward that would support slow connections plus no
down time. And we would still need some place to put the new image that was
guaranteed to be available which means it couldn't be used for application
or other data that had to be preserved across updates. With two rootfs
partitions we can download in place and take our time doing so. Plus no
special recovery image and no reserved area that is unused except during
update. We do have a full duplicate rootfs though.

I see where you are going, for other use cases it may be better to have a
primary and a recovery. Our rootfs is around 200MB at this point.

Chris