Re: [OE-core] [yocto] RFC: Improving the developer workflow

2014-08-09 Thread Mike Looijmans

On 08/07/2014 03:05 PM, Paul Eggleton wrote:

On Thursday 07 August 2014 11:13:02 Alex J Lennon wrote:

Historically I, and I suspect others, have done full image updates of
the storage medium,  onboard flash or whatever but these images are
getting so big now that I am trying to  move away from that and into
using package feeds for updates to embedded targets.


Personally with how fragile package management can end up being, I'm convinced
that full-image updates are the way to go for a lot of cases, but ideally with
some intelligence so that you only ship the changes (at a filesystem level
rather than a package or file level). This ensures that an upgraded image on
one device ends up exactly identical to any other device including a newly
deployed one. Of course it does assume that you have a read-only rootfs and
keep your configuration data / logs / other writeable data on a separate
partition or storage medium. However, beyond improvements to support for
having a read-only rootfs we haven't really achieved anything in terms of out-
of-the-box support for this, mainly due to lack of resources.


Full-image upgrades are probably most seen in lab environments, where 
the software is being developed.


Once deployed to customers, who will not be using a build system, the 
system must rely on packages and online updates.


Embedded systems look more like desktops these days.

- End-users will make changes to the system:
  - plugins and other applications
  - configuration data
  - application data (e.g. logging, EPG data)
- There is not enough room in the flash for two full images.
- There is usually a virtually indestructible bootloader that can
recover even from fully erasing the NAND flash.
- Flash filesystems are usually NAND. NAND isn't well suited to a
read-only root filesystem; you want to wear-level across the whole flash.


For the OpenPLi settop boxes we've been using online upgrades which 
basically just call opkg update && opkg upgrade for many years, and 
there's never been a real disaster. The benefits easily outweigh the 
drawbacks.
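
In shell terms the whole on-box upgrade path is little more than this
(a sketch; the feed URL is obviously illustrative):

  # /etc/opkg/*.conf points the box at the release feed, e.g.:
  #   src/gz all http://feeds.example.com/releases/all
  opkg update     # refresh the package lists from the feeds
  opkg upgrade    # install everything that has a newer version available
  # a reboot is only needed if the kernel or core libraries changed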


When considering system upgrades, too much attention is being spent on 
the corner cases. It's not really a problem if the box is bricked when 
the power fails during an upgrade, as long as there's a procedure the 
end-user can use to recover the system (on most settop boxes, debricking 
the system is just a matter of inserting a USB stick and flipping the 
power switch).




--
Mike Looijmans


Re: [OE-core] [yocto] RFC: Improving the developer workflow

2014-08-09 Thread Alex J Lennon

On 09/08/2014 09:13, Mike Looijmans wrote:
 On 08/07/2014 03:05 PM, Paul Eggleton wrote:
 On Thursday 07 August 2014 11:13:02 Alex J Lennon wrote:
 Historically I, and I suspect others, have done full image updates of
 the storage medium,  onboard flash or whatever but these images are
 getting so big now that I am trying to  move away from that and into
 using package feeds for updates to embedded targets.

 Personally with how fragile package management can end up being, I'm convinced
 that full-image updates are the way to go for a lot of cases, but ideally with
 some intelligence so that you only ship the changes (at a filesystem level
 rather than a package or file level). This ensures that an upgraded image on
 one device ends up exactly identical to any other device including a newly
 deployed one. Of course it does assume that you have a read-only rootfs and
 keep your configuration data / logs / other writeable data on a separate
 partition or storage medium. However, beyond improvements to support for
 having a read-only rootfs we haven't really achieved anything in terms of
 out-of-the-box support for this, mainly due to lack of resources.

 Full-image upgrades are probably most seen in lab environments,
 where the software is being developed.

 Once deployed to customers, who will not be using a build system, the
 system must rely on packages and online updates.

 Embedded systems look more like desktops these days.

 - End-users will make changes to the system:
 - plugins and other applications.
 - configuration data
 - application data (e.g. logging, EPG data)
 - There is not enough room in the flash for two full images.
 - There is usually a virtually indestructible bootloader that can
 recover even from fully erasing the NAND flash.
 - Flash filesystems are usually NAND. NAND isn't suitable for
 read-only root filesystems, you want to wear-level across the whole
 flash.


Agreeing with much of what you say, Mike, but I was under the impression
that there are block management layers now which will wear-level across
partitions?

So you could have your read-only partition but still be wear-levelled
across the NAND?

 For the OpenPLi settop boxes we've been using online upgrades which
 basically just call opkg update && opkg upgrade for many years, and
 there's never been a real disaster. The benefits easily outweigh the
 drawbacks.

 When considering system upgrades, too much attention is being spent in
 the corner cases. It's not really a problem if the box is bricked
 when the power fails during an upgrade. As long as there's a procedure
 the end-user can use to recover the system (on most settop boxes,
 debricking the system is just a matter of inserting a USB stick and
 flipping the power switch).



For us on this latest project - and indeed the past few projects - it is
a major problem (and cost) if the device is bricked. These devices are
not user-maintainable and we'd be sending engineers out around the world
to fix.

Not a good impression to make with the customers either.

Whether we're a usual use case I don't know.

Cheers,

Alex

-- 

Dynamic Devices Ltd http://www.dynamicdevices.co.uk/

Alex J Lennon / Director
1 Queensway, Liverpool L22 4RA

mobile: +44 (0)7956 668178

Linkedin http://www.linkedin.com/in/alexjlennon Skype
skype:alexjlennon?add




Re: [OE-core] [yocto] RFC: Improving the developer workflow

2014-08-09 Thread Mike Looijmans

On 08/09/2014 10:44 AM, Alex J Lennon wrote:


On 09/08/2014 09:13, Mike Looijmans wrote:

On 08/07/2014 03:05 PM, Paul Eggleton wrote:

On Thursday 07 August 2014 11:13:02 Alex J Lennon wrote:

Historically I, and I suspect others, have done full image updates of
the storage medium,  onboard flash or whatever but these images are
getting so big now that I am trying to  move away from that and into
using package feeds for updates to embedded targets.


Personally with how fragile package management can end up being, I'm convinced
that full-image updates are the way to go for a lot of cases, but ideally with
some intelligence so that you only ship the changes (at a filesystem level
rather than a package or file level). This ensures that an upgraded image on
one device ends up exactly identical to any other device including a newly
deployed one. Of course it does assume that you have a read-only rootfs and
keep your configuration data / logs / other writeable data on a separate
partition or storage medium. However, beyond improvements to support for
having a read-only rootfs we haven't really achieved anything in terms of
out-of-the-box support for this, mainly due to lack of resources.


Full-image upgrades are probably most seen in lab environments,
where the software is being developed.

Once deployed to customers, who will not be using a build system, the
system must rely on packages and online updates.

Embedded systems look more like desktops these days.

- End-users will make changes to the system:
- plugins and other applications.
- configuration data
- application data (e.g. logging, EPG data)
- There is not enough room in the flash for two full images.
- There is usually a virtually indestructible bootloader that can
recover even from fully erasing the NAND flash.
- Flash filesystems are usually NAND. NAND isn't suitable for
read-only root filesystems, you want to wear-level across the whole
flash.



Agreeing with much of what you say, Mike, but I was under the impression
that there are block management layers now which will wear-level across
partitions?

So you could have your read-only partition but still be wear-levelled
across the NAND?


Going off-topic here I guess, but I think you can use the UBI block
layer in combination with e.g. squashfs. I've never tried it, but it
should be possible to create a UBI volume, write a squashfs blob into it
and mount that.
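
Something along these lines ought to do it (untested sketch; the MTD
number, volume name and size are just illustrative):

  # expose a squashfs rootfs as a read-only block device on top of UBI
  ubiattach -m 4                             # attach MTD partition 4 as ubi0
  ubimkvol /dev/ubi0 -N rootfs -s 200MiB     # create a volume for the image
  ubiupdatevol /dev/ubi0_0 rootfs.squashfs   # write the squashfs blob into it
  ubiblock --create /dev/ubi0_0              # get /dev/ubiblock0_0 (or use the
                                             # ubi.block= kernel parameter)
  mount -t squashfs -o ro /dev/ubiblock0_0 /mnt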


However, any system that accomplishes that is sort of cheating. It
isn't a read-only rootfs in the true meaning of the word any more. In
time, the volume will move around on the flash, and thus the rootfs will
be re-written.



For the OpenPLi settop boxes we've been using online upgrades which
basically just call opkg update && opkg upgrade for many years, and
there's never been a real disaster. The benefits easily outweigh the
drawbacks.

When considering system upgrades, too much attention is being spent in
the corner cases. It's not really a problem if the box is bricked
when the power fails during an upgrade. As long as there's a procedure
the end-user can use to recover the system (on most settop boxes,
debricking the system is just a matter of inserting a USB stick and
flipping the power switch).


For us on this latest project - and indeed the past few projects - it is
a major problem (and cost) if the device is bricked. These devices are
not user-maintainable and we'd be sending engineers out around the world
to fix.

Not a good impression to make with the customers either.

Whether we're a usual use case I don't know.


I think you're a very usual use case, and it's a valid one indeed. I'm 
just trying to create awareness that there are projects out there that 
use OE for consumer products and have millions of devices running in 
the end-users' living rooms, whose users upgrade on a whim (the feed 
servers send out about 4TB of traffic each month).


I've also done medical devices where, just as you say, bricking it just 
isn't an option. These are typically inaccessible to the end-user, and 
see no modification other than about 1k of configuration data (e.g. wifi 
keys) during their lifespan.


--
Mike Looijmans


Re: [OE-core] [yocto] RFC: Improving the developer workflow

2014-08-09 Thread Alex J Lennon

On 09/08/2014 12:22, Mike Looijmans wrote:
 On 08/09/2014 10:44 AM, Alex J Lennon wrote:

 Going off-topic here I guess, but I think you can use the UBI block
 layer in combination with e.g. squashfs. Never tried it, but it should
 be possible to create an UBI volume, write a squash blob into it and
 mount that.

 However, any system that accomplishes that, is sort of cheating. It
 isn't a read-only rootfs in the true meaning of the word any more. In
 time, the volume will move around on the flash, thus the rootfs will
 be re-written.


I guess it comes down to what risks we're trying to guard against here?

Thinking aloud...

If I believe that my UBI - or other - layer is robust (and I think it is
nowadays?) then I should be able to trust that UBI can wear-level my
data across the NAND without a risk of data loss due to bad sectors,
power interruption or other causes (assuming enough spare blocks).

Now if that's a true statement then the risk of my main
'read-only-but-wear-levelled' file-system becoming corrupted due to this
is very low.

I think I would accept that risk - with some testing to prove it out to
myself - given that the main file-system partition is likely to be the
largest partition and if I am minimising cost/size of flash then I want
to be able to wear level using that larger area.

I've had exactly this problem before with e.g. data/logs on small
read-write data partitions which rapidly kill the flash as there's a
very small area being wear levelled.


So what I am thinking is that the bigger risk for us is if I remount
that OS filesystem as read/write and start doing some kind of update to
it, whether via package feeds or some delta-based system.

I think if I could remount read-write / start a transaction / do the
update / commit the update transaction, that would be rather good. And
of course if it gets interrupted or otherwise fails we just roll back.
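
Something like this is the shape of what I mean, purely as a sketch - it
assumes a snapshot-capable filesystem such as btrfs on block storage,
which is not what UBIFS gives us today, and the paths are made up:

  mount -o remount,rw /
  btrfs subvolume snapshot / /.before-upgrade      # "begin transaction"
  if opkg update && opkg upgrade; then
      btrfs subvolume delete /.before-upgrade      # "commit"
  else
      # "rollback": boot the snapshot next time instead of the half-updated rootfs
      snap_id=$(btrfs subvolume list / | awk '/before-upgrade/ {print $2}')
      btrfs subvolume set-default "$snap_id" /
      reboot
  fi
  mount -o remount,ro /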

 For the OpenPLi settop boxes we've been using online upgrades which
 basically just call opkg update && opkg upgrade for many years, and
 there's never been a real disaster. The benefits easily outweigh the
 drawbacks.

 When considering system upgrades, too much attention is being spent in
 the corner cases. It's not really a problem if the box is bricked
 when the power fails during an upgrade. As long as there's a procedure
 the end-user can use to recover the system (on most settop boxes,
 debricking the system is just a matter of inserting a USB stick and
 flipping the power switch).

 For us on this latest project - and indeed the past few projects - it is
 a major problem (and cost) if the device is bricked. These devices are
 not user-maintainable and we'd be sending engineers out around the world
 to fix.

 Not a good impression to make with the customers either.

 Whether we're a usual use case I don't know.

 I think you're a very usual use case, and it's a valid one indeed. I'm
 just trying to create awareness that there are projects out there that
 use OE for consumer products, and have millions of devices running in
 the end-users' living rooms, who upgrade at a whim (feed servers
 sending out about 4TB traffic each month).

 I've also done medical devices where, just as you say, bricking it
 just isn't an option. These are typically inaccessible by the
 end-user, and see no modification other than about 1k of configuration
 data (e.g. wifi keys) during their lifespan.


That's really interesting. Do you mind me asking who pays for that
traffic? (!)

Yes, we have done some medical devices in the past. The current crop is
smart buildings, which are similarly difficult to access if something
blows up. Then we've done some in-car telematics and train telemetry,
which are all difficult for the same reasons: inaccessibility,
maintenance constraints, and the desire to keep the users' fingers out
of the device.

I guess it's horses for courses isn't it. Glad to hear I'm not too much
of an outlier ;)

Cheers,

Alex



Re: [OE-core] [yocto] RFC: Improving the developer workflow

2014-08-08 Thread Nicolas Dechesne
On Thu, Aug 7, 2014 at 3:05 PM, Paul Eggleton
paul.eggle...@linux.intel.com wrote:
 Personally with how fragile package management can end up being, I'm convinced
 that full-image updates are the way to go for a lot of cases, but ideally with
 some intelligence so that you only ship the changes (at a filesystem level
 rather than a package or file level). This ensures that an upgraded image on
 one device ends up exactly identical to any other device including a newly
 deployed one. Of course it does assume that you have a read-only rootfs and
 keep your configuration data / logs / other writeable data on a separate
 partition or storage medium. However, beyond improvements to support for
 having a read-only rootfs we haven't really achieved anything in terms of out-
 of-the-box support for this, mainly due to lack of resources.

 However, whilst I haven't had a chance to look at it closely, there has been
 some work on this within the community:

 http://sbabic.github.io/swupdate/swupdate.html
 https://github.com/sbabic/swupdate
 https://github.com/sbabic/meta-swupdate/


fwiw, Ubuntu has started to do something like that for their phone images, see

https://wiki.ubuntu.com/ImageBasedUpgrades

I haven't used it nor looked into the details... I just had heard about
it and thought it was worth mentioning here. However, the main design
idea from that wiki page is exactly what we are discussing here, i.e.
build images on the 'server' side using our regular tools, but deploy
binary differences on targets.
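
To make the "binary differences" part concrete, the crudest possible
version of the idea with stock tools would be something like the
following - this is not how Ubuntu's system-image actually works, and
the paths are made up:

  # on the build server: record the delta needed to turn the old rootfs tree
  # into the new one
  rsync -a --only-write-batch=delta-1.0-to-1.1 new-rootfs/ old-rootfs/
  # ship delta-1.0-to-1.1 to the device, then apply it to the live rootfs there:
  rsync -a --read-batch=delta-1.0-to-1.1 /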


Re: [OE-core] [yocto] RFC: Improving the developer workflow

2014-08-08 Thread Alex J Lennon
Hi Paul,
 Personally with how fragile package management can end up being, I'm convinced
 that full-image updates are the way to go for a lot of cases, but ideally with
 some intelligence so that you only ship the changes (at a filesystem level
 rather than a package or file level). This ensures that an upgraded image on 
 one device ends up exactly identical to any other device including a newly 
 deployed one. Of course it does assume that you have a read-only rootfs and 
 keep your configuration data / logs / other writeable data on a separate 
 partition or storage medium. However, beyond improvements to support for 
 having a read-only rootfs we haven't really achieved anything in terms of out-
 of-the-box support for this, mainly due to lack of resources.

 However, whilst I haven't had a chance to look at it closely, there has been 
 some work on this within the community:

 http://sbabic.github.io/swupdate/swupdate.html
 https://github.com/sbabic/swupdate
 https://github.com/sbabic/meta-swupdate/
  


I had a quick look at this. It's interesting. If I am reading it
correctly, it's based on the old pattern of:

- Bootloader runs Partition A
- Update Partition B, set the bootloader to run Partition B
  - On failure, stay on Partition A and retry the update.
- Bootloader runs Partition B
- Update Partition A, set the bootloader to run Partition A
- etc.

We've done this type of thing before and it works well. Of course the
drawback is the amount
of flash you need to achieve it but it is a good robust system.
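
Roughly, the target-side update step ends up looking something like this
(a sketch using u-boot-fw-utils; the variable and partition names are
made up rather than taken from any real project):

  INACTIVE=/dev/mmcblk0p3                    # the copy we are NOT running from
  dd if=rootfs-new.img of=$INACTIVE bs=1M    # write the new image to the inactive copy
  sync
  fw_setenv bootpart 3                       # tell the bootloader to boot the new copy
  fw_setenv bootcount 0                      # reset the boot-failure counter
  reboot
  # if the new image fails to boot, U-Boot's bootcount/altbootcmd mechanism
  # (or a manual fw_setenv bootpart 2) drops back to the old partition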

I'd be interested to see how this could work with filesystem deltas,
say. I don't _think_ that is documented here?

...

Thinking a little further, what would also really interest me would be
to consider using the transactionality of the underlying file-system or
block-management layer for the update process.

Given that journalling and log-structured file-systems are already
designed to recover when file/metadata modifications are interrupted,
surely we should be able to open a macro-transaction at the start of the
partition update, and if that update doesn't complete with a
macro-commit then the f/s layer should be able to roll itself back
automatically? Perhaps the same could be done at a block-management
layer?
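
At the block level, LVM snapshots already give a rough approximation of
those semantics, at least on managed flash rather than raw NAND - a
sketch, with made-up volume names and do_partition_update standing in
for whatever actually writes the update:

  lvcreate --snapshot --size 512M --name rootfs_txn /dev/vg0/rootfs   # "begin"
  if do_partition_update; then
      lvremove -f /dev/vg0/rootfs_txn        # "commit": throw the snapshot away
  else
      lvconvert --merge /dev/vg0/rootfs_txn  # "rollback": revert the origin volume
      reboot                                 # the merge completes on reactivation
  fi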

Cheers,

Alex



Re: [OE-core] [yocto] RFC: Improving the developer workflow

2014-08-07 Thread Alex J Lennon

On 07/08/2014 10:10, Paul Eggleton wrote:
 Hi folks,

As most of you know, within the Yocto Project and OpenEmbedded we've been
trying to figure out how to improve the OE developer workflow. This potentially
covers a lot of different areas, but one in particular where I think we can
have some impact is helping application developers - people who are working on
some application or component of the system, rather than the OS as a whole.

 Currently, what we provide is an installable SDK containing the toolchain, 
 libraries and headers; we also have the ADT which additionally provides some 
 Eclipse integration (which I'll leave aside for the moment) and has some 
 ability to be extended / updated using opkg only.

 The pros:

 * Self contained, no extra dependencies
 * Relocatable, can be installed anywhere
 * Runs on lots of different systems
 * Mostly pre-configured for the desired target machine

 The cons:

 * No ability to migrate into the build environment
 * No helper scripts/tools beyond the basic environment setup
 * No real upgrade workflow (package feed upgrade possible in theory, but no
 tools to help manage the feeds and difficult to scale with multiple releases
 and targets)


Very interesting Paul.

fwiw, upgrade solutions are something that is still a real need imho, as
I think we discussed at one of the FOSDEMs.

(The other real need being an on-board test framework, again imho, which
I believe is ongoing.)

Historically I, and I suspect others, have done full image updates of
the storage medium, onboard flash or whatever, but these images are
getting so big now that I am trying to move away from that and into
using package feeds for updates to embedded targets.

My initial experience has been that:

- as you mention, it would be really helpful to have something more
around management of package feed releases / targets.

- some automation around deployment of package feeds to production
servers would help, or at least some documentation on best practice
(a rough sketch of what I mean is below).
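
Something as simple as this, documented and blessed, would already go a
long way (the paths, hostname and release name are made up):

  # on the build machine: regenerate the feed indexes and push them out
  bitbake package-index
  rsync -a --delete tmp/deploy/ipk/ \
      feeds@feeds.example.com:/var/www/feeds/myproduct/1.0/
  # on the target, /etc/opkg/myproduct.conf then points at that release, e.g.:
  #   src/gz all http://feeds.example.com/feeds/myproduct/1.0/all
  #   src/gz armv7a http://feeds.example.com/feeds/myproduct/1.0/armv7a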
 
The other big issue I am seeing, which is mostly my own fault thus far,
is that I have sometimes taken the easy option of modifying the root
filesystem image in various ways within the image recipe (for example
changing a Webmin configuration, perhaps).

However, when I then come to upgrade a package in-situ, such as Webmin,
those changes are then overwritten.

I think this is probably also an issue when upgrading packages that have
had local modifications made, and I wonder whether there's a solution to
this that I'm not aware of?

I am aware of course that mainstream package management tools allow
diffing, upgrading, ignoring and such, but I am unsure how that is
supported under Yocto at present?

As a minimum I will have to make sure my OEM recipe changes are all in
the correct .bbappends, I think (more best-practice notes needed there),
and I definitely need to understand better how configuration file
changes are handled when upgrading packages.

Cheers,

Alex





Re: [OE-core] [yocto] RFC: Improving the developer workflow

2014-08-07 Thread Paul Eggleton
Hi Alex,

On Thursday 07 August 2014 11:13:02 Alex J Lennon wrote:
 On 07/08/2014 10:10, Paul Eggleton wrote:
 fwiw, upgrade solutions are something that is still a real need imho, as
 I think we discussed at one of the FOSDEMs.

 (The other real need being an on-board test framework, again imho, and
 which I believe is ongoing)

Indeed; I think we've made some pretty good progress here in that the Yocto 
Project QA team is now using the automated runtime testing to do QA tests on 
real hardware. Reporting and monitoring of ptest results is also being looked 
at, as well as integration with LAVA.
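
For anyone wanting to try it, the build-side pieces are roughly this in
local.conf (a minimal sketch; the exact variable set depends on the
release you're on):

  INHERIT += "testimage"
  DISTRO_FEATURES_append = " ptest"
  EXTRA_IMAGE_FEATURES += "ptest-pkgs"
  # optionally narrow which runtime tests get run:
  # TEST_SUITES = "ping ssh"
  # then, once the image is built:
  #   bitbake core-image-sato -c testimage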
 
 Historically I, and I suspect others, have done full image updates of
 the storage medium,  onboard flash or whatever but these images are
 getting so big now that I am trying to  move away from that and into
 using package feeds for updates to embedded targets.

Personally with how fragile package management can end up being, I'm convinced 
that full-image updates are the way to go for a lot of cases, but ideally with 
some intelligence so that you only ship the changes (at a filesystem level 
rather than a package or file level). This ensures that an upgraded image on 
one device ends up exactly identical to any other device including a newly 
deployed one. Of course it does assume that you have a read-only rootfs and 
keep your configuration data / logs / other writeable data on a separate 
partition or storage medium. However, beyond improvements to support for 
having a read-only rootfs we haven't really achieved anything in terms of out-
of-the-box support for this, mainly due to lack of resources.
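
What does exist out of the box today is the image feature itself -
roughly this, with the writeable-data partition left as an exercise
(sketch):

  # in local.conf or the image recipe
  IMAGE_FEATURES += "read-only-rootfs"
  # writeable state (configuration, logs, etc.) then has to live on a separate
  # partition, e.g. mounted on /data from the image's fstab, with the relevant
  # directories pointed at it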

However, whilst I haven't had a chance to look at it closely, there has been 
some work on this within the community:

http://sbabic.github.io/swupdate/swupdate.html
https://github.com/sbabic/swupdate
https://github.com/sbabic/meta-swupdate/
 
 My initial experience has been that
 
 - as you mention it would be really helpful to have something more
 around management  of package feed releases / targets.
 
 - some automation around deployment of package feeds to production
 servers would help,   or at least some documentation on best practice.

So the scope of my proposal is a little bit narrower, i.e. for the SDK; and 
I'm suggesting that we mostly bypass the packaging system since it doesn't 
really add much benefit and sometimes gets in the way when you're an 
application developer in the middle of development and the level of churn is 
high (as opposed to making incremental changes after the product's release).

 The other big issue I am seeing, which is mostly my own fault thus far,
 is that I have sometimes  taken  the easy option of modifying the root
 filesystem image in various ways within the image recipe (for example
 changing  a Webmin configuration perhaps)
 
 However when I then come to upgrade a package in-situ, such as Webmin,
 the changes  are  then overwritten.
 
 I think this is probably also an issue when upgrading packages that have
 had local modifications made, and I wonder whether there's a solution to
 this that I'm not aware of?

We do have CONFFILES to point to configuration files that may be modified (and
thus should not just be overwritten on upgrade). There's not much logic in the
actual build system to deal with this; we just pass it to the package manager,
but it does work, and recipes that deploy configuration files (and bbappends, if
the configuration file is being added rather than changed from there) should set
CONFFILES so that the right thing happens on upgrade if you are using a
package manager on the target.
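
As a concrete (illustrative) sketch - recipe and file names made up - a
bbappend that supplies a changed config file and marks it as such would
look roughly like:

  # webmin_%.bbappend
  FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
  SRC_URI += "file://miniserv.conf"

  do_install_append() {
      install -m 0644 ${WORKDIR}/miniserv.conf ${D}${sysconfdir}/webmin/miniserv.conf
  }

  # mark it as a conffile so the package manager preserves local edits on upgrade
  CONFFILES_${PN} += "${sysconfdir}/webmin/miniserv.conf"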

A related issue is that for anything other than temporary changes it's often
not clear which recipe you need to change/append in order to provide your own
version of a particular config file. FYI I entered the following enhancement bug
some months ago to add a tool to help with that:

https://bugzilla.yoctoproject.org/show_bug.cgi?id=6447

 I am aware of course that mainstream package management tools allow
 diffing, upgrading,  ignoring and such but I am unsure as to how that is
 supported under Yocto at present?

There isn't really any support for this at the moment, no; I think we'd want 
to try to do this kind of thing at the build system end though to avoid tying 
ourselves to one particular package manager.
 
Cheers,
Paul

-- 

Paul Eggleton
Intel Open Source Technology Centre


Re: [OE-core] [yocto] RFC: Improving the developer workflow

2014-08-07 Thread Alex J Lennon

On 07/08/2014 14:05, Paul Eggleton wrote:
 Hi Alex,

 On Thursday 07 August 2014 11:13:02 Alex J Lennon wrote:
 On 07/08/2014 10:10, Paul Eggleton wrote:
 fwiw, upgrade solutions are something that is still a real need imho, as
 I think we discussed at one of the FOSDEMs.

 (The other real need being an on-board test framework, again imho, and
 which I believe is ongoing)
 Indeed; I think we've made some pretty good progress here in that the Yocto 
 Project QA team is now using the automated runtime testing to do QA tests on 
 real hardware. Reporting and monitoring of ptest results is also being looked 
 at as well as integration with LAVA.
  

Great news. I really want to look into this but as ever time is the
constraining factor.

 Historically I, and I suspect others, have done full image updates of
 the storage medium,  onboard flash or whatever but these images are
 getting so big now that I am trying to  move away from that and into
 using package feeds for updates to embedded targets.
 Personally with how fragile package management can end up being, I'm convinced
 that full-image updates are the way to go for a lot of cases, but ideally with
 some intelligence so that you only ship the changes (at a filesystem level
 rather than a package or file level). This ensures that an upgraded image on 
 one device ends up exactly identical to any other device including a newly 
 deployed one. Of course it does assume that you have a read-only rootfs and 
 keep your configuration data / logs / other writeable data on a separate 
 partition or storage medium. However, beyond improvements to support for 
 having a read-only rootfs we haven't really achieved anything in terms of out-
 of-the-box support for this, mainly due to lack of resources.

Deltas. Yes I've seen binary deltas attempted over the years, with
varying degrees of success.

I can see how what you say could work at a file-system level if we could
separate out the writeable data, yes. I'm not sure I've seen any tooling
around this, though?

Back in the day, when I first started out with Arcom Embedded Linux in
the late '90s, I had us do something similar with a read-only JFFS2
system partition and then a separate app/data partition. That seemed to
work OK. Maybe I need to revisit that.

 However, whilst I haven't had a chance to look at it closely, there has been 
 some work on this within the community:

 http://sbabic.github.io/swupdate/swupdate.html
 https://github.com/sbabic/swupdate
 https://github.com/sbabic/meta-swupdate/

I'll take a look. Thanks.

  
 My initial experience has been that

 - as you mention it would be really helpful to have something more
 around management  of package feed releases / targets.

 - some automation around deployment of package feeds to production
 servers would help,   or at least some documentation on best practice.
 So the scope of my proposal is a little bit narrower, i.e. for the SDK; and 
 I'm suggesting that we mostly bypass the packaging system since it doesn't 
 really add much benefit and sometimes gets in the way when you're an 
 application developer in the middle of development and the level of churn is 
 high (as opposed to making incremental changes after the product's release).

Mmm. Yes I can understand that. Same here.

 The other big issue I am seeing, which is mostly my own fault thus far,
 is that I have sometimes  taken  the easy option of modifying the root
 filesystem image in various ways within the image recipe (for example
 changing  a Webmin configuration perhaps)

 However when I then come to upgrade a package in-situ, such as Webmin,
 the changes  are  then overwritten.

 I think this is probably also an issue when upgrading packages that have
 had local modifications made, and I wonder whether there's a solution to
 this that I'm not aware of?
 We do have CONFFILES to point to configuration files that may be modified (and
 thus should not just be overwritten on upgrade). There's not much logic in the
 actual build system to deal with this, we just pass it to the package manager;
 but it does work, and recipes that deploy configuration files (and bbappends, if
 the configuration file is being added rather than changed from there) should set
 CONFFILES so that the right thing happens on upgrade if you are using a
 package manager on the target.

 A related issue is that for anything other than temporary changes it's often
 not clear which recipe you need to change/append in order to provide your own
 version of a particular config file. FYI I entered the following enhancement bug
 some months ago to add a tool to help with that:

 https://bugzilla.yoctoproject.org/show_bug.cgi?id=6447

Interesting, thanks. I don't recall seeing this in recipes. I might have
missed it, or maybe not many people are using this feature in their
recipes? Of course the next issue is not knowing what you want to do
with those conf files during an unattended upgrade onto an embedded box.
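
For the unattended case the choice seems to boil down to one of two
policies, something like this (a sketch; exact behaviour may differ
between opkg versions):

  opkg upgrade                      # default: keep locally modified conffiles
  opkg upgrade --force-maintainer   # take the packaged (maintainer's) version instead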