Re: [OE-core] [PATCH 1/1] oeqa/utils/commands.py: Fix get_bb_vars() when called without arguments

2016-12-14 Thread Lopez, Mariano



On 12/14/2016 10:01 AM, Leonardo Sandoval wrote:



On 12/14/2016 01:45 AM, mariano.lo...@linux.intel.com wrote:

From: Mariano Lopez 

Commit 9d55e9d489cd78be592fb9b4d6484f9060c62fdd broke get_bb_vars()
when called without arguments. This fixes that issue.

Signed-off-by: Mariano Lopez 
---
  meta/lib/oeqa/utils/commands.py | 3 ++-
  1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/meta/lib/oeqa/utils/commands.py 
b/meta/lib/oeqa/utils/commands.py

index 6acb24a..aecf8cf 100644
--- a/meta/lib/oeqa/utils/commands.py
+++ b/meta/lib/oeqa/utils/commands.py
@@ -149,7 +149,8 @@ def get_bb_vars(variables=None, target=None, postconfig=None):
     """Get values of multiple bitbake variables"""
     bbenv = get_bb_env(target, postconfig=postconfig)
-    variables = variables.copy()
+    if variables is not None:
+        variables = variables.copy()


Is the type of 'variables' a dict (or some derived type)? I see some 
get_bb_env calls using lists, and lists do not have the copy method.


I only see 3 calls in OE-core; two of them use None as the first argument, 
and the last one uses a list. Also, if you check the function, it handles 
the argument as a list, so the function expects a list or None. 
And lists do support the copy method; I just double-checked it:


>>> l = [1,2,3]
>>> l.copy()
[1, 2, 3]
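For illustration, a minimal self-contained sketch of the fixed logic (names and the dict-based lookup are simplifications; the real get_bb_vars() parses `bitbake -e` output rather than taking a dict):

```python
def get_bb_vars(variables=None, bbenv=None):
    """Sketch of the fixed guard: 'variables' is a list of variable
    names, or None meaning "return every variable"."""
    bbenv = bbenv or {}
    if variables is not None:
        # Copy so the caller's list is not mutated while names are
        # consumed as they are found.
        variables = variables.copy()
    values = {}
    for name, value in bbenv.items():
        if variables is None:
            # No filter given: collect everything.
            values[name] = value
        elif name in variables:
            values[name] = value
            variables.remove(name)
    return values
```

With the guard in place, both call styles from OE-core work: passing None returns the whole environment, and passing a list filters it without modifying the caller's list.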

--
___
Openembedded-core mailing list
Openembedded-core@lists.openembedded.org
http://lists.openembedded.org/mailman/listinfo/openembedded-core


Re: [OE-core] [PATCH 2/8] oeqa/sdkext/devtool.py: remove workspace/sources before running test cases

2016-12-13 Thread Lopez, Mariano



On 12/12/2016 10:45 PM, Paul Eggleton wrote:

On Wed, 16 Nov 2016 22:19:31 Robert Yang wrote:

Fixed:
MACHINE = "qemux86-64"
require conf/multilib.conf
MULTILIBS = "multilib:lib32"
DEFAULTTUNE_virtclass-multilib-lib32 = "x86"

$ bitbake core-image-minimal -cpopulate_sdk_ext
[snip]
ERROR: Source tree path
/path/to/tmp/work/qemux86_64-poky-linux/core-image-minimal/1.0-r0/testsdkex
t/tc/workspace/sources/v4l2loopback-driver already exists and is not
empty\n' [snip]

This is because the test case will run twice
(environment-setup-core2-64-poky-linux and
environment-setup-x86-pokymllib32-linux) and fails in the second
run; 'devtool reset' cannot remove sources, so remove them before
running the test cases.

[YOCTO #10647]

Signed-off-by: Robert Yang 
---
  meta/lib/oeqa/sdkext/devtool.py | 3 +++
  1 file changed, 3 insertions(+)

diff --git a/meta/lib/oeqa/sdkext/devtool.py b/meta/lib/oeqa/sdkext/devtool.py
index 65f41f6..f101eb6 100644
--- a/meta/lib/oeqa/sdkext/devtool.py
+++ b/meta/lib/oeqa/sdkext/devtool.py
@@ -15,6 +15,9 @@ class DevtoolTest(oeSDKExtTest):
         self.myapp_cmake_dst = os.path.join(self.tc.sdktestdir, "myapp_cmake")
         shutil.copytree(self.myapp_cmake_src, self.myapp_cmake_dst)

+        # Clean the sources dir so "git clone" can run again
+        shutil.rmtree(os.path.join(self.tc.sdktestdir, "tc/workspace/sources"), True)
+
  def _test_devtool_build(self, directory):
  self._run('devtool add myapp %s' % directory)
  try:

It seems to me that what's missing here is a proper teardown process like we
have for oe-selftest, so that tests clean up after themselves whether they
succeed or fail. I'm unsure whether that is part of the plan for the new
QA refactoring though.


Cleaning directories before/after the tests is not in the plans for the 
QA refactoring. The way Robert did the cleanup, in the setUpClass method, 
is appropriate; that way it runs only once, before any test in 
the class.
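As a hedged sketch of that pattern (class and path names here are hypothetical, not the actual oeqa code), the once-per-class cleanup looks like this:

```python
import os
import shutil
import tempfile
import unittest


class DevtoolCleanupSketch(unittest.TestCase):
    """Hypothetical sketch: setUpClass runs once, before any test in the
    class, so a stale workspace/sources directory left by a previous SDK
    run is removed before the first 'git clone' is attempted."""

    sdktestdir = os.path.join(tempfile.gettempdir(), "sdktest-sketch")

    @classmethod
    def setUpClass(cls):
        # The positional 'True' in the patch is rmtree's ignore_errors
        # argument: removal is a no-op if the directory does not exist.
        shutil.rmtree(os.path.join(cls.sdktestdir, "tc/workspace/sources"),
                      True)

    def test_sources_dir_absent(self):
        self.assertFalse(
            os.path.exists(os.path.join(self.sdktestdir,
                                        "tc/workspace/sources")))
```

Because ignore_errors is set, the call is safe on both the first run (directory absent) and the second SDK environment's run (directory stale).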


Mariano



In the absence of that however I guess we don't have much choice but to do
something like this.

Cheers,
Paul





Re: [OE-core] [PATCH v2] grub: add -Wno-error=trampolines to native CFLAGS

2016-03-19 Thread Lopez, Mariano



On 3/17/2016 7:30 PM, Randle, William C wrote:

Previous patch was not against master. Updated against master branch.

Fixes [YOCTO #9201]
Adds -Wno-error=trampolines to native CFLAGS to prevent multiple compile
errors when using gcc 5.3.0 on Gentoo.

Signed-off-by: Bill Randle 
---
  meta/recipes-bsp/grub/grub-efi_2.00.bb | 4 ++++
  1 file changed, 4 insertions(+)

diff --git a/meta/recipes-bsp/grub/grub-efi_2.00.bb 
b/meta/recipes-bsp/grub/grub-efi_2.00.bb
index 4e80e18..ca73234 100644
--- a/meta/recipes-bsp/grub/grub-efi_2.00.bb
+++ b/meta/recipes-bsp/grub/grub-efi_2.00.bb
@@ -35,6 +35,10 @@ EXTRA_OECONF = "--with-platform=efi --disable-grub-mkfont \
  
  EXTRA_OECONF += "${@bb.utils.contains('DISTRO_FEATURES', 'largefile', '--enable-largefile ac_cv_sizeof_off_t=8', '--disable-largefile', d)}"
  
+# ldm.c:114:7: error: trampoline generated for nested function 'hook' [-Werror=trampolines]

+# and many other places in the grub code when compiled with some native gcc 5.3 compilers
+CFLAGS_append_class-native = " -Wno-error=trampolines"
+
  do_install_class-native() {
 install -d ${D}${bindir}
 install -m 755 grub-mkimage ${D}${bindir}
--
2.5.0


I'm a Gentoo user and I have the trampoline issue with gcc 4.9.3. I just 
tested this patch on Gentoo and it works; I can now remove my bbappend file 
from my personal layer!


Mariano


Re: [OE-core] [PATCH 1/1] yocto-bsp: Set linux-yocto-4.1 as default for x86-64

2016-03-02 Thread Lopez, Mariano



On 3/2/2016 11:37 AM, Saul Wold wrote:

On Wed, 2016-03-02 at 08:14 +, mariano.lo...@linux.intel.com wrote:

From: Mariano Lopez 

Setting default kernel to linux-yocto-4.1 now that
3.19 bbappend is no longer in the tree.


This should default to 4.4 for 2.1, correct? 4.1 is the LSB/LTS kernel
for 2.1.


I was setting the default to the same as the other archs in order to fix 
the error with x86-64. If it's okay to change to version 4.4 at this 
point in the release I can send a v2 patch with all the archs updated.


Mariano


Re: [OE-core] [yocto] RFC: Reference updater filesystem

2015-11-24 Thread Lopez, Mariano



On 11/24/2015 12:06 AM, Anders Darander wrote:

* Mariano Lopez  [151123 22:41]:


There has been interest in an image based software updater in Yocto Project.

Ok. Sure, it might be nice to have something that can be shared, instead
of everyone building their own solutions.


The idea is to integrate one, not build one from scratch.




The proposed solution for an image-based updater is to use Stefano Babic's
software updater (http://sbabic.github.io/swupdate). This software does a
binary copy, so at least two partitions are needed; these
partitions would be the rootfs and the maintenance partition. The rootfs
is the main partition used to boot during normal device operation; the
maintenance partition, on the other hand, will be used to update the main partition.

I haven't checked the swupdate tool, though I'd suspect that it also
supports the alternating rootfs use case? (I.e. run system1 update
system2; reboot to system2. Next update is system1). This is a rather
common setup, not least when you need a remote upgrade facility.

Would your proposed inclusion to the Yocto Project support that case
too?


Yeah, it would be possible to have two "rootfs" partitions, do the update, 
and then just reboot once.





To update the system, the user has to connect to the device and boot into the
maintenance partition; once in the maintenance partition the software
updater will copy the new image to the rootfs partition. A final reboot into
the rootfs is necessary to complete the upgrade.

Like said above, not all systems can be reached manually (at least not in a
cost-efficient way). Sure, the maintenance partition scheme can be made
to work anyway...


I plan to release this in phases; in the first one the update will be done 
manually. The idea is to implement tools to automate the update process 
(where it can be automated).





As mentioned before, the software will copy an image to the partition, so
everything in that partition will be wiped out, including custom
configurations. To avoid losing the configuration I explored three different
solutions:
1. Use a separate partition for the configuration.
   a. The pro of this method is that the partition is not touched during the
update.
   b. The con of this method is that the configuration is not directly in the rootfs
(example: /etc).

I'd vote for that as well. Though, I only keep the re-writable
configurations here. The ones that are constant across all systems are
shipped in /etc in the read-only rootfs.


With the above information I'm proposing to use a separate partition for the
configuration; this is because it is more reliable and doesn't require big
changes in the current architecture.
So, the idea is to have 4 partitions in the media:
1. boot. This is the usual boot partition
2. data. This will hold the configuration files. Not modified by updates.
3. maintenance. This partition will be used to update rootfs.
4. rootfs. Partition used for normal operation.

How flexible do you intend to make this system? Allow everything that
swupdate supports? Or a specific subset?


If you are referring to the filesystem creation, I would say very 
flexible. It will be implemented using wic instead of a class, so you just 
need to change a file to suit your needs. If you are referring to the swupdate 
features, I plan to target a generic use case; as an example, I won't use 
the MTD capabilities of the software.




Cheers,
Anders





Re: [OE-core] RFC: Reference updater filesystem

2015-11-24 Thread Lopez, Mariano

On 11/24/2015 1:32 AM, Randy Witt wrote:



On Mon, Nov 23, 2015 at 1:41 PM, Mariano Lopez 
mailto:mariano.lo...@linux.intel.com>> 
wrote:


There has been interest in an image based software updater in
Yocto Project. The proposed solution for an image-based updater is
to use Stefano Babic's software updater
(http://sbabic.github.io/swupdate). This software does a binary
copy, so at least two partitions are needed; these
partitions would be the rootfs and the maintenance partition. The
rootfs is the main partition used to boot during normal
device operation; the maintenance partition, on the other hand,
will be used to update the main partition.

To update the system, the user has to connect to the device and boot
into the maintenance partition; once in the maintenance partition
the software updater will copy the new image to the rootfs
partition. A final reboot into the rootfs is necessary to
complete the upgrade.

As mentioned before, the software will copy an image to the
partition, so everything in that partition will be wiped out,
including custom configurations. To avoid losing the
configuration I explored three different solutions:
1. Use a separate partition for the configuration.
  a. The pro of this method is that the partition is not touched during
the update.
  b. The con of this method is that the configuration is not directly
in the rootfs (example: /etc).

Configuration files can be anywhere a package decides to install them. 
So having a single partition would be difficult. If you could, you 
would most likely be forced to have an initramfs to make sure /etc was 
mounted before init runs.


/etc was an example; the image should have the files required to make 
the target boot and then get the application configuration from this 
other partition. This is like OpenWrt does it: it has a read-only rootfs 
and a small read-write partition where the user can write their 
configuration and restore it at boot time.



2. Do the backup during the update.
  a. The pro is that the configuration is directly in the rootfs.
  b. The con is that if the update fails, the configuration
would most likely be lost.

Why would the configuration be lost if the update fails? Couldn't it 
just be stored on the thumbdrive?


If there is a power loss while the configuration is being copied, the 
partition could become corrupt and be difficult to recover. And, as you 
mentioned before, the configuration files could be anywhere, so a 
script must be customized to gather all those files, and once the update is 
complete another script must restore them; this could be 
cumbersome compared to the application keeping its config on another partition.



3. Have an OverlayFS for the rootfs or the partition that has the
configuration.
  a. The pro is that the configuration is "directly" in the rootfs.
  b. The con is the need to provide a custom init to
guarantee the overlay is mounted before the boot process starts.

With the above information I'm proposing to use a separate
partition for the configuration; this is because it is more reliable
and doesn't require big changes in the current architecture.

So, the idea is to have 4 partitions in the media:
1. boot. This is the usual boot partition.
2. data. This will hold the configuration files. Not modified by
updates.
3. maintenance. This partition will be used to update rootfs.
4. rootfs. Partition used for normal operation.

Mariano






Re: [OE-core] [yocto] [oe] RFC: Reference updater filesystem

2015-11-24 Thread Lopez, Mariano



On 11/24/2015 7:47 AM, Mark Hatle wrote:

On 11/24/15 4:39 AM, Roman Khimov wrote:

In a message dated 23 November 2015 15:41:28, Mariano Lopez wrote:

1. Use a separate partition for the configuration.
a. The pro of this method is that the partition is not touched during the
update.
b. The con of this method is that the configuration is not directly in the
rootfs (example: /etc).

That's the right solution, although to do it really right (at least IMO) you
need to implement the /usr merge [1] (and that's orthogonal to using or not
using systemd), which can also help you make your /usr read-only (because
that's just code and static data) with read-write / for user data of various
sorts.

Why does merging /usr have anything to do with this?  I've read the case for
merging /usr and / and still don't understand why it "helps".  The key is that
if you have separate partitions for /usr and /, then you need to update both of
them in sequence.  Merging these two just seems like a lazy solution to people
not wanting to deal with early boot being self-contained.

Also having a separate / from /usr can help with '/' being your maintenance
partition in some cases.


3. Have an OverlayFS for the rootfs or the partition that has the
configuration.
a. The pro is that the configuration is "directly" in the rootfs.
b. The con is the need to provide a custom init to guarantee the
overlay is mounted before the boot process starts.

And this is the approach I would recommend not doing. I've used UnionFS for
things like that (overlaying the whole root file system) some 6 years ago; it
sounded nice and it kinda worked, but it wasn't difficult to make it fail
(just a little playing with power). We've even seen failures on production
devices, like when you have the whiteout file for a directory already written
but don't have the new files in it yet, and that can completely ruin the system.

Also, it usually works better when you don't have any changes in the lower
layer, but we're talking about updating it here; you can easily end up in a
situation where you have updated something in the rootfs but that was
overridden by the upper layer, and thus your user doesn't see any change.

When using overlayfs, I'd strongly recommend not doing it over the entire
rootfs.  This is generally a bad idea for the reasons stated above.

However, overlaying a part of the rootfs often makes sense.  /etc is a good
example.  This way applications that want their configurations in /etc can still
have it that way -- and there is always a (hopefully) reasonable default
configuration, should the configuration 'partition' get corrupted.  So worst
case the user can start over on configurations only.


Do you know a way to mount the overlay before all the services start? I 
tried to do this, but the only reliable way to do it was using a custom 
init; I couldn't accomplish this using systemd or sysvinit.
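For context, the kind of mount that needs to happen early is something like the following hypothetical fstab entry (paths are illustrative, assuming a writable data partition mounted at /data with pre-created upper/work directories; as noted above, getting this mounted before services start proved unreliable without a custom init):

```
# hypothetical /etc/fstab entry overlaying /etc; all paths are illustrative
overlay  /etc  overlay  lowerdir=/etc,upperdir=/data/etc/upper,workdir=/data/etc/work  0  0
```

The ordering problem is that /data itself must already be mounted and the overlay in place before any service reads /etc, which is exactly the window a custom init controls and a stock service manager may not.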




For applications and user data, these can and should be stored outside of the
main rootfs.  The FHS/LSB recommend '/opt', and while it doesn't matter if it's
-actually- /opt, the concept itself is good.


So going back to image upgrade.  The key here is that you need a way to update
arbitrary images with arbitrary contents, and a mechanism that is smart enough
to generate the update (vs a full image flash) to conserve bandwidth.


I was concerned about this too, not just bandwidth but resources on the 
target. Unfortunately I couldn't find an option that is generic enough 
to provide just the update. The idea is to integrate the tool into YP, 
not to develop a new one. Some of the tools that I checked needed 
btrfs partitions, needed Python on the target, or had other constraints 
that make the update system impossible for a lot of targets.




I still contend it's up to the device to be able to configure the system on how
to get the update and where to apply the update.  The tooling (host and target)
should simply assist with this.

Delta updates need version information in order to know they're doing the right
sequence of updating.

Full updates don't, but should be sent in a format that limits "empty space",
effectively sending them as sparse files.

On many devices you will need to flash as part of the download due to space
limitations.


The tool mentioned has this capability.



And you need the ability to flash multiple partitions.

maintenance
/
/usr
data

etc..  whatever it takes to either upgrade or restore the device.


Yes, that would be possible; the only limitation is that it is not possible 
to flash the partition that is currently in use.




--Mark


With the above information I'm proposing to use a separate partition for
the configuration; this is because it is more reliable and doesn't require
big changes in the current architecture.

So, the idea is to have 4 partitions in the media:
1. boot. This is the usual boot partition
2. data. This will hold the configuration files. Not modified by updates.
3. maintenance. This partition will be used to update rootfs.
4. rootfs. 

Re: [OE-core] [oe] RFC: Reference updater filesystem

2015-11-24 Thread Lopez, Mariano



On 11/24/2015 4:30 AM, Roman Khimov wrote:

In a message dated 23 November 2015 15:41:28, Mariano Lopez wrote:

1. Use a separate partition for the configuration.
a. The pro of this method is that the partition is not touched during the
update.
b. The con of this method is that the configuration is not directly in the
rootfs (example: /etc).

That's the right solution, although to do it really right (at least IMO) you
need to implement the /usr merge [1] (and that's orthogonal to using or not
using systemd), which can also help you make your /usr read-only (because
that's just code and static data) with read-write / for user data of various
sorts.


To be honest I'm not familiar with the /usr merge; I need to check on that 
to see if it is a good option with the current OE-core infrastructure.





3. Have an OverlayFS for the rootfs or the partition that has the
configuration.
a. The pro is that the configuration is "directly" in the rootfs.
b. The con is the need to provide a custom init to guarantee the
overlay is mounted before the boot process starts.

And this is the approach I would recommend not doing. I've used UnionFS for
things like that (overlaying the whole root file system) some 6 years ago; it
sounded nice and it kinda worked, but it wasn't difficult to make it fail
(just a little playing with power). We've even seen failures on production
devices, like when you have the whiteout file for a directory already written
but don't have the new files in it yet, and that can completely ruin the system.

Also, it usually works better when you don't have any changes in the lower
layer, but we're talking about updating it here; you can easily end up in a
situation where you have updated something in the rootfs but that was
overridden by the upper layer, and thus your user doesn't see any change.


Thanks for sharing your experience, this is another big con for the 
Overlay option.





With the above information I'm proposing to use a separate partition for
the configuration; this is because it is more reliable and doesn't require
big changes in the current architecture.

So, the idea is to have 4 partitions in the media:
1. boot. This is the usual boot partition.
2. data. This will hold the configuration files. Not modified by updates.
3. maintenance. This partition will be used to update rootfs.
4. rootfs. Partition used for normal operation.

You probably don't need to separate 1 and 3; all the code for the system update
should easily fit into an initramfs, and just making /boot a bit larger would
allow you to store a backup rootfs.


I left the /boot partition separate just in case there is a need to 
replace the kernel or the bootloader. This way it would be easier to 
change them using the same method as upgrading the rootfs.




Also, you can swap 4 and 2, which will be useful if you're installing on
different-sized storage devices. Usually you know the size of your
rootfs well enough, but you probably want to leave more space for user data
if there is an opportunity to do so; that's just easier to do with the
data partition at the end.


I was thinking the same thing, just backwards: usually 
configuration files are just small text files that don't require much 
space, while a new feature in the target will make the rootfs grow 
depending on the feature. I plan to use wic to create the filesystem 
structure. A good thing about wic is that it is very easy to do the 
swap; you just need to modify two options in the .wks file.
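As a sketch of that flexibility (a hypothetical .wks fragment; the labels, sizes, and source plugin are illustrative, not from the proposal), the four-partition layout could look like this, and swapping data and rootfs is just a matter of reordering two 'part' lines:

```
# hypothetical .wks sketch of the proposed layout; sizes are in MB
part /boot --source bootimg-pcbios --label boot --active --align 1024
part /data --fstype=ext4 --label data --size 64
part /maintenance --fstype=ext4 --label maintenance --size 512
part / --fstype=ext4 --label rootfs --size 1024
bootloader --timeout=5
```

To put the data partition at the end, as suggested, only the /data and / lines change places.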





[1] http://www.freedesktop.org/wiki/Software/systemd/TheCaseForTheUsrMerge/

