On 8/5/19 2:21 AM, mikko.rap...@bmw.de wrote:
Hi,

On Fri, Aug 02, 2019 at 05:17:21PM +0100, Richard Purdie wrote:
On Fri, 2019-08-02 at 16:53 +0100, Richard Purdie wrote:
With the patches in master-next and this configuration in local.conf:

BB_HASHSERVE = "localhost:0"
BB_SIGNATURE_HANDLER = "OEEquivHash"

$ bitbake core-image-sato
$ bitbake m4-native -c install -f
$ bitbake core-image-sato

will result in do_populate_sysroot of m4-native re-running; it will see
that the output matches the previous build and will then skip straight
to rootfs generation, pulling all the other pieces from sstate.

Note that for this to work, m4-native has to have previously been built
with the hashserv running, otherwise it has nothing to compare its
output to.

I think this should be a "big deal" for many developers, reducing
unneeded rebuilds and hence speeding up development.
Awesome, thanks for pushing this!

I should have mentioned, this code relies on reproducible builds as
it's comparing the binary output. The more reproducible builds are, the
more likely sstate reuse will happen.

This is one reason reproducible builds are important!
What else do users need to enable to get more reproducible builds, or
are poky defaults enough?

You can test with what we currently have by adding:

 INHERIT += "reproducible_build"

to local.conf. This is subject to change though as we are still sorting out some of the details.


Are there any tools available for debugging build reproducibility
issues, e.g. when task hashes suddenly change?
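For tracking down why a task hash changed, BitBake's signature tools are a good starting point (a sketch; the recipe name and paths here are illustrative, and exact options can vary between releases):

```shell
# Compare the two most recent signature data files for a task,
# showing which variables or dependencies changed:
bitbake-diffsigs -t m4-native do_populate_sysroot

# Or diff two specific siginfo/sigdata files from the stamps
# or sstate directories:
bitbake-diffsigs path/to/old.siginfo path/to/new.siginfo

# Dump the full contents of a single signature file:
bitbake-dumpsig path/to/task.siginfo
```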

The current goal of the reproducible build work is to create an OEQA test that checks whether all recipes required for a given image can be built in a reproducible manner. This QA test will be run on the autobuilder to detect regressions. We are currently working toward getting core-image-minimal reproducible, but I think Ross Burton investigated core-image-sato and determined it wouldn't be too much extra work, so we might try that one also. Either way, the test is designed to be easily extensible so you can write your own tests for whatever image you would like.

I think that there is a significant opportunity to improve the reporting and tests for reproducible builds, and much of this complements hash equivalence. For example, the hash equivalence code is (or can be) really good at reporting why a task's output hash changed, which would be really useful information for debugging reproducibility.

I also have some patches that should make it easier to debug packages that didn't build reproducibly when the QA check fails by allowing you to run diffoscope on them.
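To illustrate what that debugging looks like (diffoscope is a separate tool from the Reproducible Builds project; the package paths below are hypothetical):

```shell
# Compare two builds of the same package and get a detailed,
# recursive report of where the binary output differs:
diffoscope build-A/m4-native.tar.gz build-B/m4-native.tar.gz

# Or write an HTML report instead of printing to the terminal:
diffoscope --html report.html \
    build-A/m4-native.tar.gz build-B/m4-native.tar.gz
```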


Cheers,

-Mikko
--
_______________________________________________
Openembedded-core mailing list
Openembedded-core@lists.openembedded.org
http://lists.openembedded.org/mailman/listinfo/openembedded-core