On Mon, 2020-02-17 at 04:27 -0800, philip.le...@domino-uk.com wrote:
> Something using the built-in cache mirror in Yocto - there are a few
> ways it can do this, as it's essentially a file share somewhere.
> https://pelux.io/2017/06/19/How-to-create-a-shared-sstate-dir.html
> for an example shows how to share it via NFS, but you can also use http or 
> ftp.
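
The http variant from that link boils down to one variable in each
worker's local.conf; this is a hedged sketch and the server name is an
assumption, not something from the thread:

```conf
# Read-only remote sstate mirror; workers fall back to building from
# scratch for any object the mirror doesn't have.
# sstate.example.com is a placeholder for your own cache server.
SSTATE_MIRRORS ?= "file://.* http://sstate.example.com/PATH;downloadfilename=PATH"
```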

Sharing sstate between the workers is the obvious win, as is rm_work to
reduce individual build sizes.
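
For reference, rm_work is a one-line addition to local.conf (the image
name in the exclusion is a hypothetical example):

```conf
# Delete each recipe's work directory once it has built successfully.
INHERIT += "rm_work"
# Optionally keep the workdir of recipes you debug often.
RM_WORK_EXCLUDE += "my-image"
```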

> Having a single cache largely solves the storage issue as there is
> only one cache, so having solved that issue, it introduces a few more
> questions and constraints:
> 
> How do we manage the size of the cache?
> There’s no built-in expiry mechanism I could find. This means we’d
> probably have to create something ourselves (parse access logs from
> the server hosting the cache and apply a garbage collector process).

The system is set up to "touch" files it uses if it has write access,
so you can tell which artefacts are still being used.
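
Given that behaviour, a garbage collector can be as simple as deleting
objects whose access time is older than some cutoff. A minimal sketch,
assuming a local cache path and a 30-day retention period (both are
placeholders, and this is not an official Yocto tool):

```python
import time
from pathlib import Path


def prune_sstate(cache_dir: str, max_age_days: int = 30) -> list:
    """Delete sstate objects not accessed within max_age_days.

    Relies on the sstate code touching files it reuses, so atime
    reflects the last time an artefact was actually fetched.
    Returns the list of deleted paths.
    """
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for path in Path(cache_dir).rglob("*"):
        if path.is_file() and path.stat().st_atime < cutoff:
            path.unlink()
            removed.append(str(path))
    return removed
```

One caveat: this only works if the cache filesystem actually records
access times - a mount with `noatime` would defeat it.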

> How/When do we update the cache?
> All environments contributing to the cache need to be identical (that
> ansible playbook just grabs the latest of everything) to avoid subtle
> differences in the build artefacts depending on which environment
> populated the cache.

All environments contributing to the cache don't have to be identical;
we aim to build reproducible binaries regardless of the host OS.

Obviously you reduce risk by keeping them identical, but I just wanted
to be clear that we have protections in place for this and sstate does
support mixed hosts.

> How much time will fetching the cache from a remote server add to the
> build?

That mostly depends on the speed of the network between the workers and
the cache.

Someone mentioned NFS; we do support NFS for sstate, and our
autobuilders make extensive use of it.
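
In the NFS case the workers share one read-write cache directory
directly rather than going through a mirror; the mount point below is
an assumption for illustration:

```conf
# Point sstate at a directory that is NFS-mounted on every worker.
SSTATE_DIR = "/nfs/sstate-cache"
```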

Cheers,

Richard

View/Reply Online (#48459): https://lists.yoctoproject.org/g/yocto/message/48459