Hi Simon,

On 3/3/26 4:14 PM, Simon Glass wrote:
Hi,

At present binman handles finding/building tools needed to build images.

There is no equivalent mechanism for the firmware blobs themselves.
Users must manually obtain these and point binman at them.

This is quite painful at present. Each board requires some spelunking,
reading vendor documentation, etc.

I believe we could create a similar setup for blobs, where they are
described in the image description (compatible string) and there is a
way to build them, download them, etc.

What do people think?


I don't like what we're already doing for bintools. We don't necessarily know what we're building (e.g. build_from_git() only allows building a given branch's HEAD or the default branch's HEAD), we download files without enforcing a checksum (e.g. fetch_from_drive()/fetch_from_url()) from places we don't control, and installing from the distro's package manager is APT-centric (and racy: apt-get update can be run from two Binman processes at the same time and that is still racy to date; apt-get install can be made multiprocess-safe but is also racy in older versions of APT). At least we can bypass all of that if the bintool is already present on the system.

This already doesn't give me much confidence in doing the same for binaries that will actually run on the target. This would be us inventing yet another build system, and I'm not sure it is worth it. That said, I can see the UX improvement, even though compiling for Rockchip is muscle memory to me at this point, so I may not see the high cost newcomers experience.

There are build systems (Yocto) that deny network access outside of fetching source code. That is, binman won't be allowed to fetch anything from the network once the u-boot recipe is past the do_fetch task (which fetches the U-Boot source code). Therefore it is an absolute must that the same binman nodes can be built either with the blobs provided in-tree or with binman building them itself.

We also must guarantee binary reproducibility, something Yocto achieves today (at least for OpenEmbedded-Core, which admittedly doesn't contain TF-A or OP-TEE OS) and Buildroot aims for, IIRC. Reproducibility isn't easy to achieve and we would need tests to guarantee it (Yocto runs such tests as part of its CI). Then there is the question of multi-toolchain support, or support for different flags for the blobs vs U-Boot itself: we could very well compile TF-A with Clang but U-Boot with GCC, for example.
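The reproducibility test mentioned above boils down to building the same inputs twice and comparing the outputs bit for bit. A minimal sketch of such a check (the helper name artifacts_identical is hypothetical, and the actual build invocation is elided):

```python
import hashlib
from pathlib import Path


def artifacts_identical(path_a: str, path_b: str) -> bool:
    """Return True if two build artifacts are bit-for-bit identical.

    Comparing SHA-256 digests is equivalent to comparing the bytes
    for any practical purpose, and is what a CI job would log.
    """
    digest_a = hashlib.sha256(Path(path_a).read_bytes()).hexdigest()
    digest_b = hashlib.sha256(Path(path_b).read_bytes()).hexdigest()
    return digest_a == digest_b
```

A CI job would run the blob build twice (ideally on different hosts, with different build paths and timestamps) and assert artifacts_identical() on each output pair.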

We should be able to replace the binaries without getting a new version (commit hash) of U-Boot. E.g. there may be a CVE fixed in TF-A, but we shouldn't have to wait for either TF-A or U-Boot to make a new release to get that fix.

What exactly are you trying to prevent from happening, or to improve? Should we somehow require board documentation to contain instructions on how to build/fetch those binaries, and make that part of our CI instead of mocking the binaries? Note that this would also increase the time spent in CI, which is already quite long.

Cheers,
Quentin
