> SLSA attestations do *not* protect against a rogue release manager

100% agree with this statement. I never claimed otherwise, and I think the idea of "what is allowed to be run automatically" basically translates to those expectations. But it's not just a "rogue release manager" - it's a "rogue environment" (potentially changed by the entity that provides the runner). And the two are essentially equivalent.
> simple, well-reviewed GitHub Actions workflow, combined with a script that lets the release manager verify the *input* artifacts, should provide enough security to release “convenience” (hence unofficial) binaries.

Maybe we are discussing the same thing. I think we are too closely tied to binary reproducibility checks - or maybe we should change the name we are using, because too many people get confused. Checking binary bit-for-bit reproducibility is the easiest form of check, but it's not the only form of "reproducibility check" I can imagine. You don't even have to run exactly what was run remotely (or by the release manager) if you want to check for reproducibility. For me, as explained before, an "explainable diff" is good enough, and it does not have to be a `diff` output.

Ideally, someone (preferably 3 PMC members voting on the release) should validate that the output resulted from the given input and verify it (ideally with 100% certainty, assuming their system was not compromised). And this should be part of the process. That is my personal expectation. Now, how it's done is a different story.

I imagine this process involves preparing a packaged version of x binaries pulled from SVN and signing it, using a specific set of tools that produce the packages (run on a remote server). For me personally, verification of "reproducibility" (or whatever else we call it) could look like this:

1) Pull the same inputs from SVN.

2) Verify that those inputs are present - unmodified - in the package (or, if modifications are applied before packaging, apply the same modifications the signing process does). Assume this is done via source code in the ASF repo and tools/binaries we trust enough (there is always **something** we have to trust, like the Python that runs the scripts, Bash, etc.). If there is any other code inside the package - for example, code generated by the packaging tool - run the source script to verify that this "packaging tool generated" code has not been modified.
Ideally, whoever provides the packaging tool should also provide a verification tool.

3) Verify that the package contains only the items described above.

4) Finally (of course), verify the package signature against our public key.

I assume that the generated package is not an "unknown format blob" but rather some kind of package.zip or whatever - maybe disguised as an `.exe` file; usually that's what those packaging tools do - and that the structure of those files is well defined (or at least reverse-engineered with a good level of understanding of what's inside).

When I was CTO of a mobile development company, we of course prepared .IPA files [1] for iOS and APK files [2] for Android. Both formats were well known: people understood their content, and well-established, ready-to-use open-source tools existed to read, understand, parse, and analyse their internally packaged content. We even developed iOS private stores that essentially re-packaged and re-signed the apps, adding features for private app-store distribution. Eventually they had to use macOS-based machines for the final re-signing, but the packages were deeply analysed and checked automatically during the process. For example, we scanned the binaries inside for malware signatures before submitting them to the company's private distribution channels. That was for a huge airline in Germany, by the way, and there is a funny story with the company's trade unions (!): they actually verified the API calls made, to ensure we were truly protecting the privacy of company employees, so knowing what we were packaging was a serious concern.

[1] https://www.appdome.com/how-to/devsecops-automation-mobile-cicd/appdome-basics/structure-of-an-ios-app-binary-ipa
[2] https://en.wikipedia.org/wiki/Apk_(file_format)

I could easily imagine that, for example, the `.msi` format (which I guess is what gets produced for Windows) is at the very least reverse-engineered, and that there are tools to extract and verify its internals individually.
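To make steps 2) to 4) concrete, here is a minimal sketch of what such a verification script could look like. All names here are made up for illustration (`release.zip`, the `allowed_generated` list, the demo file contents); step 1 (the SVN checkout) and step 4 (the signature check, e.g. `gpg --verify release.zip.asc release.zip`) are only indicated, not implemented:

```python
import hashlib
import os
import tempfile
import zipfile


def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()


def source_checksums(source_dir):
    """Map relative path -> SHA-256 for every file in the pulled-down inputs."""
    sums = {}
    for root, _dirs, files in os.walk(source_dir):
        for name in files:
            path = os.path.join(root, name)
            rel = os.path.relpath(path, source_dir).replace(os.sep, "/")
            with open(path, "rb") as f:
                sums[rel] = sha256(f.read())
    return sums


def verify_package(package_path, source_dir, allowed_generated=frozenset()):
    """Compare a zip-like package against the checked-out inputs.

    Returns an "explainable diff": inputs missing from the package,
    inputs whose content was modified, and entries nobody declared.
    """
    expected = source_checksums(source_dir)
    missing, modified, unexpected = set(expected), set(), set()
    with zipfile.ZipFile(package_path) as zf:
        for entry in zf.namelist():
            if entry.endswith("/"):
                continue  # skip directory entries
            if entry in expected:
                missing.discard(entry)
                if sha256(zf.read(entry)) != expected[entry]:
                    modified.add(entry)
            elif entry not in allowed_generated:
                unexpected.add(entry)
    return missing, modified, unexpected


# Demo with a throwaway "release": one unmodified input, one tampered
# input, one generated file we declared, and one entry nobody declared.
with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "checkout")
    os.makedirs(src)
    for name, body in {"setup.py": b"print('hi')\n", "README": b"docs\n"}.items():
        with open(os.path.join(src, name), "wb") as f:
            f.write(body)
    pkg = os.path.join(tmp, "release.zip")
    with zipfile.ZipFile(pkg, "w") as zf:
        zf.writestr("setup.py", b"print('hi')\n")   # unmodified input
        zf.writestr("README", b"docs + malware\n")  # tampered input
        zf.writestr("launcher.bin", b"\x00\x01")    # declared generated file
        zf.writestr("extra.bin", b"\x02")           # undeclared entry
    missing, modified, unexpected = verify_package(
        pkg, src, allowed_generated={"launcher.bin"})
    print(sorted(missing), sorted(modified), sorted(unexpected))
    # -> [] ['README'] ['extra.bin']
    # Step 4 (not run here): gpg --verify release.zip.asc release.zip
```

The point is not the exact code but the shape of the output: instead of a single "bit-for-bit identical: yes/no" bit, a voter gets a short, explainable list of what differs and why.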
Possibly there are even tools that work on macOS or Linux without needing to install Windows. If we have such a check, I would be really happy to tick the "it's reproducible" box.

J.

On Sun, Mar 29, 2026 at 4:30 PM Piotr P. Karwasz <[email protected]> wrote:

> Hi Jarek,
>
> On 21.03.2026 13:28, Jarek Potiuk wrote:
> > I think attestations provided with a remote, inaccessible (by the release
> > manager) build system inherently differ from reproducible builds and
> > protect against a bit different scenarios. One thing they do indeed - they
> > do prevent the risk from a rogue release manager. But they also shift
> > trust to GH making it more difficult to trace the sources of compromise
> > when external attackers breach them - or even to know that a breach
> > occurred. The latter point is crucial: if your GH runner is compromised,
> > you will only find out after the release. If the machine used by the
> > release manager is compromised and you have reproducible builds, you will
> > know before the release. I think for me the last point is really
> > important, and it's the reason why dropping reproducible builds and
> > switching to GH makes it a bit more dangerous.
> >
> > GH actions do not provide the same level of security or address exactly
> > the same risk as reproducibility, mostly because compromising the single
> > system that provides attestations invalidates the attestation for that
> > build. Reproducibility addresses this by requiring you to compromise few
> > such systems - potentially largely independent ones—which makes it
> > exponentially harder to pull of - unless you are able to hack-in the
> > tools (IDEs, AI CLIS etc.) that multiple people are likely to use (and
> > remote build system doesn't need).
>
> SLSA attestations do *not* protect against a rogue release manager and
> the Trivy incident illustrates this well. The malicious v0.69.4
> release was almost certainly published alongside SLSA build
> attestations, because the attacker had full access to the same CI
> credentials that generate them. I couldn't locate the attestation
> payloads themselves, but there are Rekor entries corresponding to the
> malicious Docker images. You can find them using the artifact
> checksums published here:
>
> https://www.docker.com/blog/trivy-supply-chain-compromise-what-docker-hub-users-should-know/
>
> The *presence* of an attestation doesn't mean a release is legitimate.
> What it does give you is richer information about the release process,
> and that's where I think attestations and reproducible builds are
> complementary rather than alternatives:
>
> - Reproducible builds let you verify that *output* artifacts are
>   identical bit-for-bit, but tell you nothing about inputs or build
>   environment.
>
> - SLSA attestations also record hashes of *input* artifacts and build
>   environment metadata. This means:
>   - If outputs are *reproducible* but you can't reproduce them locally,
>     the environment data can help you diagnose why.
>   - If outputs are *not* reproducible, but you trust the build
>     process, you can at least verify that the inputs were correct.
>
> For NetBeans, reproducible output artifacts remain the ideal. But
> when that isn't achievable, I'd argue that a simple, well-reviewed
> GitHub Actions workflow, combined with a script that lets the release
> manager verify the *input* artifacts, should provide enough security to
> release “convenience” (hence unofficial) binaries.
>
> Such workflow, of course should be minimal, with no caches and validate
> the checksum of each downloaded resource.
>
> Piotr
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: [email protected]
> For additional commands, e-mail: [email protected]
>
