>> On Thu, Sep 26, 2024 at 3:24 AM Bernhard M. Wiedemann via rb-general 
>> <rb-general@lists.reproducible-builds.org> wrote:
>> Hi,
>> 
>> At our summit in Hamburg we discussed that r-b should be listed as a 
>> recommendation or requirement in new standards, to encourage people to 
>> ensure builds are reproducible.
>> 
>> Via [1] I found 3 relevant standards:
>> 
>> * NIST Secure Software Development Framework = 
>> https://csrc.nist.gov/Projects/ssdf
>> * OpenSSF Scorecard = https://openssf.org/resources/guides/
>> * SLSA (Supply-chain Levels for Software Artifacts)
>> 
>> 
>> SLSA level 4 already lists reproducible builds as optional/recommended
>> = https://slsa.dev/spec/v1.0/faq#q-what-about-reproducible-builds

Not really. Reproducible builds are *mentioned*, but they are not actually 
"recommended" in SLSA.
Here is what its FAQ says:

"[Verified reproducible] is one option to secure build steps of a supply chain 
...
That said, verified reproducible builds are not a complete solution to supply 
chain integrity, nor are they practical in all cases:
... Some builds cannot easily be made reproducible, as noted above. ...
Therefore, SLSA does not require verified reproducible builds directly. 
Instead, verified reproducible builds are one option for implementing the 
requirements."

I have earlier proposed creating a separate track in SLSA to focus
specifically on reproducible builds. Since reproducible builds are sometimes
hard to achieve, my idea was to create some intermediate levels that would
give people partial credit for steps toward reproducibility.
However, I haven't had any time to devote to making progress on it.


>> NIST 
>> https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-218.pdf 
>> has on page 16:
>>> PO.3.2: Follow recommended security practices to
>>> deploy, operate, and maintain tools and toolchains.
>> 
>>> Example 4: Implement the technologies and processes needed for reproducible
>>> builds.

That's simply an example of how one might implement the toolchain practice.

The overall practice says you should use automation to "improve...
reproducibility [and many other characteristics]", but I don't think you could
claim that reproducibility is a *requirement* of the SSDF. You can easily *NOT*
have reproducible builds, yet claim conformance to the SSDF with a straight face.

>> In the OpenSSF docs, I found
>> https://github.com/ossf/scorecard/blob/main/docs/checks.md
>> but I think it should be promoted in other contexts there, too.

Yes, that's where the Scorecard checks are described.
There's no check for reproducible builds in Scorecard.

Most of the Scorecard checks focus on the source code repo, but
"Signed-Releases" does indeed have a check on release artifacts
(in this case, whether they are signed). So checks on releases *are* already
being done in certain cases.

One problem is the effort to do so. Scorecard is run weekly on over 1 million
OSS projects. Adding a test for reproducibility would take a LOT more CPU power
(and therefore real money), unless it can depend on another data source
like <https://reproducible-builds.org/citests/>. Most of those are system-level
packages (though Go is language-level). In contrast, a vast number of the
Scorecard projects are packaged at the language level (e.g., JavaScript,
Python, etc.), so we currently have a data mismatch.

I can imagine having such an integration, but that'd take effort. If someone is
willing to re-run reproducible build checks for many language-level ecosystems,
and we can work out an API to query and can trust those results, I can
*totally* see it being a Scorecard measure. If LF or OpenSSF has to run many
tests every week, I won't say it *can't* be done, but someone would have to
argue for its funding.
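
To make that concrete, here's a rough sketch (in Python) of what such a
Scorecard-style check could look like. Everything here is hypothetical: the
endpoint, the response shape, and the scoring are made up for illustration,
since no such API exists today.

import json
import urllib.request

# Hypothetical results service; no such API exists today.
RESULTS_API = "https://rb-results.example.org/v1/results"

def reproducibility_score(ecosystem: str, package: str) -> int:
    """Return a 0-10 Scorecard-style score from a hypothetical results API."""
    url = f"{RESULTS_API}/{ecosystem}/{package}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            result = json.load(resp)
    except OSError:
        return 0  # no data available: score as unknown
    # Assumed response shape: {"artifacts": N, "reproduced": M}
    total = result.get("artifacts", 0)
    if total == 0:
        return 0
    return round(10 * result.get("reproduced", 0) / total)

print(reproducibility_score("pypi", "some-package"))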


>> On Sep 26, 2024, at 10:21 AM, Justin Cappos <jc4...@nyu.edu> wrote:
>> 
>> You could also suggest this for addition to the OpenSSF Badge Program.   
>> https://www.bestpractices.dev/en
>> 
>> It likely would need to be at a new, higher level rather than gold.  Other 
>> things in this ecosystem like in-toto attestations, etc. could likely also 
>> be added there.   


Reproducible builds are a gold-level requirement, and have been for years.

Here's more info.

You can see the OpenSSF Best Practices Badge criteria at all levels here:
https://www.bestpractices.dev/criteria
To force display of the English version (instead of your preferred natural 
language) use:
https://www.bestpractices.dev/en/criteria
There are 3 badge levels: Passing, Silver, and Gold.

I lead that project & I'm a big fan of reproducible builds. When they're
available they can counter many attacks. HOWEVER, while in some cases
reproducible builds are already available or relatively easy to achieve, in
other cases they are challenging. So they're not at the Passing level.

Instead, at the Passing level we simply require an automated build system:
• If the software produced by the project requires building for use, the 
project MUST provide a working build system that can automatically rebuild the 
software from source code. {N/A allowed} [build]

At the Silver level there is this requirement, which is a step on the way:
   • The project MUST be able to repeat the process of generating information 
from source files and get exactly the same bit-for-bit result. If no building 
occurs (e.g., scripting languages where the source code is used directly 
instead of being compiled), select "not applicable" (N/A). {N/A justification} 
{Met justification} [build_repeatable]
That's *almost* reproducible builds, but there's no requirement that someone
outside the project be able to repeat or verify the build, so it's obviously a
weaker form.
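
For illustration, here's a minimal sketch of the build_repeatable idea: build
twice from scratch and compare the result bit-for-bit. The `make` commands and
the artifact path are placeholders; a real check would use the project's
actual build system and cover every generated artifact.

import hashlib
import subprocess

ARTIFACT = "dist/myprog"  # hypothetical build output path

def build_and_hash() -> str:
    # Rebuild from scratch, then hash the resulting artifact.
    subprocess.run(["make", "clean"], check=True)
    subprocess.run(["make"], check=True)
    with open(ARTIFACT, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

first = build_and_hash()
second = build_and_hash()
print("repeatable" if first == second else "NOT repeatable")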

Reproducible builds *ARE* already a gold-level requirement:
Gold:
The project MUST have a reproducible build. If no building occurs (e.g., 
scripting languages where the source code is used directly instead of being 
compiled), select "not applicable" (N/A). {N/A justification} {Met URL} 
[build_reproducible]
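
The gold criterion differs from the Silver one in *who* can do the comparison.
Here's a sketch of an outside party rebuilding from source and comparing
against the published release artifact; the URL and paths are hypothetical
placeholders.

import hashlib
import subprocess
import urllib.request

RELEASE_URL = "https://example.org/releases/myprog-1.0"  # hypothetical
ARTIFACT = "dist/myprog"                                 # hypothetical

# Rebuild independently from source...
subprocess.run(["make"], check=True)
with open(ARTIFACT, "rb") as f:
    local = hashlib.sha256(f.read()).hexdigest()

# ...and compare against the published release artifact.
with urllib.request.urlopen(RELEASE_URL) as resp:
    published = hashlib.sha256(resp.read()).hexdigest()

print("reproducible" if local == published else "differs from published artifact")

When the hashes differ, a tool like diffoscope (from the reproducible-builds.org
project) can show exactly where the artifacts diverge.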

--- David A. Wheeler

