Hi all,
On 6/30/25 05:51, Ismael Luceno wrote:
> A wider definition threatens with a chasing game we don't want to play with
> upstream authors.
> We want people to, ideally, fix their buildsystems, and maintain that support
> forward.
Yes, in a perfect world, we want upstreams to always produce reproducible
artifacts.
But someone has to actually *do this work*. And unfortunately, not everyone is convinced that reproducible builds are a
priority, or even necessary. So what are we to do about that?
Should we just say "ok, this upstream doesn't have the desire, time, or resources to guarantee reproducible builds,
therefore reproducible builds for this project are a lost cause"? That seems like a very defeatist attitude to me.
> At some point it can be made a requirement, we don't expect to do any reverse
> engineering, and we don't want it to be an afterthought in the future.
> A narrow definition keeps those problems at bay as inherently out of scope.
I would push back on this as well, because I do not think this is the place for a definition. This may be a _goal_, but it
is not the _definition_ of a reproducible build. In my opinion, the definition should be clearly applicable to individual
artifacts, rather than merely to projects as a whole. Whether a project as a whole follows reproducible-builds practices
belongs to something more like the OpenSSF Scorecard.
I want to see a definition where I can go to a specific artifact/binary, read some associated documentation for that
artifact/binary, and say "yes, this is reproducible" or "no, this is not reproducible".
> Binary distributions should aim for the same experience source based
> distributions have been providing for 25 years, binary packages should act like
> an optimisation to skip the build more or less.
Are you talking about Linux distributions here? What does this have to do with reproducible builds? This sounds like a
very specific application of reproducible builds, in my opinion.
> So it isn't about verifying the work of any single maintainer, but ideally a
> distributed check on the whole ecosystem.
Expanding on my earlier comments, a "distributed check on the whole ecosystem" _is_ inherently "verifying the work of
any single maintainer", just done hundreds or thousands of times in a repeated, (semi-)automated manner. You can't
have a distributed check without verifying individual works.
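A sketch of how those individual verifications aggregate into a distributed check: several independent rebuilders each report a digest for the same artifact, and the ecosystem-level result is just agreement (or disagreement) among those per-artifact checks. The rebuilder names and digests below are invented for illustration.

```python
from collections import Counter


def consensus(reports):
    """reports maps rebuilder name -> hex digest of its rebuild.

    Returns (agreed_digest, dissenters): the digest if every
    rebuilder agrees (else None), plus the rebuilders whose
    digest differs from the most common one."""
    if not reports:
        return None, []
    counts = Counter(reports.values())
    digest, votes = counts.most_common(1)[0]
    dissenters = sorted(name for name, d in reports.items() if d != digest)
    if votes == len(reports):
        return digest, []
    return None, dissenters
```

Each entry in `reports` is exactly the single-maintainer verification from before; the distributed check is nothing more than running it many times and comparing answers.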
-Samuel