Bruno Haible wrote:
Jacob Bachmeyer wrote:
Essentially, this would be an automated release building service:  upon
request, make a Git checkout, run autogen.sh or equivalent, make dist,
and publish or hash the result.  The problem is that an attacker who
manages to gain commit access to a repository may be able to launch
attacks on the release building service, since "make dist" can run
scripts.  The service could probably mount the working filesystem noexec,
since preparing source releases should not require running (non-system)
binaries, but scripts can still be run by feeding them directly into their
interpreters even if the filesystem is mounted noexec, so all available
interpreters and system tools remain potentially usable by an attacker.
Well, it'd at least make things more difficult for the attacker, even
if it wouldn't stop them completely.
Actually, no, it would open a *new* target for attackers---the release building service itself. Mounting the scratchpad noexec would help to complicate attacks on that service, but right now there is *no* central point an attacker can hit to compromise releases. If a central release building service were set up, it would be a target, and an attacker able to arrange a persistent compromise of the service could then tamper with later releases as they are built. This should be fairly easy to catch if an honest maintainer has a secure environment ("Why the **** does the central release service tarball not match mine? And what the ******** is the extra code in this diff between its tarball and mine!?"), but there is a risk that maintainers, especially of large projects, will start relying on the central release service instead of building their own tarballs.
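As an aside, the limits of noexec are easy to demonstrate. The sketch below uses a removed execute bit as a stand-in for a noexec mount (both only affect execve(), not an interpreter reading the file as data); the file name is made up for the example:

```shell
#!/bin/sh
# Demonstrate that blocking direct execution does not block interpretation.
# Removing the execute bit stands in for a noexec mount here; both deny
# execve() on the file, but neither stops an interpreter reading it.
set -e
printf 'echo ran anyway\n' > payload.sh
chmod -x payload.sh
if ./payload.sh 2>/dev/null; then
    echo "direct execution worked"
else
    echo "direct execution blocked"
fi
sh payload.sh   # feeding the script to its interpreter still works
```

Running it prints "direct execution blocked" followed by "ran anyway": exactly the gap described above.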

The problem here was not a maintainer with a compromised system---it seems that "Jia Tan" was a malefactor's sock puppet from the start.

There are several problems that such an automated release building service
would create. Here are a couple of them:

* First of all, it's a statement of mistrust towards the developer/maintainer,
  if developers get pressured into using an automated release building
  service rather than producing the tarballs on their own.
  This demotivates and turns off developers, and it does not fix the
  original problem: If a developer is in fact a malefactor, they can
  also commit malicious code; they don't need to own the release process
  in order to do evil things.

Limiting trust also limits the value of an attack, thus protecting the developers/maintainers from at least sane attackers in some ways. I also think that this point misunderstands the original proposal (or I have misunderstood it). To some extent, projects using Automake already have that automated release building service; we call it "make dist", and it is a distributed service running on each maintainer's machine, including the machines of distribution package maintainers who regenerate the Autotools files. A compromise of a developer's machine is thus valuable because it allows tampering with releases, but the risk is managed somewhat by each developer building only their own releases.

A central service as a "second opinion" would be a risk, but it would also make those compromises even more difficult---now the attacker must hit both the central service *and* the dev box, *and* coordinate to ensure that the only packages tampered with at the central service are those for which the maintainer's own machine is also cracked, lest the whole thing be detected. This is even harder on the attacker, which is a good thing, of course.

The more dangerous risk is that the central service becomes overly trusted and ceases to be merely a "second opinion" on a release. If that occurs, not only would we be right back to no real check on the process, but now *all* releases would come from one place. A compromise of the central release service would then allow *all* releases to be tampered with, which is considerably more valuable to an attacker.

* Such an automated release building service is a piece of SaaSS. I can
  hardly imagine how we at GNU tell people "SaaSS is as bad as, or worse
  than, proprietary software" and at the same time advocate the use of
  such a service.

As long as it runs on published Free Software and anyone is free to set up their own instance, I would disagree here. I think we need to work out where the line between "hosting" and "SaaSS" actually is, and I am not sure that it has a clear technical description, since SaaSS is ultimately an ethical issue.

* Like Jacob mentioned, such a service quickly becomes a target for
  attackers. So, instead of trusting a developer, you now need to trust
  the technical architecture and the maintainers of such a service.

I think I may know an example of something similar: if I recall correctly, F-Droid originally would only distribute apps built on their own compile farm, to guard against malicious developers who publish one set of sources but actually build from another. They now allow developers to use their own signing keys, but will only distribute packages for which their compile farm can generate a reproducible build matching the developer's.

The major difference is, of course, that F-Droid distributes binary packages, while a central release service would prepare source tarballs from Git.
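In either case, the verification step itself is the same and is trivially scriptable: diff the maintainer's tree against the service's and refuse to publish on a mismatch. A minimal sketch, with fabricated contents (the "injected" line simulates tampering at the service so the check has something to report):

```shell
#!/bin/sh
# Sketch: catch a tampered release by comparing the maintainer's source
# tree against the one the central service built from. All file contents
# here are fabricated for the demonstration.
set -e
mkdir -p mine theirs
echo 'int main(void) { return 0; }' > mine/main.c
cp mine/main.c theirs/main.c
echo '/* injected code */' >> theirs/main.c   # simulated tampering
sha256sum mine/main.c theirs/main.c           # hashes reveal the mismatch
if diff -ru mine theirs > release.diff; then
    echo "artifacts match"
else
    echo "MISMATCH: extra code in the diff between their tree and mine"
fi
```

The extra code then sits in release.diff for the maintainer's "what the ******** is this" moment.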

* If this automated release building service is effective in the sense
  that it eliminates evil actions from the developer, it must have extra
  complexity, to allow testing of the tarballs before they get published.
  Think about it: who will set the release tag on the git repository
  and publish that ("git push --tags")?
    - If the developer does it, then the developer has the power to
      move the git tag, which implies that the published tarballs
      (from the build service) will not match the contents of the git
      repository at that tag.
    - So, it has to be the build service which sets and pushes the
      git tag. But it needs to allow for the possibility to do release
      tarball testing and thus canceling/withdrawing the release before
      it gets published.
  It does get complicated...
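The tag-moving problem in the first branch is concrete: a git tag is just a mutable ref, so "v1.0" can silently point at different commits over time. A self-contained illustration (repository name and identity are throwaway values):

```shell
#!/bin/sh
# Illustrate that a git tag is mutable: a developer with push access can
# repoint "v1.0", so a tarball built from it earlier no longer matches.
# The repository and committer identity below are throwaway values.
set -e
git init -q tag-demo
git -C tag-demo -c user.name=demo -c user.email=demo@example.org \
    commit -q --allow-empty -m "release commit"
git -C tag-demo tag v1.0
first=$(git -C tag-demo rev-parse v1.0)
git -C tag-demo -c user.name=demo -c user.email=demo@example.org \
    commit -q --allow-empty -m "post-release change"
git -C tag-demo tag -f v1.0    # the developer silently moves the tag
second=$(git -C tag-demo rev-parse v1.0)
if [ "$first" != "$second" ]; then
    echo "tag v1.0 moved: tarballs built before and after will differ"
fi
```

A build service that signs whatever "v1.0" resolves to at build time would happily build both versions.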

* The OpenSSF is already pushing for such a release build service,
  through "OpenSSF scorecards". Summary [1]:
    "Scorecard is an automated tool from the OpenSSF that assesses
     19 different vectors with heuristics ("checks") associated with
     important software security aspects and assigns each check
     a score of 0-10.…"
  - They are pretending that their criteria guard against "malicious
    maintainers" [2]. However, in the xz case [3] they failed: they
    assigned a good score, despite binary blobs in the repository.
  - Their tool pushes the developers to using GitHub. [2].
  - Their tool makes it clear that such a release build service requires
    consideration of "token permissions" and "branch protections" [3].

Bruno

[1] https://openssf.org/
[2] https://securityscorecards.dev/
[3] https://securityscorecards.dev/viewer/?uri=github.com/tukaani-project/xz

OpenSSF looks heavy on marketing and light on substance. They have an article <URL:https://openssf.org/blog/2024/03/30/xz-backdoor-cve-2024-3094/> about the xz backdoor incident. It fails quite impressively to describe how the backdoor actually works, and they generally seem rather ... vapid and vaporous.

Oh, and their tool at [3] does not work in IceCat. LibreJS blocks the scripts that make it go.


-- Jacob

