Tom Lane:
You'd have to commit a failing patch first to break CI for all other
developers.

No, what I'm more worried about is some change in the environment
causing the build to start failing.  When that happens, it'd better
be an environment that many of us are familiar with and can test/fix.

The way I understand this works is that the images for the VMs in which those CI tasks run are not just dynamically updated - they are actually tested before being used in CI. So the environment doesn't just change suddenly.

See e.g. [1] for a pull request to the repo containing those images, updating the Linux Debian image from bullseye to bookworm. This is exactly the image we're talking about. Before this image is used in postgres CI, it is tested to confirm that it actually works there. If one of the jobs were using musl, that would be tested as well - so such a job would not just suddenly start failing for everybody.

I do see the "familiarity" argument for the SanityCheck task, but for a different reason: even though it's unlikely for this job to fail for musl-specific reasons, if you're not familiar with musl and can't easily test it locally, you might not be able to tell immediately whether a failure is musl-specific or not. If musl ran in one of the later jobs, the situation would be much clearer: all tests failing - alright, not musl-specific; only the musl test failing - yeah, musl problem. This should give developers much more confidence when looking at the results.

Best,

Wolfgang

[1]: https://github.com/anarazel/pg-vm-images/pull/91