On Friday, 09 November 2018 at 10:28 -0500, Stephen Gallagher wrote:
> 
> Consider the Go case: we know that most Go packages will be statically
> linked (issues with that are a different topic), so we know they will
> work fine once built. However, if the application upstream cannot
> build with the latest "stable" version because of
> backwards-incompatible changes, it seems to me that it's perfectly
> reasonable to allow them to use a slightly older Go compiler from a
> module stream. Strictly speaking, this is no different from offering
> an SCL or a compat-* package of the compiler, except that having it as
> a module means that its executables and other tools will be in
> standard locations, so the build process doesn't need to be patched to
> locate them elsewhere.

Please do not drag Go into this if you want to handwave away Go
problems. Yes, modules will be useful in Go, but only to blow away, in
EPEL, the rotten Go codebase RHEL ships.

But anyway, since you referred to Go.

Go is the perfect example of why bundling as a general approach does not
work and does not scale. In case you haven't noticed, years of bundling
on the Go side have resulted in such deep, widespread rot that Google is
scrambling to write a Go v2 language version that will force Go projects
to version and update.

All the people that claim bundling allows “using a slightly older
version” (implying it's a good, safe, maintained older version) are
lying through their teeth. Yes, it allows doing that, but that's not how
people use it. And it does not matter whether you bundle via
self-provided Windows DLLs, containers, Flatpaks, modules or RHEL
versions.

Bundling basically allows reusing third-party code blindly, without any
form of audit or maintenance. You take third-party code, you adapt your
code to its current API, and you forget about it.
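To make the mechanics concrete, here is what “forget about it” looks
like with Go 1.11 modules (a minimal sketch; the module paths and
version are made up for illustration):

    module example.com/myapp

    require (
        // Pinned once, when the app was adapted to this API.
        // Nothing nags anyone to ever bump it again.
        github.com/some/dependency v1.2.3
    )

The pin costs one line to write and is invisible afterwards.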

You definitely do *not* check it for security, legal or other problems;
you definitely do *not* check regularly whether CVEs or updates have
been released; you definitely do *not* try to maintain it yourself. Any
bundler dev that tells you otherwise lies. The average bundler dev will
tell you: “Look at my wonderful, up-to-date, award-winning modern code.
Security problems? Ah, that, not my code, I bundle it, not my problem”.

It is, however, a *huge* problem for the people on the receiving end of
the resulting software, static builds or not. Static builds do not add
missing new features or fix security holes. They just remove the shared
libs the sysadmin could use to track them. And since malware authors do
not bother identifying how software was compiled before attempting to
exploit it, static builds do not hinder them in the slightest.
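
You can check how little a static build leaves to inspect with a few
lines of Go (a sketch using the standard debug/elf package; point it at
any ELF binary):

    package main

    import (
        "debug/elf"
        "fmt"
        "os"
    )

    func main() {
        // Print the shared libraries (DT_NEEDED entries) a binary
        // declares. On a dynamically linked binary this lists exactly
        // what the sysadmin can track and update system-wide; on a
        // static Go build the list is empty.
        f, err := elf.Open(os.Args[1])
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()
        libs, err := f.ImportedLibraries()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        for _, lib := range libs {
            fmt.Println(lib)
        }
    }

ldd will tell you the same thing, but the point stands either way: the
information the distribution relies on is simply gone.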

While trying to improve Go packaging in Fedora by myself, I found
serious old security issues in first-class Go code. First-class as in
benefiting from huge, publicised, ongoing dev investment from major
companies like Google, Red Hat or Microsoft. It's not hard; you do not
even need to write Go code. Just take the pile of components those
projects bundle, and read the upstream changelogs of those components
for later versions. You will hit gems like “emergency release because of
*** CVE”, or “need to change the API to fix a race in auth token
processing”. And the answer of the projects that bundled a previous
state of this code was never “we have a problem” or “we have fixed it
some other way”, but “go away, we haven't planned to look at or touch
this code before <remote future>”.
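
If you want to reproduce the exercise, the enumeration part is trivially
scriptable (a rough sketch; it assumes a dep/glide-era vendor/ tree in
the current directory):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        // Print each directory under vendor/ that contains Go sources,
        // i.e. the list of bundled components whose upstream changelogs
        // are worth reading.
        seen := map[string]bool{}
        filepath.Walk("vendor", func(path string, info os.FileInfo, err error) error {
            if err != nil {
                return err
            }
            if info.IsDir() || !strings.HasSuffix(path, ".go") {
                return nil
            }
            dir, _ := filepath.Rel("vendor", filepath.Dir(path))
            if !seen[dir] {
                seen[dir] = true
                fmt.Println(dir)
            }
            return nil
        })
    }

(With Go 1.11 modules, "go list -m -u all" goes further and prints any
newer upstream release next to each pinned dependency.) The changelog
reading is the only manual part, and it is exactly the part bundler devs
skip.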

And, again, I'm no Go dev, or dev in general; I didn't even attempt any
form of systematic audit. Those were just the bits that jumped out when
I hit API changes and had to look at the code history to try to figure
out when they had occurred. The day any bundled codebase is subjected to
the kind of herd security research Java got some years ago, and CPUs are
getting today, sparks are going to fly all over the place.

And this is a natural phenomenon that is trivial to explain. Maintaining
old code versions is hard. Upstreams are not interested in supporting
you. You have to redo their work by yourself, while forbidding yourself
API changes (if you were ready to accept them, you wouldn't have bundled
in the first place). And modern code is so deeply interdependent that if
you freeze one link in the dependency web, you get cascading effects all
over the place. You quickly end up maintaining old versions of every
single link in this web. If you try to do it seriously, you effectively
have to fork and maintain the whole codebase. That is, all the
no-support problems of barebones free software, with none of the
friendly community help that should come with it.

That's what RH tries to do for EL versions. It takes a *huge* dev
investment to do in a semi-secure, no-new-features way. And after a
decade, RH just dumps the result, because even with all those efforts,
it reaches a terminal state and has no future.

There is only one way to cheaply maintain lots of software components
that call each other all over the place. That's to standardise on the
latest stable release of each of them (and, when upstream does not
release, the latest commit), and to aggressively port everything to
those versions whenever they update. If you are rich, maintain a couple
of those baselines, maximum. Flatpak people do not say otherwise with
their frameworks (except I think they deeply underestimate the required
framework scope).

And sure, every once in a while, porting takes substantial effort; it
cannot be done instantaneously, devs are mobilized elsewhere, etc.
That's when you use targeted compat packages: to organise the porting
effort, to push the bits already ported while keeping the ones not ready
yet, and to *trace* this exception, to remind yourself that if you do
not fix it, you're going to be in deep trouble and end up in old-code
maintenance hell.

Not porting is not an “optimization”. Not porting is pure, unadulterated
technical debt. Porting to upstream API changes *is* cheaper than
freezing on an old version of upstream's code that you then get to
maintain in upstream's stead.

If you try to use modules as a general release mechanism, and not as a
temporary compat mechanism, you *will* hit this old-code maintenance
hell sooner than you think. Not a problem for RH, since old-code
maintenance is basically the reason people pay for RHEL; a huge problem
for Fedora.

Because bundling has never been a magic solution. It's only a magic
solution when you are the average dev that does not want to maintain
other people's code, nor adapt to changes in that code.

One bonus of bundling is the removal of any kind of nagging that would
incite the dev to take a look at what happens upstream, so he can sleep
soundly at night.

But the real bundling perk is that, because container and static-build
introspection tech is immature, you get to *not* *maintain* the code you
ship to users, with bosses, security auditors, etc. being none the
wiser. Force any bundler dev to actually maintain all the code he ships,
and I can assure you, his love affair with bundling will end at once.

Regards,

-- 
Nicolas Mailhot