But the new plan reminds me strongly of bad experiences in Haskell-land.
That sounds interesting. Is this documented somewhere?
Not really documented, more my experience from having followed
various Haskell lists and forums for over 15 years now (*).
I have often seen the pattern where a language or library design
choice was discussed at length and in detail in the appropriate sub-forum,
then decided on, implemented, and rolled out, at first without a
hitch, as long as it only affected those directly in the know; then,
a couple of months later or so, the main forums would start seeing
trouble reports from users who didn't use the new feature directly,
but who were relying on code (applications/libraries) whose
_dependencies_ were affected by the change.
When Haskell implementations still had monolithic flags to enable
all extensions at once, it would happen like this: some dependency D
would not compile in standard mode, because it needed one of the
extended language features, say F1; so extended mode had to be
enabled for that dependency D, and sometimes for the whole project
P; but, in extended mode, _all_ extended features were enabled,
including the one feature F2 that had recently been changed; enabling
the new version of F2 would then interfere with compilation of either
D or P, hopefully via an early error, i.e., a compilation failure.
But the dependency D had been written long before F2 had been
redesigned, and P itself had no direct uses of either F1 or F2, only
via the dependency D. So the authors/maintainers of D and P had
no idea of the discussion that led to the change of F2, they just saw
an early error for code that used to work (and would still work, but
for the fact that they had upgraded to newer implementations).
Similar fun ensued when authors started to use some extensions,
but had to enable all extensions to make their code compile: in
addition to the extended features they were working with, their
code was now also affected by all other extended features, often
expert-mode extensions that they had no idea about.
When Haskell implementations started using explicit feature-based
versioning, things calmed down considerably. Library authors would
select the set of extensions they needed to use, and enable them
only for their code (think of it as a specification of a language "API").
So neither their library code nor the code of library clients were
affected by unrelated features (or changes to unrelated features).
And authors moving into language extensions would also enable
them one by one, slowly building up to the level of complexity and
expertise they needed. Blog posts would mention which extensions
needed to be enabled to make examples run. When code used a
language extension, the compiler would complain that this wasn't
part of the portable standard, and suggest which pragma to use in
order to enable just this one extension.
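To make that concrete, here is a minimal sketch of what such per-module
feature selection looks like with GHC's LANGUAGE pragma (the module and
function names are made up for illustration):

```haskell
{-# LANGUAGE ScopedTypeVariables #-}
-- Exactly one extension is enabled, and only for this module;
-- client code and other modules still compile in standard mode.
module Example (sortDesc) where

import Data.List (sortBy)

-- ScopedTypeVariables lets the type variable 'a' bound by the
-- explicit 'forall' be reused in annotations inside the body.
sortDesc :: forall a. Ord a => [a] -> [a]
sortDesc = sortBy cmp
  where
    cmp :: a -> a -> Ordering
    cmp x y = compare y x
```

Without the pragma, GHC would typically reject the code and point at
the extension needed to accept it, which is exactly the compiler
behaviour described above.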
That doesn't mean that all problems are solved (e.g., library versioning
remains an interesting problem), but language extensions cause less
trouble than they used to, and there is much less pressure for extending
the language standard (Haskell 2010 is still very conservative, but a
small flood of extensions is available, selectively).
Claus
(*) For background, the early history of Haskell is documented
in this HOPL-III paper (no discussion of feature-based versioning,
but describes the early language revisions, up to the long-time
standard Haskell98, as well as some of the extensions, and the
conflict between language design experimentation and stability):
"A History of Haskell: being lazy with class", 2007
http://research.microsoft.com/en-us/um/people/simonpj/papers/history-of-haskell/index.htm
_______________________________________________
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss