On 01/02/14 00:09, Tony Arcieri wrote:
> On Fri, Jan 31, 2014 at 4:03 PM, Lee Braiden <[email protected]> wrote:
>> This would be counterproductive. If a library cannot be upgraded to
>> 1.9, or even 2.2, because some app REQUIRES 1.4, then that causes
>> SERIOUS SECURITY issues.
> Yes, these are exactly the types of problems I want to help solve.
> Many people on this thread are talking about pinning to specific
> versions of libraries. This will prevent upgrades in the event of a
> security problem.
>
> Good dependency resolvers work on constraints, not specific versions.

Agreed.

> Are you familiar with semantic versioning?
>
> http://semver.org/
>
> Semantic Versioning would stipulate that a backwards-incompatible
> change in an API would necessitate a MAJOR version bump. This
> indicates a break in the original contract.

I'm familiar, in the sense that it's what many libs/apps do, but again,
I don't believe that library 29.x should be backwards-incompatible with
28.x. Major versions of a package, to me, should indicate major new
features, but not abandonment of old features. If you want to redesign
some code base so it's incompatible (i.e., no longer the same thing),
then it deserves a new name.
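
To pin down what "compatible" means here, the semver rule can be stated
in a few lines of code. This is only an illustrative sketch (the
Version type and is_compatible function are invented for this email,
not any real resolver's API), and it ignores semver's special-cased
0.x range:

    // A toy Version type; derived ordering compares
    // (major, minor, patch) lexicographically, in field order.
    #[derive(PartialEq, Eq, PartialOrd, Ord, Debug)]
    struct Version {
        major: u32,
        minor: u32,
        patch: u32,
    }

    // The semver contract: a candidate may stand in for a required
    // version only if the major numbers match (no breaking changes)
    // and the candidate is at least as new (only fixes and additive
    // features in between).
    fn is_compatible(required: &Version, candidate: &Version) -> bool {
        candidate.major == required.major && candidate >= required
    }

    fn main() {
        let pinned = Version { major: 1, minor: 4, patch: 0 };
        let fixed = Version { major: 1, minor: 9, patch: 0 };
        let broken = Version { major: 2, minor: 2, patch: 0 };

        assert!(is_compatible(&pinned, &fixed)); // 1.9 can replace 1.4
        assert!(!is_compatible(&pinned, &broken)); // 2.2 cannot
    }

Stated as a constraint, a pin on 1.4 becomes "any 1.x from 1.4 up", so
a 1.9 security fix can flow through automatically; only a major bump
stops it, and that's exactly where my objection comes in.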
Let's compare the mindsets of backwards-compatible library design,
vs.... oh, let's call it "major-breakage" ;) library design:
Let's say you follow a common major-breakage approach, and do this:
1) Create a "general-compression-library", version 1.0, which uses the
LZ algorithm, and exposes some details of that.
2) During the course of development, you get ideas for version 2.0.
3) You publish the 1.x library.
4) Create "general-compression-library", version 2.0. This, you
decide, will use the LZMA algorithm, and expose some details of that.
5) You publish the 2.x library.
6) You receive a patch from someone, adding BZIP support, for 1.x. It
includes code to make 1.x more general. However, it's incompatible with
2.x, and you've moved on, so you drop it, or backport your 2.x stuff.
Maybe you publish 3.x, but now it's incompatible with 2.x AND 1.x...
7) All the while, people have been using your libraries in products, and
some depend on 1.x, some on 2.x, some on 3.x. It's a mess of
compatibility hell, with no clear direction, security issues due to
unmaintained code, etc.
Because details are exposed in each, 2.0 breaks compatibility with 1.x.
Under a model where version 2.x can be incompatible with version 1.x,
you say, "OK, fine. Slightly broken stuff, but new features. People
can upgrade and use the new stuff, or not. Up to them."
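
To make "exposes some details" concrete, suppose the 1.x signature
looked something like the sketch below (all names invented for
illustration; implementations stubbed out):

    // Hypothetical 1.x API: an LZ internal (the sliding-window size)
    // leaks into the public signature, coupling every caller to LZ.
    pub fn compress(data: &[u8], lz_window_size: usize) -> Vec<u8> {
        let _ = lz_window_size; // real LZ implementation elided
        data.to_vec()
    }

    // Hypothetical 2.0 API after the switch to LZMA: the old
    // parameter is meaningless now, so the signature (and every
    // caller) has to break.
    pub fn compress_v2(data: &[u8], lzma_dict_size: u32) -> Vec<u8> {
        let _ = lzma_dict_size; // real LZMA implementation elided
        data.to_vec()
    }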
The problem, though, is that the thinking behind all this is
wrong-headed --- it begins from bad assumptions --- and accepting
backwards incompatibility encourages that way of thinking.
Let's enforce backwards-compatibility, and see what *might* happen instead:
1) You create a "general-compression-library", version 1.0. You use the
LZ algorithm, and expose details of that.
2) During the course of development, you get ideas for 2.0
3) You're about to publish the library, and realise that your 2.0
changes won't be backwards compatible, because 1.x exposes API details
in a non-futureproof way.
4) You do a little extra work on 1.x, making it more general -- i.e.,
living up to its name.
5) You publish 1.x
6) You create version 2.x, which ALSO supports LZMA.
7) You publish version 2.x, which now has twice as many features, does
what it says on the tin by being a general compression library, etc.
8) You receive a patch from someone, adding BZIP support, for 1.x. You
merge it in, and publish 3.x, which now supports 3 compression formats.
9) All the while, people have been using your libraries in products:
they all work with general-compression-library x.x, later versions
being better, minor OR major. No security issues, because you can
always upgrade to the latest library version.
Now, instead of one base library and two forks, you have one library
with three versions, each backwards-compatible, each building features
on the last. That's a MUCH better outcome.
Now, that does involve a bit more foresight, but I think it's the kind
of foresight that enforcing backwards compatibility encourages, and
rightly so.
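
As a sketch of what that foresight can look like in code (invented
names again, not a real library): hide the algorithm behind a trait,
so that new formats arrive as new implementations rather than as
signature changes.

    // A futureproof 1.x surface: the algorithm is an opaque
    // implementation of a trait, not a set of leaked parameters.
    pub trait Compressor {
        fn compress(&self, data: &[u8]) -> Vec<u8>;
    }

    pub struct Lz; // all that 1.x ships

    impl Compressor for Lz {
        fn compress(&self, data: &[u8]) -> Vec<u8> {
            data.to_vec() // placeholder; real LZ elided
        }
    }

    // 2.x adds `pub struct Lzma`, 3.x adds `pub struct Bzip`, each
    // with its own `impl Compressor`; nothing that already exists
    // changes shape.
    pub fn compress(data: &[u8], codec: &dyn Compressor) -> Vec<u8> {
        codec.compress(data)
    }

    fn main() {
        assert_eq!(compress(b"abc", &Lz), b"abc".to_vec());
    }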
I said *might* happen. Let's explore another turn of events, and
imagine that you didn't have the foresight in step 3 above: you create
"general-compression-library" never realising, until 1.x is published
and you come to create 2.x, that it isn't general at all and that 1.x
is going to be incompatible with 2.x. Under a backwards-compatibility
model, that might go like this:
1) You create general-compression-library, version 1.0, with LZ support,
expose details of that, and publish it.
2) You want to add LZMA support to this library, but can't because it
breaks backwards compatibility.
3) Instead, you create a new library, "universal-compression-library",
1.0, with plugin support, including built-in plugins for both LZMA and
(via general-compression-library 1.0) LZ.
4) You publish this as universal-compression-library, v1.0.
5) You receive a patch for BZIP support, for general-compression-library
1.x. It adds new features to general-compression-library, to support
both LZ and BZIP. You thank the contributor for the patches, publish
g-c-l 2.0, and create a plugin for u-c-l to support it as well.
6) All the while, people have been using your libraries in products:
some depend on the latest version of general-compression-library,
version x.x, later versions being better. Some use the newer
universal-compression-library, version x.x, later versions being
better. There are two libraries, both maintained to some extent for
now. Security issues are reduced compared with the first example.
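
For what the plugin arrangement in step 3 might look like, here's a
rough sketch (every name invented; the old library stubbed out behind
a trivial plugin):

    use std::collections::HashMap;

    // Hypothetical plugin surface for universal-compression-library:
    // codecs are looked up by name, so adding a format means
    // registering a new plugin, never changing this API.
    pub trait Codec {
        fn compress(&self, data: &[u8]) -> Vec<u8>;
    }

    pub struct Registry {
        codecs: HashMap<String, Box<dyn Codec>>,
    }

    impl Registry {
        pub fn new() -> Self {
            Registry { codecs: HashMap::new() }
        }

        pub fn register(&mut self, name: &str, codec: Box<dyn Codec>) {
            self.codecs.insert(name.to_string(), codec);
        }

        pub fn compress(&self, name: &str, data: &[u8]) -> Option<Vec<u8>> {
            self.codecs.get(name).map(|c| c.compress(data))
        }
    }

    // Built-in plugin that would delegate to the old
    // general-compression-library 1.0 (stubbed here).
    struct LzPlugin;

    impl Codec for LzPlugin {
        fn compress(&self, data: &[u8]) -> Vec<u8> {
            data.to_vec() // stand-in for a call into g-c-l
        }
    }

    fn main() {
        let mut registry = Registry::new();
        registry.register("lz", Box::new(LzPlugin));
        assert!(registry.compress("lz", b"hello").is_some());
    }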
So this is NOT as great an outcome as the second example, admittedly.
However, it's still much better than the first. In contrast to the
first example, there is now a clear direction: a newer, more
future-proof, more compatible library is coming to the fore, clearly
distinguished by a new name. More products are using it, and, if
general-compression-library is ever fully deprecated, its code can be
ported to the universal lib, or replaced with other, non-deprecated
plugins for the same functionality. Only products that depended
directly on the old, broken design of general-compression-library are
at risk from unmaintained code, and in that case someone is likely to
port the application code to use universal-compression-library
instead, especially if the deprecation notice tells them to.
The difference? In the original example, you have multiple forks of
one library, distinguished only by version numbers. Different forks
are incompatible with each other, patches go to one or the other, and
no fork is clearly superior, because now you have multiple solutions
to the same problem, under the same name.
Really, all we're saying here is, "don't switch things out from under
people". If your library is meant to do X when you do A, then don't
make it suddenly crash when you haven't done B beforehand. If one
recipe requires two steps, and another requires one, then they are
different recipes and should have different names. If two recipes use
the same set of steps, and yet produce different outcomes, then there
are ingredients in there that you need to be aware of, and so really,
they are different recipes, with different ingredients, and deserve
different names.
--
Lee