On 01/02/14 14:55, Vladimir Matveev wrote:
> Is it possible at all to find the latest version of a library which is
> still compatible completely automatically? Incompatibilities can be
> present on the logic level, so compilation with an incompatible version
> will succeed, but the program will work incorrectly. I don't think
> that this can be solved without assumptions about versioning (like
> semver) and/or without manual intervention.

No, it's not. It's always going to be the library developer's / programmer's responsibility, to some extent.

For example, if a library adds three new functions to fit within some begin/end wrapper, it may modify the begin/end functions to behave differently. If the library author does that in a way that breaks existing logic, then that's a bug, to my mind, or a deliberate divergence / API contract breakage.

At that point, what the author has REALLY done is decide that his original design for begin()/end(), and for that whole part of the library in general, is wrong and needs a REDESIGN. What he can then do is:

a) Create different functions, which have extended functionality, and support the three new in-wrapper functions. So, you could call:

        begin()
            old_funcs...
        end()

    OR:

        extended_begin()
            old_funcs()
            new_funcs()
        extended_end()

b) Create a new library, similar to the old one, but with new functionality, new API guarantees, etc.
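Option (a) can be sketched in Rust. This is a hypothetical illustration (the `Session` type, its fields, and all the names here are made up, not from any real library): the original begin()/end() keep their behavior, and the new guarantees live only in the extended_* wrappers, so existing callers are untouched.

```rust
// Hypothetical sketch of option (a): keep the old API's behavior
// intact and add extended entry points alongside it.

struct Session {
    depth: u32,
    extended: bool,
}

impl Session {
    fn new() -> Session {
        Session { depth: 0, extended: false }
    }

    // Original API: behavior unchanged across versions, so old
    // callers are unaffected.
    fn begin(&mut self) {
        self.depth += 1;
    }
    fn end(&mut self) {
        self.depth -= 1;
    }

    // Extended API: the extra setup/teardown the new functions need
    // lives here, not in begin()/end().
    fn extended_begin(&mut self) {
        self.begin();
        self.extended = true; // extra setup for new_funcs()
    }
    fn extended_end(&mut self) {
        self.extended = false; // extra teardown
        self.end();
    }
}

fn main() {
    let mut s = Session::new();

    // Old callers: nothing changes.
    s.begin();
    assert!(!s.extended);
    s.end();

    // New callers opt in to the extended wrapper.
    s.extended_begin();
    assert!(s.extended);
    s.extended_end();
    assert_eq!(s.depth, 0);
}
```

The point of the split is that the old entry points never learn about the new state, so code written against the old API contract keeps its guarantees.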

Ignoring the problem just creates a mess though, which ripples throughout the development space (downstream products, library forks, etc.), and no package manager will completely solve it after the fact, except to acknowledge the mess and install separate packages for every program that needs them (but that has security / feature-loss issues).


> Couldn't we just use a looser variant of version pinning inside
> semantic versioning, with manual user intervention when it is needed?
> For example, assuming something like semantic versioning is
> adopted, packages specify dependencies on a certain major version, and
> the dependency resolver downloads the latest available package inside
> this major version.

You can do that within a major version, except for one case: multiple developers creating diverged versions of 2.13, based on 2.12, each with their own features. Really, though, what you're doing is just brushing the compatibility issue under the rug at each level: y is OK in x.y because x guarantees backwards compatibility; fork1 in x.y.fork1 is OK because x.y guarantees backwards compatibility... and so on, ad infinitum. Whatever level you're at, you have two issues:

a) Backwards compatibility between library versions
b) The official, named version of the library, vs. unofficial code.

Assuming you guarantee (a) in some way (backwards compatibility in general, across all versions of the library, or backwards compatibility for minor versions), you still have incompatibility if (b) arises, which it will in any distributed repository scenario, UNLESS you can do something like git's version tracking per branch, where any version number is unique and also implies every version before it. Then you're back to whether you want to do that per major version, or overall.

But doing it per major version recursively raises the question of which major version is authoritative: what if you have a single library at 19.x, and TWO people create 20.0 independently? Again, you have incompatibility. So you're back to question (a): is it the same library, or should an author simply stay within the bounds of a library's API, and fork a CONCEPTUALLY DIFFERENT new lib (most likely with a new name) when they break that API?
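The mechanical part of the scheme Vladimir describes — pick the newest release within a pinned major version — is simple enough to sketch. This is a toy illustration under assumed MAJOR.MINOR.PATCH numbering (a real resolver would also handle pre-release tags, build metadata, and transitive constraints); none of it is from an actual package manager:

```rust
// Hypothetical sketch: resolve "latest within a pinned major version"
// over a flat list of semver-style version strings.

fn parse(v: &str) -> Option<(u64, u64, u64)> {
    // Accepts "MAJOR.MINOR.PATCH"; anything unparsable yields None.
    let mut it = v.split('.').map(|p| p.parse::<u64>().ok());
    Some((it.next()??, it.next()??, it.next()??))
}

fn resolve(available: &[&str], major: u64) -> Option<(u64, u64, u64)> {
    available
        .iter()
        .filter_map(|v| parse(v))          // drop malformed entries
        .filter(|&(m, _, _)| m == major)   // stay inside the pinned major
        .max()                             // lexicographic tuple order = semver order here
}

fn main() {
    let avail = ["1.2.0", "1.4.1", "2.0.0", "1.3.9"];

    // Pinned to major 1: the resolver picks the newest 1.x.y.
    assert_eq!(resolve(&avail, 1), Some((1, 4, 1)));

    // No release in major 3: resolution fails, user intervenes.
    assert_eq!(resolve(&avail, 3), None);
}
```

Which is exactly where the trouble starts: this code happily picks "the newest 1.x.y" even when two diverged 1.4.1s exist in different repositories, because version numbers alone can't express that.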


> If for some reason an automatically selected dependency is
> incompatible with our package or other dependencies of our package,
> the user can manually override this selection

But what does the user know about library APIs? He needs to dig into the logic of the program and, worse, the logic of the underlying libraries, to figure out that:

somelib::begin() from github://somelib/someplace/v23.2/src/module1/submod2/utils.rs, line 24

does not mean the same as:

somelib::begin() from github://somelib/otherplace/v23.2/src/module1/submod2/utils.rs, line 35

! ;)


> major version. This is, as far as I understand, the system of slots
> used by Portage as Vladimir Lushnikov described. Slots correspond to
> major versions in semver terms, and other packages depend on a
> concrete slot.

This sounds interesting (I'll have to track down Vladimir's original post on that), but so far, I'm not sure it solves the problem of a forked minor version, any more than other methods solve a forked major version. It seems to me that it always comes back to people choosing to break library APIs, and other people trying to clean it up in one way or another, which ultimately fails, at some point -- major, minor, fork, repository, branch, or otherwise -- wherever the guarantee of backwards-compatibility is no longer given.

> But the user has ultimate power to select whichever version they
> need, overriding automatic choice.

I agree that overriding choices is always important. For example, a library may make NO guarantees about performance, or may guarantee to improve performance in certain ways; but if you know that one version happens to perform well in your particular environment, and later versions don't, then you may choose to use that version anyway, assuming it "just works" and you don't want to maintain it or upgrade it for security, etc. Those are big trade-offs, though, and should not be encouraged. Again, the right solution is to fork the code at the version that works performance-wise, and introduce new API guarantees for the different performance characteristics. At that point, your library with different performance should have a different name, but should still be getting security updates etc., rather than just being pinned at one version forever.


--
Lee


_______________________________________________
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev