Just two small remarks:

1. C++11 actually introduced the idea of *inline namespaces* specifically to
detect the use of multiple (incompatible) versions of a single library. The
*compilation* phase sees through inline namespaces as if they were not
there; however, the symbols get mangled differently, creating a
linking/loading error when trying to resolve them.
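For illustration, here is a minimal sketch of how that looks (the library
name "mylib" and the version namespace are made up for the example):

// Hypothetical library header/source, version 1.2.0.
namespace mylib {
inline namespace v1_2_0 {
    // Callers simply write mylib::frobnicate(); the inline namespace is
    // transparent at compile time...
    int frobnicate(int x) { return x + 1; }
}
}

int main() {
    // ...but the symbol is mangled as mylib::v1_2_0::frobnicate(int), so an
    // object file compiled against, say, v1_3_0 would fail to link or load
    // against this definition instead of silently calling the wrong code.
    return mylib::frobnicate(41) == 42 ? 0 : 1;
}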


2. I appreciate the idea of using both version A.B.C and X.Y.Z of a
particular library; however, what happens when I pass an object "Dummy"
created by A.B.C to a function of X.Y.Z that expects a "Dummy"?

- Is there any guarantee that the memory layout is identical?
- Could there be an issue with virtual tables (for traits)? If X.Y.Z calls
"slot 3" of the virtual table (expecting "(int, string) -> int") and it
turns out that A.B.C had a completely different function there (say
"(string, string) -> string"), then I am foreseeing a crash (see the sketch
after this list).

The latter is a very real issue in C++, as there is no portable way to add a
virtual method to an existing class hierarchy without breaking binary
compatibility. It just so happens that the Itanium ABI guarantees that you
can get away with appending such a method as the last virtual method of the
last class in the hierarchy that introduces new virtual methods... but this
is quite error-prone, and it is impossible to know whether a user of your
library has further refined your class...

In short, isn't there a risk of crashes if one accidentally links two
versions of a given library and starts exchanging objects? It seems
impractical to prove that objects created by one version cannot
accidentally end up being passed to the other version:

- unless the types differ at compilation time (which seems awkward; see the
sketch below)
- or unless you can prove that at most one version of the library may "leak"
those types (i.e., all other dependencies use the other versions purely
internally)
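For the first bullet, one (awkward) way to make the types differ at
compilation time is to bake the version into the type, for example via inline
namespaces again (illustrative names only):

namespace lib {
inline namespace v1_4_0 {
struct Dummy { int payload = 0; };
int consume(const Dummy& d) { return d.payload; }
}
}

int main() {
    lib::Dummy d;            // actually lib::v1_4_0::Dummy
    // A Dummy built by lib 2.0.0 would be lib::v2_0_0::Dummy: a distinct
    // type with a distinct mangled name, so handing it to consume() above
    // fails at compile/link time instead of crashing at run time.
    return lib::consume(d);  // returns 0
}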

Escape analysis is pretty tough in general; however, should you have a
mechanism for distinguishing external from internal dependencies (the latter
not being exposed in the interface), then the compiler could possibly be
augmented to prove it, I guess. You would then just have to ensure, for each
library, that no two of its dependencies reference the same library (but at
different versions) as an external dependency of theirs.


Well, maybe not so small remarks...

-- Matthieu



On Sat, Feb 1, 2014 at 10:39 AM, Gaetan <[email protected]> wrote:

> It is not only about API changes. Sometimes, from one minor version to the
> next, a feature gets silently broken (a silent regression). While that might
> not impact libA, which depends on it, it may break libB, which also depends
> on it but with a previous version.
> As a result, libA forces installation of this dependency without any
> concern (all its features work) but libB gets broken without any warning.
>
> And that is the real mess to deal with.
>
> That's happened this week at my job...
>
> I largely prefer each library to be self-contained, i.e., if libA depends on
> libZ version X.X.X, and libB depends on libZZ version Y.Y.Y, just let each
> one be installed and used at its own version. That is perfectly acceptable
> (and even recommended) for non-system-integrated software (for example, when
> a company wants to build a software product with minimal system
> dependencies that would run on any version of Ubuntu, with the only
> dependency being libc).
> On the other hand, when the software gets integrated into a distribution
> (Ubuntu, Red Hat, Homebrew), let the distribution's package manager do its job.
>
>
>
> -----
> Gaetan
>
>
>
> 2014-02-01 Tony Arcieri <[email protected]>:
>
>> On Fri, Jan 31, 2014 at 4:03 PM, Lee Braiden <[email protected]> wrote:
>>
>>> This would be counterproductive.  If a library cannot be upgraded to
>>> 1.9, or even 2.2, because some app REQUIRES 1.4, then that causes SERIOUS
>>> SECURITY issues.
>>>
>>
>> Yes, these are exactly the types of problems I want to help solve. Many
>> people on this thread are talking about pinning to specific versions of
>> libraries. This will prevent upgrades in the event of a security problem.
>>
>> Good dependency resolvers work on constraints, not specific versions.
>>
>>> The ONLY realistic way I can see to solve this is to have all higher
>>> version numbers of the same package be backwards compatible, and have
>>> incompatible packages be DIFFERENT packages, as I mentioned before.
>>>
>>> Really, there is a contract here: an API contract.
>>
>>
>> Are you familiar with semantic versioning?
>>
>> http://semver.org/
>>
>> Semantic Versioning would stipulate that a backwards incompatible change
>> in an API would necessitate a MAJOR version bump. This indicates a break in
>> the original contract.
>>
>> Ideally, if people are using multiple major versions of the same package
>> and a security vulnerability is discovered which affects all versions of
>> the package, the package maintainers would release a hotfix for all major
>> versions.
>>
>> --
>> Tony Arcieri
>>
>
_______________________________________________
Rust-dev mailing list
[email protected]
https://mail.mozilla.org/listinfo/rust-dev
