Jeff Younker wrote:
That's a great theory, but that's not how the real world works. Python
packages form an ecosystem in which there will be inconsistencies
between different minor versions of the same package.
I'm not sure what you're arguing here. If you're saying
that having a version management
Jeff Younker wrote:
That's all well and good when a single application is
being deployed on a Python installation, but that isn't a very realistic
situation. There is usually more than one application/script/whatever
running off of a given Python installation.
Here's an example of
Paul Moore wrote:
Is nobody but me seeing shades of Windows DLL hell in all of this?
DLL hell was caused by the lack of versioning: a new DLL
overwrote older ones while not being compatible with them. If we add
version checking, we won't have the DLL hell problem, but dependency hell instead.
David Cournapeau wrote:
Library versioning without API
stability just does not make sense to me.
Yes, obviously if you have library versioning you need to
use it properly in order for it to be any use.
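Using it "properly" at least means a consumer can state its requirement and check it against what is installed. A minimal sketch of such a check, using the standard library's `importlib.metadata`; the package name `somepkg` and the crude integer-only version parse are simplifying assumptions:

```python
# Sketch only: checking a declared version requirement at runtime.
# Uses importlib.metadata (stdlib, Python 3.8+).  "somepkg" is a
# hypothetical package name.
from importlib.metadata import version, PackageNotFoundError

def as_tuple(v):
    """Crude dotted-integer parse; no pre-release/suffix handling."""
    return tuple(int(part) for part in v.split("."))

def require(pkg, min_version):
    """True if pkg is installed at min_version or newer."""
    try:
        installed = version(pkg)
    except PackageNotFoundError:
        return False
    return as_tuple(installed) >= as_tuple(min_version)

# e.g. require("somepkg", "0.7")
```

Note that a check like this only tells you a requirement is unmet; it does nothing to resolve two packages asking for incompatible versions, which is the question below.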
What do you do if you have a package D
which depends on both B and C, and C needs one
David Cournapeau wrote:
what is needed is a stable API for the used packages.
That's a nice ideal to aim for, but it's only achievable
for old and mature packages.
One could change the package name every time the API
changes, but then *any* change to the API would make the
package unusable by
Cliff Wells wrote:
I think the convention is major.minor where minor releases are
backwards-compatible and major releases aren't expected to be (but might
be).
AFAIK, that's the general rule, but Python itself does not respect this
convention, so I don't see this happening for Python.
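For what it's worth, the major.minor convention Cliff describes is easy to state in code. A sketch, assuming purely numeric "major.minor" version strings (real-world version strings are messier):

```python
# Sketch of the major.minor convention: minor releases are assumed
# backwards-compatible, major releases are not.  Assumes purely
# numeric "major.minor" version strings.

def parse(version):
    major, minor = version.split(".")[:2]
    return int(major), int(minor)

def compatible(installed, required):
    """True if `installed` can satisfy a dependency built against `required`."""
    imajor, iminor = parse(installed)
    rmajor, rminor = parse(required)
    return imajor == rmajor and iminor >= rminor
```

Under this rule 2.5 satisfies a dependency built against 2.3, but 3.0 does not; a platform that breaks compatibility within minor releases can't offer its users even this much.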