On 15 Feb 2017 23:28, "Paul Moore" <[email protected]> wrote:
> So, in summary,
>
> * I agree that libraries pinning dependencies too tightly is bad.
> * Distributions can easily enough report such pins upstream when the
>   library is initially packaged, so there's no ongoing cost here (just
>   possibly a delay before the library can be packaged).

No, we can't easily do this. libraries.io tracks more than *two
million* open source projects. Debian, the largest Linux distribution,
tracks only around 50k packages. That means it is typically going to be
*app* developers, not redistributors, who run into the problem of
inappropriately pinned dependencies.

So if we rely on a manual "publish with pinned dependencies", "get bug
report from redistributor or app developer", "republish with unpinned
dependencies" cycle, we'll be in a situation where:

- the affected app developer or redistributor is going to have a
  negative experience with the project
- the responsible publisher is either going to have a negative
  interaction with an end user or redistributor, or else they'll just
  silently move on to find an alternative library
- we relinquish any control over the tone used when the publisher is
  alerted to the problem

By contrast, if we design the metadata format such that *PyPI* can
provide a suitable error message, then:

- publishers get alerted to the problem *prior* to publication
- end users and redistributors are unlikely to encounter the problem
  directly
- we retain full control over the tone of the error notification

> * Libraries can legitimately have appropriate pins (typically to
>   ranges of versions). So distributions have to be able to deal with
>   that.
> * Applications *should* pin precise versions. Distributions have to
>   decide whether to respect those pins, or remove them and then take
>   on support of the combination that upstream doesn't support.
> * But application pins should be in a requirements.txt file, so
>   ignoring version specs is pretty simple (just a script to run
>   against the requirements file).

Applications also get packaged as sdists and wheel files, so
pydist.json does need to handle that case.

> Nick is suggesting that the requirement metadata be prohibited from
> using exact pins, but that there's alternative metadata for "yes, I
> really mean an exact pin". To me:
>
> 1. This doesn't have any bearing on *application* pins, as they aren't
>    in metadata.
> 2. Distributions still have to be able to deal with libraries having
>    exact pins, as it's an explicitly supported possibility.
> 3. You can still manage (effectively) exact pins without being
>    explicit - "foo >1.6,<1.8" pretty much does it. And that doesn't
>    even have to be a deliberate attempt to break the system; it could
>    be a genuine attempt to avoid known issues that just got too
>    aggressive.

People aren't going to do the last one accidentally, but they *will*
use "==" when transferring app development practices to library
development.
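
Just to make that concrete, here's a minimal sketch using the
third-party "packaging" library (the version numbers are invented for
the example): if upstream has only published 1.6, 1.7 and 1.8, the
range specifier admits exactly one release, so it acts as a de facto
pin without ever using "==".

    from packaging.specifiers import SpecifierSet

    range_spec = SpecifierSet(">1.6,<1.8")  # looks like a range
    exact_spec = SpecifierSet("==1.7")      # explicit pin

    for version in ["1.6", "1.7", "1.7.2", "1.8"]:
        print(version, version in range_spec, version in exact_spec)

    # 1.6   -> False False
    # 1.7   -> True  True
    # 1.7.2 -> True  False  (the range still admits a bugfix release,
    #                        which is exactly the "avoid known issues"
    #                        case Paul describes)
    # 1.8   -> False False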
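
And going back to Paul's point above about requirements files: the
kind of script he mentions for ignoring application pins might look
something like this (a sketch, again using "packaging"; it assumes
plain name-plus-specifier lines and handles none of pip's extra syntax
such as -r includes, URLs, or environment markers):

    from packaging.requirements import Requirement

    def unpinned_names(path="requirements.txt"):
        """Yield just the project names, dropping all version specs."""
        with open(path) as f:
            for line in f:
                line = line.split("#", 1)[0].strip()  # drop comments
                if not line:
                    continue
                yield Requirement(line).name

    for name in unpinned_names():
        print(name)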

> So we're left with additional complexity for library authors to
> understand, for what seems like no benefit in practice to distribution
> builders.

There are concrete benefits for distribution builders, though:

- we'll get more automated conversions with pyp2rpm and similar tools
  that "just work" without human intervention
- we'll get fewer negative interpersonal interactions between upstream
  publishers and downstream redistributors

It won't magically make everything all sunshine and roses, but we're
currently at a point where about 70% of pyp2rpm conversions fail for
various reasons, so every little bit helps :)

> The only stated benefit of the two types of metadata is to educate
> library authors about the benefits of not pinning versions - and it
> seems like a very sweeping measure, where bug reports from
> distributions seem like they would be a much more focused and just as
> effective approach.

We've been playing that whack-a-mole game for years, and it sucks
enormously for both publishers and redistributors from a user
experience perspective.

More importantly though, it's already failing to scale adequately,
hence the rise of technologies like Docker, Flatpak, and Snappy that
push more integration and update responsibilities back to application
and service developers. The growth rates on PyPI mean we can expect
those scalability challenges to get *worse* rather than better in the
coming years.

By pushing this check down into the tooling infrastructure, the aim
would be to make the automated systems take on the task of being the
"bad guy", rather than asking humans to do it later.
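
As a rough sketch of what such an upload-time check might look like
(the run_requires/meta_requires split follows the PEP 426 draft; the
error text is purely illustrative, not anything PyPI actually
implements):

    from packaging.requirements import Requirement

    PIN_OPERATORS = {"==", "==="}  # PEP 440 exact/arbitrary equality

    def pin_errors(run_requires):
        """Return error messages for exact pins in run_requires."""
        errors = []
        for req_string in run_requires:
            req = Requirement(req_string)
            for spec in req.specifier:
                if spec.operator in PIN_OPERATORS:
                    errors.append(
                        "%s is pinned exactly (%s); use a version range "
                        "in run_requires, or move it to meta_requires "
                        "if the exact match is intentional"
                        % (req.name, spec)
                    )
        return errors

    # The publisher would see this at upload time, before the release
    # is visible to end users or redistributors:
    for msg in pin_errors(["requests ==2.13.0", "idna >=2.0,<3"]):
        print("ERROR:", msg)

Whether it's a hard error or just a warning is a UX decision, but
either way the tool, rather than a human, ends up delivering the
message.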
Cheers,
Nick.
