On 1 February 2014 18:23, Vinay Sajip <vinay_sa...@yahoo.co.uk> wrote:
> On Fri, 31/1/14, Brian Wickman <wick...@gmail.com> wrote:
>
>> There are myriad other practical reasons. Here are some:
>
> Thanks for taking the time to respond with the details - they are good
> data points to think about!
>
>> Lastly, there are social reasons. It's just hard to convince most
>> engineers to use things like pkg_resources or pkgutil to manipulate
>> resources when for them the status quo is just using __file__.
>> Bizarrely, the social challenges are just as hard as the
>> above-mentioned technical challenges.
>
> I agree it's bizarre, but sadly it's not surprising. People get used to
> certain ways of doing things, and a certain kind of collective myopia
> develops when it comes to looking at different ways of doing things.
> Having worked with fairly diverse systems in my time, ISTM that sections
> of the Python community have this myopia too. For example, the Java
> hatred and PEP 8 zealotry that you see here and there.
>
> One of the things that's puzzled me, for example, is why people think
> it's reasonable or even necessary to have copies of pip and setuptools
> in every virtual environment - often the same people who will tell you
> that your code isn't DRY enough! It's certainly not a technical
> requirement, yet one of the reasons why PEP 405 venvs aren't that
> popular is that pip and setuptools aren't automatically put in there.
> It's a social issue - it's been decided that rather than exploring a
> technical approach to addressing any issue with installing into venvs,
> it's better to bundle pip and setuptools with Python 3.4, since that
> will seemingly be easier for people to swallow :-)
FWIW, installing into a venv from outside it works fine (that's how
ensurepip works in 3.4). However, it's substantially *harder* to explain
to people how to use it correctly that way. In theory you could change
activation so that it also affected the default install locations, but
the advantage of just having them installed per venv is that you're
relying on the builtin Python path machinery rather than adding
something new. So while it's wasteful of disk space and means needing to
upgrade them in every virtualenv, it does categorically eliminate many
potential sources of bugs.

Doing things the way pip and virtualenv do them also meant there was a
whole pile of design work that *didn't need to be done* to get a
functional system up and running. Avoiding work by leveraging existing
capabilities is a time-honoured engineering tradition, even when the
simple way isn't the most elegant way. Consider also the fact that we
had full virtual machines long before we had usable Linux containers:
full isolation is actually *easier* than partial isolation, because
there are fewer places for things to go wrong, and less integration work
to do in the first place.

That said, something I mentioned to the OpenStack folks a while ago (and
I think on this list, but possibly not) is that I have now realised the
much-reviled (for good reason) *.pth files actually have a legitimate
use case: allowing API-compatible versions of packages to be shared
between multiple virtual environments. You can trade reduced isolation
for easier upgrades on systems containing multiple virtual environments
by adding a suitable *.pth file to the venv rather than installing the
package itself. While there's currently no convenient tooling around
that, *.pth files are a feature CPython has supported for as long as I
can remember, so tools built on that idea would comfortably work on all
commonly supported Python versions.

Cheers,
Nick.
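As a minimal sketch of the *.pth idea (all paths and module names here are hypothetical, and ordinary temp directories stand in for a real venv's site-packages): each non-comment line of a *.pth file is appended to sys.path when the site module processes the directory containing it, so a one-line *.pth file pointing at a shared package directory makes that directory's packages importable from the venv.

```python
import sys
import site
import tempfile
from pathlib import Path

# A "shared" directory standing in for a location holding API-compatible
# packages used by several venvs (hypothetical layout).
shared = Path(tempfile.mkdtemp()) / "shared-packages"
shared.mkdir()
(shared / "common_utils.py").write_text("VERSION = '1.0'\n")

# Stand-in for a venv's site-packages directory; in a real venv this
# would be <venv>/lib/pythonX.Y/site-packages.
site_packages = Path(tempfile.mkdtemp()) / "site-packages"
site_packages.mkdir()

# The *.pth file: one line naming the shared directory. The site module
# appends each such line to sys.path when it processes this directory.
(site_packages / "shared-packages.pth").write_text(str(shared) + "\n")

# Simulate what happens at interpreter startup for a real site dir.
site.addsitedir(str(site_packages))

# The shared module is now importable from "inside" this fake venv,
# without a copy of it having been installed there.
import common_utils
print(common_utils.VERSION)
```

Upgrading the shared location then upgrades every venv whose *.pth file points at it - which is exactly the isolation-for-convenience trade mentioned above.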
--
Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia
_______________________________________________
Distutils-SIG maillist - Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig