On 3 December 2016 at 01:33, Robert T. McGibbon <rmcgi...@gmail.com> wrote:
> Isn't this issue already solved by (and the raison d'ĂȘtre of) the multiple
> third-party Python redistributors, like the various OS package maintainers,
> Continuum's Anaconda, Enthought Canopy, ActiveState Python, WinPython, etc?

Yep. Once you start talking content curation, you're in a situation where:

- you're providing an ongoing service that will always be needed (the
  specific packages will change, but the task won't)
- you need to commit to a certain level of responsiveness for security issues
- you'll generally need to span multiple language ecosystems, not just Python
- exactly which packages are interesting will depend on the user audience
  you're targeting
- the tolerance for API breakage will also vary based on the audience you're
  targeting
- you'll often want to be able to carry patches that aren't present in the
  upstream components
- you'll need to decide which target platforms you want to support

If a curation community *isn't* doing any of those things, then it isn't
adding a lot of value beyond folks just doing DIY integration in their CI
system by pinning their dependencies to particular versions.

As far as the comments about determining dependencies go, the way pip does it
generally works fine - you just need a sandboxed environment to do the
execution, and both redistributors and open source information providers like
libraries.io are actively working on automating that process (coping with
packages as they exist on PyPI today, rather than relying on the upstream
community to change anything about the way Python packaging works).

Cheers,
Nick.

--
Nick Coghlan | ncogh...@gmail.com | Brisbane, Australia
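P.S. For anyone unfamiliar with the DIY pinning approach mentioned above: it usually amounts to a fully pinned requirements file checked into the project and installed verbatim in CI. A minimal sketch (the package names and version numbers below are purely illustrative, not a recommendation):

```
# requirements.txt - every dependency pinned to an exact version
requests==2.12.3
numpy==1.11.2
```

CI then runs `pip install -r requirements.txt`, so every build sees exactly the same dependency set until someone deliberately updates the pins.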
_______________________________________________
Distutils-SIG maillist - Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig