On 1 February 2015 at 11:01, Donald Stufft <[email protected]> wrote:
> Do you expect some automated tool to take advantage of this link?
>
> In other words, what's the benefit over just having a link to the docker
> container in the long_description or in the metadata 2.0 project urls?
I agree that from an implementation perspective, this could just be a new
recommended URL in the project URLs metadata (e.g. "Reference Container
Images"). If folks don't think the idea sounds horrible, I'll make that
update to the PEP 459 draft.

However, the bigger picture I'm mostly interested in is consistency of
presentation in the PyPI web UI (probably at some point after the migration
to Warehouse - I originally started writing this idea up as a Warehouse
RFE), and in making the provision of reference Docker images something we
explicitly recommend in the Python Packaging User Guide for PyPI published
web service projects that support deploying on Linux (even Microsoft are
aiming to make it easy to deploy Docker based containers on their Azure
public cloud by way of Linux VMs [1]).

(Longer term, xdg-app looks promising for rich client Linux applications,
but that's a significantly less mature piece of technology, which says a
lot given the relative immaturity of the container image based approach to
Linux service deployment.)

Folks are certainly already free to point their users at prebuilt container
images if they want to, so this idea would specifically be about shifting
to explicitly recommending container images as a good approach to dealing
with the challenges created by the lack of a good cross-distro way to
describe Linux ABI compatibility requirements. When even Red Hat, SUSE and
Canonical are saying "Yeah, you know what? Just use containers and we'll
take care of figuring out a way to deal with the resulting security,
auditing and integrated system qualification consequences", that's a pretty
strong hint that pursuing the "declarative platform requirements" approach
may not actually be viable in the case of Linux.
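To make the suggestion concrete, here's a sketch of how such an entry might
sit alongside the other project URLs in a PEP 459 `python.project` metadata
extension. Note the "Reference Container Images" label and all of the URLs
are illustrative assumptions only - the label is precisely what's being
proposed here, not anything already in a spec:

```python
import json

# Hypothetical metadata 2.0 fragment: the "Reference Container Images"
# key is the proposed (not yet standardised) addition; the rest follows
# the project_urls mapping shape from the PEP 459 draft.
metadata_fragment = {
    "extensions": {
        "python.project": {
            "project_urls": {
                "Home": "https://example.com/exampleproject",
                "Documentation": "https://exampleproject.readthedocs.org/",
                "Reference Container Images":
                    "https://registry.hub.docker.com/u/example/exampleproject/",
            }
        }
    }
}

# A consumer (e.g. a future Warehouse UI) could then pull the link out
# generically, falling back to None when a project doesn't publish one:
urls = metadata_fragment["extensions"]["python.project"]["project_urls"]
image_link = urls.get("Reference Container Images")
print(json.dumps(urls, indent=2))
```

The point of using a well-known key rather than free text in the
long_description is exactly that last step: tools can find the link without
scraping.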
Such a feature wouldn't need to be specifically about linking to DockerHub
(we could refer to something more generic like "reference Linux container
images") - DockerHub is just currently the easiest way to publish them.
(Docker's a fully open source company, so external hosting is already
possible for folks that are particularly keen to do so, but being on
DockerHub integrates most easily with the standard docker client tools -
similar to the situation with pip and other language ecosystem specific
tools.)

This was a connection my brain made this morning as a result of the recent
thread on Linux ABI compatibility declarations - if even the commercial
Linux distros are essentially giving up on the "programmatically declare
your platform ABI compatibility requirements" approach at the application
layer in favour of "bundle your dependencies while still supporting a
decent auditing mechanism" (i.e. container images on backend servers, or
the xdg-app approach [2] for rich client applications), we might be well
advised to follow their lead.

I also think this is one of the key lessons being learned on the commercial
Linux vendor side from the rise of Android as the (by far) dominant Linux
client OS: the combination of "independently updated siloed applications"
with "an integrated platform for running and updating siloed applications
and allowing them to interoperate in a controlled fashion" is a model that
works really well for a wide range of users - a much wider range than the
"do your own integration" adventure that has been the traditional Linux
experience outside the world of formal commercial compatibility
certification programs.

Cheers,
Nick.
[1] http://azure.microsoft.com/blog/2015/01/08/introducing-docker-in-microsoft-azure-marketplace/
[2] https://wiki.gnome.org/Projects/SandboxedApps

--
Nick Coghlan | [email protected] | Brisbane, Australia

_______________________________________________
Distutils-SIG maillist - [email protected]
https://mail.python.org/mailman/listinfo/distutils-sig
