On Sun, Jan 24, 2016 at 4:08 AM, Nick Coghlan <ncogh...@gmail.com> wrote:
> On 24 January 2016 at 12:31, Robert T. McGibbon <rmcgi...@gmail.com> wrote:
>> On Sat, Jan 23, 2016 at 6:19 PM, Chris Barker <chris.bar...@noaa.gov> wrote:
>>>
>>> 1)  each package that needs a third-party lib statically links it in.
>>> 2)  each package that needs a third-party lib provides it, linked with a
>>> relative path (IIUC, that's how most Windows packages are done).
>>> 3) We establish some standard for providing binary libs as wheels, so that
>>> other packages can depend on them and link to them.
>>
>> In my view, all of these are valid options. I think much of this will need
>> to be worked out by the communities -- especially if individual packages and
>> subcommunities decide to take the option (3) approach. I hope this PEP will
>> enable the communities involved in OpenGIS, audio processing, image
>> processing, etc to work out the solutions that work for them and their
>> users.
>>
>> Perhaps one thing that is missing from the PEP is an explicit statement that
>> option (3) is compatible with the manylinux1 tag -- bundling is a valid
>> solution, but it's not the *only* solution.
>
> I've long resisted the notion of defining our own cross-distro
> platform ABI, but the Docker build environment that was put together
> for the manylinux project has made me realise that doing that may not
> be as hellish in a post-Docker world as it would have been in a
> pre-Docker world. (Since we can go with the specification + reference
> implementation approach that CPython has used so successfully for so
> long, rather than having to have the build environment and ABI
> specification be entirely exhaustive).
>
> On Windows and Mac OS X, our binary compatibility policies for wheel
> files are actually pretty loose - it's "be binary compatible with the
> python.org builds for that platform, including linking against the
> appropriate C standard library", and that's about it. Upgrades to
> those ABIs are then driven by CPython switching to newer base
> compatibility levels (dropping end-of-life versions on the Windows
> side [1], and updating to new deployment target macros on the Mac OS X
> side). Folks with external dependencies either bundle them, skip
> publishing wheel files, or just let them fail at import time if the
> external dependency is missing. (Neither platform has an anti-bundling
> culture, though, so I assume a lot of folks go with the first option
> over the last one)
>
> If the aim is to bring Linux wheel support in line with Windows and
> Mac OS X, then rather than defining a *new* compatibility tag (which
> would require new pip clients to process), perhaps we could instead
> adopt a similarly loose policy on what the existing generic "linux"
> tag means as we have for Windows and Mac OS X: it could just mean
> wheel files that are binary compatible with the Python binaries in the
> "manylinux" build environment. The difference would then just be that
> the target Linux ABI would be defined by PyPA and the manylinux
> developers, rather than by python-dev.

It's an option I guess, though Donald's message below makes it rather
less attractive :-). The other thing is that as compared to Windows or
OS X, it requires much more attention to actually meet the target
Linux ABI -- on Windows or OS X an out-of-the-box build for a simple
project will more-often-than-not legitimately meet the ABI, and if you
can make a package that also works on your office-mate's computer then
it will probably work everywhere. On Linux, the way glibc versioning
works means that just doing the obvious 'pip wheel' call will
basically never give you a wheel that meets the ABI, and testing on
your office-mate's computer proves nothing (except that you're both
running Ubuntu 15.10 or whatever). Also, there's a huge quantity of
existing linux-tagged wheels out there that definitely don't meet the
ABI.
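
To make that concrete: the problem is invisible at build time, because
the glibc symbol versions your extension picks up only bite on *older*
systems. Here's a quick-and-dirty sketch of how to see what a freshly
built extension actually requires (the module filename is just a
placeholder -- point it at whatever 'pip wheel' spat out):

import re
import subprocess

def glibc_versions_needed(so_path):
    # 'objdump -T' dumps the dynamic symbol table; every symbol imported
    # from glibc is annotated with the GLIBC_x.y version node it was
    # linked against.
    out = subprocess.check_output(["objdump", "-T", so_path])
    out = out.decode("ascii", "replace")
    versions = set(re.findall(r"GLIBC_([0-9.]+)", out))
    return sorted(versions, key=lambda v: tuple(int(x) for x in v.split(".")))

print(glibc_versions_needed("_mymodule.cpython-35m-x86_64-linux-gnu.so"))

On a current Ubuntu or Fedora box that list typically includes versions
well past the 2.5 that CentOS 5 ships, even for trivial C extensions,
which is exactly why "works on my machine and my office-mate's" tells
you nothing about whether the wheel meets the target ABI.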

> In terms of the concerns regarding the age of gcc needed to target
> CentOS 5.11, it would be good to know just what nominating CentOS 6.x
> as the baseline ABI instead would buy us - CentOS 5 is going on 9
> years old (released 2007) and stopped receiving full updates back in
> 2014 [2], while RHEL/CentOS 6 is just over 5 years old and has another
> year of full updates left. The CentOS 6 ABI should still be old enough
> to be compatible with the Debian 6 ABI (current stable is Debian 8),
> as well as the Ubuntu 12.04 LTS ABI (Ubuntu 16.04 LTS is due out in a
> few months).

AFAICT everyone I've found publishing info on distributing generic
Linux binaries is currently using CentOS 5 as their target -- not just
manylinux1, but also the Holy Build Box / "travelling ruby" folks,
Firefox (not sure exactly what they're using but it seems to be <=
CentOS 5), etc. I guess bumping up to CentOS 6 would be trivial enough
-- just keep the same library list and bump up the minimum version
requirements for glibc / libgcc / libstdc++ -- but I think we'd be
pioneers here, and that's something we might not want to be at the
same time that we're first dipping our toes into the water :-).
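
(For concreteness, here's roughly what that bump would amount to. The
manylinux1 numbers should match the draft PEP, from memory; the
CentOS 6 numbers are rough placeholders I haven't verified, not a
worked-out manylinux2 proposal:)

# The whitelist of external libraries (libc, libgcc_s, libstdc++,
# libX11, libGL, ...) would stay exactly the same; only the ceilings on
# the versioned symbols a wheel may require would move up.
SYMBOL_VERSION_CEILINGS = {
    "manylinux1 (CentOS 5)":   {"GLIBC": "2.5",  "GCC": "4.2.0", "GLIBCXX": "3.4.8"},
    "hypothetical (CentOS 6)": {"GLIBC": "2.12", "GCC": "4.4.0", "GLIBCXX": "3.4.13"},
}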

GCC 4.8 was released in 2013; it's not actually terribly old. It has
decent C++11 support, and it's sufficient to compile things like LLVM
and Qt and Firefox. (Compare to Windows, where anyone building py27
wheels gets to use MSVC 2008, which doesn't even know C99.) So I'd be
inclined to stick with CentOS 5 for now, and gather some experience
while waiting to see how far it can go before it breaks.

The one thing that does give me pause is that whenever we *do* decide
to switch to manylinux2, it's going to be a big drag to wait for
a whole pip release/upgrade cycle -- Debian unstable is still shipping
pip 1.5.6 (released May 2014) :-(. And when it comes to wheel
compatibility tags and pip upgrades, the UX is really terrible: if pip
is too old to recognize the provided wheels, then it doesn't tell the
user "hey, you should upgrade me" or otherwise provide some hint that
there might be a trivial solution to this problem; instead it just
silently downloads the source and attempts to build it (and quite
often blows up after pegging the CPU for 30 minutes or something).

I guess one way to square this circle would be for pip to have some
logic that checks for manylinux[0-9]+ platform tags, and if it sees a
wheel like this with a platform tag that post-dates its own release,
AND the only other option is to build from source, then it tells the
user "hey, there's an *excellent* chance that there's a new pip that
could give you a wheel right now -- what do you want me to do?". Or we
could even make it fail-open rather than fail-closed, like:

If pip knows about manylinux 1..n, then given wheels for manylinux
(n-1), n, and (n+1), it should use the preference ordering:
  n > (n - 1) > (n + 1)
i.e., for known platform tags we prefer newer platform tags to older
ones; for unknown platform tags from the future, we optimistically
assume that they'll probably work (since the whole idea of the
manylinux tags is that they will work almost everywhere), but we
prefer known tags to unknown tags, so that we only install the
manylinux(n+1) wheel if nothing else is available. (And print some
message saying what we're doing.)
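
As a minimal sketch of that fail-open ordering (nothing here is real
pip code -- the helper name and the bare-integer representation of the
tags are just for illustration):

def manylinux_preference(generation, known_max):
    """Sort key for candidate wheels; lower sorts first (= preferred).

    'generation' is the N from a manylinuxN platform tag; 'known_max'
    is the newest generation this pip release knows about.
    """
    if generation <= known_max:
        # Known tags: newer beats older, so n > (n - 1) > ...
        return (0, known_max - generation)
    # Unknown tags from the future: optimistically assumed to work, but
    # ranked below every known tag, so a manylinux(n+1) wheel only wins
    # when nothing else is available.
    return (1, generation - known_max)

available = [1, 2, 3, 4]   # generations of the wheels on offer
print(sorted(available, key=lambda g: manylinux_preference(g, known_max=3)))
# -> [3, 2, 1, 4]; building from source would come last, after all of these.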

...well, or maybe just erroring out when it sees the future and asking
the user to help would be good enough :-). This would impose the
requirement going forward that we'd have to wait for a pip release
with support for manylinuxN before allowing manylinuxN onto PyPI, but
that doesn't seem too onerous.

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
_______________________________________________
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig
