On Fri, Jan 22, 2016 at 1:33 AM, M.-A. Lemburg <m...@egenix.com> wrote:
> On 21.01.2016 20:05, Matthew Brett wrote:
>> Hi,
>>
>> On Thu, Jan 21, 2016 at 2:05 AM, M.-A. Lemburg <m...@egenix.com> wrote:
>>> On 21.01.2016 10:31, Nick Coghlan wrote:
>>>> On 21 January 2016 at 19:03, M.-A. Lemburg <m...@egenix.com> wrote:
>>>>> By using the version based approach, we'd not run into this
>>>>> problem and gain a lot more.
>>>>
>>>> I think it's better to start with a small core that we *know* works,
>>>> then expand later, rather than trying to make the first iteration too
>>>> wide. The "manylinux1" tag itself is versioned (hence the "1" at the
>>>> end), so "manylinux2" may simply have *more* libraries defined, rather
>>>> than newer ones.
>>>
>>> My argument is that the file based approach taken by the PEP
>>> is too limiting to actually make things work for a large
>>> set of Python packages.
>>>
>>> It will basically only work for packages that do not interface
>>> to other external libraries (except for the few cases listed in
>>> the PEP, e.g. X11, GL, which aren't always installed or
>>> available either).
>>>
>>> IMO, testing the versions of a set of libraries is a safer
>>> approach. It's perfectly fine to have a few dependencies
>>> not work in a module because an optional system package is not
>>> installed, e.g. say a package comes with one UI written in
>>> Qt and another in GTK.
>>
>> Please forgive my slowness, but I don't understand exactly what you
>> mean.  Can you give a specific example?
>>
>> Say my package depends on libpng.
>>
>> Call the machine I'm installing on the client machine.
>>
>> Are you saying that, when I build a wheel, I should specify to the
>> wheel what versions of libpng I can tolerate on the client
>> machine, and if the client does not have a compatible version, then pip
>> should raise an error, perhaps with a useful message about how to get
>> libpng?
>>
>> If you do mean that, how do you want the PEP changed?
>
> I already posted a change proposal earlier on in the thread.
> I'll repeat it here (with a minor enhancement):

Okay, I think I get it now. I'll try to repeat back to summarize and
see if I have understood your proposal correctly:

In the PEP 513 "manylinux1" approach, when users do 'pip install foo',
then one of three things happens:
1) they get a working foo and are immediately good-to-go, or
2) pip says "I'm sorry, there's no compatible wheel", or
3) something else happens, in which case this is a bug, and the spec
provides some framework to help us determine whether this is a bug in
the wheel, a bug in pip, or a bug in the spec.

In your approach, users do 'pip install foo', pip installs the wheel,
and then when they try to use it they get an error from the dynamic
linker about missing libraries. The user then has to read the docs or
squint at those error messages to figure out which set of apt-get /
yum / pacman / ... commands they need to run in order to make foo
work. (And possibly there is no such combination of commands that will
actually work, because e.g. the wheel was linked against Debian's
version of libbar.so.7 and Fedora's version of libbar.so.7 turns out
to have an incompatible ABI, or Fedora simply doesn't provide a
libbar.so.7 package at all.)
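(To make that failure mode concrete, here's a hypothetical
illustration; 'foo' and 'libbar.so.7' are just the placeholders from
the paragraph above, not real projects:)

    # The wheel installed without complaint, but the import fails at
    # runtime because the dynamic linker can't find the external
    # library the extension module was linked against.
    try:
        import foo  # placeholder for a package linked against libbar.so.7
    except ImportError as exc:
        # Typical dynamic-linker message:
        #   libbar.so.7: cannot open shared object file: No such file
        #   or directory
        print("Import failed:", exc)
        print("Now map that back onto an apt-get/yum/pacman package, "
              "assuming one with a compatible ABI even exists.")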

I won't express any opinion on your alternative PEP with its own
platform tag without reading it, but we're not going to change PEP 513
to work this way.

>  * no lock-out of package authors who would like to push
>    wheel files for their packages to PyPI, but happen to
>    use libraries not in the predefined list of the original
>    draft PEP

https://mail.python.org/pipermail/distutils-sig/2016-January/028050.html

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
