On 24 Jan 2014 19:41, "Paul Moore" <p.f.mo...@gmail.com> wrote:
>
> On 24 January 2014 00:17, Oscar Benjamin <oscar.j.benja...@gmail.com> wrote:
> > You need to bear in mind that people currently have a variety of ways
> > to install numpy on Windows that do work already without limitations
> > on CPU instruction set. Most numpy users will not get any immediate
> > benefit from the fact that "it works using pip" rather than "it works
> > using the .exe installer" (or any of a number of other options). It's
> > the unfortunate end users and the numpy folks who would have to pick
> > up the pieces if/when the SSE2 assumption fails.
>
> The people who would benefit are those who (like me!) don't have a
> core requirement for numpy, but who just want to "try it out"
> casually, or for experimenting or one-off specialised scripts. These
> are the people who won't be using one of the curated distributions,
> and quite possibly will be using a virtualenv, so the exe installers
> won't work. Giving these people a means to try numpy could introduce a
> wider audience to it.
>
> Having said that, I can understand the reluctance to have to deal with
> non-specialist users hitting obscure "your CPU is too old" errors -
> that's *not* a good initial experience.
>
> And your point that it's just as reasonable for pip to adopt a partial
> solution in the short term is also fair - although it would be harder
> for pip to replace an API we added and which people are using, than it
> would be for numpy to switch to deploying better wheels when the
> facilities become available. So the comparison isn't entirely equal.

There's also the fact that we're still trying to recover from the setup.py
situation (which was a "quick and easy" alternative to a declarative build
system), so quick hacks in the core metadata specs that will then be locked
in for years by backwards compatibility requirements are definitely *not*
acceptable. We already have more than enough of those in the legacy
metadata we're aiming to replace :P

All NumPy should need to reduce end user confusion to tolerable levels is
an import-time CPU check that raises an error including a link to a stable
URL explaining the limitations of the published wheel file and alternative
ways of obtaining NumPy (like Christophe's installers, a science & data
analysis focused distribution like Anaconda or EPD, or bootstrapping
conda).
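(Purely as an illustrative sketch, not anything NumPy actually ships: on
Windows such a guard could be a handful of lines querying kernel32's
IsProcessorFeaturePresent for SSE2 support at import time, with the
placeholder URL standing in for whatever stable page gets written up.)

    # Hypothetical import-time guard, for illustration only.
    import ctypes
    import sys

    # kernel32 constant: PF_XMMI64_INSTRUCTIONS_AVAILABLE (SSE2) == 10
    _PF_XMMI64_INSTRUCTIONS_AVAILABLE = 10

    def _check_sse2():
        # The published wheel concern in this thread is Windows-specific,
        # so only attempt the kernel32 query there.
        if not sys.platform.startswith("win"):
            return
        kernel32 = ctypes.windll.kernel32
        if not kernel32.IsProcessorFeaturePresent(
                _PF_XMMI64_INSTRUCTIONS_AVAILABLE):
            raise ImportError(
                "This NumPy wheel was built with SSE2 instructions, which "
                "this CPU does not support. See <stable URL> for other "
                "ways of obtaining NumPy."
            )

    _check_sse2()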

In return, as Paul points out, it becomes substantially easier for people
who *aren't* wholly invested in the scientific Python stack to try it out
with their regular tools, rather than having to completely change how they
work with Python.

Also consider that, under the status quo, any user who might otherwise see
that new error instead gets even *more* incomprehensible errors as pip
attempts to build NumPy from source and fails.

Given the current metadata standards, the choice isn't between confusing
Windows users and not confusing them: it's between confusing 100% of those
who try "pip install numpy" with cryptic errors from a failed build at
install time, and confusing a much smaller percentage of them with a CPU
compatibility error at runtime.

Is the latter a desirable *final* state? No, and metadata 2.0 will aim to
address that. It is, however, substantially better than the status quo and
doesn't run the risk of compromising interoperability standards we're going
to have to live with indefinitely.

Cheers,
Nick.

>
> Paul