[Distutils] Re: Archive this list & redirect conversation elsewhere?

2020-08-04 Thread Oscar Benjamin
On Tue, 4 Aug 2020 at 23:03, Brett Cannon  wrote:
> On Thu, Jul 30, 2020 at 8:41 AM Wes Turner  wrote:
>>
>> I confess that I don't even know how to subscribe to all threads of a 
>> discourse.
>>
>> - [ ] How to subscribe to all threads of discourse
>
> Go to the category you care about, e.g.
> https://discuss.python.org/c/packaging/14, and if you look on the right side
> next to "+ New Topic" you will see a bell. You can click that and choose at
> what level you want to follow new topics (only new threads, notification of
> all comments, direct notification of all comments, etc.).

What I haven't quite got my head around is: what exactly is the
"workflow" with discourse if you are a regular follower/contributor on
some forum?

Do people who use it a lot begin by going to the forum website?

Do they get the email notifications and interact via those?

I've been working with discourse in the latter mode and from that
perspective it seems inferior. If the expectation is that I have to
begin by going to the website then that changes my fundamental
approach. Right now I subscribe to many mailing lists and they all
route to an IMAP folder. When I feel like browsing them I can go in
and skim messages from a wide variety of mailing lists.

The other workflow seems to be that I begin by actively choosing to go
to the discourse forum website in order to look at messages in a
particular forum at that particular time. If that's the case then I
would inevitably end up following fewer mailing lists/forums, since
each one would require a momentary active decision from me to read that
particular list. I can imagine that this might reduce the wider
participation that is a big part of the purpose of these lists. Maybe
other people would be more likely to follow things that way, but I
certainly wouldn't.


Oscar
--
Distutils-SIG mailing list -- distutils-sig@python.org
To unsubscribe send an email to distutils-sig-le...@python.org
https://mail.python.org/mailman3/lists/distutils-sig.python.org/
Message archived at 
https://mail.python.org/archives/list/distutils-sig@python.org/message/N2LLGS23QSXPXZHPNPFNEUJ4D35Y4UPD/


Re: [Distutils] Ensuring source availability for PyPI entries / PEP: Build system abstraction for pip/conda etc

2016-02-10 Thread Oscar Benjamin
On 10 February 2016 at 12:21, M.-A. Lemburg  wrote:
>> So "easy to achieve" still needs someone to take the time to deal with
>> these sorts of issue. It's the usual process of the people willing to
>> put in the effort get to choose the direction (which is also why I
>> just provide feedback, and don't tend to offer my own proposals,
>> because I'm not able to commit that sort of time).
>
> Wait. You are missing the point that the setup.py interface
> already does work, so no extra effort is needed. All that's
> needed is some documentation of what's currently being used,
> so that other tools can support the interface going forward.

You can see an example of a minimal setup.py file here:

https://github.com/oscarbenjamin/setuppytest/blob/master/setuppytest/setup.py

I wrote that some time ago and don't know if it still works (that's
the problem with just having a de facto standard).

> At the moment, this interface is only defined by
> "what pip uses" and that's a moving target.

The setup.py interface is a terrible interface for tools like pip to
use and for tools like flit to emulate. Currently what pip does is to
invoke

$ python setup.py egg_info --egg-base $TEMPDIR

to get the metadata. It is not possible to get the metadata without
executing the setup.py which is problematic for many applications.
Providing a static pypa.json file is much better: tools can read a
static file to get the metadata.
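
To make that concrete, here is a rough sketch (not pip's actual code) of
what a tool has to do today to get the metadata from a conventional
setuptools project:

    import email.parser
    import glob
    import os
    import subprocess
    import sys
    import tempfile

    def setup_py_metadata(source_dir):
        # Runs arbitrary code from setup.py - exactly the problem
        # described above.
        tmp = tempfile.mkdtemp()
        subprocess.check_call(
            [sys.executable, 'setup.py', 'egg_info', '--egg-base', tmp],
            cwd=source_dir)
        # egg_info writes <name>.egg-info/PKG-INFO under the --egg-base dir.
        pkg_info = glob.glob(os.path.join(tmp, '*.egg-info', 'PKG-INFO'))[0]
        with open(pkg_info) as f:
            return email.parser.Parser().parse(f)

A static metadata file would replace all of that with a single file read
and no code execution.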

To install a distribution pip runs:

$ python setup.py install --record $RECORD_FILE \
--single-version-externally-managed

So the setup.py is entirely responsible not just for building but also
for installing everything. This makes it very difficult to develop a
system where different installer tools and different build tools can
cooperate to allow end users to specify installation options. It also
means that the installer has no direct control over where any of the
files are installed.

If you were designing this from scratch then there are some obvious
things that you would want to do differently here. The setup.py
interface also has so much legacy usage that it's difficult for
setuptools and pip to evolve. The idea with this proposal is to
decouple things by introducing a new interface with well defined and
sensible behaviour.

--
Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] [final version?] PEP 513 - A Platform Tag for Portable Linux Built Distributions

2016-01-30 Thread Oscar Benjamin
On 30 January 2016 at 08:58, Nick Coghlan  wrote:
>
> I applied both this iteration and the previous one to the PEPs repo in
> order to review it, so modulo caching issues, this latest draft is
> live now.
>
> I also think this version covers everything we need it to cover, so
> I'm going to mark it as Active and point to this post as the
> resolution :)

I had to see PEP 1 to understand what "Active" means but now I see
that it means that this PEP is approved but subject to indefinite
tinkering:
https://www.python.org/dev/peps/pep-0001/#pep-review-resolution

Brilliant, good work everyone! I'm looking forward to this.

So AFAICT the actions are:
1) PyPI allows uploading wheels with manylinux tag.
2) pip updated to recognise these wheels on the appropriate Linux systems.
3) Packagers make and upload wheels
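
As a side note for distributors: the PEP also specifies an opt-out for
platforms that know they aren't manylinux1-compatible, which pip checks
before installing these wheels. Roughly (see the PEP for the exact
mechanism):

    # _manylinux.py, shipped by a distro that is not manylinux1-compatible
    manylinux1_compatible = False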

Is there also a policy of informing python-dev once a PEP is approved
here? PEP 1 is ambiguous on that point but something like this really
needs to be known about more widely than this list.

--
Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] PEP 513: A Platform Tag for Portable Linux Built Distributions Version

2016-01-28 Thread Oscar Benjamin
On 28 January 2016 at 07:46, Nathaniel Smith  wrote:
>
> On further thought, I realized that it actually has to be in the
> standard library directory / namespace, and can't live in
> site-packages: for the correct semantics it needs to be inherited by
> virtualenvs; if it isn't then we'll see confusing rare problems. And
> virtualenvs inherit the stdlib, but not the site-packages (in
> general).

Surely virtualenv can be made to import this information from the
parent environment. It's already virtualenv's job to set up pip etc.
so that you're ready to install things from PyPI.

--
Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Trouble using setuptools to build separate c++ extension

2016-01-04 Thread Oscar Benjamin
On 20 December 2015 at 03:15, Thomas Nyberg  wrote:
> Hello I'm having trouble understanding the right way to build a c++ module
> using setuptools. I've been reading the docs, but I'm confused where I
> should be putting my build options. Everything builds fine on its own. I
> have my sources in src/ and my headers in include/.
>
> My first problem is that I'm having trouble figuring out where to put my
> build flags. Here is the Makefile I'm currently using:
>
> 
> srcs=$(wildcard *.cpp)
> srcs+=$(wildcard src/*.cpp)
> objs=$(patsubst %.cpp,%.o,$(srcs))
>
> cc=g++
> ccflags=-std=c++11 -g -O3 -fPIC
> includes=-I. -I./include/ -I/usr/include/python2.7/ -I/usr/include/boost
> libflags=-L. -L/usr/lib/x86_64-linux-gnu
> ldflags= -shared -Wl,--export-dynamic
>
> patent_computer_cpp.so: $(objs)
> $(cc) $(libflags) $(ldflags) $(objs) -o patent_computer_cpp.so
> -lboost_python -lpython2.7
>
> %.o:%.cpp
> $(cc) $(ccflags) $(includes) -c -o $@ $<
> 
>
> Unfortunately I can't post the sources, but they compile fine to produce the
> `patent_computer_cpp.so` file which can be imported as a module. Maybe I
> should also point out that I'm using boost-python (I don't think this is the
> issue though).
>
> I just can't figure out how to get setuptools.Extension to use these build
> flags. I've seen recommendations online saying that I should set CFLAGS as
> an environment variable and set OPT='' as an environment variable as well,
> but this just feels wrong given the simplicity of my setup. (Besides the
> shared object doesn't seem to compile correctly in this case.) I've tried
> using the extra_compile_args option in setup.py, but that fails.
>
> Is there a way to avoid setting environment variables like this or is this
> the accepted way to build this kind of software? Am I missing some obvious
> docs somewhere? Thanks for any help.

Hi Thomas,

Whether or not what you're currently doing is acceptable really
depends. Who needs to install this? Do you know which OS or compiler
they will use etc? Are you hoping that this will be installed by pip?

The setup.py is supposed to allow the end user a bit of freedom to use
different OS/compiler etc. and free up the module author from needing
to know exactly where the Python header files and link libraries will
be on each platform. OTOH if you are just writing for exactly one
platform then what you have may be more convenient and flexible.

I would approach this step-by-step to convert that to using setup.py.
So the first part is to tell Extension about all of the cpp files and
get it to use g++ to compile them. The next step is to tell it about
the additional include locations. The next step is the additional
libraries that need linking. There are arguments to Extension and
setup for each of these things. The last part is to get exactly the
right compiler flags.
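
As a very rough sketch of where that ends up - flags and paths copied
straight from your Makefile, untested, so treat it as a starting point
rather than a known-working build:

    from glob import glob
    from setuptools import setup, Extension

    ext = Extension(
        'patent_computer_cpp',
        sources=glob('*.cpp') + glob('src/*.cpp'),
        include_dirs=['.', 'include', '/usr/include/boost'],
        libraries=['boost_python'],
        # distutils adds -fPIC and the Python include/link flags itself
        extra_compile_args=['-std=c++11', '-g', '-O3'],
    )

    setup(name='patent_computer_cpp',
          version='0.1',
          ext_modules=[ext])

Then "python setup.py build_ext --inplace" should get you close to what
the Makefile produces.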

--
Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Installing packages using pip

2015-11-14 Thread Oscar Benjamin
On 14 Nov 2015 11:12, "Paul Moore"  wrote:
>
> On 13 November 2015 at 23:38, Nathaniel Smith  wrote:
> > But details of R's execution model make this easier to do.
>
> Indeed. I don't know how R works, but Python's module caching
> behaviour would mean this would be full of surprising and confusing
> corner cases ("I upgraded but I'm still getting the old version" being
> the simplest and most obvious one).
>
> > Maybe it could be supported for the special case of installing new
> > packages with no upgrades

Maybe it could prompt the user that the interpreter will need to be
restarted for the changes to take effect. IDLE runs the interactive
interpreter in a separate process so it could restart the subprocess
without closing the GUI (after prompting the user with a restart/continue
dialogue).

I'm not sure if the standard interpreter would be able to relaunch itself
but it could at least exit and tell the user to restart (after a yes/no
question in the terminal). The command could also be limited to when
the interpreter is in interactive mode.

How it works in the terminal is less important to me than how it works in
IDLE though; being able to teach how to use Python through IDLE (deferring
discussion of terminals etc) is useful for introductory programming classes.

--
Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] The future of invoking pip

2015-11-11 Thread Oscar Benjamin
On 11 November 2015 at 06:35, Nick Coghlan  wrote:
>
> Longer term, it may even make sense to take the "python" command on
> *nix systems in that direction, or, at the very least, make "py" a
> cross-platform invocation technique:
> https://mail.python.org/pipermail/linux-sig/2015-October/00.html

This would also be good. The inconsistency between Windows and
everything else is just annoying here. I've never been able to
recommend the use of py.exe even though most of my students are on
Windows and the lab machines are Windows because many of them use OSX
and a few use Linux.

Also although I can reliably assume that "python" is on PATH I can't
know what version it is since it is most likely 3.x on Windows and 2.x
on everything else which means that every script I give them has to be
2/3 compatible. With py.exe I could recommend "py -3" or I guess "py
somescript.py" would throw a helpful error if the shebang doesn't
match (which would be good).

--
Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] The future of invoking pip

2015-11-09 Thread Oscar Benjamin
On 9 November 2015 at 10:44, Wolfgang Maier  wrote:
>
> Something I miss in all the discussions taking place here is the fact that
> python -m pip is the officially documented way of invoking pip at
> https://docs.python.org/3/installing/index.html#basic-usage and it is not
> particularly helpful if that recommendation keeps changing back and forth.
>
> I know some people don't like the wordy invocation, but other people
> (including me) use and teach it because it works reliably. Just because a
> pip executable based invocation pattern looks better, I don't think it
> justifies the change.

I also teach this invocation. Somehow you have to select the Python
version you're interested in and I really don't see why

$ pip -p ./foo/bar/python ...

is better than

$ ./foo/bar/python -m pip ...

I already need to explain to students how to ensure that their Python
executable is on PATH. Needing pip to be on PATH as well is just
another possible source of confusion (even if there's only one Python
installation).

--
Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] warning about potential problem for wheels

2015-10-14 Thread Oscar Benjamin
On 14 Oct 2015 19:00, "Chris Barker"  wrote:
>
> On Wed, Oct 14, 2015 at 9:54 AM, Antoine Pitrou  wrote:
>>
>> > IS that the case:
>> > """
>> > Note that my recently retired computer was 64 bit and had SSE but didn't
>> > have SSE2 (I'm fairly sure - CPU was some budget AMD model)
>> > """
>> >
>> > granted, such machines are probably really really rare, but maybe it does
>> > matter for 64 bit, too?
>>
>> Unless I'm mistaken, SSE2 is part of the spec for x86-64 (spec which
>> was originally devised by AMD), so I'm a bit skeptical about a
>> SSE2-less 64-bit CPU.  Do you have any reference?
>
>
> That was a quote from this thread... I have no idea beyond that.

I wrote that but now I think about it Antoine is right. SSE2 is fully
supported on all x86-64 CPUs. I must have been confusing my old home
computer with my old work computer (which was 32 bit and ran XP). No way to
check now though...

The problem with SSE2 may go away soon but then we have the problem with
SSE3 and so on. Most of us have CPUs that support instruction sets beyond
those used in the lowest common denominator builds of Python provided in
the Windows binaries (and distro binaries etc). Likewise for extension
modules on PyPI.

Numpy's Windows installer bundles several BLAS binaries with different
levels of SSE and this was the initial reason for not providing Windows
wheels. The problem is being solved though by switching from ATLAS to
OpenBLAS which selects different levels of SSE at runtime.

Maybe that approach (runtime machine code selection) is the only way to
marry the needs of packaging with the desire to fully utilise CPU
capabilities. It keeps the packaging side simple at the expense of pushing
the complexity onto the project authors. Though it's probably only viable
for something like a BLAS library which would often contain a load of
hand-crafted assembly code anyway.
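
For illustration, a crude sketch of what runtime selection can look like
on Linux, reading the CPU flags from /proc/cpuinfo (libraries like
OpenBLAS do this in C via cpuid, and per kernel rather than per library):

    def cpu_flags():
        # Linux-only: parse the 'flags' line from /proc/cpuinfo
        with open('/proc/cpuinfo') as f:
            for line in f:
                if line.startswith('flags'):
                    return set(line.split(':', 1)[1].split())
        return set()

    flags = cpu_flags()
    if 'sse3' in flags:
        pass  # load the SSE3-optimised binary
    elif 'sse2' in flags:
        pass  # fall back to the SSE2 build
    else:
        pass  # plain x87 build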

--
Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] warning about potential problem for wheels

2015-10-11 Thread Oscar Benjamin
On Sun, 11 Oct 2015 17:44 Antoine Pitrou  wrote:

On Sun, 11 Oct 2015 08:07:30 -0700
Steve Dower  wrote:
>
> This does only affect 32-bit builds, so now I'm thinking about the
> possibility of treating those as highly compatible while the 64-bit
> ones get better performance treatment, though I'm not sure how that
> could actually play out. It may help remove some of the questions
> about which one to use though.

That sounds reasonable to me. I don't know Windows very much, but are
there still many people using 32-bit Windows these days (on x86, I
mean)?



I don't know but I think it makes sense to follow Windows' lead. So if 3.5
supports Vista and Vista doesn't require SSE2 then CPython shouldn't
either. If 3.6 or whatever drops support for Vista and if Windows 7
requires SSE2 then CPython can require it too. I assume this is what
happens with the OSX binaries.

Note that my recently retired computer was 64 bit and had SSE but didn't
have SSE2 (I'm fairly sure - CPU was some budget AMD model). Also after
SSE2 we have SSE3 etc and I've seen no indication that x86-64 manufacturers
are going to stop adding new instructions. So this general issue isn't
limited to 32 bit hardware and won't be solved by special casing that. I
think it makes sense to have a general policy for architectures that will
be supported by the official build in future.

--
Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] warning about potential problem for wheels

2015-10-11 Thread Oscar Benjamin
On Sun, 11 Oct 2015 15:31 Donald Stufft  wrote:

Will something built against 3.5.0 with SSE work on 3.5.1 without SSE? What
about the inverse?


It should be fine either way as long as the CPU can handle the particular
instructions used. X86 is backward compatible like that so unless the
compiler does something funny when the SSE option is enabled it should be
fine.

--
Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] warning about potential problem for wheels

2015-10-10 Thread Oscar Benjamin
On Sat, 10 Oct 2015 23:37 Laura Creighton  wrote:

In a message of Sat, 10 Oct 2015 21:52:58 -, Oscar Benjamin writes:

>Really this is just a case of an unsupported platform. It's unfortunate
>that CPython doesn't properly support this hardware but I think it's
>reasonable that if you have to build your interpreter from source then you
>have to build your extension modules as well.

Alas that there is no easy way to detect.  The situation I am
imagining is where the administrators of a school build pythons for
the students to run on their obsolete hardware, and then the poor
students don't understand why pip doesn't work.  But I suppose we
will just get to deal with that problem when and if it happens.



Does it sound plausible to you that a school would build their own Pythons?
I only know a few schools and I'd be very surprised if this happened at one
of them but I guess there's a lot of schools in the world...

The administrators at my daughter's school don't even understand how to put
text into an email let alone install compilers and build Python!

--
Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] warning about potential problem for wheels

2015-10-10 Thread Oscar Benjamin
On Sat, 10 Oct 2015 20:53 Laura Creighton  wrote:

(note, I currently don't have mail delivery on for distutils.  I could
change this, but right now I don't think I have a lot to contribute.
This is just a warning).

If you have old windows hardware, which does not support SSE2, and
windows 7, you can build your own python 3.5.  This will work. But
wheels won't, you will need to build them from source as well.
see: http://bugs.python.org/issue25361


This means that wheels could start failing.  It would be good if
the wheels themselves could detect this problem and protest in a
reasonable fashion, but I have no idea if this is possible.  In any
case, I thought you needed to know.



There is no way for wheels to do this. A wheel is just a zip file with a
standardised layout. Pip just extracts the zip, reads the metadata and
copies the files to the appropriate locations. The metadata has no way to
describe the fact that the wheel contains SSE2-dependent binaries. The
standard tools used to create wheels don't know anything about the contents
of the compiled binaries so they don't really have a way to detect that the
wheel depends on SSE2.
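
To see why, here's a sketch of everything it takes to read a wheel's
metadata (no code from the package runs at all):

    import zipfile

    def wheel_metadata(path):
        # A wheel is just a zip; metadata lives in <name>.dist-info/METADATA
        with zipfile.ZipFile(path) as zf:
            for name in zf.namelist():
                if name.endswith('.dist-info/METADATA'):
                    return zf.read(name).decode('utf-8')
        raise ValueError('no METADATA in %s' % path)

Nothing in that METADATA records which CPU instructions the compiled
modules inside require.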

Really this is just a case of an unsupported platform. It's unfortunate
that CPython doesn't properly support this hardware but I think it's
reasonable that if you have to build your interpreter from source then you
have to build your extension modules as well.

I'm not sure of a robust solution to detecting the problem at install time.
Extension module authors can only really guarantee that their Windows
binaries are compatible with standard released binaries. So if someone
builds their own interpreter using different compiler options then there's
no real way for pip or the extension module author to know if the binaries
will be compatible. So either pip rejects all binaries for a non-standard
interpreter build or it installs them and hopes for the best.

--
Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Towards a simple and standard sdist format that isn't intertwined with distutils

2015-10-09 Thread Oscar Benjamin
On Fri, 9 Oct 2015 19:35 Carl Meyer  wrote:

On 10/09/2015 12:28 PM, Oscar Benjamin wrote:
> Why would it need dynamic metadata for the windows matplotlib wheel to
> have different metadata from the OSX matplotlib wheel? The platform
> Windows/OSX is static and each wheel declares its own dependencies
> statically but differently. Am I missing something?

I didn't say that required dynamic metadata (wheel metadata is already
static). I just said that it works fine currently, and that it becomes
an open question with the move towards static metadata in both source
and binary releases, because we have to answer questions like "what
information beyond just package/version makes up a complete node in a
dependency graph."



Assuming it's tied to the operating system it doesn't matter surely. When
pip runs on Windows it can ignore dependencies that apply to other
platforms so I don't see how this case makes it more complex.
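
(For reference, a single wheel can also make a dependency conditional on
the platform with an environment marker in its metadata - sketching
roughly, with the hypothetical py_zlib from earlier in the thread:

    Requires-Dist: py_zlib; sys_platform == "win32"

though in the case above the two wheels can equally just carry different
static dependency lists.)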

--
Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Towards a simple and standard sdist format that isn't intertwined with distutils

2015-10-09 Thread Oscar Benjamin
On Fri, 9 Oct 2015 19:01 Carl Meyer  wrote:

On 10/09/2015 11:18 AM, Paul Moore wrote:
> On 9 October 2015 at 18:04, Chris Barker  wrote:
>> 1) what in the world is a "source wheel"? And how is it different than an
>> sdist (other than maybe in a different file format.
>
> A "source wheel" is the proposed name for a to-be-defined replacement
> for sdists. For now, you can think of "source wheel" and "sdist" as
> the same.
>
>> 2) Is it indeed "OK" with the current PEPs and tools for different binary
>> wheels to have different dependencies? This would be the example of, for
>> instance the Matplotlib binary wheel for Windows depends on a py_zlib,
>> whereas the binary wheel for OS-X relies on the the system lib, and
therefor
>> does not have that dependency?
>>  (and has anyone worked out the linking issues so that that would all
work
>> with virtualenv and friends...)
>
> It's not *currently* OK for different binary wheels to have different
> dependencies. At least I don't think it is. It's basically not
> something that as far as I'm aware anyone has ever considered an
> option up till now, and so it's quite likely that there are
> assumptions baked into the tools that would break if different builds
> of (a given version of) a package had different dependencies.

AFAIK this is actually just fine currently, it's just not considered
ideal for a hopeful future static-metadata world.



Why would it need dynamic metadata for the windows matplotlib wheel to have
different metadata from the OSX matplotlib wheel? The platform Windows/OSX
is static and each wheel declares its own dependencies statically but
differently. Am I missing something?

--
Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Towards a simple and standard sdist format that isn't intertwined with distutils

2015-10-08 Thread Oscar Benjamin
On 8 October 2015 at 14:34, Ionel Cristian Mărieș  wrote:
>
> On Thu, Oct 8, 2015 at 4:01 PM, Donald Stufft  wrote:
>>
>> One of the features in the original PEP was the ability to produce
>> multiple
>> different Wheels from the same source release much like how Debian does.
>> e.g.
>> numpy-1.0.newsdistthing could produce numpy-pyopenblas-12.6.whl and
>> numpy-mkl-7.8.whl, etc etc where there would be a bunch of names/versions
>> that
>> would differ from the name/version of the original sdist thing that was
>> being
>> proposed.
>
>
> Sorry if this sounds obtuse but isn't that useless overspecialization? They
> can just publish `numpy-mkl` and `numpy-thatblas` or whatever on PyPI, and
> that will even work better when it comes to dependencies.
> I mean, if you
> build something for `numpy-mkl` then it wouldn't work on a `numpy-otherblas`
> anyway right?

It depends. If you're using numpy from pure Python code the difference
between mkl and otherblas is probably irrelevant. So in most cases
you'd want to be able to depend just on "numpy" but in some cases
you'd need to be more specific. Perhaps you could solve that with
"provides"...

Really though it's probably best to keep the set of binaries on PyPI
internally consistent and not try to represent everything. My point
earlier was that regardless of what goes on PyPI as the official numpy
wheel there will be many people using the numpy code in other ways. If
pip is not the only consumer of a source release then it's not really
reasonable to dictate (and redesign in a less human-friendly way) its
layout purely for pip's benefit.

--
Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Towards a simple and standard sdist format that isn't intertwined with distutils

2015-10-08 Thread Oscar Benjamin
On 8 October 2015 at 13:05, Ionel Cristian Mărieș  wrote:
> On Thu, Oct 8, 2015 at 1:18 PM, Oscar Benjamin 
> wrote:
>>
>> I think this satisfies all of the requirements for static metadata and
>> one-to-one correspondence of source wheels and binary wheels. If numpy
>> followed this then I imagine that there would be a single source wheel
>> on PyPI corresponding to the one configuration that would be used
>> consistently there. However numpy still needs to separately release
>> the code in a form that is also usable in all of the many other
>> contexts that it is already used.
>
> Can't that configuration just be the build defaults? There would be a single
> source but with some preset build configuration. People with different needs
> can just override those.

Yeah, I guess so. Maybe I'm just not understanding what the
"one-to-one" correspondence is supposed to mean. Earlier in the thread
it was said to be important because of wheel caching etc. but if it's
possible to configure different builds then it's not really
one-to-one.

--
Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Towards a simple and standard sdist format that isn't intertwined with distutils

2015-10-08 Thread Oscar Benjamin
On 8 October 2015 at 12:46, Paul Moore  wrote:
> On 8 October 2015 at 11:18, Oscar Benjamin  wrote:
>>
>> A concrete example would be whether or not the numpy source wheel
>> depends on pyopenblas. Depending on how numpy is built the binary
>> wheel may or may not depend on pyopenblas. It doesn't make any sense
>> to say that the numpy source release depends on pyopenblas so what
>> should be the dependencies of the source wheel?
>
> Well, I said this previously but I don't have any objections to the
> idea that binary wheels have additional dependencies - so the source
> wheel doesn't depend on pyopenblas but the binary does.

Okay, I guess I'm confused by what you mean when you say that a source
wheel (or sdist) should have a "one-to-one" correspondence with a
binary wheel.

> But as I understand it, this is currently theoretical - there isn't
> yet any pyopenblas to validate these speculations against?

I don't think pyopenblas is ready yet but it is being developed with
specifically this scenario in mind.


> So unless I'm mistaken about what you're saying, I don't see any issue
> here. Unless you're saying that you're not willing to work under some
> of the constraints I describe above

As an aside : I'm not a contributor to numpy. I just use it a lot and
teach people how to use it (which is where the packaging problems come
in).

> - but in that case, you need pip's
> compatibility matching, dependency resolution, or automated wheel
> build processes to change. That's fine but to move the discussion
> forwards, we'd then need to understand (and agree with) whatever
> changes you need in pip. At the moment, I'm not aware that anyone has
> asked for substantive changes to pip's behaviour in these areas as
> part of this proposal.

I don't think anyone is suggesting significant changes to pip's
dependency resolution. Compatibility matching does need improvement
IMO. Also the automated build process does need to be changed -
specifically we need build-requires so that third party build tools
can work. I didn't think improving the build process was
controversial...


>> I think this satisfies all of the requirements for static metadata and
>> one-to-one correspondence of source wheels and binary wheels. If numpy
>> followed this then I imagine that there would be a single source wheel
>> on PyPI corresponding to the one configuration that would be used
>> consistently there. However numpy still needs to separately release
>> the code in a form that is also usable in all of the many other
>> contexts that it is already used. IOW they will need to continue to
>> issue source releases in more or less the same form as today. It makes
>> sense for PyPI to host the source release archives on the project page
>> even if pip will simply ignore them.
>
> So you're talking about numpy only supporting one configuration via
> PyPI, and expecting any other configurations to be made available only
> via other channels? I guess you could do that, but I hope you won't.
> It feels to me like giving up before we've properly tried to
> understand the issues.

Okay so again I'm not a numpy dev.

Numpy already supports being used in lots of setups that are not via
PyPI. Apart from Christoph's builds you have all kinds of people
building on all kinds of OSes and linking with different BLAS
libraries in different ways. Some people will compile numpy statically
with CPython. If you follow the discussions about numpy development
it's clear that the numpy devs don't know all of the ways that numpy
is built and used.

Clearly pip/PyPI cannot be used to statically link numpy with CPython
or for all of the different (often non-redistributable) BLAS libraries
so numpy will support some setups that are not possible through pip.
That's fine, I don't see the problem with that. At the moment an sdist
is the same thing is a source release. If you propose to change it so
that projects should upload source wheels and then make source wheels
something tightly defined (e.g. a zip file containing exactly two
directories setup to build for one particular configuration) then
there needs to be a separate way to simply release the code in
traditional format as is done now.

--
Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Towards a simple and standard sdist format that isn't intertwined with distutils

2015-10-08 Thread Oscar Benjamin
On 7 October 2015 at 22:41, Paul Moore  wrote:
> On 7 October 2015 at 22:28, Nathaniel Smith  wrote:
>> Maybe I have misunderstood: does it actually help pip at all to have
>> static access to name and version, but not to anything else? I've been
>> assuming not, but I don't think anyone's pointed to any examples yet
>> of the problems that pip is encountering due to the lack of static
>> metadata -- would this actually be enough to solve them?
>
> The principle I am working on is that *all* metadata in a source wheel
> should be statically available - that's not just for pip, but for all
> other consumers, including distro packagers. What's not set in stone
> is precisely what (subsets of) metadata are appropriate for source
> wheels as opposed to (binary) wheels.

A concrete example would be whether or not the numpy source wheel
depends on pyopenblas. Depending on how numpy is built the binary
wheel may or may not depend on pyopenblas. It doesn't make any sense
to say that the numpy source release depends on pyopenblas so what
should be the dependencies of the source wheel?

One possibility which I think is what Nathaniel is getting at is that
there is a source release and then that could be used to generate
different possible source wheels each of which would correspond to a
particular configuration of numpy. Each source wheel would correspond
to one binary wheel and have all static metadata but there still needs
to be a separate source release that is used to generate the different
source wheels.

The step that turns a source wheel into a binary wheel would be
analogous to the ./configure step in a typical makefile project.
./configure is used to specify the options corresponding to all the
different ways of compiling and installing the project. After running
./configure the command "make" is unparametrised and performs the
actual compilation: this step is analogous to converting a source
wheel to a binary wheel.

I think this satisfies all of the requirements for static metadata and
one-to-one correspondence of source wheels and binary wheels. If numpy
followed this then I imagine that there would be a single source wheel
on PyPI corresponding to the one configuration that would be used
consistently there. However numpy still needs to separately release
the code in a form that is also usable in all of the many other
contexts that it is already used. IOW they will need to continue to
issue source releases in more or less the same form as today. It makes
sense for PyPI to host the source release archives on the project page
even if pip will simply ignore them.

--
Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Towards a simple and standard sdist format that isn't intertwined with distutils

2015-10-07 Thread Oscar Benjamin
On Wed, 7 Oct 2015 19:42 Donald Stufft  wrote:

On October 7, 2015 at 2:31:03 PM, Oscar Benjamin (oscar.j.benja...@gmail.com) wrote:
> >
> Your idea of an sdist as something that has fully static build/runtime
> dependency metadata and a one to one correspondence with binary
> wheels is not a usable format when releasing the code for e.g.
> numpy 1.10. It's fine to say that pip/PyPI should work with the
> source in some other distribution format and numpy could produce
> that but it means that the standard tarball release needs to be
> supported somehow separately. Numpy should be able to use PyPI
> in order to host the tarball even if pip ignores the file.
>
>
> If numpy released only source wheels then there would be more
> than one source wheel for each release corresponding to e.g.
> the different ways that numpy is linked. There still needs to
> be a way to release a single file representing the code for the
> release as a whole.
>

Can you expand on this please? I've never used numpy for anything serious
and I'm trying to figure out why and what parts of what I'm thinking of
wouldn't work for it.



Currently I can take the code from the numpy release and compile it in
different incompatible ways. For example I could make a wheel that bundles
a BLAS library. Or I could make a wheel that expects to use a system BLAS
library that should be installed separately somehow or I could build a
wheel against pyopenblas and make a wheel that depends on pyopenblas. Or I
could link a BLAS library statically into numpy.

A numpy release supports being compiled and linked in many different ways
and will continue to do so regardless of any decisions made by PYPA. What
that means is that there is not a one to one correspondence between a numpy
release and a binary wheel. If there must be a one to one correspondence
between a source wheel and a binary wheel then it follows that there cannot
be a one to one correspondence between the source release and a source
wheel.

Of course numpy could say that they will only upload one particular source
wheel and binary wheel to PyPI but people need to be able to use the source
release in many different ways. So only releasing a source wheel that maps
one to one to a particular way of compiling numpy is not an acceptable way
for numpy to release its code.

--
Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Towards a simple and standard sdist format that isn't intertwined with distutils

2015-10-07 Thread Oscar Benjamin
On Wed, 7 Oct 2015 18:51 Donald Stufft  wrote:


On October 7, 2015 at 1:27:31 PM, Nathaniel Smith (n...@pobox.com) wrote:
> On Mon, Oct 5, 2015 at 6:51 AM, Donald Stufft wrote:
> [...]
> > I also don't think it will be confusing. They'll associate the VCS
> > thing (a source release) as something focused on development for most
> > everyone. Most people won't explicitly make one and nobody will be
> > uploading it to PyPI. The end goal in my mind is someone produces a
> > source wheel and uploads that to PyPI and PyPI takes it from there.
> > Mucking around with manually producing binary wheels or producing
> > source releases other than what's checked into vcs will be something
> > that I suspect only advanced users will do.
>
> Of course people will make source releases, and should be able to
> upload them to PyPI. The end goal is that *pip* will not use source
> releases, but PyPI is not just there for pip. If it was, it wouldn't
> even show package descriptions :-).
>
> There are projects on PyPI right now, today, that have no way to
> generate sdists and will never have any need for "source wheels"
> (because they don't use distutils and they build "none-any" wheels
> directly from their source). It should still be possible for them to
> upload source releases for all the other reasons that having source
> releases is useful: they form a permanent record of the whole project
> state (including potentially docs, tests, working notes, etc. that
> don't make it into the wheels), human users may well want to download
> those archives, Debian may prefer to use that as their orig.tar.gz,
> etc. etc.
>
> And on the other end of the complexity scale, there are projects like
> numpy where it's not clear to me whether they'll ever be able to
> support "source wheels", and even if they do they'll still need source
> releases to support user configuration at build time.

We must have different ideas of what a source release vs source wheel would
look like, because I'm having a hard time squaring what you've said here
with what it looks like in my head. In my head, source releases (outside of
the VCS use case) will be rare and only for very complex packages that are
doing very complex things. Source wheels will be something that will be
semi-mandatory to being a well behaved citizen (for Debian and such to
download) and binary wheels will be something that you'll want to have but
aren't required. I don't see any reason why source wheels wouldn't include
docs, tests, and other misc files.

I picture building a binary wheel directly being something similar to using
fpm to build binary .deb packages directly, totally possible but unadvised.

Having talked to folks who deal with Debian/Fedora packages, they won't
accept a binary wheel as the input source and (given how I explained it to
them) they are excited about the concept of source wheels and moving away
from dynamic metadata and towards static metadata.


Your idea of an sdist as something that has fully static build/runtime
dependency metadata and a one to one correspondence with binary wheels is
not a usable format when releasing the code for e.g. numpy 1.10. It's fine
to say that pip/PyPI should work with the source in some other distribution
format and numpy could produce that but it means that the standard tarball
release needs to be supported somehow separately. Numpy should be able to
use PyPI in order to host the tarball even if pip ignores the file.

If numpy released only source wheels then there would be more than one
source wheel for each release corresponding to e.g. the different ways that
numpy is linked. There still needs to be a way to release a single file
representing the code for the release as a whole.

--
Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] (no subject)

2015-08-18 Thread Oscar Benjamin
Your extension module is called _spam in your setup.py so the function
should be init_spam.

On Tue, 18 Aug 2015 22:50 garyr  wrote:

> Not according to the Python documentation.
>
> See the Python documentation Extending and Embedding the Python Interpreter
> / Extending Python with C or C++ /
> A Simple Example:
>
> The method table must be passed to the interpreter in the module’s
> initialization function. The initialization function must be named
> initname(),
> where name is the name of the module, and should be the only non-static
> item
> defined in the module file:
>
> PyMODINIT_FUNC
> initspam(void)
> {
>     (void) Py_InitModule("spam", SpamMethods);
> }
>
> I tried doing that and it crashed Python when I imported _spam
>
> - Original Message -
> From: "Oscar Benjamin" 
> To: "garyr" ; 
> Sent: Tuesday, August 18, 2015 12:51 PM
> Subject: Re: [Distutils] (no subject)
>
>
> > Should the function be called init_spam rather than initspam?
> >
> >
> > On Tue, 18 Aug 2015 19:19 garyr  wrote:
> >
> > I posted this on comp.lang.python but received no replies.
> >
> > I tried building the spammodule.c example described in the documentation
> > section "Extending Python with C or C++." As shown the code compiles OK
> but
> > generates a link error:
> >
> > LINK : error LNK2001: unresolved external symbol init_spam
> > build\temp.win32-2.7\Release\_spam.lib : fatal error LNK1120: 1
> unresolved
> > externals
> >
> > I tried changing the name of the initialization function spam_system to
> > init_spam and removed the static declaration. This compiled and linked
> > without errors but generated a system error when _spam was imported.
> >
> > I'm using Python 2.7.9. The same error occurs with Python 2.6.9.
> >
> > The code and the setup.py file are shown below. What do I need to do to
> fix
> > this?
> >
> > setup.py:
> >
> -
> > from setuptools import setup, Extension
> >
> > setup(name='spam',
> >       version='0.1',
> >       description='test module',
> >       ext_modules=[Extension('_spam', ['spammodule.c'],
> >           include_dirs=[r'C:\Documents and Settings\Owner\Miniconda\include'],
> >       )],
> >       )
> >
> > spammodule.c
> > --
> > #include <Python.h>
> > static PyObject *SpamError;
> >
> > static PyObject *
> > spam_system(PyObject *self, PyObject *args)
> > {
> >    const char *command;
> >    int sts;
> >
> >    if (!PyArg_ParseTuple(args, "s", &command))
> >        return NULL;
> >    sts = system(command);
> >    if (sts < 0) {
> >        PyErr_SetString(SpamError, "System command failed");
> >        return NULL;
> >    }
> >    return PyLong_FromLong(sts);
> > }
> >
> > static PyMethodDef SpamMethods[] = {
> >
> >    {"system",  spam_system, METH_VARARGS,
> >     "Execute a shell command."},
> >    {NULL, NULL, 0, NULL}    /* Sentinel */
> > };
> >
> > PyMODINIT_FUNC
> > initspam(void)
> > {
> >    PyObject *m;
> >
> >    m = Py_InitModule("spam", SpamMethods);
> >    if (m == NULL)
> >        return;
> >
> >    SpamError = PyErr_NewException("spam.error", NULL, NULL);
> >    Py_INCREF(SpamError);
> >    PyModule_AddObject(m, "error", SpamError);
> > }
> >
> >
> >
> >
> >
> >
> >
> > ___
> > Distutils-SIG maillist  -  Distutils-SIG@python.org
> > https://mail.python.org/mailman/listinfo/distutils-sig
> >
>
>
>
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] (no subject)

2015-08-18 Thread Oscar Benjamin
Should the function be called init_spam rather than initspam?


On Tue, 18 Aug 2015 19:19 garyr  wrote:

I posted this on comp.lang.python but received no replies.

I tried building the spammodule.c example described in the documentation
section "Extending Python with C or C++." As shown the code compiles OK but
generates a link error:

LINK : error LNK2001: unresolved external symbol init_spam
build\temp.win32-2.7\Release\_spam.lib : fatal error LNK1120: 1 unresolved
externals

I tried changing the name of the initialization function spam_system to
init_spam and removed the static declaration. This compiled and linked
without errors but generated a system error when _spam was imported.

I'm using Python 2.7.9. The same error occurs with Python 2.6.9.

The code and the setup.py file are shown below. What do I need to do to fix
this?

setup.py:
-
from setuptools import setup, Extension

setup(name='spam',
      version='0.1',
      description='test module',
      ext_modules=[Extension('_spam', ['spammodule.c'],
          include_dirs=[r'C:\Documents and Settings\Owner\Miniconda\include'],
      )],
      )

spammodule.c
--
#include <Python.h>
static PyObject *SpamError;

static PyObject *
spam_system(PyObject *self, PyObject *args)
{
    const char *command;
    int sts;

    if (!PyArg_ParseTuple(args, "s", &command))
        return NULL;
    sts = system(command);
    if (sts < 0) {
        PyErr_SetString(SpamError, "System command failed");
        return NULL;
    }
    return PyLong_FromLong(sts);
}

static PyMethodDef SpamMethods[] = {

    {"system",  spam_system, METH_VARARGS,
     "Execute a shell command."},
    {NULL, NULL, 0, NULL}    /* Sentinel */
};

PyMODINIT_FUNC
initspam(void)
{
    PyObject *m;

    m = Py_InitModule("spam", SpamMethods);
    if (m == NULL)
        return;

    SpamError = PyErr_NewException("spam.error", NULL, NULL);
    Py_INCREF(SpamError);
    PyModule_AddObject(m, "error", SpamError);
}







___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Working toward Linux wheel support

2015-07-28 Thread Oscar Benjamin
On Fri, 24 Jul 2015 at 19:53 Chris Barker  wrote:

> On Tue, Jul 21, 2015 at 9:38 AM, Oscar Benjamin <
> oscar.j.benja...@gmail.com> wrote:
>
>>
>> I think it would be great to just package these up as wheels and put them
>> on PyPI.
>>
>
> that's the point -- there is no way with the current spec to specify a
> wheel dependency as opposed to a package dependency. i.e. this particular
> binary numpy wheel depends on this other wheel, whereas the numpy source
> package does not have that dependency -- and, indeed, a wheel for one
> platform may have different dependencies than other platforms.
>

I thought it was possible to do this with wheels. It's already possible to
have wheels or sdists whose dependencies vary by platform.

The BLAS dependency is different. In particular the sdist is compatible
with more cases than a wheel would be so the built wheel would have a more
precise requirement than the sdist. Is that not possible with
pip/wheels/PyPI or is that a limitation of using setuptools to build the
wheel?


>> So numpy could depend on "blas" and there could be a few different
>> distributions on PyPI that provide "blas" representing the different
>> underlying libraries. If I want to install numpy with a particular one I
>> can just do:
>>
>> pip install gotoblas  # Installs the BLAS library within Python dirs
>> pip install numpy
>>
>
> well, different implementations of BLAS are theoretically ABI compatible,
> but as I understand it, it's not actually that simple, so this is
> particularly challenging.
>

> But if it were, this would be a particular trick, because then that numpy
> wheel would depend on _some_ BLAS wheel, but there may be more than one
> option -- how would you express that
>

I imagined having numpy Require "blas OR openblas". Then openblas package
Provides "blas". Any other BLAS library also provides "blas". If you do
"pip install numpy" and "blas" is already provided then the numpy wheel
installs fine. Otherwise it falls back to installing openblas.
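
Sketching that in metadata terms (purely illustrative - pip doesn't
currently implement the OR-style fallback described above), the openblas
distribution would declare:

    Provides-Dist: blas

and the numpy wheel would declare:

    Requires-Dist: blas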

Potentially "blas" is not specific enough so the label could be
"blas-gfortran" to express the ABI.

--
Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Working toward Linux wheel support

2015-07-21 Thread Oscar Benjamin
On Fri, 17 Jul 2015 at 16:37 Chris Barker  wrote:

> TL;DR -- pip+wheel needs to address the non-python dependency issue before
> it can be a full solution for Linux (or anything else, really)
>
> 

>  - Packages with semi-standard dependencies: can we expect ANY Linux
> distro to have libfreetype, libpng, libz, libjpeg, etc? probably, but maybe
> not installed (would a headless server have libfreetype?). And would those
> versions be all compatible (probably if you specified a distro version)
>  - Packages with non-standard non-python dependencies: libhdf5, lapack,
> BLAS, fortran(!)
>

I think it would be great to just package these up as wheels and put them
on PyPI. I'd really like to be able to (easily) have different BLAS
libraries on a per-virtualenv basis.

So numpy could depend on "blas" and there could be a few different
distributions on PyPI that provide "blas" representing the different
underlying libraries. If I want to install numpy with a particular one I
can just do:

pip install gotoblas  # Installs the BLAS library within Python dirs
pip install numpy

You could have a BLAS distribution that is just a shim for a system BLAS
that was installed some other way.

pip install --install-option='--blaslib=/usr/lib/libblas' systemblas
pip install numpy

That would give linux distros a way to provide the BLAS library that
python/pip understands without everything being statically linked and
without pip needing to understand the distro package manager. Also python
packages that want BLAS can use the Python import system to locate the BLAS
library making it particularly simple for them and allowing distros to move
things around as desired.
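
To sketch what such a shim might look like (names are hypothetical; the
path would be recorded by the shim's own setup.py at install time):

    # systemblas/__init__.py - a hypothetical shim package
    import os

    # Recorded at install time, e.g. from --install-option='--blaslib=...'
    BLAS_PATH = '/usr/lib/libblas.so'

    def get_library_path():
        # Return the path of the BLAS shared library to link/load against.
        return BLAS_PATH

Packages needing BLAS would then just "import systemblas" and ask it
where the library is, instead of hard-coding platform-specific paths.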

I would like it if this were possible even without wheels. I'd be happy
just that the commands to download a BLAS library, compile it, install it
non-globally, and configure numpy to use it would be that simple. If it
worked with wheels then that'd be a massive win.

--
Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Making pip and PyPI work with conda packages

2015-05-19 Thread Oscar Benjamin
On 19 May 2015 at 10:55, Paul Moore  wrote:
>>
>> But python, setuptools, pip, wheel, etc. don't have a way to handle that
>> shared lib as a dependency -- no standard way where to put it, no way to
>> package it as a wheel, etc.
>>
>> So the way to deal with this with wheels is to statically link everything.
>> But that's not how conda packages are built, so no way to leverage conda
>> here.
>
> Thanks for the explanation. So, in effect, conda-as-a-platform defines
> a (somewhat) incompatible platform for running Python, which can use
> wheels just as python.org Python can, but which uses
> conda-as-an-installer as its package manager (much like RPM or apt on
> Unix).
>
> The downside of this is that wheels built for conda (assuming that
> it's OK to link with shared libs) are not compatible with python.org
> builds (as those shared libs aren't available) and that difference
> isn't reflected in the wheel ABI tags (and it's not particularly
> clearly understood by the community, it seems). So publishing
> conda-based wheels on PyPI would be a bad idea, because they wouldn't
> work with python.org python (more precisely, only things that depend
> on shared libs are affected, but the point remains).

I've been peripherally following this thread so I may be missing the
point but it seems to me that Python already has a mature and flexible
way of locating and loading shared libs through the module/import
system. Surely the best way to manage non-Python shared libs is by
exposing them as extension modules which can be packaged up on PyPI.
Then you have dependency resolution for pip, you don't need to worry
about the OS-specific shared library loading details and ABI
information can be stored as metadata in the module. It would even be
possible to load multiple versions or ABIs of the same library as
differently named Python modules IIUC.

As a case in point numpy packages up a load of C code and wraps a
BLAS/Lapack library. Many other extension modules are written which
can all take advantage of the non-Python shared libraries that embody
numpy via its C API.

Is there some reason that this is not considered a good solution?


--
Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Help required for setup.py

2015-05-19 Thread Oscar Benjamin
On 19 May 2015 at 03:24, salil GK  wrote:
>
> I was trying to create my package for distribution. I have a requirement
> that I need to check if one particular command is available in the system
> (this command is not installed through a package - but a bundle is installed
> to get the command in the system). I am using Ubuntu 14.04

Hi Salil, what is it that you actually want help with?

--
Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Deferring metadata hooks

2014-03-02 Thread Oscar Benjamin
On 2 March 2014 21:05, Nick Coghlan  wrote:
>> >
>> > I think this approach may also encourage a design where projects do
>> > something sensible *by default* (e.g. NumPy defaulting to SSE2) and
>> > then use the (not yet defined) post-installation hooks to potentially
>> > *change away* from the default to something more optimised for that
>> > particular system (e.g. NumPy overwriting itself with an SSE3
>> > version), while still *allowing* developers to refuse to let the
>> > software install if the metadata hooks won't be run.
>>
>> I'm not sure but there does seem to be some discussion and movement
>> toward the idea of numpy distributing openblas binaries (which would
>> solve the SSE problem). See the threads starting here for more:
>> http://mail.scipy.org/pipermail/numpy-discussion/2014-February/069186.html
>> http://mail.scipy.org/pipermail/numpy-discussion/2014-February/069106.html
>>
>> (Note that shipping openblas binaries does not solve the ABI mismatch
>> problems that compatibility tags could address).
>
> So long as NumPy defines and publishes an extension with the relevant
> details in its metadata, the metadata constraints extension would eventually
> be able to automate consistency checks.
>
> However, I'm starting to think you may be right and it will be worth having
> that defined from the beginning, specifically to help ensure we keep the
> NumPy dependent wheels on PyPI consistent with each other.

I expect that those involved in distributing wheels for the scipy
stack would coordinate and quickly converge on a consistent set of
wheels for Windows/OSX on PyPI so I doubt that it would be an issue in
that sense.

Where it is an issue is for people who install different pieces from
different places i.e. mixing source builds, .exe installers, wheels
from PyPI, and perhaps even conda packages and wheels from other
places. If a mechanism was provided to prevent broken installs and
give helpful error messages I'm sure it would be taken advantage of.

If it were also possible to upload and select between multiple
variants of a distribution then that might lower the bar for numpy to
distribute e.g. openblas wheels. A user who was unhappy with openblas
could just as easily install an alternative. (Changing BLAS library is
a big deal for numpy).


Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Deferring metadata hooks

2014-03-02 Thread Oscar Benjamin
On 2 March 2014 07:25, Nick Coghlan  wrote:
>
> I've just posted updated versions of PEP 426 and 459 that defer the
> "metadata hooks" feature. The design and behaviour of that extension
> is still way too speculative for me to approve in its current form,
> but I also don't want to hold up the rest of the changes in metadata
> 2.0 while we thrash out the details of a hook system.

The other idea that was discussed a few times but hasn't made it into
PEP 426 is the idea of compatibility tags. These are mentioned in the
(deferred) metabuild section:
http://legacy.python.org/dev/peps/pep-0426/#metabuild-system
but nowhere else in the PEP.

I certainly understand the desire to defer dealing with something as
complex as hooks but simple string compatibility tags are a much
simpler thing to include in the metadata and could be very useful. I'm
thinking of a situation where you can indicate things like ABI
compatibility for C/Fortran compiled code (e.g. libc, gfortran vs g77)
but there could easily be many other uses once wheel takes off.
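
To show how simple such tags could be to consume, here is a sketch (the tag
names below are invented for illustration and are not part of any current
PEP; an installer would just compare strings for equality):

# hypothetical compatibility tags declared by a wheel and by the environment
wheel_tags = {"fortran_abi": "g77", "libc": "glibc"}
env_tags = {"fortran_abi": "gfortran", "libc": "glibc"}

conflicts = [key for key in wheel_tags
             if key in env_tags and wheel_tags[key] != env_tags[key]]
if conflicts:
    print("refusing to install; incompatible tags: " + ", ".join(conflicts))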

> That said, I still don't want us to get into a situation where someone
> later publishes a wheel file that expects metadata hook support and
> older tools silently install it without running the hooks.
>
> Accordingly, the revised PEP 426 adds a single simpler feature to the
> extensions system: the idea of a "required extension".
>
> If a project sets that flag for an extension (by including
> "required_extension": true in the extension metadata), and an
> installation tool doesn't understand it, then the tool is required to
> either fail the installation attempt entirely or else fall back to
> installing from source.
>
> That way, project authors will be able to distinguish between "these
> metadata hooks are just an optimisation, things will still work if you
> don't run them" and "if you don't run these hooks, your installation
> will be broken".

Is there some need for metadata extensions to be optional by default?

> I think this approach may also encourage a design where projects do
> something sensible *by default* (e.g. NumPy defaulting to SSE2) and
> then use the (not yet defined) post-installation hooks to potentially
> *change away* from the default to something more optimised for that
> particular system (e.g. NumPy overwriting itself with an SSE3
> version), while still *allowing* developers to refuse to let the
> software install if the metadata hooks won't be run.

I'm not sure but there does seem to be some discussion and movement
toward the idea of numpy distributing openblas binaries (which would
solve the SSE problem). See the threads starting here for more:
http://mail.scipy.org/pipermail/numpy-discussion/2014-February/069186.html
http://mail.scipy.org/pipermail/numpy-discussion/2014-February/069106.html

(Note that shipping openblas binaries does not solve the ABI mismatch
problems that compatibility tags could address).


Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Cross-platform way to get default directory for binary files like console scripts?

2014-02-21 Thread Oscar Benjamin
On 21 February 2014 13:24, Paul Moore  wrote:
>>
>> Is there cross-platform way to get default directory for binary files
>> (console scripts for instance) the same way one can use sys.executable
>> to get path to the Python's interpreter in cross-platform way?
>
> sysconfig.get_path("scripts") should work. If you're on a Python too
> old to have sysconfig then sorry, I've no idea (other than "you should
> upgrade" :-))

Ah, well that's better.

One question though: are these guaranteed to be consistent? I was
pointing at the actual code that distutils uses when installing whereas
you're pointing at a module that independently lists an
overlapping set of data:
http://hg.python.org/cpython/file/005d0678f93c/Lib/sysconfig.py#l21
http://hg.python.org/cpython/file/005d0678f93c/Lib/distutils/command/install.py#l28

For example sysconfig defines a scheme 'osx_framework_user' that
doesn't appear in distutils.command.install and uses slightly
different paths from posix_user. Does that mean that it is
inconsistent with what distutils would do?
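
Both sets of schemes can be inspected directly from the standard library,
which makes the overlap (and the gaps) easy to check - a quick look,
nothing more:

import sysconfig
from distutils.command.install import INSTALL_SCHEMES

print(sorted(sysconfig.get_scheme_names()))  # 'osx_framework_user' is here
print(sorted(INSTALL_SCHEMES))               # ...but not in distutils' dict
print(sysconfig.get_path("scripts", "posix_user"))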


Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] pip on windows experience

2014-01-25 Thread Oscar Benjamin
On 24 January 2014 10:18, Nick Coghlan  wrote:
>
> On 24 Jan 2014 19:41, "Paul Moore"  wrote:
>>
>> On 24 January 2014 00:17, Oscar Benjamin 
>> wrote:
>> > You need to bear in mind that people currently have a variety of ways
>> > to install numpy on Windows that do work already without limitations
>> > on CPU instruction set. Most numpy users will not get any immediate
>> > benefit from the fact that "it works using pip" rather than "it works
>> > using the .exe installer" (or any of a number of other options). It's
>> > the unfortunate end users and the numpy folks who would have to pick
>> > up the pieces if/when the SSE2 assumption fails.
>>
>> The people who would benefit are those who (like me!) don't have a
>> core requirement for numpy, but who just want to "try it out"
>> casually, or for experimenting or one-off specialised scripts. These
>> are the people who won't be using one of the curated distributions,
>> and quite possibly will be using a virtualenv, so the exe installers
>> won't work. Giving these people a means to try numpy could introduce a
>> wider audience to it.
>>
>> Having said that, I can understand the reluctance to have to deal with
>> non-specialist users hitting obscure "your CPU is too old" errors -
>> that's *not* a good initial experience.
>>
>> And your point that it's just as reasonable for pip to adopt a partial
>> solution in the short term is also fair - although it would be harder
>> for pip to replace an API we added and which people are using, than it
>> would be for numpy to switch to deploying better wheels when the
>> facilities become available. So the comparison isn't entirely equal.
>
> There's also the fact that we're still trying to recover from the setup.py
> situation (which was a "quick and easy" alternative to a declarative build
> system), so quick hacks in the core metadata specs that will then be locked
> in for years by backwards compatibility requirements are definitely *not*
> acceptable. We already have more than enough of those in the legacy metadata
> we're aiming to replace :P

It wasn't a totally serious suggestion: I knew what your response would be. ;)

I'll try to summarise your take on this: You would like to take the
time to ensure that Python packaging is done properly. That may mean
that some functionality isn't available for some time, but you think
that it's better to "get it right" than rush something out the door
just to "get it working fast".

That's not an unreasonable position to take but I wanted to contrast
that with your advice to numpy: Just rush something out of the door
even if it has obvious problems. Don't worry about getting it right;
we'll do that later...

We all want a solution that definitely works so that you can advise
any old noob to use it. So if you could say 'just use pip' then that
would be great. But if you say...
"""
'just use pip...

unless your CPU doesn't support SSE2. Don't worry if you've never
heard of SSE2 just do 'pip install numpy' and then 'python -c "import
numpy"'. If you see an error like "your CPU doesn't support SSE2
please install the non-SSE version of numpy." then you'll need to
install numpy using one of the other options listed below and make
sure that you do that before trying to use pip to install any of these
other packages, and if you use Christoph's .exe for numpy then you
can't use pip for scipy and some other set of packages (I'm not
totally sure which) so you shouldn't use pip for anything after that.
Unless it's a pure Python package. Don't worry if you don't know what
a pure Python package is, just try it with pip and if it doesn't work
just try something else...
"""
... then putting the wheel on PyPI becomes substantially less
attractive. Just having to explain that pip might not work and then
trying to describe when it will and won't and what to do about it is a
pain. I wouldn't want to recommend to my students that they do this
unless I was confident that it would work.

Also, note that I don't really think a post-install script is the best
solution for something like this. It would be better to have an
extensible system for querying things like CPU capability. It would
also be better to have an extensible system for detecting things like
Fortran ABI compatibility - this can also be handled with a
post-install script but it's not the best solution. Are there any
plans to solve these problems? Also is there a roadmap describing the
expected timeline for future packaging features?


Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] pip on windows experience

2014-01-25 Thread Oscar Benjamin
On 24 January 2014 22:40, Paul Moore  wrote:
> On 24 January 2014 22:21, Chris Barker  wrote:
>> well, numpy _should_ build out of the box with nothing special if you are
>> set up to build regular extensions. I understand that a lto f Windows users
>> are not set up to build extensions at all, but tehy ar presumably used to
>> getting "compiler not found" errors (or whatever the message is). But you
>> won't get an optimized numpy and much of the rest of the "stack" is harder
>> to build: scipy, matplotlib.
>
> Seriously? If I have MSVC 2010 installed, pip install numpy will
> correctly build numpy from source? It's a *long* time since I tried
> this, but I really thought building numpy was harder than that.
>
> A quick test later:
> No BLAS/ATLAS/LAPACK causes a string of warnings, and ignoring the
> rest of the error stack (which I'm frankly not interested in investing
> the time to diagnose and fix) I get "RuntimeError: Broken toolchain:
> cannot link a simple C program". Which is utter rubbish - I routinely
> build extensions with this installation.
>
> So no, numpy does not build out of the box. Ah well.

Last time I tried with mingw it worked (I've since departed the
Windows world). I think official numpy binaries for Windows are built
with mingw (Christoph uses MSVC though).


Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] pip on windows experience

2014-01-23 Thread Oscar Benjamin
On 23 January 2014 23:58, Nick Coghlan  wrote:
>
> I really think that's our best near term workaround - still room for
> improvement, but " pip install numpy assumes SSE2" is a much better
> situation than "pip install numpy doesn't work on Windows".

Is it? Do you have any idea what proportion of (the relevant) people
would be using Windows with hardware that doesn't support SSE2? I feel
confident that it's less than 10% but I don't know how to justify a
tighter bound than that.

You need to bear in mind that people currently have a variety of ways
to install numpy on Windows that do work already without limitations
on CPU instruction set. Most numpy users will not get any immediate
benefit from the fact that "it works using pip" rather than "it works
using the .exe installer" (or any of a number of other options). It's
the unfortunate end users and the numpy folks who would have to pick
up the pieces if/when the SSE2 assumption fails.

> Such a change would help a lot of people *right now*, while still leaving
> room to eventually figure out something more sophisticated (like postinstall
> hooks or simpler runtime multi-build support or NumPy changing to a
> dependency that internally makes this decision at runtime).

Postinstall hooks are not that sophisticated and most packaging
systems have them. You're advocating for numpy to take a dodgy
compromise here but can it not be the other way round? Wheel, pip etc.
could quickly agree on and implement a postinstall hook that would
work for numpy and then that could be made more sophisticated later
on.


Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] pip on windows experience

2014-01-23 Thread Oscar Benjamin
On Thu, Jan 23, 2014 at 12:16:02PM +, Paul Moore wrote:
> 
> The official numpy installer uses some complex magic to select the
> right binaries based on your CPU, and this means that the official
> numpy "superpack" wininst files don't convert (at least I don't think
> they do, it's a while since I tried).

It's probably worth noting that numpy are toying around with wheels and
have uploaded a number of them to SourceForge for testing:
http://sourceforge.net/projects/numpy/files/wheels_to_test/

Currently there are only OSX wheels there (excluding the pure Python
ones) and they're not available on PyPI. I assume that they're waiting
for a solution for the Windows installer (a post-install script for
wheels). That would give a lot more impetus to put wheels up on PyPI.

The SourceForge OSX wheels are presumably not getting that much use
right now. The OSX-specific numpy wheel has been downloaded 4 times in
the last week: twice on Windows and twice on Linux!

> But happily, Christoph Gohlke hosts a huge list of readymade wininst
> installers for hard-to-build projects, and the 3 you mention are all
> there. He's very good about building for latest Pythons, too (3.4 is
> already there for many packages). Anyone working on Windows who
> doesn't know his site
> (http://www.lfd.uci.edu/~gohlke/pythonlibs/) should check it out.

Also I've seen Christoph mention on the numpy-discussion list that he was
at least testing building wheels, although none seem to be available
on his site at the moment.


Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Binary dependency management, round 2 :)

2013-12-06 Thread Oscar Benjamin
On 6 December 2013 13:54, Nick Coghlan  wrote:
> On 4 December 2013 21:10, Nick Coghlan  wrote:
>> == Regarding conda ==
>>
>> In terms of providing an answer to the question "Where does conda fit
>> in the scheme of packaging tools?", my conclusion from the thread is
>> that once a couple of security related issues are fixed (think PyPI
>> before the rubygems.org compromise for the current state of conda's
>> security model), and once the Python 3.3 compatibility issue is
>> addressed on Windows, it would be reasonable to recommend it as one of
>> the current options for getting hold of pre-built versions of the
>> scientific Python stack.
>>
>> I think this is important enough to warrant a "NumPy and the
>> Scientific Python stack" section in the user guide (with Linux distro
>> packages, Windows installers and conda all discussed as options):
>>
>> https://bitbucket.org/pypa/python-packaging-user-guide/issue/37/add-a-dedicated-numpy-and-the-scientific
>
> I created a draft of this new section at
> https://bitbucket.org/pypa/python-packaging-user-guide/pull-request/12/recommendations-for-numpy-et-al/diff

It's probably worth listing each of the full scientific Python
distributions on this page (or just linking to it), rather than just
Anaconda:
http://www.scipy.org/install.html


Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Handling the binary dependency management problem

2013-12-06 Thread Oscar Benjamin
On 6 December 2013 13:06, David Cournapeau  wrote:
>
> As Ralf, I think it is overkill. The problem of SSE vs non SSE is because of
> one library, ATLAS, which has IMO the design flaw of being arch specific. I
> always hoped we could get away from this when I built those special
> installers for numpy :)
>
> MKL does not have this issue, and now that openblas (under a BSD license)
> can be used as well, we can alleviate this for deployment. Building a
> deployment story for this is not justified.

Oh, okay that's great. How hard would it be to get openblas numpy
wheels up and running? Would they be compatible with the existing
scipy etc. binaries?


Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Binary dependency management, round 2 :)

2013-12-05 Thread Oscar Benjamin
On 5 December 2013 00:06, Marcus Smith  wrote:
>>
>> but Anaconda does a nifty thing: it makes a conda package that holds
>> the shared lib, then other packages that depend on it depend on that
>> package, so it will both get auto-installed
>>
>> But I don't see why you couldn't do that with wheels.
>
> exactly,  that's what I'm really proposing/asking,  is that maybe wheels
> should formally go in that direction.
> i.e. not just packaging python projects, but packaging non-python
> dependencies that python projects need (but have those dependencies be
> optional, for those who want to fulfill those deps using

I don't think it matters whether anyone "formally" goes in that
direction. If it's possible then it will happen for some things sooner
or later. I hope it does happen too, for things like build tools,
BLAS/LAPACK libraries etc. Virtualenv+pip could become a much more
convenient way to set up a software configuration than currently
exists on Windows and OSX (and on Linux distros if you're not looking
to mess with the system install).


Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Handling the binary dependency management problem

2013-12-05 Thread Oscar Benjamin
On 4 December 2013 20:56, Ralf Gommers  wrote:
> On Wed, Dec 4, 2013 at 5:05 PM, Chris Barker - NOAA Federal
>  wrote:
>>
>> So a lowest common denominator wheel would be very, very, useful.
>>
>> As for what that would be: the superpack is great, but it's been around a
>> while (long while in computer years)
>>
>> How many non-sse machines are there still out there? How many non-sse2?
>
> Hard to tell. Probably <2%, but that's still too much. Some older Athlon XPs
> don't have it for example. And what if someone submits performance
> optimizations (there has been a focus on those recently) to numpy that use
> SSE4 or AVX for example? You don't want to reject those based on the
> limitations of your distribution process.
>
>> And how big is the performance boost anyway?
>
> Large. For a long time we've put a non-SSE installer for numpy on pypi so
> that people would stop complaining that ``easy_install numpy`` didn't work.
> Then there were regular complaints about dot products being an order of
> magnitude slower than Matlab or R.

Yes, I wouldn't want that kind of bad PR getting around about
scientific Python: "Python is slower than Matlab" etc.

It seems as if there is a need to extend the pip+wheel+PyPI system
before this can fully work for numpy. I'm sure that the people here
who have been working on all of this would be very interested to know
what kinds of solutions would work best for numpy and related
packages.

You mentioned in another message that a post-install script seems best
to you. I suspect there is a little reluctance to go this way because
one of the goals of the wheel system is to reduce the situation where
users execute arbitrary code from the internet with admin privileges
e.g. "sudo pip install X" will download and run the setup.py from X
with root privileges. Part of the point about wheels is that they
don't need to be "executed" for installation. I know that post-install
scripts are common in .deb and .rpm packages but I think that the use
case there is slightly different as the files are downloaded from
controlled repositories whereas PyPI has no quality assurance.

BTW, how do the distros handle e.g. SSE? My understanding is that they
just strip out all the SSE and related non-portable extensions and
ship generic i686 binaries. My experience is with Ubuntu and I know
they're not very good at handling BLAS with numpy and they don't seem
to be able to compile fftpack as well as Christoph can.

Perhaps a good near-term plan might be to
1) Add the bdist_wheel command to numpy - which may actually be almost
automatic with new enough setuptools/pip and wheel installed.
2) Upload wheels for OSX to PyPI - for OSX SSE support can be inferred
from OS version which wheels can currently handle.
3) Upload wheels for Windows to somewhere other than PyPI e.g.
SourceForge pending a distribution solution that can detect SSE
support on Windows.

I think it would be good to have a go at wheels even if they're not
fully ready for PyPI (just in case some other issue surfaces in the
process).


Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Handling the binary dependency management problem

2013-12-04 Thread Oscar Benjamin
On 4 December 2013 12:10, Nick Coghlan  wrote:
> On 4 December 2013 20:41, Oscar Benjamin  wrote:
>>
>> Another possibility is that the pip/wheel/PyPI/metadata system can be
>> changed to allow a "variant" field for wheels/sdists. This was also
>> suggested in the same thread by Nick Coghlan:
>> https://mail.python.org/pipermail/distutils-sig/2013-August/022432.html
>>
>> The variant field could be used to upload multiple variants e.g.
numpy-1.7.1-cp27-cp27m-win32.whl
numpy-1.7.1-cp27-cp27m-win32-sse.whl
numpy-1.7.1-cp27-cp27m-win32-sse2.whl
numpy-1.7.1-cp27-cp27m-win32-sse3.whl
>> then if the user requests 'numpy:sse3' they will get the wheel with
>> sse3 support.
>
> That was what I was originally thinking for the variant field, but I
> later realised it makes more sense to treat the "variant" marker as
> part of the *platform* tag, rather than being an independent tag in
> its own right: 
> https://bitbucket.org/pypa/pypi-metadata-formats/issue/15/enhance-the-platform-tag-definition-for
>
> Under that approach, pip would figure out all the variants that
> applied to the current system (with some default preference order
> between variants for platforms where one system may support multiple
> variants). Using the Linux distro variants (based on ID and RELEASE_ID
> in /etc/os-release) as an example rather than the Windows SSE
> variants, this might look like:
>
>   cp33-cp33m-linux_x86_64_fedora_19
>   cp33-cp33m-linux_x86_64_fedora
>   cp33-cp33m-linux_x86_64

I find that a bit strange to look at since I expect it to read like a
taxonomic hierarchy, like so:

cp33-cp33m-linux
cp33-cp33m-linux_fedora
cp33-cp33m-linux_fedora_19
cp33-cp33m-linux_fedora_19_x86_64

Really you always need the architecture information though, so:

cp33-cp33m-linux_x86_64
cp33-cp33m-linux_fedora_x86_64
cp33-cp33m-linux_fedora_19_x86_64

> The Windows SSE variants might look like:
>
>   cp33-cp33m-win32_sse3
>   cp33-cp33m-win32_sse2
>   cp33-cp33m-win32_sse
>   cp33-cp33m-win32

I would have thought something like:

cp33-cp33m-win32
cp33-cp33m-win32_nt
cp33-cp33m-win32_nt_vista
cp33-cp33m-win32_nt_vista_sp2

Also CPU information isn't hierarchical, so what happens when e.g.
pyfftw wants to ship wheels with and without MMX instructions?
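
For what it's worth, the matching side of the scheme is easy to picture.
Here is a sketch of the preference-ordered candidate list an installer might
compute (purely illustrative; pip does nothing like this today):

def candidate_platform_tags(base="win32", variants=("sse3", "sse2", "sse")):
    # most specific variant first, bare platform tag as the final fallback
    return [base + "_" + variant for variant in variants] + [base]

for plat in candidate_platform_tags():
    print("cp33-cp33m-" + plat)

The installer would then take the first available wheel whose platform tag
appears in the list.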

>> I think it would be good to work out a way of doing this with e.g. a
>> cpuinfo package. Many other packages beyond numpy could make good use
>> of that metadata if it were available. Similarly having an extensible
>> mechanism for selecting wheels based on additional information about
>> the user's system could be used for many more things than just CPU
>> architectures.
>
> Yes, the lack of extensibility is the one concern I have with baking
> the CPU SSE info into the platform tag. On the other hand, the CPU
> architecture info is already in there, so appending the vectorisation
> support isn't an obviously bad idea, is orthogonal to the
> "python.expects" consistency enforcement metadata and would cover the
> NumPy use case, which is the one we really care about at this point.

An extensible solution would be a big win. Maybe there should be an
explicit metadata option that says "to get this piece of metadata you
should install the following package and then run this command
(without elevated privileges?)".


Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Handling the binary dependency management problem

2013-12-04 Thread Oscar Benjamin
On 4 December 2013 07:40, Ralf Gommers  wrote:
> On Wed, Dec 4, 2013 at 1:54 AM, Donald Stufft  wrote:
>>
>> I’d love to get Wheels to the point they are more suitable then they are
>> for SciPy stuff,
>
> That would indeed be a good step forward. I'm interested to try to help get
> to that point for Numpy and Scipy.

Thanks Ralf. Please let me know what you think of the following.

>> I’m not sure what the diff between the current state and what
>> they need to be are but if someone spells it out (I’ve only just skimmed
>> your last email so perhaps it’s contained in that!) I’ll do the arguing
>> for it. I
>> just need someone who actually knows what’s needed to advise me :)
>
> To start with, the SSE stuff. Numpy and scipy are distributed as "superpack"
> installers for Windows containing three full builds: no SSE, SSE2 and SSE3.
> Plus a script that runs at install time to check which version to use. These
> are built with ``paver bdist_superpack``, see
> https://github.com/numpy/numpy/blob/master/pavement.py#L224. The NSIS and
> CPU selector scripts are under tools/win32build/.
>
> How do I package those three builds into wheels and get the right one
> installed by ``pip install numpy``?

This was discussed previously on this list:
https://mail.python.org/pipermail/distutils-sig/2013-August/022362.html

Essentially the current wheel format and specification does not
provide a way to do this directly. There are several different
possible approaches.

One possibility is that the wheel spec can be updated to include a
post-install script (I believe this will happen eventually - someone
correct me if I'm wrong). Then the numpy for Windows wheel can just do
the same as the superpack installer: ship all variants, then
delete/rename in a post-install script so that the correct variant is
in place after install.

Another possibility is that the pip/wheel/PyPI/metadata system can be
changed to allow a "variant" field for wheels/sdists. This was also
suggested in the same thread by Nick Coghlan:
https://mail.python.org/pipermail/distutils-sig/2013-August/022432.html

The variant field could be used to upload multiple variants e.g.
numpy-1.7.1-cp27-cp27m-win32.whl
numpy-1.7.1-cp27-cp27m-win32-sse.whl
numpy-1.7.1-cp27-cp27m-win32-sse2.whl
numpy-1.7.1-cp27-cp27m-win32-sse3.whl
then if the user requests 'numpy:sse3' they will get the wheel with
sse3 support.

Of course, how would the user know if their CPU supports SSE3? I know
roughly what SSE is but I don't know what level of SSE is available on
each of the machines I use. There is a Python script/module in
numexpr that can detect this:
https://github.com/eleddy/numexpr/blob/master/numexpr/cpuinfo.py

When I run that script on this machine I get:
$ python cpuinfo.py
CPU information: CPUInfoBase__get_nbits=32 getNCPUs=2 has_mmx has_sse2
is_32bit is_Core2 is_Intel is_i686
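
(On Linux the core of such a check is tiny - a sketch that reads the
kernel's flag list; Windows needs the more involved approach that the
numexpr code takes:)

def cpu_flags():
    # parse the 'flags' line from /proc/cpuinfo (Linux only)
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
print("sse2" in flags, "pni" in flags)  # "pni" is the kernel's name for SSE3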

So perhaps someone could break that script out of numexpr and release
it as a separate package on PyPI. Then the instructions for installing
numpy could be something like
"""
You can install numpy with

$ pip install numpy

which will download the default version without any CPU-specific optimisations.

If you know what level of SSE support your CPU has then you can
download a more optimised numpy with either of:

$ pip install numpy:sse2
$ pip install numpy:sse3

To determine whether or not your CPU has SSE2 or SSE3 or no SSE
support you can install and run the cpuinfo script. For example on
this machine:

$ pip install cpuinfo
$ python -m cpuinfo --sse
This CPU supports the SSE3 instruction set.

That means we can install numpy:sse3.
"""

Of course it would be a shame to have a solution that is so close to
automatic without quite being automatic. Also the problem is that
having no SSE support in the default numpy means that lots of people
would lose out on optimisations. For example if numpy is installed as
a dependency of something else then the user would always end up with
the unoptimised no-SSE binary.

Another possibility is that numpy could depend on the cpuinfo package
so that it gets installed automatically before numpy. Then if the
cpuinfo package has a traditional setup.py sdist (not a wheel) it
could detect the CPU information at install time and store that in its
package metadata. Then pip would be aware of this metadata and could
use it to determine which wheel is appropriate.

I don't quite know if this would work but perhaps the cpuinfo package
could announce that it "Provides" e.g. cpuinfo:sse2. Then a numpy wheel
could "Requires" cpuinfo:sse2 or something along these lines. Or
perhaps this is better handled by the metadata extensions Nick
suggested earlier in this thread.

I think it would be good to work out a way of doing this with e.g. a
cpuinfo package. Many other packages beyond numpy could make good use
of that metadata if it were available. Similarly having an extensible
mechanism for selecting wheels based on additional information about
the user's system could be used for many more things than just CPU
architectures.

Re: [Distutils] Handling the binary dependency management problem

2013-12-03 Thread Oscar Benjamin
On 3 December 2013 21:13, Donald Stufft  wrote:
> I think Wheels are the way forward for Python dependencies. Perhaps not for
> things like fortran. I hope that the scientific community can start
> publishing wheels at least in addition too.

The Fortran issue is not that complicated. Very few packages are
affected by it. It can easily be fixed with some kind of compatibility
tag that can be used by the small number of affected packages.

> I don't believe that Conda will gain the mindshare that pip has outside of
> the scientific community so I hope we don't end up with two systems that
> can't interoperate.

Maybe conda won't gain mindshare outside the scientific community but
wheel really needs to gain mindshare *within* the scientific
community. The root of all this is numpy. It is the biggest dependency
on PyPI, is hard to build well, and has the Fortran ABI issue. It is
used by very many people who wouldn't consider themselves part of the
"scientific community". For example matplotlib depends on it. The PyPy
devs have decided that it's so crucial to the success of PyPy that
numpy's basically being rewritten in their stdlib (along with the C
API).

A few times I've seen Paul Moore refer to numpy as the "litmus test"
for wheels. I actually think that it's more important than that. If
wheels are going to fly then there *need* to be wheels for numpy. As
long as there isn't a wheel for numpy then there will be lots of
people looking for a non-pip/PyPI solution to their needs.

One way of getting the scientific community more on board here would
be to offer them some tangible advantages. So rather than saying "oh
well scientific use is a special case so they should just use conda or
something", the message should be "the wheel system provides solutions
to many long-standing problems and is even better than conda in (at
least) some ways because it cleanly solves the Fortran ABI issue for
example".


Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Handling the binary dependency management problem

2013-12-03 Thread Oscar Benjamin
On 3 December 2013 22:18, Chris Barker  wrote:
> On Tue, Dec 3, 2013 at 12:48 AM, Nick Coghlan  wrote:
>>
>> Because it already works for the scientific stack, and if we don't provide
>> any explicit messaging around where conda fits into the distribution
>> picture, users are going to remain confused about it for a long time.
>
> Do we have to have explicit messaging for every useful third-party package
> out there?
>
>> > I'm still confused as to why packages need to share external
>> > dependencies (though I can see why it's nice...) .
>>
>> Because they reference shared external data, communicate through shared
>> memory, or otherwise need compatible memory layouts. It's exactly the same
>> reason all C extensions need to be using the same C runtime as CPython on
>> Windows: because things like file descriptors break if they don't.
>
> OK -- maybe we need a better term than shared external dependencies -- that
> makes me think shared library. Also even the scipy stack is not as dependent
> in build env as we seem to thin it is -- I don't think there is any reason
> you can't use the "standard" MPL with Golke's MKL-build numpy, for instance.
> And I"m pretty sure that even scipy and numpy don't need to share their
> build environment more than any other  extension (i.e. they could use
> different BLAS implementations, etc... numpy version matters, but that's
> handled by the usual dependency handling.

Sorry, I was being vague earlier. The BLAS information is not
important but the Fortran ABI it exposes is:
http://docs.scipy.org/doc/numpy/user/install.html#fortran-abi-mismatch

MPL - matplotlib for those unfamiliar with the acronym - depends on
the numpy C API/ABI but not the Fortran ABI. So it would be
incompatible with, say, a pure Python implementation of numpy (or with
numpypy) but it should work fine with any of the numpy binaries
currently out there. (Numpy's C ABI has been unchanged from version
1.0 to 1.7 precisely because changing it has been too painful to
contemplate).

> The reason Gohlke's repo, and Anaconda and Canopy all exist is because it's a
> pain to build some of this stuff, period, not complex compatibility issues --
> and the real pain goes beyond the standard scipy stack (VTK is a killer!)

I agree that the binary compatibility issues are not as complex as
some are making out but it is a fact that his binaries are sometimes
binary-incompatible with other builds. I have seen examples of it
going wrong and he gives a clear warning at the top of his downloads
page:
http://www.lfd.uci.edu/~gohlke/pythonlibs/

>> but in their enthusiasm, the developers are pitching it as a general
>> purpose packaging solution. It isn't,
>
> It's not? Aside from momentum, and all that, could it not be a replacement
> for pip and wheel?

Conda/binstar could indeed be a replacement for pip and wheel and
PyPI. It currently lacks many packages but less so than PyPI if you're
mainly interested in binaries. For me pip+PyPI is a non-starter (as a
complete solution) if I can't install numpy and matplotlib.

>> By contrast, conda already exists, and already works, as it was designed
>> *specifically* to handle the scientific Python stack.
>
> I'm not sure how well it works -- it works for Anaconda, and good point
> about the scientific stack -- does it work equally well for other stacks? Or
> mixing and matching?

I don't even know how well it works for the "scientific stack". It
didn't work for me! But I definitely know that pip+PyPI doesn't yet
work for me and working around that has caused me a lot more pain then
it would be to diagnose and fix the problem I had with conda. They
might even accept a one line, no-brainer pull request for my fix in
less then 3 months :) https://github.com/pypa/pip/pull/1187

>> This means that one key reason I want to recommend it for the cases where
>> it is a good fit (i.e. the scientific Python stack) is so we can explicitly
>> advise *against* using it in other cases where it will just add complexity
>> without adding value.
>
> I'm actually pretty concerned about this: lately the scipy community has
> defined a core "scipy stack":
>
> http://www.scipy.org/stackspec.html
>
> Along with this is a push to encourage users to just go with a scipy
> distribution to get that "stack":
>
> http://www.scipy.org/install.html
>
> and
>
> http://ipython.org/install.html
>
> I think this is in response to years of pain of each package trying to
> build binaries for various platforms, and keeping it all in sync, etc. I
> feel their pain, and "just go with Anaconda or Canopy" is good advice for
> folks who want to get the "stack" up and running as easily as possible.

The scientific Python community are rightfully worried about potential
users losing interest in Python because these installation problems
occur for every noob who wants to use Python. In scientific usage
Python just isn't fully installed yet until numpy/scipy/matplotlib
etc. are. It makes perfect sense to try and get peo

Re: [Distutils] Handling the binary dependency management problem

2013-12-03 Thread Oscar Benjamin
On 3 December 2013 13:53, Nick Coghlan  wrote:
> On 3 December 2013 22:49, Oscar Benjamin  wrote:
>
> Hmm, I likely wouldn't build it into the core requirement system (that
> all operates at the distribution level), but the latest metadata
> updates split out a bunch of the optional stuff to extensions (see
> https://bitbucket.org/pypa/pypi-metadata-formats/src/default/standard-metadata-extensions.rst).
> What we're really after at this point is the ability to *detect*
> conflicts if somebody tries to install incompatible builds into the
> same virtual environment (e.g. you installed from custom index server
> originally, but later you forget and install from PyPI).
>
> So perhaps we could have a "python.expects" extension, where we can
> assert certain things about the metadata of other distributions in the
> environment. So, say that numpy were to define a custom extension
> where they can define the exported binary interfaces:
>
> "extensions": {
> "numpy.compatibility": {
> "api_version": 1,
> "fortran_abi": "openblas-g77"
> }
> }
[snip]
>
> I like the general idea of being able to detect conflicts through the
> published metadata, but would like to use the extension mechanism to
> avoid name conflicts.

Helping to prevent broken installs in this way would definitely be an
improvement. It would be a real shame though if PyPI contained all
the metadata needed to match up compatible binary wheels but pip
only used it to show error messages rather than to actually locate the
wheel that the user wants.


Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Handling the binary dependency management problem

2013-12-03 Thread Oscar Benjamin
On 3 December 2013 11:54, Nick Coghlan  wrote:
> On 3 December 2013 21:22, Oscar Benjamin  wrote:
>> AFAICT conda/binstar are alternatives for pip/PyPI that happen to host
>> binaries for some packages that don't have binaries on PyPI. (conda
>> also provides a different - incompatible - take on virtualenvs but
>> that's not relevant to this proposal).
>
> It sounds like I may have been confusing two presentations at the
> packaging mini-summit, as I would have sworn conda used hashes to
> guarantee a consistent set of packages. I know I have mixed up
> features between hashdist and conda in the past (and there have been
> some NixOS features mixed in there as well), so it wouldn't be the
> first time that has happened - the downside of mining different
> distribution systems for ideas is that sometimes I forget where I
> encountered particular features :)

I had the same confusion with hashdist at the start of this thread
when I said that conda was targeted at HPC. So if we both make the
same mistake I guess it's forgivable :)

> If conda doesn't offer such an internal consistency guarantee for
> published package sets, then I agree with the criticism that it's just
> an alternative to running a private PyPI index server hosting wheel
> files pre-built with particular options, and thus it becomes
> substantially less interesting to me :(

Perhaps Travis, who is still CC'ed here, could comment on this since it
is apparent that no one here really understands what conda is and he
apparently works for Continuum Analytics so should (hopefully) know a
little more...

> Under that model, what conda is doing is *already covered* in the
> draft metadata 2.0 spec (as of the changes I posted about the other
> day), since that now includes an "integrator suffix" (to indicate when
> a downstream rebuilder has patched the software), as well as a
> "python.integrator" metadata extension to give details of the rebuild.
> The namespacing in the wheel case is handled by not allowing rebuilds
> to be published on PyPI - they have to be published on a separate
> index server, and thus can be controlled based on where you tell pip
> to look.

Do you mean to say that PyPI can (should) only host a
binary-compatible set of wheels and that other index servers should do
the same?

I still think that there needs to be some kind of compatibility tag either way.

> So, I apologise for starting the thread based on what appears to be a
> fundamentally false premise, although I think it has still been useful
> despite that error on my part (as the user confusion is real, even
> though my specific proposal no longer seems as useful as I first
> thought).
>
> I believe helping the conda devs to get it to play nice with virtual
> environments is still a worthwhile exercise though (even if just by
> pointing out areas where it *doesn't* currently interoperate well, as
> we've been doing in the last day or so), and if the conda
> bootstrapping issue is fixed by publishing wheels (or vendoring
> dependencies), then "try conda if there's no wheel" may still be a
> reasonable fallback recommendation.

Well, for a start, conda (at least according to my failed build)
overwrites the virtualenv activate scripts with its own scripts that
do something completely different and can't even be called with the
same signature. So it looks to me as if there is no intention of
virtualenv compatibility.

As for "try conda if there's no wheel" according to what I've read
that seems to be what people who currently use conda do.

I thought about another thing during the course of this thread. To
what extent can Provides/Requires help out with the binary
incompatibility problems? For example numpy really does provide
multiple interfaces:
1) An importable Python module that can be used from Python code.
2) A C-API that can be used by compiled C-extensions.
3) BLAS/LAPACK libraries with a particular Fortran ABI to any other
libraries in the same process.

Perhaps the solution is that a build of a numpy wheel should state
explicitly what it Provides at each level, e.g.:

Provides: numpy
Provides: numpy-capi-v1
Provides: numpy-openblas-g77

Then a built wheel for scipy can Require the same things. Christoph
Gohlke could provide a numpy wheel with:

Provides: numpy
Provides: numpy-capi-v1
Provides: numpy-intelmkl

And his scipy wheel can require the same. This would mean that pip
would understand the binary dependency problems during dependency
resolution and could reject an incompatible wheel at install time as
well as being able to find a compatible wheel automatically if one
exists in the server. Unlike the hash-based dependencies we can see
that it is possible to depend on the numpy C-API without necessarily
depending on any particular BLAS/LAPACK implementation or Fortran ABI.
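
To illustrate the kind of check pip could then perform at install time (a
sketch only; no such mechanism exists in pip today):

# what an installed numpy build declares vs. what a scipy wheel requires
installed_provides = {"numpy", "numpy-capi-v1", "numpy-openblas-g77"}
wheel_requires = {"numpy", "numpy-capi-v1", "numpy-intelmkl"}

missing = wheel_requires - installed_provides
if missing:
    print("incompatible wheel; unsatisfied: " + ", ".join(sorted(missing)))
    # pip could now search the index for a wheel that satisfies these names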

Re: [Distutils] Handling the binary dependency management problem

2013-12-03 Thread Oscar Benjamin
On 1 December 2013 04:15, Nick Coghlan  wrote:
>
> conda has its own binary distribution format, using hash based
> dependencies. It's this mechanism which allows it to provide reliable
> cross platform binary dependency management, but it's also the same
> mechanism that prevents low impact security updates and
> interoperability with platform provided packages.

Nick, can you provide a link to somewhere that explains the hash-based
dependency thing please?

I've read the following...

http://docs.continuum.io/conda/
https://speakerdeck.com/teoliphant/packaging-and-deployment-with-conda
http://docs.continuum.io/anaconda/index.html
http://continuum.io/blog/new-advances-in-conda
http://continuum.io/blog/conda
http://docs.continuum.io/conda/build.html

...but I see no reference to hash-based dependencies.

In fact the only place I have seen a reference to hash-based
dependencies is your comment at the bottom of this github issue:
https://github.com/ContinuumIO/conda/issues/292

AFAICT conda/binstar are alternatives for pip/PyPI that happen to host
binaries for some packages that don't have binaries on PyPI. (conda
also provides a different - incompatible - take on virtualenvs but
that's not relevant to this proposal).


Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Handling the binary dependency management problem

2013-12-03 Thread Oscar Benjamin
On 3 December 2013 10:19, Nick Coghlan  wrote:
>> Or
>> how about a scientist that wants wxPython (to use Chris' example)?
>> Apparently the conda repo doesn't include wxPython, so do they need to
>> learn how to install pip into a conda environment? (Note that there's
>> no wxPython wheel, so this isn't a good example yet, but I'd hope it
>> will be in due course...)
>
> No, it's the other way around - for cases where wheels aren't yet
> available, but conda provides it, then we should try to ensure that
> "pip install conda && conda init && conda install " does the
> right thing (including conda upgrading previously pip installed
> packages when necessary, as well as bailing out gracefully when it
> needs to).

Perhaps it would help if there were wheels for conda and its
dependencies. "pycosat" (whatever that is) breaks when I pip install
conda:

$ pip install conda
Downloading/unpacking pycosat (from conda)
  Downloading pycosat-0.6.0.tar.gz (58kB): 58kB downloaded
  Running setup.py egg_info for package pycosat

Downloading/unpacking pyyaml (from conda)
  Downloading PyYAML-3.10.tar.gz (241kB): 241kB downloaded
  Running setup.py egg_info for package pyyaml

Installing collected packages: pycosat, pyyaml
  Running setup.py install for pycosat
building 'pycosat' extension
q:\tools\MinGW\bin\gcc.exe -mdll -O -Wall
-Iq:\tools\Python27\include -IQ:\venv\PC -c pycosat.c -o
build\temp.win32-2.7\Release\pycosat.o
In file included from pycosat.c:18:0:
picosat.c: In function 'picosat_stats':
picosat.c:8179:4: warning: unknown conversion type character 'l'
in format [-Wformat]
picosat.c:8179:4: warning: too many arguments for format
[-Wformat-extra-args]
picosat.c:8180:4: warning: unknown conversion type character 'l'
in format [-Wformat]
picosat.c:8180:4: warning: too many arguments for format
[-Wformat-extra-args]
In file included from pycosat.c:18:0:
picosat.c: At top level:
picosat.c:8210:26: fatal error: sys/resource.h: No such file or directory
compilation terminated.
error: command 'gcc' failed with exit status 1
Complete output from command Q:\venv\Scripts\python.exe -c "import
setuptools;__file__='Q:\\venv\\build\\pycosat\\setup.py';exec(compile(open(__file__).read().replace('\r\n',
'\n'), __file__, 'exec'))" install --record
c:\docume~1\enojb\locals~1\temp\pip-lobu76-record\install-record.txt
--single-version-externally-managed --install-headers
Q:\venv\include\site\python2.7:
running install

running build

running build_py

creating build

creating build\lib.win32-2.7

copying test_pycosat.py -> build\lib.win32-2.7

running build_ext

building 'pycosat' extension

creating build\temp.win32-2.7

creating build\temp.win32-2.7\Release

q:\tools\MinGW\bin\gcc.exe -mdll -O -Wall -Iq:\tools\Python27\include
-IQ:\venv\PC -c pycosat.c -o build\temp.win32-2.7\Release\pycosat.o

In file included from pycosat.c:18:0:

picosat.c: In function 'picosat_stats':

picosat.c:8179:4: warning: unknown conversion type character 'l' in
format [-Wformat]

picosat.c:8179:4: warning: too many arguments for format [-Wformat-extra-args]

picosat.c:8180:4: warning: unknown conversion type character 'l' in
format [-Wformat]

picosat.c:8180:4: warning: too many arguments for format [-Wformat-extra-args]

In file included from pycosat.c:18:0:

picosat.c: At top level:

picosat.c:8210:26: fatal error: sys/resource.h: No such file or directory

compilation terminated.

error: command 'gcc' failed with exit status 1


Cleaning up...
Command Q:\venv\Scripts\python.exe -c "import
setuptools;__file__='Q:\\venv\\build\\pycosat\\setup.py';exec(compile(open(__file__).read().replace('\r\n',
'\n'), __file__, 'exec'))" install --record
c:\docume~1\enojb\locals~1\temp\pip-lobu76-record\install-record.txt
--single-version-externally-managed --install-headers
Q:\venv\include\site\python2.7 failed with error code 1 in
Q:\venv\build\pycosat
Storing complete log in c:/Documents and Settings/enojb\pip\pip.log


Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Handling the binary dependency management problem

2013-12-02 Thread Oscar Benjamin
On 2 December 2013 13:54, Paul Moore  wrote:
>
> If the named projects provided Windows binaries, then there would be
> no issue with Christoph's stuff. But AFAIK, there is no place I can
> get binary builds of matplotlib *except* from Christoph. And lxml
> provides limited sets of binaries - there's no Python 3.3 version, for
> example. I could continue :-)

The matplotlib folks provide a list of binaries for Windows and OSX
hosted by SourceForge:
http://matplotlib.org/downloads.html

So do numpy and scipy.

> Oh, and by the way, in what sense do you mean "cross-platform" here?
> Win32 and Win64? Maybe I'm being narrow minded, but I tend to view
> "cross platform" as meaning "needs to think about at least two of
> Unix, Windows and OSX". The *platform* issues on Windows (and OSX, I
> thought) are solved - it's the ABI issues that we've ignored thus far
> (successfully till now :-))

Exactly. A Python extension that uses Fortran needs to indicate which
of the two Fortran ABIs it uses. Scipy must use the same ABI as the
BLAS/LAPACK library that numpy was linked with. This is core
compatibility data but there's no way to communicate it to pip.
There's no need to actually provide downloadable binaries for both
ABIs but there is a need to be able to detect incompatibilities.

Basically if:
1) there is at least one single consistent set of built wheels for
Windows/OSX for any popular set of binary-interdependent packages, and
2) there is a way to automatically detect incompatibilities and to
automatically find compatible built wheels,
then *a lot* of packaging problems have been solved.

Part 1 already exists. There are multiple consistent sets of built
installers (not wheels yet) for many hard-to-build packages. Part 2
requires at least some changes in pip/PyPI.

I read somewhere that numpy is the most frequently cited dependency on
PyPI. It can be built in multiple binary-incompatible ways. If there
is at least a way for the installer to know that it was built in "the
standard way" (for Windows/OSX) then there can be a set of binaries
built to match that. There's no need for a combinatorial explosion of
compatibility tags - just a single set of compatibility tags that has
complete binaries (where the definition of complete obviously depends
on your field).

People who want to build in different incompatible ways can do so
themselves, although it would still be nice to get an install time
error message when you subsequently try to install something
incompatible.

For Linux this problem is basically solved as far as beginners are
concerned because they can just use apt.


Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Handling the binary dependency management problem

2013-12-02 Thread Oscar Benjamin
On 2 December 2013 09:19, Paul Moore  wrote:
> On 2 December 2013 07:31, Nick Coghlan  wrote:
>> The only problem I want to take off the table is the one where
>> multiple wheel files try to share a dynamically linked external binary
>> dependency.
>
> OK. Thanks for the clarification.
>
> Can I suggest that we need to be very careful how any recommendation
> in this area is stated? I certainly didn't get that impression from
> your initial posting, and from the other responses it doesn't look
> like I was the only one.

I understood what Nick meant but I still don't understand how he's
come to this conclusion.

> We're only just starting to get real credibility for wheel as a
> distribution format, and we need to get a very strong message out that
> wheel is the future, and people should be distributing wheels as their
> primary binary format. My personal litmus test is the scientific
> community - when Christoph Gohlke is distributing his (Windows) binary
> builds as wheels, and projects like numpy, ipython, scipy etc are
> distributing wheels on PyPI, rather than bdist_wininst, I'll feel like
> we have got to the point where wheels are "the norm". The problem is,
> of course, that with conda being a scientific distribution at heart,
> any message we issue that promotes conda in any context will risk
> confusion in that community.

Nick's proposal is basically incompatible with allowing Christoph
Gohlke to use pip and wheels. Christoph provides a bewildering array
of installers for prebuilt packages that are interchangeable with
other builds at the level of Python code but not necessarily at the
binary level. So, for example, his scipy is incompatible with the
"official" (from SourceForge) Windows numpy build because it links
with the non-free Intel MKL library and it needs numpy to link against
the same. Installing his scipy over the other numpy results in this:
https://mail.python.org/pipermail//python-list/2013-September/655669.html

So Christoph can provide wheels and people can manually download them
and install from them but would beginners find that any easier than
running the .exe installers? The .exe installers are more powerful and
can do things like the numpy super-pack that distributes binaries for
different levels of SSE support (as discussed previously on this list,
the wheel format cannot currently achieve this). Beginners will also
find .exe installers more intuitive than running pip on the command
line and will typically get better error messages etc. than pip
provides. So I don't really see why Christoph should bother switching
formats (as noted by Paul before, anyone who wants a wheel cache can
easily convert his installers into wheels).

AFAICT what Nick is saying is that it's not possible for pip and PyPI
to guarantee the compatibility of different binaries because unlike
apt-get and friends only part of the software stack is controlled.
However I think this is not the most relevant difference between pip
and apt-get here. The crucial difference is that apt-get communicates
with repositories where all code and all binaries are under control of
a single organisation. Pip (when used normally) communicates with PyPI
and no single organisation controls the content of PyPI. So there's no
way for pip/PyPI to guarantee *anything* about the compatibility of
the code that they distribute/install, whether the problems are to do
with binary compatibility or just compatibility of pure Python code.
For pure Python distributions package authors are expected to solve
the compatibility problems and pip provides version specifiers etc
that they can use to do this. For built distributions they could do
the same - except that pip/PyPI don't provide a mechanism for them to
do so.
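
To make the contrast concrete, at the pure Python level an author can
already write something like the following in setup.py (a minimal sketch;
the specifier constrains which numpy *versions* are acceptable but says
nothing about BLAS choice or Fortran ABI):

from setuptools import setup

setup(
    name="example",
    version="0.1",
    install_requires=["numpy>=1.7"],  # version-level, not binary-level
)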

Because PyPI is not a centrally controlled single software stack it
needs a different model for ensuring compatibility - one driven by the
community. People in the Python community are prepared to spend a
considerable amount of time, effort and other resources solving this
problem. Consider how much time Christoph Gohlke must spend maintaining
such a large internally consistent set of built packages. He has
created a single compatible binary software stack for scientific
computation. It's just that PyPI doesn't give him any way to
distribute it. If perhaps he could own a tag like "cgohlke" and upload
numpy:cgohlke and scipy:cgohlke then his scipy:cgohlke wheel could
depend on numpy:cgohlke and numpy:cgohlke could somehow communicate
the fact that it is incompatible with any other scipy distribution.
This is one way in which pip/PyPI could facilitate the Python
community to solve the binary compatibility problems.

[As an aside, I don't know whether Christoph's Intel license would
permit distribution via PyPI.]

Another way would be to allow the community to create compatibility
tags so that projects like numpy would have mechanisms to indicate
e.g. Fortran ABI compatibility. In this model no one owns a particular
tag but projects tha

Re: [Distutils] Handling the binary dependency management problem

2013-12-01 Thread Oscar Benjamin
On Dec 1, 2013 1:10 PM, "Paul Moore"  wrote:
>
> On 1 December 2013 04:15, Nick Coghlan  wrote:
> > 2. For cross-platform handling of external binary dependencies, we
> > recommend bootstrapping the open source conda toolchain, and using that
> > to install pre-built binaries (currently administered by the Continuum
> > Analytics folks). Specifically, commands like the following should
> > work on POSIX systems without needing any local build machinery, and
> > without needing all the projects in the chain to publish wheels: "pip
> > install conda && conda init && conda install ipython"
>
> Hmm, this is a somewhat surprising change of direction.

Indeed it is. Can you clarify a little more how you've come to this
conclusion, Nick, and perhaps explain what conda is?

I looked at conda some time ago and it seemed to be aimed at HPC (high
performance computing) clusters, which is a niche use case where you
have large networks of computation nodes containing identical hardware
(unless I'm conflating it with something else).

Oscar


Re: [Distutils] Plans for binary wheels, and PyPi and OS-X

2013-10-31 Thread Oscar Benjamin
On Oct 31, 2013 8:50 PM, "Tres Seaver"  wrote:
>
> On 10/31/2013 02:24 PM, Donald Stufft wrote:
> > To be honest the same problems likely exist on Windows, it's just
> > less likely and the benefits of prebuilt binaries greater.
>
> For all platforms *except* Windows, wheels are essentially caches --
> there is no real reason to distribute them via PyPI at all, because OS X
> and Linux developers will have tools to build them from sdists.

What if an OSX user wants to install numpy/scipy? How easy is it to do this
from source (I really don't know)?

Building the necessary BLAS/LAPACK libraries isn't easy on any platform.
It's just easier on a Linux distro when the package manager can do it for
you.

Oscar


Re: [Distutils] AttributeError: 'tuple' object has no attribute 'split'

2013-10-31 Thread Oscar Benjamin
On Oct 31, 2013 4:09 PM, "Dominique Orban" 
wrote:
>
> On 25 October, 2013 at 2:06:34 PM, Dominique Orban (
dominique.or...@gmail.com) wrote:
> >
> >
> >
> >On 25 October, 2013 at 1:56:26 PM, Oscar Benjamin (
oscar.j.benja...@gmail.com) wrote:
> >>
> >>On Oct 25, 2013 3:52 PM, "Dominique Orban"
> >>wrote:
> >>>
> >>>
> >>>
> >>> On 25 October, 2013 at 9:31:16 AM, Oscar Benjamin (
> >>oscar.j.benja...@gmail.com) wrote:
> >>> >
> >>> >On 24 October 2013 21:04, Dominique Orban wrote:
> >>> >>
> >>> >> I hope this is the right place to ask for help. I'm not finding
much
> >>comfort in the PyPi documentation or in Google searches. I uploaded my
> >>package `pykrylov` with `python setup.py sdist upload`. Installing it
> >>locally with `python setup.py` install works fine but `pip install
> >>pykrylov` breaks with the messages below. I since removed it from PyPI
but
> >>I get the same error message if I try installing from the git
repository.
> >>I'm hoping someone can put me on track as I've no idea what's wrong. You
> >>can see my setup.py here:
> >>> >>
> >>> >>
> >>
https://github.com/dpo/pykrylov/blob/ea553cdb287f6e685406ceadcb297fd6704af52d/setup.py
> >>> >>
> >>> >> I'm using Python 2.7.5 on OSX installed with Homebrew and pip
1.4.1.
> >>Attempts to upgrade setuptools or pip result in another error message
> >>(AttributeError: 'str' object has no attribute 'rollback')...
> >>> >
> >>> >Can you install a more recent setuptools by downloading it and
running
> >>> >the setup.py yourself?
> >>> >
> >>
https://pypi.python.org/packages/source/s/setuptools/setuptools-1.1.6.tar.gz
> >>>
> >>> Thanks for the suggestion. I'm still getting the same error with
> >>setuptools 1.1.6. I also tried "upgrading" Numpy (since I'm using Numpy
> >>distutils) by installing from their git repository, and I'm still
getting
> >>the same error.
> >>>
> >>> Is anything obviously wrong with the setup.py?
> >>
> >>I don't know but I'm not totally clear what you mean. Previously you
> >>described multiple problems: with pip, setuptools and pykrylov. Have you
> >>successfully installed setuptools now?
> >>
> >>If the "same error" is with pykrylov's setup.py have you tried debugging
> >>it? E.g. 'python -m pdb setup.py install'
> >
> >"python setup.py install" works fine. It's the installation with pip
that returns the error message I mentioned. I was wondering if something in
setup.py didn't agree with pip/setuptools.
> >
> >Yes I installed setuptools 1.1.6. But "pip install -e git://
github.com/dpo/pykrylov.git@ea553cd#egg=pykrylov" still returns
"AttributeError: 'tuple' object has no attribute 'split'".
> >
> >I hope I'm making sense.
>
> Anybody can provide any help with this?

Run pip under pdb. Find the location of the pip script and run:
python -m pdb /path/to/pip install ...

Then find out which object has the wrong type and see if you can trace it
back to something in the setup.py.
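
A rough sketch of such a session (the pip path and package name are
illustrative); "python -m pdb script.py" drops into post-mortem mode
when the exception propagates, at which point you can walk the stack:

    $ python -m pdb /usr/local/bin/pip install pykrylov
    (Pdb) continue
    ...
    AttributeError: 'tuple' object has no attribute 'split'
    (Pdb) where   # show the stack at the point of the error
    (Pdb) up      # move towards the frame that created the tuple
    (Pdb) p args  # print suspect variables in that frame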

Oscar


Re: [Distutils] AttributeError: 'tuple' object has no attribute 'split'

2013-10-25 Thread Oscar Benjamin
On Oct 25, 2013 3:52 PM, "Dominique Orban" 
wrote:
>
>
>
> On 25 October, 2013 at 9:31:16 AM, Oscar Benjamin (
oscar.j.benja...@gmail.com) wrote:
> >
> >On 24 October 2013 21:04, Dominique Orban wrote:
> >>
> >> I hope this is the right place to ask for help. I'm not finding much
comfort in the PyPi documentation or in Google searches. I uploaded my
package `pykrylov` with `python setup.py sdist upload`. Installing it
locally with `python setup.py` install works fine but `pip install
pykrylov` breaks with the messages below. I since removed it from PyPI but
I get the same error message if I try installing from the git repository.
I'm hoping someone can put me on track as I've no idea what's wrong. You
can see my setup.py here:
> >>
> >>
https://github.com/dpo/pykrylov/blob/ea553cdb287f6e685406ceadcb297fd6704af52d/setup.py
> >>
> >> I'm using Python 2.7.5 on OSX installed with Homebrew and pip 1.4.1.
Attempts to upgrade setuptools or pip result in another error message
(AttributeError: 'str' object has no attribute 'rollback')...
> >
> >Can you install a more recent setuptools by downloading it and running
> >the setup.py yourself?
> >
https://pypi.python.org/packages/source/s/setuptools/setuptools-1.1.6.tar.gz
>
> Thanks for the suggestion. I'm still getting the same error with
setuptools 1.1.6. I also tried "upgrading" Numpy (since I'm using Numpy
distutils) by installing from their git repository, and I'm still getting
the same error.
>
> Is anything obviously wrong with the setup.py?

I don't know but I'm not totally clear what you mean. Previously you
described multiple problems: with pip, setuptools and pykrylov. Have you
successfully installed setuptools now?

If the "same error" is with pykrylov's setup.py have you tried debugging
it? E.g. 'python -m pdb setup.py install'

Oscar


Re: [Distutils] AttributeError: 'tuple' object has no attribute 'split'

2013-10-25 Thread Oscar Benjamin
On 24 October 2013 21:04, Dominique Orban  wrote:
>
> I hope this is the right place to ask for help. I'm not finding much comfort 
> in the PyPi documentation or in Google searches. I uploaded my package 
> `pykrylov` with `python setup.py sdist upload`. Installing it locally with 
> `python setup.py` install works fine but `pip install pykrylov` breaks with 
> the messages below. I since removed it from PyPI but I get the same error 
> message if I try installing from the git repository. I'm hoping someone can 
> put me on track as I've no idea what's wrong. You can see my setup.py here:
>
> https://github.com/dpo/pykrylov/blob/ea553cdb287f6e685406ceadcb297fd6704af52d/setup.py
>
> I'm using Python 2.7.5 on OSX installed with Homebrew and pip 1.4.1. Attempts 
> to upgrade setuptools or pip result in another error message (AttributeError: 
> 'str' object has no attribute 'rollback')...

Can you install a more recent setuptools by downloading it and running
the setup.py yourself?
https://pypi.python.org/packages/source/s/setuptools/setuptools-1.1.6.tar.gz
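
That is, something like (assuming a Unix-ish shell; any download tool
will do):

    $ curl -O https://pypi.python.org/packages/source/s/setuptools/setuptools-1.1.6.tar.gz
    $ tar -xzf setuptools-1.1.6.tar.gz
    $ cd setuptools-1.1.6
    $ python setup.py install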


Oscar


Re: [Distutils] post install hook

2013-10-21 Thread Oscar Benjamin
On 21 October 2013 11:38, Thomas Güttler  wrote:
> Hi,
>
> I can live without a post-install hook.
>
> But I think the documentation of setuptools
> should contain information about this.
>
> https://bitbucket.org/pypa/setuptools/issue/92/docs-post-install-hook

That seems reasonable to me. Why don't you write a patch (as suggested
in the issue)?


Oscar


Re: [Distutils] Deprecate and Block requires/provides

2013-10-17 Thread Oscar Benjamin
On 17 October 2013 16:53, Donald Stufft  wrote:
>
> On Oct 17, 2013, at 11:49 AM, Michael Foord  wrote:
>
> Package upload certainly worked, and that is what is going to be broken.
>
>
> So would you be ok with deprecating and removing to equal "this metadata
> silently
> gets sent to /dev/null" in order to not break uploads for what would have
> affected
> roughly 4% of the total new releases on PyPI in 2013.

What about emitting a warning on upload/download for deprecated
metadata and a warning on the PyPI page for the distribution?

I don't know whether it's possible to implement that server side or if
it would only apply to newer versions of distutils/setuptools etc. but
it would give people who are still uploading sdists with this metadata
a chance to make the fix in their own time rather than suddenly being
unable to release updates.


Cheers,
Oscar


Re: [Distutils] PEP 453 quirk: pyvenv --no-download with an upgraded system pip

2013-09-18 Thread Oscar Benjamin
On 18 September 2013 15:26, Nick Coghlan  wrote:
> In creating the next draft of PEP 453, I noticed an odd quirk of the
> proposed "pyvenv --no-download" option: it bootstraps the version of
> pip *that was provided with Python*, rather than the version currently
> installed in the base Python installation.
>
> That seems incredibly strange to me, and I expect it will confuse
> users as well. "I'm using Python 3.4 and have upgraded pip to 1.6, but
> my virtual environments are only getting pip 1.5 when I use the
> '--no-download' option. HALP!".

Could the getpip script have a way to update its internal/bundled pip
when updating the other pip? Or perhaps an explicit update bundle
command?

Could getpip be the recommended way to update pip/setuptools generally
(perhaps solving some of the other problems that can occur) and always
update its bundle?


Oscar


Re: [Distutils] PEP 453 Round 2 - Explicit bootstrapping of pip in Python installations

2013-09-16 Thread Oscar Benjamin
On 16 September 2013 12:27, Nick Coghlan  wrote:
>
> On 16 Sep 2013 19:55, "Oscar Benjamin"  wrote:
>>
>> I still don't understand why this is preferable to just shipping a
>> recent stable pip/setuptools and providing instructions to update
>> post-install. That would surely be a lot simpler not just to implement
>> but for others to understand.
>>
>> If I'm happy to use the bundled version of pip then why do I need to
>> do anything to make it usable after having already run the installer?
>> If I want the new version then why is 'py -m getpip' better than 'py
>> -m pip install -U pip'?
>
> You don't, the installer bootstraps it for you. Running it explicitly should
> only be needed when building from source, or bootstrapping a previously
> pip-free virtual environment.

Oh okay. So basically the normal thing is that pip just gets installed
automatically when you install Python. For most people the whole of
the "explicit bootstrapping" described in the PEP is an implementation
detail that occurs *implicitly* during installation? The only point of
relevance from a user perspective is that running the installer
without a network connection leaves you with an older version of
pip/setuptools.

> The complicated bootstrapping dance is both to make pip easy to leave out if
> people really don't want it and to avoid the CPython platform installers and
> pip getting into a fight about who is responsible for the files.

Surely this is only relevant for people using the installers since if
you're capable of building CPython from source then you should be
plenty capable of installing pip/setuptools from source as well.
Likewise if you're installing via a distro package manager then you're
not going to use this bootstrapping script. If this is just for the
Windows and OSX installers then can they not just have a tickbox for
installing the bundled pip and another tickbox for updating it (both
on by default)? If you need to update it after installation then you
can just use pip to update itself.

Who, apart from the Windows and OSX installers, is going to use this
bootstrap script?

When you say that pip and the installer could get into a fight about
responsibility do you mean for uninstallation? Presumably if you're
uninstalling Python then you also want to uninstall the associated pip
installation so it's fine for the installer to just delete everything
to do with pip anyway right?


Oscar


Re: [Distutils] PEP 453 Round 2 - Explicit bootstrapping of pip in Python installations

2013-09-16 Thread Oscar Benjamin
On 15 September 2013 16:33, Donald Stufft  wrote:
> So there've been a number of updates to PEP453, so i'm posting it here again 
> for more discussion:
>

> Explicit Bootstrapping
> ==
>
> An additional module called ``getpip`` will be added to the standard library
> whose purpose is to install pip and any of its dependencies into the
> appropriate location (most commonly site-packages). It will expose a single
> callable named ``bootstrap()`` as well as offer direct execution via
> ``python -m getpip``. Options for installing it such as index server,
> installation location (``--user``, ``--root``, etc) will also be available
> to enable different installation schemes.
>
> It is believed that users will want the most recent versions available to be
> installed so that they can take advantage of the new advances in packaging.
> Since any particular version of Python has a much longer staying power than
> a version of pip in order to satisfy a user's desire to have the most recent
> version the bootstrap will contact PyPI, find the latest version, download it,
> and then install it. This process is security sensitive, difficult to get
> right, and evolves along with the rest of packaging.
>
> Instead of attempting to maintain a "mini pip" for the sole purpose of
> installing pip the ``getpip`` module will, as an implementation detail,
> include a private copy of pip and its dependencies which will be used to
> discover and install pip from PyPI. It is important to stress that this
> private copy of pip is *only* an implementation detail and it should *not*
> be relied on or assumed to exist.
>
> Not all users will have network access to PyPI whenever they run the
> bootstrap. In order to ensure that these users will still be able to
> bootstrap pip the bootstrap will fallback to simply installing the included
> copy of pip. The pip ``--no-download`` command line option will be supported
> to force installation of the bundled version, without even attempting to
> contact PyPI.
>
> This presents a balance between giving users the latest version of pip,
> saving them from needing to immediately upgrade pip after bootstrapping it,
> and allowing the bootstrap to work offline in situations where users might
> already have packages downloaded that they wish to install.

I still don't understand why this is preferable to just shipping a
recent stable pip/setuptools and providing instructions to update
post-install. That would surely be a lot simpler not just to implement
but for others to understand.

If I'm happy to use the bundled version of pip then why do I need to
do anything to make it usable after having already run the installer?
If I want the new version then why is 'py -m getpip' better than 'py
-m pip install -U pip'?


>
> Recommendations for Downstream Distributors
> ===
>
> A common source of Python installations are through downstream distributors
> such as the various Linux Distributions [#ubuntu]_ [#debian]_ [#fedora]_, OSX
> package managers [#homebrew]_, or python specific tools [#conda]_. In order to
> provide a consistent, user friendly experience to all users of Python
> regardless of how they attained Python this PEP recommends and asks that
> downstream distributors:
>
> * Ensure that whenever Python is installed pip is also installed.
>
>   * This may take the form of separate packages with dependencies on each
> other so that installing the Python package installs the pip package
> and installing the pip package installs the Python package.
>
> * Do not remove the bundled copy of pip.

Are distros really going to be okay with this idea? Many of them have
CPython in their base install so you're basically asking that they
always ship a parallel package management system that is outside of
their control.

Personally I think that it's unfortunate that distro package managers
don't have a --user option like pip does but I've always assumed that
they had some good reason for not wanting any old user to be able to
easily install things without admin/root privileges. This would break
that arrangement since any user would be able to use 'pip install
--user' to install anything from PyPI. I imagine that lots of
deployment sites would want to disable this even if the distro has it
enabled by default.


Oscar


Re: [Distutils] Comments on PEP 426

2013-09-08 Thread Oscar Benjamin
On 8 September 2013 12:07, Paul Moore  wrote:
> On 7 September 2013 23:36, Carl Meyer  wrote:
>
> The *other* problem is that a custom implementation of an egg-info
> command is pretty much certain to be incompatible with pip injecting
> setuptools. And that's the big issue, injecting setuptools actively
> prevents people writing their own implementations of the relevant
> command line APIs.

What makes you say that? I haven't checked but I assumed that the pip
injecting setuptools situation wouldn't override a custom command
provided by:

setup(..., cmdclass={'egg_info': MyEggInfo}, ...)

Or is it that the action of the egg_info command is not fully
specified anywhere? AFAICT the command is reasonably stable, so
someone could implement their own without too much difficulty (not
that package authors should be expected to do this).
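
For reference, a minimal sketch of such a custom command (MyEggInfo
simply wraps the setuptools implementation here, and the project name
is illustrative):

    from setuptools import setup
    from setuptools.command.egg_info import egg_info

    class MyEggInfo(egg_info):
        def run(self):
            # custom metadata handling could go here
            egg_info.run(self)

    setup(name='myproj', version='0.1',
          cmdclass={'egg_info': MyEggInfo})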

>>> It seems like the egg_info command is the sole
>>> reason, or did I miss something?
>>
>> Also the installation command (not having to detect what the setup.py
>> uses and decide accordingly whether to supply
>> --single-version-externally-managed)
>
> That to me is a major issue with setuptools, as it *behaves
> differently* than distutils does for the same command line. But
> setting that aside, again a setup.py could implement a custom cmdclass
> that simply ignores the --single-version-externally-managed flag on
> install. And again, doing so could easily be incompatible with
> setuptools injection.

Can you elaborate on how it behaves differently? Do you mean that when
the --single-version-externally-managed option is not provided the
install command would do something different in a setuptools setup.py
compared with a vanilla distutils setup.py?

>> and the installed format, which I
>> already mentioned but you snipped in your reply. (Setuptools with
>> --single-version-externally-managed installs metadata in a .egg-info
>> dir, plain distutils just installs a single file of metadata, not a
>> directory; the directory gives pip a place to put the results of --record).
>
> That is an issue which hasn't been picked up on yet. But I'd argue
> that pip could easily check what version the setup.py created and
> adapt accordingly (upgrading the single-file format to the directory
> format). Sure, it doesn't, because the setuptools injection makes it
> unnecessary. But that's getting cause and effect backwards...

It wouldn't be too hard for pip to do this. The relevant code is here:
https://github.com/pypa/pip/blob/develop/pip/req.py#L643

However pip doesn't get to that point without having first called
egg_info. So the setup.py in question would have to implement
setuptools-style .egg-info format anyway unless the egg_info command
were permitted to supply metadata in a different format.


Oscar


Re: [Distutils] Comments on PEP 426

2013-09-05 Thread Oscar Benjamin
On 5 September 2013 13:34, Daniel Holth  wrote:
> On Thu, Sep 5, 2013 at 5:36 AM, Oscar Benjamin
>  wrote:
>
> --single-version-externally-managed just means "install everything
> into a flat site-packages" rather than installing them into their own
> (egg) directories.

Does that mean that the option could be safely ignored by distutils?

Obviously if X has a vanilla distutils setup.py then this is what
'python setup.py install' would do anyway. Or is it possible that X
could be installed as a dependency of Y that uses setuptools in such a
way that this option wouldn't get passed to X's setup.py install
command? In that case presumably pip would expect the 'setup.py
install' command to do something different.

> If you would like to advance the state of the art of distutils you
> should consider implementing a dist-info command that builds a
> dist-info directory instead of an egg-info directory [it's possible
> pip will recognize this automatically if it uses pkg_resources to
> inspect the dependencies].

Pip only checks for the '.egg-info' extension so it won't pick up any
PEP 376 metadata files:
https://github.com/pypa/pip/blob/develop/pip/req.py#L646

> You could also try for a bdist_wheel
> feature -- Vinay's distil has shown how this can be done with the
> install command by passing --install-platlib=x etc. as per the wheel
> layout, by converting egg-info to dist-info, by adding a manifest, and
> zipping the result.

I was really just trying to identify what is the minimum required to
work right now. Does pip or anything else ever use bdist_wheel during
installation from sdist?

> In setuptools you can just write the new command plugins once as an
> add-on package and have them available in every sdist.
>
> You might also look into supporting installs by an installer without
> running the hated setup.py install command. The installer could always
> generate an intermediate wheel, or it could avoid some of the (usually
> very fast) copying by defining and generating a manifest of category
> -> [{source path : destination path relative to the scheme}, ...] as
> in "purelib" : [ { "src/__init__.py" -> "__init__.py'"}, ...]; the
> installer would be able to interpret the manifest in much the same way
> as a wheel package.

Apart from uploading wheels to PyPI how can you support installation
with pip without 'python setup.py install' (or 'python setup.py
bdist_egg' for easy_install)?


Oscar


Re: [Distutils] Comments on PEP 426

2013-09-05 Thread Oscar Benjamin
On 4 September 2013 19:16, Éric Araujo  wrote:
> Le 30/08/2013 03:23, Paul Moore a écrit :
>> On 30 August 2013 00:08, Nick Coghlan  wrote:
>>> We also need to officially bless pip's trick of forcing the use of
>>> setuptools for distutils based setup.py files.
>> Do we? What does official blessing imply? We've managed for years without
>> the trick being "official"...
>>
>> The main reason it is currently used is to allow setup.py install to
>> specify --record, so that we can get the list of installed files. If
>> distutils added a --record flag, for example, I don't believe we'd need the
>> hack at all. (Obviously, we'd still need setuptools so we could use wheel
>> to build wheels, but that's somewhat different as it's a new feature).
>> Maybe a small distutils patch is better than blessing setuptools here?
>
> distutils’ install command provides --record.

Indeed it does. I've created a minimal pip-compatible setup.py here:
https://github.com/oscarbenjamin/setuppytest
https://github.com/oscarbenjamin/setuppytest/blob/master/setuppytest/setup.py

The parts that pip requires that are not included in distutils are:
1) The egg_info command.
2) Creating the .egg-info directory during the install command.
3) --single-version-externally-managed

I didn't test what happens if the sdist is installed to satisfy a
dependency (I'm not sure how to do that without uploading to PyPI) but
it presumably would do something different from
--single-version-externally-managed in that case.

The precise invocations that the setup.py needs to support are:

$ python setup.py egg_info --egg-base $EGG_DIRECTORY

$ python setup.py install --record $RECORD_FILE \
      --single-version-externally-managed \
      [--install-headers $HEADERS_DIR]

The --install-headers option is provided when installing into a virtualenv.
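
In outline, a setup.py supporting just those two invocations could look
something like this (an untested sketch, not the setuppytest code; the
package name "myproj" and single-module layout are assumptions made for
illustration):

    import os
    import shutil
    import sys
    from distutils.sysconfig import get_python_lib

    PKG_INFO = "Metadata-Version: 1.0\nName: myproj\nVersion: 0.1\n"

    def egg_info(egg_base):
        # handles: setup.py egg_info --egg-base $EGG_DIRECTORY
        egg_dir = os.path.join(egg_base, "myproj.egg-info")
        if not os.path.isdir(egg_dir):
            os.makedirs(egg_dir)
        with open(os.path.join(egg_dir, "PKG-INFO"), "w") as f:
            f.write(PKG_INFO)

    def install(record):
        # handles: setup.py install --record $RECORD_FILE \
        #              --single-version-externally-managed
        site_packages = get_python_lib()
        target = os.path.join(site_packages, "myproj.py")
        shutil.copy("myproj.py", target)
        egg_dir = os.path.join(site_packages, "myproj-0.1.egg-info")
        if not os.path.isdir(egg_dir):
            os.makedirs(egg_dir)
        pkg_info = os.path.join(egg_dir, "PKG-INFO")
        with open(pkg_info, "w") as f:
            f.write(PKG_INFO)
        # the install record is how pip finds out what was installed
        with open(record, "w") as f:
            f.write(target + "\n" + pkg_info + "\n")

    if __name__ == "__main__":
        cmd, args = sys.argv[1], sys.argv[2:]
        if cmd == "egg_info":
            egg_info(args[args.index("--egg-base") + 1])
        elif cmd == "install":
            install(args[args.index("--record") + 1])
        else:
            sys.exit("unsupported command: %r" % (cmd,))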


Oscar


Re: [Distutils] Comments on PEP 426

2013-09-04 Thread Oscar Benjamin
On 4 September 2013 13:51, Paul Moore  wrote:
> On 4 September 2013 12:58, Nick Coghlan  wrote:
>>
>> However, a more significant problem is that the whole idea is based on
>> pure vapourware. That ideal "it's just like setuptools, but with a
>> more elegant implementation that could be used to replace distutils in
>> Python 3.4" library *doesn't exist*, and I have no desire to wait
>> around in the (likely vain) hope of somebody stepping up to create it.
>
> The problem with not even defining an interface is that there is no
> upgrade path. Users use setuptools/pip or wait for the new solution.
> Nobody can write their own replacement for setuptools (bento, for
> example) and be sure it will integrate with pip.
>
> It's a longer term issue for certain, but it *is* important. If we
> don't get away from the implementation defining the behaviour, we'll
> just perpetuate the problem of nobody being able to innovate because
> everything needs to implement most of setuptools.

It doesn't need to be "just like setuptools". Distutils could be made
to be "just like distutils" but with the addition of the *minimum*
that is required to make it work in the new packaging system. Then a
vanilla distutils setup.py can gain the relevant new commands without
the package author needing to put setuptools in the setup.py and
without the user needing to install setuptools and wheel.

Distutils is already just like setuptools for a subset of what
setuptools does. It could be made just like setuptools for a slightly
larger subset that also includes the features deemed to be necessary
in the future. Then pip's setuptools monkey-patching could just be
phased out along with the older Python versions where it is necessary.

Projects using Cython often use Cython's distutils extension which
isn't directly compatible with many setuptools commands (although
setuptools-cython apparently tries to address this). These projects
could really benefit from wheel support but they won't get it from
setuptools. It could be added to Cython's distutils extension but then
that doesn't help the packages with non-Cython C extensions which
often don't use setuptools. I think a lot of projects would benefit
from bdist_wheel being added to distutils even if it's only there for
future Python versions.

Until the "minimum that is required" of setup.py and the setup
function has been identified I don't see how you can rule out the
possibility of bringing distutils up to that specification.

>> Instead, I think the far more pragmatic option at this point is to
>> just tell people "your setup.py must run correctly with setuptools
>> imported in that process. If it doesn't, it is your setup.py is
>> broken, not the build tool that imported setuptools, or setuptools
>> itself for monkey-patching distutils".
>
> The big question here, I suppose is do any such projects exist?
> There's a lot of nervousness (I hesitate to use the term FUD) about
> "how will projects that don't work with setuptools react?" but no
> actual evidence of such projects existing. I believe that cx_Oracle
> had an issue in the past. I must try to test that out again and report
> it to them if so. Maybe MAL's projects are another potential test
> case. And maybe numpy. It's hard to be sure, because projects in this
> category are likely the more complex ones, and as a result are
> probably also pretty hard to build (and consequently to test...)

Numpy already has logic to try and do the right thing with or without
setuptools and with different versions of setuptools. If you look
here:
https://github.com/numpy/numpy/blob/master/numpy/distutils/core.py#L6

At the moment it looks like:

if 'setuptools' in sys.modules:
    have_setuptools = True
    from setuptools import setup as old_setup
    # easy_install imports math, it may be picked up from cwd
    from setuptools.command import easy_install
    try:
        # very old versions of setuptools don't have this
        from setuptools.command import bdist_egg
    except ImportError:
        have_setuptools = False
else:
    from distutils.core import setup as old_setup
    have_setuptools = False

Presumably the next step is to add:

try:
    from wheel.bdist_wheel import bdist_wheel
except ImportError:
    have_wheel = False
else:
    have_wheel = True

followed by:

if have_wheel:
    numpy_cmdclass['bdist_wheel'] = bdist_wheel

It looks a bit of a mess but it's worth bearing in mind that the numpy
folks will basically do whatever is required to make all of this stuff
work (not that it's okay to antagonise them with arbitrary changes).

The more relevant concern is how any of this affects smaller and less
well-maintained projects. Setuptools addresses various problems in
distutils for pure Python projects but AFAIK it doesn't make it any
easier to build extension modules. Numpy has all the logic above
precisely because they're trying to emulate the monkey-patching
behaviour of setuptools. However smaller projects will often just use
distut

Re: [Distutils] Comments on PEP 426

2013-09-04 Thread Oscar Benjamin
On 4 September 2013 12:20, Paul Moore  wrote:
> On 4 September 2013 12:05, Oscar Benjamin  wrote:
>> Also would this be sufficient to decouple pip and setuptools (a
>> reasonable goal in itself)? Or does pip depend on setuptools in more
>> ways than the distutils monkey-patching?
>
> I've not got round to reviewing the code (it's on my list) but I think
> it would be sufficient. There is a fair amount of internal pip use of
> *pkg_resources* (for versions, requirements parsing, and such like)
> but that's a somewhat different matter - it would be trivial to
> extract and vendor pkg_resources if we so wished.
>
> We may still need the "inject setuptools" hack in certain cases,
> simply because pure-distutils packages simply don't provide that
> interface out of the box. And that may be a major issue because
> there's no obvious way to detect when a project needs that hack
> without the project saying so, or us making an assumption. But it's
> much less of a technical issue at that point.

What I meant was: if distutils gained the minimal missing setuptools
commands then would that be sufficient to decouple setuptools and pip?
I guess you've answered that above as "probably".

I don't know what commands pip requires but for the sake of argument
let's imagine that it's bdist_wheel and egg_info. Then if distutils as
of Python 3.4 (more realistically 3.5...) gained those commands then
pip would be able to know very easily whether it needed to inject
setuptools:

if sys.version_info < (3, 4):
    inject_setuptools()

Or perhaps

from distutils.somewhere import cmdclasses

if not ('bdist_wheel' in cmdclasses and 'egg_info' in cmdclasses):
    inject_setuptools()


Oscar


Re: [Distutils] Comments on PEP 426

2013-09-04 Thread Oscar Benjamin
On 4 September 2013 11:30, Donald Stufft  wrote:
>
> On Sep 4, 2013, at 6:21 AM, "M.-A. Lemburg"  wrote:
>
>> I quite like the idea of using setup.py as high level
>> interface to a package for installers to use, since that
>> avoids having to dig into the details built into the
>> setup.py code (and whether it uses setuptools, bento,
>> custom code, etc.).
>
> I like it as a temporary solution that is backwards compatible with the old
> tooling but I don't think it should be the interface going into the future.

It shouldn't be the recommended interface for new packages but it will
need to be supported at least as a backward compatibility mode for a
*long* time.

> If I recall correctly Tarek started out trying to improve distutils and ended
> up inadvertently breaking things which ended up getting his changes
> backed out and the block on changes to distutils was placed and the
> distutils2/packaging effort was started.

As I remember it, the problem Tarek had was that a __private
function/method somewhere in distutils was being used by setuptools.
Tarek changed the behaviour of the private function which resulted in
breakage for setuptools so the changes were backed out. It was then
decided that if even modifying private interfaces could not be done in
a backward-compatible way then there could basically be no changes to
distutils unless absolutely required for the purposes of building
CPython itself.

You could argue that this was perhaps something of an over-reaction.
For example the changes proposed here are to add new code/commands
rather than modify existing ones. The potential for breakage is a lot
lower in this case.

Also Tarek's problems were IIRC compounded by setuptools being
unmaintained at the time so that the onus was entirely on Tarek not to
make any change that could break anything for setuptools. At the time
it seemed to be considered that there could be no expectation that
setuptools itself could make reasonable adjustments in preparation for
or after any change in distutils. I think most projects would
understand if their setup.py breaks in a new Python version because
they were accessing private methods. The problem with setuptools was
that a lot of projects who only used the documented setuptools APIs
would have experienced breakage and then been rightly upset about the
situation.

> All of this completely skirts the fact that any change to distutils would only
> be available in 3.4+ or 3.5+ which makes it's value practically zero. It's
> not like other modules in the library where you can reasonably expect
> someone to have a backport installed if you need to use the new features.
> Setuptools has already gone through the long process of getting everyone
> to get it installed, why would we want to go through that again for a system
> that should eventually be deprecated entirely?

If there is a minimal interface that setup.py should support then I
think it's very reasonable to say that a simple distutils setup.py
script should export that interface. Specifically it should be
possible to do 'python setup.py bdist_wheel' and 'python setup.py
egg_info'. Projects with a distutils-based setup.py will be around for
many future Python versions.

Also would this be sufficient to decouple pip and setuptools (a
reasonable goal in itself)? Or does pip depend on setuptools in more
ways than the distutils monkey-patching?


Oscar


Re: [Distutils] Comments on PEP 426

2013-09-04 Thread Oscar Benjamin
On 4 September 2013 08:51, M.-A. Lemburg  wrote:
> On 04.09.2013 09:27, Paul Moore wrote:
>> On 4 September 2013 08:13, M.-A. Lemburg  wrote:
>>
>>> I guess that's what the suggestion is all about: avoiding
>>> reinventing the wheel, endless discussions and instead going
>>> for standard software refactoring techniques.
>>
>> To my mind the biggest issue (and again, I'm with Antoine here -
>> people keep forgetting this) is that there are no defined API specs to
>> work to. You can't implement "just the important bits" of setuptools
>> without knowing what those bits are, and what the interface to them
>> is.
>
> I don't quite follow you there. setuptools does come with
> documentation and the code is available to be read and reused.
>
> The situation is similar for distutils itself. Most of the
> details are laid out in the code. You just need to take
> a bit of time and learn the concepts - which is not all
> that hard.

An implementation is not a defined API spec even if it does come with
some documentation. Tools like pip will need to install projects whose
setup.py uses distutils, or setuptools, or monkey-patched versions of
distutils/setuptools, or a reimplementation of some version of
distutils, or neither distutils nor setuptools. What is needed is an
independent specification of the minimal command-line interface that a
setup.py should provide, one that is consistent with how things work
now for each of those types of setup.py and sufficient for the needs
of past, present and future packaging tools.

There is documentation for e.g. the egg_info command:
https://pythonhosted.org/setuptools/setuptools.html#egg-info-create-egg-metadata-and-set-build-tags
However this is really just a description of how to use the command
rather than a specification of what expectations can be made of it by
other tools and libraries.

The problem with implementation defined specifications is that there's
no way to reasonably fix or improve the implementation. It is never
possible to say that an implementation conforms or contravenes a
standard if the implementation *is* the standard. When pip fails to
install a project X from PyPI it is not possible to say which of X or
pip (or distutils/setuptools if used) is buggy since there is no
explicitly documented required behaviour anywhere apart from the
general expectation that 'pip install X' should work.

As a case in point 'pip install bento' does not work (fails to find
the egg info directory). I haven't discovered the reason for this yet
but I wouldn't be surprised if the reason is that bento's setup.py
differs in its behaviour in some way that isn't specified in any API
docs. If the answer is that the bento authors should read the whole of
the setuptools codebase and ensure that what they produce is exactly
the same, then they may as well use setuptools, and there's basically
no way for anyone to distribute sdists that don't use
distutils/setuptools.

If the expected minimal behaviour of setup.py including the relevant
parts that currently *need* to come from setuptools (i.e. the reasons
for pip to monkeypatch distutils with setuptools) were independently
specified in a PEP then those features could be incorporated into
future versions of distutils without propagating the
implementation-defined problems of the past. It would be possible for
pip and other tools to make assumptions in a backward- and
forward-compatible way and to fix bugs in all tools and libraries
since it would be clear what is a bug and what is not.


Oscar


Re: [Distutils] Comments on PEP 426

2013-08-31 Thread Oscar Benjamin
On 31 August 2013 16:18, Nick Coghlan  wrote:
>
> Even the current bento issue mentioned in this thread appears to be Windows 
> specific.

I don't think you read what I wrote properly.

There are two aspects to the bento issue:
1) Somehow pip isn't picking up bento's egg info directory.
2) There's a bug in pip where it tries to os.remove() a file before closing it.

The bug in 2) only shows up as an error on Windows and only when the
code path from 1) is triggered. However it is definitely a bug in pip.

For issue 1) I don't know enough about setuptools to understand what's
different about bento's setup.py. The egg_info command works AFAICT:

$ curl https://pypi.python.org/packages/source/b/bento/bento-0.1.1.tar.gz > b.tgz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  568k  100  568k    0     0  3039k      0 --:--:-- --:--:-- --:--:-- 3324k
$ tar -xzf b.tgz
$ cd bento-0.1.1/
$ ls
LICENSE.txt  PACKAGERS.txt  README.rst  THANKS  bento  bento.info
bentomakerlib  bootstrap.py  bscript  setup.py
$ py -2.7 setup.py egg_info
running egg_info
running build
running config
$ ls
LICENSE.txtREADME.rst  bento   bento.info bootstrap.py  build
PACKAGERS.txt  THANKS  bento.egg-info  bentomakerlib  bscript   setup.py
$ ls bento.egg-info/
PKG-INFO  SOURCES.txt  dependency_links.txt  entry_points.txt
ipkg.info  not-zip-safe  requires.txt  top_level.txt


Oscar


Re: [Distutils] Comments on PEP 426

2013-08-31 Thread Oscar Benjamin
On 31 August 2013 16:03, Antoine Pitrou  wrote:
> Oscar Benjamin  gmail.com> writes:
>>
>> > I tend to disagree. Such bugs are not fixed, not because they shouldn't /
>> > can't be fixed, but because distutils isn't really competently maintained
>> > (or not maintained at all, actually; Éric sometimes replies on bug entries
>> > but he doesn't commit anything these days).
>>
>> So is that particular issue a lost cause?
>
> Why would it be?

Because there's no maintainer to commit or reject a patch (unless I've
misunderstood your comment above).

>> > The idea that "distutils shouldn't change" was more of a widely-promoted
>> > propaganda item than a rational decision, IMO. Most setup scripts wouldn't
>> > suffer from distutils changes or improvements; the few that *may* suffer
>> > belong to large projects which probably have other items to solve when a
>> > new Python comes out, anyway.
>>
>> It's not just the setup script for a particular project. It's the
>> particular combination of compilers and setup.py invocations used by
>> any given user for any given setup.py from each of the thousands of
>> projects that do anything non-trivial in their setup.py.
>
> I don't know what those "thousands of projects" are. Most Python projects
> don't even need a compiler, except Python itself.

Well thousands may be an exaggeration :)

>> For example
>> in the issue I mentioned above the spanner in the works came from PJE
>> who wanted to use --compiler=mingw32 while surreptitiously placing
>> Cygwin's gcc on PATH:
>> http://bugs.python.org/issue12641#msg161514
>> It's hard for distutils to react to outside changes in e.g. external
>> compilers because of the need to try and prevent breaking countless
>> unknown and obscure setups for each end user.
>
> This sounds like a deformation of reality. Most users don't have
> "unknown and obscure setups", they actually have quite standardized
> and well-known ones (think Windows, OS X, mainstream Linux distros).

True.

> Sure, in some communities (scientific programming, I suppose) there
> may be obscure setups, but those communities have already grown their
> own bag of tips and tricks, AFAIK.

Yes they do. Our trick at my work is to have professionals build
everything for the obscure setups. My experience with building is just
about installing on my own Windows/Ubuntu desktop machines.

The point I was making is really that breakage occurs on a per-user
basis rather than a per-project basis. Reasoning about what a change
in distutils will do is hard because you're trying to reason about end
user setups rather than just large well-maintained projects. A new
build tool outside the stdlib wouldn't be anywhere near as
constrained.


Oscar


Re: [Distutils] Comments on PEP 426

2013-08-31 Thread Oscar Benjamin
On 31 August 2013 14:24, Antoine Pitrou  wrote:
> Oscar Benjamin  gmail.com> writes:
>>
>> It will always be possible to ship a setup.py script that can
>> build/install from an sdist or VCS checkout. The issue is about how to
>> produce an sdist with a setup.py that is guaranteed to work with past,
>> current, and future versions of distutils/pip/setuptools/some other
>> installer so that you can upload it to PyPI and people can run 'pip
>> install myproj'. It shouldn't be necessary for the package author to
>> use distutils/setuptools in their setup.py just because the user wants
>> to install with pip/setuptools or vice-versa.
>
> Agreed... But then, deprecating setup.py in favour of setup.cfg is a
> more promising path for cross-tool compatibility, than trying to promote
> one tool over another.

The difference between this

# setup.py
if sys.argv[1] == 'install':
    from myproj.build import build
    build()

and something like this

# setup.cfg
[install]
command = "from myproj.build import build; build()"

is that one works now for all relevant Python versions and the other
does not. With the setup.cfg end users cannot simply do 'python
setup.py install' unless they have some additional library that can
understand the setup.cfg. Even if future Python versions gain a new
stdlib module/script for this current versions won't have it.

That is why I agree with Nick that the best thing to do is to
explicitly document what is *currently* required to make things work
and guarantee that it will continue to work for the foreseeable
future. Then alternative better ways to specify the build commands in
future can be considered as hinted in the PEP:
http://www.python.org/dev/peps/pep-0426/#metabuild-system

>> Distutils is tied down with backward compatibility because of the
>> number of projects that would break if it changed. Even obvious
>> breakage like http://bugs.python.org/issue12641 goes unfixed for years
>> because of worries that fixing it for 1 users would break some
>> obscure setup for 100 users (no matter how broken that other setup
>> might otherwise be).
>
> I tend to disagree. Such bugs are not fixed, not because they shouldn't /
> can't be fixed, but because distutils isn't really competently maintained
> (or not maintained at all, actually; Éric sometimes replies on bug entries
> but he doesn't commit anything these days).

So is that particular issue a lost cause?

> The idea that "distutils shouldn't change" was more of a widely-promoted
> propaganda item than a rational decision, IMO. Most setup scripts wouldn't
> suffer from distutils changes or improvements; the few that *may* suffer
> belong to large projects which probably have other items to solve when a
> new Python comes out, anyway.

It's not just the setup script for a particular project. It's the
particular combination of compilers and setup.py invocations used by
any given user for any given setup.py from each of the thousands of
projects that do anything non-trivial in their setup.py. For example
in the issue I mentioned above the spanner in the works came from PJE
who wanted to use --compiler=mingw32 while surreptitiously placing
Cygwin's gcc on PATH:
http://bugs.python.org/issue12641#msg161514
It's hard for distutils to react to outside changes in e.g. external
compilers because of the need to try and prevent breaking countless
unknown and obscure setups for each end user.

Although in that particular issue I think it's really just a
responsibility thing: the current breakage can be viewed as externally
caused. Fixing it trades a large amount of breakage that is gcc's
fault for a small amount of breakage that would be Python's fault.


Oscar


Re: [Distutils] Comments on PEP 426

2013-08-31 Thread Oscar Benjamin
On 31 August 2013 12:03, Antoine Pitrou  wrote:
> Donald Stufft  stufft.io> writes:
>> >
>> > The sticking point is that you don't *have* to install something 
>> > third-party
>> > to get yourself working on some packaging. Being able to benefit from
>> > additional features *if* you install something else is of course fine.
>>
>> Out of the four you listed I'm most familiar with Django's packaging which
>> has gone to significant effort *not* to require setuptools. Most packages
>> aren't willing to go through that effort and either simply require setuptools
>> or they include a distutils fallback which often times doesn't work correctly
>> except in simple cases*.
>>
>> * Not that it couldn't work correctly just they don't use it so they never
> personally
>> experience any brokenness and most people do not directly execute setup.py
>> so the installers tend to handle the differences/brokenness for them.
>
> Executing setup.py directly is very convenient when working with a
> development or
> custom build of Python, rather than install the additional "build tools" 
> (which
> may have their own compatibility issues).
> For example I can easily install Tornado with Python 3.4 that way.
>
> I'm not saying most people will use setup.py directly, but there are 
> situations
> where it's good to do so, especially for software authors.

It will always be possible to ship a setup.py script that can
build/install from an sdist or VCS checkout. The issue is about how to
produce an sdist with a setup.py that is guaranteed to work with past,
current, and future versions of distutils/pip/setuptools/some other
installer so that you can upload it to PyPI and people can run 'pip
install myproj'. It shouldn't be necessary for the package author to
use distutils/setuptools in their setup.py just because the user wants
to install with pip/setuptools or vice-versa.

Distutils is tied down with backward compatibility because of the
number of projects that would break if it changed. Even obvious
breakage like http://bugs.python.org/issue12641 goes unfixed for years
because of worries that fixing it for 1 users would break some
obscure setup for 100 users (no matter how broken that other setup
might otherwise be). That kind of breakage is totally unacceptable to
projects like numpy which is why they fixed the same bug in their own
distutils extension 3 years ago. I claim that the only reason projects
like numpy still use (extensions and monkey-patches of) distutils for
building is that there is no documented way for them to distribute
sdists that build using anything other than distutils.

Bento tries to implement its own setup.py and when I try to install it
with pip I find a bug in pip from code paths that wouldn't get hit if
bento were using setuptools in their own setup.py. If it weren't for
that bug and the output had instead been:

$ pip install bento
Downloading/unpacking bento
  Downloading bento-0.1.1.tar.gz (582kB): 582kB downloaded
  Running setup.py egg_info for package bento
Installing collected packages: bento
  Running setup.py install for bento

Error:
  Could not find .egg-info directory in install record for bento

then we could argue about which of pip or bento was in contravention
of the (non-existent) specification that defines the setup.py
interface.


Oscar


Re: [Distutils] Comments on PEP 426

2013-08-30 Thread Oscar Benjamin
On 30 August 2013 13:49, Daniel Holth  wrote:
> On Fri, Aug 30, 2013 at 7:54 AM, Oscar Benjamin  
> wrote:
>>
>> I just tried to install bento to test it out and:
>>
>> $ pip install bento
>> Downloading/unpacking bento
>>   Downloading bento-0.1.1.tar.gz (582kB): 582kB downloaded
>>   Running setup.py egg_info for package bento
>> Installing collected packages: bento
>>   Running setup.py install for bento
>>   Could not find .egg-info directory in install record for bento
>> Cleaning up...
>> Exception:
>> Traceback (most recent call last):
>>   File "Q:\tools\Python27\lib\site-packages\pip\basecommand.py", line
>> 134, in main
>> status = self.run(options, args)
>>   File "Q:\tools\Python27\lib\site-packages\pip\commands\install.py",
>> line 241, in run
>> requirement_set.install(install_options, global_options,
>> root=options.root_path)
>>   File "Q:\tools\Python27\lib\site-packages\pip\req.py", line 1298, in 
>> install
>> requirement.install(install_options, global_options, *args, **kwargs)
>>   File "Q:\tools\Python27\lib\site-packages\pip\req.py", line 668, in install
>> os.remove(record_filename)
>> WindowsError: [Error 32] The process cannot access the file because it
>> is being used by another process:
>> 'c:\\docume~1\\enojb\\locals~1\\temp\\pip-aae65s-record\\install-record.txt'
>>
>> Storing complete log in c:/Documents and Settings/enojb\pip\pip.log
>>
>> I tried deleting the mentioned file but just got the same error
>> message again. Is that a bento/pip/setuptools bug? I notice that the
>> bento docs don't mention pip on the installation page:
>> http://cournape.github.io/Bento/html/install.html
>>
>> Here's the appropriate version information:
>>
>> $ pip --version
>> pip 1.4.1 from q:\tools\python27\lib\site-packages (python 2.7)
>> $ python --version
>> Python 2.7.5
>> $ python -c 'import setuptools; print(setuptools.__version__)'
>> 1.1
>>
>> (I just very carefully updated pip/setuptools based on Paul's previous
>> instructions).
>>
>> The bento setup.py uses bento's own setup() command:
>> https://github.com/cournape/Bento/blob/master/setup.py
>
> It looks like you cannot install bento itself using pip on Windows. It
> might be a Windows bug "WindowsError: [Error 32] The process cannot
> access the file because it is being used by another process:". It's a
> little better on Linux (it gets installed) but I don't think Bento is
> really meant to be installed in this way.

It's a bug in pip. The file in question is opened by pip a few lines
above. The particular code path is hit because the else clause with
logger.warn() gets triggered (i.e. where it says "## FIXME" :) )

        f = open(record_filename)
        for line in f:
            line = line.strip()
            if line.endswith('.egg-info'):
                egg_info_dir = prepend_root(line)
                break
        else:
            logger.warn('Could not find .egg-info directory in '
                        'install record for %s' % self)
            ## FIXME: put the record somewhere
            ## FIXME: should this be an error?
            return
        f.close()
        new_lines = []
        f = open(record_filename)
        for line in f:
            filename = line.strip()
            if os.path.isdir(filename):
                filename += os.path.sep
            new_lines.append(make_path_relative(prepend_root(filename),
                                                egg_info_dir))
        f.close()
        f = open(os.path.join(egg_info_dir, 'installed-files.txt'), 'w')
        f.write('\n'.join(new_lines) + '\n')
        f.close()
    finally:
        if os.path.exists(record_filename):
            os.remove(record_filename)
        os.rmdir(temp_location)

The error comes from the os.remove() line second from the bottom. The
file was opened on the first line. The logger.warn() code path returns
without closing the file. If I add f.close() just before the return
then I get:

$ pip install bento
Downloading/unpacking bento
  Downloading bento-0.1.1.tar.gz (582kB): 582kB downloaded
  Running setup.py egg_info for package bento
Installing collected packages: bento
  Running setup.py install for bento
  Could not find .egg-info directory in install record for bento
Successfully installed bento
Cleaning up...

It's probably better to use the with statement though.
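
That is, reusing the names from the excerpt above, something like:

    with open(record_filename) as f:
        for line in f:
            line = line.strip()
            if line.endswith('.egg-info'):
                egg_info_dir = prepend_root(line)
                break
        else:
            logger.warn('Could not find .egg-info directory in '
                        'install record for %s' % self)
            return

so that the file is closed on every code path, including the early
return.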


Oscar


Re: [Distutils] Comments on PEP 426

2013-08-30 Thread Oscar Benjamin
On 29 August 2013 16:49, Paul Moore  wrote:
> On 29 August 2013 16:02, Oscar Benjamin  wrote:
>>On 29 August 2013 15:30, Antoine Pitrou  wrote:
> [...]
>>> (after all, it's just "python setup.py build_bdist", or something :-))
>>
>> The point is that pip and other packaging tools will use 'python
>> setup.py ...' to do all the building and wheel making and so on.
>> However the required interface that setup.py should expose is not
>> documented anywhere and is essentially implementation defined where
>> the implementation is the setup() function from a recent version of
>> setuptools. In the interest of standardising the required parts of
>> existing practice the required subset of this interface should be
>> documented.
>
> Specifically, the command is
>
> python setup.py bdist_wheel
>
> But that requires the wheel project and setuptools to be installed,
> and we're not going to require all users to have those available.
>
> Also, other projects can build wheels with different commands/interfaces:
> * distlib says put all your built files in a set of directories then
> do wheel.build(paths=path_mapping) - no setup.py needed at all
> * pip says pip wheel requirement (but that uses setuptools/wheel under the 
> hood)
> * bento might do something completely different

Yes, but whatever build tool is used, if the required interface from
setup.py is documented then it's easy enough for a distribution author
to create a setup.py that satisfies those commands. It could be as easy
as:

import subprocess, sys

if sys.argv[1] == 'bdist_wheel':
    sys.exit(subprocess.call(['bentomaker', 'build_wheel']))

or whatever. Then, if bento is build-required (or distributed in the
sdist), 'pip wheel' would work, right? Bento could even ship/generate
setup.py files for bento-using distributions to use (I assume
'bentomaker sdist' does actually do this but I got an error installing
bento; see below...).
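
A fuller shim along those lines might look like the sketch below. To be
clear, this is hypothetical: it isn't anything Bento actually ships, and
the bentomaker subcommands are assumptions based on the ones mentioned
above:

    # setup.py -- hypothetical shim forwarding the commands that
    # packaging tools invoke to an external build tool.
    import subprocess
    import sys

    COMMANDS = {
        'sdist': ['bentomaker', 'sdist'],
        'bdist_wheel': ['bentomaker', 'build_wheel'],
    }

    def main():
        if len(sys.argv) < 2 or sys.argv[1] not in COMMANDS:
            sys.exit('unsupported command: %r' % sys.argv[1:])
        sys.exit(subprocess.call(COMMANDS[sys.argv[1]]))

    if __name__ == '__main__':
        main()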

However, right now it's not clear exactly what the command line
interface would need to be e.g.: Should setup.py process any optional
arguments? How should it know what filename to give the wheel and what
directory to put it in? Should the setup.py assume that its current
working directory is the VCS checkout or unpacked sdist directory?
Will pip et al. infer success/failure from the return code? Who is
supposed to be responsible for any cleanup if necessary?

> The whole question of standardising the command line API for building
> (sdists and) wheels is being avoided at the moment, as it's going to
> be another long debate (setup.py is too closely associated with
> distutils and/or setuptools for some people).

Yes, but rather than trying to think of something better I'm just
suggesting that we document what is *already* required, with some
guarantee of backward compatibility that will be respected in the
future. Even
if wheels become commonplace and are used by the most significant
projects there will still be a need to build some distributions from
source e.g. because the authors didn't build a wheel for your
architecture, or the user/author prefers to make build-time
optimisations etc.

> AIUI, we're sort of moving towards the "official" command line API
> being pip's (so "pip wheel XXX") but that's not a complete answer as
> currently pip internally just uses the setup.py command line, and the
> intention is to decouple the two so that alternative build tools (like
> bento, I guess) get a look in. It's all a bit vague at the moment,
> though, because nobody has even looked at what alternative build tools
> might even be needed.
>
> I could have this completely wrong, though - we're trying very hard to
> keep the work in small chunks, and building is not one of those chunks
> yet.

Leaving the build infrastructure alone for now seems reasonable to me.
However if a static target is created for third-party build tools then
there could be more progress on that front.

I just tried to install bento to test it out and:

$ pip install bento
Downloading/unpacking bento
  Downloading bento-0.1.1.tar.gz (582kB): 582kB downloaded
  Running setup.py egg_info for package bento
Installing collected packages: bento
  Running setup.py install for bento
  Could not find .egg-info directory in install record for bento
Cleaning up...
Exception:
Traceback (most recent call last):
  File "Q:\tools\Python27\lib\site-packages\pip\basecommand.py", line
134, in main
status = self.run(options, args)
  File "Q:\tools\Python27\lib\site-packages\pip\commands\install.py",
line 241, in run
requirement_set.install(install_options, global_options,
root=options.root_path)
  File "Q:\tools\Python27\lib\site-packages\pip\req.py", line 1298, in install
 

Re: [Distutils] Comments on PEP 426

2013-08-29 Thread Oscar Benjamin
On 29 August 2013 18:11, Daniel Holth  wrote:
> It probably makes sense for some version of bdist_wheel to be merged
> into setuptools eventually. In that system pip would document which
> setup.py commands and arguments it uses and a non-distutils-derived
> setup.py would have to implement a minimal set of commands to
> interoperate. This is basically where we are today minus the "minimal"
> and "documented" details.
>
> The alternative, not mutually exclusive solution would be to define a
> Python-level detect/build plugin system for pip  which would call a
> few methods to generate an installable from a source distribution.
>
> It doesn't exist yet mostly because the pip developers haven't written
> enough alternative build systems. There is no strategic reason for the
> delay.

I thought that the list in the PEP seemed reasonable:

python setup.py dist_info
python setup.py sdist
python setup.py build_ext --inplace
python setup.py test
python setup.py bdist_wheel

Most projects already have a setup.py that can do these things with
the exception of bdist_wheel. The only ambiguity is that it's not
clear whether the expectation is *exactly* those invocations or
whether any other command line options etc. would be needed.

Can it not simply be documented that these are the commands needed by
current packaging tools (and return codes, expected behaviour, ...) to
fit with the current bleeding edge infrastructure?

I would have thought that that would be good enough as a stop-gap
while a better non-setup.py solution awaits.


Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
http://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Comments on PEP 426

2013-08-29 Thread Oscar Benjamin
On 29 August 2013 15:30, Antoine Pitrou  wrote:
> Nick Coghlan  gmail.com> writes:
>
>> >> This version of the metadata specification continues to use ``setup.py``
>> >> and the distutils command syntax to invoke build and test related
>> >> operations on a source archive or VCS checkout.
>> >
>> > I don't really understand how Metadata 2.0 is dependent on the distutils
>> > command scheme. Can you elaborate?
>>
>> Given an sdist, how do you build a wheel?
>>
>> Given a source checkout or raw tarball, how do you build an sdist or
>> generate the distribution's metadata?
>>
>> The whole problem of building from source is currently woefully
>> underspecified, and there's a lot to be said in favour of
>> standardising a subset of the existing setuptools command line API
>
> Hmmm... I'm not sure I follow the reasoning. The internal mechanics
> of building a binary archive may deserve standardizing, and perhaps
> a dedicated distlib API for it, but why would that impact the
> command-line API?
>
> (after all, it's just "python setup.py build_bdist", or something :-))

The point is that pip and other packaging tools will use 'python
setup.py ...' to do all the building and wheel making and so on.
However the required interface that setup.py should expose is not
documented anywhere and is essentially implementation defined where
the implementation is the setup() function from a recent version of
setuptools. In the interest of standardising the required parts of
existing practice the required subset of this interface should be
documented.

Projects like numpy/scipy that deliberately don't use the setup()
function from either setuptools or distutils need to know what
interface is expected of their setup.py. The same goes for any attempt
to build a new third-party package that would be used as a replacement
for building with distutils/setuptools (which are woefully inadequate
for projects with significant C/Fortran etc. code): any new build
system needs to have an API that it should conform to. On the other
side the packaging tools like pip etc. need to know what interface
they can *require* of setup.py without breaking compatibility with
non-distutils/setuptools build systems.


Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
http://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] What does it mean for Python to "bundle pip"?

2013-08-22 Thread Oscar Benjamin
On 22 August 2013 16:33, Chris Barker - NOAA Federal
 wrote:
> On Thu, Aug 22, 2013 at 6:52 AM, Oscar Benjamin
>  wrote:
>
>> I'm pretty sure the current Windows installer just doesn't bother with
>> BLAS/LAPACK libraries. Maybe it will become possible to expose them
>> via a separate wheel-distributed PyPI name one day.
>
> Well, the rule of thumb with Windows binaries is that you bundle in
> (usually via static linking) all the libs you need -- numpy could have
> a semi-optimized LAPACK or not, and the user shouldn't know the
> difference at install time. But the trick in this case is that numpy
> is used by itself, but also widely used with external C and Fortran
> that might want LAPACK. (including scipy, in fact...)
>
> But maybe this is all too much to bite off for pip and wheels. If we
> could get to a state where "pip install numpy" and "pip install scipy"
> would do something reasonable, if not optimized, I think that would be
> great!

Agreed.

And actually 'pip wheel numpy' works fine for me on Windows with MinGW
installed. (I don't even need to patch distutils because
numpy.distutils fixed the MinGW bug three years ago!). There's no
BLAS/LAPACK support but I assume it has the appropriate SSE2 build
which is basically what the win32 superpack gives you.

> And it's really not a big deal to say:
>
> If you want an optimized LAPACK, etc.  for your system, you need to do
> something special/ by hand/ etc...
>
> "something special" may be as simple as "download
> numpy_optimized_just_for_this_machine.whl and install it with pip.

Exactly. Speaking personally, I do all my real computation on Linux
clusters managed by scientific software professionals who have
hand-tuned and pre-built a bewildering array of alternative
BLAS/LAPACK setups and numpy etc. to go with. For me having numpy on
Windows is just for developing, debugging etc. so hard-core
optimisation isn't important.


Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
http://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] What does it mean for Python to "bundle pip"?

2013-08-22 Thread Oscar Benjamin
On 22 August 2013 12:57, Vinay Sajip  wrote:
>> I think that the installer ships variants for each architecture and
>> decides at install time which to place on the target system. If that's
>> the case then would it be possible for a wheel to ship all variants so
>> that a post-install script could sort it out (rename/delete) after the
>> wheel is installed?
>
> It's not just about the architecture on the target system, it's also about
> e.g. what libraries are installed on the target system. Files like
> numpy/__config__.py and numpy/distutils/__config__.py are created at build
> time, based on local conditions, and those files would then be written to
> the wheel. On the installation machine, the environment may not be
> compatible with those configurations computed on the build machine. Those
> are the things I was talking about which may need moving from build-time to
> run-time computations.

I'm pretty sure the current Windows installer just doesn't bother with
BLAS/LAPACK libraries. Maybe it will become possible to expose them
via a separate wheel-distributed PyPI name one day. That would help
since they're currently not very easy to set up or build on Windows but
the same sse etc. issues would apply to them as well.

For now just leaving out BLAS/LAPACK is probably okay. apt-get doesn't
bother to install them for numpy either (on Ubuntu). It will set them
up properly if you explicitly ask for them though.


Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
http://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] What does it mean for Python to "bundle pip"?

2013-08-22 Thread Oscar Benjamin
On 21 August 2013 22:22, Paul Moore  wrote:
> On 21 August 2013 22:13, Nick Coghlan  wrote:
>>
>> Wheel is a suitable replacement for bdist_wininst (although anything that
>> needs install hooks will have to wait for wheel 1.1, which will support
>> metadata 2.0). It's just not a replacement for what hashdist and conda let
>> you do when you care more about reproducibility than you do about security
>> updates.
>
> OK, that's a good statement - wheels as a better bdist_wininst is all I want
> to be able to promote (and yes, if you need post-install hooks, wait for
> wheel 1.1).

Okay, so going back to my earlier question...

Oscar asked:
> BTW is there any reason for numpy et al not to start distributing
> wheels now? Is any part of the wheel
> specification/tooling/infrastructure not complete yet?

the answer is basically yes to both questions.

The pip+PyPI+wheel infrastructure is not yet able to satisfy numpy's
needs as the wheel spec doesn't give sufficiently fine-grained
architecture information and there's no way to monkey-patch the
installation process in order to do what the current installers do.

It seems to me that the ideal solution for numpy is not really the
post-install script but a way to distribute wheels appropriate to the
given CPU. Bundling the different binaries in one installer makes
sense for an installer that is manually downloaded by a user but not
for one that is automatically downloaded.

There's a pure Python script here that seems to be able to obtain the
appropriate information:
https://code.google.com/p/numexpr/source/browse/numexpr/cpuinfo.py?r=ac92866e7929df669ca5e4e050179cd7448798f0

$ python cpuinfo.py
CPU information: CPUInfoBase__get_nbits=32 getNCPUs=2 has_mmx has_sse2
is_32bit is_Core2 is_Intel is_i686

So perhaps numpy could upload multiple wheels:

numpy-1.7.1-cp27-cp27m-win32.whl
numpy-1.7.1-cp27-cp27m-win32_sse.whl
numpy-1.7.1-cp27-cp27m-win32_sse2.whl
numpy-1.7.1-cp27-cp27m-win32_sse3.whl

Then ordinary pip would just install the plain win32 wheel but
"fancypip" could install the one with the right level of SSE support.

Or is there perhaps a way that a distribution like numpy could depend
on another distribution that finds CPU information and informs or
hooks into pip etc. so that pip would be able to gain this support in
an extensible way?
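
To make that concrete, here is a hypothetical sketch of the selection
logic such a helper could provide. It assumes the cpuinfo.py script
linked above is importable and exposes has_sse()/has_sse2()/has_sse3()
style checks on its cpu object; none of this is an existing pip hook:

    from cpuinfo import cpu  # the numexpr script linked above

    def pick_numpy_wheel():
        # Prefer the most specific build the local CPU can run.
        candidates = [
            (cpu.has_sse3, 'numpy-1.7.1-cp27-cp27m-win32_sse3.whl'),
            (cpu.has_sse2, 'numpy-1.7.1-cp27-cp27m-win32_sse2.whl'),
            (cpu.has_sse, 'numpy-1.7.1-cp27-cp27m-win32_sse.whl'),
        ]
        for check, wheel in candidates:
            if check():
                return wheel
        return 'numpy-1.7.1-cp27-cp27m-win32.whl'  # plain fallback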


Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
http://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Installing from a wheel

2013-08-21 Thread Oscar Benjamin
On 21 August 2013 15:57, Paul Moore  wrote:
> On 21 August 2013 15:48, Oscar Benjamin  wrote:
>>
>> Is it perhaps safer to suggest the following?
>> a) uninstall pip/setuptools/distribute
>> b) run ez_setup.py
>> c) run get-pip.py
>
> It probably is. I've heard concerns that people want to avoid suggesting
> manual uninstalls and having to download the setup scripts. But it seems
> simple enough to me. (What would I know, I just run virtualenv and leave it
> at that :-))

I walked right into that one: I definitely could have used a
virtualenv for this. However I couldn't have used a virtualenv to
update my system pip so it would support wheel installation.

> Glad it worked in the end, anyway, and sorry if my instructions made it
> harder than it needed to be.

No they didn't. There was no point when I didn't know how to revert
everything with 'rm -r'.

> As regards distribute, I suspect that the reason you hit issues is that if
> you have a setuptools that's older than 0.7 (or whatever the first merged
> version was) then an upgrade can end up jumping through some hoops and going
> through a "dummy" distribute version that's there to handle the
> fork/re-merge somehow. I honestly don't know how it all works, I'm just
> going off what I saw on some of the discussions on pypa-dev at the time. It
> all sounded very clever to me, but a bit fragile. I'm a simple soul, and
> prefer to just wipe it out and reinstall, so I zoned out after a while:-) I
> doubt the details matter to you now, though...

No they don't. But I'm with you on the uninstall/reinstall thing. That
would be my recommendation to anyone who needs to upgrade.


Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
http://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Installing from a wheel

2013-08-21 Thread Oscar Benjamin
On 21 August 2013 15:57, Daniel Holth  wrote:
>
> A fresh virtualenv would have been the humane way to get a working
> 'pip install wheel'.

Good point. I think I learned an important point going through that
upgrade mess though: uninstall/reinstall is safer than upgrade.

> Wheel's built in installer isn't intended to replace or be better than
> pip in any way. It's just for reference or bootstrapping.

Fair enough. Can I suggest that it have a --version option (since it
is traditional)?

> FYI if you point pip directly at the .whl file you can omit --use-wheel.

Okay I've just tried that and that's definitely the way I want to use it.

So basically:
$ python setup.py bdist_wheel  # Makes wheels
and
$ pip install foo.whl  # Installs wheels

If someone wants to import the bdist_wheel command and use it outside
of setuptools setup() (in the way that numpy does) where should they
import it from? I'm thinking of something like this:
https://github.com/numpy/numpy/blob/master/numpy/distutils/command/bdist_rpm.py

Is the following appropriate?

from wheel.bdist_wheel import bdist_wheel

class mybdist_wheel(bdist_wheel):
    ...

(the wheel API docs don't describe using bdist_wheel from Python code.)
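
Presumably the subclass would then be wired in through cmdclass, in the
same way numpy registers its custom distutils commands. A guess at the
wiring (untested, and assuming bdist_wheel behaves like an ordinary
distutils command class):

    from setuptools import setup
    from wheel.bdist_wheel import bdist_wheel

    class mybdist_wheel(bdist_wheel):
        def run(self):
            # any custom pre/post-build steps would go here
            bdist_wheel.run(self)

    setup(name='spam',
          version='1.0',
          py_modules=['spam'],
          cmdclass={'bdist_wheel': mybdist_wheel})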


Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
http://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Installing from a wheel

2013-08-21 Thread Oscar Benjamin
On 21 August 2013 14:56, Paul Moore  wrote:
> On 21 August 2013 14:28, Oscar Benjamin  wrote:
>>
>> So I tried updating everything e.g.:
>>
>> $ pip install -U wheel pip setuptools
>
> [lots omitted for brevity]
>
> Some thoughts.
>
> pip 1.3.1 predates pip's wheel support so you wouldn't have had pip install
> --use-wheel there.
>
> The upgrade error may have been because pip install -U pip tries to install
> a new pip.exe while pip.exe is in use. The error might not be too bad
> (pip.exe doesn't actually need to change).

Maybe, although the path
c:\\docume~1\\enojb\\locals~1\\temp\\pip-6echt4-uninstall\\tools\\python27\\scripts\\pip.exe
is not the location of the pip I used and is not on PATH.

> For safety, "python -m pip install -U pip --force-reinstall" might be worth
> doing.

Okay done. Seems to work fine.

> You quite probably shouldn't have upgraded setuptools like you did. It looks
> like you had a pre-merge version, and upgrading across the distribute merge
> appears to be fun (I have personally never encountered that particular
> flavour of fun, but that's what I'm led to believe).

This is not an old Python installation. I installed this as a clean
installation to test the patches I uploaded for issue12641 2-3 months
ago. I wouldn't have deliberately installed distribute (I know it's
obsoleted) so I don't know how it got there.

> For safety you should
> check your site-packages for setuptools and distribute installations. Maybe
> manually remove distribute if present,

I got this far:
$ rm -r /q/tools/Python27/Lib/site-packages/distribute-0.6.40-py2.7.egg/

and then
$ pip
Traceback (most recent call last):
  File "q:\tools\Python27\Scripts\pip-script.py", line 5, in <module>
from pkg_resources import load_entry_point
ImportError: No module named pkg_resources

> and then "python -m pip install -U setuptools --force-reinstall"

Alas, this one doesn't work any more either:
$ python -m pip
q:\tools\Python27\python.exe: No module named pkg_resources; 'pip' is
a package and cannot be directly executed

> (don't do a combined run of pip and setuptools
> together, that's one of the scary failure modes IIRC).

Okay so I manually deleted everything for
pip/setuptools/distribute/easy_install from Scripts and site-packages
and started again. Following the instructions for install pip and
setuptools for Windows I downloaded and ran ez_setup.py followed by
get-pip.py. Then I got this error:

$ py -2.7 get-pip.py
Downloading/unpacking pip

pip can't proceed with requirement 'pip' due to a pre-existing build directory.
 location: c:\docume~1\enojb\locals~1\temp\pip-build-enojb\pip
This is likely due to a previous installation that failed.
pip is being responsible and not assuming it can delete this.
Please delete it and try again.

Cleaning up...
Exception:
Traceback (most recent call last):
  File 
"c:\docume~1\enojb\locals~1\temp\unpacker-c3e81q-scratchdir\pip\basecommand.py",
line 134, in main
status = self.run(options, args)
  File 
"c:\docume~1\enojb\locals~1\temp\unpacker-c3e81q-scratchdir\pip\commands\install.py",
line 236, in run
requirement_set.prepare_files(finder,
force_root_egg_info=self.bundle, bundle=self.bundle)
  File "c:\docume~1\enojb\locals~1\temp\unpacker-c3e81q-scratchdir\pip\req.py",
line 1071, in prepare_files
raise e
PreviousBuildDirError:
pip can't proceed with requirement 'pip' due to a pre-existing build directory.
 location: c:\docume~1\enojb\locals~1\temp\pip-build-enojb\pip
This is likely due to a previous installation that failed.
pip is being responsible and not assuming it can delete this.
Please delete it and try again.

The path it refers to doesn't exist but deleting a similar directory
gets it working:

$ rm -r ~/Local\ Settings/temp/pip-6echt4-uninstall/
enojb@ENM-OB:/q$ py -2.7 get-pip.py
Downloading/unpacking pip
  Downloading pip-1.4.1.tar.gz (445kB): 445kB downloaded
  Running setup.py egg_info for package pip
   ...

Okay so now I'm back in business ('pip list' works etc.).

> pip 1.4.1 should be able to pip uninstall a distribution installed from a
> wheel (but TBH, I would have expected 1.3.1 to be able to, as well. The
> installed data looked OK).

Yes it can:

$ pip uninstall spam
Uninstalling spam:
  q:\tools\python27\lib\site-packages\spam-1.0.dist-info\description.rst
  q:\tools\python27\lib\site-packages\spam-1.0.dist-info\metadata
  q:\tools\python27\lib\site-packages\spam-1.0.dist-info\pydist.json
  q:\tools\python27\lib\site-packages\spam-1.0.dist-info\record
  q:\tools\python27\lib\site-packages\spam-1.0.dist-info\top_level.txt
  q:\tools\python27\lib\site-packages\spam-1.0.dist-info\wheel
  q:\tools\python27\lib\site-packages\spam.py
Proceed (y/n)? y
  Successf

Re: [Distutils] Installing from a wheel

2013-08-21 Thread Oscar Benjamin
On 21 August 2013 14:08, Paul Moore  wrote:
> On 21 August 2013 13:59, Oscar Benjamin  wrote:
>>
>> $ cat spam.py
>> # spam.py
>> print('running spam from:', __file__)
[snip]
>
> Looks good. You might want to add the (undocumented) universal flag to
> setup.cfg, as your wheel is Python only and works for Python 2 and 3, and so
> not version-specific.

Really I need to import print_function for that universality to be
true but we'll overlook that :)
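
For the record, the genuinely universal version of the module would
just be:

    # spam.py
    from __future__ import print_function
    print('running spam from:', __file__)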

> setup.cfg:
>
> [wheel]
> universal=1

Okay so I need setup.cfg as well as setup.py.

[snip]
>
> Looks good. I thought wheel install gave some progress output, but it's a
> long time since I used it and I may be misremembering. You can also use pip
> install --use-wheel if you prefer (assuming you have pip 1.4+)

Okay, that's good. I'd rather just use the pip command than use wheel directly.

>> So now how do I uninstall it?
>>
>> $ pip uninstall spam
>> Can't uninstall 'spam'. No files were found to uninstall.
>>
>> The wheel command doesn't seem to have an uninstall option either
>
> Odd. pip uninstall should work. Can you confirm your version of pip and
> wheel? And can you list the contents of the spam-1.0.dist-info directory in
> your site-packages?

$ pip --version
pip 1.3.1 from q:\tools\python27\lib\site-packages\pip-1.3.1-py2.7.egg
(python 2.7)

$ wheel --version  # This gives a usage message

>>> import wheel
>>> wheel.__version__
'0.21.0'

$ ls /q/tools/Python27/Lib/site-packages/spam-1.0.dist-info/
DESCRIPTION.rst  METADATA  RECORD  WHEEL  pydist.json  top_level.txt
$ cat /q/tools/Python27/Lib/site-packages/spam-1.0.dist-info/*
UNKNOWN


Metadata-Version: 2.0
Name: spam
Version: 1.0
Summary: UNKNOWN
Home-page: UNKNOWN
Author: UNKNOWN
Author-email: UNKNOWN
License: UNKNOWN
Platform: UNKNOWN

UNKNOWN


spam-1.0.dist-info\DESCRIPTION.rst,sha256=OCTuuN6LcWulhHS3d5rfjdsQtW22n7HENFRh6jC6ego,10
spam-1.0.dist-info\METADATA,sha256=N7NDv-twCNGywvm1HXdz67MoFL4xIUoT5p39--tGGB8,179
spam-1.0.dist-info\WHEEL,sha256=ceN1GNMAiWCEADx3_5pdpmZwt4A_AtSxSxYSCyHhhPw,98
spam-1.0.dist-info\pydist.json,sha256=rptnmxTtRo0YZfBQZbIxMdHWDAg48f0UhCDmdymzHbk,174
spam-1.0.dist-info\top_level.txt,sha256=KE4wKczjrl7gsFhmEA4wAEY1n1OuTHf-azTAWqenLO4,5
spam.py,sha256=_5V9b8A2xHt-590km2JzJniHeWIiXbdU_wVHONhTzms,48
spam-1.0.dist-info/RECORD,,
Wheel-Version: 1.0
Generator: bdist_wheel (0.21.0)
Root-Is-Purelib: true
Tag: py27-none-any

{"document_names": {"description": "DESCRIPTION.rst"}, "name": "spam",
"metadata_version": "2.0", "generator": "bdist_wheel (0.21.0)",
"summary": "UNKNOWN", "version": "1.0"}spam

So I tried updating everything e.g.:

$ pip install -U wheel pip setuptools
Requirement already up-to-date: wheel in q:\tools\python27\lib\site-packages
Downloading/unpacking pip from
https://pypi.python.org/packages/source/p/pip/pip-1.4.1.tar.gz#md5=6afbb46aeb48abac658d4df742bff714
  Downloading pip-1.4.1.tar.gz (445kB): 445kB downloaded
  Running setup.py egg_info for package pip

warning: no files found matching '*.html' under directory 'docs'
warning: no previously-included files matching '*.rst' found under
directory 'docs\_build'
no previously-included directories found matching 'docs\_build\_sources'
Downloading/unpacking distribute from
https://pypi.python.org/packages/source/d/distribute/distribute-0.7.3.zip#md5=c6c59594a7b180af57af8a0cc0cf5b4a
  Downloading distribute-0.7.3.zip (145kB): 145kB downloaded
  Running setup.py egg_info for package distribute

Downloading/unpacking setuptools>=0.7 from
https://pypi.python.org/packages/source/s/setuptools/setuptools-1.0.tar.gz#md5=3d196ffb6e5e4425daddbb4fe42a4a74
(from distribute)
  Downloading setuptools-1.0.tar.gz (679kB): 679kB downloaded
  Running setup.py egg_info for package setuptools

Installing collected packages: pip, distribute, setuptools
  Found existing installation: pip 1.3.1
Uninstalling pip:
  Successfully uninstalled pip
  Running setup.py install for pip

warning: no files found matching '*.html' under directory 'docs'
warning: no previously-included files matching '*.rst' found under
directory 'docs\_build'
no previously-included directories found matching 'docs\_build\_sources'
Installing pip-script.py script to q:\tools\Python27\Scripts
Installing pip.exe script to q:\tools\Python27\Scripts
Installing pip.exe.manifest script to q:\tools\Python27\Scripts
Installing pip-2.7-script.py script to q:\tools\Python27\Scripts
Installing pip-2.7.exe script to q:\tools\Python27\Scripts
Installing pip-2.7.exe.manifest script to q:\tools\Python27\Sc

[Distutils] Installing from a wheel

2013-08-21 Thread Oscar Benjamin
This is the first time that I've tested using wheels and I have a
couple of questions.

Here's what I did (is this right?):

$ cat spam.py
# spam.py
print('running spam from:', __file__)
$ cat setup.py
from setuptools import setup

setup(name='spam',
      version='1.0',
      py_modules=['spam'])

$ python setup.py bdist_wheel
running bdist_wheel
...
creating build\bdist.win32\wheel\spam-1.0.dist-info\WHEEL
$ ls
build  dist  setup.py  spam.egg-info  spam.py
$ ls dist/
spam-1.0-py27-none-any.whl

Okay, so far so good. I have the wheel and everything makes sense. Now
I want to test installing it:

$ wheel install --wheel-dir=./dist/ spam

The line above gives no output. I expect something like 'installing
spam... installed.'. It also ran so quickly that I thought that
nothing had happened.

A quick check reveals that the module was installed:

$ cd ~
$ python -m spam
('running spam from:', 'q:\\tools\\Python27\\lib\\site-packages\\spam.py')
$ pip list | grep spam
spam (1.0)

So now how do I uninstall it?

$ pip uninstall spam
Can't uninstall 'spam'. No files were found to uninstall.

The wheel command doesn't seem to have an uninstall option either.


Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
http://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] What does it mean for Python to "bundle pip"?

2013-08-21 Thread Oscar Benjamin
On 21 August 2013 11:39, Paul Moore  wrote:
> On 21 August 2013 11:29, Oscar Benjamin  wrote:
>>
>> I may have misunderstood it but looking at this
>>
>> https://github.com/numpy/numpy/blob/master/tools/win32build/nsis_scripts/numpy-superinstaller.nsi.in#L147
>> I think that the installer ships variants for each architecture and
>> decides at install time which to place on the target system. If that's
>> the case then would it be possible for a wheel to ship all variants so
>> that a post-install script could sort it out (rename/delete) after the
>> wheel is installed?
>
> Wheel 1.0 does not have the ability to bundle multiple versions (and I don't
> think tags are fine-grained enough to cover the differences numpy need,
> which are at the "do you have the SSE instruction set?" level AIUI).
> Multi-version wheels are a possible future extension, but I don't know if
> anyone has thought about fine-grained tags.

No, but the wheel could do what the current numpy installer does and ship
_numpy.pyd.nosse
_numpy.pyd.sse1
_numpy.pyd.sse2
_numpy.pyd.sse3
as platlib files and then a post-install script can check for SSE
support, rename the appropriate file to _numpy.pyd and delete the
other _numpy.pyd.* files.
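
A sketch of what that post-install step could look like (file names as
in the installer above; how the SSE level gets detected is left out,
but the cpuid test linked earlier is one option):

    import os

    def keep_best_variant(pkg_dir, level):
        # level is one of 'nosse', 'sse1', 'sse2', 'sse3', detected
        # at install time on the target machine.
        target = os.path.join(pkg_dir, '_numpy.pyd')
        for variant in ('nosse', 'sse1', 'sse2', 'sse3'):
            path = target + '.' + variant
            if not os.path.exists(path):
                continue
            if variant == level:
                os.rename(path, target)
            else:
                os.remove(path)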

> This is precisely the sort of input that the numpy people could provide to
> make sure that the wheel design covers their needs.

Am I right in guessing (since the question keeps being evaded :) )
that a post-install script is not possible with pip+wheel+PyPI?


Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
http://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] What does it mean for Python to "bundle pip"?

2013-08-21 Thread Oscar Benjamin
On 21 August 2013 08:04, Vinay Sajip  wrote:
> Oscar Benjamin  gmail.com> writes:
>
>> I think that they are responsible for installing the f2py script in
>> each of my Scripts directories. I never use this script and I don't
>> know what numpy wants with it (my understanding is that the Fortran
>> parts of numpy were all shifted over to scipy).
>
> IIUC, if a third-party extension wants to use Fortran, the build process
> converts it using f2py in to a Python-importable extension. It may be a
> feature for distributions that use numpy, even if numpy doesn't use Fortran
> itself.

Okay, that makes sense. I'm sure that's not a big problem. It won't
work very well on Windows (the case where wheels are really needed)
anyway since it doesn't have a wrapper script and won't get picked up
by make etc.

>> > 2. Tags (not in general, but AIUI numpy distribute a fancy installer that
>> > decides what compiled code to use depending on whether you have certain CPU
>> > features - they may want to retain that, and to do so may prefer to have
>> > more fine-grained tags, which in turn may or may not be possible to
>> > support). I don't think that's a critical issue though.
>>
>> I guess this is what you mean:
>> https://github.com/numpy/numpy/blob/master/tools/win32build/cpuid/test.c
>>
>> Is there no way for them to run a post-install script when pip
>> installing wheels from PyPI?
>
> I'm not sure that would be enough. The numpy installation checks for various
> features available at build time, and then writes numpy source code which is
> then installed. When building and installing on the same machine, perhaps no
> problem - but there could be problems when installation happens on a
> different machine, since the sources written to the wheel at build time
> would encode information about the build environment which may not be valid
> in the installation environment.
>
> ISTM for numpy to work with wheels, all of this logic would need to move
> from build time to run time, but I don't know how pervasive the
> source-writing approach is and how much work would be entailed in switching
> over to run-time adaptation to the environment.

I may have misunderstood it but looking at this
https://github.com/numpy/numpy/blob/master/tools/win32build/nsis_scripts/numpy-superinstaller.nsi.in#L147
I think that the installer ships variants for each architecture and
decides at install time which to place on the target system. If that's
the case then would it be possible for a wheel to ship all variants so
that a post-install script could sort it out (rename/delete) after the
wheel is installed?


Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
http://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] What does it mean for Python to "bundle pip"?

2013-08-20 Thread Oscar Benjamin
On 20 August 2013 16:21, Paul Moore  wrote:
> On 20 August 2013 16:09, Oscar Benjamin  wrote:
>>
>> BTW is there any reason for numpy et al not to start distributing
>> wheels now? Is any part of the wheel
>> specification/tooling/infrastructure not complete yet?
>
> Not really. It's up to them to do so, though. Maybe their toolset makes that
> more difficult, I don't believe they use setuptools, for example - that's
> their problem, but it may not be one they are interested in solving :-(

They seem to be using setuptools commands here:
https://github.com/numpy/numpy/blob/master/numpy/distutils/core.py#L48
https://github.com/numpy/numpy/blob/master/setupegg.py

> The biggest issues outstanding are:
>
> 1. Script handling, which is a bit flaky still (but I don't think that
> affects numpy)

I think that they are responsible for installing the f2py script in
each of my Scripts directories. I never use this script and I don't
know what numpy wants with it (my understanding is that the Fortran
parts of numpy were all shifted over to scipy).

> 2. Tags (not in general, but AIUI numpy distribute a fancy installer that
> decides what compiled code to use depending on whether you have certain CPU
> features - they may want to retain that, and to do so may prefer to have
> more fine-grained tags, which in turn may or may not be possible to
> support). I don't think that's a critical issue though.

I guess this is what you mean:
https://github.com/numpy/numpy/blob/master/tools/win32build/cpuid/test.c

Is there no way for them to run a post-install script when pip
installing wheels from PyPI?

> Getting numpy et al on board would be a huge win - if wheels can satisfy
> their needs, we could be pretty sure we haven't missed anything. And it gets
> a big group of users involved, giving us a lot more real world use cases.
> Feel like sounding the numpy community out on the idea?

Maybe. I'm not usually on their mailing lists but I'd be willing to
find out what they think. First I need to be clear that I know what
I'm talking about though!


Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
http://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] What does it mean for Python to "bundle pip"?

2013-08-20 Thread Oscar Benjamin
On 20 August 2013 14:49, Nick Coghlan  wrote:
>
> On 20 Aug 2013 05:51, "Paul Moore"  wrote:
>>
>> But yes, if I made extensive use of binary extensions, I'd hate this
>> approach. That's why I keep saying that the biggest win for wheels will be
>> when they become the common means of distributing Windows binaries on PyPI,
>> in place of wininst/msi.
>
> Scientific users will always be better off with something like
> hashdist/conda, since that ignores platform interoperability and easy
> security updates in favour of hash based reproducibility. Continuum
> Analytics also already take care of providing the prebuilt binary versions.

Hashdist looks useful but it's for people who will build everything
from source (as is basically required in the HPC environments for
which it is designed). This is still problematic on Windows (which is
never used for HPC).

Conda looks interesting though, I'll give that a try soon.

> The pip ecosystem is more appropriate for pure Python code and relatively
> simple C extensions (including cffi bindings).

The core extensions that I would want to put into each and every
virtualenv are things like numpy and matplotlib. These projects have
been reliably providing binary installers for Windows for years and
I'm sure that they will soon be distributing wheels. The current PyPI
binaries for numpy are here:
https://pypi.python.org/pypi/numpy
Is it not a fairly simple change to make it so that they're also
uploading wheels?

BTW is there any reason for numpy et al not to start distributing
wheels now? Is any part of the wheel
specification/tooling/infrastructure not complete yet?


Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
http://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] What does it mean for Python to "bundle pip"?

2013-08-20 Thread Oscar Benjamin
On 20 August 2013 09:51, Paul Moore  wrote:
> 1. Will the bundled pip go into the system site-packages or the user
> site-packages? Does this depend on whether the user selects "install for all
> users" or "install for just me"?

If you have get-pip then why not choose at that point whether you want
to install for just your user or for the whole system e.g.: 'py -3.4 -m
get-pip --user' (or perhaps reverse the default)?

> 2. If pip goes into system site-packages, what happens with the uninstaller?
> It doesn't know about pip, so it won't uninstall Python cleanly. (Not a
> major point, you can delete the directory manually after uninstalling, but
> it's untidy). Maybe the uninstaller should just unconditionally delete all
> of site-packages as well as whatever files it knows were installed. This is
> a "normal" issue when installing into the system Python, but for people who
> avoid that and use virtualenvs (e.g. me :-)) it's new (and annoying, as
> we'll never use the system pip in any case...)

Can you not just teach the Python installer to check for pip and
remove it if found?

> This raises another point - to an extent, I don't care about any of this, as
> I routinely use virtualenvs. But if using pip to manage the system python is
> becoming the recommended approach, I'd like to understand what precisely the
> recommendation is so that I can see if it's better than what I currently do
> - for instance, I've never used --user so I don't know if it will be of
> benefit to me. I assume that this will go in the packaging user guide in due
> course, but I don't know who will write it (does anyone have the relevant
> experience? most people I know recommend virtualenv...)

If I could install everything I wanted with pip then virtualenvs would
be more practical. Maybe when wheel distribution becomes commonplace
I'll start doing that. I basically always want to install a large
number of third party packages before I do anything though.

So for me the procedure on ubuntu is something like:
1) install ubuntu
2) sudo apt-get install python-numpy python-scipy python-matplotlib
ipython python-sympy python-dev cython python-pygraph python-tables
python-wxgtk2.8 python-pywt python-sphinx ...

On Windows the procedure is:
1) Install Python
2) Get MSIs for numpy, scipy, wxPython, matplotlib, PyQt, numexpr, ...
3) Set up PATH or create a shell/batch script called 'python' that does
the right thing.
4) Run ez_setup.py and Install pip
5) Patch distutils (http://bugs.python.org/issue12641)
6) Use pip for cython, sympy, ipython, pyreadline, spyder, sphinx,
docutils, line_profiler, coverage, ...
7) Build and install my own commonly used private packages.
8) Get more prebuilt binaries for other awkward packages when
necessary: pytables, numexpr, mayavi, ...

(You can see why some people just install Python(x,y) or EPD, right?)

It takes quite a while to do all this and then I have basically all
the packages I want minus a few pippable ones. At this point I don't
really see the point in creating a virtualenv except to test something
that I'm personally developing. Or am I missing something?


Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
http://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] What does it mean for Python to "bundle pip"?

2013-08-20 Thread Oscar Benjamin
Paul wrote:
> Given that the installer includes the py.exe launcher, if you leave the
> defaults, then at a command prompt "python" doesn't work. But that's fine,
> because "py" does. And if you have multiple versions of Python installed,
> you don't *want* python on PATH, because then you have to manage your PATH.
> Why bother when "py -2.7" or "py -3.3" does what you want with no path
> management? Once you want any *other* executables, though, you have to deal
> with PATH (especially in the multiple Pythons case). That is a new issue,
> and one that hasn't been thought through yet, and we don't have a good
> solution.

From a user perspective I think that 'py -3.4 -m pip ...' is an improvement
as it means I can easily install or upgrade for a particular python
installation (I tend to have a few). There's no need to put Scripts on PATH
just to run pip. I think this should be the recommended invocation for
Windows users.
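
For example (one invocation per installation, no PATH fiddling; the
package name here is arbitrary):

    py -2.7 -m pip install requests
    py -3.3 -m pip install requests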

Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
http://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] How to handle launcher script importability?

2013-08-14 Thread Oscar Benjamin
On 14 August 2013 14:48, Paul Moore  wrote:
>
> But I do see your point regarding things like subprocess. It's a shame, but
> anything other than exes do seem to be second class citizens on Windows.
> BTW, you mention bat files - it bugs me endlessly that bat files seem to
> have a more privileged status than "other" script formats whether that's .py
> or .ps1 or whatever. I've never managed to 100% convince myself that they
> are special in a way that you can't replicate with suitable settings
> (PATHEXT, etc, etc). I think it's that .bat is hard-coded in the OS search
> algorithm or something, though.

I think it is hard-coded into CreateProcess (at least on some versions
of Windows). It certainly isn't a documented feature, but as
demonstrated in my previous post it does work on XP.

> The docs are not easy to locate on the
> various aspects of matter.

I just tried to find documentation but all I found was this (with dead
links to MS):
http://blog.kalmbachnet.de/?postid=34

> (If bat files didn't have their horrible nesting
> and ctrl-C handling behaviours, they'd be a viable solution...)

You were right to cry about these previously.

To give an example of where these subprocess issues might matter:
sphinx auto-generates Makefiles that call 'sphinx-build' with no
extension. The sphinx-build command has a setuptools .exe wrapper so
that it will be picked up. I wouldn't confidently assume that for all
combinations of Windows version and 'make' implementation that 'make'
would know how to find sphinx-build for anything other than an .exe.

A quick experiment shows that my own make handles shebangs if present
and then falls back to just calling CreateProcess which handles .exe
files and (via the undocumented hack above) .bat files . It does not
respect PATHEXT and the error when the extension is provided but no
shebang is given clearly shows it using the same sys-call as used by
Python's subprocess module:

Q:\tmp>show main
'show' is not recognized as an internal or external command,
operable program or batch file.

Q:\tmp>type Makefile
all:
	mycmd.py

Q:\tmp>type mycmd.py

print 'hello'

Q:\tmp>make
mycmd.py
process_begin: CreateProcess(Q:\tmp\mycmd.py, mycmd.py, ...) failed.
make (e=193): Error 193
make: *** [all] Error 193

Q:\tmp>mycmd.py
hello


Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
http://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] How to handle launcher script importability?

2013-08-14 Thread Oscar Benjamin
On 13 August 2013 20:58, Paul Moore  wrote:
>
> On 13 August 2013 18:08, Oscar Benjamin  wrote:
>>
>> On 13 August 2013 17:33, Paul Moore  wrote:
>> >
>> > On another point you mention, Cygwin Python should be using Unix-style 
>> > shell
>> > script wrappers, not Windows-style exes, surely? The whole point of Cygwin
>> > is that it emulates Unix, after all... So I don't see that as an argument
>> > either way.
>>
>> So say I have a ~/bin directory where I put my scripts that I want to
>> be generally available. I install something with
>> python setup.py install --install-scripts=~/bin
>> so that the scripts/script-wrappers go in there because I want to be
>> able to always access that program under that name. Don't be fooled by
>> the unixy tilde: I'm running ordinary Windows Python in that command
>> in git-bash, not Cygwin. Now if that folder is on PATH while I am in
>> Cygwin I can run the program with the same name if an .exe wrapper was
>> added. I can't run it with the same name if it's a .py/,bat file
>> because Cygwin doesn't have the implicit strip-the-extension PATHEXT
>> feature and can't run .bat files.
>
> Ah, OK, thanks for the clarification.
>
> In that case I can see why you'd prefer exe wrappers (or maybe cygwin bash 
> shell wrappers, or shell aliases...). Maybe an option to still use exe 
> wrappers is worth it - but honestly, I'd say that in that context you 
> probably have enough expertise to understand the issue and make your own 
> solution relatively easily.

Yes, but I'd like it if pip install some_cmd would "just work".

> What about having in your .bashrc:
>
> for prog in ~/bin/*.py; do
>     alias $(basename $prog .py)=$prog
> done
>
> (Excuse me if I got the precise details wrong there). OK, you need to rerun 
> .bashrc if you add new scripts. It's not perfect. But it's not a showstopper 
> either.

There are ways to make it work for every different environment where I
would type the command. Really though it's a pain to have to set these
things up everywhere.

Also this still doesn't work with subprocess(..., shell=False). There
are a huge range of programs that can invoke subprocesses of a given
name and I want them all to work with commands that I install from
pypi. There are good reasons to use shell=False: the subprocess
documentation contains no less than 5 warning boxes about shell=True!
This is not peculiar to Python's subprocess module: it is the
underlying Windows API calls regardless of which language the parent
process is implemented in. Here's a demo of what happens with Robert
Kern's kernprof.py script that doesn't have an .exe wrapper (on my
system; it's possible that I didn't install it with pip).

$ python
Python 2.7.5 (default, May 15 2013, 22:43:36) [MSC v.1500 32 bit
(Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import subprocess
>>> subprocess.call(['kernprof.py'], shell=True)  # Uses file-association
Usage: kernprof.py [-s setupfile] [-o output_file_path] scriptfile [arg] ...

2
>>> import os
>>> os.environ['PATHEXT']
'.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.PY;.PYC;.PSC1;.RB;.RBW'
>>> subprocess.call(['kernprof'], shell=True)  # Uses PATHEXT
Usage: kernprof.py [-s setupfile] [-o output_file_path] scriptfile [arg] ...

2
>>> subprocess.call(['kernprof'], shell=False)  # Needs an .exe wrapper!
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "q:\tools\Python27\lib\subprocess.py", line 524, in call
return Popen(*popenargs, **kwargs).wait()
  File "q:\tools\Python27\lib\subprocess.py", line 711, in __init__
errread, errwrite)
  File "q:\tools\Python27\lib\subprocess.py", line 948, in _execute_child
startupinfo)
WindowsError: [Error 2] The system cannot find the file specified
>>> subprocess.call(['kernprof.py'], shell=False)  # Needs an .exe wrapper!
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "q:\tools\Python27\lib\subprocess.py", line 524, in call
return Popen(*popenargs, **kwargs).wait()
  File "q:\tools\Python27\lib\subprocess.py", line 711, in __init__
errread, errwrite)
  File "q:\tools\Python27\lib\subprocess.py", line 948, in _execute_child
startupinfo)
WindowsError: [Error 193] %1 is not a valid Win32 application

Here's what happens if I put kernprof.bat next to kernprof.py (the
.bat file just @echos "running kernprof"):

>>> import subprocess
>>>

Re: [Distutils] How to handle launcher script importability?

2013-08-13 Thread Oscar Benjamin
On 13 August 2013 17:33, Paul Moore  wrote:
>
> On another point you mention, Cygwin Python should be using Unix-style shell
> script wrappers, not Windows-style exes, surely? The whole point of Cygwin
> is that it emulates Unix, after all... So I don't see that as an argument
> either way.

So say I have a ~/bin directory where I put my scripts that I want to
be generally available. I install something with
python setup.py install --install-scripts=~/bin
so that the scripts/script-wrappers go in there because I want to be
able to always access that program under that name. Don't be fooled by
the unixy tilde: I'm running ordinary Windows Python in that command
in git-bash, not Cygwin. Now if that folder is on PATH while I am in
Cygwin I can run the program with the same name if an .exe wrapper was
added. I can't run it with the same name if it's a .py/,bat file
because Cygwin doesn't have the implicit strip-the-extension PATHEXT
feature and can't run .bat files.


Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
http://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Wheels and console script entry point wrappers (Was: Replacing pip.exe with a Python script)

2013-07-22 Thread Oscar Benjamin
On 19 July 2013 20:48, Steve Dower  wrote:
>> From: Oscar Benjamin
>> I don't know whether or not you intend to have wrappers also work for
>> Python 2.7 (in a third-party package perhaps) but there is a slightly
>> subtle point to watch out for when non-ASCII characters in sys.argv
>> come into play.
>>
>> Python 2.x uses GetCommandLineA and 3.x uses GetCommandLineW. A
>> wrapper to launch 2.x should use GetCommandLineA and CreateProcessA to
>> ensure that the 8-bit argument strings are passed through unaltered.
>> To launch 3.x it should use the W versions. If not then the MSVC
>> runtime (or the OS?) will convert between the 8-bit and 16-bit
>> encodings using its own lossy routines.
>
> The launcher should always use GetCommandLineW, because the command line is 
> already stored in a 16-bit encoding. GetCommandLineA will decode to an 8-bit 
> encoding using some code page/settings (I can probably find out exactly which 
> ones, but I don't know/care off the top of my head), and CreateProcessA will 
> convert back using (hopefully) the same code page.
>
> There is never any point passing data between *A APIs in Windows, because 
> they are just doing the conversion in the background. All you gain is that 
> the launcher will corrupt the command line before python.exe gets a chance to.

Okay, thanks for the correction.

The issue that made me think this was to do with calling Python 2.x as
a subprocess of 3.x and vice-versa. Looking back at it now, I saw
that the problem was to do with explicitly encoding with
sys.getfilesystemencoding() in Python and using the mbcs codec (which
previously had no error handling apart from 'replace').
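
Incidentally, for anyone who wants to poke at this from Python, the
16-bit command line is easy to inspect via ctypes (a Windows-only
illustration, nothing to do with any actual launcher code):

    import ctypes

    GetCommandLineW = ctypes.windll.kernel32.GetCommandLineW
    GetCommandLineW.restype = ctypes.c_wchar_p
    print(GetCommandLineW())  # the raw command line as the OS stores it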


Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
http://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Q about best practices now (or near future)

2013-07-18 Thread Oscar Benjamin
On 18 July 2013 13:13, Nick Coghlan  wrote:
>
> On 18 Jul 2013 21:48, "Oscar Benjamin"  wrote:
>
>> In another thread you mentioned the idea that someone would build
>> without using distutils/setuptools by using a setup.py that simply
>> invokes an alternate build system that is build-required by the sdist.
>> That's fine for simple cases but how many 'python setup.py <command>'s
>> should the setup.py support?
>
> Please read PEP 426, as I cover this in detail. If anything needs further
> clarification, please let me know.

Okay, I have actually read that before but I forgot about that bit. It says:
'''
In the meantime, the above operations will be handled through the
distutils/setuptools command system:
python setup.py dist_info
python setup.py sdist
python setup.py build_ext --inplace
python setup.py test
python setup.py bdist_wheel
'''

That seems a sufficiently minimal set of commands. What I wonder when
reading it is whether any other command line options are expected to
be supported. For example if the setup.py is using
distutils/setuptools then you could do something like:

   python setup.py sdist --dist-dir=some_dir

Should it be made explicit that the setup.py is not required to support
any invocation other than those listed, and that it should just report
success/failure via its exit code?

Also in the event of failure is it the job of setup.py to clean up
after itself (since there's no clean command)?


Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
http://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Q about best practices now (or near future)

2013-07-18 Thread Oscar Benjamin
On 17 July 2013 22:43, Nick Coghlan  wrote:
>
> On 18 Jul 2013 01:46, "Daniel Holth"  wrote:
>>
>> On Wed, Jul 17, 2013 at 11:12 AM, Brett Cannon  wrote:
>> > I'm going to be pushing an update to one of my projects to PyPI this
>> > week
>> > and so I figured I could use this opportunity to help with patches to
>> > the
>> > User Guide's packaging tutorial.
>> >
>> > But to do that I wanted to ask what the current best practices are.
>> >
>> > * Are we even close to suggesting wheels for source distributions?
>>
>> No, wheels don't replace source distributions at all. They just let
>> you install something without having to have whatever built the wheel
>> from its sdist. It is currently nice to have them available.
>>
>> I'd like to see an ambitious person begin uploading wheels that have
>> no traditional sdist.
>
> Argh, don't even suggest that. Such projects could never be included in a
> Linux distribution - we need the original source to push into a trusted
> build system.

What do you mean by this?

I interpret Daniel's comment as meaning that there's no setup.py in
the sdist. And I think it's a great idea and that lots of others would
be very happy to ditch the setup.py concept in favour of something
entirely different from the distutils way of doing things.

In another thread you mentioned the idea that someone would build
without using distutils/setuptools by using a setup.py that simply
invokes an alternate build system that is build-required by the sdist.
That's fine for simple cases but how many 'python setup.py <command>'s
should the setup.py support?

Setuptools setup() supports the following:
build, build_py, build_ext, build_clib, build_scripts, clean, install,
install_lib, install_headers, install_scripts, install_data, sdist,
register, bdist, bdist_dumb, bdist_rpm, bdist_wininst, upload, check,
rotate, develop, setopt, saveopts, egg_info, upload_docs,
install_egg_info, alias, easy_install, bdist_egg, test

(Presumably bdist_wheel would be there if I had a newer setuptools).


Oscar
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
http://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Q about best practices now (or near future)

2013-07-17 Thread Oscar Benjamin
On 17 July 2013 17:59, Brett Cannon  wrote:
>
> But it also sounds like that project providing wheel distributions is too
> early to include in the User's Guide.

There are already many guides showing how to use distutils/setuptools
to do things the old way. There are also confused bits of
documentation/guides referring to now obsolete projects that at one
point were touted as the future. It would be really good to have a
guide that shows how the new working with wheels and metadata way is
expected to work from the perspective of end users and package authors
even if this isn't fully ready yet.

I've been loosely following the packaging work long enough to see it
change direction more than once. I still find it hard to see the
complete picture of how pip, PyPI, metadata, setuptools, setup.py,
setup.json, wheels and sdists are expected to fit together, in terms
of what a package author is expected to do and how it affects end
users. A guide (instead of a load of PEPs) would be a great way to
clarify this for me and for the many others who haven't been following
the progress at all.


Oscar


Re: [Distutils] Q about best practices now (or near future)

2013-07-17 Thread Oscar Benjamin
On 17 July 2013 20:52, Daniel Holth  wrote:
> On Wed, Jul 17, 2013 at 3:39 PM, Barry Warsaw  wrote:
>> On Jul 17, 2013, at 08:34 PM, Oscar Benjamin wrote:
>>
>>>I imagined that distro packaging tools would end up using the wheel as
>>>an intermediate format when building a deb from a source deb.
>>
>> Do you mean, the distro would download the wheel or that it would build it
>> during the build step for the archive?  Probably not the former, as any 
>> binary
>> blobs in a wheel would both violate policy and likely be inappropriate for 
>> all
>> the platforms we build for.
>>
>
> The distro packager will likely only have to type "python -m some_tool
> install ... " instead of "setup.py install ...". IIRC distro packaging
> normally does installation into some temporary directory which is then
> archived to create the distro package. The existence of wheel probably
> doesn't make any difference.

Currently sdists provide a relatively uniform interface in the way
that setup.py can be used for building and installation. If
non-traditional sdists become commonplace then that will no longer be
the case. On the other hand, the wheel format provides not just a
uniform interface but a formally specified one, which I imagine is
more suitable for the kind of automated processing that distros do.

I'm not a distro packager, but I imagine they would find it more
convenient to have tools that turn one formally specified format into
another than to run an installation in a monkey-patched environment.
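
As a sketch of the pipeline I have in mind -- the file names here are
illustrative, and a real recipe would also need the DEBIAN/control
metadata and policy-compliant paths:

    import subprocess

    # sdist -> wheel, using whatever build tool the sdist declares
    subprocess.check_call(["pip", "wheel", "--no-deps",
                           "mypkg-1.0.tar.gz"])

    # wheel -> staged file tree (nothing touches the live system)
    subprocess.check_call(["pip", "install", "--no-deps",
                           "--ignore-installed", "--root", "staging",
                           "mypkg-1.0-py2-none-any.whl"])

    # staged tree -> distro package
    subprocess.check_call(["dpkg-deb", "--build", "staging",
                           "mypkg_1.0_all.deb"])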


Oscar


Re: [Distutils] Q about best practices now (or near future)

2013-07-17 Thread Oscar Benjamin
On 17 July 2013 20:39, Barry Warsaw  wrote:
> On Jul 17, 2013, at 08:34 PM, Oscar Benjamin wrote:
>
>>I imagined that distro packaging tools would end up using the wheel as
>>an intermediate format when building a deb from a source deb.
>
> Do you mean, the distro would download the wheel or that it would build it
> during the build step for the archive?  Probably not the former, as any binary
> blobs in a wheel would both violate policy and likely be inappropriate for all
> the platforms we build for.

I meant the latter. The source deb would comprise the sdist (which may
or may not be "traditional") plus the other distro files. The author of
the sdist designed it with the intention that it could be turned into a
wheel in some way (perhaps not the traditional one), so the natural way
to build it is to use the author's intended build mechanism, end up
with a wheel, and then convert that into an installable deb.


Oscar


Re: [Distutils] Q about best practices now (or near future)

2013-07-17 Thread Oscar Benjamin
On 17 July 2013 19:46, Barry Warsaw  wrote:
>On Jul 17, 2013, at 11:46 AM, Daniel Holth wrote:
>>
>>I'd like to see an ambitious person begin uploading wheels that have
>>no traditional sdist.
>
> You're not getting rid of sdists are you?
>
> Please note that without source distributions (preferably .tar.gz) your
> package will never get distributed on a Linux distro.
>
> Maybe the keyword here is "traditional" though.

Yeah, I think what Daniel means is that the sdist->wheel
transformation could be done by a tool other than distutils or
setuptools. The sdist as supplied would not be something that could be
installed directly with 'python setup.py install', but it could be
turned into a wheel by bento/waf/yaku/scons etc.

> In that case, keep in mind
> that at least in Debian and its derivatives, we have a lot of tools that make
> it pretty trivial to package something setup.py based from PyPI.  If/when that
> goes away, it will be more difficult to get new package updates, until the
> distro's supporting tools catch up.

I imagined that distro packaging tools would end up using the wheel as
an intermediate format when building a deb from a source deb. Would
that not make things easier long-term? In the short term, you can
expect that whatever solution people use is likely to be convertible
to a traditional sdist in some straightforward way, e.g. 'bentomaker
sdist'.


Oscar


Re: [Distutils] PEP 426 updated based on last round of discussion

2013-07-17 Thread Oscar Benjamin
On 17 July 2013 13:17, Nick Coghlan  wrote:
> That said, the new metadata standard does deliberately include a few
> pieces intended to make such things easier to define:
>
> 1. The extensions concept - using a structured data format like JSON
> makes it much easier for platform specific tools (or even pip itself)
> to say "declare this metadata, and we will run these commands
> automatically"

Okay, so this is where you can put the "I need a [specific]
C-compiler" information. Then a pip alternative (or a future pip) that
knew more about C compilation could respond appropriately.

The PEP doesn't explicitly say anything about how a tool should handle
unrecognised metadata extensions; it seems fairly obvious to me that
they are supposed to be ignored, but perhaps this should be stated
explicitly.

On the other hand, it would be useful to be able to say: if you don't
understand my "fortran" metadata extension then you don't know how to
install/build this distribution. Is there a way, e.g., to indicate a
build/install dependency on the tool understanding some section of the
extension metadata, or to mark an extension as compulsory somehow?

Then a user could do:
$ pip install autocont
Error installing "autocont": required extension "fortran" not understood.
See http://pypa.org/list_of_known_extensions.htm for more information.
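
The check itself would be trivial. A toy sketch, assuming a
hypothetical 'must_understand' key that lists the extensions a tool is
required to support:

    KNOWN_EXTENSIONS = set(["python.details"])  # whatever this tool knows

    def check_extensions(metadata):
        # unrecognised extensions are ignored unless declared compulsory
        missing = set(metadata.get("must_understand", [])) - KNOWN_EXTENSIONS
        if missing:
            raise SystemExit(
                'Error installing %r: required extension(s) %s not '
                'understood.' % (metadata["name"],
                                 ", ".join(sorted(missing))))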


Oscar


Re: [Distutils] PEP 426 updated based on last round of discussion

2013-07-17 Thread Oscar Benjamin
On 17 July 2013 12:10, Paul Moore  wrote:
>
> I can't imagine it's practical to auto-install a C compiler

Why not?

> - or even to check for one before building.
>
> But I can see it being useful for
> introspection purposes to know about this type of requirement. (A C compiler
> could be necessary, or optional for speedups, a particular external library
> could be needed, etc)

Perhaps instead the installer tool could give you a way to confirm
that you do have a C compiler, and warn if not.
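
For example, the tool could just attempt a trivial compilation up
front. A rough sketch using the distutils machinery we have today:

    import os
    import tempfile
    from distutils.ccompiler import new_compiler
    from distutils.errors import CCompilerError, DistutilsError

    def have_c_compiler():
        # try to compile an empty C program; any failure means "no"
        tmpdir = tempfile.mkdtemp()
        src = os.path.join(tmpdir, "check.c")
        with open(src, "w") as f:
            f.write("int main(void) { return 0; }\n")
        try:
            new_compiler().compile([src], output_dir=tmpdir)
        except (CCompilerError, DistutilsError):
            return False
        return True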

Alternatively a meta-package could be used to indicate (when
installed) that a compatible C-compiler is available and then other
distributions could depend on it for building.

> The data would likely only be as good as what project developers provide,
> but nevertheless having standard places to record the data could encourage
> doing so...
>
> OTOH, maybe this is metadata 3.0 stuff - I feel like at the moment we need
> to get what we have now out of the door rather than continually adding extra
> capabilities.

I wasn't proposing to hold anything up or to add new capabilities. I'm
just trying to see how far these changes go towards making non-pure
Python software automatically installable. Everything I would want to
build has build requirements that are not on PyPI.

It would be great if e.g. the instructions for installing Cython on
Windows could just be "pip install cython" instead of this:
http://wiki.cython.org/InstallingOnWindows


Oscar


Re: [Distutils] PEP 426 updated based on last round of discussion

2013-07-17 Thread Oscar Benjamin
On 16 July 2013 14:40, Nick Coghlan  wrote:
>
> The latest version of PEP 426 is up at 
> http://www.python.org/dev/peps/pep-0426/

Just looking at the "Build requires" section, I found myself
wondering: is there any way to say that e.g. a C compiler is required
for building, or a Fortran compiler, or any other piece of software
that isn't a "Python distribution"?

The example shows Cython, which is commonly built and used with MinGW
on Windows. I guess it would be possible to create a PyPI distribution
that installs MinGW and sets it up as part of a Python installation, so
that a project such as Cython could depend on it with e.g.:

"name": "Cython",
"build_requires": [
  {
"requires": ["pymingw"],
"environment": "sys.platform == 'win32'"
  }
]
"run_requires": [
  {
"requires": ["pymingw"],
"environment": "sys.platform == 'win32'"
  }
]

But it would be unfortunate to depend on MinGW in the event that the
user actually has the appropriate MSVC version.

Or perhaps there could be a meta-distribution called "CCompiler" that
installs MinGW only if the appropriate MSVC version is not available.
Or could there be an environment marker to indicate the presence of
particularly common requirements, such as having a C compiler?
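
Evaluating markers like the ones above is the easy part, at least. A
toy sketch -- a real tool would parse the restricted marker language
rather than eval() it:

    import sys

    def active_requires(clauses):
        reqs = []
        for clause in clauses:
            marker = clause.get("environment", "True")
            if eval(marker, {"sys": sys}):  # e.g. "sys.platform == 'win32'"
                reqs.extend(clause["requires"])
        return reqs

so that active_requires(metadata["build_requires"]) would give
["pymingw"] on Windows and nothing elsewhere.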


Oscar

