[Distutils] Re: Why lockbot?

2019-06-02 Thread Ralf Gommers
Another issue with the lockbot: I was watching specific issues and PRs
because they're important to me (as a packager and a maintainer of
numpy.distutils). The locking has now made it very hard to follow any
updates. Even if someone opens a new issue or PR to continue where a locked
one left off, there's no way for anyone but maintainers to comment on the
original issue (which isn't likely to happen consistently) to alert me of
that. And I really don't want to watch the whole repo, which is many times
noisier.

Ralf



On Sun, Jun 2, 2019 at 1:40 PM Paul Moore  wrote:

> One thought - is it possible in GitHub to subscribe to "everything
> that isn't closed"? I couldn't see such an option, but that would (a)
> let me ignore the lockbot messages, and (b) let me ignore closed
> threads so I wouldn't even need lockbot.
>
> Paul
>
> On Sun, 2 Jun 2019 at 12:26, Paul Moore  wrote:
> >
> > The bot has just been enabled, and it's catching up on historical bugs
> > (which can't be done all in one go, as rate limits would hit us).
> > Hopefully it should die down in a few days.
> >
> > Whether there's a way to tell github not to send such notifications,
> > or whether it's possible for us to configure the bot to lock the
> > thread but not add a comment, I don't know.
> > Paul
> >
> > On Sun, 2 Jun 2019 at 09:12, Robert Collins 
> wrote:
> > >
> > > Seems like most of the pip bugmail I get now is lockbot messages
> > > telling me that a bug that hasn't received any discussion for a long
> > > time now can't have any more discussion. Is that really needed? The
> > > github UI shows the lock status of bugs itself...
> > >
> > > -Rob


Re: [Distutils] Extracting distutils into setuptools

2017-09-27 Thread Ralf Gommers
On Thu, Sep 28, 2017 at 8:46 AM, xoviat  wrote:

> No. Setuptools is what projects without a build_backend in pyproject.toml
> get. Not distutils. We should make it clear now that the distutils
> namespace belongs to setuptools except for when building cpython.
>
> On Sep 27, 2017 2:33 PM, "Ned Deily"  wrote:
>
>> On Sep 27, 2017, at 13:31, Steve Dower  wrote:
>> > setuptools is totally welcome in my book to simply copy the compiler
>> infrastructure we already have from core and never look back. It really
>> does need to be maintained separately from CPython, especially on Windows
>> where we continue to get innovation in the targeted tools. I know it's a
>> big ask, and it's one that I can't personally commit real time to (though I
>> obviously will as much as possible), but I do think it is necessary for our
>> ecosystem to not be tied to CPython release cycles.
>>
>> Whatever is done, keep in mind that currently distutils is required to
>> build Python itself, e.g. the standard library.  And that at least one
>> important project, numpy, already subclasses distutils.
>>
>
For numpy that seems fixable (if it even breaks, it may not). As long as
the setuptools maintainers are willing to keep numpy.distutils
compatibility, I'm happy to make the necessary changes in numpy.

FYI, it has happened twice (IIRC) in the last five years that a new
setuptools release broke numpy.distutils. This was fixed very quickly with
good collaboration between the projects.

Ralf



>
>> --
>>   Ned Deily
>>   n...@python.org -- []


Re: [Distutils] Extracting distutils into setuptools

2017-09-27 Thread Ralf Gommers
On Wed, Sep 27, 2017 at 3:30 PM, xoviat  wrote:

> This was a comment by @zooba (Steve Dower):
>
> > (FWIW, I think it makes *much* more sense for setuptools to fix this by
> simply forking all of distutils and never looking back. But since we don't
> live in that world yet, it went into distutils.)
>
> And here is my response:
>
> > Since you mention it, I agree with that proposal. But currently we have
> core developers contributing to distutils and @jaraco contributing to
> setuptools. @jaraco is quite competent, but I doubt that he would be able
> to maintain an independent fork of distutils by himself.
>
> > In short, I think your proposal is a good one, but how can we allocate
> manpower?
>
> (issue31595 on bugs.python.org)
>
> So what do others think of this? My sense of things is that people are
> open to the idea, but there isn't a plan to make it happen.
>

My 2c: I'd only be a very occasional contributor, but it makes a lot of
sense from the point of view of packaging/distributing changes to
distutils. Also setuptools is a lot better maintained than distutils, and
using the bug tracker is a much better experience. So many reasons to do
it, and I'd certainly be more likely to report bugs and/or fix them.

Ralf


Re: [Distutils] A possible refactor/streamlining of PEP 517

2017-07-17 Thread Ralf Gommers
On Mon, Jul 17, 2017 at 11:30 PM, Thomas Kluyver 
wrote:

> On Mon, Jul 17, 2017, at 01:07 PM, Paul Moore wrote:
> > If we have a consensus here that "build a sdist and build a wheel from
> > it" is an acceptable/viable main route for pip to generate wheels
> > (with "just ask the backend" as fallback) then I'm OK with not
> > bothering with an "ask the backend to build a wheel out of tree"
> > option. My recollection of the history was that there was some
> > resistance in the past to pip going down the "build via sdist" route,
> > but if that's now considered OK in this forum, then I'm fine with
> > assuming that either I was mistaken or things have changed.
>
> I think I was one of the people arguing against going via an sdist. The
> important point for me is that an sdist is not a requirement for
> installing from source -  it's ok by me if it tries building an sdist
> first and then falls back to building a wheel directly.
>

Same here, I had a preference for not going via sdist but am OK with the
current status of the PEP.

Cheers,
Ralf


Re: [Distutils] A possible refactor/streamlining of PEP 517

2017-07-17 Thread Ralf Gommers
On Mon, Jul 17, 2017 at 10:15 PM, Nick Coghlan  wrote:

> On 17 July 2017 at 20:00, Ralf Gommers  wrote:
> >
> >
> > On Mon, Jul 17, 2017 at 8:53 PM, Nick Coghlan 
> wrote:
> >>
> >> On 17 July 2017 at 18:29, Ralf Gommers  wrote:
> >> > On Mon, Jul 17, 2017 at 7:50 PM, Nick Coghlan 
> >> > wrote:
> >> >> The minimal specification for in-place builds is "Whatever you would
> >> >> do to build a wheel file from an unpacked sdist".
> >> >
> >> > Eh no, in-place has nothing to do with building a wheel. Several
> people
> >> > have
> >> > already pointed this out, you're mixing unrelated concepts and that's
> >> > likely
> >> > due to you using a definition for in-place/out-of-place that's
> >> > nonstandard.
> >>
> >> I'm using in-place specifically to mean any given PEP 517 backend's
> >> equivalent of an unqualified "./setup.py build_wheel".
> >
> >
> > Thanks. Very much nonstandard and possibly circular, but at least you've
> > defined it:) I suggest you pick more precise wording, because this leaves
> > little room for the more common use of in-place. Which you can define in
> > several flavors as well, but all of them definitely have the property
> that
> > if you put the source directory on sys.path you can import and use the
> > package.  build_wheel does not have that property.
>
> Ah, thanks for clarifying. That's using "in-place" to refer to the
> Python-specific notion of an editable install ('setup.py develop',
> 'pip install -e', etc).


Not really Python-specific; here are two of the first results of a Google
search:
https://cmake.org/Wiki/CMake_FAQ#Out-of-source_build_trees
https://stackoverflow.com/questions/4018869/what-is-the-in-place-out-of-place-builds
It's basically: build artifacts go right next to the source files. For
Python it then follows that you can import from the source dir, but that's
just a consequence and not part of the definition of in-place at all.


> Not a usage I've personally encountered, but
> I'm also a former embedded systems developer that now works for an
> operating system company, so I'm not necessarily the most up to speed
> on common terminology in environments more specifically focused on
> Python itself, rather than the full C/C++(/Rust/Go)/Python stack :)
>
> The in-place/out-of-tree sense currently used in the PEP (and my posts
> to the list about this point) is the common meaning for compiled
> languages, and hence the one common to most full-fledged build
> systems.
>

Well, you keep on saying "build_wheel". A wheel is a packaging artifact
rather than a build artifact, and is Python-specific. So not common for
compiled languages.

My mental picture is:
1. build steps (in/out-place) produce .o, .so, etc. files
2. building a wheel is a two-step process: first there's a build step (see
point 1), then a packaging step producing a .whl archive.

I suspect most people will see it like that. Hence it is super confusing to
see you describing a *build* concept like in-place with reference to a
*packaging* command like build_wheel.
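
To illustrate that mental picture with a deliberately toy sketch (my
illustration, not what any real backend does -- real tools also generate
proper METADATA and a RECORD with file hashes): step 1 is whatever
compiles/builds the package tree, and step 2 is little more than archiving
that tree with the right layout and file name.

    import zipfile
    from pathlib import Path

    def package_wheel(build_dir, dist_dir, name, version, tag="py3-none-any"):
        """Toy packaging step: zip an already-built package tree into a .whl.

        Assumes build_dir already contains the importable package (i.e. the
        output of step 1, the actual build). Illustration only.
        """
        whl = Path(dist_dir) / "{}-{}-{}.whl".format(name, version, tag)
        dist_info = "{}-{}.dist-info".format(name, version)
        with zipfile.ZipFile(whl, "w", zipfile.ZIP_DEFLATED) as zf:
            for path in Path(build_dir).rglob("*"):
                if path.is_file():
                    zf.write(path, path.relative_to(build_dir))
            zf.writestr(dist_info + "/METADATA",
                        "Name: {}\nVersion: {}\n".format(name, version))
            zf.writestr(dist_info + "/WHEEL",
                        "Wheel-Version: 1.0\nRoot-Is-Purelib: true\nTag: {}\n".format(tag))
        return whl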

Cheers,
Ralf


> However, it will definitely make sense to clarify that point, as it's
> quite reasonable for folks to read a phrase with a Python specific
> meaning in a PEP, even if key parts of that PEP are primarily about
> effectively interfacing with build systems originally designed to
> handle precompiled languages :)
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
>


Re: [Distutils] A possible refactor/streamlining of PEP 517

2017-07-17 Thread Ralf Gommers
On Mon, Jul 17, 2017 at 8:53 PM, Nick Coghlan  wrote:

> On 17 July 2017 at 18:29, Ralf Gommers  wrote:
> > On Mon, Jul 17, 2017 at 7:50 PM, Nick Coghlan 
> wrote:
> >> The minimal specification for in-place builds is "Whatever you would
> >> do to build a wheel file from an unpacked sdist".
> >
> > Eh no, in-place has nothing to do with building a wheel. Several people
> have
> > already pointed this out, you're mixing unrelated concepts and that's
> likely
> > due to you using a definition for in-place/out-of-place that's
> nonstandard.
>
> I'm using in-place specifically to mean any given PEP 517 backend's
> equivalent of an unqualified "./setup.py build_wheel".


Thanks. Very much nonstandard and possibly circular, but at least you've
defined it :). I suggest you pick more precise wording, because this leaves
little room for the more common use of in-place. You can define that in
several flavors as well, but all of them definitely have the property that
if you put the source directory on sys.path you can import and use the
package. build_wheel does not have that property.

> For an
> autotools backend, that might ultimately mean something like
> "./configure && make python_wheel". It *doesn't* necessarily mean the
> equivalent of "./configure && make", because it wouldn't make sense to
> assume that a project's *default* build target for a full-fledged
> build system will be to make Python wheel files (fortunately,
> frontends won't need to take, since hiding those kinds of details will
> be up to backends).
>
> I'm using out-of-tree to mean (as a baseline) what Daniel suggested:
> any given backend's equivalent of "./setup.py build -b
>  build_wheel" (e.g. variant directories in Scons).
>

Leave off build_wheel (which is some metadata generation + zipping up the
right files on top of building), then out-of-tree build is a clear concept.


>
> One additional config setting needed: the build/target directory
>
> This approach means that backends can implement build directory
> support without caring in the slightest about how Python frontends
> intend to use it, and without worrying overly much about the different
> kinds of source directory (VCS clone, unpacked VCS release tarball,
> unpacked sdist) except insofar as they'll need to be able to detect
> which of those they've been asked to build from if it matters to their
> build process (e.g. generating Cython files in the non-sdist cases).
>

This seems useful and clear.


>
> The non-standard semantic convention being proposed specifically as
> part of PEP 517 is then solely that for frontends like pip, if
> build_sdist fails, they should fall back to just asking the backend
> for an out-of-tree build,


Say "asking the backend to build a wheel in a clean tmpdir" or something
like that. Not clear who decides the path to the build dir by the way, is
it frontend or backend or or
frontend-if-it-specifies-one-otherwise-up-to-backend?


> rather than doing anything more exotic (or
> Python-specific).


Building a wheel is inherently Python-specific.


> This *won't* give them the general assurance of
> sdist consistency that actually building via the sdist will, but
> that's fine - the assumption is that a frontend that cares about that
> assurance will only be using this interface if the sdist build already
> failed, so full assurance clearly isn't possible in the current
> environment.
>

That strategy makes sense, seems like there's consensus on it.
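
In frontend terms I picture the agreed strategy roughly like the sketch
below (my illustration only; hook signatures as in the PEP, error handling
and config_settings omitted, and a real frontend would run the hooks in a
subprocess with the right working directory rather than chdir in-process):

    import os
    import tarfile
    import tempfile

    def build_wheel_via_sdist(backend, source_dir, output_dir):
        """Prefer sdist -> unpack -> wheel; if build_sdist fails, fall back
        to asking the backend for a wheel straight from the source tree.
        `backend` is assumed to be an already-imported PEP 517 backend."""
        output_dir = os.path.abspath(output_dir)
        old_cwd = os.getcwd()
        with tempfile.TemporaryDirectory() as tmp:
            try:
                os.chdir(source_dir)
                try:
                    sdist_name = backend.build_sdist(tmp)      # PEP 517 hook
                except Exception:
                    # Fallback: build a wheel directly from the source tree.
                    return backend.build_wheel(output_dir)     # PEP 517 hook
                with tarfile.open(os.path.join(tmp, sdist_name)) as tf:
                    tf.extractall(tmp)
                # Conventionally the sdist unpacks to <name>-<version>/
                unpacked = os.path.join(tmp, sdist_name[:-len(".tar.gz")])
                os.chdir(unpacked)
                return backend.build_wheel(output_dir)
            finally:
                os.chdir(old_cwd)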

Cheers,
Ralf


Re: [Distutils] A possible refactor/streamlining of PEP 517

2017-07-17 Thread Ralf Gommers
On Mon, Jul 17, 2017 at 7:50 PM, Nick Coghlan  wrote:

> On 17 July 2017 at 15:41, Thomas Kluyver  wrote:
> > E) When we do work out the need and the semantics for in place builds,
> > we can write another PEP adding an optional hook for that.
>

> The minimal specification for in-place builds is "Whatever you would
> do to build a wheel file from an unpacked sdist".


Eh no, in-place has nothing to do with building a wheel. Several people
have already pointed this out, you're mixing unrelated concepts and that's
likely due to you using a definition for in-place/out-of-place that's
nonstandard. It would be helpful if you either defined your terminology or
(better) just dropped in-place/out-of-place and replaced it with for
example "an empty tmpdir" vs. "a default directory which may contain build
artifacts from previous builds" vs. .

Note that distutils behavior adds to the confusion here: `build_ext
--inplace` is actually an out-of-place build where the final extension
modules are copied back into the source tree (but not any intermediate
artifacts).

Cheers,
Ralf


Re: [Distutils] A possible refactor/streamlining of PEP 517

2017-07-14 Thread Ralf Gommers
On Sat, Jul 15, 2017 at 9:31 AM, Daniel Holth  wrote:

> I proposed the build directory parameter because the copytree hook made no
> sense to me. It is not a perfect substitute but perhaps a configurable
> build directory is nice on its own without having to satisfy all older
> arguments in favor of copytree. I think true in-place builds are the
> oddball (for example 2to3 or any build where sources have the same name as
> outputs needs a build directory to put the translated .py files, otherwise
> it would overwrite the source). What people think of as in-place builds in
> distutils are usually just builds into the default build directory.
>

That's not the interesting part: it doesn't matter whether the build is done
in build/lib*/etc. inside the repo or outside it. What matters is that the
final build artifacts are placed back in the source tree. So a C extension
will have .c files in the tree, and after an inplace build it will have .c
and .so (but no .o!).

Ralf


Re: [Distutils] A possible refactor/streamlining of PEP 517

2017-07-10 Thread Ralf Gommers
On Mon, Jul 10, 2017 at 7:13 PM, Thomas Kluyver 
wrote:

> On Mon, Jul 10, 2017, at 07:01 AM, Nick Coghlan wrote:
> > So I think we have pretty solid evidence that the reason the
> > procedural "build directory preparation" hook wasn't sitting well with
> > people was because that isn't the way build systems typically model
> > the concept, while a "build directory" setting is very common (even if
> > that "setting" is "the current working directory when configuring or
> > running the build").
>
> Hooray! :-)
>
> Do we want to also provide a build_directory for the build_sdist hook?
> In principle, I don't think making an sdist should involve a build step,
> but I know that some projects do perform steps like cython code gen or
> JS minification before making the sdist. I think this was a workaround
> to ease installation before wheel support was widespread, and I'd be
> inclined to discourage it now, so my preference would be no
> build_directory parameter for build_sdist. Backends which insist on
> generating intermediates at that point can make a temp dir themselves.
>

No preference on yes/no for build_directory for build_sdist hook, but
invoking Cython on .pyx files to generate C code rather than checking in
generated C code is good practice. I don't think we want to go back to
checking in generated code, nor do we want to store generated code in
tmpdirs (because that loses the advantage of not having to regenerate when
.pyx hashes are identical).
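
The usual pattern (sketched from memory; module names below are
placeholders) is to cythonize when Cython is available, e.g. when building
from a VCS checkout, and otherwise fall back to the generated C shipped in
the sdist. cythonize() itself skips regenerating a .c file when the
corresponding .pyx is unchanged, which is the caching advantage mentioned
above.

    # Sketch of a common setup.py fragment; "mypkg._fast" is a placeholder.
    from setuptools import Extension, setup

    try:
        from Cython.Build import cythonize
        ext_modules = cythonize([Extension("mypkg._fast", ["mypkg/_fast.pyx"])])
    except ImportError:
        # Building from an sdist that ships the pre-generated C file.
        ext_modules = [Extension("mypkg._fast", ["mypkg/_fast.c"])]

    setup(name="mypkg", version="0.1", ext_modules=ext_modules)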

Ralf


> Then I guess that the choice between building a wheel directly and
> attempting to build an sdist first (with direct fallback) is one for
> frontends, and doesn't need to be specified.
>
> Thomas


Re: [Distutils] A possible refactor/streamlining of PEP 517

2017-07-06 Thread Ralf Gommers
On Thu, Jul 6, 2017 at 8:57 PM, Thomas Kluyver  wrote:

> Thank-you all for the discussion and the attempts to accommodate flit,
> but I'll bow out now. It's become clear that the way flit approaches
> packaging is fundamentally incompatible with the priorities other people
> have for the ecosystem. Namely, I see sdists as archival artifacts to be
> made approximately once per release, but the general trend is to make
> them a key part of the build pipeline.
>

For the record: your view makes perfect sense to me, and is conceptually
cleaner than the one that PEP 517 in its current form prefers.

Making a guerilla tool with no concern for integration was fun. It
> became frustrating as people began to use it and expected it to play
> well with other tools, so I jumped on PEP 517 as a way to bring it into
> the fold. That didn't work out, and a tool that doesn't play well with
> pip can only be an attractive nuisance at best, even if it technically
> complies with the relevant specs.
>
> Flit is therefore deprecated, and I recommend anyone using it migrate
> back to setup.py packaging.
>

I hope you'll reconsider that deprecation - flit is one of only two (AFAIK)
active attempts at making a saner build tool (enscons being the other one),
and does have real value I think.

Either way, thanks for all the effort you put in!

Ralf


Re: [Distutils] Provisionally accepting PEP 517's declarative build system interface

2017-06-03 Thread Ralf Gommers
On Sat, Jun 3, 2017 at 8:59 PM, Paul Moore  wrote:

> On 3 June 2017 at 03:14, Nathaniel Smith  wrote:
> > So far my belief is that packages with expensive build processes are
> > going to ignore you and implement, ship, document, and recommend the
> > direct source-tree->wheel path for developer builds. You can force the
> > make-a-wheel-from-a-directory-without-copying-and-then-install-it
> > command have a name that doesn't start with "pip", but it's still
> > going to exist and be used. Why wouldn't it? It's trivial to implement
> > and it works, and I haven't heard any alternative proposals that have
> > either of those properties. [1]
>
> I may be misunderstanding you, but it's deeply concerning if you're
> saying "as a potential backend developer, I'm sitting here listening
> to the discussion about PEP 517 and I've decided not to raise my
> concerns but simply to let it be implemented and then ignore it".
>

I think you partly misunderstood - "ignore you" should mean "ignore pip" or
"ignore the mandatory sdist part of PEP 517" not "ignore all of PEP 517".
And concerns have been raised (just rejected as less important than the
possibility of more bug reports to pip)?

And I agree with Nathaniel's view in the paragraph above.


> OTOH, I'm not sure how you plan on ignoring it - are you suggesting
> that projects like numpy won't support "pip install numpy" except for
> wheel installs[1]?
>

Of course not, that will always be supported. It's just that where the
developer/build docs now say "python setup.py ..." we want them to say "pip
install . -v", and with mandatory sdist generation that won't happen - they
will instead say "somenewtool install ." where somenewtool is a utility that
does something like:
1. invoke backend directly to build a wheel (using PEP 517 build_wheel
interface)
2. install the wheel with pip
and probably also
1. invoke backend directly for in-place build
2. make the inplace build visible (may involve telling pip to uninstall the
project if it's installed elsewhere, and messing with PYTHONPATH or pip
metadata)
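
A minimal sketch of what the first of those flows could look like
("somenewtool" is hypothetical and doesn't exist; this assumes the backend
named in pyproject.toml's build-system table is importable and exposes the
PEP 517 build_wheel hook -- "setuptools.build_meta" below is only an example
default, normally it would be read from pyproject.toml):

    import importlib
    import os
    import subprocess
    import sys
    import tempfile

    def install_from_source(source_dir, backend_path="setuptools.build_meta"):
        """Hypothetical 'somenewtool install .': build a wheel by calling the
        backend's build_wheel hook directly, then hand the wheel to pip."""
        backend = importlib.import_module(backend_path)
        old_cwd = os.getcwd()
        with tempfile.TemporaryDirectory() as wheel_dir:
            os.chdir(source_dir)       # PEP 517 hooks run in the source tree
            try:
                wheel_name = backend.build_wheel(wheel_dir)
            finally:
                os.chdir(old_cwd)
            subprocess.check_call([sys.executable, "-m", "pip", "install",
                                   os.path.join(wheel_dir, wheel_name)])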

Ralf


Re: [Distutils] Provisionally accepting PEP 517's declarative build system interface

2017-05-29 Thread Ralf Gommers
> I think there's some pip bug somewhere discussing this 

https://github.com/pypa/pip/issues/2195
https://github.com/pypa/pip/pull/3219
plus some long mailing list threads IIRC

On Mon, May 29, 2017 at 9:19 PM, Paul Moore  wrote:

> On 29 May 2017 at 08:05, Nathaniel Smith  wrote:
> > Right now pip doesn't really have a good way of expressing the latter.

> > 'pip install directory/' is relatively unambiguously saying that I
> > want a local install of some potentially-locally-modified files, and
> > while it might involve a temporary wheel internally there's no need to
> > expose this in any way (and e.g. it certainly shouldn't be cached), so
> > I think it's OK if this builds in-place and risks giving different
> > results than 'pip install sdist.tar.gz'. (Also note that in the most
> > common case where a naive user might use this accidentally, where
> > they've downloaded an sdist, unpacked it manually, and then run 'pip
> > install .', they *do* give the same results -- the potential for
> > trouble only comes when someone runs 'pip install .' multiple times in
> > the same directory.)
>
> I think that the key thing here is that as things stand, pip needs a
> means to copy an existing "source tree", as efficiently as possible.
> For local directories (source checkouts, typically) there's a lot of
> clutter that isn't needed to replicate the "source tree" aspect of the
> directory - but we can't reliably determine what is clutter and what
> isn't.
>
> Whether that copying is a good idea, in the face of the need to do
> incremental builds, is something of an open question - clearly we
> can't do something that closes the door on incremental builds, but
> equally the overhead of copying unwanted data is huge at the moment,
> and we can't ignore that.
>
> Talking about a "build a sdist" operation brings a whole load of
> questions about whether there should be a sdist format, what about
> sdist 2.0, etc into the mix. So maybe we should avoid all that, and
> say that pip[1] needs a "copy a source tree" operation. Backends
> SHOULD implement that by skipping any unneeded files in the source
> tree, but can fall back to a simple copy if they wish. In fact, we
> could make the operation optional and have *pip* fall back to copying
> if necessary. It would then be a backend quality of implementation
> issue if builds are slow because multi-megabyte git trees get copied
> unnecessarily.
>

Doesn't that just move the problem from pip to backends? It's still a
choice between:
(1) making no copy (good for in-place builds and also fine for pbr & co,
but needs education or a "pip release" type command)
(2) making a full copy like now including .git, .vagrant, etc. (super
inefficient)
(3) making an efficient copy (will likely still break pbr and
setuptools-scm, *and* break in-place builds)
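
To make (3) concrete, an "efficient copy" would be something like copying
only the VCS-tracked files (illustration only, assuming a git checkout) --
which also shows exactly why it breaks pbr and setuptools-scm: the .git
directory they derive the version from never gets copied.

    import os
    import shutil
    import subprocess

    def copy_tracked_files(source_dir, dest_dir):
        """Copy only git-tracked files from source_dir into dest_dir."""
        tracked = subprocess.check_output(
            ["git", "ls-files"], cwd=source_dir,
            universal_newlines=True).splitlines()
        for rel_path in tracked:
            dst = os.path.join(dest_dir, rel_path)
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.copy2(os.path.join(source_dir, rel_path), dst)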

> (This operation might also help tools like setuptools-scm that need
> git information to work - the backend could extract that information
> on a "copy" operation and pit it somewhere static for the build).
>

If the backend can do it, so can pip right?

Ralf



>
> Paul
>
> [1] As pip is currently the only frontend, "what pip needs right now"
> is the only non-theoretical indication we have of what frontends might
> need to do, so we should be cautious about dismissing this as "pip
> shouldn't work like this", IMO.


Re: [Distutils] Best practice to build binary wheels on Github+Travis and upload to PyPI

2017-03-13 Thread Ralf Gommers
On Mon, Mar 13, 2017 at 10:47 PM, Lele Gaifax  wrote:

> Hi all,
>
> I'd like to learn how to configure a project I keep on Github so that at
> release time it will trigger a build of binary wheels for different
> versions
> of Python 3 and eventually uploading them to PyPI.
>
> At first I tried to follow the Travis deploy instruction[1], but while that
> works for source distribution it cannot be used to deploy binary wheels
> because AFAICT Travis does not build “manylinux1”-marked wheels.
>
> I then found the manylinux-demo project[2] that uses Docker and contains
> a script able to build the wheels for every available version of Python.
> OTOH,
> it does not tackle to PyPI upload step.
>
> I will try to distill a custom recipe for my own needs looking at how other
> packages implemented this goal, but I wonder if there is already some
> documentation that could help me understanding better how to intersect the
> above steps.
>

Multibuild is probably the best place to start:
https://github.com/matthew-brett/multibuild

Here's a relatively simple and up-to-date example of how to produce wheels
for Windows, Linux and OS X automatically using multibuild:
https://github.com/MacPython/pywavelets-wheels

Ralf


Re: [Distutils] GSoC 2017 - Plan of Action for dependency resolver

2017-03-01 Thread Ralf Gommers
On Thu, Mar 2, 2017 at 9:07 AM, Donald Stufft  wrote:

>
> On Mar 1, 2017, at 3:02 PM, Ralf Gommers  wrote:
>
>
>
> On Wed, Mar 1, 2017 at 4:14 AM, Pradyun Gedam  wrote:
>
>> Hello Everyone!
>>
>> Google released the list of accepted organizations for GSoC 2017 and PSF
>> is one of them.
>>
>
> I see pip is not yet listed as a PSF sub-org on http://python-gsoc.org/.
> This is pretty urgent to arrange:
>
> *"March 3* - Last day for Python sub-orgs to apply to participate
> with the PSF.
> (Assuming we get accepted by Google and can support sub-orgs, of
> course!)
> This deadline is for orgs who applies on their own and didn't make it,
> but still
>  wish to participate under the umbrella. "
>
> The original deadline was Feb 7. There's a good chance that Pip will still
> be accepted after March 3, but I wouldn't gamble on it.
>
> There are instructions under "Project Ideas" on http://python-gsoc.org/
> on how to get accepted as a sub-org.
>
>
>
> Oh. I’ve never done this before and Pradyun reached out so I had no idea I
> had to do this. I’ll go ahead and do this.
>

I'm the GSoC admin for SciPy, so need to keep track of the various
deadlines/todos. I'd be happy to ping you each time one approaches if that
helps.

There's a PSF GSoC mentors list that's not noisy and is useful to join. You'll
be added to the Google GSoC-mentors list automatically if you start
mentoring in the program, but you may want to mute it or not use your
primary email address for it (it's high-traffic, very low signal to noise
and you can't unsubscribe).

Ralf


Re: [Distutils] GSoC 2017 - Plan of Action for dependency resolver

2017-03-01 Thread Ralf Gommers
On Wed, Mar 1, 2017 at 4:14 AM, Pradyun Gedam  wrote:

> Hello Everyone!
>
> Google released the list of accepted organizations for GSoC 2017 and PSF
> is one of them.
>

I see pip is not yet listed as a PSF sub-org on http://python-gsoc.org/.
This is pretty urgent to arrange:

*"March 3* - Last day for Python sub-orgs to apply to participate with
the PSF.
(Assuming we get accepted by Google and can support sub-orgs, of
course!)
This deadline is for orgs who applies on their own and didn't make it,
but still
 wish to participate under the umbrella. "

The original deadline was Feb 7. There's a good chance that Pip will still
be accepted after March 3, but I wouldn't gamble on it.

There are instructions under "Project Ideas" on http://python-gsoc.org/ on
how to get accepted as a sub-org.

Cheers,
Ralf


Re: [Distutils] Adding modules to library

2016-11-29 Thread Ralf Gommers
On Tue, Nov 29, 2016 at 2:26 PM,  wrote:

> Having trouble installing numpy and picamera modules to python shell.
>
> Need your help please.
>
> I am just learning Python Laguage.
>

On Windows, you're best off using a scientific Python distribution (like
Anaconda, see http://scipy.org/install.html#scientific-python-distributions
for all your options). If you just need numpy and know how to use pip
already, "pip install numpy" should work. Note that as soon as you need
more scientific packages (like scipy) that won't work though, so better
start off right and install a distribution.

Note that the numpy mailing list or Stack Overflow are better places than
this list to ask how best to install numpy.

Cheers,
Ralf


Re: [Distutils] Role of distutils.cfg

2016-11-11 Thread Ralf Gommers
On Fri, Nov 11, 2016 at 9:12 PM, Christoph Groth 
wrote:

> Ralf Gommers wrote:
>
> You forgot to add all your links.
>>
>
> I accidentally deleted them when re-posting my message.  The first time I
> sent it to this list without being subscribed, and it was unfortunately
> *silently* dropped.  (I had assumed that postings by non-members are
> moderated.)  Here they are:
>
> [1] https://pypi.python.org/pypi/kwant/1.2.2
> [2] https://gitlab.kwant-project.org/kwant/kwant/blob/master/setup.py
> [3] https://gitlab.kwant-project.org/kwant/kwant/issues/48#note_2494
>
> Most robust is to only pass metadata (name, maintainer, url,
>> install_requires, etc.). In a number of cases you're forced to pass
>> ext_modules or cmdclass, which usually works fine. Passing individual
>> paths, compiler flags, etc. sounds unhealthy.
>>
>
> Sounds reasonable, thanks for your advice.
>
> Is there any alternative to passing ext_modules?
>

What Numpy and Scipy do is pass a single Configuration instance, and define
all extensions and libraries in nested setup.py files, one per submodule.
Example: https://github.com/scipy/scipy/blob/master/setup.py#L327

If you pass ext_modules as a list of Extension instances, you'd likely also
avoid the issue. Example:
https://github.com/PyWavelets/pywt/blob/master/setup.py#L145
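
For the second option, a minimal sketch (package and module names are
placeholders) that attaches NumPy's include path to each Extension rather
than passing a global include_dirs to setup(), which per the above should
likely avoid the distutils.cfg shadowing issue:

    import numpy
    from setuptools import Extension, setup

    ext_modules = [
        Extension("mypkg._core",                    # placeholder name
                  sources=["mypkg/_core.c"],
                  include_dirs=[numpy.get_include()]),
    ]

    setup(name="mypkg", version="1.0", ext_modules=ext_modules)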

Ralf


Re: [Distutils] Role of distutils.cfg

2016-11-10 Thread Ralf Gommers
On Fri, Nov 11, 2016 at 12:27 AM, Christoph Groth 
wrote:

> Hi,
>
> I have a question on how to best handle parameters to the distribution
> given that they can be shadowed by the global configuration file,
> distutils.cfg.
>
> Our project [1]


You forgot to add all your links.


> contains C-extensions that include NumPy’s headers.  To this end, our
> setup.py [2] sets the include_dirs parameter of setup() to NumPy’s include
> path.  We have chosen this way, since it allows to add a common include
> path to all the extensions in one go.  One advantage of this approach is
> that when the include_dirs parameters of the individual extensions are
> reconfigured (for example with a build configuration file), this does not
> interfere with the numpy include path.
>
> This has been working well for most of other users, but recently we got a
> bug report by someone for whom it doesn’t.  It turns out that his system
> has a distutils.cfg that precedes over the include_dirs parameter to
> setup() [3].
>
> My question is now: is there a policy on the correct use of
> distutils.cfg?  After all, it can override any parameter to any distutils
> command.  As such, is passing options like include_dirs to setup() a bad
> idea in the first place, or should rather the use of distutils.cfg be
> reserved to cases like choosing an alternative compiler?
>

I'm not aware of any policy, but in general I'd recommend to pass as little
to setup() as possible.

Most robust is to only pass metadata (name, maintainer, url,
install_requires, etc.). In a number of cases you're forced to pass
ext_modules or cmdclass, which usually works fine. Passing individual
paths, compiler flags, etc. sounds unhealthy.

Ralf


Re: [Distutils] Current Python packaging status (from my point of view)

2016-11-05 Thread Ralf Gommers
On Sat, Nov 5, 2016 at 7:57 PM, Nathaniel Smith  wrote:

> On Fri, Nov 4, 2016 at 11:36 PM, Nick Coghlan  wrote:
> > On 4 November 2016 at 03:56, Matthew Brett 
> wrote:
> >> But - it would be a huge help if the PSF could help with funding to
> >> get mingw-w64 working.  This is the crucial blocker for progress on
> >> binary wheels on Windows.
> >
> > Such a grant was already awarded earlier this year by way of the
> > Scientific Python Working Group (which is a collaborative funding
> > initiative between the PSF and NumFocus):
> > https://mail.python.org/pipermail/scientific/2016-January/000271.html
> >
> > However, we hadn't received a status update by the time I stepped down
> > from the Board,


This status update was sent to the PSF board in June:
http://mingwpy.github.io/roadmap.html#status-update-june-16.

Up until that report the progress was good, but since then progress has
stalled because Carl Kleffner (the main author of MingwPy) has been
unavailable for private reasons.

Ralf



> > although it sounds like progress hasn't been good if
> > folks aren't even aware that the grant was awarded in the first place.
>

> There's two separate projects here that turn out to be unrelated: one
> to get mingw-w64 support for CPython < 3.5, and one for CPython >=
> 3.5. (This has to do with the thing where MSVC totally redid how their
> C runtime works.) The grant you're thinking of is for the python < 3.5
> part; what Matthew's talking about is for the python >= 3.5 part,
> which is a totally different plan and team.
>
> The first blocker on getting funding for the >= 3.5 project though is
> getting the team to write down an actual plan and cost estimate, which
> has not yet happened...
>
> -n
>
> --
> Nathaniel J. Smith -- https://vorpus.org


Re: [Distutils] Improved Stats Features in Python

2016-09-10 Thread Ralf Gommers
On Sun, Sep 11, 2016 at 11:26 AM, Rui Sarmento 
wrote:

> Hi Ralf,
>
> Yes, in fact I was trying to submit in git but I have some doubts. I will
> explore it more tomorrow (it is late here).
>
> For example I still have doubts with this:
>
> "If you are adding new functionality, you need to add it to the
> documentation by editing (or creating) the appropriate file in docs/source
> ."
>
> What exactly is "the appropriate file"?
>
> and also "Open the docs/source/release/versionX.X.rst file that has the
> version number of the next release and add your changes to the appropriate
> section", I see that the last version in the repository is version0.8.rst
> but I'm not sure this is the file I should edit...
>
> Maybe tomorrow with a good night sleep I'll figure it out.
>

Okay, I'd say just don't worry too much about those details if it's not
clear. Here's what you do:
1. Send an email to the statsmodels mailing list saying you want to add KMO
and Bartlett's sphericity test, and ask whether that's welcome and which
file to put it in.
2. Add your functions in that file, and add tests for that (for function in
fname.py, tests go in tests/test_fname.py).
3. Commit that and put it up on your own GitHub account.
4. From there, send a pull request to statsmodels. Or if it's really not in
good enough shape, Cc me (@rgommers) and I'll give you a few pointers.

Let's take the discussion off this list, this is very off-topic.

Cheers,
Ralf



> Cheers,
>
> Rui
> Às 22:56 de 10-09-2016, Ralf Gommers escreveu:
>
>
>
> On Sun, Sep 11, 2016 at 6:24 AM, Rui Sarmento 
> wrote:
>
>> Dear Ralf,
>>
>> No problem, its always nice to discover something new. In fact I've seen
>> the statsmodel page you sent, talking about submitting with git. I'm not
>> familiar with these procedures. Is there a How-To you would suggest me to
>> read? It is the first time I submit to these repositories. My goal is to
>> submit two functions, one for Bartlett and another for KMO.
>>
>
> Did you see that this page expands at the bottom? This is pretty much a
> walkthrough of how you go about submitting a PR: http://statsmodels.
> sourceforge.net/devel/dev/git_notes.html. It also has links to a couple
> of other helpful tutorials.
>
> Cheers,
> Ralf
>
>
> Cheers,
>>
>> Rui
>>
>> Às 23:07 de 09-09-2016, Ralf Gommers escreveu:
>>
>>
>>
>> On Sat, Sep 10, 2016 at 10:01 AM, Rui Sarmento 
>> wrote:
>>
>>> Dear Ralf,
>>>
>>> Thank you for your suggestions. About the Bartlett test I'm aware that
>>> one of his tests (equal variance of samples) is already available.
>>> Nonetheless, I'm not talking about that particular test but about other
>>> Bartlett's test. The test I wish to contribute is directed to Factor
>>> Analysis and is related to the test for sphericity. I'll try to submit both
>>> to the statsmodel.
>>>
>>
>> Ah okay, thanks - learned something new. For Bartlett's sphericity test
>> statsmodels is probably also the best place indeed.
>>
>> Cheers,
>> Ralf
>>
>>
>>
>>> Best Regards,
>>>
>>> Rui
>>> Às 22:46 de 09-09-2016, Ralf Gommers escreveu:
>>>
>>>
>>>
>>> On Fri, Sep 9, 2016 at 6:29 PM, Ronny Pfannschmidt <
>>> opensou...@ronnypfannschmidt.de> wrote:
>>>
>>>> Hello Rui,
>>>>
>>>> this mailing list deal with tools you can use to publish 3rd party
>>>> packages to something like the pypi package index,
>>>>
>>>> if you want to add to the python stdlib, you need to get started with
>>>> python-ideas, python-dev and very likely write a PEP that will have to get
>>>> accepted.
>>>>
>>>> if you just want to publish your own library, you just need to upload
>>>> it to pypi and make it known.
>>>>
>>>> -- Ronny
>>>>
>>>> On 06.09.2016 17:06, Rui Sarmento wrote:
>>>>
>>>>> Dear Sirs,
>>>>>
>>>>> I've implemented some stats functions related to Factor Analysis in
>>>>> the statistics area. Specifically, the KMO test and the Bartlett test 
>>>>> also.
>>>>> At this time I do not seem to find any module performing these tests. Is
>>>>> there any chance I could add these functions to a package in Python. What
>>>>> is the procedure to perform such contribution.
>>>

Re: [Distutils] Improved Stats Features in Python

2016-09-10 Thread Ralf Gommers
On Sun, Sep 11, 2016 at 6:24 AM, Rui Sarmento 
wrote:

> Dear Ralf,
>
> No problem, its always nice to discover something new. In fact I've seen
> the statsmodel page you sent, talking about submitting with git. I'm not
> familiar with these procedures. Is there a How-To you would suggest me to
> read? It is the first time I submit to these repositories. My goal is to
> submit two functions, one for Bartlett and another for KMO.
>

Did you see that this page expands at the bottom? This is pretty much a
walkthrough of how you go about submitting a PR:
http://statsmodels.sourceforge.net/devel/dev/git_notes.html. It also has
links to a couple of other helpful tutorials.

Cheers,
Ralf


Cheers,
>
> Rui
>
> Às 23:07 de 09-09-2016, Ralf Gommers escreveu:
>
>
>
> On Sat, Sep 10, 2016 at 10:01 AM, Rui Sarmento 
> wrote:
>
>> Dear Ralf,
>>
>> Thank you for your suggestions. About the Bartlett test I'm aware that
>> one of his tests (equal variance of samples) is already available.
>> Nonetheless, I'm not talking about that particular test but about other
>> Bartlett's test. The test I wish to contribute is directed to Factor
>> Analysis and is related to the test for sphericity. I'll try to submit both
>> to the statsmodel.
>>
>
> Ah okay, thanks - learned something new. For Bartlett's sphericity test
> statsmodels is probably also the best place indeed.
>
> Cheers,
> Ralf
>
>
>
>> Best Regards,
>>
>> Rui
>> Às 22:46 de 09-09-2016, Ralf Gommers escreveu:
>>
>>
>>
>> On Fri, Sep 9, 2016 at 6:29 PM, Ronny Pfannschmidt <
>> opensou...@ronnypfannschmidt.de> wrote:
>>
>>> Hello Rui,
>>>
>>> this mailing list deal with tools you can use to publish 3rd party
>>> packages to something like the pypi package index,
>>>
>>> if you want to add to the python stdlib, you need to get started with
>>> python-ideas, python-dev and very likely write a PEP that will have to get
>>> accepted.
>>>
>>> if you just want to publish your own library, you just need to upload it
>>> to pypi and make it known.
>>>
>>> -- Ronny
>>>
>>> On 06.09.2016 17:06, Rui Sarmento wrote:
>>>
>>>> Dear Sirs,
>>>>
>>>> I've implemented some stats functions related to Factor Analysis in the
>>>> statistics area. Specifically, the KMO test and the Bartlett test also. At
>>>> this time I do not seem to find any module performing these tests. Is there
>>>> any chance I could add these functions to a package in Python. What is the
>>>> procedure to perform such contribution.
>>>>
>>>
>> Barlett is already implemented in SciPy: http://docs.scipy.org/doc/scip
>> y/reference/generated/scipy.stats.bartlett.html
>>
>> KMO isn't available anywhere as far as I can tell; statsmodels would be
>> the best place if you would like to contribute your implementation there.
>> See http://statsmodels.sourceforge.net/devel/dev/ for how to go about
>> that. I wouldn't bother proposing that for stdlib inclusion, it's way too
>> specialized for that.
>>
>> Cheers,
>> Ralf
>>
>>
>>
>>>> Thank you very much in advance for the suggestions.
>>>>
>>>> Best Regards,
>>>>
>>>> Rui
>>>>


Re: [Distutils] Improved Stats Features in Python

2016-09-09 Thread Ralf Gommers
On Sat, Sep 10, 2016 at 10:01 AM, Rui Sarmento 
wrote:

> Dear Ralf,
>
> Thank you for your suggestions. About the Bartlett test I'm aware that one
> of his tests (equal variance of samples) is already available. Nonetheless,
> I'm not talking about that particular test but about other Bartlett's test.
> The test I wish to contribute is directed to Factor Analysis and is related
> to the test for sphericity. I'll try to submit both to the statsmodel.
>

Ah okay, thanks - learned something new. For Bartlett's sphericity test
statsmodels is probably also the best place indeed.

Cheers,
Ralf



> Best Regards,
>
> Rui
> Às 22:46 de 09-09-2016, Ralf Gommers escreveu:
>
>
>
> On Fri, Sep 9, 2016 at 6:29 PM, Ronny Pfannschmidt <
> opensou...@ronnypfannschmidt.de> wrote:
>
>> Hello Rui,
>>
>> this mailing list deal with tools you can use to publish 3rd party
>> packages to something like the pypi package index,
>>
>> if you want to add to the python stdlib, you need to get started with
>> python-ideas, python-dev and very likely write a PEP that will have to get
>> accepted.
>>
>> if you just want to publish your own library, you just need to upload it
>> to pypi and make it known.
>>
>> -- Ronny
>>
>> On 06.09.2016 17:06, Rui Sarmento wrote:
>>
>>> Dear Sirs,
>>>
>>> I've implemented some stats functions related to Factor Analysis in the
>>> statistics area. Specifically, the KMO test and the Bartlett test also. At
>>> this time I do not seem to find any module performing these tests. Is there
>>> any chance I could add these functions to a package in Python. What is the
>>> procedure to perform such contribution.
>>>
>>
> Barlett is already implemented in SciPy: http://docs.scipy.org/doc/
> scipy/reference/generated/scipy.stats.bartlett.html
>
> KMO isn't available anywhere as far as I can tell; statsmodels would be
> the best place if you would like to contribute your implementation there.
> See http://statsmodels.sourceforge.net/devel/dev/ for how to go about
> that. I wouldn't bother proposing that for stdlib inclusion, it's way too
> specialized for that.
>
> Cheers,
> Ralf
>
>
>
>>> Thank you very much in advance for the suggestions.
>>>
>>> Best Regards,
>>>
>>> Rui
>>>


Re: [Distutils] Improved Stats Features in Python

2016-09-09 Thread Ralf Gommers
On Fri, Sep 9, 2016 at 6:29 PM, Ronny Pfannschmidt <
opensou...@ronnypfannschmidt.de> wrote:

> Hello Rui,
>
> this mailing list deal with tools you can use to publish 3rd party
> packages to something like the pypi package index,
>
> if you want to add to the python stdlib, you need to get started with
> python-ideas, python-dev and very likely write a PEP that will have to get
> accepted.
>
> if you just want to publish your own library, you just need to upload it
> to pypi and make it known.
>
> -- Ronny
>
> On 06.09.2016 17:06, Rui Sarmento wrote:
>
>> Dear Sirs,
>>
>> I've implemented some stats functions related to Factor Analysis in the
>> statistics area. Specifically, the KMO test and the Bartlett test also. At
>> this time I do not seem to find any module performing these tests. Is there
>> any chance I could add these functions to a package in Python. What is the
>> procedure to perform such contribution.
>>
>
Bartlett is already implemented in SciPy:
http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.bartlett.html

KMO isn't available anywhere as far as I can tell; statsmodels would be the
best place if you would like to contribute your implementation there. See
http://statsmodels.sourceforge.net/devel/dev/ for how to go about that. I
wouldn't bother proposing that for stdlib inclusion, it's way too
specialized for that.
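
A quick usage example of the existing SciPy function (made-up sample data;
as the exchange above notes, this is the equal-variances test, and the
sphericity variant would be the new contribution):

    from scipy.stats import bartlett

    a = [8.88, 9.12, 9.04, 8.98, 9.00, 9.08]
    b = [8.88, 8.95, 9.29, 9.44, 9.15, 9.58]
    statistic, pvalue = bartlett(a, b)   # tests equality of variances
    print(statistic, pvalue)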

Cheers,
Ralf



>> Thank you very much in advance for the suggestions.
>>
>> Best Regards,
>>
>> Rui
>>


Re: [Distutils] What is the official position on distutils?

2016-09-03 Thread Ralf Gommers
On Sat, Sep 3, 2016 at 5:06 PM, Nick Coghlan  wrote:

> On 2 September 2016 at 19:28, Paul Moore  wrote:
> > On 2 September 2016 at 09:58, Sylvain Corlay 
> wrote:
> >> My point here was that I don't think that the proposed feature has much
> to
> >> do with the concerns that were raised about distutils in general,
> unless it
> >> is decided that incremental improvements to the library even backward
> >> compatible will not be accepted anymore.
> >
> > Agreed. I think your feature is only stalled for distutils by the lack
> > of people sufficiently comfortable with the code to apply it. The
> > suggestions to add it to setuptools are more in the way of practical
> > advice on how to make the feature available, rather than expressions
> > of a policy that "we don't make changes like that in the stdlib".
>
> However, one of the other consequences of the status quo is that if
> Jason's comfortable with a change for setuptools, there's very rarely
> going to be anyone that will argue with him if he also considers it a
> suitable addition to the next version of distutils :)
>
> Since Jason's primary involvement in distutils-sig & PyPA is as the
> lead setuptools maintainer, it's easy for folks to be unaware of the
> fact that he's a distutils maintainer as well.
>
> So perhaps that's what we should adopt as the official distutils-sig
> policy? Any proposed distutils changes should *always* go into
> setuptools, as that way they're available for all currently supported
> Python versions,


and they're better maintained and easier to fix if there are bugs, etc.


> and then it's up to the setuptools project to
> escalate changes or change proposals for stdlib inclusion when they
> consider that an appropriate step.
>

+1. clear and pragmatic policy.

Ralf


Re: [Distutils] PEP 527 - Removing Un(der)used file types/extensions on PyPI

2016-08-26 Thread Ralf Gommers
On Fri, Aug 26, 2016 at 7:08 PM, Thomas Kluyver 
wrote:

> On Thu, Aug 25, 2016, at 05:29 PM, Nick Coghlan wrote:
> > Could you give a bit more detail on how you came to be publishing
> > both? The main thing we're trying to avoid is missing a practical use
> > case for the status quo where folks can upload both - if it's just an
> > artifact of Windows and *nix having different default formats, then
> > the convergence in distutils and setuptools will fix it implicitly,
> > but if it's a deliberate design decision, then we need to check if
> > that's based on a misunderstanding of how pip/easy_install/et al
> > consume the two formats.
>

For Numpy and Scipy we also publish both, that's just because Windows users
often prefer .zip. I don't see an issue with dropping one of the two.

Ralf


>
> I think that script was created before my time in the project. I'd guess
> it's just a historical artefact, but Fernando might know more.
>
> Fernando: is there a reason we publish both .zip and .tar.gz sdists for
> each release? PyPA is thinking of only allowing one sdist per release.


Re: [Distutils] Proposed new Distutils API for compiler flag detection (Issue26689)

2016-08-24 Thread Ralf Gommers
On Mon, Aug 22, 2016 at 6:50 PM, Thomas Kluyver 
wrote:

> On Mon, Aug 22, 2016, at 07:15 AM, Sylvain Corlay wrote:
>
> I find this worrying that the main arguments to not include a patch would
> be that
>
>  - this part of the standard library is not very maintained (things don't
> get merged)
>  - earlier versions of Python won't have it
>
>
> Would it make sense to add it to both distutils and setuptools? The
> standard library continues to evolve, projects that require Python 3.6
> wouldn't need to use setuptools, but we could start using it sooner.
>

I don't have a problem with this; at least it avoids the main issues I
pointed out. That said, I also don't see much benefit in adding the code to
distutils as well, given that non-setuptools use of distutils is effectively
deprecated (by not adding support for new PEPs in distutils, for example)
and less and less relevant every year.


> There's obviously some cost in code duplication; I haven't looked at the
> code in question, so I don't know how bad this is.
>

This patch is pretty short and understandable, so not bad.


> I've run into this argument before when trying to change things in
> non-packaging-related parts of the stdlib, and I agree with Sylvain that
> it's fundamentally problematic. If we're trying to improve the stdlib,
> we're obviously taking a long view, but that's how we ensure the stdlib is
> still useful in a few years time. This goes for packaging tools as much as
> anything else.
>

This I don't agree with - packaging is fundamentally different for the
reasons Donald gave.

Ralf


> I already have projects where I'm happy to require Python >=3.4, so being
> able to depend on Python 3.6 is not such a distant prospect.
>
> Thomas
>


Re: [Distutils] Proposed new Distutils API for compiler flag detection (Issue26689)

2016-08-21 Thread Ralf Gommers
On Mon, Aug 22, 2016 at 6:10 AM, Donald Stufft  wrote:

> We’re reaching a point where *some* projects are announcing the end of
> Python 2 support as of a certain date, but let us not forget that Python
> 2.7 is still an order of magnitude more than any other version of Python in
> terms of downloads from PyPI.
>

Even five years from now, when almost all projects have dropped support for
Python 2.7, the reasoning remains the same. Projects will then support 3 or
4 Python 3.x versions, so any new API added to distutils still cannot be
used by those projects for 3-4 years. It does not make much sense to add
new things to distutils, with the possible exception of something needed
specifically for a new Python version (like MSVC 2015 support in Python
3.5).

On top of that there are technical reasons (don't want to test combinations
of python+setuptools that both change per release) and organizational ones
(distutils maintenance is terrible, many simple bugfix patches don't get
merged for ages, setuptools at least fixes regressions quite fast).

I'm not sure if there's an official policy on adding new things to
distutils, but if not then this request is a good time to make one.
Assuming of course that the setuptools devs are willing to merge features
like the one from Sylvain.

Ralf



>
> On Aug 21, 2016, at 2:08 PM, Sylvain Corlay 
> wrote:
>
> Although we are reaching a tipping point where a lot of projects are
> announcing the end of Python 2 support as of a certain date.
>
> Whatever is in the latest version of Python 3 when it will be considered a
> sane decision to have a Python 3-only library will be considered standard.
>
> On Sun, Aug 21, 2016 at 5:28 PM, Donald Stufft  wrote:
>
>>
>> On Aug 21, 2016, at 5:18 AM, Sylvain Corlay 
>> wrote:
>>
>> With this reasoning, nothing should ever be added to the standard library.
>>
>>
>>
>> Packaging is a bit different than other things because the network effect
>> is much more prominent. There’s no real way to say, install a backport if
>> you need one, you just have to kind of wait until everyone has upgraded, which
>> is unlike other bits of the standard library. In addition, people writing
>> projects in Python that are designed to be distributed, they tend to need
>> to work across many versions of Python, while someone writing a project for
>> themselves only need to worry about whatever version of Python they are
>> deploying to. So while the new statistics module is, even without a
>> backport, immediately useful to people developing their own projects for a
>> recent version of Python, something in distutils is not useful for package
>> authors until it is the *minimum* version of Python they support.
>>
>> This generally makes the reward for changing distutils very small,
>> particularly with the 3.x split because very few authors are willing to
>> drop 2.7 much less go straight to 3.6 (or whatever) and for people making
>> their own, internal projects, distutils isn’t generally used a whole lot
>> there either.
>>
>> —
>> Donald Stufft
>>
>>
>>
>>
>
>
> —
> Donald Stufft
>
>
>
>
> ___
> Distutils-SIG maillist  -  Distutils-SIG@python.org
> https://mail.python.org/mailman/listinfo/distutils-sig
>
>
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Proposed new Distutils API for compiler flag detection (Issue26689)

2016-08-20 Thread Ralf Gommers
On Sat, Aug 20, 2016 at 8:33 AM, Ned Deily  wrote:

> Some months ago, Sylvain brought up a couple of proposals for Distutils.
> The second proposal received some discussion but it appears that the first
> one got lost.  Here it is:
>
> > From sylvain.corlay at gmail.com  Wed May 25 12:01:51 2016
> > From: sylvain.corlay at gmail.com (Sylvain Corlay)
> > Date: Wed, 25 May 2016 12:01:51 -0400
> > Subject: [Distutils] Distutils improvements regarding header installation
> >  and building C extension modules
> > Message-ID:  gmail.com>
> >
> > Hello everyone,
> >
> > This is my first post here so, apologies if I am breaking any rules.
> >
> > Lately, I have been filing a few bug reports and patches to distutils on
> > bugs.python.org that all concern the installation and build of C++
> > extensions.
> >
> > *1) The distutils.ccompiler has_flag method.*
> > (http://bugs.python.org/issue26689)
> >
> > When building C++ extension modules that require a certain compiler flag
> > (such as enabling C++11 features), you may want to check if the compiler
> > has such a flag available.
> >
> > I proposed a patch adding a `has_flag` method to ccompiler. It is an
> > equivalent to cmake' s CHECK_CXX_COMPILER_FLAG.
> >
> > The implementation is similar to the one of has_function which by the way
> > has a pending patch by minrk in issue (http://bugs.python.org/issue25544
> ).
>
> On python-dev and in the bug tracker, Sylvain has understandably asked for
> a review with an eye to adding this new feature to Python 3.6 whose feature
> code cutoff is scheduled for a few weeks from now.  As release manager, I
> am not opposed in general to adding new features to Distutils but I think
> we should be very cautious about modifying or adding new Distutils APIs,
> given that many third-party distribution authors want to support their
> packages on multiple versions.  So I want to make sure that there is some
> agreement that adding this new API starting with 3.6 is a good thing to do
> rather than having it go in under the radar.


I'd rather see that kind of thing added to setuptools. We're already having
to deal with setuptools as a moving target, so if distutils becomes one
again as well that means more testing with combinations of different Python
and setuptools versions. Imho distutils changes should be bugfix and
essential maintenance only.


> If there are technical review issues with the implementation, it would
> probably be better to give those directly on the bug tracker.
>

The usual one for distutils: it's a patch with zero tests and zero docs. It
looks pretty safe to add, but still

Ralf




>
> Opinions?
>
> Thanks!
> --Ned
>
> --
>   Ned Deily
>   n...@python.org -- []
>
> ___
> Distutils-SIG maillist  -  Distutils-SIG@python.org
> https://mail.python.org/mailman/listinfo/distutils-sig
>
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig
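A minimal sketch of what such a has_flag-style check can look like, built only
on the distutils.ccompiler API that already exists (this is an illustration,
not the actual patch from bugs.python.org issue 26689; the flag and filenames
are arbitrary, and some compilers merely warn on unknown flags, so a real
implementation may also need something like -Werror):

import os
import tempfile
from distutils.ccompiler import new_compiler
from distutils.errors import CompileError
from distutils.sysconfig import customize_compiler

def has_flag(compiler, flag):
    # Try to compile a trivial translation unit with the extra flag;
    # if the compiler rejects it, report the flag as unsupported.
    with tempfile.TemporaryDirectory() as tmpdir:
        src = os.path.join(tmpdir, "flag_check.cpp")
        with open(src, "w") as f:
            f.write("int main() { return 0; }\n")
        try:
            compiler.compile([src], output_dir=tmpdir, extra_postargs=[flag])
        except CompileError:
            return False
    return True

if __name__ == "__main__":
    cc = new_compiler()
    customize_compiler(cc)   # pick up the compiler and flags Python was built with
    print(has_flag(cc, "-std=c++11"))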


Re: [Distutils] Request for comment: Proposal to change behaviour of pip install

2016-06-29 Thread Ralf Gommers
On Wed, Jun 29, 2016 at 7:46 AM, Nick Coghlan  wrote:

>
> On 28 Jun 2016 5:04 pm, "Ralf Gommers"  wrote:
> >
> >
> >
> > On Wed, Jun 29, 2016 at 12:45 AM, Robert Collins <
> robe...@robertcollins.net> wrote:
> >>
> >> On 29 June 2016 at 10:38, Ralf Gommers  wrote:
> >> >
> >> >
> >> > On Wed, Jun 29, 2016 at 12:16 AM, Nick Coghlan 
> wrote:
> >> >>
> >> >>
> >> >> On 26 Jun 2016 23:37, "Pradyun Gedam"  wrote:
> >> >> >
> >> >> > Hello List!
> >> >> >
> >> >> > I feel it’s fine to hold back the other changes for later but the
> >> >> > upgrade-strategy change should get shipped out to the world as
> quickly
> >> >> > as
> >> >> > possible. Even how the change is exposed the user can also be
> discussed
> >> >> > later.
> >> >> >
> >> >> > I request the list members to focus on only the change of the
> default
> >> >> > upgrade strategy to be non-eager.
> >> >> >
> >> >> > Does anyone have any concerns regarding the change of the default
> >> >> > upgrade
> >> >> > strategy to be non-eager? If not, let’s get just that shipped out
> as
> >> >> > soon as possible.
> >> >>
> >> >> Pairing that change with an explicit "pip upgrade-all" command would
> get a
> >> >> +1 from me, especially if there was a printed warning when the new
> upgrade
> >> >> strategy skips packages the old one would have updated.
> >> >
> >> > Please do not mix upgrade with upgrade-all. The latter is blocked by
> lack of
> >> > a SAT solver for a long time, and at the current pace that status may
> not
> >> > change for another couple of years. Also mixing these up is
> unnecessary, and
> >> > it was discussed last year on this list already to move ahead with
> upgrade:
> >> > http://article.gmane.org/gmane.comp.python.distutils.devel/24219
> >>
> >> I realise the consensus on the ticket is that its blocked, but I don't
> >> actually agree.
> >>
> >> Yes, you can't do it *right* without a full resolver, but you can do
> >> an approximation that would be a lot better than nothing (just narrow
> >> the specifiers given across all requirements). That is actually
> >> reasonable when you're dealing with a presumed-good-set of versions
> >> (which install doesn't deal with).
> >
> >
> > Honestly, not sure how to respond. You may be right, I don't have a
> technical opinion on an approximate upgrade-all now. Don't really want to
> have one either - when N core PyPA devs have been in consensus for a couple
> of years, then when dev N+1 comes along at the very last moment to
> challenge that consensus plus make it blocking for something we agreed was
> unrelated, that just feels frustrating (especially because it's becoming a
> pattern).
>
> "yum upgrade" has worked well enough for years without a proper SAT
> solver, and the package set in a typical Linux install is much larger than
> that in a typical virtual environment (although distro curation does reduce
> the likelihood of conflicting requirements arising in the first place).
>
Interesting. Issue https://github.com/pypa/pip/issues/59 is now dedicated
to upgrade-all (https://github.com/pypa/pip/issues/3786 is for upgrade), so
I'll copy Robert's comments and yours there.

> That said, rerunning pip-compile and then doing a pip-sync is already a
> functional equivalent of an upgrade-all operation (as is destroying and
> recreating a venv), so I agree there's no need to couple the question of
> supporting bulk upgrades in baseline pip with changing the behaviour of
> upgrading named components.
>
Thank you Nick.

Ralf
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Request for comment: Proposal to change behaviour of pip install

2016-06-28 Thread Ralf Gommers
On Wed, Jun 29, 2016 at 12:45 AM, Robert Collins 
wrote:

> On 29 June 2016 at 10:38, Ralf Gommers  wrote:
> >
> >
> > On Wed, Jun 29, 2016 at 12:16 AM, Nick Coghlan 
> wrote:
> >>
> >>
> >> On 26 Jun 2016 23:37, "Pradyun Gedam"  wrote:
> >> >
> >> > Hello List!
> >> >
> >> > I feel it’s fine to hold back the other changes for later but the
> >> > upgrade-strategy change should get shipped out to the world as quickly
> >> > as
> >> > possible. Even how the change is exposed the user can also be
> discussed
> >> > later.
> >> >
> >> > I request the list members to focus on only the change of the default
> >> > upgrade strategy to be non-eager.
> >> >
> >> > Does anyone have any concerns regarding the change of the default
> >> > upgrade
> >> > strategy to be non-eager? If not, let’s get just that shipped out as
> >> > soon as possible.
> >>
> >> Pairing that change with an explicit "pip upgrade-all" command would
> get a
> >> +1 from me, especially if there was a printed warning when the new
> upgrade
> >> strategy skips packages the old one would have updated.
> >
> > Please do not mix upgrade with upgrade-all. The latter is blocked by
> lack of
> > a SAT solver for a long time, and at the current pace that status may not
> > change for another couple of years. Also mixing these up is unnecessary,
> and
> > it was discussed last year on this list already to move ahead with
> upgrade:
> > http://article.gmane.org/gmane.comp.python.distutils.devel/24219
>
> I realise the consensus on the ticket is that its blocked, but I don't
> actually agree.
>
> Yes, you can't do it *right* without a full resolver, but you can do
> an approximation that would be a lot better than nothing (just narrow
> the specifiers given across all requirements). That is actually
> reasonable when you're dealing with a presumed-good-set of versions
> (which install doesn't deal with).
>

Honestly, not sure how to respond. You may be right, I don't have a
technical opinion on an approximate upgrade-all now. Don't really want to
have one either - when N core PyPA devs have been in consensus for a couple
of years, then when dev N+1 comes along at the very last moment to
challenge that consensus plus make it blocking for something we agreed was
unrelated, that just feels frustrating (especially because it's becoming a
pattern).

Mixing separate discussions/implementations together does seem to be a
good way to make the whole thing stall again though, so I'll first try
repeating "this is unnecessary, please do not mix upgrade and upgrade-all".
Here's an alternative for the small minority that values the current
upgrade behavior:
  1. add a --recursive flag to keep that behavior accessible.
  2. add the printed warning that Nick suggests above.
That way we can have better defaults soon (Pradyun's PR seems to be in
decent shape), and add upgrade-all either when someone implements the full
resolver or when there's agreement on your approximate version.

Ralf
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig
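A rough sketch of the "approximation" described above: for each dependency,
narrow (intersect) the version specifiers declared by all installed projects,
then pick the newest available release that satisfies the combined set. This
uses the third-party packaging library; the specifiers and version list are
purely illustrative, and a real upgrade-all would still need to detect
unsatisfiable combinations:

from packaging.specifiers import SpecifierSet
from packaging.version import Version

# Specifiers that several installed projects declare for one dependency.
declared = [SpecifierSet(">=1.2"), SpecifierSet(">=1.4,<3"), SpecifierSet("!=2.1")]

combined = SpecifierSet()
for spec in declared:
    combined &= spec          # "narrow the specifiers": intersection of all constraints

# Candidate releases on the index (illustrative).
available = [Version(v) for v in ["1.3", "1.9", "2.1", "2.4", "3.0"]]
candidates = sorted(combined.filter(available))

print(combined)               # the combined constraint set
print(candidates[-1])         # 2.4 -> what an approximate upgrade-all would pick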


Re: [Distutils] Request for comment: Proposal to change behaviour of pip install

2016-06-28 Thread Ralf Gommers
On Wed, Jun 29, 2016 at 12:38 AM, Ralf Gommers 
wrote:

>
>
>
> On Wed, Jun 29, 2016 at 12:16 AM, Nick Coghlan  wrote:
>
>>
>> On 26 Jun 2016 23:37, "Pradyun Gedam"  wrote:
>> >
>> > Hello List!
>> >
>> > I feel it’s fine to hold back the other changes for later but the
>> > upgrade-strategy change should get shipped out to the world as quickly
>> as
>> > possible. Even how the change is exposed the user can also be discussed
>> later.
>> >
>> > I request the list members to focus on only the change of the default
>> > upgrade strategy to be non-eager.
>> >
>> > Does anyone have any concerns regarding the change of the default
>> upgrade
>> > strategy to be non-eager? If not, let’s get just that shipped out as
>> soon as possible.
>>
>> Pairing that change with an explicit "pip upgrade-all" command would get
>> a +1 from me, especially if there was a printed warning when the new
>> upgrade strategy skips packages the old one would have updated.
>>
> Please do not mix upgrade with upgrade-all. The latter is blocked by lack
> of a SAT solver for a long time, and at the current pace that status may
> not change for another couple of years. Also mixing these up is
> unnecessary, and it was discussed last year on this list already to move
> ahead with upgrade:
> http://article.gmane.org/gmane.comp.python.distutils.devel/24219
>

And, at Robert's request, all discussion was moved to
https://github.com/pypa/pip/issues/3786, so we probably should not continue
this thread.

Ralf
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Request for comment: Proposal to change behaviour of pip install

2016-06-28 Thread Ralf Gommers
On Wed, Jun 29, 2016 at 12:16 AM, Nick Coghlan  wrote:

>
> On 26 Jun 2016 23:37, "Pradyun Gedam"  wrote:
> >
> > Hello List!
> >
> > I feel it’s fine to hold back the other changes for later but the
> > upgrade-strategy change should get shipped out to the world as quickly as
> > possible. Even how the change is exposed the user can also be discussed
> later.
> >
> > I request the list members to focus on only the change of the default
> > upgrade strategy to be non-eager.
> >
> > Does anyone have any concerns regarding the change of the default upgrade
> > strategy to be non-eager? If not, let’s get just that shipped out as
> soon as possible.
>
> Pairing that change with an explicit "pip upgrade-all" command would get a
> +1 from me, especially if there was a printed warning when the new upgrade
> strategy skips packages the old one would have updated.
>
Please do not mix upgrade with upgrade-all. The latter is blocked by lack
of a SAT solver for a long time, and at the current pace that status may
not change for another couple of years. Also mixing these up is
unnecessary, and it was discussed last year on this list already to move
ahead with upgrade:
http://article.gmane.org/gmane.comp.python.distutils.devel/24219

Ralf
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Request for comment: Proposal to change behaviour of pip install

2016-06-26 Thread Ralf Gommers
On Mon, Jun 27, 2016 at 8:36 AM, Pradyun Gedam  wrote:

> Hello List!
>
> I feel it’s fine to hold back the other changes for later but the
> upgrade-strategy change should get shipped out to the world as quickly as
> possible. Even how the change is exposed the user can also be discussed
> later.
>
What do you mean by "ship" if you say the behavior can still be changed
later?


> I request the list members to focus on *only* the change of the default
> upgrade strategy to be non-eager.
>
> Does anyone have any concerns regarding the change of the default upgrade
> strategy to be non-eager? If not, let’s get *just* that shipped out as
> soon as possible.
>
The concerns were always with how to change it, one of:
(1) add "pip upgrade"
(2) change behavior of "pip install --upgrade"
(3) change behavior of "pip install"

Your sentence above suggests you're asking for agreement on (2), but I
think you want agreement on (3) right? At least that was the conclusion of
your PEP-style writeup.

Personally I don't have a preference anymore, as long as a choice is made
so we don't remain stuck where we are now.

Ralf




> Cheers,
> Pradyun Gedam
>
> On Mon, 27 Jun 2016 at 12:02 Pradyun Gedam pradyu...@gmail.com
>  wrote:
>
> On Sun, 26 Jun 2016 at 23:02 Donald Stufft  wrote:
>>
>>>
>>> On Jun 25, 2016, at 6:25 AM, Pradyun Gedam  wrote:
>>>
>>> There is currently a proposal to change the behaviour to pip install to
>>> upgrade a package that is passed even if it is already installed.
>>>
>>> This behaviour change is accompanied with a change in the upgrade
>>> strategy - pip would stop “eagerly” upgrading dependencies and would become
>>> more conservative, upgrading a dependency only when it doesn’t meet lower
>>> constraints of the newer version of a parent. Moreover, the behaviour of
>>>  pip install --target would also be changed so that --upgrade no longer
>>> affects it.
>>>
>>> I think bundling these two changes (and I think I might have been the
>>> one that originally suggested it) is making this discussion harder than it
>>> needs to be as folks are having to fight on multiple different fronts at
>>> once. I think the change to the default behavior of pip install is
>>> dependent on the change to —upgrade, so I suggest we focus on the change to
>>> —upgrade first, changing from a “recursive” to a “conservative” strategy.
>>> Once we get that change figured out and landed then we can worry about what
>>> to do with pip install.
>>>
>>
>> You were. In fact, the majority swayed in favour of changing the
> behaviour of pip install after one of your comments on GitHub.
>>
>> I'll be happier *only* seeing a change in the behaviour of --upgrade and
>> not --target or pip install. It reduces the number of things that change
>> from 3 to 1. Much easier to discuss.
>>
>> I’m not going to repeat the entire post, but I just made a fairly lengthy
>>> comment at
>>> https://github.com/pypa/pip/issues/3786#issuecomment-228611906 but to
>>> try and boil it down to a few points:
>>>
>>
>> Thanks for this.
>>
>>
>>> * ``pip install —upgrade`` is not a good security mechanism, relying on
>>> it is inconsistent at best. If we want to support trying to keep people on
>>> secure versions of software we need a better mechanism than this anyways,
>>> so we shouldn’t let it influence our choice here.
>>>
>>
>> AFAIK, this was the only outstanding concern raised against having a
>> non-eager (conservative) upgrade strategy.
>>
>> * For the general case, it’s not going to matter a lot which way we go,
>>> but not upgrading has the greatest chance of not breaking *already
>>> installed software*.
>>>
>>
>> I strongly agree with this. Another thing worth mentioning is that it's
>> easier to get the lower bounds of your requirements right than the upper
>> bounds.
>>
>>
>>> * For the hard-to-upgrade case, the current behavior is so bad that
>>> people are outright attempting to subvert the way pip typically behaves,
>>> *AND* advocating for other’s to do the same, in an attempt to escape that
>>> behavior. I think that this is not a good place to be in.
>>>
>>
>> Ditto.
>>
>> —
>>>
>>> Donald Stufft
>>>
>>
>> Happy-to-see-Donald's-response-ly,
>> Pradyun Gedam
>>
> ​
>
> ___
> Distutils-SIG maillist  -  Distutils-SIG@python.org
> https://mail.python.org/mailman/listinfo/distutils-sig
>
>
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] enscons, a prototype SCons-powered wheel & sdist builder

2016-06-26 Thread Ralf Gommers
On Sun, Jun 26, 2016 at 10:45 PM, Daniel Holth  wrote:

> I've been working on a prototype Python packaging system powered by SCons
> called enscons. https://bitbucket.org/dholth/enscons . It is designed to
> be an easier way to experiment with packaging compared to hacking on
> distutils or setuptools which are notoriously difficult to extend. Now it
> is at the very earliest state where it might be interesting to others who
> are less familiar with the details of pip and Python package formats.
>

Interesting, thanks Daniel.

This does immediately bring back memories of the now deceased Numscons:
https://github.com/cournape/numscons. David Cournapeau wrote quite a bit
about it on his blog: https://cournape.wordpress.com/?s=numscons
Are you aware of it? It was able to build numpy and scipy, so maybe there's
still something worth stealing from it (it's 6 years old by now though).

Cheers,
Ralf
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Request for comment: Proposal to change behaviour of pip install

2016-06-26 Thread Ralf Gommers
@Pradyun thanks a lot for trying to get some movement in this issue again!


On Sun, Jun 26, 2016 at 8:27 AM, Pradyun Gedam  wrote:

> I think it's useful to see what other tools and package managers do. Doing
> something like them because they do it is not a good reason. Doing it
> because it's better UX is a good reason.
>
> I like what git does, printing a (possibly long) message in a version
> cycle to warn users that the behavior would be changing in a future version
> and how they can/should act on it. It has a clean transition period that
> allows users to make educated decisions.
>
> I, personally, am more in the favor of providing a `--upgrade-strategy
> (eager/non-eager)` flag (currently defaulting to eager) with `--upgrade`
> with the the warning message printing as above followed by a switch to
> non-eager.
>
> These two combined is IMO the best way to do a transition to non-eager
> upgrades by default.
>
> That said, a change to non-eager upgrades could potentially cause a
> security vulnerability because a security package is kept at an older
> version. This asks if we should default to non-eager upgrades. This
> definitely something that should be addressed first, before we even talk
> about the transition. I'm by no means in a position to make an proper
> response and decision on this but someone else on this thread probably is.
>

This was addressed, many times over. On the main issue [1], on the pypa-dev
mailing list [2], on this list [3]. The decision that this is going to
happen is even documented in the pip docs [4]. The PR was explicitly asked
for after all that discussion [5], and has been submitted last year already
with all concrete review comments addressed.

There's also a reason that issue [1] is one of the most "+1"-ed issues I've
come across on GitHub - the current upgrade behavior is absolutely horrible
(see [3] for why).



> On Sun, 26 Jun 2016 at 10:59 Nick Coghlan  wrote:
>
>> On 25 June 2016 at 21:59, Robert Collins 
>> wrote:
>> > Lastly, by defaulting to non-recursive upgrades we're putting the
>> > burden on our users to identify sensitive components and manage them
>> > much more carefully.
>>
>
That's why there's also a plan to add an upgrade-all command (see [1]). And
there's still "upgrade --recursive" as well.


> Huh, true. I was looking at this proposal from the point of view of
>> container build processes where the system packages are visible from
>> the venv, and there the "only install what I tell you, and only
>> upgrade what you absolutely have to" behaviour is useful (especially
>> since you're mainly doing it in the context of generating a new
>> requirements.txt that is used to to the *actual* build).
>>
>
That's not the main reason it's useful. Just wanting no unexpected upgrades,
and no failing upgrades of working packages which contain compiled code
when you do a simple "pip install -U smallpurepythonpackage", is much more
important. See [3] for a more eloquent explanation.

Ralf


[1] https://github.com/pypa/pip/issues/59
[2]
https://groups.google.com/forum/#!searchin/pypa-dev/upgrade/pypa-dev/vVLmo1PevTg/oBkHCPBLb9YJ
[3] http://article.gmane.org/gmane.comp.python.distutils.devel/24218
[4]
https://pip.pypa.io/en/stable/user_guide/#only-if-needed-recursive-upgrade
[5] http://thread.gmane.org/gmane.comp.python.scientific.user/36377
[6] https://github.com/pypa/pip/pull/3194



> However, I now think Robert's right that that's the wrong way to look
>> at it - I am *not* a suitable audience for the defaults, since we can
>> adapt our automation pipeline to whatever combination of settings pip
>> tells us we need to get the behaviour we want (as long as we're given
>> suitable deprecation periods to adjust to any behavioural changes, and
>> we can largely control that ourselves by controlling when we upgrade
>> pip, including patching it if absolutely necessary).
>>
>> By contrast, for folks that *aren't* using something like VersionEye
>> or requires.io to stay on top of security updates, "always run the
>> latest version of everything, and try to keep up with that upgrade
>> treadmill" really is the safest way to go, and that's what the current
>> eager upgrade behaviour provides.
>>
>
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig
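To make the eager versus non-eager distinction concrete, here is a toy
decision rule (nothing like pip's actual code; the requirement and version
strings are made up, and it uses the third-party packaging library). Eager
always brings a dependency up to date; non-eager (conservative) only touches
it when the installed version no longer satisfies the requirement:

from packaging.requirements import Requirement
from packaging.version import Version

def should_upgrade(req_str, installed_version, strategy="non-eager"):
    req = Requirement(req_str)
    if installed_version is None:
        return True           # not installed at all, so install it
    if strategy == "eager":
        return True           # always bring the dependency up to date
    # non-eager: leave it alone while it still satisfies the requirement
    return not req.specifier.contains(Version(installed_version))

print(should_upgrade("requests>=2.5", "2.9.1"))             # False: still satisfied
print(should_upgrade("requests>=2.10", "2.9.1"))            # True: too old now
print(should_upgrade("requests>=2.5", "2.9.1", "eager"))    # True: eager always upgrades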


Re: [Distutils] on integrated docs in Warehouse and PyPI

2016-06-05 Thread Ralf Gommers
On Sun, Jun 5, 2016 at 6:33 AM, Nick Coghlan  wrote:

>
> On 4 Jun 2016 6:54 am, "Donald Stufft"  wrote:
> >
> >
> >> On Jun 4, 2016, at 9:33 AM, Nathaniel Smith  wrote:
> >>
> >> I think everyone would agree that having some nice doc hosting service
> available as an option would be, well, nice. Everyone likes options. But
> the current doc hosting is unpopular and feature poor, falls outside of the
> PyPI core mission, and is redundant with other more popular services, at a
> time when the PyPI developers are struggling to maintain core services.
> >
> >
> >
> > To add to what Nathaniel said here, there are a few problems with the
> current situation:
> >
> > Documentation hosting largely worked “OK” when it was just writing files
> out to disk, however we’ve since removed all use of the local disk (so that
> we can scale past 1 machine) and we’re now storing things in S3. This makes
> documentation hosting particularly expensive in terms of API calls because
> we need to do expensive list key operations to discover which files exist
> (versus package files where we have a database full of files).
>
> Amazon do offer higher level alternatives like https://aws.amazon.com/efs/
> for use cases like PyPI's docs hosting that assume they have access to a
> normal filesystem.
>
> Given the credential management benefits of integrated docs,
>
From the RTD blog post linked by Nathaniel:
""
Our proposed grant, for $48,000, is to build a separate instance that
integrates with the Python Package Index’s upcoming website, Warehouse.
This integration will provide automatic
API reference documentation upon package release, with authentication tied
to PyPI and simple configuration inside the distribution.
""

> it does seem worthwhile to me for the PSF to invest in a lowest common
> denominator static file hosting capability,
>
Seems like a very poor way to spend money and developer time imho. The
original post by Jason brings up a few shortcomings of RTD, but I'm amazed
that that leads multiple people here to conclude that starting a new doc
hosting effort is the right answer to that. The much better alternative is:
read the RTD contributing guide [1] and their plans for PyPI integration
[2], then start helping out with adding those features to RTD.

There is very little chance that a new effort as discussed here can come
close to RTD, which is a quite active project with by now over 200
contributors. Starting a new project should be done for the right reasons:
existing projects don't have and don't want to implement features you need,
you have a better technical design, you want to reimplement to learn from
it, etc. There are no such reasons here as far as I can tell.

If there's money left for packaging related work, I'm sure we can think of
better ways to spend it. First thoughts:
- Accelerate PyPI integration plans for RTD
- Accelerate work on Warehouse
- Pay someone to review and merge distutils patches in the Python bug
tracker


Final thought: there's nothing wrong with distributed infrastructure for
projects. A typical project today may have code hosting on GitHub or
Bitbucket, use multiple CI providers in parallel, use a separate code
coverage service, upload releases to PyPI, conda-forge and GitHub Releases,
and host docs on RTD. Integrating doc hosting with PyPI doesn't change that
picture really.

Ralf

[1] https://github.com/rtfd/readthedocs.org/blob/master/docs/contribute.rst
[2] https://github.com/rtfd/readthedocs.org/issues/1957
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] PEP: Build system abstraction for pip/conda etc

2016-02-11 Thread Ralf Gommers
On Thu, Feb 11, 2016 at 11:16 AM, Nick Coghlan  wrote:

> On 11 February 2016 at 17:50, Ralf Gommers  wrote:
> > On Wed, Feb 10, 2016 at 2:43 PM, Paul Moore  wrote:
> >>
> >> On 10 February 2016 at 13:23, Nick Coghlan  wrote:
> >> > On 10 February 2016 at 20:53, Paul Moore  wrote:
> >> >> We don't have to solve the whole "sdist 2.0" issue right now. Simply
> >> >> saying that in order to publish pypa.json-based source trees you need
> >> >> to zip up the source directory, name the file "project-version.zip"
> >> >> and upload to PyPI, would be sufficient as a short-term answer
> >> >> (assuming that this *would* be a viable "source file" that pip could
> >> >> use - and I must be clear that I *haven't checked this*!!!)
> >
> >
> > This is exactly what pip itself does right now for "pip install .", so
> > clearly it is viable.
> >
> >> until
> >> >> something like Nathaniel's source distribution proposal, or a
> >> >> full-blown sdist-2.0 spec, is available. We'd need to support
> whatever
> >> >> stopgap proposal we recommend for backward compatibility in those new
> >> >> proposals, but that's a necessary cost of not wanting to delay the
> >> >> current PEP on those other ones.
> >> >
> >> > One of the reasons I went ahead and created the specifications page at
> >> > https://packaging.python.org/en/latest/specifications/ was to let us
> >> > tweak interoperability requirements as needed, without wasting
> >> > people's time with excessive PEP wrangling by requiring a separate PEP
> >> > for each interface affected by a proposal.
> >> >
> >> > In this case, the build system abstraction PEP should propose some
> >> > additional text for
> >> >
> >> >
> https://packaging.python.org/en/latest/specifications/#source-distribution-format
> >> > defining how to publish source archives containing a pypa.json file
> >> > and the setup.py shim.
> >
> >
> > The setup.py shim should be optional right? If a package author decides
> to
> > not care about older pip versions, then the shim isn't needed.
>
> Given how long it takes for new versions of pip to filter out through
> the ecosystem, the shim's going to be needed for quite a while. Since
> we have the power to make things "just work" even for folks on older
> pip versions that assume use of the setuptools/distutils CLI, it makes
> sense to nudge sdist creation tools in that direction.
>
> The real pay-off here is getting setup.py out of most source repos and
> replacing it with a declarative format - keeping it out of sdists is a
> non-goal from my perspective.
>

I don't feel too strongly about this, but:
- there's also a usability argument for no setup.py in sdists (people will
still unzip an sdist and run python setup.py install on it)
- it makes implementing something like 'flit sdist' more complicated;
without the shim it can be as simple as just zipping the non-hidden files
in the source tree.
- if flit decides not to implement sdist (good chance of that), then people
*will* still need to add the shim to their own source repos to comply with
this 'spec'.

Ralf
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig
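The "zip up the source directory, name the file project-version.zip" route
quoted above really is only a few lines; a sketch, with the project name,
version and directory layout purely illustrative:

import os
import zipfile

def make_source_zip(source_dir, name, version, out_dir="dist"):
    # Archive the non-hidden files under a single "<name>-<version>/" top-level
    # directory, producing e.g. dist/example-project-0.1.0.zip.
    os.makedirs(out_dir, exist_ok=True)
    out_dir_abs = os.path.abspath(out_dir)
    base = "{}-{}".format(name, version)
    out_path = os.path.join(out_dir, base + ".zip")
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, dirs, files in os.walk(source_dir):
            # skip hidden directories (.git, .tox, ...) and the output directory itself
            dirs[:] = [d for d in dirs
                       if not d.startswith(".")
                       and os.path.abspath(os.path.join(root, d)) != out_dir_abs]
            for fn in files:
                if fn.startswith("."):
                    continue
                full = os.path.join(root, fn)
                rel = os.path.relpath(full, source_dir)
                zf.write(full, os.path.join(base, rel))
    return out_path

print(make_source_zip(".", "example-project", "0.1.0"))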


Re: [Distutils] PEP: Build system abstraction for pip/conda etc

2016-02-10 Thread Ralf Gommers
On Wed, Feb 10, 2016 at 2:43 PM, Paul Moore  wrote:

> On 10 February 2016 at 13:23, Nick Coghlan  wrote:
> > On 10 February 2016 at 20:53, Paul Moore  wrote:
> >> We don't have to solve the whole "sdist 2.0" issue right now. Simply
> >> saying that in order to publish pypa.json-based source trees you need
> >> to zip up the source directory, name the file "project-version.zip"
> >> and upload to PyPI, would be sufficient as a short-term answer
> >> (assuming that this *would* be a viable "source file" that pip could
> >> use - and I must be clear that I *haven't checked this*!!!)
>

This is exactly what pip itself does right now for "pip install .", so
clearly it is viable.

until
> >> something like Nathaniel's source distribution proposal, or a
> >> full-blown sdist-2.0 spec, is available. We'd need to support whatever
> >> stopgap proposal we recommend for backward compatibility in those new
> >> proposals, but that's a necessary cost of not wanting to delay the
> >> current PEP on those other ones.
> >
> > One of the reasons I went ahead and created the specifications page at
> > https://packaging.python.org/en/latest/specifications/ was to let us
> > tweak interoperability requirements as needed, without wasting
> > people's time with excessive PEP wrangling by requiring a separate PEP
> > for each interface affected by a proposal.
> >
> > In this case, the build system abstraction PEP should propose some
> > additional text for
> >
> https://packaging.python.org/en/latest/specifications/#source-distribution-format
> > defining how to publish source archives containing a pypa.json file
> > and the setup.py shim.
>

The setup.py shim should be optional right? If a package author decides to
not care about older pip versions, then the shim isn't needed.

Ralf
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] PEP: Build system abstraction for pip/conda etc

2016-02-10 Thread Ralf Gommers
On Wed, Feb 10, 2016 at 3:30 PM, David Cournapeau 
wrote:

>
>
>
> On Wed, Feb 10, 2016 at 1:52 PM, Paul Moore  wrote:
>

>> We should probably also check with the flit people that the proposed
>> approach works for them. (Are there any other alternative build
>> systems apart from flit that exist at present?)
>>
>
> I am not working on it ATM, but bento was fairly complete and could
> interoperate w/ pip (a few years ago at least):
> https://cournape.github.io/Bento/
>

I plan to test with Bento (I'm still using it almost daily to work on
Scipy) when an implementation is proposed for pip. The interface in the PEP
is straightforward though, I don't see any fundamental reason why it
wouldn't work for Bento if it works for flit.

Ralf
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] build system abstraction PEP, take #2

2015-12-09 Thread Ralf Gommers
On Wed, Dec 9, 2015 at 2:55 AM, Robert Collins 
wrote:

> Updated - tl;dr:
>
> The thing I'm least happy about is that implementing install support
> will require recursively calling back into pip, that or reimplementing
> the installation of wheels logic from within pip - because
> sufficiently old pip's won't call wheel at all.


You're specifying a new interface here, and updating pip itself is quite
easy. So why would you do things you're not happy about to support
"sufficiently old pip's"?


> And even modern pips
> can be told *not to call wheel*.


Isn't that something you can ignore? If the plan for pip anyway is to
always go sdist-wheel-install, why support this flag for a new build
interface?

Ralf
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead

2015-11-08 Thread Ralf Gommers
On Sun, Nov 8, 2015 at 2:45 PM, Paul Moore  wrote:

> On 8 November 2015 at 13:34, Ralf Gommers  wrote:
> > On Sun, Nov 8, 2015 at 2:23 PM, Paul Moore  wrote:
> >>
> >> On 8 November 2015 at 11:13, Ralf Gommers 
> wrote:
> >>
> >> > "wheels and sdists" != "release artifacts"
> >>
> >> Please explain. All you've done here is state that you don't agree
> >> with me, but given no reasons.
> >
> > Come on, I elaborated in the sentence right below it. Which you cut out
> in
> > your reply. Here it is again:
> >
> > "I fully agree of course that we want things on PyPi (which are release
> > artifacts) to have unique version numbers etc. But wheels and sdists are
> > produced all the time, and only sometimes are they release artifacts."
>
> Sorry, my mistake. I didn't see how this part related (and still
> don't). What are wheels and sdists if they are not not "release
> artifacts"? Are we just quibbling about the what term "release
> artifact" means?


I'm not sure about that, I don't think it's just terminology (see below).
They obviously can be release artifacts, but they don't have to be - that's
what I meant by "!=".


> If so, I'll revert to using "wheels and sdists" as I
> did in my response. I thought it was obvious that wheels and sdists
> *are* the release artifacts in the process of producing Python
> packages. It doesn't matter where they are released *to*, it can be to
>
PyPI, or a local server, or just to a wheelhouse or other directory on
> your PC that you keep for personal use only. Once they are created by
> you as anything other than a temporary file in a multi-step install
> process they are "release artifacts" as I understand/mean the term.
>

To me there's a fairly fundamental difference between things that are
actually released (by the release manager of a project usually, or maybe
someone building a local wheelhouse) and things that are produced under the
hood by pip. For someone typing `pip install .`, sdist/wheel is an
implementation detail that is invisible to him/her and he/she shouldn't
have to care about imho.


> But terminology's not a big deal, as long as we understand each other.
>

Agreed.

Ralf
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead

2015-11-08 Thread Ralf Gommers
On Sun, Nov 8, 2015 at 2:23 PM, Paul Moore  wrote:

> On 8 November 2015 at 11:13, Ralf Gommers  wrote:
> > You only have two categories? I'm missing at least one important
> category:
> > users who install things from a vcs or manually downloaded code
> (pre-release
> > that's not on pypi for example). This category is probably a lot larger
> > than that of developers.
>
> Hmm, I very occasionally will install the dev version of pip to get a
> fix I need. But I don't consider myself in that role as someone who
> pip should cater for - rather I expect to manage doing so myself,
> whether that's by editing the pip code to add a local version ID, or
> just by dealing with the odd edge cases.
>
> I find it hard to imagine that there are a significant number of users
> who install from development sources but who aren't developers


There are way more of those users than actual developers, I'm quite sure of
that. See below for numbers.


> (at least to the extent that testers of pre-release code are also
> developers).
> ...
>

That's not a very helpful way to look at it from my point of view. Those
users may just want to check that their code still works, or they need a
bugfix that's not in the released version, or 


> Personally, I think the issue here is that there are a lot of people
> in the scientific community who people outside that community would
> class as "developers",


Then I guess those "outside" would be web/app developers? For anyone
developing a library or some other infrastructure to be used somewhere
other than via a graphical or command line UI, I think the distinction I
make (I'll elaborate below) will be clear.


> but who aren't considered that way from within
> the community. I tend to try to assign to these people the expertise
> and responsibilities that I would expect of a developer, not of an end
> user. If in fact they are a distinct class of user, then I think the
> scientific community need to explain more clearly what expertise and
> responsibilities pip can expect of these users. And why treating them
> as developers isn't reasonable.
>

To give an example for Numpy:
  - there are 5-10 active developers with commit rights
  - there are 50-100 contributors who submit PRs
  - there are O(1000) people who read the mailing list
  - there are O(1 million) downloads/installs per year
Downloads/users are hard to count correctly, but there are at least 1000x
more users than developers (this will be the case for many popular
packages). Those users are often responsible for installing the package
themselves. They aren't trained programmers, only know Python to the extent
that they can get their work done, and they don't know much (if anything)
about packaging, wheels, etc. All they know may be "I have to execute
`python setup.py install`". Those are the users I'm concerned about.
There's no reasonable way you can classify/treat them as developers I
think.

By the way, everything we discuss here has absolutely no impact on what you
defined as "user" (the released-version-only PyPi user), while it's
critical for what I defined as "the second kind of user".

Ralf
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead

2015-11-08 Thread Ralf Gommers
On Sun, Nov 8, 2015 at 2:23 PM, Paul Moore  wrote:

> On 8 November 2015 at 11:13, Ralf Gommers  wrote:
>
> > "wheels and sdists" != "release artifacts"
>
> Please explain. All you've done here is state that you don't agree
> with me, but given no reasons.
>

Come on, I elaborated in the sentence right below it. Which you cut out in
your reply. Here it is again:

"I fully agree of course that we want things on PyPi (which are release
artifacts) to have unique version numbers etc. But wheels and sdists are
produced all the time, and only sometimes are they release artifacts."

Ralf
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead

2015-11-08 Thread Ralf Gommers
On Sun, Nov 8, 2015 at 12:38 AM, Donald Stufft  wrote:

> On November 7, 2015 at 6:16:34 PM, Donald Stufft (don...@stufft.io) wrote:
> > > I want to reduce the “paths” that an installation can go down.
>
> I decided I’d make a little visual aid to help explain what I mean here
> (omitting development/editable installs because they are weird and will
> always be weird)!
>
> Here’s essentially the way that installs can happen right now
> https://caremad.io/s/Ol1TuV6R9K/. Each of these types of installations
> act subtly different in ways that are not very obvious to most people.
>
> Here’s what I want it to be: https://caremad.io/s/uJYeVzBlQG/. In this
> way no matter what a user is installing from (Wheel, Source Dist,
> Directory) the outcome will be the same and there won’t be subtly different
> behaviors based on what is being provided.
>

Thanks, clear figures. Your final situation is definitely way better than
what it's now. Here is what I proposed in a picture:
https://github.com/pypa/pip/pull/3219#issuecomment-154810578

Comparison:
   - same number of arrows in flowchart
   - total path length in my proposal is 1 shorter
   - my proposal requires one less build system interface to be specified
(sdist)

Ralf
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead

2015-11-08 Thread Ralf Gommers
On Sun, Nov 8, 2015 at 1:03 AM, Paul Moore  wrote:

> On 7 November 2015 at 16:33, Ralf Gommers  wrote:
> > Your only concrete argument for it so far is aimed at developers
>
> I feel that there's some confusion over the classes of people involved
> here ("developers", "users", etc).
>

Good point. I meant your second category below.


> For me, the core user base for pip is people who use "pip install" to
> install *released* distributions of packages. For those people, name
> and version uniquely identifies a build, they often won't have a build
> environment installed, etc. These people *do* however sometimes
> download wheels manually and install them locally (the main example of
> this is Christoph Gohlke's builds, which are not published as a custom
> PyPI-style index, and so have to be downloaded and installed from a
> local directory).
>
> The other important category of user is people developing those
> released distributions. They often want to do "pip install -e", they
> install their own package from a working directory where the code may
> change without a corresponding version change, they expect to build
> from source and want that build cycle to be fast. Historically, they
> have *not* used pip, they have used setup.py directly (or setup.py
> develop, or maybe custom build tools like bento). So pip is not
> optimised for their use cases.
>

You only have two categories? I'm missing at least one important category:
users who install things from a vcs or manually downloaded code
(pre-release that's not on pypi for example). This category is probably a
lot larger than that of developers.


> Invocations like "pip install " cater for the first
> category. Invocations like "pip install " cater for
> the second (although currently, mostly by treating the local directory
> as an unpacked sdist, which as I say is not optimised for this use
> case). Invocations like "pip install " are in the grey
> area - but I'd argue that it's more often used by the first category
> of users, I can't think of a development workflow that would need it.
>
> Regarding the point you made this comment about:
>
> >> 4. Builds (pip wheel) should always unpack to a temporary location and
> >> build there. When building from a directory, in effect build a sdist
> >> and unpack it to the temporary location.
>
> I see building a wheel as a release activity.


It's not just that. My third category of users above is building wheels all
the time. Often without even realizing it, if they use pip.


> As such, it should
> produce a reproducible result, and so should not be affected by
> arbitrary state in the development directory. I don't know whether you
> consider "ensuring the wheels aren't wrong" as aimed at developers or
> at end users, it seems to me that both parties benefit.
>

Ensuring wheels aren't wrong is something that developers need to do. End
users may benefit, but they benefit from many things developers do.

Personally, I'm deeply uncomfortable about *ever* encountering, or
> producing (as a developer) sdists or wheels with the same version
> number but functional differences.


As soon as you produce a wheel with any compiled code inside, it matters
with which compiler (and build flags, etc.) you build it. There are
typically subtle, and sometimes very obvious, functional differences. The
same goes for sdists: their contents depend, for example, on the Cython
version you have installed when you generate them.


> I am OK with installing a
> development version (i.e., direct from a development directory into a
> site-packages, either as -e or as a normal install) where the version
> number doesn't change even though the code does, but for me the act of
> producing release artifacts (wheels and sdists) should freeze the
> version number. I've been bitten too often by confusion caused by
> trying to install something with the same version but different code,
> to want to see that happen.
>

"wheels and sdists" != "release artifacts"

I fully agree of course that we want things on PyPi (which are release
artifacts) to have unique version numbers etc. But wheels and sdists are
produced all the time, and only sometimes are they release artifacts.

Ralf
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig
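One existing mechanism that helps keep "same version number, functionally
different build" artifacts apart is the PEP 440 local version identifier, the
"+da1c6b4"-style suffix that also appears on the PyWavelets dev sdist
elsewhere in this thread. A small illustration using the third-party
packaging library (version strings illustrative):

from packaging.version import Version

release   = Version("0.4.0")
dev_build = Version("0.4.0.dev0+da1c6b4")   # version string borrowed from this thread

print(dev_build.public)         # 0.4.0.dev0
print(dev_build.local)          # da1c6b4
print(dev_build.is_prerelease)  # True
print(dev_build < release)      # True: dev/local builds sort before the final release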


Re: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead

2015-11-07 Thread Ralf Gommers
On Sat, Nov 7, 2015 at 3:57 PM, Paul Moore  wrote:

> On 7 November 2015 at 13:55, Ralf Gommers  wrote:
> > On Sat, Nov 7, 2015 at 2:02 PM, Paul Moore  wrote:
> >>
> >> On 7 November 2015 at 01:26, Chris Barker - NOAA Federal
> >>  wrote:
> >> > So what IS supposed to be used in the development workflow? The new
> >> > mythical build system?
> >
> > I'd like to point out again that this is not just about development
> > workflow. This is just as much about simply *installing* from a local git
> > repo, or downloaded sources/sdist.
>
> Possibly I'm misunderstanding here.
>

I had an example above of installing into different venvs. Full rebuilds
for that each time are very expensive. And this whole thread is basically
about `pip install .`, not about inplace builds for development.

As another example of why even for a single build/install it's helpful to
just let the build system do what it wants to do instead of first copying
stuff over, here are some timing results. This is for PyWavelets, which
isn't that complicated a build (mostly pure Python, 1 Cython extension):

1. python setup.py install:   40 s
2. pip install . --upgrade --no-deps:   58 s
# OK, (2) is slow due to using shutil, to be fixed to work like (3):
3. python setup.py sdist:  8 s
pip install dist/PyWavelets-0.4.0.dev0+da1c6b4.tar.gz:  41 s
# so total time for (3) will be 41 + 8 = 49 s
# and a better alternative to (1)
4. python setup.py bdist_wheel:  34 s
pip install dist/PyWavelets-xxx.whl:   6 s
# so total time for (4) will be 34 + 6 = 40 s

Not super-scientific, but the conclusion is clear: what pip does is a lot
slower than what for me is the expected behavior. And note that without the
Cython compile, the difference in timing will get even larger.

That expected behavior is:
  a) Just ask the build system to spit out a wheel (without any magic)
  b) Install that wheel (always)



> > The "pip install . should reinstall" discussion in
> > https://github.com/pypa/pip/issues/536 is also pretty much the same
> > argument.
>
> Well, that one is about pip reinstalling if you install from a local
> directory, and not skipping the install if the local directory version
> is the same as the installed version. As I noted there, I'm OK with
> this, it seems reasonable to me to say that if someone has a directory
> of files, they may have updated something but not (yet) bumped the
> version.
>
> The debate over there has gone on to whether we force reinstall for a
> local *file* (wheel or sdist) which I'm less comfortable with. But
> that's is being covered over there.
>
> The discussion *here* is, I thought, about skipping build steps when
> possible because you can reuse build artifacts. That's not "should pip
> do the install?", but rather "*how* should pip do the install?"
> Specifically, to reuse build artifacts it's necessaryto *not* do what
> pip currently does for all (non-editable) installs, which is to
> isolate the build in a temporary directory and do a clean build.
> That's a sensible debate to have, but it's very different from the
> issue you referenced.
>
> IMO, the discussions currently are complex enough that isolating
> independent concerns is crucial if anyone is to keep track. (It
> certainly is for me!)
>

Agreed that the discussions are complex now. But imho they're mostly
complex because the basic principles of what pip should be doing are not
completely clear, at least to me. If it's "build a wheel, install the
wheel" then a lot of things become simpler.


>> Fair question. Unfortunately, the answer is honestly that there's no
> >> simple answer - pip is not a bad option, but it's not its core use
> >> case so there are some rough edges.
> >
> > My impression is that right now pip's core use-case is not "installing",
> but
> > "installing from PyPi (and similar repos". There are a lot of rough edges
> > around installing from anything on your own hard drive.
>
> Not true. The rough edges are around installing things where (a) you
> don't want to rely on the invariant that name and version uniquely
> identify an installation (that's issue 536) and (b) where you don't
> want to do a clean build, because building is complex, slow, or
> otherwise something you want to optimise (that's this discussion).
>
> I routinely download wheels and use them to install. I also sometimes
> download sdists and install from them, although 99.99% of the time, I
> download them, build them into wheels and install them from wheels. It
> *always* works exactly as I'd expect. But if I'm doing devel

Re: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead

2015-11-07 Thread Ralf Gommers
On Sat, Nov 7, 2015 at 2:02 PM, Paul Moore  wrote:

> On 7 November 2015 at 01:26, Chris Barker - NOAA Federal
>  wrote:
> > So what IS supposed to be used in the development workflow? The new
> > mythical build system?
>

I'd like to point out again that this is not just about development
workflow. This is just as much about simply *installing* from a local git
repo, or downloaded sources/sdist.

The "pip install . should reinstall" discussion in
https://github.com/pypa/pip/issues/536 is also pretty much the same
argument.

Fair question. Unfortunately, the answer is honestly that there's no
> simple answer - pip is not a bad option, but it's not its core use
> case so there are some rough edges.


My impression is that right now pip's core use-case is not "installing",
but "installing from PyPi (and similar repos". There are a lot of rough
edges around installing from anything on your own hard drive.


> I'd argue that the best way to use
> pip is with pip install -e, but others in this thread have said that
> doesn't suit their workflow, which is fine. I don't know of any other
> really good options, though.
>
> I think it would be good to see if we can ensure pip is useful for
> this use case as well, all I was pointing out was that people
> shouldn't assume that it "should" work right now, and that changing it
> to work might involve some trade-offs that we don't want to make, if
> it compromises the core functionality of installing packages.
>

It might be helpful to describe the actual trade-offs then, because as far
as I can tell no one has actually described how this would either hurt
another use-case or make pip internals much more complicated.

Ralf
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Platform tags for OS X binary wheels

2015-11-06 Thread Ralf Gommers
On Sat, Nov 7, 2015 at 1:04 AM, Chris Barker - NOAA Federal <
chris.bar...@noaa.gov> wrote:

> On Nov 6, 2015, at 3:57 PM, Robert McGibbon  wrote:
>
> I'm using the Python from the Miniconda installer with py35 released last
> week.
>
>
> Then you should not expect it to be able to find compatible binary wheels
> on PyPi.
>
> Pretty much the entire point of conda is to support Numpy and friends.
> It's actually really good that it DIDN'T go and install a binary wheel.
>
> You want:
>
> conda install numpy
>
> Trust me on that :-)
>
> There are some cases where pip installing a source package into a conda
> Python is fine -- but mostly only pure-Python packages.
>

Actually, the situation with pip on OS X is quite good. This should work
with at least python.org Python, MacPython and Homebrew (using wheels):

 pip install numpy scipy matplotlib pandas scikit-image scikit-learn

Ralf
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead

2015-11-05 Thread Ralf Gommers
On Fri, Nov 6, 2015 at 12:37 AM, Donald Stufft  wrote:

> If ``pip install --build … --no-clean …`` worked to do incremental builds,
> would that satisfy this use case? (without the --upgrade and --no-deps;
> --no-deps is only needed because of --upgrade, and --upgrade is needed because
> of another ticket that I think will get fixed at some point).
>

Then there's at least a way to do it, but it's all very unsatisfying. Users
are again going to have a hard time finding this. And I'd hate to have to
type that every time.

Robert and Nathaniel have argued the main points already so I'm not going
to try to go in more detail, but I think the main point is:

  - we want to replace `python setup.py install` with `pip install .` in
order to get proper uninstalls and dependency handling.
  - except for those two things, `python setup.py install` does the
expected thing, while pip is trying to be way too clever, which is unhelpful.

Ralf


> On November 5, 2015 at 6:09:46 PM, Ralf Gommers (ralf.gomm...@gmail.com)
> wrote:
> > On Thu, Nov 5, 2015 at 11:44 PM, Ralf Gommers
> > wrote:
> >
> > >
> > >
> > > On Thu, Nov 5, 2015 at 11:29 PM, Donald Stufft wrote:
> > >
> > >> I’m not at my computer, but does ``pip install --no-clean --build <path to
> > >> build dir>`` make this work?
> > >>
> > >
> > > No, that option seems to not work at all. I tried with both a relative
> and
> > > an absolute path to --build. In the specified dir there are subdirs
> created
> > > (src.linux-i686-2.7/), but they're empty. The actual build still
> > > happens in a tempdir.
> > >
> >
> > Commented on the source of the problem with both `--build` and
> `--no-clean`
> > in https://github.com/pypa/pip/issues/804
> >
> > Ralf
> > ___
> > Distutils-SIG maillist - Distutils-SIG@python.org
> > https://mail.python.org/mailman/listinfo/distutils-sig
> >
>
> -
> Donald Stufft
> PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372
> DCFA
>
>
>
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead

2015-11-05 Thread Ralf Gommers
On Thu, Nov 5, 2015 at 11:44 PM, Ralf Gommers 
wrote:

>
>
> On Thu, Nov 5, 2015 at 11:29 PM, Donald Stufft  wrote:
>
>> I’m not at my computer, but does ``pip install --no-clean --build <path to build dir>`` make this work?
>>
>
> No, that option seems to not work at all. I tried with both a relative and
> an absolute path to --build. In the specified dir there are subdirs created
> (src.linux-i686-2.7/), but they're empty. The actual build still
> happens in a tempdir.
>

Commented on the source of the problem with both `--build` and `--no-clean`
in https://github.com/pypa/pip/issues/804

Ralf
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead

2015-11-05 Thread Ralf Gommers
On Thu, Nov 5, 2015 at 11:29 PM, Donald Stufft  wrote:

> I’m not at my computer, but does ``pip install --no-clean --build <path to build dir>`` make this work?
>

No, that option seems to not work at all. I tried with both a relative and
an absolute path to --build. In the specified dir there are subdirs created
(src.linux-i686-2.7/), but they're empty. The actual build still
happens in a tempdir.

Ralf

P.S. adding flags for the various issues (/ things under discussion) this
is what I actually had to try:

pip install . --no-clean --build build/ -v --upgrade --no-deps

:(



>
> On November 5, 2015 at 5:25:16 PM, Ralf Gommers (ralf.gomm...@gmail.com)
> wrote:
> > On Tue, Nov 3, 2015 at 6:10 PM, Chris Barker - NOAA Federal <
> > chris.bar...@noaa.gov> wrote:
> >
> > > >> I'm not talking about in place installs, I'm talking about e.g.
> > > building a
> > > >> wheel and then tweaking one file and rebuilding -- traditionally
> build
> > > >> systems go to some effort to keep track of intermediate artifacts
> and
> > > reuse
> > > >> them across builds when possible, but if you always copy the source
> tree
> > > >> into a temporary directory before building then there's not much the
> > > build
> > > >> system can do.
> > >
> > > This strikes me as an optimization -- is it an important one?
> > >
> >
> > Yes, I think it is. At least if we want to move people towards `pip
> install
> > .` instead of `python setup.py`.
> >
> >
> > > If I'm doing a lot of tweaking and re-running, I'm usually in develop
> mode.
> > >
> >
> > Everyone has a slightly different workflow. What if you install into a
> > bunch of different venvs between tweaks? The non-caching for a package
> like
> > scipy pushes rebuild time from <30 sec to ~10 min.
> >
> >
> > > I can see that when you build a wheel, you may build it, test it,
> > > discover an wheel-specific error, and then need to repeat the cycle --
> > > but is that a major use-case?
> > >
> > > That being said, I have been pretty frustrated debugging conda-build
> > > scripts -- there is a lot of overhead setting up the build environment
> > > each time you do a build...
> > >
> > > But with wheel building there is much less overhead, and far fewer
> > > complications requiring the edit-build cycle.
> > >
> > > And couldn't make-style this-has-already-been-done checking happen
> > > with a copy anyway?
> > >
> >
> > The whole point of the copy is that it's a clean environment. Pip
> currently
> > creates tempdirs and removes them when it's done building. So no.
> >
> > Ralf
> > ___
> > Distutils-SIG maillist - Distutils-SIG@python.org
> > https://mail.python.org/mailman/listinfo/distutils-sig
> >
>
> -
> Donald Stufft
> PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372
> DCFA
>
>
>
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead

2015-11-05 Thread Ralf Gommers
On Tue, Nov 3, 2015 at 6:10 PM, Chris Barker - NOAA Federal <
chris.bar...@noaa.gov> wrote:

> >> I'm not talking about in place installs, I'm talking about e.g.
> building a
> >> wheel and then tweaking one file and rebuilding -- traditionally build
> >> systems go to some effort to keep track of intermediate artifacts and
> reuse
> >> them across builds when possible, but if you always copy the source tree
> >> into a temporary directory before building then there's not much the
> build
> >> system can do.
>
> This strikes me as an optimization -- is it an important one?
>

Yes, I think it is. At least if we want to move people towards `pip install
.` instead of `python setup.py`.


> If I'm doing a lot of tweaking and re-running, I'm usually in develop mode.
>

Everyone has a slightly different workflow. What if you install into a
bunch of different venvs between tweaks? The non-caching for a package like
scipy pushes rebuild time from <30 sec to ~10 min.


> I can see that when you build a wheel, you may build it, test it,
> discover an wheel-specific error, and then need to repeat the cycle --
> but is that a major use-case?
>
> That being said, I have been pretty frustrated debugging conda-build
> scripts -- there is a lot of overhead setting up the build environment
> each time you do a build...
>
> But with wheel building there is much less overhead, and far fewer
> complications requiring the edit-build cycle.
>
> And couldn't make-style this-has-already-been-done checking happen
> with a copy anyway?
>

The whole point of the copy is that it's a clean environment. Pip currently
creates tempdirs and removes them when it's done building. So no.
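
Conceptually pip does something like the following (not its actual code, just
a sketch to show why intermediate build artifacts don't survive between runs):

import shutil
import subprocess
import sys
import tempfile

def build_in_clean_copy(source_dir):
    tmp = tempfile.mkdtemp(prefix='pip-build-')
    try:
        # copy the source tree into a fresh temporary directory
        work_dir = shutil.copytree(source_dir, tmp + '/src')
        # build there, so nothing from previous builds is reused
        subprocess.check_call([sys.executable, 'setup.py', 'bdist_wheel'],
                              cwd=work_dir)
    finally:
        shutil.rmtree(tmp)   # everything built here is thrown away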

Ralf
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] wacky idea about reifying extras

2015-10-30 Thread Ralf Gommers
On Thu, Oct 29, 2015 at 2:16 AM, Nathaniel Smith  wrote:

> On Wed, Oct 28, 2015 at 4:30 PM, Ralf Gommers 
> wrote:
> >
> > On Tue, Oct 27, 2015 at 5:45 PM, Brett Cannon  wrote:
> >>
> >>
> >> Nathaniel's comment about how this might actually give pip a leg up on
> >> conda also sounds nice to me as I have enough worry about having a
> fissure
> >> in 1D along the Python 2/3 line, and I'm constantly worried that the
> >> scientific community is going to riot and make it a 2D fissure along
> Python
> >> 2/3, pip/conda axes and split effort, documentation, etc.
> >
> >
> > If it helps you sleep: I'm confident that no one is planning this
> particular
> > riot. It takes little work to support pip and conda - the hard issues are
> > mostly with building, not installing.
>
> Well I wouldn't say "no one". You weren't there at the NumPy BoF
> at SciPy this year, where a substantial portion of the room started
> calling for exactly this, and I felt pretty alone up front trying to
> squash it almost singlehandedly. It was a bit awkward actually!
>

Hmm, guess I missed something. Still confident that it won't happen,
because (a) it doesn't make too much sense to me, and (b) there's probably
little overlap between the people that want that and the people that do the
actual build/packaging maintenance work (outside of conda people
themselves).


> The argument for numpy dropping pip support is actually somewhat
> compelling. It goes like this: conda users don't care if numpy breaks
> ABI, because conda already enforces that numpy-C-API-using-packages
> have to be recompiled every time a new numpy release comes out.
> Therefore, if we only supported conda, then we would be free to break
> ABI and clean up some of the 20 year old broken junk that we have
> lying around and add new features more quickly. Conclusion: continuing
> to support pip is hobbling innovation in the whole numerical
> ecosystem.


> IMO this is not compelling *enough* to cut off our many many users who
> are not using conda,


Agreed. It's also not like those are the only options. If breaking ABI
became so valuable that it needs to be done, I'd rather put the burden of
that on packagers of projects that rely on numpy and would have to create
lots of new installers rather than on users that expect "pip install" to
work.

Ralf



> plus a schism like this would have all kinds of
> knock-on costs (the value of a community grows like O(n**2), so
> splitting a community is expensive!). And given that you and I are
> both on the list of gatekeepers to such a change, yeah, it's not going
> to happen in the immediate future.
>
> But... if conda continues to gain mindshare at pip's expense, and they
> fix some of the more controversial sticking points (e.g. the current
> reliance on secret proprietary build recipes), and the pip/distutils
> side of things continues to stagnate WRT things like this... I dunno,
> I could imagine that argument becoming more and more compelling over
> the next few years. At that point I'm honestly not sure what happens,
> but I suspect that all the options are unpleasant. You and I have a
> fair amount of political capital, but it is finite. ...Or maybe I'm
> worrying over nothing and everything would be fine, but still, it'd be
> nice if we never have to find out because pip etc. get better enough
> that the issue goes away.
>
> What I'm saying is, it's not a coincidence that it was after SciPy
> this year that I finally subscribed to distutils-sig :-).
>
> -n
>
> --
> Nathaniel J. Smith -- http://vorpus.org
>
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] build system abstraction PEP

2015-10-29 Thread Ralf Gommers
On Tue, Oct 27, 2015 at 12:23 AM, Robert Collins 
wrote:

> On 27 October 2015 at 10:32, Ralf Gommers  wrote:
> >
>

> > (2) Complex example: to build a Scipy wheel on Windows with MinGW the
> > command is ``python setup.py config --compiler=mingw32 build
> > --compiler=mingw32 bdist_wheel``.
>
> So in this case the build tool needs to know about the compiler stuff
> itself - pip doesn't know. We have a way in pip to tunnel stuff down to
> setuptools today; that's incompatible with dynamically building wheels
> on the fly for 'pip install' - so I'm not sure it needs to be
> reflected here.
>

It looks like you made a start at
https://github.com/rbtcollins/interoperability-peps/blob/build-system-abstraction/build-system-abstraction.rst#handling-of-compiler-options

"Instead we recommend that individual build tools should have
a config file mechanism to provide such pervasive settings
across all things built locally."

makes sense, at least up to "across". The same settings for everything
built locally isn't appropriate - you should be able to have a config file for
one project. Example: you may want to build with MSVC and with MinGW on
Windows.

Also, it seems to me like there should be a way to pass the full path of a
config file to the build tool via pip. Can be easily done via an optional
key "config-file" in the build tool description. Example: right now numpy
distributes a site.cfg.example that users can rename to site.cfg and
usually put right next to setup.py. When one uses pip, it may go off
building in some tmpdir, change path envvars, etc. - so how does the build
tool find that config file?
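
To illustrate, something along these lines could work (the key names here are
just made up for the example, they're not taken from the draft):

bootstrap-requires:
  - numpy
build-tool: numpy-build
config-file: site.cfg   # pip resolves this against the project root and
                        # passes the absolute path on to the build tool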

Ralf


> We'll need some more input on that I think.
> ...
> > mechanism) to the build tool. If it is out of scope, I'd be interested to
> > see what you think are use-cases with complex requirements that are
> enabled
> > by this PEP.
>
> The PEP is aimed at enabling additional build-tools to be on parity
> with setuptools in *pip install* invocations that go through the
> wheel-autobuild-path in pip.
>
> The complex examples of passing arbitrary options to setup.py
> currently bypasses wheel building in pip, and so can't be tackled at
> all :(.
>
> But we can work on including that with some thought.
>
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] wacky idea about reifying extras

2015-10-28 Thread Ralf Gommers
On Tue, Oct 27, 2015 at 5:45 PM, Brett Cannon  wrote:

>
> Nathaniel's comment about how this might actually give pip a leg up on
> conda also sounds nice to me as I have enough worry about having a fissure
> in 1D along the Python 2/3 line, and I'm constantly worried that the
> scientific community is going to riot and make it a 2D fissure along Python
> 2/3, pip/conda axes and split effort, documentation, etc.
>

If it helps you sleep: I'm confident that no one is planning this
particular riot. It takes little work to support pip and conda - the hard
issues are mostly with building, not installing.

Smaller riots like breaking ``python setup.py install`` and recommending ``pip
install .`` instead[1] are in the cards though :)

Ralf

[1] http://article.gmane.org/gmane.comp.python.numeric.general/61757
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Please don't impose additional barriers to participation

2015-10-28 Thread Ralf Gommers
On Wed, Oct 28, 2015 at 9:12 PM, Paul Moore  wrote:

> On 28 October 2015 at 18:44, Donald Stufft  wrote:
> > On October 28, 2015 at 2:42:19 PM, Paul Moore (p.f.mo...@gmail.com)
> wrote:
> >>
> >> [1] If I'm supposed to be getting notifications for comments on the PR
> >> (as a member of the PyPA group, shouldn't I be?) then it's not
> >> happening... I know I can subscribe to the PR, but I'm not clear why I
> >> should need to - I don't for pip issues, for instance...
> >
> > You’re probably not directly subscribed to that repo, there should be a
> follow button on https://github.com/pypa/interoperability-peps. You got
> it automatically for pip because you were explicitly added as a committer
> on the pip repo.
>
> No subscribe button that I can see on the repo. I could subscribe on a
> per-issue basis, but that's a bit different.
>

It's the "Watch" button on the right top of
https://github.com/pypa/interoperability-peps

Ralf
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] build system abstraction PEP

2015-10-27 Thread Ralf Gommers
On Wed, Oct 28, 2015 at 6:03 AM, Nathaniel Smith  wrote:

> On Sun, Oct 25, 2015 at 11:01 PM, Robert Collins
>  wrote:
> > Since Nathaniel seems busy, I've taken the liberty of drafting a
> > narrow PEP based on the conversations that arose from the prior
> > discussion.
> >
> > It (naturally) has my unique flavor, but builds on the work Nathaniel
> > had put together, so I've put his name as a co-author even though he
> > hasn't seen a word of it until now :) - all errors and mistakes are
> > therefore mine...
> >
> > Current draft text in rendered form at:
> > https://gist.github.com/rbtcollins/666c12aec869237f7cf7
> >
> > I've run it past Donald and he has a number of concerns - I think
> > we'll need to discuss them here, and possibly in another hangout, to
> > get a path forward.
>
> Now that I've had a chance to read it properly...
>
> First impression: there's surprisingly little overlap between this and
> my simultaneously-posted draft [1] --


Which is good, double work has been kept to a minimum - it's like you two
actually coordinated this:)


> my draft focuses on trying to
> only document the stuff that everyone seemed to agree on, includes a
> proposal for static metadata in sdists (since Donald seemed to be
> saying that he considered this a mandatory component of any proposal
> to update how sdists work), and tries to set out a blueprint for how
> to organize the remaining issues, whereas yours spends most of its
> time on the controversial details that I decided to skip over for this
> draft.
>

Imho they're not details. The controversial parts of your draft are still
mostly in the metadata part. If you'd split your draft in two, then you'd
see that the first one is pretty short and the second half of it is only
TBDs. And those TBDs are exactly what Robert's draft fills in.

@Robert: thanks for the example, very helpful. I'll look at it in more
detail later.

Ralf
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] build system abstraction PEP

2015-10-26 Thread Ralf Gommers
On Mon, Oct 26, 2015 at 7:01 AM, Robert Collins 
wrote:

> Since Nathaniel seems busy, I've taken the liberty of drafting a
> narrow PEP based on the conversations that arose from the prior
> discussion.
>
> It (naturally) has my unique flavor, but builds on the work Nathaniel
> had put together, so I've put his name as a co-author even though he
> hasn't seen a word of it until now :) - all errors and mistakes are
> therefore mine...
>
> Current draft text in rendered form at:
> https://gist.github.com/rbtcollins/666c12aec869237f7cf7


I have the feeling that I like where this PEP is going, but it's quite
difficult to read. It would be very helpful to add one or two examples.
Suggestions:

(1) Super simple example: for a pure Python package with one dependency and
a build tool which has no dependencies itself.

(2) Complex example: to build a Scipy wheel on Windows with MinGW the
command is ``python setup.py config --compiler=mingw32 build
--compiler=mingw32 bdist_wheel``.

I tried to do (1) with flit as the build tool, the pypa.yaml should include:

bootstrap-requires:
  - requests
  - docutils
  - requirement:python_version>="3"

So you have to encode flit's dependencies in your pypa.yaml, which will
break as soon as flit grows a new dependency. Or did I misunderstand?

Maybe (2) simply is not possible / out of scope, but I have the feeling
that there'll be a need for users to be able pass stuff (via pip or some
other mechanism) to the build tool. If it is out of scope, I'd be
interested to see what you think are use-cases with complex requirements
that are enabled by this PEP.

Cheers,
Ralf



>
> I've run it past Donald and he has a number of concerns - I think
> we'll need to discuss them here, and possibly in another hangout, to
> get a path forward.
>
> Cheers,
> Rob
>
> --
> Robert Collins 
> Distinguished Technologist
> HP Converged Cloud
> ___
> Distutils-SIG maillist  -  Distutils-SIG@python.org
> https://mail.python.org/mailman/listinfo/distutils-sig
>
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Towards a simple and standard sdist format that isn't intertwined with distutils

2015-10-12 Thread Ralf Gommers
On Mon, Oct 12, 2015 at 6:37 AM, Robert Collins 
wrote:

> On 12 October 2015 at 17:06, Robert Collins 
> wrote:
> > EWOW, huge thread.
> >
> > I've read nearly all of it but in order not to make it massively
> > worse, I'm going to reply to all the points I think need raising in
> > one mail :).
>
> And a bugfix :) - I didn't link to the docs for the build system
> interface we have today -
> https://pip.pypa.io/en/latest/reference/pip_install/#build-system-interface
>

>From that link:
"""
In order for pip to install a package from source, setup.py must implement
the following commands:
...
The install command should implement the complete process of installing the
package to the target directory XXX.
 """
That just sounds so wrong. You want the build system to build, not install.
And if "install" actually means "build to a tempdir so pip can copy it over
it to its final location", then how does that address something like
installing docs to a different dir than the package itself?

+1 for your main point of focusing more on enabling other build systems
though.

Ralf



> -Rob
>
>
>
> --
> Robert Collins 
> Distinguished Technologist
> HP Converged Cloud
> ___
> Distutils-SIG maillist  -  Distutils-SIG@python.org
> https://mail.python.org/mailman/listinfo/distutils-sig
>
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] setup_requires and install_requires

2014-05-24 Thread Ralf Gommers
On Mon, May 19, 2014 at 1:01 AM, Toby St Clere Smithe wrote:

> Hi,
>
> I'm sure you're all aware of this,


I wasn't actually.


> but I wonder if there's any progress
> for me to be aware of. I've got an extension that I build with
> distutils. It requires numpy both to build and to run, so I have numpy
> in both setup_requires and install_requires. Yet setup.py builds numpy
> twice -- once for the build stage, and then again on installation. This
> seems inefficient to me -- why not just build it once? Is this by
> design?
>

Seems fairly inefficient, so I'd guess it's not by design.

Note that if numpy is already installed, you may want to avoid adding the
*_requires arguments in order not to silently upgrade or break the
installed numpy. Something like
https://github.com/scipy/scipy/pull/3566/files
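
The idea, roughly (just a sketch to illustrate, not the actual patch from that
PR):

from setuptools import setup

def numpy_requirement():
    # Only ask for numpy if it can't be imported yet, so an existing
    # install doesn't get silently upgraded or rebuilt.
    try:
        import numpy  # noqa
        return []
    except ImportError:
        return ['numpy']

setup(
    name='myext',
    version='0.1',
    setup_requires=numpy_requirement(),
    install_requires=numpy_requirement(),
)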

Ralf



>
> Best regards,
>
>
> --
> Toby St Clere Smithe
> http://tsmithe.net
>
> ___
> Distutils-SIG maillist  -  Distutils-SIG@python.org
> https://mail.python.org/mailman/listinfo/distutils-sig
>
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Using Wheel with zipimport

2014-01-30 Thread Ralf Gommers
On Wed, Jan 29, 2014 at 5:22 PM, Vinay Sajip wrote:

> 
> On Wed, 29/1/14, Paul Moore  wrote:
>
> > That package installation utilities should not dabble in sys.path
> manipulation.
> > The import path is the user's responsibility.
>
> User as in developer (rather than end user). Right, and distlib's wheel
> code
> does no sys.path manipulation unless explicitly asked to.
>

Also end user. If, as a user, I want to use inplace builds and PYTHONPATH
instead of virtualenvs for whatever reason, that should be supported.
Setuptools inserting stuff into sys.path that comes before PYTHONPATH entries
is quite annoying.
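
A quick way to see what ends up in front of your PYTHONPATH entries (just an
illustration, the output depends on your environment):

import os
import sys

pythonpath = os.environ.get('PYTHONPATH', '').split(os.pathsep)
for i, entry in enumerate(sys.path):
    marker = '  <- from PYTHONPATH' if entry in pythonpath else ''
    print('%d %s%s' % (i, entry, marker))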

Ralf
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] pip on windows experience

2014-01-23 Thread Ralf Gommers
On Thu, Jan 23, 2014 at 3:42 PM, Oscar Benjamin
wrote:

> On Thu, Jan 23, 2014 at 12:16:02PM +, Paul Moore wrote:
> >
> > The official numpy installer uses some complex magic to select the
> > right binaries based on your CPU, and this means that the official
> > numpy "superpack" wininst files don't convert (at least I don't think
> > they do, it's a while since I tried).
>
> It's probably worth noting that numpy are toying around with wheels and
> have uploaded a number of them to PyPI for testing:
> http://sourceforge.net/projects/numpy/files/wheels_to_test/
>
> Currently there are only OSX wheels there (excluding the pure Python
> ones) and they're not available on PyPI. I assume that they're waiting
> for a solution for the Windows installer (a post-install script for
> wheels). That would give a lot more impetus to put wheels up on PyPI.
>

Indeed. We discussed just picking the SSE2 or SSE3 build and putting that
up as a wheel, but that was deemed a not so great idea:
http://article.gmane.org/gmane.comp.python.numeric.general/56072

> The Sourceforge OSX wheels are presumably not getting that much use
> right now. The OSX-specific numpy wheel has been downloaded 4 times in
> the last week: twice on Windows and twice on Linux!
>

Some feedback from the people who did try those wheels would help. I asked
for that on the numpy list after creating them, but didn't get much. So I
haven't been in a hurry to move them over to PyPI.

Ralf
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Handling the binary dependency management problem

2013-12-06 Thread Ralf Gommers
On Fri, Dec 6, 2013 at 2:48 PM, Oscar Benjamin
wrote:

> On 6 December 2013 13:06, David Cournapeau  wrote:
> >
> > As Ralf, I think it is overkill. The problem of SSE vs non SSE is
> because of
> one library, ATLAS, which has IMO the design flaw of being arch specific.
> I
> > always hoped we could get away from this when I built those special
> > installers for numpy :)
> >
> > MKL does not have this issue, and now that openblas (under a BSD license)
> > can be used as well, we can alleviate this for deployment. Building a
> > deployment story for this is not justified.
>
> Oh, okay that's great. How hard would it be to get openblas numpy
> wheels up and running? Would they be compatible with the existing
> scipy etc. binaries?


OpenBLAS is still pretty buggy compared to ATLAS (although performance in
many cases seems to be on par); I don't think that will be well received
for the official releases. We actually did discuss it as an alternative for
Accelerate on OS X, but there was quite a bit of opposition.

Ralf
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Handling the binary dependency management problem

2013-12-06 Thread Ralf Gommers
On Fri, Dec 6, 2013 at 1:33 PM, Nick Coghlan  wrote:

> On 6 December 2013 17:21, Ralf Gommers  wrote:
> > On Fri, Dec 6, 2013 at 6:47 AM, Nick Coghlan  wrote:
> >> With that approach, the existing wheel model would work (no need for a
> >> variant system), and numpy installations could be freely moved between
> >> machines (or shared via a network directory).
> >
> > Hmm, taking a compile flag and encoding it in the package layout seems
> like
> > a fundamentally wrong approach. And in order to not litter the source
> tree
> > and all installs with lots of empty dirs, the changes to __init__.py will
> > have to be made at build time based on whether you're building Windows
> > binaries or something else. Path manipulation is usually fragile as
> well. So
> > I suspect this is not going to fly.
>
> In the absence of the perfect solution (i.e. picking the right variant
> out of no SSE, SSE2, SSE3 automatically), would it be a reasonable
> compromise to standardise on SSE2 as "lowest acceptable common
> denominator"?
>

Maybe, yes. It's hard to figure out the impact of this, but I'll bring it
up on the numpy list. If no one has a good way to get some statistics on
CPUs that don't support these instruction sets, it may be worth trying it for
one of the Python versions and seeing how many users run into the issue.

By accident we've released an incorrect binary once before, by the way
(scipy 0.8.0 for Python 2.5), and that became a problem fairly quickly:
https://github.com/scipy/scipy/issues/1697. That was 2010 though.


> Users with no sse capability at all or that wanted to take advantage
> of the SSE3 optimisations, would need to grab one of the Windows
> installers or something from conda, but for a lot of users, a "pip
> install numpy" that dropped the SSE2 version onto their system would
> be just fine, and a much lower barrier to entry than "well, first
> install this other packaging system that doesn't interoperate with
> your OS package manager at all...".
>

Well, for most Windows users grabbing a .exe and clicking on it is a lower
barrier than opening a console and typing "pip install numpy" :)


> Are we letting perfect be the enemy of better, here? (punting on the
> question for 6 months and seeing if we can deal with the install-time
> variant problem in pip 1.6 is certainly an option, but if we don't
> *need* to wait that long...)
>

Let's first get the OS X wheels up, that can be done now. And then see what
is decided on the numpy list for the compromise you propose above.

Ralf
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Handling the binary dependency management problem

2013-12-05 Thread Ralf Gommers
On Fri, Dec 6, 2013 at 6:47 AM, Nick Coghlan  wrote:

> On 6 December 2013 11:52, Donald Stufft  wrote:
> >
> > On Dec 5, 2013, at 8:48 PM, Chris Barker - NOAA Federal <
> chris.bar...@noaa.gov> wrote:
> >
> >> What would really be best is run-time selection of the appropriate lib
> >> -- it would solve this problem, and allow users to re-distribute
> >> working binaries via py2exe, etc. And not require opening a security
> >> hole in wheels...
> >>
> >> Not sure how hard that would be to do, though.
> >
> > Install time selectors probably isn’t a huge deal as long as there’s a
> way
> > to force a particular variant to install and to disable the executing
> code.
>
> Hmm, I just had an idea for how to do the runtime selection thing. It
> actually shouldn't be that hard, so long as the numpy folks are OK
> with a bit of __path__ manipulation in package __init__ modules.
>
> Specifically, what could be done is this:
>
> - all of the built SSE level dependent modules would move out of their
> current package directories into a suitable named subdirectory (say
> "_nosse, _sse2, _sse3")
> - in the __init__.py file for each affected subpackage, you would have
> a snippet like:
>
> numpy._add_sse_subdir(__path__)
>
> where _add_sse_subdir would be something like:
>
> def _add_sse_subdir(search_path):
>     if len(search_path) > 1:
>         return # Assume the SSE dependent dir has already been added
>     # Could likely do this SSE availability check once at import time
>     if _have_sse3():
>         sub_dir = "_sse3"
>     elif _have_sse2():
>         sub_dir = "_sse2"
>     else:
>         sub_dir = "_nosse"
>     main_dir = search_path[0]
>     search_path.append(os.path.join(main_dir, sub_dir))
>
> With that approach, the existing wheel model would work (no need for a
> variant system), and numpy installations could be freely moved between
> machines (or shared via a network directory).
>

Hmm, taking a compile flag and encoding it in the package layout seems like
a fundamentally wrong approach. And in order to not litter the source tree
and all installs with lots of empty dirs, the changes to __init__.py will
have to be made at build time based on whether you're building Windows
binaries or something else. Path manipulation is usually fragile as well.
So I suspect this is not going to fly.

Ralf



> To avoid having the implicit namespace packages in 3.3+ cause any
> problems with this approach, the SSE subdirectories should contain
> __init__.py files that explicitly raise ImportError.
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
>
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Handling the binary dependency management problem

2013-12-05 Thread Ralf Gommers
On Thu, Dec 5, 2013 at 10:12 PM, Oscar Benjamin
wrote:

> On 4 December 2013 20:56, Ralf Gommers  wrote:
> > On Wed, Dec 4, 2013 at 5:05 PM, Chris Barker - NOAA Federal
> >  wrote:
> >>
> >> So a lowest common denominator wheel would be very, very, useful.
> >>
> >> As for what that would be: the superpack is great, but it's been around
> a
> >> while (long while in computer years)
> >>
> >> How many non-sse machines are there still out there? How many non-sse2?
> >
> > Hard to tell. Probably <2%, but that's still too much. Some older Athlon
> XPs
> > don't have it for example. And what if someone submits performance
> > optimizations (there has been a focus on those recently) to numpy that
> use
> > SSE4 or AVX for example? You don't want to reject those based on the
> > limitations of your distribution process.
> >
> >> And how big is the performance boost anyway?
> >
> > Large. For a long time we've put a non-SSE installer for numpy on pypi so
> > that people would stop complaining that ``easy_install numpy`` didn't
> work.
> > Then there were regular complaints about dot products being an order of
> > magnitude slower than Matlab or R.
>
> Yes, I wouldn't want that kind of bad PR getting around about
> scientific Python "Python is slower than Matlab" etc.
>
> It seems as if there is a need to extend the pip+wheel+PyPI system
> before this can fully work for numpy. I'm sure that the people here
> who have been working on all of this would be very interested to know
> what kinds of solutions would work best for numpy and related
> packages.
>
> You mentioned in another message that a post-install script seems best
> to you. I suspect there is a little reluctance to go this way because
> one of the goals of the wheel system is to reduce the situation where
> users execute arbitrary code from the internet with admin privileges
> e.g. "sudo pip install X" will download and run the setup.py from X
> with root privileges. Part of the point about wheels is that they
> don't need to be "executed" for installation. I know that post-install
> scripts are common in .deb and .rpm packages but I think that the use
> case there is slightly different as the files are downloaded from
> controlled repositories whereas PyPI has no quality assurance.
>

I don't think it's avoidable - anything that is transparent to the user
will have to execute code. The multiwheel idea of Nick looks good to me.


> BTW, how do the distros handle e.g. SSE?
>

I don't know exactly to be honest.


> My understanding is that they
> just strip out all the SSE and related non-portable extensions and
> ship generic 686 binaries. My experience is with Ubuntu and I know
> they're not very good at handling BLAS with numpy and they don't seem
> to be able to compile fftpack as well as Christoph can.
>
> Perhaps a good near-term plan might be to
> 1) Add the bdist_wheel command to numpy - which may actually be almost
> automatic with new enough setuptools/pip and wheel installed.
> 2) Upload wheels for OSX to PyPI - for OSX SSE support can be inferred
> from OS version which wheels can currently handle.
> 3) Upload wheels for Windows to somewhere other than PyPI e.g.
> SourceForge pending a distribution solution that can detect SSE
> support on Windows.
>

That's a reasonable plan. I have an OS X wheel already, which required only
a minor change to numpy's setup.py.


> I think it would be good to have a go at wheels even if they're not
> fully ready for PyPI (just in case some other issue surfaces in the
> process).
>

Agreed.

Ralf
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Handling the binary dependency management problem

2013-12-04 Thread Ralf Gommers
On Thu, Dec 5, 2013 at 1:09 AM, Chris Barker  wrote:

> On Wed, Dec 4, 2013 at 12:56 PM, Ralf Gommers wrote:
>
>> The problem is explaining to people what they want - no one reads docs
>> before grabbing a binary.
>>
>
> right -- so we want a default "pip install" install that will work for
> most people. And I think "works for most people" is far more important than
> "optimized for your system"
>
>  How many non-sse machines are there still out there? How many non-sse2?
>>>
>>
>> Hard to tell. Probably <2%, but that's still too much.
>>
>
> I have no idea how to tell, but I agree 2% is too much, however, 0.2%
> would not be too much (IMHO) -- anyway, I'm just wondering how much we are
> making this hard for very little return.
>

I also don't know.


> Anyway, best would be a select-at-runtime option -- I think that's what
> MKL does. IF someone can figure that out, great, but I still think a numpy
> wheel that works for most would still be worth doing ,and we can do it now.
>

I'll start playing with wheels in the near future.


>
>  Some older Athlon XPs don't have it for example. And what if someone
>> submits performance optimizations (there has been a focus on those
>> recently) to numpy that use SSE4 or AVX for example? You don't want to
>> reject those based on the limitations of your distribution process.
>>
>
> No, but we also don't want to distribute nothing because we can't
> distribute the best thing.
>
>  And how big is the performance boost anyway?
>>>
>>
>> Large. For a long time we've put a non-SSE installer for numpy on pypi so
>> that people would stop complaining that ``easy_install numpy`` didn't work.
>> Then there were regular complaints about dot products being an order of
>> magnitude slower than Matlab or R.
>>
>
> Does SSE buy you that? Or do you need a good BLAS? But same point, anyway.
> Though  I think we lose more users by people not getting an install at all
> then we lose by people installing and then finding out they need a to
> install an optimized version to a get a good "dot".
>
>
>>
>>> Yes, 64-bit MinGW + gfortran doesn't yet work (no place to install dlls
>> from the binary, long story). A few people including David C are working on
>> this issue right now. Visual Studio + Intel Fortran would work, but going
>> with only an expensive toolset like that is kind of a no-go -
>>
>
> too bad there is no MS-fortran-express...
>
> On the other hand, saying "no one can have a 64 bit scipy, because people
> that want to build fortran extensions that are compatible with it are out
> of luck" is less than ideal. Right now, we are giving the majority of
> potential scipy users nothing for Win64.
>

There are multiple ways to get a win64 install - Anaconda, EPD, WinPython,
Christoph's installers. So there's no big hurry here.


> You know what they say "done is better than perfect"
>
> [Side note: scipy really shouldn't be a monolithic package with everything
> and the kitchen sink in it -- this would all be a lot easier if it was a
> namespace package and people could get the non-Fortran stuff by
> itself...but I digress.]
>

Namespace packages have been tried with scikits - there's a reason why
scikit-learn and statsmodels spent a lot of effort dropping them. They
don't work. Scipy, while monolithic, works for users.


>  Note on OS-X :  how long has it been since Apple shipped a 32 bit
>>> machine? Can we dump default 32 bit support? I'm pretty sure we don't need
>>> to do PPC anymore...
>>>
>>
>> I'd like to, but we decided to ship the exact same set of binaries as
>> python.org - which means compiling on OS X 10.5/10.6 and including PPC +
>> 32-bit Intel.
>>
>
> no it doesn't -- if we decide not to ship the 3.9, PPC + 32-bit Intel.
> binary -- why should that mean that we can't ship the Intel32+64 bit one?
>

But we do ship the 32+64-bit one (at least for Python 2.7 and 3.3). So
there shouldn't be any issue here.

Ralf



> And as for that -- if someone gets a binary with only 64 bit in it, it
> will run fine with the 32+64 bit build, as long as it's run on a 64 bit
> machine. So if, in fact, no one has a 32 bit Mac anymore (I'm not saying
> that's the case) we don't need to build for it.
>
> And maybe the next python.org builds could be 64 bit Intel only. Probably
> not yet, but we shouldn't be locked in forever
>
> -Chris
>
>
>
> --
>
> Christopher Barker, Ph.D.
> Oceanographer
>
> Emergency Response Division
> NOAA/NOS/OR&R(206) 526-6959   voice
> 7600 Sand Point Way NE   (206) 526-6329   fax
> Seattle, WA  98115   (206) 526-6317   main reception
>
> chris.bar...@noaa.gov
>
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Handling the binary dependency management problem

2013-12-04 Thread Ralf Gommers
On Wed, Dec 4, 2013 at 10:59 PM, Paul Moore  wrote:

> On 4 December 2013 21:13, Ralf Gommers  wrote:
> > Besides the issues you mention, the problem is that it creates a single
> > point of failure. I really appreciate everything Christoph does, but it's
> > not appropriate as the default way to provide binary releases for a large
> > number of projects. There needs to be a reproducible way that the devs of
> > each project can build wheels - this includes the right metadata, but
> > ideally also a good way to reproduce the whole build environment
> including
> > compilers, blas/lapack implementations, dependencies etc. The latter
> part is
> > probably out of scope for this list, but is discussed right now on the
> > numfocus list.
>
> You're right - what I said ignored the genuine work being done by the
> rest of the scientific community to solve the real issues involved. I
> apologise, that wasn't at all fair.
>

No need to apologize at all.

Ralf
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Handling the binary dependency management problem

2013-12-04 Thread Ralf Gommers
On Wed, Dec 4, 2013 at 11:41 AM, Oscar Benjamin
wrote:

> On 4 December 2013 07:40, Ralf Gommers  wrote:
> > On Wed, Dec 4, 2013 at 1:54 AM, Donald Stufft  wrote:
> >>
>> I’d love to get Wheels to the point they are more suitable than they are
> >> for SciPy stuff,
> >
> > That would indeed be a good step forward. I'm interested to try to help
> get
> > to that point for Numpy and Scipy.
>
> Thanks Ralf. Please let me know what you think of the following.
>
> >> I’m not sure what the diff between the current state and what
> >> they need to be are but if someone spells it out (I’ve only just skimmed
> >> your last email so perhaps it’s contained in that!) I’ll do the arguing
> >> for it. I
> >> just need someone who actually knows what’s needed to advise me :)
> >
> > To start with, the SSE stuff. Numpy and scipy are distributed as
> "superpack"
> > installers for Windows containing three full builds: no SSE, SSE2 and
> SSE3.
> > Plus a script that runs at install time to check which version to use.
> These
> > are built with ``paver bdist_superpack``, see
> > https://github.com/numpy/numpy/blob/master/pavement.py#L224. The NSIS
> and
> > CPU selector scripts are under tools/win32build/.
> >
> > How do I package those three builds into wheels and get the right one
> > installed by ``pip install numpy``?
>
> This was discussed previously on this list:
> https://mail.python.org/pipermail/distutils-sig/2013-August/022362.html
>

Thanks, I'll go read that.

Essentially the current wheel format and specification does not
> provide a way to do this directly. There are several different
> possible approaches.
>
> One possibility is that the wheel spec can be updated to include a
> post-install script (I believe this will happen eventually - someone
> correct me if I'm wrong). Then the numpy for Windows wheel can just do
> the same as the superpack installer: ship all variants, then
> delete/rename in a post-install script so that the correct variant is
> in place after install.
>
> Another possibility is that the pip/wheel/PyPI/metadata system can be
> changed to allow a "variant" field for wheels/sdists. This was also
> suggested in the same thread by Nick Coghlan:
> https://mail.python.org/pipermail/distutils-sig/2013-August/022432.html
>
> The variant field could be used to upload multiple variants e.g.
> numpy-1.7.1-cp27-cp27m-win32.whl
> numpy-1.7.1-cp27-cp27m-win32-sse.whl
> numpy-1.7.1-cp27-cp27m-win32-sse2.whl
> numpy-1.7.1-cp27-cp27m-win32-sse3.whl
> then if the user requests 'numpy:sse3' they will get the wheel with
> sse3 support.
>
> Of course how would the user know if their CPU supports SSE3? I know
> roughly what SSE is but I don't know what level of SSE is available on
> each of the machines I use. There is a Python script/module in
> numpexpr that can detect this:
> https://github.com/eleddy/numexpr/blob/master/numexpr/cpuinfo.py
>
> When I run that script on this machine I get:
> $ python cpuinfo.py
> CPU information: CPUInfoBase__get_nbits=32 getNCPUs=2 has_mmx has_sse2
> is_32bit is_Core2 is_Intel is_i686
>
> So perhaps someone could break that script out of numexpr and release
> it as a separate package on PyPI.


That's similar to what numpy has - actually it's a copy from
numpy.distutils.cpuinfo
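
For illustration, a stripped-down version of the kind of check those scripts
do (Linux-only, reading /proc/cpuinfo; the real cpuinfo code is far more
complete and also covers Windows and OS X):

def cpu_flags():
    try:
        with open('/proc/cpuinfo') as f:
            for line in f:
                if line.startswith('flags'):
                    return set(line.split(':', 1)[1].split())
    except (IOError, OSError):
        pass
    return set()

flags = cpu_flags()
if 'pni' in flags:        # 'pni' is how Linux reports SSE3
    print('SSE3 supported')
elif 'sse2' in flags:
    print('SSE2 supported')
else:
    print('no SSE2/SSE3 support detected')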


> Then the instructions for installing
> numpy could be something like
> """
> You can install numpy with
>
> $pip install numpy
>
> which will download the default version without any CPU-specific
> optimisations.
>
> If you know what level of SSE support your CPU has then you can
> download a more optimised numpy with either of:
>
> $ pip install numpy:sse2
> $ pip install numpy:sse3
>
> To determine whether or not your CPU has SSE2 or SSE3 or no SSE
> support you can install and run the cpuinfo script. For example on
> this machine:
>
> $ pip install cpuinfo
> $ python -m cpuinfo --sse
> This CPU supports the SSE3 instruction set.
>
> That means we can install numpy:sse3.
> """
>

The problem with all of the above is indeed that it's not quite automatic.
You don't want your user to have to know or care about what SSE is. Nor do
you want to create a new package just to hack around a pip limitation. I
like the post-install (or pre-install) option much better.
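
Roughly what such a post-install step could do (hypothetical - there is no
such hook in the wheel spec today; this just mirrors what the NSIS superpack
installer does by moving the selected variant into place):

import os
import shutil

def select_variant(pkg_dir, has_sse3, has_sse2):
    # pick the subdirectory matching the CPU, move its contents up one level
    variant = '_sse3' if has_sse3 else ('_sse2' if has_sse2 else '_nosse')
    src = os.path.join(pkg_dir, variant)
    for name in os.listdir(src):
        shutil.move(os.path.join(src, name), pkg_dir)
    # drop the variants that weren't selected
    for unused in ('_nosse', '_sse2', '_sse3'):
        shutil.rmtree(os.path.join(pkg_dir, unused), ignore_errors=True)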


> Of course it would be a shame to have a solution that is so close to
> automatic without quite being automatic. Also the problem is that
> having no SSE support in the default numpy means that lots of people
> would lose out.

Re: [Distutils] Handling the binary dependency management problem

2013-12-04 Thread Ralf Gommers
On Wed, Dec 4, 2013 at 9:13 AM, Paul Moore  wrote:

> On 4 December 2013 07:40, Ralf Gommers  wrote:
> >> I’m not sure what the diff between the current state and what
> >> they need to be are but if someone spells it out (I’ve only just skimmed
> >> your last email so perhaps it’s contained in that!) I’ll do the arguing
> >> for it. I
> >> just need someone who actually knows what’s needed to advise me :)
> >
> >
> > To start with, the SSE stuff. Numpy and scipy are distributed as
> "superpack"
> > installers for Windows containing three full builds: no SSE, SSE2 and
> SSE3.
> > Plus a script that runs at install time to check which version to use.
> These
> > are built with ``paver bdist_superpack``, see
> > https://github.com/numpy/numpy/blob/master/pavement.py#L224. The NSIS
> and
> > CPU selector scripts are under tools/win32build/.
> >
> > How do I package those three builds into wheels and get the right one
> > installed by ``pip install numpy``?
>
> I think that needs a compatibility tag. Certainly it isn't immediately
> soluble now.
>
> Could you confirm how the correct one of the 3 builds is selected
> (i.e., what the code is to detect which one is appropriate)? I could
> look into what options we have here.
>

The stuff under tools/win32build I mentioned above. Specifically:
https://github.com/numpy/numpy/blob/master/tools/win32build/cpuid/cpuid.c


> > If this is too difficult at the moment, an easier (but much less
> important
> > one) would be to get the result of ``paver bdist_wininst_simple`` as a
> > wheel.
>
> That I will certainly look into. Simple answer is "wheel convert
> ". But maybe it would be worth adding a "paver bdist_wheel"
> command. That should be doable in the same way setuptools added a
> bdist_wheel command.
>
> > For now I think it's OK that the wheels would just target 32-bit Windows
> and
> > python.org compatible Pythons (given that that's all we currently
> > distribute). Once that works we can look at OS X and 64-bit Windows.
>
> Ignoring the SSE issue, I believe that simply wheel converting
> Christoph Gohlke's repository gives you that right now. The only
> issues there are (1) the MKL license limitation, (2) hosting, and (3)
> whether Christoph would be OK with doing this (he goes to lengths on
> his site to prevent spidering his installers).
>

Besides the issues you mention, the problem is that it creates a single
point of failure. I really appreciate everything Christoph does, but it's
not appropriate as the default way to provide binary releases for a large
number of projects. There needs to be a reproducible way that the devs of
each project can build wheels - this includes the right metadata, but
ideally also a good way to reproduce the whole build environment including
compilers, blas/lapack implementations, dependencies etc. The latter part
is probably out of scope for this list, but is discussed right now on the
numfocus list.


> I genuinely believe that "a scientific stack for non-scientists" is
> trivially solved in this way.


That would be nice, but no. The only thing you'd have achieved is to take a
curated stack of .exe installers and convert it to the same stack of
wheels. Which is nice and a step forward, but doesn't change much in the
bigger picture. The problem is certainly nontrivial.

Ralf


> For scientists, of course, we'd need to
> look deeper, but having a base to start from would be great.
>
> Paul
>
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Handling the binary dependency management problem

2013-12-04 Thread Ralf Gommers
On Wed, Dec 4, 2013 at 5:05 PM, Chris Barker - NOAA Federal <
chris.bar...@noaa.gov> wrote:

> Ralf,
>
> Great to have you on this thread!
>
> Note: supporting "variants" on one way or another is a great idea, but for
> right now, maybe we can get pretty far without it.
>
> There are options for "serious" scipy users that need optimum performance,
> and newbies that want the full stack.
>
> So our primary audience for "default" installs and pypi wheels are folks
> that need the core packages ( maybe a web dev that wants some MPL plots)
> and need things to "just work" more than anything optimized.
>

The problem is explaining to people what they want - no one reads docs
before grabbing a binary. On the other hand, using wheels does solve the
issue that people download 32-bit installers for 64-bit Windows systems.


> So a lowest common denominator wheel would be very, very, useful.
>
> As for what that would be: the superpack is great, but it's been around a
> while (long while in computer years)
>
> How many non-sse machines are there still out there? How many non-sse2?
>

Hard to tell. Probably <2%, but that's still too much. Some older Athlon
XPs don't have it for example. And what if someone submits performance
optimizations (there has been a focus on those recently) to numpy that use
SSE4 or AVX for example? You don't want to reject those based on the
limitations of your distribution process.

> And how big is the performance boost anyway?
>

Large. For a long time we've put a non-SSE installer for numpy on pypi so
that people would stop complaining that ``easy_install numpy`` didn't work.
Then there were regular complaints about dot products being an order of
magnitude slower than Matlab or R.

> What I'm getting at is that we may well be able to build a reasonable win32
> binary wheel that we can put up on pypi right now, with currently available
> tools.
>
> Then MPL and pandas and I python...
>
> Scipy is trickier-- what with the Fortran and all, but I think we could do
> Win32 anyway.
>
> And what's the hold up with win64? Is that fortran and scipy? If so, then
> why not do win64 for the rest of the stack?
>

Yes, 64-bit MinGW + gfortran doesn't yet work (no place to install dlls
from the binary, long story). A few people including David C are working on
this issue right now. Visual Studio + Intel Fortran would work, but going
with only an expensive toolset like that is kind of a no-go - especially
since I think you'd force everyone else that builds other Fortran
extensions to then also use the same toolset.

> (I, for one, have been a heavy numpy user since the Numeric days, and I
> still hardly use scipy)
>
> By the way, we can/should do OS-X too-- it seems easier in fact (fewer
> hardware options to support, and the Mac's universal binaries)
>
> -Chris
>
> Note on OS-X :  how long has it been since Apple shipped a 32 bit machine?
> Can we dump default 32 bit support? I'm pretty sure we don't need to do PPC
> anymore...
>

I'd like to, but we decided to ship the exact same set of binaries as
python.org - which means compiling on OS X 10.5/10.6 and including PPC +
32-bit Intel.

Ralf


>
> On Dec 3, 2013, at 11:40 PM, Ralf Gommers  wrote:
>
>
>
>
> On Wed, Dec 4, 2013 at 1:54 AM, Donald Stufft  wrote:
>
>>
>> On Dec 3, 2013, at 7:36 PM, Oscar Benjamin 
>> wrote:
>>
>> > On 3 December 2013 21:13, Donald Stufft  wrote:
>> >> I think Wheels are the way forward for Python dependencies. Perhaps
>> not for
>> >> things like fortran. I hope that the scientific community can start
>> >> publishing wheels at least in addition too.
>> >
>> > The Fortran issue is not that complicated. Very few packages are
>> > affected by it. It can easily be fixed with some kind of compatibility
>> > tag that can be used by the small number of affected packages.
>> >
>> >> I don't believe that Conda will gain the mindshare that pip has
>> outside of
>> >> the scientific community so I hope we don't end up with two systems
>> that
>> >> can't interoperate.
>> >
>> > Maybe conda won't gain mindshare outside the scientific community but
>> > wheel really needs to gain mindshare *within* the scientific
>> > community. The root of all this is numpy. It is the biggest dependency
>> > on PyPI, is hard to build well, and has the Fortran ABI issue. It is
>> > used by very many people who wouldn't consider themselves part of the
>> > "scientific community". For example matplotlib depends on it. The P

Re: [Distutils] Handling the binary dependency management problem

2013-12-03 Thread Ralf Gommers
On Wed, Dec 4, 2013 at 1:54 AM, Donald Stufft  wrote:

>
> On Dec 3, 2013, at 7:36 PM, Oscar Benjamin 
> wrote:
>
> > On 3 December 2013 21:13, Donald Stufft  wrote:
> >> I think Wheels are the way forward for Python dependencies. Perhaps not
> for
> >> things like fortran. I hope that the scientific community can start
> >> publishing wheels at least in addition too.
> >
> > The Fortran issue is not that complicated. Very few packages are
> > affected by it. It can easily be fixed with some kind of compatibility
> > tag that can be used by the small number of affected packages.
> >
> >> I don't believe that Conda will gain the mindshare that pip has outside
> of
> >> the scientific community so I hope we don't end up with two systems that
> >> can't interoperate.
> >
> > Maybe conda won't gain mindshare outside the scientific community but
> > wheel really needs to gain mindshare *within* the scientific
> > community. The root of all this is numpy. It is the biggest dependency
> > on PyPI, is hard to build well, and has the Fortran ABI issue. It is
> > used by very many people who wouldn't consider themselves part of the
> > "scientific community". For example matplotlib depends on it. The PyPy
> > devs have decided that it's so crucial to the success of PyPy that
> > numpy's basically being rewritten in their stdlib (along with the C
> > API).
> >
> > A few times I've seen Paul Moore refer to numpy as the "litmus test"
> > for wheels. I actually think that it's more important than that. If
> > wheels are going to fly then there *needs* to be wheels for numpy. As
> > long as there isn't a wheel for numpy then there will be lots of
> > people looking for a non-pip/PyPI solution to their needs.
> >
> > One way of getting the scientific community more on board here would
> > be to offer them some tangible advantages. So rather than saying "oh
> > well scientific use is a special case so they should just use conda or
> > something", the message should be "the wheel system provides solutions
> > to many long-standing problems and is even better than conda in (at
> > least) some ways because it cleanly solves the Fortran ABI issue for
> > example".
> >
> >
> > Oscar
>
> I’d love to get Wheels to the point they are more suitable than they are
> for
> SciPy stuff,


That would indeed be a good step forward. I'm interested to try to help get
to that point for Numpy and Scipy.

> I’m not sure what the diff between the current state and what
> they need to be are but if someone spells it out (I’ve only just skimmed
> your last email so perhaps it’s contained in that!) I’ll do the arguing
> for it. I just need someone who actually knows what’s needed to advise
> me :)
>

To start with, the SSE stuff. Numpy and scipy are distributed as
"superpack" installers for Windows containing three full builds: no SSE,
SSE2 and SSE3. Plus a script that runs at install time to check which
version to use. These are built with ``paver bdist_superpack``, see
https://github.com/numpy/numpy/blob/master/pavement.py#L224. The NSIS and
CPU selector scripts are under tools/win32build/.
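
The selection logic itself is not complicated; on Windows it boils down to
something like the sketch below. This is purely illustrative (the real logic
lives in the NSIS and CPU selector scripts mentioned above, not in Python),
but the Win32 IsProcessorFeaturePresent call it relies on is a documented
API:

    import ctypes

    # Win32 processor feature constants (see IsProcessorFeaturePresent docs).
    # Windows-only sketch; the build names below are illustrative.
    PF_XMMI64_INSTRUCTIONS_AVAILABLE = 10  # SSE2
    PF_SSE3_INSTRUCTIONS_AVAILABLE = 13    # SSE3

    def pick_build():
        """Return which of the three superpack builds to install."""
        has_feature = ctypes.windll.kernel32.IsProcessorFeaturePresent
        if has_feature(PF_SSE3_INSTRUCTIONS_AVAILABLE):
            return "sse3"
        elif has_feature(PF_XMMI64_INSTRUCTIONS_AVAILABLE):
            return "sse2"
        return "nosse"

    print(pick_build())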

How do I package those three builds into wheels and get the right one
installed by ``pip install numpy``?

If this is too difficult at the moment, an easier (but much less important)
one would be to get the result of ``paver bdist_wininst_simple`` as a wheel.
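
One possibly relevant route for that case, untested on my side and so
offered as a pointer rather than a recipe, is the ``wheel convert`` command
from the wheel project, which is documented to turn bdist_wininst installers
and eggs into wheels:

    pip install wheel
    paver bdist_wininst_simple
    wheel convert dist/numpy-1.8.0.win32-py2.7.exe  # installer name is illustrative

If that works, the resulting .whl could then be uploaded or installed
directly with pip.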

For now I think it's OK that the wheels would just target 32-bit Windows
and python.org compatible Pythons (given that that's all we currently
distribute). Once that works we can look at OS X and 64-bit Windows.

Ralf
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Handling the binary dependency management problem

2013-12-02 Thread Ralf Gommers
On Mon, Dec 2, 2013 at 12:38 AM, Paul Moore  wrote:

> On 1 December 2013 22:17, Nick Coghlan  wrote:
>
> > For example, I installed Nikola into a virtualenv last night. That
> > required installing the development headers for libxml2 and libxslt, but
> > the error that tells you that is a C compiler one.
> >
> > I've been a C programmer longer than I have been a Python one, but I
> > still had to resort to Google to try to figure out what dev libraries I
> > needed.
>
> But that's a *build* issue, surely? How does that relate to installing
> Nikola from a set of binary wheels?
>
> I understand you are thinking about non-Python libraries, but all I
> can say is that this has *never* been an issue to my knowledge in the
> Windows world. People either ship DLLs with the Python extension, or
> build statically. I understand that things are different in the Unix
> world, but to be blunt why should Windows users care?
>
> > Outside the scientific space, crypto libraries are also notoriously hard
> > to build, as are game engines and GUI toolkits. (I guess database
> > bindings could also be a problem in some cases)
>
> Build issues again...
>
> > We have the option to leave handling the arbitrary binary dependency
> > problem to platforms, and I think we should take it.
>
> Again, can we please be clear here? On Windows, there is no issue that
> I am aware of. Wheels solve the binary distribution issue fine in that
> environment (I know this is true, I've been using wheels for months
> now - sure there may be specialist areas that need some further work
> because they haven't had as much use yet, but that's details)
>
> > This is why I suspect there will be a better near term effort/reward
> > trade-off in helping the conda folks improve the usability of their
> > platform than there is in trying to expand the wheel format to cover
> > arbitrary binary dependencies.
>
> Excuse me if I'm feeling a bit negative towards this announcement.
> I've spent many months working on, and promoting, the wheel + pip
> solution, to the point where it is now part of Python 3.4. And now
> you're saying that you expect us to abandon that effort and work on
> conda instead? I never saw wheel as a pure-Python solution, installs
> from source were fine for me in that area. The only reason I worked so
> hard on wheel was to solve the Windows binary distribution issue. If
> the new message is that people should not distribute wheels for (for
> example) lxml, pyyaml, pymzq, numpy, scipy, pandas, gmpy, and pyside
> (to name a few that I use in wheel format relatively often) then
> effectively the work I've put in has been wasted.
>

Hi, scipy developer here. In the scientific python community people are
definitely interested in and intending to standardize on wheels. Your work
on wheel + pip is much appreciated.

The problems above that you say are "build issues" aren't really build
issues (where build means what distutils/bento do to build a package).
Maybe the following concepts, shamelessly stolen from the thread linked
below, help:
- *build systems* handle the actual building of software, eg Make, CMake,
distutils, Bento, autotools, etc
- *package managers* handle the distribution and installation of built (or
source) software, eg pip, apt, brew, ports
- *build managers* are separate from the above and handle the automatic(?)
preparation of packages from the results of build systems

Conda is a package manager to the best of my understanding, but because it
controls the whole stack it can also already do parts of the job of a build
manager. This is not something that pip aims to do. Conda is fairly new and
not well understood in our community either, but maybe this (long) thread
helps:
https://groups.google.com/forum/#!searchin/numfocus/build$20managers/numfocus/mVNakFqfpZg/6h_SldGNM-EJ.


Regards,
Ralf


> I'm hoping I've misunderstood here. Please clarify. Preferably with
> specifics for Windows (as "conda is a known stable platform" simply
> isn't true for me...) - I accept you're not a Windows user, so a
> pointer to already-existing documentation is fine (I couldn't find any
> myself).
>
> Paul.
> ___
> Distutils-SIG maillist  -  Distutils-SIG@python.org
> https://mail.python.org/mailman/listinfo/distutils-sig
>
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] easy_install failed to install numpy

2013-05-19 Thread Ralf Gommers
On Sun, May 19, 2013 at 10:12 AM, Huiqun Zhou  wrote:

> Hi,
>
> I'm trying to install numpy, but got the following error message.  What's
> wrong?
>

See https://github.com/numpy/numpy/issues/3160.

Numpy issues are better asked about on the numpy list by the way.

Cheers,
Ralf
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
http://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Distribute install problem on Snow Leopard

2009-10-06 Thread Ralf Gommers
On Tue, Oct 6, 2009 at 5:00 PM, Tarek Ziadé  wrote:

> On Tue, Oct 6, 2009 at 4:55 PM, Ralf Gommers
>  wrote:
> >
> > Yes, that fixed it. Thanks!
>
> Great. The bug is still to be fixed though, would you mind adding an
> issue following this link:
>
> http://bitbucket.org/tarek/distribute/issues/new/
>
> Giving the details.
>
> Thanks
>

Reported as issue #59.
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
http://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Distribute install problem on Snow Leopard

2009-10-06 Thread Ralf Gommers
On Tue, Oct 6, 2009 at 4:43 PM, Tarek Ziadé  wrote:

> On Tue, Oct 6, 2009 at 4:37 PM, Ralf Gommers
>  wrote:
> > Hi,
> >
> > I installed Distribute on a Snow Leopard box using the default Python
> > that comes with OS X. I used the installation method recommended in the
> > docs, the distribute_setup.py script. Now easy_install fails no matter
> > what package I try to install (it does not get around to looking for the
> > name of the requested package):
> >
> > $ easy_install blablabla
> > Traceback (most recent call last):
> >   File "/usr/bin/easy_install-2.6", line 10, in <module>
> >     load_entry_point('setuptools==0.6c9', 'console_scripts', 'easy_install')()
> >   File "/Library/Python/2.6/site-packages/distribute-0.6.3-py2.6.egg/pkg_resources.py", line 281, in load_entry_point
> >     return get_distribution(dist).load_entry_point(group, name)
> >   File "/Library/Python/2.6/site-packages/distribute-0.6.3-py2.6.egg/pkg_resources.py", line 2197, in load_entry_point
> >     raise ImportError("Entry point %r not found" % ((group,name),))
> > ImportError: Entry point ('console_scripts', 'easy_install') not found
> >
> >
> > Did I forget something, or is this a bug? Any suggestions on how to fix
> > it?
>
> This is a bug. It seems that the installation did not upgrade the
> "easy_install-2.6" script located
> in your "/usr/bin". We are going to investigate.
>
> The simplest way to fix this is to change the name of the distribution in
> that script:
>
> > load_entry_point('setuptools==0.6c9', 'console_scripts',
> > 'easy_install')()
>
> by
>
> > load_entry_point('distribute==0.6.3', 'console_scripts',
> > 'easy_install')()
>
> Let us know how it goes
>

Yes, that fixed it. Thanks!
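
For anyone hitting the same thing: after the edit, the top of
/usr/bin/easy_install-2.6 looks roughly like the sketch below. The exact
shebang, header comment and line layout vary per installation, so treat
this as an approximation rather than the literal file contents, with the
one-line change applied:

    #!/usr/bin/python2.6
    # EASY-INSTALL-ENTRY-SCRIPT: 'distribute==0.6.3','console_scripts','easy_install'
    __requires__ = 'distribute==0.6.3'
    import sys
    from pkg_resources import load_entry_point

    sys.exit(
        load_entry_point('distribute==0.6.3', 'console_scripts', 'easy_install')()
    )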

Ralf



>
> Tarek
>
> --
> Tarek Ziadé | http://ziade.org | オープンソースはすごい! | 开源传万世,因有你参与
>
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
http://mail.python.org/mailman/listinfo/distutils-sig


[Distutils] Distribute install problem on Snow Leopard

2009-10-06 Thread Ralf Gommers
Hi,

I installed Distribute on a Snow Leopard box using the default Python that
comes with OS X. I used the installation method recommended in the docs, the
distribute_setup.py script. Now easy_install fails no matter what package I
try to install (it does not get around to looking for the name of the
requested package):

$ easy_install blablabla
Traceback (most recent call last):
  File "/usr/bin/easy_install-2.6", line 10, in <module>
    load_entry_point('setuptools==0.6c9', 'console_scripts', 'easy_install')()
  File "/Library/Python/2.6/site-packages/distribute-0.6.3-py2.6.egg/pkg_resources.py", line 281, in load_entry_point
    return get_distribution(dist).load_entry_point(group, name)
  File "/Library/Python/2.6/site-packages/distribute-0.6.3-py2.6.egg/pkg_resources.py", line 2197, in load_entry_point
    raise ImportError("Entry point %r not found" % ((group,name),))
ImportError: Entry point ('console_scripts', 'easy_install') not found


Did I forget something, or is this a bug? Any suggestions on how to fix it?

Cheers,
Ralf
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
http://mail.python.org/mailman/listinfo/distutils-sig