Re: [Distutils] Handling the binary dependency management problem

2013-12-05 Thread Nick Coghlan
On 5 December 2013 17:35, Ralf Gommers ralf.gomm...@gmail.com wrote:

 Namespace packages have been tried with scikits - there's a reason why
 scikit-learn and statsmodels spent a lot of effort dropping them. They don't
 work. Scipy, while monolithic, works for users.

The namespace package emulation that was all that was available in
versions prior to 3.3 can certainly be a bit fragile at times. The
native namespace packages in 3.3+ should be more robust (although even
one package erroneously including an __init__.py file can still cause
trouble).
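
For anyone not familiar with the PEP 420 behaviour, a minimal sketch (made-up
names, purely illustrative) looks like:

    path-a/mynamespace/foo.py    # no __init__.py anywhere under mynamespace/
    path-b/mynamespace/bar.py

With both parent directories on sys.path, "import mynamespace.foo" and
"import mynamespace.bar" both work; a stray __init__.py in either copy of
mynamespace/ turns that portion back into a regular package and hides the
other one.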

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia


Re: [Distutils] Handling the binary dependency management problem

2013-12-05 Thread Nick Coghlan
On 5 December 2013 19:40, Paul Moore p.f.mo...@gmail.com wrote:
 On 4 December 2013 23:31, Nick Coghlan ncogh...@gmail.com wrote:
 Hmm, rather than adding complexity most folks don't need directly to the
 base wheel spec, here's a possible multiwheel notion - embed multiple
 wheels with different names inside the multiwheel, along with a
 self-contained selector function for choosing which ones to actually install
 on the current system.

 That sounds like a reasonable approach. I'd be willing to try to put
 together a proof of concept implementation, if people think it's
 viable. What would we need to push this forward? A new PEP?

 This could be used not only for the NumPy use case, but also allow the
 distribution of external dependencies while allowing their installation to
 be skipped if they're already present on the target system.

 I'm not sure how this would work - wheels don't seem to me to be
 appropriate for installing external dependencies, but as I'm not
 100% clear on what you mean by that term I may be misunderstanding.
 Can you provide a concrete example?

If you put stuff in the data scheme dir, it allows you to install
files anywhere you like relative to the installation root. That means
you can already use the wheel format to distribute arbitrary files;
you may just have to build the wheel via some mechanism other than
bdist_wheel.
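
For example (hypothetical file names), a wheel containing

    mypkg-1.0.data/data/etc/mypkg/plugins.cfg

has that file unpacked relative to the installation root, i.e. it ends up as
<prefix or virtualenv>/etc/mypkg/plugins.cfg, with no Python code in the
wheel at all.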

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia


Re: [Distutils] Binary dependency management, round 2 :)

2013-12-05 Thread Nick Coghlan
On 4 December 2013 23:25, Daniel Holth dho...@gmail.com wrote:
 On Wed, Dec 4, 2013 at 6:10 AM, Nick Coghlan ncogh...@gmail.com wrote:
 == Regarding custom installation directories ==

 This technically came up in the cobblerd thread (regarding installing
 scripts to /usr/sbin instead of /usr/bin), but I believe it may also
 be relevant to the problem of shipping external libraries inside
 wheels, static data files for applications, etc.

 It's a little underspecified in PEP 427, but the way the wheel format
 currently handles installation to paths other than purelib and platlib
 (or to install to both of those as part of the same wheel) is to use
 the sysconfig scheme names as subdirectories within the wheel's .data
 directory. This approach is great for making it easy to build
 well-behaved cross-platform wheels that play nice with virtual
 environments, but allowing a "just put it here" escape clause could
 potentially be a useful approach for platform-specific wheels
 (especially on *nix systems that use the Filesystem Hierarchy
 Standard).

 I've posted this idea to the metadata format issue tracker:
 https://bitbucket.org/pypa/pypi-metadata-formats/issue/13/add-a-new-subdirectory-to-allow-wheels-to

 Note the 'data' sysconfig directory is already just '/' or the root of
 the virtualenv.

Ah, nice - I didn't grasp that from the sysconfig docs, and it's
definitely not covered in PEP 427 (it currently glosses over the
install scheme directories without even a reference to sysconfig).
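
A quick way to poke at this on any interpreter (illustrative snippet only):

    import sysconfig
    # The install path names sysconfig knows about; a wheel's .data/
    # subdirectories are named after the same kind of keys
    # ('purelib', 'platlib', 'scripts', 'data', ...)
    print(sysconfig.get_path_names())
    print(sysconfig.get_path('data'))   # typically sys.prefix, i.e. the (virtual) env root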

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia


Re: [Distutils] Handling the binary dependency management problem

2013-12-05 Thread Paul Moore
On 5 December 2013 09:52, Nick Coghlan ncogh...@gmail.com wrote:
 I'm not sure how this would work - wheels don't seem to me to be
 appropriate for installing external dependencies, but as I'm not
 100% clear on what you mean by that term I may be misunderstanding.
 Can you provide a concrete example?

 If you put stuff in the data scheme dir, it allows you to install
 files anywhere you like relative to the installation root. That means
 you can already use the wheel format to distribute arbitrary files,
 you may just have to build it via some mechanism other than
 bdist_wheel.

Ah, OK. I see.

Paul


[Distutils] Please accept Python 3.4 selector

2013-12-05 Thread Jesus Cea
While publishing a new release of the Berkeley DB bindings, I found that the
'Programming Language :: Python :: 3.4' selector is rejected as invalid.

-- 
Jesús Cea Avión _/_/  _/_/_/_/_/_/
j...@jcea.es - http://www.jcea.es/ _/_/_/_/  _/_/_/_/  _/_/
Twitter: @jcea_/_/_/_/  _/_/_/_/_/
jabber / xmpp:j...@jabber.org  _/_/  _/_/_/_/  _/_/  _/_/
Things are not so easy  _/_/  _/_/_/_/  _/_/_/_/  _/_/
My name is Dump, Core Dump   _/_/_/_/_/_/  _/_/  _/_/
El amor es poner tu felicidad en la felicidad de otro - Leibniz


Re: [Distutils] Handling the binary dependency management problem

2013-12-05 Thread Chris Barker - NOAA Federal
On Dec 5, 2013, at 1:40 AM, Paul Moore p.f.mo...@gmail.com wrote:


 I'm not sure how this would work - wheels don't seem to me to be
 appropriate for installing external dependencies, but as I'm not
 100% clear on what you mean by that term

One of the key features of conda is that it is not specifically tied
to Python -- it can manage any binary package for a system. This is a
key reason for its existence: Continuum wants to support its users
with one way to install all the stuff they need to do their work, with
one cross-platform solution. This includes not just libraries that
Python extensions require, but also non-Python stuff like Fortran
compilers, other languages (like R), or who knows what?

As wheels and conda packages are both just archives, there's no reason
wheel couldn't grow that capability -- but I'm not at all sure we want
it to.

-Chris


Re: [Distutils] Handling the binary dependency management problem

2013-12-05 Thread Chris Barker - NOAA Federal
On Dec 4, 2013, at 11:35 PM, Ralf Gommers ralf.gomm...@gmail.com wrote:



I'm just wondering how much we are making this hard for very little return.

I also don't know.


I wonder if a poll on the relevant lists would be helpful...


 I'll start playing with wheels in the near future.


Great! Thanks!

There are multiple ways to get a win64 install - Anaconda, EPD, WinPython,
Christoph's installers. So there's no big hurry here.


Well, this discussion is about pip-installability, but yes, some of those
are python.org-compatible: I know I always point people to Christoph's repo.



 [Side note: scipy really shouldn't be a monolithic package with everything
 and the kitchen sink in it -- this would all be a lot easier if it was a
 namespace package and people could get the non-Fortran stuff by
 itself...but I digress.]


Namespace packages have been tried with scikits - there's a reason why
scikit-learn and statsmodels spent a lot of effort dropping them. They
don't work. Scipy, while monolithic, works for users.


True--I've been trying out namespace packages for some far easier problems,
and you're right--not a robust solution.

That really should be fixed--but a whole new topic!




 Note on OS-X :  how long has it been since Apple shipped a 32 bit machine?
 Can we dump default 32 bit support? I'm pretty sure we don't need to do PPC
 anymore...


 I'd like to, but we decided to ship the exact same set of binaries as
 python.org - which means compiling on OS X 10.5/10.6 and including PPC +
 32-bit Intel.


 No it doesn't -- if we decide not to ship the 3.9, PPC + 32-bit Intel
 binary, why should that mean that we can't ship the Intel 32+64-bit one?


But we do ship the 32+64-bit one (at least for Python 2.7 and 3.3). So
there shouldn't be any issue here.


Right -- we just need the wheel, which should be trivial for numpy on OS X --
it doesn't have the same SSE issues.

Thanks for working on this.

- Chris


Re: [Distutils] Install a script to prefix/sbin instead of prefix/bin

2013-12-05 Thread Michael Jansen
On Tuesday, December 03, 2013 12:33:22 PM Michael Jansen wrote:
  Changes to distutils itself are fairly pointless, since the earliest
  possible date for publication of any such changes is now as part of
  Python 3.5 in 2015. The sheer impracticality of that approach when
  plenty of people are still running Python 2.6 is why we're going
  through the somewhat painful exercise of decoupling the preferred
  build system from the standard library :)
  
  So, at this point, the appropriate place to make such a change is in
  setuptools: https://bitbucket.org/pypa/setuptools
  
  That will allow it to be made in a way that supports Python 2.6+,
  whereas a distutils change won't really help anyone.
 
 A point well made :) . Will do that.

I made a proof of concept implementation (not ready to be merged) here

https://bitbucket.org/jansenm/setuptools/commits/all

I did fight a bit with Mercurial (we had severe philosophical disagreements),
so I hope it works.

There is a new test that checks that everything works with the distutils
syntax, and then one that tests the improvements. I implemented the flexible
idea.

scripts = [
    'bin/bin_script',
    'sbin/sbin_script',
    ['sbin', ['sbin/sbin_script']],
    ['bin', ['bin/bin_script2']],
    ['srv/www/myprefix', ['bin/mymod.wsgi']],
]
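
(Plain string entries keep the stock distutils behaviour and end up in the
default scripts directory, while the [target_dir, [script, ...]] pairs install
the listed scripts relative to the given target directory -- the build tree
further down shows where each entry lands.)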

The test is run with

$ python setup.py test --test-module setuptools.tests.test_install_scripts; 

and currently prints out the directory it used for testing and does not
remove it, so it's possible to inspect the result.

test_distutils_compatibility (...) ... ## /tmp/tmpPMKJNk

Btw, I noticed that numpy overrides build_scripts itself to handle functions
instead of strings. Not sure if my patch will break that.

Open:

Where to put the scripts in the build_scripts step? For now it does this,
for maximum backwards compatibility:

/tmp/tmp5ov6qb/src/build/scripts-2.7
/tmp/tmp5ov6qb/src/build/scripts-2.7/bin_script
/tmp/tmp5ov6qb/src/build/scripts-2.7/sbin_script
/tmp/tmp5ov6qb/src/build/setuptools-scripts-2.7
/tmp/tmp5ov6qb/src/build/setuptools-scripts-2.7/sbin
/tmp/tmp5ov6qb/src/build/setuptools-scripts-2.7/sbin/sbin_script
/tmp/tmp5ov6qb/src/build/setuptools-scripts-2.7/bin
/tmp/tmp5ov6qb/src/build/setuptools-scripts-2.7/bin/bin_script2
/tmp/tmp5ov6qb/src/build/setuptools-scripts-2.7/srv
/tmp/tmp5ov6qb/src/build/setuptools-scripts-2.7/srv/www
/tmp/tmp5ov6qb/src/build/setuptools-scripts-2.7/srv/www/myprefix
/tmp/tmp5ov6qb/src/build/setuptools-scripts-2.7/srv/www/myprefix/mymod.wsgi

I had to use the install_data directory (with copy_tree in install_scripts)
to install the scripts into, because my preferred solution, install_base, is
not re-rooted by distutils when --root is given to setup.py.

There is currently no build-setuptools-scripts parameter for python setup.py
build; we would have to add that. Same for build_scripts -- it would be
build-setuptools-dir there.

Documentation is missing too. As I said, this is only a proof of concept for
now.

Mike

-- 
Michael Jansen
http://michael-jansen.biz


[Distutils] Buildout - redo_pyc function too slow

2013-12-05 Thread Kamal Mustafa
Installing a large package such as Django on an EC2 micro instance took a
very long time: 8-9 minutes with 99% CPU usage. Initially I thought it was
caused by setuptools analyzing the package to figure out whether it is
zip_safe or not [1]. But after looking at this more closely, that's not the
case. Analyzing the egg only took a few seconds, which is negligible compared
to the total time it took to install the whole package. I also tested by
adding zip_safe=False to Django's setup.py and didn't see any drastic
improvement in the time taken to install it.

I tested by using easy_install directly and it took around 1-2 minutes to
finish, so it means the other 8 minutes are being spent in buildout itself
rather than in setuptools/easy_install. The install process basically went
like this:

...
Writing /tmp/easy_install-GwjQPW/django-master/setup.cfg
Running django-master/setup.py -q bdist_egg --dist-dir
/tmp/easy_install-GwjQPW/django-master/egg-dist-tmp-Yk_MYR
warning: no previously-included files matching '__pycache__' found
under directory '*'
warning: no previously-included files matching '*.py[co]' found under
directory '*'
...
...
LONG GAP HERE ...
Got Django 1.7.
Picked: Django = 1.7
Generated script '/home/kamal/test_buildout/bin/django-admin.py'.
Generated interpreter '/home/kamal/test_buildout/bin/python'.

Stepping through the code, I figured out the LONG GAP starts after:

dists = self._call_easy_install(
dist.location, ws, self._dest, dist)

at line 531 of zc/buildout/easy_install.py. Right after this line is:

for dist in dists:
    redo_pyc(dist.location)

Commenting out this function call, I managed to cut down the installation
time to 2m30s. So what is the purpose of this function? Skipping it seems to
be fine; I can import the package without any error. My buildout.cfg:

[buildout]
parts = base

[base]
recipe = zc.recipe.egg
eggs =
Django
interpreter = python

The only reference to redo_pyc I found is
http://www.makina-corpus.org/blog/minitage-10-out which just says that
redo_pyc is somewhat slow.

[1]: https://github.com/buildout/buildout/issues/116


Re: [Distutils] Handling the binary dependency management problem

2013-12-05 Thread Oscar Benjamin
On 4 December 2013 20:56, Ralf Gommers ralf.gomm...@gmail.com wrote:
 On Wed, Dec 4, 2013 at 5:05 PM, Chris Barker - NOAA Federal
 chris.bar...@noaa.gov wrote:

 So a lowest common denominator wheel would be very, very, useful.

 As for what that would be: the superpack is great, but it's been around a
 while (long while in computer years)

 How many non-sse machines are there still out there? How many non-sse2?

 Hard to tell. Probably 2%, but that's still too much. Some older Athlon XPs
 don't have it for example. And what if someone submits performance
 optimizations (there has been a focus on those recently) to numpy that use
 SSE4 or AVX for example? You don't want to reject those based on the
 limitations of your distribution process.

 And how big is the performance boost anyway?

 Large. For a long time we've put a non-SSE installer for numpy on pypi so
 that people would stop complaining that ``easy_install numpy`` didn't work.
 Then there were regular complaints about dot products being an order of
 magnitude slower than Matlab or R.

Yes, I wouldn't want that kind of bad PR getting around about
scientific Python ("Python is slower than Matlab" etc.).

It seems as if there is a need to extend the pip+wheel+PyPI system
before this can fully work for numpy. I'm sure that the people here
who have been working on all of this would be very interested to know
what kinds of solutions would work best for numpy and related
packages.

You mentioned in another message that a post-install script seems best
to you. I suspect there is a little reluctance to go this way because
one of the goals of the wheel system is to reduce the situation where
users execute arbitrary code from the internet with admin privileges,
e.g. ``sudo pip install X`` will download and run the setup.py from X
with root privileges. Part of the point about wheels is that they
don't need to be executed for installation. I know that post-install
scripts are common in .deb and .rpm packages but I think that the use
case there is slightly different as the files are downloaded from
controlled repositories whereas PyPI has no quality assurance.

BTW, how do the distros handle e.g. SSE? My understanding is that they
just strip out all the SSE and related non-portable extensions and
ship generic i686 binaries. My experience is with Ubuntu and I know
they're not very good at handling BLAS with numpy, and they don't seem
to be able to compile fftpack as well as Christoph can.

Perhaps a good near-term plan might be to:
1) Add the bdist_wheel command to numpy - which may actually be almost
automatic with new enough setuptools/pip and wheel installed.
2) Upload wheels for OS X to PyPI - for OS X, SSE support can be inferred
from the OS version, which wheels can currently handle.
3) Upload wheels for Windows to somewhere other than PyPI (e.g.
SourceForge) pending a distribution solution that can detect SSE
support on Windows.

I think it would be good to have a go at wheels even if they're not
fully ready for PyPI (just in case some other issue surfaces in the
process).
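
For anyone who wants to try step 1 locally, the basic recipe is just
(assuming a recent setuptools/pip and the wheel project are installed; the
exact steps numpy ends up needing may differ):

$ pip install wheel
$ python setup.py bdist_wheel      # writes a .whl into dist/
$ pip install dist/numpy-*.whl     # quick local sanity check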


Oscar


Re: [Distutils] Binary dependency management, round 2 :)

2013-12-05 Thread Oscar Benjamin
On 5 December 2013 00:06, Marcus Smith qwc...@gmail.com wrote:

  but Anaconda does a nifty thing: it makes a conda package that holds
 the shared lib, then other packages that depend on it depend on that
 package, so it will both get auto-installed

 But I don't see why you couldn't do that with wheels.

 exactly, that's what I'm really proposing/asking: that maybe wheels
 should formally go in that direction,
 i.e. not just packaging Python projects, but packaging non-Python
 dependencies that Python projects need (but have those dependencies be
 optional, for those who want to fulfill those deps using ...)

I don't think it matters whether anyone formally goes in that
direction. If it's possible then it will happen for some things sooner
or later. I hope it does happen too, for things like build tools,
BLAS/LAPACK libraries etc. Virtualenv+pip could become a much more
convenient way to set up a software configuration than currently
exists on Windows and OSX (and on Linux distros if you're not looking
to mess with the system install).


Oscar


Re: [Distutils] Handling the binary dependency management problem

2013-12-05 Thread Ralf Gommers
On Thu, Dec 5, 2013 at 10:12 PM, Oscar Benjamin
oscar.j.benja...@gmail.com wrote:

 On 4 December 2013 20:56, Ralf Gommers ralf.gomm...@gmail.com wrote:
  On Wed, Dec 4, 2013 at 5:05 PM, Chris Barker - NOAA Federal
  chris.bar...@noaa.gov wrote:
 
  So a lowest common denominator wheel would be very, very, useful.
 
  As for what that would be: the superpack is great, but it's been around
  a while (long while in computer years)
 
  How many non-sse machines are there still out there? How many non-sse2?
 
  Hard to tell. Probably 2%, but that's still too much. Some older Athlon
  XPs don't have it for example. And what if someone submits performance
  optimizations (there has been a focus on those recently) to numpy that
  use SSE4 or AVX for example? You don't want to reject those based on the
  limitations of your distribution process.
 
  And how big is the performance boost anyway?
 
  Large. For a long time we've put a non-SSE installer for numpy on pypi so
  that people would stop complaining that ``easy_install numpy`` didn't
  work. Then there were regular complaints about dot products being an
  order of magnitude slower than Matlab or R.

 Yes, I wouldn't want that kind of bad PR getting around about
 scientific Python Python is slower than Matlab etc.

 It seems as if there is a need to extend the pip+wheel+PyPI system
 before this can fully work for numpy. I'm sure that the people here
 who have been working on all of this would be very interested to know
 what kinds of solutions would work best for numpy and related
 packages.

 You mentioned in another message that a post-install script seems best
 to you. I suspect there is a little reluctance to go this way because
 one of the goals of the wheel system is to reduce the situation where
 users execute arbitrary code from the internet with admin privileges
 e.g. sudo pip install X will download and run the setup.py from X
 with root privileges. Part of the point about wheels is that they
 don't need to be executed for installation. I know that post-install
 scripts are common in .deb and .rpm packages but I think that the use
 case there is slightly different as the files are downloaded from
 controlled repositories whereas PyPI has no quality assurance.


I don't think it's avoidable - anything that is transparent to the user
will have to execute code. Nick's multiwheel idea looks good to me.


 BTW, how do the distros handle e.g. SSE?


I don't know exactly to be honest.


 My understanding is that they
 just strip out all the SSE and related non-portable extensions and
 ship generic 686 binaries. My experience is with Ubuntu and I know
 they're not very good at handling BLAS with numpy and they don't seem
 to be able to compile fftpack as well as Cristoph can.

 Perhaps a good near-term plan might be to
 1) Add the bdist_wheel command to numpy - which may actually be almost
 automatic with new enough setuptools/pip and wheel installed.
 2) Upload wheels for OSX to PyPI - for OSX SSE support can be inferred
 from OS version which wheels can currently handle.
 3) Upload wheels for Windows to somewhere other than PyPI e.g.
 SourceForge pending a distribution solution that can detect SSE
 support on Windows.


That's a reasonable plan. I have an OS X wheel already, which required only
a minor change to numpy's setup.py.


 I think it would be good to have a go at wheels even if they're not
 fully ready for PyPI (just in case some other issue surfaces in the
 process).


Agreed.

Ralf


Re: [Distutils] Please accept Python 3.4 selector

2013-12-05 Thread Richard Jones
On 5 December 2013 23:22, Jesus Cea j...@jcea.es wrote:

 Programming Language :: Python :: 3.4


Added!


Re: [Distutils] Handling the binary dependency management problem

2013-12-05 Thread Chris Barker - NOAA Federal
On Dec 5, 2013, at 1:12 PM, Oscar Benjamin oscar.j.benja...@gmail.com wrote:


 Yes, I wouldn't want that kind of bad PR getting around about
 scientific Python Python is slower than Matlab etc.

Well, is that better or worse than 2% or fewer people finding they
can't run it on their old machines?

 It seems as if there is a need to extend the pip+wheel+PyPI system
 before this can fully work for numpy.

Maybe, in this case, but with the whole fortran ABI thing, yes.

 You mentioned in another message that a post-install script seems best
 to you.

What would really be best is run-time selection of the appropriate lib
-- it would solve this problem, and allow users to re-distribute
working binaries via py2exe, etc. And not require opening a security
hole in wheels...

Not sure how hard that would be to do, though.

 3) Upload wheels for Windows to somewhere other than PyPI e.g.
 SourceForge pending a distribution solution that can detect SSE
 support on Windows.

The hard-core "I want to use Python instead of Matlab" users are being
redirected to Anaconda or Canopy anyway. So maybe sub-optimal
binaries on PyPI are OK.

By the way, anyone know what Anaconda and Canopy do about SSE and a good BLAS?


 I think it would be good to have a go at wheels even if they're not
 fully ready for PyPI (just in case some other issue surfaces in the
 process).

Absolutely!

- Chris


Re: [Distutils] Handling the binary dependency management problem

2013-12-05 Thread Donald Stufft

On Dec 5, 2013, at 8:48 PM, Chris Barker - NOAA Federal chris.bar...@noaa.gov 
wrote:

 What would really be best is run-time selection of the appropriate lib
 -- it would solve this problem, and allow users to re-distribute
 working binaries via py2exe, etc. And not require opening a security
 hole in wheels...
 
 Not sure how hard that would be to do, though.

Install-time selectors probably aren't a huge deal as long as there's a way
to force a particular variant to install and to disable the executing code.

-
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA





Re: [Distutils] Handling the binary dependency management problem

2013-12-05 Thread Chris Barker
On Thu, Dec 5, 2013 at 5:52 PM, Donald Stufft don...@stufft.io wrote:


 On Dec 5, 2013, at 8:48 PM, Chris Barker - NOAA Federal 
 chris.bar...@noaa.gov wrote:

  What would really be best is run-time selection of the appropriate lib
  -- it would solve this problem, and allow users to re-distribute
  working binaries via py2exe, etc. And not require opening a security
  hole in wheels...
 
  Not sure how hard that would be to do, though.

 Install time selectors probably isn’t a huge deal as long as there’s a way
 to force a particular variant to install and to disable the executing code.


I was proposing run-time -- so the same package would work right when
moved to another machine via py2exe, etc. I imagine that's harder,
particularly with permissions issues...

-Chris







 -
 Donald Stufft
 PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372
 DCFA




-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/ORR(206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

chris.bar...@noaa.gov


Re: [Distutils] Handling the binary dependency management problem

2013-12-05 Thread Nick Coghlan
On 6 December 2013 11:52, Donald Stufft don...@stufft.io wrote:

 On Dec 5, 2013, at 8:48 PM, Chris Barker - NOAA Federal 
 chris.bar...@noaa.gov wrote:

 What would really be best is run-time selection of the appropriate lib
 -- it would solve this problem, and allow users to re-distribute
 working binaries via py2exe, etc. And not require opening a security
 hole in wheels...

 Not sure how hard that would be to do, though.

 Install time selectors probably isn’t a huge deal as long as there’s a way
 to force a particular variant to install and to disable the executing code.

Hmm, I just had an idea for how to do the runtime selection thing. It
actually shouldn't be that hard, so long as the numpy folks are OK
with a bit of __path__ manipulation in package __init__ modules.

Specifically, what could be done is this:

- all of the built SSE-level-dependent modules would move out of their
current package directories into a suitably named subdirectory (say
_nosse, _sse2, _sse3)
- in the __init__.py file for each affected subpackage, you would have
a snippet like:

numpy._add_sse_subdir(__path__)

where _add_sse_subdir would be something like:

def _add_sse_subdir(search_path):
    if len(search_path) > 1:
        return  # Assume the SSE-dependent dir has already been added
    # Could likely do this SSE availability check once at import time
    if _have_sse3():
        sub_dir = "_sse3"
    elif _have_sse2():
        sub_dir = "_sse2"
    else:
        sub_dir = "_nosse"
    main_dir = search_path[0]
    search_path.append(os.path.join(main_dir, sub_dir))

With that approach, the existing wheel model would work (no need for a
variant system), and numpy installations could be freely moved between
machines (or shared via a network directory).

To avoid having the implicit namespace packages in 3.3+ cause any
problems with this approach, the SSE subdirectories should contain
__init__.py files that explicitly raise ImportError.
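
Concretely, a built subpackage would then be laid out something like this
(file names are illustrative only):

    numpy/linalg/__init__.py             # calls numpy._add_sse_subdir(__path__)
    numpy/linalg/_nosse/__init__.py      # raises ImportError (as above); same in _sse2/_sse3
    numpy/linalg/_nosse/lapack_lite.pyd
    numpy/linalg/_sse2/lapack_lite.pyd
    numpy/linalg/_sse3/lapack_lite.pyd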

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia


Re: [Distutils] Handling the binary dependency management problem

2013-12-05 Thread Thomas Heller

On 06.12.2013 06:47, Nick Coghlan wrote:

On 6 December 2013 11:52, Donald Stufft don...@stufft.io wrote:


On Dec 5, 2013, at 8:48 PM, Chris Barker - NOAA Federal chris.bar...@noaa.gov 
wrote:


What would really be best is run-time selection of the appropriate lib
-- it would solve this problem, and allow users to re-distribute
working binaries via py2exe, etc. And not require opening a security
hole in wheels...

Not sure how hard that would be to do, though.


Install time selectors probably isn’t a huge deal as long as there’s a way
to force a particular variant to install and to disable the executing code.


Hmm, I just had an idea for how to do the runtime selection thing. It
actually shouldn't be that hard, so long as the numpy folks are OK
with a bit of __path__ manipulation in package __init__ modules.


Manipulation of __path__ at runtime usually makes it harder for
modulefinder to find all the required modules.

Thomas


Re: [Distutils] Handling the binary dependency management problem

2013-12-05 Thread Ralf Gommers
On Fri, Dec 6, 2013 at 6:47 AM, Nick Coghlan ncogh...@gmail.com wrote:

 On 6 December 2013 11:52, Donald Stufft don...@stufft.io wrote:
 
  On Dec 5, 2013, at 8:48 PM, Chris Barker - NOAA Federal 
 chris.bar...@noaa.gov wrote:
 
  What would really be best is run-time selection of the appropriate lib
  -- it would solve this problem, and allow users to re-distribute
  working binaries via py2exe, etc. And not require opening a security
  hole in wheels...
 
  Not sure how hard that would be to do, though.
 
  Install time selectors probably isn’t a huge deal as long as there’s a
 way
  to force a particular variant to install and to disable the executing
 code.

 Hmm, I just had an idea for how to do the runtime selection thing. It
 actually shouldn't be that hard, so long as the numpy folks are OK
 with a bit of __path__ manipulation in package __init__ modules.

 Specifically, what could be done is this:

 - all of the built SSE level dependent modules would move out of their
 current package directories into a suitable named subdirectory (say
 _nosse, _sse2, _sse3)
 - in the __init__.py file for each affected subpackage, you would have
 a snippet like:

 numpy._add_sse_subdir(__path__)

 where _add_sse_subdir would be something like:

 def _add_sse_subdir(search_path):
     if len(search_path) > 1:
         return  # Assume the SSE-dependent dir has already been added
     # Could likely do this SSE availability check once at import time
     if _have_sse3():
         sub_dir = "_sse3"
     elif _have_sse2():
         sub_dir = "_sse2"
     else:
         sub_dir = "_nosse"
     main_dir = search_path[0]
     search_path.append(os.path.join(main_dir, sub_dir))

 With that approach, the existing wheel model would work (no need for a
 variant system), and numpy installations could be freely moved between
 machines (or shared via a network directory).


Hmm, taking a compile flag and encoding it in the package layout seems like
a fundamentally wrong approach. And in order to not litter the source tree
and all installs with lots of empty dirs, the changes to __init__.py will
have to be made at build time based on whether you're building Windows
binaries or something else. Path manipulation is usually fragile as well.
So I suspect this is not going to fly.

Ralf



 To avoid having the implicit namespace packages in 3.3+ cause any
 problems with this approach, the SSE subdirectories should contain
 __init__.py files that explicitly raise ImportError.

 Cheers,
 Nick.

 --
 Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
