Re: [Distutils] Handling the binary dependency management problem

2013-12-02 Thread Paul Moore
On 2 December 2013 07:31, Nick Coghlan ncogh...@gmail.com wrote:
 The only problem I want to take off the table is the one where
 multiple wheel files try to share a dynamically linked external binary
 dependency.

OK. Thanks for the clarification.

Can I suggest that we need to be very careful how any recommendation
in this area is stated? I certainly didn't get that impression from
your initial posting, and from the other responses it doesn't look
like I was the only one.

We're only just starting to get real credibility for wheel as a
distribution format, and we need to get a very strong message out that
wheel is the future, and people should be distributing wheels as their
primary binary format. My personal litmus test is the scientific
community - when Christoph Gohlke is distributing his (Windows) binary
builds as wheels, and projects like numpy, ipython, scipy etc are
distributing wheels on PyPI, rather than bdist_wininst, I'll feel like
we have got to the point where wheels are the norm. The problem is,
of course, that with conda being a scientific distribution at heart,
any message we issue that promotes conda in any context will risk
confusion in that community.

My personal interest is as a non-scientific user who does a lot of
data analysis, and finds IPython, Pandas, matplotlib, numpy, etc. useful.
At the moment I can pip install the tools I need (with a quick wheel
convert from wininst format). I don't want to find that in the future
I can't do that, but instead have to build from source or learn a new
tool (conda).

Paul
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Handling the binary dependency management problem

2013-12-02 Thread Oscar Benjamin
On 2 December 2013 09:19, Paul Moore p.f.mo...@gmail.com wrote:
 On 2 December 2013 07:31, Nick Coghlan ncogh...@gmail.com wrote:
  The only problem I want to take off the table is the one where
  multiple wheel files try to share a dynamically linked external binary
  dependency.

 OK. Thanks for the clarification.

 Can I suggest that we need to be very careful how any recommendation
 in this area is stated? I certainly didn't get that impression from
 your initial posting, and from the other responses it doesn't look
 like I was the only one.

I understood what Nick meant but I still don't understand how he's
come to this conclusion.

 We're only just starting to get real credibility for wheel as a
 distribution format, and we need to get a very strong message out that
 wheel is the future, and people should be distributing wheels as their
 primary binary format. My personal litmus test is the scientific
 community - when Christoph Gohlke is distributing his (Windows) binary
 builds as wheels, and projects like numpy, ipython, scipy etc are
 distributing wheels on PyPI, rather than bdist_wininst, I'll feel like
 we have got to the point where wheels are the norm. The problem is,
 of course, that with conda being a scientific distribution at heart,
 any message we issue that promotes conda in any context will risk
 confusion in that community.

Nick's proposal is basically incompatible with allowing Christoph
Gohlke to use pip and wheels. Christoph provides a bewildering array
of installers for prebuilt packages that are interchangeable with
other builds at the level of Python code but not necessarily at the
binary level. So, for example, his scipy is incompatible with the
official (from SourceForge) Windows numpy build because it links
with the non-free Intel MKL library and it needs numpy to link against
the same. Installing his scipy over the other numpy results in this:
https://mail.python.org/pipermail//python-list/2013-September/655669.html

So Christoph can provide wheels and people can manually download them
and install from them but would beginners find that any easier than
running the .exe installers? The .exe installers are more powerful and
can do things like the numpy super-pack that distributes binaries for
different levels of SSE support (as discussed previously on this list
the wheel format cannot currently achieve this). Beginners will also
find .exe installers more intuitive than running pip on the command
line and will typically get better error messages etc. than pip
provides. So I don't really see why Christoph should bother switching
formats (as noted by Paul before, anyone who wants a wheel cache can
easily convert his installers into wheels).

AFAICT what Nick is saying is that it's not possible for pip and PyPI
to guarantee the compatibility of different binaries because, unlike
apt-get and friends, only part of the software stack is controlled.
However I think this is not the most relevant difference between pip
and apt-get here. The crucial difference is that apt-get communicates
with repositories where all code and all binaries are under control of
a single organisation. Pip (when used normally) communicates with PyPI
and no single organisation controls the content of PyPI. So there's no
way for pip/PyPI to guarantee *anything* about the compatibility of
the code that they distribute/install, whether the problems are to do
with binary compatibility or just compatibility of pure Python code.
For pure Python distributions package authors are expected to solve
the compatibility problems and pip provides version specifiers etc
that they can use to do this. For built distributions they could do
the same - except that pip/PyPI don't provide a mechanism for them to
do so.
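The version-specifier mechanism described above for pure Python distributions can be sketched in plain Python. This is a simplified, numeric-only model, not PEP 440's full grammar, and the "numpy 1.7.x" requirement shown is purely illustrative:

```python
import operator
import re

def parse_version(v):
    # Simplified: dotted numeric versions only (no rc/dev/post segments).
    return tuple(int(part) for part in v.split("."))

def satisfies(version, spec):
    """Check a version against one specifier such as '>=1.7' or '<1.8'."""
    ops = {">=": operator.ge, "<=": operator.le, "==": operator.eq,
           ">": operator.gt, "<": operator.lt}
    m = re.match(r"(>=|<=|==|>|<)(.+)", spec)
    return ops[m.group(1)](parse_version(version), parse_version(m.group(2)))

# An author declaring "works with numpy 1.7.x" constrains what pip may pick:
specs = [">=1.7", "<1.8"]
print(all(satisfies("1.7.1", s) for s in specs))  # True
print(all(satisfies("1.8.0", s) for s in specs))  # False
```

Real tools of course use the full specifier grammar (via pkg_resources at the time, PEP 440 later), not this toy comparison.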

Because PyPI is not a centrally controlled single software stack it
needs a different model for ensuring compatibility - one driven by the
community. People in the Python community are prepared to spend a
considerable amount of time, effort and other resources solving this
problem. Consider how much time Christoph Gohlke must spend maintaining
such a large internally consistent set of built packages. He has
created a single compatible binary software stack for scientific
computation. It's just that PyPI doesn't give him any way to
distribute it. If perhaps he could own a tag like cgohlke and upload
numpy:cgohlke and scipy:cgohlke then his scipy:cgohlke wheel could
depend on numpy:cgohlke and numpy:cgohlke could somehow communicate
the fact that it is incompatible with any other scipy distribution.
This is one way in which pip/PyPI could facilitate the Python
community to solve the binary compatibility problems.

[As an aside, I don't know whether Christoph's Intel license would
permit distribution via PyPI.]

Another way would be to allow the community to create compatibility
tags so that projects like numpy would have mechanisms to indicate
e.g. Fortran ABI compatibility. In this model no one owns a particular
tag but 

Re: [Distutils] Handling the binary dependency management problem

2013-12-02 Thread Paul Moore
On 2 December 2013 10:45, Oscar Benjamin oscar.j.benja...@gmail.com wrote:
 Nick's proposal is basically incompatible with allowing Christoph
 Gohlke to use pip and wheels. Christoph provides a bewildering array
 of installers for prebuilt packages that are interchangeable with
 other builds at the level of Python code but not necessarily at the
 binary level. So, for example, his scipy is incompatible with the
 official (from SourceForge) Windows numpy build because it links
 with the non-free Intel MKL library and it needs numpy to link against
 the same. Installing his scipy over the other numpy results in this:
 https://mail.python.org/pipermail//python-list/2013-September/655669.html

Ah, OK. I had not seen this issue as I've always either used
Christoph's builds or not used them. I've never tried or needed to mix
builds. This is probably because I'm very much only a casual user of
the scientific stack, so my needs are pretty simple.

 So Christoph can provide wheels and people can manually download them
 and install from them but would beginners find that any easier than
 running the .exe installers? The .exe installers are more powerful and
 can do things like the numpy super-pack that distributes binaries for
 different levels of SSE support (as discussed previously on this list
 the wheel format cannot currently achieve this). Beginners will also
 find .exe installers more intuitive than running pip on the command
 line and will typically get better error messages etc. than pip
 provides. So I don't really see why Christoph should bother switching
 formats (as noted by Paul before, anyone who wants a wheel cache can
 easily convert his installers into wheels).

The crucial answer here is that exe installers don't recognise
virtualenvs. Again, I can imagine that a scientific user would
naturally install Python and put all the scientific modules into the
system Python - but precisely because I'm a casual user, I want to
keep big dependencies like numpy/scipy out of my system Python, and so
I use virtualenvs.

The big improvement pip/wheel give over wininst is a consistent user
experience, whether installing into the system Python, a virtualenv,
or a Python 3.3+ venv. (I used to use wininsts in preference to pip,
so please excuse a certain level of the enthusiasm of a convert here
:-))
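The virtualenv workflow described above can be sketched with the stdlib venv module (Python 3.3+; with_pip=False is used only to keep the sketch fast, a real environment would bootstrap pip too):

```python
import os
import sys
import tempfile
import venv

# Create an isolated environment so big dependencies like numpy/scipy
# stay out of the system Python.
env_dir = os.path.join(tempfile.mkdtemp(), "env")
venv.create(env_dir, with_pip=False)

# Scripts land in a platform-dependent subdirectory; pip installed into
# this environment would put console entry points here as well.
bindir = "Scripts" if os.name == "nt" else "bin"
print(os.path.isdir(os.path.join(env_dir, bindir)))  # True
```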

 AFAICT what Nick is saying is that it's not possible for pip and PyPI
 to guarantee the compatibility of different binaries because, unlike
 apt-get and friends, only part of the software stack is controlled.
 However I think this is not the most relevant difference between pip
 and apt-get here. The crucial difference is that apt-get communicates
 with repositories where all code and all binaries are under control of
 a single organisation. Pip (when used normally) communicates with PyPI
 and no single organisation controls the content of PyPI. So there's no
 way for pip/PyPI to guarantee *anything* about the compatibility of
 the code that they distribute/install, whether the problems are to do
 with binary compatibility or just compatibility of pure Python code.
 For pure Python distributions package authors are expected to solve
 the compatibility problems and pip provides version specifiers etc
 that they can use to do this. For built distributions they could do
 the same - except that pip/PyPI don't provide a mechanism for them to
 do so.

Agreed. Expecting the same level of compatibility guarantees from PyPI
as is provided by RPM/apt is unrealistic, in my view. Heck, even pure
Python packages don't give any indication as to whether they are
Python 3 compatible in some cases (I just hit this today with the
binstar package, as an example). This is a fact of life with a
repository that doesn't QA uploads.

 Because PyPI is not a centrally controlled single software stack it
 needs a different model for ensuring compatibility - one driven by the
 community. People in the Python community are prepared to spend a
 considerable amount of time, effort and other resources solving this
 problem. Consider how much time Christoph Gohlke must spend maintaining
 such a large internally consistent set of built packages. He has
 created a single compatible binary software stack for scientific
 computation. It's just that PyPI doesn't give him any way to
 distribute it. If perhaps he could own a tag like cgohlke and upload
 numpy:cgohlke and scipy:cgohlke then his scipy:cgohlke wheel could
 depend on numpy:cgohlke and numpy:cgohlke could somehow communicate
 the fact that it is incompatible with any other scipy distribution.
 This is one way in which pip/PyPI could facilitate the Python
 community to solve the binary compatibility problems.

Exactly.

 [As an aside, I don't know whether Christoph's Intel license would
 permit distribution via PyPI.]

Yes, I'd expect Christoph's packages would likely always have to remain
off PyPI (if for no other reason than the fact that he isn't the owner
of the packages he's providing distributions 

Re: [Distutils] Handling the binary dependency management problem

2013-12-02 Thread Nick Coghlan
On 2 Dec 2013 21:57, Paul Moore p.f.mo...@gmail.com wrote:

 On 2 December 2013 10:45, Oscar Benjamin oscar.j.benja...@gmail.com
wrote:
  Nick's proposal is basically incompatible with allowing Christoph
  Gohlke to use pip and wheels. Christoph provides a bewildering array
  of installers for prebuilt packages that are interchangeable with
  other builds at the level of Python code but not necessarily at the
  binary level. So, for example, his scipy is incompatible with the
  official (from SourceForge) Windows numpy build because it links
  with the non-free Intel MKL library and it needs numpy to link against
  the same. Installing his scipy over the other numpy results in this:
 
https://mail.python.org/pipermail//python-list/2013-September/655669.html

 Ah, OK. I had not seen this issue as I've always either used
 Christoph's builds or not used them. I've never tried or needed to mix
 builds. This is probably because I'm very much only a casual user of
 the scientific stack, so my needs are pretty simple.

  So Christoph can provide wheels and people can manually download them
  and install from them but would beginners find that any easier than
  running the .exe installers? The .exe installers are more powerful and
  can do things like the numpy super-pack that distributes binaries for
  different levels of SSE support (as discussed previously on this list
  the wheel format cannot currently achieve this). Beginners will also
  find .exe installers more intuitive than running pip on the command
  line and will typically get better error messages etc. than pip
  provides. So I don't really see why Christoph should bother switching
  formats (as noted by Paul before, anyone who wants a wheel cache can
  easily convert his installers into wheels).

 The crucial answer here is that exe installers don't recognise
 virtualenvs. Again, I can imagine that a scientific user would
 naturally install Python and put all the scientific modules into the
 system Python - but precisely because I'm a casual user, I want to
 keep big dependencies like numpy/scipy out of my system Python, and so
 I use virtualenvs.

 The big improvement pip/wheel give over wininst is a consistent user
 experience, whether installing into the system Python, a virtualenv,
 or a Python 3.3+ venv. (I used to use wininsts in preference to pip,
 so please excuse a certain level of the enthusiasm of a convert here
 :-))

And the conda folks are working on playing nice with virtualenv - I don't
think we'll see a similar offer from Microsoft for MSI any time soon :)

  AFAICT what Nick is saying is that it's not possible for pip and PyPI
  to guarantee the compatibility of different binaries because, unlike
  apt-get and friends, only part of the software stack is controlled.
  However I think this is not the most relevant difference between pip
  and apt-get here. The crucial difference is that apt-get communicates
  with repositories where all code and all binaries are under control of
  a single organisation. Pip (when used normally) communicates with PyPI
  and no single organisation controls the content of PyPI. So there's no
  way for pip/PyPI to guarantee *anything* about the compatibility of
  the code that they distribute/install, whether the problems are to do
  with binary compatibility or just compatibility of pure Python code.
  For pure Python distributions package authors are expected to solve
  the compatibility problems and pip provides version specifiers etc
  that they can use to do this. For built distributions they could do
  the same - except that pip/PyPI don't provide a mechanism for them to
  do so.

 Agreed. Expecting the same level of compatibility guarantees from PyPI
 as is provided by RPM/apt is unrealistic, in my view. Heck, even pure
 Python packages don't give any indication as to whether they are
 Python 3 compatible in some cases (I just hit this today with the
 binstar package, as an example). This is a fact of life with a
 repository that doesn't QA uploads.

Exactly, this is the difference between pip and conda - conda is a solution
for installing from curated *collections* of packages. It's somewhat
related to the tagging system people are speculating about for PyPI, but
instead of being purely hypothetical, it already exists.

Because it uses hash based dependencies, there's no chance of things
getting mixed up. That design has other problems which limit the niche
where a tool like conda is the right answer, but within that niche, hash
based dependency management helps bring the combinatorial explosion of
possible variations under control.
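The hash-based dependency idea can be illustrated with a toy model (names, versions, and the hash scheme are all invented for illustration; this is not conda's actual format):

```python
import hashlib

def build_id(name, version, dep_ids):
    """Identify a binary build by hashing its name/version together with
    the exact ids of the builds it links against: two builds match only
    if their whole dependency chains match, so an MKL numpy and a
    reference-BLAS numpy can never be confused."""
    payload = name + " " + version + " " + " ".join(sorted(dep_ids))
    return hashlib.sha256(payload.encode()).hexdigest()[:12]

numpy_mkl = build_id("numpy", "1.8.0", ["mkl-11.1-abc123"])
numpy_ref = build_id("numpy", "1.8.0", ["openblas-0.2-def456"])
scipy_mkl = build_id("scipy", "0.13.0", [numpy_mkl])

# Same source version, different binary identity:
print(numpy_mkl != numpy_ref)  # True
```

A scipy build depending on `numpy_mkl` simply cannot resolve against the other numpy, which is how the combinatorial explosion stays under control.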

  Because PyPI is not a centrally controlled single software stack it
  needs a different model for ensuring compatibility - one driven by the
  community. People in the Python community are prepared to spend a
  considerable amount of time, effort and other resources solving this
  problem. Consider how much time Christoph Gohlke must spend maintaining
  such a large 

Re: [Distutils] Handling the binary dependency management problem

2013-12-02 Thread Tres Seaver

On 12/01/2013 05:07 PM, Vinay Sajip wrote:
 On Sun, 1/12/13, Paul Moore p.f.mo...@gmail.com wrote:
 
  If the issue is simply around defining compatibility tags that
  better describe the various environments around, then let's just
  get on with that - we're going to have to do it in the end anyway,
  why temporarily promote an alternative solution just to change our
  recommendation later?
 
 This makes sense to me. We should refine the compatibility tags as
 much as is required. It would be nice if there was some place (on
 PyPI, or elsewhere) where users could request binary distributions for
 specific packages for particular environments, and then some kind
 people with those environments might be able to build those wheels and
 upload them ... a bit like Christoph Gohlke does for Windows.

The issue is combinatorial explosion in the compatibility tag space.
There is basically zero chance that even Linux users (even RedHat users
across RHEL versions) would benefit from pre-built binary wheels (as
opposed to packages from their distribution).  Wheels on POSIX allow
caching of the build process for deployment across a known set of hosts:
they won't insulate you from the need to build in the first place.
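The build-caching workflow described above is what `pip wheel` provides; a sketch of assembling that command line (directory and package names are illustrative):

```python
import sys

def wheel_cache_command(requirements, wheel_dir="wheelhouse"):
    """Build wheels once on a build host with `pip wheel`; the resulting
    directory can then be copied to the known set of deployment hosts
    and installed from without rebuilding."""
    return [sys.executable, "-m", "pip", "wheel",
            "--wheel-dir", wheel_dir] + list(requirements)

cmd = wheel_cache_command(["lxml", "pyyaml"])
print(cmd[4:])  # ['--wheel-dir', 'wheelhouse', 'lxml', 'pyyaml']
```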

Wheels *might* be in play in the for-pay market, where a vendor supports
a limited set of platforms, but those solutions will use separate indexes
anyway.


Tres.
-- 
===
Tres Seaver  +1 540-429-0999  tsea...@palladion.com
Palladion Software   Excellence by Design   http://palladion.com



Re: [Distutils] Handling the binary dependency management problem

2013-12-02 Thread Tres Seaver

On 12/01/2013 05:17 PM, Nick Coghlan wrote:

 I see conda as existing at a similar level to apt and yum from a
 packaging point of view, with zc.buildout as a DIY equivalent at that
 level.

FTR: zc.buildout does nothing to insulate you from the need for a
compiler;  it does allow you to create repeatable builds from source for
non-Python components which would otherwise vary with the underlying
platform.  The actual recipes for such components often involve a *lot*
of yak shaving. ;)


Tres.



Re: [Distutils] Handling the binary dependency management problem

2013-12-02 Thread Tres Seaver

On 12/01/2013 06:38 PM, Paul Moore wrote:
 I understand that things are different in the Unix world, but to be
 blunt why should Windows users care?

You're kidding, right?  90% or more of the reason for wheels in the first
place is because Windows users can't build their own software from
source.  The amount of effort put in by non-Windows package owners to
support them dwarfs whatever is bothering you here.


Tres.



Re: [Distutils] Handling the binary dependency management problem

2013-12-02 Thread Ralf Gommers
On Mon, Dec 2, 2013 at 12:38 AM, Paul Moore p.f.mo...@gmail.com wrote:

 On 1 December 2013 22:17, Nick Coghlan ncogh...@gmail.com wrote:

  For example, I installed Nikola into a virtualenv last night. That
 required
  installing the development headers for libxml2 and libxslt, but the error
  that tells you that is a C compiler one.
 
  I've been a C programmer longer than I have been a Python one, but I
 still
  had to resort to Google to try to figure out what dev libraries I needed.

 But that's a *build* issue, surely? How does that relate to installing
 Nikola from a set of binary wheels?

 I understand you are thinking about non-Python libraries, but all I
 can say is that this has *never* been an issue to my knowledge in the
 Windows world. People either ship DLLs with the Python extension, or
 build statically. I understand that things are different in the Unix
 world, but to be blunt why should Windows users care?

  Outside the scientific space, crypto libraries are also notoriously hard
 to
  build, as are game engines and GUI toolkits. (I guess database bindings
  could also be a problem in some cases)

 Build issues again...

  We have the option to leave handling the arbitrary binary dependency
 problem
  to platforms, and I think we should take it.

 Again, can we please be clear here? On Windows, there is no issue that
 I am aware of. Wheels solve the binary distribution issue fine in that
 environment (I know this is true, I've been using wheels for months
 now - sure, there may be specialist areas that need some further work
 because they haven't had as much use yet, but that's just details).

  This is why I suspect there will be a better near term effort/reward
  trade-off in helping the conda folks improve the usability of their
 platform
  than there is in trying to expand the wheel format to cover arbitrary
 binary
  dependencies.

 Excuse me if I'm feeling a bit negative towards this announcement.
 I've spent many months working on, and promoting, the wheel + pip
 solution, to the point where it is now part of Python 3.4. And now
 you're saying that you expect us to abandon that effort and work on
 conda instead? I never saw wheel as a pure-Python solution, installs
 from source were fine for me in that area. The only reason I worked so
 hard on wheel was to solve the Windows binary distribution issue. If
 the new message is that people should not distribute wheels for (for
 example) lxml, pyyaml, pyzmq, numpy, scipy, pandas, gmpy, and pyside
 (to name a few that I use in wheel format relatively often) then
 effectively the work I've put in has been wasted.


Hi, scipy developer here. In the scientific python community people are
definitely interested in and intending to standardize on wheels. Your work
on wheel + pip is much appreciated.

The problems above that you say are "build issues" aren't really build
issues (where "build" means what distutils/bento do to build a package).
Maybe the following concepts, shamelessly stolen from the thread linked
below, help:
- *build systems* handle the actual building of software, eg Make, CMake,
distutils, Bento, autotools, etc
- *package managers* handle the distribution and installation of built (or
source) software, eg pip, apt, brew, ports
- *build managers* are separate from the above and handle the automatic(?)
preparation of packages from the results of build systems

Conda is a package manager to the best of my understanding, but because it
controls the whole stack it can also already do parts of the job of a build
manager. This is not something that pip aims to do. Conda is fairly new and
not well understood in our community either, but maybe this (long) thread
helps:
https://groups.google.com/forum/#!searchin/numfocus/build$20managers/numfocus/mVNakFqfpZg/6h_SldGNM-EJ.


Regards,
Ralf


 I'm hoping I've misunderstood here. Please clarify. Preferably with
 specifics for Windows (as "conda is a known stable platform" simply
 isn't true for me...) - I accept you're not a Windows user, so a
 pointer to already-existing documentation is fine (I couldn't find any
 myself).

 Paul.


[Distutils] Package a project

2013-12-02 Thread Imran M Yousuf
Hi,

I am new to setuptools. I am using it to build and package a project
of mine. Currently if I execute `python setup.py bdist` it generates a
tarball with all files located in paths
'./abs/path/to/project/bin/[entry points]' and
'./abs/path/to/project/lib/python-2.7/site-packages/[rest of the
sources]'. This does not seem logical :( - I would rather want
the binary distribution to be structured as
'./project-name/bin/' and './project-name/lib/'.

Can someone please advise me how to achieve it? I am using VirtualEnv for
development of this project and its setup.py looks like -

from setuptools import setup, find_packages

setup(name='project-name',
      version='1.0',
      description='Description',
      author='Imran M Yousuf',
      author_email='im...@smitsol.com',
      url='http://www.smitsol.com',
      install_requires=['setuptools', 'pycrypto==2.6'],
      packages=find_packages('src', ['tests']),
      package_dir={'': 'src'},
      test_suite='tests',
      entry_points={
          'console_scripts': ['manager=client.manager:main']
      }
     )

Thank you,

-- 
Imran M Yousuf
Entrepreneur & CEO
Smart IT Solution
http://smitsol.com
25/5B, Block F, Haji Chinu Miah Road Bylane
Joint Quarter, Mohammadpur
Dhaka - 1207, Bangladesh
Email: im...@smitsol.com
Twitter: @imyousuf - http://twitter.com/imyousuf
Skype: imyousuf
Blog: http://imyousuf-tech.blogs.smartitengineering.com/
Mobile: +880-1711402557
   +880-1746119494


Re: [Distutils] Handling the binary dependency management problem

2013-12-02 Thread Paul Moore
On 2 December 2013 13:22, Nick Coghlan ncogh...@gmail.com wrote:
  As a quick sanity check question - what is the long-term advice for
  Christoph (and others like him)? Continue distributing wininst
  installers? Move to wheels? Move to conda packages? Do whatever you
  want, we don't care? We're supposedly pushing pip as the officially
  supported solution to package management - how can that be reconciled
  with *not* advising builders[1] to produce pip-compatible packages?

 What Christoph is doing is producing a cross-platform curated binary
 software stack, including external dependencies. That's precisely the
 problem I'm suggesting we *not* try to solve in the core tools any time
 soon, but instead support bootstrapping conda to solve the problem at a
 different layer.

OK. From my perspective, that's *not* what Christoph is doing (I
concede that it might be from his perspective, though). As far as I
know, the only place where Christoph's builds are incompatible with
standard builds is where numpy is involved (where he uses Intel
compiler extensions). But what he does *for me* is to provide binary
builds of lxml, pyyaml, matplotlib, pyside and a number of other
packages that I haven't got the infrastructure set up locally to
build. [He also provides apparently-incompatible binary builds of
scientific packages like numpy/scipy/pandas, but that's a side-issue
and as I get *all* of my scientific packages from him, the
incompatibility is not a visible problem for me]

If the named projects provided Windows binaries, then there would be
no issue with Christoph's stuff. But AFAIK, there is no place I can
get binary builds of matplotlib *except* from Christoph. And lxml
provides limited sets of binaries - there's no Python 3.3 version, for
example. I could continue :-)

Oh, and by the way, in what sense do you mean cross-platform here?
Win32 and Win64? Maybe I'm being narrow minded, but I tend to view
"cross platform" as meaning "needs to think about at least two of
Unix, Windows and OSX". The *platform* issues on Windows (and OSX, I
thought) are solved - it's the ABI issues that we've ignored thus far
(successfully till now :-))

But Christoph's site won't go away because of this debate, and as long
as I can find wininst, egg or wheel binaries somewhere, I can maintain
my own personal wheel index. So I don't really care much, and I'll
stop moaning for now. I'll focus my energies on building that personal
index instead.
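A personal wheel index of the kind mentioned above can be as simple as a directory of .whl files pointed at with pip's --no-index and --find-links options; a sketch of the command construction (package name and path are illustrative):

```python
import sys

def install_from_wheelhouse(package, wheelhouse):
    """Install `package` using only a local directory of wheels,
    never contacting PyPI."""
    return [sys.executable, "-m", "pip", "install",
            "--no-index", "--find-links", wheelhouse, package]

print(install_from_wheelhouse("lxml", "C:/wheels")[4:])
# ['--no-index', '--find-links', 'C:/wheels', 'lxml']
```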

Paul


[Distutils] Install a script to prefix/sbin instead of prefix/bin

2013-12-02 Thread Michael Jansen
Hi

I am currently working on the cobbler (http://cobblerd.org) setup.py and trying
to improve it. It currently is only capable of installing to /usr for several
reasons. Since I would like to have virtualenv support when running it, I am
trying to change that. While doing that I managed to meet one of the standard
problems with distutils/setuptools - a search returns results from 2004 or
older - which still seems to be unsolved.

I would be willing to change that, but my recent research leaves me a bit
confused with all the setuptools-is-not-dead, distlib, TUF and wheels stuff.

So here is the question: if I am willing to do the work, is it possible to add
something like sbin_scripts (or admin_scripts?) to distutils?

I locally solved the problem like this: http://pastebin.kde.org/pqrwrud1p (or
attached).

The implementation expands INSTALL_SCHEMES and reuses install_scripts to
install those sbin_scripts too. That could be debatable. Perhaps a dedicated
build|install_xxx_scripts command would be better.

I want to add that the FHS also has a libexec folder (usually
/usr/lib/project/libexec these days) destined for scripts that are supposed to
be called only by other scripts/programs, not manually.

And cobbler installs python scripts for modwsgi into /srv/www/cobbler/. So I
guess just adding sbin_scripts is not really the solution. Perhaps something
more flexible is needed.

Or should I wait for the next-gen distlib-based swiss-knife python build tool
and for now keep my local additions (there are more)?

-- 
Michael Jansen
http://michael-jansen.biz
from distutils.command import install as dist_install
from distutils.command.build import build as _build
from distutils.command.build_scripts import build_scripts as _build_scripts
from distutils.command.build_scripts import convert_path, newer, first_line_re
from distutils.errors import DistutilsOptionError

from setuptools.command.install import install as _install
from setuptools.command.install_scripts import install_scripts as _install_scripts
from setuptools import setup as _setup
from setuptools.dist import Distribution as _Distribution

from stat import ST_MODE
import sys
import os
from distutils import log

if sys.version < '2.2':
    dist_install.WINDOWS_SCHEME['sbin_scripts'] = '$base/Scripts'
else:
    dist_install.WINDOWS_SCHEME['sbin_scripts'] = '$base/Scripts'

dist_install.INSTALL_SCHEMES['unix_prefix']['sbin_scripts'] = '$base/sbin'
dist_install.INSTALL_SCHEMES['unix_home']['sbin_scripts'] = '$base/sbin'
dist_install.INSTALL_SCHEMES['unix_user']['sbin_scripts'] = '$userbase/sbin'
dist_install.INSTALL_SCHEMES['nt_user']['sbin_scripts'] = '$userbase/Scripts'
dist_install.INSTALL_SCHEMES['os2']['sbin_scripts'] = '$base/Scripts'
dist_install.INSTALL_SCHEMES['os2_home']['sbin_scripts'] = '$userbase/sbin'

# The keys to an installation scheme; if any new types of files are to be
# installed, be sure to add an entry to every installation scheme above,
# and to SCHEME_KEYS here.
dist_install.SCHEME_KEYS = ('purelib', 'platlib', 'headers', 'scripts',
                            'data') + ('sbin_scripts',)

class install(_install):
Enhance install target aimed for inclusion upstream.

user_options = _install.user_options + [
('install-sbin-scripts=', None, installation directory for sbin 
scripts)
]

def initialize_options(self):
_install.initialize_options(self)
self.install_sbin_scripts = None

def finalize_options(self):
_install.finalize_options(self)
if self.root is not None:
self.change_roots('sbin_scripts')

def finalize_unix(self):
_install.finalize_unix(self)

if self.install_base is not None or self.install_platbase is not None:
if self.install_sbin_scripts is None:
raise DistutilsOptionError, \
  (install-base or install-platbase supplied, but 
  installation scheme is incomplete)
return



def finalize_other(self):
_install.finalize_other(self)
# Nothing else to do here

def expand_dirs(self):
_install.expand_dirs(self)
self._expand_attrs(['install_sbin_scripts'])

# :TODO:
# In run() add it to rejectdirs

class install_scripts(_install_scripts):

user_options = _install_scripts.user_options + [
('install-sbin-dir=', 'd', directory to install sbin scripts to)
]

def initialize_options(self):
_install_scripts.initialize_options(self)
self.install_sbin_dir = None
self.build_sbin_dir = None

def finalize_options (self):
_install_scripts.finalize_options(self)
self.set_undefined_options(
'build',
('build_sbin_scripts', 'build_sbin_dir'))
self.set_undefined_options(
'install',
('install_sbin_scripts', 'install_sbin_dir')
)

def run(self):
_install_scripts.run(self)
print self.build_sbin_dir
print self.install_sbin_dir

self.outfiles = 
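The `$base`/`$userbase` placeholders in the scheme entries above are expanded at install time by simple template substitution (distutils does this with its own `subst_vars` helper; `string.Template` shows the same idea in miniature - the `/usr/local` value here is just an illustrative prefix):

```python
from string import Template

# A scheme entry such as '$base/sbin' is a template; the install command
# substitutes its computed directories into it before creating files.
scheme_entry = '$base/sbin'
expanded = Template(scheme_entry).substitute(base='/usr/local')
print(expanded)  # -> /usr/local/sbin
```

Adding a new key like `sbin_scripts` therefore only requires a template entry per scheme plus commands that know how to expand and consume it.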

Re: [Distutils] Handling the binary dependency management problem

2013-12-02 Thread Paul Moore
On 2 December 2013 13:38, Tres Seaver tsea...@palladion.com wrote:
 On 12/01/2013 06:38 PM, Paul Moore wrote:
 I understand that things are different in the Unix world, but to be
 blunt why should Windows users care?

 You're kidding, right?  90% or more of the reason for wheels in the first
 place is because Windows users can't build their own software from
 source.  The amount of effort put in by non-Windows package owners to
 support them dwarfs whatever is bothering you here.

My point is that most of the complex binary compatibility problems
seem to be Unix-related, and as you imply, Unix users don't seem to
have much interest in using wheels except for local caching. So why
build that complexity into the spec if the main users (Windows, and
Unix users who won't ever publish wheels outside their own systems)
don't have a need for it? Let's just stick with something simple that
has limitations but works (practicality beats purity). My original
bdist_simple proposal was a pure-Windows replacement for wininst.
Daniel developed that into wheels which cater for non-Windows systems
(I believe, precisely because he had an interest in the local cache
use case). We're now seeing the complexities of the Unix world affect
the design of wheels, and it's turning out to be a hard problem. All
I'm trying to say is let's not give up on binary wheels for Windows,
just because we have unsolved issues on Unix. Whether solving the Unix
issues is worth it is the Unix users' call - I'll help solve the
issues, if they choose to, but I won't support abandoning the existing
Windows solution just because it can't be extended to cater for Unix
as well.

I'm immensely grateful for the amount of work projects which are
developed on Unix (and 3rd parties like Christoph) put into supporting
Windows. Far from dismissing that, I want to avoid making things any
harder than they already are for such people - current wheels are no
more complex to distribute than wininst installers, and I want to keep
the impact on non-Windows projects at that level. If I come across as
ungrateful, I apologise.

Paul
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Handling the binary dependency management problem

2013-12-02 Thread Oscar Benjamin
On 2 December 2013 13:54, Paul Moore p.f.mo...@gmail.com wrote:

 If the named projects provided Windows binaries, then there would be
 no issue with Christoph's stuff. But AFAIK, there is no place I can
 get binary builds of matplotlib *except* from Christoph. And lxml
 provides limited sets of binaries - there's no Python 3.3 version, for
 example. I could continue :-)

The matplotlib folks provide a list of binaries for Windows and OSX
hosted by SourceForge:
http://matplotlib.org/downloads.html

So do numpy and scipy.

 Oh, and by the way, in what sense do you mean cross-platform here?
 Win32 and Win64? Maybe I'm being narrow minded, but I tend to view
 "cross platform" as meaning "needs to think about at least two of
 Unix, Windows and OSX". The *platform* issues on Windows (and OSX, I
 thought) are solved - it's the ABI issues that we've ignored thus far
 (successfully till now :-))

Exactly. A python extension that uses Fortran needs to indicate which
of the two Fortran ABIs it uses. Scipy must use the same ABI as the
BLAS/LAPACK library that numpy was linked with. This is core
compatibility data but there's no way to communicate it to pip.
There's no need to actually provide downloadable binaries for both
ABIs but there is a need to be able to detect incompatibilities.

Basically, if:
1) there is at least one consistent set of built wheels for
Windows/OSX for any popular set of binary-interdependent packages, and
2) there is a way to automatically detect incompatibilities and to
automatically find compatible built wheels,
then *a lot* of packaging problems have been solved.

Part 1 already exists. There are multiple consistent sets of built
installers (not wheels yet) for many hard to build packages. Part 2
requires at least some changes in pip/PyPI.

I read somewhere that numpy is the most frequently cited dependency on
PyPI. It can be built in multiple binary-incompatible ways. If there
is at least a way for the installer to know that it was built in the
standard way (for Windows/OSX) then there can be a set of binaries
built to match that. There's no need for a combinatorial explosion of
compatibility tags - just a single set of compatibility tags that has
complete binaries (where the definition of "complete" obviously depends
on your field).
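The compatibility tags being discussed are the last three dash-separated fields of a PEP 427 wheel filename, which is all pip has to match against; a minimal sketch (the filename is illustrative, and note there is no field in which a Fortran ABI or BLAS variant could be expressed - which is exactly the gap Oscar describes):

```python
def wheel_tags(filename):
    """Split a PEP 427 wheel filename into its compatibility-tag triple.

    Layout: name-version(-build)?-pythontag-abitag-platformtag.whl,
    so the tag triple is always the last three dash-separated fields.
    """
    stem = filename[:-len('.whl')]
    parts = stem.split('-')
    py_tag, abi_tag, plat_tag = parts[-3:]
    return py_tag, abi_tag, plat_tag

print(wheel_tags('numpy-1.8.0-cp27-none-win_amd64.whl'))
# -> ('cp27', 'none', 'win_amd64')
```

Nothing in that triple distinguishes two `win_amd64` numpy builds linked against different BLAS/LAPACK or Fortran ABIs, so such incompatibilities are invisible to the installer.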

People who want to build in different incompatible ways can do so
themselves, although it would still be nice to get an install time
error message when you subsequently try to install something
incompatible.

For Linux this problem is basically solved as far as beginners are
concerned because they can just use apt.


Oscar


Re: [Distutils] Handling the binary dependency management problem

2013-12-02 Thread Paul Moore
On 2 December 2013 14:19, Oscar Benjamin oscar.j.benja...@gmail.com wrote:
 Basically if
 1) There is at least one single consistent set of built wheels for
 Windows/OSX for any popular set of binary-interdependent packages.
 2) A way to automatically detect incompatibilities and to
 automatically find compatible built wheels.
 then *a lot* of packaging problems have been solved.

 Part 1 already exists. There are multiple consistent sets of built
 installers (not wheels yet) for many hard to build packages. Part 2
 requires at least some changes in pip/PyPI.

Precisely.

But isn't part 2 at least sort-of solved by users manually pointing at
the right index? The only files on PyPI are compatible with each
other and externally hosted files (thanks for the pointer to the
matplotlib binaries, BTW) won't get picked up automatically by pip so
users have to set up their own index (possibly converting
wininst -> wheel) and so can manually manage the compatibility process
if they are careful.

If people start uploading incompatible binaries to PyPI, I expect a
rash of bug reports followed very quickly by people settling down to a
community-agreed standard (in fact, that's probably already happened).
Incompatible builds will remain on external hosts like Christoph's.

It's not perfect, certainly, but it's no worse than currently.

For any sort of better solution to part 2, you need *installed
metadata* recording the ABI / shared library details for the installed
files. So this is a Metadata 2.0 question, and not a compatibility tag
/ wheel issue (except that when Metadata 2.0 gets such information,
Wheel 2.0 probably needs to be specified to validate against it or
something). And on that note, I agree with Nick that we don't want to
be going there at the moment, if ever. I just disagree with what I
thought he was saying, that we should be so quick to direct people to
conda (at some point we could debate why conda rather than ActiveState
or Enthought, but tbh I really don't care...) I'd go with something
along the lines of:


Wheels don't attempt to solve the issue of one package depending on
another one that has been built with specific options/compilers, or
links to specific external libraries. The binaries on PyPI should
always be compatible with each other (although nothing checks this,
it's simply a matter of community standardisation), but if you use
distributions hosted outside of PyPI or build your own, you need to
manage such compatibility yourself. Most of the time, outside of
specialised areas, it should not be an issue[1].

If you want guaranteed compatibility, you should use a distribution
that validates and guarantees compatibility of all hosted files. This
might be your platform package manager (apt or RPM) or a bundled
Python distribution like Enthought, Conda or Activestate.


[1] That statement is based on *my* experience. If problems are
sufficiently widespread, we can tone it down, but let's not reach the
point of FUD.

Paul


Re: [Distutils] Package a project

2013-12-02 Thread Paul Moore
On 2 December 2013 15:28, Imran M Yousuf im...@smitsol.com wrote:
 Thanks for the suggestion Paul. Wheel structures exactly as I want it
 to be, but I see it does not generate the entry point scripts; any
 idea how to get them to work?

The scripts should be generated when you install the wheel (with pip
1.5, which will be released soon). OTOH, I thought that wheel put
generated scripts into the wheel file by default (except in version
0.20.0 - do you have the latest version?)

The scripts should be in <wheel file>/project.data/scripts - the
wheel install process puts them in the right place (virtualenv/bin or
whatever is appropriate).
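The layout Paul describes can be illustrated with a throwaway zip built in memory (the `project-1.0` names are made up, and a real wheel also carries a `project-1.0.dist-info` directory with metadata and a RECORD file):

```python
import io
import zipfile

# Scripts travel inside the wheel under <name>-<version>.data/scripts/;
# on install they are moved into the environment's bin/Scripts directory.
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as whl:
    whl.writestr('project/__init__.py', '')
    whl.writestr('project-1.0.data/scripts/manager', '#!python\nmain()\n')

with zipfile.ZipFile(buf) as whl:
    names = whl.namelist()
print(names)
```

Listing the archive shows where an installer would find the script entries to relocate.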

Paul


Re: [Distutils] Package a project

2013-12-02 Thread Imran M Yousuf
Damn me! Thanks Paul, yes the script does have it in the data folder.

Thank you,

Imran

On Mon, Dec 2, 2013 at 9:32 PM, Paul Moore p.f.mo...@gmail.com wrote:
 On 2 December 2013 15:28, Imran M Yousuf im...@smitsol.com wrote:
 Thanks for the suggestion Paul. Wheel structures exactly as I want it
 to be, but I see it does not generate the entry point scripts; any
 idea how to get them to work?

 The scripts should be generated when you install the wheel (with pip
 1.5, which will be released soon). OTOH, I thought that wheel put
 generated scripts into the wheel file by default (except in version
 0.20.0 - do you have the latest version?)

 The scripts should be in <wheel file>/project.data/scripts - the
 wheel install process puts them in the right place (virtualenv/bin or
 whatever is appropriate).

 Paul



-- 
Imran M Yousuf
Entrepreneur & CEO
Smart IT Solution
http://smitsol.com
25/5B, Block F, Haji Chinu Miah Road Bylane
Joint Quarter, Mohammadpur
Dhaka - 1207, Bangladesh
Email: im...@smitsol.com
Twitter: @imyousuf - http://twitter.com/imyousuf
Skype: imyousuf
Blog: http://imyousuf-tech.blogs.smartitengineering.com/
Mobile: +880-1711402557
   +880-1746119494


Re: [Distutils] Package a project

2013-12-02 Thread Imran M Yousuf
Thanks for the suggestion Paul. Wheel structures exactly as I want it
to be, but I see it does not generate the entry point scripts; any
idea how to get them to work?

Thank you,

Imran

On Mon, Dec 2, 2013 at 7:59 PM, Paul Moore p.f.mo...@gmail.com wrote:
 On 2 December 2013 07:53, Imran M Yousuf im...@smitsol.com wrote:
 Hi,

 I am new to setuptools. I am using it to build and package a project
 of mine. Currently if I execute `python setup.py bdist` it generates a
 tarball with all files located in paths
 './abs/path/to/project/bin/[entry points]' and
 './abs/path/to/project/lib/python-2.7/site-packages/[rest of the
 sources]'. This does not seem to be logical :(, I would rather want
 the binary distribution to be structure -
 './project-name/bin/' and './project-name/lib/'.

 Can some please advise me how to achieve it? I am using VirtualEnv for
 development of this project and its setup.py looks like -

 from setuptools import setup, find_packages

 setup(name='project-name',
   version='1.0',
   description='Description',
   author='Imran M Yousuf',
   author_email='im...@smitsol.com',
   url='http://www.smitsol.com',
   install_requires = ['setuptools', 'pycrypto==2.6'],
   packages=find_packages('src', ["tests"]),
   package_dir={'': 'src'},
   test_suite="tests",
   entry_points={
   'console_scripts': ['manager=client.manager:main']
   }
  )

 Install the wheel project and use bdist_wheel instead of a simple
 bdist. Also, use the sdist (source distribution) command to create a
 source package (that needs a compiler to build). Binary packages are
 only compatible with the platform/Python version they are built on, so
 you may want to make multiple wheels, depending on what platforms you
 are targeting.

 From what you provide, I'm not 100% sure if you have C code in your
 project, actually. If you don't, then a sdist is sufficient - although
 a wheel might be worth uploading as well (pure Python wheels are
 cross-platform).

 The plain bdist command produces a "dumb" binary distribution, which
 is obsolete and, frankly, useless.
 Paul



-- 
Imran M Yousuf
Entrepreneur & CEO
Smart IT Solution
http://smitsol.com
25/5B, Block F, Haji Chinu Miah Road Bylane
Joint Quarter, Mohammadpur
Dhaka - 1207, Bangladesh
Email: im...@smitsol.com
Twitter: @imyousuf - http://twitter.com/imyousuf
Skype: imyousuf
Blog: http://imyousuf-tech.blogs.smartitengineering.com/
Mobile: +880-1711402557
   +880-1746119494


Re: [Distutils] Handling the binary dependency management problem

2013-12-02 Thread Vinay Sajip
On Mon, 2/12/13, Tres Seaver tsea...@palladion.com wrote:

 The issue is combinatorial explosion in the compatibility tag space.
 There is basically zero chance that even Linux users (even RedHat
 users across RHEL versions) would benefit from pre-built binary
 wheels (as opposed to packages from their distribution). Wheels
 on POSIX allow caching of the build process for deployment across
 a known set of hosts: they won't insulate you from the need to build in
 the first place.
 
The combinations are the number of Python X.Y versions x the number of
platform architectures/ABI variants - or do you mean something more than this?

The wheel format is supposed to be a cross-platform binary package format; are 
you saying it is completely useless for POSIX except as a cache for identical 
hosts? What about for the cases like simple C extensions which have no external 
dependencies, but are only for speedups? What about POSIX environments where 
compilers aren't available (e.g. restricted/embedded environments, or due to 
security policies)?

Regards,

Vinay Sajip


Re: [Distutils] Handling the binary dependency management problem

2013-12-02 Thread Tres Seaver

On 12/02/2013 12:23 PM, Vinay Sajip wrote:
 On Mon, 2/12/13, Tres Seaver tsea...@palladion.com wrote:
 
 The issue is combinatorial explosion in the compatibility tag
 space. There is basically zero chance that even Linux users (even
 RedHat users across RHEL versions) would benefit from pre-built
 binary wheels (as opposed to packages from their distribution).
 Wheels on POSIX allow caching of the build process for deployment
 across a known set of hosts: they won't insulate you from the need
 to build in the first place.
 

 The combinations are number of Python X.Y versions x the no. of
 platform architectures/ABI variants, or do you mean something more
 than this?

Trying to mark up wheels so that they can be safely shared with unknown
POSIXy systems seems like a halting problem, to me:  the chance I can
build a wheel on my machine that you can use on yours (the only reason to
distribute a wheel, rather than the sdist, in the first place) drops off
sharply as wheel's binariness comes into play.  I'm arguing that wheel
is not an interesting *distribution* format for POSIX systems (at least,
for non-Mac ones).  It could still play out in *deployment* scenarios (as
you note below).

Note that wheel's main deployment advantage over a binary egg
(installable by pip) is exactly reversed if you use 'easy_install' or
'zc.buildout'.  Otherwise, in a controlled deployment, they are pretty
much equivalent.

 The wheel format is supposed to be a cross-platform binary package 
 format; are you saying it is completely useless for POSIX except as a 
 cache for identical hosts? What about for the cases like simple C 
 extensions which have no external dependencies, but are only for 
 speedups?

I have a lot of packages on PyPI which have such optimization-only
speedups.  The time difference to build such extensions is trivial
(e.g., for zope.interface, ~1 second on my old slow laptop, versus 0.4
seconds without the extension).

Even for lxml (Daniel's original motivating case), the difference is ~45
seconds to build from source vs. 1 second to install a wheel (or an
egg). The instant I have to think about whether the binary form might be
subtly incompatible, that 1 second *loses* to the 45 seconds I spend over
here arguing with you guys while it builds again from source. :)

 What about POSIX environments where compilers aren't available (e.g.
 restricted/embedded environments, or due to security policies)?

Such environments are almost certainly driven by development teams who
can build wheels specifically for deployment to them (assuming the
policies allow anything other than distro-package-managed software).
This is still really a cache the build optimization to known platforms
(w/ all binary dependencies the same), rather than distribution.


Tres.
- -- 
===
Tres Seaver  +1 540-429-0999  tsea...@palladion.com
Palladion Software   Excellence by Designhttp://palladion.com



Re: [Distutils] Handling the binary dependency management problem

2013-12-02 Thread Marcus Smith
 hash based dependencies

In the conda build guide, the yaml spec files reference dependencies by
name/version (and the type of conda environment you're in will determine
the rest)
http://docs.continuum.io/conda/build.html#specifying-versions-in-requirements
Where does the hash come in? What do you mean?

 publication of curated stacks when the conda folks already have one,

So, I see the index: http://repo.continuum.io/pkgs/index.html
Is there a way to contribute to this index yet? Or is that what would need
to be worked out?
Otherwise, I guess the option is that you have to build out recipes for
anything else you need from PyPI, right? Or is it easier than that?


Re: [Distutils] Handling the binary dependency management problem

2013-12-02 Thread Marcus Smith
 In the conda build guide, the yaml spec files reference dependencies by
 name/version (and the type of conda environment you're in will determine
 the rest)

 http://docs.continuum.io/conda/build.html#specifying-versions-in-requirements
 Where does the hash come in? What do you mean?


e.g. here's the requirement section from the spec file for their recipe for
fabric.

https://github.com/ContinuumIO/conda-recipes/blob/master/fabric/meta.yaml#L28

requirements:
  build:
- python
- distribute
- paramiko

  run:
- python
- distribute
- paramiko


Re: [Distutils] Install a script to prefix/sbin instead of prefix/bin

2013-12-02 Thread Daniel Holth
On Mon, Dec 2, 2013 at 9:14 AM, Michael Jansen i...@michael-jansen.biz wrote:
 Hi

 I am currently working on the cobbler (http://cobblerd.org) setup.py and
 trying to improve it. It currently is only capable of installing to /usr for
 several reasons. Since i would like to have virtualenv support when running
 it i am trying to change that. While doing that i managed to meet one of the
 standard problems with distutils/setuptools - a search returns results of
 2004 or older - which still seems to be unsolved.

 I would be willing to change that but my recent research leaves me a bit
 confused with all that setuptools is not dead, distlibs, tuf and wheels
 stuff.

 So here is the question. If i am willing to do the work is it possible to
 add something like sbin_scripts (or admin_scripts?) to distutils?

 I locally solved the problem like that http://pastebin.kde.org/pqrwrud1p (or
 attached).

 The implementation expands INSTALL_SCHEMA and reuses install_scripts to
 install those sbin_scripts too. That could be debatable. Perhaps a dedicated
 build|install_xxx_scripts command.

 I want to add that fhs also has a libexec folder (usually
 /usr/lib/project/libexec these days) destined for scripts that are
 supposed to be only called by other scripts/programs not manually.

 And cobbler for installs python scripts for modwsgi into /srv/www/cobbler/ .
 So i guess just adding sbin_scripts is not really the solution. Perhaps
 something more flexible is needed.

 Or should i wait for nextgen distlib based swiss knife python build tool and
 for now keep my local additions (there are more).

 --
 Michael Jansen
 http://michael-jansen.biz

It would be fairly simple to propose adding the GNU coding standard
directory variables:
http://www.gnu.org/prep/standards/html_node/Directory-Variables.html .
They could become valid names under the wheel .data/ directory.

You would also want to figure out how to instruct the installer to
generate scripts wrappers for these alternative destinations so e.g.
the execute bit is set and the #! line points to the correct
interpreter.
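The wrapper generation Daniel mentions can be sketched as follows for the POSIX case (the helper name and the `client.manager:main` target are invented for illustration; real installers, e.g. distlib's ScriptMaker, also generate .exe launchers on Windows and handle interpreter paths containing spaces):

```python
import os
import stat
import sys
import tempfile

def write_wrapper(directory, name, module, func):
    """Write an executable script wrapper whose #! line points at the
    currently running interpreter, then set the execute bits."""
    path = os.path.join(directory, name)
    with open(path, 'w') as f:
        f.write('#!%s\n' % sys.executable)
        f.write('from %s import %s\n' % (module, func))
        f.write('%s()\n' % func)
    mode = os.stat(path).st_mode
    os.chmod(path, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
    return path

path = write_wrapper(tempfile.mkdtemp(), 'manager', 'client.manager', 'main')
with open(path) as f:
    print(f.readline().rstrip())  # the generated #! line
```

The same generation step would simply need to learn additional target directories (sbin, libexec, ...) from the expanded install scheme.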

It would be a good idea to consider the needs of Windows and
virtualenv users who may need to be able to opt out of the install
things anywhere on my system and full FHS compliance features.


Re: [Distutils] How setuptools reads requirements

2013-12-02 Thread Erik Bray
On Tue, Nov 19, 2013 at 1:56 PM, Kura k...@kura.io wrote:
 Hey guys

 I'm trying to dig in to how setuptools/distutils reads the
 [install_]requires keyword and how requirements.txt is handled.

 I've tried running setup.py --requires but get an empty response back; this
 also fails when using >=x.x,<=x.y and causes a Python stack trace.

 Was wondering if someone on here could shed some light on the topic for me.

`setup.py --requires` is mostly useless.  It only reports back entries
from the `requires=` argument to setup() supported by distutils.  This
only tracks the names of other modules that a module needs to import,
and is being phased out as not particularly useful.  It's not in any
way tied to `install_requires` which is a feature added by setuptools.

As for requirements.txt, you might mean "requires.txt", which is a
file added to the .egg_info directory when you run the `setup.py
egg_info` command on a setuptools-enabled package.  Pip has its own
code for reading information from files out of .egg_info.  I believe
there are other libraries like pkginfo [1] that do this.
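A rough sketch of reading such a requires.txt by hand (the format written by setuptools' egg_info: bare requirement lines, followed by optional `[extra]` section headers; the sample data below is invented):

```python
def parse_requires_txt(text):
    """Split an .egg-info requires.txt into (core, extras) where extras
    maps each [extra] section name to its requirement lines."""
    core, extras, current = [], {}, None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith('[') and line.endswith(']'):
            current = line[1:-1]        # start of an [extra] section
            extras[current] = []
        elif current is None:
            core.append(line)           # before any section: core requirement
        else:
            extras[current].append(line)
    return core, extras

sample = "setuptools\npycrypto==2.6\n\n[ssh]\nparamiko>=1.0\n"
print(parse_requires_txt(sample))
# -> (['setuptools', 'pycrypto==2.6'], {'ssh': ['paramiko>=1.0']})
```

Libraries like pkginfo wrap this kind of parsing behind a proper API, which is usually preferable to hand-rolling it.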

 It's for a Python dependency management/monitoring system I am writing so
 hopefully would benefit the overall community.

Sounds intriguing!

[1] https://pypi.python.org/pypi/pkginfo


Re: [Distutils] Package a project

2013-12-02 Thread Nick Coghlan
On 3 Dec 2013 02:01, Imran M Yousuf im...@smitsol.com wrote:

 Thanks for the suggestion Paul. Wheel structures exactly as I want it
 to be, but I see it does not generate the entry point scripts; any
 idea how to get them to work?

Those are platform dependent, so the installer generates them at install
time based on the metadata in the wheel.

Cheers,
Nick.


 Thank you,

 Imran

 On Mon, Dec 2, 2013 at 7:59 PM, Paul Moore p.f.mo...@gmail.com wrote:
  On 2 December 2013 07:53, Imran M Yousuf im...@smitsol.com wrote:
  Hi,
 
  I am new to setuptools. I am using it to build and package a project
  of mine. Currently if I execute `python setup.py bdist` it generates a
  tarball with all files located in paths
  './abs/path/to/project/bin/[entry points]' and
  './abs/path/to/project/lib/python-2.7/site-packages/[rest of the
  sources]'. This does not seem to be logical :(, I would rather want
  the binary distribution to be structure -
  './project-name/bin/' and './project-name/lib/'.
 
  Can some please advise me how to achieve it? I am using VirtualEnv for
  development of this project and its setup.py looks like -
 
  from setuptools import setup, find_packages
 
  setup(name='project-name',
version='1.0',
description='Description',
author='Imran M Yousuf',
author_email='im...@smitsol.com',
url='http://www.smitsol.com',
install_requires = ['setuptools', 'pycrypto==2.6'],
    packages=find_packages('src', ["tests"]),
    package_dir={'': 'src'},
    test_suite="tests",
entry_points={
'console_scripts': ['manager=client.manager:main']
}
   )
 
  Install the wheel project and use bdist_wheel instead of a simple
  bdist. Also, use the sdist (source distribution) command to create a
  source package (that needs a compiler to build). Binary packages are
  only compatible with the platform/Python version they are built on, so
  you may want to make multiple wheels, depending on what platforms you
  are targeting.
 
  From what you provide, I'm not 100% sure if you have C code in your
  project, actually. If you don't, then a sdist is sufficient - although
  a wheel might be worth uploading as well (pure Python wheels are
  cross-platform).
 
  The plain bdist command produces a "dumb" binary distribution, which
  is obsolete and, frankly, useless.
  Paul



 --
 Imran M Yousuf
 Entrepreneur & CEO
 Smart IT Solution
 http://smitsol.com
 25/5B, Block F, Haji Chinu Miah Road Bylane
 Joint Quarter, Mohammadpur
 Dhaka - 1207, Bangladesh
 Email: im...@smitsol.com
 Twitter: @imyousuf - http://twitter.com/imyousuf
 Skype: imyousuf
 Blog: http://imyousuf-tech.blogs.smartitengineering.com/
 Mobile: +880-1711402557
+880-1746119494


Re: [Distutils] Handling the binary dependency management problem

2013-12-02 Thread Marcus Smith
  publication of curated stacks when the conda folks already have one,

 so, I see the index: http://repo.continuum.io/pkgs/index.html
 Is they a way to contribute to this index yet?  or is that what would need
 to be worked out.


Probably a dumb question, but would it be possible to convert all the
anaconda packages to wheels?
Even the non-python ones like:
qt-4.7.4-0.tar.bz2 (http://repo.continuum.io/pkgs/free/linux-64/qt-4.7.4-0.tar.bz2)
Certainly not the intent of wheels, but just wondering if it could be made
to work?
But I'm guessing there are pieces in the core anaconda distribution itself
that make it all work?
The point here being to provide a way to use the effort of conda in any
kind of normal python environment, as long as you consistently point at an
index that just contains the conda wheels.


Re: [Distutils] Handling the binary dependency management problem

2013-12-02 Thread Nick Coghlan
On 3 Dec 2013 00:19, Paul Moore p.f.mo...@gmail.com wrote:

 On 2 December 2013 13:38, Tres Seaver tsea...@palladion.com wrote:
  On 12/01/2013 06:38 PM, Paul Moore wrote:
  I understand that things are different in the Unix world, but to be
  blunt why should Windows users care?
 
  You're kidding, right?  90% or more of the reason for wheels in the
  first place is because Windows users can't build their own software from
  source.  The amount of effort put in by non-Windows package owners to
  support them dwarfs whatever is bothering you here.

 My point is that most of the complex binary compatibility problems
 seem to be Unix-related, and as you imply, Unix users don't seem to
 have much interest in using wheels except for local caching. So why
 build that complexity into the spec if the main users (Windows, and
 Unix users who won't ever publish wheels outside their own systems)
 don't have a need for it? Let's just stick with something simple that
 has limitations but works (practicality beats purity). My original
 bdist_simple proposal was a pure-Windows replacement for wininst.
 Daniel developed that into wheels which cater for non-Windows systems
 (I believe, precisely because he had an interest in the local cache
 use case). We're now seeing the complexities of the Unix world affect
 the design of wheels, and it's turning out to be a hard problem. All
 I'm trying to say is let's not give up on binary wheels for Windows,
 just because we have unsolved issues on Unix.

Huh? This is *exactly* what I am saying we should do - wheels *already*
work so long as they're self-contained.

They *don't* work (automatically) when they have an external dependency:
users have to obtain the external dependency by other means, and ensure
that everything is properly configured to find it, and that everything is
compatible with the retrieved version.

You're right that Christoph is doing two different things, though, so our
advice to him (or anyone that wanted to provide the cross-platform
equivalent of his current Windows-only stack) would be split:

- for all self-contained installers, also publish a wheel file on a custom
index server (although having a builder role on PyPI where project owners
can grant someone permission to upload binaries after the sdist is
published could be interesting)
- for those installers which actually form an integrated stack with shared
external binary dependencies, use the mechanisms provided by conda rather
than getting users to manage the external dependencies by hand (as
licensing permits, anyway)

Whether solving the Unix
 issues is worth it is the Unix users' call - I'll help solve the
 issues, if they choose to, but I won't support abandoning the existing
 Windows solution just because it can't be extended to cater for Unix
 as well.

You appear to still be misunderstanding my proposal, as we're actually in
violent agreement. All that extra complexity you're worrying about is
precisely what I'm saying we should *leave out* of the wheel spec. In most
cases of accelerator and wrapper modules, the static linking and/or
bundling solutions will work fine, and that's the domain I believe we
should *deliberately* restrict wheels to, so we don't get distracted trying
to solve an incredibly hard external dependency management problem that we
don't actually need to solve at the wheel level, since anyone that actually
needs it solved can just bootstrap conda instead.

 I'm immensely grateful for the amount of work projects which are
 developed on Unix (and 3rd parties like Cristoph) put into supporting
 Windows. Far from dismissing that, I want to avoid making things any
 harder than they already are for such people - current wheels are no
 more complex to distribute than wininst installers, and I want to keep
 the impact on non-Windows projects at that level. If I come across as
 ungrateful, I apologise.

The only problem I want to explicitly declare out of scope for wheel files
is the one the wininst installers can't handle cleanly either: the subset
of Christoph's installers which need a shared external binary dependency,
and any other components in a similar situation.

Using wheels or native Windows installers can get you in trouble in that
case, since you may accidentally set up conflicts in your environment. The
solution is curation of a software stack built around that external
dependency (or dependencies), backed up by a packaging system that prevents
conflicts within a given local installation.

The mainstream Linux distros approach this problem by mapping everything to
platform-specific packages and trying to get parallel installation working
cleanly (a part of the problem I plan to work on improving post Python
3.4), but that approach doesn't scale well and is one of the factors
responsible for the notorious time lags between software being released on
PyPI and it being available in the Linux system package managers
(streamlining that conversion is one of my main goals for 

Re: [Distutils] Handling the binary dependency management problem

2013-12-02 Thread Paul Moore
On 2 December 2013 22:26, Nick Coghlan ncogh...@gmail.com wrote:
Whether solving the Unix
 issues is worth it is the Unix users' call - I'll help solve the
 issues, if they choose to, but I won't support abandoning the existing
 Windows solution just because it can't be extended to cater for Unix
 as well.

 You appear to still be misunderstanding my proposal, as we're actually in
 violent agreement. All that extra complexity you're worrying about is
 precisely what I'm saying we should *leave out* of the wheel spec. In most
 cases of accelerator and wrapper modules, the static linking and/or bundling
 solutions will work fine, and that's the domain I believe we should
 *deliberately* restrict wheels to, so we don't get distracted trying to
 solve an incredibly hard external dependency management problem that we
 don't actually need to solve at the wheel level, since anyone that actually
 needs it solved can just bootstrap conda instead.

OK. I think I've finally seen what you're suggesting, and yes, it's
essentially the same as I'd like to see (at least for now). I'd hoped
that wheels could be more useful for Unix users than seems likely now
- mainly because I really do think that a lot of the benefits of
binary distributions are *not* restricted to Windows, and if Unix
users could use them, it'd lessen the tendency to think that
supporting anything other than source installs was purely to cater
for Windows users not having a compiler :-) But if that's not a
practical possibility (and I defer to the Unix users' opinions on that
matter) then so be it.

On the other hand, I still don't see where the emphasis on conda in
your original message came from. There are lots of full stack
solutions available - I'd have thought system packages like RPM and
apt are the obvious first suggestion for users that need a curated
stack. If they are not appropriate, then there are Enthought,
ActiveState and Anaconda/conda that I know of. Why single out conda to
be blessed?

Also, I'd like the proposal to explicitly point out that 99% of the
time, Windows is the simple case (because static linking and bundling
DLLs is common). Getting Windows users to switch to wheels will be
enough change to ask, without confusing the message. A key point here
is that packages like lxml, matplotlib, or Pillow would have
arbitrary binary dependency issues on Unix, but (because of static
linking/bundling) be entirely appropriate for wheels on Windows. Let's
make sure the developers don't miss this point!
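(For concreteness: a wheel is just a zip archive, so you can see whether a
Windows wheel bundles its binary dependencies simply by listing it. A minimal
sketch - the wheel name and contents below are fabricated for illustration:)

```shell
# Build a stand-in "wheel" (wheels are plain zip archives) containing a
# bundled DLL, then list it the way you would inspect a real Windows wheel.
mkdir -p demo/lxml
touch demo/lxml/__init__.py demo/lxml/etree.pyd demo/lxml/libxml2.dll
python3 -m zipfile -c demo-1.0-cp27-none-win32.whl demo/lxml
# Bundled binary dependencies show up directly in the listing:
python3 -m zipfile -l demo-1.0-cp27-none-win32.whl | grep -E '\.(dll|pyd)'
```

A wheel that lists its .dll/.pyd files alongside the Python modules is
self-contained in exactly the sense discussed above.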

Paul
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Handling the binary dependency management problem

2013-12-02 Thread Nick Coghlan
On 3 Dec 2013 08:17, Marcus Smith qwc...@gmail.com wrote:



  publication of curated stacks when the conda folks already have one,

 so, I see the index: http://repo.continuum.io/pkgs/index.html
 Is there a way to contribute to this index yet? Or is that what would
need to be worked out?


 probably a dumb question, but would it be possible to convert all the
anaconda packages to wheels?
 even the non-python ones like:  qt-4.7.4-0.tar.bz2
 certainly not the intent of wheels, but just wondering if it could be
made to work?
 but I'm guessing there are pieces in the core anaconda distribution itself
that make it all work?
 the point here being to provide a way to use the effort of conda in any
kind of normal python environment, as long as you consistently point at an
index that just contains the conda wheels.

I'm not sure about the conda -> wheel direction, but "pip install conda &&
conda init" mostly works already if you're in a virtualenv that owns its
copy of Python (this is also the answer to "why not ActiveState or
Enthought" - the Continuum Analytics software distribution stuff is truly
open source, and able to be used completely independently of their
services).
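(Roughly, the bootstrap sequence being described looks like this - an untested
sketch, since the real commands need network access, so they are shown as
comments; the environment name is made up:)

```shell
# Bootstrap conda via pip inside a virtualenv that owns its own Python
# (i.e. a full copy, not one created with --system-site-packages):
#
#   virtualenv scienv
#   . scienv/bin/activate
#   pip install conda     # conda itself is published on PyPI
#   conda init            # convert the current env into a conda-managed one
#   conda install scipy   # packages now come from the conda repos
#
# After "conda init", pip and conda coexist in the same environment, though
# pip won't see the non-Python packages conda installs (sqlite, qt, etc.).
CONDA_BOOTSTRAP="pip install conda && conda init"
echo "$CONDA_BOOTSTRAP"
```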

Their docs aren't that great in terms of explaining the *why* of conda -
I'm definitely influenced by spending time talking about how it works with
Travis and some of the other Continuum Analytics folks at PyCon US and the
Austin Python user group.

However, their approach to distribution of fully curated stacks seems
basically sound, the scientific and data analysis users I know that have
tried it have loved it, the devs have expressed a willingness to work on
improving their interoperability with the standard tools (and followed
through on that at least once by creating the conda init command), and
they're actively interested in participating in the broader community
(hence the presentation at the packaging mini-summit at PyCon US, as well
as assorted presentations at SciPy and PyData conferences).

People are already confused about the differences between pip and conda and
when they should use each, and unless we start working with the conda devs
to cleanly define the different use cases, that's going to remain the case.

POSIX users need ready access to a prebuilt scientific stack just as much
as (or more than) Mac OS X and Windows users (there's a reason
ScientificLinux is a distribution in its own right) and that space is
moving fast enough that the Linux distros (even SL) end up being too slow
to update. conda solves that problem, and it solves it in a way that works
on Windows as well. On the wheel side of things we haven't even solved the
POSIX platform tagging problem yet, and I don't believe we should make
users wait until we have figured that out when there's an existing solution
to that particular problem that already works.

Cheers,
Nick.



 ___
 Distutils-SIG maillist  -  Distutils-SIG@python.org
 https://mail.python.org/mailman/listinfo/distutils-sig

___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Handling the binary dependency management problem

2013-12-02 Thread Marcus Smith
 I'm not sure about the conda -> wheel direction, but "pip install conda &&
 conda init" mostly works already if you're in a virtualenv that owns its
 copy of Python

ok, I just tried conda in a throw-away altinstall of py2.7.
I was thinking I would have to conda create new isolated environments
from there.
but there literally is a conda init (*not* documented on the website)
like you mentioned that gets conda going in the current environment.
pip and conda were both working, except that pip didn't know about
everything conda had installed like sqlite, which is expected.
and I found all the conda metadata which was helpful to look at.

I still don't know what you mean by "hash-based dependencies".
I'm not seeing any requirements being locked by hashes in the metadata?
what do you mean?
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Handling the binary dependency management problem

2013-12-02 Thread Nick Coghlan
On 3 Dec 2013 09:03, Paul Moore p.f.mo...@gmail.com wrote:

 On 2 December 2013 22:26, Nick Coghlan ncogh...@gmail.com wrote:
 Whether solving the Unix
  issues is worth it is the Unix users' call - I'll help solve the
  issues, if they choose to, but I won't support abandoning the existing
  Windows solution just because it can't be extended to cater for Unix
  as well.
 
  You appear to still be misunderstanding my proposal, as we're actually
in
  violent agreement. All that extra complexity you're worrying about is
  precisely what I'm saying we should *leave out* of the wheel spec. In
most
  cases of accelerator and wrapper modules, the static linking and/or
bundling
  solutions will work fine, and that's the domain I believe we should
  *deliberately* restrict wheels to, so we don't get distracted trying to
  solve an incredibly hard external dependency management problem that we
  don't actually need to solve at the wheel level, since anyone that
actually
  needs it solved can just bootstrap conda instead.

 OK. I think I've finally seen what you're suggesting, and yes, it's
 essentially the same as I'd like to see (at least for now). I'd hoped
 that wheels could be more useful for Unix users than seems likely now
 - mainly because I really do think that a lot of the benefits of
 binary distributions are *not* restricted to Windows, and if Unix
 users could use them, it'd lessen the tendency to think that
 supporting anything other than source installs was purely to cater
 for Windows users not having a compiler :-) But if that's not a
 practical possibility (and I defer to the Unix users' opinions on that
 matter) then so be it.

 On the other hand, I still don't see where the emphasis on conda in
 your original message came from. There are lots of full stack
 solutions available - I'd have thought system packages like RPM and
 apt are the obvious first suggestion for users that need a curated
 stack. If they are not appropriate, then there are Enthought,
 ActiveState and Anaconda/conda that I know of. Why single out conda to
 be blessed?

 Also, I'd like the proposal to explicitly point out that 99% of the
 time, Windows is the simple case (because static linking and bundling
 DLLs is common). Getting Windows users to switch to wheels will be
 enough change to ask, without confusing the message. A key point here
 is that packages like lxml, matplotlib, or Pillow would have
 arbitrary binary dependency issues on Unix, but (because of static
 linking/bundling) be entirely appropriate for wheels on Windows. Let's
 make sure the developers don't miss this point!

Once we solve the platform tagging problem, wheels will also work on any
POSIX system for the simple cases of accelerator and wrapper modules. Long
term the only persistent problem is with software stacks that need
consistent build settings and offer lots of build options. That applies to
Windows as well - the SSE build variants of NumPy were one of the original
cases brought up as not being covered by the wheel compatibility tag format.
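(Background for the compatibility tag discussion: PEP 427 wheel filenames end
in three tags - python, ABI, and platform - and it's the last of these where
the POSIX platform tagging problem lives. A quick way to pull them apart,
using a made-up filename:)

```shell
# PEP 427: {name}-{version}(-{build})?-{python}-{abi}-{platform}.whl
fn="numpy-1.8.0-cp27-none-win32.whl"
base=${fn%.whl}
py=$(echo "$base"   | awk -F- '{print $(NF-2)}')   # python tag
abi=$(echo "$base"  | awk -F- '{print $(NF-1)}')   # ABI tag
plat=$(echo "$base" | awk -F- '{print $NF}')       # platform tag
echo "python=$py abi=$abi platform=$plat"
# prints: python=cp27 abi=none platform=win32
```

The SSE build variants mentioned above are exactly the kind of distinction
this tag triple has no room for.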

Near term, platform independent stacks *also* serve as a workaround for the
POSIX platform tagging issues and the fact there isn't yet a default
build configuration for the scientific stack.

As for "Why conda?":
- open source
- cross platform
- can be installed with pip
- gets new releases of Python components faster than Linux distributions
- uses Continuum Analytics services by default, but can be configured to
use custom servers
- created by the creator of NumPy

For ActiveState and Enthought, as far as I am aware, their package managers
are closed source and tied fairly closely to their business model, while
the Linux distros are not only platform specific, but have spotty coverage
of PyPI packages, and even those which are covered, often aren't reliably
kept up to date (although I hope metadata 2.0 will help improve that
situation by streamlining the conversion to policy compliant system
packages).

Cheers,
Nick.


 Paul
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Handling the binary dependency management problem

2013-12-02 Thread Chris Barker
On Mon, Dec 2, 2013 at 5:22 AM, Nick Coghlan ncogh...@gmail.com wrote:

 And the conda folks are working on playing nice with virtualenv - I don't think
 we'll see a similar offer from Microsoft for MSI any time soon :)

nice to know...

   a single organisation. Pip (when used normally) communicates with PyPI
   and no single organisation controls the content of PyPI.

can't you point pip to a "wheelhouse"? How is that different?
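(For reference, a wheelhouse is just a directory - or URL - of .whl files that
pip can be pointed at. A sketch, with a hypothetical project name, and the
network-dependent commands shown as comments:)

```shell
# A "wheelhouse" is simply a directory of wheel files.
mkdir -p wheelhouse
# Populate it (needs network access):
#   pip wheel --wheel-dir=wheelhouse SomeProject
# Install strictly from the curated directory, never touching PyPI:
#   pip install --no-index --find-links=wheelhouse SomeProject
ls -d wheelhouse
```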

For built distributions they could do
   the same - except that pip/PyPI don't provide a mechanism for them to
   do so.

I'm still confused as to what conda provides here -- as near as I can tell,
conda has a nice hash-based way to ensure binary compatibility -- which is
a good thing. But the curated set of packages is an independent issue.
What's stopping anyone from creating a nice curated set of packages with
binary wheels (like the Gohlke repo)?

And wouldn't it be better to make wheel a bit more robust in this regard
than add yet another recommended tool to the mix?

 Exactly, this is the difference between pip and conda - conda is a
 solution for installing from curated *collections* of packages. It's
 somewhat related to the tagging system people are speculating about for
 PyPI, but instead of being purely hypothetical, it already exists.

Does it? I only know of one repository of conda packages -- and it provides
poor support for some things (like wxPython -- does it support any desktop
GUI on OS-X?)

So why do we think that conda is a better option for these unknown curated
repos?

Also, I'm not sure I WANT any more curated repos -- I'd rather have a standard
set by python.org that individual package maintainers can choose to support.

PyPI wheels would then be about publishing default versions of
 components, with the broadest compatibility, while conda would be a
 solution for getting access to alternate builds that may be faster, but
 require external shared dependencies.

I'm still confused as to why packages need to share external dependencies
(though I can see why it's nice...) .

But what's the new policy here? Anaconda and Canopy exist already - do we
need to endorse them? Why? If you want "PyPI wheels would then be about
publishing default versions of components, with the broadest
compatibility" -- then we still need to improve things a bit, but we can't
say we're done.

What Christoph is doing is producing a cross-platform curated binary
 software stack, including external dependencies. That's precisely the
 problem I'm suggesting we *not* try to solve in the core tools any time
 soon, but instead support bootstrapping conda to solve the problem at a
 different layer.

So we are advocating that others, like Christoph, create curated stacks with
conda? Aside from whether conda really provides much more than wheel to
support doing this, I think it's a BAD idea to encourage it: I'd much
rather encourage package maintainers to build standard packages, so we
can get some extra interoperability.

Example: you can't use wxPython with Anaconda (on the Mac, anyway). At least
not without figuring out how to build it yourself, and I'm not sure it will
even work then (and it is a fricking nightmare to build). But it's getting
harder to find standard packages for the mac for the SciPy stack, so
people are really stuck.

So the pip compatible builds for those tools would likely miss out on some
 of the external acceleration features,

that's fine -- but we still need those pip compatible builds 

and the nice thing about pip-compatible builds (really python.org-compatible
builds...) is that they play well with the other binary installers --

 By ceding the distribution of cross-platform curated software stacks with
 external binary dependencies problem to conda, users would get a solution
 to that problem that they can use *now*,

Well, to be fair, I've been starting a project to provide binaries for
various packages for OS X and did intend to give conda a good look-see, but
I really had hoped that wheels were the way now... oh well.
-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/ORR            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

chris.bar...@noaa.gov
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig