Re: [Distutils] Parallel installation of incompatible versions

2013-03-20 Thread Bohuslav Kabrda
- Original Message -
 pkg_resources.requires() is our only current solution for parallel
 installation of incompatible versions. This can be made to work and
 is
 a lot better than the nothing we had before it was created, but also
 has quite a few issues (and it can be a nightmare to debug when it
 goes wrong).
 
 Based on the exchanges with Mark McLoughlin the other week, and
 chatting to Matthias Klose here at the PyCon US sprints, I think I
 have a design that will let us support parallel installs in a way
 that
 builds on existing standards, while behaving more consistently in
 edge
 cases and without making sys.path ridiculously long even in systems
 with large numbers of potentially incompatible dependencies.
 
 The core of this proposal is to create an updated version of the
 installation database format that defines semantics for *.pth files
 inside .dist-info directories.
 
 Specifically, whereas *.pth files directly in site-packages are
 processed automatically when Python starts up, those inside dist-info
 directories would be processed only when explicitly requested
 (probably through a new distlib API). The processing of the *.pth
 file
 would insert it into the path immediately before the path entry
 containing the .dist-info directory (this is to avoid an issue with
 the pkg_resources insert-at-the-front-of-sys.path behaviour where
 system packages can end up shadowing those from a local source
 checkout, without running into the issue with
 append-to-the-end-of-sys.path where a specifically requested version
 is shadowed by a globally installed version)
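 
 A very rough sketch of that insertion behaviour in Python (this is not a real
 distlib API; the function name and the .pth handling here are illustrative
 only):
 
     import os
     import sys
 
     def activate(dist_info_dir, pth_name):
         # the directory that contains the .dist-info directory (e.g. site-packages)
         site_dir = os.path.dirname(os.path.abspath(dist_info_dir))
         # read the single path entry named in the *.pth file
         with open(os.path.join(dist_info_dir, pth_name)) as f:
             extra_path = f.read().strip()
         # insert immediately *before* the sys.path entry containing the
         # .dist-info directory, rather than at the front or the end
         anchor = sys.path.index(site_dir)
         sys.path.insert(anchor, extra_path)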
 
 To use CherryPy2 and CherryPy3 on Fedora as an example, what this
 would allow is for CherryPy3 to be installed normally (i.e. directly
 in site-packages), while CherryPy2 would be installed as a split
 install, with the .dist-info going into site-packages and the actual
 package going somewhere else (more on that below). A cherrypy2.pth
 file inside the dist-info directory would reference the external
 location where cherrypy 2.x can be found.
 
 To use this at runtime, you would do something like:
 
 distlib.some_new_requires_api("CherryPy (2.2)")
 import cherrypy
 

So what would be done when CherryPy 4 comes along? CherryPy 3 is installed directly 
in site-packages, so would versions 2 and 4 both be treated as split installs?
It seems to me that this kind of special casing is not what we want. If you 
develop on one machine and deploy on another, you have no guarantee that the 
standard installation of CherryPy is the same as on your system. That would 
force developers to always install the versions they use as split installs, so 
that they could be sure they always import the correct version.
At this point I'll turn to the Ruby world for an example (please don't shout at 
me :). If you look at how RubyGems works, it puts _every_ gem in a versioned 
directory (therefore no special casing). When a plain require 'foo' is used, the 
newest foo is imported; otherwise the specific version requested is imported. I 
believe we should head in a similar direction here, making the split install the 
default (and the only way).
Then if the user uses the standard

 import cherrypy

Python would import the newest version. When using

 distlib.some_new_requires_api("CherryPy (2.2)")
 import cherrypy

Python would import the specific version. This may actually turn out to be very 
useful, as you could place all the distlib calls into the __init__.py of your 
package, which would nicely separate this from the actual code (and we wouldn't 
need anything like Ruby's Gemfiles).
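
As a sketch of that idea, using the placeholder API name from Nick's example
(none of this exists today):

    # myapp/__init__.py
    import distlib
    distlib.some_new_requires_api("CherryPy (2.2)")

    # myapp/handlers.py
    import cherrypy   # resolves to the version requested in __init__.py
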
So am I completely wrong here or does this make sense to you?

Slavek.

 Cheers,
 Nick.
 
 --
 Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
 

-- 
Regards,
Bohuslav Slavek Kabrda.


Re: [Distutils] self.introduce(distutils-sig)

2013-03-20 Thread Paul Moore
On 19 March 2013 16:21, Steve Dower steve.do...@microsoft.com wrote:
 As I understand, the issue is the same as between different versions of 
 Python and comes down to not being able to assume a compiler on Windows 
 machines. It's easy to make a source file that will compile for any ABI and 
 platform, but distributing binaries requires each one to be built separately. 
 This doesn't have to be an onerous task - it can be scripted quite easily 
 once you have all the required compilers - but it does take more effort than 
 simply sharing a source file.

Another nice tool would be some sort of Windows build farm, where
projects could submit a sdist and it would build wheels for a list of
supported Python versions and architectures. That wouldn't work for
projects with complex dependencies, obviously, but could cover a
reasonable-sized chunk of PyPI (especially if dependencies could be
added to the farm on request).

And can I have a pony as well, of course... :-)

Paul


[Distutils] Building wheels - project metadata, and specifying compatibility

2013-03-20 Thread Paul Moore
When building wheels, it is necessary to know details of the
compatibility requirements of the code. The most common case is for
pure Python code, where the code could in theory be valid for a single
Python version, but in reality is more likely to be valid either for
all Pythons, or sometimes for just Python 2 or Python 3 (where
separate code bases or 2to3 are involved). The wheel project supports
a universal flag in setup.cfg, which sets the compatibility flags to
'py2.py3', but that is only one case.

Ultimately, we need a means (probably in metadata) for (pure Python)
projects to specify any of the following:

1. The built code works on any version of Python (that the project supports)
2. The built code is specific to the major version of Python that it
was built with
3. The built code is only usable for the precise Python version it was
built with

The default is currently (3), but this is arguably the least common
case. Nearly all code will support at least (2) and more and more is
supporting (1).
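
As a hedged illustration of how those three cases might map onto wheel Python
tags (the exact tags are illustrative; "cp33" assumes a CPython 3.3 build
interpreter):

    def python_tag(case, impl="cp", major=3, minor=3):
        # 1: works on any Python the project supports
        if case == 1:
            return "py2.py3"
        # 2: specific to the major version it was built with
        if case == 2:
            return "py%d" % major
        # 3: tied to the exact interpreter version it was built with
        return "%s%d%d" % (impl, major, minor)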

Note that this is separate from the question of what versions the
project supports. It's about how the code is written. Specifically,
there's no point in marking code that uses new features in Python 3.3
as .py33 - it's still .py3 as it will work with Python 3.4. The fact
that it won't work on Python 3.2 is just because the project doesn't
support Python 3.2. Installing a .py3 wheel into Python 3.2 is no
different from installing a sdist there. So overspecifying the wheel
compatibility so that a sdist gets picked up for earlier versions
isn't helpful.

In addition to a means for projects to specify this themselves, tools
(bdist_wheel, pip wheel) should probably have a means to override the
default at the command line, as it will be some time before projects
specify this information, even once it is standard. There's always the
option to rename the generated file, but that feels like a hack...

Where C extensions are involved, there are other questions. Mostly,
compiled code is implementation, architecture, and minor version
specific, so there's little to do here. The stable ABI is relevant,
but I have no real experience of using it to know how that would work.
There is also the case of projects with C accelerators - it would be
good to be able to easily build both the accelerated version and a
fallback pure-python wheel. I don't believe this is easy as things
stand - distutils uses a compiler if it's present, so forcing a
pure-python build when you have a compiler is harder work than it
needs to be when building binary distributions.

Comments? Should the default in bdist_wheel and pip wheel be changed
or should it remain as safe as possible (practicality vs purity)? If
the latter, should override flags be added, or is renaming the wheel
in the absence of project metadata the recommended approach? And does
anyone have any experience of how this might all work with C
extensions?

Paul


Re: [Distutils] Parallel installation of incompatible versions

2013-03-20 Thread Nick Coghlan
On Tue, Mar 19, 2013 at 11:06 AM, PJ Eby p...@telecommunity.com wrote:
 Could you perhaps spell out why this is better than just dropping .whl
 files (or unpacked directories) into site-packages or equivalent?

I need a solution that will also work for packages installed by the
system installer - in fact, that's the primary use case. For
self-contained installation independent of the system Python, people
should be using venv/virtualenv, zc.buildout, software collections (a
Fedora/RHEL tool in the same space), or a similar isolated
application solution.

System packages will be spread out according to the FHS, and need to
work relatively consistently for every language the OS supports (i.e.
all of them), so long term solutions that assume the use of
Python-specific bundling formats for the actual installation are not
sufficient in my view.

I also want to create a database of parallel installed versions that
can be used to avoid duplication across virtual environments and
software collections by using .pth files to reference a common
installed version rather than having to use symlinks or copies of the
files.

I'm not wedded to using *actual* pth files as a cross-platform linking
solution - a more limited format that only supported path additions,
without the extra powers of pth files would be fine. The key point is
to use the .dist-info directories to bridge between unversioned
installs in site packages and finding parallel versions at runtime
without side effects on all Python applications executed on that
system (which is the problem with using a pth file in site packages
to bootstrap the parallel versioning system as easy_install does).

 Also, one thing that actually confuses me about this proposal is that
 it sounds like you are saying you'd have two CherryPy.dist-info
 directories in site-packages, which sounds broken to me; the whole
 point of the existing protocol for .dist-info was that it allowed you
 to determine the importable versions from a single listdir().  Your
 approach would break that feature, because you'd have to:

 1. Read each .dist-info directory to find .pth files
 2. Open and read all the .pth files
 3. Compare the .pth file contents with sys.path to find out what is
 actually *on* sys.path

If a distribution has been installed in site-packages (or has an
appropriate *.pth file there), there won't be any *.pth file in the
.dist-info directory. The *.pth file will only be present if the
package has been installed *somewhere else*.

However, it occurs to me that we can do this differently, by
explicitly involving a separate directory that *isn't* on sys.path by
default, and use a path hook to indicate when it should be accessed.

Under this version of the proposal, PEP 376 would remain unchanged,
and would effectively become the database of installed distributions
available on sys.path by default. These files would all remain
available by default, preserving backwards compatibility for the vast
majority of existing software that doesn't use any kind of parallel
install system.

We could then introduce a separate database of all installed
distributions. Let's use the versioned-packages name, and assume it
lives adjacent to the existing site-packages. The difference between
this versioned-packages directory and site-packages would be that:

- it would never be added to sys.path itself
- multiple .dist-info directories for different versions of the same
distribution may be present
- distributions are installed into named-and-versioned subdirectories
rather than directly into versioned-packages
- rather than the contents being processed directly from sys.path, we
would add a versioned-packages entry to sys.path with a path hook
that maps to a custom module finder that handles the extra import
locations without the same issues as the current approach to modifying
sys.path in pkg_resources (which allows shadowing development versions
with installed versions by inserting at the front), or the opposite
problem that would be created by appending to the end (allowing
default versions to shadow explicitly requested versions)

We would then add some new version constraint API in distlib to:

1. Check the PEP 376 db. If the version identified there satisfies the
constraint, fine, we leave the import state unmodified.
2. If no suitable version is found, check the new versioned-packages directory.
3. If a suitable parallel installed version is found, we check its
dist-info directory for the details needed to update the set of paths
processed by the versioned import hook.

The versioned import hook would work just like normal sys.path based
import (i.e. maintaining a sequence of path entries, using sys.modules,
sys.path_hooks and sys.path_importer_cache); the only difference is that
the set of paths it checks would initially be empty. Calls to the new API
in distlib would modify the *versioned* path, effectively inserting all
those paths at the point in sys.path where the versioned-packages marker
appears.
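
A very rough sketch of that flow (the two database lookups are stubbed out as
plain callables; nothing here is an actual distlib interface):

    # path list consulted by the hypothetical versioned import hook; exposed
    # in sys.path via the "versioned-packages" marker entry
    versioned_path = []

    def require(constraint, default_db, versioned_db):
        """default_db / versioned_db map a constraint string to a list of
        import paths, or return None if nothing installed satisfies it."""
        if default_db(constraint) is not None:
            # 1. satisfied by the default sys.path installs (the PEP 376 db):
            #    leave the import state unmodified
            return
        paths = versioned_db(constraint)
        if paths is None:
            raise LookupError("no installed version satisfies %r" % constraint)
        # 2./3. splice the parallel install's paths into the versioned path,
        # which takes effect at the marker's position in sys.path
        versioned_path.extend(paths)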

Re: [Distutils] Parallel installation of incompatible versions

2013-03-20 Thread Nick Coghlan
On Wed, Mar 20, 2013 at 1:01 AM, Bohuslav Kabrda bkab...@redhat.com wrote:
 So what would be done when CherryPy 4 came? CherryPy 3 is installed directly 
 in site-packages, so version 2 and 4 would be treated with split-install?
 It seems to me that this type of special casing is not what we want. If you 
 develop on one machine and deploy on another machine, you have no guarantee 
 that the standard installation of CherryPy is the same as on your system. 
 That would force developers to actually always install their used versions by 
 split-install, so that they could make sure they always import the correct 
 version.

This approach isn't viable, as it is both backwards incompatible with
the expectations of current Python software and incompatible with the
requirements of Linux distros and other system integrators (who need
to be able to add new backwards incompatible versions of software
without changing the default version).

And I definitely won't shout at people for mentioning what other
languages do - learning from what works and what doesn't for other
groups is exactly what we *should* be doing. Many of the features in
the forthcoming metadata 2.0 specification are driven by stealing
things that are known to work from Node.js, Perl, Ruby, PHP, RPM, DEB,
etc.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia


Re: [Distutils] Building wheels - project metadata, and specifying compatibility

2013-03-20 Thread Daniel Holth
On Wed, Mar 20, 2013 at 8:27 AM, Paul Moore p.f.mo...@gmail.com wrote:
 When building wheels, it is necessary to know details of the
 compatibility requirements of the code. The most common case is for
 pure Python code, where the code could in theory be valid for a single
 Python version, but in reality is more likely to be valid either for
 all Pythons, or sometimes for just Python 2 or Python 3 (where
 separate code bases or 2to3 are involved). The wheel project supports
 a universal flag in setup.cfg, which sets the compatibility flags to
 'py2.py3', but that is only one case.

 Ultimately, we need a means (probably in metadata) for (pure Python)
 projects to specify any of the following:

 1. The built code works on any version of Python (that the project supports)
 2. The built code is specific to the major version of Python that it
 was built with
 3. The built code is only usable for the precise Python version it was
 built with

 The default is currently (3), but this is arguably the least common
 case. Nearly all code will support at least (2) and more and more is
 supporting (1).

 Note that this is separate from the question of what versions the
 project supports. It's about how the code is written. Specifically,
 there's no point in marking code that uses new features in Python 3.3
 as .py33 - it's still .py3 as it will work with Python 3.4. The fact
 that it won't work on Python 3.2 is just because the project doesn't
 support Python 3.2. Installing a .py3 wheel into Python 3.2 is no
 different from installing a sdist there. So overspecifying the wheel
 compatibility so that a sdist gets picked up for earlier versions
 isn't helpful.

On the other hand Python 3.4 knows it is compatible with py33 and
will pick up that wheel too.

It is designed this way to provide a (small) distinction between the
safe default and intentional cross-Python-compatible publishing.

 In addition to a means for projects to specify this themselves, tools
 (bdist_wheel, pip wheel) should probably have a means to override the
 default at the command line, as it will be some time before projects
 specify this information, even once it is standard. There's always the
 option to rename the generated file, but that feels like a hack...

I need to write a wheel retag tool instead of a simple rename, because the
WHEEL metadata is now supposed to contain all the information in the filename
through the Tag and Build keys. This lets us effectively sign the filename.
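
For reference, a sketch of the relevant keys in a wheel's WHEEL metadata file
(values are illustrative, loosely following the PEP 427 example):

    Wheel-Version: 1.0
    Generator: bdist_wheel 1.0
    Root-Is-Purelib: true
    Tag: py2-none-any
    Tag: py3-none-any
    Build: 1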

 Where C extensions are involved, there are other questions. Mostly,
 compiled code is implementation, architecture, and minor version
 specific, so there's little to do here. The stable ABI is relevant,
 but I have no real experience of using it to know how that would work.
 There is also the case of projects with C accelerators - it would be
 good to be able to easily build both the accelerated version and a
 fallback pure-python wheel. I don't believe this is easy as things
 stand - distutils uses a compiler if it's present, so forcing a
 pure-python build when you have a compiler is harder work than it
 needs to be when building binary distributions.

This is an open problem - on PyPy, for example, they might be C *decelerators*.
There should be a better way to have optional or conditional C extensions.

 Comments? Should the default in bdist_wheel and pip wheel be changed
 or should it remain as safe as possible (practicality vs purity)? If
 the latter, should override flags be added, or is renaming the wheel
 in the absence of project metadata the recommended approach? And does
 anyone have any experience of how this might all work with C
 extensions?

I would like to see the setup.cfg metadata used by bdist_wheel
expanded and standardized. The command line override would also be
good. Does anyone have the stomach to put some of that into distutils
or setuptools itself?

Daniel Holth


Re: [Distutils] Parallel installation of incompatible versions

2013-03-20 Thread Bohuslav Kabrda
- Original Message -
 On Wed, Mar 20, 2013 at 1:01 AM, Bohuslav Kabrda bkab...@redhat.com
 wrote:
  So what would be done when CherryPy 4 came? CherryPy 3 is installed
  directly in site-packages, so version 2 and 4 would be treated
  with split-install?
  It seems to me that this type of special casing is not what we
  want. If you develop on one machine and deploy on another machine,
  you have no guarantee that the standard installation of CherryPy
  is the same as on your system. That would force developers to
  actually always install their used versions by split-install, so
  that they could make sure they always import the correct version.
 
 This approach isn't viable, as it is both backwards incompatible with
 the expectations of current Python software and incompatible with the
 requirements of Linux distros and other system integrators (who need
 to be able to add new backwards incompatible versions of software
 without changing the default version).
 

Yep, it's backwards incompatible, sure. I think your proposal is a step in the 
right direction. My proposal is where I think we should be heading in the long 
term (making the big, backwards-incompatible step part of some other huge 
transition, like the Python 2 to Python 3 switch was).
As for Linux distros, that's not an issue AFAICS. We've been doing the same 
with Ruby for quite some time and it works (yes, with some patching here and 
there, but generally it does).
The fact is that this system brings lots of benefits to developers. I'm actually 
quite schizophrenic in this regard, as I'm both a packager and a developer :) and 
I see how these worlds collide on matters like this. From the packager point of 
view I see your point; from the developer point of view I install CherryPy 4, 
import cherrypy, and then find out that I'm still using version 3, which breaks 
my expectations as a developer.

 And I definitely won't shout at people for mentioning what other
 languages do - learning from what works and what doesn't for other
 groups is exactly what we *should* be doing. Many of the features in
 the forthcoming metadata 2.0 specification are driven by stealing
 things that are known to work from Node.js, Perl, Ruby, PHP, RPM,
 DEB,
 etc.
 
 Cheers,
 Nick.
 
 --
 Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
 

-- 
Regards,
Bohuslav Slavek Kabrda.


Re: [Distutils] Parallel installation of incompatible versions

2013-03-20 Thread Daniel Holth
Not sure how you could do a good job of having one version of a package
available by default, and a different one available via requires().
Eggs list the top-level packages they provide, and you could shadow them,
but it seems like it would be really messy.

Ruby Gems appear to have a directory full of gems:
~/.gem/ruby/1.8/gems/. Each subdirectory is {name}-{version} and
doesn't need any suffix - we know what they are because of where they
are.

bundler-1.2.1
json-1.7.5
sinatra-1.3.3
tilt-1.3.3
tzinfo-0.3.33

Each subdirectory contains metadata, and a lib/ directory that would
actually be added to the Ruby module path.

As with pkg_resources, developers are warned to only require gems from code
that is *not* itself imported (preferably in the equivalent of our
console_scripts wrappers). Otherwise you get an unwanted dependency on the
gem system if you ever try to use the same library outside of it.


Re: [Distutils] Building wheels - project metadata, and specifying compatibility

2013-03-20 Thread Paul Moore
On 20 March 2013 12:44, Daniel Holth dho...@gmail.com wrote:
 On the other hand Python 3.4 knows it is compatible with py33 and
 will pick up that wheel too.

 It is designed this way to provide a (small) distinction between the
 safe default and intentional cross-Python-compatible publishing.

Good point. I keep forgetting it does that. (I still maintain that
behaviour is very non-intuitive, but I'm willing to accept that unless
someone else pipes up, it's probably just me :-))

 In addition to a means for projects to specify this themselves, tools
 (bdist_wheel, pip wheel) should probably have a means to override the
 default at the command line, as it will be some time before projects
 specify this information, even once it is standard. There's always the
 option to rename the generated file, but that feels like a hack...

 I need to do a wheel retag tool instead of a simple rename because
 now the WHEEL metadata is supposed to contain all the information in
 the filename through the Tag and Build keys. This lets us effectively
 sign the filename.

Again good point. If I get some free time, I might take a stab at that
if you'd like...

 Where C extensions are involved, there are other questions. Mostly,
 compiled code is implementation, architecture, and minor version
 specific, so there's little to do here. The stable ABI is relevant,
 but I have no real experience of using it to know how that would work.
 There is also the case of projects with C accelerators - it would be
 good to be able to easily build both the accelerated version and a
 fallback pure-python wheel. I don't believe this is easy as things
 stand - distutils uses a compiler if it's present, so forcing a
 pure-python build when you have a compiler is harder work than it
 needs to be when building binary distributions.

 This is an open problem, for example in pypy they might be C
 decelerators. There should be a better way to have optional or
 conditional C extensions.

Agreed. These are definitely hard issues, and a proper solution won't
be quickly achieved. What we have now is a good 80% solution, but
let's keep the remaining 20% in mind.

 Comments? Should the default in bdist_wheel and pip wheel be changed
 or should it remain as safe as possible (practicality vs purity)? If
 the latter, should override flags be added, or is renaming the wheel
 in the absence of project metadata the recommended approach? And does
 anyone have any experience of how this might all work with C
 extensions?

 I would like to see the setup.cfg metadata used by bdist_wheel
 expanded and standardized. The command line override would also be
 good. Does anyone have the stomach to put some of that into distutils
 or setuptools itself?

Agreed. My question would be: should this be exposed anywhere in the
project metadata? (For example, for other tools that use distlib to
build wheels and need to know programmatically what tags to use.)

By the way, one point I dislike with the bdist_wheel solution is that
it explicitly strips #-comments from the end of the universal= line. I
can see why you want to be able to use end-of-line comments, but it's
not part of the standard configparser format, and you don't support
';' style comments (which could confuse people).

Paul


Re: [Distutils] Building wheels - project metadata, and specifying compatibility

2013-03-20 Thread Daniel Holth
On Wed, Mar 20, 2013 at 9:11 AM, Paul Moore p.f.mo...@gmail.com wrote:
 On 20 March 2013 12:44, Daniel Holth dho...@gmail.com wrote:
 On the other hand Python 3.4 knows it is compatible with py33 and
 will pick up that wheel too.

 It is designed this way to provide a (small) distinction between the
 safe default and intentional cross-Python-compatible publishing.

 Good point. I keep forgetting it does that. (I still maintain that
 behaviour is very non-intuitive, but I'm willing to accept that unless
 someone else pipes up, it's probably just me :-))

 In addition to a means for projects to specify this themselves, tools
 (bdist_wheel, pip wheel) should probably have a means to override the
 default at the command line, as it will be some time before projects
 specify this information, even once it is standard. There's always the
 option to rename the generated file, but that feels like a hack...

 I need to do a wheel retag tool instead of a simple rename because
 now the WHEEL metadata is supposed to contain all the information in
 the filename through the Tag and Build keys. This lets us effectively
 sign the filename.

 Again good point. If I get some free time, I might take a stab at that
 if you'd like...

 Where C extensions are involved, there are other questions. Mostly,
 compiled code is implementation, architecture, and minor version
 specific, so there's little to do here. The stable ABI is relevant,
 but I have no real experience of using it to know how that would work.
 There is also the case of projects with C accelerators - it would be
 good to be able to easily build both the accelerated version and a
 fallback pure-python wheel. I don't believe this is easy as things
 stand - distutils uses a compiler if it's present, so forcing a
 pure-python build when you have a compiler is harder work than it
 needs to be when building binary distributions.

 This is an open problem, for example in pypy they might be C
 decelerators. There should be a better way to have optional or
 conditional C extensions.

 Agreed. These are definitely hard issues, and a proper solution won't
 be quickly achieved. What we have now is a good 80% solution, but
 let's keep the remaining 20% in mind.

 Comments? Should the default in bdist_wheel and pip wheel be changed
 or should it remain as safe as possible (practicality vs purity)? If
 the latter, should override flags be added, or is renaming the wheel
 in the absence of project metadata the recommended approach? And does
 anyone have any experience of how this might all work with C
 extensions?

 I would like to see the setup.cfg metadata used by bdist_wheel
 expanded and standardized. The command line override would also be
 good. Does anyone have the stomach to put some of that into distutils
 or setuptools itself?

 Agreed. My question would be, should this be exposed anywhere in the
 project metadata? (For example, for other tools that use distlib to
 build wheels and need to know programmatically what tags to use).

I think setup.cfg counts as far as build metadata is concerned.

 By the way, one point I dislike with the bdist_wheel solution is that
 it explicitly strips #-comments from the end of the universal= line. I
 can see why you want to be able to use end-of-line comments, but it's
 not part of the standard configparser format, and you don't support
 ';' style comments (which could confuse people).

That wasn't really intentional and it probably doesn't need to do
that. It's piggybacking on top of the distutils config parsing system
which may do what's needed already.


Re: [Distutils] Parallel installation of incompatible versions

2013-03-20 Thread Bohuslav Kabrda
- Original Message -
 Not sure how you could do a good job having one version of a package
 available by default, and a different one available by requires().
 Eggs list the top level packages provided and you could shadow them
 but it seems like it would be really messy.
 

Yup, it'd require a decent amount of change and would probably break some 
backwards compatibility, as mentioned.

 Ruby Gems appear to have a directory full of gems:
 ~/.gem/ruby/1.8/gems/. Each subdirectory is {name}-{version} and
 doesn't need any suffix - we know what they are because of where they
 are.
 
 bundler-1.2.1
 json-1.7.5
 sinatra-1.3.3
 tilt-1.3.3
 tzinfo-0.3.33
 
 Each subdirectory contains metadata, and a lib/ directory that would
 actually be added to the Ruby module path.
 

Not exactly: the 1.8 directory contains gems/ and specifications/. The 
specifications/ directory contains {name}-{version}.gemspec files, which hold 
the meta-information for the specific gem. Among other things, a gemspec 
contains require_paths, which are concatenated with gems/{name}-{version} to 
get the load path. So a RubyGems require first looks at the list of specs, 
chooses the proper one (the newest when no version is specified, otherwise the 
specified one) and then computes the load path from it.
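
A rough Python analogue of that lookup, purely to illustrate the flow (the
spec structure is simplified and the version comparison is naive):

    import os

    def gem_load_path(specs, name, version=None):
        """specs: list of dicts with 'name', 'version' and 'require_paths'."""
        candidates = [s for s in specs if s["name"] == name]
        if version is not None:
            candidates = [s for s in candidates if s["version"] == version]
        # newest wins when no version was requested (naive string comparison,
        # just to keep the sketch short)
        spec = max(candidates, key=lambda s: s["version"])
        gem_dir = os.path.expanduser(
            "~/.gem/ruby/1.8/gems/%s-%s" % (spec["name"], spec["version"]))
        return [os.path.join(gem_dir, p) for p in spec["require_paths"]]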

 Like with pkg_resources, developers are warned to only require Gems
 on things that are *not* imported (preferably in the equivalent of
 our
 console_scripts wrappers). Otherwise you get an unwanted Gem
 dependency if you ever tried to use the same gem outside of the gem
 system.
 

I don't really know what you mean by this - could you please reword it?

-- 
Regards,
Bohuslav Slavek Kabrda.


[Distutils] The pypa account on BitBucket

2013-03-20 Thread Nick Coghlan
Hey pip/virtualenv folks, does one of you control the pypa placeholder
account on BitBucket? (it seems possible, given it was created shortly
after the Github account).

I've been pondering the communicating-with-the-broader-community issue
(especially in relation to
http://simeonfranklin.com/blog/2013/mar/17/my-pycon-2013-poster/) and
I'm thinking that the PSF account is the wrong home on BitBucket for
the meta-packaging documentation repo. The PSF has traditionally been
hands off relative to the actual development activities, and I don't
want to change that.

Instead, I'd prefer to have a separate team account, and also talk to
Vinay about moving pylauncher and distlib under that account.

I can create a different account if need be, but if one of you
controls pypa, then it would be good to use that and parallel the
pip/virtualenv team account on GitHub. If you don't already control
it, then I'll write to BitBucket support to see if the account is
actually being used for anything, and if not, if there's a way to
request control over it. Failing that, I'll settle for a
similar-but-different name, but pypa is definitely my preferred
option.

Regards,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia


Re: [Distutils] Parallel installation of incompatible versions

2013-03-20 Thread Daniel Holth
 Like with pkg_resources, developers are warned to only require Gems
 on things that are *not* imported (preferably in the equivalent of
 our
 console_scripts wrappers). Otherwise you get an unwanted Gem
 dependency if you ever tried to use the same gem outside of the gem
 system.


 I don't really know what you mean by this - could you please reword it?

There should be only one call to the linker, at the very top of
execution. Otherwise in this pseudo-language example you can't use
foobar without also using the requires system:

myscript:
requires(a, b, c)
import foobar
run()

foobar:
requires(c, d) # No!


Re: [Distutils] Parallel installation of incompatible versions

2013-03-20 Thread Nick Coghlan
On Wed, Mar 20, 2013 at 6:58 AM, Daniel Holth dho...@gmail.com wrote:
 Like with pkg_resources, developers are warned to only require Gems
 on things that are *not* imported (preferably in the equivalent of
 our
 console_scripts wrappers). Otherwise you get an unwanted Gem
 dependency if you ever tried to use the same gem outside of the gem
 system.


 I don't really know what you mean by this - could you please reword it?

 There should be only one call to the linker, at the very top of
 execution. Otherwise in this pseudo-language example you can't use
 foobar without also using the requires system:

 myscript:
 requires(a, b, c)
 import foobar
 run()

 foobar:
 requires(c, d) # No!

Right, version control and runtime access should be separate steps. In
a virtual environment, you shouldn't need runtime checks at all - all
the version compatibility checks should be carried out when creating
the environment.

Similarly, when a distro defines its site-packages contents, it's creating
an integrated set of interlocking requirements, all designed to work
together. Only when it needs multiple mutually incompatible versions
installed should the versioning system come into play. Assuming we go this
way, distros will presumably install system Python packages into the
versioned layout and then symlink them appropriately from the
available-by-default layout in site-packages.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia


Re: [Distutils] self.introduce(distutils-sig)

2013-03-20 Thread Nick Coghlan
On Wed, Mar 20, 2013 at 3:13 AM, Paul Moore p.f.mo...@gmail.com wrote:
 On 19 March 2013 16:21, Steve Dower steve.do...@microsoft.com wrote:
 As I understand, the issue is the same as between different versions of 
 Python and comes down to not being able to assume a compiler on Windows 
 machines. It's easy to make a source file that will compile for any ABI and 
 platform, but distributing binaries requires each one to be built 
 separately. This doesn't have to be an onerous task - it can be scripted 
 quite easily once you have all the required compilers - but it does take 
 more effort than simply sharing a source file.

 Another nice tool would be some sort of Windows build farm, where
 projects could submit a sdist and it would build wheels for a list of
 supported Python versions and architectures. That wouldn't work for
 projects with complex dependencies, obviously, but could cover a
 reasonable-sized chunk of PyPI (especially if dependencies could be
 added to the farm on request).

 And can I have a pony as well, of course... :-)

This also came up in the discussion over on
http://simeonfranklin.com/blog/2013/mar/17/my-pycon-2013-poster/

I was pointed to an interesting resource:
http://www.lfd.uci.edu/~gohlke/pythonlibs/

(The security issues with that arrangement are non-trivial, but the
convenience factor is huge)

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia


Re: [Distutils] self.introduce(distutils-sig)

2013-03-20 Thread Steve Dower
 From: Nick Coghlan [mailto:ncogh...@gmail.com]
 [snip]

 I was pointed to an interesting resource:
 http://www.lfd.uci.edu/~gohlke/pythonlibs/
 
 (The security issues with that arrangement are non-trivial, but the
 convenience factor is huge)

FWIW, one of the guys on our team has met with Christoph and considers him 
trustworthy.

 Cheers,
 Nick.
 




Re: [Distutils] self.introduce(distutils-sig)

2013-03-20 Thread Adam GROSZER

On 03/20/2013 04:42 PM, Nick Coghlan wrote:

On Wed, Mar 20, 2013 at 3:13 AM, Paul Moore p.f.mo...@gmail.com wrote:

On 19 March 2013 16:21, Steve Dower steve.do...@microsoft.com wrote:

As I understand, the issue is the same as between different versions of Python 
and comes down to not being able to assume a compiler on Windows machines. It's 
easy to make a source file that will compile for any ABI and platform, but 
distributing binaries requires each one to be built separately. This doesn't 
have to be an onerous task - it can be scripted quite easily once you have all 
the required compilers - but it does take more effort than simply sharing a 
source file.


Another nice tool would be some sort of Windows build farm, where
projects could submit a sdist and it would build wheels for a list of
supported Python versions and architectures. That wouldn't work for
projects with complex dependencies, obviously, but could cover a
reasonable-sized chunk of PyPI (especially if dependencies could be
added to the farm on request).

And can I have a pony as well, of course... :-)


This also came up in the discussion over on
http://simeonfranklin.com/blog/2013/mar/17/my-pycon-2013-poster/

I was pointed to an interesting resource:
http://www.lfd.uci.edu/~gohlke/pythonlibs/

(The security issues with that arrangement are non-trivial, but the
convenience factor is huge)

Cheers,
Nick.



Well, a few other links:

http://winbot.zope.org
https://github.com/zopefoundation/zope.wineggbuilder
https://github.com/zopefoundation/zope.winbot

I can tell you getting such a beast to work takes quite some time.



--
Best regards,
 Adam GROSZER
--
Quote of the day:
Each time you are honest and conduct yourself with honesty, a success 
force will drive you toward greater success.  Each time you lie, even 
with a little white lie, there are strong forces pushing you toward 
failure. (Joseph Sugarman)



Re: [Distutils] Setuptools-Distribute merge announcement

2013-03-20 Thread Erik Bray
On Wed, Mar 13, 2013 at 8:54 PM, PJ Eby p...@telecommunity.com wrote:
 Jason Coombs (head of the Distribute project) and I are working on
 merging the bulk of the improvements distribute made into the
 setuptools code base.  He has volunteered to take over maintenance of
 setuptools, and I welcome his assistance.  I appreciate the
 contributions made by the distribute maintainers over the years, and
 am glad to have Jason's help in getting those contributions into
 setuptools as well.  Continuing to keep the code bases separate isn't
 helping anybody, and as setuptools moves once again into active
 development to deal with the upcoming shifts in the Python-wide
 packaging infrastructure (the new PEPs, formats, SSL, TUF, etc.), it
 makes sense to combine efforts.

 Aside from the problems experienced by people with one package that
 are fixed in the other, the biggest difficulties with the fork right
 now are faced by the maintainers of setuptools-driven projects like
 pip, virtualenv, and buildout, who have to either take sides in a
 conflict, or spend additional time and effort testing and integrating
 with both setuptools and distribute.  We'd like to end that pain and
 simplify matters for end users by bringing distribute enhancements to
 setuptools and phasing out the distribute fork as soon as is
 practical.

 In the short term, our goal is to consolidate the projects to prevent
 duplication, wasted effort, and incompatibility, so that we can start
 moving forward. This merge will allow us to combine resources and
 teams, so that we may focus on a stable but actively-maintained
 toolset.  In the longer term, the goal is for setuptools as a concept
 to become obsolete.  For the first time, the Python packaging world
 has gotten to a point where there are PEPs *and implementations* for
 key parts of the packaging infrastructure that offer the potential to
 get rid of setuptools entirely.  (Vinay Sajip's work on distlib,
 Daniel Holth's work on the wheel format, and Nick Coghlan's taking
 up the reins of the packaging PEPs and providing a clear vision for a
 new way of doing things -- these are just a few of the developments in
 recent play.)

 Obsolete, however, doesn't mean unmaintained or undeveloped.  In
 fact, for the new way of doing things to succeed, setuptools will
 need a lot of new features -- some small, some large -- to provide a
 migration path.

 At the moment, the merge is not yet complete.  We are working on a
 common repository where the two projects' history has been spliced
 together, and are cleaning up the branch heads to facilitate
 re-merging them.  We'd hoped to have this done by PyCon, but there
 have been a host of personal, health, and community issues consuming
 much of our available work time.  But we decided to go ahead and make
 an announcement *now*, because with the big shifts taking place in the
 packaging world, there are people who need to know about the upcoming
 merge in order to make the best decisions about their own projects
 (e.g. pip, buildout, etc.) and to better support their own users.

 Thank you once again to all the distribute contributors, for the many
 fine improvements you've made to the setuptools package over the
 years, and I hope that you'll continue to make them in the future.
 (Especially as I begin to phase myself out of an active role in the
 project!)

 I now want to turn the floor over to Jason, who's put together a
 Roadmap/FAQ for what's going to be happening with the project going
 forward.  We'll then both be here in the thread to address any
 questions or concerns you might have.

Quick question regarding open issues on Distribute (of which I have a
handful assigned to me, and a few others I intend to tackle): would it make
sense to just hold off on those until the merge is completed? Also, is there
anything I can do to help with the merge? How is that coming along?

Erik


Re: [Distutils] self.introduce(distutils-sig)

2013-03-20 Thread Paul Moore
On 20 March 2013 16:31, Nick Coghlan ncogh...@gmail.com wrote:
 Then the pip developers, for example, could say we trust Christoph to
 make our Windows installers, and grant him repackager access so he
 could upload the binaries for secure redistribution from PyPI rather
 than needing to host them himself.

Another axis of the same idea would be to allow people to upload
unofficial binaries. The individual would not need to be confirmed
as trusted by the project, but his uploads would *not* be visible by
default on PyPI. Users would be able to opt in to builds by that
individual, and if they did, those builds would be merged in with
what's on PyPI.

That model is much closer to how Christoph is actually working at the
moment - people can choose whether to trust him, but if they do they
can get his builds and the upstream projects don't get involved.

Paul


Re: [Distutils] The pypa account on BitBucket

2013-03-20 Thread Marcus Smith
Nick:

I'm not sure who owns it yet.
If it is one of us, then it would need to be a group vote to use the pypa
brand name like this.
I'll try to get all the pypa people to come here and register their opinion.

Here are my personal thoughts:

I understand the motivation to reuse our name, but it's probably less political
to start with a new, nifty short name.
pypack or something - pack as in a group of people, but also short for
packaging.

In the spirit of the blog post, here are the two doc projects I'd like to see
exist under this new ~pypack group account and linked to from the
main python docs.

1)  Python Packaging User Guide:  to replace the unmaintained
Hitchhiker's guide,  or just get permission to copy that in here and get it
up to date and more complete.
2)  Python Packaging Dev Hub: a simpler name to replace
python-meta-packaging

Give the ~10-15 people that are actively involved in the various packaging
projects and PEPs admin/merge access to help maintain these docs.

And then announce this on python-announce as real and supported indirectly
by the PSF.

People will flock to follow it, IMO, and contribute with pull requests and issues.

Marcus


On Wed, Mar 20, 2013 at 6:39 AM, Nick Coghlan ncogh...@gmail.com wrote:

 Hey pip/virtualenv folks, does one of you control the pypa placeholder
 account on BitBucket? (it seems possible, given it was created shortly
 after the Github account).

 I've been pondering the communicating-with-the-broader-community issue
 (especially in relation to
 http://simeonfranklin.com/blog/2013/mar/17/my-pycon-2013-poster/) and
 I'm thinking that the PSF account is the wrong home on BitBucket for
 the meta-packaging documentation repo. The PSF has traditionally been
 hands off relative to the actual development activities, and I don't
 want to change that.

 Instead, I'd prefer to have a separate team account, and also talk to
 Vinay about moving pylauncher and distlib under that account.

 I can create a different account if need be, but if one of you
 controls pypa, then it would be good to use that and parallel the
 pip/virtualenv team account on GitHub. If you don't already control
 it, then I'll write to BitBucket support to see if the account is
 actually being used for anything, and if not, if there's a way to
 request control over it. Failing that, I'll settle for a
 similar-but-different name, but pypa is definitely my preferred
 option.

 Regards,
 Nick.

 --
 Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia



Re: [Distutils] The pypa account on BitBucket

2013-03-20 Thread Daniel Holth
On Wed, Mar 20, 2013 at 1:39 PM, Marcus Smith qwc...@gmail.com wrote:
 Nick:

 I'm not sure who owns it yet.
 If it is one of us, then it would need to be a group vote to use the pypa
 brand name like this.
 I'll try to get all the pypa people to come here and register their opinion.

 here's my personal thoughts:

 I understand the motivation to reuse our name, but probably less political
 to start a new nifty short name.
 pypack or something. pack as in a group of people, but also short for
 packaging

 In the spirit of the blog post,  here's the 2 doc projects I'd like to see
 exist under this new ~pypack group account, and be linked to from the main
 python docs.

 1)  Python Packaging User Guide:  to replace the unmaintained Hitchhiker's
 guide,  or just get permission to copy that in here and get it up to date
 and more complete.
 2)  Python Packaging Dev Hub: a simpler name to replace
 python-meta-packaging

 give the ~10-15 people that are actively involved in the various packaging
 projects and PEPs admin/merge access to help maintain these docs.

 and then announce this on python-announce as real and supported indirectly
 by the PSF.

 people will flock IMO to follow it and contribute with pulls and issues

 Marcus

I like the Python Packaging Authority brand and think it would be
great to put some renewed authority behind it.


Re: [Distutils] The pypa account on BitBucket

2013-03-20 Thread Kevin Horn
On Wed, Mar 20, 2013 at 12:39 PM, Marcus Smith qwc...@gmail.com wrote:

 Nick:

 I'm not sure who owns it yet.
 If it is one of us, then it would need to be a group vote to use the pypa
 brand name like this.
 I'll try to get all the pypa people to come here and register their
 opinion.

 here's my personal thoughts:

 I understand the motivation to reuse our name, but probably less political
 to start a new nifty short name.
 pypack or something. pack as in a group of people, but also short for
 packaging



I like the pypack name.


 In the spirit of the blog post,  here's the 2 doc projects I'd like to see
 exist under this new ~pypack group account, and be linked to from the
 main python docs.

 1)  Python Packaging User Guide:  to replace the unmaintained
 Hitchhiker's guide,  or just get permission to copy that in here and get it
 up to date and more complete.
 2)  Python Packaging Dev Hub: a simpler name to replace
 python-meta-packaging

 give the ~10-15 people that are actively involved in the various packaging
 projects and PEPs admin/merge access to help maintain these docs.

 and then announce this on python-announce as real and supported indirectly
 by the PSF.

 people will flock IMO to follow it and contribute with pulls and issues

 Marcus


This sounds like a reasonable plan to me.  There definitely needs to be a
user-centric set of docs being maintained someplace.

--
Kevin Horn


Re: [Distutils] The pypa account on BitBucket

2013-03-20 Thread Nick Coghlan
On Wed, Mar 20, 2013 at 10:39 AM, Marcus Smith qwc...@gmail.com wrote:
 Nick:

 I'm not sure who owns it yet.

I ran into Jannis before he left this morning, and he was fairly sure
someone decided it would also be a good idea to register it on
BitBucket after the GitHub group was set up.

 If it is one of us, then it would need to be a group vote to use the pypa
 brand name like this.
 I'll try to get all the pypa people to come here and register their opinion.

 here's my personal thoughts:

 I understand the motivation to reuse our name, but probably less political
 to start a new nifty short name.

A big part of my role at this point is to take the heat for any
potentially political or otherwise controversial issues (similar to
the way Guido takes the heat for deciding what colour various
bikesheds are going to be painted in the core language design - the
BDFL-Delegate title was chosen advisedly).

While we certainly won't do it if you're not amenable as a group, I'll
be trying my best to persuade you that it's a good idea to turn your
self-chosen name into official reality :)

 pypack or something. pack as in a group of people, but also short for
 packaging

The reason I'd like permission to re-use the name is because I want to
be crystal clear that pip *is* the official installer, and virtualenv
is the official way to get venv support in versions prior to 3.3, and
similar for distlib and pylauncher (of course, I also need to make
sure Vinay is OK with that, since those projects currently live under
his personal repo).

I don't want to ask the pypa to change its name, and I absolutely *do
not* want to have people asking whether or not pypa and some other
group are the ones to listen to in terms of how to do software
distribution the Python way. I want to have one group that the core
Python docs can reference and say "if you need to distribute Python
software with and for older Python versions, here's where to go for
the latest and greatest tools and advice". If we have two distinct
names on GitHub and PyPI, it becomes that little bit harder to convey
that pylauncher, pip, virtualenv, distlib are backwards compatible
versions of features of Python 3.4+ and officially endorsed by the
core development team.

 In the spirit of the blog post,  here's the 2 doc projects I'd like to see
 exist under this new ~pypack group account, and be linked to from the main
 python docs.

 1)  Python Packaging User Guide:  to replace the unmaintained Hitchhiker's
 guide,  or just get permission to copy that in here and get it up to date
 and more complete.
 2)  Python Packaging Dev Hub: a simpler name to replace
 python-meta-packaging

 give the ~10-15 people that are actively involved in the various packaging
 projects and PEPs admin/merge access to help maintain these docs.

Yes, that sounds like a good structure.

 and then announce this on python-announce as real and supported indirectly
 by the PSF.

It's not PSF backing that matters, it's the python-dev backing to add
links from the 2.7 and 3.3 versions of the docs on python.org to the
user guide on the new site (and probably from the CPython dev guide to
the packaging developer hub). That's a fair bit easier for me to sell
if it's one group rather than two.

 people will flock IMO to follow it and contribute with pulls and issues

Yes, a large part of my goal here is similar to that of the PSF board
when Brett Cannon was funded for a couple of months to write the
initial version of the CPython developer guide.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia


Re: [Distutils] The pypa account on BitBucket

2013-03-20 Thread Daniel Holth
On Wed, Mar 20, 2013 at 2:01 PM, Nick Coghlan ncogh...@gmail.com wrote:
 On Wed, Mar 20, 2013 at 10:39 AM, Marcus Smith qwc...@gmail.com wrote:
 Nick:

 I'm not sure who owns it yet.

 I ran into Jannis before he left this morning, and he was fairly sure
 someone decided it would also be a good idea to register it on
 BitBucket after the GitHub group was set up.

 If it is one of us, then it would need to be a group vote to use the pypa
 brand name like this.
 I'll try to get all the pypa people to come here and register their opinion.

 here's my personal thoughts:

 I understand the motivation to reuse our name, but probably less political
 to start a new nifty short name.

 A big part of my role at this point is to take the heat for any
 potentially political or otherwise controversial issues (similar to
 the way Guido takes the heat for deciding what colour various
 bikesheds are going to be painted in the core language design - the
 BDFL-Delegate title was chosen advisedly).

 While we certainly won't do it if you're not amenable as a group, I'll
 be trying my best to persuade you that it's a good idea to turn your
 self-chosen name into official reality :)

 pypack or something. pack as in a group of people, but also short for
 packaging

 The reason I'd like permission to re-use the name is because I want to
 be crystal clear that pip *is* the official installer, and virtualenv
 is the official way to get venv support in versions prior to 3.3, and
 similar for distlib and pylauncher (of course, I also need to make
 sure Vinay is OK with that, since those projects currently live under
 his personal repo).

 I don't want to ask the pypa to change its name, and I absolutely *do
 not* want to have people asking whether or not pypa and some other
 group are the ones to listen to in terms of how to do software
 distribution the Python way. I want to have one group that the core
 Python docs can reference and say "if you need to distribute Python
 software with and for older Python versions, here's where to go for
 the latest and greatest tools and advice". If we have two distinct
 names on GitHub and PyPI, it becomes that little bit harder to convey
 that pylauncher, pip, virtualenv and distlib are backwards-compatible
 versions of features of Python 3.4+ and officially endorsed by the
 core development team.

 In the spirit of the blog post,  here's the 2 doc projects I'd like to see
 exist under this new ~pypack group account, and be linked to from the main
 python docs.

 1)  Python Packaging User Guide:  to replace the unmaintained Hitchhiker's
 guide,  or just get permission to copy that in here and get it up to date
 and more complete.
 2)  Python Packaging Dev Hub: a simpler name to replace
 python-meta-packaging

 give the ~10-15 people that are actively involved in the various packaging
 projects and PEPs admin/merge access to help maintain these docs.

 Yes, that sounds like a good structure.

 and then announce this on python-announce as real and supported indirectly
 by the PSF.

 It's not PSF backing that matters, it's the python-dev backing to add
 links from the 2.7 and 3.3 versions of the docs on python.org to the
 user guide on the new site (and probably from the CPython dev guide to
 the packaging developer hub). That's a fair bit easier for me to sell
 if it's one group rather than two.

 people will flock IMO to follow it and contribute with pulls and issues

 Yes, a large part of my goal here is similar to that of the PSF board
 when Brett Cannon was funded for a couple of months to write the
 initial version of the CPython developer guide.

 Cheers,
 Nick.

 --
 Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia

And we really need to double down on this kind of pseudo-totalitarian
propaganda: http://s3.pixane.com/lenin_packaging.png

(only now with more setuptools!)


Re: [Distutils] self.introduce(distutils-sig)

2013-03-20 Thread Donald Stufft

On Mar 20, 2013, at 12:45 PM, Paul Moore p.f.mo...@gmail.com wrote:

 On 20 March 2013 16:31, Nick Coghlan ncogh...@gmail.com wrote:
 Then the pip developers, for example, could say "we trust Christoph to
 make our Windows installers", and grant him "repackager" access so he
 could upload the binaries for secure redistribution from PyPI rather
 than needing to host them himself.
 
 Another axis of the same idea would be to allow people to upload
 unofficial binaries. The individual would not need to be confirmed
 as trusted by the project, but his uploads would *not* be visible by
 default on PyPI. Users would be able to opt in to builds by that
 individual, and if they did, those builds would be merged in with
 what's on PyPI.
 
 That model is much closer to how Christoph is actually working at the
 moment - people can choose whether to trust him, but if they do they
 can get his builds and the upstream projects don't get involved.
 
 Paul


Why can't unofficial binaries just use a separate index? e.g. Christoph can 
just make an index with his binaries.

This solution also works well if someone wants to maintain a curated PyPI.
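
As a rough sketch of how little that takes (assuming the built files
just sit in a local binaries/ directory and get served as static
files), something like this produces a flat links page that pip's
--find-links / easy_install -f can already consume:

    import html
    import os

    BINARIES = "binaries"   # assumed local directory of built files
    OUT = os.path.join(BINARIES, "index.html")

    # A single flat page of links is enough for --find-links style use;
    # a full "simple" index (one sub-page per project) is just more of
    # the same.
    files = sorted(f for f in os.listdir(BINARIES)
                   if not f.endswith(".html"))
    with open(OUT, "w") as page:
        page.write("<html><body>\n")
        for name in files:
            link = html.escape(name, quote=True)
            page.write('<a href="{0}">{0}</a><br/>\n'.format(link))
        page.write("</body></html>\n")

Serve that directory over HTTPS and something like pip install
--find-links https://example.invalid/binaries/ SomeProject (URL made
up) picks the files up.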

-
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA





Re: [Distutils] self.introduce(distutils-sig)

2013-03-20 Thread Donald Stufft

On Mar 20, 2013, at 12:31 PM, Nick Coghlan ncogh...@gmail.com wrote:

 On Wed, Mar 20, 2013 at 9:03 AM, Steve Dower steve.do...@microsoft.com 
 wrote:
 From: Nick Coghlan [mailto:ncogh...@gmail.com]
 [snip]
 
 I was pointed to an interesting resource:
 http://www.lfd.uci.edu/~gohlke/pythonlibs/
 
 (The security issues with that arrangement are non-trivial, but the
 convenience factor is huge)
 
 FWIW, one of the guys on our team has met with Christoph and considers him 
 trustworthy.
 
 Thanks, that's great to know, and ties into an idea that I just had.
 In addition to whether or not the build is trusted, there's also the
 risk of MITM attacks against the download site (less so when automated
 installers aren't involved, but still a risk). We just switched PyPI
 over to HTTPS for that very reason.
 
 The idle thought I had was that it may be useful if PyPI users could
 designate other users as repackagers for their project, and PyPI
 offered an interface that was *just* file uploads for an existing
 release.

I *think* that, if done properly, a TUF-secured API can be set up so that
the role for signing certain files can be delegated, but I'm not sure.

 
 Then the pip developers, for example, could say "we trust Christoph to
 make our Windows installers", and grant him "repackager" access so he
 could upload the binaries for secure redistribution from PyPI rather
 than needing to host them himself.
 
 We'd probably want something like this for an effective build farm
 system anyway, this way it could work regardless of whether it was a
 human or an automated system converting the released sdists to
 platform specific binaries.
 
 Cheers,
 Nick.
 
 -- 
 Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia


-
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA





Re: [Distutils] self.introduce(distutils-sig)

2013-03-20 Thread Tres Seaver

On 03/20/2013 06:13 AM, Paul Moore wrote:

 Another nice tool would be some sort of Windows build farm, where 
 projects could submit a sdist and it would build wheels for a list of 
 supported Python versions and architectures. That wouldn't work for 
 projects with complex dependencies, obviously, but could cover a 
 reasonable-sized chunk of PyPI (especially if dependencies could be 
 added to the farm on request).

The Zope Foundation pays hosting charges for a box which runs Windows
tests for ZF projects, and also builds and uploads Windows binaries (eggs
and MSIs) for them when they are released.

  http://winbot.zope.org/

As an example, look at any recent zope.interface release, e.g.:

 https://pypi.python.org/pypi/zope.interface/4.0.5#downloads



Tres.
-- 
===
Tres Seaver  +1 540-429-0999  tsea...@palladion.com
Palladion Software   Excellence by Designhttp://palladion.com


Re: [Distutils] The pypa account on BitBucket

2013-03-20 Thread Paul Moore
On 20 March 2013 18:01, Nick Coghlan ncogh...@gmail.com wrote:
 If it is one of us, then it would need to be a group vote to use the pypa
 brand name like this.
 I'll try to get all the pypa people to come here and register their opinion.

 here's my personal thoughts:

 I understand the motivation to reuse our name, but probably less political
 to start a new nifty short name.

 A big part of my role at this point is to take the heat for any
 potentially political or otherwise controversial issues (similar to
 the way Guido takes the heat for deciding what colour various
 bikesheds are going to be painted in the core language design - the
 BDFL-Delegate title was chosen advisedly).

 While we certainly won't do it if you're not amenable as a group, I'll
 be trying my best to persuade you that it's a good idea to turn your
 self-chosen name into official reality :)

I don't have a problem with the extension of the pypa brand name to
cover this, and I'm all in favour of pip and virtualenv being
sanctioned as the official answers in this space. I'd be a little
cautious over some of the administrative aspects of such a move,
though - consider what happens if there's a sudden rush of people who
want to contribute to packaging documents: do we want them to have
commit rights on pip? Do we have different people as committers on the
github and bitbucket repos? Not insurmountable issues, but worth
considering.

Paul.


Re: [Distutils] self.introduce(distutils-sig)

2013-03-20 Thread Paul Moore
On 20 March 2013 18:29, Donald Stufft don...@stufft.io wrote:
 Why can't unofficial binaries just use a separate index? e.g. Christoph can 
 just make an index with his binaries.

 This solution also works well if someone wants to maintain a curated PyPI.

The only real issue I know of is hosting. I've thought about doing
this myself, but don't have (free) hosting space I could use, and I
don't really feel like paying for and setting something up on spec. I
could host the files somewhere like bitbucket, but that feels like an
abuse for any substantial number of packages.

I presume Christoph doesn't publish his binaries as an index because
wininst installers are typically downloaded and installed manually,
although AIUI easy_install could use them if they were in index
format.

But you're right, people *can* do that.
Paul.


Re: [Distutils] Setuptools-Distribute merge announcement

2013-03-20 Thread PJ Eby
On Wed, Mar 20, 2013 at 12:42 PM, Erik Bray erik.m.b...@gmail.com wrote:
 Quick question regarding open issues on Distribute (of which I have a
 handful assigned to me, and of which I intend to tackle a few others):
  Would it it make sense to just hold off on those until the merge is
 completed?

I'd personally say no, go ahead and do the work now, except that it
might be making more work for Jason later at the repository-munging
level.   ;-)  So, hopefully he'll chime in here with a yea or nay.


 Also is there anything I can do to help with the merge?
 How is that coming along?

It's... somewhat of a mess, actually.  As Jason mentioned,
distribute didn't import setuptools' version history at the start, so
it's proving to be a bit of a challenge to merge in a way that
maintains history.  My original suggestion for merging was to just cherrypick
patches and apply them to setuptools (w/appropriate credits), because
apart from the added tests and new features, there's at most about 5%
difference between setuptools and distribute by line count.  (And the
added tests and features are mostly in separate files, so can be added
without worrying about conflicts.  And a lot of the remaining added
stuff is being taken out, anyway, because it's the stuff that
distribute uses to pretend it's setuptools.)

Some challenges that have arisen since are that the more changes
Jason makes to the distribute branch in our merged repo, the less
hg annotate is actually going to show the real authors of the code
anyway when we're done.  (For example, putting back in the missing
entry_points.txt whose absence has been causing problems w/distribute
lately.)  And we're getting huge and (mostly meaningless) conflicts
during attempted merges, too.

So, if you have any thoughts on what can be done to fix that, by all
means, suggest away.  ;-)


Re: [Distutils] self.introduce(distutils-sig)

2013-03-20 Thread Daniel Holth
On Wed, Mar 20, 2013 at 3:10 PM, Paul Moore p.f.mo...@gmail.com wrote:
 On 20 March 2013 18:29, Donald Stufft don...@stufft.io wrote:
 Why can't unofficial binaries just use a separate index? e.g. Christoph can 
 just make an index with his binaries.

 This solution also works well if someone wants to maintain a curated PyPI.

 The only real issue I know of is hosting. I've thought about doing
 this myself, but don't have (free) hosting space I could use, and I
 don't really feel like paying for and setting something up on spec. I
 could host the files somewhere like bitbucket, but that feels like an
 abuse for any substantial number of packages.

 I presume Christoph doesn't publish his binaries as an index because
 wininst installers are typically downloaded and installed manually,
 although AIUI easy_install could use them if they were in index
 format.

 But you're right, people *can* do that.
 Paul.

If we know who to ask, we can get hosting (not my area of expertise).


Re: [Distutils] Parallel installation of incompatible versions

2013-03-20 Thread PJ Eby
On Wed, Mar 20, 2013 at 8:29 AM, Nick Coghlan ncogh...@gmail.com wrote:
 I'm not wedded to using *actual* pth files as a cross-platform linking
 solution - a more limited format that only supported path additions,
 without the extra powers of pth files would be fine. The key point is
 to use the .dist-info directories to bridge between unversioned
 installs in site packages and finding parallel versions at runtime
 without side effects on all Python applications executed on that
 system (which is the problem with using a pth file in site packages
 to bootstrap the parallel versioning system as easy_install does).

So why not just make a new '.pth-info' file or directory dropped into
a sys.path directory for this purpose?  Reusing .dist-info as an
available package (vs. an *importable* package) looks like a bad idea
from a compatibility point of view.  (For example, it's immediately
incompatible with Distribute, which would interpret the redundant
.dist-info as being importable from that directory.)


 If a distribution has been installed in site-packages (or has an
 appropriate *.pth file there), there won't be any *.pth file in the
 .dist-info directory.

Right, but if this were the protocol, you wouldn't be able to tell
what's *already on sys.path* without reading all those .dist-info
directories to see if they *had* .pth files.  You'd have to look for the ones that
were missing a .pth file, in other words, in order to know which of
those .dist-info's represented a package that was actually importable
from that directory.


 The *.pth file will only be present if the package has been installed 
 *somewhere else*.

...which is precisely the thing that makes it incompatible with PEP
376 (and Distribute ATM).  ;-)


 However, it occurs to me that we can do this differently, by
 explicitly involving a separate directory that *isn't* on sys.path by
 default, and use a path hook to indicate when it should be accessed.

Why not just put a .pth-info file that points to the other location,
or whatever?  Then it's still discoverable, but you don't have to open
it unless you intend to add it to sys.path (or an import hook or
whatever).

If it needs to list a bunch of different directories in it, or
whatever, doesn't matter.  The point is, using a file in the *same*
sys.path directory saves a metric tonne of complexity in sys.path
management.  Plus, you get the available packages in a single
directory read, and you can open whatever files you need in order to
pick up additional information in the case of needing a non-default
package.
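
To make that concrete, here's a rough sketch of the discovery step
(the .pth-info name and its one-directory-per-line format are just
placeholders for whatever would actually get standardised):

    import os

    def available_distributions(directory):
        # One listdir() tells you everything installed for this sys.path
        # entry without opening a single file: PEP 376 *.dist-info
        # entries plus any *.pth-info pointer files.
        dists, pointers = {}, {}
        for entry in os.listdir(directory):
            if entry.endswith(".dist-info"):
                # naive name/version split; fine for a sketch
                name, _, version = entry[:-len(".dist-info")].partition("-")
                dists.setdefault(name, []).append(version)
            elif entry.endswith(".pth-info"):
                name, _, version = entry[:-len(".pth-info")].partition("-")
                pointers[(name, version)] = os.path.join(directory, entry)
        return dists, pointers

    def extra_dirs_for(pointers, name, version):
        # The pointer file is only opened if that particular version is
        # actually requested; one directory per line is assumed.
        path = pointers.get((name, version))
        if path is None:
            return []  # installed normally; already importable from here
        with open(path) as f:
            return [line.strip() for line in f if line.strip()]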


 Under this version of the proposal, PEP 376 would remain unchanged,
 and would effectively become the database of installed distributions
 available on sys.path by default.

That's what it is *now*.  Or more precisely, it's a directory of
packages that would be importable if a given directory is present on
sys.path.  It doesn't say anything about sys.path as a whole.


 - rather than the contents being processed directly from sys.path, we
 would add a versioned-packages entry to sys.path with a path hook
 that maps to a custom module finder that handles the extra import
 locations without the same issues as the current approach to modifying
 sys.path in pkg_resources (which allows shadowing development versions
 with installed versions by inserting at the front), or the opposite
 problem that would be created by appending to the end (allowing
 default versions to shadow explicitly requested versions)

Note that you can do this without needing a separate sys.path entry.
You can give alternate versions whatever precedence they *would* have
had, by replacing the finder for the relevant directory.
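
Rough sketch of what I mean (the paths and the __versioned__ layout
are made up, and it uses the newer find_spec-style finder protocol):

    import sys
    from importlib.machinery import (FileFinder, SourceFileLoader,
                                     SOURCE_SUFFIXES)

    SITE_DIR = "/usr/lib/python3/site-packages"  # made-up path
    ALT_DIR = SITE_DIR + "/__versioned__/CherryPy-2.2"  # made-up layout

    _details = [(SourceFileLoader, SOURCE_SUFFIXES)]  # extensions omitted

    class ChainedFinder:
        # Path entry finder that consults the versioned directory first,
        # then the site directory itself, so the alternate version gets
        # exactly the precedence the directory already had on sys.path.
        def __init__(self, *dirs):
            self._finders = [FileFinder(d, *_details) for d in dirs]

        def find_spec(self, fullname, target=None):
            for finder in self._finders:
                spec = finder.find_spec(fullname)
                if spec is not None:
                    return spec
            return None

        def invalidate_caches(self):
            for finder in self._finders:
                finder.invalidate_caches()

    def versioned_hook(path):
        if path != SITE_DIR:
            raise ImportError  # other entries keep the default machinery
        return ChainedFinder(ALT_DIR, SITE_DIR)

    sys.path_hooks.insert(0, versioned_hook)
    sys.path_importer_cache.pop(SITE_DIR, None)  # let the hook take over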

But it would be better if you could be clearer about what precedence
you want these other packages to have, relative to the matching
sys.path entries.  You seem to be speaking in terms of a single
site-packages and single versioned-packages directory, but
applications and users can have more complicated paths than that.  For
example, how do PYTHONPATH directories factor into this?  User-site
packages?  Application plugin directories?  Will all of these need
their own markers?

That's why I think we should focus on *individual* directories (the
way PEP 376 does), rather than trying to define an overall precedence
system.

While there are some challenges with easy_install.pth, the basic
precedence concept it uses is sound: an encapsulated package
discovered in a given directory takes precedence over unencapsulated
packages in the same directory.

The place where easy_install falls down is in the implementation: not
only does it have to munge sys.path in order to insert those
non-defaults, it also installs *everything* in an encapsulated form,
making a huge sys.path.

But you can take the same basic idea and apply it to an import hook; I
just think that rather than having the extra directory, it's less
coupling and complexity if we look at the level of directories rather
than sys.path as a whole.

This 

Re: [Distutils] Parallel installation of incompatible versions

2013-03-20 Thread PJ Eby
On Wed, Mar 20, 2013 at 10:35 AM, Nick Coghlan ncogh...@gmail.com wrote:
 Assuming we go this way, distros will presumably install system Python
 packages into the versioned layout and then symlink them appropriately
 from the "available by default" layout in site-packages.

If they're going to do that, then why not put the versioned layout
directly into site-packages in the first place?


Re: [Distutils] The pypa account on BitBucket

2013-03-20 Thread Carl Meyer
FWIW I think if pip and virtualenv are being elevated to a new level
of "official", I have no problem with the pypa name being used as the
umbrella for the next few years' "improve python packaging" efforts. I
know I've talked to some people who don't follow packaging closely who
thought this was already the case and were surprised to learn that
e.g. distribute was not part of the PyPA.

Python packaging already suffers from a "too many similar but slightly
different names" problem; let's consolidate rather than exacerbate it.

I just checked and my Bitbucket account does not have admin control over
bitbucket.org/pypa - must be Jannis?

Regarding other administrative issues:

On 03/20/2013 11:59 AM, Paul Moore wrote:
 I don't have a problem with the extension of the pypa brand name to
 cover this, and I'm all in favour of pip and virtualenv being
 sanctioned as the official answers in this space. I'd be a little
 cautious over some of the administrative aspects of such a move,
 though - consider what happens if there's a sudden rush of people who
 want to contribute to packaging documents: do we want them to have
 commit rights on pip? Do we have different people as committers on the
 github and bitbucket repos? Not insurmountable issues, but worth
 considering.

We already have multiple teams on the github PyPA to allow for
different committers on pip vs virtualenv. AFAIK bitbucket also supports
per-repo access control. So I don't see any reason this should be a
problem: using the name PyPA as an umbrella does not imply that there
must be a single list of people with equal access to all PyPA repositories.

Carl





Re: [Distutils] The pypa account on BitBucket

2013-03-20 Thread Marcus Smith
so, counting the beans...  :  )
we have 8 active pypa people in my count.
I think 5 yea votes would make it official
I see 3 yea votes so far.
I'm willing to change my vote for the good of the whole if needed, but
I'm still curious to hear how non-pypa feel about this.
Marcus


Re: [Distutils] The pypa account on BitBucket

2013-03-20 Thread Donald Stufft

On Mar 20, 2013, at 6:22 PM, Marcus Smith qwc...@gmail.com wrote:

 so, counting the beans...  :  )
 we have 8 active pypa people in my count.
 I think 5 yea votes would make it official
 I see 3 yea votes so far.
 I'm willing to change my vote for the good of the whole if needed, but I'm 
 still curious to hear how non-pypa feel about this.
 Marcus



+0

-
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA





Re: [Distutils] The pypa account on BitBucket

2013-03-20 Thread Nick Coghlan
On Wed, Mar 20, 2013 at 3:05 PM, Carl Meyer c...@oddbird.net wrote:
 We already have multiple teams on the github PyPA to allow for
 different committers on pip vs virtualenv. AFAIK bitbucket also supports
 per-repo access control. So I don't see any reason this should be a
 problem: using the name PyPA as an umbrella does not imply that there
 must be a single list of people with equal access to all PyPA repositories.

Indeed, we use this on the PSF BitBucket repos - you can define groups
to make it easy to give the same set of people access to multiple
repos, but there's no requirement that the access controls to a team's
repos all be the same.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia


Re: [Distutils] The pypa account on BitBucket

2013-03-20 Thread Alex Clark

On 2013-03-20 22:22:40 +, Marcus Smith said:


so, counting the beans...  :  )
we have 8 active pypa people in my count.
I think 5 yea votes would make it official
I see 3 yea votes so far.
I'm willing to change my vote for the good of the whole if needed, 
but I'm still curious to hear how non-pypa feel about this.



It's shorter than "The Fellowship of the Packaging" (and FOTP is not
as attractive an acronym) :-). IIUC, Nick plans to do some official
pimping of pip and venv and wants to use the PyPA brand/organization
to do it… I would say +0 in general, and +1 to using PyPA instead of a
new name. Seems like a good fit.



Alex




Marcus



--
Alex Clark · http://about.me/alex.clark

