[Distutils] What's the use case of testpypi?

2017-10-30 Thread Toshio Kuratomi
When we locked down pypi to prevent uploading an sdist to overwrite a
previous one, I remember that some people wanted a brief window to check
for brown paper bag issues and be able to upload a new tarball in that
window if needed.  IIRC, those people were told to use testpypi for
that sort of thing.  Upload potential tarball to testpypi.  If it
works, go ahead and rerun to upload to the real pypi.  If it doesn't,
fix it and try again.
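For concreteness, the intended workflow looks roughly like this (a sketch
only; the repository names assume a ~/.pypirc with both indexes configured,
and foo is a hypothetical project):

% python setup.py sdist
% twine upload --repository testpypi dist/foo-1.0.tar.gz
% pip install --index-url https://test.pypi.org/simple/ foo
(test the installed package; if it looks good:)
% twine upload --repository pypi dist/foo-1.0.tar.gz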

This past week I decided to try out that workflow for a project that
I'm managing the releases for and ran into a snag.  testpypi has the
same behaviour as production pypi wherein you can only upload a given
sdist once and afterwards it's no longer allowed.  For the use case
above this is problematic.  It essentially changes the idea of "test
your release on testpypi before making a release" into "you get two
chances to get it right if you use testpypi" which, although better
than uploading directly to pypi, still leaves a lot of room for error
(let's face it: I know that if I'm stuck releasing late at night due
to time constraints and make a mistake, chances are better than normal
that my fix won't be the perfection that my ordinary code is and could
have other showstopper bugs that I'd want my testing to catch as well
;-)

Is this something that we could change for testpypi?  It could be
implemented in many ways: straight overwrite, being able to destroy a
version so that it seems to never have existed, or being able to
destroy and recreate a package so that it has no uploaded sdists
recorded.

On the other side of the usefulness of enabling the testing use case
above, such a change would be a difference between testpypi and
production pypi meaning that it would no longer be testing exactly the
same functionality as will be deployed in production.  I'm not sure if
that's a more important consideration or not.  I figured that unless I
asked, I would never know the answer :-)

Thanks,
-Toshio


Re: [Distutils] Python 3.x Adoption for PyPI and PyPI Download Numbers

2015-04-21 Thread Toshio Kuratomi
On Tue, Apr 21, 2015 at 01:54:55PM -0400, Donald Stufft wrote:
 
 Anyways, I'll have access to the data set for another day or two before I
 shut down the (expensive) server that I have to use to crunch the numbers, so
 if there's anything anyone else wants to see before I shut it down, speak up
 soon.
 
Where are curl and wget getting categorized in the User Agent graphs?

Just morbidly curious as to whether they're in with Browser and therefore
mostly unused or Unknown and therefore only slightly less unused ;-)

-Toshio




Re: [Distutils] Create formal process for claiming 'abandoned' packages

2014-09-20 Thread Toshio Kuratomi
On Sat, Sep 20, 2014 at 11:34 AM, Donald Stufft don...@stufft.io wrote:


 For the record, CPAN and npm both have similar things allowing someone to
 take
 over an abandoned project.

 I don’t believe ruby gems has an official policy and it appears that they
 are hesitant to do this from the threads I’ve seen (though they mentioned
 doing it for _why).

Good information.

 Most of the Linux distros have some mechanism for someone to claim that a
 particular package in the distro is no longer maintained and to attempt to
 take it over, though it is somewhat different.

Yeah, I come from distro land but I'm hesitant to point directly at
any of our documented policies on this because there are some
differences between being a bunch of people working together to make a
set of curated and integrated packages vs a loosely associated group
of developers who happen to use a shared namespace within a popular
service.  All distros I can think of have some sort of self-governance
whereas pypi is more akin to a bunch of customers making use of a
service.  Some of the distro policies don't apply very well in this
space.  Some do, however, so I hope other people who are familiar with
their distros will also filter the relevant policy ideas from their
realms and put them forward.

-Toshio


Re: [Distutils] Create formal process for claiming 'abandoned' packages

2014-09-20 Thread Toshio Kuratomi
On Sat, Sep 20, 2014 at 1:30 AM, John Wong gokoproj...@gmail.com wrote:
 Hi all.

 TL;DR version: I think

 * an option to enroll in automatic ownership transfer
 * an option to promote Request for Adoption
 * don't transfer unless there are no releases on the index

 will be reasonable to me.

 On Fri, Sep 19, 2014 at 9:26 PM, Richard Jones rich...@python.org wrote:


 In light of this specific case, I have an additional change that I think
 I'll implement to attempt to prevent it again: In the instances where the
 current owner is unresponsive to my attempts to contact them, *and* the
 project has releases in the index, I will not transfer ownership. In the
 cases where no releases have been made I will continue to transfer
 ownership.


 I believe this is the best solution, and frankly, people in the OSS world
 have been forking all these years
 should someone disagree with the upstream or just believe they are better
 off with the fork. I am not
 a lawyer, but one has to look at any legal issue with ownership transfer. I
 am not trying to scare
 anyone, but the way I see ownership transfer (or even modifying the index on
 behalf of me) is the same
 as asking Twitter or Github to grant me a username simply because the
 account has zero activity.

This is a great example; however, I think you're assuming that the
answer to the question of whether services like twitter and github
(and facebook and email service providers and many other
service-customer relationships) should sometimes grant username
takeovers is 100% no, and I don't believe that's the case.  I mean, in
the past year there was a loud outcry about facebook not being willing
to grant access to an account where the user had died and their family
wanted access to the data so as to preserve it.  Facebook eventually
granted access in that case.  Email has historically been transferred
quite frequently.  When you quit a job or leave a university your
email address is often taken from you and, when someone with a similar
name or inclination arrives, that address can be given to someone
else.

 Toshio Kuratomi a.bad...@gmail.com wrote:

 But there are
 also security concerns with letting a package bitrot on pypi.


 Again, I think that people should simply fork. The best we can do is simply
 prevent
 the packages from being downloaded again. Basically, shield all the packages
 from public. We preserve what people did and had. We can post a notice
 so the public knows what is going on.

 Surely it sucks to have to use a fork when Django or Requests are forked and
 now everyone has to call it something different and rewrite their code.
 But that's the beginning of a new chapter. The community has to be reformed.
 It sucks but I think it is better in the long run. You don't have to argue
 with the
 original owner anymore in theory.

I'm on the fence over the model that I think you've got in your head
here, but I think it's more important to talk about why demanding that
people fork is the wrong path to take in my example, which I think is
much more cut and dried.

Let's say you belong to a large project with 50 committers and a
user's mailing list that numbers in the thousands of subscribers.  The
project owns a domain name with a large website and shoot, maybe they
even have a legal body that serves as a place to take donations,
register trademarks and so forth.  You happen to be the release
manager.  You've been with the project since it was a small 5 person
endeavour.  While everyone else was busy coding, you specialized in
deployment, installation, and, of course, creating a tarball to upload
to pypi on every release.  People may occasionally think that they
should help you out but hey, you've always done it, no one has reason
to complain, and besides, there's this really important bug that they
should be working on fixing instead...

So then you die.  It's unexpected.  Hit by a bus.  Eaten by a
velociraptor.  You know the various hypothetical scenarios.  Well, the
project is still vibrant.  It still has 49 committers.  It's still the
owner of a trademark and a domain name.  It still has thousands of
users.  Now that it's a necessity, it can even find other people to
volunteer to replace you as release manager.

What it doesn't have is permission to upload to pypi anymore.

I think if someone who was asked to transfer ownership to another
member of the upstream project had the time to research this, they'd
have no trouble at all deciding that the right course of action would
be to transfer ownership.  In this scenario all of the facts point towards
the upstream being the people who should have rights to upload to pypi
and they simply didn't have the foresight to assure that they wouldn't
lose that right through an accident.  Now what if we start taking some
of the features of the scenario away?  What if there wasn't a
foundation?  A trademark?  A domain name?  What if the release manager
disappeared from the internetz and no one knew if he...

Re: [Distutils] Create formal process for claiming 'abandoned' packages

2014-09-19 Thread Toshio Kuratomi
On Fri, Sep 19, 2014 at 9:26 PM, Richard Jones rich...@python.org wrote:

 When someone requests to take over a listing on PyPI, the process is:

 * If the request comes in through some means other than the sf.net support
 tracker, I require the requestor to make the request through that tracker so
 there is a record,
 * I ask whether they have contacted the current owner,
 * I personally contact the owner through whatever means I have (sometimes
 this means using the address listed for the user in PyPI, sometimes that
 address is not valid so I use other means where possible),

This seems like the step where change would be most fruitful.  The
idea of a public list mentioned before allows a variety of feedback:

1) The maintainer themselves
2) People who know the maintainer and have an alternate method to contact them
3) Other people who know the project and can raise an objection to the
exact person who is being added as a new owner

Another thought here is that it's often best to use every means of
contacting someone that you reasonably have available.  So if there's
a valid email in pypi and a valid email in your contacts, use both.
The public list idea essentially lets you crowdsource additional
methods of contacting the maintainer.

 There's been some suggestions made:

 * Publicly announcing the intention to make the change is a good one, though
 again finding an appropriate forum that enough people would actually read is
 tricky.

If there's no appropriate forum, starting a new one might be the best
option.  Uploaders to pypi could certainly be seen as an audience that
doesn't match well with any other existing mailing list.


 In light of this specific case, I have an additional change that I think
 I'll implement to attempt to prevent it again: In the instances where the
 current owner is unresponsive to my attempts to contact them, *and* the
 project has releases in the index, I will not transfer ownership. In the
 cases where no releases have been made I will continue to transfer
 ownership.

This is tricky.  There are certainly security issues with allowing
just anyone to take over a popular package at any time.  But there are
also security concerns with letting a package bitrot on pypi.  Say
that the 4 pypi maintainers of Django or the 6 pypi maintainers of pip
became unresponsive (it doesn't even have to be forever... that 6
month sabbatical could correspond with something happening to your
co-maintainers as well).  And the still active upstream project makes
a new security fix that they need to get into the hands of their users
ASAP.  We don't want pypi to block that update from going out.  Even
if the project creates a new pypi package name and uploads there,
would we really want the last package on pypi, which all sorts of old
documentation and blog posts on the internet are pointing to, to be the
insecure one?

So I don't think an absolute "we will never transfer ownership once
code is released" policy is a good idea here.  It's a good idea to increase
the means used to determine if the current maintainer can be reached
and it's a good idea to throw extra eyes at vetting whether a transfer
is warranted.  It may be a good idea to add more criteria around what
makes for an allowable transfer (for instance, in my examples, there's
still a large, well known canonical upstream even though the specific
members of that upstream responsible for uploading to pypi have gone
unresponsive.  That might be a valid criterion, whereas one-coder
projects being replaced by other one-coder forks might be a case where
you simply say "rename, please").

It could help to have other people involved in the decision making for
this.  At the least, having other people involved will spread
responsibility.  At best it gives the group additional man-hours to
research the facts in the case.


One final thought in regards to ticket 407.  My impression from
reading the notes is that this was not a complete invalidation of the
current process.  In the end, the current owner was alerted to the
takeover attempt and also was in a position to do something about it
since they disagreed with what was happening.  Those are both points
in favor of some pieces of the process (adding the new owner instead
of replacing the owner).  This might not be sufficient for a malicious
attack on a project but it does show that the process does have some
good features in terms of dealing with mistakes in communication.

-Toshio


Re: [Distutils] Pycon

2014-03-31 Thread Toshio Kuratomi
On Tue, Apr 01, 2014 at 08:41:12AM +1000, Nick Coghlan wrote:
 
 On 1 Apr 2014 03:26, Barry Warsaw ba...@python.org wrote:
 
  On Mar 28, 2014, at 03:06 PM, Daniel Holth wrote:
 
  Who is going to pycon? I will be there.
 
  I'll be there, for the duration (language summit through sprints).  It would
  be great to have an OpenSpace or BoF for discussing the intersection of
  Python packaging issues and distros.
 
 Oh, good idea. Toshio and a bunch of other Fedora folks will also be there
 (unfortunately not Slavek this year - maybe 2015). Would be good to discuss
 all the distro integration support in metadata 2.0 :)

Yep, I'll be there from the Language Summit until the end.  I'm not sure that
any of the other Fedora folks who are coming this year are interested in
packaging; we're a bit web-developer heavy this year.

-Toshio




[Distutils] Shebang lines, /usr/bin/python, and PEP394

2013-07-25 Thread Toshio Kuratomi
Over on python-dev we're talking about Linux Distributions switching from
python2 to python3, what steps they need to take and in what order.  One of
the things that's come up [1]_ is that a very early step in the process is 
making
sure that shebang lines use /usr/bin/python2 or /usr/bin/python3 as noted in
PEP394 [2]_.  Faced with the prospect of patching a whole bunch of scripts
in the distribution, I'm wondering what distutils, distlib, setuptools, etc.
do with shebang lines.
* Do they rewrite shebang lines?
* If so, do they use #!/usr/bin/python2 or do they use #!/usr/bin/python ?
* If the latter, is there hope that we could change that to match PEP-394's
  recommendations?  (setuptools seems to be moving relatively quickly these
  days, so that seems reasonably easy; distutils is tied to the release
  schedule of core python-2.7.x, although if the change is accepted into the
  CPython tree we might consider backporting it to the current distribution
  package early.)

.. [1]_: http://mail.python.org/pipermail/python-dev/2013-July/127565.html
.. [2]_: http://www.python.org/dev/peps/pep-0394/#recommendation
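For concreteness, here is a minimal sketch (not the actual distutils or
setuptools code; the function name and the rewrite policy are my own
illustration) of the kind of shebang rewrite I'm asking about:

import re

def rewrite_shebang(path, interpreter='/usr/bin/python2'):
    # Replace an unversioned python shebang with a versioned one, per
    # PEP 394.  Already-versioned shebangs (python2/python3) are left alone.
    with open(path) as f:
        lines = f.readlines()
    if lines and re.match(r'#!.*\bpython(\s|$)', lines[0]):
        lines[0] = '#!%s\n' % interpreter
        with open(path, 'w') as f:
            f.writelines(lines)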

Thanks,
Toshio




Re: [Distutils] [issue152] setuptools breaks with from __future__ import unicode_literals in setup.py

2013-07-06 Thread Toshio Kuratomi
On Sat, Jul 06, 2013 at 06:52:05AM +, mbogosian wrote:
 
 New submission from mbogosian:
 
 unicode_literals break a bunch of stuff in setuptools. Considering they may 
 become the default at some point, this should be fixed...? I do not know if 
 this is related to issue 78.
 
 To reproduce, run the attached setup.py (output below). Comment out the 
 unicode_literals line in setup.py and try it again (everything should work).
 
 % DISTUTILS_DEBUG=t python -c 'import setuptools; print setuptools.__version__'
 0.8
 % unzip -d foo_test.zip ; cd foo_test
 ...
 % DISTUTILS_DEBUG=t python setup.py build
[snip output]
 % DISTUTILS_DEBUG=t python setup.py nosetests
[snip output]

Not sure what the unicode model is in setuptools but one way to look at this
is that in python2, the setuptools API takes byte str and in python3, the API
takes unicode str.  So this is a case of the setup.py being invalid.

If you have:
from __future__ import unicode_literals

That doesn't change what the api takes as input; it only changes how you
express it.  So a package author who does from __future__ import
unicode_literals would also need to do this to make things work:

    'package_dir' : {'': b'src'},
    'packages'    : setuptools.find_packages(b'src',
                        exclude=(b'foo', b'test', b'test.*')),

Someone else will have to speak to whether that's the intended model,
though.
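For illustration, a minimal setup.py along those lines might look like this
(a python2-oriented sketch only -- the src/ layout is hypothetical, and
whether this matches setuptools' intended unicode model is exactly the open
question):

from __future__ import unicode_literals
import setuptools

setuptools.setup(
    # b'...' literals stay byte str on python2 despite unicode_literals
    name=b'foo',
    version=b'0.1',
    package_dir={b'': b'src'},
    packages=setuptools.find_packages(b'src', exclude=(b'test', b'test.*')),
)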

-Toshio




Re: [Distutils] Add optional password_command .pypirc value

2013-03-08 Thread Toshio Kuratomi
On Fri, Mar 08, 2013 at 12:57:54PM -0500, Donald Stufft wrote:
 On Mar 8, 2013, at 12:47 PM, Lennart Regebro rege...@gmail.com wrote:
 
  On Fri, Mar 8, 2013 at 6:01 PM, Donald Stufft don...@stufft.io wrote:
  I dislike hijacking SSH to tunnel a HTTP protocol over
  
  I'm not sure we have to hijack or tunnel anything. :-)
 
 If you're uploading via SSH you'll open an SSH tunnel and then POST to PyPI
 over that tunnel.
 
  
  and adding more reliance on SSH keys means a lost SSH key becomes _even_ 
  worse than it already is.
  
  I don't follow that argument. You can have separate keys in separate
  places if you like.
 
 Ideally you can, sure. Security that only deals in ideals and doesn't pay
 attention to what people will actually do in the general case is a problem.
 In the general case people will reuse their typical SSH keys, thus placing more
 reliance on a single secret across multiple services (Github, bitbucket, SSH,
 PyPI). Encouraging authentication token sharing is a bad practice.
 
 HTTP has a token that is functionally similar to SSH keys. Client side SSL 
 certificates. They would function fine and enable similar uses as SSH keys.
 
If we're choosing between SSH keys and SSL certificates, the client side
tools for SSH are much more mature than the ones for SSL.  The numerous
ssh-agents, for instance, allow the ssh key to be encrypted on disk while the
user is only prompted for a password when the agent has to read the key
(which could be after a timeout or once when the ssh-agent starts up).
SSL certificate use on the command line doesn't yet have that sort of tool,
so SSL certificates are often left unencrypted on disk if they're being used
for command line access.

-Toshio




Re: [Distutils] Changing the separator from - to ~ and allow all Unicode alphanumerics in package names...

2012-11-12 Thread Toshio Kuratomi
On Mon, Nov 12, 2012 at 02:34:14PM -0500, Daniel Holth wrote:
 
 
 Horrifying. All codecs that are not utf-8 should be banned, except on Windows.

*nod*  I made that argument on python-dev but it didn't win the necessary
people over.  I don't recall why, so you'd have to look at the thread to see
what's already been argued.

(This is assuming that we aren't talking about locale settings in general
but only reading the module filenames off of the filesystem.  Banning non-utf8
locales isn't a good idea since there are areas of the world where utf-8
isn't going to be adopted anytime soon.)


 Or at least warn("Your Unicode is broken"); in fact, just put that in site.py
 unconditionally.
 
If python itself adds that to site.py, that would be great.  But individual
sites adding things to site.py only makes python code written at one site
non-portable.

 However remember that a non-ASCII pypi name ☃ could still be just import
 snowman. Only the .dist-info directory ☃-1.0.0.dist-info would necessarily
 contain the higher Unicode characters.

*nod*  I wasn't thinking about that.  If you specify that the metadata
directories (if they contain the unicode characters) must be encoded in
utf-8 (or at least, must be in a specific encoding on a specific platform),
then that would work.  Be sure to specify the encoding and use it
explicitly when decoding filenames, rather than relying on the implicit
decoding which uses the locale (I advise having unittests where the locale
is set to something non-utf-8 (the C locale works well) to test this, or
someone who doesn't remember this conversation will make a mistake someday).
If you rely on the implicit conversion with locale, you'll eventually end up
back in the mess of having bytes that you don't know what to do with.
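A short sketch of what I mean by decoding explicitly (python2-flavored;
list_metadata_dirs is a hypothetical helper, not a real API):

import os

def list_metadata_dirs(path):
    # os.listdir() on a byte str path returns byte str names on python2.
    # Decoding with an explicit codec fails loudly on non-utf-8 names
    # instead of silently depending on the user's locale.
    for name in os.listdir(path):
        yield name.decode('utf-8')

Running the testsuite under LC_ALL=C would then catch any code path that
falls back to the implicit locale-based decoding.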

 I will keep the - and document the - to _ folding convention. - turns into _
 when going into a filename, and _ turns back into - when parsed out of a
 filename.
 
Cool.  Thanks.

 The alternative to putting the metadata in the filename which btw isn't that
 big of a problem, is to have indexed metadata. IIUC apt-get and yum work this
 way and the filename does not matter at all. The tradeoff is of course that 
 you
 have to generate the index. The simple index is a significant convenience of
 easy_install derived systems.
 
*nod*  I've liked the idea of putting metadata about all installed modules
into a separate index.  It makes it possible to write a new import mechanism
that uses the index to load modules more efficiently on systems with
large sys.paths, and makes multiple versions of a module on a system easier
to implement.

However, there are some things to consider:

* The python module case will be a bit more complex than yum and apt because
  you'll need to keep per-user databases and per-system databases (so that
  there's a place for users to keep the metadata for modules that they
  install into user-writable directories).
* Users will need to run commands to install, update, and remove the
  metadata from those indexes.
* yum also needs to deal with non-utf-8 data.  But some of those issues are
  due to legacy concerns and others are due to filenames.
  - Legacy: package names, package descriptions, etc, in those worlds can
contain non-utf8 data because the underlying systems (rpm and dpkg)
predate unicode.  For package descriptions, I know that yum continues to
store pure bytes and translate it to a sensible representation when it
loads.  For package names I'm unsure.  The major distributions that yum
works for specify that package names must be utf-8 so yum may specify
utf-8.  OTOH, yum is distro agnostic and $random_rpm_from_the_internet
can still use random bytes in its package name so yum may still have to
deal with bytes here.
  - filenames: those are still bytes because there's nothing that enforces
utf-8.  If you're keeping a list of filenames in the metadata, you still
have to deal with those bytes somehow.  So yum and python packaging
tools would still have to make decisions about what to do with those.
For yum, it stores the bytes and has to operate on bytes and convert to
unicode (as best it can) when displaying data.  python packaging tools
can take a different path but they will need to make explicit assertions
about their treatment of encodings to do so.
+ For instance, they could assert that all filenames must be utf-8 --
  anything else is an error and cannot be packaged.
+ A more complex example would be to store utf-8 in internal package
  metadata but have the capability to translate from the user's locale
  settings when reading off the filesystem.  Then create utf-8 filenames
  when writing out.  This gets a bit dodgy since the user can create the
  package, then install it on their system and the installed package
  would fail to find modules because they're no longer in the user's
  locale.
+ A third example which I currently view as 

Re: [Distutils] Changing the separator from - to ~ and allow all Unicode alphanumerics in package names...

2012-11-09 Thread Toshio Kuratomi
On Fri, Nov 09, 2012 at 09:38:54PM -0500, Daniel Holth wrote:
 Although I think the ~ is a very ugly -, it could be useful to change the
 separator to something less commonly used than the -.
 
 It would be useful to be able to use the hyphen - in the version of a package
 (for semver) and elsewhere. Using it as the separator could make parsing the
 file name a bit trickier than is healthy.
 
Items 10 and 11 of semver are problematic.  Other people who consume
versions, for instance Linux distributions, have a history of using dashes
as separators.  They have to deal with stripping hyphens out of versions
that make use of them.

The fact that distutils/setuptools also treats hyphens as separators is
a good thing for these audiences.
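A toy illustration of the parsing problem (the names are made up):

def split_name_version(basename):
    # Naive parsing that splits on the last hyphen.  It works for a plain
    # version but misparses a semver pre-release, where the version itself
    # contains a hyphen.
    name, _, version = basename.rpartition('-')
    return name, version

print(split_name_version('foo-bar-1.0.0'))       # ('foo-bar', '1.0.0')
print(split_name_version('foo-bar-1.0.0-rc.1'))  # ('foo-bar-1.0.0', 'rc.1')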

[..]
 
 If we do this, I
 would like to allow Unicode package names at the same time. safe_name(), the
 pkg_resources function that escapes package names for file names, would become
 
 re.sub(u"[^\w.]+", "_", u"package-name", flags=re.U)
 
 
 In other words, the rule for package names would be that they can contain any
 Unicode alphanumeric or _ or dot. Right now package names cannot practically
 contain non-ASCII because the setuptools installation will fold it all to _ 
 and
 installation metadata will collide on the disk.
 
I consider the limitation of package names to ascii to be a blessing in
disguise.  In python3, unicode module names are possible but not portable
between systems.  This is because the non-ascii module names inside of a python
file are abstract text but the representation on the filesystem is whatever
the user's locale is.  The consensus on python-dev when this was brought up
seemed to be that using non-ascii in your local locale was important for
learning to use python.  But distributing non-ascii modules to other people
was a bad idea.  (If you have the attention span for long threads, 
http://mail.python.org/pipermail/python-dev/2011-January/107467.html
Note that the threading was broken several times but the subject line stayed
the same.)


Description of the non-ascii module problem for people who want a summary:

I have a python3 program that has::
  #!/usr/bin/python3 -tt
  # -*- coding: utf-8 -*-
  import café
  café.do_something()

python3 reads this file in and represents café as an abstract text type
because I wrote it using utf-8 encoding and it can therefore decode the
file's contents to its internal representation.  However it then has to find
the café module on disk.  In my environment, I have LC_ALL=en_US.utf8.
python3 finds the file café.py and uses that to satisfy the import.

However, I have a colleague that does work with me.  He has access to my
program over a shared filesystem (or distributed to him via a git checkout
or copied via an sdist, etc).  His locale uses latin-1 (ISO8859-1) as his
encoding (For instance, LC_ALL=en_US.ISO8859-1).  When he runs my program,
python3 is still able to read the application file itself (due to the piece
of the file that specifies it's encoded in utf-8) but when it searches for
a file to satisfy café on the disk it runs into problems because the café.py
filename is not encoded using latin-1.

Other scenarios where the files are being shared were discussed in the
thread I mentioned but I won't go into all of them in this message...
hopefully you can generalize this example to how it will cause problems on
pypi, with pre-packaged modules on the system vs user's modules, etc.

-Toshio




Re: [Distutils] Panel on packaging at PyCon 2013

2012-09-28 Thread Toshio Kuratomi
On Fri, Sep 28, 2012 at 12:08:48PM -0400, Barry Warsaw wrote:
 On Sep 28, 2012, at 12:01 PM, Éric Araujo wrote:
 
 I’m putting up a last-minute proposal for a panel about directions for
 the packaging ecosystem at the next PyCon.  For that I would need a list
 of panelists.  I think it would be interesting to have developers (say
 from distribute, buildout, pip, wheel) as well as users from
 subcommunities (packaging people tend to be web developers, but their
 experience doesn’t match the needs of the scipy community for example).
 
 Great idea.  Please include folks from distros.  Toshio would be a great
 representative of the RPM faction, and although I'm sure there's someone
 better, if no one else volunteers, I'll represent the .deb cabal.  It would be
 nice to have other *nix representatives if available, as well as Mac(Ports?)
 and Windows experts.
 
*nod*  I could be on a panel, although I haven't been keeping up with the
changes in distutils2/packaging/distlib/wheel/etc recently.  Nick Coghlan
had mentioned on one of the Fedora Python lists that he was working on
packaging and distros.  If he's planning on attending pycon he'd be another
good choice for an rpm-based distro.

-Toshio




Re: [Distutils] Differences in PEP386 and setuptools

2012-09-27 Thread Toshio Kuratomi
On Thu, Sep 27, 2012 at 01:00:10PM -0400, Donald Stufft wrote:
 On Thursday, September 27, 2012 at 11:59 AM, Toshio Kuratomi wrote:
 
 I would be for renaming .dev to .pre[1]_ but I would be against the rest of
 this proposal. Having one and only one way to spell things was one of the
 goals of the pep. Having two post release tags that don't already have a
 defined meaning leads to confusion about what the ordering should be:
 
 dev (in my proposal, and in the way I've seen it used) isn't a post release
 tag, it is a separate release at the same level as alpha, beta, rc, final,
 and its meaning tends to be this is the in-development version of what will
 become release X.Y, so foo-1.0dev1 means the first development release of
 what will become foo-1.0 (just like foo-1.0a1 is the first alpha release of
 what will become foo-1.0).
 
 With the current layout (and with your changes to the proposal) there is no
 good way to have a development version. With the current layout the best
 approximation you can get is 1.0a1.dev1 (similarly with your changes it would
 be 1.0a1.pre1).
 
That's a matter of preference, not capability.  I'd also say you should use
1.0.post1 rather than 1.0a1pre1.

 On the surface 1.0a1.dev1 looks like it might be alright, but my issues with
 it:
 
 1. It's not intuitive (at least not to me, development snapshots have
 always come before alphas in my mind)

There's no such consensus.  This is both from dealing with upstream
versioning as a distro packager and from discussion with other people who
worked on the versioning spec.  People were very clear that they wanted to
be able to stick dev versions in the middle of their attempt to release
alphas, betas, rcs, and finals.

You can see this in current, setuptools-driven practice in the snapshots
that pje hosts, for example:
  http://peak.telecommunity.com/snapshots/


 2. Goes against what is currently being used for either no good reason or
 a yet to be remembered good reason.

I've given you the reasons above.  I'd also point out that with your
clarification that you want .dev to be a toplevel alongside 'a', 'b', and
'c' you are also changing how dev is being used for no good reason -- simply
to fit your personal use case.

  3. Requires you to decide ahead of time if you're going to go from dev
  to alpha, or straight to a beta/candidate/final. (This is something that
  most small projects don't have set in stone.)

Really?

Today I release:
   1.0a1.dev2012

Tomorrow I release:
   1.0b1

The versions sort.  There was no forethought needed.  Now if I was to do
this, I'd use:
   1.0.post2012

as my first release instead.


  4. Its semantics are different, this is a development release of
  1.0a1 as opposed to a development release of the 1.0 version.

Correct.  And "this is a development release of 1.0a1" is what the people who
designed the version specification wanted.  The "development release of the
1.0 version" case is taken care of by .post.

  5. It's just plain ugly.
 
I'd say the same thing about 'a', 'b', 'c' vs 'alpha', 'beta', 'rc'; .post
and .pre, etc.  Ugly seems to be the path to painting bikesheds.

 
 So to be clear my proposal would be:
 
 1.0dev1 < 1.0a1 < 1.0b1 < 1.0c1 < 1.0.pre1 < 1.0 < 1.0.post1

So is your proposal to get rid of all modifiers?  I.e., there will be
toplevels that sort at the same level as the alpha/beta/rc tags?  If so, I'd
get rid of the pre tag in there and rename dev to pre, as dev means post to
some people and pre to others.  I'd also remove the "." since they're all
at the same level:

1.0pre1 < 1.0a1 < 1.0b1 < 1.0c1 < 1.0 < 1.0post1

While I'd be for this, this was definitely counter to the desires of a good
number of the other people who participated.  They definitely wanted
modifiers to denote that a snapshot dev instance applied before or after an
alpha/beta/rc.

If you do intend to keep .pre and .post as modifiers, I would be against
adding dev as a toplevel.  It's just adding another name with no clear
definition in the collective minds of the people consuming the versions,
where the functionality is already provided for by the .post and .pre
(current .dev).

 as opposed to the current:
 
 1.0a1 < 1.0b1 < 1.0c1 < 1.0.dev1 < 1.0 < 1.0.post1


(Note: I assume rc1 is also in all of these examples and sorts between c and
final)

-Toshio




Re: [Distutils] Differences in PEP386 and setuptools

2012-09-27 Thread Toshio Kuratomi
On Thu, Sep 27, 2012 at 01:46:25PM -0400, Éric Araujo wrote:
 Le 27/09/2012 11:59, Toshio Kuratomi a écrit :
  * a, b, c, and rc are the typical alpha beta release candidate tags.  We
debated whether to allow the single letter versions or the long versions
alpha, beta, rc.  In the end, which set of tags was used was
a bikeshed argument so it was chosen by PEP-author decree.  I personally
felt that having both c and rc was counter to the goals of the PEP but
we at least agreed that rc would intuitively sort after c so it wasn't
as bad as having both alpha and a where there would be no intuitively
correct sorting.
 
 Well, c and rc should really compare equal in my opinion.

That would be a bad thing.  What do you do in the face of a project
releasing:

foo-1.0c1
foo-1.0rc1

Does your tool get to download either one of those depending on who coded
it, the timestamp for the upload, or input from the RNG?

And saying that the foo project maintainers shouldn't do that because
they're semantically the same is all well and good, but in practice people do
things that are wrong all the time.  With a single version string inside of
a package we can provide functions that can validate whether the version is
correct or not as part of using the versions in the library to mitigate
that.  We cannot do the same thing with version strings from two separate
releases of the package because where those releases are stored is
site/project specific.

Documenting that even though c and rc are meant to be semantically the same
they should always sort with c followed by rc protects you from these
problems.

-Toshio




Re: [Distutils] Differences in PEP386 and setuptools

2012-09-27 Thread Toshio Kuratomi
On Thu, Sep 27, 2012 at 02:42:50PM -0400, Donald Stufft wrote:
 
 On Thursday, September 27, 2012 at 2:03 PM, Toshio Kuratomi wrote:
 
 That's a matter of preference, not capability. I'd also say you should use
 1.0.post1 rather than 1.0a1pre1.
 
 So how do you encode the very first development release then if you're
 using post?

Took me a couple readings but I think you mean:

I start a project this morning.
I make a snapshot this afternoon.

What should the version be?

Using the current versioning scheme, I'd personally do:
0.dev1

By the time I get to an alpha release, we're going to have a version number
(for instance, 0.1).  So an initial version on the first day of 0 is fine:

0.dev1 < 0.1a1 < [...] < 0.1

/me has seen this in the wild as well... not just in my theoretical mind :-)

 Can you point me towards somewhere where their conceptual
 release cycle is
 
 Final -> Dev -> Alpha -> Beta -> RC
 
 or
 
 Alpha -> Beta -> RC -> Final -> Dev
 
I'm not sure what you're asking here.  All projects change code and release
it.  The changes may get released as something official-y like an alpha,
a beta, an rc, or a final.  They may also be released like a snapshot.
Those snapshots can be taken from any point on the timeline against any
release.  Trying to say that snapshots only happen before alpha/beta/rc, or
that snapshots only happen after alpha/beta/rc, doesn't map to what people in
the real world are doing.

I think we both get to this same place down below, though, so I'll skip down
to there.


 This also makes sense to me in my particular use case, and it sounds like it
 makes sense in yours? My main concern is that shifting 0.1dev1 from
 sorting before alpha/beta/candidate to sorting after them is going to cause
 headaches. So perhaps a better proposal would be to allow .dev at any
 level (as it does currently), but dictate that .dev sorts first at whatever
 level it occurs at (which is what setuptools does). This fixes my major
 problem of getting bitten because setuptools and PEP386 treat 1.0dev1
 differently while still giving people the ability to do 1.0a1.dev1. The
 major drawback being that you can't do a dev/pre release between an rc and
 a final, but as pkg_resources doesn't currently allow for that I would
 think that isn't a huge concern? That might solve the use case for both
 sides?
 
If we're just changing the position of X.Y.preN from this:

0.1 < 0.1.post1 < 0.2a1.pre1 < 0.2a1 < 0.2a1.post1 < 0.2b1 < 0.2pre1 < 0.2

to this:

0.1 < 0.1.post1 < 0.2pre1 < 0.2a1.pre1 < 0.2a1 < 0.2a1.post1 < 0.2b1 < 0.2

then that works perfectly fine for me.  To put out a snapshot between rc and
final, I'd do 0.2rc1.post1 so not having a pre/dev in there doesn't bother
me.

However, now that you mention it, ISTR that this is the way that it was
originally and the lack of pre/dev in that position did bother another (set
of?) PEP authors enough that it was changed.  For me, switching the
sort order of that particular piece is a bikeshed, so I seem to have
reallocated the memory for storing the arguments made in favor of shifting
it to the current position :-)

-Toshio




Re: [Distutils] Differences in PEP386 and setuptools

2012-09-27 Thread Toshio Kuratomi
On Thu, Sep 27, 2012 at 05:00:56PM -0500, Brad Allen wrote:
 On Thu, Sep 27, 2012 at 2:45 PM, Toshio Kuratomi a.bad...@gmail.com wrote:
 
  However, now that you mention it, ISTR that this is the way that it was
  originally and the lack of pre/dev in that position did bother another (set
  of?) the PEP authors enough that it was changed.  For me, switching the
  sort order of that particular piece is a bikeshed so I seem to have
  reallocated the memory for storing the arguments made in favor of shifting
  it to the current position :-)
 
 Why is this perceived as a bikeshed issue? It seems like a discussion
 about substantial functionality which is tied to the semantics of these
 special tags. For many projects and organizations, terms like 'alpha'
 and 'beta' and 'dev' have meaning in a release context, and rearranging
 the order of these concepts has an impact.
 
dev has no universally defined relation to alpha, beta, etc. in a release
context.

We all seem to be amenable to:

1.0alpha1.dev1 < 1.0alpha1

But people have not agreed whether:

1.0.dev1 < 1.0alpha1
or
1.0alpha1 < 1.0.dev1

Either of these cases can also be functionally replaced with the appropriate
.post tag:

1.0.dev1 < 1.0alpha1  ===  0.9.post1 < 1.0alpha1

1.0alpha1 < 1.0.dev1  ===  1.0alpha1 < 1.0rc1.post1

(Substitute the last final release for 0.9 in the above, and substitute
whatever version you're snapshotting after for 1.0rc1.)

Some organizations and projects use it one way and others the other way.
And unlike alpha, beta, rc ordering which is both long established in the
computing industry and based on the sorting of the greek alphabet, the
choice with .dev is arbitrary.  That's what makes this a bikeshed.  Either
way of doing this is going to confuse and upset some of the consumers and
there's no solid reason to say that the upset people are wrong.  At best
you can say that they're late to the party.

  So to be clear my proposal would be:
 
  1.0dev1 < 1.0a1 < 1.0b1 < 1.0c1 < 1.0.pre1 < 1.0 < 1.0.post1
 
  as opposed to the current:
 
  1.0a1 < 1.0b1 < 1.0c1 < 1.0.dev1 < 1.0 < 1.0.post1
 
 
 +1 for the proposal to change the PEP so that 'dev' versions are
 earlier than 'a' versions.

Note, just in case my chart is misleading, the proposal doesn't change all
'dev' versions to be earlier than the 'a' versions.  '.dev' is a modifier.
So it can be applied to any of the toplevel portions of the version.

So to be more complete, the proposal is to do this::

   0.1
   0.1.post1
   1.0.dev1     # The .dev that's changing position
   1.0a1.dev1
   1.0a1
   1.0a1.post1
   1.0b1.dev1
   1.0b1
   1.0b1.post1
   1.0c1.dev1
   1.0c1
   1.0c1.post1
                # The current PEP386 position of 1.0.dev1
   1.0
   1.0.post1

-Toshio




Re: [Distutils] Differences in PEP386 and setuptools

2012-09-26 Thread Toshio Kuratomi
On Wed, Sep 26, 2012 at 03:09:19PM -0400, Donald Stufft wrote:
 I've been comparing how PEP386/distutils2.version treats versions
 and how pkg_resources from setuptools treats versions and it
 confirmed a worry of me with the way that dev is treated in PEP386.
 
 In PEP386 you have several kinds of top level releases, these 
 are alpha, beta, candidate, and final and they are sorted as you
 would expect. On top of that it has pre and post releases for
 any of those top level releases. The pre releases are tagged
 with a .dev and post releases are tagged with a .post. They
 are sorted immediately before/after the main release that they
 are pre/post for.
 
 In pkg_resources dev is treated as its own main release and
 sorts in front of an alpha.
 
 This means that given the versions:
 
 [0.1.dev1, 0.1a1, 0.1b1, 0.1c1, 0.1]
 
 PEP386 will sort them as:
 
 [0.1a1, 0.1b1, 0.1c1, 0.1.dev1, 0.1]
 
 and pkg_resources will sort them as:
 
 [0.1.dev1, 0.1a1, 0.1b1, 0.1c1, 0.1]
 
 
 To further complicate things, the most common usage I've personally seen
 in the wild is that of 0.1dev or 0.1dev1, which the author expects to sort
 before an alpha (in this case distutils2.version throws an error, but the
 suggest function is able to turn it into 0.1.dev1).
 
 I think this difference is going to cause confusion, especially during the
 transition period when you're going to have people using both pkg_resources
 and the new PEP386 functions.
 
 Since PEP386 is only in the Accepted stage, and isn't part of the official
 implementation yet, is it at all possible to revise it? Ideally, I think, to
 follow the prior art and people's expectations, dev should be moved to a
 main release type sorted before an alpha, and to take its place as a pre
 release modifier perhaps something like pre can be used instead (e.g.
 0.1.pre1).

Note that this was an intentional difference with setuptools.
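For anyone who wants to see the pkg_resources side of this for themselves,
a quick sketch:

from pkg_resources import parse_version

versions = ['0.1.dev1', '0.1a1', '0.1b1', '0.1c1', '0.1']
# pkg_resources sorts the dev release in front of the alpha:
print(sorted(versions, key=parse_version))
# ['0.1.dev1', '0.1a1', '0.1b1', '0.1c1', '0.1']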

-Toshio




Re: [Distutils] Proposal: drop md5 for sha256

2012-07-04 Thread Toshio Kuratomi
On Tue, Jul 03, 2012 at 06:33:08PM -0500, Jennings, Jared L CTR USAF AFMC 46 SK/CCI wrote:
 On hosts configured for compliance with U.S. Federal Information
 Processing Standard (FIPS) 140-2
 http://csrc.nist.gov/publications/fips/fips140-2/fips1402.pdf, like
 those in some banks and, yes, the U.S. Department of Defense,
 cryptographic modules (such as OpenSSL, which underlies hashlib) are not
 allowed to calculate MD5 digests, because MD5 is no longer a FIPS
 Approved digest algorithm.
 
 I know no one is trying here to lean on MD5 for security, but the
 standard says nothing about the reason why you're using MD5: just that
 you can't.
 
 No one expects a digest algorithm to fail, and Python 2.x may not have
 been fixed to check for that before being frozen
 https://bugzilla.redhat.com/show_bug.cgi?id=746118#c3, so if you run
 an MD5 checksum on a FIPS-compliant system with an unpatched Python 2.x,
 the Python interpreter will segfault. (Ruby, too, had this problem and
 was itself only recently fixed,
 http://bugs.ruby-lang.org/issues/4944.)
 
 I have to configure hosts in accordance with FIPS 140-2, so the more
 places I can get rid of MD5, the less headaches I have.
 
I've just had to look into this for a bug in a package on Fedora and it's
not all bad but also not all good.  I believe that in current python2 and
python3 (including soon to be released python-3.3),  if it's compiled
against openssl, the md5 hash constructor will SIGABRT when in FIPS mode.
If it's compiled against the internal md5 code, it will ignore FIPS mode.
Dave Malcolm has a patch in the tracker that hasn't yet been approved and
merged that allows one to pass a flag to the hash constructor that says that
the call is not being used for cryptographic purposes and then the
constructor will work even in FIPS mode.  I've seen no indication in the
tracker that this would be applied to future python-2.7.x releases, but it
could be backported by individual distributors of python2 (for instance,
Linux distributions).

A version of the patch is presently applied to the Fedora Linux 17 versions
of python2 and python3 if someone is curious.

Note that openssl itself allows the use of MD5 in FIPS mode under a similar
strategy.  So I'm not entirely certain that the standard forbids use of MD5
for non-cryptographic purposes.
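For reference, the flag in Dave Malcolm's patch looks roughly like this in
use (the keyword name follows that patch; vanilla python releases don't
accept it yet):

import hashlib

# Declare the digest as non-cryptographic so that FIPS mode allows it.
digest = hashlib.md5(b'some payload', usedforsecurity=False).hexdigest()
print(digest)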

-Toshio




Re: [Distutils] Distribute and Python 3.2

2011-03-12 Thread Toshio Kuratomi
On Sat, Mar 12, 2011 at 11:05:50AM +, Vinay Sajip wrote:
 
 P.S. IMO Toshio Kuratomi's fix could be better implemented as
 
 self.config_vars['abiflags'] = getattr(sys, 'abiflags', '')
 
 in the same block as all the other self.config_vars[...] assignments.
 
Committed as:

diff -r f64c2d57df43 setuptools/command/easy_install.py
--- a/setuptools/command/easy_install.py  Tue Feb 22 12:05:49 2011 -0800
+++ b/setuptools/command/easy_install.py  Sat Mar 12 06:47:42 2011 -0800
@@ -202,14 +202,10 @@
 'prefix': prefix,
 'sys_exec_prefix': exec_prefix,
 'exec_prefix': exec_prefix,
+# Only python 3.2+ has abiflags
+'abiflags': getattr(sys, 'abiflags', ''),
}
 
-try:
-self.config_vars['abiflags'] = sys.abiflags
-except AttributeError:
-# Only python-3.2+ has sys.abiflags
-self.config_vars['abiflags'] = ''
-
 if HAS_USER_SITE:
 self.config_vars['userbase'] = self.install_userbase
 self.config_vars['usersite'] = self.install_usersite

-Toshio




Re: [Distutils] Preventing downloading for package requirements

2011-02-23 Thread Toshio Kuratomi
On Wed, Feb 23, 2011 at 10:04:24PM +0100, Tarek Ziadé wrote:
 
  One way that seems to work is to add this to setup.cfg:
 
  [easy_install]
  allow_hosts: www.example.com
 
  This will break the download by limiting acceptable hosts to bogus ones that
  can't possibly satisfy the requirement.  But it's unsatisfying for several
  reasons:
 
  * It's obscure and doesn't really describe what we're trying to do 
  ('fixable'
   I suppose by a comment)
  * Requires the Debian packager to add a setup.cfg or modify an existing one 
  in
   the upstream package.
 
  Note that I thought this might also work, but it does not afaict:
 
  [easy_install]
  no_deps: true
 
 Well, if you want to handle all the dependencies for a project
 yourself, you can shortcut distribute or setuptools by using the
 --single-version-externally-managed option.
 
 When using this option, the project will be installed by the vanilla
 distutils install command.
 
 Then it's up to you to handle dependencies. That's how pip does it, and
 Fedora too, IIRC.
 
What Barry's talking about is slightly different I think.  When running
python setup.py test, setup.py may download additional modules that should
have been specified in the system package (thus the download should never be
tried).  This occurs before the software is installed anywhere.

For Fedora we deal with this by preventing processes related to the build
from making any non-localhost network connections.  That doesn't catch
things when a packager is building on their local machine, but it does catch
things when the package is built on the builders.

There are two pieces that work on that:
1) The build hosts themselves are configured with a firewall that prevents
   a lot of packets from leaving the box, and prevent any packets from going
   to a non-local network.
2) We build in a chroot and part of chroot construction is to create an
   empty resolv.conf.  This prevents DNS lookups from succeeding and
   controls the automatic downloading among other things.

Neither of these are especially well adapted to being run by a casual
packager but the second (a chroot with empty resolv.conf) could be done
without too much trouble (we have a tool called mock that creates chroots,
it was based on a tool called mach which can use apt and might be better for
a Debian usage).  Both 1 and 2 could be performed on a VM if you can get
your packagers to go that far or are dealing with a build system rather than
individual packagers.

-Toshio





Re: [Distutils] zc.buildout and System Python

2010-10-28 Thread Toshio Kuratomi
On Thu, Oct 28, 2010 at 10:22:58AM -0400, Jim Fulton wrote:
 
   It occurs to me that it would be nice if we made clean Python
   packages available for some of the popular Unix platforms.  I'm not
   sure what would be involved in doing that, from a distribution point
   of view.
 
If you're talking about a python that is carried by the OS in their package
sets, updatable using the OS tools, etc., catch me on IRC (abadger1999 on
irc.freenode.net) and we could talk about this.  Off the top of my head,
I think it would be possible with a few compromises but not easy in the
decision department.  For instance, distributions have rules such as "don't
bundle libraries that are available on the system" that would apply to
things like libffi which are built from within python by default.  Or the
use of wide-unicode which isn't the default in a vanilla upstream build but
is the default on the Linux distributions that I know of.  Or the use of
multilib which makes for a split directory layout for libraries instead of
a single location.

The biggest issue I see is that it wouldn't be possible to fix bugs in
these packages.  Perhaps it would be possible to compromise and fix bugs,
but only when the patches are backports from the upstream repository; we
presently do that in Fedora for firefox/xulrunner/thunderbird because of
mozilla's trademark agreement and it causes no end of conflicts between
contributors.

-Toshio




Re: [Distutils] zc.buildout and System Python

2010-10-28 Thread Toshio Kuratomi
On Thu, Oct 28, 2010 at 12:08:30PM -0400, Jim Fulton wrote:
 On Thu, Oct 28, 2010 at 11:47 AM, Toshio Kuratomi a.bad...@gmail.com wrote:
  On Thu, Oct 28, 2010 at 10:22:58AM -0400, Jim Fulton wrote:
 
    It occurs to me that it would be nice if we made clean Python
    packages available for some of the popular Unix platforms.  I'm not
    sure what would be involved in doing that, from a distribution point
    of view.
 
  If you're talking about a python that is carried by the OS in their package
  sets, updatable using the OS tools, etc
 
 That would be great. It might be enough to post pre-built packages. *shrug*
 
*nod*  There are a few ways to achieve that as well, and it would be a lot
simpler.  There's the opensuse build system, which lets you build packages
for a variety of distributions.  There are ubuntu ppas and fedora personal
repos that let you host within the distribution's namespace but are marked as
being separate... Lots of different options there that might be suitable.

  catch me on IRC (abadger1999 on
  irc.freenode.net) and we could talk about this.  Off the top of my head,
  I think it would be possible with a few compromises but not easy in the
  decision department.
 
 Which makes it unattractive. I'm really not interested in getting
 embroiled in a political process.
 

*nod*  If going the route of getting a clean python into the distributions
themselves, there's going to be a good deal of politics as there are a lot of
hard questions to answer there and a lot of contrary goals to reconcile.
The idea of simply hosting package repositories for each distribution would
be a lot easier in this area.

 BTW, I really don't care about certain types of innovation (e.g. file
 locations, wide unicode) as long as I as a developer don't feel them.
 It occurs to me that it would be useful if there was a definition of a
 standard Python that provided a baseline that developers could count
 on. Today, the closest thing to a standard is the Python distribution.
 I suppose that doesn't have to be the standard.  Of course, defining
 such a standard might be really painful, especially via email. It might
 be a good PyCon discussion/sprint topic.
 
*nod*  That could be a productive definition.

  The biggest issue I see is that it wouldn't be possible to fix bugs in
  these packages.  Perhaps it would be possible to compromise and fix bugs but
  only when the patches are backports from the upstream repository
 
 I'm not sure what you mean. Bugs are fixed via Python distributions.
 Is this not fast enough?
 
Correct, it's not fast enough.  Many distributions move much faster than
python releases.  Even slow moving distributions can simply be releasing, or
releasing updates, out of sync with when python itself releases.

As an example, Fedora releases a new version every six months.  Each release
has a 13 month lifetime.  During the 13 month lifetime, Fedora releases
updated packages almost daily.  So if someone filed a bug that
python-2.7.1-1 had a segfault or a UnicodeError or some other bug that the
maintainer felt was worthwhile to fix in the released Fedora, they would
ship an updated python package (perhaps with a backport from python-2.x's
tree or perhaps by coding a fix and then submitting the fix upstream
afterwards) and make the update as soon as they felt they had a workable
solution.

  but we
  presently do that in Fedora for firefox/xulrunner/thunderbird because of
  mozilla's trademark agreement and it causes no end of conflicts between
  contributors.
 
 I assume that wouldn't be a problem for Python, assuming I have a clue
 what that is. :)
 
Well -- the causative agent is different but the results are similar.  In
mozilla's case, the issue is adding code that mozilla doesn't endorse:
without their permission for such changes you have to abandon the
trademarks.  In a clean python's case, it would be a constraint we
enforce on ourselves to only ship what's in upstream.  In both cases, it
prevents fixing bugs and making other changes ahead of an external (to
the distribution) schedule.

-Toshio




Re: [Distutils] Recent buildout failure in Mailman 3

2010-10-11 Thread Toshio Kuratomi
On Mon, Oct 11, 2010 at 11:46:11AM -0400, Barry Warsaw wrote:
 On Oct 08, 2010, at 11:37 PM, P.J. Eby wrote:
 
 At this point, I'm a bit stumped, as I don't know enough about how tarballs
 are supposed to work internally; should I just whip up a patch for the
 situation where the path has no slashes in it (the logilab case), or do I
 need to do something more sophisticated in the general case?
 
 I don't know, but the fix you did commit fixes the build problem for me.
 
tarfile.extract() takes care of symlinks just fine.  If you're going to
reimplement extract()'s functionality by using tarfile's private methods
directly, you probably should copy tarfile.extract()'s symlink handling
routine.

Another option, less efficient but letting the tarfile module handle any
nuances of the file format, would be to restructure the setuptools code to
call tarfile.extract() and then move the file if progress_filter() returned
a changed dst.  Something like this:

if not name.startswith('/') and '..' not in name:
    prelim_dst = os.path.join(extract_dir, *name.split('/'))

    if member.isfile() or member.isdir() or member.islnk():
        final_dst = progress_filter(name, prelim_dst)
        if final_dst:
            tarfile.extract(member, extract_dir)
            if final_dst != prelim_dst:
                shutil.move(prelim_dst, final_dst)

-Toshio




Re: [Distutils] PEP 386 status - last round here ?

2009-12-03 Thread Toshio Kuratomi
On Thu, Dec 03, 2009 at 01:55:53PM +0100, M.-A. Lemburg wrote:
 Tarek Ziadé wrote:
  Last, as I said in a previous mail, I tend to agree with the people
  who said that we should stick with only one way to write the version
  scheme for the sake of clarity. e.g. dropping aliases and picking
  *one* way to write the markers after major.minor.micro.
  
  I would tend to pick the same scheme than Python for the pre-releases
  (and c + rc):
  
  N.N[.N][(a|b|c|rc)N]
  
  And, for the post/dev markers I think dots are OK for readability,
 
 Sure, but readability and clarity means different things for
 different people.
 
 The reason I proposed aliases and underscores is to give package
 authors the choice of using terse forms or more verbose ones, as
 well as making the whole scheme more compatible to existing
 version strings already in use.
 
I'm not a big fan of underscores -- having multiple separators doesn't seem
very useful.

I don't like aliases but, seeing as I like the long forms, having both
short and long forms with a distinct ordering would be okay by me (ie:

a1 < alpha1 < a2 < b1 < beta1 < c1 < rc1)
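
A minimal sketch of what that explicit ordering could look like as a
sort key -- an illustration of the idea only, not code from any PEP:

import re

# stage first, then number, with each short form just before its long alias
GROUP = {'a': 0, 'alpha': 0, 'b': 1, 'beta': 1, 'c': 2, 'rc': 2}
ALIAS = {'a': 0, 'alpha': 1, 'b': 0, 'beta': 1, 'c': 0, 'rc': 1}

def prerelease_key(tag):
    m = re.match(r'(alpha|beta|rc|a|b|c)(\d+)$', tag)
    marker, num = m.group(1), int(m.group(2))
    return (GROUP[marker], num, ALIAS[marker])

print(sorted(['rc1', 'alpha1', 'b1', 'a2', 'a1', 'beta1', 'c1'],
             key=prerelease_key))
# -> ['a1', 'alpha1', 'a2', 'b1', 'beta1', 'c1', 'rc1']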


 Regarding post/dev markers:
 
 IMO, it's not really obvious that a 1.0a1.dev123 release refers to a
 snaphost *before* the 1.0a1 release. The string pre is more commonly
 used for such pre-release snapshots.
 
 For the .post123 tag, I don't see a need for a post string at all,
 1.0a1.123 is clearly a release or build *after* the 1.0a1 release
 and since the 1.123 is being treated as alpha version number,
 the post part processing can be dropped altogether.
 
 For the .dev part the situation is similar: you can always
 choose a pre-release version that is not actually released and then
 issue follow up snapshots to this, e.g.
 
   1.0a0.20091203
   1.0a0.20091204
   1.0a0.20091205
 
 and so on for nightly builds during the development phase.
 
 Instead of writing:
 
   1.0a1.dev20091205
 
 you'd then write
 
   1.0a0.20091205
 
 This is how Python itself is organizing the versions during
 development, BTW.
 
FWIW, I agree with all of this section.

-Toshio




Re: [Distutils] Common version-comparison semantics for peace love and harmony

2009-11-28 Thread Toshio Kuratomi
On Sat, Nov 28, 2009 at 05:07:01PM +0100, Tarek Ziadé wrote:
 
 So If the current proposal works for all cases (e.g. people can
 translate their schemes
 into PEP 386 one), I am proposing to:
 
 1- reject the +,  , - proposal, and stick with . so we have
 only one way to express the segments. (ala Python itself)
 
+1

 2 - keep the aliases (alpha/beta/rc) because they are not controversial
 
Rather than aliases, I'd like to see a sort order worked out.  Someone will
make both foo-1.0alpha1 and foo-1.0a1 because they don't understand that
they're just supposed to be aliases.  Better to document explicitly what
happens in that case.
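
For example, if the aliases really are pure aliases, an installer would
have to treat the two distinct-looking filenames as the same release.
A minimal illustration (not any installer's actual code):

def normalize(version):
    # treat the long markers as pure aliases of the short ones
    return version.replace('alpha', 'a').replace('beta', 'b')

print(normalize('foo-1.0alpha1') == normalize('foo-1.0a1'))  # True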

 3 - stick with post dev for the post and pre-release tags because
 *we need them to sort development versions and post versions* in
 installers, and because *they don't hurt* for people that are not
 publishing such versions. They will be able to use their own dev
 markers internally if they want.
 
+1

 Next, once the PEP is edited, I am proposing to move this discussion
 in python-dev,
 for another round, and eventually have Guido accept it or reject it
 and move forward with PEP 345.
 
 Because as far as I am concerned, even if we change the syntax in PEP
 386 a million times, some people will not like it at the end.
 
Agreed :-)

-Toshio




Re: [Distutils] Looking for portable way to determine directory where extensions are installed?

2009-11-11 Thread Toshio Kuratomi
On Wed, Nov 11, 2009 at 09:46:25AM -0800, Tom Epperly wrote:

 I thought distutils.sysconfig.get_python_lib() might be helpful, but it
 returns /usr/lib/python2.6/site-packages even on the system where
 /usr/lib64/python2.6/site-packages is the right answer.

You're on the right track here.  You just need one more piece of
information:

distutils.sysconfig.get_python_lib() will return the path where pure python
extensions get installed.

distutils.sysconfig.get_python_lib(1) will return the path where compiled
extensions get installed.
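
For example (output is illustrative; the exact paths vary by platform and
Python build):

from distutils import sysconfig

print(sysconfig.get_python_lib())    # e.g. /usr/lib/python2.6/site-packages
print(sysconfig.get_python_lib(1))   # e.g. /usr/lib64/python2.6/site-packages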

-Toshio




Re: [Distutils] why would you ever need to specify setuptools as a dependency?

2009-10-20 Thread Toshio Kuratomi
On Tue, Oct 20, 2009 at 02:48:58PM +0100, Chris Withers wrote:
 Fred Drake wrote:
 On Tue, Oct 20, 2009 at 9:39 AM, Chris Withers ch...@simplistix.co.uk 
 wrote:
 As is specifying the setuptools distribution as a requirement when you're
 already using it...

 I don't use setuptools at runtime unless something requires it.

 Having it available at install time and run time are two different
 things, and should remain so.

 All I'm saying is that packages shouldn't express a dependency on  
 setuptools if setuptools is required even before that expression can be  
 parsed.

 I'm not talking about install or run time...

Are you then talking about build time?

-Toshio




Re: [Distutils] tracking requested vs dependency installs in PEP 376 metadata

2009-10-09 Thread Toshio Kuratomi
On Fri, Oct 09, 2009 at 09:21:29AM -0400, Carl Meyer wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1
 
 Chris Withers wrote:
 The downside here is that it introduces one more wrinkle for installers
 to worry about handling correctly. There are strong use cases for the
 single bit requested vs auto-installed; nobody's yet presented use
 cases for the additional log info. The only thing that comes to my mind
 is UI niceties: being able to tell the user when, why, and by what agent
 a package was installed. I'm not aware of existing package managers that
 go that far; doesn't mean it's a bad idea.
 
rpm (one of the Linux package managers) tracks when a package was installed
and when it was built.
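
For instance, both timestamps are exposed through rpm's INSTALLTIME and
BUILDTIME query-format tags; a quick way to see them from Python (the
package name is just an example):

import subprocess

print(subprocess.check_output(
    ['rpm', '-q', '--queryformat',
     '%{NAME}: built %{BUILDTIME:date}, installed %{INSTALLTIME:date}\n',
     'python']))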

-Toshio




Re: [Distutils] why would you ever need to specify setuptools as a dependency?

2009-10-09 Thread Toshio Kuratomi
On Fri, Oct 09, 2009 at 03:28:57PM +0100, Chris Withers wrote:

 In this case, which I suspect is extremely rare anyway, you'll need to  
 have setuptools installed already.

 So, in *any* of these cases, specifying setuptools as a requirement  
 seems like a total waste of time...

 Now, what case have I missed? ;-)

It's nice for people creating system packages when you specify all of the
packages that your runtime depends on in setup.py.  That allows system
packagers to read setup.py and be able to create the complete list of
runtime dependencies for their packaging metadata.  Several times I've been
asked for help debugging why a python package fails to work for a few people
only to discover that the package used entry-points or another setuptools
runtime feature but only required it for buildtime.  Note, however, that
overspecifying the *versions* you need has the opposite effect :-)
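
For instance, a package that uses entry points at runtime would declare
that in setup.py -- an illustrative fragment, not from any particular
project:

from setuptools import setup

setup(
    name='example',
    version='1.0',
    # a real runtime dependency: entry points/pkg_resources are used when
    # the package runs, not just when it is built
    install_requires=['setuptools'],
)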

-Toshio




Re: [Distutils] why would you ever need to specify setuptools as a dependency?

2009-10-09 Thread Toshio Kuratomi
On Fri, Oct 09, 2009 at 04:04:06PM +0100, Chris Withers wrote:
 Toshio Kuratomi wrote:
 On Fri, Oct 09, 2009 at 03:28:57PM +0100, Chris Withers wrote:
 In this case, which I suspect is extremely rare anyway, you'll need 
 to  have setuptools installed already.

 So, in *any* of these cases, specifying setuptools as a requirement  
 seems like a total waste of time...

 Now, what case have I missed? ;-)

 It's nice for people creating system packages when you specify all of the
 packages that your runtime depends on in setup.py.  

 ...except that it causes problems that are a bit more serious than nice 
 to have because of the ridiculous situation we're in with setuptools  
 and distribute...

What's the issue precisely?  Once distribute is on the system, setuptools is
provided by distribute so there's no problem there, correct?

Is it that the installers don't know that there's more than one package
providing the setuptools API?  That sounds like pypi and easy_install aren't
powerful enough to recognize that an API can be provided by multiple
modules.  If you actually want to have a full-blown package manager you'll
need to fix that but at the same time I'd warn that having a full-blown
package manager means having to deal with a lot of corner cases like this.

-Toshio




Re: [Distutils] why would you ever need to specify setuptools as a dependency?

2009-10-09 Thread Toshio Kuratomi
On Fri, Oct 09, 2009 at 05:13:16PM +0100, Chris Withers wrote:
 Toshio Kuratomi wrote:
 On Fri, Oct 09, 2009 at 04:04:06PM +0100, Chris Withers wrote:
 Toshio Kuratomi wrote:
 On Fri, Oct 09, 2009 at 03:28:57PM +0100, Chris Withers wrote:
 In this case, which I suspect is extremely rare anyway, you'll 
 need to  have setuptools installed already.

 So, in *any* of these cases, specifying setuptools as a 
 requirement  seems like a total waste of time...

 Now, what case have I missed? ;-)

 It's nice for people creating system packages when you specify all of the
 packages that your runtime depends on in setup.py.  
 ...except that it causes problems that are a bit more serious than 
 nice to have because of the ridiculous situation we're in with 
 setuptools  and distribute...

 What's the issue precisely?  Once distribute is on the system, setuptools is
 provided by distribute so there's no problem there, correct?

 The issue is that both the setuptools and distribute distributions  
 provide a setuptools package. This apparently causes problems, rather  
 unsurprisingly ;-)

True... but because of that people are able to specify setuptools in
setup.py and it will work with either distribute or setuptools.  Is what
you're getting at that if people didn't specify setuptools in setup.py,
distribute-0.6 could install without using the setuptools name?  I don't
think that works since you still need to take over the setuptools module
directory so import works inside the code and the setuptools egg-info so
things like plugin modules belonging to setuptools work.

-Toshio




Re: [Distutils] tracking requested vs dependency installs in PEP 376 metadata

2009-10-08 Thread Toshio Kuratomi
On Thu, Oct 08, 2009 at 12:39:33PM -0400, Carl Meyer wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1
 
 Hey all,
 
 I propose adding a bit to the PEP 376 metadata that indicates whether a
 package was installed by user request or as a dependency of another
 package. This would allow (un)installer tools to intelligently remove
 orphaned dependencies, if they so choose. There might be questions about
 the details of such an uninstaller feature, but I'm not intending to
 discuss that here. The metadata itself is simple enough to track, and
 seems like it ought to be included for its future usefulness.
 
 I propose adding a metadata file REQUIRED within the .egg-info
 directory. The presence of this file indicates that the user
 specifically required this distribution. The absence of the file
 indicates that the distribution was installed as a dependency. The
 contents of the file are not used.
 
 For the API, I propose adding a required property to the Distribution
 class, which would be True or False based on the presence or absence of
 the REQUIRED file.
 
 I've added a demo implementation to a fork of Tarek's pep376 repo on
 bitbucket: http://bitbucket.org/carljm/pep376/changeset/0c8002e65cb7/
 
 Thoughts?
 
Note that Linux distributions have discussed this for ages and it's not
always as useful as a naive first thought would imply.  For instance, there
are often many scripts written by a system administrator (or a user) that
might need to have a module installed.  This is not to say that it's a bad
idea to record this information -- some installers for specific use
cases might find it useful, or it could be useful with confirmation by
the user.

Also note that a package manager should be able to tell required status from
what is currently installed.  So it might make more semantic sense to record
what was requested by the user to be installed instead of what was required
by a package.  (When something is both required by a package and requested
by a user, the user request is what takes precedence.)

-Toshio




Re: [Distutils] tracking requested vs dependency installs in PEP 376 metadata

2009-10-08 Thread Toshio Kuratomi
On Thu, Oct 08, 2009 at 03:41:44PM -0400, Carl Meyer wrote:
  Also note that a package manager should be able to tell required status from
  what is currently installed.  So it might make more semantic sense to record
  what was requested by the user to be installed instead of what was required
  by a package.  (When something is both required by a package and requested
  by a user, the user request is what takes precedence.)
 
 Clearly my terminology choice was poor. REQUIRED in my proposal meant
 requested by name by a user (which has to be recorded at install
 time), not required as a dependency by other installed packages
 (which, as you say, can be calculated at runtime). Would REQUESTED, or
 AUTO_INSTALLED (with the sense flipped) be better options?
 
I would say REQUESTED due to my arguments for not recording
installed-as-package-dependency.
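
A minimal sketch of recording that bit at install time, using the marker
file name proposed in this thread (the helper names are mine, not from
the PEP):

import os

def mark_requested(egg_info_dir):
    # the installer calls this only for distributions the user named
    open(os.path.join(egg_info_dir, 'REQUESTED'), 'w').close()

def was_requested(egg_info_dir):
    return os.path.exists(os.path.join(egg_info_dir, 'REQUESTED'))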

-Toshio




Re: [Distutils] Packaging Distribute

2009-10-08 Thread Toshio Kuratomi
On Thu, Oct 08, 2009 at 11:07:13PM +0200, Arfrever Frehtes Taifersar Arahesis 
wrote:
 2009-10-04 23:52:25 Sridhar Ratnakumar wrote:
  On Sun, 04 Oct 2009 13:41:06 -0700, Tarek Ziadé ziade.ta...@gmail.com  
  wrote:
  
   The other way would be to use Distribute instead of Setuptools for
   what the packaging system is calling setuptools. That's pretty
   much what is happening in Gentoo (arch) and UHU-Linux (dev),
   right now
  
  Interesting. Gentoo uses distribute but retains the name 'setuptools'?
 
 It's because Distribute 0.6.* installs setuptools.* modules.
 Distribute 0.7.* will be under name dev-python/distribute.
 
I started thinking about what it might take to do this for Fedora as well.
There's a number of worries I have but it sounds attractive because of the
increased maintenance support from distribute's upstream.

An alternative I thought of would be for us to ship both distribute-0.6 and
distribute-0.7 (when it arrives) and parallel install them.  Then we can
patch the setuptools using packages we maintain to check first for the 0.6
distribute and fall back on setuptools if it's not found.  That would
hopefully get a bunch of upstreams onto a better supported code base.

My question is: will Distribute have a parallel installation plan?  For
instance, renaming the module provided by 0.7 to distribute2?  If so, this
makes a lot of sense.  If not, it's the ability of gentoo to reuse the
setuptools name that makes parallel installation of distribute-0.6 and
distribute-0.7 easier.

-Toshio




Re: [Distutils] Making commands extensible by default

2009-04-20 Thread Toshio Kuratomi
David Cournapeau wrote:
 Toshio Kuratomi wrote:
 They don't have bearing on talking about
 redesigning how to design a new architecture that's easy to extend.
   
 
 Those examples show why extending distutils commands with subclassing +
 post processing is not always enough. I don't understand why they would
 not be relevant for the design to improve distutils extensibility. They
 are quite typical of the usual problems I have when I need to extend
 distutils myself.
 
+1 to this argument :-)

subclassing is a bad way to implement extensibility for essentially
imperative tasks.

I was just saying that the fact that distutils commands used from paver
have bugs does not invalidate paver's design of having functions be the
task unit to build upon.

-Toshio





Re: [Distutils] Fixing the mess in sdist/egg_info

2009-04-16 Thread Toshio Kuratomi
Tarek Ziadé wrote:
 Hi,
 
 I am back on that problem with the code that builds the file list. The
 current Distutils trunk isn't working anymore with setuptools because
 of a recursive loop:
 
 distutils.sdist.run() - setuptools.build_py.data_files -
 setuptools.egg_info.run() - distutils.sdist.add_defaults() -
 setuptools.build_py.data_files - etc
 
 The mess is introduced by the fact that build_py is patched by
 setuptools in order to inject the files that are provided by the
 (D)VCS.
 But it also uses some APIs provided by sdist to complete the file list.
 
 In order to solve this, we need to define clearly what is the role of
 each command, and make sure it's a distinct role.
 
 which is not the case right now :
 
 1/ distutils.sdist  = this command is used to build a source distribution
 2/ setuptools.egg_info = this command is used to build an .egg-info
 directory but *also* injects the files founded with (D)VCS in the
 MANIFEST
 3/ distutils.build_py = this command is used to build pure python
 module but *also* to find all the .py files to include in a
 distribution (used by sdist).   In fact, it plays a central role in
 sdist to get the files to include.
 
 Here's a first thaught to discuss:
 
 what about introducing a new simple command called manifest that
 could be used by sdist or any other command, and that would be
 responsible of collecting the files that are part of the distribution
 (and nothing else). This command would generate the MANIFEST file and
 also provide the APIs to get the files included into MANIFEST.
 
 This command would introduce and use a simple plugin system similar to
 the setuptools.file_finders entry points setuptools has for (D)VCS
 files collecting. But with the files list being built as an argument
 to the plugin. (partly described here
 http://wiki.python.org/moin/Distutils/ManifestPluginSystem) so
 Distutils, setuptools or any third party tool can register some code
 that add or remove file in this file list.
 
 The manifest command would provide default plugins, and setuptools
 could refactor part of its code to use the manifest command rather
 than calling sdist APIs. The goal would be to have the same result at
 the end, but make it simpler to extend, and avoid command
 interdependencies like what we have today (and that makes it hard to
 maitain and make evolve).
 
 For instance the MANIFEST.in templating system would be one of the
 default plugin provided by Distutils.
 
 The initial work would consist of refactoring the current code that
 gets the files, using different strategies, into plugins for the new
 manifest command, then make sdist uses this command as a subcommand.
 
I think this is good stuff.  You might find that the pieces that collect
files for the MANIFEST are useful for more than creating the MANIFEST,
though.  Unless I've missed a message saying that we're going to
collapse sdist and package-installed-files together.

 The second phase would consist of working at setuptools level to use
 the same technique. Setuptools would be able to make existing
 setuptools.file_finders entry points work with Distutils manifest
 command registery by providing a bridge.
 
 Now for the plugin system, I see two options so far:
 
 1/ create a simple ad-hoc plugin system in Distutils, and declare the
 plugins in a new section in distutils.cfg / pydistutils.cfg for
 loading them (setuptools existing entry points would be collected
 through a unique plugin declared that would act like a bridge)
 
 2/ use entry points (so add them into Distutils) and define a entry
 point name based on the command name, maybe
 distutils:metadata.file_finders  so the plugin system could be used
 elsewhere in distutils.
 
Having a library that makes creating plugins easy is a good general
purpose thing.  Whatever plugin system is adopted/created, it would be
good for it to not be internal to distutils.  I've always hated that
distutils option handling is different from the option handling that a
coder can use from the standard library.

-Toshio





Re: [Distutils] Making commands extensible by default

2009-04-16 Thread Toshio Kuratomi
Tarek Ziadé wrote:
 Hello,
 
 This is a side discussion but quiet important ihmo.
 
 == Problem ==
 
 Some people complained about the fact that is was hard to extend
 Distutils commands.
 You end up rewriting the whole command most of the time.
 
 So what's a command ? It's a class that is used by the distribution
 instance when you run Distutils.
 
 roughly:
 
 cmd = Command(distribution)
 cmd.initialize_options()
 cmd.finalize_options()---  allows to check the options if
 subcommands where run
 cmd.run()---  runs the code
 
 each command can define sub commands, but most of the time it's a
 harcoded list, so you need to inherit the command
 if you want to add a new behavior.
 
 == work in progress, ==
 
 What we want to do here is being able to define subsets in run(),
 sharing the same options environment.
 
 so basically, a rough, generic run() method could be:
 
 def run():
for func in some_funcs:
   func(self, options)
 
 If some_funcs could be defined by a registery with simple names,
 anyone could provide new functions
 and configure the registery to run a sequence of function.
 
 Given a command name, Distutils can get this list of function, through
 a registery.
 Each function could register itself into Distutils, like in what I
 have started to work here for the manifest file:
 see http://wiki.python.org/moin/Distutils/ManifestPluginSystem
 
 The ordering would be configurable through the setup.cfg file.
 
 Any opinion, idea for this part ?
 
Have you looked at paver?  It's syntax makes extension easy.

@task
def run():
    function1()
    function2()
    function3()

or
@task
@needs([function1, function2, function3])
def run():
    pass

So if I want to do everything that setuptools.command.install does and
also install locale files using my own function I do:

@task
def install_locales():
    # My code to install locales goes here
    pass

@task
def install():
    # A new install task that overrides the system install task
    # Note that I control ordering just by changing the order
    # subtasks get called
    call_task('setuptools.command.install')
    call_task('install_locales')

-Toshio





[Distutils] Symlinks vs API -- question for developers

2008-10-17 Thread Toshio Kuratomi
So I have a question for all the developers on this list.  Philip thinks
that using symlinks will drive adoption better than an API to access
package data.  I think an API will have better adoption than a symlink
hack.  But the real question is what do people who maintain packages
think?  Since Philip's given his reasoning, here's mine:

1) Philip says that with symlinks distributions will likely have to
submit patches to the build scripts to tag various files as belonging to
certain categories.  If you, as an upstream are going to accept a patch
to your build scripts to place files in a different place wouldn't you
also accept a patch to your source code to use a well defined API to
pull files from a different source?  This is a distribution's bread and
butter and if there's a small, useful, well-liked, standard API for
accessing data files you will start receiving patches from distributions
that want to help you help them.

2) Symlinks cannot be used universally.  Although it might not be common
to want an FHS style install in such an environment, it isn't unheard
of.  At one time in the distant past I had to use cygwin so I know that
while this may be a corner case, it does exist.

3) The primary argument for symlinks is that symlinks are compatible
with __file__.  But this compatibility comes at a cost -- symlinks can't
do anything extra.  In a different subthread Philip argues that
setuptools provides more than distutils and that's why people switch and
that the next generation tool needs to provide even more than
setuptools.  Symlinks cannot do that.

4) In contrast an API can do more:  It can deal with writable files. On
Unix, persistent, per user storage would go in the user's home
directory, on other OS's it would go somewhere else.  This is
abstractable using an API at runtime but not using symlinks at install time.

5) cross package data.  Using __file__ to detect file location is
inherently not suitable for crossing package boundaries.  Egg
Translations would not be able to use a symlink based backend to do its
work for this reason.

6) zipped eggs.  These require an API.  So moving to symlinks is
actually a regression.

7) Philip says that the reason pkg_resources does not see widespread
adoption is that the developer cost of using an API is too high compared
to __file__.  I don't believe that the difference between file and API
is that great.  An example of using an API could be something like this:

Symlinks::
  import os
  icondirectory = os.path.join(os.path.dirname(__file__), 'icons')

API::
  import pkgdata
  icondirectory = pkgdata.resource(pkg='setuptools', \
                                   category='icon', resource='setuptools.png')

Instead I think the data handling portion of pkg_resources is not more
widely adopted for these reasons:

* pkg_resources's package handling is painful for the not-infrequent
corner cases.  So people who have encountered the problems with
require() not overriding a default or not selecting the proper version
when multiple packages specify overlapping version ranges already have a
negative impression of the library before they even get to the data
handling portion.

* pkg_resources does too much: loading libraries by version really has
nothing to do with loading data for use by a library.  This is a
drawback because people think of and promote pkg_resources as a way to
enable easy_install rather than a way to enable abstraction of data
location.

* The only benefit (at least, being promoted in the documentation) is to
allow zipped eggs to work.  Distributions have no reason to create
zipped eggs so they have no reason to submit patches to upstream to
support the pkg_resources api.

* Distributions, further, don't want to install all-in-one egg
directories on the system.  The pkg_resources API just gets in the way
of doing things correctly in a distribution.  I've had to patch code to
not use pkg_resources if data is installed in the FHS mandated areas.
Far from encouraging distributions to send patches upstream to make
modules use pkg_resources this makes distributions actively discourage
upstreams from using it.

* The API isn't flexible enough.  EggTranslations places its data within
the metadata store of eggs instead of within the data store.  This is
because the metadata is able to be read outside of the package in which
it is included while the package data can only be accessed from within
the package.


8) To a distribution, symlinks are just a hack.  We use them for things
like php web apps when the web application is hardcoded to accept only
one path for things (like the writable state files being intermixed with
the program code).  Managing a symlink farm is not something
distributions are going to get excited over, so buy-in from distributions
that this is the way to work with files won't happen until upstreams
move on their own.

Further, since the install tool is being proposed as a separate project
from the metadata to mark files, the expectation is that the

Re: [Distutils] [Catalog-sig] distribute D.C. sprint tasks

2008-10-16 Thread Toshio Kuratomi
Martin v. Löwis wrote:
 Right, please take a look at my last version 
 http://wiki.python.org/moin/PEP_374
 it tries to go in that direction
 
 For such an infrastructure (which apparently intends to mirror the files
 as well), I insist that a propagation of download counters is made
 mandatory. The only mirrors that can be excused from that are private
 ones.

This may not apply to pypi, as the sites that volunteer to mirror you may
be different, but Linux distributions have found that it is easier to get
mirrors if the mirror admins can run as little custom stuff as possible.
 ie: If they can retrieve content from the master mirror via a simple
rsync cron job that they write they are happiest.  We have found other
ways to generate statistics regarding download in these cases (for
instance, based upon how many calls to retrieve the mirrorlist or how
many calls for specific packages via the mirror redirector).

As I say, whether this is a problem for you will depend on the
willingness of the sites that are mirroring you to run scripts and code
that you've written rather than their own.

-Toshio





Re: [Distutils] pre-PEP : Synthesis of previous threads, and irc talks + proposals

2008-10-05 Thread Toshio Kuratomi
zooko wrote:
 On Oct 1, 2008, at 19:10 PM, Tarek Ziadé wrote:
 
 I hate the idea of dynamic metadata in fact. I can't express precisely
 why at that point.
 
 Me too and me too.
 
 Perhaps it would help to distinguish between requiring a certain
 functionality and requiring a specific codebase which implements that
 functionality.
 
 For example: distribution A requires the functionality of ctypes.  That
 part is statically, declaratively always true.
 
 However, distribution A doesn't necessarily require a *distribution*
 named ctypes.  If you are running on Python 2.6, then that
 functionality is already present.  If there is a new distribution out
 there named new_ctypes which provides the same functionality and the
 same interface but is a completely different code base, then the
 presence of new_ctypes satisfies distribution A's requirements.
 
 The former question is simple, static, and declarative.  The latter
 question isn't.
 
 In most cases there is only one implementation of a given interface, so
 we make do by equating the interface with the implementation.
 
 I wonder how Debian and Fedora handle this sort of issue?
 
With python modules we just require one thing providing the interface.
Let's say that elementtree was merged into python-2.5.  And let's say
that we got python-2.5 as the default python in Fedora 7.  Since we only
have one version of python in any release of Fedora we do something like
this:

  Requires: python
  %if 0%{?fedora} < 7
  Requires: python-elementtree
  %endif

We are thinking of enhancing what dependency information we Require and
Provide (the problem being... we want to do this automatically.)  If we
get that working, we could do things like:

  Requires: python(elementtree)

and in Fedora 6, python-elementtree would have:
  Provides: python(elementtree)

whereas Fedora 7+, the python package would have:
  Provides: python(elementtree)

Note that this information is not as easy to get to as the metadata
provided by eggs so we're still trying to come up with a script that
will generate this data automatically.
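
For egg metadata, by contrast, a rough sketch of such a generator could be
as small as this (illustrative only -- not the script Fedora ended up
using):

import pkg_resources

# emit an rpm-style Provides line for every distribution visible
# to pkg_resources on this system
for dist in pkg_resources.working_set:
    print('Provides: python(%s) = %s' % (dist.project_name, dist.version))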

-Toshio





Re: [Distutils] Python Package Management Sucks

2008-10-02 Thread Toshio Kuratomi
Phillip J. Eby wrote:
 At 07:14 PM 10/1/2008 -0700, Toshio Kuratomi wrote:
 In terms of implementation I'd much rather see something less centered
 on the egg being the right way and the filesystem being a secondary
 concern.
 
 Eggs don't have anything to do with it; in Python, it's simply common
 sense to put static resources next to the code that uses them, if you
 want to write once, run anywhere.  And given Python's strength as an
 interactive development language with no build step, having to
 *install* your data files somewhere else on the system to use them isn't
 a *feature* -- not for a developer, anyway.
 
You're arguing about the developer's point of view on something that's
hidden behind an API.  You've already made it so that the developer
cannot just reference the file on the filesystem because the egg may be
zipped.  So for the developer there's no change here.

I'm saying that there's no need to have a hardcoded path to lookup the
information at and then make the install tool place forwarding
information there to send the package somewhere else.  We have
metadata.  We should use it.

 And our hypothetical de-jure standard won't replace the de-facto
 standard unless it's adopted by developers...  and it won't be adopted
 if it makes their lives harder without a compensating benefit.  For the
 developer, FHS support is a cost, not a benefit, and only relevant to a
 subset of platforms, so the spec should make it as transparent for them
 as possible, if they don't have an interest in explicit support for it. 
 By the STASCTAP principle (Simple Things Are Simple, Complex Things Are
 Possible), it should be possible for distros to relocate, and simple for
 developers not to care about it.
 
It's both a cost and a benefit.  The cost is having to use an API which
they have to use anyway due to eggs possibly being zip files.  The
benefit is getting their code packaged by Linux distributors quicker and
getting more contributors as a result of the exposure.

 
   We should have metadata that tells us where the types of
 resources come from.  When a package is installed on Linux the metadata
 could point locales at file:///usr/share/locale.  When on Windows
 egg:locale (Perhaps the uninstalled case would use this too... that
 depends on how the egg structure and metadata evolves.)

 A question we'd have to decide is whether this particular metadata is
 something that should be defined globally or per package.  Or globally
 with a chance for packages to override it.
 
 I think install tools should handle it and keep it out of developers'
 hair.  We should of course distinguish configuration and other writable
 data from static data, not to mention documentation.  Any other
 file-related info is going to have to be optional, if that.  I don't
 really think it's a good idea to ask developers to fill in information
 they don't understand.  A developer who works entirely on Windows, for
 example, is not going to have a clue what to specify for FHS stuff, and
 they absolutely shouldn't have to if all they're doing is including some
 static data.
 
Needing to have some information about the files you ship is inevitable.
 Documentation is a good example.  man pages, License.txt, gnome help
files, windows help files, API docs, sphinx docs, etc each have to be
installed in different places, some with requirements to register the
files so the system knows they exist.  All the knowledge about what to
do with these files should be placed in the tool.  But the knowledge of
what type to mark a given file with will have to lay with the developer.

 Even today, there exist Python developers who don't use the distutils to
 distribute their packages, so anything that makes it even more difficult
 than it is today, isn't going to be a viable standard.  The closer we
 can get in ease of use to just tarring up a directory, the more viable
 it'll be.  (That's one reason, btw, why setuptools offers revision
 control support and find_packages() for automating discovery of what to
 include.)
 
Actually, as a person who distributes upstream packages which don't use
distutils and is exposed to others that do the same, I'd say that the
shortcomings in terms of where to install files and how to reference the
files after install are among the reasons that distutils is not used.
Are there other reasons?  Sure.  But this is definitely one of them.

 
  I'd have preferred to avoid that complexity, but if the two of us can't
  agree then there's no way on earth to get a community consensus.
 
  Btw, pkg_resources' concept of metadata would also need to be
  relocatable, since e.g. the EggTranslations package uses that
 metadata
  to store localizations of image resources and message catalogs.  (Other
  uses of the metadata files also inlcude scripts, dependencies, version
  info, etc.)
 
 Actually, we should decide whether we want to support that kind of thing
 within the egg metadata at all.  The other things we've been talking
 about

Re: [Distutils] Python Package Management Sucks

2008-10-01 Thread Toshio Kuratomi
You guys are fairly into your debate so hopefully I don't interject
something that's already been gone over :-)

Chris Withers wrote:
 Matthias Klose wrote:
 Install debian and get back to productive tasks.
 This is an almost troll-like answer.
 See page 35 of the presentation.

 I disagree. You could think of Packages are Pythons Plugins (taken
 from page 35) as a troll-like statement as well.
 
 You're welcome to your (incorrect) opinion ;-)
 Debian packages could just as easilly be seen as Debian's pluggins.
 
For a *very* loose definition of plugin, perhaps.  But if you look at:
  http://en.wikipedia.org/wiki/Plugin

the idea of Debian packages being plugins is a pretty far stretch.  The
idea of Packages being python plugins is less of a stretch but I'd call
it an analogy.  It's useful for looking on things in a new light but if
we start designing a plugin interface and only viewing packages through
that definition I think we'll be hindering ourselves.

 - all the package management systems behave differently and expect
 packages to be set up differently for them

 correct, but again they share common requirements.
 
 ...but all have different implementations.
 
The common requirements are more important than the varying
implementations when thinking about the metadata and how flexible things
need to be.  When justifying the need for a separate python build tool
and distribution format, realizing that there's different
implementations is good.  ie: we need to expose package naming,
versioning, and dependencies to outside tools because they have a common
need for that information on the one hand.  We have to realize that
there's a need for both run-from-egg and run-from-FHS-locations on the
other.

 some people prefer to name this stable releases instead of
 bitrot. 
 
 I'll call bullshit on this one. The most common problem I have as a
 happy Debian user and advocate when I go to try and get help for a
 packaged application (I use packages because I perhaps mistakenly assume
 this is the best way to get security-fixed softare), such as postfix,
 postgres, and Zope if I was foolish enough to take that path, is why
 are toy using that ancient and buggy version of the software?! shortly
 before pointing out how all the issues I'm facing are solved in newer
 (stable) releases.
 
 The problem is that first the application needs to be tested and
 released by its community, then Debian needs to re-package, patch,
 generally mess around with it, etc before it eventually gets a Debian
 release. It's bad enough with apps with huge support bases like
 portgres, imagine trying to do this properly for the 4000-odd packages
 on PyPI...
 
You're correct in the results you're seeing but not in the reason that
it exists.  There are many linux distributions and each has a different
policy of how to update packages.  The reason for the variety is that
there's demand for both fast package updates and slow package updates.
The Debian Stable, Red Hat Enterprise Linux, and other stable,
enterprise-oriented distributions' aim is to provide a stable base on
which people can build their applications and processes.  A common
misperception among developers who want faster cycles is that the base
system is just a core of packages while things closer to the leaves of
the dependency tree could be updated (ie: don't update the kernel; do
update the python-sqlalchemy package).  What's not seen is that these
distributions are providing the base for so many people that updates
that change the API/ABI/on-disk format/etc are likely to break *someone*
out there.  You want to be using one of these systems if you have
deployed a major application that serves thousands of people and can
afford little to no downtime because you can be more assured that any
changes to the system are either changes that are overwhelmingly
necessary and the API/ABI breakage has been reduced as much as possible
or changes that you yourself have introduced.

For system administrators it can also be frustrating due to knowing that
there's been bug fixes that are not supposed to change backwards
compatibility in newer upstream packages.  The problem here is that we
all know that all software has bugs.  The risk with an update to a newer
stable version of software is that the new software has bugs that are as
bad or worse than the old one.  The package maintainers have to evaluate
how many changes have gone into the new version of the software and how
big the current problem is and then apply the distribution's policy on
updates to that.  For a stable enterprise-oriented distro, it's often a
case of better the devil you know than the devil you don't.

For a developer of software or someone deploying a new system (as
opposed to someone who's had one deployed for several years before they
hit a certain bug), this can be quite frustrating as you know that there
are fixes and features in newer versions of the software.  When you have
the choice, then, you should use one 

Re: [Distutils] Msgfmt in distutils?

2008-10-01 Thread Toshio Kuratomi
Jeroen Ruigrok van der Werven wrote:
 -On [20081001 16:28], Toshio Kuratomi ([EMAIL PROTECTED]) wrote:
 and have distutils do the right thing with the .po files at build time
 (generate .mo files from them) and at install time (install them into
 PREFIX/share/locales/LC_MESSAGES/, or wherever the distribution is
 configured to put them).
 
 [snip]
 
 This has been a big deal for some applications I work on.  Our first cut
 was to add new Build and InstallData command classes.
 
 Actually with Babel (http://babel.edgewall.org/) that's all handled.
 
That's good to know.  One of our Turbogears applications uses Babel and
it definitely doesn't install to the right place.  I'd love to fix it to
take advantage of Babel' properly.  Would you be kind enough to point me
documentation on how to get Babel to install locale files?  Looking at
the babel website, I only see documentation up to building the message
catalogs.  If the install portion is integrated into setuptools is there
something I might have to configure in setup() to tell babel/setuptools
what directory to use?

-Toshio





Re: [Distutils] Python Package Management Sucks

2008-10-01 Thread Toshio Kuratomi
Phillip J. Eby wrote:
 At 11:00 AM 10/1/2008 -0700, Toshio Kuratomi wrote:
 I have no love for how pkg_resources implements this (including the API)
 but the idea of retrieving data files, locales, config files, etc from
 an API is good.  For packages to be coded that conform to the File
 Hierachy Standard on Linux, the API (and metadata) needs to be more
 flexible.
 
 There's some confusion here.  pkg_resources implements *resource*
 management and *metadata* management...  NOT file management.
 
 Resource files and metadata are no more data in the FHS sense than
 static data segments in a .so file are; they are simply a more
 convenient way of including such data than having a giant base64 string
 or something like that hardcoded into the program itself.  There is thus
 no relevance to the FHS and absolutely no reason for them to live
 anywhere except within the Python packages they are a part of.
 
If we can agree on a definition of resource files there's a case to be
made here.  One of the problems, though, is that people use
pkg_resources for things that are data.  Now there could be two reasons
for that:

1) Developers are abusing pkg_resources.
2) Linux distributions disagree with you on what constitutes data vs a
resource.

Let's discuss the definition of resource vs data below (since you made a
good start at it) and we can see which of these it is.

 
   We need to be able to mark locale, config, and data files in
 the metadata.
 
 Sure...  and having a standard for specifying that kind of
 application/system-level install stuff is great; it's just entirely
 outside the scope of what eggs are for.
 
 To be clear, I mean here that a file (as opposed to a resource) is
 something that the user is expected to be able to read or copy, or
 modify.  (Whereas a resource is something that is entirely internal to a
 library, and metadata is information *about* the library itself.)
 
metadata, I haven't even begun to think about yet.  I personally don't
see a huge need to shift it around on the filesystem but someone who's
thought about it longer might find reasons that it belongs in some other
place.

resources, as I said needs to be defined.  You're saying here that a
resource is something internal to the library.  A file is something
that a user can read, copy, or modify.

In a typical TurboGears app, there's the following things to be found
inside of the app's directory in site-packages:

config/{app.cfg,__init__.py,log.cfg} - These could go in /etc/ as their
configuration.  However, I've tried to stress to upstream that only
things that configure the TurboGears framework for use with their app
should go in these files (default templating language, identity
controller).  When those things are true, I can see this as being an
internal resource.  If upstream can't get their act together, it's config.

locale/{message catalogs for various languages} --  These are binary
files that contain strings that the user may see when a message is
given.  These, I think, are data.

templates/*html -- These are templates that the application fills in
with variables intermixed with short bits of code.  These are on the
border between code and data.  The user sees them in a modified form.
The app sometimes executes pieces of them before the user sees them.
Some template languages create python byte code from the templates,
others load them and write into them every time.  None of them can be
executed on their own.  All of them have to be loaded by a call to parse
them from a piece of python code in another file.  None of them are
directly called or invoked.  My leaning is that these are data.

static/{javascript,css,images} -- These are things that are definitely
never executed.  They are served by the webserver verbatim when their
URL is called.  These are certainly data. (Note: I don't believe these
are referenced using the resources API, just via URL.)

So... do you agree on which of these are data and which are resources?
Do you have an idea on how we can prevent application and framework
writers from misusing the resources API to load things that are data?

 
   The build/install tool needs to be able to install those
 into the filesystem in the proper places for a Linux distro, an egg,
 etc.  and then we need to be able to call an API to retrieve the
 specific class of resources or a directory associated with them.
 
 Agreed...  assuming of course that we're keeping a clear distinction
 between static resources+metadata and actual data (e.g. configuration)
 files.
 
 
nod.  The definition and distinction is important.

-Toshio





Re: [Distutils] Python Package Management Sucks

2008-10-01 Thread Toshio Kuratomi
Phillip J. Eby wrote:
 At 09:40 PM 10/1/2008 +0200, Josselin Mouette wrote:
 On Wednesday, October 1, 2008 at 14:39 -0400, Phillip J. Eby wrote:
 We need to be able to mark locale, config, and data files in
  the metadata.
 
  Sure...  and having a standard for specifying that kind of
  application/system-level install stuff is great; it's just entirely
  outside the scope of what eggs are for.

 I don’t follow you. If the library needs these files to work, you
 definitely want to ship them, whether it is as their FHS locations in a
 package, or in the egg.

 Egg files aren't an all-purpose distribution format; they were designed
 for application plugins, and for libraries needed to support application
 plugins.  As such, they're self-contained and weren't designed for
 application-level installation support, such as documentation,
 configuration or data files, icons, etc.

 As has been pointed out, these are deficiencies of .egg files wrt the
 full spectrum of library and application installation needs, which is
 why I'm pushing for us working on an installation metadata standard that
 can accommodate these other needs that the .egg layout isn't really
 suited for.

We need to get the list of problems up somewhere on the wiki so that
people can check that the evolving standard doesn't fall into the same
pitfalls.  After all, people are using the egg and pkg_resources API for
just this purpose today with some happy about it and others not so much.

 My main point about the resources is simply that it's a needless
 complication to physically separate static data needed by a library at
 runtime, based solely on its file extension, in cases where only that
 library will be reading that file, and the file's contents are constant
 for that version of the library.

 To put it another way, if some interpretation of the FHS makes a
 distinction between two files encoding the same data, one named foo.bar
 and foo.py, where the only difference between the two is the internal
 encoding of the data, then that interpretation of the FHS is not based
 on any real requirement, AFAICT.

Actually, file encoding is one major criterion in the FHS.  However, it's
probably not in the manner you're thinking of :-)  Files which are
architecture dependent generally need to be separated from files which
are architecture independent.  Since text files and binary data which
has a standard byte-oriented format are generally what's used as data
these days it's the major reason that data files usually go in
/usr/share while libraries/binaries go in /usr/lib and /usr/bin.  This
is due to the range of computers that architecture dependent vs
architecture independent data can be shared with.  Of course, part of
python's site-packages on Linux systems violates this rule as python can
split architecture dependent and architecture independent packages from
one another.  I know that some distributions have debated moving the
architecture independent portion of site-packages to /usr/share although
I don't know if any have (Josselin, has Debian done this?)  The idea of
moving is not straight forward because of 1) compatibility with
unpackaged software and 2) /usr/share is seen in two lights: the place
for architecture independent files and the place for data; /usr/lib is
seen in two lights: the place for architecture dependent non-executables
and the place for code whose instructions are run by executables.

 Of course, for documentation, application icons, and suchlike, the data
 *will* be read by things other than the library itself, and so a
 standardized location is appropriate.  The .egg format was designed
 primarily to support resources read only by the package in question, and
 secondarily to support metadata needed by applications or libraries that
 the package plugs in to.  It was not originally intended to be an
 general-purpose system package installation format.

nod.  Despite this design, it's presently being used for that.  So we
need to figure out what to do about it.


  To be clear, I mean here that a file (as opposed to a resource) is
  something that the user is expected to be able to read or copy, or
  modify.  (Whereas a resource is something that is entirely internal
  to a library, and metadata is information *about* the library itself.)

 It’s not as simple as that. Python is not the only thing out there, and
 there are many times where your resources need to be shipped in existing
 formats, in files that land at specific places. For example icons go
 in /usr/share/icons, locale files in .mo format in /usr/share/locale,
 etc.

 And docs need to go in /usr/share/doc, I presume.

docs are special in the packaging world on several accounts.  Generally
the packager has to collect at least some of the docs themselves (as
things like LICENSE.txt aren't normally included in a doc install but
are important for distributions to package.)  rpm, at least, provides a
macro to make it easy for the packager to mark files and 

Re: [Distutils] Msgfmt in distutils?

2008-10-01 Thread Toshio Kuratomi
Philip Jenvey wrote:
 
 On Oct 1, 2008, at 11:25 AM, Toshio Kuratomi wrote:
 
 Jeroen Ruigrok van der Werven wrote:
 -On [20081001 16:28], Toshio Kuratomi ([EMAIL PROTECTED]) wrote:
 and have distutils do the right thing with the .po files at build time
 (generate .mo files from them) and at install time (install them into
 PREFIX/share/locales/LC_MESSAGES/, or wherever the distribution is
 configured to put them).

 [snip]

 This has been a big deal for some applications I work on.  Our first
 cut
 was to add new Build and InstallData command classes.

 Actually with Babel (http://babel.edgewall.org/) that's all handled.

 That's good to know.  One of our Turbogears applications uses Babel and
 it definitely doesn't install to the right place.  I'd love to fix it to
take advantage of Babel properly.  Would you be kind enough to point me
to documentation on how to get Babel to install locale files?  Looking at
 the babel website, I only see documentation up to building the message
 catalogs.  If the install portion is integrated into setuptools is there
 something I might have to configure in setup() to tell babel/setuptools
 what directory to use?
 
 Once you have Babel generating .mo files, all you'll need is a
 package_data entry for them, e.g.:
 
 package_data={'foo': ['i18n/*/LC_MESSAGES/*.mo']},
 
 then the catalogs will make it into the final sdist/egg and be included
 during an installation.
 
Thanks!  This isn't quite what I was asking, though.  the orignal poster
was asking how to install the catalogs into /usr/share/locale, the
proper directory on a Linux system.  I thought babel was able to do
that but it seems babel currently just handles the creation and
maintenance of the message catalogs.  Which is a huge thing!  I just was
hoping to get rid of my ugly code to move the catalogs into the system
directory.
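
In the meantime, what our ugly code boils down to is roughly this; a
minimal sketch, assuming the catalogs live under po/ and the project is
called myapp (both names hypothetical), and that Babel's compile_catalog
command has already produced the .mo files::

    import glob
    import os
    from setuptools import setup

    # map each po/<lang>/LC_MESSAGES/myapp.mo onto
    # <prefix>/share/locale/<lang>/LC_MESSAGES/myapp.mo
    mo_files = []
    for po in glob.glob('po/*/LC_MESSAGES/myapp.po'):
        lang_dir = os.path.dirname(po)            # e.g. po/de/LC_MESSAGES
        mo = os.path.join(lang_dir, 'myapp.mo')   # built by compile_catalog
        # strip the leading 'po/' so the install lands under share/locale
        mo_files.append((os.path.join('share/locale', lang_dir[3:]), [mo]))

    setup(
        name='myapp',
        version='0.1',
        packages=['myapp'],
        setup_requires=['Babel'],
        data_files=mo_files,   # data_files paths are relative to the prefix
    )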

-Toshio





Re: [Distutils] Python Package Management Sucks

2008-10-01 Thread Toshio Kuratomi
Phillip J. Eby wrote:
 At 03:14 PM 10/1/2008 -0700, Toshio Kuratomi wrote:
resources, as I said, needs to be defined.  You're saying here that a
 resource is something internal to the library.  A file is something
 that a user can read, copy, or modify.
 
 I should probably clarify that I mean unmediated by the program... 
 which is why I disagree regarding message catalogs.  They're not
 user-modifiable and there's nothing you can usefully do with them
 outside the program that uses them.  Of course, a default message
 catalog for someone to use to *create* translations from might be
 another story...
 
<nod>  this is what I was afraid of. This is definitely not a definition
of resource-only that has meaning for Linux distributions.  None of the
data in /usr/share is user-modifiable (a tiny bit of it is copiable for
the user to then edit the copy) and although a good fraction of it is
usable outside the program that uses it, a much larger portion is taken
up with things that are used by one program.

I could go through the examples below and tell why Linux distributions
feel the way they do but I don't think it's necessary.  Whether they're
data or resources, the files need to be relocatable.  And they need to
be accessed via an API for that to work.  So as long as we're agreed
that these have to be included in the egg on some platforms and in the
filesystem on others then I think we know what needs to be done.

[...]

 So... do you agree on which of these are data and which are resources?
 Do you have an idea on how we can prevent application and framework
 writers from misusing the resources API to load things that are data?
 
 Apparently not.  The alternative I would suggest is that under the new
 standard, an install tool should be allowed to relocate any non-Python
 files, and all access has to go through a resource API.  The install
 tool would then have to be responsible for putting some kind of
 forwarding information in the package directory to tell the resource API
 where it squirrelled the file(s) off to.  Then we can avoid all this
 angels-on-a-pin argument and the distros can Have It Their Way[tm].
 
In terms of implementation I'd much rather see something less centered
on the egg being the right way and the filesystem being a secondary
concern.  We should have metadata that tells us where the types of
resources come from.  When a package is installed on Linux the metadata
could point locales at file:///usr/share/locale.  When on Windows it
could point at egg:locale.  (Perhaps the uninstalled case would use this
too... that depends on how the egg structure and metadata evolves.)

A question we'd have to decide is whether this particular metadata is
something that should be defined globally or per package.  Or globally
with a chance for packages to override it.
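
To make the idea concrete, here is a sketch of what such a lookup could
look like.  None of this is an existing API; the mapping, the category
names, and the egg:/file: notation are all hypothetical::

    import os

    # hypothetical per-category metadata, written by the install tool;
    # 'file:' points at a filesystem directory, 'egg:' means the files
    # are still inside the installed package/egg
    RESOURCE_MAP = {
        'locale': 'file:///usr/share/locale',   # Linux: relocated
        # 'locale': 'egg:locale',               # Windows / uninstalled
    }

    def resource_path(category, relpath, egg_dir):
        # resolve a resource through the relocation metadata
        base = RESOURCE_MAP.get(category, 'egg:' + category)
        if base.startswith('file://'):
            return os.path.join(base[len('file://'):], relpath)
        return os.path.join(egg_dir, base[len('egg:'):], relpath)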

 I'd have preferred to avoid that complexity, but if the two of us can't
 agree then there's no way on earth to get a community consensus.
 
 Btw, pkg_resources' concept of metadata would also need to be
 relocatable, since e.g. the EggTranslations package uses that metadata
 to store localizations of image resources and message catalogs.  (Other
 uses of the metadata files also inlcude scripts, dependencies, version
 info, etc.)
 
Actually, we should decide whether we want to support that kind of thing
within the egg metadata at all.  The other things we've been talking
about belonging in the metadata are simple key value pairs.
EggTranslations uses the metadata area as a data store.  (Or in your
definition, a resource store).  This breaks with the definition of what
metadata is.  Translations don't store information about a package, they
store alternate views of data within the package.

While the simple key value pairings can be located in either setuptools
.egg-info directories or python-2.5+ distutils .egg-info files, the data
store in EggTranslations can only be placed in directories.

Having a data store/resource store API would be more appropriate for the
kinds of things that EggTranslation is doing.

-Toshio





Re: [Distutils] just use debian

2008-10-01 Thread Toshio Kuratomi
David Cournapeau wrote:
 Josselin Mouette wrote:
 Indeed, and the reason is that *functions never disappear from the
 glibc*.
 
 Yes and no. If you remove a function, you're indeed screwed, because you
 can't handle versioning in the header. But you can handle versioning in
 libraries at the link step, and file name of the library is an
 implementation detail of this versioning.
 
I'm not 100% certain but I think that Josselin is speaking of glibc in
particular here and you're speaking of c libraries in general.

 I don’t think Mono causes issues, but I’m pretty sure that allowing
 multiple versions like the GAC allows *will* causes issues. Not for
 purely-packaged things, where you can safely ignore those directory
 renames, but if you start mixing distributed packages and
 user-downloaded stuff, they will run into the same issues we have with
 setuptools.
 
 Please read the article carefully, it is not only about the GAC. It does
 handle the two conflicting issues: API stability installed globally vs
 easiness of deployment. That's why it is an interesting read IMHO: it
 addresses both issues. I don't think there is a single chance to see
 something as strong as C for python, because it would severely undermine
 the whole idea of the language used for prototyping.
 
Mono is absolutely horrid in this regard.  Those who care about mono
(not our most persuasive speakers, I'm afraid) have asked upstream to
stop making that a best practice for Mono applications.

I've said before that ideally a Linux distribution only wants one
version of a library.  In Fedora we're willing to have compat packages
that hold old API versions if we must but by and large we would rather
help upstream apps port their applications forward than to have
compatibility packages.  This is because upstream for the library will
always be focusing on the newer versions of the libraries, not the older
versions.  If applications stay stuck on older versions, we end up
having to support libraries by ourselves with no upstream to help us
with the old version.

As much as I'd rather not have compat packages, having private versions
of third party libraries as advocated in that Mono document is worse.

The primary problem is security.  If a distro allows application to have
their own private copies of libraries and a security flaw is discovered
we're going to hate life.  We'll have to:

1) Find what packages include that library.  Unlike when the link goes
to a system installed library, this will not cause a dependency between
packages.  So we can't just query the package metadata to find out what
packages are affected.

2) Fix all the versions in all the packages.  Because each package
includes its own version of the library, there are multiple versions of
the library in these packages.  If we're unlucky the change will be
conceptual and we'll have to fix lots of different looking code.

3) Push rebuilds of all the fixed packages out that our users have to
download.  There's PR involved here: "Security fix to Library Foo" vs
"Security Fix to Library Foo, Application, Bar, Baz, [...] Zod".  There's
also the burden for the users to download the packages.

Compare this with having to fix a set of compat library packages that's
not included in other applications:

1) Find all the libraries in the affected set.  It will probably be
enough to look by package name since these will be like:
python-foo1-1.0, python-foo2-2.2, python-foo-3.0

2) Fix the library (probably with help from upstream) and the
compat-libraries (maybe with upstream help or maybe on our own).

3) Push rebuilds of the library packages for our users to download.

Another concern is licensing.  Anytime a package includes other, third
party modules, the licensing situation becomes more complex.  Are the
licensing terms of any of the works being violated?  How do we have to
list the licensing terms in the package?  Are licensing terms for all
the packages available?  Is everything open source?  (Believe it or not,
we do find non-OSS stuff in third party directories when we audit these
bundled packages.) :-(

Another concern is not giving back to upstream.  Once a package starts
including its own, private copies of a library it becomes more and more
tempting for the package to make bug fixes and enhancements on its own
copy.  This has two serious problems: 1) It becomes harder to port the
application forward to a new version because this is no longer what
upstream has shipped at any time.  2) The changes may not get back to
upstream at all.  Those bug fixes and feature enhancements may end up
being only part of this package, even though the whole community would
benefit.

Another concern is build scripts that become tied to building
with/installing the private versions.  Distributions have policies on
inclusion of third party libraries in another application.  Sometimes
upstream has a reason to include a copy of a library for compatibility
on Windows or for customers who aren't going to get 

Re: [Distutils] Python Package Management Sucks

2008-10-01 Thread Toshio Kuratomi
Greg Ewing wrote:
 Toshio Kuratomi wrote:
 
 nod  this is what I was afraid of. This is definitely not a definition
 of resource-only that has meaning for Linux distributions.  None of the
 data in /usr/share is user-modifiable
 
 In that case it must be there because it's architecture-independent,
 right?
 
...That doesn't follow from what I said, but it's true :-)

 But by that criterion, all .py files should be in /usr/share, too.
 

I mentioned in a different post that this has been considered by several
distributions.  Note that not all .py files can be shifted due to the
way python parses modules.  But certainly modules which are pure python
could be moved.  Reasons that Fedora hasn't done this are:

1) Historical: .py files have been in /usr/lib/python2.5/site-packages
for a long time.
2) Compatibility with third parties: Unfortunately not everyone uses
distutils.  If we shifted the location to /usr/share and users installed
those packages into /usr/lib it would fail.
3) /usr/share has two purposes/criteria[1]_: architecture independent
and datafiles.  /usr/lib has two criteria[2]_: architecture dependent
and libraries.  With .py{,c,o} we have both architecture independence and
a library.  So the criteria are in conflict with each other.

There may be more reasons, I'm in the /usr/share camp but not so much
that I'll keep bringing it up when there's no new arguments to give.

Note that Debian has done a lot of neat things with python source
recent(ish).  Josselin, Matthias, and some of the other Debian devs
could tell us if .py files get installed to /usr/share there.

.. _[1]:
http://www.pathname.com/fhs/pub/fhs-2.3.html#USRSHAREARCHITECTUREINDEPENDENTDATA
.. _[2]:
http://www.pathname.com/fhs/pub/fhs-2.3.html#USRLIBLIBRARIESFORPROGRAMMINGANDPA

 Also all shell scripts, Perl code, awk/sed scripts, etc, etc.

Things that are directly executable belong in a bin directory.  There
are next to no shell script libraries, just scripts.  Perl, awk, sed,
etc *scripts*  end up in /bin as well.  To my knowledge perl doesn't
support the split architecture independent library location/architecture
dependent library location that python does so everything goes into
/usr/lib.  Mono assemblies do not go in /usr/share because of a pair of
limitations of the mono vm.  java jars go in /usr/share.  The m4 macros that
autoconf/automake use go there as well.

Programs that are written in python but don't want to expose their
internals to the outside world have their code under /usr/share.  We
make php apps do the same.  Perl is probably the same although I haven't
looked at an actual multi-file perl program in, well, I don't
remember when, so I don't know.

 Does the FHS specify that?
 
The FHS sets out certain rules and criteria.  Linux vendors have
interpreted them and sometimes the standard is updated due to either
current practice or clarification of former practice.  I don't believe
that FHS specifies that .py files go in /usr/lib or /usr/share.  The
rules state things like "architecture independent data file" which is
why there's some grey area for /usr/lib/python's .py files.

Note that although I'm happy to talk about the FHS here, I'm not
involved with creating the standard.  I'm also only one packager from
one distro.  So I'm happy to help answer questions about the FHS and how
Fedora interprets it but am not in any better position to change it than
any of you.

-Toshio





Re: [Distutils] Python Package Management Sucks

2008-09-30 Thread Toshio Kuratomi
Ian Bicking wrote:
 Rick Warner wrote:
 Actually, PyPI is replicated.  See, for example,
 http://download.zope.org/simple/.

 It may be that some of the mirrors should be better advertised.

 A half-hearted effort, at best, after the problems last year.  When I
 configure a CPAN client (once per user) I create a list of replicas I
 want to search for any query from a list of hundreds of  replicas
 distributed around the world. 
 
 Can someone suggest the best way to search among repositories?  For
 instance, try to connect to one, then stop if it gives Connection
 Refused?  If it gives any unexpected error (5xx)?  Timing out is a
 common failure, and a pain in the butt, but I guess there's that too.
 What does the CPAN client do?
 
 
I don't know what CPAN does but Linux distributions have also solved
this problem.  We send out massive numbers of updates and new packages
to users every day so we need a mirror network that works well.

In Fedora we have a server that gives out a list of mirrors with GeoIP
data used to try and assemble a list of mirrors near you (country, then
continent (with special cases, for instance, for certain middle eastern
countries that connect better to Europe than to Asia) and then global).

This server gives the mirror list out (randomized among the close
mirrors) and the client goes through the list, trying to retrieve
package metadata.  If it times out or otherwise fails, then it goes on
to the next mirror until it gets data.  (Note, some alternate clients
are able to download from multiple servers at the same time if multiple
packages are needed.)
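
A sketch of that client-side fallback loop (this is not the actual yum
code; the metadata path and mirror URLs are illustrative)::

    import socket
    import urllib2

    def fetch_metadata(mirrors, path='repodata/repomd.xml', timeout=10):
        # walk the (already randomized) mirror list and return the
        # first copy of the metadata we can actually retrieve
        socket.setdefaulttimeout(timeout)
        for base in mirrors:
            try:
                return urllib2.urlopen(base.rstrip('/') + '/' + path).read()
            except (urllib2.URLError, socket.timeout, IOError):
                continue   # dead or unreachable mirror: try the next one
        raise IOError('no working mirror in the list')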

The mirrorlist server is a pretty neat application
(https://fedorahosted.org/mirrormanager).  It has a TurboGears front end
that allows people to add a new mirror
(https://admin.fedoraproject.org/mirrormanager) for public availability
or restricted to a subset of IPs.  It allows you to only mirror a subset
of the whole content.  And it has several methods of telling if the
mirror is in sync or outdated.  The latter is important to us for making
sure we're giving out users the latest updates that we've shipped and
ranges from a script that the mirror admin can run from their cron job
to check the data available and report back to a process run on our
servers to check that the mirrors have up to date content.  The
mirrorlist itself is cached and served from a mod_python script (soon to
be mod_wsgi) for speed.

You might also be interested in the way that we work with package
metadata.  In Fedora and many other rpm-based distributions (Some
Debian-based distros talked about this as well but I don't know if it
was ever implemented there) we create static xml files (and recently,
sqlite dbs as well) that live on the mirrors.  The client hits the
mirror and downloads at least two of these files.  The repomd.xml file
describes the other files with checksums and is used to verify that the
other metadata is up to date and whether anything has changed.  The
primary.xml file stores information that is generally what is needed for
doing depsolving on the packages.  Then we have several other xml files
that collectively contain the complete metadata for the packages but are
usually overkill... by separating this stuff out, we save clients from
having to download it in the common case.  This stuff could provide some
design ideas for constructing a pypi metadata repository and is
documented here:  http://createrepo.baseurl.org/
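
As one concrete example, checking a downloaded primary.xml against the
checksum recorded in repomd.xml comes down to a few lines.  A sketch (I'm
assuming the usual repomd XML namespace and a sha1 checksum here;
createrepo can be configured to use other hash types)::

    import hashlib
    import xml.etree.ElementTree as ET

    NS = '{http://linux.duke.edu/metadata/repo}'

    def primary_checksum(repomd_xml):
        # pull the checksum recorded for the primary metadata file
        root = ET.fromstring(repomd_xml)
        for data in root.findall(NS + 'data'):
            if data.get('type') == 'primary':
                return data.find(NS + 'checksum').text
        raise KeyError('no primary entry in repomd.xml')

    def verify_primary(primary_bytes, repomd_xml):
        return hashlib.sha1(primary_bytes).hexdigest() == \
               primary_checksum(repomd_xml)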

Note: the reason we went with static metadata rather than some sort of
cgi script is that static data can be mirrored without the mirror being
required to run anything beyond a simple rsync cron job.  This makes
finding mirrors much easier.

-Toshio





[Distutils] Patch for minor easy_install problem

2007-10-03 Thread Toshio Kuratomi

It looks like this fix to easy_install:
'''
0.6c5
* Fixed .dll files on Cygwin not having executable permissions 
when an egg is installed unzipped.

'''

introduced a minor bug.  From 0.6c5 on, installing an egg unzipped makes 
all the files executable.


Attaching a patch that only makes .dll's executable.

-Toshio
Index: setuptools-0.6c7/setuptools/command/easy_install.py
===
--- setuptools-0.6c7.orig/setuptools/command/easy_install.py
+++ setuptools-0.6c7/setuptools/command/easy_install.py
@@ -988,7 +988,9 @@ See the setuptools documentation for the
         def pf(src,dst):
             if dst.endswith('.py') and not src.startswith('EGG-INFO/'):
                 to_compile.append(dst)
-            self.unpack_progress(src,dst); to_chmod.append(dst)
+            if dst.endswith('.dll'):
+                to_chmod.append(dst)
+            self.unpack_progress(src,dst)
             return not self.dry_run and dst or None
 
         unpack_archive(egg_path, destination, pf)


Re: [Distutils] Why are egg-info and related pth files required for rpm packages?

2007-09-04 Thread Toshio Kuratomi

Phillip J. Eby wrote:
 At 12:39 PM 9/4/2007 -0400, Stanley A. Klein wrote:
 I recently installed Fedora 7 and looked at
 /usr/lib/python2.5/site-packages.  The directory has numerous Python
 packages installed but no egg-info and few .pth files.  Of the three .pth
 files in my installation, only one has a content different from the name
 of the package (pointing to a subdirectory of site-packages with a
 different name).

I have some egg-info files in my site-packages.  Those are from
setuptools-using packages.  I noticed the seemingly useless .pth files
the other day and am puzzled by their presence.  More info on those
later in this post.

[...]

 The .egg-info files or directories are required in order to contain 
 project-level metadata.
[...]
 So .egg-info is absolutely indispensable, regardless of installation method.
 
Absolutely agreed.

 As for .pth files, the only .pth files that should be generated by 
 setuptools are the ones needed to support namespace packages.  When 
 you have multiple projects spanning a namespace package, each of 
 those projects would contain somepackage/__init__.py in its naive 
 layout.  But this would cause conflicts between the RPMs, so 
 setuptools uses uniquely-named .pth files to work around the absence 
 of an __init__.py.  So, these Project-version-nspkg.pth files are 
 also indispensable, as the packages involved won't be importable without them.
 
 However, the .pth files you described don't sound like ones generated 
 by setuptools.

I looked into this briefly when attempting to get rid of .pth's and eggs
to diagnose the earlier bug (thanks again for the quick patch and
release!)  The packages I've looked at so far are all being generated by
distutils and have a C component.  I haven't had a chance to delve deeper.

 Note, by the way, that as of Python 2.5, *all* distutils-generated 
 packages include .egg-info; they just use a single file instead of a 
 directory.  This makes it easy to detect what Python packages are 
 installed on a system, as long as the platform maintainers don't 
 remove this file.

I'm sorry to say that this is not true on Fedora 7's python2.5.  There's
a patch that disables generating egg-info files for distutils.  I've
started talking with the python maintainer in Fedora to find out why the
patch exists and if it can be removed but he needs some time to find out
why the patch was added in the first place.

(A note in the spec files implies that the patch was added so as not to
generate egg-info for python core libraries and it might not have been
meant to affect distutils as a whole.  I have to figure out if even that
level of meddling is going to prove bothersome and make a
recommendation.  If you can think of some cases where that would be bad,
please reply so that I can include them in our discussion.)
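
For reference, the kind of detection that this metadata enables takes
only a few lines with pkg_resources; a minimal sketch::

    # enumerate the installed distributions that ship egg-info,
    # instead of scraping site-packages by hand
    import pkg_resources

    for dist in pkg_resources.working_set:
        print dist.project_name, dist.version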

-Toshio


Re: [Distutils] Disabling --single-version-externally-managed

2007-09-02 Thread Toshio Kuratomi

Phillip J. Eby wrote:
 At 07:45 PM 9/1/2007 -0700, Toshio Kuratomi wrote:
 pkg_resources.require() is documented on the web page, in
 pkg_resources.txt, and when you run python setup.py install
 --single-version-externally-managed.  __requires__ is documented...
 nowhere.  If you're willing to change documentation that says to use
 pkg_resources.require() to use __requires__ instead then this will work
 out perfectly.  If not, we'll need to find a way to make
 pkg_resources.require() work so that we aren't constantly explaining to
 people that it doesn't work because upstream tells us it doesn't work
 and we're sorry that all the official upstream documentation says
 otherwise.
 
 As I've explained repeatedly before, your choices are to either have a
 default version, or not to have one.  If you do not have one, then
 everything works as you wish, except for the fact that you must always
 explicitly require() something to use it (because there's no default).
 
 If do you have a default version, then the only way to get something
 *other* than the default version is to use __requires__ or a
 setuptools-generated script (which automatically includes __requires__).
 
 
Yes.  And I'm repeating that the problem is the documentation doesn't
match the behaviour.  If using __requires__ works, then the
documentation needs to mention it.  Preferably, it should substitute for
all the places where pkg_resources.require() is currently highlighted as
the right way to do things.  For instance, this output from easy_install::
'''
Because this distribution was installed --multi-version, before you can
import modules from this package in an application, you will need to
'import pkg_resources' and then use a 'require()' call similar to one of
these examples, in order to select the desired version:

pkg_resources.require("SQLAlchemy")  # latest installed version
pkg_resources.require("SQLAlchemy==0.3.10")  # this exact version
pkg_resources.require("SQLAlchemy>=0.3.10")  # this version or higher
'''

I realize that taken in the vacuum of that single easy_install run,
require() works.  But the instructions are neglecting to tell the user
that things are more complex than that.  That depending on how the other
versions of the module are installed, pkg_resources may not work at all.
Since __requires__ works for this instance and for a mixture of -m and
-s isn't it best to give users instructions that they can use everywhere?

-Toshio


Re: [Distutils] Disabling --single-version-externally-managed

2007-09-02 Thread Toshio Kuratomi

Phillip J. Eby wrote:
 At 11:37 PM 9/1/2007 -0700, Toshio Kuratomi wrote:
 I realize that taken in the vacuum of that single easy_install run,
 require() works.  But the instructions are neglecting to tell the user
 that things are more complex than that.  That depending on how the other
 versions of the module are installed, pkg_resources may not work at all.
 
 You're taking this out of context.  When you run easy_install -m, the
 normal result is that *there is no default package*.  So require() is
 both necessary and sufficient in that case.

I'm saying that this fits into a larger context.  If we go forward with
using setuptools and eggs to manage multiple versions instead of coming
up with an ad hoc solution that changes the path ourselves, then
require() is not sufficient.  So lacking documentation telling the end
user how to deal with this, we'll end up having to field more questions
about what to do in this situation.  This is not the situation we want
to be in.

 
  Since __require__ works for this instance and for a mixture of -m and
 - -s isn't it best to give users instructions that they can use
 everywhere?
 
 The problem is that you are adding a new situation to everywhere, that
 previously didn't exist -- mixing single-version and multi-version at
 the system packaging level.
 
Well, I'd argue that "everywhere" encompasses weird situations that you
and I haven't even thought of yet that exist only on the planet Vogon
:-).  I'm just making use of your tool in a situation that seems logical
but hasn't been widespread until now.  So you can make a choice and say
your situation is not the intended use of setuptools, please fork it or
come up with another way of achieving your ends or figure out what, if
anything, is missing and help setuptools adapt to the situation.

 Either have a default version and deal with conflicts, or no default
 version and be explicit.

I'm trying to deal with conflicts.  And my understanding was that you
preferred people use __requires__ to do that.  In fact, you seemed to
say that __requires__ was the only way for people not using setuptools
to do that.

  __requires__ is a workaround to avoid
 conflicts in specific cases.  It is intended only as a workaround, and
 really it's only for tools like easy_install and zc.buildout that
 generate scripts from a project description.
 

1) Why a workaround?
2) What are the specific cases that it's limited to?

 I do not intend to support it for any other usage.  If you, as a
 packager, wish to package scripts that use __requires__, I don't see a
 problem with that.  It is emphatically *not* intended for general use.
 
What is the real way to allow people to do quick and dirty scripting and
experimentation from the interpreter shell in this environment?

And once again, we are *not* creating and packaging scripts that use
__requires__.  We are packaging multiple versions of modules with a
default.  We need to have instructions for end-users who want to do
quick and dirty scripting or experiment in the interpreter shell how to
select the version that they wish.
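
Concretely, the instructions we can give those users today have to look
something like this sketch (the project name and versions are
illustrative)::

    # in a throwaway script, *before* pkg_resources is first imported:
    __requires__ = 'SQLAlchemy>=0.3,<0.4'
    import pkg_resources
    import sqlalchemy      # resolves to the 0.3 egg, not the 0.4 default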

 There are already tools in place to do what you want; you're just trying
 to get away with not using them.
 
No.  I'm trying to provide end users with an understanding of how to
work with the environment we are giving them.  We have no control over
what end users want to do so we have to give them enough information to
make an educated choice about how they can do the tasks they're used to.

 To put it another way, if you want to support multiple versions with a
 default, then you need to support the *end-user being able to choose*
 what version is the default.  

I'm trying to give end users a choice.  Giving them a choice of defaults
is outside the scope of what I'm trying to accomplish but I'll be happy
if it works.

 If the user does an easy_install of some
 specific version, then *that* version needs to become the default.

This is fine with me but is really an extra level on top of what we're
trying to do.  We're working on system packaging.  The system packages
have to work together to give the user the ability to import MODULE and
the ability to get more specific than that.  If an end user
easy_installs something on top of the system packages it should work
with the packages installed on the system.  It doesn't matter to me, as
a system packager whether the end user decides to make it the default or
decides to make it explicitly requestable only as long as they are still
able to use their own version of the package if they so choose.

  And
 the only way to support that, is for you to generate your scripts with
 easy_install or zc.buildout, or to make your own tool that generates
 scripts using __requires__.  If you don't, then your system-level Python
 scripts will be hosed if the user changes the default version of a
 package they use.
 
Once again, *we are not creating any system level python scripts that
make

Re: [Distutils] Disabling --single-version-externally-managed

2007-09-02 Thread Toshio Kuratomi

Phillip J. Eby wrote:
 At 12:49 AM 9/2/2007 -0700, Toshio Kuratomi wrote:

 Let me know what else you need.
 
 What are the current contents of easy-install.pth?
 
 
import sys; sys.__plen = len(sys.path)
import sys; new=sys.path[sys.__plen:]; del sys.path[sys.__plen:]; p=getattr(sys,'__egginsert',0); sys.path[p:p]=new; sys.__egginsert = p+len(new)

Do you want me to get rid of it and try again?

-Toshio


Re: [Distutils] Disabling --single-version-externally-managed

2007-09-01 Thread Toshio Kuratomi

Phillip J. Eby wrote:
 At 06:56 PM 8/31/2007 -0700, Toshio Kuratomi wrote:

 I tried manually creating a .pth file that lists one or the other of the
 eggs.
 
 That won't work to make the egg the default version.  You have to have
 easy_install generate the .pth file, so that the necessary magic is
 included.
 
 Normally, paths in .pth files are added to the *end* of sys.path, which
 means the single-version egg will take precedence.  easy_install adds
 special incantations to its .pth files so that the eggs it installs have
 higher precedence than everything else.

Good to know.  It doesn't affect what I tested as I used a manually
created .pth file *instead* of using single-version-externally-managed.
 So the eggs were arranged as:
  SQLAlchemy-0.4[...].egg/sqlalchemy
  SQLAlchemy-0.3[...].egg/sqlalchemy
  sqlalchemy.pth (containing the path into one of the eggs)

I'd rather do without .pth's in the Guidelines though, as they seem to
duplicate what can already be achieved by installing one egg as
single-version.

[...]
The rest of this is miscommunication based on my using terms
incorrectly.  I'll reply to the other message with something meaningful
now that I understand:

active version -- egg on sys.path

inactive version -- egg cannot be found by python as it is not on sys.path

default version -- version of a module that comes first on sys.path and
therefore will be selected from a bare import

project -- setuptools managed project that uses requires.txt to manage
conflicting versions.

If I've still got those wrong, let me know :-)

-Toshio


Re: [Distutils] Disabling --single-version-externally-managed

2007-09-01 Thread Toshio Kuratomi

Well, I found a bug.  I haven't coded a solution yet so I don't know
precisely what fixing it will do for the rest of our conversation::

    def insert_on(self, path, loc = None):
        """Insert self.location in path before its nearest parent directory"""
        [...Place loc in path...]
        # p is the spot where we found or inserted loc; now remove duplicates
        while 1:
            try:
                np = npath.index(nloc, p+1)
            except ValueError:
                break
            else:
                del npath[np], path[np]
                p = np  # ha!

This code creates an unordered list because you might have already had
to place something after one of the duplicate locations that this code
is removing.  We need to do a sort of the entire list after each add +
duplicate removal run.

I'm seeing an actual problem with this when starting with the following
sys.path:
 ['/usr/bin', '/usr/lib/python25.zip', '/usr/lib/python2.5',
'/usr/lib/python2.5/plat-linux2', '/usr/lib/python2.5/lib-tk',
'/usr/lib/python2.5/lib-dynload', '/usr/lib/python2.5/site-packages',
'/usr/lib/python2.5/site-packages/Numeric',
'/usr/lib/python2.5/site-packages/PIL',
'/usr/lib/python2.5/site-packages/TestGears-0.2-py2.5.egg-info',
'/usr/lib/python2.5/site-packages/gst-0.10',
'/usr/lib/python2.5/site-packages/gtk-2.0',
'/usr/lib/python2.5/site-packages/pyinotify',
'/usr/lib/python2.5/site-packages/wx-2.8-gtk2-unicode']

/usr/lib/python2.5/site-packages/CherryPy-2.2.1-py2.5.egg is being
inserted before '/usr/lib/python2.5/site-packages'.  Then another
/usr/lib/python2.5/site-packages is entering the method and being placed
after /usr/lib/python2.5... which places it before the CherryPy egg.
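
The invariant being violated is easy to state in code; a sketch of the
check (the egg path is the one from above)::

    import os

    def precedes_parent(path_list, entry):
        # an egg directory must stay ahead of its parent directory,
        # otherwise the parent (site-packages) shadows the egg
        parent = os.path.dirname(entry.rstrip('/'))
        return path_list.index(entry) < path_list.index(parent)

    # with the duplicate site-packages entry described above, this
    # check ends up False for the CherryPy egg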

-Toshio


Re: [Distutils] Disabling --single-version-externally-managed

2007-09-01 Thread Toshio Kuratomi

Phillip J. Eby wrote:
 At 09:20 PM 8/31/2007 -0700, Toshio Kuratomi wrote:
 Just to illustrate what I'm trying to achieve.  I've updated the Fedora
 Packaging Guidelines[1]_ to allow two versions of a package to coexist.
  I'll list here the sqlalchemy-0.4 and -0.3 build steps, filelists, and
 the output of the test-sql.sh script using this procedure.  The end
 result is what we want but the build step to get there seem a tad
 fragile and kludgey.  Since these are going to become guidelines for all
 of our python packages, I'd like to know if either: 1) there's a better
 way to do this or 2) the results I'm achieving are not expected and
 could disappear with a random upgrade of setuptools.

 .. _[1]: http://fedoraproject.org/wiki/PackagingDrafts/PythonEggs
 
 Here's the thing: if you want multiple installed versions of a package,
 *and* you want there to be a default version, then you *have* to have
 easy_install (or zc.buildout, or some similar tool) generate the startup
 scripts for anything that wants to use a non-default version.
 
You'll have to explain why that is because all my experiments show that
to be false.  Installing one package via easy_install -m and another via
--single-version-externally-managed gets us 75% of the way to this
working.  Using easy_install -m for both and then copying/symlinking
gets us 100% of the way.

 This is true irrespective of the formats of the versions involved,
 whether they're zipfiles, directories, single-version, or whatever. 
 It's just the nature of the beast, due to the fact that there is a
 global 'working_set' that lists the projects that are currently on
 sys.path, and it is initialized when pkg_resources is imported.  (And
 currently, there is no way to *remove* a distribution from the working
 set.)
 
Since the working_set doesn't explicitly list eggs unless they are
specified in a project's requires.txt it seems like
pkg_resources.require() can override a module which is the default
because it is installed in site-packages.  In fact, I understood this to
be a feature of setuptools: allowing someone to override the vendor
installed packages in site-packages with your own eggs.

 Thus, if you want multiple versions *and* want to be able to select a
 version after pkg_resources has been imported, you *cannot* have a
 default version.  In other words, the egg must not be on sys.path when
 pkg_resources is imported.  Then, pkg_resources can locate the desired
 version and add it.

Not quite.  In the single-version case, the egg is on sys.path because
the module directory is in site-packages.  Therefore pkg_resources makes
a different version importable by placing a different egg's path before
site-packages in sys.path.

The goal is to have all of these things work:
1) import MODULE should import whatever the vendor decided should be the
default.
2) requires.txt in a project's egginfo should work to select a specific
version from the installed eggs.
3) In a simple script (without requires.txt), pkg_resources.require()
should work to select a specific version from the installed eggs.

I think the basic way to enable this is:
For 1) Either install a module that is not an egg into site-packages or
install an egg as single-version-externally-managed into site-packages.
 When the user does import MODULE, python looks in site-packages, finds
the MODULE, and imports it without touching any setuptools code.

For 2) All installed modules that we want to be selectable must be eggs.
 This needs to work with both single-version and multiple-version eggs
when determining the best version match and then, if necessary, modify
sys.path to place the best match in front of the other eggs.

For 3) No eggs for this module can be on sys.path before the
pkg_resources.require() call.  This does not count modules brought in
via site-packages as those are going to be overridden when we place egg
paths before site-packages.

This seems to mostly work right now with the procedure I outlined in my
previous message.  I have to fix the bug found in my other message to
see if I can get it to work all the time (and perhaps eliminate the
kludginess in my original procedures.)
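
For goal 3, the end result should behave like this sketch (versions are
illustrative; run it in a fresh interpreter)::

    # the 0.4 egg is the site-packages default; an explicit require()
    # puts the 0.3 egg ahead of site-packages before the import
    import pkg_resources
    pkg_resources.require('SQLAlchemy>=0.3,<0.4')
    import sqlalchemy      # now resolves to a 0.3.x release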

You seem to think there's something wrong with this so there's obviously
something you're seeing that I don't.  Can you give an example of where
this will fail?

-Toshio


Re: [Distutils] Disabling --single-version-externally-managed

2007-09-01 Thread Toshio Kuratomi
Toshio Kuratomi wrote:
 
 This code creates an unordered list because you might have already had
 to place something after one of the duplicate locations that this code
 is removing.  We need to do a sort of the entire list after each add +
 duplicate removal run.

Here's a quick and dirty patch.  It's correct but it's inefficient.  A
better sort algorithm would help some.  Refactoring code so we call a
sort method after we finish adding entries to the path would be even better.

I'll test how this affects packaging tonight.

-Toshio
diff -up setuptools-0.6c6/pkg_resources.py.bak setuptools-0.6c6/pkg_resources.py
--- setuptools-0.6c6/pkg_resources.py.bak	2007-09-01 14:48:48.0 -0700
+++ setuptools-0.6c6/pkg_resources.py	2007-09-01 14:44:53.0 -0700
@@ -2214,31 +2214,20 @@ class Distribution(object):
 
     def insert_on(self, path, loc = None):
         """Insert self.location in path before its nearest parent directory"""
-
         loc = loc or self.location
         if not loc:
             return
 
+        #print 'DEBUG:',loc
         if path is sys.path:
             self.check_version_conflict()
 
+        path.insert(0, loc)
         nloc = _normalize_cached(loc)
-        bdir = os.path.dirname(nloc)
         npath= map(_normalize_cached, path)
 
-        bp = None
-        for p, item in enumerate(npath):
-            if item==nloc:
-                break
-            elif item==bdir:
-                path.insert(p, loc)
-                npath.insert(p, nloc)
-                break
-        else:
-            path.append(loc)
-            return
-
         # p is the spot where we found or inserted loc; now remove duplicates
+        p = 0
         while 1:
             try:
                 np = npath.index(nloc, p+1)
@@ -2247,7 +2236,21 @@ class Distribution(object):
             else:
                 del npath[np], path[np]
                 p = np  # ha!
 
+        # Sort the paths: keep every entry ahead of its parent directory
+        newPath = []
+        newNPath = []
+        for basePathNum, basePath in enumerate(npath):
+            bdir = os.path.dirname(basePath)
+            for p, item in enumerate(newNPath):
+                if item==bdir:
+                    newPath.insert(p, path[basePathNum])
+                    newNPath.insert(p, basePath)
+                    break
+            else:
+                newNPath.append(basePath)
+                newPath.append(path[basePathNum])
+        path[:] = newPath  # assign in place so the caller's list is updated
         return






Re: [Distutils] Disabling --single-version-externally-managed

2007-09-01 Thread Toshio Kuratomi

Phillip J. Eby wrote:
 At 01:38 PM 9/1/2007 -0700, Toshio Kuratomi wrote:
 I'm seeing an actual problem with this when starting with the following
 sys.path:
  ['/usr/bin', '/usr/lib/python25.zip', '/usr/lib/python2.5',
 '/usr/lib/python2.5/plat-linux2', '/usr/lib/python2.5/lib-tk',
 '/usr/lib/python2.5/lib-dynload', '/usr/lib/python2.5/site-packages',
 '/usr/lib/python2.5/site-packages/Numeric',
 '/usr/lib/python2.5/site-packages/PIL',
 '/usr/lib/python2.5/site-packages/TestGears-0.2-py2.5.egg-info',
 
 Why do you have this on sys.path?  .egg-info files or directories should
 never appear on sys.path.
 
Beats me.

/me looks at the cvs log for the spec file that generated TestGears

Looks like the person that built the file initially in 2005 created a
zipped-egg and hand created a .pth to go with it.  A new maintainer took
over and when they rebuilt the package for the setuptools that changed
--root to include --single-version-externally-managed they were confused
and changed the .pth to include the egg-info directory.

/me makes a change in cvs and removes the package from his system for now.

 
 '/usr/lib/python2.5/site-packages/gst-0.10',
 '/usr/lib/python2.5/site-packages/gtk-2.0',
 '/usr/lib/python2.5/site-packages/pyinotify',
 '/usr/lib/python2.5/site-packages/wx-2.8-gtk2-unicode']

 /usr/lib/python2.5/site-packages/CherryPy-2.2.1-py2.5.egg is being
 inserted before '/usr/lib/python2.5/site-packages'.  Then another
 /usr/lib/python2.5/site-packages is entering the method and being placed
 after /usr/lib/python2.5... which places it before the CherryPy egg.
 
 That sounds odd, since there should not be a need to add site-packages
 more than once.  In fact, that's what sounds like the actual bug here,
 since IIRC .insert_on() should never be called on a distribution whose
 .location is already on sys.path.


Yeah, I thought it was odd too :-).  But I was only instrumenting the
code to figure out what was going on so I took it at face value.

-Toshio


Re: [Distutils] Disabling --single-version-externally-managed

2007-09-01 Thread Toshio Kuratomi

Phillip J. Eby wrote:
 At 02:54 PM 9/1/2007 -0700, Toshio Kuratomi wrote:
 Toshio Kuratomi wrote:
 
  This code creates an unordered list because you might have already had
  to place something after one of the duplicate locations that this code
  is removing.  We need to do a sort of the entire list after each add +
  duplicate removal run.

 Here's a quick and dirty patch.  It's correct but it's inefficient.  A
 better sort algorithm would help some.  Refactoring code so we call a
 sort method after we finish adding entries to the path would be even
 better.

 I'll test how this affects packaging tonight.
 
 Please, please stop and back up a minute.  I appreciate your eagerness to
 help, really, but right now I can't trust the correctness of the initial
 system against which you are performing your tests.  You've got crazy
 stuff on sys.path, you appear to be calling insert_on() (which isn't a
 documented API), and you're manually creating .pth files, among other
 things.
 
Well, *I'm* not calling insert_on().  setuptools is calling insert_on()
when it initializes via '__requires__ = "TurboGears"; import
pkg_resources' (I did not write that script, it's tg-admin from TurboGears.)

I just instrumented the pkg_resources() code to figure out what's going
on and found sys.path being updated incorrectly.

As for manually created .pth files, all the ones I created were removed
right after I tested that they didn't work.  None of the results I've
posted involved them.  However, there are .pth files that were generated
by other people's packaging.  I'll go through and remove or fix all of
those packages on my system.

 So before you do anything else, please restore your system to something
 which consists *only* of files generated by setuptools or easy_install
 without *any* added hacks, workarounds, or manually edited files.  That
 also means NO scripts that were not generated by easy_install.

Uhm... I can get rid of the two packages that I changed and take care of
the .pth files but really unless I remove everything but python and
setuptools from this system, there's going to be packages that were
built with distutils, setuptools, configure scripts, and etc here.
Everything is managed via rpm so I could do that but I'd rather know
what in particular I need to get rid of.  (Like: Remove anything that
installs a .pth file.)  If you say remove everything that installs an
egg that I haven't audited how it builds I can do that pretty easily.

 If you *must* have other scripts, there is an undocumented internal
 feature that you can use to specify a script's requirements such that
 they override the default package versions.  What you have to do is add
 a __requires__ definition to the script, e.g.:
 
__requires__ = 'TurboGears>=1.0', 'FibbledyDee>=27.2'
 
 This definition must be in the actual script run by Python.  When
 pkg_resources is initially imported, any __requires__ requirements are
 given higher precedence than the default versions.  You must, however,
 still import pkg_resources, and it must be imported *after* setting
 __requires__, not before.
 
 Assuming I understand your requirements, you should be able to
 accomplish everything you want using only this one feature, plus
 single-version eggs for system default packages, and multi-version eggs
 (i.e. *no* .pth files) for everything else.


Does this work from the interpreter shell?
>>> __requires__ = 'SQLAlchemy>=0.3,<0.4beta1'
>>> import pkg_resources
>>> import sqlalchemy

That might do the trick but it's not ideal for us to drop support for a
documented interface in favor of an undocumented one.  I'm writing
guidelines for packagers, not programmers.  By and large, we're not
writing scripts, we're packaging python modules so that people who are
writing scripts can get their work done.  It's important that the
documented way of doing things works.

pkg_resources.require() is documented on the web page, in
pkg_resources.txt, and when you run python setup.py install
--single-version-externally-managed.  __requires__ is documented...
nowhere.  If you're willing to change documentation that says to use
pkg_resources.require() to use __requires__ instead then this will work
out perfectly.  If not, we'll need to find a way to make
pkg_resources.require() work so that we aren't constantly explaining to
people that it doesn't work because upstream tells us it doesn't work
and we're sorry that all the official upstream documentation says otherwise.

 It should not be necessary for you to generate or edit any files, use
 other undocumented APIs, or anything else.
 
That's the unstated goal that goes along with the other three :-)

Do you IRC?  I'll fix and remove packages and then tell you what still
works and doesn't work.

-Toshio

Re: [Distutils] Disabling --single-version-externally-managed

2007-08-31 Thread Toshio Kuratomi
[Resending with gzipped complete filelist as the previous try was too
large to send to the list]

Just to illustrate what I'm trying to achieve.  I've updated the Fedora
Packaging Guidelines[1]_ to allow two versions of a package to coexist.
 I'll list here the sqlalchemy-0.4 and -0.3 build steps, filelists, and
the output of the test-sql.sh script using this procedure.  The end
result is what we want but the build step to get there seem a tad
fragile and kludgey.  Since these are going to become guidelines for all
of our python packages, I'd like to know if either: 1) there's a better
way to do this or 2) the results I'm achieving are not expected and
could disappear with a random upgrade of setuptools.

.. _[1]: http://fedoraproject.org/wiki/PackagingDrafts/PythonEggs

Build
-

sqlalchemy-0.3 compat package::
  CFLAGS=$RPM_OPT_FLAGS %{__python} setup.py bdist_egg
  mkdir -p %{python_sitelib}
  easy_install -m --prefix $RPM_BUILD_ROOT%{_usr} dist/*.egg


sqlalchemy-0.4 default package::
  CFLAGS=$RPM_OPT_FLAGS %{__python} setup.py bdist_egg
  mkdir -p %{python_sitelib}
  easy_install -m --prefix %{_usr} --always-unzip dist/*.egg
  cd %{python_sitelib}/%{srcname}-%{version}%{betaver}-py%{pyver}.egg
  mv sqlalchemy ..
  ln -s ../sqlalchemy .

The compat package is pretty straightforward.  However, building the
default package seems overly complex.  It seems like we should be able
to do this::
  CFLAGS=$RPM_OPT_FLAGS %{__python} setup.py build
  %{__python} setup.py install --skip-build --root $RPM_BUILD_ROOT

But that yields tracebacks when using pkg_resources.require() to try
to run 0.3.

truncated filelist
--
Full filelist attached.  These are the main toplevel directories to show
where the important pieces are.  The sqlalchemy directories all contain
a version of the python module.  (SQLAlchemy-0.4.egg/sqlalchemy is
actually a symlink to site-packages/sqlalchemy but that doesn't matter.
   Those can be reversed or they can be copies with the same results).

  site-packages/SQLAlchemy-0.3.10-py2.5.egg
  site-packages/SQLAlchemy-0.3.10-py2.5.egg/EGG-INFO
  site-packages/SQLAlchemy-0.3.10-py2.5.egg/sqlalchemy
  site-packages/SQLAlchemy-0.4.0beta4-py2.5.egg
  site-packages/SQLAlchemy-0.4.0beta4-py2.5.egg/EGG-INFO
  site-packages/SQLAlchemy-0.4.0beta4-py2.5.egg/sqlalchemy
  site-packages/sqlalchemy


test-sql.sh output
--
import sqlalchemy...  0.4.0beta4
pkg_require >=0.3,<0.4.0beta1...  0.3.10
pkg_require...  0.4.0beta4
pkg_require >=0.3...  0.4.0beta4
pkg_require <= 0.4.10...  0.4.0beta4
pkg_require <=0.3.12...  0.3.10
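
(test-sql.sh itself isn't included here; each probe it runs amounts to
something like this sketch, using a fresh interpreter per check since a
version choice sticks for the life of a Python process)::

    import subprocess, sys

    def probe(spec):
        # build a one-liner that picks a version and reports what it got
        code = ('import pkg_resources; pkg_resources.require(%r); '
                'import sqlalchemy; print sqlalchemy.__version__') % spec
        proc = subprocess.Popen([sys.executable, '-c', code],
                                stdout=subprocess.PIPE)
        return proc.communicate()[0].strip()

    print probe('SQLAlchemy>=0.3,<0.4.0beta1')   # expect 0.3.10 here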

-Toshio



sqlalchemy.lst.gz
Description: GNU Zip compressed data

