[Python-Dev] Left the GSoC-mentors list

2009-04-02 Thread Daniel (ajax) Diniz
Hi,
I've just left the soc2009-mentors list on request, as I'm not a
mentor. So if you need my input on the mentor side regarding ideas
I've contributed to [1] (struct, socket, core helper tools or
Roundup), please CC me.

Best regards,
Daniel

[1] http://wiki.python.org/moin/SummerOfCode/2009/Incoming


Re: [Python-Dev] Let's update CObject API so it is safe and regular!

2009-04-02 Thread Kristján Valur Jónsson
Thanks Larry.
I didn't notice the patch, or indeed the defect, hence my question.
A clarification in the documentation that a string comparison is indeed used 
might be useful.
As a user of CObject I appreciate this effort.
K

-Original Message-
From: Larry Hastings [mailto:la...@hastings.org] 

A method for answering further such questions suggests itself,




Re: [Python-Dev] Let's update CObject API so it is safe and regular!

2009-04-02 Thread Greg Ewing

Jim Fulton wrote:

The only type-safety mechanism for a CObject is its identity.  If you
want to make sure you're using the foomodule api, make sure the address
of the CObject is the same as the address of the api object exported by
the module.


I don't follow that. If you already have the address of the
thing you want to use, you don't need a CObject.

2. Only code provided by the module provider should be accessing the  
CObject exported by the module.


Not following that either. Without attaching some kind of
metadata to a CObject, I don't see how you can know whether
a CObject passed to you from Python code is one that you
created yourself, or by some other unrelated piece of
code.

Attaching some kind of type info to a CObject and having
an easy way of checking it makes sense to me. If the
existing CObject API can't be changed, maybe a new
enhanced one could be added.

--
Greg


[Python-Dev] OSError.errno => exception hierarchy?

2009-04-02 Thread Gustavo Carneiro
Apologies if this has already been discussed.

I was expecting that by now, python 3.0, the following code:

# clean the target dir
import errno
try:
    shutil.rmtree(trace_output_path)
except OSError, ex:
    if ex.errno not in [errno.ENOENT]:
        raise

Would have become something simpler, like this:

# clean the target dir
try:
    shutil.rmtree(trace_output_path)
except OSErrorNoEntry:   # or maybe os.ErrorNoEntry
    pass

Apparently no one has bothered yet to turn OSError + errno into a hierarchy
of OSError subclasses, as it should be.  What's the problem, no will to do it,
or no manpower?
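
For illustration only, here is a rough pure-Python sketch of the kind of
mapping I have in mind (the subclass names and the wrap_oserror helper
are invented, not an existing API):

import errno

class OSErrorNoEntry(OSError):        # errno.ENOENT
    pass

class OSErrorPermission(OSError):     # errno.EACCES
    pass

_errno_to_class = {errno.ENOENT: OSErrorNoEntry,
                   errno.EACCES: OSErrorPermission}

def wrap_oserror(ex):
    # Re-raise an OSError as its errno-specific subclass, if one is known.
    raise _errno_to_class.get(ex.errno, OSError)(ex.errno, ex.strerror)

If something like this were built into the interpreter, the try/except
above could catch the subclass directly instead of re-checking ex.errno
by hand.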

Regards,

-- 
Gustavo J. A. M. Carneiro
INESC Porto, Telecommunications and Multimedia Unit
"The universe is always one step beyond logic." -- Frank Herbert


Re: [Python-Dev] Let's update CObject API so it is safe and regular!

2009-04-02 Thread Hrvoje Niksic

Greg Ewing wrote:

Attaching some kind of type info to a CObject and having
an easy way of checking it makes sense to me. If the
existing CObject API can't be changed, maybe a new
enhanced one could be added.


I thought the entire *point* of CObject was that it's an opaque box 
without any info whatsoever, except that which is known and shared by 
its creator and its consumer.


If we're adding type information, then please make it a Python object 
rather than a C string.  That way the creator and the consumer can use a 
richer API to query the "type", such as by calling its methods or by 
inspecting it in some other way.  Instead of comparing strings with 
strcmp, it could use PyObject_RichCompareBool, which would allow a much 
more flexible way to define "types".  Using a PyObject also ensures that 
the lifecycle of the attached "type" is managed by the well-understood 
reference-counting mechanism.



Re: [Python-Dev] Let's update CObject API so it is safe and regular!

2009-04-02 Thread Jim Fulton


On Apr 1, 2009, at 11:51 PM, Guido van Rossum wrote:
...

Note also this cheap exported-vtable hack isn't the
only use of CObjects; for example _ctypes uses them to wrap plenty of
one-off objects which are never set as attributes of the _ctypes
module. We'd like a solution that enforces some safety for those too,
without creating spurious module attributes.


Why would you care about safety for ctypes? It's about as unsafe as it
gets anyway. Coredump emptor I say.



At which point, I wonder why we worry so much about someone  
intentionally breaking a CObject as in Larry's example.


Jim

--
Jim Fulton
Zope Corporation




Re: [Python-Dev] Let's update CObject API so it is safe and regular!

2009-04-02 Thread Jim Fulton


On Apr 2, 2009, at 7:28 AM, Greg Ewing wrote:


Jim Fulton wrote:

The only type-safety mechanism for a CObject is its identity.  If you
want to make sure you're using the foomodule api, make sure the address
of the CObject is the same as the address of the api object exported by
the module.


I don't follow that. If you already have the address of the
thing you want to use, you don't need a CObject.


I was referring to the identity of the CObject itself.

2. Only code provided by the module provider should be accessing  
the  CObject exported by the module.


Not following that either. Without attaching some kind of
metadata to a CObject, I don't see how you can know whether
a CObject passed to you from Python code is one that you
created yourself, or by some other unrelated piece of
code.


The original use case for CObjects was to export an API from a module,  
in which case, you'd be importing the API from the module.  The  
presence in the module indicates the type. Of course, this doesn't  
account for someone intentionally replacing the module's CObject with  
a fake.



Attaching some kind of type info to a CObject and having
an easy way of checking it makes sense to me. If the
existing CObject API can't be changed, maybe a new
enhanced one could be added.


I don't think backward compatibility needs to be a consideration for  
Python 3 at this point.  I don't see much advantage in the proposal,  
but I can live with it for Python 3.


Jim

--
Jim Fulton
Zope Corporation




[Python-Dev] py3k regression tests on Windows

2009-04-02 Thread Kristján Valur Jónsson
Hello there.
Yesterday I created a number of defects for regression test failures on Windows:
http://bugs.python.org/issue5646 : test_importlib fails for py3k on Windows
http://bugs.python.org/issue5645 : test_memoryio fails for py3k on windows
http://bugs.python.org/issue5643 : test__locale fails with RADIXCHAR on Windows

Does anyone feel like taking a look?

K


[Python-Dev] PEP 382: Namespace Packages

2009-04-02 Thread Martin v. Löwis
I propose the following PEP for inclusion to Python 3.1.
Please comment.

Regards,
Martin

Abstract
========

Namespace packages are a mechanism for splitting a single Python
package across multiple directories on disk. In current Python
versions, an algorithm to compute the package's __path__ must be
formulated. With the enhancement proposed here, the import machinery
itself will construct the list of directories that make up the
package.

Terminology
===========

Within this PEP, the term package refers to Python packages as defined
by Python's import statement. The term distribution refers to
separately installable sets of Python modules as stored in the Python
package index, and installed by distutils or setuptools. The term
vendor package refers to groups of files installed by an operating
system's packaging mechanism (e.g. Debian or Red Hat packages installed
on Linux systems).

The term portion refers to a set of files in a single directory (possibly
stored in a zip file) that contribute to a namespace package.

Namespace packages today
========================

Python currently provides pkgutil.extend_path to denote a package as
a namespace package. The recommended way of using it is to put::

    from pkgutil import extend_path
    __path__ = extend_path(__path__, __name__)

in the package's ``__init__.py``. Every distribution needs to provide
the same contents in its ``__init__.py``, so that extend_path is
invoked independently of which portion of the package gets imported
first. As a consequence, the package's ``__init__.py`` cannot
practically define any names, since which portion is imported first
depends on the order of the package fragments on sys.path. As a
special feature, extend_path reads files named ``*.pkg``, which allow
additional portions to be declared.

setuptools provides a similar function pkg_resources.declare_namespace
that is used in the form::

    import pkg_resources
    pkg_resources.declare_namespace(__name__)

In the portion's __init__.py, no assignment to __path__ is necessary,
as declare_namespace modifies the package's __path__ through sys.modules.
As a special feature, declare_namespace also supports zip files, and
registers the package name internally so that future additions to sys.path
by setuptools can properly add additional portions to each package.

setuptools allows declaring namespace packages in a distribution's
setup.py, so that distribution developers don't need to put the
magic __path__ modification into __init__.py themselves.

Rationale
=========

The current imperative approach to namespace packages has led to
multiple slightly-incompatible mechanisms for providing namespace
packages. For example, pkgutil supports ``*.pkg`` files; setuptools
doesn't. Likewise, setuptools supports inspecting zip files, and
supports adding portions to its _namespace_packages variable, whereas
pkgutil doesn't.

In addition, the current approach causes problems for system vendors.
Vendor packages typically must not provide overlapping files, and an
attempt to install a vendor package that has a file already on disk
will fail or cause unpredictable behavior. As vendors might choose to
package distributions such that they all end up in a single directory
for the namespace package, all portions would contribute conflicting
__init__.py files.

Specification
=============

Rather than using an imperative mechanism for importing packages, a
declarative approach is proposed here, as an extension to the existing
``*.pkg`` mechanism.

The import statement is extended so that it directly considers ``*.pkg``
files during import; a directory is considered a package if it either
contains a file named __init__.py, or a file whose name ends with
".pkg".

In addition, the format of the ``*.pkg`` file is extended: a line with
the single character ``*`` indicates that the entire sys.path will
be searched for portions of the namespace package at the time the
namespace package is imported.

Importing a package will immediately compute the package's __path__;
the ``*.pkg`` files are not considered again after the initial import.
If a ``*.pkg`` file contains an asterisk, this asterisk is prepended
to the package's __path__ to indicate that the package is a namespace
package (and that thus further extensions to sys.path might also
want to extend __path__). At most one such asterisk gets prepended
to the path.

extend_path will be extended to recognize namespace packages according
to this PEP, and avoid adding directories twice to __path__.

No other change to the importing mechanism is made; searching for
modules (including __init__.py) will continue to stop at the first
module encountered.
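
As an illustration (the package and file names here are hypothetical),
a namespace package ``zope`` split across two sys.path entries could be
laid out as follows, each portion carrying a ``*.pkg`` file whose only
content is the single line ``*``::

    path1/zope/zope.interface.pkg
    path1/zope/interface/__init__.py
    path2/zope/zope.app.pkg
    path2/zope/app/__init__.py

After ``import zope``, zope.__path__ would then be roughly
``['*', 'path1/zope', 'path2/zope']``, and both zope.interface and
zope.app become importable.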

Discussion
==========

With the addition of ``*.pkg`` files to the import mechanism,
distributions can stop filling out the namespace package's __init__.py.
As a consequence, extend_path and declare_namespace become obsolete.

It is recommended that distributions put a file .pkg
into their namesp

Re: [Python-Dev] PEP 382: Namespace Packages

2009-04-02 Thread P.J. Eby

At 10:32 AM 4/2/2009 -0500, Martin v. Löwis wrote:

I propose the following PEP for inclusion to Python 3.1.
Please comment.


An excellent idea.  One thing I am not 100% clear on is how to get 
additions to sys.path to work correctly with this.  Currently, when 
pkg_resources adds a new egg to sys.path, it uses its existing 
registry of namespace packages in order to locate which packages need 
__path__ fixups.  It seems under this proposal that it would have to 
scan sys.modules for objects with __path__ attributes that are lists 
that begin with a '*', instead...  which is a bit troubling because 
sys.modules doesn't always only contain module objects.  Many major 
frameworks place lazy module objects, and module proxies or wrappers 
of various sorts in there, so scanning through it arbitrarily is not 
really a good idea.


Perhaps we could add something like a sys.namespace_packages that 
would be updated by this mechanism?  Then, pkg_resources could check 
both that and its internal registry to be both backward and forward compatible.
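
To make that concrete: assuming a hypothetical sys.namespace_packages
registry maintained by the import machinery, the fixup on a sys.path
change could be reduced to a sketch like this, rather than guessing from
__path__ contents:

import sys, pkgutil

for name in getattr(sys, 'namespace_packages', ()):
    pkg = sys.modules.get(name)
    if pkg is not None:
        # Re-scan sys.path for additional portions of this package.
        pkg.__path__ = pkgutil.extend_path(list(pkg.__path__), name)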


Apart from that, this mechanism sounds great!  I only wish there was 
a way to backport it all the way to 2.3 so I could drop the messy 
bits from setuptools.




Re: [Python-Dev] Let's update CObject API so it is safe and regular!

2009-04-02 Thread Guido van Rossum
On Thu, Apr 2, 2009 at 6:22 AM, Jim Fulton  wrote:
> The original use case for CObjects was to export an API from a module, in
> which case, you'd be importing the API from the module.

I consider this the *only* use case. What other use cases are there?

> The presence in the
> module indicates the type. Of course, this doesn't account for someone
> intentionally replacing the module's CObject with a fake.

And that's the problem. I would like the following to hold: given a
finite number of extension modules that I trust to be safe (i.e.
excluding ctypes!), pure Python code should not be able to cause any
of their CObjects to be passed off for another.

Putting an identity string in the CObject and checking that string in
PyCObject_Import() solves this.

Adding actual information about what the CObject *means* is
emphatically out of scope. Once a CObject is identified as having the
correct module and name, I am okay with trusting it, because Python
code has no way to create CObjects. I have to trust the extension that
exports the CObject anyway, since after all it is C code that could do
anything at all. But I need to be able to trust that the app cannot
swap CObjects.

>> Attaching some kind of type info to a CObject and having
>> an easy way of checking it makes sense to me. If the
>> existing CObject API can't be changed, maybe a new
>> enhanced one could be added.
>
> I don't think backward compatibility needs to be a consideration for Python
> 3 at this point.  I don't see much advantage in the proposal, but I can live
> with it for Python 3.

Good. Let's solve this for 3.1, and figure out whether or how to
backport later, since for 2.6 (and probably 2.7) binary backwards
compatibility is most important.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Let's update CObject API so it is safe and regular!

2009-04-02 Thread Antoine Pitrou
Guido van Rossum writes:
> 
> On Thu, Apr 2, 2009 at 6:22 AM, Jim Fulton wrote:
> > The original use case for CObjects was to export an API from a module, in
> > which case, you'd be importing the API from the module.
> 
> I consider this the *only* use case. What other use cases are there?

I don't know if it is good style, but I could imagine it being used to
accumulate non-PyObject data in a Python container (e.g. a list), without too
much overhead.

It is used in getargs.c to manage a list of "destructors" of temporarily created
data for when a call to PyArg_Parse* fails.




Re: [Python-Dev] Let's update CObject API so it is safe and regular!

2009-04-02 Thread Guido van Rossum
On Thu, Apr 2, 2009 at 10:24 AM, Antoine Pitrou  wrote:
> Guido van Rossum writes:
>>
>> On Thu, Apr 2, 2009 at 6:22 AM, Jim Fulton wrote:
>> > The original use case for CObjects was to export an API from a module, in
>> > which case, you'd be importing the API from the module.
>>
>> I consider this the *only* use case. What other use cases are there?
>
> I don't know if it is good style, but I could imagine it being used to
> accumulate non-PyObject data in a Python container (e.g. a list), without too
> much overhead.
>
> It is used in getargs.c to manage a list of "destructors" of temporarily 
> created
> data for when a call to PyArg_Parse* fails.

Well, that sounds like it really just needs to manage a
variable-length array of void pointers, and using PyList and PyCObject
is just laziness (and perhaps the wrong kind -- I imagine I could
write the same code without using Python objects and it would be
cleaner *and* faster).

So no, I don't consider that a valid use case, or at least not one we
need to consider for backwards compatibility of the PyCObject design.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] PyDict_SetItem hook

2009-04-02 Thread Thomas Wouters
On Thu, Apr 2, 2009 at 04:16, John Ehresman  wrote:

> Collin Winter wrote:
>
>> Have you measured the impact on performance?
>>
>
> I've tried to test using pystone, but am seeing more differences between
> runs than there are between python w/ the patch and w/o when there is no hook
> installed.  The highest pystone is actually from the binary w/ the patch,
> which I don't really believe unless it's some low level code generation
> effect.  The cost is one test of a global variable and then a switch to the
> branch that doesn't call the hooks.
>
> I'd be happy to try to come up with better numbers next week after I get
> home from pycon.
>

Pystone is pretty much a useless benchmark. If it measures anything, it's
the speed of the bytecode dispatcher (and it doesn't measure it particularly
well.) PyBench isn't any better, in my experience. Collin has collected a
set of reasonable benchmarks for Unladen Swallow, but they still leave a lot
to be desired. From the discussions at the VM and Language summits before
PyCon, I don't think anyone else has better benchmarks, though, so I would
suggest using Unladen Swallow's:
http://code.google.com/p/unladen-swallow/wiki/Benchmarks
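
For the narrow question of raw dict-store overhead, even a crude timeit
run executed under both a patched and an unpatched binary gives a
first-order signal (a sketch only, not a substitute for a real suite):

import timeit

# Compare the minima reported by a patched and an unpatched interpreter.
t = timeit.Timer("for i in range(1000): d[i] = i", "d = {}")
print(min(t.repeat(5, 10000)))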

-- 
Thomas Wouters 

Hi! I'm a .signature virus! copy me into your .signature file to help me
spread!


Re: [Python-Dev] 3to2 Project

2009-04-02 Thread Ron DuPlain
On Wed, Apr 1, 2009 at 12:50 PM, Ron DuPlain  wrote:
> On Mon, Mar 30, 2009 at 9:29 PM, Benjamin Peterson wrote:
>> 2009/3/30 Collin Winter :
>>> If anyone is interested in working on this during the PyCon sprints or
>>> otherwise, here are some easy, concrete starter projects that would
>>> really help move this along:
>>> - The core refactoring engine needs to be broken out from 2to3. In
>>> particular, the tests/ and fixes/ need to get pulled up a directory,
>>> out of lib2to3/.
>>> - Once that's done, lib2to3 should then be renamed to something like
>>> librefactor or something else that indicates its more general nature.
>>> This will allow both 2to3 and 3to2 to more easily share the core
>>> components.
>>
>> FWIW, I think it is unfortunately too late to make this change. We've
>> already released it as lib2to3 in the standard library and I have
>> actually seen it used in other projects. (PythonScope, for example.)
>>
>
> Paul Kippes and I have been sprinting on this.  We put lib2to3 into a
> refactor package and kept a shell lib2to3 to support the old
> interface.
>
> We are able to run 2to3, 3to2, lib2to3 tests, and refactor tests.  We
> only have a few simple 3to2 fixes now, but they should be easy to add.
>  We kept the old lib2to3 tests to make sure we didn't break anything.
> As things settle down, I'd like to verify that our new lib2to3 is
> backward-compatible (since right now it points to the new refactor
> lib) with one of the external projects.
>
> We've been using hg to push changesets between each other, but we'll
> be committing to the svn sandbox before the week is out.  I'm heading
> out today, but Paul is sticking around another day.
>
> It's a start,
>
> Ron
>

See sandbox/trunk/refactor_pkg.
More fixers to come...

-Ron


Re: [Python-Dev] PyDict_SetItem hook

2009-04-02 Thread Raymond Hettinger
The measurements are just a distractor.  We all already know that the hook is 
being added to a critical path.  Everyone will pay a cost for a feature that 
few people will use.  This is a really bad idea.  It is not part of a thorough, 
thought-out framework of container hooks (something that would need a PEP at 
the very least).  The case for how it helps us is somewhat thin.  The case 
for DTrace hooks was much stronger.  

If something does go in, it should be #ifdef'd out by default.  But then, I 
don't think it should go in at all.  


Raymond



  On Thu, Apr 2, 2009 at 04:16, John Ehresman  wrote:

Collin Winter wrote:

  Have you measured the impact on performance?



I've tried to test using pystone, but am seeing more differences between 
runs than there are between python w/ the patch and w/o when there is no hook 
installed.  The highest pystone is actually from the binary w/ the patch, which 
I don't really believe unless it's some low level code generation effect.  The 
cost is one test of a global variable and then a switch to the branch that 
doesn't call the hooks.

I'd be happy to try to come up with better numbers next week after I get 
home from pycon.


  Pystone is pretty much a useless benchmark. If it measures anything, it's the 
speed of the bytecode dispatcher (and it doesn't measure it particularly well.) 
PyBench isn't any better, in my experience. Collin has collected a set of 
reasonable benchmarks for Unladen Swallow, but they still leave a lot to be 
desired. From the discussions at the VM and Language summits before PyCon, I 
don't think anyone else has better benchmarks, though, so I would suggest using 
Unladen Swallow's: http://code.google.com/p/unladen-swallow/wiki/Benchmarks




Re: [Python-Dev] Let's update CObject API so it is safe and regular!

2009-04-02 Thread Larry Hastings

Guido van Rossum wrote:

On Thu, Apr 2, 2009 at 6:22 AM, Jim Fulton  wrote:
  

The original use case for CObjects was to export an API from a module, in
which case, you'd be importing the API from the module.


I consider this the *only* use case. What other use cases are there?


Exporting a C/C++ data structure:

   
http://wiki.cacr.caltech.edu/danse/index.php/Lots_more_details_on_writing_wrappers
   
http://www.cacr.caltech.edu/projects/ARCS/array_kluge/array_klugemodule/html/misc_8h.html
   http://svn.xiph.org/trunk/vorbisfile-python/vorbisfile.c

Some folks don't register a proper type; they just wrap their objects in 
CObjects and add module methods.


The "obscure" method in the "Robin" package ( 
http://code.google.com/p/robin/ ) curiously wraps a *Python* object in a 
CObject:


   
http://code.google.com/p/robin/source/browse/trunk/src/robin/frontends/python/module.cc

I must admit I don't understand why this is a good idea.


There are many more wild & wooly use cases to be found if you Google for 
"PyCObject_FromVoidPtr".  Using CObject to export C APIs seems to be 
the minority use, outside the CPython sources anyway.



/larry/


Re: [Python-Dev] Let's update CObject API so it is safe and regular!

2009-04-02 Thread Larry Hastings


Hrvoje Niksic wrote:
If we're adding type information, then please make it a Python object 
rather than a C string.  That way the creator and the consumer can use 
a richer API to query the "type", such as by calling its methods or by 
inspecting it in some other way.


I'm not writing my patch that way; it would be too cumbersome for what 
is ostensibly an easy, light-weight API.  If you're going that route you 
might as well create a real PyTypeObject for the blob you're passing in.


But please feel free to contribute your own competing patch; you may 
start with my patch if you like.


YAGNI,


/larry/


Re: [Python-Dev] Let's update CObject API so it is safe and regular!

2009-04-02 Thread Larry Hastings

Guido van Rossum wrote:

OK, my proposal would be to agree on the value of this string too:
"module.variable".
  


That's a fine idea for cases where the CObject is stored as an attribute 
of a module; my next update of my patch will change the existing uses to 
use that format.



Why would you care about safety for ctypes? It's about as unsafe as it
gets anyway. Coredump emptor I say.


_ctypes and exporting C APIs are not the only use cases of CObjects in 
the wild.  Please see, uh, that email I wrote like five minutes ago, 
also a reply to you.



/larry/


Re: [Python-Dev] PEP 382: Namespace Packages

2009-04-02 Thread Chris Withers

Martin v. Löwis wrote:

I propose the following PEP for inclusion to Python 3.1.
Please comment.


Would this support the following case:

I have a package called mortar, which defines useful stuff:

from mortar import content, ...

I now want to distribute large optional chunks separately, but ideally 
so that the following will work:


from mortar.rbd import ...
from mortar.zodb import ...
from mortar.wsgi import ...

Does the PEP support this? The only way I can currently think to do this 
would result in:


from mortar import content,..
from mortar_rbd import ...
from mortar_zodb import ...
from mortar_wsgi import ...

...which looks a bit unsightly to me.

cheers,

Chris

--
Simplistix - Content Management, Zope & Python Consulting
   - http://www.simplistix.co.uk


Re: [Python-Dev] PEP 382: Namespace Packages

2009-04-02 Thread Chris Withers

P.J. Eby wrote:
Apart from that, this mechanism sounds great!  I only wish there was a 
way to backport it all the way to 2.3 so I could drop the messy bits 
from setuptools.


Maybe we could? :-)

Chris

--
Simplistix - Content Management, Zope & Python Consulting
   - http://www.simplistix.co.uk


Re: [Python-Dev] OSError.errno => exception hierarchy?

2009-04-02 Thread Benjamin Peterson
2009/4/2 Gustavo Carneiro :
> Apologies if this has already been discussed.

I don't believe it has ever been discussed to be implemented.

> Apparently no one has bothered yet to turn OSError + errno into a hierarchy
> of OSError subclasses, as it should.  What's the problem, no will to do it,
> or no manpower?

Python doesn't need any more builtin exceptions to clutter the
namespace. Besides, what's wrong with just checking the errno?



-- 
Regards,
Benjamin


Re: [Python-Dev] PEP 382: Namespace Packages

2009-04-02 Thread M.-A. Lemburg
On 2009-04-02 17:32, Martin v. Löwis wrote:
> I propose the following PEP for inclusion to Python 3.1.

Thanks for picking this up.

I'd like to extend the proposal to Python 2.7 and later.

> Please comment.
> 
> Regards,
> Martin
> 
> Specification
> =
> 
> Rather than using an imperative mechanism for importing packages, a
> declarative approach is proposed here, as an extension to the existing
> ``*.pkg`` mechanism.
> 
> The import statement is extended so that it directly considers ``*.pkg``
> files during import; a directory is considered a package if it either
> contains a file named __init__.py, or a file whose name ends with
> ".pkg".

That's going to slow down Python package detection a lot - you'd
replace an O(1) test with an O(n) scan.

Alternative Approach:
---------------------

Wouldn't it be better to stick with a simpler approach and look for
"__pkg__.py" files to detect namespace packages using that O(1) check?

This would also avoid any issues you'd otherwise run into if you want
to maintain this scheme in an importer that doesn't have access to a list
of files in a package directory, but is perfectly capable of checking
for the existence of a file.

Mechanism:
----------

If the import mechanism finds a matching namespace package (a directory
with a __pkg__.py file), it then goes into namespace package scan mode and
scans the complete sys.path for more occurrences of the same namespace
package.

The import loads all __pkg__.py files of matching namespace packages
having the same package name during the search.

One of the namespace packages, the defining namespace package, will have
to include an __init__.py file.

After having scanned all matching namespace packages and loading
the __pkg__.py files in the order of the search, the import mechanism
then sets the package's __path__ attribute to include all namespace
package directories found on sys.path and finally executes the
__init__.py file.
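
To illustrate (directory and package names are hypothetical), a layout
under this scheme could look like:

    path1/zope/__pkg__.py              (add-on portion)
    path1/zope/interface/__init__.py
    path2/zope/__pkg__.py
    path2/zope/__init__.py             (defining portion)
    path2/zope/app/__init__.py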

(Please let me know if the above is not clear, I will then try to
follow up on it.)

Discussion:
-----------

The above mechanism allows the same kind of flexibility we already
have with the existing normal __init__.py mechanism.

* It doesn't add yet another .pth-style sys.path extension (which are
difficult to manage in installations).

* It always uses the same naive sys.path search strategy. The strategy
is not determined by some file contents.

* The search is only done once - on the first import of the package.

* It's possible to have a defining package dir and add-on package
dirs.

* Namespace packages are easy to recognize by testing for a single
resource.

* Namespace __pkg__.py modules can provide extra meta-information,
logging, etc. to simplify debugging namespace package setups.

* It's possible to freeze such setups, to put them into ZIP files,
or only have parts of it in a ZIP file and the other parts in the
file-system.

Caveats:

* Changes to sys.path will not result in an automatic rescan for
additional namespace packages, if the package was already loaded.
However, we could have a function to make such a rescan explicit.

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Apr 02 2009)
>>> Python/Zope Consulting and Support ...http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...http://python.egenix.com/

2009-03-19: Released mxODBC.Connect 1.0.1  http://python.egenix.com/

::: Try our new mxODBC.Connect Python Database Interface for free ! 


   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
   Registered at Amtsgericht Duesseldorf: HRB 46611
   http://www.egenix.com/company/contact/


Re: [Python-Dev] OSError.errno => exception hierarchy?

2009-04-02 Thread Jack diederich
On Thu, Apr 2, 2009 at 4:25 PM, Benjamin Peterson  wrote:
> 2009/4/2 Gustavo Carneiro :
>> Apologies if this has already been discussed.
>
> I don't believe it has ever been discussed to be implemented.
>
>> Apparently no one has bothered yet to turn OSError + errno into a hierarchy
>> of OSError subclasses, as it should.  What's the problem, no will to do it,
>> or no manpower?
>
> Python doesn't need any more builtin exceptions to clutter the
> namespace. Besides, what's wrong with just checking the errno?

The problem is manpower (this has been no one's itch).  In order to
have a hierarchy of OSError exceptions the underlying code would have
to raise them.  That means diving into all the C code that raises
OSError and cleaning them up.

I'm +1 on the idea but -1 on doing the work myself.

-Jack


Re: [Python-Dev] OSError.errno => exception hierarchy?

2009-04-02 Thread Barry Warsaw


On Apr 2, 2009, at 3:35 PM, Jack diederich wrote:

On Thu, Apr 2, 2009 at 4:25 PM, Benjamin Peterson wrote:

2009/4/2 Gustavo Carneiro :

Apologies if this has already been discussed.


I don't believe it has ever been discussed to be implemented.

Apparently no one has bothered yet to turn OSError + errno into a
hierarchy of OSError subclasses, as it should.  What's the problem,
no will to do it, or no manpower?


Python doesn't need any more builtin exceptions to clutter the
namespace. Besides, what's wrong with just checking the errno?


The problem is manpower (this has been no ones itch).  In order to
have a hierarchy of OSError exceptions the underlying code would have
to raise them.  That means diving into all the C code that raises
OSError and cleaning them up.

I'm +1 on the idea but -1 on doing the work myself.


I'm +0/-1 (idea/work) on doing them all, but I think a /few/ errnos  
would be very handy.  I certainly check ENOENT and EEXIST very  
frequently, so being able to easily catch or ignore those would be a  
big win.  I'm sure there's one or two others that would give big bang  
for little buck.


Barry



Re: [Python-Dev] issue5578 - explanation

2009-04-02 Thread Chris Withers

R. David Murray wrote:

On Wed, 1 Apr 2009 at 13:12, Chris Withers wrote:

Guido van Rossum wrote:

 Well hold on for a minute, I remember we used to have an exec
 statement in a class body in the standard library, to define some file
 methods in socket.py IIRC. 


But why an exec?! Surely there must be some other way to do this than 
an exec?


Maybe, but this sure is gnarly code:

_s = ("def %s(self, *args): return self._sock.%s(*args)\n\n"
      "%s.__doc__ = _realsocket.%s.__doc__\n")
for _m in _socketmethods:
    exec _s % (_m, _m, _m, _m)
del _m, _s


I played around with this and managed to rewrite it as:

from functools import partial
from new import instancemethod

def meth(name,self,*args):
    return getattr(self._sock,name)(*args)

for _m in _socketmethods:
    p = partial(meth,_m)
    p.__name__ = _m
    p.__doc__ = getattr(_realsocket,_m).__doc__
    m = instancemethod(p,None,_socketobject)
    setattr(_socketobject,_m,m)

Have I missed something or is that a suitable replacement that gets rid 
of the exec nastiness?


Chris

--
Simplistix - Content Management, Zope & Python Consulting
   - http://www.simplistix.co.uk


Re: [Python-Dev] issue5578 - explanation

2009-04-02 Thread Guido van Rossum
On Thu, Apr 2, 2009 at 2:16 PM, Chris Withers  wrote:
> R. David Murray wrote:
>>
>> On Wed, 1 Apr 2009 at 13:12, Chris Withers wrote:
>>>
>>> Guido van Rossum wrote:

  Well hold on for a minute, I remember we used to have an exec
  statement in a class body in the standard library, to define some file
  methods in socket.py IIRC.
>>>
>>> But why an exec?! Surely there must be some other way to do this than an
>>> exec?
>>
>> Maybe, but this sure is gnarly code:
>>
>>    _s = ("def %s(self, *args): return self._sock.%s(*args)\n\n"
>>          "%s.__doc__ = _realsocket.%s.__doc__\n")
>>    for _m in _socketmethods:
>>        exec _s % (_m, _m, _m, _m)
>>    del _m, _s
>
> I played around with this and managed to rewrite it as:
>
> from functools import partial
> from new import instancemethod
>
> def meth(name,self,*args):
>    return getattr(self._sock,name)(*args)
>
> for _m in _socketmethods:
>    p = partial(meth,_m)
>    p.__name__ = _m
>    p.__doc__ = getattr(_realsocket,_m).__doc__
>    m = instancemethod(p,None,_socketobject)
>    setattr(_socketobject,_m,m)
>
> Have I missed something or is that a suitable replacement that gets rid of
> the exec nastiness?

That code in socket.py is much older than functools... I don't know if
the dependency matters, probably not.

But anyways this is moot, the bug was only about exec in a class body
*nested inside a function*.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] PyDict_SetItem hook

2009-04-02 Thread Guido van Rossum
Wow. Can you possibly be more negative?

2009/4/2 Raymond Hettinger :
> The measurements are just a distractor.  We all already know that the hook
> is being added to a critical path.  Everyone will pay a cost for a feature
> that few people will use.  This is a really bad idea.  It is not part of a
> thorough, thought-out framework of container hooks (something that would
> need a PEP at the very least).    The case for how it helps us is somewhat
> thin.  The case for DTrace hooks was much stronger.
>
> If something does go in, it should be #ifdef'd out by default.  But then, I
> don't think it should go in at all.
>
>
> Raymond
>
>
>
>
> On Thu, Apr 2, 2009 at 04:16, John Ehresman  wrote:
>>
>> Collin Winter wrote:
>>>
>>> Have you measured the impact on performance?
>>
>> I've tried to test using pystone, but am seeing more differences between
>> runs than there are between python w/ the patch and w/o when there is no hook
>> installed.  The highest pystone is actually from the binary w/ the patch,
>> which I don't really believe unless it's some low level code generation
>> effect.  The cost is one test of a global variable and then a switch to the
>> branch that doesn't call the hooks.
>>
>> I'd be happy to try to come up with better numbers next week after I get
>> home from pycon.
>
> Pystone is pretty much a useless benchmark. If it measures anything, it's
> the speed of the bytecode dispatcher (and it doesn't measure it particularly
> well.) PyBench isn't any better, in my experience. Collin has collected a
> set of reasonable benchmarks for Unladen Swallow, but they still leave a lot
> to be desired. From the discussions at the VM and Language summits before
> PyCon, I don't think anyone else has better benchmarks, though, so I would
> suggest using Unladen Swallow's:
> http://code.google.com/p/unladen-swallow/wiki/Benchmarks
>
>



-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] issue5578 - explanation

2009-04-02 Thread Chris Withers

Guido van Rossum wrote:

from functools import partial
from new import instancemethod

def meth(name,self,*args):
   return getattr(self._sock,name)(*args)

for _m in _socketmethods:
   p = partial(meth,_m)
   p.__name__ = _m
   p.__doc__ = getattr(_realsocket,_m).__doc__
   m = instancemethod(p,None,_socketobject)
   setattr(_socketobject,_m,m)

Have I missed something or is that a suitable replacement that gets rid of
the exec nastiness?


That code in socket.py is much older than functools... I don't know if
the dependency matters, probably not.

But anyways this is moot, the bug was only about exec in a class body
*nested inside a function*.


Indeed, I just hate seeing execs and it was an interesting mental 
exercise to try and get rid of the above one ;-)


Assuming it breaks no tests, would there be objection to me committing 
the above change to the Python 3 trunk?


Chris

--
Simplistix - Content Management, Zope & Python Consulting
   - http://www.simplistix.co.uk


Re: [Python-Dev] OSError.errno => exception hierarchy?

2009-04-02 Thread Amaury Forgeot d'Arc
Hello,

On Thu, Apr 2, 2009 at 22:35, Jack diederich  wrote:
> On Thu, Apr 2, 2009 at 4:25 PM, Benjamin Peterson  wrote:
>> 2009/4/2 Gustavo Carneiro :
>>> Apologies if this has already been discussed.
>>
>> I don't believe it has ever been discussed to be implemented.
>>
>>> Apparently no one has bothered yet to turn OSError + errno into a hierarchy
>>> of OSError subclasses, as it should.  What's the problem, no will to do it,
>>> or no manpower?
>>
>> Python doesn't need any more builtin exceptions to clutter the
>> namespace. Besides, what's wrong with just checking the errno?
>
> The problem is manpower (this has been no one's itch).  In order to
> have a hierarchy of OSError exceptions the underlying code would have
> to raise them.  That means diving into all the C code that raises
> OSError and cleaning them up.
>
> I'm +1 on the idea but -1 on doing the work myself.
>
> -Jack

The py library (http://codespeak.net/py/dist/) already has a py.error
module that provides an exception class for each errno.
See for example how they use py.error.ENOENT, py.error.EACCES... to
implement some kind of FilePath object:
http://codespeak.net/svn/py/dist/py/path/local/local.py
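
Typical usage looks roughly like this (a sketch from memory, untested):

import py

try:
    py.path.local('/does/not/exist').stat()
except py.error.ENOENT:
    pass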

But I'm not sure I would like this kind of code in core python. Too
much magic...

-- 
Amaury Forgeot d'Arc


Re: [Python-Dev] issue5578 - explanation

2009-04-02 Thread Guido van Rossum
On Thu, Apr 2, 2009 at 2:21 PM, Chris Withers  wrote:
> Guido van Rossum wrote:
>>>
>>> from functools import partial
>>> from new import instancemethod
>>>
>>> def meth(name,self,*args):
>>>   return getattr(self._sock,name)(*args)
>>>
>>> for _m in _socketmethods:
>>>   p = partial(meth,_m)
>>>   p.__name__ = _m
>>>   p.__doc__ = getattr(_realsocket,_m).__doc__
>>>   m = instancemethod(p,None,_socketobject)
>>>   setattr(_socketobject,_m,m)
>>>
>>> Have I missed something or is that a suitable replacement that gets rid
>>> of
>>> the exec nastiness?
>>
>> That code in socket.py is much older than functools... I don't know if
>> the dependency matters, probably not.
>>
>> But anyways this is moot, the bug was only about exec in a class body
>> *nested inside a function*.
>
> Indeed, I just hate seeing execs and it was an interesting mental exercise
> to try and get rid of the above one ;-)
>
> Assuming it breaks no tests, would there be objection to me committing the
> above change to the Python 3 trunk?

That's up to Benjamin. Personally, I live by "if it ain't broke, don't
fix it." :-)

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)


[Python-Dev] [issue3609] does parse_header really belong in CGI module?

2009-04-02 Thread Senthil Kumaran
http://bugs.python.org/issue3609 requests moving the function
parse_header from the cgi module to the email package.

The reasons for this request are:

1) The MIME type header parsing methods rightly belong in the email
package, conforming to RFC 2045.
2) parse_qs, parse_qsl were similarly moved from cgi to urlparse.
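
For reference, the function being moved does MIME-style parameter
splitting, roughly:

>>> import cgi
>>> cgi.parse_header('text/html; charset=utf-8')
('text/html', {'charset': 'utf-8'})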


The question here is: should the relocation happen in Python 2.7 as
well as in Python 3k, or only in Python 3k?

If the change happens in Python 2.7, then cgi.parse_header will carry a
DeprecationWarning, in case there are further releases in the Python 2.x
series.

Does anyone have any concerns with this change?

-- 
Senthil


[Python-Dev] Package Management - thoughts from the peanut gallery

2009-04-02 Thread Chris Withers

Hey All,

I have to admit to not having the willpower to plough through the 200
unread messages in the packaging thread when I got back from PyCon, but
I just wanted to throw out a few thoughts on what my python packaging
utopia would look like:


- python would have a package format that included version numbers and 
dependencies.


- this package format would "play nice" with os-specific ideas of how 
packages should be structured.


- python itself would have a version number, so it could be treated as 
just another dependency by packages (ie: python >=2.3,<3)


- python would ship with a package manager that would let you install 
and uninstall python packages, resolving dependencies in the process and 
complaining if it couldn't or if there were clashes


- this package manager would facilitate the building of os-specific 
packages (.deb, .rpm) including providing dependency information, so 
making life *much* easier for these packagers.


- the standard library packages would be no different from any other 
package, and could be overridden as and when new versions became 
available on PyPI, should an end user so desire. They would also be free 
to have their own release lifecycles (unittest, distutils, email, I'm 
looking at you!)


- python would still ship "batteries included" with versions of these 
packages appropriate for the release, to keep those in corporate 
shackles or with no network happy. In fact, creating 
application-specific "bundles" like this would become trivial, helping 
those who have apps they want to ship as single, isolated lumps 
which the os-specific package managers could use without having to worry 
about any python package dependencies.


Personally, I feel all of the above are perfectly possible, and I can't
see anyone being left unhappy by them. I'm sure I must have missed
something, though; otherwise, why not make it happen?


cheers,

Chris

--
Simplistix - Content Management, Zope & Python Consulting
   - http://www.simplistix.co.uk


[Python-Dev] unittest package

2009-04-02 Thread Michael Foord

Hello all,

The unittest module is around 1500 lines of code now, and the tests are 
3000 lines.


It would be much easier to maintain as a package rather than a module. 
Shall I work on a suggested structure or are there objections in principle?


Obviously all the functionality would still be available from the 
top-level unittest namespace (for backwards compatibility).
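
For example, one possible split (the module names below are only a
sketch, not a decided layout):

    unittest/
        __init__.py    # re-exports TestCase, TestSuite, main, ...
        case.py        # TestCase
        suite.py       # TestSuite
        loader.py      # TestLoader
        result.py      # TestResult
        runner.py      # TextTestRunner
        main.py        # TestProgram (unittest.main)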


Michael

--
http://www.ironpythoninaction.com/



Re: [Python-Dev] PyDict_SetItem hook

2009-04-02 Thread Raymond Hettinger

Wow. Can you possibly be more negative?


I think it's worse to give the poor guy the runaround
by making him run lots of random benchmarks.  In
the end, someone will run a timeit or have a specific
case that shows the full effect.  All of the respondents
so far seem to have a clear intuition that the hook is right
in the middle of a critical path.  Their intuition matches
what I learned by spending a month trying to find ways
to optimize dictionaries.

Am surprised that there has been no discussion of why
this should be in the default build (as opposed to a
compile-time option).  AFAICT, users have not previously
requested a hook like this.

Also, there has been no discussion of an overall strategy
for monitoring containers in general.  Lists and tuples will
both defy this approach because there is so much code
that accesses the arrays directly.  Am not sure whether the
setitem hook would work for other implementations either.

It seems weird to me that Collin's group can be working
so hard just to get a percent or two improvement in 
specific cases for pickling while python-dev is readily 
entertaining a patch that slows down the entire language.  


If my thoughts on the subject bug you, I'll happily
withdraw from the thread.  I don't aspire to be a
source of negativity.  I just happen to think this 
proposal isn't a good idea.



Raymond



- Original Message - 
From: "Guido van Rossum" 

To: "Raymond Hettinger" 
Cc: "Thomas Wouters" ; "John Ehresman" ; 

Sent: Thursday, April 02, 2009 2:19 PM
Subject: Re: [Python-Dev] PyDict_SetItem hook


Wow. Can you possibly be more negative?

2009/4/2 Raymond Hettinger :

The measurements are just a distractor. We all already know that the hook
is being added to a critical path. Everyone will pay a cost for a feature
that few people will use. This is a really bad idea. It is not part of a
thorough, thought-out framework of container hooks (something that would
need a PEP at the very least). The case for how it helps us is somewhat
thin. The case for DTrace hooks was much stronger.

If something does go in, it should be #ifdef'd out by default. But then, I
don't think it should go in at all.


Raymond




On Thu, Apr 2, 2009 at 04:16, John Ehresman  wrote:


Collin Winter wrote:


Have you measured the impact on performance?


I've tried to test using pystone, but am seeing more differences between
runs than there are between python w/ the patch and w/o when there is no hook
installed. The highest pystone is actually from the binary w/ the patch,
which I don't really believe unless it's some low level code generation
effect. The cost is one test of a global variable and then a switch to the
branch that doesn't call the hooks.

I'd be happy to try to come up with better numbers next week after I get
home from pycon.


Pystone is pretty much a useless benchmark. If it measures anything, it's
the speed of the bytecode dispatcher (and it doesn't measure it particularly
well.) PyBench isn't any better, in my experience. Collin has collected a
set of reasonable benchmarks for Unladen Swallow, but they still leave a lot
to be desired. From the discussions at the VM and Language summits before
PyCon, I don't think anyone else has better benchmarks, though, so I would
suggest using Unladen Swallow's:
http://code.google.com/p/unladen-swallow/wiki/Benchmarks








--
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] unittest package

2009-04-02 Thread Robert Collins
On Thu, 2009-04-02 at 16:58 -0500, Michael Foord wrote:
> Hello all,
> 
> The unittest module is around 1500 lines of code now, and the tests are 
> 3000 lines.
> 
> It would be much easier to maintain as a package rather than a module. 
> Shall I work on a suggested structure or are there objections in principle?
> 
> Obviously all the functionality would still be available from the 
> top-level unittest namespace (for backwards compatibility).
> 
> Michael

I'd like to see this; jml's testtools package has a layout for this
which is quite nice.

-Rob




Re: [Python-Dev] PyDict_SetItem hook

2009-04-02 Thread Antoine Pitrou
Raymond Hettinger writes:
> 
> It seems weird to me that Collin's group can be working
> so hard just to get a percent or two improvement in 
> specific cases for pickling while python-dev is readily 
> entertaining a patch that slows down the entire language.  

I think it's really more than a percent or two:
http://bugs.python.org/issue5670

Regards

Antoine.




Re: [Python-Dev] PyDict_SetItem hook

2009-04-02 Thread Raymond Hettinger



It seems weird to me that Collin's group can be working
so hard just to get a percent or two improvement in 
specific cases for pickling while python-dev is readily 
entertaining a patch that slows down the entire language.  


[Antoine Pitrou]

I think it's really more than a percent or two:
http://bugs.python.org/issue5670


For lists, it was a percent or two:
http://bugs.python.org/issue5671

I expect Collin's overall efforts to pay off nicely.  I was
just pointing out the contrast between module-specific
optimization efforts versus anti-optimizations that affect
the whole language.


Raymond


Re: [Python-Dev] PyDict_SetItem hook

2009-04-02 Thread Amaury Forgeot d'Arc
On Thu, Apr 2, 2009 at 03:23, Christian Heimes  wrote:
> John Ehresman wrote:
>> * To what extent should non-debugger code use the hook?  At one end of
>> the spectrum, the hook could be made readily available for non-debug use
>> and at the other end, it could be documented as being debug only,
>> disabled in python -O, & not exposed in the stdlib to python code.
>
> To explain Collin's mail:
> Python's dict implementation is crucial to the performance of any Python
> program. Modules, types, instances all rely on the speed of Python's
> dict type because most of them use a dict to store their name space.
> Even the smallest change to the C code may lead to a severe performance
> penalty. This is especially true for set and get operations.

A change that would have no performance impact could be to set mp->ma_lookup
to another function that calls all the hooks it wants before calling the
"super()" method (lookdict). This ma_lookup is already an attribute of every
dict, so a debugger could trace only the namespaces it monitors.

The only problem here is that ma_lookup is called with the key and its hash,
but not with the value, and you cannot know whether you are reading or
setting the dict. It is easy to add an argument and call ma_lookup with the
value (or NULL, or -1 depending on the action: set, get or del), but this
may have a slight impact (benchmark needed!) even if this argument is not
used by the standard function.

-- 
Amaury Forgeot d'Arc


Re: [Python-Dev] PyDict_SetItem hook

2009-04-02 Thread Thomas Wouters
On Fri, Apr 3, 2009 at 00:07, Raymond Hettinger  wrote:

>
> It seems weird to me that Collin's group can be working
> so hard just to get a percent or two improvement in specific cases for
> pickling while python-dev is readily entertaining a patch that slows down
> the entire language.


Collin's group has unfortunately seen that you cannot know the actual impact
of a change until you measure it. GCC performance, for instance, is
extremely unpredictable, and I can easily see a change like this proving to
have zero impact -- or even positive impact -- on most platforms because,
say, it warms the cache for the common case. I doubt it will, but you can't
*know* until you measure it.

-- 
Thomas Wouters 

Hi! I'm a .signature virus! copy me into your .signature file to help me
spread!


Re: [Python-Dev] PyDict_SetItem hook

2009-04-02 Thread Guido van Rossum
On Thu, Apr 2, 2009 at 3:07 PM, Raymond Hettinger  wrote:
>> Wow. Can you possibly be more negative?
>
> I think it's worse to give the poor guy the run around

Mind your words please.

> by making him run lots of random benchmarks.  In
> the end, someone will run a timeit or have a specific
> case that shows the full effect.  All of the respondents so far seem to have
> a clear intuition that the hook is right in the middle of a critical path.
>  Their intuition matches
> what I learned by spending a month trying to find ways
> to optimize dictionaries.
>
> Am surprised that there has been no discussion of why this should be in the
> default build (as opposed to a compile time option).  AFAICT, users have not
> previously
> requested a hook like this.

I may be partially to blame for this. John and Stephan are requesting
this because it would (mostly) fulfill one of the top wishes of the
users of Wingware. So the use case is certainly real.

> Also, there has been no discussion for an overall strategy
> for monitoring containers in general.  Lists and tuples will
> both defy this approach because there is so much code
> that accesses the arrays directly.  Am not sure whether the
> setitem hook would work for other implementations either.

The primary use case is some kind of trap on assignment. While this
cannot cover all cases, most non-local variables are stored in dicts.
List mutations are not in the same league, as use case.

> It seems weird to me that Collin's group can be working
> so hard just to get a percent or two improvement in specific cases for
> pickling while python-dev is readily entertaining a patch that slows down
> the entire language.

I don't actually believe that you can know whether this affects
performance at all without serious benchmarking. The patch amounts to
a single global flag check as long as the feature is disabled, and
that flag could be read from the L1 cache.

> If my thoughts on the subject bug you, I'll happily
> withdraw from the thread.  I don't aspire to be a
> source of negativity.  I just happen to think this proposal isn't a good
> idea.

I think we need more proof either way.

> Raymond
>
>
>
> - Original Message - From: "Guido van Rossum" 
> To: "Raymond Hettinger" 
> Cc: "Thomas Wouters" ; "John Ehresman"
> ; 
> Sent: Thursday, April 02, 2009 2:19 PM
> Subject: Re: [Python-Dev] PyDict_SetItem hook
>
>
> Wow. Can you possibly be more negative?
>
> 2009/4/2 Raymond Hettinger :
>>
>> The measurements are just a distractor. We all already know that the hook
>> is being added to a critical path. Everyone will pay a cost for a feature
>> that few people will use. This is a really bad idea. It is not part of a
>> thorough, thought-out framework of container hooks (something that would
>> need a PEP at the very least). The case for how it helps us is somewhat
>> thin. The case for DTrace hooks was much stronger.
>>
>> If something does go in, it should be #ifdef'd out by default. But then, I
>> don't think it should go in at all.
>>
>>
>> Raymond
>>
>>
>>
>>
>> On Thu, Apr 2, 2009 at 04:16, John Ehresman  wrote:
>>>
>>> Collin Winter wrote:

 Have you measured the impact on performance?
>>>
>>> I've tried to test using pystone, but am seeing more differences between
>>> runs than there is between python w/ the patch and w/o when there is no
>>> hook
>>> installed. The highest pystone is actually from the binary w/ the patch,
>>> which I don't really believe unless it's some low level code generation
>>> effect. The cost is one test of a global variable and then a switch to
>>> the
>>> branch that doesn't call the hooks.
>>>
>>> I'd be happy to try to come up with better numbers next week after I get
>>> home from pycon.
>>
>> Pystone is pretty much a useless benchmark. If it measures anything, it's
>> the speed of the bytecode dispatcher (and it doesn't measure it
>> particularly
>> well.) PyBench isn't any better, in my experience. Collin has collected a
>> set of reasonable benchmarks for Unladen Swallow, but they still leave a
>> lot
>> to be desired. From the discussions at the VM and Language summits before
>> PyCon, I don't think anyone else has better benchmarks, though, so I would
>> suggest using Unladen Swallow's:
>> http://code.google.com/p/unladen-swallow/wiki/Benchmarks
>>
>>
>>
>>
>
>
>
> --
> --Guido van Rossum (home page: http://www.python.org/~guido/)
>



-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] unittest package

2009-04-02 Thread Barry Warsaw


On Apr 2, 2009, at 4:58 PM, Michael Foord wrote:

The unittest module is around 1500 lines of code now, and the tests  
are 3000 lines.


It would be much easier to maintain as a package rather than a  
module. Shall I work on a suggested structure or are there  
objections in principle?


+1/jfdi :)

Barry



Re: [Python-Dev] OSError.errno => exception hierarchy?

2009-04-02 Thread Gustavo Carneiro
(cross-posting back to python-dev to finalize discussions)

2009/4/2 Guido van Rossum 
[...]

> > The problem you report:
> >>
> >>  try:
> >>...
> >>  except OSWinError:
> >>...
> >>  except OSLinError:
> >>...
> >>
> >
> > Would be solved if both OSWinError and OSLinError were always defined in
> > both Linux and Windows Python.  Programs could be written to catch both
> > OSWinError and OSLinError, except that on Linux OSWinError would never
> > actually be raised, and on Windows OSLinError would never occur.  Problem
> > solved.
>
> Yeah, but now you'd have to generate the list of exceptions (which
> would be enormously long) based on the union of all errno codes in the
> universe.
>
> Unless you only want to do it for some errno codes and not for others,
> which sounds like asking for trouble.
>
> Also you need a naming scheme that works for all errnos and doesn't
> require manual work. Frankly, the only scheme that I can think of that
> could be automated would be something like OSError_ENAME.
>
> And, while OSError is built-in, I think these exceptions (because
> there are so many) should not be built-in, and probably not even live
> in the 'os' namespace -- the best place for them would be the errno
> module, so errno.OSError_ENAME.
>
> > The downsides of this?  I can only see memory, at the moment, but I might
> be
> > missing something.
>
> It's an enormous amount of work to make it happen across all
> platforms. And it doesn't really solve an important problem.


I partially agree.  It will be a lot of work.  I think the problem is valid,
although not very important, I agree.
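
For concreteness, a rough sketch of the auto-generated per-errno subclasses
floated above (nothing like this exists in the stdlib; the names
errno_exceptions, OSError_ENOENT and raise_os_error are made up for
illustration only):

    import errno

    # Build one OSError subclass per known errno code, keyed by the code.
    errno_exceptions = {}
    for _code, _name in errno.errorcode.items():
        errno_exceptions[_code] = type('OSError_' + _name, (OSError,), {})

    def raise_os_error(code, filename=None):
        # Pick the specific subclass if we have one, else plain OSError.
        exc_class = errno_exceptions.get(code, OSError)
        raise exc_class(code, errno.errorcode.get(code, 'unknown'), filename)

Catching code could then write 'except errno_exceptions[errno.ENOENT]:'
instead of inspecting e.errno by hand.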


>
>
> > Now just one final word why I think this matters.  The currently correct
> way
> > to remove a directory tree and only ignore the error "it does not exist"
> is:
> >
> > try:
> > shutil.rmtree("dirname")
> > except OSError, e:
> > if errno.errorcode[e.errno] != 'ENOENT':
> >raise
> >
> > However, only very experienced programmers will know to write that
> correct
> > code (apparently I am not experienced enough!).
>
> That doesn't strike me as correct at all, since it doesn't distinguish
> between ENOENT being raised for some file deep down in the tree vs.
> the root not existing. (This could happen if after you did
> os.listdir() some other process deleted some file.)


OK.  Maybe in a generic case this could happen, although I'm sure this won't
happen in my particular scenario.  This is about a build system, and I am
assuming there are no two concurrent builds (or else a lot of other things
would fail anyway).


> A better way might be
>
> try:
>  shutil.rmtree(dirname)
> except OSError:
>  if os.path.exists(dirname):
>   raise


Sure, this works, but at the cost of an extra system call.  I think it's
more elegant to check the errno (assuming the corner case you pointed out
above is not an issue).
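
For reference, the direct errno comparison being discussed looks like this
(a small sketch; the helper name rmtree_if_exists is made up):

    import errno
    import shutil

    def rmtree_if_exists(path):
        try:
            shutil.rmtree(path)
        except OSError, e:
            # Ignore only "no such file or directory"; re-raise anything else.
            if e.errno != errno.ENOENT:
                raise

    rmtree_if_exists("dirname")

Comparing e.errno against the errno constant avoids the extra lookup through
the errno.errorcode name table used in the snippet quoted earlier.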


> Though I don't know what you wish to happen if dirname were a dangling
> symlink.
>
> > What I am proposing is that the simpler correct code would be something
> > like:
> >
> > try:
> > shutil.rmtree("dirname")
> > except OSNoEntryError:
> > pass
> >
> > Much simpler, no?
>
> And wrong.
>
> > Right now, developers are tempted to write code like:
> >
> > shutil.rmtree("dirname", ignore_errors=True)
> >
> > Or:
> >
> > try:
> > shutil.rmtree("dirname")
> > except OSError:
> > pass
> >
> > Both of which follow the error hiding anti-pattern [1].
> >
> > [1] http://en.wikipedia.org/wiki/Error_hiding
> >
> > Thanks for reading this far.
>
> Thanks for not wasting any more of my time.


OK, I won't waste more time.  If this were an obvious improvement beyond
doubt to most people, I would pursue it, but since it's not, I can live with
it.

Thanks anyway,

-- 
Gustavo J. A. M. Carneiro
INESC Porto, Telecommunications and Multimedia Unit
"The universe is always one step beyond logic." -- Frank Herbert


[Python-Dev] __length_hint__

2009-04-02 Thread Daniel Stutzbach
Iterators can implement a method called __length_hint__ that provides a hint
to certain internal routines (such as list.extend) so they can operate more
efficiently.  As far as I can tell, __length_hint__ is currently
undocumented.  Should it be?

If so, are there any constraints on what an iterator should return?  I can
think of 3 possible rules, each with advantages and disadvantages:
1. return your best guess
2. return your best guess that you are certain is not higher than the true
value
3. return your best guess that you are certain is not lower than the true
value

Also, I've noticed that if a VERY large hint is returned by the iterator,
list.extend will sometimes disregard the hint and try to allocate memory
incrementally (correct for rule #1 or #2).  However, in another code path it
will throw a MemoryError immediately based on the hint (correct for rule
#3).
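
As an illustration of the producer side, a sketch of an iterator that returns
a best-guess hint, plus a hand-rolled consumer standing in for what
list.extend() does internally in C (the consumer shown here is not the actual
implementation):

    class CountDown(object):
        def __init__(self, n):
            self.n = n
        def __iter__(self):
            return self
        def next(self):                    # __next__ in Python 3
            if self.n <= 0:
                raise StopIteration
            self.n -= 1
            return self.n
        def __length_hint__(self):
            return self.n                  # rule 1: best guess

    def consume(iterable):
        it = iter(iterable)
        hint = getattr(it, '__length_hint__', lambda: 0)()
        result = []                        # a real consumer could preallocate hint slots
        result.extend(it)
        return hint, result

    print consume(CountDown(3))            # (3, [2, 1, 0])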

--
Daniel Stutzbach, Ph.D.
President, Stutzbach Enterprises, LLC 


Re: [Python-Dev] __length_hint__

2009-04-02 Thread Benjamin Peterson
2009/4/2 Daniel Stutzbach :
> Iterators can implement a method called __length_hint__ that provides a hint
> to certain internal routines (such as list.extend) so they can operate more
> efficiently.  As far as I can tell, __length_hint__ is currently
> undocumented.  Should it be?

This has been discussed, and no, it is an implementation detail mostly
for the optimization of builtin iterators.

>
> If so, are there any constraints on what an iterator should return?  I can
> think of 3 possible rules, each with advantages and disadvantages:
> 1. return your best guess
> 2. return your best guess that you are certain is not higher than the true
> value
> 3. return your best guess that you are certain is not lower than the true
> value
>
> Also, I've noticed that if a VERY large hint is returned by the iterator,
> list.extend will sometimes disregard the hint and try to allocate memory
> incrementally (correct for rule #1 or #2).  However, in another code path it
> will throw a MemoryError immediately based on the hint (correct for rule
> #3).

Perhaps Raymond can shed some light on these.


-- 
Regards,
Benjamin


Re: [Python-Dev] __length_hint__

2009-04-02 Thread Raymond Hettinger



Iterators can implement a method called __length_hint__ that provides a hint
to certain internal routines (such as list.extend) so they can operate more
efficiently. As far as I can tell, __length_hint__ is currently
undocumented. Should it be?


This has been discussed, and no, it is an implementation detail mostly
for the optimization of builtin iterators.


Right.  That matches my vague recollection on the subject.



If so, are there any constraints on what an iterator should return? I can
think of 3 possible rules, each with advantages and disadvantages:
1. return your best guess


Yes.

BTW, the same rule also applies to __len__.  IIRC, Tim proposed 
to add that to the docs somewhere.




Perhaps Raymond can shed some light on these.


Can't guess the future of __length_hint__().
Since it doesn't have a slot, the attribute lookup
can actually slow down cases with a small number
of iterands.

The original idea was based on some research on
map/fold operations, noting that iterators can
sometimes be processed more efficiently if
accompanied by some metadata (i.e. the iterator has 
a known length, consists of unique items, is sorted, 
is all of a certain type, is re-iterable, etc.).



Raymond






Re: [Python-Dev] UnicodeDecodeError bug in distutils

2009-04-02 Thread Ben Finney
"Phillip J. Eby"  writes:

> However, there's currently no standard, as far as I know, for what
> encoding the PKG-INFO file should use.

Who would define such a standard? My vote goes for “default is UTF-8”.

> Meanwhile, the 'register' command accepts Unicode, but is broken in
> handling it. […]
> 
> Unfortunately, this isn't fixable until there's a new 2.5.x release.
> For previous Python versions, both register and write_pkg_info()
> accepted 8-bit strings and passed them on as-is, so the only
> workaround for this issue at the moment is to revert to Python 2.4
> or less.

What is the prognosis on this issue? It's still hitting me in Python
2.5.4.

-- 
 \   “Everything you read in newspapers is absolutely true, except |
  `\for that rare story of which you happen to have first-hand |
_o__) knowledge.” —Erwin Knoll |
Ben Finney



Re: [Python-Dev] PEP 382: Namespace Packages

2009-04-02 Thread P.J. Eby

At 10:33 PM 4/2/2009 +0200, M.-A. Lemburg wrote:

That's going to slow down Python package detection a lot - you'd
replace an O(1) test with an O(n) scan.


I thought about this too, but it's pretty trivial considering that 
the only time it takes effect is when you have a directory name that 
matches the name you're importing, and that it will only happen once 
for that directory, unless there is no package on sys.path with that 
name, and the program tries to import the package multiple times.  In 
other words, the overhead isn't likely to be much, compared to the 
time needed to say, open and marshal even a trivial __init__.py file.




Alternative Approach:
-

Wouldn't it be better to stick with a simpler approach and look for
"__pkg__.py" files to detect namespace packages using that O(1) check ?


I thought the same thing (or more precisely, a single .pkg file), but 
when I got lower in the PEP I saw the reason was to support system 
packages not having overlapping filenames.  The PEP could probably be 
a little clearer about the connection between needing *.pkg and the 
system-package use case.




One of the namespace packages, the defining namespace package, will have
to include a __init__.py file.


Note that there is no such thing as a "defining namespace package" -- 
namespace package contents are symmetrical peers.




The above mechanism allows the same kind of flexibility we already
have with the existing normal __init__.py mechanism.

* It doesn't add yet another .pth-style sys.path extension (which are
difficult to manage in installations).

* It always uses the same naive sys.path search strategy. The strategy
is not determined by some file contents.


The above are also true for using only a '*' in .pkg files -- in that 
event there are no sys.path changes.  (Frankly, I'm doubtful that 
anybody is using extend_path and .pkg files to begin with, so I'd be 
fine with a proposal that instead used something like '.nsp' files 
that didn't even need to be opened and read -- which would let the 
directory scan stop at the first .nsp file found.)
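
For reference, the extend_path/.pkg mechanism mentioned above is the one
documented in pkgutil; each distribution shipping a piece of the namespace
puts something like this in its copy of the package __init__.py (the package
name 'zope' is just an example):

    # zope/__init__.py
    from pkgutil import extend_path
    # Scan every sys.path entry for a 'zope' subdirectory and append it to
    # __path__; any 'zope.pkg' files found alongside are also read, and each
    # line they contain is added as an extra path entry.
    __path__ = extend_path(__path__, __name__)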




* The search is only done once - on the first import of the package.


I believe the PEP does this as well, IIUC.



* It's possible to have a defining package dir and add-on package
dirs.


Also possible in the PEP, although the __init__.py must be in the 
first such directory on sys.path.  (However, such "defining" packages 
are not that common now, due to tool limitations.)




Re: [Python-Dev] Let's update CObject API so it is safe and regular!

2009-04-02 Thread Greg Ewing

Hrvoje Niksic wrote:

I thought the entire *point* of C object was that it's an opaque box 
without any info whatsoever, except that which is known and shared by 
its creator and its consumer.


But there's no way of telling who created a given
CObject, so *nobody* knows anything about it for
certain.

--
Greg



Re: [Python-Dev] Let's update CObject API so it is safe and regular!

2009-04-02 Thread Greg Ewing

Jim Fulton wrote:

The original use case for CObjects was to export an API from a module, 
in which case, you'd be importing the API from the module.  The presence 
in the module indicates the type.


Sure, but it can't hurt to have an additional sanity
check.

Also, there are wider uses for CObjects than this.
I see it as a quick way of creating a wrapper when
you don't want to go to the trouble of a full-blown
extension type. A small amount of metadata would
make CObjects much more useful.

--
Greg




Re: [Python-Dev] PEP 382: Namespace Packages

2009-04-02 Thread Matthias Klose
Martin v. Löwis schrieb:
> I propose the following PEP for inclusion to Python 3.1.
> Please comment.
> 
> Regards,
> Martin
> 
> Abstract
> 
> 
> Namespace packages are a mechanism for splitting a single Python
> package across multiple directories on disk. In current Python
> versions, an algorithm to compute the packages __path__ must be
> formulated. With the enhancement proposed here, the import machinery
> itself will construct the list of directories that make up the
> package.

+1

speaking as a downstream packaging python for Debian/Ubuntu I welcome this
approach.  The current practice of shipping the very same file (__init__.py) in
different packages leads to conflicts for the installation of these packages
(this is not specific to dpkg, but is true for rpm packaging as well).

Current practice of packaging (for downstreams) so called "name space packages" 
is:

 - either to split out the namespace __init__.py into a separate
   (linux distribution) package (needing manual packaging effort for each
   name space package)

 - using downstream specific packaging techniques to handle conflicting files
   (diversions)

 - replicating the current behaviour of setuptools simply overwriting the
   file conflicts.

Following this proposal (downstream) packaging of namespace packages is made
possible independent of any manual downstream packaging decisions or any
downstream specific packaging decisions.

  Matthias


Re: [Python-Dev] PEP 382: Namespace Packages

2009-04-02 Thread P.J. Eby

At 03:21 AM 4/3/2009 +0200, Matthias Klose wrote:
+1

speaking as a downstream packaging python for Debian/Ubuntu I welcome this
approach.  The current practice of shipping the very same file (__init__.py)
in different packages leads to conflicts for the installation of these
packages (this is not specific to dpkg, but is true for rpm packaging as
well).

Current practice of packaging (for downstreams) so called "name space
packages" is:

 - either to split out the namespace __init__.py into a separate
   (linux distribution) package (needing manual packaging effort for each
   name space package)

 - using downstream specific packaging techniques to handle conflicting files
   (diversions)

 - replicating the current behaviour of setuptools simply overwriting the
   file conflicts.

Following this proposal (downstream) packaging of namespace packages is made
possible independent of any manual downstream packaging decisions or any
downstream specific packaging decisions.


A clarification: setuptools does not currently install the 
__init__.py file when installing in 
--single-version-externally-managed or --root mode.  Instead, it uses 
a project-version-nspkg.pth file that essentially simulates a 
variation of Martin's .pkg proposal, by abusing .pth file 
support.  If this PEP is adopted, setuptools would replace its 
nspkg.pth file with a .pkg file on Python versions that provide 
native support for .pkg imports, keeping the .pth file only for older Pythons.


(.egg files and directories will not be affected by the change, 
unless the zipimport module also supports .pkg files... and 
again, only for Python versions that support the new approach.)




[Python-Dev] [issue3609] does parse_header really belong in CGI module?

2009-04-02 Thread Stephen J. Turnbull
Senthil Kumaran writes:

 > http://bugs.python.org/issue3609  requests to move the function
 > parse_header present in cgi module to email package.
 > 
 > The reasons for this request are:
 > 
 > 1) The MIME type header parsing methods rightly belong to email
 > package. Conforming to RFC 2045.

In practice, the "mail" part of the name is historical; RFC 822-style
headers are used in many protocols, most prominently email, netnews
(less important nowadays :-( ), and HTTP.  If there are differences in
usage, the parsing methods may be different.  If not, then this
functionality is redundant in email, which has its own parser.  It
can't be right for email to have two parsers and CGI none!
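
For context, what the cgi.parse_header function under discussion does today
(stdlib behaviour; output shown in comments):

    import cgi

    # Split a header value into its main value and a dict of parameters.
    ctype, params = cgi.parse_header('text/html; charset="utf-8"')
    print ctype     # text/html
    print params    # {'charset': 'utf-8'}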

Anyway, "moving" the function is almost certainly the *wrong* thing to
do, as the email package has its own conventions and organization.  In
particular, in email header parsing is done by methods of the message
and header objects (in their respective initializations), rather than
by a (global) function.

Since Barry et al have been sprinting on email TNG, you really ought
to coordinate this with them.  I think it would be good to have header
parsing and generation in a free-standing package separate from other
aspects of handling Internet protocols, but this will require
coordination of several modules besides email and cgi.



[Python-Dev] Should the io-c modules be put in their own directory?

2009-04-02 Thread Alexandre Vassalotti
Hello,

I just noticed that the new io-c modules were merged in the py3k
branch (I know, I am kind of late on the news—blame school work). Anyway,
I am just wondering if it would be a good idea to put the io-c modules
in a sub-directory (like sqlite), instead of scattering them around in
the Modules/ directory.

Cheers,
-- Alexandre


[Python-Dev] Package Management - thoughts from the peanut gallery

2009-04-02 Thread Stephen J. Turnbull
Chris Withers writes:

 > Personally I feel all of the above are perfectly possible, and can't see 
 > anyone being left unhappy by them. I'm sure I've missed something then, 
 > otherwise why not make it happen?

Labor shortage.

We will need a PEP, the PEP will need a sample implementation, and
a proponent.  Who's gonna bell the cat?