Re: [Python-Dev] new security doc using object-capabilities

2006-07-23 Thread Armin Rigo
Hi David, hi Brett,

On Sun, Jul 23, 2006 at 02:18:48AM +0100, David Hopwood wrote:
 If I understand correctly, the proposal is that any incompatible changes
 to the language would apply only in sandboxed interpreters. So there is
 no reason why support for these couldn't go into the main branch.

That's what I originally thought too, but Brett writes:

Implementation Details


An important point to keep in mind when reading about the
implementation details for the security model is that these are
general changes and are not special to any type of interpreter,
sandboxed or otherwise.  That means if a change to a built-in type is
suggested and it does not involve a proxy, that change is meant
Python-wide for *all* interpreters.

So that's why I'm starting to worry that Brett is proposing to change
the regular Python language too.  However, Brett, you also say somewhere
else that backward compatibility is not an issue.  So I'm a bit confused
actually...

Also, I hate to sound self-centered, but I should point out somewhere
that PyPy was started by people who no longer wanted to maintain a fork
of CPython, and preferred to work on building CPython-like variants
automatically.  Many of the security features you list would be much
easier to implement and maintain in PyPy than in CPython -- also from a
security perspective: it is easier to be sure that some protection is
complete, and remains complete over time, if it is systematically
generated instead of hand-patched in a dozen places.


A bientot,

Armin


Re: [Python-Dev] new security doc using object-capabilities

2006-07-22 Thread Armin Rigo
Hi Brett,

On Wed, Jul 19, 2006 at 03:35:45PM -0700, Brett Cannon wrote:
 I also plan to rewrite the import machinery in pure Python.

http://codespeak.net/svn/pypy/dist/pypy/module/__builtin__/importing.py


A bientot,

Armin


Re: [Python-Dev] Document performance requirements?

2006-07-22 Thread Armin Rigo
Hi,

On Sat, Jul 22, 2006 at 12:33:45PM +1000, Nick Coghlan wrote:
 Agreed, but there's more to doing that than just writing down the O() implied 
 by the current CPython implementation - it's up to Guido to decide which of 
 the constraints are part of the language definition, and which are 
 implementation accidents

I think that O-wise the current CPython situation should be documented
as a minimal requirement for implementations of the language, with
just one exception: the well-documented "don't rely on this" hack in 2.4
to make repeated 'str += str' amortized linear, for which the 2.3
quadratic behavior is considered compliant enough.
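
For instance, a loop written in the following style is amortized linear
on 2.4 only because of that hack, and quadratic on 2.3 (the portable
spelling being ''.join(), of course):

    def join_naively(pieces):
        result = ''
        for piece in pieces:
            result += piece      # linear on 2.4 only thanks to the hack
        return result

    def join_portably(pieces):
        return ''.join(pieces)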

I suppose that allowing implementations to provide better algorithmic
complexities than required is fine, although I can think of some
problems with that (e.g. nice and efficient user code that would perform
horribly badly on CPython).


Armin


Re: [Python-Dev] new security doc using object-capabilities

2006-07-22 Thread Armin Rigo
Re-hi,

On Wed, Jul 19, 2006 at 03:35:45PM -0700, Brett Cannon wrote:
 http://svn.python.org/view/python/branches/bcannon-sandboxing/securing_python.txt?rev=50717&view=log.

I'm not sure I understand what you propose to fix holes like
constructors and __subclasses__: it seems that you want to remove them
altogether (and e.g. make factory functions instead).  That would
completely break all programs, right?  I mean, there is no way such
changes would go into mainstream CPython.  Or do you propose to maintain
a CPython branch manually for the foreseeable future?  (From experience
this is a bad idea...)


A bientot,

Armin


Re: [Python-Dev] User's complaints

2006-07-17 Thread Armin Rigo
Hi Jeroen,

On Thu, Jul 13, 2006 at 02:02:22PM +0200, Jeroen Ruigrok van der Werven wrote:
 He doesn't specifically need the builtin types to be extendable. It's
 just nice to be able to define a single class in multiple modules.

There are various simple ways to do this; the one I'm using from time to
time is the extendabletype metaclass from:

   http://codespeak.net/svn/pypy/dist/pypy/annotation/pairtype.py

Example:

   class A:
   __metaclass__ = extendabletype
   def f(...): ...

Somewhere else:

   class __extend__(A):
   def g(...): ...

FWIW the above 30-lines file also contains a fast double-dispatch
multimethod implementation :-)
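
For reference, a simplified sketch of what such a metaclass can look
like (the real, maintained version is in the pairtype.py file linked
above):

    class extendabletype(type):
        # A class statement named '__extend__' copies its body into the
        # listed base classes instead of creating a new class.
        def __new__(cls, name, bases, namespace):
            if name == '__extend__':
                for base in bases:
                    for key, value in namespace.items():
                        if key != '__module__':
                            setattr(base, key, value)
                return None      # nothing interesting to bind to the name
            return super(extendabletype, cls).__new__(cls, name, bases,
                                                      namespace)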


A bientot,

Armin


Re: [Python-Dev] User's complaints

2006-07-17 Thread Armin Rigo
Hi Bob,

On Thu, Jul 13, 2006 at 12:58:08AM -0700, Bob Ippolito wrote:
  @main
  def whatever():
      ...
 
 It would probably need to be called something else, because "main" is
 often the name of the main function...

Ah, but there is theoretically no name clash here :-)

@main            # <- from the built-ins
def main():      # <- and only then set the global
    ...


Just-making-a-stupid-point-and-not-endorsing-the-feature-ly yours,

Armin


Re: [Python-Dev] User's complaints

2006-07-12 Thread Armin Rigo
Hi Brett,

On Tue, Jul 11, 2006 at 06:05:21PM -0700, Brett Cannon wrote:
 It is the last point in the first paragraph on time.strftime() discussing
 what changed in Python 2.4 as to what the change was.  It's also in
 Misc/NEWS .  Basically the guy didn't read the release notes or the docs to
 see why that changed and that it was legitimate and needed for stability.

Surely everybody should read and think carefully about each (longish)
NEWS file for each software package whenever they update their machines
or switch to one with newer software than they last used.

Or if they cannot be bothered, surely they should at least read Python's?

I guess I'm going to side with Greg Black on his blog entry.
Only two breakages is certainly nice, and I know that we all try quite
hard to minimize that; that's probably still two breakages too much.


Armin


Re: [Python-Dev] User's complaints

2006-07-10 Thread Armin Rigo
Hi,

On Tue, Jul 04, 2006 at 04:49:13PM -0700, Neal Norwitz wrote:
 On 7/4/06, Guido van Rossum [EMAIL PROTECTED] wrote:
 
   From actual users of
  the language I get more complaints about the breakneck speed of
  Python's evolution than about the brokenness of the current language.

I'd like to report another (subjective) experience in favor of the
"Python is complete enough already" camp, from last year's EuroPython,
during Guido's keynote.  He announced he accepted two of the major 2.5
PEPs: the 'yield' extension, and I think the 'with' statement.  This
didn't draw much applause.  It certainly gave me the impression that
many changes in Python are advocated and welcomed by only a small
fraction of users.

I cannot be objective here, though, being myself firmly of the
impression that there are only so many syntactic features you can put in
a language before it stops being elegant and starts promoting obscure
code...

 PS.  One thing I tend to talk to users about is stability of the
 interpreter.  When I talk about crashing the interpreter, the most
 common first reaction I get is "you can crash the interpreter?  How do
 you do that?"  I take that answer as a good sign. :-)

Indeed :-)  Getting some more python-dev discussions about
Lib/test/crashers/*.py would be nice too, though.


A bientot,

Armin


Re: [Python-Dev] what can we do to hide the 'file' type?

2006-07-06 Thread Armin Rigo
Hi Brett,

On Wed, Jul 05, 2006 at 05:01:48PM -0700, Brett Cannon wrote:
 And if Armin and/or Samuele sign off that what we find is most likely (with
 "most likely" equalling 99% chance) all there is, then bonus points and I
 will *really* be convinced.  =)

I don't think I can sign off that.  Really hiding Python objects is
quite hard IMHO.


Armin


Re: [Python-Dev] Cleanup of test harness for Python

2006-07-01 Thread Armin Rigo
Hi all,

On Fri, Jun 30, 2006 at 10:05:14AM -0400, Frank Wierzbicki wrote:
 some checks for CPython internal tests that should be excluded from
 Jython

I know Frank already knows about this, but I take the occasion to remind
us that
http://codespeak.net/svn/pypy/dist/lib-python/modified-2.4.1/test
already shows which tests we had to modify for PyPy to make them less
implementation-detail-dependent, and which changes were made.

A possible first step here would be to find a consistent way to check,
in the test, which implementation we are running on top of, so that we
can (re-)write the tests accordingly.
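
Just to sketch one possible shape for such a check (the helper name is
made up, and it assumes PyPy's '__pypy__' built-in module and Jython's
'java'-prefixed sys.platform as the distinguishing marks; the exact
spelling would need to be agreed upon):

    import sys

    def python_implementation():
        if '__pypy__' in sys.builtin_module_names:
            return 'pypy'
        if sys.platform.startswith('java'):
            return 'jython'
        return 'cpython'

Tests could then guard implementation-specific checks with something
like "if python_implementation() == 'cpython': ...".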


A bientot,

Armin


Re: [Python-Dev] sys.settrace() in Python 2.3 vs. 2.4

2006-07-01 Thread Armin Rigo
Hi Josiah,

On Fri, Jun 30, 2006 at 01:27:24PM -0700, Josiah Carlson wrote:
 I'll just have to gracefully degrade functionality for older Pythons. 

More precisely, the bug shows up because in

  while 1:
      pass

the current line remains on the 'pass' forever.  It works for a loop
like this:

  while 1:
      sys
      sys

but it's admittedly quite obscure.


Armin


Re: [Python-Dev] PEP 3103: A Switch/Case Statement

2006-06-30 Thread Armin Rigo
Hi,

On Mon, Jun 26, 2006 at 12:23:00PM -0700, Guido van Rossum wrote:
 Feedback (also about misrepresentation of alternatives I don't favor)
 is most welcome, either to me directly or as a followup to this post.

So my 2 cents, particularly about when things are computed and ways to
control that explicitly: there was a point in time where I could say
that I liked Python because language design was not constrained by
performance issues.  Looks like it's getting a matter of the past, small
step by small step.  I'll have to get used to mentally filter out
'static' or whatever the keyword will be, liberally sprinkled in
programs I read to make them slightly faster.

Maybe I should, more constructively, propose to start a thread on the
subject of: what would be required to achieve similar effects as the
intended one at the implementation level, without strange
early-computation semantics?

I'm not talking about Psyco stuff here; there are ways to do this with
reasonably-simple refactorings of global variable accesses.  I
experimented a couple of years ago with making them more direct (just
like a lot of people did, during the "faster LOAD_GLOBAL" trend).  I
dropped this as it didn't make things much faster, but it had a nice
side-effect: allowing call-backs for binding changes.  This would be a
good base on top of which to make transparent, recomputed-when-changed
constant-folding of simple expressions.  Building dicts for switch and
keeping them up-to-date...  Does it make sense for me to continue
this discussion?


A bientot,

Armin.


Re: [Python-Dev] For sandboxing: alternative to crippling file()

2006-06-30 Thread Armin Rigo
Hi Brett,

On Thu, Jun 29, 2006 at 11:48:36AM -0700, Brett Cannon wrote:
 1) Is removing 'file' from the builtins dict in PyInterpreterState (and
 maybe some other things) going to be safe enough to sufficiently hide 'file'
 confidently (short of someone being stupid in their C extension module and
 exposing 'file' directly)?

No.

>>> object.__subclasses__()
[..., <type 'file'>]

Maybe this one won't work if __subclasses__ is forbidden, but in general
I think there *will* be a way to find this object.


A bientot,

Armin


Re: [Python-Dev] Is Lib/test/crashers/recursive_call.py really a crasher?

2006-06-28 Thread Armin Rigo
Hi Brett,

On Tue, Jun 27, 2006 at 10:32:08AM -0700, Brett Cannon wrote:
 OK, with you and Thomas both wanting to keep it I will let it be.  I just
 won't worry about fixing it myself during my interpreter hardening crusade.

I agree with this too.  If I remember correctly, you even mentioned in
your rexec docs that sys.setrecursionlimit() should be disallowed from
being run by untrusted code, which means that an untrusted interpreter
would be safe.

I guess we could add an example of a bogus 'new.code()' call in the
Lib/test/crashers directory too, without you having to worry about it in
untrusted mode if new.code() is forbidden.  I could also add my
'gc.get_referrers()' attack, which should similarly not be callable from
untrusted code anyway.


A bientot,

Armin


Re: [Python-Dev] An obscene computed goto bytecode hack for switch :)

2006-06-17 Thread Armin Rigo
Hi Phillip,

On Fri, Jun 16, 2006 at 10:01:05PM -0400, Phillip J. Eby wrote:
 One thing I'm curious about, if there are any PyPy folks listening: will 
 tricks like this drive PyPy or Psyco insane?  :)

Yes, both :-)

The reason is that the details of the stack behavior of END_FINALLY are
messy in CPython.  The finally blocks are the only place where the depth
of the stack is not known in advance: depending on how the finally block
is entered, there will be between one and three objects pushed (a single
None, or an int and another object, or an exception type, instance and
traceback).  Psyco cheats here and emulates a behavior where there is
always exactly one object instead (which can be a tuple), so if a
END_FINALLY sees values not put there in the official way it will just
crash.  PyPy works similarly but always expects three values.

(Hum, Psyco could easily be fixed to support your use case...  For PyPy
it would be harder without a performance hit.)


A bientot,

Armin


Re: [Python-Dev] Improve error msgs?

2006-06-14 Thread Armin Rigo
Hi Georg,

On Wed, Jun 14, 2006 at 08:51:03AM +0200, Georg Brandl wrote:
 type_error("object does not support item assignment");
 
 It helps debugging if the object's type was prepended.
 Should I go through the code and try to enhance them
 where possible?

I think it's an excellent idea.


Armin


Re: [Python-Dev] Please stop changing wsgiref on the trunk

2006-06-13 Thread Armin Rigo
Hi Phillip,

On Mon, Jun 12, 2006 at 12:29:48PM -0400, Phillip J. Eby wrote:
 This idea would address the needs of external maintainers (having a single 
 release history) while still allowing Python developers to modify the code 
 (if the external package is in Python's SVN repository).

It's actually possible to import a part of an SVN repository into
another while preserving history.  That would be a way to move the
regular development of such packages completely to the Python SVN,
without loss.


A bientot,

Armin


Re: [Python-Dev] 'fast locals' in Python 2.5

2006-06-07 Thread Armin Rigo
Hi,

On Wed, Jun 07, 2006 at 02:07:48AM +0200, Thomas Wouters wrote:
 I just submitted http://python.org/sf/1501934 and assigned it to Neal so it
 doesn't get forgotten before 2.5 goes out ;) It seems Python 2.5 compiles
 the following code incorrectly:

No, no, it's an underground move by Jeremy to allow assignment to
variables of enclosing scopes:

[in 2.5]
def f():
    x = 5
    def g():
        x += 1
    g()
    print x       # 6

The next move is to say that this is an expected feature of the
augmented assignment operators, from which it follows naturally that we
need a pseudo-augmented assignment that doesn't actually read the old
value:

[in 2.6]
def f():
    x = 5
    def g():
        x := 6
    g()
    print x       # 6

Credits to Samuele's evil side for the ideas.  His non-evil side doesn't
agree, and neither does mine, of course :-)

More seriously, a function with a variable that is only ever written to
as the target of augmented assignments cannot possibly be anything but a
newcomer's mistake: the augmented assignments will always raise
UnboundLocalError.  Maybe this should be a SyntaxWarning?
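
To spell out the mistake in question:

    def f():
        x = 5
        def g():
            x += 1    # always raises UnboundLocalError: 'x' is local to
        g()           # g() and is read before being bound
        print x

A SyntaxWarning would catch this at compile time instead of at run time.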


A bientot,

Armin


Re: [Python-Dev] Let's stop eating exceptions in dict lookup

2006-05-30 Thread Armin Rigo
Hi Fredrik,

On Tue, May 30, 2006 at 07:48:50AM +0200, Fredrik Lundh wrote:
 since abc.find(, 0) == 0, I would have thought that a program that 
 searched for an empty string in a loop wouldn't get anywhere at all.

Indeed.  And when this bug was found in the program in question, a
natural fix was to add 1 to the start position if the searched string
was empty, which used to ensure that the loop terminates.


A bientot,

Armin


[Python-Dev] Let's stop eating exceptions in dict lookup

2006-05-29 Thread Armin Rigo
Hi all,

I've finally come around to writing a patch that stops dict lookup from
eating all exceptions that occur during lookup, like rare bugs in user
__eq__() methods.  After another 2-hours long debugging session that
turned out to be caused by that, I had a lot of motivation.

  http://python.org/sf/1497053

The patch doesn't change the PyDict_GetItem() interface, which is the
historical core of the problem.  It works around this issue by just
moving the exception-eating bit there instead of in lookdict(), so it
gets away with changing only dictobject.c (plus ceval.c's direct usage
of ma_lookup for LOAD_GLOBAL).  The benefit of this patch is that all
other ways to work with dicts now correctly propagate exceptions, and
this includes all the direct manipulation from Python code (including
'x=d[key]').
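
A small example of the kind of bug this is about -- a user __eq__()
that occasionally raises:

    class Key(object):
        def __init__(self, name):
            self.name = name
        def __hash__(self):
            return 42                  # collide on purpose, forcing __eq__
        def __eq__(self, other):
            if other.name == 'boom':
                raise ValueError('rare bug in user code')
            return self.name == other.name

    d = {Key('a'): 1}
    d[Key('boom')]

Today the ValueError is eaten inside the lookup and all you see is a
plain KeyError; with the patch, the last line propagates the ValueError.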

The reason I bring this up here is that I'm going to check it in 2.5,
unless someone seriously objects.  About the objection "we need a better
fix, PyDict_GetItem() should really be fixed and all its usages
changed": this would be good, but would also require some careful
compatibility considerations, and quite some work in total; it would
also give a patch which is basically a superset of mine, so I don't
think I'm going in the wrong direction there.


A bientot,

Armin


Re: [Python-Dev] Let's stop eating exceptions in dict lookup

2006-05-29 Thread Armin Rigo
Hi Guido,

On Mon, May 29, 2006 at 12:34:30PM -0700, Guido van Rossum wrote:
 +1, as long as (as you seem to imply) PyDict_GetItem() still swallows
 all exceptions.

Yes.

 Fixing PyDict_GetItem() is a py3k issue, I think. Until then, there
 are way too many uses. I wouldn't be surprised if after INCREF and
 DECREF it's the most commonly used C API method...

Alternatively, we could add a new C API function, and gradually replace
PyDict_GetItem() uses with the new one.  I can't think of an obvious
name, though...

Maybe code should just start using PyMapping_GetItem() instead.  It's
not incredibly slower than PyDict_GetItem(), at least in the
non-KeyError case.


A bientot,

Armin



Re: [Python-Dev] Let's stop eating exceptions in dict lookup

2006-05-29 Thread Armin Rigo
Hi Raymond,

On Mon, May 29, 2006 at 12:20:44PM -0700, Raymond Hettinger wrote:
  I've finally come around to writing a patch that stops dict lookup from
  eating all exceptions that occur during lookup, like rare bugs in user
  __eq__() methods. 
 
 Is there a performance impact?

I believe that this patch is good anyway, because I consider my (and
anybody's) debugging hours worth more than a few seconds of a
long-running process.  You get *really* obscure bugs this way.

I would also point out that this is the kind of feature that should not
be traded off for performance, otherwise we'd lose much of the point of
Python.  IMHO.

As it turns out, I measured only 0.5% performance loss in Pystone.


A bientot,

Armin


Re: [Python-Dev] Let's stop eating exceptions in dict lookup

2006-05-29 Thread Armin Rigo
Hi Raymond,

On Mon, May 29, 2006 at 02:02:25PM -0700, Raymond Hettinger wrote:
 Please run some better benchmarks and do more extensive assessments on the 
 performance impact.

At the moment, I'm trying to, but 2.5 HEAD keeps failing mysteriously on
the tests I try to time, and even going into an infinite loop consuming
all my memory - since the NFS sprint.  Am I allowed to be grumpy here,
and repeat that speed should not be used to justify bugs?  I'm proposing
a bug fix, I honestly don't care about 0.5% of speed.

With a benchmark that passed, and that heavily uses instance and class
attribute look-ups (richards), I don't even see any relevant difference.

 and assessments of whether there are real benefits for everyday Python
 users.

It would have saved me two hours-long debugging sessions, and I consider
myself an everyday Python user, so yes, I think so.


Grumpy-ly yours,

Armin


Re: [Python-Dev] Let's stop eating exceptions in dict lookup

2006-05-29 Thread Armin Rigo
Re-hi,

On Mon, May 29, 2006 at 11:34:28PM +0200, Armin Rigo wrote:
 At the moment, I'm trying to, but 2.5 HEAD keeps failing mysteriously on
 the tests I try to time, and even going into an infinite loop consuming
 all my memory

Ah, it's a corner case of str.find() whose behavior just changed.
Previously, 'abc'.find('', 100) would return -1, and now it returns 100.
Just to confuse matters, the same test with unicode returns 100, and has
always done so in the past.  (Oh well, one of these again...)

So, we need to decide which behavior is right.  One could argue that
the current 2.5 HEAD now has a consistent behavior, so it could be kept;
but there is an opposite argument as well, which is that some existing
programs like the one I was testing are now thrown into annoying
infinite loops because str.find() never returns -1 any more, even with
larger and larger start arguments.  I believe that it's a pattern that
could be common in string-processing scripts, trying to match substrings
at various points in a string trusting that str.find() will eventually
return -1.  It's harder to think of a case where a program previously
relied on unicode.find('',n) never returning -1.  Also, introducing a
new way for programs to be caught in an infinite loop is probably not a
good idea.
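
A loop of that sort, for concreteness:

    def positions(s, sub):
        result = []
        pos = s.find(sub)
        while pos != -1:
            result.append(pos)
            pos = s.find(sub, pos + 1)
        return result

With the old behavior, positions(s, '') terminated because find('')
eventually returned -1 past the end of the string; with the new
behavior it loops forever, since find('', n) now returns n for any n.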

Hum, my apologies for my grumpy comments about the NFS sprint.  At
least, the unification of the string and unicode algorithm that was
started there is a good move, also because it exposes pre-existing
inconsistencies.


A bientot,

Armin.


Re: [Python-Dev] Let's stop eating exceptions in dict lookup

2006-05-29 Thread Armin Rigo
Hi Fredrik,

On Tue, May 30, 2006 at 12:01:46AM +0200, Fredrik Lundh wrote:
 not unless you can produce some code.  unfounded accusations don't 
 belong on this list (it's not like the sprinters didn't test the code on 
 a whole bunch of platforms), and neither does lousy benchmarks (why are 
 you repeating the 0.5% figure when pystone doesn't even test non-string 
 dictionary behaviour?  PyString_Eq cannot fail...)

Sorry, I do apologize for my wording.  I must admit that I was a bit
appalled by the number of reference leaks that Michael had to fix after
the sprint, so I jumped to conclusions a bit too fast when I saw my 1GB
laptop swap after less than a minute.  See my e-mail, which crossed
yours, for the explanation.

The reason I did not quote examples involving non-string dicts is that
my patch makes the non-string case simpler, so -- as I expected, and as
I have now measured -- marginally faster.  All in all it's hard to say
if there is a global consistent performance change.  At this point I'd
rather like to spend my time more interestingly; this might be by
defending my point of view that very minor performance hits should not
get in the way of fixes that avoid very obscure bugs, even only
occasionally-occurring but still very obscure bugs.


A bientot,

Armin.


Re: [Python-Dev] Let's stop eating exceptions in dict lookup

2006-05-29 Thread Armin Rigo
Hi Fredrik,

On Tue, May 30, 2006 at 12:23:04AM +0200, Fredrik Lundh wrote:
 well, the empty string is a valid substring of all possible strings 
 (there are no null strings in Python).  you get the same behaviour 
 from slicing, the in operator, replace (this was discussed on the 
 list last week), count, etc.
 
 if you're actually searching for a *non-empty* string, find() will 
 always return -1 sooner or later.

I know this.  These corner cases are debatable and different answers
could be seen as correct, as I think is the case for find().  My point
was different: I was worrying that the recent change in str.find() would
needlessly send existing and working programs into infinite loops, which
can be a particularly bad kind of failure for some applications.

At least, it gave a 100% performance loss on the benchmark I was trying
to run :-)


A bientot,

Armin.


[Python-Dev] dictionary order

2006-05-28 Thread Armin Rigo
Hi all,

I'm playing with dicts that mangle the hash they receive before using it
for hashing.  The goal was to detect obscure dict order dependencies in
my own programs, but I couldn't resist and ran the Python test suite
with various mangling schemes.  As expected -- what is not tested is
broken -- I found and fixed tons of small dependencies in the tests
themselves, plus one in base64.py.
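
(The mangling above happens at the dict level; a similar effect can be
approximated in pure Python, for a single dict, by wrapping the keys so
that their hashes -- and hence the dict order -- change from run to run.
A rough sketch, names made up for the example:)

    import random

    SALT = random.randrange(1 << 16)       # different on every run

    class MangledKey(object):
        def __init__(self, key):
            self.key = key
        def __hash__(self):
            return hash(self.key) ^ SALT   # mangled hash
        def __eq__(self, other):
            return isinstance(other, MangledKey) and self.key == other.key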

Now I'm stumbling upon this test for urllib2:

 >>> mgr = urllib2.HTTPPasswordMgr()
 >>> add = mgr.add_password
 >>> add("Some Realm", "http://example.com/", "joe", "password")
 >>> add("Some Realm", "http://example.com/ni", "ni", "ni")
(...)

# Currently, we use the highest-level path where more than one
# match:

 >>> mgr.find_user_password("Some Realm", "http://example.com/ni")
('joe', 'password')

Returning the outermost path is a bit strange, if you ask me, but I am
no expert here.  Stranger is the fact that the actual implementation
returns, not the outermost path at all -- there is no code to do that --
but a random pick, the first match in dictionary order.  The comment in
the test is just misleading.  I believe that urllib2 should be fixed to
always return the *innermost* path, but I need confirmation about
this...


A bientot,

Armin


Re: [Python-Dev] New methods for weakref.Weak*Dictionary types

2006-05-10 Thread Armin Rigo
Hi Tim,

On Mon, May 01, 2006 at 04:57:06PM -0400, Tim Peters wrote:
 
 # Return a list of weakrefs to all the objects in the collection.
 # Because a weak dict is used internally, iteration is dicey (the
 # underlying dict may change size during iteration, due to gc or
 # activity from other threads).

But then, isn't the real problem the fact that applications cannot
safely iterate over weak dicts?  This fact could be viewed as a bug, and
fixed without API changes.  For example, I can imagine returning to the
client an iterator that locks the dictionary.  Upon exhaustion, or via
the __del__ of the iterator, or even in the 'finally:' part of the
generator if that's how iteration is implemented, the dict is unlocked.
Here locking means that weakrefs going away during this time are not
eagerly removed from the dict; they will be removed only when the dict
is unlocked.
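
A rough sketch of this idea -- not the real weakref classes, just a toy
weak-value mapping (name made up) whose iteration defers the
weakref-triggered removals until it is done:

    import weakref

    class LockedIterWeakDict(object):
        def __init__(self):
            self.data = {}         # key -> weakref
            self._locks = 0
            self._pending = []     # keys whose referents died while locked

        def __setitem__(self, key, value):
            def remove(wr, key=key, selfref=weakref.ref(self)):
                this = selfref()
                if this is not None:
                    if this._locks:
                        this._pending.append(key)    # defer the removal
                    else:
                        this.data.pop(key, None)
            self.data[key] = weakref.ref(value, remove)

        def itervalues(self):
            self._locks += 1
            try:
                for wr in self.data.itervalues():
                    value = wr()
                    if value is not None:
                        yield value
            finally:
                self._locks -= 1
                if not self._locks:
                    for key in self._pending:
                        self.data.pop(key, None)
                    del self._pending[:]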


A bientot,

Armin.


Re: [Python-Dev] rich comparisions and old-style classes

2006-04-30 Thread Armin Rigo
Hi Fredrik,

On Sun, Apr 30, 2006 at 08:13:40AM +0200, Fredrik Lundh wrote:
 trying to come up with a more concise description of the rich
 comparision machinery for pyref.infogami.com,

That's quite optimistic.  It's a known dark area.

 I stumbled upon an oddity that I cannot really explain:

I'm afraid the only way to understand this is to step through the C
code.  I'm sure there is a reason along the lines of "well, we tried
this and that, now let's try this slightly different thing which might
or might not result in the same methods being called again".

I notice that you didn't try comparing an old-style instance with a
new-style one :-)

More pragmatically I'd suggest that you only describe the new-style
behavior.  Old-style classes have two levels of dispatching starting
from the introduction of new-style classes in 2.2 and I'm sure that no
docs apart from deep technical references should have to worry about
that.


A bientot,

Armin.


[Python-Dev] EuroPython 2006: Call for papers

2006-04-25 Thread Armin Rigo
Hi all,

A shameless plug and reminder for EuroPython 2006 (July 3-5):

* you can submit talk proposals until May 31st.

* there is a refereed papers track; deadline for abstracts: May 5th.
  See the full call for papers below.


A bientot,

Armin Rigo  Carl Friedrich Bolz




   EuroPython 2006
   CERN, Geneva, 3-5 July

   Refereed Track: Call for Papers

 http://www.europython.org


EuroPython is the only conference in the Python world that has a
properly prestigious peer-reviewed forum for presenting technical and
scientific papers. Such papers, with advanced and highly innovative
contents, can equally well stem from academic research or industrial
research. We think this is an important function for EuroPython, so we
are even making some grants available to help people with travel costs.

For this refereed track, we will be happy to consider papers in subject
areas including, but not necessarily limited to, the following:

* Python language and implementations
* Python modules (in the broadest sense)
* Python extensions
* Interoperation between Python and other languages / subsystems
* Scientific applications of Python
* Python in Education
* Benchmarking Python

We are looking for Python-related scientific and technical papers of
advanced, highly innovative content that present the results of original
research (be it of the academic or industrial research kind), with
proper attention to state of the art and previous relevant
literature/results (whether such relevant previous literature is itself
directly related to Python or not).

We do not intend to let the specific subject area block a paper's
acceptance, as long as the paper satisfies other requirements:
innovative, Python-related, reflecting original research, with proper
attention to previous literature.

Abstracts
=========

Please submit abstracts of no more than 200 words to the refereeing
committee. You can send submissions no later than 5 May 2006. We shall
inform you whether your paper has been selected no later than 15 May
2006. For all details regarding the submission of abstracts, please see
the EuroPython website (http://www.europython.org).

Papers
======

If your abstract is accepted, you must submit your corresponding paper
before 17 June 2006. You should submit the paper as a PDF file, in A4
format, complete, stand-alone, and readable on any standards-compliant
PDF reader (basically, the paper must include all fonts and figures it
uses, rather than using external pointers to them; by default, most
PDF-preparation programs typically produce such valid stand-alone PDF
documents).

Refereeing
==========

The refereeing committee, selected by Armin Rigo, will examine all
abstracts and papers. The committee may consult external experts as it
deems fit. Referees may suggest or require certain changes and editing
in submissions, and make acceptance conditional on such changes being
performed. We expect all papers to reflect the abstract as approved and
reserve the right, at our discretion, to reject a paper, despite having
accepted the corresponding abstract, if the paper does not substantially
correspond to the approved abstract.

Presentation
============

The paper must be presented at EuroPython by one or more of the
authors. Presentation time will be either half an hour or an hour,
including time for questions and answers, depending on each paper's
details, and also on the total number of papers approved for
presentation.

Proceedings
===========

We will publish the conference's proceedings in purely electronic
form. By presenting a paper, authors agree to give the EuroPython
conference non-exclusive rights to publish the paper in electronic forms
(including, but not limited to, partial and total publication on web
sites and/or such media as CDROM and DVD-ROM), and warrant that the
papers are not infringing on the rights of any third parties. Authors
retain all other intellectual property rights on their submitted
abstracts and papers excepting only this non-exclusive license.

Subsidised travel
=================

We have funds available to subsidise travel costs for some presenters
who would otherwise not be able to attend EuroPython. When submitting
your abstract, please indicate if you would need such a subsidy as a
precondition of being able to come and present your paper. (Yes, this
possibility does exist even if you are coming from outside of
Europe. Papers from people in New Zealand who can only come if their
travel is subsidised, for example, would be just fine with us...).

Re: [Python-Dev] possible fix for recursive __call__ segfault

2006-04-18 Thread Armin Rigo
Hi Brett,

On Mon, Apr 17, 2006 at 05:34:16PM -0700, Brett Cannon wrote:
 +   if (meth == self) {
 +           PyErr_SetString(PyExc_RuntimeError,
 +                           "recursive __call__ definition");
 +           return NULL;
 +   }

This is not the proper way, as it can be worked around with a pair of
objects whose __call__ point to each other.  The solution is to use the
counter of Py_{Enter,Leave}RecursiveCall(), as was done for old-style
classes (see classobject.c).
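
For example (a sketch of the workaround -- don't actually call a() on
an unprotected build, it would exhaust the C stack):

    class A(object):
        pass

    class B(object):
        pass

    a = A()
    b = B()
    A.__call__ = b      # calling an A instance really calls b
    B.__call__ = a      # calling a B instance really calls a

    # a() bounces between the two __call__ lookups forever: at each step
    # the method found is the *other* object, so a 'meth == self' check
    # never triggers, yet the C-level recursion is unbounded.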

By the way, this is a known problem: the example you show is
Lib/test/crashers/infinite_rec_3.py, and the four other
infinite_rec_*.py are all slightly more subtle ways to trigger a similar
infinite loop in C.  They point to the SF bug report at
http://python.org/sf/1202533, where we discuss the problem in general.
Basically, someone should try to drop many
Py_{Enter,Leave}RecursiveCall() pairs in the source until all the
currently-known bugs go away, and then measure if this has a noticeable
performance impact.


A bientot,

Mr. 8 Of The 12 Files In That Directory


Re: [Python-Dev] Checkin 45232: Patch #1429775

2006-04-15 Thread Armin Rigo
Hi Simon,

On Thu, Apr 13, 2006 at 06:43:09PM +0200, Simon Percivall wrote:
 Building SVN trunk with --enable-shared has been broken on Mac OS X Intel
 since rev. 45232 a couple of days ago. I can't say if this is the case
 anywhere else as well. What happens is simply that ld can't find the file
 to link the shared mods against.

For what it's worth, it still works on Linux (Gentoo/i386), insofar as
it always worked -- which is that we need either to "make install" or to
tweak /etc/ld.so.conf to let the executable find libpython2.5.so.


A bientot,

Armin


Re: [Python-Dev] Checkin 45232: Patch #1429775

2006-04-15 Thread Armin Rigo
Hi Martin,

On Sat, Apr 15, 2006 at 11:30:07AM +0200, Martin v. Löwis wrote:
 Armin Rigo wrote:
  For what it's worth, it still works on Linux (Gentoo/i386), insofar as
  it always worked -- which is that we need either to make install or to
  tweak /etc/ld.so.conf to let the executable find libpython2.5.so.
 
 I usually set LD_LIBRARY_PATH in the shell where I want to use an
 --enable-share'd binary.

Thanks for reminding me of that trick!


A bientot,

Armin


Re: [Python-Dev] refleaks in 2.4

2006-04-12 Thread Armin Rigo
Hi all,

On Sun, Mar 26, 2006 at 11:39:50PM -0800, Neal Norwitz wrote:
 There are 5 tests that leak references that are present in 2.4.3c1,
 but not on HEAD.  It would be great if someone can diagnose these and
 suggest a fix.
 
 test_doctest leaked [1, 1, 1] references
 test_pkg leaked [10, 10, 10] references
 test_pkgimport leaked [2, 2, 2] references
 test_traceback leaked [11, 11, 11] references
 test_unicode leaked [7, 7, 7] references
 
 test_traceback leaks due to test_bug737473.

A follow-up on this: all the tests apart from test_traceback are due to
the dummy object in dictionaries.  I modified the code to ignore exactly
these references and all the 4 other tests no longer leak.  I'm about to
check in this nice time saver :-)

For information, the 2.5 HEAD now reports the following remaining leaks:

test_generators leaked [1, 1, 1, 1] references
test_threadedtempfile leaked [-85, 85, -85, 85] references
test_threading_local leaked [34, 40, 26, 28] references
test_urllib2 leaked [-66, 143, -77, -66] references


A bientot,

Armin


Re: [Python-Dev] refleaks in 2.4

2006-04-01 Thread Armin Rigo
Hi Michael,

On Sat, Apr 01, 2006 at 02:54:25PM +0100, Michael Hudson wrote:
 It's actually because somewhere in the bowels of compilation, the file
 name being compiled gets interned and test_pkg writes out some
 temporary files and imports them.  If this doesn't happen on the
 trunk, did this feature get lost somewhere?

I guess it's highly non-deterministic.  If the new strings happen to
take a previously-dummy entry of the interned strings dict, then after
they die the entry is dummy again and we don't have an extra refcount.
But if they take a fresh entry, then the dummy they become afterwards
counts for one ref.


A bientot,

Armin.


Re: [Python-Dev] INPLACE_ADD and INPLACE_MULTIPLY oddities in ceval.c

2006-03-29 Thread Armin Rigo
Hi Greg,

On Wed, Mar 29, 2006 at 12:38:55PM +1200, Greg Ewing wrote:
 I'm really thinking more about the non-inplace operators.
 If nb_add and sq_concat are collapsed into a single slot,
 it seems to me that if you do
 
a = [1, 2, 3]
b = array([4, 5, 6])
c = a + b
 
 then a will be asked "Please add yourself to b", and a
 will say "Okay, I know how to do that!" and promptly
 concatenate itself with b.

No: there is a difference between + and += for lists.  You can only
concatenate exactly a list to a list.  Indeed:

>>> [].__add__((2, 3))
TypeError: can only concatenate list (not "tuple") to list

By contrast, list += is like extend() and accepts any iterable.
So if we provide a complete fix, [].__add__(x) will be modified to
return NotImplemented instead of raising TypeError if x is not a list,
and then [1,2,3]+array([4,5,6]) will fall back to array.__radd__() as
before.
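
For example, with current CPython:

    >>> lst = [1, 2, 3]
    >>> lst + (4, 5)
    TypeError: can only concatenate list (not "tuple") to list
    >>> lst += (4, 5)         # += is like extend(): any iterable is fine
    >>> lst += iter('ab')
    >>> lst
    [1, 2, 3, 4, 5, 'a', 'b']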

I'll try harder to see if there is a reasonable example whose behavior
would change...


A bientot,

Armin


Re: [Python-Dev] INPLACE_ADD and INPLACE_MULTIPLY oddities in ceval.c

2006-03-29 Thread Armin Rigo
Hi all,

On Tue, Mar 28, 2006 at 09:50:49AM -0800, Guido van Rossum wrote:
 C extensions are my main worry -- OTOH if += for a list can already
 pass arbitrary types as the argument, then any extension types
 should be ready to expect this, right?

Yes, I don't think C extensions are going to segfault.  My worry is
about returning a different result than before.  Actually I believe the
problem is not specific to C extensions.  Here are some typical behavior
changes that could be observed in pure Python already:

class X(object):
    def __radd__(self, other):
        return 42
    def __iter__(self):
        return iter("xyz")
    def __rmul__(self, other):
        return 42
    def __index__(self):
        return 5

t = []
t += X()
print t            # current: 42   new: ['x', 'y', 'z']
print [1] * X()    # current: 42   new: [1, 1, 1, 1, 1]

Another visible difference is that the __add__/__iadd__/__mul__/__imul__
methods of lists, tuples, strings etc., will return NotImplemented
instead of raising the TypeError themselves.  This could impact user
subclasses of these built-in types trying to override and call the super
methods, not expecting a NotImplemented result (a reason why
NotImplemented should have been an exception in the first place IMHO).

(A different bug I found is that [1].__mul__(X()) with an __index__able
class X currently raises TypeError, even though [1]*X() works just
fine.)

This seems to be it on the incompatibility side.  I'd vote for the
change anyway because the language specs -- as well as PyPy and probably
all Python implementations other than CPython -- don't have this
double-slot inconsistency and already show the new behavior.  For what
it's worth no CPython test breaks on top of PyPy because of this.

If this change is accepted I'll submit a patch for 2.5.


A bientot,

Armin


Re: [Python-Dev] INPLACE_ADD and INPLACE_MULTIPLY oddities in ceval.c

2006-03-29 Thread Armin Rigo
Hi Tim,

On Wed, Mar 29, 2006 at 08:45:10AM -0700, Tim Hochberg wrote:
 Ouch. Assuming the same path is followed with tuples, I think that this 
 means the following behaviour will continue:
 
   >>> t = (1,2,3)
   >>> a = array([4,5,6])
   >>> t += a
   >>> t
   array([5, 7, 9])

I fell into the same trap at first, but no: in fact, only lists have a
special in-place addition among all the built-in objects.  Tuples fall
back to the normal addition, which means that you can only add tuples to
tuples:

 >>> t = (1,2,3)
 >>> t += [4,5,6]
TypeError: can only concatenate tuple (not "list") to tuple

 >>> t += array([4,5,6])
TypeError: ...

This is current behavior and it wouldn't change.


A bientot,

Armin.


Re: [Python-Dev] INPLACE_ADD and INPLACE_MULTIPLY oddities in ceval.c

2006-03-29 Thread Armin Rigo
Hi Tim,

Oops, sorry.  I only just realized my mistake and the meaning of your
message.

On Thu, Mar 30, 2006 at 09:27:02AM +0200, Armin Rigo wrote:
  >>> t = (1,2,3)
  >>> t += [4,5,6]
 TypeError: can only concatenate tuple (not "list") to tuple
 
  >>> t += array([4,5,6])
 TypeError: ...
 
 This is current behavior and it wouldn't change.

I'm pasting untested bits of code.  Indeed, as you point out:

 >>> t = (1,2,3)
 >>> t += array([4,5,6])
 >>> t
array([5, 7, 9])

and it would remain so after the fix.  I still think the fix is a good
thing, and the above is an issue at a different level.  It's somehow the
fault of list.__iadd__ and list.__imul__, which are oddballs -- before
the introduction of set objects, it was the single place in the whole
library of built-in types where in-place behavior was different from
normal behavior.

It would require an official language extension to say that for all
sequences, += is supposed to accept any iterable (which may or may not
be a good thing, I have no opinion here).

Otherwise, I'd just ignore the whole sub-issue, given that 'tuple +=
array' returning an array is just correct language-wise and doesn't look
like a trap for bad surprises -- if the user expected a tuple but gets
an array, most tuple-like operations will work just fine on the array,
except hashing, which gives a clean TypeError.


A bientot,

Armin


Re: [Python-Dev] INPLACE_ADD and INPLACE_MULTIPLY oddities in ceval.c

2006-03-28 Thread Armin Rigo
Hi,

On Mon, Mar 27, 2006 at 08:00:09PM -0800, Guido van Rossum wrote:
 So for consistency we want a += b to also execute a.__iadd__. The
 opcode calls PyNumber_InplaceAdd; I think that PyNumber_InplaceAdd
 (and PySequence_InplaceConcat, if it exists) should test for both the
 numeric and the sequence augmented slot of the left argument first;
 then they should try both the numeric and sequence non-augmented slot
 of the left argument; and then the numeric non-augmented slot of the
 right argument. Coercion should not be attempted at all.
 
 The question is, can we do this in 2.5 without breaking backwards
 compatibility? Someone else with more time should look into the
 details of that.

I agree that there is a bug.  There is more than one inconsistency left
around here, though.  Fixing one might expose the next one...  For
example, if -- as the documentation says -- the expression 'a + b' would
really try all slots corresponding to a.__add__(b) first and then fall
back only if the slots return NotImplemented, then we'd also have to fix
the following to return NotImplemented:

>>> [].__add__(5)
TypeError: can only concatenate list (not "int") to list

and then we have no place to put that nice error message.

Nevertheless I think we should fix all this for consistency.  I can try
to give it a good look.  I don't think many programs would break if the
change goes into 2.5, but there are some C extension modules out there
abusing the inner details of the type slots in unpredictable ways...


A bientot,

Armin


Re: [Python-Dev] refleaks in 2.4

2006-03-27 Thread Armin Rigo
Hi Neal,

On Sun, Mar 26, 2006 at 11:39:50PM -0800, Neal Norwitz wrote:
 test_pkg leaked [10, 10, 10] references

This one at least appears to be caused by dummy (deleted) entries in the
dictionary of interned strings.  So it is not really a leak.  It is a
pain that it is so hard to figure this out, though.  Wouldn't it make
sense to find a trick to exclude these dummy entries from the total
reference count?  E.g. by subtracting the refcount of the dummy
object...


A bientot,

Armin


Re: [Python-Dev] str.count is slow

2006-03-12 Thread Armin Rigo
Hi Ben,

On Mon, Feb 27, 2006 at 06:50:28PM -0500, Ben Cartwright wrote:
  It seems to me that str.count is awfully slow.  Is there some reason
  for this?

stringobject.c could do with a good clean-up.  It contains very similar
algorithms multiple times, in slightly different styles and with
different performance characteristics.

If I find some motivation I'll try to come up with a patch.


A bientot,

Armin


Re: [Python-Dev] Making staticmethod objects callable?

2006-03-12 Thread Armin Rigo
Hi Nicolas,

On Thu, Mar 02, 2006 at 01:55:03AM -0500, Nicolas Fleury wrote:
 (...)  A use case is not hard to 
 imagine, especially a private static method called only to build a class 
 attribute.

Uh.  I do this all the time, and the answer is simply: don't make that a
staticmethod.  Staticmethods are for the rare case where you need
dynamic class-based dispatch but don't have an instance around.

class A:
    def _myinitializer():
        # do strange stuff here
        pass
    _myinitializer()
    del _myinitializer   # optional


A bientot,

Armin


Re: [Python-Dev] Please comment on PEP 357 -- adding nb_index slot to PyNumberMethods

2006-02-17 Thread Armin Rigo
Hi Travis,

On Tue, Feb 14, 2006 at 08:41:19PM -0700, Travis E. Oliphant wrote:
 2) The __index__ special method will have the signature
 
        def __index__(self):
            return obj

    Where obj must be either an int or a long or another object that has
    the __index__ special method (but not self).

The "anything but not self" rule is not consistent with any other
special method's behavior.  IMHO we should just do the same as
__nonzero__():

* __nonzero__(x) must return exactly a bool or an int.

This ensures that there is no infinite loop in C created by a
__nonzero__ that returns something that has a further __nonzero__
method.

The rule that the PEP proposes for __index__ (returns anything but not
'self') is not useful, because you can still get infinite loops (you
just have to work slightly harder, and even not much).  We should just
say that __index__ must return an int or a long.
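
For example, nothing in the "anything but not self" rule stops this:

    class A(object):
        def __index__(self):
            return b          # not self, so formally allowed

    class B(object):
        def __index__(self):
            return a          # not self either

    a = A()
    b = B()

C code that keeps calling __index__ until it gets an int or a long would
loop forever on 'a'.  Requiring an int or a long directly, as for
__nonzero__, rules this out.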


A bientot,

Armin


Re: [Python-Dev] 2.5 release schedule

2006-02-17 Thread Armin Rigo
Hi,

On Tue, Feb 14, 2006 at 09:24:57PM -0800, Neal Norwitz wrote:
 http://www.python.org/peps/pep-0356.html

There is at least one SF bug, namely #1333982 "Bugs of the new AST
compiler", that in my humble opinion absolutely needs to be fixed before
the release, even though I won't hide that I have no intention of fixing
it myself.  Should I raise the issue here in python-dev, and see if we
agree that it is critical?

(Sorry if I should know about the procedure.  Does it then go in the
PEP's Planned Features list?)


A bientot,

Armin


Re: [Python-Dev] Baffled by PyArg_ParseTupleAndKeywords modification

2006-02-11 Thread Armin Rigo
Hi Tim,

On Fri, Feb 10, 2006 at 12:19:01PM -0500, Tim Peters wrote:
 Oh, who cares?  I predict Jack's problem would go away if we changed
 the declaration of PyArg_ParseTupleAndKeywords to what you intended
 <wink> to begin with:
 
 PyAPI_FUNC(int) PyArg_ParseTupleAndKeywords(PyObject *, PyObject *,
                                             const char *,
                                             const char * const *, ...);

Alas, this doesn't make gcc happy either.  (I'm trying gcc 3.4.4.)  In
theory, it prevents the const-bypassing trick shown by Martin, but
apparently the C standard (or gcc) is not smart enough to realize that.

I don't see a way to spell it in C so that the same extension module
compiles with 2.4 and 2.5 without a warning, short of icky macros.


A bientot,

Armin


Re: [Python-Dev] _length_cue()

2006-02-10 Thread Armin Rigo
Hi Greg,

On Thu, Feb 09, 2006 at 04:27:54PM +1300, Greg Ewing wrote:
 The iterator protocol is currently very simple and
 well-focused on a single task -- producing things
 one at a time, in sequence. Let's not clutter it up
 with too much more cruft.

Please refer to my original message: I intended these methods to be
private and undocumented, not part of any official protocol in any way.


A bientot,

Armin


Re: [Python-Dev] _length_cue()

2006-02-10 Thread Armin Rigo
Hi Nick,

On Fri, Feb 10, 2006 at 11:21:52PM +1000, Nick Coghlan wrote:
 Do they really need anything more sophisticated than:
 
    def __repr__(self):
        return "%s(%r)" % (type(self).__name__, self._subiter)
 
 (modulo changes in the format of arguments, naturally. This simple one would 
 work for things like enumerate and reversed, though)

My goal here is not primarily to help debugging, but to help playing
around at the interactive command-line.  Python's command-line should
not be dismissed as useless for real programmers; I definitely use it
all the time to try things out.  It would be nicer if all these
iterators I'm not familiar with would give me a hint about what they
actually return, instead of:

>>> itertools.count(17)
count(17)          # yes, thank you, not very helpful
>>> enumerate(spam)
enumerate(spam)    # with your proposed extension -- not better

However, if this kind of goal is considered not serious enough for
adding a private special method, then I'm fine with trying out a fishing
approach.


A bientot,

Armin.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] _length_cue()

2006-02-10 Thread Armin Rigo
Hi Raymond,

On Wed, Feb 08, 2006 at 09:21:02PM -0500, Raymond Hettinger wrote:
 (... __getitem_cue__ ...)
 Before putting this in production, it would probably be worthwhile to search 
 for code where it would have been helpful.  In the case of __length_cue__, 
 there was an immediate payoff.

Indeed, I don't foresee any place where it would help apart from the
__repr__ of the iterators, which is precisely what I'm aiming at.  The
alternative here would be a kind of smart global function that knows
about many built-in iterator types and is able to fish for the data
inside automatically (but this hits problems of data structures being
private).  I thought that __getitem_cue__ would be a less dirty
solution.  I really think a better __repr__ would be generally helpful,
and I cannot think of a 3rd solution at the moment...  (Ideas welcome!)


A bientot,

Armin
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] _length_cue()

2006-02-08 Thread Armin Rigo
Hi all,

Last september, the __len__ method of iterators was removed -- see
discussion at:

http://mail.python.org/pipermail/python-dev/2005-September/056879.html

It was replaced by an optional undocumented method called _length_cue(),
which would be used to guess the number of remaining items in an
iterator, for performance reasons.

I'm worried about the name.  There are now exactly two names that behave
like a special method without having the double-underscores around it.
The first name is 'next', which is kind of fine because it's for
iterator classes only and it's documented.  But now, consider: the
CPython implementation can unexpectedly invoke a method on a
user-defined iterator class, even though this method's name is not
'__*__' and not documented as special!  That's new and that's bad.

IMHO for safety reasons we need to stick double-underscores around this
name too, e.g. __length_cue__().  It's new in 2.5 and not documented
anyway so this change won't break anything.  Do you agree with that?

BTW the reason I'm looking at this is that I'm considering adding
another undocumented internal-use-only method, maybe __getitem_cue__(),
that would try to guess what the nth item to be returned will be.  This
would allow the repr of some iterators to display more helpful
information when playing around with them at the prompt, e.g.:

>>> enumerate([3.1, 3.14, 3.141, 3.1415, 3.14159, 3.141596])
<enumerate (0, 3.1), (1, 3.14), (2, 3.141), ... length 6>
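
For illustration, here is a minimal sketch of how such a repr could be
built from the two proposed hooks (hypothetical code: __length_cue__ and
__getitem_cue__ are only proposals at this point, not an existing API):

    def cue_repr(it, maxitems=3):
        # Build something like "<enumerate (0, 3.1), (1, 3.14), ... length 6>"
        # out of the proposed hooks, if the iterator provides them.
        try:
            n = it.__length_cue__()
            shown = [repr(it.__getitem_cue__(i))
                     for i in range(min(n, maxitems))]
        except (AttributeError, TypeError):
            return repr(it)        # fall back to the normal repr
        tail = ""
        if n > maxitems:
            tail = ", ..."
        return "<%s %s%s length %d>" % (type(it).__name__,
                                        ", ".join(shown), tail, n)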


A bientot,

Armin
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] _length_cue()

2006-02-08 Thread Armin Rigo
Hi Raymond,

On Wed, Feb 08, 2006 at 03:02:21PM -0500, Raymond Hettinger wrote:
 IMHO, the safety reasons are imaginary -- the scenario would involve 
 subclassing one of these builtin objects and attaching an identically named 
 private method.

No, the scenario applies to any user-defined iterator class, not
necessarily subclassing an existing one:

>>> class MyIter(object):
...     def __iter__(self):
...         return self
...     def next(self):
...         return "whatever"
...     def _length_cue(self):
...         print "oups! please, CPython, don't call me unexpectedly"
...
>>> list(MyIter())
oups! please, CPython, don't call me unexpectedly
(...)

This means that _length_cue() is at the moment a special method, in the
sense that Python can invoke it implicitly.

This said, do we vote for __length_hint__ or __length_cue__? :-)
And does anyone object to __getitem_hint__ or __getitem_cue__?
Maybe __lookahead_hint__ or __lookahead_cue__?


Armin
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] cProfile module

2006-02-07 Thread Armin Rigo
Hi all,

As promised two months ago, I eventually finished the integration of the
'lsprof' profiler.  It's now in an internal '_lsprof' module that is
exposed via a 'cProfile' module with the same interface as 'profile',
producing compatible dump stats that can be inspected with 'pstats'.
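
In other words, the usage is the familiar profile/pstats round-trip.  A
quick sketch (the profiled statement and the 'example.prof' file name are
made up for illustration):

    import cProfile
    import pstats

    cProfile.run("sum(i*i for i in range(100000))", "example.prof")
    stats = pstats.Stats("example.prof")
    stats.sort_stats("time").print_stats(10)   # same pstats interface as 'profile'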

See previous discussion here:

* http://mail.python.org/pipermail/python-dev/2005-November/058212.html

The code is currently in the following repository, from where I'll merge
it into CPython if nobody objects:

* http://codespeak.net/svn/user/arigo/hack/misc/lsprof/Doc
* http://codespeak.net/svn/user/arigo/hack/misc/lsprof/Lib
* http://codespeak.net/svn/user/arigo/hack/misc/lsprof/Modules

with tests and docs, including new tests and doc refinements for profile
itself.  The docs mark hotshot as reserved for specialized usage.
They probably need a bit of bad-English-hunting...

And yes, I do promise to maintain this code in the future.


A bientot,

Armin
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] Summer of PyPy

2006-01-17 Thread Armin Rigo
Hi Brett, hi all,

On Sat, Jan 14, 2006 at 05:51:25PM -0800, Brett Cannon wrote:
 That would be cool!  I definitely would not mind working on PyPy. 
 Unfortunately I would not consider changing universities; I really
 like it here.

We are looking at the possibility to do a Summer of PyPy in the same
style as last year's Google's Summer of Code.  It might be a way for you
(or anybody else interested!) to get to work a bit on PyPy first :-)

  http://codespeak.net/pipermail/pypy-dev/2006q1/002721.html


A bientot,

Armin
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Ph.D. dissertation ideas?

2006-01-14 Thread Armin Rigo
Hi Brett,

If by any chance PyPy continues to be funded beyond 2006, we would
definitely welcome you around :-)  (If our funding model doesn't change,
it might be difficult for us to give you money overseas, though... just
asking, just in case, would you consider moving to a European
university?)

PyPy contains several open language research areas that you mentioned:
network-distributed language support, concurrent programming...  and we
even already have a Python -> JavaScript compiler :-)  Making it useful
is an open challenge, though.


A bientot,

Armin
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Draft proposal: Implicit self in Python 3.0

2006-01-06 Thread Armin Rigo
Hi Alexander,

On Fri, Jan 06, 2006 at 12:56:01AM +0300, Alexander Kozlovsky wrote:
 There are three different peculiarities in Python 2.x
 with respect to the 'self' method argument:

Yuk!  This has been discussed again and again already.  *Please* move
this discussion to comp.lang.python.


A bientot,

Armin
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] buildno (Was: [Python-checkins] commit of r41907- python/trunk/Makefile.pre.in)

2006-01-05 Thread Armin Rigo
Hi Martin,

On Thu, Jan 05, 2006 at 12:36:40AM +0100, Martin v. L?wis wrote:
 OTOH, I also think we should get rid of buildno entirely. Instead,
 svnversion should be compiled into the object file, or, if it is absent,
 $Revision$ should be used; the release process should be updated to
 force a commit to the <tag>/Modules/buildno.c right after creating the
 tag. sys.build_number should go, and be replaced with sys.svn_info,
 which should also include the branch from which the checkout/export
 was made. $Revision$ should only be trusted if it comes from a
 <tag>/.

All this sounds good.

 Should I write a PEP for that?

I agree with Barry that it's overkill to ask for PEPs for too many small
details.


A bientot,

Armin
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] New PEP: Using ssize_t as the index type

2006-01-05 Thread Armin Rigo
Hi Martin,

On Fri, Dec 30, 2005 at 11:26:44AM +0100, Martin v. L?wis wrote:
  Hum.  It would be much cleaner to introduce a new format character to
  replace '#' and deprecate '#'...
 
 That would certainly be clearer. What character would you suggest?
 
 I see two drawbacks with that approach:
 1. writing backwards-compatible modules will become harder.
Users have to put ifdefs around the ParseTuple calls (or atleast
around the format strings)

Ok, I see your point.

In theory we could reuse a macro-based trick in C extensions:

#include "Python.h"
#ifndef Py_SIZE_CHR
typedef int Py_Ssize_t;
#define Py_SIZE_CHR "#"
#endif

And then we can replace -- say -- the format string "is#s#" with

"is" Py_SIZE_CHR "s" Py_SIZE_CHR

But it's rather cumbersome.

An equally strange alternative would be to start C modules like this:

#define Py_Ssize_t int  /* compatibility with Python <= 2.4 */
#include "Python.h"

This would do the right thing for <= 2.4, using ints everywhere; and the
Python.h version 2.5 would detect the #define and assume it's a
2.5-compatible module, so it would override the #define with the real
thing *and* turn on the ssize_t interpretation of the '#' format
character.

Not that I think that this is a good idea.  Just an idea.

I still don't like the idea of a magic #define that changes the behavior
of '#include "Python.h"', but I admit I don't find any better solution.
I suppose I'll just blame C.


A bientot,

Armin
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] New PEP: Using ssize_t as the index type

2005-12-29 Thread Armin Rigo
Hi Martin,

On Thu, Dec 29, 2005 at 03:04:30PM +0100, Martin v. L?wis wrote:
 New conversion functions PyInt_FromSsize_t, PyInt_AsSsize_t,
 PyLong_AsSsize_t are introduced. PyInt_FromSsize_t will transparently
 return a long int object if the value exceeds the MAX_INT.

I guess you mean LONG_MAX instead of MAX_INT, in the event that ssize_t
is larger than a long.  Also, distinguishing between PyInt_AsSsize_t()
and PyLong_AsSsize_t() doesn't seem to be useful (a quick look in your
branch makes me guess that they both accept an int or a long object
anyway).

 The conversion codes 's#' and 't#' will output Py_ssize_t
 if the macro PY_SIZE_T_CLEAN is defined before Python.h
 is included, and continue to output int if that macro
 isn't defined.

Hum.  It would be much cleaner to introduce a new format character to
replace '#' and deprecate '#'...

 Compatibility with previous Python
 versions can be achieved with the test::
 
  #if PY_VERSION_HEX < 0x0205
  typedef int Py_ssize_t;
  #endif
 
 and then using Py_ssize_t in the rest of the code.

Nice trick :-)

As far as I can tell you have done as much as possible to ensure
compatibility, short of adding new slots duplicating the existing ones
with the new signature -- which would make abstract.c/typeobject.c a
complete nightmare.  I'm +1 on doing this in 2.5.


A bientot,

Armin
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] NotImplemented reaching top-level

2005-12-28 Thread Armin Rigo
Hi Marc,

On Wed, Dec 28, 2005 at 09:56:43PM +0100, M.-A. Lemburg wrote:
 >>> d += 1.2
 >>> d
 NotImplemented
 
 The PEP documenting the coercion logic has complete tables
 for what should happen:

Well, '+=' does not invoke coercion at all, with new-style classes like
Decimal.

 Looking at the code in abstract.c the above problem appears
 to be related to the special cases applied to += and *=
 in case both operands cannot deal with the type combination.

 In such a case, a check is done whether the operation could
 be interpreted as sequence operation (concat or repeat) and
 then delegated to the appropriate handlers.

Indeed.  The bug was caused by this delegation, which (prior to my
patch) would also return a Py_NotImplemented that would leak through
abstract.c.  My patch is to remove this unnecessary delegation by not
defining sq_concat/sq_repeat for user-defined classes, and restoring the
original expectation that the sq_concat/sq_repeat slots should not
return Py_NotImplemented.  How does this relate to coercion?

 But then again, looking in typeobject.c, the following code
 could be the cause for leaking a NotImplemented singleton
 reference:
 
 #define SLOT1BINFULL(FUNCNAME, TESTFUNC, SLOTNAME, OPSTR, ROPSTR) \
 static PyObject * \
 FUNCNAME(PyObject *self, PyObject *other) \
 { \
     static PyObject *cache_str, *rcache_str; \
     int do_other = self->ob_type != other->ob_type && \
         other->ob_type->tp_as_number != NULL && \
         other->ob_type->tp_as_number->SLOTNAME == TESTFUNC; \
     if (self->ob_type->tp_as_number != NULL && \
         self->ob_type->tp_as_number->SLOTNAME == TESTFUNC) { \
         PyObject *r; \
         if (do_other && \
             PyType_IsSubtype(other->ob_type, self->ob_type) && \
             method_is_overloaded(self, other, ROPSTR)) { \
             r = call_maybe( \
                 other, ROPSTR, &rcache_str, "(O)", self); \
             if (r != Py_NotImplemented) \
                 return r; \
             Py_DECREF(r); \
             do_other = 0; \
         } \
         r = call_maybe( \
             self, OPSTR, &cache_str, "(O)", other); \
         if (r != Py_NotImplemented || \
             other->ob_type == self->ob_type) \
 ^
 If both types are of the same type, then a NotImplemented return
 value would be returned.

Indeed, however:

 
             return r; \
         Py_DECREF(r); \
     } \
     if (do_other) { \
         return call_maybe( \
             other, ROPSTR, &rcache_str, "(O)", self); \
     } \
     Py_INCREF(Py_NotImplemented); \
     return Py_NotImplemented; \
 }

This last statement also returns Py_NotImplemented.  So it's expected of
this function to be able to return Py_NotImplemented, isn't it?  The
type slots like nb_add can return Py_NotImplemented; the code that
converts it to a TypeError is in the caller, which is abstract.c.


A bientot,

Armin
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] deque alternative

2005-12-27 Thread Armin Rigo
Hi Christian,

On Mon, Dec 26, 2005 at 01:38:37PM +0100, Christian Tismer wrote:
 I don't think your code has to decide about this. The power lies
 in the fact that you don't specify that, but just use the list
 in a different way. We do this in the PyPy implementation;
 right now it is true that we have a static analysis, but a JIT
 is to come, and I'm pretty sure it will try to use an array
 until something gets used like a list.

You are mentioning confusingly many levels of PyPy for this argument.
This is not directly related to static analysis nor to the JIT.  The
point is just that while a Python program runs, the implementation can
decide to start using a deque-like structure instead of a zero-based
array for a given user list.  This can be done in any implementation of
Python; both in PyPy and in CPython it would be done by adding checks
and cases in the code that implements list objects.

As much as I like this approach I fear that it will be rejected for
CPython, as e.g. lazily concatenated string objects were, on grounds of
code obfuscation and unpredictability of performance.  But it's PyPy's
goal to experiment here :-)


A bientot,

Armin
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] NotImplemented reaching top-level

2005-12-27 Thread Armin Rigo
Hi Facundo,

On Mon, Dec 26, 2005 at 02:31:10PM -0300, Facundo Batista wrote:
  nb_add and nb_multiply should be tried.  I don't think that this would
  break existing C or Python code, but it should probably only go in 2.5,
  together with the patch #1390657 that relies on it.
 
 It'd be good to know if this will be applied for the next version
 2.4.x or will wait until 2.4.5, for me to search a workaround in
 Decimal or not (really don't know if I can find a solution here).

I completed the patch on the SF tracker, and now I believe that it could
safely be checked in the HEAD and in the 2.4 branch (after the
appropriate review).


A bientot,

Armin
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] NotImplemented reaching top-level

2005-12-26 Thread Armin Rigo
Hi Brett,

On Sun, Dec 25, 2005 at 11:55:11AM -0800, Brett Cannon wrote:
 Maybe.  Also realize we will have a chance to clean it up when Python
 3 comes around since the classic class stuff will be ripped out.  That
 way we might have a chance to streamline the code.

For once, old-style classes are not to blame here: it's only about the
oldest aspects of the PyTypeObject structure and substructures.


I-said-that-no-one-knows-this-code-any-more'ly yours,

Armin
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] NotImplemented reaching top-level

2005-12-26 Thread Armin Rigo
Hi,

On Mon, Dec 26, 2005 at 02:40:38AM +1000, Nick Coghlan wrote:
 That sounds like the right definition to me (I believe this behaviour is what 
 Raymond and Facundo were aiming for with the last round of updates to 
 Decimal).

Done in patch #1390657.

Although this patch passes all existing tests plus the ones it adds,
there is a corner and untested case where it could potentially break
code.  Indeed, the only sane patch I could come up with makes
user-defined types fail to work with PySequence_Concat() and
PySequence_Repeat() -- details in the patch.  So I propose that we
clarify what these two functions really mean in term of the Python
language spec, instead of just in term of the CPython-specific sq_concat
and sq_repeat slots.  (BTW that's needed for PyPy/Jython/etc., too, to
give a reasonable meaning to the operator.concat() and operator.repeat()
built-ins.)

I propose the following definitions (which are mostly what the
docstrings already explain anyway):

* PySequence_Concat(a, b) and operator.concat(a, b) mean a + b, with
  the difference that we check that both arguments appear to be
  sequences (as checked with operator.isSequenceType()).

* PySequence_Repeat(a, b) and operator.repeat(a, b) mean a * b, where
  a is checked to be a sequence and b is an integer.  Some bounds
  can be enforced on b -- for CPython, it means that it must fit in a
  C int.

The idea is to extend PySequence_Concat() and PySequence_Repeat() to
match the above definitions precisely, which means that for objects not
defining sq_repeat or sq_concat but still appearing to be sequences,
nb_add and nb_multiply should be tried.  I don't think that this would
break existing C or Python code, but it should probably only go in 2.5,
together with the patch #1390657 that relies on it.
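
In Python terms, the proposed definitions boil down to roughly this
sketch (Python 2 names; the error messages are made up):

    import operator

    def concat(a, b):
        # proposed meaning of operator.concat(a, b): "a + b", sequences only
        if not (operator.isSequenceType(a) and operator.isSequenceType(b)):
            raise TypeError("concat() arguments must be sequences")
        return a + b

    def repeat(a, n):
        # proposed meaning of operator.repeat(a, n): "a * n", with a a
        # sequence and n an integer (bounded, in CPython, by a C int)
        if not operator.isSequenceType(a):
            raise TypeError("repeat() first argument must be a sequence")
        if not isinstance(n, (int, long)):
            raise TypeError("repeat() count must be an integer")
        return a * n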


A bientot,

Armin
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] NotImplemented reaching top-level

2005-12-25 Thread Armin Rigo
Hi Facundo,

On Sat, Dec 24, 2005 at 02:31:19PM -0300, Facundo Batista wrote:
 >>> d += 1.2
 >>> d
 NotImplemented

The situation appears to be a mess.  Some combinations of specific
operators fail to convert NotImplemented to a TypeError, depending on
old- or new-style-class-ness, although this is clearly a bug (e.g. in an
example like yours but using -= instead of +=, we get the correct
TypeError.)

Obviously, we need to write some comprehensive tests about this.  But
now I just found out that the old, still-pending SF bug #847024 about
A()*5 in new-style classes hasn't been given any attention; my theory is
that nobody fully understands the convoluted code paths of abstract.c
any more :-(


A bientot,

Armin
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] NotImplemented reaching top-level

2005-12-25 Thread Armin Rigo
Hi Reinhold,

On Sun, Dec 25, 2005 at 12:37:53PM +0100, Reinhold Birkenfeld wrote:
  that nobody fully understands the convoluted code paths of abstract.c
  any more :-(
 
 Time for a rewrite?

Of course, speaking of a rewrite, PyPy does the right thing in these
two areas.  Won't happen to CPython, though.  There are too much
backward-compatibility issues with the PyTypeObject structure; I think
we're doomed with patching the bugs as they show up.

Looking up in the language reference, I see no mention of NotImplemented
in the page about __add__, __radd__, etc.  I guess it's a documentation
bug as well, isn't it?  The current code base tries to implement the
following behavior: Returning NotImplemented from any of the binary
special methods (__xxx__, __rxxx__, __ixxx__) makes Python proceed as if
the method was not defined in the first place.
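
A tiny example of the intended behaviour (for illustration only):

    class A(object):
        def __add__(self, other):
            return NotImplemented      # treated as if __add__ did not exist

    class B(object):
        def __radd__(self, other):
            return "B.__radd__"

    print A() + B()        # falls back to B.__radd__
    try:
        A() + object()     # no fallback left
    except TypeError:
        print "TypeError, as expected: NotImplemented never leaks out"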

If we agree on this, I could propose a doc fix, a test, and appropriate
bug fixes.


A bientot,

Armin
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Expose Subversion revision number to Python

2005-12-18 Thread Armin Rigo
Hi Barry,

On Sat, Dec 17, 2005 at 08:28:17PM -0500, Barry Warsaw wrote:
 Done. r41744.

Doesn't appear to work for me: sys.build_number receives the value from
the buildno.  Looking at the Makefile, the reason is that I'm building
CPython in a separate directory (running '/some/path/configure; make').

Running 'svnversion .' by hand is quite fast if the whole tree of files
is in the cache.  My guess is that if you do 'svn up; make' then the
tree will indeed be in the cache, so the extra build time shouldn't be
noticeable in this common case (unless you are low on RAM).

Do we have any plan to make sys.build_number meaningful in the releases
as well (generally compiled from an svn export, as Michael pointed out),
or are we happy with a broken number in this case?

Should I propose / check-in a patch to expose sys.build_info instead
("CPython", 41761, "trunk"), as this got positive feedback so far?
It's also less surprising than the current sys.build_number, which is a
string despite its name.


A bientot,

Armin
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Expose Subversion revision number to Python

2005-12-16 Thread Armin Rigo
Hi Barry,

On Fri, Dec 16, 2005 at 12:16:49AM -0500, Barry Warsaw wrote:
 SF patch # 1382163 is a fairly simple patch to expose the Subversion
 revision number to Python, both in the Py_GetBuildInfo() text, and in a
 new Py_GetBuildNumber() C API function, and via a new sys.build_number
 attribute.

I have a minor concern about people starting to use sys.build_number to
check for features in their programs, instead of using sys.version_info
or hasattr() or whatever is relevant -- e.g. because it seems to them
that comparing a single number is easier than a tuple.  The problem is
that this build number would most likely have no meaning in non-CPython
implementations.

What about having instead:

sys.build_info = ("CPython", <svn rev>, "trunk")

This would make it clear that it's the CPython svn rev number, and it
could possibly be used to distinguish between branches, too, which the
revision number alone cannot do.  (trunk is the last part of the path
returned by svn info.)

Of course, what I'm trying to sneak in here is that it may be a good
occasion to introduce an official way to determine which Python
implementation the program is running on top of -- something more
immediate than the sys.platform=="java" occasionally used in the test
suite to look for Jython.  (I know programs should not depend on this in
general; I'm more thinking about places like the test suite.)
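
For example, test-suite code could then look like this sketch (purely
hypothetical, since sys.build_info is only a proposal here, with the
tuple layout suggested above):

    import sys

    impl = getattr(sys, "build_info", (None, None, None))[0]
    if impl == "CPython":
        pass   # run CPython-specific checks
    elif sys.platform == "java":
        pass   # today's ad-hoc way of detecting Jython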


A bientot,

Armin
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Expose Subversion revision number to Python

2005-12-16 Thread Armin Rigo
Hi Phillip,

On Fri, Dec 16, 2005 at 10:51:33AM -0500, Phillip J. Eby wrote:
  svn info -R|grep '^Last Changed Rev'|sort -nr|head -1|cut -f 4 -d ' '
 
 To get the highest-numbered revision.  However, both this approach and 
 yours will not deal with Subversion messages in non-English locales.

The 'py' lib works around this problem by running LC_ALL=C svn info.


A bientot,

Armin
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Expose Subversion revision number to Python

2005-12-16 Thread Armin Rigo
Hi Skip,

On Fri, Dec 16, 2005 at 05:02:19AM -0600, [EMAIL PROTECTED] wrote:
 Armin (trunk is the last part of the path returned by svn info.)

 Did you mean the last part of the URL?

Yes, sorry.


Armin
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Expose Subversion revision number to Python

2005-12-16 Thread Armin Rigo
Hi Phillip,

On Fri, Dec 16, 2005 at 10:59:23AM -0500, Phillip J. Eby wrote:
 The Revision from svn info isn't reliable; it doesn't actually relate 
 to what version of code is in the subtree.  It can change when nothing has 
 changed.

Indeed, the patch should not use the Revision line but the Last
Changed Rev one.

 SVN does track the actual 
 *changed* revision, it just takes a little more work to get it.

Not if you're happy with Last Changed Rev:

LC_ALL=C svn info | grep -i "last changed rev" | cut -f 4 -d " "
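
or the same thing from Python, as a small sketch that merely wraps the
command above:

    import os

    def last_changed_rev(path="."):
        # LC_ALL=C avoids localized field names, as discussed in this thread
        for line in os.popen('LC_ALL=C svn info "%s"' % path):
            if line.lower().startswith("last changed rev"):
                return int(line.split(":")[1])
        return None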


A bientot,

Armin
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Expose Subversion revision number to Python

2005-12-16 Thread Armin Rigo
Hi Phillip,

On Fri, Dec 16, 2005 at 11:33:00AM -0500, Phillip J. Eby wrote:
 Not if you're happy with Last Changed Rev:
 
  LC_ALL=C svn info | grep -i "last changed rev" | cut -f 4 -d " "
 
 You left off the all-important -R from svn info, and the sort -nr | 
 head -1 at the end.  The Last Changed Rev of the root is not necessarily 
 the highest Last Changed Rev, no matter how or where you update or check 
 out.  Try it and see.

I was proposing this line as a slight extension of the one currently in
the SF patch.  In accordance with Martin I am still unconvinced that
'svn info -R' or more fancy tools are really useful here.

If you meant that the following situation is possible:

trunk$  svn up
At revision xxx.
trunk$  svn info
Last Changed Rev: 1
trunk$  cd Python
trunk/python$  svn info
Last Changed Rev: 10001

then I object.  As far as I can tell this is not possible.


A bientot,

Armin
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] For Python 3k, drop default/implicit hash, and comparison

2005-11-27 Thread Armin Rigo
Hi Noam,

On Sun, Nov 27, 2005 at 09:04:25PM +0200, Noam Raphael wrote:
 No, I meant real programming examples. My theory is that most
 user-defined classes have a value, and those that don't are related
 to I/O, in some sort of a broad definition of the term. I may be
 wrong, so I ask for counter-examples.

In the source code base of PyPy, trying to count only what we really
wrote and not external tools, I found 19 classes defining __eq__ on a
total of 1413.  There must be close to zero classes that have anything
to do with I/O in there.  If anything, this proves that the default
comparison for classes is absolutely fine and nothing needs to be fixed
in the Python language.

Please move this discussion outside python-dev.


Armin
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Problems with the Python Memory Manager

2005-11-25 Thread Armin Rigo
Hi Jim,

You wrote:
 (2)  Is he allocating new _types_, which I think don't get properly
  collected.

(Off-topic) For reference, as far as I know new types are properly
freed.  There has been a number of bugs and lots of corner cases to fix,
but I know of no remaining one.  This assumes that the new types are
heap types allocated in some official way -- either by Python code or by
somehow calling type() from C.


A bientot,

Armin
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Problems with the Python Memory Manager

2005-11-24 Thread Armin Rigo
Hi,

On Thu, Nov 24, 2005 at 01:59:57AM -0800, Robert Kern wrote:
 You can get the version of scipy_core just before the fix that Travis
 applied:

Now we can start debugging :-)

   http://projects.scipy.org/scipy/scipy_core/changeset/1490

This changeset alone fixes the small example you provided.  However,
compiling python --without-pymalloc doesn't fix it, so we can't blame
the memory allocator.  That's all I can say; I am rather clueless as to
how the above patch manages to make any difference even without
pymalloc.


A bientot,

Armin
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Problems with the Python Memory Manager

2005-11-24 Thread Armin Rigo
Hi,

Ok, here is the reason for the leak...

There is in scipy a type called 'int32_arrtype' which inherits from both
another scipy type called 'signedinteger_arrtype', and from 'int'.
Obscure!  This is not 100% officially allowed: you are inheriting from
two C types.  You're living dangerously!

Now in this case it mostly works as expected, because the parent scipy
type has no field at all, so it's mostly like inheriting from both
'object' and 'int' -- which is allowed, or would be if the bases were
written in the opposite order.  But still, something confuses the
fragile logic of typeobject.c.  (I'll leave this bit to scipy people to
debug :-)

The net result is that unless you force your own tp_free as in revision
1490, the type 'int32_arrtype' has tp_free set to int_free(), which is
the normal tp_free of 'int' objects.  This causes all deallocated
int32_arrtype instances to be added to the CPython free list of integers
instead of being freed!


A bientot,

Armin
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Problems with mro for dual inheritance in C [Was: Problems with the Python Memory Manager]

2005-11-24 Thread Armin Rigo
Hi Travis,

On Thu, Nov 24, 2005 at 10:17:43AM -0700, Travis E. Oliphant wrote:
 Why doesn't the int32 
 type inherit its tp_free from the early types first?

In your case I suspect that the tp_free is inherited from the tp_base
which is probably 'int'.  I don't see how to fix typeobject.c, because
I'm not sure that there is a solution that would do the right thing in
all cases at this level.

I would suggest that you just force the tp_alloc/tp_free that you want
in your static types instead.  That's what occurs for example if you
build a similar inheritance hierarchy with classes defined in Python:
these classes are then 'heap types', so they always get the generic
tp_alloc/tp_free before PyType_Ready() has a chance to see them.
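
For comparison, here is a rough Python-level analogue of the same
hierarchy (the names are made up; the point is only that classes defined
in Python are heap types and therefore get the generic
allocator/deallocator):

    class signedinteger(object):          # plays the role of signedinteger_arrtype
        pass

    class int32(signedinteger, int):      # multiple inheritance including 'int'
        pass

    x = int32(42)
    assert x + 1 == 43
    del x    # deallocated via the generic tp_free, not via int's free list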


Armin
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] s/hotshot/lsprof

2005-11-21 Thread Armin Rigo
Hi Brett, hi Floris,

On Sat, Nov 19, 2005 at 04:12:28PM -0800, Brett Cannon wrote:
 Just  for everyone's FYI while we are talking about profilers, Floris
 Bruynooghe (who I am cc'ing on this so he can contribute to the
 conversation), for Google's Summer of Code, wrote a replacement for
 'profile' that uses Hotshot directly.  Thanks to his direct use of
 Hotshot and rewrite of pstats it loads Hotshot data 30% faster and
 also alleviates keeping 'profile' around and its slightly questionable
 license.

Thanks for the note!  30% faster than an incredibly long time is still
quite long, but that's an improvement, I suppose.  However, this code is
not ready yet.  For example the new loader gives wrong results in the
presence of recursive function calls.


A bientot,

Armin.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] s/hotshot/lsprof

2005-11-21 Thread Armin Rigo
Hi Barry,

On Mon, Nov 21, 2005 at 11:40:37AM -0500, Barry Warsaw wrote:
 Hi Armin.  Actually it was SF #900092 that I was referring to.

Ah, we're talking about different things then.  The patch in SF #900092
is not related to hotshot, it's just ceval.c not producing enough events
to allow a precise timing of exceptions.  (Now that ceval.c is fixed, we
could remove a few hacks from profile.py, BTW.)

I am referring to a specific bug of hotshot which entirely drops some
genuine time intervals, all the time.  It's untested code!  A minimal
test like Floris' test_profile shows it clearly.


A bientot,

Armin.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] s/hotshot/lsprof

2005-11-21 Thread Armin Rigo
Hi Floris,

On Mon, Nov 21, 2005 at 04:41:04PM +, Floris Bruynooghe wrote:
  Now Brett's
  student, Floris, extended hotshot to allow custom timers.  This is
  essential, because it enables testing.  The timing parts of hotshot were
  not tested previously.
 
 Don't be too enthousiastic here.

Testing is done by feeding the profiler something that is not a real
timer function, but gives easy to predict answers.  Then we check that
the profiler accounted all this pseudo-time to the correct functions in
the correct way.  This is one of the few ways to reliably test a
profiler, which is why it is essential.
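
In a minimal sketch, the idea looks like this (the profiled function is
just a placeholder):

    import profile

    ticks = [0]
    def fake_timer():
        # not a real clock: advance by exactly one "tick" per call, so the
        # expected per-function totals can be computed by hand
        ticks[0] += 1
        return ticks[0]

    def function_under_test():
        # placeholder for the code whose profile we want to check
        return sum(range(10))

    p = profile.Profile(timer=fake_timer)
    p.runcall(function_under_test)
    p.print_stats()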

 Iirc I did compare the output of test_profile between profile and my
 wrapper.  This was one of my checks to make sure it was wrapped
 correctly.  So could you tell me how they are different?

test_profile works as I explained above.  Running it with hotshot shows
different numbers, which means that there is a bug (and not just some
difference in real speed).   More precisely, a specific number of the
pseudo-clock-ticks are dropped for no reason other than a bug, and
doesn't show up in the final results at all.


A bientot,

Armin
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] s/hotshot/lsprof

2005-11-21 Thread Armin Rigo
Hi Floris,

On Mon, Nov 21, 2005 at 04:45:03PM +, Floris Bruynooghe wrote:
 Afaik I did test recursive calls etc.

It seems to show up in any test case I try, e.g.

import hprofile

def wait(m):
    if m > 0:
        wait(m-1)

def f(n):
    wait(n)
    if n > 1:
        return n*f(n-1)
    else:
        return 1

hprofile.run("f(500)", 'dump-hprof')

The problem is in the cumulative time column, which (on this machine)
says 163 seconds for both f() and wait().  The whole program finishes in
1 second...  The same log file loaded with hotshot.stats doesn't have
this problem.


A bientot,

Armin.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] s/hotshot/lsprof

2005-11-21 Thread Armin Rigo
Hi Martin,

On Mon, Nov 21, 2005 at 10:29:55PM +0100, Martin v. L?wis wrote:
  I see no incremental way of fixing some of the downsides of hotshot,
  like its huge log file size and loading time.
 
 I haven't looked into the details myself, but it appears that some
 google-summer-of-code contributor has found some way of fixing it.

As discussed elsewhere on this thread: this contribution did not fix any
of the mentioned problems.  The goal was only to get rid of profile.py
by linking it to Hotshot.  So the log file size didn't change and the
loading time was only 20-30% better, which is still a really long time.

 So essentially: fixing bugs isn't fun, but rewriting it from scratch is.

Well, sorry for being interested in having fun.  And yes, I am formally
committing myself to maintaining this new piece of software, because
that also looks like fun: it's simple code that does just what you
expect from it.

Note that I may sound too negative about Hotshot.  I see by now that it
is a very powerful piece of code, full of careful design trade-offs and
capabilities.  It can do much more than what the minimalistic
documentation says, e.g. it can or could be used as the basis of a
tracing tool to debug software, to measure test coverage, etc. (with
external tools).  Moreover, it comes with carefully chosen drawbacks --
log file size and loading time -- for advanced reasons.  You won't find
them discussed in the documentation, which makes user experience mostly
negative, but you do find them in Tim's e-mails :-)

So no, I'm not willing to debug and maintain an unfinished (quoting
Tim) advanced piece of software doing much more than what common-people-
reading-the-stdlib-docs use it for.  That is not fun.

 Now, it might be that in this specific case, replacing the library
 really is the right thing to do. It would be if:
 1.it has improvements over the current library already
(certified by users other than the authors), AND
 2.it has no drawbacks over the current library, AND
 3.there is some clear indication that it will get better maintenance
than the previous library.

1. Log file size (could reuse the existing compact profile.py format) --
good profile-tweak-reprofile round-trip time for the developer (no
ages spent loading the log) -- ability to interpret the logs in memory,
no need for a file -- collecting children call stats.  Positive early
user experience comes from the authors, me, and at least one other
company (Strakt) that cared enough to push for lsprof on the SF tracker.

There is this widespread user experience that hotshot is nice but it
doesn't actually appear to work (as Nick Coghlan put it).  Hotshot is
indeed buggy and has been producing wrong timings all along (up to and
including the current HEAD version) as shown by the test_profile found
in the Summer of Code project mentioned above.  Now we can fix that one,
and see if things get better.  In some sense this fix will discard the
meaning of any previous user experience, so that lsprof has now more of
it than Hotshot...

2. Drawbacks: there are many, as Hotshot has much more capabilities or
potential capabilities than lsprof.  None of them is to be found in the
documentation of Hotshot, though.  There is no drawback for people using
Hotshot only as documented.  Of course we might keep both Hotshot and
lsprof in the stdlib, if this sounds like a problem, but I really think
the stdlib could do with clean-ups more than pile-ups.

3. Maintenance group: two core developers.


A bientot,

Armin.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] s/hotshot/lsprof

2005-11-19 Thread Armin Rigo
Hi!

The current Python profilers situation is a mess.

'profile.Profile' is the ages-old pure Python profiler.  At the end of a
run, it builds a dict that is inspected by 'pstats.Stats'.  It has some
recent support for profiling C calls, which however make it crash in
some cases [1].  And of course it's slow (makes a run take about 10x
longer).

'hotshot', new in 2.2, is much faster (reportedly, only 30% added
overhead).  The log file is then loaded and turned into an instance of
the same 'pstats.Stats'.  This loading takes ages.  The reason is that
the log file only records events, and loading is done by instantiating a
'profile.Profile' and sending it all the events.  In other words, it
takes exactly as long as the time it spared in the first place!
Moreover, for some reasons, the results given by hotshot seem sometimes
quite wrong.  (I don't understand why, but I've seen it myself, and it's
been reported by various people, e.g. [2].)  'hotshot' doesn't know
about C calls, but it can log line events, although this information is
lost(!) in the final conversion to a 'pstats.Stats'.
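
The typical round-trip looks like this sketch (file name and workload are
made up), which is exactly where the slow loading step shows up:

    import hotshot, hotshot.stats

    def work():
        return sum(range(100000))      # made-up workload

    prof = hotshot.Profile("example.prof")
    prof.runcall(work)                 # fast: events are only logged
    prof.close()

    stats = hotshot.stats.load("example.prof")   # slow: replays every event
    stats.sort_stats("time").print_stats(10)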

'lsprof' is a third profiler by Brett Rosen and Ted Czotter, posted on
SF in June [2].  Michael Hudson and me did some minor clean-ups and
improvements on it, and it seems to be quite useful.  It is, for
example, the only of the three profilers that managed to give sensible
information about the PyPy translation process without crashing,
allowing us to accelerate it from over 30 to under 20 minutes.  The SF
patch contains a more detailed account on the reasons for writing
'lsprof'.  The current version [3] does not support C calls nor line
events.  It has its own simple interface, which is not compatible with
any of the other two profilers.  However, unlike the other two
profilers, it can record detailed stats about children, which I found
quite useful (e.g. how much time is spent in a function when it is
called by another specific function).

Therefore, I think it would be a great idea to add 'lsprof' to the
standard library.  Unless there are objections, it seems that the best
plan is to keep 'profile.py' as a pure Python implementation and replace
'hotshot' with 'lsprof'.  Indeed, I don't see any obvious advantage that
'hotshot' has over 'lsprof', and I certainly see more than one downside.
Maybe someone has a use for (and undocumented ways to fish for) line
events generated by hotshot.  Well, there is a script [4] to convert
hotshot log files to some format that a KDE tool [5] can display.  (It
even looks like hotshot files were designed with this in mind.)  Given
that the people doing that can still compile 'hotshot' as a separate
extension module, it doesn't strike me as a particularly good reason to
keep Yet Another Profiler in the standard library.

So here is my plan:

Unify a bit more the interfaces of the pure Python and the C profilers.
This also means that 'lsprof' should be made to use a pstats-compatible
log format.  The 'pstats' documentation specifically says that the file
format can change: that would give 'lsprof' a place to store its
detailed children stats.

Then we can provide a dummy 'hotshot.py' for compatibility, remove its
documentation, and provide documentation for 'lsprof'.

If anyone feels like this is a bad idea, please speak up.


A bientot,

Armin


[1] 
https://sourceforge.net/tracker/?group_id=5470atid=105470func=detailaid=1117670

[2] 
http://sourceforge.net/tracker/?group_id=5470atid=305470func=detailaid=1212837

[3] http://codespeak.net/svn/user/arigo/hack/misc/lsprof (Subversion)

[4] http://mail.python.org/pipermail/python-list/2003-September/183887.html

[5] http://kcachegrind.sourceforge.net/cgi-bin/show.cgi
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Is some magic required to check out new files from svn?

2005-11-16 Thread Armin Rigo
Hi,

On Sun, Nov 13, 2005 at 07:08:15AM -0600, [EMAIL PROTECTED] wrote:
 The full svn status output is
 
 % svn status
 !  .
 !  Python

The "!" definitely means that these items are missing or, for
directories, incomplete in some way.  You need to play around until the
"!" goes away; for example, you may try

svn revert -R . # revert to pristine state, recursively

if you have no local changes you want to keep, followed by 'svn up'.  If
it still doesn't help, then I'm lost about the cause and would just
recommend doing a fresh checkout.


A bientot,

Armin.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] AST branch update

2005-10-17 Thread Armin Rigo
Hi Jeremy,

On Thu, Oct 13, 2005 at 04:52:14PM -0400, Jeremy Hylton wrote:
 I don't think the current test suite covers all of the possible syntax
 errors that can be raised.  I'd like to add a new test suite that
 covers all of the remaining cases, perhaps moving some existing tests
 into this module as well.

You might be interested in PyPy's test suite here.  In particular,
http://codespeak.net/svn/pypy/dist/pypy/interpreter/test/test_syntax.py
contains a list of syntactically valid and invalid corner cases.

If you are willing to check out the whole of PyPy (i.e.
http://codespeak.net/svn/pypy/dist) you should also be able to run the
whole test suite, or at least the following tests:

   python test_all.py pypy/interpreter/test/test_compiler.py
   python test_all.py pypy/interpreter/pyparser/

which compare CPython's builtin compiler with our own compilers; as of
PyPy revision 18722 these tests pass on all CPython versions (2.3.5,
2.4.2, HEAD).


A bientot,

Armin.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] IMPORTANT: release24-maint branch is FROZEN from 2005-09-21 00:00 UTC for 2.4.2

2005-09-20 Thread Armin Rigo
Hi,

A quick note, the profile.py module is broken -- crashes on some
examples and real-world programs.  I think I should be able to fix it by
tomorrow, but not tonight.

(See the example checked in on the CVS trunk -- Lib/test/test_profile --
which passes, but for some reason I get completely unrealistic results
on a real-world application.  Can't investigate more now...)


Armin
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] IMPORTANT: release24-maint branch is FROZEN from 2005-09-21 00:00 UTC for 2.4.2

2005-09-20 Thread Armin Rigo
Hi,

On Tue, Sep 20, 2005 at 09:21:14PM +0200, Armin Rigo wrote:
 A quick note, the profile.py module is broken -- crashes on some
 examples and real-world programs.  I think I should be able to fix it by
 tomorrow, but not tonight.

It was easier than I thought, sorry for the alarm.


Armin
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] bug in urlparse

2005-09-06 Thread Armin Rigo
Hi Duncan,

On Tue, Sep 06, 2005 at 12:51:24PM +0100, Duncan Booth wrote:
 The net effect of this is that on some sites using a Python spider (e.g. 
 webchecker.py) will produce a large number of error messages for links 
 which browsers will actually resolve successfully.

As far as I'm concerned, even if it is not theoretically a buggy
behavior, a proposed patch with the above motivation would be welcome
(and, of course, this patch wouldn't break the RFC either).


Armin
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] PyPy release 0.7.0

2005-08-28 Thread Armin Rigo
Hi Python-dev'ers,

The first Python implementation of Python is now also the
second C implementation of Python :-)


Samuele  Armin ( the rest of the team)

-+-+-


pypy-0.7.0: first PyPy-generated Python Implementations
==

What was once just an idea between a few people discussing 
on some nested mailing list thread and in a pub became reality ... 
the PyPy development team is happy to announce its first
public release of a fully translatable self contained Python
implementation.  The 0.7 release showcases the results of our
efforts in the last few months since the 0.6 preview release
which have been partially funded by the European Union:

- whole program type inference on our Python Interpreter 
  implementation with full translation to two different 
  machine-level targets: C and LLVM 

- a translation choice of using a refcounting or Boehm 
  garbage collectors

- the ability to translate with or without thread support 

- very complete language-level compliancy with CPython 2.4.1 


What is PyPy (about)? 


PyPy is a MIT-licensed research-oriented reimplementation of
Python written in Python itself, flexible and easy to
experiment with.  It translates itself to lower level
languages.  Our goals are to target a large variety of
platforms, small and large, by providing a compilation toolsuite
that can produce custom Python versions.  Platform, Memory and
Threading models are to become aspects of the translation
process - as opposed to encoding low level details into a
language implementation itself.  Eventually, dynamic
optimization techniques - implemented as another translation
aspect - should become robust against language changes.

Note that PyPy is mainly a research and development project
and does not by itself focus on getting a production-ready
Python implementation although we do hope and expect it to
become a viable contender in that area sometime next year. 


Where to start? 
-

Getting started:http://codespeak.net/pypy/dist/pypy/doc/getting-started.html

PyPy Documentation: http://codespeak.net/pypy/dist/pypy/doc/ 

PyPy Homepage:  http://codespeak.net/pypy/

The interpreter and object model implementations shipped with
the 0.7 version can run on their own and implement the core
language features of Python as of CPython 2.4.  However, we still
do not recommend using PyPy for anything other than education,
playing or research purposes.

Ongoing work and near term goals
-

PyPy has been developed during approximately 15 coding sprints
across Europe and the US.  It continues to be a very
dynamically and incrementally evolving project with many
one-week meetings to follow.  You are invited to consider coming to 
the next such meeting in Paris mid October 2005 where we intend to 
plan and head for an even more intense phase of the project
involving building a JIT-Compiler and enabling unique
features not found in other Python language implementations.

PyPy has been a community effort from the start and it would
not have got that far without the coding and feedback support
from numerous people.   Please feel free to give feedback and 
raise questions. 

contact points: http://codespeak.net/pypy/dist/pypy/doc/contact.html

contributor list: http://codespeak.net/pypy/dist/pypy/doc/contributor.html

have fun, 

the pypy team, of which here is a partial snapshot
of mainly involved persons: 

Armin Rigo, Samuele Pedroni, 
Holger Krekel, Christian Tismer, 
Carl Friedrich Bolz, Michael Hudson, 
Eric van Riet Paap, Richard Emslie, 
Anders Chrigstroem, Anders Lehmann, 
Ludovic Aubry, Adrien Di Mascio, 
Niklaus Haldimann, Jacob Hallen, 
Bea During, Laura Creighton, 
and many contributors ... 

PyPy development and activities happen as an open source project  
and with the support of a consortium partially funded by a two 
year European Union IST research grant. Here is a list of 
the full partners of that consortium: 

Heinrich-Heine University (Germany), AB Strakt (Sweden)
merlinux GmbH (Germany), tismerysoft GmbH(Germany) 
Logilab Paris (France), DFKI GmbH (Germany)
ChangeMaker (Sweden), Impara (Germany)
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] refcounting vs PyModule_AddObject

2005-06-15 Thread Armin Rigo
Hi Michael,

On Wed, Jun 15, 2005 at 01:35:35PM +0100, Michael Hudson wrote:
 if (ProfilerError == NULL)
     ProfilerError = PyErr_NewException("hotshot.ProfilerError",
                                        NULL, NULL);
 if (ProfilerError != NULL) {
     Py_INCREF(ProfilerError);
     PyModule_AddObject(module, "ProfilerError", ProfilerError);
 }

I think the Py_INCREF is needed here.  The ProfilerError is a global
variable that needs the extra reference.  Otherwise, a malicious user
could do "del _hotshot.ProfilerError" and have it garbage-collected
under the feet of _hotshot.c which still uses it.  What I don't get is
how ProfilerError could fail to be NULL in the first 'if' above, but
that's a different matter.

While we're at strange refcounting problems, PyModule_AddObject() only
decrefs its last argument if no error occurs.  This is probably wrong.

In general I've found that the C modules' init code is fragile.  This
might be due to the idea that it runs only once anyway, and global
C-module objects are immortal anyway, so sloppiness sneaks in.  But for
example, the following is common:

m = Py_InitModule3("xxx", NULL, module_doc);
Py_INCREF(Xxx_Type);
PyModule_AddObject(m, "Xxx", (PyObject *)Xxx_Type);

This generates a segfault if Py_InitModule3() returns NULL (however rare
that situation is).


A bientot,

Armin
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] [Python-checkins] python/dist/src/Modules _csv.c, 1.37, 1.38

2005-06-15 Thread Armin Rigo
Hi Skip,

On Wed, Jun 15, 2005 at 06:35:10AM -0700, [EMAIL PROTECTED] wrote:
 Why this worked is a bit mystical.  Perhaps it never gets freed because the
 object just happens never to be DECREF'd (but that seems unlikely).
  /* Add the Dialect type */
 + Py_INCREF(Dialect_Type);
  if (PyModule_AddObject(module, "Dialect", (PyObject *)Dialect_Type))
          return;

Hum, you probably don't want to know, but it works just fine to forget
a Py_INCREF before PyModule_AddObject() for the following reason:

1. the reference is stored in the module's dict, so the object is kept
alive from there.

2. after the module initialization code is completed, the import
mechanism makes a copy of the dict (!) just in case some user wants to
reload() the module (!!), in which case the module's dict is simply
overwritten with the copy again (!!!).

So there is a reference left to the object from this hidden dict, and no
way for the user to kill it -- short of using gc.getreferrers(), which
is how I figured this out, but gc.getreferrers() is officially
dangerous.  So unlike what I thought in my previous e-mail, even if the
user deletes the entry in the module's normal dict, nothing bad can
happen because of this particular feature of import...

So it just works.  Hum.
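
A small illustrative sketch of the effect, as a hypothetical Python 2.x
session (the behaviour is the one described above; the exact module does not
matter, _csv is just the C extension at hand):

    import _csv
    del _csv.Dialect      # drop the only visible reference
    reload(_csv)          # the C init function is not re-run here...
    print _csv.Dialect    # ...yet the entry is back, copied in again from
                          # the hidden per-extension dict kept by import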


Armin
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Localized Type Inference of Atomic Types in Python

2005-05-25 Thread Armin Rigo
Hi Brett,

On Tue, May 24, 2005 at 04:11:34PM -0700, Brett C. wrote:
 My thesis, Localized Type Inference of Atomic Types in Python, was
 successfully defended today for my MS in Computer Science at the California
 Polytechnic State University, San Luis Obispo.

Congratulations !

Nitpickingly... thanks for the references to Psyco, though I should add
that Psyco has been supporting more than just ints and strings since
shortly after my first e-mail to python-dev about it (in 2001, I think)
:-) -- it actually knows more or less about all the common built-in types.


A bientot,

Armin
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] First PyPy (preview) release

2005-05-20 Thread Armin Rigo

The PyPy 0.6 release
 

*The PyPy Development Team is happy to announce the first 
public release of PyPy after two years of spare-time and
half a year of EU-funded development.  The 0.6 release 
is eminently a preview release.*  

What it is and where to start 
-

Getting started:    http://codespeak.net/pypy/index.cgi?doc/getting_started.html

PyPy Documentation: http://codespeak.net/pypy/index.cgi?doc

PyPy Homepage:  http://codespeak.net/pypy/

PyPy is an MIT-licensed reimplementation of Python written in
Python itself.  The long-term goals are an implementation that
is flexible and easy to experiment with and retarget to
different platforms (also non-C ones) and such that high
performance can be achieved through high-level implementations
of dynamic optimisation techniques.

The interpreter and object model implementations shipped with 0.6 can
be run on top of CPython and implement the core language features of
Python as of CPython 2.3.  PyPy passes around 90% of the Python language
regression tests that do not depend deeply on C-extensions.  Some of
that functionality is still made available by PyPy piggy-backing on
the host CPython interpreter.  Double interpretation and abstractions
in the code-base make it so that PyPy running on CPython is quite slow
(around 2000x slower than CPython); this is expected.  

This release is intended for people who want to get a feel for what we
are doing by playing with the interpreter and perusing the codebase --
and possibly to join in the fun and the efforts.

Interesting bits and highlights
-

The release is also a snapshot of our ongoing efforts towards 
low-level translation and experimenting with unique features. 

* By default, PyPy is a Python version that works completely with
  new-style class semantics.  However, support for old-style classes
  is still available.  Implementations, mostly as user-level code, of
  their metaclass and instance object are included and can be re-made
  the default with the ``--oldstyle`` option.

* In PyPy, bytecode interpretation and object manipulations 
  are well separated between a bytecode interpreter and an 
  *object space* which implements operations on objects. 
  PyPy comes with experimental object spaces augmenting the
  standard one through delegation:

  * an experimental object space that does extensive tracing of
bytecode and object operations;

  * the 'thunk' object space that implements lazy values and a 'become'
operation that can exchange object identities.
  
  These spaces already give a glimpse of the flexibility potential of
  PyPy.  See demo/fibonacci.py and demo/sharedref.py for examples
  about the 'thunk' object space.

* The 0.6 release also contains a snapshot of our translation efforts 
  to lower-level languages.  For that we have developed an
  annotator which is capable of inferring type information
  across our code base.  The annotator is already capable of
  successfully type-annotating basically *all* of the PyPy
  code-base, and is included with 0.6.  

* From type-annotated code, low-level code needs to be generated.
  Backends for various targets (C, LLVM, ...) are included; they are
  all incomplete to some degree and remain very much in flux.  What is
  shipped with 0.6 is able to deal with small to medium-sized examples.


Ongoing work and near term goals
-

Generating low-level code is the main area we are hammering on in the
next months; our plan is to produce a PyPy version in August/September 
that does not need to be interpreted by CPython anymore and will 
thus run considerably faster than the 0.6 preview release. 

PyPy has been a community effort from the start and it would
not have got that far without the coding and feedback support
from numerous people.   Please feel free to give feedback and 
raise questions. 

contact points: http://codespeak.net/pypy/index.cgi?contact

contributor list: http://codespeak.net/pypy/index.cgi?doc/contributor.html 

have fun, 

Armin Rigo, Samuele Pedroni, 

Holger Krekel, Christian Tismer, 

Carl Friedrich Bolz 


PyPy development and activities happen as an open source project  
and with the support of a consortium funded by a two-year EU IST 
research grant. Here is a list of partners of the EU project: 

Heinrich-Heine University (Germany), AB Strakt (Sweden)

merlinux GmbH (Germany), tismerysoft GmbH (Germany) 

Logilab Paris (France), DFKI GmbH (Germany)

ChangeMaker (Sweden)

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] New style classes and operator methods

2005-04-11 Thread Armin Rigo
Hi Greg,

On Fri, Apr 08, 2005 at 05:03:42PM +1200, Greg Ewing wrote:
 If the left and right operands are of the same class,
 and the class implements a right operand method but
 not a left operand method, the right operand method
 is not called. Instead, two attempts are made to call
 the left operand method.

This is not a general rule.  The rule is that if both elements are of the same
class, only the non-reversed method is ever called.  The confusing bit is
about having it called twice.  Funnily enough, this only occurs for some
operators (I think only add and mul).  The reason is that internally, the C
core distinguishes between number addition and sequence concatenation, and
between number multiplication and sequence repetition.  So __add__() and
__mul__() are called twice: once as a numeric operation and once as a
sequence operation...

Could be fixed with more strange special cases in abstract.c, but I'm not sure
it's worth it.
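
To illustrate the first point, here is a small sketch (the class Num is made
up for the example; behaviour as observed with new-style classes):

    class Num(object):
        def __radd__(self, other):
            return 42

    print 1 + Num()    # 42: int.__add__ gives up, so Num.__radd__ is used
    try:
        Num() + Num()  # both operands have the same class: only __add__ is
    except TypeError:  # looked for, __radd__ is never tried, hence TypeError
        pass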


Armin
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] longobject.c ob_size

2005-04-04 Thread Armin Rigo
Hi Michael,

On Sun, Apr 03, 2005 at 04:14:16PM +0100, Michael Hudson wrote:
 Asking mostly for curiousity, how hard would it be to have longs store
 their sign bit somewhere less aggravating?

As I guess your goal is to get rid of all the if (size < 0) size = -size in
object.c and friends, I should point out that longobject.c has set out an
example that might have been followed by C extension writers.  Maybe it is too
late to say now that ob_size cannot be negative any more :-(


Armin
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] can we stop pretending _PyType_Lookup is internal?

2005-03-14 Thread Armin Rigo
Hi Michael,

 ... _PyType_Lookup ...

There have been discussions about copy_reg.py and at least one other place in
the standard library that needs this; it is an essential part of the
descriptor model of new-style classes.  In my opinion it should be made part
not only of the official C API but the Python one too, e.g. as a method of
'type' instances:  type(x).lookup('name')
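
For reference, a rough pure-Python sketch of what such a lookup would do (the
function name is made up; it mirrors the MRO scan that _PyType_Lookup performs,
without invoking the descriptor protocol):

    def type_lookup(cls, name):
        # scan the MRO and return the raw class attribute, if any
        for base in cls.__mro__:
            if name in base.__dict__:
                return base.__dict__[name]
        return None

    # what the proposed type(x).lookup('name') would roughly amount to:
    #     type_lookup(type(x), 'name')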


Armin
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] builtin_id() returns negative numbers

2005-02-17 Thread Armin Rigo
Hi Tim,

On Mon, Feb 14, 2005 at 10:41:35AM -0500, Tim Peters wrote:
 # This is a puzzle:  there's no way to know the natural width of
 # addresses on this box (in particular, there's no necessary
 # relation to sys.maxint).

Isn't this natural width nowadays available as:

256 ** struct.calcsize('P')

?
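
For instance, a small sketch of how a possibly negative id() could be folded
back into that range (illustrative only, not an official helper):

    import struct

    ADDRESS_SPACE = 256 ** struct.calcsize('P')   # typically 2**32 or 2**64

    def unsigned_id(obj):
        # map id() into the natural unsigned address range of this box
        return id(obj) % ADDRESS_SPACE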


Armin
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Getting rid of unbound methods: patch available

2005-01-20 Thread Armin Rigo
Hi,

Removing unbound methods also breaks the 'py' lib quite a bit.  The 'py.test'
framework handles function and bound/unbound method objects all over the
place, and uses introspection on them, as they are the objects defining the
tests to run.
  
It's nothing that can't be repaired, and in places the fix even looks nicer
than the original code, but I think it points to large-scale breakage.  
I'm expecting any code that relies on introspection to break at least here or
there.  My bet is that even if the fixes are just a couple of lines long,
everyone will have to upgrade a number of their packages when switching to
Python 2.5 -- unheard of!
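
To make the kind of breakage concrete, here is a sketch of the sort of
introspection that changes meaning (the class and method names are made up):

    import types

    class C(object):
        def meth(self):
            pass

    m = C.meth
    # with unbound methods (Python 2.x): m is a method object, and the class
    # it was retrieved from is recoverable from the object itself
    print isinstance(m, types.MethodType), getattr(m, 'im_class', None)
    # without unbound methods, C.meth is a plain function, and anything that
    # relied on im_class / im_self has to be adapted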

For reference, the issues I got with the py lib are described at
 http://codespeak.net/pipermail/py-dev/2005-January/000159.html


Armin
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 246: lossless and stateless

2005-01-18 Thread Armin Rigo
Hi Clark,

On Fri, Jan 14, 2005 at 12:41:32PM -0500, Clark C. Evans wrote:
 Imagine enhancing the stack-trace with additional information about
 what adaptations were made; 
 
 Traceback (most recent call last):
   File "xxx", line 1, in foo
     Adapting x to File
   File "yyy", line 384, in bar
     Adapting x to FileName
 etc.

More thought should be devoted to this, because it would be very valuable.  
There should also be a way to know why a given call to adapt() returned an
unexpected object even if it didn't crash.  Given the nature of the problem,
it seems not only nice but essential to have a good way to debug it.

 How can we express your thoughts so that they fit into a narrative
 describing how adapt() should and should not be used?

I'm attaching a longer, hopefully easier reformulation...


Armin

A view on adaptation


Adaptation is a tool to help exchange data between two pieces of code; a very 
powerful tool, even.  But it is easy to misunderstand its aim, and unlike other 
features of a programming language, misusing adaptation will quickly lead to 
intricate debugging nightmares.  Here is the point of view on adaptation which 
I defend, and which I believe should be kept in mind.


Let's take an example.  You want to call a function in the Python standard 
library to do something interesting, like pickling (saving) a number of 
instances to a file with the ``pickle`` module.  You might remember that there 
is a function ``pickle.dump(obj, file)``, which saves the object ``obj`` to the 
file ``file``, and another function ``pickle.load(file)`` which reads back the 
object from ``file``.  (Adaptation doesn't help you to figure this out; you 
have to be at least a bit familiar with the standard library to know that this 
feature exists.)

Let's take the example of ``pickle.load(file)``.  Even if you remember 
it, you might still have to look up the documentation if you don't remember 
exactly what kind of object ``file`` is supposed to be.  Is it an open file 
object, or a file name?  All you know is that ``file`` is meant to somehow 
be, or stand for, the file.  Now there are at least two commonly used ways 
to stand for a file: the file path as a string, or the file object directly.  
Actually, it might even not be a file at all, but just a string containing the 
already-loaded binary data.  This gives a third alternative.

The point here is that the person who wrote the ``pickle.load(x)`` function 
also knew that the argument was supposed to stand for a source of binary data 
to read from, and he had to make a choice for one of the three common 
representations: file path, file object, or raw data in a string.  The source 
of binary data is what both the author of the function and you would easily 
agree on; the formal choice of representation is more arbitrary.  This is where 
adaptation is supposed to help.  With adaptation properly set up, you can pass 
to ``pickle.load()`` either a file name or a file object, or possibly anything 
else that reasonably stands for an input file, and it will just work.


But to understand it more fully, we need to look a bit closer.  Imagine 
yourself as the author of functions like ``pickle.load()`` and 
``pickle.dump()``.  You decide if you want to use adaptation or not.  
Adaptation should be used in this case, and ONLY in this kind of case: there is 
some generally agreed concept on what a particular object -- typically an 
argument of function -- should represent, but not on precisely HOW it should 
represent it.  If your function expects a place to write the data to, it can 
typically be an open file or just a file name; in this case, the function would 
be defined like this::

    def dump_data_into(target):
        file = adapt(target, TargetAsFile)
        file.write('hello')

with ``TargetAsFile`` being suitably defined -- i.e. having a correct 
``__adapt__()`` special method -- so that the adaptation will accept either a 
file or a string, and in the latter case open the named file for writing.
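
For concreteness, here is one possible way such a ``TargetAsFile`` could be
written.  It is a simplified sketch of the PEP 246 ``__adapt__()`` machinery,
not an official API; a full ``adapt()`` would, among other things, also
consult the object's ``__conform__()`` method::

    def adapt(obj, protocol):
        adapter = getattr(protocol, '__adapt__', None)
        if adapter is not None:
            result = adapter(obj)
            if result is not None:
                return result
        raise TypeError("cannot adapt %r to %r" % (obj, protocol))

    class TargetAsFile(object):
        """The argument stands for 'a place to write data into'; the
        result of the adaptation is always an open, writable file."""
        def __adapt__(obj):
            if hasattr(obj, 'write'):    # already a file-like object
                return obj
            if isinstance(obj, str):     # a file name: open it for writing
                return open(obj, 'w')
            return None                  # adaptation refused
        __adapt__ = staticmethod(__adapt__)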

Surely, you think that ``TargetAsFile`` is a strange name for an interface if 
you think about adaptation in terms of interfaces.  Well, for the purpose of 
this argument, don't.  Forget about interfaces.  This special object 
``TargetAsFile`` means not one but two things at once: that the input argument 
``target`` represents the place into which data should be written; and that the 
result ``file`` of the adaptation, as used within the function itself, must be more 
precisely a file object.

This two-level distinction is important to keep in mind, especially when 
adapting built-in objects like strings and files.  For example, the adaptation 
that would be used in ``pickle.load(source)`` is more difficult to get right, 
because there are two common ways that a string object can stand for a source 
of data: either as the name of a file, or as raw binary data.  It is not 
possible to distinguish 

Re: [Python-Dev] Exceptions *must*? be old-style classes?

2005-01-17 Thread Armin Rigo
Hi,

On Fri, Jan 14, 2005 at 07:20:31PM -0500, Jim Jewett wrote:
 The base of the Exception hierarchy happens to be a classic class.
 But why are they required to be classic?

For reference, PyPy doesn't have old-style classes at all so far, so we had to
come up with something about exceptions.  After some feedback from python-dev
it appears that the following scheme works reasonably well.  Actually it's
surprizing how little problems we actually encountered by removing the
old-/new-style distinction (particularly when compared with the extremely
obscure workarounds we had to go through in PyPy itself, e.g. precisely
because we wanted exceptions that are member of some (new-style) class
hierarchy).

Because a bit of Python code tells more than long and verbose explanations,
here it is:

def app_normalize_exception(etype, value, tb):
    """Normalize an (exc_type, exc_value) pair:
    exc_value will be an exception instance and exc_type its class.
    """
    # mistakes here usually show up as infinite recursion, which is fun.
    while isinstance(etype, tuple):
        etype = etype[0]
    if isinstance(etype, type):
        if not isinstance(value, etype):
            if value is None:
                # raise Type: we assume we have to instantiate Type
                value = etype()
            elif isinstance(value, tuple):
                # raise Type, Tuple: assume Tuple contains the
                # constructor args
                value = etype(*value)
            else:
                # raise Type, X: assume X is the constructor argument
                value = etype(value)
        # raise Type, Instance: let etype be the exact type of value
        etype = value.__class__
    elif type(etype) is str:
        # XXX warn -- deprecated
        if value is not None and type(value) is not str:
            raise TypeError("string exceptions can only have a string value")
    else:
        # raise X: we assume that X is an already-built instance
        if value is not None:
            raise TypeError("instance exception may not have a "
                            "separate value")
        value = etype
        etype = value.__class__
        # for the sake of language consistency we should not allow
        # things like 'raise 1', but it's probably fine (i.e.
        # not ambiguous) to allow them in the explicit form 'raise int, 1'
        if not hasattr(value, '__dict__') and not hasattr(value, '__slots__'):
            raise TypeError("raising built-in objects can be ambiguous, "
                            "use 'raise type, value' instead")
    return etype, value, tb
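
For illustration, a couple of calls showing how the different 'raise' forms
get normalized (assuming an interpreter where exception classes are new-style
classes, as in PyPy):

    etype, value, tb = app_normalize_exception(ValueError, "bad input", None)
    assert etype is ValueError and isinstance(value, ValueError)

    etype, value, tb = app_normalize_exception(ValueError("built"), None, None)
    assert etype is ValueError and value.args == ("built",)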


Armin
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Exceptions *must*? be old-style classes?

2005-01-17 Thread Armin Rigo
Hi Guido,

On Mon, Jan 17, 2005 at 07:27:33AM -0800, Guido van Rossum wrote:
 That is stricter than classic Python though -- it allows the value to
 be anything (and you get the value back unadorned in the except 's',
 x: clause).

Thanks for the note !


Armin
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com

