Re: [Python-Dev] Hg: inter-branch workflow

2011-03-17 Thread Reid Kleckner
On Thu, Mar 17, 2011 at 9:33 AM, Antoine Pitrou solip...@pitrou.net wrote:
 On Thu, 17 Mar 2011 09:24:26 -0400
 R. David Murray rdmur...@bitdance.com wrote:

 It would be great if rebase did work with share, that would make a
 push race basically a non-issue for me.

 rebase as well as strip destroy some history, meaning some of your
 shared clones may end up having their working copy based on a
 non-existent changeset. I'm not sure why rebase would be worse than
 strip in that regard, though.

I don't think anyone has laid out why destroying history is considered
bad by some, so I thought I'd plug this post:
http://paul.stadig.name/2010/12/thou-shalt-not-lie-git-rebase-ammend.html

Essentially, let's say I have a handful of commits hacking on C code.
While I wrote them, someone changed a C API from under me and pushed
their change.  However, in the last change, I remove my dependence on
this API.  I pull, rebase, rebuild and test.  The tests pass in the
latest commit, so I push.  But now if someone tries to go back to
those intermediate commits (say, searching for the introduction of a
regression), they will find a broken build.

It boils down to when you alter history, at each altered commit you
have some source tree state for which you haven't built and run the
tests.

On the flipside, in the case of a single commit, it's easy to pull,
rebase, rerun tests, and then push.  Running the tests takes a while
so you open yourself to another push race, though.

98% of the time, if you don't actually have merge conflicts, applying
your change over someone else's without testing will work, so I feel
like rebasing a single commit without testing is no big deal.  On the
off chance that it breaks, the buildbots will find out.  Just don't
rebase a whole feature branch of commits, leave the final merge in.

Reid


Re: [Python-Dev] cpython: Add a 'timeout' argument to subprocess.Popen.

2011-03-14 Thread Reid Kleckner
On Mon, Mar 14, 2011 at 12:36 PM, Antoine Pitrou solip...@pitrou.net wrote:
 On Mon, 14 Mar 2011 17:16:11 +0100
 reid.kleckner python-check...@python.org wrote:
 @@ -265,34 +271,43 @@
        generates enough output to a pipe such that it blocks waiting
        for the OS pipe buffer to accept more data.

 +   .. versionchanged:: 3.2
 +      *timeout* was added.

 Unless you plan to borrow someone's time machine, this should probably
 be 3.3.

Fixed soon after in http://hg.python.org/cpython/rev/72e49cb7fcf5 .

Reid


Re: [Python-Dev] public visibility of python-dev decisions before it's too late (was: PyCObject_AsVoidPtr removed from python 3.2 - is this documented?)

2011-03-14 Thread Reid Kleckner
On Mon, Mar 14, 2011 at 6:30 PM, Lennart Regebro rege...@gmail.com wrote:
 On Wed, Mar 9, 2011 at 01:15, Stefan Behnel stefan...@behnel.de wrote:
 I can confirm that the Cython project was as surprised of the PyCapsule
 change in Python 3.2 as (I guess) most other users, and I would claim that
 we are a project with one of the highest probabilities of being impacted by
 C-API changes.

 And so was I. I discovered it today.

 And personally, I don't mind being surprised. And I'm sorry I didn't
 have time to try out the zope.* packages that support Python 3 on 3.2,
 but then again the difference was supposed to be between 2.x and 3.x.
 I didn't expect 3.2 to suddenly be backwards incompatible. Of the
 eight packages that currently support 3.1 (in trunk), two packages do
 not compile, and one gets massive test failures (which may only be
 test failures, and not actual failures). That is *not* good. Perhaps
 there is an easy way to map the APIs with #defines, but if this is the
 case, why was the change done in the first place?

I don't know how your code works, but handling either type from C
seems very straightforward to me.  You can simply use #ifdef
Py_COBJECT_H to see if the cobject.h header was pulled into Python.h.
Similarly for Py_CAPSULE_H.  All you lose is that if you do get a
PyCObject, there is no way of knowing if the void pointer is of the
right type.

 Many projects, not only the Zope Toolkit, need to support a lot of
 versions. The Zope component architecture currently supports 2.4, 2.5
 and 2.6 and is expected to work on 2.7. I don't know if 2.4 or 2.5 can
 be dropped, but it definitely will be *years* until we can drop
 support for 2.6.  But if I move the PyCObject API to the PyCapsule
 API, the zope packages will **only work on Python 2.7 and 3.2**. This
 is obviously not an option. If I do *not* switch, I can't support
 Python 3.2. That's bad.

 **We can't deprecate an API in one version and drop the API in the
 next. This is not acceptable. The deprecation period must be much
 longer!**

Surely, you must be joking.  Python already has a long release cycle.
I'm not familiar with this feature, but suppose it is decided that
there is sufficient cause to remove a feature.  First, we have to wait
until the next release to deprecate it.  Then we have to wait yet one
more release to remove it.  With an 18-month release cycle, that's 27
months on average: nine months, on average, until the deprecating
release ships, plus a full 18-month cycle before removal.  To me, that
is a very long time to wait.

 In fact, since the deprecation in the Python 2 line happened in 2.7,
 the deprecation period of this API in practice was between July 3rd
 2010 and February 20 2011. That is a deprecation period of somewhat
 longer than seven months. Nobody obviously thought 2.6 was out of
 practical use by now, so why did you decide to remove one of its
 APIs?

PyCObject was deprecated in 3.1, as well as 2.7.
http://docs.python.org/release/3.1.3/c-api/cobject.html#PyCObject

Reid


Re: [Python-Dev] Python3 regret about deleting list.sort(cmp=...)

2011-03-12 Thread Reid Kleckner
They should be able to use a slotted cmp_to_key style class:
http://docs.python.org/howto/sorting.html

That will allocate 1 Python object with no dict per key, but that
might not be good enough.
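
For reference, here's a minimal sketch of the kind of adapter I mean,
modeled on the recipe in the sorting HOWTO (class and function names
here are just illustrative):

    def cmp_to_key(mycmp):
        class K(object):
            __slots__ = ['obj']   # no per-instance dict: one small object per key
            def __init__(self, obj):
                self.obj = obj
            def __lt__(self, other):
                return mycmp(self.obj, other.obj) < 0
            def __eq__(self, other):
                return mycmp(self.obj, other.obj) == 0
        return K

    # usage: sorted(lines, key=cmp_to_key(compare_lines))

list.sort() only needs __lt__, so each key costs exactly one small K
instance and the comparator runs only during the sort itself.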

Reid

On Sat, Mar 12, 2011 at 3:44 PM, Guido van Rossum gu...@python.org wrote:
 I was just reminded that in Python 3, list.sort() and sorted() no
 longer support the cmp (comparator) function argument. The reason is
 that the key function argument is always better. But now I have a
 nagging doubt about this:

 I recently advised a Googler who was sorting a large dataset and
 running out of memory. My analysis of the situation was that he was
 sorting a huge list of short lines of the form "shortstring,integer"
 with a key function that returned a tuple of the form (shortstring,
 integer). Using the key function argument, in addition to N short
 string objects, this creates N tuples of length 2, N more slightly
 shorter string objects, and N integer objects. (Not to count a
 parallel array of N more pointers.) Given the object overhead, this
 dramatically increased the memory usage. It so happens that in this
 particular Googler's situation, memory is constrained but CPU time is
 not, and it would be better to parse the strings over and over again
 in a comparator function.

 But in Python 3 this solution is no longer available. How bad is that?
 I'm not sure. But I'd like to at least get the issue out in the open.

 --
 --Guido van Rossum (python.org/~guido)


Re: [Python-Dev] Python3 regret about deleting list.sort(cmp=...)

2011-03-12 Thread Reid Kleckner
On Sat, Mar 12, 2011 at 4:58 PM, Nick Coghlan ncogh...@gmail.com wrote:
 On Sat, Mar 12, 2011 at 4:50 PM, Reid Kleckner reid.kleck...@gmail.com 
 wrote:
 They should be able to use a slotted cmp_to_key style class:
 http://docs.python.org/howto/sorting.html

 That will allocate 1 Python object with no dict per key, but that
 might not be good enough.

 Tuples are already slotted, so that isn't likely to help in this case.

It's three allocations vs. one.  The first is tuple + str + int, while
the adapter is just one object.  I'm not sure how it eventually shakes
out, though.

That said, it's still worse than Python 2, which is zero allocations.  :)

Reid


Re: [Python-Dev] Const-correctness in C-API Object Protocol

2011-02-22 Thread Reid Kleckner
On Tue, Feb 22, 2011 at 2:09 PM, Eric Smith e...@trueblade.com wrote:
 Also changing it now would be a giant hassle, leading to so-called const
 poisoning where many, many APIs need to be changed before everything would
 again work.

The poisoning will not break any users of the API, though, since they
can pass const and non-const pointers.  Internally Python would have
to go through and add const keywords as appropriate when passing
strings around.  IMO it's worth it to not cause this warning for
users.

Reid


Re: [Python-Dev] News of the faulthandler project

2011-02-03 Thread Reid Kleckner
On Thu, Feb 3, 2011 at 8:05 AM, Victor Stinner
victor.stin...@haypocalc.com wrote:
  - SIGABRT is not handled

Why not?  That seems useful for debugging assertion failures, although
most C code in Python raises exceptions rather than asserting.

I'm guessing it's because it aborts the process after printing the
backtrace.  You could just clear the signal handler before aborting.

Reid


Re: [Python-Dev] Possible optimization for LOAD_FAST ?

2011-01-04 Thread Reid Kleckner
On Tue, Jan 4, 2011 at 8:21 AM, Alex Gaynor alex.gay...@gmail.com wrote:
 Ugh, I can't be the only one who finds these special cases to be a little
 nasty?
 Special cases aren't special enough to break the rules.
 Alex

+1, I don't think nailing down a few builtins is that helpful for
optimizing Python.  Anyone attempting to seriously optimize Python is
going to need to use more general techniques that apply to
non-builtins as well.

In unladen swallow (I'm sure something similar is done in PyPy) we
have some infrastructure for watching dictionaries for changes, and in
particular we tend to watch the builtins and module dictionaries.
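
The real mechanism is C-level hooks on PyDict itself, but in spirit
it's something like this Python sketch (the names are made up):

    class WatchedDict(dict):
        def __init__(self, *args, **kwargs):
            dict.__init__(self, *args, **kwargs)
            self.watchers = []            # callbacks registered by the JIT
        def __setitem__(self, key, value):
            dict.__setitem__(self, key, value)
            for callback in self.watchers:
                callback(key)             # e.g. invalidate code specialized on key
        def __delitem__(self, key):
            dict.__delitem__(self, key)
            for callback in self.watchers:
                callback(key)

Code compiled under the assumption that, say, the builtins entry for
'len' hasn't been rebound registers a watcher; if anyone rebinds len,
the callback throws the specialized code away and execution falls back
to the generic path.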

Reid


Re: [Python-Dev] PyPy 1.4 released

2010-11-26 Thread Reid Kleckner
Congratulations!  Excellent work.

Reid

On Fri, Nov 26, 2010 at 1:23 PM, Maciej Fijalkowski fij...@gmail.com wrote:
 ===
 PyPy 1.4: Ouroboros in practice
 ===

 We're pleased to announce the 1.4 release of PyPy. This is a major 
 breakthrough
 in our long journey, as PyPy 1.4 is the first PyPy release that can translate
 itself faster than CPython.  Starting today, we are using PyPy more for
 our every-day development.  So may you :) You can download it here:

    http://pypy.org/download.html

 What is PyPy
 

 PyPy is a very compliant Python interpreter, almost a drop-in replacement
 for CPython. It's fast (`pypy 1.4 and cpython 2.6`_ comparison)

 Among its new features, this release includes numerous performance 
 improvements
 (which made fast self-hosting possible), a 64-bit JIT backend, as well
 as serious stabilization. As of now, we can consider the 32-bit and 64-bit
 linux versions of PyPy stable enough to run `in production`_.

 Numerous speed achievements are described on `our blog`_. Normalized speed
 charts comparing `pypy 1.4 and pypy 1.3`_ as well as `pypy 1.4 and cpython 
 2.6`_
 are available on benchmark website. For the impatient: yes, we got a lot 
 faster!

 More highlights
 ===

 * PyPy's built-in Just-in-Time compiler is fully transparent and
  automatically generated; it now also has very reasonable memory
  requirements.  The total memory used by a very complex and
  long-running process (translating PyPy itself) is within 1.5x to
  at most 2x the memory needed by CPython, for a speed-up of 2x.

 * More compact instances.  All instances are as compact as if
  they had ``__slots__``.  This can give programs a big gain in
  memory.  (In the example of translation above, we already have
  carefully placed ``__slots__``, so there is no extra win.)

 * `Virtualenv support`_: now PyPy is fully compatible with
 virtualenv_: note that
  to use it, you need a recent version of virtualenv (>= 1.5).

 * Faster (and JITted) regular expressions - huge boost in speeding up
  the `re` module.

 * Other speed improvements, like JITted calls to functions like map().

 .. _virtualenv: http://pypi.python.org/pypi/virtualenv
 .. _`Virtualenv support`:
 http://morepypy.blogspot.com/2010/08/using-virtualenv-with-pypy.html
 .. _`in production`:
 http://morepypy.blogspot.com/2010/11/running-large-radio-telescope-software.html
 .. _`our blog`: http://morepypy.blogspot.com
 .. _`pypy 1.4 and pypy 1.3`:
 http://speed.pypy.org/comparison/?exe=1%2B41,1%2B172&ben=1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20&env=1&hor=false&bas=1%2B41&chart=normal+bars
 .. _`pypy 1.4 and cpython 2.6`:
 http://speed.pypy.org/comparison/?exe=2%2B35,1%2B172&ben=1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20&env=1&hor=false&bas=2%2B35&chart=normal+bars

 Cheers,

 Carl Friedrich Bolz, Antonio Cuni, Maciej Fijalkowski,
 Amaury Forgeot d'Arc, Armin Rigo and the PyPy team


Re: [Python-Dev] [Python-checkins] r86467 - in python/branches/py3k: Doc/library/logging.rst Lib/logging/__init__.py Misc/NEWS

2010-11-15 Thread Reid Kleckner
On Mon, Nov 15, 2010 at 8:24 AM, Nick Coghlan ncogh...@gmail.com wrote:
 On Mon, Nov 15, 2010 at 7:33 AM, vinay.sajip python-check...@python.org 
 wrote:

 +   .. attribute:: stack_info
 +
 +      Stack frame information (where available) from the bottom of the stack
 +      in the current thread, up to and including the stack frame of the
 +      logging call which resulted in the creation of this record.
 +

 Interesting - my mental model of the call stack is that the outermost
 frame is the top of the stack and the stack grows downwards as calls
 are executed (there are a few idioms like recursive descent, the
 intuitive parallel with inner functions being lower in the stack
 than outer functions as well as the order in which Python prints
 stack traces that reinforce this view).

Probably because the C stack tends to grow down on most
architectures, while most stack data structures are implemented over
arrays and hence grow upwards from 0.  Depending on the author's
background, they probably use one mental model or the other.

Reid


Re: [Python-Dev] Locked-in defect? 32-bit hash values on 64-bit builds

2010-10-15 Thread Reid Kleckner
On Fri, Oct 15, 2010 at 4:10 PM, Raymond Hettinger
raymond.hettin...@gmail.com wrote:

 On Oct 15, 2010, at 10:40 AM, Benjamin Peterson wrote:

 I think the panic is a bit of an overreaction. PEP 384 has still not
 been accepted, and I haven't seen a final decision about freezing the
 ABI in 3.2.

 Not sure where the panic seems to be.
 I just want to make sure the ABI doesn't get frozen
 before hash functions are converted to Py_ssize_t.

 Even if the ABI is not frozen at 3.2 as Martin has proposed,
 it would still be great to get this in for 3.2

 Fortunately, this doesn't affect everyday users, it only
 arises for very large datasets.  When it does kick-in though
 (around 2**32 entries), the degradation is not small, it
 is close to catastrophic, making dicts/sets unusable
 where O(1) lookups become O(n) with a *very* large n.

Just to be clear, hashing right now just uses the C long type.  The
only major platform where sizeof(long) < sizeof(Py_ssize_t) is 64-bit
Windows, right?  And the change being proposed is to make tp_hash
return a Py_ssize_t instead of a long, and then make all the clients
of tp_hash compute with Py_ssize_t instead of long?

Reid


Re: [Python-Dev] View tracker patches with ViewVC?

2010-07-27 Thread Reid Kleckner
On Tue, Jul 27, 2010 at 7:56 AM, Terry Reedy tjre...@udel.edu wrote:
 I also suggest that, instead of uploading the patch to Rietveld
 yourself, you can ask the submitter to do it.

 That adds another step.

 Let me repeat me original question: Would it be feasible to add a [view]
 button that I could click to get a nice view of a patch, such as provided by
 ViewVC?

How are you proposing to use ViewVC to view the patch?  I'd think that
you'd have to commit it first, unless it has some functionality that
I'm unaware of.

Anyway, one uses Rietveld mostly via upload.py, not the form above.
Instead of running 'svn diff' + uploading the patch file in a web
browser and having several versions accumulate, you run `upload.py -i
rietveld issue #` and it uploads the diff to rietveld.  Rietveld's
diff view is quite nice.

Would the ViewVC functionality you are proposing look like this?
http://svn.python.org/view/python/branches/release27-maint/Demo/classes/Vec.py?r1=82503&r2=83175&pathrev=83175

Rietveld's differ is smarter (it does intra-line diffs) and the inline
comments there are a lot better than pasting the diff into an email.

It's true that the workflow isn't really described anywhere, so I'll
try to outline it in detail here.

Author's steps to upload a patch and create an issue:
- Discuss issue in the tracker
- Hack away at solution in svn checkout
- When done, run `upload.py` (no args creates a new issue and prints URL)
- When prompted, enter Google account credentials
- When prompted, enter the issue title you want to give it, probably
by pasting in the tracker title plus IssueXXX
- I always check the diff on Rietveld to make sure it looks good to me
before sending
- Go to the URL printed and click 'Start Review' to send mail

Reviewer's steps to add review comments:
- Receive mail, click URL to open issue
- Click the link to the first file, and read through the colored diff,
using 'n' to scroll down and 'j' to go to the next file.
- To make a comment, double click the line you want to comment on.
This is the most unintuitive part to beginners.
  - Enter the comment in the textbox that appears.
- Repeat until done reading the diff, then go back to the issue page
and click 'Publish+Mail Comments'

Author's steps to respond to comments:
- Open the files in the issue
- Read through the comments ('N' skips from comment to comment)
- Apply fixes, reply to each comment
- Run `upload.py -i <issue #>` to add a new patch with your fixes.
- Reply by clicking 'Publish+Mail Comments' to let your reviewer know
that you've addressed the comments

Repeat ad nauseam until the reviewer is happy, then commit.

===

Not sure why I spelled that all out when these docs exist:
http://code.google.com/p/rietveld/wiki/CodeReviewHelp

Hopefully my outline reflects the Python workflow more accurately, though.  :)

Reid


Re: [Python-Dev] Python 3 optimizations...

2010-07-23 Thread Reid Kleckner
On Fri, Jul 23, 2010 at 1:58 AM, stefan brunthaler
ste...@brunthaler.net wrote:
 Do I understand correctly that you modify the byte code of modules/functions
 at runtime?

 Yes. Quickening is a runtime-only optimization technique that rewrites
 instructions from a generic instruction to an optimized derivative
 (originally for the Java virtual machine). It is completely hidden from
 the compiler and has no other dependencies than the interpreter
 dispatch routine itself.

How do you generate the specialized opcode implementations?
Presumably that is done ahead of time, or you'd have to use a JIT,
which is what you're avoiding.

I'm guessing from your comments below about cross-module inlining that
you generate a separate .c file with the specialized opcode bodies and
then call through to them via a table of function pointers indexed by
opcode, but I could be totally wrong.  :)

 Another benefit of using my technique is that a compiler could decide
 to inline all of the functions of the optimized derivatives (e.g., the
 float_add function call inside my FLOAT_ADD interpreter instruction).
 Unfortunately, however, gcc currently does not allow for cross-module
 inlining (AFAIR). (Preliminary tests with manually changing the
 default inlining size for ceval.c resulted in speedups of up to 1.3 on
 my machine, so I think inlinling of function bodies for the optimized
 derivatives would boost performance noticeably.)

There are a variety of solutions to getting cross-module inlining
these days.  Clang+LLVM support link-time optimization (LTO) via a
plugin for gold.  GCC has LTO and LIPO as well.

 Such an approach would also be very useful for Cython. Think of a profiler
 that runs a program in CPython and tells you exactly what static type
 annotations to put where in your Python code to make it compile to a fast
 binary with Cython. Or, even better, it could just spit out a .pxd file that
 you drop next to your .py file and that provides the static type information
 for you.

This would be interesting.  We (obviously) have similar
instrumentation in unladen swallow to gather type feedback.  We talked
with Craig Citro about finding a way to feed that back to Cython for
exactly this reason, but we haven't really pursued it.

Reid


Re: [Python-Dev] Python 3 optimizations...

2010-07-23 Thread Reid Kleckner
On Fri, Jul 23, 2010 at 11:26 AM, stefan brunthaler
ste...@brunthaler.net wrote:
 I'm guessing from your comments below about cross-module inlining that
 you generate a separate .c file with the specialized opcode bodies and
 then call through to them via a table of function pointers indexed by
 opcode, but I could be totally wrong.  :)

 No, dead on ;)
 Probably a small example from the top of my head illustrates what is going on:

 TARGET(FLOAT_ADD):
  w= POP();
  v= TOP();
  x= PyFloat_Type.tp_as_number->nb_add(v, w);
  SET_TOP(x);
  if (x != NULL) FAST_DISPATCH();
  break;

 And I extend the standard indirect threaded code dispatch table to
 support the FLOAT_ADD operation.

I think I was wrong, but now I understand.  The inlining you want is
to get the nb_add body, not the opcode body.

The example you've given brings up a correctness issue.  It seems
you'd want to add checks that verify that the operands w and v are
both floats, and jump to BINARY_ADD if the guard fails.  It would
require reshuffling the stack operations to defer the pop until after
the check, but it shouldn't be a problem.

  This would be interesting.  We (obviously) have similar
 instrumentation in unladen swallow to gather type feedback.  We talked
 with Craig Citro about finding a way to feed that back to Cython for
 exactly this reason, but we haven't really pursued it.

 Ok; I think it would actually be fairly easy to use the type
 information gathered at runtime by the quickening approach. Several
 auxiliary functions for dealing with these types could be generated by
 my code generator as well. It is probably worth looking into this,
 though my current top priority is my PhD research, so I cannot promise
 to be able to allocate vast amounts of time for such endeavours.

I think you also record (via gdb) exactly the information that we
record.  I now see three consumers of type feedback from the CPython
interpreter: you or any quickening approach, Cython, and Unladen
Swallow.  It might be worth converging on a standard way to record
this information and serialize it so that it can be analyzed offline.

Our feedback recording mechanism currently uses LLVM data structures,
but the point of quickening is to avoid that kind of dependency, so
we'd need to rewrite it before it would really be useful to you.

Reid


Re: [Python-Dev] New regex module for 3.2?

2010-07-22 Thread Reid Kleckner
On Thu, Jul 22, 2010 at 7:42 AM, Georg Brandl g.bra...@gmx.net wrote:
 Am 22.07.2010 14:12, schrieb Nick Coghlan:
 On Thu, Jul 22, 2010 at 9:34 PM, Georg Brandl g.bra...@gmx.net wrote:
 So, I thought there wasn't a difference in performance for this use case
 (which is compiling a lot of regexes and matching most of them only a
 few times in comparison).  However, I found that looking at the regex
 caching is very important in this case: re._MAXCACHE is by default set to
 100, and regex._MAXCACHE to 1024.  When I set re._MAXCACHE to 1024 before
 running the test suite, I get times around 18 (!) seconds for re.

It might be fun to do a pygments-based macro benchmark.  Just have it
syntax highlight itself and time it.

 Sure -- I don't think this is a showstopper for regex.  However if we don't
 include regex in a future version, we might think about increasing MAXCACHE
 a bit, and maybe not clear the cache when it reaches its max length, but
 rather remove another element.

+50 for the last idea.  Collin encountered a problem two summers ago
in Mondrian where we were relying on the regex cache and were
surprised to find that it cleared itself after filling up, rather than
using LRU or random eviction.
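
The fix is tiny, too.  A sketch of a compile cache that evicts a
single entry instead of wiping everything (_real_compile stands in for
the actual compilation work):

    _MAXCACHE = 1024
    _cache = {}

    def _compile(pattern, flags):
        key = (pattern, flags)
        try:
            return _cache[key]
        except KeyError:
            pass
        compiled = _real_compile(pattern, flags)
        if len(_cache) >= _MAXCACHE:
            _cache.popitem()   # drop one (arbitrary) entry, don't clear the cache
        _cache[key] = compiled
        return compiled

Even arbitrary eviction beats clearing: a workload that cycles through
_MAXCACHE + 1 patterns degrades gracefully instead of recompiling the
whole working set on every pass.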

Reid


Re: [Python-Dev] Set the namespace free!

2010-07-22 Thread Reid Kleckner
On Thu, Jul 22, 2010 at 11:49 AM, Alexander Belopolsky
alexander.belopol...@gmail.com wrote:
 On Thu, Jul 22, 2010 at 12:53 PM,  gregory.smi...@sympatico.ca wrote:
 I'm very amused by all the jokes about turning python into perl, but there's
 a good idea here that doesn't actually require that...

 No, there isn't.  And both '&' and '|' are valid python operators that
 cannot be used this way.

 If you want ::, I think you can find a language or two to your liking. :-)

A syntax for escaping reserved words used as identifiers is a worthy
and interesting idea, but I don't think it's worth adding to Python.
Appending '_' in the kinds of cases described by the OP is what's
suggested by the style guide, and seems acceptable to me.

Prefixing all reserved words with punctuation as suggested by the OP
is, of course, completely ludicrous.  He might just be trolling.

Reid

P.S. I'm not trolling!  http://www.youtube.com/watch?v=6bMLrA_0O5I

P.P.S.  Sorry, I couldn't help it.


Re: [Python-Dev] Python-dev signal-to-noise processing question

2010-07-21 Thread Reid Kleckner
On Wed, Jul 21, 2010 at 4:43 AM, Martin v. Löwis mar...@v.loewis.de wrote:
 Unfortunately (?) the question also revealed a lack of understanding
 of a fairly basic concept. IIUC, he wanted to know how Python
 handles SIGKILL, when the whole point of SIGKILL is that you cannot
 handle it. So he shouldn't have been surprised that he couldn't find
 a place in Python where it's handled.

No, you misunderstood.  He knew that one cannot set a SIGKILL signal
handler.  He just wanted to find the code in CPython responsible for
turning that error into an exception for the purposes of giving a
tutorial on signals.

Reid


[Python-Dev] Unladen swallow status

2010-07-21 Thread Reid Kleckner
On Wed, Jul 21, 2010 at 8:11 AM, Tim Golden m...@timgolden.me.uk wrote:
 Brett suggested that
 the Unladen Swallow merge to trunk was waiting for some work to complete
 on the JIT compiler and Georg, as release manager for 3.2, confirmed that
 Unladen Swallow would not be merged before 3.3.

Yeah, this has slipped.  I have patches that need review, and Jeff and
Collin have been distracted with other work.  Hopefully when one of
them gets around to that, I can proceed with the merge without
blocking on them.

Reid


[Python-Dev] Timeouts for subprocess module

2010-07-21 Thread Reid Kleckner
Hi python-dev,

I've been working through a patch to add timeouts to the subprocess module:
http://bugs.python.org/issue5673

It's gotten a fair amount of review, and I'm planning to commit it.
Since it's my first contribution, I'm taking Georg's suggestion to
send mail to python-dev to see if anyone objects.  If not, I'll commit
it to the py3k branch tomorrow night.
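
For those who haven't read the patch, the interface is roughly this
(illustrative; the exact spelling is whatever the review settled on):

    import subprocess

    p = subprocess.Popen(["make", "test"])
    try:
        p.wait(timeout=60)          # seconds; raises if the child is still running
    except subprocess.TimeoutExpired:
        p.kill()                    # the timeout does *not* kill the child for you
        p.wait()

communicate() grows the same keyword argument, so the time spent
feeding stdin and draining stdout/stderr can be bounded as well.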

Reid


Re: [Python-Dev] Does trace modules have a unit test?

2010-07-20 Thread Reid Kleckner
On Tue, Jul 20, 2010 at 10:51 AM, Eli Bendersky eli...@gmail.com wrote:
 As Terry wrote in the beginning of this thread, Lib/test/test_trace.py
 currently tests the sys.settrace module, so the tests of trace.py
 should find a new home. Does Lib/test/test_trace_module.py make sense
 or is something else preferable?

IMO you should just rename test_trace.py to test_settrace.py, and put
the trace.py tests in test_trace.py.

Reid


Re: [Python-Dev] Function Operators

2010-07-18 Thread Reid Kleckner
Usual disclaimer: python-dev is for the development *of* python, not
*with*.  See python-list, etc.

That said, def declares new functions or methods, so you can't put
arbitrary expressions in there like type(f).__mul__.

You can usually assign to things like that though, but in this case
you run into trouble, as shown below:

>>> def func(): pass
...
>>> type(func)
<class 'function'>
>>> def compose(f, g):
...     return lambda x: f(g(x))
...
>>> type(func).__mul__ = compose
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: can't set attributes of built-in/extension type 'function'

As the interpreter says, it doesn't like people mucking with operator
slots on built-in types.
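
If you really want the notation, the workaround is to wrap functions
in a class you control instead of mutating the function type.  A quick
sketch:

    class Composable(object):
        def __init__(self, f):
            self.f = f
        def __call__(self, *args, **kwargs):
            return self.f(*args, **kwargs)
        def __mul__(self, g):            # (f * g)(x) == f(g(x))
            return Composable(lambda *a, **kw: self.f(g(*a, **kw)))
        def __pow__(self, n):            # f ** n == f * f * ... * f (n times)
            result = Composable(lambda x: x)
            for _ in range(n):
                result = result * self
            return result

    # inc = Composable(lambda x: x + 1)
    # (inc ** 3)(0)  # -> 3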

Finally, if you like coding in that very functional style, I'd
recommend Haskell or other ML derived languages.  Python doesn't
support that programming style very well by choice.

Reid

On Sun, Jul 18, 2010 at 8:34 AM, Christopher Olah
christopherolah...@gmail.com wrote:
 Dear python-dev,

 In mathematical notation, f*g = z->f(g(z)) and f^n = f*f*f... (n
 times). I often run into situations in python where such operators
 could result in cleaner code. Eventually, I decided to implement it
 myself and see how it worked in practice.

 However, my intuitive implementation [1] doesn't seem to work. In
 particular, despite what it says in function's documentation, function
 does not seem to be in __builtin__. Furthermore, when I try to
 implement this through type(f) (where f is a function) I get invalid
 syntax errors.

 I hope I haven't made some trivial error; I'm rather inexperienced as
 a pythonist.

 Christopher Olah


 [1] Sketch:

 def __builtin__.function.__mul__(self, f):
    return lambda x: self(f(x))

 def __builtin__.function.__pow__(self, n):
    return lambda x: reduce(lambda a,b: [f for i in range(n)]+[x])


Re: [Python-Dev] commit privs

2010-07-13 Thread Reid Kleckner
Thanks for the support!

Georg Brandl authorized my SSH keys for SVN access.

Reid

On Tue, Jul 13, 2010 at 7:29 AM, Gregory P. Smith g...@krypto.org wrote:

 On Sun, Jul 11, 2010 at 9:28 AM, Antoine Pitrou solip...@pitrou.net wrote:

 On Sun, 11 Jul 2010 13:23:13 +
 Reid Kleckner reid.kleck...@gmail.com wrote:
 
  I'm also expecting to be doing more work merging unladen-swallow into
  the py3k-jit branch, so I was wondering if I could get commit
  privileges for that.

 It sounds good to me. Also, thanks for your threading patches!

 Regards

 +1



Re: [Python-Dev] New regex module for 3.2?

2010-07-13 Thread Reid Kleckner
On Mon, Jul 12, 2010 at 2:07 PM, Nick Coghlan ncogh...@gmail.com wrote:
 MRAB's module offers a superset of re's features rather than a subset
 though, so once it has had more of a chance to bake on PyPI it may be
 worth another look.

I feel like the new module is designed to replace the current re
module, and shouldn't need to spend time in PyPI.  A faster regex
library isn't going to motivate users to add external dependencies to
their projects.

Reid


Re: [Python-Dev] [Idle-dev] Removing IDLE from the standard library

2010-07-12 Thread Reid Kleckner
On Mon, Jul 12, 2010 at 9:20 AM, Kurt B. Kaiser k...@shore.net wrote:
 Also, the current right click edit action on Windows is to only open an
 edit window; no shell.  And it uses the subprocess!  So, some of the
 comments on this thread are not up to date.

 The reason that bug languished for two years was because first, it was a
 bit of a hack, and second, Windows was problematic in that it reused
 sockets and often left zombie subprocesses behind which couldn't be
 killed except with the task manager.  This causes real problems with
 students - they lose confidence in the tool.

 Scherer and Weeble put together a patch using ephemeral ports which
 nailed the problem, and I checked it in right away and
 forward/backported it.

That's great news!  I TAed a freshman Python class this January, and
Windows users ran into this problem a lot.  Mostly when hitting 'x' in
the upper right.  Fortunately, some quick searching led me to the
Python tracker where I found the workaround.  :)

(Somewhat off-topic):  Another pain point students had was accidentally
shadowing stdlib modules, like random.  Renaming the file didn't solve
the problem either, because it left behind .pycs, which I had to help
them delete.

Overall, I would say that IDLE worked very well in that situation;
it has its quirks, but it served us well.  Imagine
trying to get students started with Eclipse or NetBeans.  Yuck!

Reid


[Python-Dev] Threading bug review + commit privs

2010-07-11 Thread Reid Kleckner
Hey all,

I'm porting some fixes for threading.py that we applied to unladen-swallow:
http://bugs.python.org/issue6643

We ran into these bizarre race conditions involving fork + threads
while running the test suite with a background JIT compilation thread.
I really wish we weren't trying to support forking from a child
thread, but it's already in the test suite.  I've solved the problem
by throwing away radioactive locks that may have been held across a
fork.*

If I could get a reviewer to look at this, I would be very grateful,
since reviewing threading patches is somewhat tricky.  =/

I'm also expecting to be doing more work merging unladen-swallow into
the py3k-jit branch, so I was wondering if I could get commit
privileges for that.

Thanks,
Reid

* In general I wouldn't think this is safe, but we already do it for
_active_limbo_lock in threading.py.  One of the problems I've
encountered is that on OS X, releasing locks held by other threads
across a fork results in a crash.  Furthermore, when locks are
deallocated, the destructor does a non-blocking acquire and then
release, which I would think would crash.  However, we get lucky here,
because any thread that holds a lock across a fork usually holds a
reference to it on the stack.  Therefore the lock is leaked and the
destructor never run.  Moral: fork + threads is *crazy*, avoid it if
you can.


Re: [Python-Dev] More detailed build instructions for Windows

2010-07-03 Thread Reid Kleckner
On Sat, Jul 3, 2010 at 12:00 AM, Martin v. Löwis mar...@v.loewis.de wrote:
 I'm trying to test out a patch to add a timeout in subprocess.py on
 Windows, so I need to build Python with Visual Studio.  The docs say
 the files in PCBuild/ work with VC 9 and newer.

 Which docs did you look at specifically that said and newer? That
 would be a bug.

On the developer FAQ page it says:
http://www.python.org/dev/faq/#id8
For VC 9 and newer, the PCbuild directory contains the build files.

But I'll go get 2008.  Thanks!

Reid


[Python-Dev] More detailed build instructions for Windows

2010-07-02 Thread Reid Kleckner
Hey folks,

I'm trying to test out a patch to add a timeout in subprocess.py on
Windows, so I need to build Python with Visual Studio.  The docs say
the files in PCBuild/ work with VC 9 and newer.  I downloaded Visual
C++ 2010 Express, and it needs to convert the .vcproj files into
.vcxproj files, but it fails.

I can't figure out where to get VC 9, all I see is 2008 and 2010.  Can
someone with experience share the best practices for building Python
on Windows?  In particular, what is the most recent compiler known to
work and where can I download it?

Thanks,
Reid


Re: [Python-Dev] variable name resolution in exec is incorrect

2010-05-27 Thread Reid Kleckner
On Thu, May 27, 2010 at 11:42 AM, Nick Coghlan ncogh...@gmail.com wrote:
 However, attaining the (sensible) behaviour Colin is requesting when such
 top level variable references exist would actually be somewhat tricky.
 Considering Guido's suggestion to treat two argument exec like a function
 rather than a class and generate a closure with full lexical scoping a
 little further, I don't believe this could be done in exec itself without
 breaking code that expects the current behaviour.

Just to give a concrete example, here is code that would break if exec
were to execute code in a function scope instead of a class scope:

exec """
def len(xs):
    return -1
def foo():
    return len([])
print foo()
""" in globals(), {}

Currently, the call to 'len' inside 'foo' skips the outer scope
(because it's a class scope) and goes straight to globals and
builtins.  If it were switched to a local scope, a cell would be
created for the broken definition of 'len', and the call would resolve
to it.

Honestly, to me, the fact that the above code ever worked (i.e., prints
0, not -1) seems like a bug, so I wouldn't worry about backwards
compatibility.

Reid


Re: [Python-Dev] PEP 3148 ready for pronouncement

2010-05-27 Thread Reid Kleckner
On Thu, May 27, 2010 at 4:13 AM, Brian Quinlan br...@sweetapp.com wrote:
 Keep in mind that this library magic is consistent with the library magic
 that the threading module does - unless the user sets Thread.daemon to True,
 the interpreter does *not* exit until the thread does.

Is there a compelling reason to make the threads daemon threads?  If not,
perhaps they can just be normal threads, and you can rely on the
threading module to wait for them to finish.

Unrelatedly, I feel like this behavior of waiting for the thread to
terminate usually manifests as deadlocks when the main thread throws
an uncaught exception.  The application then no longer responds
properly to interrupts, since it's stuck waiting on a semaphore.  I
guess it's better than the alternative of random crashes when daemon
threads wake up during interpreter shutdown, though.

Reid


Re: [Python-Dev] Tuning Python dicts

2010-04-13 Thread Reid Kleckner
On Tue, Apr 13, 2010 at 12:12 PM, Daniel Stutzbach
dan...@stutzbachenterprises.com wrote:
 I don't know what benchmarks were used to write dictnotes.txt, but moving
 forward I would recommend implementing your changes on trunk (i.e., Python
 2.x) and running the Unladen Swallow Benchmarks, which you can get from the
 link below:

 http://code.google.com/p/unladen-swallow/wiki/Benchmarks

I'm a contributor, actually.  ;)

 They are macro-benchmarks on real applications.  You will probably also want
 to write some micro-benchmarks of your own so that you can pinpoint any
 bottlenecks in your code and determine where you are ahead of the current
 dict implementation and where you are behind.

What I really wanted to do was to find the benchmarks for the
experiments discussed in dictnotes.txt so I could put them in the
unladen benchmark repository, which now lives at hg.python.org.

Since no one knows where they are, I think my next step will be to go
back and see who wrote what parts of that file and contact them
individually.

Reid


[Python-Dev] Tuning Python dicts

2010-04-10 Thread Reid Kleckner
Hey folks,

I was looking at tuning Python dicts for a data structures class final
project.  I've looked through Object/dictnotes.txt, and obviously
there's already a large body of work on this topic.  My idea was to
alter dict collision resolution as described in the hopscotch hashing
paper[1].  I think the PDF I have came from behind a pay-wall, so I
can't find a link to the original paper.

[1] http://en.wikipedia.org/wiki/Hopscotch_hashing

Just to be clear, this is an experiment I'm doing for a class.  If it
is successful, which I think is unlikely since Python dicts are
already well-tuned, I might consider trying to contribute it back to
CPython over the summer.

The basic idea of hopscotch hashing is to use linear probing with a
cutoff (H), but instead of rehashing when the probe fails, find the
next empty space in the table and move it into the neighborhood of the
original hash index.  This means you have to spend potentially a lot
of extra time during insertion, but it makes lookups very fast because
H is usually chosen such that the entire probe spans at most two cache
lines.  This is much better than the current random (what's the right
name for the current approach?) probing solution, which does
potentially a handful of random accesses into the table.
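
To make the lookup side concrete, here is a sketch in Python (H and
the table layout are simplified; the cache-line payoff only really
exists in C, where the H slots are contiguous memory):

    H = 8  # neighborhood size

    def lookup(table, key):
        # Invariant: every key lives within H slots of its home bucket,
        # so hit or miss is decided after at most H probes of
        # consecutive slots.
        home = hash(key) % len(table)
        for j in range(H):
            entry = table[(home + j) % len(table)]
            if entry is not None and entry[0] == key:
                return entry[1]
        raise KeyError(key)

Insertion is where the extra work goes: linear-probe for an empty
slot, then repeatedly swap it with an earlier entry whose neighborhood
still covers it, until the hole lands within H of the home bucket.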

Looking at dictnotes.txt, I can see that people have experimented with
taking advantage of cache locality.  I was wondering what benchmarks
were used to glean these lessons before I write my own.  Python
obviously has very particular workloads that need to be modeled
appropriately, such as namespaces and **kwargs dicts.

Any other advice would also be helpful.

Thanks,
Reid


One caveat I need to work out:  If more than H items collide into a
single bucket, then you need to rehash.  However, if you have a
particularly evil hash function which always returns zero, no matter
how much you rehash, you will never be able to fit all the items into
the first H buckets.  This would cause an infinite loop, while I
believe the current solution will simply have terrible performance.
IMO the solution is just to increase H for the table if the rehash
fails, but realistically, this will never happen unless the programmer
is being evil.  I'd probably skip this detail for the class
implementation.


Re: [Python-Dev] python compiler

2010-04-05 Thread Reid Kleckner
On Mon, Apr 5, 2010 at 11:11 AM, Michael Foord
fuzzy...@voidspace.org.uk wrote:
 Python itself is a highly dynamic language and not amenable to direct
 compilation. Instead modern just-in-time compiler technology is seen as the
 way to improve Python performance. Projects that are doing this are PyPy and
 Unladen Swallow. A static subset of Python can be statically compiled,
 projects that do that include RPython (part of PyPy) and ShedSkin. These are
 not really Python though, just Python like languages that happen to be valid
 subsets of Python.

I agree.  However, if you're doing it as a fun final project and don't
care about performance and don't mind generating slow code, then go
for it.  You'd also want to cut a bunch of corners like exec and eval.

Reid


Re: [Python-Dev] Scope object (Re: nonlocals() function?)

2010-04-05 Thread Reid Kleckner
On Mon, Apr 5, 2010 at 7:35 PM, Antoine Pitrou solip...@pitrou.net wrote:
 If you can prove that making locals() (or its replacement) writable doesn't
 complicate the interpreter core too much, then why not. Otherwise -1 :-)

I think writable locals would significantly complicate the job of
people trying to optimize Python.  The current situation is that so
long as a code object is compiled with certain flags and avoids using
exec, then it is impossible to indirectly modify locals in a call
frame without resorting to compiled code that mucks with the frame
directly.  It was very easy for us to check for these conditions, and
if they were met, emit faster code.

Collin implemented/optimized local variable access for unladen, so he
would know more than I.

If I remember correctly, the exec statement is going away in py3k, and
calling exec() with one argument can modify the local scope.
Therefore we'll probably have to do something more sophisticated
anyway.  :(

This would impact PyPy, Jython, and the other implementations, so I
would think twice about it.

Reid


Re: [Python-Dev] [Python-checkins] r79397 - in python/trunk: Doc/c-api/capsule.rst Doc/c-api/cobject.rst Doc/c-api/concrete.rst Doc/data/refcounts.dat Doc/extending/extending.rst Include/Python.h Incl

2010-03-28 Thread Reid Kleckner
On Sun, Mar 28, 2010 at 3:25 PM, Larry Hastings la...@hastings.org wrote:
 M.-A. Lemburg wrote:

 Just as reminder of the process we have in place for such changes:
 Please discuss any major breakage on python-dev before checking in
 the patch.


 I'm aware this is a good idea.  I simply didn't consider this a major 
 breakage.  Recompiling against the 2.7 header files fixes it for everybody.  
 (Except external users of pyexpat, if any exist.  Google doesn't show any, 
 though this is not proof that they don't exist.)

 If you suggest that any breakage with previous versions is worth mentioning, 
 no matter how small, then I'll remember that in the future.  Certainly the 
 Python community has had many thrilling and dynamic conversations over 
 minutiae, so I guess it wouldn't be that surprising if this were true.

I'm curious what is considered reasonable/unreasonable breakage.  If
this breakage just requires a recompile, then doesn't it just
introduce an ABI incompatibility?  Aren't those allowed every minor
(point) release?  Or do people believe that this is more than an ABI
change?

Reid


Re: [Python-Dev] Caching function pointers in type objects

2010-03-02 Thread Reid Kleckner
I don't think this will help you solve your problem, but one thing
we've done in unladen swallow is to hack PyType_Modified to invalidate
our own descriptor caches.  We may eventually want to extend that into
a callback interface, but it probably will never be considered an API
that outside code should depend on.

Reid

On Tue, Mar 2, 2010 at 9:57 PM, Benjamin Peterson benja...@python.org wrote:
 2010/3/2 Daniel Stutzbach dan...@stutzbachenterprises.com:
 In CPython, is it safe to cache function pointers that are in type objects?

 For example, if I know that some_type->tp_richcompare is non-NULL, and I
 call it (which may execute arbitrary user code), can I assume that
 some_type->tp_richcompare is still non-NULL?

 Not unless it's builtin. Somebody could have deleted the rich
 comparison methods.



 --
 Regards,
 Benjamin


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-02-03 Thread Reid Kleckner
On Wed, Feb 3, 2010 at 6:51 AM, M.-A. Lemburg m...@egenix.com wrote:
 You lost me there :-)

 I am not familiar with how U-S actually implements the compilation
 step and was thinking of it working at the functions/methods level
 and based on input/output parameter type information.

Yes, but it's more like: for every attribute lookup, we ask "what was
the type of the object we did the lookup on?"  So, we simply take a
reference to obj->ob_type and stuff it in our feedback record, which
is limited to just three pointers.  Then when we generate code, we may
emit a guard that compares obj->ob_type in the compiled lookup to the
pointer we recorded.  We also need to place a weak reference to the
code object on the type so that when the type is mutated or deleted,
we invalidate the code, since its assumptions are invalid.

 Most Python functions and methods have unique names (when
 combined with the module and class name), so these could
 be used for the referencing and feedback writing.

Right, so when building the .so to load, you would probably want to
take all the feedback data and find these dotted names for the
pointers in the feedback data.  If you find any pointers that can't be
reliably identified, you could drop it from the feedback and flag that
site as polymorphic (ie don't optimize this site).  Then you generate
machine code from the feedback and stuff it in a .so, with special
relocation information.

When you load the .so into a fresh Python process with a different
address space layout, you try to recover the pointers to the PyObjects
mentioned in the relocation information and patch up the machine code
with the new pointers, which is very similar to the job of a linker.
If you can't find the name or things don't look right, you just drop
that piece of native code.

 The only cases where this doesn't work too well is dynamic
 programming of the sort done in namedtuples: where you
 dynamically create a class and then instantiate it.

It might actually work if the namedtuple is instantiated at module
scope before loading the .so.

 Type information for basic types and their subclasses can
 be had dynamically (there's also a basic type bitmap for
 faster lookup) or in a less robust way by name.

If I understand you correctly, you're thinking about looking the types
up by name or bitmap in the machine code.  I think it would be best to
just do the lookup once at load time and patch the native code.
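
The name mapping itself is the easy half.  A sketch of both directions
in Python (ignoring every case where a feedback object has no stable
dotted name, which is exactly when you'd drop the site):

    import importlib

    def object_to_name(tp):
        # e.g. the float type -> "__builtin__.float"
        return "%s.%s" % (tp.__module__, tp.__name__)

    def name_to_object(dotted):
        module, _, attr = dotted.rpartition(".")
        return getattr(importlib.import_module(module), attr)

The hard half is the loader: resolving those names at load time and
patching the resulting addresses into the generated machine code.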

 It sounds like a huge amount of work, and we haven't approached it.
 On the other hand, it sounds like it might be rewarding.

 Indeed. Perhaps this could be further investigated in a SoC
 project ?!

Or maybe a thesis.  I'm really walking out on a limb, and this idea is
quite hypothetical.  :)

Reid


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-02-02 Thread Reid Kleckner
On Tue, Feb 2, 2010 at 8:57 PM, Collin Winter collinwin...@google.com wrote:
 Wouldn't it be possible to have the compiler approach work
 in three phases in order to reduce the memory footprint and
 startup time hit, ie.

  1. run an instrumented Python interpreter to collect all
    the needed compiler information; write this information into
    a .pys file (Python stats)

  2. create compiled versions of the code for various often
    used code paths and type combinations by reading the
    .pys file and generating an .so file as regular
    Python extension module

  3. run an uninstrumented Python interpreter and let it
    use the .so files instead of the .py ones

 In production, you'd then only use step 3 and avoid the
 overhead of steps 1 and 2.

 That is certainly a possibility if we are unable to reduce memory
 usage to a satisfactory level. I've added a Contingency Plans
 section to the PEP, including this option:
 http://codereview.appspot.com/186247/diff2/8004:7005/8006.

This would be another good research problem for someone to take and
run.  The trick is that you would need to add some kind of linking
step to loading the .so.  Right now, we just collect PyObject*'s, and
don't care whether they're statically allocated or user-defined
objects.  If you wanted to pursue offline feedback directed
compilation, you would need to write something that basically can map
from the pointers in the feedback data to something like a Python
dotted name import path, and then when you load the application, look
up those names and rewrite the new pointers into the generated machine
code.  It sounds a lot like writing a dynamic loader.  :)

It sounds like a huge amount of work, and we haven't approached it.
On the other hand, it sounds like it might be rewarding.

Reid


Re: [Python-Dev] Forking and Multithreading - enemy brothers

2010-02-01 Thread Reid Kleckner
On Mon, Feb 1, 2010 at 5:18 PM, Jesse Noller jnol...@gmail.com wrote:
 I don't disagree there; but then again, I haven't seen this issue
 arise (in my own code)/no bug reports/no test cases that show this to
 be a consistent issue. I'm perfectly OK with being wrong, I'm just
 leery to tearing out the internals for something else not forking.

I'd appreciate it.  It made my life a lot harder when trying to move
JIT compilation to a background thread, for exactly the reasons we've
been talking about.  All the locks in the queue can be left in an
undefined state.   I solved my problem by digging into the posix
module and inserting the code I needed to stop the background thread.
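
For anyone who hasn't been bitten by this, here's a minimal
demonstration of the problem (POSIX-only, and it really does hang):

    import os, threading, time

    lock = threading.Lock()

    def worker():
        with lock:
            time.sleep(1)          # hold the lock across the parent's fork()

    threading.Thread(target=worker).start()
    time.sleep(0.1)                # make sure the worker owns the lock

    if os.fork() == 0:             # child inherits the lock *held*, but the
        lock.acquire()             # thread that would release it doesn't
        os._exit(0)                # exist there -- this blocks forever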

Another problem with forking from a threaded Python application is
that you leak all the references held by the other thread's stack.
This isn't a problem if you're planning on exec'ing soon, but it's
something we don't usually think about.

It would be nice if threads + multiprocessing worked out of the box
without people having to think about it.  Using threads and fork
without exec is evil.

Reid


Re: [Python-Dev] Forking and Multithreading - enemy brothers

2010-02-01 Thread Reid Kleckner
On Mon, Feb 1, 2010 at 5:20 PM, Martin v. Löwis mar...@v.loewis.de wrote:
 Instead, we should aim to make Python fork-safe. If the primary concern
 is that locks get inherited, we should change the Python locks so that
 they get auto-released on fork (unless otherwise specified on lock
 creation). This may sound like an uphill battle, but if there was a
 smart and easy solution to the problem, POSIX would be providing it.

The right (as if you can actually use fork and threads at the same
time correctly) way to do this is to acquire all locks before the
fork, and release them after the fork.  The reason is that if you
don't, whatever data the locks guarded will be in an undefined state,
because the thread that used to own the lock was in the middle of
modifying it.

POSIX does provide pthread_atfork, but it's not quite enough.  It's
basically good enough for things like libc's malloc or other global
locks held for a short duration buried in libraries.  No one will ever
try to fork while doing an allocation, for example.  The problem is
that if you have a complicated set of locks that must be acquired in a
certain order, you can express that to pthread_atfork by giving it the
callbacks in the right order, but it's hard.

However, I think for Python, it would be good enough to have an
at_fork registration mechanism so people can acquire and release locks
at fork.  If we assume that most library locks are like malloc, and
won't actually be held while forking, it's good enough.
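
(Much later, CPython 3.7 grew exactly this registration mechanism as
os.register_at_fork(); the acquire/release dance for a single lock
looks like:)

    import os, threading

    _lock = threading.Lock()            # some library-internal lock

    os.register_at_fork(
        before=_lock.acquire,           # quiesce: nobody mid-update at fork
        after_in_parent=_lock.release,  # parent just carries on
        after_in_child=_lock.release,   # child resets it to a known state
    )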

Reid


Re: [Python-Dev] Forking and Multithreading - enemy brothers

2010-02-01 Thread Reid Kleckner
On Mon, Feb 1, 2010 at 5:48 PM, Jesse Noller jnol...@gmail.com wrote:
 Your reasonable argument is making it difficult for me to be irrational
 about this.

No problem.  :)

 This begs the question - assuming a patch that clones the behavior of win32
 for multiprocessing, would the default continue to be forking behavior, or
 the new?

Pros of forking:
- probably faster (control doesn't start back at Py_Main)
- more shared memory (but not that much because of refcounts)
- objects sent to child processes don't have to be pickleable

Cons:
- leaks memory with threads
- can lead to deadlocks or races with threads

I think the fork+exec or spawnl version is probably the better default
because it's safer.  If people can't be bothered to make their objects
pickleable or really want the old behavior, it can be left as an
option.
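
For what it's worth, this is roughly what multiprocessing eventually
grew in Python 3.4 as a selectable start method:

    import multiprocessing as mp

    def work(x):
        return x * x

    if __name__ == '__main__':
        mp.set_start_method('spawn')   # the safer, fork+exec-style behavior;
        with mp.Pool(2) as pool:       # arguments must now be pickleable
            print(pool.map(work, range(4)))   # [0, 1, 4, 9]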

Reid


Re: [Python-Dev] PEP 3147: PYC Repository Directories

2010-01-31 Thread Reid Kleckner
On Sun, Jan 31, 2010 at 8:34 AM, Nick Coghlan ncogh...@gmail.com wrote:
 That still leaves the question of what to do with __file__ (for which
 even the solution in the PEP isn't particularly clean). Perhaps the
 thing to do there is to have __file__ always point to the source file
 and introduce a __file_cached__ that points to the bytecompiled file on
 disk (set to None if it doesn't exist, as may be the case for __main__
 or due to writing of bytecode files being disabled).

+1 for this; it seems to be what most people want anyway, given the
code that munges the .pyc back to the .py.  I bet this change would
break very little code.
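
(As it happened, PEP 3147 landed in 3.2 with __cached__ rather than
__file_cached__, and the source-to-bytecode mapping is exposed as a
function -- spelled imp.cache_from_source originally,
importlib.util.cache_from_source today:)

    import importlib.util

    print(importlib.util.cache_from_source('spam.py'))
    # e.g. '__pycache__/spam.cpython-312.pyc'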

Reid


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-28 Thread Reid Kleckner
On Thu, Jan 28, 2010 at 1:14 PM, Paul Moore p.f.mo...@gmail.com wrote:
 So, just to extend the question a little (or reiterate, it may be that
 this is already covered and I didn't fully understand):

 On Windows, would a C extension author be able to distribute a single
 binary (bdist_wininst/bdist_msi) which would be compatible with
 with-LLVM and without-LLVM builds of Python?

 Actually, if we assume that only a single Windows binary, presumably
 with-LLVM, will be distributed on python.org, I'm probably being
 over-cautious here, as distributing binaries compatible with the
 python.org release should be sufficient. Nevertheless, I'd be
 interested in the answer.

We have broken ABI compatibility with Python 2.6, but unladen
--without-llvm should be ABI compatible with itself.  In the future we
would probably want to set up a buildbot to make sure we don't mess
this up.  One thing we have done is to add #ifdef'd attributes to
things like the code object, but so long as you don't touch those
attributes, you should be fine.

Reid


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-25 Thread Reid Kleckner
On Mon, Jan 25, 2010 at 9:05 PM, Meador Inge mead...@gmail.com wrote:
 Also related to reduced code size with C++ I was wondering whether or not
 anyone has explored using the ability of some toolchains to remove unused
 code and data?  In GCC this can be enabled by compiling with
 '-ffunction-sections' and '-fdata-sections' and linking with
 '--gc-sections'.  In MS VC++ you can compile with '/Gy' and link with
 '/OPT'.  This feature can lead to size reductions sometimes with C++ due to
 things like template instantiation causing multiple copies of the same
 function to be linked in.  I played around with compiling CPython with this
 (gcc + Darwin) and saw about a 200K size drop.  I want to try compiling all
 of U-S (e.g. including LLVM) with these options next.

I'm sure someone has looked at this before, but I was also considering
this the other day.  One catch is that C extension modules need to be
able to link against any symbol declared with the PyAPI_* macros, so
you're not allowed to delete PyAPI_DATA globals or any code reachable
from a PyAPI_FUNC.

Someone would need to modify the PyAPI_* macros to include something
like __attribute__((used)) with GCC and then tell the linker to strip
unreachable code.  Apple calls it dead stripping:
http://developer.apple.com/mac/library/documentation/Darwin/Reference/ManPages/man1/ld.1.html

This seems to have a section on how to achieve the same effect with a
GNU toolchain:
http://utilitybase.com/article/show/2007/04/09/225/Size+does+matter:+Optimizing+with+size+in+mind+with+GCC

I would guess that we have a fair amount of unused LLVM code linked in
to unladen, so stripping it would reduce our size.  However, we can
only do that if we link LLVM statically.  If/When we dynamically link
against LLVM, we lose our ability to strip out unused symbols.  The
best we can do is only link with the libraries we use, which is what
we already do.

Reid


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Reid Kleckner
On Thu, Jan 21, 2010 at 7:25 AM, Antoine Pitrou solip...@pitrou.net wrote:
 32-bit; gcc 4.0.3

 +-------------+---------------+---------------+----------------------+
 | Binary size | CPython 2.6.4 | CPython 3.1.1 | Unladen Swallow r988 |
 +=============+===============+===============+======================+
 | Release     | 3.8M          | 4.0M          | 74M                  |
 +-------------+---------------+---------------+----------------------+

 This is positively humongous. Is there any way to shrink these numbers
 dramatically (I'm talking about the release builds)? Large executables or
 libraries may make people anxious about the interpreter's memory
 efficiency; and they will be a nuisance in many situations (think making
 standalone app bundles using py2exe or py2app).

When we link against LLVM as a shared library, all of LLVM will still
be loaded into memory, but it will be shared among all Python
processes.

The size increase is a recent regression, and we used to be down
somewhere in the 20 MB range:
http://code.google.com/p/unladen-swallow/issues/detail?id=118

Reid


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Reid Kleckner
On Thu, Jan 21, 2010 at 9:35 AM, Floris Bruynooghe
floris.bruynoo...@gmail.com wrote:
 I just compiled with the --without-llvm option and see that the
 binary, while only an acceptable 4.1M, still links with libstdc++.  Is
 it possible to completely get rid of the C++ dependency if this option
 is used?  Introducing a C++ dependency on all platforms for no
 additional benefit (with --without-llvm) seems like a bad tradeoff to
 me.

There isn't (and shouldn't be) any real source-level dependency on
libstdc++ when LLVM is turned off.  However, the eval loop is now
compiled as C++, and that may be adding some hidden dependency
(exception handling code?).  The final binary is linked with $(CXX),
which adds an implicit -lstdc++, I think.  Someone just has to go and
track this down.

Reid


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Reid Kleckner
On Thu, Jan 21, 2010 at 12:27 PM, Jake McGuire mcgu...@google.com wrote:
 On Wed, Jan 20, 2010 at 2:27 PM, Collin Winter collinwin...@google.com 
 wrote:
 Profiling
 -

 Unladen Swallow integrates with oProfile 0.9.4 and newer [#oprofile]_ to 
 support
 assembly-level profiling on Linux systems. This means that oProfile will
 correctly symbolize JIT-compiled functions in its reports.

 Do the current python profiling tools (profile/cProfile/pstats) still
 work with Unladen Swallow?

Sort of.  They disable the use of JITed code, so they don't quite work
the way you would want them to.  Checking tstate-c_tracefunc every
line generated too much code.  They still give you a rough idea of
where your application hotspots are, though, which I think is
acceptable.

oprofile is useful for figuring out if more time is being spent in
JITed code or with interpreter overhead.

Reid


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Reid Kleckner
On Thu, Jan 21, 2010 at 3:14 PM, Collin Winter collinwin...@google.com wrote:
 P.S. Is there any chance of LLVM doing something like tracing JITs?
 Those seem somewhat more promising to me (even though I understand
 they're quite hard in the face of Python features like stack frames).

 Yes, you could implement a tracing JIT with LLVM. We chose a
 function-at-a-time JIT because it would a) be an easy-to-implement
 baseline to measure future improvement, and b) create much of the
 infrastructure for a future tracing JIT. Implementing a tracing JIT
 that crosses the C/Python boundary would be interesting.

I was thinking about this recently.  I think it would be a good
three-month project for someone.

Basically, we could turn off feedback recording until we decide to
start a trace at a loop header, at which point we switch to recording
everything, and compile the trace into a single stream of IR with a
bunch of guards and side exits.  The side exits could be indirect tail
calls to either a side exit handler, or a freshly compiled trace
starting at the opcode where the side exit occurred.  The default
handler would switch back to the interpreter, record the trace, kick
off compilation, and patch the indirect tail call target.
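
A toy model of that patching scheme (all names illustrative; the real
thing would patch indirect jumps in machine code, not a dict):

    exits = {}   # side-exit id -> compiled continuation, patched lazily

    def take_exit(exit_id, state):
        handler = exits.get(exit_id)
        if handler is None:
            # Default handler: record a trace from here, "compile" it,
            # patch the exit, and finish this iteration the slow way.
            exits[exit_id] = lambda s: ('native', exit_id, s)
            return ('interpreted', exit_id, state)
        return handler(state)

    print(take_exit(3, 'frame'))   # ('interpreted', 3, 'frame')
    print(take_exit(3, 'frame'))   # ('native', 3, 'frame')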

The only limitation with that approach is that you would have to do
extra work to propagate conditions like passed guards across the call
boundary, since we currently try to throw away as much LLVM IR as
possible after compilation to save memory.

So yes, I think it would be possible to implement a tracing JIT in the
future.  If people are really interested in that, I think the best way
to get there is to land unladen in py3k as described in the PEP and do
more perf work like this there and in branches on python.org, where it
can be supported by the wider Python developer community.

Reid


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Reid Kleckner
On Thu, Jan 21, 2010 at 4:34 PM, Martin v. Löwis mar...@v.loewis.de wrote:
 How large is the LLVM shared library? One surprising data point is that the
 binary is much larger than some of the memory footprint measurements given in
 the PEP.

 Could it be that you need to strip the binary, or otherwise remove
 unneeded debug information?

Python is always built with debug information (-g), at least it was in
2.6.1, which unladen is based on, and we've made sure to build LLVM
the same way.  We had to muck with the LLVM build system to get it to
include debugging information.  On my system, stripping the python
binary takes it from 82 MB to 9.7 MB.  So yes, it contains extra debug
info, which explains the footprint measurements.  The question is
whether we want LLVM built with debug info or not.

Reid


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-21 Thread Reid Kleckner
On Thu, Jan 21, 2010 at 5:07 PM, David Malcolm dmalc...@redhat.com wrote:
 To what extent would it be possible to use (conditionally) use full
 ahead-of-time compilation as well as JIT?

It would be possible to do this, but it doesn't have nearly the same
benefits as JIT compilation, as Alex mentioned.  You could do a static
compilation of all code objects in a .pyc to LLVM IR and compile that
to a .so that you load at runtime, but it just eliminates the
interpreter overhead.  That is significant, and I think someone should
try it, but I think there are far more wins to be had using feedback.

Reid


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-20 Thread Reid Kleckner
On Wed, Jan 20, 2010 at 8:14 PM, Terry Reedy tjre...@udel.edu wrote:
 If CPython development moves to distributed hg, the notion of 'blessed'
 branches (other than the PSF release branch) will, as I understand it,
 become somewhat obsolete. If you make a branch publicly available, anyone
 can grab it and merge it with their branch, just as they can with anyone
 else's.

It's true that, as Martin said, we can rebase our code to Py3K in a
branch on python.org any time we like; the question is more whether,
if we do the work, the Python community will accept it.

 Given the slight benefits compared to the costs, I think this, in its
 current state, should be optional, as psyco is.

How optional would you want it to be?  I'll point out that there are
two ways you can turn off the JIT right now:
1) As a configure-time option, pass --without-llvm.  Obviously, this
is really only useful to people who are building their own binaries,
or for embedded platforms.
2) As a command-line option, you can pass -j never.  If you have a
short-lived script, you can just stick this in your #! line and forget
about it.  This has more overhead, since all of the JIT machinery is
loaded into memory but never used.  Right now we record feedback that
will never be used, but we could easily make that conditional on the
JIT control flag.

 Your results suggest that speeding up garden-variety Python code is harder
 than it sometimes seems. I wonder how your results from fancy codework
 compare, for instance, with simply making built-in names reserved, so that,
 for instance, len = whatever is illegal, and all such names get
 dereferenced at compile time.

That's cheating.  :)

Reid


[Python-Dev] Whether to call Py_Finalize when exiting from the child process of a fork from a spawned thread

2009-09-01 Thread Reid Kleckner
Hi all,

I'm working on http://bugs.python.org/issue6642 for unladen swallow
(because it happens to bite us in a weird way), and Jeff Yasskin told
me to ask python-dev what the proper behavior should be at exit from
the child process of a fork from a spawned thread.

Right now, it seems that there is an assumption that when exiting the
process, control will always return through Py_Main, which in turn
calls Py_Finalize, to do things like GC and calling atexit handlers.
Normally, if you fork a process from the main thread, this assumption
will remain true, because main and Py_Main are still at the bottom of
the stack in the child process.  However, if you fork from a spawned
thread, then the root of the stack will be the thread bootstrap
routine for the platform's pythreads implementation.

On one hand, you may not want to call the user's atexit handlers
multiple times from different processes if they have externally
visible effects.  On the other hand, people seem to assume that
Py_Finalize will be called at process exit to do various cleanups.  On
the third hand, maybe Python could just clear out all the atexit
handlers in the child after a fork.  So what should the correct
behavior be?

Thanks,
Reid


Re: [Python-Dev] Whether to call Py_Finalize when exiting from the child process of a fork from a spawned thread

2009-09-01 Thread Reid Kleckner
On Tue, Sep 1, 2009 at 2:58 PM, Martin v. Löwis mar...@v.loewis.de wrote:
 On one hand, you may not want to call the user's atexit handlers
 multiple times from different processes if they have externally
 visible effects.  On the other hand, people seem to assume that
 Py_Finalize will be called at process exit to do various cleanups.  On
 the third hand, maybe Python could just clear out all the atexit
 handlers in the child after a fork.  So what should the correct
 behavior be?

 Standard POSIX fork semantics should be the guidance. IIUC, termination
 of the last thread is equivalent to calling exit(0) (although return
 from main() still means that exit is invoked right away, and the return
 value of main is the exit code - right?). Calling exit means to call
 all exit handlers.

It depends: there is also _exit, which exists solely to work around
exit handlers being called from a forked child process at exit.  Which
semantics should Python have?  In my opinion,
it is more obvious that the user's handlers would be called than not,
so I agree with you.
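
A tiny illustration of the difference (POSIX-only):

    import atexit, os

    atexit.register(lambda: print('atexit ran in pid', os.getpid()))

    if os.fork() == 0:
        os._exit(0)     # child: bypasses atexit handlers entirely
    os.wait()           # parent falls off the end and runs the handler once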

Reid