Re: [Python-Dev] PEP 553: Built-in debug()

2017-09-07 Thread Fernando Perez

On 2017-09-07 23:00:43, Barry Warsaw said:


On Sep 7, 2017, at 14:25, Barry Warsaw wrote:


I’ll see what it takes to add `header` to pdb.set_trace(), but I’ll do 
that as a separate PR (i.e. not as part of this PEP).


Turns out to be pretty easy.

https://bugs.python.org/issue31389
https://github.com/python/cpython/pull/3438


Ah, perfect! I've subscribed to the PR on github and can pitch in there 
further if my input is of any use.


Thanks again,

f




Re: [Python-Dev] PEP 553: Built-in debug()

2017-09-07 Thread Fernando Perez

On 2017-09-07 00:20:17, Barry Warsaw said:

Thanks Fernando, this is exactly the kind of feedback from other 
debuggers that I’m looking for.  It certainly sounds like a handy 
feature; I’ve found myself wanting something like that from pdb from 
time to time.


Glad it's useful, thanks for the pep!

The PEP has an open issue regarding breakpoint() taking *args and 
**kws, which would just be passed through the call stack.  It sounds 
like you’d be in favor of that enhancement.


If you go with the `(*a, **k)` pass-through API, would you have a 
special keyword-only arg called 'header' or similar? That seems like a 
decent compromise to support the feature with the builtin while 
allowing other implementations to offer more features. In any case, +1 
to a pass-through API, as long as the built-in supports some kind of 
mechanism to help the user get their bearings with "you're here" type 
messages.
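
For concreteness, here's a minimal sketch of the kind of pass-through I mean,
assuming breakpoint() simply forwards its arguments to sys.breakpointhook()
(that wiring is still an open issue in the PEP, so treat this as illustrative
only, not as the final API):

```
import pdb
import sys

def hook(*args, header=None, **kwargs):
    # Print a "you're here" banner before dropping into the debugger.
    if header is not None:
        print(header)
    pdb.Pdb().set_trace(sys._getframe(1))

sys.breakpointhook = hook

def f(x=10):
    y = x + 2
    breakpoint(header="in f")   # extra arguments travel through to the hook
    return y
```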


Cheers

f




Re: [Python-Dev] PEP 553: Built-in debug()

2017-09-06 Thread Fernando Perez
If I may suggest a small API tweak, I think it would be useful if 
breakpoint() accepted an optional header argument. In IPython, the 
equivalent for non-postmortem debugging is IPython.embed, which can be 
given a header. This is useful to provide the user with some 
information about perhaps where the breakpoint is coming from, relevant 
data they might want to look at, etc:


```
from IPython import embed

def f(x=10):
   y = x+2
   embed(header="in f")
   return y

x = 20
print(f(x))
embed(header="Top level")
```

I understand in most cases these are meant to be deleted right after 
usage and the author is likely to have a text editor open next to the 
terminal where they're debugging.  But still, I've found myself putting 
multiple such calls in a piece of code to look at what's going on in different 
parts of the execution stack, and it can be handy to have a bit of 
information to get your bearings.


Just a thought...

Best

f




Re: [Python-Dev] What if we didn't have repr?

2013-05-23 Thread Fernando Perez
On Tue, 21 May 2013 06:36:54 -0700, Guido van Rossum wrote:

> Actually changing __str__ or __repr__ is out of the question, best we
> can do is discourage making them different. But adding a protocol for
> pprint (with extra parameters to convey options) is a fair idea. I note
> that Nick suggested to use single-dispatch generic functions for this
> though. Both have pros and cons. Post design ideas to python-ideas
> please, not here!

Just in case you guys find this useful, in IPython we've sort of created 
this kind of 'extended repr protocol', described and illustrated here 
with examples:

http://nbviewer.ipython.org/url/github.com/ipython/ipython/raw/master/examples/notebooks/Custom%20Display%20Logic.ipynb

It has proven to be widely used in practice.
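
For concreteness, a minimal sketch of the convention: an object grows optional
_repr_*_ methods that rich frontends look up by name (plain repr() remains the
fallback everywhere else):

class Fraction:
    def __init__(self, num, den):
        self.num, self.den = num, den

    def __repr__(self):                 # plain-text fallback, unchanged
        return "Fraction(%d, %d)" % (self.num, self.den)

    def _repr_html_(self):              # picked up by HTML-capable frontends
        return "<sup>%d</sup>&frasl;<sub>%d</sub>" % (self.num, self.den)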

Cheers,

f



Re: [Python-Dev] A panel with Guido/python-dev on scientific uses and Python 3 at Google HQ, March 2nd

2012-02-21 Thread Fernando Perez
On Tue, 21 Feb 2012 07:44:41 +, Fernando Perez wrote:

> I wanted to point out to you folks, and invite any of you who could make
> it in person, to a panel discussion we'll be having on Friday March 2nd,
> at 3pm, during the PyData workshop that will take place at Google's
> headquarters in Mountain View:
> 
> http://pydataworkshop.eventbrite.com

as luck would have it, it seems that *today* eventbrite revamped their url 
handling and the url I gave yesterday no longer works; it's now:

http://pydataworkshop-esearch.eventbrite.com/?srnk=1

Sorry for the hassle, folks.

Ah, Murphy's law, web edition...

Cheers,

f



[Python-Dev] A panel with Guido/python-dev on scientific uses and Python 3 at Google HQ, March 2nd

2012-02-20 Thread Fernando Perez
Hi all,

I wanted to point out to you folks, and invite any of you who could make 
it in person, to a panel discussion we'll be having on Friday March 2nd, 
at 3pm, during the PyData workshop that will take place at Google's 
headquarters in Mountain View:

http://pydataworkshop.eventbrite.com

The PyData workshop is organized by several developers coming from the 
numerical/scientific side of the Python world, and we thought this would 
be a good opportunity, both timing- and logistics-wise, for a discussion 
with as many Python developers as possible.  The upcoming Python 3.3 
release, the lifting of the language moratorium, the gradual (but slow) 
uptake of Python 3 in science, the continued and increasing growth of 
Python as a tool in scientific research and education, etc, are all good 
reasons for thinking this could be a productive discussion.

This is the thread on the Numpy mailing list where we've had some back-and-forth about ideas:

http://mail.scipy.org/pipermail/numpy-discussion/2012-February/060437.html


Guido has already agreed to participate, and a number of developers for 
'core' scientific Python projects will be present at the panel, including:

- Travis Oliphant, Peter Wang, Mark Wiebe, Stefan van der Walt (Numpy, 
Scipy)
- John Hunter (Matplotlib)
- Fernando Perez, Brian Granger, Min Ragan-Kelley (IPython)
- Dag Sverre Seljebotn (Numpy, Cython)

It would be great if as many core Python developers for whom a Bay Area 
Friday afternoon drive to Mountain View is feasible could attend.  Those 
of you already at Google will hopefully all make it, of course :)

We hope this discussion will be a good start for interesting developments 
that require dialog between the 'science crowd' and python-dev.  Several 
of us will also be available at PyCon 2012, so if there's interest we can 
organize an informal follow-up/BoF on this topic the next week at PyCon.

Please forward this information to anyone you think might be interested 
(I'll be posting in a second to the Bay Piggies list).

If you are not a Googler nor already registered for PyData, but would like 
to attend, please let me know by emailing me at:

fernando.pe...@berkeley.edu

We have room for a few extra people (in addition to PyData attendees) for 
this particular meeting, and we'll do our best to accommodate you.  Please 
let me know if you're a core python committer in your message.

I'd like to thank Google for their hospitality in hosting us for PyData, 
and Guido for his willingness to take part in this discussion.  I hope it 
will be a productive one for all involved.

Best,

f



Re: [Python-Dev] Inconsistent script/console behaviour

2011-12-17 Thread Fernando Perez
On Fri, 23 Sep 2011 16:32:30 -0700, Guido van Rossum wrote:

> You can't fix this without completely changing the way the interactive
> console treats blank lines. Note that it's not just that a blank line is
> required after a function definition -- you also *can't* have a blank
> line *inside* a function definition.
> 
> The interactive console is optimized for people entering code by typing,
> not by copying and pasting large gobs of text.
> 
> If you think you can have it both, show us the code.

Apology for the advertising, but if the OP is really interested in that 
kind of behavior, then instead of asking for making the default shell more 
complex, he can use ipython which supports what he's looking for:

In [5]: def some():
   ...:   print 'xxx'
   ...: some()
   ...: 
xxx

and even blank lines inside functions (albeit only in certain locations):

In [6]: def some():
   ...: 
   ...:   print 'xxx'
   ...: some()
   ...: 
xxx


Now, the dances we have to do in ipython to achieve that are much more 
complex than what would be reasonable to have in the default '>>>' python 
shell, which should remain simple, light and robust.  But ipython is a 
simple install for someone who wants fancier features for interactive work.

Cheers,

f



Re: [Python-Dev] range objects in 3.x

2011-09-28 Thread Fernando Perez
On Thu, 29 Sep 2011 11:36:21 +1300, Greg Ewing wrote:


>> I do hope, though, that the chosen name is *not*:
>> 
>> - 'interval'
>> 
>> - 'interpolate' or similar
> 
> Would 'subdivide' be acceptable?

I'm not great at finding names, and I don't totally love it, but I 
certainly don't see any problems with it.  It is, after all, a subdivision 
of an interval :)

I think 'grid' has been mentioned, and I think it's reasonable, even 
though most people probably associate the word with a two-dimensional 
object.  But grids can have any desired dimensionality.

Now, in fact, numpy has a slightly demented (but extremely useful) ogrid 
object:

In [7]: ogrid[0:10:3]
Out[7]: array([0, 3, 6, 9])

In [8]: ogrid[0:10:3j]
Out[8]: array([  0.,   5.,  10.])

Yup, that's a complex slice :)

So if python named the builtin 'grid', I think it would go well with 
existing numpy habits.

Cheers,

f



Re: [Python-Dev] range objects in 3.x

2011-09-28 Thread Fernando Perez
On Tue, 27 Sep 2011 11:25:48 +1000, Steven D'Aprano wrote:

> The audience for numpy is a small minority of Python users, and they

Certainly, though I'd like to mention that scientific computing is a major 
success story for Python, so hopefully it's a minority with something to 
contribute.

> tend to be more sophisticated. I'm sure they can cope with two functions
> with different APIs 

No problem with having different APIs, but in that case I'd hope the 
builtin wouldn't be named linspace, to avoid confusion.  In numpy/scipy we 
try hard to avoid collisions with existing builtin names; hopefully in 
this case we can prevent the reverse by having a dialogue.

> While continuity of API might be a good thing, we shouldn't accept a
> poor API just for the sake of continuity. I have some criticisms of the
> linspace API.
> 
> numpy.linspace(start, stop, num=50, endpoint=True, retstep=False)
> 
> http://docs.scipy.org/doc/numpy/reference/generated/numpy.linspace.html
> 
> * It returns a sequence, which is appropriate for numpy but in standard
> Python it should return an iterator or something like a range object.

Sure, no problem there.

> * Why does num have a default of 50? That seems to be an arbitrary
> choice.

Yup.  linspace was modeled after matlab's identically named command:

http://www.mathworks.com/help/techdoc/ref/linspace.html

but I have no idea why the author went with 50 instead of 100 as the 
default (not that 100 is any better, just that it was matlab's choice).  
Given how linspace is often used for plotting, 100 is arguably a more 
sensible choice to get reasonable graphs on normal-resolution displays at 
typical sizes, absent adaptive plotting algorithms.

> * It arbitrarily singles out the end point for special treatment. When
> integrating, it is just as common for the first point to be singular as
> the end point, and therefore needing to be excluded.

Numerical integration is *not* the focus of linspace(): in numerical 
integration, if an end point is singular you have an improper integral and 
*must* approach the singularity much more carefully than by simply 
dropping the last point and hoping for the best.  Whether you can get away 
by using (desired_end_point - very_small_number) --the dumb, naive 
approach-- or not depends a lot on the nature of the singularity.

Since numerical integration is a complex and specialized domain and the 
subject of an entire subcomponent of the (much bigger than numpy) scipy 
library, there's no point in arguing the linspace API based on numerical 
integration considerations.

Now, I *suspect* (but don't remember for sure) that the option to have it 
right-hand-open-ended was to match the mental model people have for range:

In [5]: linspace(0, 10, 10, endpoint=False)
Out[5]: array([ 0.,  1.,  2.,  3.,  4.,  5.,  6.,  7.,  8.,  9.])

In [6]: range(0, 10)
Out[6]: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]


I'm not arguing this was necessarily a good idea, just my theory on how it 
came to be.  Perhaps R. Kern or one of the numpy lurkers in here will 
pitch in with a better recollection.

> * If you exclude the end point, the stepsize, and hence the values
> returned, change:
> 
>  >>> linspace(1, 2, 4)
array([ 1.    ,  1.3333,  1.6667,  2.    ])
>  >>> linspace(1, 2, 4, endpoint=False)
> array([ 1.  ,  1.25,  1.5 ,  1.75])
> 
> This surprises me. I expect that excluding the end point will just
> exclude the end point, i.e. return one fewer point. That is, I expect
> num to count the number of subdivisions, not the number of points.

I find it very natural.  It's important to remember that *the whole point* 
of linspace's existence is to provide arrays with a known, fixed number of 
points:

In [17]: npts = 10

In [18]: len(linspace(0, 5, npts))
Out[18]: 10

In [19]: len(linspace(0, 5, npts, endpoint=False))
Out[19]: 10

So the invariant to preserve is *precisely* the number of points, not the 
step size.  As Guido has pointed out several times, the value of this 
function is precisely to steer people *away* from thinking of step sizes 
in a context where they are more likely than not going to get it wrong.  
So linspace focuses on a guaranteed number of points, and lets the step-size 
chips fall where they may.
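
To make that concrete, here's a rough pure-python sketch of the semantics
(ignoring the num <= 1 edge cases and the careful floating-point handling a
real implementation needs):

def linspace(start, stop, num=50, endpoint=True):
    # The number of points is fixed; only the step changes with `endpoint`.
    step = (stop - start) / float(num - 1 if endpoint else num)
    return [start + i * step for i in range(num)]

linspace(0, 5, 10)                   # always 10 points, the last one is 5.0
linspace(0, 10, 10, endpoint=False)  # still 10 points, the last one is 9.0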


> * The retstep argument changes the return signature from => array to =>
> (array, number). I think that's a pretty ugly thing to do. If linspace
> returned a special iterator object, the step size could be exposed as an
> attribute.

Yup, it's not pretty but understandable in numpy's context, a library that 
has a very strong design focus around arrays, and numpy arrays don't have 
writable attributes:

In [20]: a = linspace(0, 10)

In [21]: a.stepsize = 0.1
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
/home/fperez/ in ()
----> 1 a.stepsize = 0.1

AttributeError: 'numpy.ndarray' object has no attribute 'stepsize'


So while 

Re: [Python-Dev] range objects in 3.x

2011-09-26 Thread Fernando Perez
On Sat, 24 Sep 2011 08:13:11 -0700, Guido van Rossum wrote:

> I expect that to implement a version worthy of the stdlib math module,
> i.e. that computes values that are correct within 0.5ULP under all
> circumstances (e.g. lots of steps, or an end point close to the end of
> the floating point range) we'd need a numerical wizard like Mark
> Dickinson or Tim Peters (retired). Or maybe we could just borrow numpy's
> code.

+1 to using the numpy api, having continuity of API between the two would 
be great (people work interactively with 'from numpy import *', so having 
the linspace() call continue to work identically would be a bonus).

License-wise there shouldn't be major issues in using the numpy code, as 
numpy is all BSD.  Hopefully if there are any, the numpy community can 
help out.  And now that Mark Dickinson is at Enthought (http://enthought.com/company/developers.php) where Travis Oliphant --numpy 
author-- works, I'm sure the process of ironing out any implementation/api 
quirks could be handled easily.

Cheers,

f



Re: [Python-Dev] nonlocal keyword in 2.x?

2009-10-22 Thread Fernando Perez
On Thu, 22 Oct 2009 12:32:37 -0300, Fabio Zadrozny wrote:


> Just as a note, the nonlocal there is not a requirement...
> 
> You can just create a mutable object there and change that object (so,
> you don't need to actually rebind the object in the outer scope).
> 
> E.g.: instead of creating a float in the context, create a list with a
> single float and change the float in the list (maybe the nonlocal would
> be nicer, but it's certainly still usable)

Yup, that's what I meant by 'some slightly ugly solutions' in this note:

http://mail.scipy.org/pipermail/ipython-dev/2009-September/005529.html

in the thread that spawned those notes.  nonlocal allows for this pattern 
to work without the ugliness of writing code like:

s = [s]

@somedeco
def foo():
  s[0] += 1

s = s[0]

just to be able to 'change s' inside the foo() scope.
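
For contrast, this is roughly what the same pattern looks like with nonlocal
(a sketch only: `somedeco` is a stand-in identity decorator since the real one
isn't shown here, and the outer scope has to be a function for nonlocal to
apply):

def somedeco(func):
    return func

def outer():
    s = 0

    @somedeco
    def foo():
        nonlocal s
        s += 1

    foo()
    return s   # 1, with no s = [s] / s[0] dance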


I felt this was both obvious and ugly enough not to warrant too much 
explicit mention, but I probably should have left it there for the sake of 
completeness.  Thanks for the feedback.

Cheers,

f

ps - the above shouldn't be taken as either pro or con on the idea of 
nonlocal in 2.x, just a clarification on why I didn't add the mutable 
container trick to the original notes.



Re: [Python-Dev] PEP 389: argparse - new command line parsing module

2009-09-27 Thread Fernando Perez
On Sun, 27 Sep 2009 14:57:34 -0700, Brett Cannon wrote:

> I am going to state upfront that I am +1 for this and I encouraged
> Steven to submit this PEP on the stdlib-SIG. I still remember watching
> Steven's lightning talk at PyCon 2009 on argparse and being impressed by
> it (along with the rest of the audience which was obviously impressed as
> well).
> 
> I think argparse is a good improvement over what we have now (getopt and
> optparse), it's stable, considered best-of-breed by the community for a
> while (as shown by how many times argparse has been suggested for
> inclusion), and Steven is already a core committer so is already set to
> maintain the code. That covers the usual checklist we have for even
> looking at a PEP to add a module to the standard library.

FWIW, +1 from IPython: we ship a private copy of argparse as well (we use 
it in several places but we want the basic ipython to be installable just 
with the stdlib). Steven was gracious enough to relicense it as BSD at our 
request:

http://mail.scipy.org/pipermail/ipython-dev/2009-September/005537.html

so we could ship this copy without any GPL incompatibilities, but we'd 
much rather rely on stdlib components.

Best,

f



[Python-Dev] Feedback from numerical/math community on PEP 225

2008-11-07 Thread Fernando Perez
Hi all,

a while back there was a discussion about new operators for the language, which
ended in people mentioning that the status of PEP 225 was still undecided and
that it was the most likely route to consider in this discussion.  I offered
to collect some feedback from the numerical and math/scientific computing
communities and report back here.

Here are the results, with some background and links (I'll also paste the
original reST file at the end for reference and archival):

https://cirl.berkeley.edu/fperez/static/numpy-pep225/numpy-pep225.html

I'll mention this thread on the numpy and sage lists in case anyone from the
original discussions would like to pitch in.  

I hope this feedback is useful for you guys to make a decision.  I should note
that I'm just acting as a scribe here, relaying feedback (though I do like the
PEP and think it would be very useful for numpy).  I would most certainly NOT
be available for implementing the PEP in Python itself...

Cheers,

f


### Original reST source for feedback doc kept at
### https://cirl.berkeley.edu/fperez/static/numpy-pep225/numpy-pep225.html

==================================================================
 Discussion regarding possible new operators in Python (PEP 225)
==================================================================

.. Author: Fernando Perez
.. Contact: [EMAIL PROTECTED]
.. Time-stamp: "2008-10-28 16:47:52 fperez"
.. Copyright: this document has been placed in the public domain.

.. contents::
..
1  Introduction
2  Background: matrix multiplication and Numpy
3  Summary from the NumPy community
4  Arguments neutral towards or against the PEP
5  Other considerations and suggestions
  5.1  Operator form for logical_X functions
  5.2  Alternate comparisons
  5.3  Why just go one step?
  5.4  Alternate syntax
  5.5  Unicode operators
  5.6  D-language inspired syntax


Introduction
============

In the Python-dev mailing lists, there were recently two threads regarding the
possibility of adding to the language a new set of operators.  This would
provide easy syntactic support for behavior such as element-wise and
matrix-like multiplication for numpy arrays.  The original python-dev threads
are here:

* http://mail.python.org/pipermail/python-dev/2008-July/081508.html
* http://mail.python.org/pipermail/python-dev/2008-July/081551.html

And the actual PEP that discusses this at length is PEP 225:

* http://www.python.org/dev/peps/pep-0225/

In order to provide feedback from the scientific/numerical community, a
discussion was held on the numpy mailing list.  This document is a brief
summary of this discussion, in addition to some issues that were brought up
during a BOF session that was held during the SciPy'08 conference.  The point
of this document is to provide the Python development team (and ultimately
Guido) with feedback from a community that would be likely an important target
of the features discussed in PEP 225, hoping that a final decision can be made
on the PEP, either as-is or with modifications arising from this feedback.

This document contains a summary of an original discussion whose thread in the
numpy list can be found here:

* http://projects.scipy.org/pipermail/numpy-discussion/2008-August/036769.html
* http://projects.scipy.org/pipermail/numpy-discussion/2008-August/036858.html

and it has been further updated with a final round of vetting.  The final
thread which this document summarizes is here:

* http://projects.scipy.org/pipermail/numpy-discussion/2008-October/038234.html


Background: matrix multiplication and Numpy
===========================================

It is probably useful, for the sake of the Python dev team, to provide a bit of
background regarding the array/matrix situation in `Numpy
<http://numpy.scipy.org>`_, which is at the heart of much of this.  The numpy
library provides, in addition to the basic arrays that support element-wise
operations::

In [1]: from numpy import array, matrix

In [2]: a = array([[1,2],[3,4]])

In [3]: b = array([[10,20],[30,40]])

In [4]: a*b
Out[4]:
array([[ 10,  40],
   [ 90, 160]])

also an object called ``matrix`` that implements classic matrix multiplication
semantics for the ``*`` operator::
   
In [5]: aM = matrix(a)

In [6]: bM = matrix(b)

In [7]: aM*bM
Out[7]:
matrix([[ 70, 100],
[150, 220]])

The existence of two almost-but-not-quite identical objects at the core of
numpy has been the source of a large amount of discussion and effort to provide
smoother integration.  Yet, despite much work it is the opinion of many that
the situation will always remain unsatisfactory, as many pitfalls continue to
exist.  It is very easy to pass a matrix to some routine that does numerical
manipulations on its inputs and forgets to verify that a matrix was received,
and to end up with an array instead afterwards.  While muc

Re: [Python-Dev] Documentation idea

2008-10-15 Thread Fernando Perez
Raymond Hettinger wrote:


> Bright idea
> --
> Let's go one step further and do this just about everywhere and instead of
> putting it in the docs, attach an exec-able string as an
> attribute to our C functions.  Further, those pure python examples should
> include doctests so that the user can see a typical invocation and calling
> pattern.
> 
> Say we decide to call the attribute something like ".python", then you
> could write something like:
> 
> >>> print(all.python)
> def all(iterable):
>     '''Return True if all elements of the iterable are true.
> 

[...]

+1 from the peanut gallery, with a note: since ipython is a common way for
many to use/learn python interactively, if this is adopted, we'd
*immediately* add to ipython's '?' introspection machinery the ability to
automatically find this information.  This way, when people type "all?"
or "all??" we'd fetch the doc and source code.

A minor question inspired by this: would it make sense to split the
docstring part from the code of this .python object?  I say this because in
principle, the docstring should be the same as the 'parent', and it would 
simplify our implementation to eliminate the duplicate printout. 
The .python object could always be a special string-like object made from
combining the pure python code with a single docstring, common to the C and
the Python versions, that would remain exec-able.

In any case, details aside I think this is great and if it comes to pass,
we'll be happy to make it readily accessible to interactive users via
ipython.

Cheers,

f



Re: [Python-Dev] Matrix product

2008-08-01 Thread Fernando Perez
Guido van Rossum wrote:

> On Fri, Jul 25, 2008 at 6:50 PM, Greg Ewing <[EMAIL PROTECTED]>
> wrote:
>> Sebastien Loisel wrote:
>>
>>> What are the odds of this thing going in?
>>
>> I don't know. Guido has said nothing about it so far this
>> time round, and his is the only opinion that matters in the
>> end.
> 
> I'd rather stay silent until a PEP exists, but I should point out that
> last time '@' was considered as a new operator, that character had no
> uses in the language at all. Now it is the decorator marker. Therefore
> it may not be so attractive any more.

Others have indicated already how pep 225 seems to be the best current summary 
of this issue.  Here's a concrete proposal:  the SciPy conference, where a lot 
of people with a direct stake in this matter will be present, will be held 
very soon (August 19-24 at Caltech):

http://conference.scipy.org/

I am hereby volunteering to try to organize a BOF session at the conference on 
this topic, and can come back later with the summary.  I'm also scheduled to 
give a talk at BayPiggies on Numpy/Scipy soon after the conference, so that may 
be a good opportunity to have some further discussions in person with some of 
you.

It's probably worth noting that python is *really* growing in the scientific 
world.  A few weeks ago I ran a session on Python for science at the annual 
SIAM conference (the largest applied math conference in the country), with 
remarkable success:

http://fdoperez.blogspot.com/2008/07/python-tools-for-science-go-to-siam.html

(punchline: we were selected for the annual highlights - 
http://www.ams.org/ams/siam-2008.html#python).

This is just to show that python really matters to scientific users, and its 
impact is growing rapidly, as the tools mature and we reach critical mass so 
the network effects kick in.  It would be great to see this topic considered 
for the language in the 2.7/3.1 timeframe, and I'm willing to help with some of 
the legwork.

So if this idea sounds agreeable to python-dev, I'd need to know whether I 
should propose the BOF using pep 225 as a starting point, or if there are any 
other considerations on the matter I should be aware of (I've read this thread 
in full, but I just want to start on track since the BOF is a one-shot event).  
I'll obviously post this on the numpy/scipy mailing lists so those not coming 
to the conference can participate, but an all-hands BOF is an excellent 
opportunity to collect feedback and ideas from the community that is likely to 
care most about this feature.

Thanks,

f



Re: [Python-Dev] New Developer

2008-01-11 Thread Fernando Perez
Mark Dickinson wrote:

> Hello all,
> I've recently been granted commit privileges; so, following the usual
> protocol, here's a quick introduction.  I'm a mathematician by day;  my
> degree is in number theory, but five summers of Fortran 77 programming and
> two semesters of teaching numerical analysis have given me a taste for
> numerics as well.  I discovered Python around twelve years ago and found
> that it fit my brain nicely (even more so after nested namespaces were
> introduced) and now use it almost daily for a wide variety of tasks.  I've
> been lurking on python-dev for longer than I care to admit to.  I also
> dabble in Haskell and O'Caml.

Very interesting!  Are you aware of Sage? http://sagemath.org.  All
Python-based, developed originally by a number theorist
(http://wstein.org), and with a rapidly growing team of developers
(including John Cremona, who's contributed a lot of his code to Sage).  

The Python-dev team should be proud of the impact Python is having in
scientific computing: python is without a doubt the leading tool for open
source, high-level scientific codes (i.e. not Fortran/C), and growing. 
Thanks!


I normally wouldn't announce this here, but please forgive the mini-spam
(and let's continue off list if you are interested):

http://wiki.sagemath.org/days8

Just contact me off list at [EMAIL PROTECTED] if you think you'd
like to attend.

Cheers,

f



Re: [Python-Dev] Pydoc Improvements / Rewrite

2007-01-05 Thread Fernando Perez
Ron Adam wrote:

> Laurent Gautier wrote:

>  > Off the top of my head, "ipython" (the excellent
>  > interactive console) is possibly using pydoc
>  > (in any case, I would say that the authors would be interested in
>  > developments with pydoc)

Certainly :)  I'd like to ask whether this discussion considers any kind of
merging of pydoc with epydoc.  Many projects (ipython included) are moving
towards epydoc as a more capable system than pydoc, though it would be nice
to see its API-doc-generation capabilities be part of the stdlib.  I don't
know if that's considered either too large or too orthogonal to the current
direction.

> According to the web site, ipython is based on twisted, and is currently
> still limited to python 2.3 and 2.4.  Also, the output of the help() function
> will not change much so I doubt it would be a problem for them.

A few corrections:

- IPython's trunk is NOT based on twisted at all, it's a self-contained
Python package which depends only on the Python standard library (plus
readline support under windows, which we also provide but requires ctypes).

- The ipython development branch does use twisted, but only for its
distributed and parallel computing capabilities.  Eventually when that
branch becomes trunk, there will /always/ be a non-twisted component for
local, terminal-based work.

- The last release (0.7.3) fully supports Python 2.5.  In fact, one nasty
bug in 2.5 with extremely slow traceback generation was kindly fixed by
python-dev in the nick of time after my pestering (an ipython user found it
first and brought it to my attention).

>  > Otherwise a quick search lead to:
>  > - "cgitb" (!? - it seems that the HTML formatting functions of pydoc
>  > are only in use - wouldn't these functions belong more naturally to
>  > "cgi" ?)
> 
> Thanks! These html formatting functions still exist or are small enough to
> move into cgitb, so it will be just a matter of making sure they can be
> reached.  I don't think they will be a problem.

If anyone is interested in updating cgitb, you might want to look at
ipython's ultratb (which was born as a cgitb port to ANSI terminals):

http://projects.scipy.org/ipython/ipython/browser/ipython/trunk/IPython/ultraTB.py

It contains functionality for generating /extremely/ detailed tracebacks
with gobs of local detail.  These verbose tracebacks have let me fix many
ipython bugs from crash dumps triggered by remote code and libraries I
don't have access to, in cases where a normal traceback would have been
insufficient.  Here's a link to a slightly outdated simple example (the
formatting has improved a bit):

http://ipython.scipy.org/screenshots/snapshot1.png

Obviously the right thing to do would be to separate the ANSI coloring from
the structural formatting, so that the traceback could be formatted as
HTML, ANSI colors or anything else.  That is very much on my todo list,
since the network-enabled ipython will have browser-based interfaces in the
future.
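
In rough terms, the split I have in mind looks like this (illustrative names
only, not ultratb's current API):

class AnsiRenderer:
    def frame(self, filename, lineno, func, line):
        return "\x1b[32m%s\x1b[0m:%d in %s\n    %s" % (filename, lineno,
                                                       func, line)

class HtmlRenderer:
    def frame(self, filename, lineno, func, line):
        return "<b>%s</b>:%d in <i>%s</i><pre>%s</pre>" % (filename, lineno,
                                                           func, line)

def format_frames(frames, renderer):
    # `frames` comes from the structural traceback walker as
    # (filename, lineno, func, line) tuples; rendering is delegated entirely
    # to the renderer object.
    return "\n".join(renderer.frame(*f) for f in frames)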

All of ipython is BSD licensed, so any code in there is for the taking.

Best,

f



Re: [Python-Dev] inspect.py very slow under 2.5

2006-09-07 Thread Fernando Perez
Nick Coghlan wrote:

> I've updated the patch on SF, and committed the fix (including PJE's and
> Neal's comments) to the trunk.
> 
> I'll backport it tomorrow night (assuming I don't hear any objections in the
> meantime :).

I just wanted to thank you all for taking the time to work on this, even with
my 11-th hour report.  Greatly appreciated, really.

Looking forward to 2.5!

f



[Python-Dev] inspect.py very slow under 2.5

2006-09-05 Thread Fernando Perez
Hi all,

I know that the 2.5 release is extremely close, so this will probably be
2.5.1 material.  I discussed it briefly with Guido at scipy'06, and he
asked for some profile-based info, which I've only now had time to gather. 
I hope this will be of some use, as I think the problem is rather serious.

For context: I am the IPython lead developer (http://ipython.scipy.org), and
ipython is used as the base shell for several interactive environments, one
of which is the mathematics system SAGE
(http://modular.math.washington.edu/sage).  It was the SAGE lead who first
ran into this problem while testing SAGE with 2.5.

The issue is the following: ipython provides several exception reporting
modes which give a lot more information than python's default tracebacks. 
In order to generate this info, it makes extensive use of the inspect
module.  The module in ipython responsible for these fancy tracebacks is:

http://projects.scipy.org/ipython/ipython/browser/ipython/trunk/IPython/ultraTB.py

which is an enhanced port of Ka-Ping Yee's old cgitb module.

Under 2.5, the generation of one of these detailed tracebacks is /extremely/
expensive, and the cost goes up very quickly the more modules have been
imported into the current session.  While in a new ipython session the
slowdown is not crippling, under SAGE (which starts with a lot of loaded
modules) it is bad enough to make the system nearly unusable.

I'm attaching a little script which can be run to show the problem, but you
need IPython to be installed to run it.  If any of you run ubuntu, fedora,
suse or almost any other major linux distro, it's already available via the
usual channels.
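
(For reference, the measurement itself needs nothing beyond the stdlib; the
attached script drives ipython's ultraTB formatter, but its shape is roughly
the following, with the stdlib traceback module standing in for ultraTB.  Run
it as a script so profile.run() can see report() in __main__:)

import profile, pstats, traceback

def report():
    try:
        1 / 0
    except ZeroDivisionError:
        return traceback.format_exc()

profile.run("for i in range(100): report()", "tb.prof")
stats = pstats.Stats("tb.prof")
stats.sort_stats("calls")
stats.print_stats(0.25)   # same <0.25> restriction as the listings below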

In case you don't want to (or can't) run the attached code, here's a summary
of what I see on my machine (ubuntu dapper).  Using ipython under python
2.4.3, I get:

 2268 function calls (2225 primitive calls) in 0.020 CPU seconds

   Ordered by: call count
   List reduced from 127 to 32 due to restriction <0.25>

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
      305    0.000    0.000    0.000    0.000 :0(append)
  259/253    0.010    0.000    0.010    0.000 :0(len)
      177    0.000    0.000    0.000    0.000 :0(isinstance)
       90    0.000    0.000    0.000    0.000 :0(match)
       68    0.000    0.000    0.000    0.000 ultraTB.py:539(tokeneater)
       68    0.000    0.000    0.000    0.000 tokenize.py:16(generate_tokens)
       61    0.000    0.000    0.000    0.000 :0(span)
       57    0.000    0.000    0.000    0.000 sre_parse.py:130(__getitem__)
       56    0.000    0.000    0.000    0.000 string.py:220(lower)

etc, while running the same script under ipython/python2.5 and no other
changes gives:

 230370 function calls (229754 primitive calls) in 3.340 CPU seconds

   Ordered by: call count
   List reduced from 83 to 21 due to restriction <0.25>

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
    55003    0.420    0.000    0.420    0.000 :0(startswith)
    45026    0.264    0.000    0.264    0.000 :0(endswith)
    20013    0.148    0.000    0.148    0.000 :0(append)
    12138    0.180    0.000    0.660    0.000 posixpath.py:156(islink)
    12138    0.192    0.000    0.192    0.000 :0(lstat)
    12138    0.180    0.000    0.288    0.000 stat.py:60(S_ISLNK)
    12138    0.108    0.000    0.108    0.000 stat.py:29(S_IFMT)
    11838    0.680    0.000    1.244    0.000 posixpath.py:56(join)
     4837    0.052    0.000    0.052    0.000 :0(len)
     4362    0.028    0.000    0.028    0.000 :0(split)
     4362    0.048    0.000    0.100    0.000 posixpath.py:47(isabs)
     3598    0.036    0.000    0.056    0.000 string.py:218(lower)
     3598    0.020    0.000    0.020    0.000 :0(lower)
     2815    0.032    0.000    0.032    0.000 :0(isinstance)
     2809    0.028    0.000    0.028    0.000 :0(join)
     2808    0.264    0.000    0.520    0.000 posixpath.py:374(normpath)
     2632    0.040    0.000    0.068    0.000 inspect.py:35(ismodule)
     2143    0.016    0.000    0.016    0.000 :0(hasattr)
     1884    0.028    0.000    0.444    0.000 posixpath.py:401(abspath)
     1557    0.016    0.000    0.016    0.000 :0(range)
     1078    0.008    0.000    0.044    0.000 inspect.py:342(getfile)


These enormous numbers of calls are the origin of the slowdown, and the more
modules have been imported, the worse it gets.

I haven't had time to dive deep into inspect.py to try and fix this, but I
figured it would be best to at least report it now.  As far as IPython and
its user projects are concerned, I'll probably hack things to overwrite
inspect.py from 2.4 over the 2.5 version in the exception reporter, because
the current code is simply unusable for detailed tracebacks.  It would be
great if this could be fixed in the trunk at some point.

I'll be happy to provide further feedback or put this information elsewhere. 
Guido suggested initially posting here, but if you prefer it on the SF
tracker (even a

Re: [Python-Dev] 2.5 and beyond

2006-07-04 Thread Fernando Perez
Thomas Heller wrote:

> I would like to ask about the possibility to add some improvements to
> ctypes
> in Python 2.5, although the feature freeze is now in effect.  Hopefully
> former third-party libraries can have the freeze relaxed somewhat;-).
> 
> I intend to do these changes, the first is a small and trivial one, but
> allows a lot of flexibility:

[...]

I'd just like to provide a bit of context for Thomas' request (disclaimer:
he did NOT ask me to write this, nor did anyone else).  I understand the
release managers' need to be strict with the freeze, but perhaps knowing
what's behind this particular change will help them make a more informed
decision.

Numpy (http://numpy.scipy.org/) is the new python array package for
numerical computing, which has been developed at enormous effort by Travis
Oliphant (with community help) over the last year, as a way to unify the
old Numeric package (written by Jim Hugunin, of Jython and IronPython fame)
and Numarray (written by the Hubble telescope team).  The effect of numpy
in the community, even in its current pre-1.0 form, has been tremendous. 
There is a real, pressing need in the scientific world for open source and
technically superior replacements to Matlab and IDL, the proprietary 800-lb
gorillas of the field.  Many major research institutions across the world
are seriously looking at python as fulfilling this role, but the previous
situation of a divided library (Numeric/numarray) was keeping a lot of
people on the fence.

With Travis' effort and numpy maturing towards a 1.0 release right around
the time of python 2.5, a LOT of people have come out of the woodwork to
contribute code, ideas, documentation, etc.  There is a real sense that the
combination of python2.5 (with better 64-bit and __index__ support) and
numpy will provide a significant advancement for scientific computing with
modern, high-level tools.

In this particular community, the need to interface with low-level existing
libraries is probably much more common than in other fields.  There are
literally millions of lines of C/C++ code for scientific work which we have
to use efficiently, and this is an everyday need for us.  While there are a
number of tools for this (SWIG, Boost::Python, pyrex, scipy.weave,...),
very recently people have discovered how useful ctypes can be for this
task.  One of the Google SoC projects (http://2006.planet-soc.com/blog/140)
started trying to wrap libsvm with SWIG and a week of frustrated efforts
led nowhere.  Albert then discovered ctypes and in a few hours was up and
running.  This has generated a lot of interest in the numpy crowd for
ctypes, and people would really, really like to see python2.5 come 'out of
the box' with as solid a support as possible from ctypes for numpy array
interfacing.

Ultimately the decision is up to the release team, I know that.  But at
least with this info, I hope you can understand:

1. why this is important to this community

2. why the timing isn't ideal: it is only /very/ recently that the numpy
team 'discovered' how much ctypes could truly help with a necessary (and
often very unpleasant) task in the numerical/python world.


Thanks for reading,


f



Re: [Python-Dev] Python sprint mechanics

2006-05-06 Thread Fernando Perez
"Martin v. Löwis" wrote:

> Tim Peters wrote:
>> Since I hope we see a lot more of these problems in the future, what
>> can be done to ease the pain?  I don't know enough about SVN admin to
>> know what might be realistic.  Adding a pile of  "temporary
>> committers" comes to mind, but wouldn't really work since people will
>> want to keep working on their branches after the sprint ends.  Purely
>> local SVN setups wouldn't work either, since sprint projects will
>> generally have more than one worker bee, and they need to share code
>> changes.
> 
> I think Fredrik Lundh points to svk at such occasions.

Allow me to make a suggestion.  A few months ago, Robert Kern (scipy dev)
taught me a very neat trick for this kind of problem; we were going to hack
on ipython together and wanted a quick way to share code without touching
the main SVN repo (though both of us have commit rights).  His solution was
very simple, involving just twisted and bazaar-ng (Robert had some bad
experiences with SVK, though I suppose you could use that as well).

I later had the occasion to test it on a multi-day sprint with other
developers, and was extremely happy with the results.  For that sprint, I
documented the process here:

http://projects.scipy.org/neuroimaging/ni/wiki/SprintCodeSharing

You might find this useful.  In combination with IP aliasing over a local
subnet to create stable IPs, and a few named aliases in a little hosts
file, a group of people can find each other's data in an extremely
efficient way over a several days, sync back and forth as needed, etc.  At
the end of the sprint, the 'real' committers can evaluate how much of what
got done they want to push to the upstream SVN servers.

I hope this helps, feel free to ask for further details.

Cheers,

f



Re: [Python-Dev] Google Summer of Code proposal: improvement of long int and adding new types/modules.

2006-04-29 Thread Fernando Perez
Hi all,

Mateusz Rukowicz wrote:

> I wish to participate in Google Summer of Code as a python developer. I
> have few ideas, what would be improved and added to python. Since these
> changes and add-ons would be codded in C, and added to python-core
> and/or as modules,I am not sure, if you are willing to agree with these
> ideas.
> 
> First of all, I think, it would be good idea to speed up long int
> implementation in python. Especially multiplying and converting
> radix-2^k to radix-10^l. It might be done, using much faster algorithms
> than already used, and transparently improve efficiency of multiplying
> and printing/reading big integers.
> 
> Next thing I would add is multi precision floating point type to the
> core and fraction type, which in some cases highly improves operations,
> which would have to be done using floating point instead.
> Of course, math module will need update to support multi precision
> floating points, and with that, one could compute asin or any other
> function provided with math with precision limited by memory and time.
> It would be also good idea to add function which computes pi and exp
> with unlimited precision.
> And last thing - It would be nice to add some number-theory functions to
> math module (or new one), like prime-tests, factorizations etc.

Sorry for pitching in late, I was away for a while.  I'd just like to point
out in the context of this discussion:

http://sage.scipy.org/sage/

SAGE is a fairly comprehensive system built on top of python to do all sorts
of research-level number theory work, from basic things up to
unpronouncable ones.  It includes wrappers to many of the major
number-theory related libraries which exist with an open-source license.

I am not commenting one way or another on your proposal, just bringing up a
project with a lot of relevance to what you are talking about.

Cheers,

f



Re: [Python-Dev] [Python-checkins] r43041 - python/trunk/Modules/_ctypes/cfield.c

2006-03-18 Thread Fernando Perez
"Martin v. Löwis" wrote:

> M.-A. Lemburg wrote:

>> Do you really think module authors do have a choice given that last
>> sentence ?
> 
> I really do. Most developers will not be confronted with 64-bit systems
> for several years to come. That current hardware supports a 64-bit mode
> is only one aspect: Most operating system installations on such hardware
> will continue to operate in 32-bit mode for quite some time.

I think it's worth pointing out that the above is not true in a fairly
significant market: AMD hardware under Linux.  Just about any AMD chip you
can buy for a desktop these days is 64-bit, and all major linux
distributions have out-of-the box native x86-64 support (via a native build
downloadable as a separate install CD, typically).  So while it may well be
true that most Win32 users who have 64 bit hardware will still be using it
in 32 bit mode, in the Linux world it is /extremely/ common to find native
64 bit users.  If you want confirmation, stop by the scipy list anytime for
any of the recurrent battles being fought on the 64 bit front (often
related to obscure compilation problems with Fortran code).

So I think M.A. is right on the money here with his statement.  Unless you
consider the Linux/64bit camp insignificant.  But if that is the case, it
might be worth putting a big statement in the 2.5 release notes indicating
"there is a good chance, if you use third party extensions and a 64 bit OS,
that this won't work for you".  Which will mean that a fraction of the
scientific userbase (a big, enthusiastic and growing community of python
users) will have to stick to 2.4.

Regards,

f



Re: [Python-Dev] When do sets shrink?

2006-01-01 Thread Fernando Perez
Raymond Hettinger wrote:

> It might be better to give more generic advice that tends to be true
> across implementations and versions:  "Dense collections like lists and
> tuples iterate faster than sparse structures like dicts and sets.
> Whenever repeated iteration starts to dominate application run-time,
> consider converting to a dense representation for faster iteration and
> better memory/cache utilization."  A statement like this remains true
> whether or not a down-sizing algorithm is present.

Thanks.  While I certainly wasn't advocating an early optimization approach, I
think that part of using a tool effectively is also knowing its dark corners. 
Sometimes you _do_ need them, so it's handy to have the little 'break the glass
in case of an emergency' box :)

>> Cheers,
>> 
>> f
> 
> Hmm, your initial may be infringing on another developer's trademarked
> signature ;-)

Well, tough.  It happens to be my name, and I've been signing like this since
long before I started using python.  I'll think about changing when the
lawsuits come knocking, if I can't get the EFF to defend me ;-)


Thanks again for your feedback.  Until a suitable wiki comes along, I've kept
your message in my python-info folder as a potentially useful nugget.

Regards,

f 



Re: [Python-Dev] When do sets shrink?

2005-12-31 Thread Fernando Perez
Raymond Hettinger wrote:

> Was Guido's suggestion of s=set(s) unworkable for some reason?  dicts
> and sets emphasize fast lookups over fast iteration -- apps requiring
> many iterations over a collection may be better off converting to a list
> (which has no dummy entries or empty gaps between entries).
> 
> Would the case be improved by incurring the time cost of 999,998 tests
> for possible resizing (one for each pop) and some non-trivial number of
> resize operations along the way (each requiring a full-iteration over
> the then current size)?

Note that this is not a comment on the current discussion per se, but rather a
small request/idea in the docs department: I think it would be a really useful
thing to have a summary page/table indicating the complexities of the various
operations on all the builtin types, including at least _mention_ of subtleties
and semi-gotchas.

Python is growing in popularity, and it is being used for more and more
demanding tasks all the time.  Such a 'complexity overview' of the language's
performance would, I think, be very valuable to many.   I know that much of
this information is available, but I'm talking about a specific summary, which
also discusses things like Noam's issue.  

For example, I had never realized that on dicts, for some O(N) operations, N
would mean "largest N in the dict's history" instead of "current number of
elements".  While I'm not arguing for any changes, I think it's good to _know_
this, so I can plan for it if I am ever in a situation where it may be a
problem.
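
(A small illustration of that point, for the record: in CPython a dict that
once held many keys keeps its large table after deletions, so iterating it
costs time proportional to the historical maximum until it is rebuilt:)

d = dict.fromkeys(range(1000000))
for k in range(999998):
    del d[k]
# len(d) is now 2, but list(d) still walks the million-slot table...
list(d)
# ...until the dict is rebuilt at its current size:
d = dict(d)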

Just my 1e-2.

And Happy New Year to the python-dev team, with many thanks for all your
fantastic work on making the most pleasant, useful programming language out
there.

Cheers,

f



Re: [Python-Dev] a quit that actually quits

2005-12-29 Thread Fernando Perez
Nick Coghlan wrote:

> As Fernando pointed out, anything else means we'd be well on our way to
> re-inventing IPython (although I'd be interested to know if sys.inputhook
> would have made IPython easier to write).

[sorry if this drifts off-topic for python-dev.  I'll try to provide useful info
on interactive computing in python here, and will gladly answer further
detailed discussion about ipython on the ipython-dev/user lists ]

In my case, I don't think it would have made that much of a difference in the
end, though initially it might have been tempting to use it.  IPython started
as my private collection of sys.ps{1,2} + sys.displayhook hacks in
$PYTHONSTARTUP.  I then discovered LazyPython, which had a great
sys.excepthook, and IPP, which was a full-blown derivative of
code.InteractiveConsole.  I decided to join all three projects, and thus was
ipython born.  Given that IPP was the 'architecture', from the moment we had
what is still today's ipython, it was based on code.InteractiveConsole, and at
that point I doubt that having sys.inputhook would have mattered.

Incidentally, just two days ago I removed the last connection to code.py: at
this point I had overridden so many methods that there was no point in keeping
the inheritance relationship.  All I had to do was copy _two_ remaining
methods, and the main ipython class became standalone (this frees me for
ongoing redesign work, so it was worth doing it).

So in summary, while sys.inputhook would make it easy to do _lightweight_
interactive customizations, if you really want a more sophisticated and
featureful system, it probably won't matter.

Note that this is not an argument against sys.inputhook: exposing
customizability here may indeed be useful.  This will allow people to write,
with minimal effort, systems which pre-process special syntaxes and ride on top
of the python engine.  IPython exposes the exact same thing as a customizable
hook (called prefilter), and people have made some excellent use of this
capability.  The most notable one is SAGE:

http://modular.ucsd.edu/sage

a system for interactive mathematical computing (NSF funded).  If anyone is in
the San Diego area, the first SAGE meeting is in February:

http://modular.ucsd.edu/sage/days1/

and I'll be giving a talk there about ipython, including some of its design and
my plans for a more ambitious system for interactive computing (including
distributed computing) based on Python.  The prototypes of what we've done so
far are briefly described here (the first was funded by Google as a summer of
code project):

http://ipython.scipy.org/misc/ipython-notebooks-scipy05.pdf
http://ipython.scipy.org/misc/scipy05-parallel.pdf

I hope this is of some use and interest.

Regards,

f



Re: [Python-Dev] a quit that actually quits

2005-12-29 Thread Fernando Perez
Fredrik Lundh wrote:

> Fernando Perez wrote:
> 
>> In [1]: x='hello'
>>
>> In [2]: x?
> /.../
>> Docstring:
>> str(object) -> string
>>
>> Return a nice string representation of the object.
>> If the argument is a string, the return value is the same object.
> 
> I'm not sure what I find more confusing: a help system that claims that
> the variable x returns a nice string representation of an object, or that
> there's no help to be found for "hello".

Then, complain about docstrings:

Python 2.3.4 (#1, Feb  2 2005, 12:11:53)
[GCC 3.4.2 20041017 (Red Hat 3.4.2-6.fc3)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> x='hello'
>>> print x.__doc__
str(object) -> string

Return a nice string representation of the object.
If the argument is a string, the return value is the same object.



In ipython, '?' does the best it can to collect information about an object,
making heavy use of python's introspection capabilities.  It provides class and
constructor docstrings, function call signatures (built from the function code
object), and more.  Using ?? gives even more details, including
syntax-highlighted source when available.  For example:

In [5]: pydoc.ErrorDuringImport??
Type:   classobj
String Form:pydoc.ErrorDuringImport
Namespace:  Interactive
File:   /usr/lib/python2.3/pydoc.py
Source:
class ErrorDuringImport(Exception):
"""Errors that occurred while trying to import something to document it."""
def __init__(self, filename, (exc, value, tb)):
self.filename = filename
self.exc = exc
self.value = value
self.tb = tb

def __str__(self):
exc = self.exc
if type(exc) is types.ClassType:
exc = exc.__name__
return 'problem in %s - %s: %s' % (self.filename, exc, self.value)
Constructor information:
Definition: pydoc.ErrorDuringImport(self, filename, (exc, value, tb))


I'm sorry it can't provide the information you'd like to see.  Many people,
including myself, still find it useful.  You are welcome to use it, and
patches to improve it will be well received.

Best,

f

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] a quit that actually quits

2005-12-29 Thread Fernando Perez
Walter Dörwald wrote:

> Alex Martelli wrote:
> 
>> On 12/28/05, Walter Dörwald <[EMAIL PROTECTED]> wrote:
>>   ...
>>> We have sys.displayhook and sys.excepthook. Why not add a sys.inputhook?
>>
>> Sure, particularly with Nick's suggestion for a default input hook it would
>> be fine.
> 
> I'd like the inputhook to be able to define the prompt. I'm not sure how this
> could be accomplished.
> 
> Another API would be that the inputhook returns what line or command should be
> executed instead, e.g.
> 
> def default_inputhook(statement):
>if statement.endswith("?"):
>   return "help(%s)" % statement[:-1]
>etc.

And you're on your way to re-writing ipython:

In [1]: x='hello'

In [2]: x?
Type:   str
Base Class: 
String Form:hello
Namespace:  Interactive
Length: 5
Docstring:
str(object) -> string

Return a nice string representation of the object.
If the argument is a string, the return value is the same object.

I also started it with "let's add a cool hack to sys.ps1 and sys.displayhook in
10 minutes".  Now we're at 18000 lines of python, a 70 page manual, and growing
support for remote distributed interactive computing, a new scientific computing
GUI, and more  :)   If you like this kind of thing, by all means join in: I can
use all the helping hands I can get.

Cheers,

f

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] a quit that actually quits

2005-12-28 Thread Fernando Perez
Steve Holden wrote:

> Except that if you have iPython installed on Windows you *don't* enter
> the platform EOF any more, you enter CTRL/D (which threw me for a
> while).

To be fair, that's due to the win32 readline library used by ipython, which
modifies console handling.  IPython itself doesn't do anything to the EOF
conventions; it's pure python code with no direct access to the console.

Cheers,

f

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] a quit that actually quits

2005-12-28 Thread Fernando Perez
Alex Martelli wrote:

> 
> On Dec 28, 2005, at 3:24 AM, Michael Hudson wrote:

>> The thing that bothers me about it is that the standard way you tell
>> python to do something is "call a function" -- to me, a special case
>> for exiting the interpreter seems out of proportion.
> 
> Just brainstorming, but -- maybe this means we should generalize the
> idea?  I.e., allow other cases in which "just mentioning X" means
> "call function Y [with the following arguments]", at least at the
> interactive prompt if not more generally.  If /F's idea gets
> implemented by binding to names 'exit' and 'quit' the result of some
> factory-call with "function to be called" set to sys.exit and
> "arguments for it" set to () [[as opposed to specialcasing checks of
> last commandline for equality to 'exit' &c]] then the implementation
> of the generalization would be no harder.  I do find myself in
> sessions in which I want to perform some action repeatedly, and
> currently the least typing is 4 characters (x()) while this
> would reduce it to two (iPython does allow such handy shortcuts, but
> I'm often using plain interactive interpreters).
> 
> If this generalization means a complicated implementation, by all
> means let's scrap it, but if implementation is roughly as easy, it
> may be worth considering to avoid making a too-special "special
> case" (or maybe not, but brainstorming means never having to say
> you're sorry;-).

Allow me to add a few comments here: as the ipython author, I happen to have
thought an awful lot about all these issues.  

First, your suggestion of incorporating 'autocalling' (the automatic 'foo a' ->
'foo(a)' transformation) into the core python interpreter may not be a very
good idea.  The code that does this is precisely the most brittle, delicate
part of ipython, a little regexp/eval/introspection dance that tries really,
really hard to understand whether 'foo' is a string that will point to a
callable once evaluated, but without eating generators, causing side effects,
or anything else.  I know it sounds silly, and perhaps it's just my
limitations, but it has taken several years to flesh out all the corner cases
where this code could fail (and in the past, a few really surprising failure
cases have been found).

In ipython, this functionality can still be turned off (via %autocall) at any
time, in case it is not working correctly.  You are welcome to look at the
code, it's the _prefilter method here:

http://projects.scipy.org/ipython/ipython/file/ipython/trunk/IPython/iplib.py

So while I think that this is extremely useful in a third-party library like
ipython, it's probably a little too delicate for the core, official interpreter
where reliability is so absolutely critical.  In fact, my own standard is that
I trust the CPython prompt as a test of 'doing the right thing', so I like that
it remains simple and solid.

Now, on to the wider discussion started by the quit/exit debate: the problem is
that we are here trying to make some particular words 'interactive commands' in
a language that doesn't have such a notion to begin with.  The tension over how
best to implement it, with the various low-level tricks proposed, stems (I
think) from this underlying fact.

In IPython, I've convinced myself that this is a problem whose proper solution
requires an orthogonal command subsystem, the 'magics' (credit where credit is
due: the magic system was already in IPP, Janko Hauser's library which was one
of ipython's three original components; Janko understood the issue early on and
got it right).  This creates a separate namespace, controlled by a
disambiguation character (the % prefix in ipython), and therefore allows you to
cleanly offer the kind of behavior and semantics which are more convenient for
command-line use (whitespace argument separation, --dashed-options) without
colliding with the underlying language.  

By having the 'magic' command system be separate, it is also trivially
extensible by the users (see
http://ipython.scipy.org/doc/manual/node6.html#SECTION00062000 for
details).  This means that instead of going into a never-ending rabbit-chase of
special cases, you now have a well-defined API (IPython's is primitive but it
works, and we're cleaning it up as part of a major rework) where users can add
arbitrary command functionality which remains separate from the language
itself.
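
To make this concrete, here is a toy sketch of the idea (my own illustration
for this message, not IPython's actual prefilter API): a '%'-prefixed command
namespace that sits on top of the normal interpreter, with everything else
passed through untouched.

```
MAGICS = {}

def magic_cd(arg):
    """A toy 'control command': change directory and report where we are."""
    import os
    os.chdir(arg)
    print os.getcwd()

MAGICS['cd'] = magic_cd

def prefilter(line):
    """Send '%'-prefixed lines to the command namespace; anything else is
    ordinary Python and goes to the interpreter unmodified."""
    if line.startswith('%'):
        parts = line[1:].split(None, 1)
        if not parts:
            return line                    # a bare '%', leave it alone
        name = parts[0]
        if len(parts) > 1:
            arg = parts[1]
        else:
            arg = ''
        func = MAGICS.get(name)
        if func is None:
            print "Unknown command: %s" % name
        else:
            func(arg)
        return ''          # nothing left for the Python interpreter to see
    return line            # plain Python, untouched
```

The important property is that the command namespace (MAGICS here) lives
completely apart from user variables, so adding a new command can never shadow
anything in the user's session.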

The main point here is, I think, that any good interactive environment for a
programming language requires at least TWO separate kinds of syntax:

1. the language itself, perhaps with aids to economize typing at the command
line (ipython offers many: auto-calling, auto-quoting, readline tricks, input
history as a Python list, output caching, macros, and more).

2. a set of control commands, meant to manipulate the environment itself.  These
can obviously be implemented in the underlying language, but there should be a
way to keep them in a separate namespace (so they don't collide with user
variables).

Re: [Python-Dev] Conclusion: Event loops, PyOS_InputHook, and Tkinter

2005-11-15 Thread Fernando Perez
Michiel Jan Laurens de Hoon wrote:

> There are several other reasons why the alternative solutions that came
> up in this discussion are more attractive than IPython:
> 1) AFAICT, IPython is not intended to work with IDLE.

Not so far, but mostly by accident. The necessary changes are fairly easy
(mainly abstracting out assumptions about being in a tty).  I plan on making
ipython embeddable inside any GUI (including IDLE), as there is much demand for
that.

> 2) I didn't get the impression that the IPython developers understand
> why and how their event loop works very well (which made it hard to
> respond to their posts). I am primarily interested in understanding the
> problem first and then come up with a suitable mechanism for events.
> Without such understanding, IPython's event loop smells too much like a
> hack.

I said I did get that code off the ground by stumbling in the dark, but I tried
to explain to you what it does, which is pretty simple:

a. You find, for each toolkit, what its timer/idle mechanism is.  This requires
reading a little about each toolkit's API, as they all do it slightly
differently.  But the principle is always the same, only the implementation
details change.

b. You subclass threading.Thread, as you do for all threading code.  The run
method of this class manages a one-entry queue where code is put for execution
from stdin.

c. The timer you set up with the info from (a) calls the method which executes
the code object from the queue in (b), with suitable locking.

That's pretty much it.  Following this same idea, just this week I implemented
an ipython-for-OpenGL shell.  All I had to do was look up what OpenGL uses for
an idle callback.
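
To show the shape of steps (a)-(c) in one place, here is a stripped-down
sketch.  It is not the Shell.py code: it uses Tkinter's after() timer for step
(a) simply because Tkinter ships with the standard library, a daemon thread
reading stdin plays the role of the shell thread with its one-entry queue for
step (b), and the timer callback executes whatever is waiting for step (c).

```
import threading, traceback
import Queue      # 'queue' in later Python versions
import Tkinter    # 'tkinter' in later Python versions

class ShellThread(threading.Thread):
    """Step (b): read code from stdin and hand it over via a one-entry queue."""
    def __init__(self, code_queue):
        threading.Thread.__init__(self)
        self.code_queue = code_queue
        self.setDaemon(True)

    def run(self):
        while True:
            try:
                line = raw_input('py> ')
            except EOFError:
                break
            try:
                code = compile(line, '<input>', 'single')
            except SyntaxError:
                traceback.print_exc()
                continue
            # put() blocks until the GUI thread has consumed the previous entry.
            self.code_queue.put(code)

def main():
    root = Tkinter.Tk()
    code_queue = Queue.Queue(maxsize=1)   # the one-entry queue
    user_ns = {'root': root}

    def run_pending():
        # Step (c): the toolkit timer executes whatever code is waiting.
        try:
            code = code_queue.get_nowait()
        except Queue.Empty:
            pass
        else:
            try:
                exec code in user_ns
            except Exception:
                traceback.print_exc()
        root.after(100, run_pending)      # step (a): re-arm the toolkit timer

    ShellThread(code_queue).start()
    root.after(100, run_pending)
    root.mainloop()

if __name__ == '__main__':
    main()
```

The real code is per-toolkit and much more careful about locking and error
handling, but the control flow is the same.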

> 3) IPython adds another layer on top of Python. For IPython's purpose,
> that's fine. But if we're just interested in event loops, I think it is
> hard to argue that another layer is absolutely necessary. So rather than
> setting up an event loop in a layer on top of Python, I'd prefer to find
> a solution within the context of Python itself (be it threads, an event
> loop, or PyOS_InputHook).

I gave you a link to a 200 line script which implements the core idea for GTK
without any ipython at all.  I explained that in my message.  I don't know how
to be more specific, ipython-independent or clear with you.

> 4) Call me a sentimental fool, but I just happen to like regular Python.

That's fine.  I'd argue that ipython is exceptionally useful in a scientific
computing workflow, but I'm obviously biased.  Many others in the scientific
community seem to agree with me, though, given the frequency of ipython prompts
in posts to the scientific computing lists.  

But this is free software in a free world: use whatever you like.  All I'm
interested in is in clarifying a technical issue, not in evangelizing ipython;
that's why I gave you a link to a non-ipython example which implements the key
idea using only the standard python library.

> My apologies in advance to the IPython developers if I misunderstood how
> it works.

No problem.  But your posts so far seem to indicate you hardly read what I said,
as I've had to repeat several key points over and over (the non-ipython
solutions, for example).

Cheers,

f

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Event loops, PyOS_InputHook, and Tkinter

2005-11-13 Thread Fernando Perez
Josiah Carlson wrote:

> Or heck, if you are really lazy, people can use a plot() calls, but
> until an update_plot() is called, the plot isn't updated.

I really recommend that those interested in all these issues have a look at
matplotlib.  All of this has been dealt with there already, a long time ago, in
detail.  The solutions may not be perfect, but they do work for a fairly wide
range of uses, including the interactive case.

There may be a good reason why mpl's approach is insufficient, but I think that
the discussion here would be more productive if that were stated precisely and
explicitly.  Up to this point, all the requirements I've been able to
understand clearly  work just fine with ipython/mpl (though I may well have
missed the key issue, I'm sure).

Cheers,

f

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Event loops, PyOS_InputHook, and Tkinter

2005-11-13 Thread Fernando Perez
Michiel Jan Laurens de Hoon wrote:

> Fernando Perez wrote:

>>Did you read my reply? ipython, based on code.py, implements a few simple
>>threading tricks (they _are_ simple, since I know next to nothing about
>>threading) and gives you interactive use of PyGTK, WXPython and PyQt
>>applications in a manner similar to Tkinter.
>>
> That may be, and I think that's a good thing, but it's not up to me to
> decide if PyGtk should support interactive use. The PyGtk developers
> decide whether they want to decide to spend time on that, and they may
> decide not to, no matter how simple it may be.

OK, I must really not be making myself very clear.  I am not saying anything
about the pygtk developers: what I said is that this can be done by the
application writer, trivially, today.  There's nothing you need from the
authors of GTK.  Don't take my word for it, look at the code:

1. You can download ipython; it's a trivial pure-python install.  Grab
matplotlib and see for yourself (which also addresses the repaint issues you
mentioned).  You can test the gui support without mpl as well.

2. If you don't want to download/install ipython, just look at the code that
implements these features:

http://projects.scipy.org/ipython/ipython/file/ipython/trunk/IPython/Shell.py

3. If you really want to see how simple this is, you can run this single,
standalone script:

http://ipython.scipy.org/tmp/pyint-gtk.py

I wrote this when I was trying to understand the necessary threading tricks for
GTK; it's a little multithreaded GTK shell based on code.py.  230 lines of code
total, including readline support and (optional) matplotlib support.  Once this
was running, the ideas in it were folded into the more complex ipython
codebase.


At this point, I should probably stop posting on this thread.  I think this is
drifting off-topic for python-dev, and I am perhaps misunderstanding the
essence of your problem for some reason.  All I can say is that many people are
doing scientific interactive plotting with ipython/mpl and all the major GUI
toolkits, and they seem pretty happy about it.

Best,

f

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Event loops, PyOS_InputHook, and Tkinter

2005-11-13 Thread Fernando Perez
Michiel Jan Laurens de Hoon wrote:

> For an extension module such as
> PyGtk, the developers may decide that PyGtk is likely to be run in
> non-interactive mode only, for which the PyGtk mainloop is sufficient.

Did you read my reply? ipython, based on code.py, implements a few simple
threading tricks (they _are_ simple, since I know next to nothing about
threading) and gives you interactive use of PyGTK, WXPython and PyQt
applications in a manner similar to Tkinter.  Meaning that, from the command
line, you can make a window, change its title, add buttons to it, etc., all the
while both your interactive prompt and the GUI remain responsive.  With that
support, matplotlib can be used to do scientific plotting with any of these
toolkits and no blocking of any kind (cross-thread signal handling is another
story, but you didn't ask about that).

As I said, there may be something in your problem that I don't understand.  But
it is certainly possible, today, to have a non-blocking Qt/WX/GTK-based
scientific plotting application with interactive input.  The ipython/matplotlib
combo has done precisely that for over a year (well, Qt support was added this
April).

Cheers,

f

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Event loops, PyOS_InputHook, and Tkinter - Summary attempt

2005-11-12 Thread Fernando Perez
Jim Jewett wrote:

> (6)  Mark Hammond suggests that it might be easier to
> replace the interactive portions of python based on the
> "code" module.  matplotlib suggests using ipython
> instead of standard python for similar reasons.
> 
> If that is really the simplest answer (and telling users
> which IDE to use is acceptable), then ... I think Michiel
> has a point.

I don't claim to understand all the low-level details of this discussion, by a
very long shot.  But as the author of ipython, at least I'll mention what
ipython does to help with this problem.  Whether that is a satisfactory solution
for everyone or not, I won't get into.

For starters, ipython is an extension of the code.InteractiveConsole class, even
though by now I've changed so much that I could probably just stop using any
inheritance at all.  But this is just to put ipython in the context of the
stdlib.  

When I started using matplotlib, I wanted to be able to run my code
interactively and get good plotting, as much as I used to have with Gnuplot
before (IPython ships with extended Gnuplot support beyond what the default
Gnuplot.py module provides).  With help from John Hunter (matplotlib - mpl for
short - author), we were able to add support for ipython to happily coexist
with matplotlib when either the GTK or the WX backends were used. mpl can plot
to Tk, GTK, WX, Qt and FLTK; Tk worked out of the box (because of the Tkinter
event loop integration in Python), and with our hacks we got GTK and WX to
work.  Earlier this year, with the help of a few very knowledgeable Qt
developers, we extended the same ideas to add support for Qt as well.  As part
of this effort, ipython can generically (meaning, outside of matplotlib)
support interactive non-blocking control of WX, GTK and Qt apps; you get that
by starting it with

ipython -wthread/-gthread/-qthread

The details of how this works are slightly different for each toolkit, but the
overall approach is the same for all.  We just register with each toolkit's
idle/timer system a callback to execute pending code which is waiting in what
is essentially a one-entry queue.  I have a private branch where I'm adding
similar support for OpenGL windows using the GLUT idle function, though it's not
ready for release yet.  So far this has worked quite well.
If anyone wants to see the details, the relevant code is here:

http://projects.scipy.org/ipython/ipython/file/ipython/trunk/IPython/Shell.py

It may not be perfect, and it may well be the wrong approach.  If so, I'll be
glad to learn how to do it better: I know very little about threading and I got
this to work more or less by stumbling in the dark.

In particular, one thing that definitely does NOT work is mixing TWO GUI
toolkits together.  There is a hack (the -tk option) to try to allow mixing of
ONE of Qt/WX/GTK with Tk, but it has only ever worked on Debian, and we don't
really understand why.  I think it's some obscure combination of how the
low-level threading support for many different libraries is compiled in Debian.

As far as using IDLE/Emacs/whatever goes (I use Xemacs personally for my own
editing), our approach has been to simply tell people that the _interactive
shell_ should be ipython always.  They can use anything they want to edit their
code with, but they should execute their scripts with ipython.  ipython has a
%run command which allows code execution with a ton of extra control, so the
work cycle with ipython is more or less:

1. open the editor you like to use with your foo.py code.  Hack on foo.py

2. whenever you wish to test your code, save foo.py

3. switch to the ipython window, and type 'run foo'.  Play with the results
interactively (the foo namespace updates the interactive one after completion).

4. rinse, repeat.

In the matplotlib/scipy mailing lists we've more or less settled on 'this is
what we support.  If you don't like it, go write your own'.  It may not be
perfect, but it works reasonably for us (I use this system 10 hours a day in
scientific production work, and so does John, so we do eat our own dog food).

Given that ipython is trivial to install (it's pure python code with no extra
dependencies under *nix and very few under win32), and that it provides so much
additional functionality on top of the default interactive interpreter, we've
had no complaints so far.

OK, I hope this information is useful to some of you.  Feel free to contact me
if you have any questions (I will monitor the thread, but I follow py-dev on
gmane, so I do miss things sometimes).

Cheers,

f

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] reference counting in Py3K

2005-09-08 Thread Fernando Perez
Josiah Carlson wrote:

> Fernando Perez <[EMAIL PROTECTED]> wrote:

>> Would you care to elaborate on the reasons behind the 'ick'?  I'm a big fan
>> of weave.inline and have used it very successfully for my own needs, so I'm
>> genuinely curious (as I tend to teach its use, I like to know of potential
>> problems I may not have seen).
> 
> 1. Mixing multiple languages in a single source file is bad form, yet it
> seems to be encouraged in weave.inline and other such packages (it
> becomes a big deal when the handful of Python becomes 20+ lines of C).

Agreed.  I only use inline with explicit C strings for very short stuff, and
typically use a little load_C_snippet() utility I wrote.  That lets me keep
the C sources in real C files, with proper syntax highlighting in Xemacs and
whatnot.
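
Roughly, the pattern looks like the sketch below.  The load_C_snippet() shown
here is only a sketch of the idea, not the actual utility, and the weave calls
assume scipy.weave as it existed at the time of writing:

```
from scipy import weave

def load_C_snippet(filename):
    """Hypothetical helper: keep the C source in a real .c file and read it in."""
    f = open(filename)
    try:
        return f.read()
    finally:
        f.close()

def sum_to(n):
    """Very short snippets can still live as explicit strings in the Python source."""
    code = """
    long total = 0;
    for (long i = 0; i < n; ++i) total += i;
    return_val = total;
    """
    return weave.inline(code, ['n'])

# Longer C code would be kept in its own file and loaded at call time, e.g.:
#   code = load_C_snippet('my_kernel.c')
#   weave.inline(code, ['a', 'b', 'n'])
```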

[... summary of weave problems]

> Agreed.  I admit that some of my issues would likely be lesser if I were
> to start to use inline now, with additional experience with such things.
> But with a few thousand lines of Pyrex and C working right now, I'm hard
> pressed to convince anyone (including myself) that such a switch is
> worthwhile.

Thanks for your input.  I certainly wasn't trying to suggest you change; I was
just curious about your experiences.  If you ever see this again, specific
feedback on the scipy list would be very welcome.  While I'm not 'officially'
a scipy developer, I care enough about weave that occasionally I dig in and go
on bugfixing expeditions.  With proper bug reports we could improve a system
which I think has a place (especially for scientific computing, with the Blitz
support for arrays, which gives Numpy-like arrays in C++).  I don't see weave
as a competitor to pyrex, but rather as an alternate tool which can be
excellent in certain contexts, and which I'd like to see improve where
possible.

Regards,

f

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] reference counting in Py3K

2005-09-08 Thread Fernando Perez
Josiah Carlson wrote:

> Here's a perspective "from the trenches" as it were.
> 
> I've been writing quite a bit of code, initially all in Python (27k
> lines in the last year or so).  It worked reasonably well and fast. It
> wasn't fast enough. I needed a 25x increase in performance, which would
> have been easily attainable if I were to rewrite everything in C, but
> writing a module in pure C is a bit of a pain (as others can attest), so
> I gave Pyrex a shot (after scipy.weave.inline, ick).

Would you care to elaborate on the reasons behind the 'ick'?  I'm a big fan of
weave.inline and have used it very successfully for my own needs, so I'm
genuinely curious (as I tend to teach its use, I like to know of potential
problems I may not have seen).  

I should also add that a while ago a number of extremely annoying spurious
recompilation bugs were finally fixed, in case this was what bothered you. 
Those bugs (hard to find) made weave in certain cases useless, as it
recompiled everything blindly, thus killing its whole purpose.

Feel free to reply off-list if you feel this is not appropriate for python-dev,
though I think that a survey of the c-python bridges may be of interest to
others.

Cheers,

f

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] SWIG and rlcompleter

2005-08-16 Thread Fernando Perez
Guido van Rossum wrote:

> (3) I think a better patch is to use str(word)[:n] instead of word[:n].

Mmh, I'm not so sure that's a good idea, as it leads to this:

In [1]: class f: pass
   ...:

In [2]: a=f()

In [3]: a.__dict__[1] = 8

In [4]: a.x = 0

In [5]: a.
a.1  a.x

In [5]: a.1

   File "", line 1
 a.1
   ^
SyntaxError: invalid syntax


In general, named attribute access (foo.x) is only valid for string names to
begin with (what about unicode in there?).  Instead, this is what I've actually
implemented in ipython:

words = [w for w in dir(object) if isinstance(w, basestring)]

That does allow unicode; I'm not sure if that's a good thing to do.

Cheers,

f

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] SWIG and rlcompleter

2005-08-16 Thread Fernando Perez
Michael Hudson wrote:

> [EMAIL PROTECTED] writes:
> 
>> You don't need something like a buggy SWIG to put non-strings in dir().
>>
> class C: pass
>> ...
> C.__dict__[3] = "bad wolf"
> dir(C)
>> [3, '__doc__', '__module__']
>>
>> This is likely to happen "legitimately", for instance in a class that allows
>> x.y and x['y'] to mean the same thing. (if the user assigns to x[3])
> 
> I wonder if dir() should strip non-strings?

Me too.  And it would be a good idea, I think, to specify this behavior
explicitly in the dir() docs.  Right now at least rlcompleter and ipython's
completer can break due to this; there may be other tools out there with
similar problems.

If  it's a stated design goal that dir() can return non-strings, that's fine.  I
can filter them out in my completion code.  I'd just like to know what the
official stance on dir()'s return values is.

Cheers,

f

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] SWIG and rlcompleter

2005-08-15 Thread Fernando Perez
Guido van Rossum wrote:

> (1) Please use the SF patch manager.
> 
> (2) Please don't propose adding more bare "except:" clauses to the
> standard library.
> 
> (3) I think a better patch is to use str(word)[:n] instead of word[:n].

Sorry to jump in, but this same patch was proposed for ipython, and my reply
was that it appeared to me to be a SWIG bug.  From:

http://www.python.org/doc/2.4.1/lib/built-in-funcs.html

the docs for dir() seem to suggest that dir() should only return strings (I am
inferring that from things like 'The resulting list is sorted
alphabetically').  The docs are not fully explicit on this, though.

Am I interpreting the docs correctly, in which case this should be considered a
SWIG bug?  Or is it OK for objects to stuff non-strings into __dict__, in which
case SWIG is fine and rlcompleter (and the corresponding system in ipython)
does need to protect against this situation?

I'd appreciate a clarification here, so I can close my own ipython bug report
as well.

Thanks,

f

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP: Migrating the Python CVS to Subversion

2005-07-29 Thread Fernando Perez
"Martin v. Löwis" wrote:

> Fernando Perez wrote:
>> For ipython, which recently went through cvs2svn, I found that moving over
>> to a
>> project/trunk structure was a few minutes worth of work.  Since svn has
>> moving commands, it was just a matter of making the extra project/ directory
>> and
>> moving things one level down the hierarchy.  Even if cvs2svn doesn't quite
>> create things the way you want them in the long run, svn is flexible enough
>> that a few manual tweaks should be quite easy to perform.
> 
> Doesn't this give issues as *every* file the starts out renamed? e.g.
> what if you want "revision 100 of project/trunk/foo", but, at revision
> 100, it really was trunk/project/foo?

To be honest, I don't really know the details, but it seems to work fine. A
quick look at ipython:

planck[IPython]> svn update
At revision 661.

planck[IPython]> svn diff -r 10 genutils.py | tail
-
-Deprecated: this function has been superceded by timing() which has better
-fucntionality."""
-
-rng = range(reps)
-start = time.clock()
-for dummy in rng: func(*args,**kw)
-return time.clock()-start
-
-#*** end of file  **

Revision 10 was most certainly back in the early CVS days, and the wholesale
renaming happened when I started using svn, which was around revision 600 or
so.  There may be other subtleties I'm missing, but so far I haven't
experienced any problems.

Cheers,

f

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP: Migrating the Python CVS to Subversion

2005-07-28 Thread Fernando Perez
"Martin v. Löwis" wrote:

> Fred L. Drake, Jr. wrote:
>> More interestingly, keeping it in a single repository makes it easier to
>> merge
>> projects, or parts of projects, together, without losing the history.  This
>> would be useful when developing packages that may be considered for the
>> standard library, but which also need to continue separate releases to
>> support older versions of Python.  We've found it very handy to keep multiple
>> projects in a single repository for zope.org.
> 
> So do you use project/trunk or trunk/project? If the former, I would
> need to get instructions on how to do the conversion from CVS.

For ipython, which recently went through cvs2svn, I found that moving over to a
project/trunk structure was a few minutes worth of work.  Since svn has moving
commands, it was just a matter of making the extra project/ directory and
moving things one level down the hierarchy.  Even if cvs2svn doesn't quite
create things the way you want them in the long run, svn is flexible enough
that a few manual tweaks should be quite easy to perform.

Cheers,

f

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP: Migrating the Python CVS to Subversion

2005-07-28 Thread Fernando Perez
Tim Peters wrote:

> [Martin v. Löwis]

>> The conversion should be done using cvs2svn utility, available e.g.
>> in the cvs2svn Debian package. The command for converting the Python
>> repository is

[...]
 
> I'm sending this to Jim Fulton because he did the conversion of Zope
> Corp's code base to SVN.  Unfortunately, Jim will soon be out of touch
> for several weeks.  Jim, do you have time to summarize the high bits
> of the problems you hit?  IIRC, you didn't find any conversion script
> at the time capable of doing the whole job as-is.  If that's wrong, it
> would be good to know that too.

If you hit any snags, you may be interested in contacting the admin for
scipy.org.  The scipy CVS repo choked cvs2svn pretty badly a while ago, but
recent efforts eventually prevailed.  This afternoon an email arrived from
him:

 Original Message 
Subject: [SciPy-dev] SciPy CVS to Subversion migration
Date: Thu, 28 Jul 2005 20:02:59 -0500
From: Joe Cooper <[EMAIL PROTECTED]>
Reply-To: SciPy Developers List <[EMAIL PROTECTED]>
To: SciPy Dev List <[EMAIL PROTECTED]>

Hi all,

The issues with converting our CVS repository to Subversion have been 
resolved, and so I'd like to make the switch tomorrow (Friday) afternoon.

[...]



I know Joe was in contact with the SVN devs to work on this, so perhaps he's
using a patched version of cvs2svn; I simply don't know.  But I mention it in
case it proves useful to the python.org conversion.

cheers,

f

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP: Migrating the Python CVS to Subversion

2005-07-28 Thread Fernando Perez
"Martin v. Löwis" wrote:

> Converting the CVS Repository
> =
> 
> The Python CVS repository contains two modules: distutils and
> python. Keeping them together will produce quite long repository
> URLs, so it is more convenient if the Python CVS and the distutils
> CVS are converted into two separate repositories.

If I understand things correctly, one project/one repo creates a 'hard' barrier
for moving code across projects (that is, moving it while retaining history,
which within a single repository is just an svn command).  Is the 'long url'
really the only argument for this, and is it significant enough?  Instead of:

https://svn.python.org/python
https://svn.python.org/distutils

you could have

https://svn.python.org/main/python
https://svn.python.org/main/distutils

or something similar.  It's an extra few chars, and it would give a convenient
way to branch off pieces of the main code into their own subprojects in the
future if needed.

For more experimental things, you can always have other repos:

https://svn.python.org/someotherrepo/...

But maybe the issue of moving code isn't too important; I'm certainly no expert
on svn.

Cheers,

f

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] A bug in pyconfig.h under Linux?

2005-06-14 Thread Fernando Perez
"Martin v. Löwis" wrote:

> Fernando Perez wrote:
>> sorry for posting to this list, but I'm not even 100% sure this is a bug. 
>> If it is, I'll gladly post it to SF if you folks want it there.
> 
> This is not a bug. Most likely, sc_weave.cpp fails to meet the
> requirement documented in
> 
> http://docs.python.org/api/includes.html
> 
> "Warning:  Since Python may define some pre-processor definitions which
> affect the standard headers on some systems, you must include Python.h
> before any standard headers are included. "

Many thanks to Martin and Jeff Epler for the feedback, and sorry for the noise;
the bug was in weave and not in pyconfig.h.

I was able to fix scipy's weave to respect this constraint, which it didn't in
the case of blitz-enhanced array handling code.

Regards,

f

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] A bug in pyconfig.h under Linux?

2005-06-14 Thread Fernando Perez
Hi all,

sorry for posting to this list, but I'm not even 100% sure this is a bug.  If it
is, I'll gladly post it to SF if you folks want it there.

I use scipy a lot, and the weave.inline component in there allows dynamic
inclusion of C/C++ code in Python sources.  In particular, it supports Blitz++
array expressions for access to Numeric arrays.  However, whenever I use
blitz-based code, I get these annoying warnings:


In file included from /usr/include/python2.3/Python.h:8,
                 from sc_weave.cpp:5:
/usr/include/python2.3/pyconfig.h:850:1: warning: "_POSIX_C_SOURCE" redefined
In file included from /usr/lib/gcc/i386-redhat-linux/3.4.3/../../../../include/c++/3.4.3/i386-redhat-linux/bits/os_defines.h:39,
                 from /usr/lib/gcc/i386-redhat-linux/3.4.3/../../../../include/c++/3.4.3/i386-redhat-linux/bits/c++config.h:35,
                 from /usr/lib/gcc/i386-redhat-linux/3.4.3/../../../../include/c++/3.4.3/string:45,
                 from /usr/lib/python2.3/site-packages/weave/blitz-20001213/blitz/blitz.h:153,
                 from /usr/lib/python2.3/site-packages/weave/blitz-20001213/blitz/array-impl.h:154,
                 from /usr/lib/python2.3/site-packages/weave/blitz-20001213/blitz/array.h:94,
                 from sc_weave.cpp:4:
/usr/include/features.h:150:1: warning: this is the location of the previous definition


This is on a Fedora Core 3 box, using glibc-headers.i386 version 2.3.5.

The source of the problem seems to be that in
file /usr/include/python2.3/pyconfig.h, line 850, I have:

/* Define to activate features from IEEE Stds 1003.1-2001 */
#define _POSIX_C_SOURCE 200112L

But the system headers, in /usr/include/features.h, line 150 give:

# define _POSIX_C_SOURCE  199506L

Hence the double-define.  Now, I noticed that the system headers all use the
following approach to defining these constants:

# undef  _POSIX_SOURCE
# define _POSIX_SOURCE  1
# undef  _POSIX_C_SOURCE
# define _POSIX_C_SOURCE  199506L

etc.  That is, they undef everything before defining their value.  I applied the
same change manually to pyconfig.h:

/* Define to activate features from IEEE Stds 1003.1-2001 */
#undef _POSIX_C_SOURCE
#define _POSIX_C_SOURCE 200112L

and my spurious warnings went away.  But I realize that pyconfig.h is
auto-generated, so the right solution, if this is indeed a bug, has to be
applied somewhere else, at the code generation source.  I am unfortunately not
familiar enough with Python's build system and the autoconf toolset to do that. 
Furthermore, I am not even 100% sure this is really a bug, though the spurious
warning is very annoying.

If this is indeed a bug, do you folks want it reported on SF as such?  In that
case, is this explanation enough/correct?  Any advice would be much
appreciated.

Regards,

Fernando.

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Thoughts on stdlib evolvement

2005-06-06 Thread Fernando Perez
Josiah Carlson wrote:

> Fernando Perez wrote:

>> I've wondered if it wouldn't be better if the std lib were all stuffed into
>> its own namespace:
>> 
>> from std import urllib
>> 
>> If a more structured approach is desired, it could be
>> 
>> from std.www import urllib
> 
> This generally brings up the intersection of stdlib and nonstdlib naming
> hierarchy.  More specifically, what does "import email" mean?
> Presumably it means to import the email module or package, but from the
> current module directory, or from the standard library?

Well, I've thought of this (lightly) mostly as a py3k thing, since it would
require that a bare 'import email' fails, as it would become 'import
std.email', or 'import std.www.email' or whatever.  A plain 'import email'
would then refer to some third-party 'email' module, not part of the standard
library.

Since this would mean a massive break of existing code, it would necessarily be
a py3k issue.  But nonetheless the idea of confining the stdlib to the 'std'
namespace does have some appeal, at least to me.

Best,

f

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Thoughts on stdlib evolvement

2005-06-06 Thread Fernando Perez
Skip Montanaro wrote:

> I wouldn't mind a stdlib that defined a set of top-level packages (some of
> which might be wholly unpopulated by modules in the standard distribution)
> It might, for example, define a gui package and gui.Tkinter and gui._tkinter
> modules, leaving the remainder of gui namespace available for 3rd party
> modules.  Such a scheme would probably work best if there was some fairly
> lightweight registration/approval process in the community to avoid needless
> collisions.  For example, it might be a bit confusing if two organizations
> began installing their packages in a subpackage named gui.widgets.  An
> unsuspecting person downloading an application that used one version of
> gui.widgets into environment with the conflicting gui.widgets would run into
> trouble.

I've wondered if it wouldn't be better if the std lib were all stuffed into its
own namespace:

from std import urllib

If a more structured approach is desired, it could be 

from std.www import urllib

for example.  But having std. as the top-level namespace for the stdlib could
simplify life a lot in the long run (I think).  If a decision for a more
structured namespace is made, then it might be nice to have the same top-level
structure in site-packages, albeit empty by default:

from std.www import foo  -> standard library www packages

from www import bar  -> third-party www packages

Third-party packages can still be put into base site-packages, of course, but
developers could be encouraged to transition into putting things into these
categories.

This would also ease the process of 'staging' a module as external for a while
before deciding whether it meets the requirement for being put into the
stdlib.

Just an idea (sorry if it's been discussed and shot down before).

best,

f

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] Re: Re: Re: Re: Patch review: [ 1094542 ] add Bunch type to collections module

2005-01-28 Thread Fernando Perez
Steven Bethard wrote:

> That sounds reasonable to me.  I'll fix update to be a staticmethod.
> If people want other methods, I'll make sure they're staticmethods
> too.[1]
> 
> Steve
> 
> [1] In all the cases I can think of, staticmethod is sufficient -- the
> methods don't need to access any attributes of the Bunch class.  If
> anyone has a good reason to make them classmethods instead, let me
> know...

Great.  I probably meant staticmethod.  I don't use either much, so I don't
really know the difference in the terminology.  For a long time I stuck to 2.1
features for ipython and my other codes, and I seem to recall those appeared in
2.2.  But you got what I meant :)

Cheers,

f

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] Re: Re: Patch review: [ 1094542 ] add Bunch type to collections module

2005-01-27 Thread Fernando Perez
Steven Bethard wrote:

> Fernando Perez <[EMAIL PROTECTED]> wrote:

> My feeling about this is that if the name of the attribute is held in
> a variable, you should be using a dict, not a Bunch/Struct.  If you
> have a Bunch/Struct and decide you want a dict instead, you can just
> use vars:
> 
> py> b = Bunch(a=1, b=2, c=3)
> py> vars(b)
> {'a': 1, 'c': 3, 'b': 2}

Well, the problem I see here is that often, you need to mix both kinds of
usage.  It's reasonable to have code for which Bunch is exactly what you need
in most places, but where you have a number of accesses via variables whose
value is resolved at runtime.  Granted, you can use getattr(bunch,varname), or
make an on-the-fly dict as you indicated above.  But since Bunch is above all
a convenience class for common idioms, I think supporting a common need is a
reasonable idea.  Again, just my opinion.
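
As a tiny (hypothetical) illustration of the mixed usage I mean:

```
class Bunch(object):               # stand-in for the proposed class
    def __init__(self, **kw):
        self.__dict__.update(kw)

b = Bunch(color='red', width=80)
for name in ('color', 'width'):    # imagine these names only arrive at runtime
    print name, getattr(b, name)   # b[name] would read more naturally here
```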

>> Another useful feature of this Struct class is the 'merge' method.
> [snip]
>> my values() method allows an optional keys argument, which I also
>> find very useful.
> 
> Both of these features sound useful, but I don't see that they're
> particularly more useful in the case of a Bunch/Struct than they are
> for dict.  If dict gets such methods, then I'd be happy to add them to
> Bunch/Struct, but for consistency's sake, I think at the moment I'd
> prefer that people who want this functionality subclass Bunch/Struct
> and add the methods themselves.

It's very true that these are almost a request for a dict extension.  Frankly,
I'm too swamped to follow up with a pep/patch for it, though.  Pity, because
they can be really useful... Takers?

> I'm probably not willing to budge much on adding dict-style methods --
> if you want a dict, use a dict.  But if people think they're
> necessary, there are a few methods from Struct that I wouldn't be too
> upset if I had to add, e.g. clear, copy, etc.  But I'm going to need
> more feedback before I make any changes like this.

You already have update(), which by the way precludes a bunch storing an
'update' attribute.  My class suffers from the same problem, just with many
more names.  I've thought about this, and my favorite solution so far would be
to provide whichever dict-like methods end up implemented (update, merge (?),
etc) with a leading single underscore.  I simply don't see any other way to
cleanly distinguish between a bunch which holds an 'update' attribute and the
update method.  

I guess making them classmethods (or is it staticmethods? I don't use those so
I may be confusing terminology) might be a clean way out:

Bunch.update(mybunch, othermapping) -> modifies mybunch.

Less traditional OO syntax for bunches, but this would sidestep the potential
name conflicts.

Anyway, these are just some thoughts.  Feel free to take what you like.

Regards,

f

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] Re: Re: Re: Patch review: [ 1094542 ] add Bunch type to collections module

2005-01-27 Thread Fernando Perez
Steven Bethard wrote:

> Fernando Perez wrote:
>> Steven Bethard wrote:
>> > I'm probably not willing to budge much on adding dict-style methods --
>> > if you want a dict, use a dict.  But if people think they're
>> > necessary, there are a few methods from Struct that I wouldn't be too
>> > upset if I had to add, e.g. clear, copy, etc.  But I'm going to need
>> > more feedback before I make any changes like this.
>> 
>> You already have update(), which by the way precludes a bunch storing an
>> 'update' attribute.
> 
> Well, actually, you can have an update attribute, but then you have to
> call update from the class instead of the instance:

[...]

Of course, you are right.

However, I think it would perhaps be best to advertise any methods of Bunch as
strictly classmethods from day 1.  Otherwise, you can have:

b = Bunch()
b.update(otherdict) -> otherdict happens to have an 'update' key

... more code

b.update(someotherdict) -> boom! update is not callable

If all Bunch methods are officially presented always as classmethods, users can
simply expect that all attributes of a bunch are meant to store data, without
any instance methods at all.
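
Something along these lines is what I have in mind (just a sketch, not a
proposed implementation):

```
class Bunch(object):
    def __init__(self, **kw):
        self.__dict__.update(kw)

    # Exposed only as Bunch.update(b, ...), never called as b.update(...),
    # so a data attribute named 'update' can never break it.
    @staticmethod
    def update(bunch, *args, **kw):
        bunch.__dict__.update(*args, **kw)

b = Bunch()
Bunch.update(b, {'update': 3})     # 'update' is now just data stored on b
Bunch.update(b, {'x': 1})          # still works: we always go via the class
print b.update, b.x                # -> 3 1
```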

Regards,

f

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] Re: Patch review: [ 1094542 ] add Bunch type to collections module

2005-01-27 Thread Fernando Perez
Hi all,

Steven Bethard wrote:

> Alan Green <[EMAIL PROTECTED]> wrote:
>> Steven Bethard is proposing a new collection class named Bunch. I had
>> a few suggestions which I attached as comments to the patch - but what
>> is really required is a bit more work on the draft PEP, and then
>> discussion on the python-dev mailing list.
>> 
>> http://sourceforge.net/tracker/?func=detail&aid=1100942&group_id=5470&atid=305470
> 
> I believe the correct tracker is:
> 
> http://sourceforge.net/tracker/index.php?func=detail&aid=1094542&group_id=5470&atid=305470

A while back, when I started writing ipython, I had to write this same class (I
called it Struct), and I ended up building a fairly powerful one for handling
ipython's recursive configuration system robustly.

The design has some nasty problems which I'd change if I were doing this today
(I was just learning the language at the time).  But it also taught me a few
things about what one ends up needing from such a beast in complex situations.

I've posted the code here as plain text and syntax-highlighted html, in case
anyone is interested:

http://amath.colorado.edu/faculty/fperez/python/Struct.py
http://amath.colorado.edu/faculty/fperez/python/Struct.py.html

One glaring problem of my class is the blocking of dictionary method names as
attributes; this would have to be addressed differently.

But one thing which I really find necessary from a useful 'Bunch' class is
the ability to access attributes via foo[name] (which requires implementing
__getitem__).  Named access is convenient when you _know_ the name you need
(foo.attr).  However, if the name of the attribute is held in a variable, IMHO 
foo[name] beats getattr(foo,name) in clarity and feels much more 'pythonic'.

Another useful feature of this Struct class is the 'merge' method.  While mine
is probably overly flexible and complex for the stdlib (though it is
incredibly useful in many situations), I'd really like dicts/Structs to have
a single non-destructive way of updating (update automatically overwrites with
the new data).  Currently this is best done with a loop, but a 'merge' method
which would work like 'update', except without overwriting existing keys,
would be a great improvement, I think.

Finally, my values() method allows an optional keys argument, which I also
find very useful.  If this keys sequence is given, values are returned only
for those keys.  I don't know if anyone else would find such a feature useful,
but I do :).  It allows a kind of 'slicing' of dicts which can be really
convenient.
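
In rough outline, the kind of interface I have in mind looks like the sketch
below (far simpler than the Struct linked above, and only meant to illustrate
the conveniences just described):

```
class Struct(object):
    def __init__(self, **kw):
        self.__dict__.update(kw)

    # Item access mirrors attribute access, for names held in variables.
    def __getitem__(self, key):
        return self.__dict__[key]

    def __setitem__(self, key, value):
        self.__dict__[key] = value

    def merge(self, other):
        """Like update(), but non-destructive: existing keys are left alone."""
        for key, value in other.items():
            if key not in self.__dict__:
                self.__dict__[key] = value

    def values(self, keys=None):
        """All values, or only the values for an explicit sequence of keys."""
        if keys is None:
            return self.__dict__.values()
        return [self.__dict__[k] for k in keys]

s = Struct(a=1, b=2)
s.merge({'b': 99, 'c': 3})                # 'b' survives, 'c' is added
print s['a'], s.b, s.values(['c', 'a'])   # -> 1 2 [3, 1]
```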

I understand that my Struct is much more of a dict/Bunch hybrid than what you
have in mind.  But in heavy usage, I quickly realized that at least having
__getitem__ implemented was an immediate need in many cases.

Also, the Bunch class should have a way of returning its values easily as a
plain dictionary, for cases when you want to pass this data into a function
which expects a true dict.  Otherwise, it will 'lock' your information in.

I really would like to see such a class in the stdlib, as it's something that
pretty much everyone ends up rewriting.  I certainly don't claim my
implementation to be a good reference (it isn't).  But perhaps it can be
useful to the discussion as an example of a 'battle-tested' such class, flaws
and all.

I think the current pre-PEP version is a bit too limited to be generally
useful in complex, real-world situations.  It would be a good starting point
to subclass for more demanding situations, but IMHO it would be worth
considering a more powerful default class.

Regards,



___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com