Writing Android application using GPS data with Python

2012-04-15 Thread Noam Peled
I want to write an Android application using Python. I've found 2 options for 
that: Kivy and SL4A. In Kivy, at least for now, I can't use the GPS data. 
Does anyone know if I can get the GPS data using SL4A with Python? As I 
understand it, one can write commercial apps using Kivy. On the other hand, 
with SL4A you must first install SL4A and Python on your Android device, so 
I'm not sure it's suitable for commercial apps. One last question: can I use 
funf with Python?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: DreamPie - The Python shell you've always dreamed about!

2010-03-01 Thread Noam Yorav-Raphael
Can you try DreamPie 1.0.1 and let me know if it still happens?

There's a bug report system at launchpad.net/dreampie.

Thanks,
Noam
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: DreamPie - The Python shell you've always dreamed about!

2010-03-01 Thread Noam Yorav-Raphael
This is most probably a bug discovered in DreamPie 1.0 (See
https://bugs.launchpad.net/dreampie/+bug/525652 )

Can you try to download DreamPie 1.0.1, and if it still happens,
report a bug?

Thanks!
Noam
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: DreamPie - The Python shell you've always dreamed about!

2010-02-23 Thread Noam Yorav-Raphael
Thanks! I'm happy you like it!
Thanks for the feedback too. Here are my replies.

On Sun, Feb 21, 2010 at 7:13 PM, Chris Colbert  wrote:
> This is bloody fantastic! I must say, this fixes everything I hate about
> Ipython and gives me the feature I wished it had (with a few minor
> exceptions).
> I confirm this working on Kubuntu 9.10 using the ppa listed on the sites
> download page.
Great. It's important to know.

> I also confirm that it works interactively with PyQt4 and PyGtk (as to be
> expected since these toolkits use the PyOS_inputhook for the mainloop).
> However, it does not work interactively with wx (again, this is as expected
> since wx doesn't use the PyOS_inputhook). In short, the gui toolkit support
> is the same as in Ipython if you dont use any of the magic threading
> switches, which are now deprecated anyway.
Actually, DreamPie currently doesn't use PyOS_inputhook, but
implements the GUI hooks by itself. So it should be possible to
implement wx support if there's a way to handle events for a few
milliseconds. I tried it a bit and didn't find a way to do it - if you
are interested in wx support and think you can help, please do.

> Matplotlib does not work interactively for me. Is there a special switch
> that needs to be used? or should a pick a non-wx backend? (i'm thinking the
> latter is more likely)
You should set "interactive:True" in your matplotlibrc file. The next
DreamPie version will warn about this.

> A couple of things I would like to see (and will help implement if I can
> find the time):
> 1) A shortcut to show the docstring of an object. Something like Ipython's
> `?`. i.e.  `object.foo?` translates to `help(object.foo)`
I wrote this at http://wiki.python.org/moin/DreamPieFeatureRequests .
I hope I will manage to implement this soon.

> 2) How do I change the color of the blinking cursor at the bottom? I can't
> see the damn thing!
It should be in the color of the default text. If this is not the
case, please file a bug!

> 3) line numbers instead of the `>>>` prompt
I know IPython does this, but I thought you needed it only if placing
the cursor on top of the command doesn't do anything. Can you tell me
why you need this in a graphical user interface?

> 4) a plugin facility where we can define our own `magic` commands. I use
> Ipython's %timeit ALL the time.
Added it to the feature request page.

> 5) Double-click to re-fold the output section as well.
I don't think that's a good idea, because usually double-click selects
the word, and I don't want to change that behavior for regular text.
You can use ctrl-minus to fold the last output section!

> Thanks for making this
Thanks for the feedback!

Noam
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: DreamPie - The Python shell you've always dreamed about!

2010-02-21 Thread Noam Yorav-Raphael
Delete \Documents and Settings\\DreamPie and it should now
work.
Did you edit the colors using the configuration window or manually?
If you edited them using the configuration window, can you give
instructions on how to reproduce the bug?

Noam

On Feb 21, 3:06 pm, "Aage Andersen"  wrote:
> I reinstalled and got this message:
>
> Traceback (most recent call last):
>   File "dreampie.py", line 4, ()
>   File "dreampielib\gui\__init__.pyc", line 972, main()
>   File "dreampielib\gui\__init__.pyc", line 153,
> __init__(self=DreamPie(path..."window_main"),
> pyexec='C:\\Python26\\python.exe')
>   File "dreampielib\gui\__init__.pyc", line 829,
> configure(self=DreamPie(path..."window_main"))
>   File "dreampielib\gui\tags.pyc", line 224,
> apply_theme_text(textview=,
> textbuffer=, theme={('bracket-match', 'bg', 'color'):
> 'darkblue', ('bracket-match', 'bg', 'isset'): True, ('bracket-match', 'fg',
> 'color'): 'white', ('bracket-match', 'fg', 'isset'): False, ...})
> ValueError: unable to parse colour specification
>
> Aage

-- 
http://mail.python.org/mailman/listinfo/python-list


DreamPie - The Python shell you've always dreamed about!

2010-02-21 Thread Noam Yorav-Raphael
I'm pleased to announce DreamPie 1.0 - a new graphical interactive
Python shell!

Some highlights:

* Has whatever you would expect from a graphical Python shell -
attribute completion, tooltips which show how to call functions,
highlighting of matching parentheses, etc.
* Fixes a lot of IDLE nuisances - in DreamPie interrupt always works,
history recall and completion works as expected, etc.
* Results are saved in the Result History.
* Long output is automatically folded so you can focus on what's
important.
* Jython and IronPython support makes DreamPie a great tool for
exploring Java and .NET classes.
* You can copy any amount of code and immediately execute it, and you
can also copy code you typed interactively into a new file, with the
Copy Code Only command. No tabs are used!
* Free software licensed under GPL version 3.

Check it out at http://dreampie.sourceforge.net/ and tell me what you
think!

Have fun,
Noam
-- 
http://mail.python.org/mailman/listinfo/python-list


reloading all modules

2009-03-09 Thread Noam Aigerman
Hi,

Is there a way to use the reload() function, or something else, to
refresh all of the modules you have imported?

I don't care whether the module that runs this code is itself
reloaded, so whichever is easiest...

Thanks, Noam
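For what it's worth, a rough sketch in modern Python (which spells the builtin reload() as importlib.reload()). The helper name and the prefix filter are illustrative; blindly reloading everything in sys.modules is fragile, so restricting the reload to one package's modules is safer:

```python
import importlib
import sys
import types

def reload_all(prefix):
    """Reload every imported module whose name starts with prefix.

    Reloading truly everything in sys.modules tends to break extension
    and frozen modules, so filtering by a package prefix is safer.
    """
    for name, module in sorted(sys.modules.items()):
        if not name.startswith(prefix):
            continue
        if not isinstance(module, types.ModuleType):
            continue
        try:
            importlib.reload(module)
        except (ImportError, NotImplementedError):
            pass  # some modules simply refuse to be reloaded

import json
reload_all("json")  # e.g. refresh the json package and its submodules
assert json.dumps([1, 2]) == "[1, 2]"
```

Reload order matters: modules holding references to objects from other modules keep the old objects until they are reloaded themselves, which is why a true "reload everything" is notoriously unreliable.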

--
http://mail.python.org/mailman/listinfo/python-list


RE: Kill a function while it's being executed

2009-02-17 Thread Noam Aigerman
Hi,
Sorry for resurrecting an old thread, but it just bothers me that this
is the best way that python has to deal with killing running
functions... it's quite an ugly hack, no?

Is it a feature that's needed but missing from Python, or was it left
out on purpose (the same way Java deprecated Thread.stop and
Thread.suspend because they were not safe)?

Thanks, Noam
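As an illustration of the alternative usually suggested for this problem (a sketch, not part of the original thread's answer): run each function in a child process, which, unlike a thread, can be terminated from the outside. The helper name is illustrative:

```python
import multiprocessing
import time

def run_with_timeout(func, args=(), timeout=5.0):
    # A process, unlike a thread, can be killed safely from outside,
    # so a runaway function cannot hang the caller.
    proc = multiprocessing.Process(target=func, args=args)
    proc.start()
    proc.join(timeout)          # wait at most `timeout` seconds
    if proc.is_alive():
        proc.terminate()        # hard-kill the stuck function
        proc.join()
        return False            # timed out
    return True                 # finished in time

if __name__ == "__main__":
    print(run_with_timeout(time.sleep, (0.1,), timeout=5.0))
    print(run_with_timeout(time.sleep, (60,), timeout=0.5))
```

The trade-off is that the function runs in a separate address space, so its side effects on in-process state are lost, and its arguments must be picklable.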

 

-Original Message-
From: python-list-bounces+noama=answers@python.org
[mailto:python-list-bounces+noama=answers@python.org] On Behalf Of
Albert Hopkins
Sent: Wednesday, February 04, 2009 5:26 PM
To: python-list@python.org
Subject: Re: Kill a function while it's being executed

On Wed, 2009-02-04 at 13:40 +0200, Noam Aigerman wrote:
> Hi All,
> I have a script in which I receive a list of functions. I iterate over
> the list and run each function. These functions are created by some
> other user who is using the lib I wrote. Now, there are some cases in
> which the function I receive will never finish (stuck in an infinite
> loop). Suppose I use a thread which times the amount of time passed
> since the function has started. Is there some way I can kill the
> function after a certain amount of time has passed (without asking the
> user who's giving me the list of functions to make them all have some
> way of notifying them to finish)?
> Thanks, Noam

Noam, did you hijack a thread?

You could decorate the functions with a timeout function.  Here's one
that I either wrote or copied from a recipe (can't recall):

class FunctionTimeOut(Exception):
    pass

def function_timeout(seconds):
    """Function decorator to raise a timeout on a function call"""
    import signal

    def decorate(f):
        def timeout(signum, frame):
            raise FunctionTimeOut()

        def funct(*args, **kwargs):
            old = signal.signal(signal.SIGALRM, timeout)
            signal.alarm(seconds)

            try:
                result = f(*args, **kwargs)
            finally:
                signal.signal(signal.SIGALRM, old)
                signal.alarm(0)
            return result

        return funct

    return decorate


Then

func_dec = function_timeout(TIMEOUT_SECS)
for func in function_list:
    timeout_function = func_dec(func)
    try:
        timeout_function(...)
    except FunctionTimeOut:
        ...


-a



--
http://mail.python.org/mailman/listinfo/python-list



Referencing resources from python

2009-02-16 Thread Noam Aigerman
Hi,

What is the best way to reference a non-python file's path from inside
python ?

Until now, I (stupidly) had lines such as:

 

theFile=open('../somefile.txt')

 

in my Python files. Now I moved my Python files to another directory,
and all those relative filenames broke.

On the other hand, writing the full path of each resource isn't
welcome either, as we're working from an SVN repository and we
sometimes check out to different directories. I am sure there is a
third way that hasn't occurred to me...

What do you recommend?

Thanks, Noam
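The usual third way is to anchor paths to the module's own file via __file__, so they survive both directory moves and different checkout locations (resource_path is an illustrative helper name):

```python
import os

# Resolve resources relative to this module's directory rather than
# the current working directory, so the checkout can live anywhere.
HERE = os.path.dirname(os.path.abspath(__file__))

def resource_path(*parts):
    return os.path.join(HERE, *parts)

# e.g. instead of open('../somefile.txt'):
# the_file = open(resource_path('..', 'somefile.txt'))
```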

--
http://mail.python.org/mailman/listinfo/python-list


Propagating function calls

2009-02-10 Thread Noam Aigerman
Suppose I have a python object X, which holds inside it a python object
Y. How can I propagate each function call to X so the same function call
in Y will be called, i.e:

X.doThatFunkyFunk()

Would cause

Y.doThatFunkyFunk()

Thanks, Noam
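A common answer (sketched below with the name from the question; Wrapper and the _inner attribute are illustrative) is to forward unknown attribute lookups with __getattr__:

```python
class Wrapper(object):
    def __init__(self, inner):
        self._inner = inner

    def __getattr__(self, name):
        # __getattr__ runs only when normal lookup on the wrapper
        # fails, so _inner is found normally and everything else
        # falls through to the wrapped object.
        return getattr(self._inner, name)

class Y(object):
    def doThatFunkyFunk(self):
        return 'funky'

x = Wrapper(Y())
assert x.doThatFunkyFunk() == 'funky'  # forwarded to the inner Y
```

One caveat: special methods such as __len__ are looked up on the type, not the instance, so they bypass __getattr__ on new-style classes and would need explicit forwarding.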

--
http://mail.python.org/mailman/listinfo/python-list


RE: Distributing simple tasks

2009-02-06 Thread Noam Aigerman
Hi,

The delta between the finishing times of each machine is insignificant
compared to the actual runtime, so I don't feel it's necessary at the
moment. Anyway, I want to keep it simple until I understand how to
distribute tasks :)

Thanks!

 

From: Thomas Raef [mailto:tr...@ebasedsecurity.com] 
Sent: Friday, February 06, 2009 4:01 PM
To: Noam Aigerman; python-list@python.org
Subject: RE: Distributing simple tasks

 

 

Hi,

Suppose I have an array of functions which I execute in threads (each
thread gets a slice of the array, iterates over it and executes each
function in its slice one after the other). Now I want to distribute
these tasks between two machines, i.e. give each machine half of the
slices and let it run them in threads as described above. Is there an
easy way, or an article on this matter you can point me to?

Thanks, Noam

 

I would suggest maybe a separate queue machine that would hand out each
"next" function. That way if one machine takes a little longer, the
faster machine can keep picking off functions and running them, while
the slower machine finishes its task.

 

Just a thought.
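An in-process sketch of that idea (the function and variable names are illustrative): instead of handing each worker a fixed slice, workers pull the next task from a shared queue, so a slow task on one worker doesn't leave the others idle. Across machines the same shape works with a network-accessible queue instead of queue.Queue:

```python
import queue
import threading

def run_tasks(tasks, workers=4):
    # Workers pull from a shared queue rather than getting fixed
    # slices -- the in-process analogue of the "queue machine" idea.
    q = queue.Queue()
    for task in tasks:
        q.put(task)
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                task = q.get_nowait()
            except queue.Empty:
                return          # queue drained: this worker is done
            result = task()
            with lock:
                results.append(result)

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```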

--
http://mail.python.org/mailman/listinfo/python-list


Distributing simple tasks

2009-02-06 Thread Noam Aigerman
Hi,

Suppose I have an array of functions which I execute in threads (each
thread gets a slice of the array, iterates over it and executes each
function in its slice one after the other). Now I want to distribute
these tasks between two machines, i.e. give each machine half of the
slices and let it run them in threads as described above. Is there an
easy way, or an article on this matter you can point me to?

Thanks, Noam

--
http://mail.python.org/mailman/listinfo/python-list


RE: Kill a function while it's being executed

2009-02-04 Thread Noam Aigerman
About the hijacking - I *might* have done it without understanding what
I did (replied to a previous message and then changed the subject), if
that's what you mean...
Sorry

-Original Message-
From: python-list-bounces+noama=answers@python.org
[mailto:python-list-bounces+noama=answers@python.org] On Behalf Of
Albert Hopkins
Sent: Wednesday, February 04, 2009 5:26 PM
To: python-list@python.org
Subject: Re: Kill a function while it's being executed

On Wed, 2009-02-04 at 13:40 +0200, Noam Aigerman wrote:
> Hi All,
> I have a script in which I receive a list of functions. I iterate over
> the list and run each function. These functions are created by some
> other user who is using the lib I wrote. Now, there are some cases in
> which the function I receive will never finish (stuck in an infinite
> loop). Suppose I use a thread which times the amount of time passed
> since the function has started. Is there some way I can kill the
> function after a certain amount of time has passed (without asking the
> user who's giving me the list of functions to make them all have some
> way of notifying them to finish)?
> Thanks, Noam

Noam, did you hijack a thread?

You could decorate the functions with a timeout function.  Here's one
that I either wrote or copied from a recipe (can't recall):

class FunctionTimeOut(Exception):
    pass

def function_timeout(seconds):
    """Function decorator to raise a timeout on a function call"""
    import signal

    def decorate(f):
        def timeout(signum, frame):
            raise FunctionTimeOut()

        def funct(*args, **kwargs):
            old = signal.signal(signal.SIGALRM, timeout)
            signal.alarm(seconds)

            try:
                result = f(*args, **kwargs)
            finally:
                signal.signal(signal.SIGALRM, old)
                signal.alarm(0)
            return result

        return funct

    return decorate


Then

func_dec = function_timeout(TIMEOUT_SECS)
for func in function_list:
    timeout_function = func_dec(func)
    try:
        timeout_function(...)
    except FunctionTimeOut:
        ...


-a



--
http://mail.python.org/mailman/listinfo/python-list



Kill a function while it's being executed

2009-02-04 Thread Noam Aigerman
Hi All,
I have a script in which I receive a list of functions. I iterate over
the list and run each function. These functions are created by some
other user who is using the lib I wrote. Now, there are some cases in
which the function I receive will never finish (stuck in an infinite
loop). Suppose I use a thread which times the amount of time passed
since the function has started. Is there some way I can kill the
function after a certain amount of time has passed (without asking the
user who's giving me the list of functions to make them all have some
way of notifying them to finish)?
Thanks, Noam
--
http://mail.python.org/mailman/listinfo/python-list


A replacement to closures in python?

2009-01-30 Thread Noam Aigerman
Hi,

I want to create an array of functions, each doing the same thing with a
change to the parameters it uses... something like:

arr = ['john', 'terry', 'graham']
funcs = []
for name in arr:
    def func():
        print 'hello, my name is ' + name
    funcs.append(func)

for f in funcs:
    f()

 

And I would like that to print

hello, my name is john
hello, my name is terry
hello, my name is graham

of course...

Now I understand why the above code doesn't work as I want it to, but is
there some simple workaround for it? 

Thanks, Noam 
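The standard workaround: all three functions close over the loop *variable* name, which holds 'graham' by the time they run. Binding the current value as a default argument (defaults are evaluated once, at def time) gives each function its own copy. Shown here in modern print-function syntax:

```python
arr = ['john', 'terry', 'graham']
funcs = []
for name in arr:
    def func(name=name):  # capture the *current* value of name
        return 'hello, my name is ' + name
    funcs.append(func)

greetings = [f() for f in funcs]
# -> ['hello, my name is john', 'hello, my name is terry',
#     'hello, my name is graham']
```

functools.partial(greet, name) is the other common spelling of the same fix.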

 

 

--
http://mail.python.org/mailman/listinfo/python-list


Re: How can I know how much to read from a subprocess

2007-09-18 Thread spam . noam
On Sep 18, 1:48 pm, "A.T.Hofkamp" <[EMAIL PROTECTED]> wrote:
> On 2007-09-17, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
> > It seems that another solution is gobject.io_add_watch, but I don't
> > see how it tells me how much I can read from the file - if I don't
> > know that, I won't know the argument to give to the read() method in
> > order to get all the data:
>
> >http://www.pygtk.org/docs/pygobject/gobject-functions.html#function-g...
>
> Usually, gobject only tells you that data is there (that is all it knows).
> Therefore a read(1) should be safe.

But even if it's fast enough, how do you know how many times you
should call read(1)? If you call it too many times, you'll block until
more output is available.

> If that is too slow, consider os.read() which reads all data available (afaik,
> never tried it myself).
>
I tried it now, and it blocks just like the normal file.read().

Thanks,
Noam
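For reference, a non-GUI sketch of the usual POSIX pattern: select() waits until the pipe has data (or EOF) ready, and os.read() then returns at most the requested number of bytes without waiting for a full buffer, so the pair never blocks indefinitely once data has started flowing:

```python
import os
import select
import subprocess
import sys

# Run a child process and collect its output chunk by chunk.
proc = subprocess.Popen([sys.executable, '-c', "print('hello')"],
                        stdout=subprocess.PIPE)
fd = proc.stdout.fileno()
chunks = []
while True:
    select.select([fd], [], [])   # block until data or EOF is ready
    data = os.read(fd, 4096)      # returns only what is available now
    if not data:
        break                     # empty read means EOF
    chunks.append(data)
proc.wait()
output = b''.join(chunks)
```

Note this relies on select() accepting pipe descriptors, which is true on POSIX but not on Windows (where select() only handles sockets).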

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How can I know how much to read from a subprocess

2007-09-17 Thread spam . noam
Ok, I could have researched this before posting, but here's an
explanation how to do it with twisted:

http://unpythonic.blogspot.com/2007/08/spawning-subprocess-with-pygtk-using.html

It seems that another solution is gobject.io_add_watch, but I don't
see how it tells me how much I can read from the file - if I don't
know that, I won't know the argument to give to the read() method in
order to get all the data:

http://www.pygtk.org/docs/pygobject/gobject-functions.html#function-gobject--io-add-watch

Noam

-- 
http://mail.python.org/mailman/listinfo/python-list


How can I know how much to read from a subprocess

2007-09-17 Thread spam . noam
Hello,

I want to write a terminal program in pygtk. It will run a subprocess,
display everything it writes in its standard output and standard
error, and let the user write text into its standard input.

The question is, how can I know if the process wrote something to its
output, and how much it wrote? I can't just call read(), since it will
block my process.

Thanks,
Noam

-- 
http://mail.python.org/mailman/listinfo/python-list


ANN: byteplay - a bytecode assembler/disassembler

2006-08-14 Thread spam . noam
Hello,

I would like to present a module that I have written, called byteplay.
It's a Python bytecode assembler/disassembler, which means that you can
take a Python code object, disassemble it into an equivalent object
which is easy to play with, modify it, and then assemble a new,
modified code object.

I think it's pretty useful if you'd like to learn more about Python's
bytecode - playing with things and seeing what happens is a nice way to
learn.

Here's a quick example. We can define this stupid function:

>>> def f(a, b):
... print (a, b)
>>> f(3, 5)
(3, 5)

We can convert it to an equivalent object, and see how it stores the
byte code:

>>> from byteplay import *
>>> c = Code.from_code(f.func_code)
>>> from pprint import pprint; pprint(c.code)
[(SetLineno, 2),
 (LOAD_FAST, 'a'),
 (LOAD_FAST, 'b'),
 (BUILD_TUPLE, 2),
 (PRINT_ITEM, None),
 (PRINT_NEWLINE, None),
 (LOAD_CONST, None),
 (RETURN_VALUE, None)]

We can change the bytecode easily, and see what happens. Let's insert a
ROT_TWO opcode, that will swap the two arguments:

>>> c.code[3:3] = [(ROT_TWO, None)]
>>> f.func_code = c.to_code()
>>> f(3, 5)
(5, 3)

You can download byteplay from
http://byteplay.googlecode.com/svn/trunk/byteplay.py and you can read
(and edit) the documentation at http://wiki.python.org/moin/ByteplayDoc
. I will be happy to hear if you find it useful, or if you have any
comments or ideas.

Have a good day,
Noam Raphael

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Allowing zero-dimensional subscripts

2006-06-10 Thread spam . noam
George Sakkis wrote:
> [EMAIL PROTECTED] wrote:
>
> > However, I'm designing another library for
> > managing multi-dimensional arrays of data. Its purpose is similiar to
> > that of a spreadsheet - analyze data and preserve the relations between
> > a source of a calculation and its destination.
>
> Sounds interesting. Will it be related at all to OLAP or the
> Multi-Dimensional eXpressions language
> (http://msdn2.microsoft.com/en-us/library/ms145506.aspx) ?
>
Thanks for the reference! I didn't know about either of these. It will
probably be interesting to learn from them. From a brief look at OLAP
on Wikipedia, my library may have similarities to it. I don't think it
will be related to Microsoft's language, because the language will
simply be Python, hopefully making it very easy to do whatever you
like with the data.

I posted to python-dev a message that (hopefully) better explains my
use for x[]. Here it is - I think it also gives an idea of what it
will look like.


I'm talking about something similar to a spreadsheet in that it saves
data, calculation results, and the way to produce the results.
However, it is not similar to a spreadsheet in that the data isn't
saved in an infinite two-dimensional array with numerical indices.
Instead, the data is saved in a few "tables", each storing a different
kind of data. The tables may be with any desired number of dimensions,
and are indexed by meaningful indices, instead of by natural numbers.

For example, you may have a table called sales_data. It will store the
sales data in years from set([2003, 2004, 2005]), for car models from
set(['Subaru', 'Toyota', 'Ford']), for cities from set(['Jerusalem',
'Tel Aviv', 'Haifa']). To refer to the sales of Ford in Haifa in 2004,
you will simply write: sales_data[2004, 'Ford', 'Haifa']. If the table
is a source of data (that is, not calculated), you will be able to set
values by writing: sales_data[2004, 'Ford', 'Haifa'] = 1500.

Tables may be computed tables. For example, you may have a table which
holds for each year the total sales in that year, with the income tax
subtracted. It may be defined by a function like this:

lambda year: sum(sales_data[year, model, city] for model in models for
city in cities) / (1 + income_tax_rate)

Now, like in a spreadsheet, the function is kept, so that if you
change the data, the result will be automatically recalculated. So, if
you discovered a mistake in your data, you will be able to write:

sales_data[2004, 'Ford', 'Haifa'] = 2000

and total_sales[2004] will be automatically recalculated.
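A toy model of the behaviour described above (the Table class is purely illustrative, and recomputing on every read stands in for real dependency tracking, which would cache results and invalidate them on change):

```python
class Table(object):
    """A table is either a store of values or a rule for computing
    them.  Computed tables re-evaluate on access, so edits to source
    data show up immediately in the results."""
    def __init__(self, func=None):
        self.func = func
        self.data = {}

    def __getitem__(self, key):
        if self.func is not None:
            return self.func(key)   # recompute from current inputs
        return self.data[key]

    def __setitem__(self, key, value):
        self.data[key] = value

sales_data = Table()
sales_data[2004, 'Ford', 'Haifa'] = 1500
total_sales = Table(lambda year: sum(
    v for (y, model, city), v in sales_data.data.items() if y == year))
assert total_sales[2004] == 1500
sales_data[2004, 'Ford', 'Haifa'] = 2000
assert total_sales[2004] == 2000    # "recalculated" automatically
```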

Now, note that the total_sales table depends also on the
income_tax_rate. This is a variable, just like sales_data. Unlike
sales_data, it's a single value. We should be able to change it, with
the result of all the cells of the total_sales table recalculated. But
how will we do it? We can write

income_tax_rate = 0.18

but it will have a completely different meaning. The way to make the
income_tax_rate changeable is to think of it as a 0-dimensional table.
It makes sense: sales_data depends on 3 parameters (year, model,
city), total_sales depends on 1 parameter (year), and income_tax_rate
depends on 0 parameters. That's the only difference. So, thinking of
it like this, we will simply write:

income_tax_rate[] = 0.18

Now the system can know that the income tax rate has changed, and
recalculate what's needed. We will also have to change the previous
function a tiny bit, to:

lambda year: sum(sales_data[year, model, city] for model in models for
city in cities) / (1 + income_tax_rate[])

But it's fine - it just makes it clearer that income_tax_rate[] is a
part of the model that may change its value.


Have a good day,
Noam

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Allowing zero-dimensional subscripts

2006-06-09 Thread spam . noam
Hello,

Following Fredrik's suggestion, I wrote a pre-PEP. It's available on
the wiki, at http://wiki.python.org/moin/EmptySubscriptListPEP and I
also copied it to this message.

Have a good day,
Noam


PEP: XXX
Title: Allow Empty Subscript List Without Parentheses
Version: $Revision$
Last-Modified: $Date$
Author: Noam Raphael <[EMAIL PROTECTED]>
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 09-Jun-2006
Python-Version: 2.5?
Post-History: 30-Aug-2002

Abstract
========

This PEP suggests to allow the use of an empty subscript list, for
example ``x[]``, which is currently a syntax error. It is suggested
that in such a case, an empty tuple will be passed as an argument to
the __getitem__ and __setitem__ methods. This is consistent with the
current behaviour of passing a tuple with n elements to those methods
when a subscript list of length n is used, if it includes a comma.


Specification
=============

The Python grammar specifies that inside the square brackets trailing
an expression, a list of "subscripts", separated by commas, should be
given. If the list consists of a single subscript without a trailing
comma, a single object (an ellipsis, a slice or any other object) is
passed to the resulting __getitem__ or __setitem__ call. If the list
consists of many subscripts, or of a single subscript with a trailing
comma, a tuple is passed to the resulting __getitem__ or __setitem__
call, with an item for each subscript.

Here is the formal definition of the grammar:

::
   trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME
   subscriptlist: subscript (',' subscript)* [',']
   subscript: '.' '.' '.' | test | [test] ':' [test] [sliceop]
   sliceop: ':' [test]

This PEP suggests to allow an empty subscript list, with nothing
inside the square brackets. It will result in passing an empty tuple
to the resulting __getitem__ or __setitem__ call.

The change in the grammar is to make "subscriptlist" in the first
quoted line optional:

::
   trailer: '(' [arglist] ')' | '[' [subscriptlist] ']' | '.' NAME


Motivation
==========

This suggestion allows you to refer to zero-dimensional arrays
elegantly. In
NumPy, you can have arrays with a different number of dimensions. In
order to refer to a value in a two-dimensional array, you write
``a[i, j]``. In order to refer to a value in a one-dimensional array,
you write ``a[i]``. You can also have a zero-dimensional array, which
holds a single value (a scalar). To refer to its value, you currently
need to write ``a[()]``, which is unexpected - the user may not even
know that when he writes ``a[i, j]`` he constructs a tuple, so he
won't guess the ``a[()]`` syntax. If the suggestion is accepted, the
user will be able to write ``a[]`` in order to refer to the value, as
expected. It will even work without changing the NumPy package at all!

In the normal use of NumPy, you usually don't encounter
zero-dimensional arrays. However, the author of this PEP is designing
another library for managing multi-dimensional arrays of data. Its
purpose is similar to that of a spreadsheet - to analyze data and
preserve the relations between a source of a calculation and its
destination. In such an environment you may have many
multi-dimensional arrays - for example, the sales of several products
over several time periods. But you may also have several
zero-dimensional arrays, that is, single values - for example, the
income tax rate. It is desired that the access to the zero-dimensional
arrays will be consistent with the access to the multi-dimensional
arrays. Just using the name of the zero-dimensional array to obtain
its value isn't going to work - the array and the value it contains
have to be distinguished.


Rationale
=========

Passing an empty tuple to the __getitem__ or __setitem__ call was
chosen because it is consistent with passing a tuple of n elements
when a subscript list of n elements is used. Also, it will make NumPy
and similar packages work as expected for zero-dimensional arrays
without
any changes.

Another hint for consistency: Currently, these equivalences hold:

::
   x[i, j, k]  <-->  x[(i, j, k)]
   x[i, j] <-->  x[(i, j)]
   x[i, ]  <-->  x[(i, )]
   x[i]<-->  x[(i)]

If this PEP is accepted, another equivalence will hold:

::
   x[] <-->  x[()]


Backwards Compatibility
=======================

This change is fully backwards compatible, since it only assigns a
meaning to a previously illegal syntax.


Reference Implementation
========================

Available as SF Patch no. 1503556.
(and also in http://python.pastebin.com/768317 )

It passes the Python test suite, but currently doesn't provide
additional tests or documentation.


Copyright
=========

This document has been placed in the public domain.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Allowing zero-dimensional subscripts

2006-06-09 Thread spam . noam
Hello,

Fredrik Lundh wrote:
> (but should it really result in an empty tuple?  wouldn't None be a bit
> more Pythonic?)

I don't think it would. First of all, x[()] already has the desired
meaning in numpy. But I think it's the right thing - if you think of
what's inside the brackets as a list of subscripts, one for each
dimension, which is translated to a call to __getitem__ or __setitem__
with a tuple of objects representing the subscripts, then an empty
tuple is what you want to represent no subscripts.

Of course, one item without a comma doesn't make a tuple, but I see
this as the special case - just like parentheses with any number of
commas are interpreted as tuples, except for parentheses with one item
without a comma.

(By the way, thanks for the tips for posting a PEP - I'll try to do it
quickly.)

Noam

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Allowing zero-dimensional subscripts

2006-06-09 Thread spam . noam
Hello,

Sybren Stuvel wrote:
> I think it's ugly to begin with. In math, one would write simply 'x'
> to denote an unsubscribed (ubsubscripted?) 'x'. And another point, why
> would one call __getitem__ without an item to call?

I think that in this case, mathematical notation is different from
python concepts.

If I create a zero-dimensional array, with the value 5, like this:
>>> a = array(5)

I refer to the array object as "a", and to the int it stores as "a[]".

For example, I can change the value it holds by writing
>>> a[] = 8
Writing "a = 8" would have a completely different meaning - create a
new name, a, pointing at a new int, 8.

Noam

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Allowing zero-dimensional subscripts

2006-06-08 Thread spam . noam
Hello,

Terry Reedy wrote:
> So I do not see any point or usefulness in saying that a tuple subcript is
> not what it is.

I know that a tuple is *constructed*. The question is whether this is,
conceptually, the feature that allows you to omit the parentheses of a
tuple in some cases. If we see it as the same feature, it's reasonable
that "nothing" won't be seen as an empty tuple, just as "a = " doesn't
mean "a = ()".

However, if we see it as a different feature, which allows
multidimensional subscripts by constructing a tuple behind the scenes,
then constructing an empty tuple for x[] seems very reasonable to me.
Since in some cases you can't have the parentheses at all, I think that
x[] makes sense.

Noam

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Allowing zero-dimensional subscripts

2006-06-08 Thread spam . noam
Hello,

Terry Reedy wrote:
> > In a few more words: Currently, an object can be subscripted by a few
> > elements, separated by commas. It is evaluated as if the object was
> > subscripted by a tuple containing those elements.
>
> It is not 'as if'.   'a,b' *is* a tuple and the object *is* subcripted by a
> tuple.
> Adding () around the non-empty tuple adds nothing except a bit of noise.
>

It doesn't necessarily matter, but technically, it is not "a tuple".
The "1, 2" in "x[1, 2]" isn't evaluated according to the same rules as
in "x = 1, 2" - for example, you can have "x[1, 2:3:4, ..., 5]", which
isn't a legal tuple outside of square brackets - in fact, it isn't
even legal inside parens: "x[(1, 2:3:4, ..., 5)]" is a syntax error.

Noam

-- 
http://mail.python.org/mailman/listinfo/python-list


Allowing zero-dimensional subscripts

2006-06-08 Thread spam . noam
Hello,

I discovered that I needed a small change to the Python grammar. I
would like to hear what you think about it.

In two lines:
Currently, the expression "x[]" is a syntax error.
I suggest that it will be evaluated like "x[()]", just as "x[a, b]" is
evaluated like "x[(a, b)]" right now.

In a few more words: Currently, an object can be subscripted by a few
elements, separated by commas. It is evaluated as if the object was
subscripted by a tuple containing those elements. I suggest that an
object will also be subscriptable with no elements at all, and it will
be evaluated as if the object was subscripted by an empty tuple.

It involves no backwards incompatibilities, since we are dealing with
the legalization of a currently illegal syntax.

It is consistent with the current syntax. Consider that these
identities currently hold:

x[i, j, k]  <-->  x[(i, j, k)]
x[i, j]  <-->  x[(i, j)]
x[i, ]  <-->  x[(i, )]
x[i]  <-->  x[(i)]

I suggest that the next identity will hold too:

x[]  <-->  x[()]

I need this in order to be able to refer to zero-dimensional arrays
nicely. In NumPy, you can have arrays with a different number of
dimensions. In order to refer to a value in a two-dimensional array,
you write a[i, j]. In order to refer to a value in a one-dimensional
array, you write a[i]. You can also have a zero-dimensional array,
which holds a single value (a scalar). To refer to its value, you
currently need to write a[()], which is unexpected - the user may not
even know that when he writes a[i, j] he constructs a tuple, so he
won't guess the a[()] syntax. If my suggestion is accepted, he will be
able to write a[] in order to refer to the value, as expected. It will
even work without changing the NumPy package at all!
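The identities above can be mimicked without NumPy by a minimal
stand-in class (hypothetical, just to show how the subscript protocol
already makes a[()] work for the zero-dimensional case):

```python
class Arr:
    """Minimal stand-in for an n-dimensional array: values keyed by
    index tuples (a hypothetical sketch, not NumPy itself)."""
    def __init__(self, data):
        self.data = data
    def __getitem__(self, key):
        if not isinstance(key, tuple):   # x[i] passes a bare index
            key = (key,)
        return self.data[key]

two_d = Arr({(0, 0): 1.5, (0, 1): 2.5})
zero_d = Arr({(): 42.0})          # a single value, indexed by ()

print(two_d[0, 1])   # 2.5 - "0, 1" becomes the tuple (0, 1)
print(zero_d[()])    # 42.0 - today the () must be written explicitly
# Under the proposal, "zero_d[]" would mean exactly zero_d[()].
```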

In the normal use of NumPy, you usually don't encounter
zero-dimensional arrays. However, I'm designing another library for
managing multi-dimensional arrays of data. Its purpose is similiar to
that of a spreadsheet - analyze data and preserve the relations between
a source of a calculation and its destination. In such an environment
you may have a lot of multi-dimensional arrays - for example, the sales
of several products over several time periods. But you may also have a
lot of zero-dimensional arrays, that is, single values - for example,
the income tax. I want the access to the zero-dimensional arrays to be
consistent with the access to the multi-dimensional arrays. Just using
the name of the zero-dimensional array to obtain its value isn't going
to work - the array and the value it contains have to be distinguished.

I have tried to change CPython to support it, and it was fairly easy.
You can see the diff against the current SVN here:
http://python.pastebin.com/768317
The test suite passes without changes, as expected. I didn't include
diffs of autogenerated files. I know almost nothing about the AST, so I
would appreciate it if someone who is familiar with the AST will check
to see if I did it right. It does seem to work, though.

Well, what do you think about this?

Have a good day,
Noam Raphael

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Why keep identity-based equality comparison?

2006-01-14 Thread Noam Raphael
Mike Meyer wrote:
> Noam Raphael <[EMAIL PROTECTED]> writes:
> 
>>>>Also note that using the current behaviour, you can't easily
>>>>treat objects that do define a meaningful value comparison, by
>>>>identity.
>>>
>>>Yes you can. Just use the "is" operator.
>>
>>Sorry, I wasn't clear enough. In "treating" I meant how containers
>>treat the objects they contain. For example, you can't easily map a
>>value to a specific instance of a list - dict only lets you map a
>>value to a specific *value* of a list.
> 
> 
> Wrong. All you have to do is create a list type that uses identity
> instead of value for equality testing. This is easier than mapping an
> exception to false.
> 
You're suggesting a workaround, which requires me to subclass everything 
that I want to lookup by identity (and don't think it's simple - I will 
have to wrap a lot of methods that return a list to return a list with a 
modified == operator).

I'm suggesting the use of another container class: iddict instead of 
dict. That's all.
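A sketch of what such an iddict could look like (the name and API are
the proposal's, not an existing class):

```python
class IdDict:
    """Sketch of the proposed iddict: keys are matched by identity,
    so the key's __hash__/__eq__ protocols are never invoked."""
    def __init__(self):
        self._store = {}                  # id(key) -> (key, value)
    def __setitem__(self, key, value):
        # Keep a reference to the key so its id() is not reused.
        self._store[id(key)] = (key, value)
    def __getitem__(self, key):
        return self._store[id(key)][1]
    def __contains__(self, key):
        return id(key) in self._store

a, b = [1, 2], [1, 2]     # equal values, distinct objects
d = IdDict()
d[a] = "first"
print(a in d)   # True
print(b in d)   # False - identity, not value, is what matters
```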
I don't think that mapping an exception to false is so hard (certainly 
simpler than subclassing a list in that way), and the average user won't 
have to do it, anyway - it's the list implementation that will do it.
> 
>>Another example - you can't
>>search for a specific list object in another list.
> 
> 
> Your proposed == behavior doesn't change that at all.

It does - *use idlist*.
> 
> 
>>>I will point out why your example usages aren't really usefull if
>>>you'll repeat your post with newlines.
>>
>>Here they are:
>>* Things like "Decimal(3.0) == 3.0" will make more sense (raise an
>>exception which explains that decimals should not be compared to
>>floats, instead of returning False).
> 
> 
> While I agree that Decimal(3.0) == 3.0 returning false doesn't make
> sense, having it raise an exception doesn't make any more sense. This
> should be fixed, but changing == doesn't fix it.
> 
No, it can't be fixed your way. It was decided on purpose that Decimal 
shouldn't be comparable to float, to prevent precision errors. I'm 
saying that raising an exception will make it clearer.
> 
>>* You won't be able to use objects as keys, expecting them to be
>>compared by value, and causing a bug when they don't. I recently wrote
>>a sort-of OCR program, which contains a mapping from a numarray array
>>of bits to a character (the array is the pixel-image of the char).
>>Everything seemed to work, but the program didn't recognize any
>>characters. I discovered that the reason was that arrays are hashed
>>according to their identity, which is a thing I had to guess. If
>>default == operator were not defined, I would simply get a TypeError
>>immediately.
> 
> 
> This isn't a use case. You don't get correct code with either version
> of '=='. While there is some merit to doing things that make errors
> easier to find, Python in general rejects the idea of adding
> boilerplate to do so. Your proposal would generate lots of boilerplate
> for many practical situations.
> 
I would say that there's a lot of merit to doing things that make errors 
easier to find. That's what exceptions are for.

Please say what those practical situations are - that's what I want.
(I understand. You think that added containers and a try...except from
time to time aren't worth it. I think they are. Do you have any other
practical situations?)
> 
>>* It is more forward compatible - when it is discovered that two types
>>can sensibly be compared, the comparison can be defined, without
>>changing an existing behaviour which doesn't raise an exception.
> 
> 
> Sorry, but that doesn't fly. If you have code that relies on the
> exception being raised when two types are compared, changing it to
> suddenly return a boolean will break that code.
> 
You are right, but that's the case for every added language feature (if 
you add a method, you break code that relies on an AttributeError...)
You are right that I'm suggesting a try...except when testing if a list 
contains an object, but a case when you have a list with floats and 
Decimals, and you rely on "Decimal("3.0") in list1" to find only 
Decimals seems to me a little bit far-fetched. If you have another 
example, please say it.

Noam
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Why keep identity-based equality comparison?

2006-01-14 Thread Noam Raphael
Mike Meyer wrote:
> [EMAIL PROTECTED] writes:
> 
>> try:
>>     return a == b
>> except TypeError:
>>     return a is b
> 
> 
> This isn't "easy". It's an ugly hack you have to use everytime you
> want to iterate through a heterogenous set doing equality tests.

I wouldn't define this as an "ugly hack". These are four simple lines, 
which state clearly and precisely what you mean, and always work. I have 
seen ugly hacks in my life, and they don't look like this.
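Wrapped into a container, those four lines could look like this (a
hypothetical FallbackList, just to illustrate; NoCompare stands in for
a type that refuses value comparison):

```python
class FallbackList(list):
    """Sketch of a list whose "in" test falls back to identity when
    value comparison raises TypeError (hypothetical helper class)."""
    def __contains__(self, item):
        for element in self:
            try:
                if element == item:
                    return True
            except TypeError:          # types refuse value comparison
                if element is item:
                    return True
        return False

class NoCompare:
    """Stand-in for a type with no meaningful value comparison."""
    def __eq__(self, other):
        raise TypeError("no value comparison for NoCompare")
    __hash__ = None

x = NoCompare()
lst = FallbackList([1, "a", x])
print(x in lst)            # True - found by identity
print(NoCompare() in lst)  # False - different object, no value match
```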
> 
> You're replacing "false" with an "emphathetic false", that *all*
> containers to change for the worse to deal with it.
> 
I don't see how they change for the worse if they have exactly the same 
functionality and a few added lines of implementation.
> 
>>Also, Mike said that you'll need an idlist object too - and I think
>>he's right and that there's nothing wrong with it.
> 
> 
> Except that we now need four versions of internal data structures,
> instead of two: list, tuple, idlist, idtuple; set, idset, frozenset,
> frozenidset, and so on. What's wrong with this is that it's ugly.

Again, "ugly" is a personal definition. I may call this "explicitness". 
By the way, what's the "and so on" - I think that these are the only 
built-in containers.
> 
> 
>>Note that while you
>>can easily define the current == behaviour using the proposed
>>behaviour, you can't define the proposed behaviour using the current
>>behaviour.
> 
> 
> Yes you can, and it's even easy. All you have to do is use custom
> classes that raise an exception if they don't

You can't create a general container with my proposed == behaviour. 
That's what I meant.
> 
> 
>>Also note that using the current behaviour, you can't easily
>>treat objects that do define a meaningful value comparison, by
>>identity.
> 
> 
> Yes you can. Just use the "is" operator.

Sorry, I wasn't clear enough. In "treating" I meant how containers treat 
the objects they contain. For example, you can't easily map a value to a 
specific instance of a list - dict only lets you map a value to a 
specific *value* of a list. Another example - you can't search for a 
specific list object in another list.
> 
> Note that this behavior also has the *highly* pecular behavior that a
> doesn't necessarily equal a by default.

Again, "peculiar" is your aesthethic sense. I would like to hear 
objections based on use cases that are objectively made more difficult. 
Anyway, I don't see why someone should even try checking if "a==a", and 
if someone does, the exception can say "this type doesn't support value 
comparison. Use the "is" operator".
> 
> I will point out why your example usages aren't really usefull if
> you'll repeat your post with newlines.
> 
Here they are:

* Things like "Decimal(3.0) == 3.0" will make more sense (raise an
exception which explains that decimals should not be compared to
floats, instead of returning False).
* You won't be able to use objects as keys, expecting them to be
compared by value, and causing a bug when they don't. I recently wrote
a sort-of OCR program, which contains a mapping from a numarray array
of bits to a character (the array is the pixel-image of the char).
Everything seemed to work, but the program didn't recognize any
characters. I discovered that the reason was that arrays are hashed
according to their identity, which is a thing I had to guess. If
default == operator were not defined, I would simply get a TypeError
immediately.
* It is more forward compatible - when it is discovered that two types
can sensibly be compared, the comparison can be defined, without
changing an existing behaviour which doesn't raise an exception.

The third example applies to the Decimal==float use case, and for every 
type that currently has the default identity-based comparison and that 
may benefit from a value-based comparison. Take the class

class Circle(object):
    def __init__(self, center, radius):
        self.center = center
        self.radius = radius

Currently, it's equal only to itself. You may decide to define an 
equality operator which checks whether both the center and the radius 
are the same, but since you already have a default equality operator, 
that change would break backwards-compatibility.

Noam
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Why keep identity-based equality comparison?

2006-01-10 Thread spam . noam
It seems to me that both Mike's and Fuzzyman's objections were that
sometimes you want the current behaviour, of saying that two objects
are equal if they are: 1. the same object, or 2. have the same value
(when it's meaningful). In both cases this can be accomplished pretty
easily: you can do it with a try...except block, and you can write the
try...except block inside the __contains__ method. It's really pretty
simple:

    try:
        return a == b
    except TypeError:
        return a is b

Also, Mike said that you'll need an idlist object too - and I think
he's right and that there's nothing wrong with it. Note that while you
can easily define the current == behaviour using the proposed
behaviour, you can't define the proposed behaviour using the current
behaviour. Also note that using the current behaviour, you can't easily
treat objects that do define a meaningful value comparison, by
identity. Also note that in the cases where you do want identity-based
behaviour, defining it explicitly can result in a more efficient
program: an explicit identity-based dict doesn't have to call the
__hash__ and __eq__ protocols - it can compare the pointers themselves.
The same goes if you want to locate a specific object in a list - use
the proposed idlist and save yourself O(n) value-based comparisons,
which might be heavy.

Noam

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Why keep identity-based equality comparison?

2006-01-10 Thread spam . noam
> Can you provide a case where having a test for equality throw an
> exception is actually useful?

Yes. It will be useful because:
1. The bug of not finding a key in a dict because it was implicitly
hashed by identity and not by value, would not have happened.
2. You wouldn't get the weird 3.0 != Decimal("3.0") - you'll get an
exception which explains that these types aren't comparable.
3. If, at some point, you decide that float and Decimal could be
compared, you will be able to implement that without being concerned
about backwards compatibility issues.

>>>> But there are certainly circumstances that I would prefer
>>>> 1 == (1,2) to throw an exception instead of simply turning up
>>>> False.
>>> So what are they?
>
> Again - give us real use cases.

You may catch bugs earlier - say you have a multidimensional array,
and you forgot one index. Having comparison raise an exception because
type comparison is meaningless, instead of returning False silently,
will help you catch your problem earlier.

Noam

-- 
http://mail.python.org/mailman/listinfo/python-list


Why keep identity-based equality comparison?

2006-01-09 Thread spam . noam
Hello,

Guido has decided, in python-dev, that in Py3K the id-based order
comparisons will be dropped. This means that, for example, "{} < []"
will raise a TypeError instead of the current behaviour, which is
returning a value which is, really, id({}) < id([]).

He also said that default equality comparison will continue to be
identity-based. This means that x == y will never raise an exception,
as is the situation is now. Here's his reason:

> Let me construct a hypothetical example: suppose we represent a car
> and its parts as objects. Let's say each wheel is an object. Each
> wheel is unique and we don't have equivalency classes for them.
> However, it would be useful to construct sets of wheels (e.g. the set
> of wheels currently on my car that have never had a flat tire). Python
> sets use hashing just like dicts. The original hash() and __eq__
> implementation would work exactly right for this purpose, and it seems
> silly to have to add it to every object type that could possibly be
> used as a set member (especially since this means that if a third
> party library creates objects for you that don't implement __hash__
> you'd have a hard time of adding it).

Now, I don't think it should be so. My reason is basically "explicit is
better than implicit" - I think that the == operator should be reserved
for value-based comparison, and raise an exception if the two objects
can't be meaningfully compared by value. If you want to check if two
objects are the same, you can always do "x is y". If you want to create
a set of objects based on their identity (that is, two different
objects with the same value are considered different elements), you
have two options:
1. Create another set type, which is identity-based - it doesn't care
about the hash value of objects, it just collects references to
objects. Instead of using set(), you would be able to use, say,
idset(), and everything would work as wanted.
2. Write a class like this:

class Ref(object):
    def __init__(self, obj):
        self._obj = obj
    def __call__(self):
        return self._obj
    def __eq__(self, other):
        return isinstance(other, Ref) and self._obj is other._obj
    def __hash__(self):
        return id(self._obj) ^ 0xBEEF

and use it like this:

st = set()
st.add(Ref(wheel1))
st.add(Ref(wheel2))
if Ref(wheel1) in st:
    ...

Those solutions allow the one who writes the class to define a
value-based comparison operator, and allow the user of the class to
explicitly state if he wants value-based behaviour or identity-based
behaviour.

A few more examples of why this explicit behaviour is good:

* Things like "Decimal(3.0) == 3.0" will make more sense (raise an
exception which explains that decimals should not be compared to
floats, instead of returning False).
* You won't be able to use objects as keys, expecting them to be
compared by value, and causing a bug when they don't. I recently wrote
a sort-of OCR program, which contains a mapping from a numarray array
of bits to a character (the array is the pixel-image of the char).
Everything seemed to work, but the program didn't recognize any
characters. I discovered that the reason was that arrays are hashed
according to their identity, which is a thing I had to guess. If
default == operator were not defined, I would simply get a TypeError
immediately.
* It is more forward compatible - when it is discovered that two types
can sensibly be compared, the comparison can be defined, without
changing an existing behaviour which doesn't raise an exception.

My question is, what reasons are left for leaving the current default
equality operator for Py3K, not counting backwards-compatibility?
(assume that you have idset and iddict, so explicitness' cost is only
two characters, in Guido's example)

Thanks,
Noam

-- 
http://mail.python.org/mailman/listinfo/python-list


Convention for C functions success/failure

2005-12-03 Thread spam . noam
Hello,

What is the convention for writing C functions which don't return a
value, but can fail?

If I understand correctly,
1. PyArg_ParseTuple returns 0 on failure and 1 on success.
2. PySet_Add returns -1 on failure and 0 on success.

Am I correct? What should I do with new C functions that I write?

Thanks,
Noam

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Using distutils 2.4 for python 2.3

2005-09-23 Thread Noam Raphael
Fredrik Lundh wrote:
> 
> you can enable new metadata fields in older versions by assigning to
> the DistributionMetadata structure:
> 
> try:
> from distutils.dist import DistributionMetadata
> DistributionMetadata.package_data = None
> except:
> pass
> 
> setup(
> ...
> package_data=...
> )
> 
>  

I tried this, but it made python2.4 behave like python2.3, and not 
install the package_data files.

Did I do something wrong?
-- 
http://mail.python.org/mailman/listinfo/python-list


Using distutils 2.4 for python 2.3

2005-09-23 Thread Noam Raphael
Hello,

I want to distribute a package. It's compatible with Python 2.3.
Is there a way to use distutils 2.4 feature package_data, while 
maintaining the distribution compatible with python 2.3 ?

Thanks,
Noam Raphael
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How about "pure virtual methods"?

2004-12-31 Thread Noam Raphael
Thanks for your suggestion, but it has several problems which the added 
class solves:

* This is a lot of code just to say "you must implement this 
method". Having a standard way to say that is better.
* You can instantiate the base class, which doesn't make sense.
* You must use testing to check whether a concrete class which you 
derived from the base class really implemented all the abstract methods. 
Testing is a good thing, but it seems to me that when the code specifies 
exactly what should happen, and it doesn't make sense for it not to 
happen, there's no point in having a separate test for it.

About the possibility of implementing only a subset of the interface: 
You are perfectly welcome to implement any part of the interface as you 
like. Functions which use only what you've implemented should work fine 
with your classes. But you can't claim your classes to be instances of 
the base class - as I see it, subclasses, even in Python, guarantee to 
behave like their base classes.

Have a good day,
Noam
--
http://mail.python.org/mailman/listinfo/python-list


Re: Non blocking read from stdin on windows.

2004-12-25 Thread Noam Raphael
You can always have a thread which continually reads stdin and stores it 
in a string, or better, in a cStringIO.StringIO object. Then in the main 
thread, you can check whether something new has arrived. This, of 
course, will work on all platforms.
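In today's Python, the suggested reader thread might be sketched with a
queue.Queue, which is a little safer than a shared StringIO (a minimal
sketch; the function name is hypothetical):

```python
import queue
import sys
import threading

def start_stdin_reader():
    """Start a daemon thread that keeps reading stdin and stores the
    lines for the main thread to poll without blocking."""
    q = queue.Queue()
    def reader():
        for line in sys.stdin:   # blocks only in this thread
            q.put(line)
    threading.Thread(target=reader, daemon=True).start()
    return q

# In the main loop, check for new input without blocking:
# q = start_stdin_reader()
# try:
#     line = q.get_nowait()
# except queue.Empty:
#     pass   # nothing new has arrived yet
```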

I hope this helped a bit,
Noam
--
http://mail.python.org/mailman/listinfo/python-list


Re: How about "pure virtual methods"?

2004-12-25 Thread Noam Raphael
Thank you very much for this answer! I learned from you about unit 
tests, and you convinced me that "testing oriented programming" is a 
great way to program.

You made me understand that indeed, proper unit testing solves my 
practical problem - how to make sure that all the methods which should 
be implemented were implemented. However, I'm still convinced that this 
feature should be added to Python, for what may be called "aesthetic 
reasons" - I came to think that it fills a gap in Python's "logic", and 
is not really an additional, optional feature. And, of course, there are 
practical advantages to adding it.

The reason why this feature is missing, is that Python supports building 
a class hierarchy. And, even in this dynamically-typed language, the 
fact that B is a subclass of A means that B is supposed to implement the 
interface of A. If you want to arrange in a class hierarchy a set of 
classes, which all implement the same interface but don't have a common 
concrete class, you reach the concept of an "abstract class", which 
can't be instantiated. And the basestring class is exactly that.

The current Python doesn't really support this concept. You can write in 
the __new__ of such a class something like "if cls == MyAbstractClass: 
raise TypeError", but I consider this as a patch - for example, if you 
have a subclass of this class which is abstract too, you'll have to 
write this exception code again. Before introducing another problem, let 
me quote Alex:

... If you WANT the method in the ABC, for documentation
purposes, well then, that's not duplication of code, it's documentation,
which IS fine (just like it's quite OK to have some of the same info in
a Tutorial document, in a Reference one, AND in a class's docstring!).
If you don't want to have the duplication your unit tests become easier:
you just getattr from the class (don't even have to bother instantiating
it, ain't it great!), and check the result with inspect.

That's actually right - listing a method which should be implemented by 
subclasses, in the class definition is mainly a matter of 
*documentation*. I like the idea that good documentation can be derived 
from my documented code automatically, and even if I provide an external 
documentation, the idea that the code should explain itself is a good 
one. The problem is, that the current convention is not a good 
documentation:

    def frambozzle(self):
        ''' must make the instance frambozzled '''
        raise NotImplementedError

The basic problem is that, if you take this basic structure, it already 
means another thing: This is a method, which takes no arguments and 
raises a NotImplementedError. This may mean, by convention, that this 
method must be implemented by subclasses, but it may also mean that this 
method *may* be implemented by subclasses. I claim that a declaration 
that a method must be implemented by subclasses is simply not a method, 
and since Python's "logic" does lead to this kind of thing, it should 
supply this object (I think it should be the class "abstract"). Two of 
Python's principles are "explicit is better than implicit", and "there 
should be (only?) one obvious way to do it". Well, I think that this:

    @abstract
    def frambozzle(self):
        """Must make the instance frambozzled"""
        pass

is better than the previous example, and than

    def frambozzle(self):
        raise NotImplementedError, "You must implement this method, " \
            "and it must make the instance frambozzled"

and than

    def frambozzle(self):
        """Must make the instance frambozzled.

        PURE VIRTUAL
        """
        pass

and than maybe other possible conventions. Note also that making this 
explicit will help you write your tests, even if Python would allow 
instantiation of classes which contain abstract methods - you will be 
able to simply test "assert not isinstance(MyClass.frambozzle, abstract)".
(I don't like the solution of not mentioning the method at all, which 
makes the test equally simple, because it doesn't document what the 
method should do in the class definition, and I do like in-code 
documentation.)
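A minimal sketch of such an "abstract" marker object (hypothetical -
the names abstract, Base, and frambozzle follow the post; modern
Python spells this idea abc.abstractmethod):

```python
class abstract:
    """Marker wrapping a declared method: keeps the signature and
    docstring visible to pydoc-style tools, and refuses to be called."""
    def __init__(self, func):
        self.func = func
        self.__doc__ = func.__doc__
    def __call__(self, *args, **kwargs):
        raise NotImplementedError(
            "%s is abstract and must be overridden" % self.func.__name__)

class Base:
    @abstract
    def frambozzle(self):
        """Must make the instance frambozzled."""

class Good(Base):
    def frambozzle(self):
        return "frambozzled"

# The test mentioned in the post becomes a one-liner:
print(isinstance(Base.frambozzle, abstract))   # True
print(isinstance(Good.frambozzle, abstract))   # False
```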

To summarize, I think that this feature should be added to Python 
because currently, there's no proper way to write some code which fits 
the "Python way". As a bonus, it will help you find errors even when 
your unit tests are not sufficient.

I plan to raise this issue in python-dev. If you have any additional 
comments, please post them here. (I will probably be able to reply only 
by the weekend.)

Have a good day,
Noam
--
http://mail.python.org/mailman/listinfo/python-list


Re: How about "pure virtual methods"?

2004-12-25 Thread Noam Raphael
Mike Meyer wrote:
That's what DbC languages are for. You write the contracts first, then
the code to fullfill them. And get exceptions when the implementation
doesn't do what the contract claims it does.
Can you give me the name of one of them? This is a very interesting thing 
- I should learn one of those sometime. However, I'm pretty sure that 
programming in them is hell, or at least, takes a very long time.

Noam
--
http://mail.python.org/mailman/listinfo/python-list


Re: How about "pure virtual methods"?

2004-12-23 Thread Noam Raphael
Mike Meyer wrote:
Noam Raphael <[EMAIL PROTECTED]> writes:

The answer is that a subclass is guaranteed to have the same
*interface* as the base class. And that's what matters.

This is false. For instance:
class A(object):
    def method(self, a):
        print a

class B(A):
    def method(self, a, b):
        print a, b
B implements a different interface than A. Statically typed OO
languages either use multi-methods or disallow changing the signature
of an overridden method.
A tool to detect such cases would probably be almost as useful as the
tool you've proposed.
 I agree that such a tool would be very useful. In fact, I think it 
exists - I'm sure pychecker checks for mistakes like that. I understand 
that it checks for not implementing functions which just raise an 
exception too, so you can say, "why all this mess? Run pychecker and 
everything will be good." However, I think that there is a difference 
between these two tests, which explains why one should be done by the 
language itself and one should be done by an analysis tool.

The difference is that handling arguments, in Python, can be seen as a 
part of the *implementation*, not the interface. The reason is that you 
can write a method which simply gets a (*args, **kwargs), and raises a 
TypeError if the number of args isn't two, and it would be completely 
equivalent to a function which is defined using def f(a, b). Of course, 
even in statically typed languages, you can't enforce an implementation 
to do what it should (too bad - it would have made debugging so much 
easier...)
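The equivalence claimed here can be shown with two hypothetical
functions (names f and g are illustrative only):

```python
def f(a, b):
    """Declared signature: exactly two positional arguments."""
    return a + b

def g(*args, **kwargs):
    """Same interface, enforced "by hand" in the implementation."""
    if kwargs or len(args) != 2:
        raise TypeError("g() takes exactly 2 arguments")
    a, b = args
    return a + b

print(f(1, 2))   # 3
print(g(1, 2))   # 3
# Both reject a wrong number of arguments with a TypeError, so to a
# caller the two are indistinguishable.
```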

So checking whether the argument list of a method of a subclass suits 
the argument list of the original implementation is nice, but should be 
left to external analysis tools, but checking whether a method is 
defined at all can be done easily by the language itself.

Noam
--
http://mail.python.org/mailman/listinfo/python-list


Re: How about "pure virtual methods"?

2004-12-23 Thread Noam Raphael
Jp Calderone wrote:
  This lets you avoid duplicate test code as well as easily test
new concrete implementations.  It's an ideal approach for frameworks
which mandate application-level implementations of a particular 
interface and want to ease the application developer's task.

  Jp
It's a great way for sharing tests between different subclasses of a 
class. Thank you for teaching me.

However, I'm not sure if this solves my practical problem - testing 
whether all abstract methods were implemented. I think that usually, you 
can't write a test which checks whether an abstract method did what it 
should have, since different implementations do different things. I 
don't even know how you can test whether an abstract method was 
implemented - should you run it and see if it raises a 
NotImplementedError? But with what arguments? And even if you find a way 
to test whether a method was implemented, I still think that the 
duplication of code isn't very nice - you have both in your class 
definition and in your test suite a section which says only "method 
so-and-so should be implemented."

I think that making abstract methods a different object really makes 
sense - they are just something else. Functions (and methods) define 
what the computer should do. Abstract methods define what the 
*programmer* should do.

Again, thanks for enlightening me.
Noam
--
http://mail.python.org/mailman/listinfo/python-list


Re: How about "pure virtual methods"?

2004-12-21 Thread Noam Raphael
Scott David Daniels wrote:
class Abstract(object):
    '''A class to stick anywhere in an inheritance chain'''
    __metaclass__ = MustImplement

def notimplemented(method):
    '''A decorator for those who prefer the parameters declared.'''
    return NotImplemented
I just wanted to say that I thought of notimplemented as a class that 
would save a reference to the function it got in the constructor. In 
that way pydoc and its friends would be able to find the arguments the 
method was expected to get, and its documentation string.

But it's a great implementation.
Noam
Oh, and another thing - maybe "abstract" is a better name than 
"notimplemented"? notimplemented might suggest a method which doesn't 
have to be implemented - and raises NotImplementedError when it is 
called. What do you think?
--
http://mail.python.org/mailman/listinfo/python-list


Re: How about "pure virtual methods"?

2004-12-21 Thread Noam Raphael
Jeff Shannon wrote:
Except that unit tests should be written to the *specification*, not the 
implementation.  In other words, forgetting a complete method would 
require that you forget to write the method, *and* that you failed to 
translate the specification into unit tests *for that same method*.
You are absolutely right - but when you are not that tidy, and don't 
have a written specification apart from the code, it would be natural to 
go over each method in the class definition, and write a test to check 
if it does what it should. I'm not saying that it's the ideal way, but 
it is not that bad, usually.

In the context of changing an existing interface, a unit-testing 
scenario would mean that, instead of installing a "pure virtual" method 
on a base class, you'd change the unit-tests to follow the new 
specification, and then write code that would pass the unit tests.  If 
you are subclassing from a common base, then you'd only need to change 
the unit test for that common base class (presuming that all derived 
classes would run those unit tests as well).

The problem is that I couldn't write a general unit test, since the base 
class wasn't instantiable - there wasn't even any point in making it 
instantiable, since every subclass was constructed with different 
argument definition. They were just required to provide some methods 
(check whether they contain an object, for example) - I don't know how 
to write a unit test for such a base class, or what does it mean. (Well, 
it may mean: check whether all the required methods exist, but come on - 
that's exactly the idea of my suggestion. There's no point in having to 
write the list of required methods another time).

Jeff Shannon
Technician/Programmer
Credit International
Thanks for your comment. You're welcome to reply if you don't agree.
Noam
--
http://mail.python.org/mailman/listinfo/python-list


Re: How about "pure virtual methods"?

2004-12-21 Thread Noam Raphael
My long post gives all the philosophy, but here are the short answers.
Mike Meyer wrote:
+0
Python doesn't use classes for typing. As Alex Martelli puts it,
Python uses protocols. So the client expecting a concrete subclass of
your abstract class may get an instantiation of a class that doesn't
inherit from the abstract class at all.
That's right - this mechanism is mostly useful to whoever implements 
the class, to make sure they have implemented everything needed to be 
assigned the title "a subclass of that class".

Or maybe the subclass is only going to use a subset of the features of
the abstract class, and the author knows that some deferred methods
won't be invoked. The correct behavior in this case would be to allow
the subclass to be instantiated, and then get a runtime error if one
of the features the author thought he could skip was actually called.
I disagree - my reasoning is that a subclass must implement the complete 
interface of its base class (see my long post). The author may implement 
a class which defines only a part of the interface, and give it to the 
function, and it may work and be great. But it must not be called "an 
instance of the abstract class".

Finally, in a sufficiently complex class hierarchy, this still leaves
you wandering through the hierarchy trying to find the appropriate
parent class that tagged this method as unimplemented, and then
figuring out which class should have implemented it - as possibly a
parent of the class whose instantiation failed is the subclass that
should have made this method concrete.
You are right - but I needed this for a class hierarchy of only two 
levels (the base abstract class and the concrete subclasses), so there 
were not many classes to blame for a missing method.
I hope this seems reasonable,
Noam
--
http://mail.python.org/mailman/listinfo/python-list


Re: How about "pure virtual methods"?

2004-12-21 Thread Noam Raphael
Steve Holden wrote:
Even if you can do it, how would you then implement a class hierarchy 
where the ultimate base class had virtual methods, and you wanted to 
derive from that class another class, to be used as a base class for 
usable classes, which implemented only a subset of the virtual methods, 
leaving the others to be implemented by the ultimate subclasses?

What I suggest is that only *instantiation* would be forbidden. You are 
free to make a subclass which defines only some of the abstract methods, 
and to subclass the subclass and define the rest. You would only be able 
to make instances of the subclass of the subclass, but that's ok.
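(Editorial note: the abc module, which postdates this thread, now provides exactly this behaviour. A hedged sketch of the two-level arrangement described above - the class names are illustrative:)

```python
from abc import ABC, abstractmethod

class Container(ABC):
    @abstractmethod
    def __contains__(self, item): ...

    @abstractmethod
    def save_data(self, filename): ...

class ListBacked(Container):
    # Defines only part of the interface; still abstract, so still
    # uninstantiable -- instantiating it raises TypeError.
    def __init__(self, items):
        self.items = list(items)

    def __contains__(self, item):
        return item in self.items

class Saveable(ListBacked):
    # Completes the interface; instances of this class are allowed.
    def save_data(self, filename):
        with open(filename, "w") as f:
            f.write(repr(self.items))
```

Only `Saveable` can be instantiated; `Container` and `ListBacked` both refuse, which is the behaviour asked for.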

See Scott's implementation - the test at the end does exactly this.
I hope this helped,
Noam
--
http://mail.python.org/mailman/listinfo/python-list


Re: How about "pure virtual methods"?

2004-12-21 Thread Noam Raphael
Thank you all, especially Alex for your enlightening discussion, and 
Scott for your implementation. I'm sorry that I can't be involved on a 
daily basis - but I did read all of the posts in this thread. They 
helped me understand the situation better, and convinced me that indeed 
this feature is needed. Let's see if I can convince you too.

First, the actual situation in which I stood, which made me think, "I 
would like to declare a method as not implemented, so that subclasses 
would have to implement it."

I wrote a system in which objects had to interact between themselves. In 
my design, all those objects had to implement a few methods for the 
interaction to work. So I wrote a base class for all those objects, with 
a few methods which the subclasses had to implement. I think it's good, 
for *me*, to have an explicit list of what should be implemented, so 
that when (in a function) I expect to get an object of this kind I know 
what I may and may not do with it.

Then, I wrote the classes themselves. And I wrote unit tests for them. 
(Ok, I lie. I didn't. But I should have!) Afterwards, I decided that I 
needed all my objects of that kind to supply another method. So I added 
another "raise NotImplementedError" method to the base class. But what 
about the unit tests? They would have still reported a success - where 
of course they shouldn't have; my classes, in this stage, didn't do what 
they were expected to do. This problem might arise even when not 
changing the interface at all - it's quite easy to write a class which, 
by mistake, doesn't implement all the interface. Its successful unit 
tests may check every single line of code of that class, but a complete 
method was simply forgotten, and you wouldn't notice it until you try 
the class in the larger framework (and, as I understand, the point of 
unit testing is to test the class on its own, before integrating it).
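(Editorial note: a test that would have caught the forgotten method can be sketched as follows. The `REQUIRED_METHODS` list and the helper are an illustrative convention, not an existing feature - which is exactly the duplication the proposal below aims to remove:)

```python
class BaseClass:
    # The interface subclasses must provide (illustrative convention).
    REQUIRED_METHODS = ("save_data", "load_data")

    def save_data(self, filename):
        raise NotImplementedError

    def load_data(self, filename):
        raise NotImplementedError

class RealClass(BaseClass):
    def save_data(self, filename):
        pass
    # load_data was forgotten.

def missing_methods(cls, base=BaseClass):
    """Return the names the base class requires but cls never overrode."""
    return [name for name in base.REQUIRED_METHODS
            if getattr(cls, name) is getattr(base, name)]
```

Here `missing_methods(RealClass)` returns `['load_data']`, so a single assertion in the test suite flags the gap before integration - but note that the required names had to be listed a second time.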

Ok. This was the practical reason why this is needed. Please note that I 
didn't use "isinstance" even once - all my functions used the 
*interface* of the objects they got. I needed the extra checking for 
myself - if someone wanted to implement a class that wouldn't inherit 
from my base class, but would nevertheless implement the required 
interface, he was free to do it, and it would have worked fine with the 
framework I wrote.

Now for the "theoretical" reason why this is needed. My reasoning is 
based on the existence of "isinstance" in Python. Well, what is the 
purpose of isinstance? I claim that it doesn't test if an object *is* of 
a given type. If that would have been its purpose, it would have checked 
whether type(obj) == something. Rather, it checks whether an object is a 
subclass of a given type. Why should we want such a function? A subclass 
may do a completely different thing from what the original class did! 
The answer is that a subclass is guaranteed to have the same *interface* 
as the base class. And that's what matters.
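(Editorial note: the distinction can be shown in a few lines - class names are illustrative:)

```python
class Shape:
    pass

class Circle(Shape):
    pass

c = Circle()
# isinstance accepts any subclass of Shape...
assert isinstance(c, Shape)
# ...while an exact-type check would reject it:
assert type(c) is not Shape
assert type(c) is Circle
```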

So I conclude that a subclass, in Python, must implement the interface 
of its parent class. Usually, this is obvious - there's no way for a 
subclass not to implement the interface of its parent class, simply 
because it can only override methods, but can't remove methods. But what 
shall we do if some methods in the base class consist *only* of an 
interface? Can we implement only a part of the interface, and claim that 
instances of that class are instances of the original class, in the 
"isinstance" fashion? My answer is no. The whole point of "isinstance" 
is to check whether an instance implements an interface. If it doesn't - 
what is the meaning of the True that isinstance returns? So we should 
simply not allow instances of such classes.

You might say that abstract classes at the base of the hierarchy are 
"not Pythonic". But they are in Python already - the class basestring is 
exactly that. It is an uninstantiable class, which is there only so that 
you would be able to do isinstance(x, basestring). Classes with 
"notimplemented" methods would behave in exactly the same way - you 
wouldn't be able to instantiate them, just to subclass them (and to 
check, using isinstance, whether they implement the required protocol, 
which, I agree, probably wouldn't be Pythonic).

Ok. This is why I think this feature fits Python like a glove to a hand. 
Please post your comments on this! I apologize now - I may not be able 
to reply in the next few days. But I will read them at the end, and I 
will try to answer.

Have a good day,
Noam
--
http://mail.python.org/mailman/listinfo/python-list


How about "pure virtual methods"?

2004-12-18 Thread Noam Raphael
Hello,
I thought about a new Python feature. Please tell me what you think 
about it.

Say you want to write a base class with some unimplemented methods, that 
subclasses must implement (or maybe even just declare an interface, with 
no methods implemented). Right now, you don't really have a way to do 
it. You can leave the methods with a "pass", or raise a 
NotImplementedError, but even in the best solution that I know of, 
there's now way to check if a subclass has implemented all the required 
methods without running it and testing if it works. Another problem with 
the existing solutions is that raising NotImplementedError usually means 
"This method might be implemented some time", and not "you must 
implement this method when you subclass me".

What I suggest is a new class, called notimplemented (you may suggest a 
better name). It would get a function in its constructor, and would just 
save a reference to it. The trick is that when a new type (a subclass of 
the default type object) is created, It will go over all its members and 
check to see if any of them is a notimplemented instance. If that is the 
case, it would not allow an instantiation of itself.

What I want is that if I have this module:
==
class BaseClass(object):
    def __init__(self):
        ...

    @notimplemented
    def save_data(self, filename):
        """This method should save the internal state of the class to
        a file named filename.
        """
        pass

class RealClass(BaseClass):
    def save_data(self, filename):
        open(filename, 'w').write(self.data)
==
then if I try to instantiate BaseClass I would get an exception, but 
instantiating RealClass will be ok.
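(Editorial note: the proposed check can be sketched with a metaclass - a hedged illustration in modern Python 3 syntax; the names notimplemented and NotImplementedMeta are illustrative, not part of any library:)

```python
class notimplemented:
    """Marker wrapping a method that subclasses must override (illustrative)."""
    def __init__(self, func):
        self.func = func

class NotImplementedMeta(type):
    def __call__(cls, *args, **kwargs):
        # Refuse instantiation while any marker attribute is still present.
        for name in dir(cls):
            if isinstance(getattr(cls, name), notimplemented):
                raise TypeError("%s does not implement %s"
                                % (cls.__name__, name))
        return super().__call__(*args, **kwargs)

class BaseClass(metaclass=NotImplementedMeta):
    @notimplemented
    def save_data(self, filename):
        """Save the internal state of the class to a file named filename."""

class RealClass(BaseClass):
    def save_data(self, filename):
        with open(filename, "w") as f:
            f.write(self.data)
```

With this sketch, `BaseClass()` raises TypeError while `RealClass()` succeeds - the behaviour described above, with the check deferred from class creation to instantiation time.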

Well, what do you say?
Noam Raphael
--
http://mail.python.org/mailman/listinfo/python-list