Re: Step-by-step exec

2008-11-07 Thread gregory . lielens
On Nov 7, 11:20 am, Steven D'Aprano <[EMAIL PROTECTED]
cybersource.com.au> wrote:

> > What I am trying to do is to execute it "step-by-step", so that I can
> > capture the exception if one line (or multi-line statement) fails, print
> > a warning about the failure, and continue the execution of the following
> > lines/statements. Of course, an error on one line can trigger errors in
> > the following lines, but it does not matter in the application I have in
> > mind,
>
> I'm curious what sort of application you have where it doesn't matter
> that programmatic statements are invalid.

Well, it is not that it does not matter, it is that I'd like to get as
much as possible of the input file "executed".
The application is roughly to save a graph, and retrieve it later.
The graph is made of a set of python classes, derived from a GraphNode
instance but each having particular sets of new attributes.
The storage format should be human readable and editable (this is why
a simple pickle or equivalent is not OK), store only part of the
database, and automatically reflect changes to the classes (new
attributes in old classes, new classes, ...). The format I used was
"simply" instructions that re-create the graph...

It works, but could be more robust across changes (basically, an old
database read with a new version of the application should
recreate as many nodes as possible, with new attributes initialized
to default values). The node classes allow that, but during parsing,
when attribute names or class names have changed, errors can
happen, which not only affect those nodes but stop execution. In
many cases, even with those errors, executing the rest of the commands
will re-create a useful graph, of course lacking some information
but much more useful than the partial graph obtained when stopping at
the first error...

>So basically you want to create a Python interpreter you can stop and
>start which runs inside Python. Is that right?

Yep, and the InteractiveConsole works very nicely for doing that :-)
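For reference, a minimal sketch of that approach in modern Python (the class name and the error-collecting behaviour are my own choices, not from the thread):

```python
import code
import sys

class TolerantConsole(code.InteractiveConsole):
    """Feed a script statement by statement, recording errors
    instead of stopping at the first one."""
    def __init__(self, namespace):
        code.InteractiveConsole.__init__(self, locals=namespace)
        self.errors = []

    def showtraceback(self):
        # called by runcode() inside its except block, so exc_info is live
        self.errors.append(sys.exc_info()[1])

    def showsyntaxerror(self, filename=None, **kwargs):
        self.errors.append(sys.exc_info()[1])

    def run_script(self, source):
        for line in source.splitlines():
            self.push(line)
        self.push("")  # flush any pending multi-line block

ns = {}
console = TolerantConsole(ns)
console.run_script("a = 1\nb = undefined_name\nif a:\n    c = a + 2\n")
print(ns["a"], ns["c"], len(console.errors))  # -> 1 3 1
```

The push() method handles multi-line statements itself, which is what makes this nicer than a naive line-by-line exec.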
--
http://mail.python.org/mailman/listinfo/python-list


Re: Step-by-step exec

2008-11-07 Thread gregory . lielens
On Nov 6, 9:53 pm, Aaron Brady <[EMAIL PROTECTED]> wrote:

> Check out the InteractiveConsole and InteractiveInterpreter classes.
> Derive a subclass and override the 'push' method.  It's not documented
> so you'll have to examine the source to find out exactly when and what
> to override.

Thanks, this is the thing I was looking for: it works very nicely :-)
--
http://mail.python.org/mailman/listinfo/python-list


Re: Step-by-step exec

2008-11-06 Thread gregory . lielens
On Nov 6, 1:02 pm, "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
wrote:
> On Nov 6, 4:27 pm, [EMAIL PROTECTED] wrote:
>
>
>
> > Hi,
>
> > I am using a small python file as an input file (defining constants,
> > parameters, input data, ...) for a python application.
> > The input file is simply read by an exec statement in a specific
> > dictionary, and then the application retrieve all the data it need
> > from the dictionary...
> > Everything is working nicely, but I'd like to have something a little
> > bit more robust regarding input file errors: now
> > any error in the python input script raise an exception and stop the
> > execution.
> > What I am trying to do is to execute it "step-by-step", so that I can
> > capture the exception if one line (or multi-line statement) fails,
> > print a warning about the failure, and continue the execution of the
> > following lines/statements. Of course, an error on one line can
> > trigger errors in the following lines, but it does not matter in the
> > application I have in mind, the goal is to parse as much of the input
> > script as possible, warn about the errors, and check what's inside the
> > dictionary after the exec.
> > One way to do it is to read the input script line per line, and exec
> > each line in turn. However, this is not convenient as it does not
> > allow multi-line statements, or basic control flow like if - else
> > statements or loops.
>
> Do you have control over the input file generation ? If the input file
> can be easily divided into self-sufficient blocks of code, you could
> read each block in, one at a time, and do a compile() and exec(). Your
> input file need not be a full python script either; you could just have
> token-delimited blocks of python code which are read in one block at a
> time and then exec()'d.
>
> -srp
>
>
>
> > Is there a better way for a step-by-step exec? Syntax errors in the
> > input script are not really a problem (as it is generated elsewhere,
> > it is not directly edited by users), although it would be nice to
> > catch. The biggest problem are runtime errors (attribute error, value
> > error, ...). Maybe compiling the file into a code object, and
> > executing this code object step-by-step in a way similar to debug? pdb
> > module should do something similar
>
> > Best regards,
>
> > Greg.
>
>
Thanks for your input!
I had a similar solution in mind (with continuation comments to allow
reading multi-line statements, instead of statement delimiters, but I
think your delimiter idea would be easier to implement).
The only problem is that files generated by current and older versions
of the input generator (which do not have any kind of statement
delimiter) will not be readable, or, if we just consider an old file
as one big statement, will not offer step-by-step execution. It would
work nicely with new files, but the best solution would be something
that also works with older auto-generated files...
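
A sketch of the delimiter idea suggested above (the `# ---` token and the function name are arbitrary choices of mine, in modern Python syntax):

```python
def exec_blocks(text, namespace, delimiter="# ---"):
    """Execute delimiter-separated blocks of code independently,
    collecting errors instead of aborting at the first one."""
    errors = []
    for i, block in enumerate(text.split(delimiter)):
        try:
            exec(block, namespace)
        except Exception as exc:
            errors.append((i, exc))
    return errors

ns = {}
script = "a = 1\n# ---\nb = undefined_name\n# ---\nc = a + 2\n"
errs = exec_blocks(script, ns)
print(ns["a"], ns["c"], len(errs))  # -> 1 3 1
```

As the reply notes, this only helps for files that already contain the delimiter; an undelimited old file degenerates to one big block.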

In fact, the errors in the input script are mainly caused by version
mismatch, and the step-by-step approach is mainly for improving
backward compatibility.
Having better backward compatibility starting from now on would be
nice, but having backward compatibility with previous versions would
be even better ;-)
--
http://mail.python.org/mailman/listinfo/python-list


Step-by-step exec

2008-11-06 Thread gregory . lielens
Hi,

I am using a small python file as an input file (defining constants,
parameters, input data, ...) for a python application.
The input file is simply read by an exec statement in a specific
dictionary, and then the application retrieves all the data it needs
from the dictionary...
Everything is working nicely, but I'd like to have something a little
bit more robust regarding input file errors: now any error in the
python input script raises an exception and stops the execution.
What I am trying to do is to execute it "step-by-step", so that I can
capture the exception if one line (or multi-line statement) fails,
print a warning about the failure, and continue the execution of the
following lines/statements. Of course, an error on one line can
trigger errors in the following lines, but it does not matter in the
application I have in mind, the goal is to parse as much of the input
script as possible, warn about the errors, and check what's inside the
dictionary after the exec.
One way to do it is to read the input script line by line, and exec
each line in turn. However, this is not convenient as it does not
allow multi-line statements, or basic control flow like if - else
statements or loops.

Is there a better way for a step-by-step exec? Syntax errors in the
input script are not really a problem (as it is generated elsewhere,
it is not directly edited by users), although it would be nice to
catch them. The biggest problem is runtime errors (attribute error,
value error, ...). Maybe compiling the file into a code object, and
executing this code object step-by-step in a way similar to
debugging? The pdb module should do something similar.
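
One way to get statement-level granularity without delimiters is to compile and run each top-level statement separately; a sketch using the `ast` module (which appeared in Python 2.6, around the time of this thread; shown here in modern syntax):

```python
import ast

def exec_tolerant(source, namespace):
    """Run each top-level statement on its own, so a failure in one
    does not stop the following ones."""
    errors = []
    for node in ast.parse(source).body:
        # wrap the single statement back into a Module for compile()
        fragment = ast.Module(body=[node], type_ignores=[])
        try:
            exec(compile(fragment, "<input>", "exec"), namespace)
        except Exception as exc:
            errors.append((node.lineno, exc))
    return errors

ns = {}
errs = exec_tolerant("a = 1\nb = undefined_name\nc = a + 2\n", ns)
print(ns["a"], ns["c"], [line for line, _ in errs])  # -> 1 3 [2]
```

Multi-line statements and control flow are handled naturally, since each `node` is a complete top-level statement, however many lines it spans.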

Best regards,

Greg.
--
http://mail.python.org/mailman/listinfo/python-list


Re: assigning to __class__ for an extension type: Is it still possible?

2007-05-03 Thread gregory . lielens

> We have then added the Py_TPFLAGS_HEAPTYPE tp_flag, which turn _PClass
> into a heap
> class and should make this class assignment possible...

A precision: it seems that just adding the Py_TPFLAGS_HEAPTYPE flag
in the PyTypeObject tp_flags is not all you have to do to turn a
static type into a heap type: indeed, when doing so in the Noddy
examples, I get a segfault just by typing:
n=Noddy()
n

there is an access to the ht_name slot that is apparently not
initialized...

So maybe the core of the problem is that I do not define the heap
type correctly... Does anybody have (or can tell me where to find) a
small example of an extension module defining a heap class? Similar
to the Noddy examples from the python doc?
I did not find any concrete example of Py_TPFLAGS_HEAPTYPE in the
current doc or on the net...
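
For what it's worth, whether a given type ended up as a heap type can be checked from Python itself, since tp_flags is exposed as `__flags__` (the bit value below matches CPython's object.h):

```python
# Py_TPFLAGS_HEAPTYPE is bit 9 of tp_flags in CPython's object.h
Py_TPFLAGS_HEAPTYPE = 1 << 9

class PyClass:
    pass

print(bool(int.__flags__ & Py_TPFLAGS_HEAPTYPE))      # False: static C type
print(bool(PyClass.__flags__ & Py_TPFLAGS_HEAPTYPE))  # True: heap type
```

This is handy for verifying, from the interpreter, whether the flag actually took effect on the extension type.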

Best regards,

Greg.

-- 
http://mail.python.org/mailman/listinfo/python-list


assigning to __class__ for an extension type: Is it still possible?

2007-05-03 Thread gregory . lielens
Hello,

We are currently writing python bindings to an existing C++ library,
and we encountered a problem that some of you may have solved (or
found unsolvable :( ):

A C++ class (let's call it CClass) is bound using the classical
Python extension API to _PClass, which is accessible through python
without any problem. The problem is that I want this class to be
extended/extensible in python, and expose the python-extended version
(PClass) to library users (_PClass should never be used directly nor
be returned by any function).
The aim is to leave only performance-critical methods in C++ so that
the binding work is minimal, and develop the other methods in python
so that they are easier to maintain/extend.

We thus have something like this:

class PClass(_PClass):
    def overide_method(self, ...):
        ...
    def new_method(self, ...):
        ...

and I can define
a=PClass()
and use my new or overridden methods
a.overide_method(), a.new_method() as intended...

So far, so good; trouble begins when I have a method from another
class PClass2, derived from _PClass2 which binds the C++ class
CClass2, that should return objects of type PClass.

Let's call this method troublesome_method:

b=_PClass2()
c=b.troublesome_method()
type(c) gives _PClass.

Now I want to define a python class PClass2 extending the methods of
_PClass2 like I have done for _PClass; in particular I want PClass2's
troublesome_method to return objects of type PClass instead of
_PClass...

To this end I try something like this:

class PClass2(_PClass2):
    ...
    def troublesome_method(self):
        base = _PClass2.troublesome_method(self)
        base.__class__ = PClass
        return base

and I have python complaining: TypeError: __class__ assignment: only
for heap types...
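
The same restriction can be reproduced from pure Python by assigning to __class__ on an instance of a static built-in type (the exact error message varies across versions, so only the exception type matters here):

```python
obj = object()
caught = None
try:
    obj.__class__ = dict  # object and dict are static (non-heap) types
except TypeError as exc:
    caught = exc
print(type(caught).__name__)  # -> TypeError
```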

We have then added the Py_TPFLAGS_HEAPTYPE tp_flag, which turns
_PClass into a heap class and should make this class assignment
possible... or so I thought: when _PClass is turned into a heap type,
assignment to __class__ triggers a test in python's typeobject.c on
tp_dealloc/tp_free which raises an exception:
TypeError: __class__ assignment: '_PClass' deallocator differs from
'PClass'
I have commented out this test, just to check what happens, and just
got an error later on:
TypeError: __class__ assignment: '_PClass' object layout differs from
'PClass'

It seems thus that this approach is not possible, but the problem is
that delegation (embedding a _PClass instance in PClass, let's say as
a _base attribute, and automatically forwarding all method
calls/setattr/getattr to ._base) is far from ideal...
Indeed, we would have to change all other methods that accept a
_PClass, to give them PClass._base; this means wrapping a large
number of methods (from our C++ bindings) for argument processing...

This is particularly frustrating because I also have the impression
that what we want to do was at one time possible in python, let's say
in 2002-2003, when __class__ was already assignable but before
various safety checks were implemented (assignment only for heap
types, check on tp_free/tp_dealloc, layout check, ...).

Any hint on this problem?
In particular, a list of condition type A and B have to fullfull so
that a=A(); a.__class__=B is possible would be very nice, especially
when A is an extension type (I think the correct term is static type)
while B is defined by a class statement (dynamic type)...This is
something that is not easy to find in the docs, and seems to have
changed quite a lot between revisions 2.2/2.3/2.4/2.5...
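
For comparison, when both types are ordinary (heap) Python classes with the same instance layout, the assignment is allowed; this sketch shows the behaviour the post is after:

```python
class Base:
    pass

class Extended(Base):
    def new_method(self):
        return "extended"

obj = Base()
obj.__class__ = Extended  # both heap types, identical layout: allowed
print(obj.new_method())   # -> extended
```

The checks complained about above exist precisely because this trick is only memory-safe when deallocator and layout match.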

Thanks,

Greg.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: subclassing extension type and assignment to __class__

2004-12-02 Thread Gregory Lielens

> Unless I'm misunderstanding, couldn't one of __getattr__ or
> __getattribute__ make mapping other methods you don't override very
> simple and practical, not to mention fully automated with 2-3 lines of
> code?  In particular, __getattr__ would seem good for your use since
> it is only called for attributes that couldn't already be located.
> 
> I've had code that wrapped underlying objects in a number of cases, and
> always found that to be a pretty robust mechanism.

Thanks for mentioning this; after more research I came up with
something usable, using a delegating technique from the python
cookbook:


#wrapper

def wrap_class(base):
    class wrapper:
        __base__ = base
        def __init__(self, *args, **kwargs):
            # wrap an existing base instance, or build a new one
            if len(args) == 1 and type(args[0]) == self.__base__ and kwargs == {}:
                self._base = args[0]
            else:
                self._base = self.__class__.__base__.__new__(
                    self.__class__.__base__, *args, **kwargs)
        def __getattr__(self, s):
            # delegate anything not found on the wrapper to the base
            return self._base.__getattribute__(s)
    return wrapper
 
 
 
#wrap class
complex2 = wrap_class(complex)
#extend wrapper class
def supaprint(self):
    print "printed with supaprint(tm):"
    print self
complex2.supaprint = supaprint

#build a wrapper instance from a base-class instance
c1 = 1+1j
c2 = complex2(c1)
#test old functionality
print c1 == c2
print "c1=", c1
print "c2=", c2
#test extension
c2.supaprint()
c1.supaprint()  # AttributeError: the plain complex has no supaprint

So this basically fits the bill; it even delegates special methods,
it seems, although I am not sure why... It is like writing
inheritance ourselves, in a way :-)

The remaining problem is that if we use the class-generating wrapper
function (simple), we lose the classic class definition syntax and
rely on explicitly adding methods.
The alternative is to include the wrapper machinery in every
"home-derived" class, but you are right, it is not as bad as I
thought :-)

The biggest point I am not sure about now is performance: isn't there
a big penalty associated with this embedding, compared to derivation?
Python performance is not so critical in our application, but I would
be uncomfortable having a factor-10 penalty in method calling
associated with this approach... For now, this will be used for
testing new implementations/extensions, that will be added to the C++
binding afterwards.
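
The delegation overhead can be measured directly; a rough sketch with `timeit` (class names are mine, in modern Python syntax):

```python
import timeit

class Direct(complex):
    """Plain subclass: attribute access goes straight to the base slot."""
    pass

class Wrapper:
    """Delegating wrapper: every missed lookup pays an extra __getattr__ call."""
    def __init__(self, value):
        self._base = value
    def __getattr__(self, name):
        return getattr(self._base, name)

d = Direct(1, 1)
w = Wrapper(complex(1, 1))
print(w.real == d.real)  # -> True: same result either way
t_direct = timeit.timeit(lambda: d.real, number=100_000)
t_wrapped = timeit.timeit(lambda: w.real, number=100_000)
print("delegation/direct time ratio: %.1f" % (t_wrapped / t_direct))
```

The penalty applies per attribute or method lookup, not per call of the underlying C++ code, so for coarse-grained methods it is usually tolerable.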

I'd like to thank the list for the hints; I will post the results of
my experiments relaxing the assignment-to-__class__ test if they are
interesting.

Greg.

-- 
http://mail.python.org/mailman/listinfo/python-list