Re: Determining if a function is a method of a class within a decorator

2009-06-30 Thread David Hirschfield
Unfortunately that still requires two separate decorators, when I was 
hoping there was a way to determine if I was handed a function or method 
from within the same decorator.


Seems like there really isn't, so two decorators is the way to go.
Thanks,
-David
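For the record, a minimal sketch of the two-decorator route (the names `swap_function` and `swap_method` are illustrative, not from the original code):

```python
import functools

def swap_function(replacement):
    """Decorator for plain functions: call `replacement` instead."""
    def deco(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            return replacement(*args, **kwargs)
        return wrapper
    return deco

def swap_method(replacement):
    """Decorator for methods: `replacement` receives `self` as well."""
    def deco(method):
        @functools.wraps(method)
        def wrapper(self, *args, **kwargs):
            return replacement(self, *args, **kwargs)
        return wrapper
    return deco
```

Each decorator can then do its method-specific or function-specific extra work in its own wrapper.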

Carl Banks wrote:

On Jun 29, 6:01 pm, David Hirschfield  wrote:
  

So is there
a pattern I can follow that will allow me to determine whether the
objects I'm given are plain functions or belong to a class?

Thanks in advance,





class HomemadeUnboundMethod(object):
    def __init__(self, func):
        self.func = func
    def __call__(self, *args, **kwargs):
        print "is a function: %s" % self.func.func_name
        return self.func(*args, **kwargs)
    def __get__(self, obj, owner):
        return HomemadeBoundMethod(obj, self.func)

class HomemadeBoundMethod(object):
    def __init__(self, obj, func):
        self.obj = obj
        self.func = func
    def __call__(self, *args, **kwargs):
        print "is a method: %s" % self.func.func_name
        return self.func(self.obj, *args, **kwargs)

class A(object):
    @HomemadeUnboundMethod
    def method(self): pass

@HomemadeUnboundMethod
def function(): pass

A().method()
function()



Just override the __call__ functions to do what you want the decorated
function to do.  There are other little improvements you might make
(account for the owner parameter of __get__ for instance) but you get
the idea.


Carl Banks
  


--
Presenting:
mediocre nebula.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Determining if a function is a method of a class within a decorator

2009-06-29 Thread David Hirschfield
Yeah, it definitely seems like having two separate decorators is the 
solution. But the strange thing is that I found this snippet after some 
deep googling, that seems to do something *like* what I want, though I 
don't understand the descriptor stuff nearly well enough to get what's 
happening:


http://stackoverflow.com/questions/306130/python-decorator-makes-function-forget-that-it-belongs-to-a-class

answer number 3, by ianb. It seems to indicate there's a way to 
introspect and determine the class that the function is going to be 
bound to...but I don't get it, and I'm not sure it's applicable to my case.


I'd love an explanation of what is going on in that setup, and if it 
isn't usable for my situation, why not?
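For anyone else puzzling over that answer, the core trick is a decorator *class* that implements `__get__`: attribute access on a class routes through the descriptor protocol, and that is the first moment the owning class is visible. A rough Python 3 sketch of the idea (my own illustration, not ianb's exact code):

```python
import functools

class class_aware:
    """Decorator that can tell, at call time, whether it was used on a method."""
    def __init__(self, func):
        self.func = func
        functools.update_wrapper(self, func)

    def __get__(self, obj, owner):
        # Called only when the decorated function is fetched as a class
        # attribute; this is where the owning class finally becomes visible.
        return functools.partial(self._as_method, owner, obj)

    def _as_method(self, owner, obj, *args, **kwargs):
        return ('method of ' + owner.__name__, self.func(obj, *args, **kwargs))

    def __call__(self, *args, **kwargs):
        # A direct call means the decorated object was a plain function.
        return ('function', self.func(*args, **kwargs))

class Example:
    @class_aware
    def ping(self):
        return 'pong'

@class_aware
def ping():
    return 'pong'
```

`Example().ping()` goes through `__get__` and so knows its class; the bare `ping()` goes through `__call__` and so knows it is a plain function.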

Thanks again,
-David

Terry Reedy wrote:

David Hirschfield wrote:
I'm having a little problem with some python metaprogramming. I want 
to have a decorator which I can use either with functions or methods 
of classes, which will allow me to swap one function or method for 
another. It works as I want it to, except that I want to be able to 
do some things a little differently depending on whether I'm swapping 
two functions, or two methods of a class.


Unbound methods are simply functions which have become attributes of 
a class. Especially in Py3, there is *no* difference.


Bound methods are a special type of partial function. In Python, both 
are something else, though still callables.  Conceptually, a partial 
function *is* a function, just with fewer parameters.
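Terry's point can be seen directly (Python 3 here, where the unbound-method wrapper is gone entirely):

```python
def greet(self):
    # an ordinary function; nothing marks it as a "method"
    return 'hello from ' + type(self).__name__

class Greeter:
    pass

# attaching the function to the class is all it takes to make it a method
Greeter.greet = greet

assert Greeter().greet() == 'hello from Greeter'   # bound at attribute access
assert Greeter.__dict__['greet'] is greet          # the class dict still holds the plain function
```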


Trouble is, it appears that when the decorator is called the function 
is not yet bound to an instance, so no matter whether it's a method 
or function, it looks the same to the decorator.


Right. At decoration time, a function destined to become a method and a 
plain function look identical: both are just function objects.

Worse: when the decorator is called, there is no class for there to be 
instances of.


This simple example illustrates the problem:


Add a second parameter to tell the decorator which variant of behavior 
you want. Or write two variations of the decorator and use the one you 
want.
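The first suggestion might look like this (a sketch; the `is_method` flag name is mine):

```python
def swapWith(replacement, is_method=False):
    # The caller states up front whether a method or a plain function
    # is being swapped, so the decorator never has to guess.
    def deco(func):
        def wrapper(*args, **kwargs):
            kind = 'method' if is_method else 'function'
            # hand everything (including self, for methods) to the replacement
            return (kind, replacement(*args, **kwargs))
        return wrapper
    return deco
```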


tjr





Determining if a function is a method of a class within a decorator

2009-06-29 Thread David Hirschfield
I'm having a little problem with some python metaprogramming. I want to 
have a decorator which I can use either with functions or methods of 
classes, which will allow me to swap one function or method for another. 
It works as I want it to, except that I want to be able to do some 
things a little differently depending on whether I'm swapping two 
functions, or two methods of a class.


Trouble is, it appears that when the decorator is called the function is 
not yet bound to an instance, so no matter whether it's a method or 
function, it looks the same to the decorator.


This simple example illustrates the problem:

import inspect

class swapWith(object):
    def __init__(self, replacement):
        self.replacement = replacement

    def __call__(self, thingToReplace):
        def _replacer(*args, **kws):
            print "replacing:", self.replacement, inspect.ismethod(self.replacement)
            return self.replacement(*args, **kws)
        return _replacer

class MyClass(object):
    def swapIn(self):
        print "this method will be swapped in"

    @swapWith(swapIn)
    def swapOut(self):
        print "this method will be swapped out"

c = MyClass()
c.swapOut()


def swapInFn():
    print "this function will be swapped in"

@swapWith(swapInFn)
def swapOutFn():
    print "this function will be swapped out"

swapOutFn()


Both MyClass.swapIn and swapInFn look like the same thing to the 
decorator, and MyClass.swapOut and swapOutFn look the same. So is there 
a pattern I can follow that will allow me to determine whether the 
objects I'm given are plain functions or belong to a class?


Thanks in advance,
-David



Replacing a built-in method of a module object instance

2009-06-26 Thread David Hirschfield
I have a need to replace one of the built-in methods of an arbitrary 
instance of a module in some python code I'm writing.


Specifically, I want to replace the __getattribute__() method of the 
module I'm handed with my own __getattribute__() method which will do 
some special work on the attribute before letting the normal attribute 
lookup continue.


I'm not sure how this would be done. I've looked at all the 
documentation on customizing classes and creating instance methods...but 
I think I'm missing something about how built-in methods are defined for 
built-in types, and where I'd have to replace it. I tried this naive 
approach, which doesn't work:


m = 

def __getattribute__(self, attr):
    print "modified getattribute:", attr
    return object.__getattribute__(self, attr)

import types
m.__getattribute__ = types.MethodType(__getattribute__, m)

It seems to create an appropriately named method on the module instance, 
but that method isn't called when doing any attribute lookups, so 
something's not right.

Any ideas? Is this even possible?
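The reason the naive approach fails: CPython looks up special methods like `__getattribute__` on the *type*, not on the instance, so a method stuffed onto the module object itself is never consulted. One workaround on modern CPython (3.5+, where module `__class__` assignment is allowed) is to swap in a `ModuleType` subclass; in the Python 2 of this post, the usual trick was instead to replace the entry in `sys.modules` with a wrapper instance. A sketch of the modern form:

```python
import math
import types

class LoggingModule(types.ModuleType):
    accessed = []  # record of (non-dunder) attribute names looked up

    def __getattribute__(self, attr):
        if not attr.startswith('_'):
            LoggingModule.accessed.append(attr)
        # fall through to normal module attribute lookup
        return types.ModuleType.__getattribute__(self, attr)

# special methods are found on the type, so change the module's type
math.__class__ = LoggingModule

_ = math.pi
assert 'pi' in LoggingModule.accessed
```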
Thanks in advance!
-David



Help: Trouble with imp.load_module

2007-12-11 Thread David Hirschfield
I'm not entirely sure what's going on here, but I suspect it's related 
to my general lack of knowledge of the python import internals.

Here's the setup:

module: tester.py:
-
import imp

def loader(mname, mpath):
    fp, pathname, description = imp.find_module(mname, [mpath])
    try:
        m = imp.load_module(mname, fp, pathname, description)
    finally:
        if fp:
            fp.close()
    return m

module = loader("testA", "/path/to/testA/")
print module.test_func("/path/to/something")

module = loader("test.B", "/path/to/test.B/")
print module.test_func("/path/to/something")
--


module: testA.py:
---
def test_func(v):
    import os
    return os.path.exists(v)
---

module: test.B.py:
---
def test_func(v):
    import os
    return os.path.exists(v)
---


Okay, so modules "testA.py" and "test.B.py" are functionally identical, 
except for the name of the module files themselves, and this is the 
important part. The tester.py module is a really simple rig to run 
"imp.load_module" on those two files.

You should get no problem running the first test of module "testA.py" 
but you should get a traceback when attempting to run the second module 
"test.B.py":

Traceback (most recent call last):
  File "tester.py", line 15, in ?
print module.test_func("/path/to/something")
  File "./test.B.py", line 2, in test_func
import os
  File "/usr/lib/python2.4/os.py", line 131, in ?
from os.path import curdir, pardir, sep, pathsep, defpath, extsep, 
altsep
ImportError: No module named path


So this must have something to do with the "." in the name of module 
"test.B.py" but what is the problem, exactly? And how do I solve it? I 
will sometimes need to run load_module on filenames which happen to have 
"." in the name somewhere other than the ".py" extension. Is the 
find_module somehow thinking this is a package?

Any help would be appreciated,
-Dave



Using marshal to manually "import" a python module

2007-11-29 Thread David Hirschfield
I had a situation recently that required I manually load python bytecode 
from a .pyc file on disk.

So, for the most part, I took code from imputil.py which loads the .pyc 
data via the marshal module and then exec's it into a newly created 
module object (created by imp.new_module()). The relevant pieces of code 
are in imputil.py_suffix_importer and imputil.Importer._process_result 
(where it exec's the code into the module's __dict__)

The strange thing is, it worked fine locally on my two machines (32bit 
running python 2.3.5 and 64bit running python 2.4.1), but when run by a 
64bit machine on the network, it would fail every time in the following 
manner:

My marshal/exec-based loader would load the module from the pyc 
apparently without problems, but if I then pulled a specific function 
attr out of the resulting module object, and that function called 
another function defined within the same module, it would raise a 
"TypeError: 'NoneType' object is not callable" exception when attempting 
to call that second function. So the setup is:

module blah:
def A():
    ...

def B():
    x = A()

compiled to bytecode: blah.pyc
then, in my program:

m = my_marshal_loader("blah.pyc")
f = getattr(m,"B")

x = f()

raises "TypeError: 'NoneType' object is not callable" inside that call 
to f(), on the line x = A(), as though A is a None and not the function 
object defined in the module.

I can't figure out why this would work locally, but not when the module 
is loaded across a network. In fact, I have no idea what would ever 
cause it not to see A as a function. I'm stumped, and this is over my 
head as far as intimate knowledge of the direct loading of python 
bytecode via marshal is concerned...so I'm not clear on the best way to 
debug it. If anyone has an inkling of what might be going on, I'd love 
to hear it.
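One classic cause of exactly this symptom (I can't confirm it was the cause here): in CPython 2, when a module object was garbage-collected its globals dict was purged to `None`, so if the hand-built module wasn't kept alive, e.g. via `sys.modules`, then `B`'s global reference to `A` silently became `None`. A sketch of a loader that avoids that; note the .pyc header is 16 bytes on CPython 3.7+, and differs on the older interpreters in this thread:

```python
import marshal
import sys
import types

def load_pyc(path, name):
    with open(path, 'rb') as f:
        f.read(16)                 # skip the .pyc header (3.7+: magic, flags, mtime, size)
        code = marshal.load(f)
    module = types.ModuleType(name)
    sys.modules[name] = module     # keep the module alive so its globals survive
    exec(code, module.__dict__)
    return module
```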

Thanks in advance,
-Dave



Help: asyncore/asynchat and terminator string

2007-01-16 Thread David Hirschfield
I'm implementing a relatively simple inter-application communication 
system that uses asyncore/asynchat to send messages back and forth.

The messages are prefixed by a length value and a terminator string: the 
terminator signals that a message is incoming, the integer gives the 
size of the message, and the message data follows.

My question is: how can I produce a short terminator string that won't 
show up (or has an extremely small chance of showing up) in the binary 
data that I send as messages?

Frankly, I'm not so sure this is even an important question, but is 
nagging me. If my communication is a kind of state machine:

sender:   sends the message length value, followed by the terminator
          string, followed by the message data
receiver: waits for the terminator string via set_terminator(),
          continually saving what comes in via collect_incoming_data()
receiver: when the sender's message arrives, found_terminator() is called;
          pulls the message length from the previously received data and
          sets the terminator to the length of the message via
          set_terminator()
receiver: collect_incoming_data() collects the message data
receiver: found_terminator() is called when the full message length is
          read; goes back to waiting for the message terminator string

I hope I explained that clearly enough.

The only time I can conceive that the system will get confused by 
finding a terminator string in the binary data of the message is if 
something goes haywire and I end up looking for a terminator string when 
the other side is sending the message data. What gotchas do I need to 
look out for here? I'm not a networking person, so I'm relying on the 
underlying libraries to be stable and just let me handle the high-level 
stuff here. This isn't going to be used in a malicious environment, the 
only thing I have to contend with is network hiccups...nobody is 
actively going to try and break this system.
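One way to sidestep the question entirely: with a fixed-size length prefix there is no need for an in-band terminator at all, so no byte sequence in the payload can ever be misread as one. A sketch (in asynchat terms this corresponds to `set_terminator(4)` for the header, then `set_terminator(length)` for the body, with no magic string anywhere):

```python
import struct

HEADER = struct.Struct('!I')   # 4-byte big-endian payload length

def frame(payload):
    """Prefix a binary payload with its length."""
    return HEADER.pack(len(payload)) + payload

def unframe(buffer):
    """Split one complete frame off the front; return (payload, rest) or None."""
    if len(buffer) < HEADER.size:
        return None                      # header not complete yet
    (length,) = HEADER.unpack_from(buffer)
    end = HEADER.size + length
    if len(buffer) < end:
        return None                      # body not complete yet
    return buffer[HEADER.size:end], buffer[end:]
```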

Any advice/help would be appreciated,
-Dave



Re: Sending binary pickled data through TCP

2006-10-13 Thread David Hirschfield




I'm using cPickle already. I need to be able to pickle pretty
arbitrarily complex python data structures, so I can't use marshal.
I'm guessing that cPickle is the best choice, but if someone has a
faster pickling-like module, I'd love to know about it.

-Dave

Fredrik Lundh wrote:

  David Hirschfield wrote:

  
  
Are there any existing python modules that do the equivalent of pickling 
on arbitrary python data, but do it a lot faster? I wasn't aware of any 
that are as easy to use as pickle, or don't require implementing them 
myself, which is not something I have time for.

  
  
cPickle is faster than pickle.  marshal is faster than cPickle, but only 
supports certain code object types.



  



Re: Sending binary pickled data through TCP

2006-10-13 Thread David Hirschfield




I've looked at pyro, and it is definitely overkill for what I need.

If I was requiring some kind of persistent state for objects shared
between processes, pyro would be awesome...but I just need to transfer
chunks of complex python data back and forth. No method calls or
keeping state in sync.

I don't find socket code particularly nasty, especially through a
higher-level module like asyncore/asynchat.
-Dave

Irmen de Jong wrote:

  David Hirschfield wrote:
  
  
I have a pair of programs which trade python data back and forth by 
pickling up lists of objects on one side (using 
pickle.HIGHEST_PROTOCOL), and sending that data over a TCP socket 
connection to the receiver, who unpickles the data and uses it.

So far this has been working fine, but I now need a way of separating 
multiple chunks of pickled binary data in the stream being sent back and 
forth.

  
  [...]

Save yourself the trouble of implementing some sort of IPC mechanism
over sockets, and give Pyro a swing: http://pyro.sourceforge.net

In Pyro almost all of the nastiness that is usually associated with socket
programming is shielded from you and you'll get much more as well
(a complete pythonic IPC library).

It may be a bit heavy for what you are trying to do but it may
be the right choice to avoid troubles later when your requirements
get more complex and/or you discover problems with your networking code.

Hth,
---Irmen de Jong
  



Re: Sending binary pickled data through TCP

2006-10-13 Thread David Hirschfield




Thanks for the great response.

Yeah, by "safe" I mean that it's all happening on an intranet with no
chance of malicious individuals getting access to the stream of data.

The chunks are arbitrary collections of python objects. I'm wrapping
them up a little, but I don't know much about the actual formal makeup
of the data, other than it pickles successfully.

Are there any existing python modules that do the equivalent of
pickling on arbitrary python data, but do it a lot faster? I wasn't
aware of any that are as easy to use as pickle, or don't require
implementing them myself, which is not something I have time for.

Thanks again,
-Dave

Steve Holden wrote:

  David Hirschfield wrote:
  
  
I have a pair of programs which trade python data back and forth by 
pickling up lists of objects on one side (using 
pickle.HIGHEST_PROTOCOL), and sending that data over a TCP socket 
connection to the receiver, who unpickles the data and uses it.

So far this has been working fine, but I now need a way of separating 
multiple chunks of pickled binary data in the stream being sent back and 
forth.

Questions:

Is it safe to do what I'm doing? I didn't think there was anything 
fundamentally wrong with sending binary pickled data, especially in the 
closed, safe environment these programs operate under...but maybe I'm 
making a poor assumption?


  
  If there's no chance of malevolent attackers modifying the data stream 
then you can safely ignore the otherwise dire consequences of unpickling 
arbitrary chunks of data.

  
  
I was going to separate the chunks of pickled data with some well-formed 
string, but couldn't that string potentially randomly appear in the 
pickled data? Do I just pick an extremely 
unlikely-to-be-randomly-generated string as the separator? Is there some 
string that will definitely NEVER show up in pickled binary data?


  
I presumed each chunk was of a known structure. Couldn't you just lead off 
with a pickled integer saying how many chunks follow?

  
  
I thought about base64 encoding the data, and then decoding on the 
opposite side (like what xmlrpclib does), but that turns out to be a 
very expensive operation, which I want to avoid, speed is of the essence 
in this situation.


  
  Yes, base64 stuffs three bytes into four (six bits per byte) giving you 
a 33% overhead. Having said that, pickle isn't all that efficient a 
representation because it's designed to be portable. If you are using 
machines of the same type there are almost certainly faster binary 
encodings.

  
  
Is there a reliable way to determine the byte count of some pickled 
binary data? Can I rely on len() == bytes?


  
  Yes, since pickle returns a string of bytes, not a Unicode object.

If bandwidth really is becoming a limitation you might want to consider 
uses of the struct module to represent things more compactly (but this 
may be too difficult if the objects being exchanged are at all complex).

regards
  Steve
  



Sending binary pickled data through TCP

2006-10-12 Thread David Hirschfield
I have a pair of programs which trade python data back and forth by 
pickling up lists of objects on one side (using 
pickle.HIGHEST_PROTOCOL), and sending that data over a TCP socket 
connection to the receiver, who unpickles the data and uses it.

So far this has been working fine, but I now need a way of separating 
multiple chunks of pickled binary data in the stream being sent back and 
forth.

Questions:

Is it safe to do what I'm doing? I didn't think there was anything 
fundamentally wrong with sending binary pickled data, especially in the 
closed, safe environment these programs operate under...but maybe I'm 
making a poor assumption?

I was going to separate the chunks of pickled data with some well-formed 
string, but couldn't that string potentially randomly appear in the 
pickled data? Do I just pick an extremely 
unlikely-to-be-randomly-generated string as the separator? Is there some 
string that will definitely NEVER show up in pickled binary data?

I thought about base64 encoding the data, and then decoding on the 
opposite side (like what xmlrpclib does), but that turns out to be a 
very expensive operation, which I want to avoid, speed is of the essence 
in this situation.

Is there a reliable way to determine the byte count of some pickled 
binary data? Can I rely on len() == bytes?
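On the separator question: rather than hunting for a byte string that can never occur in pickle output, the usual approach is to length-prefix each chunk, since `len()` of a pickle (a byte string) is exactly its size on the wire. A sketch over a file-like object, such as the result of `socket.makefile('rwb')`:

```python
import pickle
import struct

def send_obj(wfile, obj):
    blob = pickle.dumps(obj, pickle.HIGHEST_PROTOCOL)
    wfile.write(struct.pack('!I', len(blob)))  # 4-byte size header
    wfile.write(blob)

def recv_obj(rfile):
    (size,) = struct.unpack('!I', rfile.read(4))
    return pickle.loads(rfile.read(size))
```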

Thanks for all responses,
-David



Re: Question about turning off garbage collection

2006-10-05 Thread David Hirschfield




Thanks for the great response!

I'm positive there's something extremely funky going on underneath
that's causing the problem when cyclic garbage collection is turned on.
Unfortunately I haven't got access to the code for the module that
appears to be causing the trouble.

It really appears to be some incompatibility between python threads,
pygtk and the database access module I'm using...and it's a royal pain.
Fortunately your response makes me comfortable that I can at least turn
off the gc without creating a leaky mess. When I have a chance I'll try
to create a test case that clearly demonstrates the problem so I can
get the authors of the modules to find the real problem.

Thanks,
-Dave

Tim Peters wrote:

  [David Hirschfield]
  
  
Question from a post to pygtk list...but it probably would be better
answered here:

I encountered a nasty problem with an external module conflicting with
my python threads recently, and right now the only fix appears to be to
turn off garbage collection while the critical code of the thread is
running, and then turn it back on afterwards.

Now, I don't know much about how the garbage collector works in python,
and in order to get the thread to run without freezing, I'm wrapping the
threaded processing function with calls to gc.disable()/gc.enable().

So what's that going to do?

  
  
Stop the /cyclic/ garbage-collection algorithm from running for as
long as it remains disabled.  Most garbage collection in Python, in
typical applications most of the time, is done by reference counting,
and /that/ can never be disabled.  Reference counting alone is not
strong enough to detect trash objects in cycles (like A points to B
and B points to A and nothing else points to either; they're
unreachable trash then, but the reference count on each remains 1).
The `gc` module controls cyclic garbage collection, which is a
distinct subsystem used to find and reclaim the trash cycles reference
counting can't find on its own.

  
  
Will calling gc.enable() put things in good shape? Will all objects created while
the garbage collector was off now be un-collectable?

  
  
No.  It has no effect except to /suspend/ running the cyclic gc
subsystem for the duration.  Trash related to cycles will pile up for
the duration.  That's all.

  
  
I'm extremely wary of this solution, as I think anyone would be. I don't want a
suddenly super-leaky app.

  
  
As above, most garbage should continue to get collected regardless.

  
  
Comments? Suggestions? (I know, I know, avoid threads...if only I could)

  
  
Nothing wrong with threads.  My only suggestion is to dig deeper into
/why/ something goes wrong when cyclic gc is enabled.  That smells of
a serious bug, so that disabling cyclic gc is just papering over a
symptom of a problem that will come back to bite you later.  For
example, if some piece of code in an extension module isn't
incrementing reference counts when it should, that could /fool/ cyclic
gc into believing an object is trash /while/ the extension module
believes it has a valid pointer to it.  If so, that would be a serious
bug in the extension module.  Enabling cyclic gc should not create
problems for any correctly written C code.
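Tim's distinction is easy to demonstrate (a small illustration of reference counting continuing to work while only the cyclic collector is paused):

```python
import gc

gc.disable()                 # pauses only the cyclic collector

class Node:
    pass

a, b = Node(), Node()
a.peer, b.peer = b, a        # a reference cycle
del a, b                     # unreachable now, but each refcount is still 1

found = gc.collect()         # an explicit cyclic pass still reclaims them
assert found >= 2            # at least the two Node objects were found
gc.enable()
```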
  



Question about turning off garbage collection

2006-10-05 Thread David Hirschfield
Question from a post to pygtk list...but it probably would be better 
answered here:

I encountered a nasty problem with an external module conflicting with 
my python threads recently, and right now the only fix appears to be to 
turn off garbage collection while the critical code of the thread is 
running, and then turn it back on afterwards.

Now, I don't know much about how the garbage collector works in python, 
and in order to get the thread to run without freezing, I'm wrapping the 
threaded processing function with calls to gc.disable()/gc.enable().

So what's that going to do? Will calling gc.enable() put things in good 
shape? Will all objects created while the garbage collector was off now 
be un-collectable? I'm extremely wary of this solution, as I think 
anyone would be. I don't want a suddenly super-leaky app.

Comments? Suggestions? (I know, I know, avoid threads...if only I could)
-Dave



Re: Getting text into the copy-paste buffer...

2006-09-05 Thread David Hirschfield




Ah, indeed it does...my distro didn't have it, but a quick download and
compile and there it is.

Thanks a bunch,
-Dave

Keith Dart wrote:

  
On 9/5/06, David Hirschfield <[EMAIL PROTECTED]> wrote:
  

This is good info...but I'm
looking for the opposite direction: I want
to place some arbitrary command output text into the clipboard, not get
the current selection out of the clipboard.

Any help on that end?
-Dave

  
  
  
Oh, the same command works that direction also. Just use different
command line options. 
  
Your distro may already have it. Just install it and check it out.
  
  
  
  
  
  
  
-- 
Keith Dart
[EMAIL PROTECTED]



Re: Getting text into the copy-paste buffer...

2006-09-05 Thread David Hirschfield




This is good info...but I'm looking for the opposite direction: I want
to place some arbitrary command output text into the clipboard, not get
the current selection out of the clipboard.

Any help on that end?
-Dave

kdart wrote:

  David Hirschfield wrote:
  
  
Strange request, but is there any way to get text into the linux
copy-paste buffer from a python script ?

I know the standard python libraries won't have that functionality
(except as a side-effect, perhaps?), but is there a simple trick that
would do it on linux? A command line to get text into the buffer? Using
a gui toolkit as a proxy to get text in there?

  
  
There's a utility called xclip that you can wrap with popen2 or
something similar. I use my own proctools:

import proctools
XCLIP = proctools.which("xclip")
es, arg = proctools.getstatusoutput("%s -o -selection primary" % (XCLIP,))

"arg" has the X selection.

  



Getting text into the copy-paste buffer...

2006-09-05 Thread David Hirschfield
Strange request, but is there any way to get text into the linux 
copy-paste buffer from a python script ?

I know the standard python libraries won't have that functionality 
(except as a side-effect, perhaps?), but is there a simple trick that 
would do it on linux? A command line to get text into the buffer? Using 
a gui toolkit as a proxy to get text in there?
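Since others may land here with the same question: the usual answer is to pipe the text to an external helper such as `xclip` (or `xsel`). A hedged sketch; it naturally only works on a machine where an X display and the tool are actually present:

```python
import shutil
import subprocess

def copy_to_clipboard(text, selection='clipboard'):
    # `xclip -i` reads stdin into the chosen X selection buffer
    cmd = ['xclip', '-i', '-selection', selection]
    proc = subprocess.Popen(cmd, stdin=subprocess.PIPE)
    proc.communicate(text.encode())
    return proc.returncode

if shutil.which('xclip'):        # guard: xclip may simply not be installed
    copy_to_clipboard('hello from python')
```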

I actually have a need for this, though it sounds bizarre,
thanks in advance,
-Dave



simpleparse parsing problem

2006-09-01 Thread David Hirschfield
Anyone out there use simpleparse? If so, I have a problem that I can't 
seem to solve...I need to be able to parse this line:

"""Cen2 = Cen(OUT, "Cep", "ies", wh, 544, (wh/ht));"""

with this grammar:

grammar = r'''
declaration := ws, line, (ws, line)*, ws
line:= (statement / assignment), ';', ws
assignment  := identifier, ws, '=', ws, statement
statement   := identifier, '(', arglist?, ')', chars?
identifier  := ([a-zA-Z0-9_.:])+
arglist := arg, (',', ws, arg)*
arg := expr/ statement / identifier / num / str /
   curve / spline / union / conditional / definition
definition  := typedef?, ws, identifier, ws, '=', ws, arg
typedef := ([a-zA-Z0-9_])+
expr:= termlist, ( operator, termlist )+
termlist:= ( '(', expr, ')' ) / term
term:= call / identifier / num
call:= identifier, '(', arglist?, ')'
union   := '{{', ws, (arg, ws, ';', ws)*, arg, ws, '}}'
operator:= ( '+' / '-' / '/' / '*' /
 '==' / '>=' / '<=' / '>' / '<' )
conditional := termlist, ws, '?', ws, termlist, ws, ':', ws, termlist
curve   := (list / num), '@', num
spline  := (cv, ',')*, cv
cv  := identifier, '@', num
list:= '[', arg, (',', ws, arg)*, ']'
str := '"', ([;] / chars)*, '"'
num := ( scinot / float / int )
<chars> := ('-' / '/' / '?' / [EMAIL PROTECTED]&\*\+=<> :])+
<int>   := ([-+]?, [0-9]+)
<float> := ([-+]?, [0-9\.]+)
<scinot>:= (float, 'e', int)
<ws>    := [ \t\n]*
'''

But it fails. The problem is with how arglist/arg/expr are defined, 
which makes it unable to handle the parenthesized expression at the end 
of the line:

(wh/ht)

But everything I've tried to correct that problem fails. In the end, it 
needs to be able to parse that line with those parentheses around wh/ht, 
or without them.
Recursive parsing of expressions just seems hard to do in simpleparse, 
and is beyond my parsing knowledge.

Here's the code to get the parser going:

from simpleparse.parser import Parser
p = Parser(grammar, 'line')
import pprint
bad_line = """Cen2 = Cen(OUT, "Cep", "ies", wh, 544, (wh/ht));"""

pprint.pprint(p.parse(bad_line))


Any help greatly appreciated, thanks,
-Dave




Has anyone used py-xmlrpc?

2006-08-22 Thread David Hirschfield
Searching for a python xmlrpc implementation that supports asynchronous 
requests, I stumbled on this project:

http://www.xmlrpc.com/discuss/msgReader$1573

The author is Shilad Sen, and it appears to do what I'm looking for. But 
I'd love some feedback from anyone who might have used it before I go 
and base some server/client code on it.

Anyone out there have experience with this code? Is it as good/stable as 
python's standard xmlrpclib? Better?
Thanks in advance for any notes,
-Dave



Help with async xmlrpc design

2006-08-16 Thread David Hirschfield
I have an xmlrpc client/server system that works fine, but I want to 
improve performance on the client side.

Right now the system operates like this:

client makes request from server (by calling server.request() via xml-rpc)
server places "request" on queue and returns a unique ID to the calling 
client
client repeatedly asks server if the request with the given ID is 
complete via server.isComplete()
server eventually completes processing request
client eventually gets an affirmative from server.isComplete() and then
client pulls results from server via server.fetchResults()

originally the client only made one request at a time, and each one was 
processed and results fetched in order.

I want to change this so that the client can send multiple requests to 
the server and have them processed in the background.
There are a lot of different approaches to that process, and I'd like 
advice on the best way to go about it.

I could simply place the request checking and results fetching into a 
thread or thread-pool so that the client can submit requests to some 
kind of queue and the thread will handle submitting the requests from 
that queue, checking for completion and fetching results.

Unfortunately, that would mean lots of calls to the server asking if 
each of the requests is complete yet (if I submit 100 requests, I'll be 
repeatedly making 100 "are you done yet?" calls). I want to avoid that, 
so I thought there would be some way to submit the request and then hold 
open the connection to the server and wait on the completion, but still 
allow the server to do work.

Any advice on how to avoid polling the server over and over for request 
completion, but also allowing the client to submit multiple requests 
without waiting for each one to finish before submitting the next?
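One common way to avoid per-request polling: give the server a single blocking "wait" call over a set of ids, so the client keeps one outstanding request instead of N repeated "are you done yet?" calls. A server-side sketch (the class and method names here are mine, not from the original system):

```python
import threading

class RequestTracker:
    """Tracks completed request ids; lets a caller block for any of a set."""

    def __init__(self):
        self._done = set()
        self._cond = threading.Condition()

    def mark_done(self, req_id):
        # called by the worker when a request finishes
        with self._cond:
            self._done.add(req_id)
            self._cond.notify_all()

    def wait_for_any(self, ids, timeout=30.0):
        # block until at least one of `ids` is complete, or timeout;
        # returns the sorted list of completed ids (possibly empty)
        with self._cond:
            finished = self._done.intersection(ids)
            if not finished:
                self._cond.wait(timeout)
                finished = self._done.intersection(ids)
            return sorted(finished)
```

The client then submits its batch and issues one `wait_for_any` XML-RPC call at a time, fetching results for whatever ids come back.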

Thanks in advance!



Chunking sequential values in a list

2006-07-13 Thread David Hirschfield
I have this function:

def sequentialChunks(l, stride=1):
    chunks = []
    chunk = []
    for i,v in enumerate(l[:-1]):
        v2 = l[i+1]
        if v2-v == stride:
            if not chunk:
                chunk.append(v)
            chunk.append(v2)
        else:
            if not chunk:
                chunk.append(v)
            chunks.append(chunk)
            chunk = []
    if chunk:
        chunks.append(chunk)
    elif l:
        # without this, a final element that breaks the sequence is dropped
        chunks.append([l[-1]])
    return chunks

Which takes a list of numerical values "l" and splits it into chunks 
where each chunk is sequential, where sequential means each value in a 
chunk is
separated from the next by "stride".

So sequentialChunks([1,2,3,5,6,8,12]) returns:

[[1,2,3],[5,6],[8],[12]]

I don't think the code above is the most efficient way to do this, but 
it is relatively clear. I tried fiddling with list-comprehension ways of 
accomplishing it, but kept losing track of things...so if anyone has a 
suggestion, I'd appreciate it.
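
One compact alternative, sketched here with itertools.groupby (Python 3 spellings; the same idea works in the 2.x of this thread): key each element by its value minus its scaled position, which is constant exactly within a sequential run.

```python
from itertools import groupby

def sequential_chunks(values, stride=1):
    # v - i*stride stays constant while consecutive values step by stride
    chunks = []
    for _, run in groupby(enumerate(values),
                          key=lambda pair: pair[1] - pair[0] * stride):
        chunks.append([v for _, v in run])
    return chunks

print(sequential_chunks([1, 2, 3, 5, 6, 8, 12]))   # [[1, 2, 3], [5, 6], [8], [12]]
```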

Thanks,
-Dave




Help with deprecation-wrapper code

2006-06-21 Thread David Hirschfield





I have a deprecation-wrapper that allows me to do this:

def oldFunc(x,y):
...

def newFunc(x,y):
...

oldFunc = deprecated(oldFunc, newFunc)

It basically wraps the definition of "oldFunc" with a DeprecationWarning 
and some extra messages for code maintainers, and also prompts them to 
look at "newFunc" as the replacement. This way, anyone who calls 
oldFunc() will get the warning and messages automatically.

I'd like to expand this concept to classes and values defined in my 
modules as well.

So, I'd like a deprecatedClass that would somehow take:

class OldClass:
...

class NewClass:
...

OldClass = deprecatedClass(OldClass, NewClass)

and would similarly give the warning and other messages when someone 
tried to instantiate an OldClass object.
And, for general values, I'd like:

OLD_CONSTANT = "old"
NEW_CONSTANT = "new"

OLD_CONSTANT = deprecatedValue(OLD_CONSTANT, NEW_CONSTANT)

so any attempt to use OLD_CONSTANT (or just the first attempt) would 
also output the warning and associated messages.
I think that setting it up for classes would just mean making a wrapper 
class that generates the warning in its __init__ and then wraps the 
instantiation of the old class.

But how can I do something similar for plain values like those constants?
Is there a better way to do this whole thing, in general? Anyone already 
have a similar system set up?

Thanks in advance,
-David


Deprecation wrapper

2006-06-20 Thread David Hirschfield
I have a "deprecation" wrapper that allows me to do this:

def oldFunc(x,y):
...

def newFunc(x,y):
...

oldFunc = deprecated(oldFunc, newFunc)

It basically wraps the definition of "oldFunc" with a DeprecationWarning 
and some extra messages for code maintainers, and also prompts them to 
look at "newFunc" as the replacement. This way, anyone who calls 
oldFunc() will get the warning and messages automatically.

I'd like to expand this concept to classes and values defined in my 
modules as well.

So, I'd like a deprecatedClass that would somehow take:

class OldClass:
...

class NewClass:
...

OldClass = deprecatedClass(OldClass, NewClass)

and would similarly give the warning and other messages when someone 
tried to instantiate an OldClass object.
And, for general values, I'd like:

OLD_CONSTANT = "old"
NEW_CONSTANT = "new"

OLD_CONSTANT = deprecatedValue(OLD_CONSTANT, NEW_CONSTANT)

so any attempt to use OLD_CONSTANT (or just the first attempt) would 
also output the warning and associated messages.
I think that setting it up for classes would just mean making a wrapper 
class that generates the warning in its __init__ and then wraps the 
instantiation of the old class.

But how can I do something similar for plain values like those constants?
Is there a better way to do this whole thing, in general? Anyone already 
have a similar system set up?
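
For the plain-value case, much later Pythons (3.7+, PEP 562) grew a module-level __getattr__, which is exactly the needed hook. A sketch, using a synthetic module so it is self-contained (names illustrative; in real use the __getattr__ sits at the top level of the module's own source):

```python
import sys
import types
import warnings

mod = types.ModuleType("mymodule")
mod.NEW_CONSTANT = "new"

def _module_getattr(name):
    # called only for names not found in the module's namespace
    if name == "OLD_CONSTANT":
        warnings.warn(
            "mymodule.OLD_CONSTANT is deprecated; use NEW_CONSTANT",
            DeprecationWarning, stacklevel=2)
        return "old"
    raise AttributeError(name)

mod.__getattr__ = _module_getattr
sys.modules["mymodule"] = mod

import mymodule

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    value = mymodule.OLD_CONSTANT

print(value, len(caught))   # old 1
```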

Thanks in advance,
-David



Re: Getting external name of passed variable

2006-06-20 Thread David Hirschfield




Cool, thanks.
Stack inspection of sorts it is.
-Dave

faulkner wrote:

  

  
>>> import sys
>>> tellme = lambda x: [k for k, v in sys._getframe(1).f_locals.iteritems() if v == x]
>>> a = 1
>>> tellme(a)
['a']

Michael Spencer wrote:
  
  
David Hirschfield wrote:


  I'm not sure this is possible, but it sure would help me if I could do it.

Can a function learn the name of the variable that the caller used to
pass it a value? For example:

def test(x):
  print x

val = 100
test(val)

Is it possible for function "test()" to find out that the variable it is
passed, "x", was called "val" by the caller?
Some kind of stack inspection?
  

Perhaps, but don't try it ;-)


  Any help greatly appreciated,
-David

  

Can't you use keyword arguments?

 >>> def test(**kw):
... print kw
...
 >>> test(val=3)
{'val': 3}
 >>> test(val=3, otherval = 4)
{'otherval': 4, 'val': 3}
 >>>

Michael

  
  
  



Getting external name of passed variable

2006-06-20 Thread David Hirschfield




I'm
not sure this is possible, but it sure would help me if I could do it. 

Can a function learn the name of the variable that the caller used to
pass it a value? For example:


def test(x):

   print x


val = 100

test(val)


Is it possible for function "test()" to find out that the variable it
is passed, "x", was called "val" by the caller?

Some kind of stack inspection?

Any help greatly appreciated,

-David


Running code on assignment/binding

2006-06-20 Thread David Hirschfield




Another
deep python question...is it possible to have code run whenever a
particular object is assigned to a variable (bound to a variable)?


So, for example, I want the string "assignment made" to print out
whenever my class "Test" is assigned to a variable:


class Test:

   ...


x = Test


would print:


"assignment made"


Note that there's no "()" after x = Test, I'm not actually
instantiating Test, just binding the class to the variable "x"

Make sense?
Possible?
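
Plain name binding (x = Test) has no hook in Python, but binding an attribute on an object you control does, via __setattr__. A sketch of that weaker-but-possible version (class and attribute names are illustrative):

```python
class WatchedNamespace(object):
    """Runs code whenever an attribute is bound on this object."""
    def __init__(self):
        object.__setattr__(self, "log", [])   # bypass our own hook

    def __setattr__(self, name, value):
        self.log.append(name)
        print("assignment made: %s" % name)
        object.__setattr__(self, name, value)

class Test(object):
    pass

ns = WatchedNamespace()
ns.x = Test          # no (): binds the class object itself, and prints
print(ns.x is Test)  # True
```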
-David



TEST IGNORE

2006-06-20 Thread David Hirschfield
Having email trouble...



Advanced lockfiles

2006-06-12 Thread David Hirschfield
I'm not sure it's even possible to do what I'm trying to here...just 
because the logistics may not really allow it, but I thought I'd ask 
around...

I want some kind of lockfile implementation that will allow one process 
to lock a file (or create an appropriately named lockfile that other 
processes will find and understand the meaning of), but there are some 
important requirements:

1. Multiple processes will be attempting to grab a lock on the file, and 
they must not freeze up if they can't get a lock
2. The processes can be on different hosts on a network, attempting to 
grab a lock on a file somewhere in network storage
3. All processes involved will know about the locking system, so no need 
to worry about rogue processes that don't care about whatever setup we have
4. The locking process has to be "crash safe" such that if the process 
that locked a file dies, the lock is released quickly, or other 
processes can find out if the lock is held by a dead process and force a 
release

I've tried a bunch of ideas, looked online, and still don't have a good 
way to make a system that meets all the requirements above, but I'm not 
too well-read on this kind of synchronicity problem.
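
For concreteness, a sketch of the lockfile side only (POSIX; note O_EXCL is known to be unreliable on some older NFS implementations, and the cross-host dead-holder check here is just a timeout, so requirement 4 is only weakly met for remote hosts):

```python
import errno
import os
import socket
import time

LOCK_SUFFIX = ".lock"

def _alive(pid):
    try:
        os.kill(pid, 0)          # signal 0 only checks existence
        return True
    except OSError:
        return False

def acquire_lock(path, stale_after=300.0):
    """Try once to take path's lock; True on success, False if held."""
    lock = path + LOCK_SUFFIX
    payload = "%s:%d" % (socket.gethostname(), os.getpid())
    while True:
        try:
            fd = os.open(lock, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            os.write(fd, payload.encode())
            os.close(fd)
            return True
        except OSError as exc:
            if exc.errno != errno.EEXIST:
                raise
        try:
            age = time.time() - os.stat(lock).st_mtime
            host, _, pid = open(lock).read().partition(":")
        except OSError:
            continue             # lock vanished under us; retry
        dead_local = (host == socket.gethostname()
                      and pid.isdigit() and not _alive(int(pid)))
        if dead_local or age > stale_after:
            try:
                os.unlink(lock)  # break the stale lock, then retry
            except OSError:
                pass
            continue
        return False             # genuinely held by someone else

def release_lock(path):
    try:
        os.unlink(path + LOCK_SUFFIX)
    except OSError:
        pass
```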

Any good ideas?
Thanks in advance,
-David



Re: Raising a specific OSError

2006-04-21 Thread David Hirschfield




I wasn't clear enough in my original post.

I know how to raise a basic OSError or IOError, but what if I want to
raise specifically an "OSError: [Errno 2] No such file or directory"?
Somehow it must be possible to raise the error with the correct
information to bring up the standard message, but where do I find the
right values to give?
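
The right values live in the errno module, and os.strerror() turns them into the standard message. A sketch (the require_file helper and filename are illustrative):

```python
import errno
import os

def require_file(path):
    # raise the same "[Errno 2] No such file or directory" the os module raises
    if not os.path.exists(path):
        raise OSError(errno.ENOENT, os.strerror(errno.ENOENT), path)

try:
    require_file("no-such-file.txt")
except OSError as exc:
    print(exc)   # [Errno 2] No such file or directory: 'no-such-file.txt'
```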

Thanks,
-Dave


alisonken1 wrote:

  To raise a specific error, just find the error that you want to raise,
then give the error a text string to print: ex.

raise IOError("This raises an IO error")

On the stderr output, when the routine hits this line, you will get:

  
  

  
raise IOError("This raises an IOError")

  

  
  Traceback (most recent call last):
  File "", line 1, in ?
IOError: This raises an IOError

  
  
Just be sure of the error that you want to raise, since some of them
will do stuff like closing open file descriptors as well.

  



Raising a specific OSError

2006-04-21 Thread David Hirschfield
I know this should be obvious, but how does one raise a specific type of 
OSError?
When I attempt to perform a file operation on a non-existent file, I get 
an OSError: [Errno 2], but what if I want to raise one of those myself?

Thanks in advance,
-Dave



Starting value with raw_input

2006-04-18 Thread David Hirschfield
Does the raw_input built-in function allow giving an initial value that 
the user can edit?
Perhaps by using the readline module?

I want to do something so that I can provide the user a default value 
they can edit as they wish at the prompt:

result = raw_input("Enter value: ")
# Somehow output default value so prompt looks like:

Enter value: default

they can edit "default" to whatever they want, and I'll get the result.
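
The readline module does make this possible on GNU-readline builds (not with libedit), via a pre-input hook that pre-loads the line buffer. A sketch, shown with the modern input() spelling in place of raw_input():

```python
import readline

def input_with_default(prompt, default):
    # pre-fill the line buffer so the user edits `default` in place
    def hook():
        readline.insert_text(default)
        readline.redisplay()
    readline.set_pre_input_hook(hook)
    try:
        return input(prompt)     # raw_input() in the Pythons of this thread
    finally:
        readline.set_pre_input_hook(None)
```

Calling input_with_default("Enter value: ", "default") shows the prompt with "default" already typed and editable.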

Does what I'm asking make sense?
-Dave



Parsing csh scripts with python

2006-03-28 Thread David Hirschfield
Is there a module out there that would be able to parse a csh script and 
give me back a parse tree?

I need to be able to parse the script, modify some variable settings and 
then write the script back out so that the only changes are the 
variables I've modified (comments, ordering of statements, etc. must 
remain the same).
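
Lacking a real csh parser, a line-oriented rewrite can at least preserve comments and ordering for the simple `setenv NAME value` case. A sketch (it ignores quoting, `set var = value`, and line continuations):

```python
import re

def set_csh_var(text, name, value):
    # rewrite only `setenv NAME value` lines; everything else is untouched
    pattern = re.compile(r"^(\s*setenv\s+%s\s+).*$" % re.escape(name), re.M)
    return pattern.sub(lambda m: m.group(1) + value, text)

script = "#!/bin/csh\n# build config\nsetenv PATH /usr/bin\nsetenv DEBUG 0\n"
print(set_csh_var(script, "DEBUG", "1"))
```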

I looked at shlex, but I don't think it will do what I need.
Any suggestions would be appreciated,
-Dave



Help: Creating condensed expressions

2006-03-24 Thread David Hirschfield
Here's the problem: Given a list of item names like:

apple1
apple2
apple3_SD
formA
formB
formC
kla_MM
kla_MB
kca_MM

which is a subset of a much larger list of items,
is there an efficient algorithm to create condensed forms that match 
those items, and only those items? Such as:

apple[12]
apple3_SD
form[ABC]
kla_M[MB]
kca_MM

The condensed expression syntax only has [...] and * as operators. [...] 
matches a set of individual characters, * matches any string.
I'd be satisfied with a solution that only uses the [...] syntax, since 
I don't think it's possible to use * without potentially matching items 
not explicitly in the given list.

I'm not sure what this condensed expression syntax is called (looks a 
bit like shell name expansion syntax), and I'm not even sure there is an 
efficient way to do what I'm asking. Any ideas would be appreciated.
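
One greedy sketch covers the [...] case for names that differ in a single character position. It makes no optimality claim, but every emitted bracket expands only to names from the input, so nothing outside the list can match:

```python
from collections import defaultdict

def condense(names):
    names = set(names)
    groups = defaultdict(list)
    # bucket names by (prefix, suffix) with one character position removed
    for name in names:
        for i in range(len(name)):
            groups[(name[:i], name[i + 1:])].append(name[i])
    patterns, covered = [], set()
    # biggest brackets first, so each name joins its largest possible group
    for (pre, suf), chars in sorted(groups.items(),
                                    key=lambda kv: (-len(kv[1]), kv[0])):
        members = set(pre + c + suf for c in chars)
        if len(chars) > 1 and not (members & covered):
            patterns.append("%s[%s]%s" % (pre, "".join(sorted(chars)), suf))
            covered |= members
    # names left over keep their literal spelling
    patterns.extend(sorted(names - covered))
    return patterns
```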

Thanks,
-David



Good thread pool module

2006-03-22 Thread David Hirschfield
There isn't a thread pool module in the standard library, but I'm sure 
many have been written by people in the python community.
Anyone have a favorite? Is there one particular implementation that's 
recommended?

Not looking for anything fancy, just something that lets me queue up 
tasks to be performed by a pool of threads and then retrieve results 
when the tasks complete.
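
A minimal version of that queue-tasks/collect-results pattern, using the executor the standard library later grew (concurrent.futures, Python 3.2+):

```python
from concurrent.futures import ThreadPoolExecutor

def work(x):
    return x * x

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(work, n) for n in range(10)]
    results = [f.result() for f in futures]   # blocks until each completes

print(results)   # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```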

Thanks,
-Dave



Re: Question about xmlrpc and threading

2006-02-15 Thread David Hirschfield
I definitely didn't make it clear enough what I was talking about. I 
know all about the thread issues as far as namespaces go and that sort 
of thing.
Let me try and be clearer:

Forget how my xmlrpc server is implemented, it doesn't matter for this 
question. Just imagine it works and will process requests and return 
results.
I have one connection to the server from my client, created via:

import xmlrpclib
server = xmlrpclib.ServerProxy(...)

So now my client calls:

result = server.doSomething()

and that will take some time to process on the server, so that part of 
the client code blocks waiting for the result.
Now, some other thread in the client calls:

result = server.doSomethingElse()

while doSomething() is still processing on the server and the client is 
still waiting for a result.
My question was whether this is allowed? Can two calls be made via the 
same ServerProxy instance while a request is already underway?
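
One conservative pattern that sidesteps the question entirely is a proxy per thread. Sketched with the modern xmlrpc.client spelling (xmlrpclib in the Pythons of this thread); construction is cheap and lazy, so no connection opens until the first call:

```python
import threading
from xmlrpc.client import ServerProxy

_local = threading.local()

def get_proxy(uri="http://localhost:8080"):
    # each thread lazily gets its own ServerProxy, so no two threads
    # ever share the underlying HTTP transport
    if not hasattr(_local, "proxy"):
        _local.proxy = ServerProxy(uri)
    return _local.proxy
```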

Clearer?
-Dave

Martin P. Hellwig wrote:

>David Hirschfield wrote:
>  
>
>>An xmlrpc client/server app I'm writing used to be super-simple, but now 
>>threading has gotten into the mix.
>>
>>On the server side, threads are used to process requests from a queue as 
>>they come in.
>>On the client side, threads are used to wait on the results of requests 
>>to the server.
>>
>>So the question is: how thread-safe is python xmlrpc? If the client 
>>makes a request of the server by calling:
>>
>>result = server.doSomething()
>>
>>and while that is waiting in the background thread to complete, the 
>>client calls another:
>>
>>result = server.doSomethingElse()
>>
>>will they interfere with each other? Right now I'm avoiding this problem 
>>by queueing up calls to the server to be processed sequentially in the 
>>background. But I'd prefer to allow requests to go in parallel. Should I 
>>just make a new connection to the server for each request?
>>
>>Any advice appreciated,
>>-David
>>
>>
>>
>
>I'm not sure if I quite understand what you mean by "interfere with each 
>other" but namespaces also apply to xmlrpc servers.
>But let's give a coding example say I have this async server created:
>
>===
>from SocketServer import ThreadingMixIn
>from SimpleXMLRPCServer import SimpleXMLRPCServer
>from time import sleep
>
># Overriding with ThreadingMixIn to create a async server
>class txrServer(ThreadingMixIn,SimpleXMLRPCServer): pass
>
># the Test class
>class Test(object):
> def __init__(self):
> self.returnValue = "Something 1"
>
> def doSomething(self):
> sleep(5)
> return self.returnValue
>
> def doSomethingElse(self,value):
> self.returnValue=value
> return self.returnValue
>
>
># setup server and bind to the specified port
>server = txrServer(('localhost', 8080))
>
># register test class
>server.register_instance(Test())
>
># start the serving
>server.serve_forever()
>===
>
>And I call the function
> >>> doSomething()
>and while it's waiting I call
> >>> doSomethingElse("Else What!"),
>doSomething() will return me:"Else What!" instead of "Something 1" 
>because it's a shared namespace of "self".
>
>
>Now if I modify my example to this:
>
>===
>from SocketServer import ThreadingMixIn
>from SimpleXMLRPCServer import SimpleXMLRPCServer
>from time import sleep
>
># Overriding with ThreadingMixIn to create a async server
>class txrServer(ThreadingMixIn,SimpleXMLRPCServer): pass
>
># the Test class
>class Test(object):
> def __init__(self):
> pass
>
> def doSomething(self):
> returnValue = "Something 1"
> sleep(5)
> return returnValue
>
> def doSomethingElse(self,value):
> returnValue=value
> return returnValue
>
>
># setup server and bind to the specified port
>server = txrServer(('localhost', 8080))
>
># register test class
>server.register_instance(Test())
>
># start the serving
>server.serve_forever()
>===
>
> >>> doSomethingElse("Now what?")
>Will have no effect on returnValue of doSomething() because they are not 
>shared.
>
>But say that I add the sleep part to doSomethingElse() and call
> >>> doSomethingElse("First")
>and immediately after that on a other window
> >>> doSomethingElse("other")
>What do you think will happen? Will the 2nd call overwrite the firsts 
>calls variable?
>
>I'm not going to spoil it any further ;-), please try the snippets out 
>for yourself (I bet you'll be pleasantly surprised).
>
>hth
>
>  
>



Question about xmlrpc and threading

2006-02-15 Thread David Hirschfield
An xmlrpc client/server app I'm writing used to be super-simple, but now 
threading has gotten into the mix.

On the server side, threads are used to process requests from a queue as 
they come in.
On the client side, threads are used to wait on the results of requests 
to the server.

So the question is: how thread-safe is python xmlrpc? If the client 
makes a request of the server by calling:

result = server.doSomething()

and while that is waiting in the background thread to complete, the 
client calls another:

result = server.doSomethingElse()

will they interfere with each other? Right now I'm avoiding this problem 
by queueing up calls to the server to be processed sequentially in the 
background. But I'd prefer to allow requests to go in parallel. Should I 
just make a new connection to the server for each request?

Any advice appreciated,
-David



Best way to determine if a certain PID is still running

2006-02-03 Thread David Hirschfield
I'm launching a process via an os.spawnvp(os.P_NOWAIT,...) call.
So now I have the pid of the process, and I want a way to see if that 
process is complete.

I don't want to block on os.waitpid(), I just want a quick way to see if 
the process I started is finished. I could popen("ps -p %d" % pid) and 
see whether it's there anymore...but since pids get reused, there's the 
chance (however remote) that I'd get a false positive, plus I don't 
really like the idea of calling something non-pure-python to find out.

So, should I run a monitor thread which just calls os.waitpid() and when 
the thread indicates via an event that the process completed, I'm golden?
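
A non-blocking check is also possible without a monitor thread: os.waitpid with os.WNOHANG returns (0, 0) while the child is still running (POSIX only; and once it reports the pid, the child is reaped, so don't poll again after that). A sketch, with a `sleep` child standing in for the real spawnvp call:

```python
import os
import time

# stand-in child for the os.spawnvp(os.P_NOWAIT, ...) call above
pid = os.spawnlp(os.P_NOWAIT, "sleep", "sleep", "0.2")

def is_finished(pid):
    # non-blocking reap: waitpid returns (0, 0) while the child runs
    done_pid, status = os.waitpid(pid, os.WNOHANG)
    return done_pid == pid

while not is_finished(pid):
    time.sleep(0.05)
print("child finished")
```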

All suggestions welcome, looking for simple and clean over wickedly-clever,
-David



Re: Efficient Find and Replace

2006-01-27 Thread David Hirschfield
You aren't getting too many helpful responses. Hope this one helps:

The closest python equivalent to:

p = head(L)
while (p) {
  if (p->data == X) p->data = Y;
}

would be:

for i, v in enumerate(L):
    if v == X:
        L[i] = Y

modifies the list in place.

There's nothing wrong with just doing your solution A, the amount of 
time wasted by creating the new list isn't very significant.
-Dave

Murali wrote:

>Given: L = list of integers. X and Y are integers.
>Problem: find every occurence of X and replace with Y
>
>Solution1:
>def check(s):
> if s==X:
>return Y
> else return s
>
>newL = [ check(s) for s in L]
>
>Now I dont want to create another list but just modify it in place.
>
>SolutionA:
>
>for x in range(len(L)):
>if L[x] == X:
>   L[x:x] = Y
>
>SolutionB:
>
>p = L.index(X)
>while p >= 0:
>   L[p:p] = Y
>   p = L.index(X)
>
>Problem with both solutions is the efficiency. Both methods require
>time O(N^2) in the worst case, where N is the length of the list.
>Because L.index() and L[x:x] both take O(N) time in the worst case. But
>clearly one should be able to do it in time O(N). Atleast there is a C
>solution which does it in O(N) time.
>
>p = head(L)
>while (p) {
>  if (p->data == X) p->data = Y;
>}
>
>Is there a python equivalent of this? using iterators or something
>which will allow me efficient serial access to the list elements.
>
>- Murali
>
>  
>



Help with fast tree-like structure

2006-01-25 Thread David Hirschfield
I've written a tree-like data structure that stores arbitrary python 
objects. The objective was for the tree structure to allow any number of 
children per node, and any number of root nodes...and for it to be 
speedy for trees with thousands of nodes.

At its core, the structure is just a list of lists arranged so that if 
node A has children B and C and node B has child D the data looks like:

A = [<data>, B, C]
B = [<data>, D]
C = [<data>]

where B, C and D are all lists with similar structures to A. I am 
holding references to the individual nodes so that access to individual 
nodes by reference is quick. Access by "tree path" is done by giving a 
tuple of integers indicating where in the tree the node you want lies. 
The path (1,2,5) indicates the 6th child of the 3rd child of the 2nd 
root node. Everything involving basic tree functions works fine. Finding 
any particular node this way is just a function of the depth of the node 
in the tree, so it's very quick unless you have some degenerate tree 
structure where nodes end up miles deep.
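
For concreteness, path lookup over that list-of-lists shape (node[0] holds the data, node[1:] the children) is just a sketch like:

```python
def node_at(roots, path):
    # wrap the forest in a fake super-root so roots index like children
    node = [None] + roots
    for index in path:
        node = node[index + 1]    # +1 skips the data slot
    return node

A = ["A-data", ["B-data", ["D-data"]], ["C-data"]]
roots = [A]
print(node_at(roots, (0, 0, 0))[0])   # D-data
```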

Here's my problem: I need to allow the tree to "hide" the roots, so that 
the top-level appears to the outside world to be the children under the 
root nodes, not the root nodes themselves. That means a path of (5,2) 
indicates the 3rd child of the 6th "pseudo-root" node. Unfortunately, in 
a tree with many root nodes, each containing many children (a common use 
case for me) finding the 6th pseudo-root is going to be slow, and the 
ways I've thought of to make it fast all require slow bookkeeping when 
new nodes are inserted or removed at the pseudo-root level.

I'm running out of ideas, so if anyone has any thoughts on how to deal 
with fudging which nodes are the roots efficiently...I'm all ears.
Thanks in advance,
-David



Re: Preventing class methods from being defined

2006-01-17 Thread David Hirschfield




I realize now that it would probably be best to define the problem I'm
having according to the constraints imposed by other requirements, and
not by describing the basic functionality. That way, since there are so
many possible ways to get something like what I want, there will
probably be only one (or none) solutions to the problem that matches
the criteria completely.

So, to restate the original problem: I want to be able to tag the
methods of a class so that they only can be called if some external
global says it's okay. Think of it as a security device for what can or
cannot be called from a particular class instance. So if "restrict" is
True, attempts to call methods on class A: foo() and bar(), will be
disallowed, and raise an exception.

Now, here are the constraints:

Whatever implements this security should be something that class A is a
subclass of, or is in some other way external to the class definition
itself, since multiple classes will want to use this ability.

It can't be a metaclass, because for other reasons, class A already has
a special metaclass, and you can't have two metaclasses on one class
without having one metaclass inherit from the other.

You should not be able to "foil" the security by overriding the secured
methods in a subclass of A. So if A.foo() is "secured" and you make
class B a subclass of A and define b.foo(), b.foo() should also be
secured.

The exception that gets raised when calling a restricted method when it
isn't allowed should be able to indicate what method was called on what
class.

I'd prefer not to override __getattribute__ to look at every attempt to
access any attribute of the class instance to see if it's restricted.
It'd also be nice if the ability to restrict some method was done via a
decorator-like syntax:

class X:
    def foo():
    ...
    restrict(foo)

So there's the problem and constraints. I've painted myself into a
corner, and I realize that, but I have faith that someone out there who
is far better at python than me, will know some way to do this, or
something close to it.

Thanks again,
-David


Bengt Richter wrote:

  On Mon, 16 Jan 2006 18:55:43 -0800, David Hirschfield <[EMAIL PROTECTED]> wrote:

  
  
Thanks for this, it's a great list of the ways it can be done. Here's a 

  
  Actually, your way is yet another ;-)

  
  
bit more insight into the arrangement I'm trying to get:

restrict = True

  
  Why a global value? If it is to affect class instantiation, why not pass it
or a value to the constructor, e.g., C(True) or C(some_bool)?

  
  
class A(object):

  
 ^--should that be R?
  
  
   _restrict = ["test"]
  
   def _null(self, *args, **kws):
   raise Exception,"not allowed to access"
  
   def test(self):
   print "test restricted"
  
   def __init__(self):
   if restrict:
   for f in self._restrict:
   setattr(self,f,self._null)

  
  I assume you know that you are using a bound method attribute
on the instance to shadow the method of the class, for a per-instance
effect as opposed to an all-instances shared effect.

  
  
  
class C(R):
   def __init__(self):
   super(C,self).__init__()
  
   def test(self):
   print "test from c"


In this design, calling c.test() where c is an instance of C will raise 
an exception. Now, the only thing I'd like is to not have to fill out 
that _restrict list like that, but to have some function or something 
that let's me say which methods are restricted in the same way you 
define class methods or properties, i.e.:

class A(object):
   _restrict = []
  
   def _null(self, *args, **kws):
   raise Exception,"not allowed to access"
  
   def test(self):
   print "test restricted"
   restrict(test)
    # this does some magic to insert "test" into the _restrict list


I can't really find a way to make that work with descriptors, and it 
can't just be a function call, because I won't know what object to get 
the _restrict list from. Is there a way to refer to the class that "is 
being defined" when calling a function or classmethod?
So, ideas on how to accomplish that...again, greatly appreciated.

  
  
You can do it with a decorator, though it doesn't really do decoration,
just adding the decoratee to the associated _restrict list. You don't
have to factor out mkrdeco if you're only defining the restrict decorator
in one class.

I changed A to R, and made the global restriction flag a constructor argument,
but you can easily change that back, either by using the global restricted
in R.__init__ as a global, or by passing it explicitly like c = C(restricted).

 >>> def mkrdeco(rlist):
 ... def restrict(f):
 ... rlist.append

Re: Preventing class methods from being defined

2006-01-17 Thread David Hirschfield

>>bit more insight into the arrangement I'm trying to get:
>>
>>restrict = True
>>
>>
>Why a global value? If it is to affect class instantiation, why not pass it
>or a value to the constructor, e.g., C(True) or C(some_bool)?
>
>  
>
For reasons unrelated to this problem, the class that does this magic 
can't take any parameters to its "__init__" method.

>>class A(object):
>>
>>
>   ^--should that be R?
>  
>
Yes, it should. Damn you Copy and Paste!

>>   _restrict = ["test"]
>>  
>>   def _null(self, *args, **kws):
>>   raise Exception,"not allowed to access"
>>  
>>   def test(self):
>>   print "test restricted"
>>  
>>   def __init__(self):
>>   if restrict:
>>   for f in self._restrict:
>>   setattr(self,f,self._null)
>>
>>
>I assume you know that you are using a bound method attribute
>on the instance to shadow the method of the class, for a per-instance
>effect as opposed to an all-instances shared effect.
>
>  
>
Yes, that's true...it shouldn't really matter for my usage. What would I 
do to make this an all-instances-shared thing?

>>  
>>class C(R):
>>   def __init__(self):
>>   super(C,self).__init__()
>>  
>>   def test(self):
>>   print "test from c"
>>
>>
>>In this design, calling c.test() where c is an instance of C will raise 
>>an exception. Now, the only thing I'd like is to not have to fill out 
>>that _restrict list like that, but to have some function or something 
>>that let's me say which methods are restricted in the same way you 
>>define class methods or properties, i.e.:
>>
>>class A(object):
>>   _restrict = []
>>  
>>   def _null(self, *args, **kws):
>>   raise Exception,"not allowed to access"
>>  
>>   def test(self):
>>   print "test restricted"
>>   restrict(test)
>>    this does some magic to insert "test" into the _restrict list
>>
>>
>>I can't really find a way to make that work with descriptors, and it 
>>can't just be a function call, because I won't know what object to get 
>>the _restrict list from. Is there a way to refer to the class that "is 
>>being defined" when calling a function or classmethod?
>>So, ideas on how to accomplish that...again, greatly appreciated.
>>
>>
>
>You can do it with a decorator, though it doesn't really do decoration,
>just adding the decoratee to the associated _restrict list. You don't
>have to factor out mkrdeco if you're only defining the restrict decorator
>in one class.
>
>I changed A to R, and made the global restriction flag a constructor argument,
>but you can easily change that back, either by using the global restricted
>in R.__init__ as a global, or by passing it explicitly like c = C(restricted).
>
> >>> def mkrdeco(rlist):
> ... def restrict(f):
> ... rlist.append(f.func_name)
> ... return f
> ... return restrict
> ...
> >>> class R(object):
> ... _restrict = []
> ... restrict = mkrdeco(_restrict)
> ... def _null(self, *args, **kws):
> ... raise Exception,"not allowed to access"
> ... def __init__(self, restricted):
> ... if restricted:
> ... for f in self._restrict:
> ... setattr(self,f,self._null)
> ... @restrict
> ... def test(self):
> ... print "test restricted"
> ...
> >>> class C(R):
> ...def __init__(self, restricted=False):
> ...super(C,self).__init__(restricted)
> ...
> ...def test(self):
> ...print "test from c"
> ...
> >>> c = C(True)
> >>> c.test()
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
>   File "", line 5, in _null
> Exception: not allowed to access
> >>> c2 = C(False)
> >>> c2.test()
> test from c
> >>> vars(c)
> {'test': <bound method C._null of <__main__.C object at 0x...>>}
> >>> vars(c2)
> {}
> >>> R._restrict
> ['test']
> 
>Still don't know what real application problem this is solving, but that's ok 
>;-)
>
>Regards,
>Bengt Richter
>  
>
I'm using python 2.3.3 here, so no go on the nice decorator, but I can 
definitely use the concept to make it work, thanks.
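
On 2.3, the `name = decorator(name)` spelling inside the class body does the same job as @restrict. A sketch of the whole arrangement, written with Python 3 spellings so it runs today (on 2.3, use f.func_name and print statements):

```python
def mk_restrict_decorator(rlist):
    def restrict(f):
        rlist.append(f.__name__)   # f.func_name in the 2.3-era Python above
        return f
    return restrict

class R(object):
    _restrict = []
    restrict = mk_restrict_decorator(_restrict)

    def _null(self, *args, **kwargs):
        raise Exception("not allowed to access")

    def __init__(self, restricted):
        if restricted:
            for name in self._restrict:
                setattr(self, name, self._null)

    def test(self):
        print("test restricted")
    test = restrict(test)          # the pre-2.4 spelling of @restrict

class C(R):
    def __init__(self, restricted=False):
        super(C, self).__init__(restricted)

    def test(self):
        print("test from c")

C(False).test()                    # test from c
```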
-David



Re: Preventing class methods from being defined

2006-01-16 Thread David Hirschfield
Thanks for this, it's a great list of the ways it can be done. Here's a 
bit more insight into the arrangement I'm trying to get:

restrict = True

class A(object):
_restrict = ["test"]
   
def _null(self, *args, **kws):
raise Exception,"not allowed to access"
   
def test(self):
print "test restricted"
   
def __init__(self):
if restrict:
for f in self._restrict:
setattr(self,f,self._null)
   
class C(R):
def __init__(self):
super(C,self).__init__()
   
def test(self):
print "test from c"


In this design, calling c.test() where c is an instance of C will raise 
an exception. Now, the only thing I'd like is to not have to fill out 
that _restrict list like that, but to have some function or something 
that let's me say which methods are restricted in the same way you 
define class methods or properties, i.e.:

class A(object):
    _restrict = []

    def _null(self, *args, **kws):
        raise Exception,"not allowed to access"

    def test(self):
        print "test restricted"
    restrict(test)
    # this does some magic to insert "test" into the _restrict list


I can't really find a way to make that work with descriptors, and it 
can't just be a function call, because I won't know what object to get 
the _restrict list from. Is there a way to refer to the class that "is 
being defined" when calling a function or classmethod?
So, ideas on how to accomplish that...again, greatly appreciated.
-Dave
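For what it's worth, the "magic" asked about above can be sidestepped entirely: instead of a class-level _restrict list, a marker decorator can tag the function object itself, and __init__ can scan the MRO for tagged methods. The following is only a sketch of that idea, not code from the thread (the names restrict and _restricted are made up):

```python
def restrict(func):
    # mark the function itself; no reference to the enclosing class needed
    func._restricted = True
    return func

class A(object):
    def _null(self, *args, **kws):
        raise Exception("not allowed to access")

    def __init__(self, restricted=False):
        if restricted:
            # walk the MRO so marks on base classes still apply to subclasses
            for klass in type(self).__mro__:
                for name, attr in list(vars(klass).items()):
                    if getattr(attr, "_restricted", False):
                        setattr(self, name, self._null)

    @restrict
    def test(self):
        return "test restricted"

class C(A):
    def test(self):
        return "test from c"
```

With this sketch, C(True).test() raises while C(False).test() returns normally; because the mark lives on A.test, the subclass override is blocked too, matching the inherited-_restrict behaviour of the example above.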

Bengt Richter wrote:

>On Sun, 15 Jan 2006 19:23:30 -0800, David Hirschfield <[EMAIL PROTECTED]> 
>wrote:
>
>  
>
>>I should have explicitly mentioned that I didn't want this particular 
>>solution, for a number of silly reasons.
>>Is there another way to make this work, without needing to place an 
>>explicit "if allowed" around each method definition?
>>
>>
>>
>Seems like you can
>a. define the class with all methods defined within, and use a metaclass to 
>prune
>   out the ones you don't want, which Alex provided.
>b. define the class with conditional execution of the method definitions, 
>which you just rejected.
>c. define the class with no iffy methods at all, and add them afterwards
>   c1. in a metaclass that adds them and possibly also defines them for that 
> purpose
>   c2. by plain statements adding method functions as class attributes
>d. define all the methods normally, but monitor attribute access on the class 
>and raise
>   attribute error for the methods that aren't supposed to be there.
>e. raise an exception conditionally _within_ methods that aren't supposed to 
>be there, if called.
>
>What would you like?
>
>BTW, defining the method functions someplace other than within the body of the 
>class whose methods
>they are to become has some subtleties, since the functions can potentially 
>refer to different
>global scopes than that of the class (e.g. if you take the functions from an 
>imported module)
>and/or use closure-defined cell variables (e.g. if the method function is 
>defined within a factory function).
>This can be used to advantage sometimes, but needs good documentation to be 
>clear for the next code maintainer ;-)
>
>I guess I should re-read your original requirements that led to these design 
>ideas.
>
>Regards,
>Bengt Richter
>  
>

-- 
Presenting:
mediocre nebula.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Preventing class methods from being defined

2006-01-15 Thread David Hirschfield
I should have explicitly mentioned that I didn't want this particular 
solution, for a number of silly reasons.
Is there another way to make this work, without needing to place an 
explicit "if allowed" around each method definition?

Thanks again,
-David

Dan Sommers wrote:

>On Sun, 15 Jan 2006 18:41:02 -0800,
>David Hirschfield <[EMAIL PROTECTED]> wrote:
>
>  
>
>>I want a class that, when instantiated, only defines certain methods
>>if a global indicates it is okay to have those methods. So I want
>>something like:
>>
>>
>
>  
>
>>global allow
>>allow = ["foo","bar"]
>>
>>
>
>  
>
>>class A:
>>    def foo():
>>        ...
>>
>>
>
>  
>
>>    def bar():
>>        ...
>>
>>
>
>  
>
>>    def baz():
>>        ...
>>
>>
>
>  
>
>>any instance of A will only have a.foo() and a.bar() but no a.baz()
>>because it wasn't in the allow list.
>>
>>
>
>  
>
>>I hope that makes sense.
>>
>>
>
>I think so, at least in the "I can implement that idea" sense, although
>not the "why would you need such a strange animal" sense.  Since "class"
>is an executable statement in Python, this ought to do it:
>
>allow = ["foo", "bar"]
>
>class A:
>
>  if "foo" in allow:
>def foo( ):
>  ...
>
>  if "bar" in allow:
>def bar( ):
>  ...
>
>  
>
>>Don't ask why I would need such a strange animal ...
>>
>>
>
>Consider yourself not asked.
>
>HTH,
>Dan
>
>  
>

-- 
Presenting:
mediocre nebula.

-- 
http://mail.python.org/mailman/listinfo/python-list


Can one class have multiple metaclasses?

2006-01-15 Thread David Hirschfield
Is it possible for one class definition to have multiple metaclasses?

I don't think it is, but I'm just checking to make sure.
From what I know, you can only define "__metaclass__" to set the 
metaclass for a class, and that's it, is that correct?

-David
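Right: a class statement takes exactly one metaclass. What can be done, though, is combining several by inheritance, since metaclasses are themselves classes. A minimal sketch (MetaA/MetaB are made-up names):

```python
class MetaA(type):
    pass

class MetaB(type):
    pass

# one metaclass that is both, via ordinary (multiple) inheritance
class MetaAB(MetaA, MetaB):
    pass

# calling the metaclass directly works in both Python 2 and 3,
# sidestepping the __metaclass__ vs. metaclass= syntax difference
C = MetaAB("C", (object,), {})
```

C is then an instance of both MetaA and MetaB, which is usually what "multiple metaclasses" is really after.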

-- 
Presenting:
mediocre nebula.

-- 
http://mail.python.org/mailman/listinfo/python-list


Preventing class methods from being defined

2006-01-15 Thread David Hirschfield
Here's a strange concept that I don't really know how to implement, but 
I suspect can be implemented via descriptors or metaclasses somehow:

I want a class that, when instantiated, only defines certain methods if 
a global indicates it is okay to have those methods. So I want something 
like:

global allow
allow = ["foo","bar"]

class A:
    def foo():
        ...

    def bar():
        ...

    def baz():
        ...

any instance of A will only have a.foo() and a.bar() but no a.baz() 
because it wasn't in the allow list.

I hope that makes sense.
Don't ask why I would need such a strange animal, I just do. I'm just 
not sure how to approach it.

Should class A have some special metaclass that prevents those methods 
from existing? Should I override __new__ or something? Should those 
methods be wrapped with special property subclasses that prevent access 
if they're not in the list? What's a low-overhead approach that will 
work simply?
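One low-overhead sketch (not from the thread; allow, PruneMeta and the method bodies are all hypothetical) is a metaclass that prunes disallowed names from the class namespace before the class object is even built:

```python
allow = ["foo", "bar"]

class PruneMeta(type):
    def __new__(mcs, name, bases, namespace):
        # drop any non-dunder callable that is not on the allow list
        for attr in list(namespace):
            if callable(namespace[attr]) and not attr.startswith("__") \
                    and attr not in allow:
                del namespace[attr]
        return super(PruneMeta, mcs).__new__(mcs, name, bases, namespace)

# calling the metaclass directly avoids the py2/py3 syntax difference
A = PruneMeta("A", (object,), {
    "foo": lambda self: "foo",
    "bar": lambda self: "bar",
    "baz": lambda self: "baz",
})
```

An instance a = A() then has working a.foo() and a.bar(), but a.baz raises AttributeError because baz never made it into the class.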

Thanks in advance,
-David

-- 
Presenting:
mediocre nebula.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: instance attributes not inherited?

2006-01-15 Thread David Hirschfield
Nothing's wrong with python's oop inheritance, you just need to know 
that the parent class' __init__ is not automatically called from a 
subclass' __init__. Just change your code to do that step, and you'll be 
fine:

class Parent( object ):
    def __init__( self ):
        self.x = 9


class Child( Parent ):
    def __init__( self ):
        super(Child,self).__init__()
        print "Inside Child.__init__()"

-David

John M. Gabriele wrote:

>The following short program fails:
>
>
>--- code 
>#!/usr/bin/python
>
>class Parent( object ):
>    def __init__( self ):
>        self.x = 9
>        print "Inside Parent.__init__()"
>
>
>class Child( Parent ):
>    def __init__( self ):
>        print "Inside Child.__init__()"
>
>
>p1 = Parent()
>p2 = Parent()
>c1 = Child()
>foo = [p1,p2,c1]
>
>for i in foo:
>    print "x =", i.x
>- /code --
>
>
>
>yielding the following output:
>
> output --
>Inside Parent.__init__()
>Inside Parent.__init__()
>Inside Child.__init__()
>x = 9
>x = 9
>x =
>Traceback (most recent call last):
>   File "./foo.py", line 21, in ?
> print "x =", i.x
>AttributeError: 'Child' object has no attribute 'x'
> /output -
>
>
>Why isn't the instance attribute x getting inherited?
>
>My experience with OOP has been with C++ and (more
>recently) Java. If I create an instance of a Child object,
>I expect it to *be* a Parent object (just as, if I subclass
>a Car class to create a VW class, I expect all VW's to *be*
>Cars).
>
>That is to say, if there's something a Parent can do, shouldn't
>the Child be able to do it too? Consider a similar program:
>
>--- code 
>#!/usr/bin/python
>
>
>class Parent( object ):
>    def __init__( self ):
>        self.x = 9
>        print "Inside Parent.__init__()"
>
>    def wash_dishes( self ):
>        print "Just washed", self.x, "dishes."
>
>
>class Child( Parent ):
>    def __init__( self ):
>        print "Inside Child.__init__()"
>
>
>p1 = Parent()
>p2 = Parent()
>c1 = Child()
>foo = [p1,p2,c1]
>
>for i in foo:
>    i.wash_dishes()
>--- /code ---
>
>But that fails with:
>
>--- output --
>Inside Parent.__init__()
>Inside Parent.__init__()
>Inside Child.__init__()
>Just washed 9 dishes.
>Just washed 9 dishes.
>Just washed
>Traceback (most recent call last):
>   File "./foo.py", line 24, in ?
> i.wash_dishes()
>   File "./foo.py", line 10, in wash_dishes
> print "Just washed", self.x, "dishes."
>AttributeError: 'Child' object has no attribute 'x'
>--- /output -
>
>Why isn't this inherited method call working right?
>Is this a problem with Python's notion of how OO works?
>
>Thanks,
>---J
>
>  
>

-- 
Presenting:
mediocre nebula.

-- 
http://mail.python.org/mailman/listinfo/python-list


Newbie to XML-RPC: looking for advice

2006-01-13 Thread David Hirschfield
I've written a server-client system using XML-RPC. The server is using 
the twisted.web.xmlrpc.XMLRPC class to handle connections and run 
requests. Clients are just using xmlrpclib.ServerProxy to run remote 
method calls.

I have a few questions about the performance of xmlrpc in general, and 
specifically about increasing the speed with which remote methods return 
their results to the client.

Right now the process of calling a remote method works as follows:

client:
generate some python objects
serialize those objects by cPickling them with cPickle.HIGHEST_PROTOCOL, 
then wrap the pickles with xmlrpclib.Binary() so the data can be sent safely
call the remote method via a ServerProxy object using the Binary object 
as the argument

server:
invoke the method and extract the pickled and Binary()'d arguments back 
into the actual objects
do some work
take result objects and cPickle them and wrap them in a Binary object as 
before
return the result to the client

client:
receive result and unpickle it into real data

All the above works fine...but I'm finding the following: while the 
actual creation and pickling of the objects only takes a millisecond or 
so, the actual time before the client call completes is a third of a 
second or more.

So where's the slowdown? It doesn't appear to be in the 
pickling/unpickling or object creation, so it has to be in xmlrpc 
itself...but what can I do to improve that? It looks like xmlrpclib uses 
xml.parsers.expat if it's available, but are there faster xml libs? 
Looking at the xmlrpclib code itself, it seems to want to find either: 
_xmlrpclib from the code in xmlrpclib.py:

try:
    # optional xmlrpclib accelerator.  for more information on this
    # component, contact [EMAIL PROTECTED]
    import _xmlrpclib
    FastParser = _xmlrpclib.Parser
    FastUnmarshaller = _xmlrpclib.Unmarshaller
except (AttributeError, ImportError):
    FastParser = FastUnmarshaller = None

or it tries to find sgmlop:

#
# the SGMLOP parser is about 15x faster than Python's builtin
# XML parser.  SGMLOP sources can be downloaded from:
#
# http://www.pythonware.com/products/xml/sgmlop.htm
#

Does anyone know what the performance gain from using either of those 
above libraries would be?
On the other hand, maybe the slowdown is in twisted.web.xmlrpc? What 
does that module use to do its work? Is it using xmlrpclib underneath?
Other xmlrpc libraries that are significantly faster that I should be 
using instead?

Any help in improving my xmlrpc performance would be greatly appreciated,
-Dave
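For reference, the serialization half of the round trip described above can be exercised (and timed) without any network at all. This is only a sketch, written in Python 3 terms where xmlrpclib became xmlrpc.client:

```python
import pickle
import time
from xmlrpc.client import Binary  # Python 2: from xmlrpclib import Binary

def pack(obj):
    # pickle, then wrap so arbitrary bytes survive XML transport
    return Binary(pickle.dumps(obj, pickle.HIGHEST_PROTOCOL))

def unpack(blob):
    return pickle.loads(blob.data)

payload = {"name": "demo", "points": list(range(1000))}
start = time.time()
result = unpack(pack(payload))
elapsed = time.time() - start  # serialization cost only; the missing
                               # third of a second lives in HTTP/XML handling
assert result == payload
```

If this loop stays around a millisecond while the full remote call takes ~300 ms, the overhead is in transport and XML parsing, not in pickling.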

-- 
Presenting:
mediocre nebula.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Help with super()

2006-01-13 Thread David Hirschfield
I tried that and super(B,self), but neither works.

Using super(A,self) in the _getv(self) method doesn't work, since the 
super() of A is "object" and that doesn't have the v property at all.
Not sure why you say that using self.__class__ is wrong as the first 
argument to super(), it should be the same as using the class name 
itself - both will result in  or whetever self is an 
instance of.

I still don't see a way to accomplish my original goal, but any other 
suggestions you might have would be appreciated.
Thanks,
-David

Mike Meyer wrote:

>David Hirschfield <[EMAIL PROTECTED]> writes:
>  
>
>>I'm having trouble with the new descriptor-based mechanisms like
>>super() and property() stemming, most likely, from my lack of
>>knowledge about how they work.
>>
>>Here's an example that's giving me trouble, I know it won't work, but
>>it illustrates what I want to do:
>>
>>class A(object):
>>    _v = [1,2,3]
>>
>>    def _getv(self):
>>        if self.__class__ == A:
>>            return self._v
>>        return super(self.__class__,self).v + self._v
>>
>>    v = property(_getv)
>>
>>class B(A):
>>    _v = [4,5,6]
>>
>>b = B()
>>print b.v
>>
>>What I want is for b.v to give me back [1,2,3,4,5,6], but this example
>>gets into a recursive infinite loop, since super(B,self).v is still
>>B._getv(), not A._getv().
>>
>>Is there a way to get what I'm after using super()?
>>
>>
>
>Yes. Call super with A as the first argument, not self.__class__.
>
>That's twice in the last little bit I've seen someone incorrectly use
>self.__class__ instead of using the class name. Is there bogus
>documentation somewhere that's recommending this?
>
>
>

-- 
Presenting:
mediocre nebula.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Help with super()

2006-01-13 Thread David Hirschfield
Yes, indeed that does work.
I tried using __mro__ based on some code that showed how super() works 
in pure-python, but I got lost.
I think this makes that clear...tho' I hate squishing around in instance 
innards like this...

Thanks a bunch, I'm sure I'll have more questions,
-Dave

Paul McNett wrote:

> David Hirschfield wrote:
>
>> Is there a way to get what I'm after using super()?
>
>
> Probably.
>
>
>> The idea is that I could have a chain of subclasses which only need 
>> to redefine _v, and getting the value of v as a property would give 
>> me back the full chain of _v values for that class and all its 
>> ancestor classes.
>
>
> Does this work? :
>
> class A(object):
>   _v = [1,2,3]
>
>   def _getv(self):
>     ret = []
>     mroList = list(self.__class__.__mro__)
>     mroList.reverse()
>     for c in mroList:
>       print c, ret
>       if hasattr(c, "_v"):
>         ret += c._v
>     return ret
>
>   v = property(_getv)
>
>
> class B(A):
>   _v = [4,5,6]
>
> b = B()
>
> print b.v
>
>

-- 
Presenting:
mediocre nebula.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Help with super()

2006-01-12 Thread David Hirschfield
Good point, I did notice that.

My example is total junk, actually. Now that I look at it, it isn't at 
all possible to do what I'm after this way, since _getv() is never going 
to be looking at class A's _v value.

So, the larger question is how to do anything that resembles what I 
want, which is to have a chain of subclasses with a single attribute 
that each subclass can define as it wishes to, but with the ability to 
get the combined value from all the ancestors down to the current 
subclass I access that attribute from.

Does that make any sense?

Something like (and this is totally made-up pseudo-code):

class A:
    v = [1,2]

class B(A):
    v = [2,3]

class C(B):
    v = [4,5]

c = C()
c.getv() ==> [1,2,2,3,4,5]
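A working version of that pseudo-code might look like this (a sketch: getv walks the MRO so every ancestor contributes its own v, base classes first):

```python
class A(object):
    v = [1, 2]

    def getv(self):
        combined = []
        # reversed MRO: object first, the most-derived class last
        for klass in reversed(type(self).__mro__):
            # vars() sees only the class's own attributes, so each
            # class contributes exactly its own v, no double counting
            combined += vars(klass).get("v", [])
        return combined

class B(A):
    v = [2, 3]

class C(B):
    v = [4, 5]
```

Here C().getv() gives [1, 2, 2, 3, 4, 5] and B().getv() gives [1, 2, 2, 3].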

-Dave

Paul McNett wrote:

> David Hirschfield wrote:
>
>> I tried that and super(B,self), but neither works.
>>
>> Using super(A,self) in the _getv(self) method doesn't work, since the 
>> super() of A is "object" and that doesn't have the v property at all.
>> Not sure why you say that using self.__class__ is wrong as the first 
>> argument to super(), it should be the same as using the class name 
>> itself - both will result in <class '__main__.B'> or whatever self is an 
>> instance of.
>>
>> I still don't see a way to accomplish my original goal, but any other 
>> suggestions you might have would be appreciated.
>
>
> Your basic problem is that property fset(), fget() and friends are 
> defined at the class level, not at the instance level. So if you want 
> to override the setter of a property in a subclass, you pretty much 
> have to redefine the property in that subclass. And therefore, you 
> also have to redefine the getter of the property as well. There is no 
> easy way to "subclass" property getters and setters.
>
> But, there's another problem with your example code as well. You 
> appear to assume that self._v is going to refer to the _v defined in 
> that class. But take a look at this:
>
> class A(object):
>   _v = [1,2,3]
>
>   def _getv(self):
>     print self._v ## hey, look, I'm [4,5,6]!!!
>   v = property(_getv)
>
>
> class B(A):
>   _v = [4,5,6]
>
> b = B()
>
> print b.v
>
>

-- 
Presenting:
mediocre nebula.

-- 
http://mail.python.org/mailman/listinfo/python-list


Help with super()

2006-01-12 Thread David Hirschfield
I'm having trouble with the new descriptor-based mechanisms like super() 
and property() stemming, most likely, from my lack of knowledge about 
how they work.

Here's an example that's giving me trouble, I know it won't work, but it 
illustrates what I want to do:

class A(object):
    _v = [1,2,3]

    def _getv(self):
        if self.__class__ == A:
            return self._v
        return super(self.__class__,self).v + self._v

    v = property(_getv)


class B(A):
    _v = [4,5,6]

b = B()
print b.v

What I want is for b.v to give me back [1,2,3,4,5,6], but this example 
gets into a recursive infinite loop, since super(B,self).v is still 
B._getv(), not A._getv().

Is there a way to get what I'm after using super()?

The idea is that I could have a chain of subclasses which only need to 
redefine _v, and getting the value of v as a property would give me back 
the full chain of _v values for that class and all its ancestor classes.
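For the record, one way out of the recursion is to name each class explicitly in super() (instead of self.__class__) and redefine the property in each subclass. A sketch of that pattern (runnable in Python 3; the spelling super(B, self) also works in 2.x):

```python
class A(object):
    _v = [1, 2, 3]

    def _getv(self):
        # refer to A directly; self._v would find the subclass's _v
        return list(A._v)
    v = property(_getv)

class B(A):
    _v = [4, 5, 6]

    def _getv(self):
        # hard-coding B breaks the self.__class__ infinite loop
        return super(B, self).v + list(B._v)
    v = property(_getv)
```

The cost is that every subclass must repeat the property boilerplate, which is exactly the boilerplate the question hopes to avoid; walking the MRO in a single base-class getter removes that repetition.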

Thanks in advance,
-David

-- 
Presenting:
mediocre nebula.

-- 
http://mail.python.org/mailman/listinfo/python-list