super() in class defs?
I may be attempting something improper here, but maybe I'm just going
about it the wrong way. I'm subclassing http.server.CGIHTTPRequestHandler,
and I'm using a decorator to add functionality to several overridden
methods:

    def do_decorate(func):
        def wrapper(self):
            if appropriate():
                return func(self)
            complain_about_error()
        return wrapper

    class myHandler(CGIHTTPRequestHandler):
        @do_decorate
        def do_GET(self):
            return super().do_GET()
        # also override do_HEAD and do_POST

My first thought was that I could just replace that whole method
definition with one line:

    class myHandler(CGIHTTPRequestHandler):
        do_GET = do_decorate(super().do_GET)

That generates the following error:

    SystemError: super(): __class__ cell not found

So I guess that when super() is called in the context of a class
definition rather than that of a method definition, it doesn't have the
information it needs. Now I'll probably just say:

    do_GET = do_decorate(CGIHTTPRequestHandler.do_GET)

but I wonder if there is a "correct" way to do this instead?

Thanks!
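P.S. For concreteness, a minimal sketch of the workaround (assuming the
hypothetical appropriate() and complain_about_error() from above).
Zero-argument super() only works inside a method body, where the
compiler creates the __class__ cell it needs; at class-definition time
there is no such cell, so the base class has to be named explicitly. A
loop then covers all three methods without repeating the one-liner:

    from http.server import CGIHTTPRequestHandler

    def do_decorate(func):
        def wrapper(self):
            if appropriate():           # hypothetical check
                return func(self)
            complain_about_error()      # hypothetical handler
        return wrapper

    class myHandler(CGIHTTPRequestHandler):
        pass

    for name in ('do_GET', 'do_HEAD', 'do_POST'):
        setattr(myHandler, name,
                do_decorate(getattr(CGIHTTPRequestHandler, name)))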
Re: __eq__() inconvenience when subclassing set
On Nov 1, 1:13 am, "Gabriel Genellina" wrote:
> Looks like in 3.1 this can be done with bytes+str and vice versa, even
> if bytes and str don't have a common ancestor (other than object;
> basestring doesn't exist in 3.x):
>
> p3> Base = bytes
> p3> Other = str
> p3>
> p3> class Derived(Base):
> ...     def __eq__(self, other):
> ...         print('Derived.__eq__')
> ...         return True
> ...
> p3> Derived() == Base()
> Derived.__eq__
> True
> p3> Base() == Derived()
> Derived.__eq__
> True
> p3> Derived() == Other()
> Derived.__eq__
> True
> p3> Other() == Derived()
> Derived.__eq__   # !!!
> True
> p3> Base.mro()
> [<class 'bytes'>, <class 'object'>]
> p3> Other.mro()
> [<class 'str'>, <class 'object'>]
>
> The same example with set+frozenset (the one you're actually
> interested in) doesn't work, unfortunately.
> After further analysis, this works for bytes and str because both
> types refuse to guess and compare to each other; they return
> NotImplemented when the right-side operand is not of the same type.
> And this gives that other operand the chance of being called.
>
> set and frozenset, on the other hand, are promiscuous: their
> tp_richcompare slot happily accepts any set of any kind, derived or
> not, and compares their contents. I think it should be a bit more
> strict: if the right-hand side is not of the same type, and its
> tp_richcompare slot is not the default one, it should return
> NotImplemented. This way the other type has a chance to be called.

Thanks for this, Gabriel! There seems to be a difference between the
two cases, however:

>>> str() == bytes()
False
>>> set() == frozenset()
True

I doubt that either of these invariants is amenable to modification,
even for purposes of "consistency". I'm not sure how to resolve this,
but you've definitely helped me here. Perhaps the test in
set_richcompare() can return NotImplemented in particular cases but not
in others? I'll think about this; let me know if you come up with
anything more.

thanks,
Jess
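P.S. To make the NotImplemented mechanism concrete, a minimal
illustrative sketch (my own, not from Gabriel's session): when the left
operand's __eq__ returns NotImplemented, the interpreter falls back to
the right operand's method, which is exactly the chance bytes and str
give each other and set/frozenset don't:

    class Reticent(object):
        def __eq__(self, other):
            return NotImplemented      # refuse to guess; defer

    class Eager(object):
        def __eq__(self, other):
            print('Eager.__eq__')
            return True

    Reticent() == Eager()   # prints Eager.__eq__ and evaluates True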
Re: __eq__() inconvenience when subclassing set
On Oct 29, 10:41 pm, "Gabriel Genellina" wrote:
> We know the last test fails because the == logic fails to recognize
> mySet (on the right side) as a "more specialized" object than
> frozenset (on the left side), because set and frozenset don't have a
> common base type (although they share a lot of implementation).
>
> I think the only way would require modifying tp_richcompare of
> set/frozenset objects, so it is aware of subclasses on the right
> side. Currently, frozenset() == mySet() effectively ignores the fact
> that mySet is a subclass of set.

I don't think even that would work. By the time set_richcompare() is
called (incidentally, it's used for both set and frozenset), it's too
late. That function is not responsible for calling the subclass's
method. It does call PyAnySet_Check(), but only to short-circuit
equality and inequality for non-set objects.

I believe that something higher-level in the interpreter decides to
call the right-side type's method because it's a subclass of the
left-side type, but I'm not familiar enough with the code to know where
that happens. It may be best not to sully such generalized code with a
special case for this.

I may do some experiments with bytes, str, and unicode, since that
seems to be an analogous case. There is a basestring type, but at this
point I don't know that it really helps with anything.

cheers,
Jess
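P.S. The higher-level rule I mean can be demonstrated with pure Python
classes (an illustrative sketch, not the C path that set and frozenset
actually take): when the right operand's type is a proper subclass of
the left operand's type and overrides the comparison, the subclass's
method is tried first, even though it's on the right:

    class Base(object):
        pass

    class Sub(Base):
        def __eq__(self, other):
            print('Sub.__eq__')
            return True

    Base() == Sub()   # prints Sub.__eq__: the right side goes first

That priority never kicks in for frozenset() == mySet(), because mySet
subclasses set, not frozenset.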
Re: __eq__() inconvenience when subclassing set
On Oct 29, 3:54 pm, Mick Krippendorf wrote:
> Jess Austin wrote:
> > That's nice, but it means that everyone who imports my class will
> > have to import the monkeypatch of frozenset, as well. I'm not sure
> > I want that. More ruby than python, ne?
>
> I thought it was only a toy class?

Well, I posted a toy, but it's a stand-in for something else more
complicated. Trying to conserve bytes, you know.
Re: __eq__() inconvenience when subclassing set
On Oct 28, 10:07 pm, Mick Krippendorf wrote:
> You could just overwrite set and frozenset:
>
> class eqmixin(object):
>     def __eq__(self, other):
>         print "called %s.__eq__()" % self.__class__
>         if isinstance(other, (set, frozenset)):
>             return True
>         return super(eqmixin, self).__eq__(other)
>
> class frozenset(eqmixin, frozenset):
>     pass

That's nice, but it means that everyone who imports my class will have
to import the monkeypatch of frozenset, as well. I'm not sure I want
that. More ruby than python, ne?

thanks,
Jess
__eq__() inconvenience when subclassing set
I'm subclassing set, and redefining __eq__(). I'd appreciate any
relevant advice.

>>> class mySet(set):
...     def __eq__(self, other):
...         print "called mySet.__eq__()!"
...         if isinstance(other, (set, frozenset)):
...             return True
...         return set.__eq__(self, other)
...

I stipulate that this is a weird thing to do, but this is a toy class
to avoid the lengthy definition of the class I actually want to write.
Now I want the builtin set and frozenset types to use the new __eq__()
with mySet symmetrically.

>>> mySet() == set([1])
called mySet.__eq__()!
True
>>> mySet() == frozenset([1])
called mySet.__eq__()!
True
>>> set([1]) == mySet()
called mySet.__eq__()!
True
>>> frozenset([1]) == mySet()
False

frozenset doesn't use mySet.__eq__() because mySet is not a subclass of
frozenset, as it is for set. I've tried a number of techniques to
mitigate this issue. If I multiple-inherit from both set and frozenset,
I get the "instance lay-out conflict" error. I have similar problems
setting mySet.__bases__ directly, and hacking mro() in a metaclass. So
far nothing has worked.

If it matters, I'm using 2.6, but I can change versions if it will
help. Should I give up on this, or is there something else I can try?
Keep in mind, I must redefine __eq__(), and I'd like to be able to
compare instances of the class to both set and frozenset instances.

cheers,
Jess
Re: generators shared among threads
Bryan,

You'll get the same result without the lock. I'm not sure what this
indicates. It may show that the contention on the lock and the race
condition on i aren't always problems. It may show that generators, at
least in CPython 2.4, provide thread safety for free. It does seem to
disprove my statement that "the yield leaves the lock locked". More
than that, I don't know.

When threading is involved, different runs of the same code can yield
different results. Can we be sure that each thread resumes the
generator where the last one left off? Why wouldn't a thread just start
where it had left off before? Of course, this case would have the
potential for the problems that Alex talked about earlier. Why would a
generator object be any more reentrant than a function object? Because
it has a gi_frame attribute? Would generators be thread-safe only in
CPython?

I started the discussion with simpler versions of these same questions.
I'm convinced that using Queue is safe, but now I'm not convinced that
just using a generator is not safe.

cheers,
Jess
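P.S. One way to sidestep the question entirely, assuming nothing about
generator internals, is to serialize access from the consuming side. A
minimal sketch (my own, 2.4-compatible; f() being the counting
generator from my original post):

    import threading

    class LockedIterator(object):
        """Wrap an iterator so that calls to next() are serialized."""
        def __init__(self, it):
            self._it = iter(it)
            self._lock = threading.Lock()
        def __iter__(self):
            return self
        def next(self):
            self._lock.acquire()
            try:
                return self._it.next()
            finally:
                self._lock.release()

    g = LockedIterator(f())

With this, at most one thread at a time is ever inside the underlying
generator, whatever the interpreter does or doesn't guarantee.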
Re: generators shared among threads
I just noticed: if you don't define maxsize in _init(), you need to
override _full() as well:

    def _full(self):
        return False

cheers,
Jess
Re: generators shared among threads
Paul wrote:
> def f():
>     lock = threading.Lock()
>     i = 0
>     while True:
>         lock.acquire()
>         yield i
>         i += 1
>         lock.release()
>
> but it's easy to make mistakes when implementing things like that
> (I'm not even totally confident that the above is correct).

The main problem with this is that the yield leaves the lock locked. If
any other thread wants to read the generator, it will block. Your class
Synchronized fixes this with the "finally" hack (please note that from
me this is NOT a pejorative). I wonder... is that future-proof? It
seems that something related to this might change with 2.5? My notes
from GvR's keynote don't seem to include this. Someone who knows more
than I do about the intersection between "yield" and "finally" would
have to speak to that.
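For what it's worth, my reading of PEP 342 (not tested against a 2.5
build): the 2.4 compiler rejects a yield inside a try block that has a
finally clause, and PEP 342, slated for 2.5, lifts exactly that
restriction:

    def g():
        try:
            yield 1    # SyntaxError under 2.4; legal as of 2.5
        finally:       # (PEP 342), which also arranges for the finally
            pass       # clause to run when the generator is closed or
                       # garbage-collected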
Re: generators shared among threads
Alex wrote:
> Last, I'm not sure I'd think of this as a reentrantQueue, so much as
> a ReentrantCounter;-).

Of course! It must have been late when I named this class... I think
I'll go change the name in my code right now.
Re: generators shared among threads
Thanks for the great advice, Alex. Here is a subclass that seems to
work:

    from Queue import Queue
    from itertools import count

    class reentrantQueue(Queue):
        def _init(self, maxsize):
            self.maxsize = 0
            self.queue = []   # so we don't have to override put()
            self.counter = count()
        def _empty(self):
            return False
        def _get(self):
            return self.counter.next()
        def next(self):
            return self.get()
        def __iter__(self):
            return self
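A quick usage sketch (my addition, relying on the 2.4-era Queue
internals this subclass overrides): because get() does its own locking,
any number of threads can safely pull unique values, and with _empty()
returning False it never blocks:

    q = reentrantQueue()
    q.get()                        # 0
    q.next()                       # 1
    [q.next() for i in range(3)]   # [2, 3, 4]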
Re: Easy immutability in python?
I guess we think a bit differently, and we think about different
problems. When I hear "immutable container", I think "tuple". When I
hear "my own class that is an immutable container", I think "subclass
tuple, and probably override __new__, because otherwise tuple would be
good enough as is".

I'm not sure how this relates to the clp thread that you cite. I didn't
read the whole thing, but I didn't find it to be a flamewar so much as
a typical clp contest of tedium, which failed to devolve into a
flamewar simply due to the maturity of the interlocutors. To summarize:
the first post is a use case, the second post is an implementation of
that use case, and subsequent posts alternate between "that's not how I
want to do it" and "please provide a more specific use case for which
the provided implementation is not acceptable".

good luck,
Jess
Re: do design patterns still apply with Python?
msoulier wrote:
> I find that DP junkies don't tend to keep things simple.

+1 QOTW. There's something about these "political" threads that seems
to bring out the best quotes. b^)
generators shared among threads
hi,

This seems like a difficult question to answer through testing, so I'm
hoping that someone will just know... Suppose I have the following
generator, g:

    def f():
        i = 0
        while True:
            yield i
            i += 1

    g = f()

If I pass g around to various threads and I want them to always be
yielded a unique value, will I have a race condition? That is, is it
possible that the cpython interpreter would interrupt one thread after
the increment and before the yield, and then resume another thread to
yield the first thread's value, or increment the stored i, or both,
before resuming the first thread?

If so, would I get different behavior if I just set g like:

    g = itertools.count()

If both of these idioms will give me a race condition, how might I go
about preventing such? I thought about using threading.Lock, but I'm
sure that I don't want to put a lock around the yield statement.

thanks,
Jess
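P.S. One way to see why the worry is at least plausible (an
illustration, not a proof either way): the increment compiles to
several bytecodes, and CPython can switch threads between bytecodes:

    import dis

    def f():
        i = 0
        while True:
            yield i
            i += 1

    dis.dis(f)   # shows i += 1 as LOAD_FAST / LOAD_CONST / INPLACE_ADD
                 # / STORE_FAST: several separate opcodes after the yield

Whether another thread can actually slip into the generator's frame
between those opcodes is exactly the question.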
Re: Easy immutability in python?
To be clear, in this simple example I gave, you don't have to override
anything. However, if you want to process the values you place in the
container in some way before turning on immutability (which I assume
you must want to do, because otherwise why not just use a tuple to
begin with?), then that processing should take place in a.__new__.

cheers,
Jess
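P.S. A minimal sketch of what "processing in __new__" might look like
(an invented example, not from this thread): coerce the values before
the tuple is frozen, since by __init__ time it's too late to change
them:

    class Point(tuple):
        def __new__(cls, x, y):
            # normalize to floats, then freeze
            return tuple.__new__(cls, (float(x), float(y)))

    p = Point(1, "2.5")   # p == (1.0, 2.5)
    # p[0] = 3 would raise TypeError, since Point inherits
    # tuple's immutability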
Re: Easy immutability in python?
Since this is a container that needs to be "immutable, like a tuple",
why not just inherit from tuple? You'll need to override the __new__
method, rather than __init__, since tuples are immutable:

    class a(tuple):
        def __new__(cls, t):
            return tuple.__new__(cls, t)

cheers,
Jess
Re: Pulling all n-sized combinations from a list
hi,

I'm not sure why this hasn't come up yet, but this seems to beg for
list comprehensions, if not generator expressions. All of the following
run in under 2 seconds on my old laptop:

>>> alph = 'abcdefghijklmnopqrstuvwxyz'
>>> len([''.join((a,b,c,d)) for a in alph for b in alph
...      for c in alph for d in alph])
456976
>>> len([''.join((a,b,c,d)) for a in alph for b in alph
...      for c in alph for d in alph if (a>=b and b>=c and c>=d)])
23751
>>> len([''.join((a,b,c,d)) for a in alph for b in alph
...      for c in alph for d in alph
...      if (a!=b and b!=c and c!=d and d!=a and b!=d and a!=c)])
358800
>>> len([''.join((a,b,c,d)) for a in alph for b in alph
...      for c in alph for d in alph if (a>b and b>c and c>d)])
14950

cheers,
Jess
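P.S. In newer Pythons the same four counts fall out of itertools
directly (product, permutations, and combinations exist as of 2.6;
combinations_with_replacement as of 2.7), which may read more clearly
than the chained for-clauses:

>>> from itertools import (product, permutations, combinations,
...                        combinations_with_replacement)
>>> alph = 'abcdefghijklmnopqrstuvwxyz'
>>> len(list(product(alph, repeat=4)))                 # all 4-strings
456976
>>> len(list(combinations_with_replacement(alph, 4)))  # like a>=b>=c>=d
23751
>>> len(list(permutations(alph, 4)))                   # all distinct
358800
>>> len(list(combinations(alph, 4)))                   # like a>b>c>d
14950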