Matteo Dell'Amico added the comment:
Personally, I'd find a maxheap in the standard library helpful, and a quick
Google search tells me I'm not alone.
I generally have to deal with numeric values, so I have these choices:
- ugly code (e.g., `minus_distance, elem = heappop(heap)`, `distance
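For numeric keys, the usual workaround is to negate the key when pushing and negate it again when popping, which is presumably what the `minus_distance` naming above refers to. A minimal sketch of that pattern:

```python
import heapq

# Simulate a max-heap with heapq (a min-heap) by negating the numeric key.
heap = []
for distance, elem in [(3.0, 'a'), (1.0, 'b'), (2.0, 'c')]:
    heapq.heappush(heap, (-distance, elem))

# The largest distance comes out first.
minus_distance, elem = heapq.heappop(heap)
distance = -minus_distance  # 3.0
```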
Matteo Dell'Amico added the comment:
Sorry for taking so long to answer, I didn't see notifications somehow.
Raymond, my use case is in general something that happens when I'm doing
analytics on sequences of events (e.g., URLs visited by a browser) or paths in
a graph. I look at pairs
New submission from Matteo Dell'Amico :
I use itertools.pairwise all the time and I suspect the same happens to
others. If others are in the same situation, having this simple recipe
already included in the library would definitely be more convenient than
copy/pasting
Changes by Matteo Dell'Amico de...@linux.it:
--
nosy: +della
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue24068
New submission from Matteo Dell'Amico de...@linux.it:
Is there a way to define an abstract classmethod? The two obvious ways
don't seem to work properly.
Python 3.0.1+ (r301:69556, Apr 15 2009, 17:25:52)
[GCC 4.3.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
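For the record, later Python versions settled this question: since 3.3, stacking `@classmethod` on top of `@abstractmethod` does the right thing (before that, the now-deprecated `abc.abstractclassmethod` helper was needed). A sketch with illustrative class names:

```python
from abc import ABC, abstractmethod

class Base(ABC):
    # Since Python 3.3, this combination is the supported spelling.
    @classmethod
    @abstractmethod
    def create(cls):
        raise NotImplementedError

class Concrete(Base):
    @classmethod
    def create(cls):
        return cls()

obj = Concrete.create()     # works
# Base() would raise TypeError, since create() is still abstract there.
```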
New submission from Matteo Dell'Amico de...@linux.it:
The current MutableSet.__iand__ implementation calls self.discard while
iterating on self. This creates strange problems when implementing a
MutableSet with simple storage choices. For example, consider the attached
file, which implements set
Matteo Dell'Amico de...@linux.it added the comment:
I suggest solving the problem by changing the implementation to:
def __iand__(self, c):
    self -= self - c
    return self
or to
def __iand__(self, c):
    for item in self - c:
        self.discard(item)
    return self
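A runnable sketch of the corrected behavior, using a hypothetical `ListSet` backed by a plain list (the kind of simple implementation the report describes). Because the fixed `__iand__` iterates over the fresh set `self - c` rather than over `self`, calling `discard` during the loop is safe:

```python
from collections.abc import MutableSet

class ListSet(MutableSet):
    """Deliberately naive MutableSet backed by a list (illustrative)."""
    def __init__(self, iterable=()):
        self._items = []
        for x in iterable:
            self.add(x)

    def __contains__(self, x):
        return x in self._items

    def __iter__(self):
        return iter(self._items)

    def __len__(self):
        return len(self._items)

    def add(self, x):
        if x not in self._items:
            self._items.append(x)

    def discard(self, x):
        if x in self._items:
            self._items.remove(x)

s = ListSet([1, 2, 3, 4])
s &= ListSet([2, 4])   # the ABC's __iand__ discards items of self - c
assert sorted(s) == [2, 4]
```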
New submission from Matteo Dell'Amico de...@linux.it:
I feel that the pairwise recipe could be slightly more elegant if `for
elem in b: break` became a simpler `next(b)` (or `b.next()` for Python 2.x).
It is also more natural to modify the recipes to suit one's needs (e.g.,
returning items
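For reference, the recipe as it later appeared in the itertools documentation, with `next(b, None)` so an empty iterable simply produces nothing:

```python
from itertools import tee

def pairwise(iterable):
    "s -> (s0, s1), (s1, s2), (s2, s3), ..."
    a, b = tee(iterable)
    next(b, None)      # advance b by one; the default avoids StopIteration
    return zip(a, b)

list(pairwise('abcd'))   # [('a', 'b'), ('b', 'c'), ('c', 'd')]
```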
Matteo Dell'Amico de...@linux.it added the comment:
Georg, you're right, there's a StopIteration to catch. My thinko was
mistaking the function for a generator where the exception propagation
would have done the right thing. The amended version now becomes
next(b)
for x, y in zip(a, b):
    yield x, y
New submission from Matteo Dell'Amico de...@linux.it:
The str.count (http://docs.python.org/dev/py3k/library/stdtypes.html)
documentation does not report that it returns the number of
*non-overlapping* instances. For example, I expected 'aaa'.count('aa')
to be 2 and not 1. Compare
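The behavior in question, plus a small helper (hypothetical, not in the stdlib) for when overlapping matches are wanted:

```python
# str.count counts non-overlapping occurrences, scanning left to right:
assert 'aaa'.count('aa') == 1   # not 2

def count_overlapping(s, sub):
    """Count occurrences of sub in s, allowing overlaps (illustrative)."""
    count = start = 0
    while True:
        start = s.find(sub, start) + 1
        if start == 0:          # find returned -1: no more matches
            return count
        count += 1

assert count_overlapping('aaa', 'aa') == 2
```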
Matteo Dell'Amico de...@linux.it added the comment:
great Raymond! :)
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue5350
Paolo Veronelli wrote:
Yes this is really strange.
from sets import Set
class H(Set):
    def __hash__(self):
        return id(self)
s=H()
f=set() #or f=Set()
f.add(s)
f.remove(s)
No errors.
So we had a working implementation of sets in the library and put a
broken one in the
Paolino wrote:
I thought rewriting __hash__ should be enough to avoid mutables problem
but:
class H(set):
    def __hash__(self):
        return id(self)
s=H()
f=set()
f.add(s)
f.remove(s)
the add succeeds,
but the remove fails, apparently without calling hash(s).
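As far as I can tell, modern CPython only falls back to a `frozenset` copy when the element is an unhashable set, so a subclass that defines `__hash__` now works; the failure described above is historical. A sketch (behavior assumed from current CPython semantics, worth re-verifying on old 2.x):

```python
class H(set):
    def __hash__(self):
        return id(self)

s = H()
f = set()
f.add(s)      # stored under hash(s) == id(s)
f.remove(s)   # found via the same custom hash; no frozenset fallback needed
assert len(f) == 0
```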
Why don't you just use
Paolo Veronelli wrote:
And above all, shouldn't the cost of a remove operation on sets be
sublinear, or am I wrong?
Is this as fast as with lists?
It's faster than with lists: in sets, as with dicts, remove is on
average O(1).
Obviously if I use the ids as hash values nothing is guaranteed about the
Larry Bates wrote:
I received my copy on Friday (because I was a contributor).
I wanted to thank Alex, Anna, and David for taking the time to put
this together. I think it is a GREAT resource, especially for
beginners. This should be required reading for anyone who
is serious about learning
Raymond Hettinger wrote:
I would like to get everyone's thoughts on two new dictionary methods:
def count(self, key, qty=1):
    try:
        self[key] += qty
    except KeyError:
        self[key] = qty
def appendlist(self, key, *values):
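These two methods cover exactly the use cases later served by `collections.defaultdict` (added in 2.5) and `collections.Counter` (added in 2.7/3.1); a sketch of the modern spellings:

```python
from collections import Counter, defaultdict

# count(): tally occurrences.
tally = Counter()
for word in ['a', 'b', 'a']:
    tally[word] += 1          # no try/except KeyError dance needed

# appendlist(): accumulate multiple values per key.
groups = defaultdict(list)
for key, value in [('x', 1), ('x', 2), ('y', 3)]:
    groups[key].append(value)

assert tally['a'] == 2
assert groups['x'] == [1, 2]
```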
Kay Schluehr wrote:
Why do you set
d.defaultValue(0)
d.defaultValue(function=list)
but not
d.defaultValue(0)
d.defaultValue([])
?
I think that's because you have to instantiate a different object for
each different key. Otherwise, you would instantiate just one list as a
default value for *all*
Kay Schluehr wrote:
I think that's because you have to instantiate a different object for
each different key. Otherwise, you would instantiate just one list as
a default value for *all* default values.
Or the default value could be copied, which is not very hard either, or
type(self._default)() will
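The point is easy to demonstrate with `collections.defaultdict`, which resolved this design question by taking a factory callable rather than a prototype value (variable names illustrative):

```python
from collections import defaultdict

# A factory gives each key its own fresh list:
good = defaultdict(list)
good['a'].append(1)
good['b'].append(2)
assert good['a'] == [1] and good['b'] == [2]

# A single shared instance would alias every key to the same list:
shared = []
bad = {}
for key in ('a', 'b'):
    bad.setdefault(key, shared).append(key)
assert bad['a'] is bad['b']       # both keys point at one mutated list
```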
George Sakkis wrote:
I'm sure there must have been a past thread about this topic but I don't know
how to find it: how about extending the for X in syntax so that X can include
default arguments? This would be very useful for list/generator comprehensions,
for example being able to write
nghoffma wrote:
sorry, that should have been:
>>> import sets
>>> def doit(thelist):
...     s = sets.Set(thelist)
...     if s == sets.Set([None]):
...         return None
...     else:
...         return max(s - sets.Set([None]))
Since a function that doesn't return is equivalent to one
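In Python 3 the same function collapses to a one-liner, since `max()` gained a `default` keyword in 3.4 (keeping the original `doit`/`thelist` names):

```python
def doit(thelist):
    # Ignore None values; return None if nothing else remains.
    return max((x for x in thelist if x is not None), default=None)

assert doit([None, 3, 1, None]) == 3
assert doit([None, None]) is None
```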