Carsten Milkau added the comment:

No. The sample code is a demonstration of how to do it; it is by no means a
full-fledged patch.

The drawback of the current implementation is that if you tee n-fold and then
advance one of the iterators m times, every fetched item is appended to each of
the queues; the advancing iterator drains its own queue as it goes, so the n-1
lagging queues end up holding m references each, (n-1)*m references in total.
The documentation explicitly mentions that this is unfortunate.
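
For concreteness, here is a toy model of that n-queue scheme (it is neither the
attached demo nor the real tee() code; n = 3 and m = 10**6 are just
illustrative values):

    from collections import deque

    # Toy model of the n-queue scheme described above: every fetched item is
    # appended to all n queues, and the advancing iterator immediately pops it
    # back off its own queue, so each of the n-1 lagging queues keeps a copy.
    n, m = 3, 10**6
    queues = [deque() for _ in range(n)]
    for value in range(m):            # advance the first iterator m times
        for q in queues:
            q.append(value)
        queues[0].popleft()           # the advancing iterator consumes its own copy
    print(sum(len(q) for q in queues))    # (n-1)*m == 2_000_000 cached references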

I only demonstrate that it is perfectly unnecessary to fill n separate queues,
as you can use a single queue and index into it. Instead of storing duplicate
references, you can store a counter with each cached item reference. Replacing
duplicates with refcounts turns (n-1)*m references into 2*m references (half of
which are the counters).
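
This is not the attached demo itself, just a rough sketch of the
single-queue/refcount idea so the bookkeeping is concrete (the names and
structure below are mine):

    from collections import deque

    def tee(iterable, n=2):
        it = iter(iterable)
        cache = deque()   # [item, remaining] pairs; `remaining` counts the
                          # iterators that still have to yield the item
        base = [0]        # absolute stream position of cache[0]

        def gen(pos):
            while True:
                if pos == base[0] + len(cache):
                    # This iterator is furthest ahead: fetch a new item that
                    # all n iterators, including this one, still have to yield.
                    try:
                        cache.append([next(it), n])
                    except StopIteration:
                        return
                entry = cache[pos - base[0]]
                entry[1] -= 1
                pos += 1
                if entry[1] == 0:
                    # Only the oldest cached item can reach zero; every
                    # iterator has yielded it, so it can be dropped.
                    cache.popleft()
                    base[0] += 1
                yield entry[0]

        return tuple(gen(0) for _ in range(n))

Each cached item then costs one item reference plus one counter (the 2*m
above); dropping a tee iterator early would additionally require decrementing
its outstanding counts, which this sketch does not handle.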

Not in the demo code: you can improve this further by keeping an item in an
iterator-local queue once only one iterator still needs to return it, and in a
shared queue with a refcount while more of them do. That way you eliminate the
overhead of storing (item, 1) instead of item, at a fixed per-iterator cost.
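
Again, this is not in any attached code; it is only a rough sketch of one way
such a hybrid could look, assuming the shared state keeps a counter-free
"plain" queue (items awaited only by the slowest iterator) in front of the
refcounted queue:

    from collections import deque

    def tee(iterable, n=2):
        it = iter(iterable)
        plain = deque()    # items awaited by exactly one iterator: no counter
        counted = deque()  # [item, remaining] pairs awaited by two or more
        base = [0]         # absolute stream position of the oldest cached item

        def gen(pos):
            while True:
                if pos == base[0] + len(plain) + len(counted):
                    # Furthest ahead: fetch a new item that all n iterators,
                    # including this one, still have to yield.
                    try:
                        counted.append([next(it), n])
                    except StopIteration:
                        return
                if plain and pos == base[0]:
                    # Only the slowest iterator can sit in the plain region,
                    # so its backlog carries no counters at all.
                    item = plain.popleft()
                    base[0] += 1
                else:
                    entry = counted[pos - base[0] - len(plain)]
                    entry[1] -= 1
                    item = entry[0]
                    if entry[1] <= 1:
                        # At most one consumer left; such an entry is always
                        # counted[0], so strip the counter and (if a consumer
                        # remains) park the bare item in the plain queue.
                        counted.popleft()
                        if entry[1] == 1:
                            plain.append(item)
                        else:
                            base[0] += 1   # n == 1: nothing else needs it
                pos += 1
                yield item

        return tuple(gen(0) for _ in range(n))

In the common case of one iterator lagging far behind, its whole backlog then
sits in the plain queue with no counters, while only the short stretch still
awaited by several iterators pays for them.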

----------
versions: +Python 2.6, Python 2.7, Python 3.1, Python 3.2, Python 3.3

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue16394>
_______________________________________