Chris Angelico wrote:

> On Thu, Feb 9, 2012 at 2:55 PM, Steven D'Aprano
> <steve+comp.lang.pyt...@pearwood.info> wrote:
>> If your data is humongous but only available lazily, buy more memory :)
>
> Or if you have a huge iterable and only need a small index into it,
> snag those first few entries into a list, then yield everything else,
> then yield the saved ones:
> def cycle(seq, n):
>     seq = iter(seq)
>     lst = [next(seq) for i in range(n)]
>     try:
>         while True: yield next(seq)
>     except StopIteration:
>         for i in lst: yield i

I think that should be spelt

def cycle2(seq, n):
    seq = iter(seq)
    head = [next(seq) for i in range(n)]
    for item in seq:
        yield item
    for item in head:
        yield item

or, if you are into itertools,

from itertools import chain, islice

def cycle3(seq, n):
    seq = iter(seq)
    return chain(seq, list(islice(seq, n)))

$ python -m timeit -s 'from tmp import cycle; data = range(1000); start = 10' 'for item in cycle(data, 10): pass'
1000 loops, best of 3: 358 usec per loop
$ python -m timeit -s 'from tmp import cycle2; data = range(1000); start = 10' 'for item in cycle2(data, 10): pass'
1000 loops, best of 3: 172 usec per loop
$ python -m timeit -s 'from tmp import cycle3; data = range(1000); start = 10' 'for item in cycle3(data, 10): pass'
10000 loops, best of 3: 56.5 usec per loop

For reference:

$ python -m timeit -s 'data = range(1000); start = 10' 'for item in data[start:] + data[:start]: pass'
10000 loops, best of 3: 56.4 usec per loop

-- 
http://mail.python.org/mailman/listinfo/python-list
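For what it's worth, a quick sanity check that both spellings agree with the slice-based rotation. This is a sketch of my own, not from the thread, and it uses Python 3 (so range() is wrapped in list() to match the Python 2 behaviour in the timings above). Note the argument order in cycle3 matters: list(islice(seq, n)) is evaluated before chain starts iterating, so the head is saved first.

```python
from itertools import chain, islice

def cycle2(seq, n):
    # Save the first n items, yield the remainder, then the saved head.
    seq = iter(seq)
    head = [next(seq) for i in range(n)]
    for item in seq:
        yield item
    for item in head:
        yield item

def cycle3(seq, n):
    # list(islice(seq, n)) consumes the head before chain() iterates seq,
    # so chain yields the tail followed by the head.
    seq = iter(seq)
    return chain(seq, list(islice(seq, n)))

data = list(range(10))
expected = data[3:] + data[:3]
assert list(cycle2(data, 3)) == expected
assert list(cycle3(data, 3)) == expected
```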