Oh my, it turns out I don't really need to do this after all, due to previously
undiscovered uber-coolness in the tools I'm using!
My use case is that from inside of a Django view, I needed to retrieve a large
file via an HTTP GET, and serve that back up, with some time delays inserted
into the data stream. Turns out, requests (uber-cool tool #1) provides a way
to iterate over the content of a GET, and Django (uber-cool tool #2) provides a
way to build an HttpResponse from the data in an iterator. Epiphany! I ended
up with (essentially) this:
import time

import requests
from django.http import HttpResponse

def stream_slow(request, song_id):
    """Streams a song, but does it extra slowly, for client testing
    purposes.
    """
    def _slow_stream(r, chunk_size):
        # iter_content() hands back the GET body in chunk_size pieces;
        # sleeping between chunks throttles the stream for the client.
        for chunk in r.iter_content(chunk_size):
            yield chunk
            time.sleep(0.1)

    url = get_url(song_id)  # our own lookup for the song's source URL
    response = requests.get(url, stream=True)
    # HttpResponse accepts an iterator and streams its chunks back out.
    return HttpResponse(_slow_stream(response, 1024))
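(Aside: if you're on a newer Django, StreamingHttpResponse is the intended
home for iterator content; passing an iterator to a plain HttpResponse was
deprecated in 1.5. The idea is the same either way.)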
On Nov 8, 2013, at 12:59 PM, Nick Cash wrote:
>> I have a long string (several Mbytes). I want to iterate over it in
>> manageable chunks
>
> This is a weirdly common question. See
> http://stackoverflow.com/questions/312443/how-do-you-split-a-list-into-evenly-sized-chunks-in-python
> for several solutions.
>
> It's been proposed to be added to itertools before, but rejected:
> https://mail.python.org/pipermail/python-ideas/2012-July/015671.html and
> http://bugs.python.org/issue13095
>
> - Nick Cash
>
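For anyone who lands here still wanting the string-chunking answer, the
slicing idiom from that Stack Overflow thread is about as simple as it gets.
A minimal sketch (there are fancier itertools-based variants in the links
above):

def chunks(s, n):
    """Yield successive n-sized chunks from s; the last one may be short."""
    for i in range(0, len(s), n):
        yield s[i:i + n]

for piece in chunks("a" * 10, 3):
    print(piece)   # -> aaa, aaa, aaa, a

Since it only slices, it works on any sequence (strings, lists, bytes), and
it never materializes more than one chunk at a time.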
---
Roy Smith
[email protected]