Here is an example of how I use it to build an arbitrarily long SQL request
without having to pay for long intermediate strings, in both computation and
memory.
from itertools import chain  #, join

def join(sep, iterable):
    # Yield the items of `iterable`, separated by `sep`,
    # without building any intermediate string.
    notfirst = False
    for i in iterable:
        if notfirst:
            yield sep
        notfirst = True
        yield i
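For illustration, here is a self-contained sketch of that pattern (renamed `ijoin` here to avoid shadowing, and with a made-up table and columns, since the original query is not shown): the pieces stay lazy until a single final str.join.

```python
def ijoin(sep, iterable):
    # Yield the items of `iterable` separated by `sep`; no intermediate
    # strings are built until the final str.join.
    notfirst = False
    for item in iterable:
        if notfirst:
            yield sep
        notfirst = True
        yield item

# Hypothetical example: build a long INSERT statement lazily.
rows = [(i, f"name{i}") for i in range(3)]
pieces = ijoin(", ", (f"({i}, '{n}')" for i, n in rows))
query = "".join(["INSERT INTO users (id, name) VALUES ", *pieces, ";"])
print(query)
# → INSERT INTO users (id, name) VALUES (0, 'name0'), (1, 'name1'), (2, 'name2');
```

The point of the generator is that only one string of the final length is ever allocated, by the closing `"".join`.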
I agree that itertools shouldn't be a collection of vaguely useful functions. I
proposed this one because I have needed it many times, and have often used
str.join instead, which comes at a greater cost than iterating over pieces of
string. I didn't know about the more-itertools library (which already has an
Hi
I like using itertools to create long strings while not paying the cost of
intermediate strings (by eventually calling str.join on the whole iterator).
However, one missing feature is mimicking the behavior of str.join as an
iterator: an iterator that returns the items of an iterable, separ
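The desired behavior can be sketched as a small generator (a hypothetical `isep` helper, not an actual itertools function), whose output, when joined, matches str.join exactly:

```python
def isep(sep, iterable):
    # An iterator analogue of str.join: yield each item of `iterable`,
    # with `sep` yielded between consecutive items.
    it = iter(iterable)
    try:
        yield next(it)
    except StopIteration:
        return
    for item in it:
        yield sep
        yield item

words = ["SELECT", "id,", "name", "FROM", "t"]
# The lazy version interleaves the separator...
assert list(isep(" ", words)) == ["SELECT", " ", "id,", " ", "name", " ", "FROM", " ", "t"]
# ...and collapsing it reproduces str.join.
assert "".join(isep(" ", words)) == " ".join(words)
```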
Which means you can cancel a running task but still have to wait for it and
check whether it eventually produced a result. This is OK for internal use, but
highly counter-intuitive for the end user. And it makes the cancelled status
even more inconsistent, as calling cancel on a running task does not ensure i
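A minimal sketch of that behavior (assuming a task that intercepts cancellation, which asyncio allows): after calling cancel() you still have to await the task to learn its fate, and it may even finish normally with a result.

```python
import asyncio

async def stubborn():
    try:
        await asyncio.sleep(10)
    except asyncio.CancelledError:
        # The task intercepts the cancellation and completes normally.
        return "finished anyway"

async def main():
    task = asyncio.create_task(stubborn())
    await asyncio.sleep(0)   # let the task start running
    task.cancel()            # request cancellation...
    result = await task      # ...but we still must await it to know the outcome
    return result, task.cancelled()

result, was_cancelled = asyncio.run(main())
print(result, was_cancelled)
# → finished anyway False
```

So `cancel()` returning True only means the request was delivered; `task.cancelled()` is False here because the task ended with a result, not by cancellation.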
Oh, it only appears in the list now! I thought the post hadn't gone through, so
I posted again :/.
I've fixed my "library"
(https://github.com/aure-olli/aiokafka/blob/3acb88d6ece4502a78e230b234f47b90b9d30fd5/syncio.py),
and the `wrapped_consumer2` function. There is now no double await, so no risk
of afterw
In asyncio, when a task awaits another task (or future), it can be cancelled
right after the awaited task finishes (before the callbacks have been
processed). Thus, if the awaited task has consumed data, that data is lost.
For instance, with the following code:
import asyncio
availab
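Since the snippet above is cut off, here is a hedged, self-contained reconstruction of the kind of race being described (the names `consume_via`, `inner`, and `outer` are made up for illustration): the inner task takes the item off the queue, the outer task is cancelled in the window before it resumes, and the item is lost.

```python
import asyncio

async def consume_via(inner):
    # Await the inner task; if we are cancelled after `inner` finishes
    # but before we resume, its result is silently dropped.
    return await inner

async def main():
    queue = asyncio.Queue()
    inner = asyncio.ensure_future(queue.get())
    outer = asyncio.ensure_future(consume_via(inner))
    await asyncio.sleep(0)    # let both tasks start and block
    queue.put_nowait("data")  # inner will wake up on the next loop pass
    await asyncio.sleep(0)    # inner is now done; outer has not resumed yet
    outer.cancel()            # cancel in exactly that window
    try:
        await outer
    except asyncio.CancelledError:
        pass
    # The item was consumed from the queue, but nobody received it.
    return inner.result(), outer.cancelled(), queue.qsize()

item, outer_cancelled, remaining = asyncio.run(main())
print(item, outer_cancelled, remaining)
# → data True 0
```

The inner task holds the item in its result, the queue is empty, and the outer consumer is cancelled: unless the caller inspects the inner task, the data is gone.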