Re: [Python-ideas] asyncio: return from multiple coroutines

2020-06-25 Thread Kyle Stanley
(Resending this email since it didn't originally go through to
python-list; sorry for the duplicate, Pablo)

> Yes, I want to have multiple results: the connections listening forever, 
> returning a result for each message received.

> I forgot to mention that I did try to use asyncio.wait with `FIRST_COMPLETED`;
> however, the problem is that it seems to evict the not-completed coroutines,
> so the messenger that arrives second does not send the message. To check it,
> I have run that script without the random sleep: just msgr1 waits 1s and
> msgr2 waits 2s, so msgr1 always ends first. I expect a result like this
> (which I am currently getting with queues):

FYI, the other coroutines are not evicted or cancelled; they are
simply left in the "pending" set of the `(done, pending)` tuple returned
by `asyncio.wait()`, still running. I misunderstood what you were
actually looking for, but I think that I understand now.
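
If you did want to stay with `asyncio.wait()`, the usual pattern is to
feed the returned "pending" set back into the next call. A minimal sketch,
using the `recv()` coroutine from my earlier reply:

```
async def consume():
    pending = {asyncio.create_task(recv("Messenger 1", max_sleep=1)),
               asyncio.create_task(recv("Messenger 2", max_sleep=2))}
    while pending:
        done, pending = await asyncio.wait(
            pending, return_when=asyncio.FIRST_COMPLETED)
        for task in done:
            print(task.result())
            # to keep listening, create a fresh task for the finished
            # messenger and add it back into `pending` here
```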

Since it seems like you want to be able to receive the results out of
order from both "messengers" and have them continuously listen to
their respective socket (without affecting the other), a queue is
likely going to be the best approach. I think you had the right
general idea with your example, but here's a way to make it
significantly less cumbersome and easy to expand upon (such as
expanding the number of websockets to listen to):

```
import asyncio

class MessengerQueue:
    def __init__(self):
        self._queue = asyncio.Queue()

    async def get(self):
        return await self._queue.get()

    async def start_messengers(self):
        # Could be easily modified to create any number of "messengers"
        # set to listen on specific websockets, by passing a list and
        # creating a task for each one.
        asyncio.create_task(self._messenger("Messenger 1", 1))
        asyncio.create_task(self._messenger("Messenger 2", 2))

    async def _messenger(self, message: str, sleep_time: int):
        while True:
            await asyncio.sleep(sleep_time)
            await self._queue.put(f'{message} awaited for {sleep_time:.2f}s')


async def main():
    mqueue = MessengerQueue()
    asyncio.create_task(mqueue.start_messengers())
    while True:
        result = await mqueue.get()
        print(result)

asyncio.run(main())
```

This results in your desired output:
Messenger 1 awaited for 1.00s
Messenger 2 awaited for 2.00s
Messenger 1 awaited for 1.00s
Messenger 1 awaited for 1.00s
Messenger 2 awaited for 2.00s
Messenger 1 awaited for 1.00s
Messenger 1 awaited for 1.00s
Messenger 2 awaited for 2.00s
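
The comment in start_messengers() could be made concrete with a minimal
variant along these lines (plain (message, sleep_time) pairs standing in
for real websocket connections):

```
async def start_messengers(self, messengers):
    # each entry is a (message, sleep_time) pair; with real websockets
    # each entry would identify a connection instead
    for message, sleep_time in messengers:
        asyncio.create_task(self._messenger(message, sleep_time))
```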

Note: it would probably be more idiomatic to call these "consumers" or
"listeners" rather than "messengers"/"messagers" (the websocket docs
refer to them as "consumer handlers"), but I used "messengers" to make
it a bit more easily comparable to the original queue example from the
OP: https://pastebin.com/BzaxRbtF.

I hope the above example is of some use. :-)

Regards,
Kyle Stanley

Re: [Python-ideas] asyncio: return from multiple coroutines

2020-06-23 Thread Pablo Alcain
Thank you very much Kyle for your answer, I am moving this conversation to
the more proper python-list for whoever wants to chime in. I summarize here
the key points of my original question (full question on the quoted email):

I have an application that listens on two websockets through the async
library https://websockets.readthedocs.io/ and I have to perform the same
function on the result, no matter where the message came from. I have
implemented a rather cumbersome solution with async Queues:
https://pastebin.com/BzaxRbtF, but I think there has to be a more
async-friendly option I am missing.

Now I move on to the comments that Kyle made

On Tue, Jun 23, 2020 at 12:32 AM Kyle Stanley  wrote:

> I believe asyncio.wait() with "return_when=FIRST_COMPLETED" would
> perform the functionality you're looking for with the
> "asyncio.on_first_return()". For details on the functionality of
> asyncio.wait(), see
> https://docs.python.org/3/library/asyncio-task.html#asyncio.wait.
>
> > I understand that I can create two coroutines that call the same
> > function, but it would be much cleaner (because of implementation issues)
> > if I can simply create a coroutine that yields the result of whichever
> > connection arrives first.
>
> You can use an asynchronous generator that will continuously yield the
> result of the first recv() that finishes (I'm assuming you mean
> "yields" literally and want multiple results from a generator, but I
> might be misinterpreting that part).
>

Yes, I want to have multiple results: the connections listening forever,
returning a result for each message received.


>
> Here's a brief example, using the recv() coroutine function from the
> pastebin linked:
>
> ```
> import asyncio
> import random
>
> async def recv(message: str, max_sleep: int):
>     sleep_time = max_sleep * random.random()
>     await asyncio.sleep(sleep_time)
>     return f'{message} awaited for {sleep_time:.2f}s'
>
> async def _start():
>     while True:
>         msgs = [
>             asyncio.create_task(recv("Messager 1", max_sleep=1)),
>             asyncio.create_task(recv("Messager 2", max_sleep=1))
>         ]
>         done, _ = await asyncio.wait(msgs,
>                                      return_when=asyncio.FIRST_COMPLETED)
>         result = done.pop()
>         yield await result
>
> async def main():
>     async for result in _start():
>         print(result)
>
> asyncio.run(main())
> ```


I forgot to mention that I did try to use asyncio.wait with
`FIRST_COMPLETED`; however, the problem is that it seems to evict the
not-completed coroutines, so the messenger that arrives second does not
send the message. To check it, I have run that script without the random
sleep: just msgr1 waits 1s and msgr2 waits 2s, so msgr1 always ends first.
I expect a result like this (which I am currently getting with queues):

Messenger 1 waits for 1.0s
Messenger 1 waits for 1.0s
Messenger 2 waits for 2.0s
Messenger 1 waits for 1.0s
Messenger 1 waits for 1.0s
Messenger 2 waits for 2.0s
Messenger 1 waits for 1.0s
...

but instead I got this:

Messenger 1 waits for 1.0s
Messenger 1 waits for 1.0s
Messenger 1 waits for 1.0s
Messenger 1 waits for 1.0s
Messenger 1 waits for 1.0s
...





> Note that in the above example, in "msgs", you can technically pass
> the coroutine objects directly to asyncio.wait(), as they will be
> implicitly converted to tasks. However, we decided to deprecate that
> functionality in Python 3.8 since it can be rather confusing. So
> creating and passing the tasks is a better practice.
>

Thanks for that info, I am still trying to grasp the best practices
surrounding mostly the explicitness in async.


> > Again, it's quite likely I am not seeing something obvious, but I didn't
> know where else to ask.
>
> If you're not mostly certain or relatively inexperienced with the
> specific area that the question pertains to, I'd recommend asking on
> python-list first (or another Python user community). python-ideas is
> primarily intended for new feature proposals/suggestions. Although if
> you've tried other resources and haven't found an answer, it's
> perfectly fine to ask a question as part of the suggestion post.
>
>
>
Original question, as posted in python-ideas:


> On Mon, Jun 22, 2020 at 6:24 PM Pablo Alcain 
> wrote:
> >
> > Hey everyone. I have been looking into asyncio lately, and even though I
> have had my fair share of work, I still have some of it very shaky, so
> first of all forgive me if what I am saying here is already implemented and
> I totally missed it (so far, it looks *really* likely).
> >
> > Basically this is the situation: I have an application that 

Awaiting coroutines inside the debugger (PDB)

2019-11-24 Thread darthdeus
Hi everyone,

this question is sort of in response to this issue:
https://github.com/gotcha/ipdb/issues/174. I've noticed that support was
recently added for an asyncio REPL (run via `python -m
asyncio`), which allows the use of `await` directly on coroutine objects
in the REPL.

But this does not seem to work when in PDB (and consequently in iPDB)
when running `import pdb; pdb.set_trace()`.

Is there any workaround to get `await` working in PDB, or any simple way
to synchronously wait while in the debugger?
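
One workaround sketch: run the coroutine to completion on a fresh event
loop in a worker thread, so it also works while the main loop is blocked
at the breakpoint. `await_in_pdb` is a hypothetical helper name, and this
only suits coroutines that don't touch the paused loop's objects:

import asyncio
import concurrent.futures

def await_in_pdb(coro):
    # run `coro` synchronously from blocking code, e.g. at a pdb prompt
    def runner():
        loop = asyncio.new_event_loop()
        try:
            return loop.run_until_complete(coro)
        finally:
            loop.close()
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        return pool.submit(runner).result()
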
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: asyncio await different coroutines on the same socket?

2018-10-05 Thread Russell Owen
On Oct 3, 2018, Ian Kelly wrote
(in 
article):

> On Wed, Oct 3, 2018 at 7:47 AM Russell Owen  wrote:
> > Using asyncio I am looking for a simple way to await multiple events where
> > notification comes over the same socket (or other serial stream) in
> > arbitrary
> > order. For example, suppose I am communicating with a remote device that can
> > run different commands simultaneously and I don't know which command will
> > finish first. I want to do this:
> >
> > coro1 = start(command1)
> > coro2 = start(command2)
> > asyncio.gather(coro1, coro2)
> >
> > where either command may finish first. I’m hoping for a simple and
> > idiomatic way to read the socket and tell each coroutine it is done. So far
> > everything I have come up with is ugly, using multiple layers of "async
> > def”, keeping a record of Tasks that are waiting and calling "set_result"
> > on those Tasks when finished. Also Task isn’t even documented to have the
> > set_result method (though "future" is)
>
> Because Tasks are used to wrap coroutines, and the result of the Task
> should be determined by the coroutine, not externally.
>
> Instead of tracking tasks (that's what the event loop is for) I would
> suggest tracking futures instead. Have start(command1) return a future
> (or create a future that it will await on itself) that is not a task.
> Whenever a response from the socket is parsed, that code would then
> look up the corresponding future and call set_result on it. It might
> look something like this:
>
> class Client:
>     def __init__(self):
>         self._futures = {}
>
>     async def open(self, host, port):
>         self.reader, self.writer = await asyncio.open_connection(host, port)
>         asyncio.create_task(self.read_loop())
>
>     async def read_loop(self):
>         while not self.reader.at_eof():
>             # read() stands in for whatever framing the protocol uses;
>             # get_response_id() is application-specific
>             response = await self.reader.read()
>             id = get_response_id(response)
>             self._futures.pop(id).set_result(response)
>
>     def start(self, command):
>         future = asyncio.Future()
>         self._futures[get_command_id(command)] = future
>         self.writer.write(command)
>         return future
>
> In this case start() is not a coroutine but its result is a future and
> can be awaited.

That is exactly what I was looking for. Thank you very much!

-- Russell

(My apologies for double posting -- I asked this question again today because 
I did not think my original question -- this one -- had gone through).


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: asyncio await different coroutines on the same socket?

2018-10-03 Thread Ian Kelly
On Wed, Oct 3, 2018 at 7:47 AM Russell Owen  wrote:
> Using asyncio I am looking for a simple way to await multiple events where
> notification comes over the same socket (or other serial stream) in arbitrary
> order. For example, suppose I am communicating with a remote device that can
> run different commands simultaneously and I don't know which command will
> finish first. I want to do this:
>
> coro1 = start(command1)
> coro2 = start(command2)
> asyncio.gather(coro1, coro2)
>
> where either command may finish first. I’m hoping for a simple and
> idiomatic way to read the socket and tell each coroutine it is done. So far
> everything I have come up with is ugly, using multiple layers of "async
> def”, keeping a record of Tasks that are waiting and calling "set_result"
> on those Tasks when finished. Also Task isn’t even documented to have the
> set_result method (though "future" is)

Because Tasks are used to wrap coroutines, and the result of the Task
should be determined by the coroutine, not externally.

Instead of tracking tasks (that's what the event loop is for) I would
suggest tracking futures instead. Have start(command1) return a future
(or create a future that it will await on itself) that is not a task.
Whenever a response from the socket is parsed, that code would then
look up the corresponding future and call set_result on it. It might
look something like this:

class Client:
    def __init__(self):
        self._futures = {}

    async def open(self, host, port):
        self.reader, self.writer = await asyncio.open_connection(host, port)
        asyncio.create_task(self.read_loop())

    async def read_loop(self):
        while not self.reader.at_eof():
            # read() stands in for whatever framing the protocol uses;
            # get_response_id() is application-specific
            response = await self.reader.read()
            id = get_response_id(response)
            self._futures.pop(id).set_result(response)

    def start(self, command):
        future = asyncio.Future()
        self._futures[get_command_id(command)] = future
        self.writer.write(command)
        return future

In this case start() is not a coroutine but its result is a future and
can be awaited.
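
For example, usage might look like this (the host/port and commands are
hypothetical):

async def demo():
    client = Client()
    await client.open('localhost', 1234)
    # whichever response is parsed first resolves first, regardless of
    # the order the commands were sent
    r1, r2 = await asyncio.gather(client.start(command1),
                                  client.start(command2))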
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: asyncio await different coroutines on the same socket?

2018-10-03 Thread Léo El Amri via Python-list
Hello Russell,

On 03/10/2018 15:44, Russell Owen wrote:
> Using asyncio I am looking for a simple way to await multiple events where 
> notification comes over the same socket (or other serial stream) in arbitrary 
> order. For example, suppose I am communicating with a remote device that can 
> run different commands simultaneously and I don't know which command will 
> finish first. I want to do this:
> 
> coro1 = start(command1)
> coro2 = start(command2)
> asyncio.gather(coro1, coro2)
> 
> where either command may finish first. I’m hoping for a simple and 
> idiomatic way to read the socket and tell each coroutine it is done. So far 
> everything I have come up with is ugly, using multiple layers of "async 
> def”, keeping a record of Tasks that are waiting and calling "set_result" 
> on those Tasks when finished. Also Task isn’t even documented to have the 
> set_result method (though "future" is)
I don't really get what you want to achieve. Do you want to signal other
coroutines that one of the others finished?

From what I understand, you want to have several coroutines reading on
the same socket "simultaneously", and you want to stop all of them once
one of them is finished. Am I getting it right?

-- 
Léo
-- 
https://mail.python.org/mailman/listinfo/python-list


asyncio await different coroutines on the same socket?

2018-10-03 Thread Russell Owen

Using asyncio I am looking for a simple way to await multiple events where 
notification comes over the same socket (or other serial stream) in arbitrary 
order. For example, suppose I am communicating with a remote device that can 
run different commands simultaneously and I don't know which command will 
finish first. I want to do this:

coro1 = start(command1)
coro2 = start(command2)
asyncio.gather(coro1, coro2)

where either command may finish first. I’m hoping for a simple and 
idiomatic way to read the socket and tell each coroutine it is done. So far 
everything I have come up with is ugly, using multiple layers of "async 
def”, keeping a record of Tasks that are waiting and calling "set_result" 
on those Tasks when finished. Also Task isn’t even documented to have the 
set_result method (though "future" is)

Is there a simple, idiomatic way to do this?

-- Russell


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Problem with coroutines old-style / new-style usage and features

2018-02-01 Thread Yahya Abou 'Imran via Python-list
>> @asyncio.coroutine
>> def recorder():
>>     dialog = []
>>     while True:
>>         sent = yield dialog
>>         if sent is not None:
>>             name, things = sent
>>             dialog.append(f'{name} says : {things}')
>This is not an asyncio coroutine. This is just a normal generator that
> you're sending to. So you should probably remove the decorator to
> prevent confusion. For that matter I'm not really sure why this isn't
> just a class with synchronous "add" and "get" methods.

Yeah, it's true. I would just instantiate it, send to it and ask for a result.
Having to pass it to next() to start it and to send None to retrieve a value is
not really explicit.

[...]
>>It works well, but I was wondering if I could turn recorder into a new style 
>>coroutine...
>Why? It doesn't do anything asynchronously.

Hey, you're right... I was just, you know... "Async is so cool, let's just 
async everything!"

>If it were a class, then you could make the individual methods be
> coroutines if desired and await those.

Thanks for your advice Ian!
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Problem with coroutines old-style / new-style usage and features

2018-02-01 Thread Ian Kelly
On Thu, Feb 1, 2018 at 5:38 AM, Yahya Abou 'Imran via Python-list
 wrote:
> Hi guys.
>
> I am discovering coroutines and asynchronous programming, and I have a little
> problem with a little example I'm coding myself as an exercise.
>
> Let's say you take two guys in the street: Dave and Bryan.
> You ask Dave to count from 1 to 50, 1 by 1. He will do it fast.
> And you ask Bryan to count from 208 to 166 in reverse order, 7 by 7! It
> will take him some time between each number to think about it.
>
> Now I have a recorder which is able to recognize voices. I use it to record
> both of them counting at the same time.
>
> Here is the recorder:
>
>
> @asyncio.coroutine
> def recorder():
>     dialog = []
>     while True:
>         sent = yield dialog
>         if sent is not None:
>             name, things = sent
>             dialog.append(f'{name} says : {things}')

This is not an asyncio coroutine. This is just a normal generator that
you're sending to. So you should probably remove the decorator to
prevent confusion. For that matter I'm not really sure why this isn't
just a class with synchronous "add" and "get" methods.

> It is storing the dialog, and you can ask it for it later by sending None
> to it.
>
> For the calculation, I'm using an async generator:
>
>
> async def calcul_mental(range_args, name, timeout=0.2):
>     for i in range(*range_args):
>         await asyncio.sleep(timeout)
>         yield name, i
>
>
> To link the two, I came up with this little coroutine:
>
>
> async def record(recorder, gen):
>     async for name, i in gen:
>         recorder.send([name, i])
>
>
> And my main:
>
>
> def main():
>
>     g1 = calcul_mental([1, 51],
>                        name='Dave',
>                        timeout=0.2)
>
>     g2 = calcul_mental([208, 165, -7],
>                        name='Bryan',
>                        timeout=2)
>
>     r = recorder()
>     r.send(None)
>
>     coros = asyncio.gather(record(r, g1), record(r, g2))
>     loop = asyncio.get_event_loop()
>     loop.run_until_complete(coros)
>
>     dialog = r.send(None)
>     for line in dialog:
>         print(line)
>
>
> It works well, but I was wondering if I could turn recorder into a new style 
> coroutine...

Why? It doesn't do anything asynchronously.

> The problems are:
> - I can't await for an async generator;
> - I can't let an await alone to send data to it;
> - I can't turn it into an AsyncGenerator because it will lose the .send()
> method.
>
> I think it's just a problem of design, but I wasn't able to solve it myself.
>
> Any thoughts about it?

If it were a class, then you could make the individual methods be
coroutines if desired and await those.
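
Something like this minimal sketch, say (my names, not from your code):

class Recorder:
    def __init__(self):
        self._dialog = []

    async def add(self, name, things):
        self._dialog.append(f'{name} says : {things}')

    async def get(self):
        return list(self._dialog)

record() would then do `await recorder.add(name, i)`, and the final
readout becomes `await recorder.get()`.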
-- 
https://mail.python.org/mailman/listinfo/python-list


Problem with coroutines old-style / new-style usage and features

2018-02-01 Thread Yahya Abou 'Imran via Python-list
Hi guys.

I am discovering coroutines and asynchronous programming, and I have a little
problem with a little example I'm coding myself as an exercise.

Let's say you take two guys in the street: Dave and Bryan.
You ask Dave to count from 1 to 50, 1 by 1. He will do it fast.
And you ask Bryan to count from 208 to 166 in reverse order, 7 by 7! It will
take him some time between each number to think about it.

Now I have a recorder which is able to recognize voices. I use it to record both
of them counting at the same time.

Here is the recorder:


@asyncio.coroutine
def recorder():
    dialog = []
    while True:
        sent = yield dialog
        if sent is not None:
            name, things = sent
            dialog.append(f'{name} says : {things}')


It is storing the dialog, and you can ask it for it later by sending None to
it.

For the calculation, I'm using an async generator:


async def calcul_mental(range_args, name, timeout=0.2):
    for i in range(*range_args):
        await asyncio.sleep(timeout)
        yield name, i


To link the two, I came up with this little coroutine:


async def record(recorder, gen):
    async for name, i in gen:
        recorder.send([name, i])


And my main:


def main():

    g1 = calcul_mental([1, 51],
                       name='Dave',
                       timeout=0.2)

    g2 = calcul_mental([208, 165, -7],
                       name='Bryan',
                       timeout=2)

    r = recorder()
    r.send(None)

    coros = asyncio.gather(record(r, g1), record(r, g2))
    loop = asyncio.get_event_loop()
    loop.run_until_complete(coros)

    dialog = r.send(None)
    for line in dialog:
        print(line)


It works well, but I was wondering if I could turn recorder into a new style 
coroutine...

The problems are:
- I can't await for an async generator;
- I can't let an await alone to send data to it;
- I can't turn it into an AsyncGenerator because it will lose the .send()
method.

I think it's just a problem of design, but I wasn't able to solve it myself.

Any thoughts about it?

Thanks!
-- 
https://mail.python.org/mailman/listinfo/python-list


coroutines

2017-08-03 Thread ast

Hello

Here is code for a simple demultiplexer coroutine:

def demultiplexer(target1, target2):
    while True:
        data = yield
        target1.send(data)
        target2.send(data)

When data is sent to target1, what does CPython do
immediately after?


1- Go on executing demultiplexer, so send the data
to target2, then pause demultiplexer when it reaches
the line data = yield, then execute coroutine target1, then
execute coroutine target2


or

2- Go and execute the target1 coroutine, then resume
demultiplexer, so send the data to target2, go to target2,
then resume demultiplexer and stop
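
A quick way to check, with hypothetical printing targets:

def target(name):
    while True:
        data = yield
        print(f'{name} got {data!r}')

t1, t2 = target('t1'), target('t2')
next(t1)
next(t2)
d = demultiplexer(t1, t2)
next(d)
d.send('hello')

This prints "t1 got 'hello'" and then "t2 got 'hello'": target1.send(data)
runs target1 synchronously to its next yield before demultiplexer
continues on to target2, i.e. behaviour 2.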

--
https://mail.python.org/mailman/listinfo/python-list


Decorating coroutines

2017-07-15 Thread Michele Simionato
I have just released version 4.1.1 of the decorator module. The new feature is 
that it is possible to decorate coroutines. Here is an example of how
to define a decorator `log_start_stop` that can be used to trace coroutines:

$ cat x.py
import time
import logging
from asyncio import get_event_loop, sleep, wait
from decorator import decorator


@decorator
async def log_start_stop(coro, *args, **kwargs):
    logging.info('Starting %s%s', coro.__name__, args)
    t0 = time.time()
    await coro(*args, **kwargs)
    dt = time.time() - t0
    logging.info('Ending %s%s after %d seconds', coro.__name__, args, dt)


@log_start_stop
async def task(n):  # a do nothing task
    for i in range(n):
        await sleep(1)

if __name__ == '__main__':
    logging.basicConfig(level=logging.INFO)
    tasks = [task(3), task(2), task(1)]
    get_event_loop().run_until_complete(wait(tasks))

This will print something like this:

~$ python3 x.py
INFO:root:Starting task(1,)
INFO:root:Starting task(3,)
INFO:root:Starting task(2,)
INFO:root:Ending task(1,) after 1 seconds
INFO:root:Ending task(2,) after 2 seconds
INFO:root:Ending task(3,) after 3 seconds

The trouble is that at work I am forced to maintain compatibility with Python 
2.7, so I do not have significant code using coroutines. If there are people 
out there which use a lot of coroutines and would like to decorate them, I 
invite you to try out the decorator module and give me some feedback if you 
find errors or strange behaviors. I am not aware of any issues, but one is 
never sure with new features.

Thanks for your help,

 Michele Simionato
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: asyncio, coroutines, etc. and simultaneous execution

2015-08-24 Thread Sven R. Kunze

On 23.08.2015 23:43, Charles Hixson wrote:
If I understand correctly asyncio, coroutines, etc. (and, of course,
Threads) are not simultaneously executed, and if one wants that
one must still use multiprocessing.  But I'm not sure. The note is
still there at the start of threading, so I'm pretty sure about that one.


The requirement that coroutines always be awaited seems to confirm 
this, but doesn't really say so explicitly. And the concurrent.futures 
can clearly be either, depending on your choices, but the chart in 
18.5.3.1.3 Example: Chain coroutines is of a kind that I am more 
familiar with in the context of multiprocessing.  (E.g., the only gap 
in the chart, which extends across all headings is when a result is 
being waited for during a sleep.)  For threaded execution I would 
expect there to be a gap whenever processing is shifted from one 
column to another.


If someone has authority to edit the documentation a comment like:

If you want your application to make better use of the computational 
resources of multi-core machines, you are advised to use 
multiprocessing 
<https://docs.python.org/3.5/library/multiprocessing.html#module-multiprocessing> 
or concurrent.futures.ProcessPoolExecutor 
<https://docs.python.org/3.5/library/concurrent.futures.html#concurrent.futures.ProcessPoolExecutor>. 
However, threading is still an appropriate model if you want to run 
multiple I/O-bound tasks simultaneously.


(to quote from the threading documentation) would be helpful at or 
very near the top of each of the appropriate modules.  It would also 
be useful if the modules that were definitely intended to result in 
simultaneous execution, when feasible, were so marked quite near the top.


OTOH, I may be mistaken about coroutines.  I haven't been able to tell.

P.S.:  I do note that the threading comment was a "CPython
implementation detail", and not a part of the Python
specifications.  But CPython is, I believe, a sufficiently dominant
implementation that such details are quite important.





We already have a discussion on python-ideas where we collected many 
many aspects on this particular subject.


The results so far: 
https://mail.python.org/pipermail/python-ideas/2015-July/034813.html


I know the table is a bit screwed but you will be able to handle that.

Do you have anything you want to add to it?

Cheers,
Sven
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: asyncio, coroutines, etc. and simultaneous execution

2015-08-23 Thread Akira Li
Charles Hixson  writes:

> If I understand correctly asyncio, coroutines, etc. (and, of course,
> Threads) are not simultaneously executed, and if one wants that
> one must still use multiprocessing.  But I'm not sure.  The note is
> still there at the start of threading, so I'm pretty sure about that
> one.

Due to the GIL (used by CPython and PyPy; not used by Jython, IronPython,
pypy-stm) only one thread executes Python bytecode at a time. The GIL can be
released on I/O or in a C extension such as numpy, lxml, regex. This is
true for any Python module or concept you use.

Unrelated: use concurrent and parallel execution instead of
"simultaneously executed."

Parallelism might make a program faster (it implies that hardware
supports it).

Concurrency is a way to structure the code. The same concurrent program
can run in parallel and without parallelism.

Recommended: Rob Pike's talk "Concurrency is not Parallelism
(it's better!)"; see also
http://stackoverflow.com/questions/1050222/concurrency-vs-parallelism-what-is-the-difference


> The requirement that coroutines always be awaited seems to confirm
> this, but doesn't really say so explicitly. And the concurrent.futures
> can clearly be either, depending on your choices, but the chart in
> 18.5.3.1.3 Example: Chain coroutines is of a kind that I am more
> familiar with in the context of multiprocessing. (E.g., the only gap
> in the chart, which extends across all headings is when a result is
> being waited for during a sleep.)  For threaded execution I would
> expect there to be a gap whenever processing is shifted from one
> column to another.
>
> If someone has authority to edit the documentation a comment like:

Anybody can suggest a patch
https://docs.python.org/devguide/docquality.html

> If you want your application to make better use of the computational
> resources of multi-core machines, you are advised to use
> multiprocessing
> <https://docs.python.org/3.5/library/multiprocessing.html#module-multiprocessing>
> or concurrent.futures.ProcessPoolExecutor
> <https://docs.python.org/3.5/library/concurrent.futures.html#concurrent.futures.ProcessPoolExecutor>.
>  However,
> threading is still an appropriate model if you want to run multiple
> I/O-bound tasks simultaneously.
>
> (to quote from the threading documentation) would be helpful at or
> very near the top of each of the appropriate modules.  It would also
> be useful if the modules that were definitely intended to result in
> simultaneous execution, when feasible, were so marked quite near the
> top.

It is not necessary that multiprocessing will make your code faster on a
multi-core machine. It may make it slower depending on the task.

Performance optimization is a vast topic. Short remarks such as you've
suggested are likely misleading.

> OTOH, I may be mistaken about coroutines.  I haven't been able to tell.

Many cooperative multitasking implementations use a *single*
thread. There is a way to offload blocking code e.g.,
loop.run_in_executor() in asyncio.
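
For instance, a minimal sketch, with a made-up blocking_io() standing in
for any blocking call:

import asyncio
import time

def blocking_io():
    time.sleep(1)  # stands in for any blocking call
    return 'done'

async def main():
    loop = asyncio.get_event_loop()
    # run blocking_io in the default executor (a thread pool) without
    # blocking the event loop
    result = await loop.run_in_executor(None, blocking_io)
    print(result)

asyncio.get_event_loop().run_until_complete(main())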


-- 
https://mail.python.org/mailman/listinfo/python-list


asyncio, coroutines, etc. and simultaneous execution

2015-08-23 Thread Charles Hixson
If I understand correctly asyncio, coroutines, etc. (and, of course,
Threads) are not simultaneously executed, and if one wants that one
must still use multiprocessing.  But I'm not sure.  The note is still
there at the start of threading, so I'm pretty sure about that one.


The requirement that coroutines always be awaited seems to confirm this, 
but doesn't really say so explicitly. And the concurrent.futures can 
clearly be either, depending on your choices, but the chart in 
18.5.3.1.3 Example: Chain coroutines is of a kind that I am more 
familiar with in the context of multiprocessing. (E.g., the only gap in 
the chart, which extends across all headings is when a result is being 
waited for during a sleep.)  For threaded execution I would expect there 
to be a gap whenever processing is shifted from one column to another.


If someone has authority to edit the documentation a comment like:

If you want your application to make better use of the computational 
resources of multi-core machines, you are advised to use multiprocessing 
<https://docs.python.org/3.5/library/multiprocessing.html#module-multiprocessing> 
or concurrent.futures.ProcessPoolExecutor 
<https://docs.python.org/3.5/library/concurrent.futures.html#concurrent.futures.ProcessPoolExecutor>. 
However, threading is still an appropriate model if you want to run 
multiple I/O-bound tasks simultaneously.


(to quote from the threading documentation) would be helpful at or very 
near the top of each of the appropriate modules.  It would also be 
useful if the modules that were definitely intended to result in 
simultaneous execution, when feasible, were so marked quite near the top.


OTOH, I may be mistaken about coroutines.  I haven't been able to tell.

P.S.:  I do note that the threading comment was a "CPython
implementation detail", and not a part of the Python specifications.
But CPython is, I believe, a sufficiently dominant implementation that
such details are quite important.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: asyncio: What is the difference between tasks, futures, and coroutines?

2015-05-08 Thread Chris Angelico
On Fri, May 8, 2015 at 9:53 PM, Dave Angel  wrote:
> One thing newbies get tripped up by is having some path through their code
> that doesn't explicitly return.  And in Python that path therefore returns
> None.  It's most commonly confusing when there are nested ifs, and one of
> the "inner ifs" doesn't have an else clause.
>
> Anyway, it's marginally more useful to that newbie if the compiler would
> produce an error instead of later seeing a runtime error due to an
> unexpected None result.
>
> I don't think Python would be improved by detecting such a condition and
> reporting on it.  That's a job for a linter, or a style guide program.

They're not hard for linters to notice. After all, there's a clear
difference of intent between "return" (or just falling off the end of
the function) and "return None", even though they compile to the same
byte code. Though if you start enforcing that in your checkin policy,
you might have to deal with this kind of thing:

def raise_with_code(code):
    raise SpamError("Error %d in spamination: %s" % (code,
                    errortext.get(code, "")))

def spaminate_text(f):
    rc = low_level_spamination_function(f)
    if not rc: return low_level_get_spamination_result()
    raise_with_code(rc)

Clearly one branch returns a value... but figuring out that the other
one always raises isn't easy. (Though I suppose in this case it could
be dealt with by having a function that constructs the error, and then
you "raise make_spam_error(rc)" instead.)

You're definitely right that Python shouldn't check for it.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: asyncio: What is the difference between tasks, futures, and coroutines?

2015-05-08 Thread Dave Angel

On 05/08/2015 02:42 AM, Chris Angelico wrote:

On Fri, May 8, 2015 at 4:36 PM, Rustom Mody  wrote:

On Friday, May 8, 2015 at 10:39:38 AM UTC+5:30, Chris Angelico wrote:

Why have the concept of a procedure?


On Friday, Chris Angelico ALSO wrote:

With print(), you have a conceptual procedure...


So which do you want to stand by?


A procedure, in Python, is simply a function which returns None.
That's all. It's not any sort of special concept. It doesn't need to
be taught. If your students are getting confused by it, stop teaching
it!


One thing newbies get tripped up by is having some path through their 
code that doesn't explicitly return.  And in Python that path therefore 
returns None.  It's most commonly confusing when there are nested ifs, 
and one of the "inner ifs" doesn't have an else clause.
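
For example, a sketch of the kind of code that trips them up:

def classify(x):
    if x >= 0:
        if x > 10:
            return 'big'
        # no "else" here: 0 <= x <= 10 falls off the end and returns None
    else:
        return 'negative'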


Anyway, it's marginally more useful to that newbie if the compiler would 
produce an error instead of later seeing a runtime error due to an 
unexpected None result.


I don't think Python would be improved by detecting such a condition and 
reporting on it.  That's a job for a linter, or a style guide program.


No different than the compile time checks for variable type that most 
languages impose.  They don't belong in Python.


--
DaveA
--
https://mail.python.org/mailman/listinfo/python-list


Re: asyncio: What is the difference between tasks, futures, and coroutines?

2015-05-07 Thread Chris Angelico
On Fri, May 8, 2015 at 4:36 PM, Rustom Mody  wrote:
> On Friday, May 8, 2015 at 10:39:38 AM UTC+5:30, Chris Angelico wrote:
>> Why have the concept of a procedure?
>
> On Friday, Chris Angelico ALSO wrote:
>> With print(), you have a conceptual procedure...
>
> So which do you want to stand by?

A procedure, in Python, is simply a function which returns None.
That's all. It's not any sort of special concept. It doesn't need to
be taught. If your students are getting confused by it, stop teaching
it!

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: asyncio: What is the difference between tasks, futures, and coroutines?

2015-05-07 Thread Rustom Mody
On Friday, May 8, 2015 at 10:39:38 AM UTC+5:30, Chris Angelico wrote:
> Why have the concept of a procedure? 

On Friday, Chris Angelico ALSO wrote:
> With print(), you have a conceptual procedure...

So which do you want to stand by?


Just to be clear I am not saying python should be any different on this front.

Gödel's (2nd) theorem guarantees that no formalism (aka programming language in 
our case)
can ever be complete and so informal borrowing is inevitable.
It's just that Pascal, Fortran, Basic, by ingesting this informal requirement into
the formal language, make THIS aspect easier to learn/teach...
... at the cost of eleventeen others

[Regular expressions in Fortran anyone?]
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: asyncio: What is the difference between tasks, futures, and coroutines?

2015-05-07 Thread Chris Angelico
On Fri, May 8, 2015 at 2:53 PM, Rustom Mody  wrote:
> Yeah I know
> And if python did not try to be so clever, I'd save some time with
> student-surprises
>
>> In a program, an expression
>> statement simply discards its result, whether it's None or 42 or
>> [1,2,3] or anything else. You could write an interactive interpreter
>> that has some magic that recognizes that certain functions always
>> return None (maybe by checking their annotations), and omits printing
>> their return values, while still printing the return values of other
>> functions.
>
>
> Hoo Boy!  You seem to be in the 'the-more-the-better' (of magic) camp

No way! I wouldn't want the interactive interpreter to go too magical.
I'm just saying that it wouldn't break Python to have it do things
differently.

>> I'm not sure what it'd gain you, but it wouldn't change the
>> concepts or semantics surrounding None returns.
>
> It would sure help teachers who get paid by the hour and would rather spend
> time on technical irrelevantia than grapple with significant concepts

Why have the concept of a procedure? Python's rule is simple: Every
function either returns a value or raises an exception. (Even
generators. When you call a generator function, you get back a return
value which is the generator state object.) The procedure/function
distinction is irrelevant.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: asyncio: What is the difference between tasks, futures, and coroutines?

2015-05-07 Thread Rustom Mody
On Friday, May 8, 2015 at 10:24:06 AM UTC+5:30, Rustom Mody wrote:

> get is very much a function and the None return is semantically significant.
> print is just round peg -- what you call conceptual function -- stuffed into
> square hole -- function the only available syntax-category

Sorry "Conceptual procedure" of course
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: asyncio: What is the difference between tasks, futures, and coroutines?

2015-05-07 Thread Rustom Mody
On Friday, May 8, 2015 at 10:04:02 AM UTC+5:30, Chris Angelico wrote:
> On Fri, May 8, 2015 at 2:06 PM, Rustom Mody wrote:
> >> > If the classic Pascal (or Fortran or Basic) sibling balanced abstractions
> >> > of function-for-value procedure-for-effect were more in the collective
> >> > consciousness rather than C's travesty of function, things might not have
> >> > been so messy.
> >>
> >> I'm not really sure that having distinct procedures, as opposed to 
> >> functions
> >> that you just ignore their return result, makes *such* a big difference. 
> >> Can
> >> you explain what is the difference between these?
> >>
> >> sort(stuff)  # A procedure.
> >> sort(stuff)  # ignore the function return result
> >>
> >> And why the first is so much better than the second?
> >
> > Here are 3 None-returning functions/methods in python.
> > ie semantically the returns are identical. Are they conceptually identical?
> >
> > >>> x=print(1)
> > 1
> > >>> x
> > >>> ["hello",None].__getitem__(1)
> > >>> {"a":1, "b":2}.get("c")
> 
> 
> Your second example is a poor one, as it involves calling a dunder
> method. But all you've proven with that is that the empty return value
> of Python is a real value, and can thus be stored and retrieved.
> 
> With print(), you have a conceptual procedure - it invariably returns
> None, so it'll normally be called in contexts that don't care about
> the return value. What about sys.stdout.write(), though? That's most
> often going to be called procedurally - you just write your piece and
> move on - but it has a meaningful return value. In normal usage, both
> are procedures. The ability to upgrade a function from "always returns
> None" to "returns some occasionally-useful information" is one of the
> strengths of a unified function/procedure system.
> 
> With .get(), it's actually nothing to do with procedures vs functions,

That's backwards.

get is very much a function and the None return is semantically significant.
print is just round peg -- what you call conceptual function -- stuffed into
square hole -- function the only available syntax-category

> but more to do with Python's use of None where C would most likely use
> NULL. By default, .get() uses None as its default return value, so
> something that isn't there will be treated as None. In the same way as
> your second example, that's just a result of None being a real value
> that you can pass around - nothing more special than that.
> 
> The other thing you're seeing here is that the *interactive
> interpreter* treats None specially. 


Yeah I know
And if python did not try to be so clever, I'd save some time with
student-surprises

> In a program, an expression
> statement simply discards its result, whether it's None or 42 or
> [1,2,3] or anything else. You could write an interactive interpreter
> that has some magic that recognizes that certain functions always
> return None (maybe by checking their annotations), and omits printing
> their return values, while still printing the return values of other
> functions.


Hoo Boy!  You seem to be in the 'the-more-the-better' (of magic) camp


> I'm not sure what it'd gain you, but it wouldn't change the
> concepts or semantics surrounding None returns.

It would sure help teachers who get paid by the hour and would rather spend
time on technical irrelevantia than grapple with significant concepts
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: asyncio: What is the difference between tasks, futures, and coroutines?

2015-05-07 Thread Chris Angelico
On Fri, May 8, 2015 at 2:06 PM, Rustom Mody  wrote:
>> > If the classic Pascal (or Fortran or Basic) sibling balanced abstractions
>> > of function-for-value procedure-for-effect were more in the collective
>> > consciousness rather than C's travesty of function, things might not have
>> > been so messy.
>>
>> I'm not really sure that having distinct procedures, as opposed to functions
>> that you just ignore their return result, makes *such* a big difference. Can
>> you explain what is the difference between these?
>>
>> sort(stuff)  # A procedure.
>> sort(stuff)  # ignore the function return result
>>
>> And why the first is so much better than the second?
>
> Here are 3 None-returning functions/methods in python.
> ie semantically the returns are identical. Are they conceptually identical?
>
> >>> x=print(1)
> 1
> >>> x
> >>> ["hello",None].__getitem__(1)
> >>> {"a":1, "b":2}.get("c")


Your second example is a poor one, as it involves calling a dunder
method. But all you've proven with that is that the empty return value
of Python is a real value, and can thus be stored and retrieved.

With print(), you have a conceptual procedure - it invariably returns
None, so it'll normally be called in contexts that don't care about
the return value. What about sys.stdout.write(), though? That's most
often going to be called procedurally - you just write your piece and
move on - but it has a meaningful return value. In normal usage, both
are procedures. The ability to upgrade a function from "always returns
None" to "returns some occasionally-useful information" is one of the
strengths of a unified function/procedure system.

With .get(), it's actually nothing to do with procedures vs functions,
but more to do with Python's use of None where C would most likely use
NULL. By default, .get() uses None as its default return value, so
something that isn't there will be treated as None. In the same way as
your second example, that's just a result of None being a real value
that you can pass around - nothing more special than that.

The other thing you're seeing here is that the *interactive
interpreter* treats None specially. In a program, an expression
statement simply discards its result, whether it's None or 42 or
[1,2,3] or anything else. You could write an interactive interpreter
that has some magic that recognizes that certain functions always
return None (maybe by checking their annotations), and omits printing
their return values, while still printing the return values of other
functions. I'm not sure what it'd gain you, but it wouldn't change the
concepts or semantics surrounding None returns.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: asyncio: What is the difference between tasks, futures, and coroutines?

2015-05-07 Thread Rustom Mody
On Wednesday, May 6, 2015 at 6:41:38 PM UTC+5:30, Dennis Lee Bieber wrote:
> On Tue, 5 May 2015 21:47:17 -0700 (PDT), Rustom Mody declaimed the following:
> 
> >If the classic Pascal (or Fortran or Basic) sibling balanced abstractions of 
> >function-for-value
> >procedure-for-effect were more in the collective consciousness rather than 
> >C's
> >travesty of function, things might not have been so messy.
> >
>   I suspect just the term "subprogram" (as in "function subprogram" and
> "subroutine subprogram") would confuse a lot of cubs these days...
> 
> >C didn't start the mess of mixing procedure and function -- Lisp/Apl did.
> >Nor the confusion of = for assignment; Fortran did that.
> 
>   I don't think you can blame FORTRAN for that, given that it was one of
> the first of the higher level languages, and had no confusion internally...

BLAME?? Ha You are being funny Dennis!

There are fat regulation-books that pilots need to follow. Breaking them can
make one culpable all the way to homicide.
The Wright brothers probably broke them all. Should we call them homicidal
maniacs?

Shakespeare sometimes has funny spellings. I guess he's illiterate for not 
turning on the spell-checker in Word?

Or [my favorite] Abraham Lincoln used the word 'negro'. So he's a racist?

Speaking more conceptually, there are pioneers and us ordinary folks.¹
The world as we know it is largely a creation of these pioneers.
And if you take them and stuff them into the statistically ordinary mold,
they fit badly.

That puts people, especially teachers, into a bind.
If I expect my students to be 1/100 as pioneering as Backus, Thomson, Ritchie
etc, I would be foolish.
And if I don't spell out all their mistakes in minute detail and pull them up
for repeating them, I'd not be doing due diligence.


I guess this is nowadays called the 'romantic' view.
Ask the family-members of any of these greats for the 'other' view  :-)
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: asyncio: What is the difference between tasks, futures, and coroutines?

2015-05-07 Thread Rustom Mody
On Wednesday, May 6, 2015 at 11:19:07 AM UTC+5:30, Steven D'Aprano wrote:
> On Wednesday 06 May 2015 14:47, Rustom Mody wrote:
> 
> > It strikes me that the FP crowd has stretched the notion of function
> > beyond recognition And the imperative/OO folks have distorted it beyond
> > redemption.
> 
> In what way?
> 
> 
> > And the middle road shown by Pascal has been overgrown with weeds for some
> > 40 years...
> 
> As much as I like Pascal, and am pleased to see someone defending it, I'm 
> afraid I have no idea what you mean by this.

There are many hats...

As a programmer what you say is fine.
As a language-designer maybe even required -- one would expect a language
designer to invoke Occam's razor and throw away needless distinctions (read
syntax categories)

But there are other hats.
Even if you disregard the teacher-hat as irrelevant, the learner-hat is 
universal
-- whatever you are master of now is because at some past point you walked its
learning curve.

> 
> 
> > If the classic Pascal (or Fortran or Basic) sibling balanced abstractions
> > of function-for-value procedure-for-effect were more in the collective
> > consciousness rather than C's travesty of function, things might not have
> > been so messy.
> 
> I'm not really sure that having distinct procedures, as opposed to functions 
> that you just ignore their return result, makes *such* a big difference. Can 
> you explain what is the difference between these?
> 
> sort(stuff)  # A procedure.
> sort(stuff)  # ignore the function return result
> 
> And why the first is so much better than the second?

Here are 3 None-returning functions/methods in python.
ie semantically the returns are identical. Are they conceptually identical?

>>> x=print(1)
1
>>> x
>>> ["hello",None].__getitem__(1)
>>> {"a":1, "b":2}.get("c")
>>> 



> 
> 
> 
> > Well... Dont feel right bashing C without some history...
> > 
> > C didn't start the mess of mixing procedure and function -- Lisp/Apl did.
> > Nor the confusion of = for assignment; Fortran did that.
> 
> Pardon, but = has been used for "assignment" for centuries, long before 
> George Boole and Ada Lovelace even started working on computer theory. Just 
> ask mathematicians:
> 
> "let y = 23"
> 
> Is that not a form of assignment?

Truth?? Let's see...

Here is a set of assignments (as you call them) that could occur in a school text
teaching business-math, viz how to calculate 'simple interest':

amt = prin + si
si = prin * n * rate/100.0 
# for instance
prin = 1000.0 
n = 4.0
rate = 12.0

Put it into Python and you get NameErrors.

Put it into Haskell or some such (after changing the comment char from '#' to
'--')
and you'll get amt and si.

Now you can view this operationally (which in some cases even the Haskell docs
do)
and say a bunch of ='s in Python is a sequence whereas in Haskell it's
'simultaneous', ie a set.

But this would miss the point, viz that in Python
amt = prin + si
denotes an *action*
whereas in Haskell it denotes a *fact*, and a bunch of facts that are modified
by being permuted would be a strange bunch!

Note I am not saying Haskell is right; particularly in its semantics of '='
there are serious issues:
http://blog.languager.org/2012/08/functional-programming-philosophical.html

Just that
x = y
in a functional/declarative/math framework means something quite different 
than in an imperative one
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: asyncio: What is the difference between tasks, futures, and coroutines?

2015-05-05 Thread Steven D'Aprano
On Wednesday 06 May 2015 14:47, Rustom Mody wrote:

> It strikes me that the FP crowd has stretched the notion of function
> beyond recognition And the imperative/OO folks have distorted it beyond
> redemption.

In what way?


> And the middle road shown by Pascal has been overgrown with weeds for some
> 40 years...

As much as I like Pascal, and am pleased to see someone defending it, I'm 
afraid I have no idea what you mean by this.


> If the classic Pascal (or Fortran or Basic) sibling balanced abstractions
> of function-for-value procedure-for-effect were more in the collective
> consciousness rather than C's travesty of function, things might not have
> been so messy.

I'm not really sure that having distinct procedures, as opposed to functions 
that you just ignore their return result, makes *such* a big difference. Can 
you explain what is the difference between these?

sort(stuff)  # A procedure.
sort(stuff)  # ignore the function return result

And why the first is so much better than the second?



> Well... Dont feel right bashing C without some history...
> 
> C didn't start the mess of mixing procedure and function -- Lisp/Apl did.
> Nor the confusion of = for assignment; Fortran did that.

Pardon, but = has been used for "assignment" for centuries, long before 
George Boole and Ada Lovelace even started working on computer theory. Just 
ask mathematicians:

"let y = 23"

Is that not a form of assignment?




-- 
Steve

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: asyncio: What is the difference between tasks, futures, and coroutines?

2015-05-05 Thread Rustom Mody
On Tuesday, May 5, 2015 at 11:15:42 PM UTC+5:30, Marko Rauhamaa wrote:
 
> Personally, I have never found futures a very useful idiom in any
> language (Scheme, Java, Python). Or more to the point, concurrency and
> the notion of a function don't gel well in my mind.

Interesting comment.

It strikes me that the FP crowd has stretched the notion of function beyond 
recognition
And the imperative/OO folks have distorted it beyond redemption.

And the middle road shown by Pascal has been overgrown with weeds for some 40 
years...

If the classic Pascal (or Fortran or Basic) sibling balanced abstractions of 
function-for-value
procedure-for-effect were more in the collective consciousness rather than C's
travesty of function, things might not have been so messy.

Well... Dont feel right bashing C without some history...

C didn't start the mess of mixing procedure and function -- Lisp/Apl did.
Nor the confusion of = for assignment; Fortran did that.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: asyncio: What is the difference between tasks, futures, and coroutines?

2015-05-05 Thread Terry Reedy

On 5/5/2015 1:46 PM, Ian Kelly wrote:

On Tue, May 5, 2015 at 9:22 AM, Paul  Moore  wrote:

I'm working my way through the asyncio documentation. I have got to the "Tasks and 
coroutines" section, but I'm frankly confused as to the difference between the 
various things described in that section: coroutines, tasks, and futures.

I think can understand a coroutine. Correct me if I'm wrong, but it's roughly 
"something that you can run which can suspend itself".

But I don't understand what a Future is. The document just says it's almost the same as a 
concurrent.futures.Future, which is described as something that "encapsulates the asynchronous 
execution of a callable". Which doesn't help a lot. In concurrent.futures, you don't create 
Futures, you get them back from submit(), but in the asyncio docs it looks like you can create them 
by hand (example "Future with run_until_complete"). And there's nothing that says what a 
Future is, just what it's like... :-(


Fundamentally, a future is a placeholder for something that isn't
available yet. You can use it to set a callback to be called when that
thing is available, and once it's available you can get that thing
from it.


A Task is a subclass of Future, but the documentation doesn't say what it *is*, but 
rather that it "schedules the execution of a coroutine". But that doesn't make 
sense to me - objects don't do things, they *are* things. I thought the event loop did 
the scheduling?


In asyncio, a Task is a Future that serves as a placeholder for the
result of a coroutine. The event loop manages callbacks, and that's
all it does. An event that it's been told to listen for occurs, and
the event loop calls the callback associated with that event. The Task
manages a coroutine's interaction with the event loop; when the
coroutine yields a future, the Task instructs the event loop to listen
for the completion of that future, setting a callback that will resume
the coroutine.


Reading between the lines, it seems that the event loop schedules Tasks (which 
makes sense) and that Tasks somehow wrap up coroutines - but I don't see *why* 
you need to wrap a task in a coroutine rather than just scheduling coroutines. 
And I don't see where Futures fit in - why not just wrap a coroutine in a 
Future, if it needs to be wrapped up at all?


The coroutines themselves are not that interesting of an interface;
all you can do with them is resume them. The asynchronous execution
done by asyncio is all based on futures. Because a coroutine can
easily be wrapped in a Task, this allows for coroutines to be used
anywhere a future is expected.

I don't know if I've done a good job explaining, but I hope this helps.


It helps me.  Thanks.



--
Terry Jan Reedy

--
https://mail.python.org/mailman/listinfo/python-list


Re: asyncio: What is the difference between tasks, futures, and coroutines?

2015-05-05 Thread Paul Moore
On Tuesday, 5 May 2015 18:48:09 UTC+1, Ian  wrote:
> Fundamentally, a future is a placeholder for something that isn't
> available yet. You can use it to set a callback to be called when that
> thing is available, and once it's available you can get that thing
> from it.

OK, that makes a lot of sense. There seem to be two distinct strands within 
the asyncio world, the callback model and the task/coroutine model. AIUI, 
coroutines/tasks are supposed to let you avoid callbacks.

So I guess in that model, a Future isn't something you should use directly in 
task-based code, because it works via callbacks. And yet tasks are futures, so 
it's not that simple :-)

> In asyncio, a Task is a Future that serves as a placeholder for the
> result of a coroutine. The event loop manages callbacks, and that's
> all it does.

Um, I thought it scheduled tasks?

> An event that it's been told to listen for occurs, and
> the event loop calls the callback associated with that event. The Task
> manages a coroutine's interaction with the event loop; when the
> coroutine yields a future, the Task instructs the event loop to listen
> for the completion of that future, setting a callback that will resume
> the coroutine.

OK, that (to an extent) explains how the callback and task worlds link 
together. That's useful.

> The coroutines themselves are not that interesting of an interface;
> all you can do with them is resume them. The asynchronous execution
> done by asyncio is all based on futures. Because a coroutine can
> easily be wrapped in a Task, this allows for coroutines to be used
> anywhere a future is expected.

And yet, futures are callback-based whereas tasks are suspension-point (yield 
from) based. I get a sense of what you're saying, but it's not very clear yet 
:-)
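
One way to make the link concrete is to consume a single future both ways.
A minimal made-up sketch (3.4-era API, nothing from the docs):

```
import asyncio

@asyncio.coroutine
def waiter(fut):
    # task/coroutine style: suspend at the yield point until fut has a result
    print("coroutine got", (yield from fut))

loop = asyncio.get_event_loop()
fut = asyncio.Future()
# callback style: ask the future to call us back when it completes
fut.add_done_callback(lambda f: print("callback got", f.result()))
loop.call_later(0.1, fut.set_result, 42)
loop.run_until_complete(waiter(fut))
```

Both consumers see the same result; the Task machinery is just turning the
callback registration into a suspension point for you.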

> I don't know if I've done a good job explaining, but I hope this helps.

It's all helping. It's a shame the docs don't do a better job of explaining the 
underlying model, to be honest, but I do feel like I'm slowly getting closer to 
an understanding.

It's no wonder terminology is such a hot topic in the discussion of the new 
async PEP on python-dev :-(

Paul
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: asyncio: What is the difference between tasks, futures, and coroutines?

2015-05-05 Thread Paul Moore
On Tuesday, 5 May 2015 17:11:39 UTC+1, Zachary Ware  wrote:
>On Tue, May 5, 2015 at 10:22 AM, Paul  Moore wrote:
>> I'm working my way through the asyncio documentation. I have got to the
>> "Tasks and coroutines" section, but I'm frankly confused as to the
>> difference between the various things described in that section:
>> coroutines, tasks, and futures.
> 
> I've been using (and, in part at least, understanding :)) asyncio for
> work for the past few months, so I can at least give you my
> impressions based on use.

Thanks for taking the time.
 
>> I think I can understand a coroutine. Correct me if I'm wrong, but it's
>> roughly "something that you can run which can suspend itself".
> 
> That's basically how I understand it.

Cool, I got that right at least :-)

>> But I don't understand what a Future is.
[...]
> 
> My understanding is that Futures are somewhat like 'non-executable
> coroutines', if that makes any sense whatsoever.  Futures are used as
> something you can pass around when you need to start execution of the
> "job" in one method and finish it in another.  For instance, talking
> to a remote network entity, and you want to just do "yield from
> server.get(something)".  Your 'get' method can make the request,
> create a Future, stick it somewhere that the method monitoring
> incoming traffic can find it (along with some identifying metadata),
> and then do 'return (yield from future)' to wait for the Future to be
> fulfilled (by the listener) and return its result.  The listener then
> matches up incoming requests with Futures, and calls
> Future.set_result() or Future.set_exception() to fulfill the Future.
> Once one of those methods has been called on the Future (and control
> has been passed back to the scheduler), your server.get method
> unblocks from its '(yield from future)' call, and either raises the
> exception or returns the result that was set on the Future.

Hmm. My head hurts. I sort of think I see what you might mean, but I have no 
idea how that translates to "almost the same as a concurrent.futures.Future", 
which in my experience is nothing like that.

A c.f.Future is (in my naive view) a "promise" of a result, which you can later 
wait to be fulfilled. Creating one directly (as opposed to via submit) doesn't 
make sense because then there's nothing that's promised to provide a result.

What you described sounds more like a "container that can signal when it's been 
filled", which I can sort of see as related to the Future API, but not really 
as related to a c.f.Future.

>> Reading between the lines, it seems that the event loop schedules Tasks
>> (which makes sense) and that Tasks somehow wrap up coroutines - but I
>> don't see *why* you need to wrap a task in a coroutine rather than just
>> scheduling coroutines. And I don't see where Futures fit in - why not
>> just wrap a coroutine in a Future, if it needs to be wrapped up at all?
> 
> You kind of mixed things up in this paragraph (you said "Tasks somehow
> wrap up coroutines", then "wrap a task in a coroutine"; the first is
> more correct, I believe).

Whoops, sorry, my mistake - the first was what I meant, the second was a typo.

> As I understand it, Tasks are
> specializations of Futures that take care of the
> set_result/set_exception based on the execution of the coroutine.

That sounds far more like a concurrent.futures.Future. Except that (AIUI) tasks 
get scheduled, and you can list all tasks for an event loop, and things like 
that. So they seem like a little bit more than a c.f.Future, which would sort 
of match with the idea that an asyncio.Future was equivalent to a c.f.Future, 
except that it's not (as above).

There are also things like asyncio.async - which makes no sense to me at all (I 
can see how the examples *use* it, in much the same way as I can see how the 
examples work, in general, but I can't understand its role, so I'm not sure I'd 
be able to design code that used it from scratch :-()

>> Can anyone clarify for me?
> 
> I hope I've done some good on that front, but I'm still a bit hazy
> myself.  I found it worked best to just take examples or otherwise
> working code, and poke and prod at it until it breaks and you can
> figure out how you broke it.  Just try to avoid mixing asyncio and
> threads for as long as you can :).

Thanks for the insights. I think I've learned some things, but to be honest I'm 
still confused. Maybe it's because of the angle I'm coming at it from. I don't 
typically write network code directly[1], so the examples in the asyncio 

Re: asyncio: What is the difference between tasks, futures, and coroutines?

2015-05-05 Thread Skip Montanaro
Paul> ... I'm frankly confused ...

You and me both. I'm pretty sure I understand what a Future is, and
until the long discussion about PEP 492 (?) started up, I thought I
understood what a coroutine was from my days in school many years ago.
Now I'm not so sure.

Calling Dave Beazley... Calling Dave Beazley... We need a keynote...

Skip
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: asyncio: What is the difference between tasks, futures, and coroutines?

2015-05-05 Thread Marko Rauhamaa
Paul  Moore :

> But I don't understand what a Future is.

A future stands for a function that is scheduled to execute in the
background.

Personally, I have never found futures a very useful idiom in any
language (Scheme, Java, Python). Or more to the point, concurrency and
the notion of a function don't gel well in my mind.

See also: http://en.wikipedia.org/wiki/Futures_and_promises

> A Task is a subclass of Future, but the documentation doesn't say what
> it *is*,

A task is almost synonymous with a future. It's something that needs to
get done.

--

I think asyncio is a worthwhile step up from the old asyncore module.
Its main advantage is portability between operating systems. Also, it
has one big advantage over threads: events can be multiplexed using
asyncio.wait(return_when=FIRST_COMPLETED).
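
For instance, a minimal sketch of that multiplexing, with sleeps standing in
for real events:

```
import asyncio

loop = asyncio.get_event_loop()
# race two "events"; wait() returns as soon as the first one finishes
done, pending = loop.run_until_complete(asyncio.wait(
    [asyncio.sleep(0.1, result="fast"), asyncio.sleep(10, result="slow")],
    return_when=asyncio.FIRST_COMPLETED))
print([t.result() for t in done])  # -> ['fast']
for t in pending:
    t.cancel()  # the slow one is still pending; tidy it up
```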

However, I happen to have the privilege of working with linux so
select.epoll() gives me all the power I need to deal with asynchrony.
The asyncio model has the same basic flaw as threads: it makes the
developer think of concurrency in terms of a bunch of linear streaks of
execution. I much prefer to look at concurrency through states and
events (what happened? what next?).

My preferred model is known as "Callback Hell" (qv.). Asyncio seeks to
offer a salvation from it. However, I am at home in the Callback Hell;
equipped with Finite State Machines, the inherent complexities of the
Reality are managed in the most natural, flexible manner.


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: asyncio: What is the difference between tasks, futures, and coroutines?

2015-05-05 Thread Ian Kelly
On Tue, May 5, 2015 at 9:22 AM, Paul  Moore  wrote:
> I'm working my way through the asyncio documentation. I have got to the 
> "Tasks and coroutines" section, but I'm frankly confused as to the difference 
> between the various things described in that section: coroutines, tasks, and 
> futures.
>
> I think I can understand a coroutine. Correct me if I'm wrong, but it's roughly 
> "something that you can run which can suspend itself".
>
> But I don't understand what a Future is. The document just says it's almost 
> the same as a concurrent.futures.Future, which is described as something that 
> "encapsulates the asynchronous execution of a callable". Which doesn't help a 
> lot. In concurrent.futures, you don't create Futures, you get them back from 
> submit(), but in the asyncio docs it looks like you can create them by hand 
> (example "Future with run_until_complete"). And there's nothing that says 
> what a Future is, just what it's like... :-(

Fundamentally, a future is a placeholder for something that isn't
available yet. You can use it to set a callback to be called when that
thing is available, and once it's available you can get that thing
from it.
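
For the concrete mechanics, a minimal sketch (3.4-era API, with the
placeholder filled in by hand rather than by any real I/O):

```
import asyncio

loop = asyncio.get_event_loop()
fut = asyncio.Future()                        # an empty placeholder
fut.add_done_callback(lambda f: print("got", f.result()))
loop.call_later(0.1, fut.set_result, "spam")  # someone fills it in later
loop.run_until_complete(fut)                  # run until the value arrives
```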

> A Task is a subclass of Future, but the documentation doesn't say what it 
> *is*, but rather that it "schedules the execution of a coroutine". But that 
> doesn't make sense to me - objects don't do things, they *are* things. I 
> thought the event loop did the scheduling?

In asyncio, a Task is a Future that serves as a placeholder for the
result of a coroutine. The event loop manages callbacks, and that's
all it does. An event that it's been told to listen for occurs, and
the event loop calls the callback associated with that event. The Task
manages a coroutine's interaction with the event loop; when the
coroutine yields a future, the Task instructs the event loop to listen
for the completion of that future, setting a callback that will resume
the coroutine.
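
A tiny sketch of that wrapping (asyncio.async was the 3.4 spelling; it was
later renamed ensure_future):

```
import asyncio

@asyncio.coroutine
def compute():
    # the Task suspends here and asks the loop to resume it when
    # the sleep's future completes
    yield from asyncio.sleep(0.1)
    return 42

loop = asyncio.get_event_loop()
task = asyncio.async(compute())       # wrap the coroutine in a Task
print(loop.run_until_complete(task))  # -> 42
```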

> Reading between the lines, it seems that the event loop schedules Tasks 
> (which makes sense) and that Tasks somehow wrap up coroutines - but I don't 
> see *why* you need to wrap a task in a coroutine rather than just scheduling 
> coroutines. And I don't see where Futures fit in - why not just wrap a 
> coroutine in a Future, if it needs to be wrapped up at all?

The coroutines themselves are not that interesting of an interface;
all you can do with them is resume them. The asynchronous execution
done by asyncio is all based on futures. Because a coroutine can
easily be wrapped in a Task, this allows for coroutines to be used
anywhere a future is expected.

I don't know if I've done a good job explaining, but I hope this helps.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: asyncio: What is the difference between tasks, futures, and coroutines?

2015-05-05 Thread Terry Reedy

On 5/5/2015 11:22 AM, Paul Moore wrote:

I'm working my way through the asyncio documentation. I have got to
the "Tasks and coroutines" section, but I'm frankly confused as to
the difference between the various things described in that section:
coroutines, tasks, and futures.

I think I can understand a coroutine. Correct me if I'm wrong, but it's
roughly "something that you can run which can suspend itself".


The simple answer is that a coroutine within asyncio is a generator used 
as a (semi)coroutine rather than merely as an iterator.  IE, the send, 
throw, and close methods are used.  I believe PEP 492 will change that 
definition (for Python) to any object with those methods.
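
For example, a bare generator driven through those methods (nothing
asyncio-specific here):

```
def echo():
    # a plain generator used as a (semi)coroutine: values arrive via send()
    value = None
    while True:
        value = yield value

gen = echo()
next(gen)              # prime the generator to its first yield
print(gen.send("hi"))  # -> hi
gen.close()            # throws GeneratorExit inside, running any cleanup
```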


However, if an asyncio coroutine is the result of the coroutine 
decorator, the situation is more complex, as the result is either a 
generator function, a coro function that can wrap any function, or a 
debug wrapper.  Details in asyncio/coroutines.py.



I concede that I've not read the rest of the asyncio documentation in
much detail yet, and I'm skipping everything to do with IO (I want to
understand the "async" bit for now, not so much the "IO" side). But I
don't really want to dive into the details while I am this hazy on
the basic concepts.


You might try reading the Python code for the task and future classes.
asyncio/futures.py, asyncio/tasks.py

--
Terry Jan Reedy

--
https://mail.python.org/mailman/listinfo/python-list


Re: asyncio: What is the difference between tasks, futures, and coroutines?

2015-05-05 Thread Zachary Ware
On Tue, May 5, 2015 at 10:22 AM, Paul  Moore  wrote:
> I'm working my way through the asyncio documentation. I have got to the 
> "Tasks and coroutines" section, but I'm frankly confused as to the difference 
> between the various things described in that section: coroutines, tasks, and 
> futures.

I've been using (and, in part at least, understanding :)) asyncio for
work for the past few months, so I can at least give you my
impressions based on use.

> I think I can understand a coroutine. Correct me if I'm wrong, but it's roughly 
> "something that you can run which can suspend itself".

That's basically how I understand it.

> But I don't understand what a Future is. The document just says it's almost 
> the same as a concurrent.futures.Future, which is described as something that 
> "encapsulates the asynchronous execution of a callable". Which doesn't help a 
> lot. In concurrent.futures, you don't create Futures, you get them back from 
> submit(), but in the asyncio docs it looks like you can create them by hand 
> (example "Future with run_until_complete"). And there's nothing that says 
> what a Future is, just what it's like... :-(

My understanding is that Futures are somewhat like 'non-executable
coroutines', if that makes any sense whatsoever.  Futures are used as
something you can pass around when you need to start execution of the
"job" in one method and finish it in another.  For instance, talking
to a remote network entity, and you want to just do "yield from
server.get(something)".  Your 'get' method can make the request,
create a Future, stick it somewhere that the method monitoring
incoming traffic can find it (along with some identifying metadata),
and then do 'return (yield from future)' to wait for the Future to be
fulfilled (by the listener) and return its result.  The listener then
matches up incoming requests with Futures, and calls
Future.set_result() or Future.set_exception() to fulfill the Future.
Once one of those methods has been called on the Future (and control
has been passed back to the scheduler), your server.get method
unblocks from its '(yield from future)' call, and either raises the
exception or returns the result that was set on the Future.
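
A stripped-down sketch of that pattern, with call_later standing in for the
listener's incoming traffic and all names invented:

```
import asyncio

pending = {}  # request id -> Future, where the listener can find it

@asyncio.coroutine
def get(request_id):
    fut = asyncio.Future()
    pending[request_id] = fut   # park the Future for the listener
    return (yield from fut)     # wait here until it is fulfilled

def listener(request_id, payload):
    # the listener matches a response to its Future and fulfils it
    pending.pop(request_id).set_result(payload)

loop = asyncio.get_event_loop()
loop.call_later(0.1, listener, 1, "spam")  # stand-in for incoming traffic
print(loop.run_until_complete(get(1)))     # -> spam
```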

> A Task is a subclass of Future, but the documentation doesn't say what it 
> *is*, but rather that it "schedules the execution of a coroutine". But that 
> doesn't make sense to me - objects don't do things, they *are* things. I 
> thought the event loop did the scheduling?
>
> Reading between the lines, it seems that the event loop schedules Tasks 
> (which makes sense) and that Tasks somehow wrap up coroutines - but I don't 
> see *why* you need to wrap a task in a coroutine rather than just scheduling 
> coroutines. And I don't see where Futures fit in - why not just wrap a 
> coroutine in a Future, if it needs to be wrapped up at all?

You kind of mixed things up in this paragraph (you said "Tasks somehow
wrap up coroutines", then "wrap a task in a coroutine"; the first is
more correct, I believe).  As I understand it, Tasks are
specializations of Futures that take care of the
set_result/set_exception based on the execution of the coroutine.  I'm
not clear on how that is all implemented (Are all coroutines wrapped
in Tasks?  Just those wrapped in asyncio.async()? No idea.), but I use
it something like this:

   # save a reference to this task so it can be cancelled elsewhere if need be
   self.current_task = asyncio.async(some_coroutine())
   # now the coroutine is scheduled, but we still have control
   # we can do anything else we need to here, including scheduling other coroutines
   return (yield from self.current_task)

> I concede that I've not read the rest of the asyncio documentation in much 
> detail yet, and I'm skipping everything to do with IO (I want to understand 
> the "async" bit for now, not so much the "IO" side). But I don't really want 
> to dive into the details while I am this hazy on the basic concepts.
>
> Can anyone clarify for me?

I hope I've done some good on that front, but I'm still a bit hazy
myself.  I found it worked best to just take examples or otherwise
working code, and poke and prod at it until it breaks and you can
figure out how you broke it.  Just try to avoid mixing asyncio and
threads for as long as you can :).

-- 
Zach
-- 
https://mail.python.org/mailman/listinfo/python-list


asyncio: What is the difference between tasks, futures, and coroutines?

2015-05-05 Thread Paul Moore
I'm working my way through the asyncio documentation. I have got to the "Tasks 
and coroutines" section, but I'm frankly confused as to the difference between 
the various things described in that section: coroutines, tasks, and futures.

I think I can understand a coroutine. Correct me if I'm wrong, but it's roughly 
"something that you can run which can suspend itself".

But I don't understand what a Future is. The document just says it's almost the 
same as a concurrent.futures.Future, which is described as something that 
"encapsulates the asynchronous execution of a callable". Which doesn't help a 
lot. In concurrent.futures, you don't create Futures, you get them back from 
submit(), but in the asyncio docs it looks like you can create them by hand 
(example "Future with run_until_complete"). And there's nothing that says what 
a Future is, just what it's like... :-(

A Task is a subclass of Future, but the documentation doesn't say what it *is*, 
but rather that it "schedules the execution of a coroutine". But that doesn't 
make sense to me - objects don't do things, they *are* things. I thought the 
event loop did the scheduling?

Reading between the lines, it seems that the event loop schedules Tasks (which 
makes sense) and that Tasks somehow wrap up coroutines - but I don't see *why* 
you need to wrap a task in a coroutine rather than just scheduling coroutines. 
And I don't see where Futures fit in - why not just wrap a coroutine in a 
Future, if it needs to be wrapped up at all?

I concede that I've not read the rest of the asyncio documentation in much 
detail yet, and I'm skipping everything to do with IO (I want to understand the 
"async" bit for now, not so much the "IO" side). But I don't really want to 
dive into the details while I am this hazy on the basic concepts.

Can anyone clarify for me?

Thanks,
Paul
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Emperor's New Coroutines?

2014-07-10 Thread Marko Rauhamaa
Marko Rauhamaa :

> The asyncio module comes with coroutine support. Investigating the
> topic on the net reveals that FSM's are for old people and the brave
> new world uses coroutines. Unfortunately, all examples I could find
> seem to be overly simplistic, and I'm left thinking coroutines have
> few practical uses in network programming.
>
> [...]
>
> but how would I modify the philosopher code to support such master
> resets?

Ok. I think I have found the answer to my question:

asyncio.wait(coroutines, return_when=asyncio.FIRST_COMPLETED)

That facility makes it possible to multiplex between several stimuli.

The code below implements a modified dining philosophers protocol. The
philosophers are accompanied by an assistant who occasionally prods the
philosophers to immediately resume thinking, thus breaking the deadlock.

Whether the coroutine style is easier on the eye than, say, callbacks
and state machines is a matter of personal opinion.


Marko

===clip-clip-clip===
#!/usr/bin/env python3

import os, sys, asyncio, random, enum, time

T0 = time.time()

def main():
    loop = asyncio.get_event_loop()
    try:
        fork1 = Fork()
        fork2 = Fork()
        fork3 = Fork()
        nag = Nag()
        loop.run_until_complete(asyncio.wait([
            Philosopher("Plato", fork1, fork2, nag).philosophize(),
            Philosopher("Nietsche", fork2, fork3, nag).philosophize(),
            Philosopher("Hintikka", fork3, fork1, nag).philosophize(),
            assistant(nag) ]))
    finally:
        loop.close()

class Philosopher:
    def __init__(self, name, left, right, nag):
        self.name = name
        self.left = left
        self.right = right
        self.nag = nag

    @asyncio.coroutine
    def philosophize(self):
        yield from self.nag.acquire()
        try:
            while True:
                self.nag_count = self.nag.count
                pending = yield from self.think()
                if pending is None:
                    continue
                pending = yield from self.grab_fork("left", self.left, pending)
                if pending is None:
                    continue
                try:
                    pending = yield from self.wonder_absentmindedly(pending)
                    if pending is None:
                        continue
                    pending = yield from self.grab_fork(
                        "right", self.right, pending)
                    if pending is None:
                        continue
                    try:
                        pending = yield from self.dine(pending)
                        if pending is None:
                            continue
                    finally:
                        self.say("put back right fork")
                        self.right.release()
                    pending = yield from self.wonder_absentmindedly(pending)
                    if pending is None:
                        continue
                finally:
                    self.say("put back left fork")
                    self.left.release()
        finally:
            self.nag.release()

    def say(self, message):
        report("{} {}".format(self.name, message))

    def nagged(self):
        return self.nag.count > self.nag_count

    @asyncio.coroutine
    def think(self):
        self.say("thinking")
        result, pending = yield from multiplex(
            identify(Impulse.TIME, random_delay()),
            identify(Impulse.NAG, self.nag.wait_for(self.nagged)))
        if result is Impulse.NAG:
            self.say("nagged")
            return None
        assert result is Impulse.TIME
        self.say("hungry")
        return pending

    @asyncio.coroutine
    def grab_fork(self, which, fork, pending):
        self.say("grabbing {} fork".format(which))
        result, pending = yield from multiplex(
            identify(Impulse.FORK, fork.acquire()),
            *pending)
        if result is Impulse.NAG:
            self.say("has been nagged")
            return None
        assert result is Impulse.FORK
        self.say("got {} fork".format(which))
        return pending

    @asyncio.coroutine
    def wonder_absentmindedly(self, pending):
        self.say("now, what was I doing?")
        result, pending = yield from multiplex(
            identify(Impulse.TIME, random_delay()),
            *pending)
        if result is Impulse.NAG:
            self.say("nagged")
            return None
        assert result is Impulse.TIME
        self.say("oh, that's right!")
        return pending

    @asyncio.coroutine
    def dine(self, pending):
        self.say("eating")
        result, pending = yield from multip
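
The archive truncates the message here, and the helpers the code leans on
(multiplex, identify, random_delay, Fork, Nag) are lost with it. Judging only
from the call sites above, identify and multiplex might plausibly have looked
something like this guessed sketch (hypothetical, not Marko's original):

```
import asyncio

@asyncio.coroutine
def identify(tag, coro):
    # hypothetical: run a coroutine and report its completion as a tag
    yield from coro
    return tag

@asyncio.coroutine
def multiplex(*awaitables):
    # hypothetical: race tagged coroutines and previously pending tasks,
    # returning the first winner's tag plus the still-pending set so it
    # can be handed back in on the next call
    done, pending = yield from asyncio.wait(
        [asyncio.async(a) for a in awaitables],
        return_when=asyncio.FIRST_COMPLETED)
    return done.pop().result(), pending
```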

Emperor's New Coroutines?

2014-07-07 Thread Marko Rauhamaa

The asyncio module comes with coroutine support. Investigating the topic
on the net reveals that FSM's are for old people and the brave new world
uses coroutines. Unfortunately, all examples I could find seem to be
overly simplistic, and I'm left thinking coroutines have few practical
uses in network programming.

PEP-380 refers to the classic Dining Philosophers:
http://www.cosc.canterbury.ac.nz/greg.ewing/python/yield-from/yf_current/Examples/Scheduler/philosophers.py
(reproduced below).

Suppose I want to fix the protocol by supplying an "attendant" object
that periodically tells the philosophers to drop whatever they are doing
and start thinking. So I would add:

   schedule(attendant(..., [ plato, socrates, euclid ]))

but how would I modify the philosopher code to support such master
resets?


Marko


from scheduler import *

class Utensil:

  def __init__(self, id):
self.id = id
self.available = True
self.queue = []

  def acquire(self):
#print "--- acquire", self.id, "avail", self.available
if not self.available:
  block(self.queue)
  yield
#print "--- acquired", self.id
self.available = False

  def release(self):
#print "--- release", self.id
self.available = True
unblock(self.queue)

def philosopher(name, lifetime, think_time, eat_time, left_fork,
                right_fork):
  for i in range(lifetime):
for j in range(think_time):
  print(name, "thinking")
  yield
print(name, "waiting for fork", left_fork.id)
yield from left_fork.acquire()
print(name, "acquired fork", left_fork.id)
print(name, "waiting for fork", right_fork.id)
yield from right_fork.acquire()
print(name, "acquired fork", right_fork.id)
for j in range(eat_time):
  # They're Python philosophers, so they eat spam rather than spaghetti
  print(name, "eating spam")
  yield
print(name, "releasing forks", left_fork.id, "and", right_fork.id)
left_fork.release()
right_fork.release()
  print(name, "leaving the table")

forks = [Utensil(i) for i in range(3)]
schedule(philosopher("Plato", 7, 2, 3, forks[0], forks[1]), "Plato")
schedule(philosopher("Socrates", 8, 3, 1, forks[1], forks[2]),
"Socrates")
schedule(philosopher("Euclid", 5, 1, 4, forks[2], forks[0]), "Euclid")
run()
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Trying to wrap my head around futures and coroutines

2014-01-15 Thread Oscar Benjamin
On Mon, Jan 06, 2014 at 09:15:56PM -0600, Skip Montanaro wrote:
> From the couple of responses I've seen, I must not have made myself
> clear. Let's skip specific hypothetical tasks. Using coroutines,
> futures, or other programming paradigms that have been introduced in
> recent versions of Python 3.x, can traditionally event-driven code be
> written in a more linear manner so that the overall algorithms
> implemented in the code are easier to follow? My code is not
> multi-threaded, so using threads and locking is not really part of the
> picture. In fact, I'm thinking about this now precisely because the
> first sentence of the asyncio documentation mentions single-threaded
> concurrent code: "This module provides infrastructure for writing
> single-threaded concurrent code using coroutines, multiplexing I/O
> access over sockets and other resources, running network clients and
> servers, and other related primitives."
> 
> I'm trying to understand if it's possible to use coroutines or objects
> like asyncio.Future to write more readable code, that today would be
> implemented using callbacks, GTK signals, etc.

Hi Skip,

I don't yet understand how asyncio works in complete examples (I'm not sure
that many people do yet) but I have a loose idea of it so take the following
with a pinch of salt and expect someone else to correct me later. :)

With asyncio the idea is that you can run IO operations concurrently in a
single thread. Execution can switch between different tasks while each task
can be written as a linear-looking generator function without the need for
callbacks and locks. Execution switching is based on which task has available
IO data. So the core switcher keeps track of a list of objects (open files,
sockets etc.) and executes the task when something is available.

From the perspective of the task generator code what happens is that you yield
to allow other code to execute while you wait for some IO e.g.:

@asyncio.coroutine
def task_A():
    a1 = yield from futureA1()
    a2 = yield from coroutineA2(a1) # Indirectly yields from futures
    a3 = yield from futureA3(a2)
    return a3

At each yield statement you are specifying some operation that takes time.
During that time other coroutine code is allowed to execute in this thread.

If task_B has a reference to the future that task_A is waiting on then it can
be cancelled with the Future.cancel() method. I think that you can also cancel
with a reference to the task. So I think you can do something like

@asyncio.coroutine
def task_A():
    # Cancel the other task and wait
    if ref_taskB is not None:
        ref_taskB.cancel()
        yield from asyncio.wait([ref_taskB])
    try:
        # Linear looking code with no callbacks
        a1 = yield from futureA1()
        a2 = yield from coroutineA2(a1) # Indirectly yields from futures
        a3 = yield from futureA3(a2)
    except asyncio.CancelledError:
        stop_A()
        raise # Or return something...
    return a3

Then task_B() would have the reciprocal structure. The general form with more
than just A or B would be to have a reference to the current task; then you
could factor out the cancellation code with context managers, decorators or
something else.


Oscar
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Trying to wrap my head around futures and coroutines

2014-01-15 Thread Phil Connell
On Mon, Jan 06, 2014 at 06:56:00PM -0600, Skip Montanaro wrote:
> So, I'm looking for a little guidance. It seems to me that futures,
> coroutines, and/or the new Tulip/asyncio package might be my salvation, but
> I'm having a bit of trouble seeing exactly how that would work. Let me
> outline a simple hypothetical calculation. I'm looking for ways in which
> these new facilities might improve the structure of my code.

This instinct is exactly right -- the point of coroutines and tulip futures is
to liberate you from having to daisy chain callbacks together.


> 
> Let's say I have a dead simple GUI with two buttons labeled, "Do A" and "Do
> B". Each corresponds to executing a particular activity, A or B, which take
> some non-zero amount of time to complete (as perceived by the user) or
> cancel (as perceived by the state of the running system - not safe to run A
> until B is complete/canceled, and vice versa). The user, being the fickle
> sort that he is, might change his mind while A is running, and decide to
> execute B instead. (The roles can also be reversed.) If s/he wants to run
> task A, task B must be canceled or allowed to complete before A can be
> started. Logically, the code looks something like (I fear Gmail is going to
> destroy my indentation):
> 
> def do_A():
> when B is complete, _do_A()
> cancel_B()
> 
> def do_B():
> when A is complete, _do_B()
> cancel_A()
> 
> def _do_A():
> do the real A work here, we are guaranteed B is no longer running
> 
> def _do_B():
> do the real B work here, we are guaranteed A is no longer running
> 
> cancel_A and cancel_B might be no-ops, in which case they need to start up
> the other calculation immediately, if one is pending.

It strikes me that what you have is two linear sequences of 'things to do':
- 'Tasks', started in reaction to some event.
- Cancellations, if a particular task happens to be running.

So, a reasonable design is to have two long-running coroutines, one that
executes your 'tasks' sequentially, and another that executes cancellations.
These are both fed 'things to do' via a couple of queues populated in event
callbacks.

Something like (apologies for typos/non-working code):


cancel_queue = asyncio.Queue()
run_queue = asyncio.Queue()

running_task = None
running_task_name = ""

def do_A():
    cancel_queue.put_nowait("B")
    run_queue.put_nowait(("A", _do_A()))

def do_B():
    cancel_queue.put_nowait("A")
    run_queue.put_nowait(("B", _do_B()))

def do_C():
    run_queue.put_nowait(("C", _do_C()))

@asyncio.coroutine
def canceller():
    while True:
        name = yield from cancel_queue.get()
        if running_task_name == name:
            running_task.cancel()

@asyncio.coroutine
def runner():
    global running_task, running_task_name
    while True:
        name, coro = yield from run_queue.get()
        running_task_name = name
        running_task = asyncio.async(coro)
        try:
            yield from running_task
        except asyncio.CancelledError:
            pass  # cancelled by canceller(); move on to the next job

def main():
    ...
    cancel_task = asyncio.Task(canceller())
    run_task = asyncio.Task(runner())
    ...



Cheers,
Phil

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Trying to wrap my head around futures and coroutines

2014-01-06 Thread MRAB

On 2014-01-07 02:29, Cameron Simpson wrote:

On 06Jan2014 18:56, Skip Montanaro  wrote:
[...]

Let's say I have a dead simple GUI with two buttons labeled, "Do A" and "Do
B". Each corresponds to executing a particular activity, A or B, which take
some non-zero amount of time to complete (as perceived by the user) or
cancel (as perceived by the state of the running system - not safe to run A
until B is complete/canceled, and vice versa). The user, being the fickle
sort that he is, might change his mind while A is running, and decide to
execute B instead. (The roles can also be reversed.) If s/he wants to run
task A, task B must be canceled or allowed to complete before A can be
started.


I take it we can ignore user's hammering on buttons faster than
jobs can run or be cancelled?


Logically, the code looks something like (I fear Gmail is going to
destroy my indentation):

def do_A():
when B is complete, _do_A()
cancel_B()

[...]

def _do_A():
do the real A work here, we are guaranteed B is no longer running

[...]

cancel_A and cancel_B might be no-ops, in which case they need to start up
the other calculation immediately, if one is pending.


I wouldn't have cancel_A do this, I'd have do_A do this more overtly.


This is pretty simple execution, and if my job was this simple, I'd
probably just keep doing things the way I do now, which is basically to
catch a "complete" or "canceled" signal from the A and B tasks and execute
the opposite task if it's pending. But it's not this simple. In reality
there are lots of, "oh, you want to do X? You need to make sure A, B, and C
are not active." And other stuff like that.


What's wrong with variations on:

   from threading import Lock

   lock_A = Lock()
   lock_B = Lock()

   def do_A():
       with lock_B:
           with lock_A:
               _do_A()

   def do_B():
       with lock_A:
           with lock_B:
               _do_B()


It's safer to lock in the same order to reduce the chance of deadlock:

def do_A():
    with lock_A:
        with lock_B:
            _do_A()

def do_B():
    with lock_A:
        with lock_B:
            _do_B()


You can extend this with multiple locks for A,B,C provided you take
the excluding locks before taking the inner lock for the core task.

Regarding cancellation, I presume your code polls some cancellation
flag regularly during the task?



--
https://mail.python.org/mailman/listinfo/python-list


Re: Trying to wrap my head around futures and coroutines

2014-01-06 Thread Skip Montanaro
From the couple of responses I've seen, I must not have made myself
clear. Let's skip specific hypothetical tasks. Using coroutines,
futures, or other programming paradigms that have been introduced in
recent versions of Python 3.x, can traditionally event-driven code be
written in a more linear manner so that the overall algorithms
implemented in the code are easier to follow? My code is not
multi-threaded, so using threads and locking is not really part of the
picture. In fact, I'm thinking about this now precisely because the
first sentence of the asyncio documentation mentions single-threaded
concurrent code: "This module provides infrastructure for writing
single-threaded concurrent code using coroutines, multiplexing I/O
access over sockets and other resources, running network clients and
servers, and other related primitives."

I'm trying to understand if it's possible to use coroutines or objects
like asyncio.Future to write more readable code, that today would be
implemented using callbacks, GTK signals, etc.

S
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Trying to wrap my head around futures and coroutines

2014-01-06 Thread Cameron Simpson
On 07Jan2014 13:29, I wrote:
>   def do_A():
>       with lock_B:
>           with lock_A:
>               _do_A()

Um, of course there would be a cancel_B() up front above, like this:

  def do_A():
      cancel_B()
      with lock_B:
          with lock_A:
              _do_A()

I'm with MRAB: you don't really need futures unless you looking to
move to a multithreaded appraoch and aren't multithreaded already.
Even then, you don't need futures, just track running threads and
what's meant to run next.

You can do all your blocking with Locks fairly easily unless there
are complexities not yet revealed. (Of course, this is a truism,
but I mean "conveniently".)

Cheers,
-- 
Cameron Simpson 

Follow! But! Follow only if ye be men of valor, for the entrance to this cave
is guarded by a creature so foul, so cruel that no man yet has fought with it
and lived! Bones of four fifty men lie strewn about its lair.  So,
brave knights, if you do doubt your courage or your strength, come no
further, for death awaits you all with nasty big pointy teeth.
- Tim The Enchanter
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Trying to wrap my head around futures and coroutines

2014-01-06 Thread Cameron Simpson
On 06Jan2014 18:56, Skip Montanaro  wrote:
[...]
> Let's say I have a dead simple GUI with two buttons labeled, "Do A" and "Do
> B". Each corresponds to executing a particular activity, A or B, which take
> some non-zero amount of time to complete (as perceived by the user) or
> cancel (as perceived by the state of the running system - not safe to run A
> until B is complete/canceled, and vice versa). The user, being the fickle
> sort that he is, might change his mind while A is running, and decide to
> execute B instead. (The roles can also be reversed.) If s/he wants to run
> task A, task B must be canceled or allowed to complete before A can be
> started.

I take it we can ignore user's hammering on buttons faster than
jobs can run or be cancelled?

> Logically, the code looks something like (I fear Gmail is going to
> destroy my indentation):
> 
> def do_A():
> when B is complete, _do_A()
> cancel_B()
[...]
> def _do_A():
> do the real A work here, we are guaranteed B is no longer running
[...]
> cancel_A and cancel_B might be no-ops, in which case they need to start up
> the other calculation immediately, if one is pending.

I wouldn't have cancel_A do this, I'd have do_A do this more overtly.

> This is pretty simple execution, and if my job was this simple, I'd
> probably just keep doing things the way I do now, which is basically to
> catch a "complete" or "canceled" signal from the A and B tasks and execute
> the opposite task if it's pending. But it's not this simple. In reality
> there are lots of, "oh, you want to do X? You need to make sure A, B, and C
> are not active." And other stuff like that.

What's wrong with variations on:

  from threading import Lock

  lock_A = Lock()
  lock_B = Lock()

  def do_A():
      with lock_B:
          with lock_A:
              _do_A()

  def do_B():
      with lock_A:
          with lock_B:
              _do_B()

You can extend this with multiple locks for A,B,C provided you take
the excluding locks before taking the inner lock for the core task.

Regarding cancellation, I presume your code polls some cancellation
flag regularly during the task?

Cheers,
-- 
Cameron Simpson 

Many are stubborn in pursuit of the path they have chosen, few in pursuit
of the goal. - Friedrich Nietzsche
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Trying to wrap my head around futures and coroutines

2014-01-06 Thread MRAB

On 2014-01-07 00:56, Skip Montanaro wrote:

I've been programming for a long while in an event&callback-driven
world. While I am comfortable enough with the mechanisms available
(almost 100% of what I do is in a PyGTK world with its signal
mechanism), it's never been all that satisfying, breaking up my
calculations into various pieces, and thus having my algorithm scattered
all over the place.

So, I'm looking for a little guidance. It seems to me that futures,
coroutines, and/or the new Tulip/asyncio package might be my salvation,
but I'm having a bit of trouble seeing exactly how that would work. Let
me outline a simple hypothetical calculation. I'm looking for ways in
which these new facilities might improve the structure of my code.

Let's say I have a dead simple GUI with two buttons labeled, "Do A" and
"Do B". Each corresponds to executing a particular activity, A or B,
which take some non-zero amount of time to complete (as perceived by the
user) or cancel (as perceived by the state of the running system - not
safe to run A until B is complete/canceled, and vice versa). The user,
being the fickle sort that he is, might change his mind while A is
running, and decide to execute B instead. (The roles can also be
reversed.) If s/he wants to run task A, task B must be canceled or
allowed to complete before A can be started. Logically, the code looks
something like (I fear Gmail is going to destroy my indentation):

def do_A():
when B is complete, _do_A()
cancel_B()

def do_B():
when A is complete, _do_B()
cancel_A()

def _do_A():
do the real A work here, we are guaranteed B is no longer running

def _do_B():
do the real B work here, we are guaranteed A is no longer running

cancel_A and cancel_B might be no-ops, in which case they need to start
up the other calculation immediately, if one is pending.

This is pretty simple execution, and if my job was this simple, I'd
probably just keep doing things the way I do now, which is basically to
catch a "complete" or "canceled" signal from the A and B tasks and
execute the opposite task if it's pending. But it's not this simple. In
reality there are lots of, "oh, you want to do X? You need to make sure
A, B, and C are not active." And other stuff like that.


[snip]
Do you really need to use futures, etc?

What you could do is keep track of which tasks are active.

When the user clicks a button to start a task, the task checks whether
it can run. If it can run, it starts the real work. On the other hand,
if it can't run, it's set as the pending task.

When a task completes or is cancelled, if there is a pending task, that
task is unset as the pending task and retried; it'll then either start
the real work or be set as the pending task again.
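
A minimal sketch of that bookkeeping, with invented names and a single
pending slot:

```
running = None   # name of the task doing real work, if any
pending = None   # (name, func) queued behind it

def start(name, func):
    global running, pending
    if running is None:
        running = name
        func()                  # start the real work
    else:
        pending = (name, func)  # retried when the running task ends

def finished():
    # called when the running task completes or is cancelled
    global running, pending
    running = None
    if pending is not None:
        name, func = pending
        pending = None
        start(name, func)
```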

--
https://mail.python.org/mailman/listinfo/python-list


Trying to wrap my head around futures and coroutines

2014-01-06 Thread Skip Montanaro
I've been programming for a long while in an event&callback-driven world.
While I am comfortable enough with the mechanisms available (almost 100% of
what I do is in a PyGTK world with its signal mechanism), it's never been
all that satisfying, breaking up my calculations into various pieces, and
thus having my algorithm scattered all over the place.

So, I'm looking for a little guidance. It seems to me that futures,
coroutines, and/or the new Tulip/asyncio package might be my salvation, but
I'm having a bit of trouble seeing exactly how that would work. Let me
outline a simple hypothetical calculation. I'm looking for ways in which
these new facilities might improve the structure of my code.

Let's say I have a dead simple GUI with two buttons labeled, "Do A" and "Do
B". Each corresponds to executing a particular activity, A or B, which take
some non-zero amount of time to complete (as perceived by the user) or
cancel (as perceived by the state of the running system - not safe to run A
until B is complete/canceled, and vice versa). The user, being the fickle
sort that he is, might change his mind while A is running, and decide to
execute B instead. (The roles can also be reversed.) If s/he wants to run
task A, task B must be canceled or allowed to complete before A can be
started. Logically, the code looks something like (I fear Gmail is going to
destroy my indentation):

def do_A():
when B is complete, _do_A()
cancel_B()

def do_B():
when A is complete, _do_B()
cancel_A()

def _do_A():
do the real A work here, we are guaranteed B is no longer running

def _do_B():
do the real B work here, we are guaranteed A is no longer running

cancel_A and cancel_B might be no-ops, in which case they need to start up
the other calculation immediately, if one is pending.

This is pretty simple execution, and if my job was this simple, I'd
probably just keep doing things the way I do now, which is basically to
catch a "complete" or "canceled" signal from the A and B tasks and execute
the opposite task if it's pending. But it's not this simple. In reality
there are lots of, "oh, you want to do X? You need to make sure A, B, and C
are not active." And other stuff like that.

I have this notion that I should be able to write do_A() something like
this:

def do_A():
cancel_B()
yield from ... ???
_do_A()
...

or

def do_A():
future = cancel_B()
future.on_completion(_do_A)
... or ???

with the obvious similar structure for do_B. To my mind's eye, the first
option is preferable, since it's obvious that when control reaches the line
after the yield from statement, it's fine to do the guts of task A.

So, is my simpleminded view of the world a possibility with the current
facilities available in 3.3 or 3.4?

Thx,

Skip
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: outsmarting context managers with coroutines

2013-12-29 Thread Ian Kelly
On Sun, Dec 29, 2013 at 7:44 AM, Burak Arslan
 wrote:
> On 12/29/13 07:06, Ian Kelly wrote:
>> On Sat, Dec 28, 2013 at 5:35 PM, Burak Arslan
>>  wrote:
>>> On 12/29/13 00:13, Burak Arslan wrote:
>>>> Hi,
>>>>
>>>> Have a look at the following code snippets:
>>>> https://gist.github.com/plq/8164035
>>>>
>>>> Observations:
>>>>
>>>> output2: I can break out of outer context without closing the inner one
>>>> in Python 2
>>>> output3: Breaking out of outer context closes the inner one, but the
>>>> closing order is wrong.
>>>> output3-yf: With yield from, the closing order is fine but yield returns
>>>> None before throwing.
>>> It doesn't, my mistake. Python 3 yield from case does the right thing, I
>>> updated the gist. The other two cases still seem weird to me though. I
>>> also added a possible fix for python 2 behaviour in a separate script,
>>> though I'm not sure that's the best way of implementing a poor man's yield from.
>> I don't see any problems here.  The context managers in question are
>> created in separate coroutines and stored on separate stacks, so there
>> is no "inner" and "outer" context in the thread that you posted.  I
>> don't believe that they are guaranteed to be called in any particular
>> order in this case, nor do I think they should be.
>
> First, Python 2 and Python 3 are doing two separate things here: Python
> 2 doesn't destroy an orphaned generator and waits until the end of the
> execution. The point of having raw_input at the end is to illustrate
> this. I'm tempted to call this a memory leak bug, especially after
> seeing that Python 3 doesn't behave the same way.

Ah, you may be right.  I'm not sure what's going on with Python 2
here, and all my attempts to collect the inaccessible generator have
failed.  The only times it seems to clean up are when Python exits
and, strangely, when dropping to an interactive shell using the -i
command line option.  It also cleans up properly if the first
generator explicitly dels its reference to the second before exiting.
This doesn't have anything to do with the context manager though, as I
see the same behavior without it.

> As for the destruction order, I don't agree that destruction order of
> contexts should be arbitrary. Triggering the destruction of a suspended
> stack should first make sure that any allocated objects get destroyed
> *before* destroying the parent object. But then, I can think of all
> sorts of reasons why this guarantee could be tricky to implement, so I
> can live with this fact if it's properly documented. We should just use
> 'yield from' anyway.

You generally get this behavior when objects are deleted by the
reference counting mechanism, but that is an implementation detail of
CPython.  Since different implementations use different collection
schemes, there is no language guarantee of when or in what order
finalizers will be called, and even in CPython with the new PEP 442,
there is no guarantee of what order finalizers will be called when
garbage collecting reference cycles.

>> For example, the first generator could yield the second generator back
>> to its caller and then exit, in which case the second generator would
>> still be active while the context manager in the first generator would
>> already have done its clean-up.
>
> Sure, if you pass the inner generator back to the caller of the outer
> one, the inner one should survive. The refcount of the inner is not zero
> yet. That's doesn't have much to do with what I'm trying to illustrate
> here though.

The point I'm trying to make is that either context has the potential
to long outlive the other one, so you can't make any guarantee that
they will be exited in LIFO order.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: outsmarting context managers with coroutines

2013-12-29 Thread Burak Arslan
On 12/29/13 07:06, Ian Kelly wrote:
> On Sat, Dec 28, 2013 at 5:35 PM, Burak Arslan
>  wrote:
>> On 12/29/13 00:13, Burak Arslan wrote:
>>> Hi,
>>>
>>> Have a look at the following code snippets:
>>> https://gist.github.com/plq/8164035
>>>
>>> Observations:
>>>
>>> output2: I can break out of outer context without closing the inner one
>>> in Python 2
>>> output3: Breaking out of outer context closes the inner one, but the
>>> closing order is wrong.
>>> output3-yf: With yield from, the closing order is fine but yield returns
>>> None before throwing.
>> It doesn't, my mistake. Python 3 yield from case does the right thing, I
>> updated the gist. The other two cases still seem weird to me though. I
>> also added a possible fix for python 2 behaviour in a separate script,
>> though I'm not sure that's the best way of implementing a poor man's yield from.
> I don't see any problems here.  The context managers in question are
> created in separate coroutines and stored on separate stacks, so there
> is no "inner" and "outer" context in the thread that you posted.  I
> don't believe that they are guaranteed to be called in any particular
> order in this case, nor do I think they should be.

First, Python 2 and Python 3 are doing two separate things here: Python
2 doesn't destroy an orphaned generator and waits until the end of the
execution. The point of having raw_input at the end is to illustrate
this. I'm tempted to call this a memory leak bug, especially after
seeing that Python 3 doesn't behave the same way.

As for the destruction order, I don't agree that destruction order of
contexts should be arbitrary. Triggering the destruction of a suspended
stack should first make sure that any allocated objects get destroyed
*before* destroying the parent object. But then, I can think of all
sorts of reasons why this guarantee could be tricky to implement, so I
can live with this fact if it's properly documented. We should just use
'yield from' anyway.
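
A self-contained illustration of the yield from case, using plain generators
and made-up names (independent of the gist): closing the delegating generator
closes the subgenerator first, so the contexts unwind LIFO.

```
from contextlib import contextmanager

@contextmanager
def ctx(name):
    print("enter", name)
    try:
        yield
    finally:
        print("exit", name)

def inner():
    with ctx("inner"):
        while True:
            yield

def outer():
    with ctx("outer"):
        yield from inner()

g = outer()
next(g)    # enter outer, enter inner
g.close()  # exit inner, then exit outer -- LIFO, like nested blocks
```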


> For example, the first generator could yield the second generator back
> to its caller and then exit, in which case the second generator would
> still be active while the context manager in the first generator would
> already have done its clean-up.

Sure, if you pass the inner generator back to the caller of the outer
one, the inner one should survive. The refcount of the inner is not zero
yet. That's doesn't have much to do with what I'm trying to illustrate
here though.

Best,
Burak
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: outsmarting context managers with coroutines

2013-12-28 Thread Ian Kelly
On Sat, Dec 28, 2013 at 5:35 PM, Burak Arslan
 wrote:
> On 12/29/13 00:13, Burak Arslan wrote:
>> Hi,
>>
>> Have a look at the following code snippets:
>> https://gist.github.com/plq/8164035
>>
>> Observations:
>>
>> output2: I can break out of outer context without closing the inner one
>> in Python 2
>> output3: Breaking out of outer context closes the inner one, but the
>> closing order is wrong.
>> output3-yf: With yield from, the closing order is fine but yield returns
>> None before throwing.
>
> It doesn't, my mistake. Python 3 yield from case does the right thing, I
> updated the gist. The other two cases still seem weird to me though. I
> also added a possible fix for python 2 behaviour in a separate script,
> though I'm not sure that's the best way of implementing a poor man's yield from.

I don't see any problems here.  The context managers in question are
created in separate coroutines and stored on separate stacks, so there
is no "inner" and "outer" context in the thread that you posted.  I
don't believe that they are guaranteed to be called in any particular
order in this case, nor do I think they should be.

For example, the first generator could yield the second generator back
to its caller and then exit, in which case the second generator would
still be active while the context manager in the first generator would
already have done its clean-up.  To assure that the context managers
exit in the order you suggest, the call to the first context manager's
exit would have to be deferred until the second context manager exits,
which could happen much later.  That would complicate the
implementation and does not fit very well with the cooperative
multi-tasking model.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: outsmarting context managers with coroutines

2013-12-28 Thread Burak Arslan
On 12/29/13 00:13, Burak Arslan wrote:
> Hi,
>
> Have a look at the following code snippets:
> https://gist.github.com/plq/8164035
>
> Observations:
>
> output2: I can break out of outer context without closing the inner one
> in Python 2
> output3: Breaking out of outer context closes the inner one, but the
> closing order is wrong.
> output3-yf: With yield from, the closing order is fine but yield returns
> None before throwing.

It doesn't, my mistake. Python 3 yield from case does the right thing, I
updated the gist. The other two cases still seem weird to me though. I
also added a possible fix for python 2 behaviour in a separate script,
though I'm not sure that's the best way of implementing a poor man's yield from.

Sorry for the noise.

Best,
Burak

> All of the above seem buggy in their own way. And actually, Python 2
> seems to leak memory when generators and context managers are used this way.
>
> Are these behaviours intentional? How much of it is
> implementation-dependent? Are they documented somewhere? Neither PEP-342
> nor PEP-380 talk about context managers and PEP-343 talks about
> generators but not coroutines.
>
> My humble opinion:
>
> 1) All three should behave in the exact same way.
> 2) Throwing into a generator should not yield None before throwing.
>
> Best,
> Burak
>
> ps: I have:
>
> $ python -V; python3 -V
> Python 2.7.5
> Python 3.3.2

-- 
https://mail.python.org/mailman/listinfo/python-list


outsmarting context managers with coroutines

2013-12-28 Thread Burak Arslan
Hi,

Have a look at the following code snippets:
https://gist.github.com/plq/8164035

Observations:

output2: I can break out of outer context without closing the inner one
in Python 2
output3: Breaking out of outer context closes the inner one, but the
closing order is wrong.
output3-yf: With yield from, the closing order is fine but yield returns
None before throwing.

All of the above seem buggy in their own way. And actually, Python 2
seems to leak memory when generators and context managers are used this way.

Are these behaviours intentional? How much of it is
implementation-dependent? Are they documented somewhere? Neither PEP-342
nor PEP-380 talk about context managers and PEP-343 talks about
generators but not coroutines.

My humble opinion:

1) All three should behave in the exact same way.
2) Throwing into a generator should not yield None before throwing.

Best,
Burak

ps: I have:

$ python -V; python3 -V
Python 2.7.5
Python 3.3.2
-- 
https://mail.python.org/mailman/listinfo/python-list


Coroutines framework for Python 3

2012-08-24 Thread Thibaut

Hi,

The question is in the title: is there any coroutine framework available 
for Python 3.2+? I checked gevent and eventlet, but neither seems 
to have a Py3k version.


Thanks,
--
http://mail.python.org/mailman/listinfo/python-list


[ANN]: asyncoro: Framework for asynchronous programming with coroutines

2012-04-06 Thread Giridhar Pemmasani
I posted this message earlier to the list, but realized that URLs appear
broken with '.' at the end of the URL. Sorry for that mistake and this duplicate!

asyncoro is a framework for developing concurrent programs with
asynchronous event completions and coroutines. Asynchronous
completions currently implemented in asyncoro are socket I/O
operations, sleep timers, (conditional) event notification and
semaphores. Programs developed with asyncoro will have the same logic as
Python programs with synchronous sockets and threads, except for a few
syntactic changes. asyncoro supports polling mechanisms epoll, kqueue,
/dev/poll (and poll and select if necessary), and Windows I/O
Completion Ports (IOCP) for high performance and scalability, and SSL
for security.

More details about asyncoro are at
http://dispy.sourceforge.net/asyncoro.html and can be downloaded from
https://sourceforge.net/projects/dispy/files

asyncoro is used in the development of dispy (http://dispy.sourceforge.net),
which is a framework for parallel execution of computations by distributing them
across multiple processors on a single machine (SMP), multiple nodes
in a cluster, or large clusters of nodes. The computations can be
standalone programs or Python functions.

Both asyncoro and dispy have been tested with Python versions 2.7 and
3.2 under Linux, OS X and Windows.

Cheers,
Giri
-- 
http://mail.python.org/mailman/listinfo/python-list


[ANN]: asyncoro: Framework for asynchronous sockets and coroutines

2012-04-05 Thread Giridhar Pemmasani
asyncoro is a framework for developing concurrent programs with
asynchronous event completions and coroutines. Asynchronous
completions currently implemented in asyncoro are socket I/O
operations, sleep timers, (conditional) event notification and
semaphores. Programs developed with asyncoro will have the same logic as
Python programs with synchronous sockets and threads, except for a few
syntactic changes. asyncoro supports polling mechanisms epoll, kqueue,
/dev/poll (and poll, select if necessary), and Windows I/O
Completion Ports (IOCP) for high performance and scalability, and SSL
for security.

More details about asyncoro are at http://dispy.sourceforge.net/asyncoro.html.
It can be downloaded from https://sourceforge.net/projects/dispy/files.

asyncoro is used in the development of dispy (http://dispy.sourceforge.net),
which is a framework for parallel execution of computations by distributing them
across multiple processors on a single machine (SMP), multiple nodes
in a cluster, or large clusters of nodes. The computations can be
standalone programs or Python functions.

Both asyncoro and dispy have been tested with Python versions 2.7 and
3.2 under Linux, OS X and Windows.

Cheers,
Giri
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Coroutines: unexpected behaviour

2010-06-16 Thread Jérôme Mainka
On 16 juin, 20:11, Carl Banks  wrote:

> I suggest, if you intend to use this kind of thing in real code (and I
> would not recommend that) that you get in a habit of explicitly
> closing the generator after the last send(), even when you don't think
> you have to.

Very clear explanation. Thanks for the investigation...


> IMHO, coroutines are the one time during the PEP-era that Python can
> be accused of feature creep.  All other changes seemed driven by
> thoughtful analysis; this one seemed like it was, "OMG that would be
> totally cool".  PEP 342 doesn't give any compelling use cases (it only
> gives examples of "cool things you can do with coroutines"), and no
> discussion of how it improves the language.  Suffice to say that,
> thanks to this example, I'm more averse to using them than I was
> before.

David Beazley's wonderful course [1] piqued my curiosity. But, indeed,
it is a difficult piece of machinery to master.


Thanks again,

Jérôme

[1] http://www.dabeaz.com/coroutines/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Coroutines: unexpected behaviour

2010-06-16 Thread Carl Banks
On Jun 16, 5:03 am, Jérôme Mainka  wrote:
> Hello,
>
> I've been experimenting with coroutines, and I don't understand why this
> snippet doesn't work as expected... In python 2.5 and python 2.6 I get
> the following output:
>
> 0
> Exception exceptions.TypeError: "'NoneType' object is not callable" in
> <generator object at 0x...> ignored
>
> The TypeError exception comes from the pprint instruction...
>
> If I replace the main part with:
>
> ==
> p1 = dump()
> p2 = sort(p1)
> for item in my_list: p2.send(item)
> ==
>
> it works as expected.
>
> I don't understand what is going wrong. Does anyone have an explanation for
> this issue?
>
> Thanks,
>
> Jérôme
>
> ===
> from functools import wraps
> from pprint import pprint
> import random
>
> def coroutine(f):
>     @wraps(f)
>     def start(*args, **kwargs):
>         res = f(*args, **kwargs)
>         res.next()
>         return res
>     return start
>
> @coroutine
> def sort(target):
>     l = []
>
>     try:
>         while True:
>             l.append((yield))
>     except GeneratorExit:
>         l.sort()
>         for item in l:
>             target.send(item)
>
> @coroutine
> def dump():
>     while True:
>         pprint((yield))
>
> if __name__ == "__main__":
>     my_list = range(100)
>     random.shuffle(my_list)
>
>     p = sort(dump())
>
>     for item in my_list:
>         p.send(item)

Tricky, but pretty simple once you know what happens.

What's happening here is that the GeneratorExit exception isn't raised
until the generator is destroyed (because how does the generator know
there are no more sends coming?).  That only happens when the __main__
module is being destroyed.

Well, when destroying __main__, Python overwrites all its attributes
with None, one-by-one, in arbitrary order.  In the first scenario,
"pprint" was being set to None before "p" was, thus by the time the
generator got about to running, the pprint was None.  In the second
scenario, "p2" was being set to None before "pprint", so that pprint
still pointed at the appropriate function when the generator began
executing.

The solution is to explicitly close the generator after the loop, thus
signaling it to run before __main__ is destroyed:

p.close()


I suggest, if you intend to use this kind of thing in real code (and I
would not recommend that) that you get in a habit of explicitly
closing the generator after the last send(), even when you don't think
you have to.

IMHO, coroutines are the one time during the PEP-era that Python can
be accused of feature creep.  All other changes seemed driven by
thoughtful analysis; this one seemed like it was, "OMG that would be
totally cool".  PEP 342 doesn't give any compelling use cases (it only
gives examples of "cool things you can do with coroutines"), and no
discussion of how it improves the language.  Suffice to say that,
thanks to this example, I'm more averse to using them than I was
before.


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Coroutines: unexpected behaviour

2010-06-16 Thread Thomas Jollans
On 06/16/2010 06:35 PM, Steven D'Aprano wrote:
> On Wed, 16 Jun 2010 05:03:13 -0700, Jérôme Mainka wrote:
> 
>> Hello,
>>
>> I've been experimenting with coroutines, and I don't understand why this
>> snippet doesn't work as expected... In python 2.5 and python 2.6 I get
>> the following output:
>>
>> 0
>> Exception exceptions.TypeError: "'NoneType' object is not callable" in
>> <generator object at 0x...> ignored

Heureka! I think I've found the answer.

Have a look at this:

###
from itertools import izip

def gen():
    print globals()
    try:
        while True:
            yield 'things'
    except GeneratorExit:
        print globals()

if __name__ == "__main__":
    g = gen()
    g.next()
###
% python genexit.py
{'izip': <type 'itertools.izip'>, 'g': <generator object at 0x...>,
'__builtins__': <module '__builtin__' (built-in)>,
'__file__': 'genexit.py', 'gen': <function gen at 0x...>,
'__name__': '__main__', '__doc__': None}
{'izip': None, 'g': None, '__builtins__': <module '__builtin__' (built-in)>, '__file__': 'genexit.py', 'gen': <function gen at 0x...>, '__name__': '__main__', '__doc__': None}
###

Note that izip (the thing I imported; it could just as well have been
pprint) is None the second time I print globals(). Why?

This second print happens when GeneratorExit is raised, just like all
occurrences of dump.send(..) in the OP's code are underneath
"except GeneratorExit:".
When does GeneratorExit happen? Simple: when the generator is destroyed.
And when is it destroyed? *looks at the end of the file*
Since it's a global variable, it's destroyed when the module is
destroyed. Python sees the end of the script, and del's the module,
which means all the items in the module's __dict__ are del'd one-by-one
and, apparently, to avoid causing havoc and mayhem, replaced by None.
Fair enough.

So why did the "solutions" we found work?

(1) changing the variable name:
I expect Python del's the globals in some particular order, and this
order is probably influenced by the key in the globals() dict. One key
happens to be deleted before pprint (then it works, pprint is still
there), another one happens to come after pprint. For all I know, it
might not even be deterministic.

(2) moving pprint into the dump() function
As long as there's a reference to the dump generator around, there will
be a reference to pprint around. sort() has a local reference to
target == dump(), so no problem there.

(3) wrapping the code in a function
As the generator is now a local object, it gets destroyed with the
locals, when the function exits. The module gets destroyed afterwards.


What should you do to fix it then?

If you really want to keep the except GeneratorExit: approach, make sure
you exit it manually. Though really, you should do something like
p.send(None) at the end, and check for that in the generator: receiving
None would mean "we're done here, do post-processing!"
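
Concretely, the sentinel version of the OP's sort() might look like this
(a sketch, reusing the OP's coroutine decorator):

from functools import wraps

def coroutine(f):
    # The OP's priming decorator, repeated for completeness.
    @wraps(f)
    def start(*args, **kwargs):
        res = f(*args, **kwargs)
        res.next()
        return res
    return start

@coroutine
def sort(target):
    l = []
    while True:
        item = (yield)
        if item is None:    # sentinel: "we're done here, do post-processing"
            break
        l.append(item)
    l.sort()
    for item in l:
        target.send(item)

@coroutine
def dump():
    while True:
        item = (yield)
        print item

p = sort(dump())
for item in [3, 1, 2]:
    p.send(item)
try:
    p.send(None)            # triggers the sort; sort() then finishes,
except StopIteration:       # raising StopIteration at the caller
    pass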

-- Thomas
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Coroutines: unexpected behaviour

2010-06-16 Thread Jérôme Mainka
On Jun 16, 6:35 pm, Steven D'Aprano  wrote:

> How bizarre is that?
Sure...


> I have to say that your code is horribly opaque and unclear to me.
Welcome to the coroutines world :-)

This is mainly a pipeline where each function suspends execution while
waiting for data (yield) and feeds the next function (the sort
function's target argument) data via .send():

main loop -> sort() -> dump()



> Because yield will often be returning None, you should always check for
> this case. Don't just use its value in expressions unless you're sure
> that the send() method will be the only method used to resume your generator
> function.

This remark deals with constructions like:

value = (yield i)

For an excellent introduction to coroutines in python, I recommend:

http://www.dabeaz.com/coroutines/


Thanks,

Jérôme
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Coroutines: unexpected behaviour

2010-06-16 Thread Steven D'Aprano
On Wed, 16 Jun 2010 05:03:13 -0700, Jérôme Mainka wrote:

> Hello,
> 
> I've been experimenting with coroutines, and I don't understand why this
> snippet doesn't work as expected... In python 2.5 and python 2.6 I get
> the following output:
> 
> 0
> Exception exceptions.TypeError: "'NoneType' object is not callable" in
> <generator object at 0x...> ignored

I changed the while loop in dump() to print the value of pprint:

while True:
    print type(pprint), pprint
    pprint((yield))


and sure enough I get this output:

[st...@sylar ~]$ python2.6 test_coroutine.py
<type 'function'> <function pprint at 0x...>
0
<type 'NoneType'> None
Exception TypeError: "'NoneType' object is not callable" in <generator object at 0x...> ignored


BUT then I wrapped the __main__ code in a function:


def f():
    my_list = range(100)
    random.shuffle(my_list)
    p = sort(dump())
    for item in my_list:
        p.send(item)

if __name__ == "__main__":
    f()


and the problem went away:

[st...@sylar ~]$ python2.6 test_coroutine.py
<type 'function'> <function pprint at 0x...>
0
<type 'function'> <function pprint at 0x...>
1
<type 'function'> <function pprint at 0x...>
2
[...]
<type 'function'> <function pprint at 0x...>
99
<type 'function'> <function pprint at 0x...>


How bizarre is that?


I have to say that your code is horribly opaque and unclear to me. Maybe 
that's just because I'm not familiar with coroutines, but trying to 
follow the program flow is giving me a headache. It's like being back in 
1977 trying to follow GOTOs around the code, only without line numbers. 
But if I have understood it correctly, I think the generator function 
coroutine(sort) is being resumed by both the send() method and the next() 
method. I draw your attention to this comment from the PEP that 
introduced coroutines:


[quote]
Because yield will often be returning None, you should always check for 
this case. Don't just use its value in expressions unless you're sure 
that the send() method will be the only method used to resume your generator 
function.
[end quote]

http://docs.python.org/release/2.5/whatsnew/pep-342.html
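
For what it's worth, the quoted caveat is easy to demonstrate in isolation
(a minimal sketch, separate from the OP's code):

def echo():
    while True:
        value = (yield)
        print 'got', value

g = echo()
g.next()      # prime: run to the first yield
g.send(1)     # prints: got 1
g.next()      # prints: got None -- resuming with next() makes the
              # yield expression evaluate to None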


So I wonder if that has something to do with it?



-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Coroutines: unexpected behaviour

2010-06-16 Thread Thomas Jollans
On 06/16/2010 02:03 PM, Jérôme Mainka wrote:
> Hello,
> 
> I've been experimenting with coroutines, and I don't understand why this
> snippet doesn't work as expected... In python 2.5 and python 2.6 I get
> the following output:
> 
> 0
> Exception exceptions.TypeError: "'NoneType' object is not callable" in
> <generator object at 0x...> ignored
> 
> The TypeError exception comes from the pprint instruction...
> 
> If I replace the main part with:
> 
> ==
> p1 = dump()
> p2 = sort(p1)
> for item in my_list: p2.send(item)
> ==
> 
> it works as expected.

What a strange problem. I hope somebody else can shed more light on what
is going on.

Anyway, it appears (to me) to be a scoping issue: if you move
"from pprint import pprint" into the dump() function body, it works. I
don't understand why this changes anything: pprint was in the global
namespace all along, and it's present and functioning exactly *once*,
after which it's None. It's still there, otherwise there'd be a
NameError, but it was set to None without any line of code doing this
explicitly as far as I can see.

Another VERY strange thing is, of course, your "fix": renaming "p" to
"p2" helps. "p", "p1", "p3", and "yyy" are just some examples of
variable names that don't work.

As for Python versions: I changed the code to work with Python 3 and get
the same odd behaviour.

> 
> I don't understand what is going wrong. Does anyone have an explanation for
> this issue?
> 
> Thanks,
> 
> Jérôme
> 
> 
> ===
> from functools import wraps
> from pprint import pprint
> import random
> 
> def coroutine(f):
>     @wraps(f)
>     def start(*args, **kwargs):
>         res = f(*args, **kwargs)
>         res.next()
>         return res
>     return start
> 
> @coroutine
> def sort(target):
>     l = []
> 
>     try:
>         while True:
>             l.append((yield))
>     except GeneratorExit:
>         l.sort()
>         for item in l:
>             target.send(item)
> 
> @coroutine
> def dump():
>     while True:
>         pprint((yield))
> 
> if __name__ == "__main__":
>     my_list = range(100)
>     random.shuffle(my_list)
> 
>     p = sort(dump())
> 
>     for item in my_list:
>         p.send(item)

-- 
http://mail.python.org/mailman/listinfo/python-list


Coroutines: unexpected behaviour

2010-06-16 Thread Jérôme Mainka
Hello,

I've been experimenting with coroutines, and I don't understand why this
snippet doesn't work as expected... In python 2.5 and python 2.6 I get
the following output:

0
Exception exceptions.TypeError: "'NoneType' object is not callable" in
<generator object at 0x...> ignored

The TypeError exception comes from the pprint instruction...

If I replace the main part with:

==
p1 = dump()
p2 = sort(p1)
for item in my_list: p2.send(item)
==

it works as expected.

I don't understand what is going wrong. Does anyone have an explanation for
this issue?

Thanks,

Jérôme


===
from functools import wraps
from pprint import pprint
import random

def coroutine(f):
    @wraps(f)
    def start(*args, **kwargs):
        res = f(*args, **kwargs)
        res.next()
        return res
    return start

@coroutine
def sort(target):
    l = []

    try:
        while True:
            l.append((yield))
    except GeneratorExit:
        l.sort()
        for item in l:
            target.send(item)

@coroutine
def dump():
    while True:
        pprint((yield))

if __name__ == "__main__":
    my_list = range(100)
    random.shuffle(my_list)

    p = sort(dump())

    for item in my_list:
        p.send(item)
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: n00b question: Better Syntax for Coroutines?

2008-09-13 Thread Paul McGuire
On Sep 12, 8:08 pm, [EMAIL PROTECTED] wrote:
> First off, I'm a python n00b, so feel free to comment on anything if
> I'm doing it "the wrong way." I'm building a discrete event simulation
> tool. I wanted to use coroutines. However, I want to know if there's

You could "look in the back of the book" - download SimPy and see how
they do this.

-- Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: n00b question: Better Syntax for Coroutines?

2008-09-13 Thread Arnaud Delobelle
On Sep 13, 2:08 am, [EMAIL PROTECTED] wrote:
> First off, I'm a python n00b, so feel free to comment on anything if
> I'm doing it "the wrong way." I'm building a discrete event simulation
> tool. I wanted to use coroutines. However, I want to know if there's
> any way to hide a yield statement.
>
> I have a class that I'd like to look like this:
>
> class Pinger(Actor):
>     def go(self):
>         success = True
>         while success:
>             result = self.ping("128.111.41.38")
>             if result != "success":
>                 success = False
>         print "Pinger done"
>
> But because I can't hide yield inside ping, and because I can't find a
> convenient way to get a self reference to the coroutine (which is used
> by the event queue to pass back results), my code looks like this:
>
> class Pinger(Actor):
>     def go(self):
>         # I dislike this next line
>         self.this_pointer = (yield None)
>         success = True
>         while success:
>             # I want to get rid of the yield in the next line
>             result = (yield self.ping("128.111.41.38"))
>             if result != "success":
>                 success = False
>         print "Pinger done"
>
> I posted a more detailed version of this as a rant 
> here: http://illusorygoals.com/post/49926627/
>
> I'd like to know, is there a way to get the syntax I want? After
> staying up late last night to get a proof-of-concept working with
> coroutines, my boss expressed disappointment in the ugliness of the
> Pinger code (we agreed on the desired syntax above). I spent almost 2
> hours today to migrate the system over to threads. That made my boss
> happy, but I'm still curious if I can do something to salvage the
> coroutine version.
>
> Regards,
> IG

You can't avoid the yield, and generators are not coroutines.  A
little while ago when thinking about this kind of problem I defined
"cogenerators", which roughly are to generators what coroutines are to
routines, i.e. they can pass "yielding control" on to another
cogenerator [1].

[1] http://www.marooned.org.uk/~arno/python/cogenerator.html

--
Arnaud

--
http://mail.python.org/mailman/listinfo/python-list


Re: n00b question: Better Syntax for Coroutines?

2008-09-12 Thread Carl Banks
On Sep 12, 9:08 pm, [EMAIL PROTECTED] wrote:
> First off, I'm a python n00b, so feel free to comment on anything if
> I'm doing it "the wrong way." I'm building a discrete event simulation
> tool. I wanted to use coroutines. However, I want to know if there's
> any way to hide a yield statement.
>
> I have a class that I'd like to look like this:
>
> class Pinger(Actor):
>     def go(self):
>         success = True
>         while success:
>             result = self.ping("128.111.41.38")
>             if result != "success":
>                 success = False
>         print "Pinger done"
>
> But because I can't hide yield inside ping, and because I can't find a
> convenient way to get a self reference to the coroutine (which is used
> by the event queue to pass back results), my code looks like this:
>
> class Pinger(Actor):
>     def go(self):
>         # I dislike this next line
>         self.this_pointer = (yield None)
>         success = True
>         while success:
>             # I want to get rid of the yield in the next line
>             result = (yield self.ping("128.111.41.38"))
>             if result != "success":
>                 success = False
>         print "Pinger done"
>
> I'd like to know, is there a way to get the syntax I want?

I think you're stuck with using threads in a standard Python release.
Generators can't serve as coroutines when you're yielding from a
nested call (they only go one level deep).

You can get coroutines with Stackless Python, a non-standard version
of Python.  But even with Stackless I got the impression that you'd be
building coroutines atop something that was fairly thread-like.  There
is no coroutine syntax.
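
To illustrate the "one level deep" point (a minimal sketch, with a
hypothetical ping() standing in for the OP's method):

def ping(host):
    # This yield suspends ping() itself, not whoever called it...
    result = yield ('ping', host)

def go():
    # ...so calling ping() here merely creates a generator object; the
    # caller would have to yield it explicitly for a scheduler to drive it.
    result = yield ping("128.111.41.38")

print type(go())   # <type 'generator'> -- nothing has run yet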


Carl Banks


--
http://mail.python.org/mailman/listinfo/python-list


Re: n00b question: Better Syntax for Coroutines?

2008-09-12 Thread Aaron "Castironpi" Brady
On Sep 12, 8:08 pm, [EMAIL PROTECTED] wrote:
> First off, I'm a python n00b, so feel free to comment on anything if
> I'm doing it "the wrong way." I'm building a discrete event simulation
> tool. I wanted to use coroutines. However, I want to know if there's
> any way to hide a yield statement.
>
> I have a class that I'd like to look like this:
>
> class Pinger(Actor):
>     def go(self):
>         success = True
>         while success:
>             result = self.ping("128.111.41.38")
>             if result != "success":
>                 success = False
>         print "Pinger done"
>
> But because I can't hide yield inside ping, and because I can't find a
> convenient way to get a self reference to the coroutine (which is used
> by the event queue to pass back results), my code looks like this:
>
> class Pinger(Actor):
>     def go(self):
>         # I dislike this next line
>         self.this_pointer = (yield None)
>         success = True
>         while success:
>             # I want to get rid of the yield in the next line
>             result = (yield self.ping("128.111.41.38"))
>             if result != "success":
>                 success = False
>         print "Pinger done"
>
> I posted a more detailed version of this as a rant 
> here: http://illusorygoals.com/post/49926627/
>
> I'd like to know, is there a way to get the syntax I want? After
> staying up late last night to get a proof-of-concept working with
> coroutines, my boss expressed disappointment in the ugliness of the
> Pinger code (we agreed on the desired syntax above). I spent almost 2
> hours today to migrate the system over to threads. That made my boss
> happy, but I'm still curious if I can do something to salvage the
> coroutine version.
>
> Regards,
> IG

I did not read your whole post, though I did skim your link.  I don't
know your whole problem but it seems you are trying to inform a
generator of its own identity.  Here's a solution to that problem;
perhaps it relates to yours.  (The shorter retort is, "Just use
closures.")

def gen( ):
    def _gen( ):
        while 1:
            yield 1, 'i am %r'% ob
            yield 2, 'i am %r'% ob
            yield 3, 'i am %r'% ob
    ob= _gen( )
    return ob

gens= [ gen( ) for _ in range( 4 ) ]
for i in range( 6 ):
    print i
    for g in gens:
        print g.next( )

/Output (lengthy):

0
(1, 'i am <generator object at 0x...>')
(1, 'i am <generator object at 0x...>')
(1, 'i am <generator object at 0x...>')
(1, 'i am <generator object at 0x...>')
1
(2, 'i am <generator object at 0x...>')
(2, 'i am <generator object at 0x...>')
(2, 'i am <generator object at 0x...>')
(2, 'i am <generator object at 0x...>')
2
(3, 'i am <generator object at 0x...>')
(3, 'i am <generator object at 0x...>')
(3, 'i am <generator object at 0x...>')
(3, 'i am <generator object at 0x...>')
3
(1, 'i am <generator object at 0x...>')
(1, 'i am <generator object at 0x...>')
(1, 'i am <generator object at 0x...>')
(1, 'i am <generator object at 0x...>')
4
(2, 'i am <generator object at 0x...>')
(2, 'i am <generator object at 0x...>')
(2, 'i am <generator object at 0x...>')
(2, 'i am <generator object at 0x...>')
5
(3, 'i am <generator object at 0x...>')
(3, 'i am <generator object at 0x...>')
(3, 'i am <generator object at 0x...>')
(3, 'i am <generator object at 0x...>')
--
http://mail.python.org/mailman/listinfo/python-list


n00b question: Better Syntax for Coroutines?

2008-09-12 Thread ig
First off, I'm a python n00b, so feel free to comment on anything if
I'm doing it "the wrong way." I'm building a discrete event simulation
tool. I wanted to use coroutines. However, I want to know if there's
any way to hide a yield statement.

I have a class that I'd like to look like this:

class Pinger(Actor):
    def go(self):
        success = True
        while success:
            result = self.ping("128.111.41.38")
            if result != "success":
                success = False
        print "Pinger done"

But because I can't hide yield inside ping, and because I can't find a
convenient way to get a self reference to the coroutine (which is used
by the event queue to pass back results), my code looks like this:

class Pinger(Actor):
    def go(self):
        # I dislike this next line
        self.this_pointer = (yield None)
        success = True
        while success:
            # I want to get rid of the yield in the next line
            result = (yield self.ping("128.111.41.38"))
            if result != "success":
                success = False
        print "Pinger done"

I posted a more detailed version of this as a rant here:
http://illusorygoals.com/post/49926627/

I'd like to know, is there a way to get the syntax I want? After
staying up late last night to get a proof-of-concept working with
coroutines, my boss expressed disappointment in the ugliness of the
Pinger code (we agreed on the desired syntax above). I spent almost 2
hours today to migrate the system over to threads. That made my boss
happy, but I'm still curious if I can do something to salvage the
coroutine version.

Regards,
IG
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python generators (coroutines)

2008-04-23 Thread rocco . rossi

> Anyway, if you have a blocking operation, the only solution is to use
> a thread or a separate process.
>
>   Michele Simionato

That's what I thought. It was in fact rather obvious, but I wanted to
be sure that I hadn't overlooked some arcane possibility (e.g. with the
use of exceptions or something like that). Thanks for the confirmation.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python generators (coroutines)

2008-04-23 Thread Michele Simionato
On Apr 23, 8:26 pm, Jean-Paul Calderone <[EMAIL PROTECTED]> wrote:
> On Wed, 23 Apr 2008 10:53:03 -0700 (PDT), Michele Simionato:
> You could have #2.  It's a trivial variation of sending a value.  For
> example,
>
> http://twistedmatrix.com/trac/browser/trunk/twisted/internet/defer.py...
>
> Jean-Paul

Yep, I stand corrected, it is only #1 which is the real
improvement.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python generators (coroutines)

2008-04-23 Thread Jean-Paul Calderone

On Wed, 23 Apr 2008 10:53:03 -0700 (PDT), Michele Simionato <[EMAIL PROTECTED]> 
wrote:

> On Apr 23, 4:17 pm, [EMAIL PROTECTED] wrote:
> > I would really like to know more about python 2.5's new generator
> > characteristics that make them more powerful and analogous to
> > coroutines. Is it possible for instance to employ them in situations
> > where I would normally use a thread with a blocking I/O (or socket)
> > operation? If it is, could someone show me how it can be done? There
> > appears to be a very limited amount of documentation in this respect,
> > unfortunately.
> >
> > Thank you.
>
> The real changes between Python 2.4 and Python 2.5 generators are
> 1) now you can have a yield inside a try .. finally statement
> 2) now you can send an exception to a generator
>
> The fact that now you can send values to a generator
> is less important, since you could implement the
> same in Python 2.4 with little effort (granted, with an uglier syntax)
> whereas there was no way to get 1) and 2).
> Anyway, if you have a blocking operation, the only solution is to use
> a thread or a separate process.


You could have #2.  It's a trivial variation of sending a value.  For
example,

http://twistedmatrix.com/trac/browser/trunk/twisted/internet/defer.py#L555

Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python generators (coroutines)

2008-04-23 Thread Michele Simionato
On Apr 23, 4:17 pm, [EMAIL PROTECTED] wrote:
> I would really like to know more about python 2.5's new generator
> characteristics that make them more powerful and analogous to
> coroutines. Is it possible for instance to employ them in situations
> where I would normally use a thread with a blocking I/O (or socket)
> operation? If it is, could someone show me how it can be done? There
> appears to be a very limited amount of documentation in this respect,
> unfortunately.
>
> Thank you.

The real changes between Python 2.4 and Python 2.5 generators are
1) now you can have a yield inside a try .. finally statement
2) now you can send an exception to a generator

The fact that now you can send values to a generator
is less important, since you could implement the
same in Python 2.4 with little effort (granted, with an uglier syntax)
whereas there was no way to get 1) and 2).
Anyway, if you have a blocking operation, the only solution is to use
a thread or a separate process.
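
To make the two changes concrete, a minimal sketch (not from the original
thread), using Python 2.5 syntax:

def worker():
    try:
        while True:
            try:
                item = (yield)
            except ValueError, e:    # 2) exceptions can be thrown in
                print 'recovered from:', e
    finally:                         # 1) yield inside try .. finally
        print 'cleaning up'

g = worker()
g.next()                         # prime: run to the first yield
g.send('item')
g.throw(ValueError('bad item'))  # resumes inside the except clause
g.close()                        # GeneratorExit runs the finally block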

  Michele Simionato
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python generators (coroutines)

2008-04-23 Thread Jean-Paul Calderone

On Wed, 23 Apr 2008 07:17:46 -0700 (PDT), [EMAIL PROTECTED] wrote:

> I would really like to know more about python 2.5's new generator
> characteristics that make them more powerful and analogous to
> coroutines. Is it possible for instance to employ them in situations
> where I would normally use a thread with a blocking I/O (or socket)
> operation? If it is, could someone show me how it can be done? There
> appears to be a very limited amount of documentation in this respect,
> unfortunately.


They're not coroutines.  The difference between generators in Python 2.4
and in Python 2.5 is that in Python 2.5, `yield´ can be used as part of
an expression.  If a generator is resumed via its `send´ method, then the
`yield´ expression evaluates to the value passed to `send´.

You still can't suspend execution through arbitrary stack frames; `yield´
only suspends the generator it is in, returning control to the frame
immediately above it on the stack.

You can build a trampoline based on generators, but you can do that in
Python 2.4 just as easily as you can do it in Python 2.5.
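
For the curious, one such trampoline can be sketched in a few lines (an
illustration, not Twisted's implementation; it uses 2.5's send() for
brevity):

import types

def trampoline(root):
    # Convention: a generator "calls" another by yielding it, and
    # "returns" by yielding a plain (non-generator) value, after which
    # it is simply abandoned.
    stack = [root]
    value = None
    while stack:
        try:
            result = stack[-1].send(value)
        except StopIteration:
            stack.pop()
            continue
        if isinstance(result, types.GeneratorType):
            stack.append(result)   # "call" into the sub-generator
            value = None
        else:
            stack.pop()            # the sub-generator "returned"
            value = result
    return value

def add(a, b):
    yield a + b                    # "return" a + b to the caller

def main():
    total = yield add(1, 2)        # reads like a nested, suspendable call
    total = yield add(total, 10)
    print 'total =', total

trampoline(main())                 # prints: total = 13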

Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list

Re: Python generators (coroutines)

2008-04-23 Thread Diez B. Roggisch

[EMAIL PROTECTED] schrieb:

> I would really like to know more about python 2.5's new generator
> characteristics that make them more powerful and analogous to
> coroutines. Is it possible for instance to employ them in situations
> where I would normally use a thread with a blocking I/O (or socket)
> operation? If it is, could someone show me how it can be done? There
> appears to be a very limited amount of documentation in this respect,
> unfortunately.


What do you mean by "new generator characteristics"? AFAIK generators 
have been around at least since python2.3. Newer versions of python 
include things like generator expressions (since 2.4 I believe).


However, generators don't help at all in blocking-IO situations, because
they rely on co-operatively re-scheduling using yield.


Of course you could try & use non-blocking IO with them, but I don't see 
that there is much to be gained by it.


Or you could go & use a single poll/select in your main thread, and on 
arrival of data process that data with generators that re-schedule in 
between, so that you can react to newer data. Depending on your scenario 
this might reduce latency.
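
A rough sketch of that last idea (hypothetical names; error handling
omitted):

import select

def handler(label):
    # One generator per connection; the select loop feeds it data.
    while True:
        data = yield
        print '%s got %r' % (label, data)

def mainloop(socks):
    # `socks` is assumed to be a list of connected, readable sockets.
    gens = {}
    for s in socks:
        gens[s] = handler('fd %d' % s.fileno())
        gens[s].next()             # prime each generator to its first yield
    while socks:
        readable, _, _ = select.select(socks, [], [])
        for s in readable:
            data = s.recv(4096)
            gens[s].send(data)     # resume the generator with the new data
            if not data:           # peer closed the connection
                socks.remove(s)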


Diez
--
http://mail.python.org/mailman/listinfo/python-list


Python generators (coroutines)

2008-04-23 Thread rocco . rossi
I would really like to know more about python 2.5's new generator
characteristics that make them more powerful and analogous to
coroutines. Is it possible for instance to employ them in situations
where I would normally use a thread with a blocking I/O (or socket)
operation? If it is, could someone show me how it can be done? There
appears to be a very limited amount of documentation in this respect,
unfortunately.

Thank you.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Coroutines and argument tupling

2007-08-16 Thread Bjoern Schliessmann
Marshall T. Vandegrift wrote:

> I'd seen the consumer decorator, and it certainly is cleaner than
> just using a generator.  I don't like how it hides the parameter
> signature in the middle of the consumer function though, and it
> also doesn't provide for argument default values.  

Mh, that may be. :( I haven't used it much yet.
 
> Thanks in any case for the replies!  Since I've apparently decided
> my ArgPacker is worth it, the complete code for my coroutine
> decorator follows.

Thanks for reporting back, and happy coding.

Regards,


Björn

-- 
BOFH excuse #198:

Post-it Note Sludge leaked into the monitor.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Coroutines and argument tupling

2007-08-16 Thread Marshall T. Vandegrift
Bjoern Schliessmann <[EMAIL PROTECTED]> writes:

> The solution I'd use is a decorator that calls next automatically one
> time after instantiation. Then you can use send normally, and don't
> have to care about any initial parameters, which makes the code
> clearer (initial parameters should be used for setup purposes, but not
> for the first iteration, IMHO). It'd look like this (from PEP 342,
> http://www.python.org/dev/peps/pep-0342/):

I'd seen the consumer decorator, and it certainly is cleaner than just
using a generator.  I don't like how it hides the parameter signature in
the middle of the consumer function though, and it also doesn't provide
for argument default values.  It's the difference between:

    ...
    def __init__(self, ...):
        ...
        self.consumer = self._function(value)
        ...

    def function(self, first, second=3, *args, **kwargs):
        self.consumer.send((first, second, args, kwargs))

    @consumer
    def _function(self, setup):
        ...
        first, second, args, kwargs = yield # initial 'next'
        while condition:
            ...
            first, second, args, kwargs = yield retval

Versus just:

    @coroutine
    def function(self, first, second=3, *args, **kwargs):
        ...
        while condition:
            ...
            first, second, args, kwargs = yield retval

Thanks in any case for the replies!  Since I've apparently decided my
ArgPacker is worth it, the complete code for my coroutine decorator
follows.

-Marshall


import inspect
import types
import functools
from itertools import izip

__all__ = [ 'coroutine' ]

class ArgPacker(object):
    def __init__(self, function):
        args, varargs, varkw, defaults = inspect.getargspec(function)
        self.args = args or []
        self.varargs = (varargs is not None) and 1 or 0
        self.varkw = (varkw is not None) and 1 or 0
        self.nargs = len(self.args) + self.varargs + self.varkw
        defaults = defaults or []
        defargs = self.args[len(self.args) - len(defaults):]
        self.defaults = dict([(k, v) for k, v in izip(defargs, defaults)])

    def pack(self, *args, **kwargs):
        args = list(args)
        result = [None] * self.nargs
        for i, arg in izip(xrange(len(self.args)), self.args):
            if args:
                result[i] = args.pop(0)
            elif arg in kwargs:
                result[i] = kwargs[arg]
                del kwargs[arg]
            elif arg in self.defaults:
                result[i] = self.defaults[arg]
            else:
                return None
        if self.varargs:
            result[len(self.args)] = args
        elif args:
            return None
        if self.varkw:
            result[-1] = kwargs
        elif kwargs:
            return None
        return tuple(result)

class coroutine(object):
    """Convert a function to be a simple coroutine.

    A simple coroutine is a generator bound to act as much as possible like a
    normal function.  Callers call the function as usual while the coroutine
    produces new return values and receives new arguments with `yield'.
    """
    def __init__(self, function):
        self.function = function
        self.gname = ''.join(['__', function.__name__, '_generator'])
        self.packer = ArgPacker(function)
        coroutine = self
        def method(self, *args, **kwargs):
            return coroutine.generate(self, self, *args, **kwargs)
        self.method = method
        functools.update_wrapper(self, function)
        functools.update_wrapper(method, function)

    def __get__(self, obj, objtype=None):
        return types.MethodType(self.method, obj, objtype)

    def __call__(self, *args, **kwargs):
        return self.generate(self, *args, **kwargs)

    def generate(self, obj, *args, **kwargs):
        try:
            generator = getattr(obj, self.gname)
        except AttributeError:
            generator = self.function(*args, **kwargs)
            setattr(obj, self.gname, generator)
            retval = generator.next()
        else:
            packed = self.packer.pack(*args, **kwargs)
            if packed is None:
                self.function(*args, **kwargs) # Should raise TypeError
                raise RuntimeError("ArgPacker reported spurious error")
            retval = generator.send(packed)
        return retval
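
For instance (a hypothetical example, not part of the original post), the
decorator above can be exercised like so:

@coroutine
def accumulate(x=0):
    total = 0
    while True:
        total += x
        (x,) = yield total

print accumulate(5)    # => 5 (first call primes the generator)
print accumulate(3)    # => 8 (later calls send the packed arguments)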

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Coroutines and argument tupling

2007-08-16 Thread Bjoern Schliessmann
Marshall T. Vandegrift wrote:
> Without the decorator that becomes:
> 
> gen = nextn(2)
> print gen.next()  # => [0, 1]
> print gen.send(3) # => [2, 3, 4]
> print gen.send(1) # => [5]
> 
> The former is just that smidgen nicer, and allows you to continue
> to make use of argument defaults and variadic arguments if so
> desired.

The solution I'd use is a decorator that calls next automatically
one time after instantiation. Then you can use send normally, and
don't have to care about any initial parameters, which makes the
code clearer (initial parameters should be used for setup purposes,
but not for the first iteration, IMHO). It'd look like this (from
PEP 342, http://www.python.org/dev/peps/pep-0342/):

def consumer(func):
    def wrapper(*args,**kw):
        gen = func(*args, **kw)
        gen.next()
        return gen
    wrapper.__name__ = func.__name__
    wrapper.__dict__ = func.__dict__
    wrapper.__doc__  = func.__doc__
    return wrapper

@consumer
def nextn():
    ...

gen = nextn()
print gen.send(2) # => [0, 1]
print gen.send(3) # => [2, 3, 4]
print gen.send(1) # => [5]

Regards,


Björn

-- 
BOFH excuse #60:

system has been recalled

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Coroutines and argument tupling

2007-08-15 Thread Marshall T. Vandegrift
"[EMAIL PROTECTED]" <[EMAIL PROTECTED]> writes:

> Do you really need a generator or co-routine to do this?  Maybe
> you can just use a closure:

For my trivial example, sure -- there are lots of ways to do it.  Here's
a slightly better example: the `read' method of a file-like object which
sequentially joins the contents of a sequence of other file-like
objects:

@coroutine
def read(self, size=-1):
    data = ''
    for file in self.files:
        this = file.read(size - len(data))
        data = ''.join([data, this])
        while this:
            if len(data) == size:
                _, size = (yield data)
                data = ''
            this = file.read(size - len(data))
            data = ''.join([data, this])
    yield data
    while True:
        yield ''

-Marshall

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Coroutines and argument tupling

2007-08-15 Thread [EMAIL PROTECTED]
On Aug 15, 3:37 pm, "Marshall T. Vandegrift" <[EMAIL PROTECTED]>
wrote:
> Bjoern Schliessmann <[EMAIL PROTECTED]> writes:
> >> I'm trying to write a decorator which allows one to produce simple
> >> coroutines by just writing a function as a generator expression
> >> which re-receives its arguments as a tuple from each yield.
>
> > May I ask why? Passing it the same arguments over and over is no
> > use; and there is the send method.
>
> That's what I meant.  The wrapper produced by the decorator passes the
> arguments back into the generator as a tuple via the `send' method.
>
> >> The ugliness of the ArgPacker class makes me suspect that I should
> >> perhaps just manually create and track a generator when I need a
> >> function with generator-like properties.
>
> > What do you mean? I don't quite understand why you'd have to "track"
> > a generator for getting generator-like properties.
>
> Using the trivial `nextn' example from my original post with my
> decorator lets you do just:
>
> print nextn(2)    # => [0, 1]
> print nextn(3)    # => [2, 3, 4]
> print nextn()     # => [5]
>
> Without the decorator that becomes:
>
> gen = nextn(2)
> print gen.next()  # => [0, 1]
> print gen.send(3) # => [2, 3, 4]
> print gen.send(1) # => [5]
>
> The former is just that smidgen nicer, and allows you to continue to
> make use of argument defaults and variadic arguments if so desired.
>


Do you really need a generator or co-routine to do this?  Maybe
you can just use a closure:

import itertools
class Foo(object):
    it = itertools.count(0)

    def __call__(self):
        return self.y

    def __init__(self, n=1):
        self.y = [ Foo.it.next() for each in xrange(n) ]

def nextn(n=1):
    return Foo(n)()

print nextn()
print nextn(3)
print nextn(n=2)
print nextn()

--
Hope this helps,
Steven





-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Coroutines and argument tupling

2007-08-15 Thread Marshall T. Vandegrift
Bjoern Schliessmann <[EMAIL PROTECTED]> writes:

>> I'm trying to write a decorator which allows one to produce simple
>> coroutines by just writing a function as a generator expression
>> which re-receives its arguments as a tuple from each yield.  
>
> May I ask why? Passing it the same arguments over and over is no
> use; and there is the send method.

That's what I meant.  The wrapper produced by the decorator passes the
arguments back into the generator as a tuple via the `send' method.

>> The ugliness of the ArgPacker class makes me suspect that I should
>> perhaps just manually create and track a generator when I need a
>> function with generator-like properties.
>
> What do you mean? I don't quite understand why you'd have to "track"
> a generator for getting generator-like properties.

Using the trivial `nextn' example from my original post with my
decorator lets you do just:

print nextn(2)    # => [0, 1]
print nextn(3)    # => [2, 3, 4]
print nextn()     # => [5]

Without the decorator that becomes:

gen = nextn(2)
print gen.next()  # => [0, 1] 
print gen.send(3) # => [2, 3, 4]
print gen.send(1) # => [5]

The former is just that smidgen nicer, and allows you to continue to
make use of argument defaults and varadic arguments if so desired.

-Marshall
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Coroutines and argument tupling

2007-08-15 Thread Bjoern Schliessmann
Marshall T. Vandegrift wrote:
> I'm trying to write a decorator which allows one to produce simple
> coroutines by just writing a function as a generator expression
> which re-receives its arguments as a tuple from each yield.  

May I ask why? Passing it the same arguments over and over is no
use; and there is the send method.

> The ugliness of the ArgPacker class makes me suspect that I should
> perhaps just manually create and track a generator when I need a
> function with generator-like properties.

What do you mean? I don't quite understand why you'd have to "track"
a generator for getting generator-like properties.

Regards,


Björn

-- 
BOFH excuse #103:

operators on strike due to broken coffee machine

-- 
http://mail.python.org/mailman/listinfo/python-list


Coroutines and argument tupling

2007-08-15 Thread Marshall T. Vandegrift
Hi,

I'm trying to write a decorator which allows one to produce simple
coroutines by just writing a function as a generator expression which
re-receives its arguments as a tuple from each yield.  For example:

@coroutine
def nextn(n=1):
    values = []
    for i in itertools.count():
        while n <= 0:
            (n,) = (yield values)
            values = []
        values.append(i)
        n = n - 1

print nextn()     # => [0]
print nextn(3)    # => [1, 2, 3]
print nextn(n=2)  # => [4, 5]
print nextn()     # => [6]

I've got this working, but have two questions.  First, is there a better
way to generically transform function arguments into a tuple than I'm
doing now?  Currently I do it with this pretty hideous chunk of code:

class ArgPacker(object):
    def __init__(self, function):
        args, varargs, varkw, defaults = inspect.getargspec(function)
        self.args = args or []
        self.varargs = (varargs is not None) and 1 or 0
        self.varkw = (varkw is not None) and 1 or 0
        self.nargs = len(self.args) + self.varargs + self.varkw
        defaults = defaults or []
        defargs = self.args[len(self.args) - len(defaults):]
        self.defaults = dict([(k, v) for k, v in izip(defargs, defaults)])

    def pack(self, *args, **kwargs):
        args = list(args)
        result = [None] * self.nargs
        for i, arg in izip(xrange(len(self.args)), self.args):
            if args:
                result[i] = args.pop(0)
            elif arg in kwargs:
                result[i] = kwargs[arg]
                del kwargs[arg]
            elif arg in self.defaults:
                result[i] = self.defaults[arg]
            else:
                return None
        if self.varargs:
            result[len(self.args)] = args
        elif args:
            return None
        if self.varkw:
            result[-1] = kwargs
        elif kwargs:
            return None
        return tuple(result)

I also tried a version using exec, which was much tighter, but used
exec.

Second, am I trying to hammer a nail with a glass bottle here?  The
ugliness of the ArgPacker class makes me suspect that I should perhaps
just manually create and track a generator when I need a function with
generator-like properties.

Thanks!

-Marshall

-- 
http://mail.python.org/mailman/listinfo/python-list