Re: Pythonic style

2020-09-21 Thread Léo El Amri via Python-list
On 21/09/2020 15:15, Tim Chase wrote:
> You can use tuple unpacking assignment and Python will take care of
> the rest for you:
> 
> so you can do
> 
>   def fn(iterable):
> x, = iterable
> return x
> 
> I'm not sure it qualifies as Pythonic, but it uses Pythonic features
> like tuple unpacking and the code is a lot more concise.

I think you nailed it. It is Pythonic, and I'd be surprised if someone
came up with something more Pythonic.

FYI: The behavior of this assignment is detailed here:
https://docs.python.org/3/reference/simple_stmts.html#assignment-statements
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Pythonic style

2020-09-21 Thread Léo El Amri via Python-list
On 21/09/2020 00:34, Stavros Macrakis wrote:
> I'm trying to improve my Python style.
> 
> Consider a simple function which returns the first element of an iterable
> if it has exactly one element, and throws an exception otherwise. It should
> work even if the iterable doesn't terminate. I've written this function in
> multiple ways, all of which feel a bit clumsy.
> 
> I'd be interested to hear thoughts on which of these solutions is most
> Pythonic in style. And of course if there is a more elegant way to solve
> this, I'm all ears! I'm probably missing something obvious!

Hello Stavros,

As there is no formal definition nor consensus on what is Pythonic and
what isn't, this reply will be very subjective.



My opinion on your code suggestions is the following:

> 1 def firstf(iterable):
> 2     n = -1
> 3     for n,i in enumerate(iterable):
> 4         if n>0:
> 5             raise ValueError("first1: arg not exactly 1 long")
> 6     if n==0:
> 7         return i
> 8     else:
> 9         raise ValueError("first1: arg not exactly 1 long")

firstf isn't Pythonic:

1. We are checking twice for the same thing, at line 4 (n>0)
   and line 6 (n==0).

2. We are using enumerate(), and inside the loop body we ignore the
   second element it yields in its tuple.

> 1 def firstd(iterable):
> 2     it = iter(iterable)
> 3     try:
> 4         val = next(it)
> 5     except StopIteration:
> 6         raise ValueError("first1: arg not exactly 1 long")
> 7     for i in it:
> 8         raise ValueError("first1: arg not exactly 1 long")
> 9     return val

firstd isn't Pythonic. While the use of a for statement in place of a
try..except saves two lines, it comes at the expense of a clear intent:
when I see a for statement, I expect a "complex" operation on the
iterable's items (which we are ignoring here).

>  1 def firsta(iterable):
>  2     it = iter(iterable)
>  3     try:
>  4         val = next(it)
>  5     except StopIteration:
>  6         raise ValueError("first1: arg not exactly 1 long")
>  7     try:
>  8         next(it)
>  9     except StopIteration:
> 10         return val
> 11     raise ValueError("first1: arg not exactly 1 long")

>  1 def firstb(iterable):
>  2     it = iter(iterable)
>  3     try:
>  4         val = next(it)
>  5     except StopIteration:
>  6         raise ValueError("first1: arg not exactly 1 long")
>  7     try:
>  8         next(it)
>  9     except StopIteration:
> 10         return val
> 11     else:
> 12         raise ValueError("first1: arg not exactly 1 long")

>  1 def firstc(iterable):
>  2     it = iter(iterable)
>  3     try:
>  4         val = next(it)
>  5     except StopIteration:
>  6         raise ValueError("first1: arg not exactly 1 long")
>  7     try:
>  8         next(it)
>  9         raise ValueError("first1: arg not exactly 1 long")
> 10     except StopIteration:
> 11         return val

firsta, firstb and firstc are equally Pythonic. I have a preference for
firsta, which is more concise and has a better "reading flow".

>  1 def firste(iterable):
>  2     it = iter(iterable)
>  3     try:
>  4         good = False
>  5         val = next(it)
>  6         good = True
>  7         val = next(it)
>  8         good = False
>  9         raise StopIteration   # or raise ValueError
> 10     except StopIteration:
> 11         if good:
> 12             return val
> 13         else:
> 14             raise ValueError("first1: arg not exactly 1 long")

firste might be Pythonic, although it's very "C-ish". I can grasp the
intent and there is no repetition. I wouldn't write the assignment at
line 7, though.



Mixing firsta and firste gives something more Pythonic:

def firstg(iterable):
    it = iter(iterable)
    try:
        val = next(it)
        try:
            next(it)
        except StopIteration:
            return val
    except StopIteration:
        pass
    raise ValueError("first: arg not exactly 1 long")

1. The code isn't repetitive (the "raise ValueError" is written
   only once).

2. The intent is a bit harder to grasp than for firsta or firste, but
   the code is shorter than firste.

3. Nesting try..except blocks is considered bad practice, but the code
   here is simple enough that it shouldn't trigger a strong aversion
   when reading it.
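
A quick check of firstg's behavior (repeating the definition above so
the snippet is self-contained):

```python
import itertools

def firstg(iterable):
    it = iter(iterable)
    try:
        val = next(it)
        try:
            next(it)
        except StopIteration:
            return val
    except StopIteration:
        pass
    raise ValueError("first: arg not exactly 1 long")

print(firstg([42]))  # 42

# Empty, too-long and non-terminating iterables are all rejected:
for bad in ([], [1, 2], itertools.count()):
    try:
        firstg(bad)
    except ValueError:
        print("rejected")
```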



- Léo


Re: Asyncio Queue implementation suggestion

2020-09-17 Thread Léo El Amri via Python-list
On 17/09/2020 16:51, Dennis Lee Bieber wrote:
> On Wed, 16 Sep 2020 13:39:51 -0400, Alberto Sentieri <2...@tripolho.com>
> declaimed the following:
> 
>> devices tested simultaneously, I soon run out of file descriptor. Well, 
>> I increased the number of file descriptor in the application and then I 
>> started running into problems like “ValueError: filedescriptor out of 
>> range in select()”. I guess this problem is related to a package called 
> 
> https://man7.org/linux/man-pages/man2/select.2.html
> """
> An fd_set is a fixed size buffer. 
> """
> and
> """
> On success, select() and pselect() return the number of file
>descriptors contained in the three returned descriptor sets (that
> is,
>the total number of bits that are set in readfds, writefds,
>exceptfds).
> """
> Emphasis "number of bits that are set" -- the two together imply that
> these are words/longwords/quadwords used as bitmaps, one fd per bit, and
> hence inherently limited to only as many bits as the word length supports.

By the way, Alberto, you can change the selector used by your event loop
by instantiating the loop class yourself [1]. You may want to use
selectors.PollSelector [2].

[1]
https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.SelectorEventLoop
[2] https://docs.python.org/3/library/selectors.html#selectors.PollSelector
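
A minimal sketch of what that looks like (selectors.PollSelector only
exists on Unix-like platforms, hence the fallback):

```python
import asyncio
import selectors

# Prefer poll() over select() to avoid the FD_SETSIZE (1024) limit.
selector = (selectors.PollSelector()
            if hasattr(selectors, "PollSelector")
            else selectors.DefaultSelector())
loop = asyncio.SelectorEventLoop(selector)
asyncio.set_event_loop(loop)
try:
    print(loop.run_until_complete(asyncio.sleep(0, result="ok")))  # ok
finally:
    loop.close()
```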

- Léo


Re: Asyncio Queue implementation suggestion

2020-09-17 Thread Léo El Amri via Python-list
Hello Alberto,

I've rearranged your original message a bit here.

> Apparently asyncio Queues use a Linux pipe and each queue require 2 file
> descriptors. Am I correct?

As far as I know (and I know a bit about asyncio in CPython 3.5+),
asyncio.queues.Queue doesn't use any file descriptor. It is implemented
using futures, which are themselves implemented tightly around the
inner workings of the event loop.

> I wrote a daemon in Python 3 (running in Linux) which test many devices
> at the same time, to be used in a factory environment. This daemon
> include multiple communication events to a back-end running in another
> country. I am using a class for each device I test, and embedded into
> the class I use asyncio. Due to the application itself and the number of
> devices tested simultaneously, I soon run out of file descriptor. Well,
> I increased the number of file descriptor in the application and then I
> started running into problems like “ValueError: filedescriptor out of
> range in select()”. I guess this problem is related to a package called
> serial_asyncio, and of course, that could be corrected. However I became
> curious about the number of open file descriptors opened: why so many?

I don't know of a CPython internal called serial_asyncio; it looks like
a third-party package.

I wonder whether the issue is coming from too many file descriptors...
1. If so, then I assume the issue you encounter with the selector is
coming from the number of sockets you open (too many sockets).
2. Otherwise it could come from the fact that some sockets get
destroyed before the selector gets a chance to await on their file
descriptors.

Without a complete traceback I can't be 100% certain, but the error
message you get looks like an EBADF error, so the second hypothesis is
my preferred one.

Have you tried enabling asyncio's debug mode? I hope it can give you
more information on why this is occurring. I believe it's a bug, not a
limitation of CPython.
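
For reference, here is one way to turn on asyncio's debug mode when you
manage the loop yourself (setting the PYTHONASYNCIODEBUG=1 environment
variable works too):

```python
import asyncio

loop = asyncio.new_event_loop()
loop.set_debug(True)  # same effect as running with PYTHONASYNCIODEBUG=1
print(loop.get_debug())  # True
loop.close()
```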

- Léo


Re: There is LTS?

2020-08-24 Thread Léo El Amri via Python-list
On 24/08/2020 04:54, 황병희 wrote:
> Hi, just i am curious. There is LTS for *Python*? If so, i am very thank
> you for Python Project.

Hi Byung-Hee,

Does the "LTS" acronym you are using here stand for "Long Term Support"?

If so, then the short answer is: yes, kind of. There are 5 years of
maintenance for each major version of Python 3.

You can find information on how Python is released and maintained at PEP
101 [1] and on the developers documentation [2].

Each major Python release typically has a PEP associated with it where
past minor release dates and future plans are recorded.
For example: PEP 494 [3] for Python 3.6, PEP 537 [4] for Python 3.7 and
PEP 569 [5] for Python 3.8.

[1] https://www.python.org/dev/peps/pep-0101/
[2] https://devguide.python.org/devcycle/
[3] https://www.python.org/dev/peps/pep-0494/
[4] https://www.python.org/dev/peps/pep-0537/
[5] https://www.python.org/dev/peps/pep-0569/


Re: Asyncio tasks getting cancelled

2018-11-05 Thread Léo El Amri via Python-list
On 05/11/2018 16:38, i...@koeln.ccc.de wrote:
> I just saw, actually
> using the same loop gets rid of the behavior in this case and now I'm
> not sure about my assertions any more.

It's fixing the issue because you're now running the loop with
run_forever(). As Ian and I pointed out, using both asyncio.run() and
loop.run_forever() is not what you are looking for.

> Yet it still looks like asyncio
> doesn't keep strong references.

You may want to look at PEP 3156 [1], the PEP defining asyncio.
Ian made a good explanation of why your loop wasn't running the
coroutine scheduled from main().

>> This should indeed fix the issue, but this is definitely not what one is
>> looking for if one really want to _schedule_ a coroutine.
> 
> Which is what I want in this case. Scheduling a new (long-running) task
> as a side effect, but returning early oneself. The new task can't be
> awaited right there, because the creating one should return already.
> 
>> If the goal here is for the task created by main() to complete before
>> the loop exits, then main() should await it, and not just create it
>> without awaiting it.
> 
> So if this happens somewhere deep in the hierarchy of your application
> you would need some mechanism to pass the created tasks back up the
> chain to the main function?

Sorry, I don't get what you're trying to express.

Either you await a coroutine, which means the calling one is "paused"
until the awaited coroutine finishes its job, or you schedule a
coroutine, which means it will run once the events/coroutines already
queued on the loop have all run.

In your case, you probably want to change

> loop = asyncio.get_event_loop()
> asyncio.run(main())
> loop.run_forever()

to

> loop = asyncio.get_event_loop()
> loop.create_task(main())
> loop.run_forever()
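
A runnable sketch of that pattern (the call_later(..., loop.stop) line
is only added so the example terminates; a real program would let
run_forever() keep running):

```python
import asyncio

async def foobar():
    print(1)
    await asyncio.sleep(0.01)
    print(2)

async def main():
    # Schedule foobar() and return immediately; run_forever() keeps
    # the loop (and thus the task) alive after main() is done.
    asyncio.get_running_loop().create_task(foobar())

loop = asyncio.new_event_loop()
loop.create_task(main())
loop.call_later(0.2, loop.stop)  # only so this sketch terminates
loop.run_forever()
loop.close()
```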

- Léo

[1] https://www.python.org/dev/peps/pep-3156/


Re: Asyncio tasks getting cancelled

2018-11-05 Thread Léo El Amri via Python-list
On 05/11/2018 07:55, Ian Kelly wrote:
>> I assume it's kind of a run_forever() with some code before it
>> to schedule the coroutine.
> 
> My understanding of asyncio.run() from
> https://github.com/python/asyncio/pull/465 is that asyncio.run() is
> more or less the equivalent of loop.run_until_complete() with
> finalization, not loop.run_forever(). Thus this result makes sense to
> me: asyncio.run() runs until main() finishes, then stops. Without the
> sleep(2), main() starts creates the foobar() task and then returns
> without awaiting it, so the sleep(1) never finishes. asyncio.run()
> also finalizes its event loop, so I assume that the loop being
> run_forever must be a different event loop since running it doesn't
> just raise an exception. Therefore it doesn't resume the foobar()
> coroutine since it doesn't know about it.

That totally makes sense. I agree with this supposition.


> If the goal here is for the task created by main() to complete before
> the loop exits, then main() should await it, and not just create it
> without awaiting it.

I said the following:

> Usually, when you're already running into a coroutine, you're using
> "await other_coroutine()" instead of
> "asyncio.create_task(other_coroutine())".

Which is not accurate. What Ian said is accurate. One may indeed need
to schedule a coroutine in some situations. I just assumed that wasn't
what the OP intended to do, given the name of the "main" function.

I also said:

> But anyway, I highly recommend you to use the "await other_coroutine()"
> syntax I talked about earlier. It may even fix the issue (90% chance).

This should indeed fix the issue, but this is definitely not what one is
looking for if one really want to _schedule_ a coroutine.

- Léo


Re: Asyncio tasks getting cancelled

2018-11-04 Thread Léo El Amri via Python-list
On 04/11/2018 20:25, i...@koeln.ccc.de wrote:
> I'm having trouble with asyncio. Apparently tasks (asyncio.create_task)
> are not kept referenced by asyncio itself, causing the task to be
> cancelled when the creating function finishes (and noone is awaiting the
> corresponding futue). Am I doing something wrong or is this expected
> behavior?
> 
> The code sample I tried:
> 
>> import asyncio
>>
>> async def foobar():
>>     print(1)
>>     await asyncio.sleep(1)
>>     print(2)
>>
>> async def main():
>>     asyncio.create_task(foobar())
>>     #await asyncio.sleep(2)
>>
>> loop = asyncio.get_event_loop()
>> asyncio.run(main())
>> loop.run_forever()
> 

I don't know anything about asyncio in Python 3.7, but according to the
documentation, asyncio.run() will start a loop, run the coroutine on it
until there is nothing left to do, then free the loop it created. I
assume it's kind of a run_forever() with some code before it to
schedule the coroutine.

Thus, I don't think it's appropriate to allocate the loop yourself
with get_event_loop() and then run it with run_forever().
Usually, when you're already running inside a coroutine, you use
"await other_coroutine()" instead of
"asyncio.create_task(other_coroutine())".

But it should not be the root cause of your issue. With this
information only, it looks like a Python bug to me (or an
implementation-defined behavior; by the way, what is your platform?).

You could try to replace either asyncio.create_task() or
asyncio.run() with the "older" Python API (Python 3.6, preferably).
In the case of asyncio.run() it would be a simple
loop.create_task(main()) instead of asyncio.run(main()).
In the case of asyncio.create_task() it would be
asyncio.get_event_loop().create_task(foobar()).
But anyway, I highly recommend you to use the "await other_coroutine()"
syntax I talked about earlier. It may even fix the issue (90% chance).

I don't have Python 3.7 right now, so I can't help further. I hope
someone with knowledge in asyncio under Python 3.7 will be able to help.

- Léo


Re: Package creation documentation?

2018-10-16 Thread Léo El Amri via Python-list
Hello Spencer,

On 16/10/2018 17:15, Spencer Graves wrote:
>   Where can I find a reasonable tutorial on how to create a Python
> package?

IMO, the best documentation about this is the tutorial:
https://docs.python.org/3/tutorial/modules.html#packages

>   According to the Python 3 Glossary, "a package is a Python module
> with an __path__ attribute."[1]

What you are looking at are the technical details of what a package is.
Incidentally, if you follow the tutorial, everything will fall into place.

>   I found "packaging.python.org", which recommends "Packaging Python
> Projects"[2] and "An Overview of Packaging for Python".[3]

packaging.python.org is centered on how to install and distribute
Python packages (or modules).

- Léo


Re: Package creation documentation?

2018-10-16 Thread Léo El Amri via Python-list
Given your coding experience, you may also want to look at
https://docs.python.org/3/reference/import.html#packages,
which covers the technical details of what a package is (and "how" it's
implemented).


Re: asyncio await different coroutines on the same socket?

2018-10-03 Thread Léo El Amri via Python-list
Hello Russell,

On 03/10/2018 15:44, Russell Owen wrote:
> Using asyncio I am looking for a simple way to await multiple events where 
> notification comes over the same socket (or other serial stream) in arbitrary 
> order. For example, suppose I am communicating with a remote device that can 
> run different commands simultaneously and I don't know which command will 
> finish first. I want to do this:
> 
> coro1 = start(command1)
> coro2 = start(command2)
> asyncio.gather(coro1, coro2)
> 
> where either command may finish first. I’m hoping for a simple and 
> idiomatic way to read the socket and tell each coroutine it is done. So far 
> everything I have come up with is ugly, using multiple layers of "async 
> def”, keeping a record of Tasks that are waiting and calling "set_result" 
> on those Tasks when finished. Also Task isn’t even documented to have the 
> set_result method (though "future" is)
I don't really get what you want to achieve. Do you want to signal the
other coroutines that one of them finished?

From what I understand, you want several coroutines reading on the same
socket "simultaneously", and you want to stop all of them once one of
them is finished. Am I getting it right?

-- 
Léo


Re: [OT] master/slave debate in Python

2018-09-26 Thread Léo El Amri via Python-list
On 26/09/2018 06:34, Ian Kelly wrote:
> Chris Angelico  wrote:
>> What I know about them is that they (and I am assuming there are
>> multiple people, because there are reports of multiple reports, if
>> that makes sense) are agitating for changes to documentation without
>> any real backing.
> 
> The terminology should be changed because it's offensive, full stop.
> It may be normalized to many who are accustomed to it, but that
> doesn't make it any less offensive.

Come on! I have nothing to add to what Terry said:
https://bugs.python.org/msg324773

Now, the bug report does point out some places in the code where the
master/slave terminology is misused, for _technical_ reasons. And none
of us should be blinded by the non-technical motive of the bug report.
We should do what we have to do, and just let the rest of it sink.

> Imagine if the terminology were instead "dominant / submissive".
> Without meaning to assume too much, might the cultural context
> surrounding those terms make you feel uncomfortable when using them?

Here as well, I couldn't care less. The meaning of words is given by
the context.


Re: [OT] master/slave debate in Python

2018-09-24 Thread Léo El Amri via Python-list
On 24/09/2018 18:30, Dan Purgert wrote:
> Robin Becker wrote:
>> [...] just thought control of the wrong sort..
> 
> Is there "thought control of the right sort"?

We may have to ask to Huxley


Re: [OT] master/slave debate in Python

2018-09-24 Thread Léo El Amri via Python-list
On 24/09/2018 14:52, Robin Becker wrote:
> On 23/09/2018 15:45, Albert-Jan Roskam wrote:
>> *sigh*. I'm with Hettinger on this.
>>
>> https://www.theregister.co.uk/2018/09/11/python_purges_master_and_slave_in_political_pogrom/
>>
>>
> I am as well. Don't fix what is not broken. The semantics (in
> programming) might not be an exact match, but people have been using
> these sorts of terms for a long time without anyone objecting. This sort
> of language control is just thought control of the wrong sort.

All the admissible arguments have already been made, thanks to Terry,
Steven, Raymond and others, who all managed to keep their ability to
think rationally despite the motive of this bug report.

Hopefully we have competent core devs.


>< swap operator

2018-08-13 Thread Léo El Amri via Python-list
On 13/08/2018 21:54, skybuck2...@hotmail.com wrote:
> I just had a funny idea how to implement a swap operator for types:
> 
> A >< B
> 
> would mean swap A and B.

I think that:

a, b = b, a

is pretty enough


Python 3.6 Logging time is not listed

2018-08-13 Thread Léo El Amri via Python-list
On 13/08/2018 19:23, MRAB wrote:
> Here you're configuring the logger, setting the name of the logfile and
> the logging level, but not specifying the format, so it uses the default
> format:
> 
>> logging.basicConfig(filename='example.log',level=logging.DEBUG)
> 
> Here you're configuring the logger again, this time specifying the
> format and the logging level, but not a path of a logging file, so it'll
> write to the console:
> 
>> logging.basicConfig(format='%(asctime)s;%(levelname)s:%(message)s',
>> level=logging.DEBUG)
> 
> The second configuration is overriding the first.

No, the second call is not overriding the first one; it simply does
nothing at all. See
https://docs.python.org/3/library/logging.html#logging.basicConfig :
"This function does nothing if the root logger already has handlers
configured for it."


Re: asyncio: Warning message when waiting for an Event set by AbstractLoop.add_reader

2018-08-12 Thread Léo El Amri via Python-list
I found out what the problem was.

The behavior of my "reader" (the callback passed to
AbstractEventLoop.add_reader()) is to set an event. This event is
awaited in a coroutine which actually reads what is written on a
pipe. The execution flow is the following:

* NEW LOOP TURN
* The selector awaits a read-ready state on a file descriptor AND the
coroutine awaits the event
* At some point, the file descriptor becomes read-ready, and the
selector queues the callback that sets the event on the loop
* The callback put on the queue is called and the event is set
* NEW LOOP TURN
* The selector awaits a read-ready state on a file descriptor AND the
coroutine awaits the event
* The file descriptor is read-ready, so the selector queues the
callback that sets the event on the loop
* The coroutine is resumed, because the event is now set; the coroutine
then reads what was written on the file descriptor, clears the event
and awaits it again
* The callback put on the queue is called and the event is set
* NEW LOOP TURN
* The selector awaits a read-ready state on a file descriptor AND the
coroutine awaits the event
* The coroutine is resumed, because the event was set at the end of the
last turn, so the coroutine attempts a read, but blocks on it, because
the file descriptor is actually not read-ready

Rinse and repeat.

This behavior actually depends on how the loop is implemented. So the
best practice here is to do the read directly from the callback you
give to AbstractEventLoop.add_reader(). I think we should put a note
about this in the documentation.
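
A sketch of that best practice: read directly in the add_reader()
callback and hand the data to the coroutine through an asyncio.Queue
(the names here are illustrative):

```python
import asyncio
import os

def demo():
    r, w = os.pipe()
    loop = asyncio.new_event_loop()

    async def consumer():
        queue = asyncio.Queue()  # created inside the running loop

        def on_readable():
            # The fd is guaranteed read-ready here, so this cannot block.
            queue.put_nowait(os.read(r, 4))

        loop.add_reader(r, on_readable)
        try:
            while True:
                msg = await queue.get()
                if msg == b'STOP':
                    return
                print(msg.decode())
        finally:
            loop.remove_reader(r)

    os.write(w, b'TST1')
    os.write(w, b'STOP')
    loop.run_until_complete(consumer())
    loop.close()
    os.close(r)
    os.close(w)

demo()
```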


Re: Embedded Python and multiprocessing on Windows?

2018-08-10 Thread Léo El Amri via Python-list
On 09/08/2018 19:33, Apple wrote:
> So my program runs one script file, and multiprocessing commands from
> that script file seem to fail to spawn new processes.
> 
> However, if that script file calls a function in a separate script file that 
> it has imported, and that function calls multiprocessing functions, it all 
> works exactly the way it should.
> On Thursday, August 9, 2018 at 12:09:36 PM UTC-4, Apple wrote:
>> I've been working on a project involving embedding Python into a Windows 
>> application. I've got all of that working fine on the C++ side, but the 
>> script side seems to be hitting a dead end with multiprocessing. When my 
>> script tries to run the same multiprocessing code that works in a 
>> non-embedded environment, the code doesn't appear to be executed at all.
>>
>> Still no joy. However, a Python.exe window does pop up for a tenth of a 
>> second, so *something* is happening.

That may be something simple: did you actually protect the entry point
of your Python script with if __name__ == '__main__': ?
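
For context, the reason this guard matters: on Windows, multiprocessing
spawns a fresh interpreter that re-imports the main script, so any
unguarded top-level code (including the process-spawning code itself)
runs again in each child. A minimal sketch:

```python
import multiprocessing

def worker(n):
    return n * n

if __name__ == '__main__':
    # Without the guard above, each spawned child would re-execute
    # this block and try to spawn children of its own.
    with multiprocessing.Pool(2) as pool:
        print(pool.map(worker, [1, 2, 3]))  # [1, 4, 9]
```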


Re: asyncio: Warning message when waiting for an Event set by AbstractLoop.add_reader

2018-08-03 Thread Léo El Amri via Python-list
Thanks for the answer, but the problem is that this is happening with
the built-in Event of the asyncio package, whose wait() is a coroutine.
I don't expect the built-in to have this kind of behavior. I guess I'll
have to dig into the source code of the asyncio default loop to
actually understand how the whole thing behaves in the shadows. Maybe
I'm simply doing something wrong. But that would mean the documentation
is lacking some details, or maybe I'm just really stupid on this one.

On 03/08/2018 07:05, dieter wrote:
> Léo El Amri via Python-list  writes:
>> ...
>> WARNING:asyncio:Executing  took 1.000 seconds
>> ...
>>  But there is still this warning...
> 
> At your place, I would look at the code responsible for the warning.
> I assume that it is produced because the waiting time is rather
> high -- but this is just a guess.
> 


asyncio: Warning message when waiting for an Event set by AbstractLoop.add_reader

2018-08-02 Thread Léo El Amri via Python-list
Hello list,

During my attempt to bring asyncio support to the multiprocessing Queue,
I found warning messages when executing my code with asyncio debug
logging enabled.

It seems that awaiting for an Event will make theses messages appears
after the second attempt to wait for the Event.

Here is a code sample to test this (47 lines long):

import asyncio
import threading
import os
import io
import time
import logging
logging.basicConfig()
logging.getLogger('asyncio').setLevel(logging.DEBUG)
fd1, fd2 = os.pipe()  # (r, w)
loop = asyncio.get_event_loop()
event = asyncio.Event()
def readfd(fd, size):
    buf = io.BytesIO()
    handle = fd
    remaining = size
    while remaining > 0:
        chunk = os.read(handle, remaining)
        n = len(chunk)
        if n == 0:
            raise Exception()
        buf.write(chunk)
        remaining -= n
    return buf.getvalue()
def f():
    loop.add_reader(fd1, event.set)
    loop.run_until_complete(g())
    loop.close()
async def g():
    while True:
        await event.wait()
        msg = readfd(fd1, 4)
        event.clear()
        if msg == b'STOP':
            break
        print(msg)
if __name__ == '__main__':
    t = threading.Thread(target=f)
    t.start()
    time.sleep(1)
    os.write(fd2, b'TST1')
    time.sleep(1)
    os.write(fd2, b'TST2')
    time.sleep(1)
    os.write(fd2, b'TST3')
    time.sleep(1)
    os.write(fd2, b'STOP')
    t.join()

I get this output:

DEBUG:asyncio:Using selector: EpollSelector
DEBUG:asyncio:poll took 999.868 ms: 1 events
b'TST1'
b'TST2'
WARNING:asyncio:Executing  took 1.001 seconds
INFO:asyncio:poll took 1000.411 ms: 1 events
b'TST3'
WARNING:asyncio:Executing  took 1.001 seconds
DEBUG:asyncio:Close <_UnixSelectorEventLoop [CUT]>

I don't understand the warning, because it seems that the event is
awaited properly. I'm especially puzzled because I get this output from
the actual code I'm writing:

WARNING:asyncio:Executing  took 1.000 seconds
DEBUG:asyncio:poll took 0.012 ms: 1 events

Here, it seems that "polling" went on for 0.012 ms, when it should have
gone on for 1000.0 ms. In any case, it still works; however, the fact
that polling seems to "return" instantly is strange.

The test code does not reproduce this difference. In fact, polling
seems to work as intended (polling takes 1000.0 ms and the Event is
awaited for 1000.0 ms). But there is still this warning...

Maybe one of you has an insight on this?


Why is multiprocessing.Queue creating a thread ?

2018-07-29 Thread Léo El Amri via Python-list
Hello list,

This is a simple question: I wonder what the reason is behind
multiprocessing.Queue creating a thread to send objects through a
multiprocessing.connection.Connection.
I plan to implement an asyncio "aware" Connection class, and while
reading the source code of the multiprocessing module, I found that (as
outlined in the documentation) Queue does indeed create a thread. I
want to know if I'm missing something.
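
For what it's worth, the feeder thread is easy to observe (the thread
name checked below matches current CPython sources, but it is an
implementation detail):

```python
import multiprocessing
import threading

q = multiprocessing.Queue()
q.put('hello')   # the first put() lazily starts the feeder thread
print(q.get())   # hello

# The feeder thread serializes objects and writes them to the pipe in
# the background, so put() itself never blocks on the pipe buffer.
print(any(t.name == 'QueueFeederThread' for t in threading.enumerate()))
```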

-- 
Leo