Jonas Obrist <ojiido...@gmail.com> added the comment:
I realized I have to call __await__ on the inner coroutine object in
NonTrueAwaitable.__await__. This is not a bug; it was my mistake.
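A minimal sketch of that fix (only the name NonTrueAwaitable comes from this report; the wrapper structure, the falsy __bool__, and the inner coroutine are assumptions, since the attached code is not shown):

```python
import asyncio


class NonTrueAwaitable:
    """Hypothetical awaitable wrapping a coroutine; details are assumed."""

    def __init__(self, coro):
        self._coro = coro

    def __bool__(self):
        # Assumed from the class name: the object itself is falsy.
        return False

    def __await__(self):
        # The fix described above: delegate to the inner coroutine's
        # __await__ instead of returning the coroutine object itself.
        return self._coro.__await__()


async def inner():
    return 42


async def main():
    return await NonTrueAwaitable(inner())


result = asyncio.run(main())
print(result)  # 42
```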
--
resolution: -> not a bug
stage: -> resolved
status: open -> closed
Jonas Obrist <ojiido...@gmail.com> added the comment:
On 9c463ec88ba21764f6fff8e01d6045a932a89438 (master/3.7) both cases fail to
execute. I would argue that this code should be allowed...
--
___
Python tracker <rep...@bugs.python.org>
Jonas Obrist <ojiido...@gmail.com> added the comment:
I've just realized the difference between the environments wasn't the operating
system but PYTHONASYNCIODEBUG. If it is set, the code works; if it is unset,
the code does not work. See the updated (attached) code for reference.
New submission from Jonas Obrist <ojiido...@gmail.com>:
The attached code runs fine on macOS using 3.6.5 from Homebrew. However, on
Windows (I tested on 3.6.4 with the 32-bit installer from the website) and Linux
(using the python:3.6.5 Docker image) it errors with "TypeError: ca
Changes by Jonas Obrist <ojiido...@gmail.com>:
--
resolution: -> duplicate
status: open -> closed
Changes by Jonas Obrist <ojiido...@gmail.com>:
Added file: http://bugs.python.org/file40248/process_segfault.py
___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue24927>
New submission from Jonas Obrist:
When using multiprocessing.Pool, if the function run in the pool segfaults, the
program will simply hang forever. However, when using multiprocessing.Process
directly, it runs fine, setting the exitcode to -11 as expected.
I would expect the Pool to behave the same way.
Jonas Obrist added the comment:
So the reason this is happening is very simple: when using Pool.apply, the
task (a function) is sent to the task queue, which is consumed by a worker. At
that point the task is in progress. However, the worker dies without being
able to finish the task