Thomas Moreau added the comment:
I did GH 19788 with a few modifications. There is only one lock that seems to
matter for the performance, and I actually added one other (the one in
_python_exit, which necessitates another bug fix for the fork context).
I did not benchmark to see if it was worth it in
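For reference, a minimal sketch of the locking pattern in question, with
illustrative names (an assumption about the shape of the change, not the merged
code): submit() and the _python_exit handler both touch shared shutdown state,
so both take the same module-level lock.
```
import threading

# Illustrative names: a module-level lock shared by submit() and the
# _python_exit handler, plus the per-executor shutdown lock.
_global_shutdown = False
_global_shutdown_lock = threading.Lock()

def _python_exit():
    global _global_shutdown
    with _global_shutdown_lock:
        _global_shutdown = True
    # ... wake up and join the management threads ...

class SketchExecutor:
    def __init__(self):
        self._shutdown_lock = threading.Lock()

    def submit(self, fn, *args):
        # The second, module-level lock is the "one other" lock mentioned
        # above: it prevents racing with _python_exit at interpreter exit.
        with self._shutdown_lock, _global_shutdown_lock:
            if _global_shutdown:
                raise RuntimeError("cannot schedule new futures after "
                                   "interpreter shutdown")
            # ... enqueue (fn, args) and wake the management thread ...
```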
Change by Thomas Moreau :
--
pull_requests: +19111
pull_request: https://github.com/python/cpython/pull/19788
Thomas Moreau added the comment:
I think this is a reasonable way to move on. Some of the locks can probably be
removed, but this needs careful investigation and in the meantime it hinders
everyone. Thanks Victor for the fast fix-up!
To me, an interesting observation is that the failure
Thomas Moreau added the comment:
Sorry, I just saw this. It seems that I introduced this regression.
One of the goals of having a `ThreadWakeup` and not a `SimpleQueue` is to avoid
using locks that can hinder the performance of the executor. I don't remember
the exact details.
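As an illustration of that design, a minimal sketch of a pipe-based wakeup (a
hypothetical class, not CPython's actual `_ThreadWakeup`): waking the
management thread is a single one-byte write, which needs no lock, whereas
`SimpleQueue.put` takes a write lock around each send.
```
import os
import select


class PipeWakeup:
    """Hypothetical lock-free wakeup object built on a bare pipe."""

    def __init__(self):
        self._read_fd, self._write_fd = os.pipe()

    def wakeup(self):
        # A one-byte write is atomic: no lock on the hot path.
        os.write(self._write_fd, b"\0")

    def wait(self, timeout=None):
        # The management thread blocks here (in reality it also watches
        # the result pipe).
        ready, _, _ = select.select([self._read_fd], [], [], timeout)
        return bool(ready)

    def clear(self):
        # Drain whatever wakeup bytes have accumulated.
        while select.select([self._read_fd], [], [], 0)[0]:
            os.read(self._read_fd, 4096)

    def close(self):
        os.close(self._read_fd)
        os.close(self._write_fd)
```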
Change by Thomas Moreau :
--
keywords: +patch
pull_requests: +17931
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/18551
New submission from Thomas Moreau :
As discussed in GH#17670, the `_queue_management_worker` function has grown
quite long and complicated.
It could be turned into an object with a bunch of short and readable helper
methods.
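A sketch of what that could look like, with illustrative names (not the final
code): each step of the monolithic loop becomes a short, named helper method.
```
import queue
import threading

class QueueManager(threading.Thread):
    """Each step of the old monolithic loop is a named helper method."""

    def __init__(self, work_queue):
        super().__init__(daemon=True)
        self.work_queue = work_queue
        self.results = []

    def run(self):
        while True:
            item = self.wait_for_item()
            if item is None:            # sentinel: shutdown requested
                return
            self.process_item(item)

    def wait_for_item(self):
        return self.work_queue.get()

    def process_item(self, item):
        self.results.append(item * 2)

q = queue.Queue()
manager = QueueManager(q)
manager.start()
for i in range(3):
    q.put(i)
q.put(None)                             # ask the manager to stop
manager.join()
print(manager.results)                  # [0, 2, 4]
```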
--
components: Library (Lib)
messages: 362218
nosy
Change by Thomas Moreau :
--
keywords: +patch
pull_requests: +17134
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/17670
New submission from Thomas Moreau :
The attached script hangs on Python 3.7+.
This is due to the fact that the main process closes the communication channels
directly while the queue_management_thread might still use them.
To prevent that, all the closing should be handled by the
queue_management_thread.
Thomas Moreau added the comment:
The deadlocks I refer to in this issue are fixed by PR #3895.
Subsequent failures (like the fact that the Executor is set in a broken state
when there is an unpickling error) are tracked in other issues, so I think it
is safe to close this one.
Change by Thomas Moreau :
--
keywords: +patch
pull_requests: +13158
stage: -> patch review
Python tracker
<https://bugs.python.org/issue36888>
New submission from Thomas Moreau :
In the std lib, the semaphore_tracker and the Manager rely on daemonized
processes that are launched with server-like loops. The cleanup of such
processes is made complicated by the fact that there is no canonical way to
check that the parent process is alive.
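A sketch of one such check, under the assumption that the helper process
inherits the read end of a pipe whose write end stays in the parent: reading
EOF means every writer, hence the parent, is gone (POSIX-only demo, with a
hypothetical command format).
```
import os

def serve_forever(read_fd):
    """Server loop: exits and cleans up once the parent closes the pipe."""
    with os.fdopen(read_fd, "rb") as pipe:
        for line in pipe:
            print("command from parent:", line.strip())
        # EOF: no writer is left, so the parent has exited.
        print("parent is gone, cleaning up")

if __name__ == "__main__":          # POSIX-only demo
    r, w = os.pipe()
    if os.fork() == 0:              # child plays the tracker role
        os.close(w)
        serve_forever(r)
        os._exit(0)
    os.close(r)
    os.write(w, b"PROBE:sem_1\n")   # hypothetical command format
    os.close(w)                     # parent "exits": child sees EOF
    os.wait()
```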
Change by Thomas Moreau :
--
keywords: +patch
pull_requests: +12805
stage: -> patch review
Python tracker
<https://bugs.python.org/issue36668>
New submission from Thomas Moreau :
The current implementation of the semaphore_tracker creates a new process for
each child.
The easy fix would be to pass the _pid to the children, but the current
mechanism to check that the semaphore_tracker is alive relies on waitpid, which
cannot be used on a process that is not one's own child.
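A sketch of an aliveness check that does not need waitpid at all (hypothetical
helper; the command string is illustrative): writing a no-op probe to the
tracker's command pipe fails with a broken pipe once the tracker has died,
regardless of who the caller's parent is.
```
import os

def tracker_is_alive(tracker_write_fd):
    """Probe the tracker through its command pipe instead of waitpid()."""
    try:
        os.write(tracker_write_fd, b"PROBE\n")   # no-op command
    except OSError:
        return False    # broken pipe: the tracker process is gone
    return True
```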
Thomas Moreau added the comment:
This behavior results from the fact that in 3.6 the result_queue is used to
pass messages to the queue_manager_thread. This has been changed in 3.7, where
we rely on a _ThreadWakeup object instead.
In 3.6, when the result_queue is filled with many large
Change by Thomas Moreau :
--
pull_requests: +5929
stage: needs patch -> patch review
Python tracker
<https://bugs.python.org/issue33078>
Change by Thomas Moreau :
--
keywords: +patch
pull_requests: +5883
stage: -> patch review
Python tracker
<https://bugs.python.org/issue33078>
New submission from Thomas Moreau :
The fix for the Queue._feeder does not properly handle the size of the Queue.
This can lead to a situation where the Queue is considered full when it is
actually empty. Here is a reproducing script:
```
import multiprocessing as mp

q = mp.Queue(1)

class FailPickle:
    """An object whose pickling always fails in the feeder thread."""
    def __reduce__(self):
        raise ValueError("failing pickle")

# The feeder thread fails to pickle the item, but the queue's bounded
# semaphore is never released, so the (empty) queue stays "full".
q.put(FailPickle())
q.put(0)        # blocks forever: the queue is believed to be full
print(q.get())
```
New submission from Thomas Moreau :
The recent changes introduced by https://github.com/python/cpython/pull/3895
leak some file descriptors (the Pipe opened in _ThreadWakeup).
They should be properly closed at shutdown.
--
components: Library (Lib)
messages: 313656
nosy: tomMoral
Thomas Moreau added the comment:
> Is it an optimization problem, or does it risk leaking semaphores?
I do not think it risks leaking semaphores, as the clean-up is performed by the
process which created the Semaphore. So I would say it is more of an
optimization issue.
It is true that I do
Thomas Moreau added the comment:
For new processes created with spawn or forkserver, only the
semaphore_tracker._fd is sent and shared to the child. Thus, as the _pid
argument is None in the new process, it launches a new tracker if it needs to
create a new Semaphore, regardless of crashes
Thomas Moreau added the comment:
With this fix, the semaphore_tracker is not shared with the children anymore
and each process launches its own tracker.
I opened a PR to try to fix it. Let me know if I should open a new ticket.
--
nosy: +tomMoral
Change by Thomas Moreau :
--
pull_requests: +5025
Python tracker
<https://bugs.python.org/issue31310>
New submission from Thomas Moreau :
If the methods `set` and `clear` of `multiprocessing.Event` are called one
after another, while a `multiprocessing.Process` calls `wait`, the `Event`
does not match the documented behavior
(https://docs.python.org/3.7/library/threading.html
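A sketch of a script that can exhibit the race (timing-dependent, so it may
take several runs to trigger): the waiter overlaps a set() that is immediately
undone by clear(), and wait() can return as if the event were never set.
```
import multiprocessing as mp
import time

def waiter(event, saw_it):
    # Documented behavior: wait() returns True if the flag was set
    # while we were waiting.
    saw_it.value = event.wait(timeout=2)

if __name__ == "__main__":
    event = mp.Event()
    saw_it = mp.Value('b', False)
    p = mp.Process(target=waiter, args=(event, saw_it))
    p.start()
    time.sleep(0.5)       # let the child block in wait()
    event.set()
    event.clear()         # immediately clear again
    p.join()
    print("waiter observed the set:", bool(saw_it.value))
```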
Change by Thomas Moreau :
--
keywords: +patch
pull_requests: +4218
stage: -> patch review
Python tracker
<https://bugs.python.org/issue31699>
Change by Thomas Moreau :
--
pull_requests: +4207
stage: needs patch -> patch review
Python tracker
<https://bugs.python.org/issue22281>
New submission from Thomas Moreau :
When using `concurrent.futures.ProcessPoolExecutor` with objects that cannot
be pickled or unpickled, several situations result in a deadlock, with the
interpreter frozen.
This is the case in several scenarios, for instance these three:
https
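One of those scenarios as a sketch (behavior at the time of this report; later
fixes make the executor raise instead of hanging): the result of a task cannot
be pickled, so the worker dies while sending it back and the caller waits
forever.
```
from concurrent.futures import ProcessPoolExecutor

class UnpicklableResult:
    def __reduce__(self):
        raise ValueError("cannot pickle this result")

def work():
    return UnpicklableResult()

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=1) as executor:
        future = executor.submit(work)
        # On the affected versions, this call never returned.
        print(future.result())
```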
New submission from Thomas Moreau:
The `ProcessPoolExecutor` processes' start method can only be changed by
changing the global default context with `set_start_method` at the beginning
of a script. We propose to allow passing a context argument to the constructor
to allow more flexible control.
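This is the interface that eventually landed in Python 3.7:
`ProcessPoolExecutor` accepts an `mp_context` argument, so the start method is
chosen per executor instead of globally.
```
import multiprocessing as mp
from concurrent.futures import ProcessPoolExecutor

def square(x):
    return x * x

if __name__ == "__main__":
    ctx = mp.get_context("spawn")    # per-executor start method
    with ProcessPoolExecutor(max_workers=2, mp_context=ctx) as executor:
        print(list(executor.map(square, range(4))))   # [0, 1, 4, 9]
```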
Thomas Moreau added the comment:
I think this is a good solution, as it lets users easily define the behavior
they need in other situations too. I would recommend adding the object
responsible for the failure to the _on_queue_thread_error callback. This would
simplify the error handling.
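For reference, the hook that eventually landed in Python 3.7 is spelled
`Queue._on_queue_feeder_error(e, obj)`, and it does receive the object
responsible for the failure, as recommended above. A minimal sketch of
overriding it:
```
import multiprocessing as mp
import time
from multiprocessing.queues import Queue

class LoggingQueue(Queue):
    @staticmethod
    def _on_queue_feeder_error(e, obj):
        # Called in the feeder thread when serializing `obj` fails.
        print(f"dropped {obj!r}: {e!r}")

if __name__ == "__main__":
    q = LoggingQueue(ctx=mp.get_context())
    q.put(lambda: None)    # lambdas cannot be pickled -> the hook fires
    time.sleep(1)          # give the feeder thread time to run
```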
Changes by Thomas Moreau :
--
pull_requests: +2001
Python tracker
<http://bugs.python.org/issue30414>
Thomas Moreau added the comment:
This fix, while preventing the Queue from crashing, does not give any way to
programmatically detect that the message was dropped. This is a problem, as we
can no longer assume that the Queue will not drop messages. For instance, we
can no longer detect deadlocks in
New submission from Thomas Moreau:
The design of ProcessPoolExecutor contains some possible race conditions that
may freeze the interpreter due to deadlocks. This is notably the case with
pickling and unpickling errors for submitted jobs and returned results. This
makes it hard to reuse a