> It looks like you have a good handle on the code -- do you want to submit a
> PR to GitHub to add such a parameter?
Thanks, but I'm not really sure how to implement it in the ProcessPoolExecutor;
I just think the solution is probably related to the code responsible for
handling a failed
> But I don’t think “terminate” is the right name. Maybe “cancel”? Or even
> “shutdown(wait=whatever, cancel=True)?”
"terminate" was definitely not a good name, especially because it doesn't
actually terminate anything; it just cancels some of the operations. Since it
also has to cooperate
On Jan 3, 2020, at 10:11, Miguel Ángel Prosper
wrote:
>>
>> Having a way to clear the queue and then shutdown once existing jobs are
>> done is a lot more manageable.
> ...
>> So the only clean way to do this is cooperative: flush the queue, send
>> some kind of message to all
On Fri, Jan 3, 2020 at 3:28 PM Miguel Ángel Prosper <
miguelangel.pros...@gmail.com> wrote:
> gets one item from the queue, runs it, and then checks if the executor is
> being shut down.
That's exactly what I thought at first, but just after that the continue
statement prevents that check, so all futures always get processed. Only when
the sentinel is reached, which is placed at
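The loop being described can be sketched in a few lines. This is a simplified, illustrative version of the worker loop in concurrent/futures/thread.py, not the actual CPython code; the names `worker` and `shutdown_event` are mine, and a plain `queue.Queue` stands in for the executor's internal work queue:

```python
import queue
import threading

def worker(work_queue, shutdown_event):
    """Simplified sketch of the loop in concurrent/futures/thread.py.

    After running a work item, `continue` jumps straight back to the
    get() call, so the shutdown check below is only reached once the
    None sentinel is dequeued.
    """
    while True:
        item = work_queue.get()
        if item is not None:
            item()                 # run the submitted callable
            continue               # <-- this skips the shutdown check
        if shutdown_event.is_set():
            work_queue.put(None)   # re-queue the sentinel for siblings
            return

results = []
q = queue.Queue()
stop = threading.Event()
for i in range(3):
    q.put(lambda i=i: results.append(i))
stop.set()
q.put(None)   # the sentinel sits behind all submitted work items
t = threading.Thread(target=worker, args=(q, stop))
t.start()
t.join()
```

Because the sentinel is queued behind the submitted items, all three callables run before the shutdown check ever fires, which is the behavior described above.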
There have been a number of comments in this thread, and some slight
variations in the idea.
I'm afraid that it all looks awkward to me, so far. I understand trying to
work out the smallest change that would allow the style, but it just feels
"bolted on" in a bad way.
I believe that if Python
This is very cool, well thought-out, and solves an important problem. I
would have definitely benefited from Python having better pattern matching.
As an idealist though, if you're going to add pattern matching, why not
just do it completely? I agree that your proposal would be useful in some
Looking at the implementation in concurrent/futures/thread.py, it appears that
each of the worker threads repeatedly gets one item from the queue,
runs it, and then checks whether the executor is being shut down. Worker threads
get added dynamically until the executor's max thread count is reached. New
> Having a way to clear the queue and then shutdown once existing jobs are
> done is a lot more manageable.
...
> So the only clean way to do this is cooperative: flush the queue, send some
> kind of message to all children telling them to finish as quickly as
> possible, then wait for them
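The "flush the queue" step of that cooperative shutdown can be sketched as follows. This is an illustrative fragment, not executor internals; `drain_pending` is a hypothetical helper and a plain `queue.Queue` stands in for the call queue:

```python
import queue

def drain_pending(work_queue):
    # Illustrative sketch of the "flush the queue" step: pull every
    # not-yet-started work item off the queue so no worker ever sees
    # it. In a real executor, the futures behind the drained items
    # would then be marked cancelled, a sentinel sent to each child,
    # and the children joined.
    drained = []
    while True:
        try:
            drained.append(work_queue.get_nowait())
        except queue.Empty:
            return drained

q = queue.Queue()
for n in range(5):
    q.put(n)
cancelled_items = drain_pending(q)
```

Draining with `get_nowait()` rather than iterating avoids blocking if a worker empties the queue concurrently.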
Where’s the initial email you’re replying to here? I don’t have it in my inbox,
and it isn’t on the Mailman archive either, and since you snipped it down to a
single line I have no idea what that snippet was referring to.
Meanwhile:
> On Jan 2, 2020, at 18:14, James Lu wrote:
On Jan 2, 2020, at 20:40, Miguel Ángel Prosper
wrote:
>
> I think it would be very helpful to have an additional argument (cancel for
> example) added to Executor.shutdown that cancels all pending futures
> submitted to the executor.
> Then the context manager would gain the ability to abort all
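What the proposed flag would do can be emulated by hand with the existing API. Note that `cancel` is the parameter name floated in this thread, not a real `shutdown()` argument here; this sketch uses only `Future.cancel()`, which already exists:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Emulating by hand what a shutdown(cancel=True) would do, using only
# the existing API: cancel every pending future, then shut down.
executor = ThreadPoolExecutor(max_workers=1)
futures = [executor.submit(time.sleep, 0.1) for _ in range(10)]

# Future.cancel() only succeeds for futures that have not started
# running, which is exactly the set the proposal targets.
cancelled = [f for f in futures if f.cancel()]
executor.shutdown(wait=True)
```

With a single worker, at most one future is running when the cancel loop executes, so nearly all of the pending ones are cancelled and the shutdown returns without waiting for them.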
On Sat, Jan 4, 2020 at 1:16 AM Stephen J. Turnbull
wrote:
> If you really want to save disk space measured in MB, I guess you'd
> probably want LRU semantics, and possibly a blacklist of modules that
> are only used interactively and at most once a day or so, so the time
> to import doesn't
James Lu writes:
> Ideally, we'd have a functional package manager: one that can
> delete binaries when disk space is running low, and recompiles from
> sources when the binary is needed again.
First, that's not what package managers do. Package managers manage
dependencies, not disk space.