> On 4 Sep 2019, at 22:58, Andrew Barnert wrote:
>
>> On Sep 4, 2019, at 10:17, Anders Hovmöller wrote:
>>
>>> .
>>
>> Doesn't all that imply that it'd be good if you could just pass it the queue
>> object you want?
>
> Pass it a queue object that you construct? Or a queue factory (which w
On Thu, Sep 5, 2019 at 1:02 AM Andrew Barnert wrote:
>
> > I dislike runtime behavior of static types because I am very afraid
> > accidental
> > large performance or memory footprint regression.
> >
> > ABC has extension module for speedup, but `isinstance([], Iterable)` is
> > 4x slower than `i
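The relative cost being described can be checked directly; a quick comparison sketch (absolute timings vary by interpreter and machine, so only the shape of the comparison is shown):

```python
from collections.abc import Iterable
import timeit

# ABC instance checks go through the __instancecheck__/subclasshook machinery,
# while a concrete-class check is a fast C-level type test.
t_abc = timeit.timeit("isinstance([], Iterable)",
                      setup="from collections.abc import Iterable",
                      number=100_000)
t_concrete = timeit.timeit("isinstance([], list)", number=100_000)
```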
On Sep 4, 2019, at 19:52, Bar Harel wrote:
>
> I'm sorry but I truly fail to see the complication:
>
> sem = Semaphore(10) # line num 1 somewhere near executor creation
> sem.acquire() # line number 2, right before submit
> future = executor.submit(...)
> future.add_done_callback(lambda x: sem.
I'm sorry but I truly fail to see the complication:
sem = Semaphore(10) # line num 1 somewhere near executor creation
sem.acquire() # line number 2, right before submit
future = executor.submit(...)
future.add_done_callback(lambda x: sem.release()) # line number 3, right
after submit.
It's only
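Assembled into a self-contained sketch (the `work` function, the limits, and the task count are illustrative stand-ins):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

def bounded_submit_demo():
    sem = threading.Semaphore(10)  # at most 10 tasks queued or running
    executor = ThreadPoolExecutor(max_workers=4)

    def work(i):
        return i * i  # placeholder for the real task

    futures = []
    for i in range(100):
        sem.acquire()  # blocks once 10 tasks are in flight
        future = executor.submit(work, i)
        future.add_done_callback(lambda f: sem.release())
        futures.append(future)

    results = [f.result() for f in futures]
    executor.shutdown()
    return results
```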
On Sep 4, 2019, at 08:54, Dan Sommers <2qdxy4rzwzuui...@potatochowder.com>
wrote:
>
> How does blocking the submit call differ from setting max_workers
> in the call to ThreadPoolExecutor?
Here’s a concrete example from my own code:
I need to create thousands of images, each of which is about 1
On Sep 4, 2019, at 12:37, Bar Harel wrote:
>
> The way you solve it is with a semaphore mentioned earlier. It's a standard
> mechanism, to be used in cases of asynchronous work, regardless of the
> underlying library.
Is that really true for asynchronous APIs that are explicitly queue-based?
On Sep 4, 2019, at 10:17, Anders Hovmöller wrote:
>
>
>> On 4 Sep 2019, at 18:31, Andrew Barnert via Python-ideas
>> wrote:
>>
>> On Sep 4, 2019, at 04:21, Chris Simmons wrote:
>>
>> I have seen deployed servers that wrap an Executor with a Semaphore to add
>> this functionality (which is
On Wed, Sep 4, 2019, 10:40 PM Dan Sommers <
2qdxy4rzwzuui...@potatochowder.com> wrote:
> I'm sure I'm missing something, but isn't that the point of a
> ThreadPoolExecutor? Yes, you can submit more requests than you
> have resources to execute concurrently, but the executor itself
> limits the nu
On 9/4/19 11:08 AM, Joao S. O. Bueno wrote:
I second that such a feature would be useful, as I am on the verge of
implementing
a work-around for that in a project right now.
I'm sure I'm missing something, but isn't that the point of a
ThreadPoolExecutor? Yes, you can submit more requests tha
>
> I must ask again about the actual necessity of adding a blocking call to
executor.submit in that particular case.
If I may intervene, the issue we're discussing is frequently
encountered with asyncio: You can have an enormous queue of clients or
requests, creating a coroutine for each on
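The asyncio flavour of the same pattern uses asyncio.Semaphore; a minimal sketch (the sleep stands in for real I/O, and note the semaphore bounds how many coroutines *run* concurrently, though a coroutine object is still created per request):

```python
import asyncio

async def handle(client_id, sem):
    # The semaphore limits concurrent execution to `limit` handlers;
    # the rest wait here instead of all hitting the I/O layer at once.
    async with sem:
        await asyncio.sleep(0)  # stand-in for real network I/O
        return client_id

async def serve_all(n_clients, limit=10):
    sem = asyncio.Semaphore(limit)
    return await asyncio.gather(*(handle(i, sem) for i in range(n_clients)))

results = asyncio.run(serve_all(100))
```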
> On 4 Sep 2019, at 18:31, Andrew Barnert via Python-ideas
> wrote:
>
> On Sep 4, 2019, at 04:21, Chris Simmons wrote:
>
> I have seen deployed servers that wrap an Executor with a Semaphore to add
> this functionality (which is mildly silly, but not when the “better”
> alternative is to s
On Sep 4, 2019, at 08:08, Joao S. O. Bueno wrote:
>
> I second that such a feature would be useful, as I am on the verge of
> implementing
> a work-around for that in a project right now.
This seems common enough that, whatever the final design is, someone should put
a concurrent39 or whatever
On Sep 4, 2019, at 04:21, Chris Simmons wrote:
I have seen deployed servers that wrap an Executor with a Semaphore to add this
functionality (which is mildly silly, but not when the “better” alternative is
to subclass the Executor and use knowledge of its implementation internals…).
Which impl
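The wrapper approach being described can be sketched as a small class (the name `BoundedExecutor` and the `bound` parameter are illustrative, not a real stdlib API); unlike the subclassing route, it needs no knowledge of the executor's internal work queue:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

class BoundedExecutor:
    """Wrap a ThreadPoolExecutor so submit() blocks once `bound`
    tasks are pending, instead of queueing without limit."""

    def __init__(self, bound, max_workers):
        self._executor = ThreadPoolExecutor(max_workers=max_workers)
        self._sem = threading.Semaphore(bound)

    def submit(self, fn, *args, **kwargs):
        self._sem.acquire()
        try:
            future = self._executor.submit(fn, *args, **kwargs)
        except BaseException:
            self._sem.release()  # don't leak a permit if submit fails
            raise
        future.add_done_callback(lambda f: self._sem.release())
        return future

    def shutdown(self, wait=True):
        self._executor.shutdown(wait=wait)
```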
On Sep 4, 2019, at 01:29, Inada Naoki wrote:
>
> On Wed, Sep 4, 2019 at 1:15 PM Andrew Barnert via Python-ideas
> wrote:
>>
>>
>> But that implies that you can also write this:
>>
>>isinstance(x, Union[str, int])
>>
>> … because, after all, str|int is defined as meaning exactly that. Whi
I second that such a feature would be useful, as I am on the verge of
implementing
a work-around for that in a project right now.
And maybe, instead of a "submit_blocking" create the new method so that
it takes the arguments to the future as an explicit sequence and mapping in
named parameters?
so
(Somehow your post came in twice. I'm replying to the second one.)
This seems a reasonable idea. A problem may be how to specify this, since
all positional and keyword arguments to `submit()` after the function
object are passed to the call. A possible solution would be to add a second
call, `subm
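A hypothetical signature for such a second method (the name `submit_call` and its parameters are illustrative, not an actual stdlib API): passing the call's arguments as an explicit sequence and mapping frees the method's own keyword parameters for options like `block` or `timeout` without colliding with the callable's keywords. The blocking behaviour itself is elided here; only the signature question is illustrated:

```python
from concurrent.futures import ThreadPoolExecutor

class MyExecutor(ThreadPoolExecutor):
    # Hypothetical: args/kwargs are explicit containers, so `block` and
    # `timeout` cannot clash with keyword arguments meant for `fn`.
    def submit_call(self, fn, args=(), kwargs=None, *, block=True, timeout=None):
        return self.submit(fn, *args, **(kwargs or {}))

ex = MyExecutor(max_workers=2)
f = ex.submit_call(divmod, args=(7, 3))
result = f.result()
ex.shutdown()
```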
I have a script that uploads files to Google Drive. It presently performs the
uploads serially, but I want to do the uploads in parallel--with a reasonable
number of simultaneous uploads--and see if that improves performance. I think
that an Executor is the best way to accomplish this task.
The
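A sketch of that use case (`upload_file` and the paths are stand-ins for the real Google Drive calls), capping simultaneous uploads via max_workers:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def upload_file(path):
    # Stand-in for the real Google Drive upload call.
    return f"uploaded:{path}"

paths = [f"photo_{i}.jpg" for i in range(8)]
with ThreadPoolExecutor(max_workers=4) as executor:
    # Submit all uploads; results arrive as each upload finishes.
    futures = {executor.submit(upload_file, p): p for p in paths}
    results = [f.result() for f in as_completed(futures)]
```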
I recommend you take at look at the "toolz" library, which provides
assorted APIs to help in data structure manipulation:
https://toolz.readthedocs.io/en/latest/index.html
Especially this function:
https://toolz.readthedocs.io/en/latest/api.html#toolz.dicttoolz.get_in
Regards
Antoine.
On Thu
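For reference, `toolz.dicttoolz.get_in` walks nested mappings/sequences by a list of keys; a minimal pure-Python equivalent (simplified, omitting toolz's `no_default` handling) shows the idea without the dependency:

```python
def get_in(keys, coll, default=None):
    # Walk `coll` by successive keys/indices; return `default` on any miss.
    for key in keys:
        try:
            coll = coll[key]
        except (KeyError, IndexError, TypeError):
            return default
    return coll

data = {"a": {"b": [10, 20, 30]}}
```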
On Wed, Sep 4, 2019 at 1:15 PM Andrew Barnert via Python-ideas
wrote:
>
> On Sep 3, 2019, at 19:45, Steven D'Aprano wrote:
> >
> > On Thu, Aug 29, 2019 at 06:20:55PM +0100, Rob Cliffe via Python-ideas wrote:
> >
> >>> isinstance(x, str | int) ==> "is x an instance of str or int"
> >>
> >> Er, is