On 14 September 2017 at 11:44, Eric Snow
wrote:
> Examples
>
>
> Run isolated code
> -----------------
>
> ::
>
>     interp = interpreters.create()
>     print('before')
>     interp.run('print("during")')
>     print('after')
>
A few more suggestions for examples:
Running a module:
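One way such an example might look (the snippet above is cut off, so this is a guess at its shape): load a module's source with the real `importlib` machinery, which could then be handed to the PEP's proposed `interp.run()`. Only `importlib` below is real today; the interpreters calls in the comment are the PEP's hypothetical API.

```python
import importlib.util

# Locate a stdlib module's source file (real, existing stdlib API).
spec = importlib.util.find_spec("json")
with open(spec.origin) as f:
    source = f.read()

# Hypothetically, under the PEP's proposed API:
#     interp = interpreters.create()
#     interp.run(source)
```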
On 7 October 2017 at 02:29, Koos Zevenhoven wrote:
> While I'm actually trying not to say much here so that I can avoid this
> discussion now, here's just a couple of ideas and thoughts from me at this
> point:
>
> (A)
> Instead of sending bytes and receiving memoryviews, one could consider
> sen
While I'm actually trying not to say much here so that I can avoid this
discussion now, here's just a couple of ideas and thoughts from me at this
point:
(A)
Instead of sending bytes and receiving memoryviews, one could consider
sending *and* receiving memoryviews for now. That could then be exten
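For illustration, memoryviews already give zero-copy, writable windows over a buffer within a single interpreter; everything below is real stdlib behaviour, and only the cross-interpreter channel itself is the hypothetical part of the proposal.

```python
data = bytearray(b"\x00" * 1024)   # the underlying buffer
view = memoryview(data)            # zero-copy view via the buffer protocol
window = view[100:108]             # slicing a memoryview copies nothing
window[:] = b"ABCDEFGH"            # writes go straight through to `data`
```

Sending and receiving memoryviews would mean both ends of a channel get this kind of window onto the same memory, rather than a copied payload.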
On 6 October 2017 at 11:48, Eric Snow wrote:
> > And that's the real pay-off that comes from defining this in terms of the
> > memoryview protocol: Py_buffer structs *aren't* Python objects, so it's
> > only a regular C struct that gets passed across the interpreter boundary
> > (the reference
On Thu, Oct 5, 2017 at 4:57 AM, Nick Coghlan wrote:
> This would be hard to get to work reliably, because "orig.tp_share()" would
> be running in the receiving interpreter, but all the attributes of "orig"
> would have been allocated by the sending interpreter. It gets more reliable
> if it's *Cha
On 5 October 2017 at 18:45, Eric Snow wrote:
> After we move to not sharing the GIL between interpreters:
>
> Channel.send(obj):  # in interp A
>     incref(obj)
>     if type(obj).tp_share == NULL:
>         raise ValueError("not a shareable type")
>     set_owner(obj)  # obj.owner or add an obj
On Tue, Oct 3, 2017 at 8:55 AM, Antoine Pitrou wrote:
> I think we need a sharing protocol, not just a flag. We also need to
> think carefully about that protocol, so that it does not imply
> unnecessary memory copies. Therefore I think the protocol should be
> something like the buffer protocol
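As a concrete single-interpreter illustration of why the buffer protocol avoids unnecessary copies: a memoryview over an `array.array` fills a `Py_buffer` describing the array's memory and then reads and writes that memory directly.

```python
import array

arr = array.array("d", [1.0, 2.0, 3.0])
mv = memoryview(arr)    # fills a Py_buffer; no element data is copied
mv[0] = 99.0            # writes through to the array's own memory
```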
On 4 October 2017 at 23:51, Eric Snow wrote:
> On Tue, Oct 3, 2017 at 11:36 PM, Nick Coghlan wrote:
>> The problem relates to the fact that there aren't any memory barriers
>> around CPython's INCREF operations (they're implemented as an ordinary
>> C post-increment operation), so you can get the
On Wed, 4 Oct 2017 17:50:33 +0200
Antoine Pitrou wrote:
> On Mon, 2 Oct 2017 21:31:30 -0400
> Eric Snow wrote:
> >
> > > By contrast, if we allow an actual bytes object to be shared, then
> > > either every INCREF or DECREF on that bytes object becomes a
> > > synchronisation point, or else we
On Wed, Oct 4, 2017 at 4:51 PM, Eric Snow
wrote:
> On Tue, Oct 3, 2017 at 11:36 PM, Nick Coghlan wrote:
> > The problem relates to the fact that there aren't any memory barriers
> > around CPython's INCREF operations (they're implemented as an ordinary
> > C post-increment operation), so you can
On Mon, 2 Oct 2017 21:31:30 -0400
Eric Snow wrote:
>
> > By contrast, if we allow an actual bytes object to be shared, then
> > either every INCREF or DECREF on that bytes object becomes a
> > synchronisation point, or else we end up needing some kind of
> > secondary per-interpreter refcount whe
On Tue, Oct 3, 2017 at 11:36 PM, Nick Coghlan wrote:
> The problem relates to the fact that there aren't any memory barriers
> around CPython's INCREF operations (they're implemented as an ordinary
> C post-increment operation), so you can get the following scenario:
>
> * thread on CPU A has the
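The lost-update scenario behind that warning can be simulated deterministically in plain Python (the variable names are illustrative; the real refcount lives in the object's C header):

```python
# Two CPUs both execute the equivalent of `ob->ob_refcnt++` on one object.
refcnt = 1

# An interleaving that atomic instructions or memory barriers would prevent:
a = refcnt        # CPU A loads 1
b = refcnt        # CPU B loads 1, before A's store becomes visible
refcnt = a + 1    # CPU A stores 2
refcnt = b + 1    # CPU B stores 2 -- A's increment is lost
```

Two INCREFs happened, but the count only records one; a later pair of DECREFs would then free the object while a reference still exists.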
On 3 October 2017 at 11:31, Eric Snow wrote:
> There shouldn't be a need to synchronize on INCREF. If both
> interpreters have at least 1 reference then either one adding a
> reference shouldn't be a problem. If only one interpreter has a
> reference then the other won't be adding any references
On 03Oct2017 0755, Antoine Pitrou wrote:
> On Tue, 3 Oct 2017 08:36:55 -0600
> Eric Snow wrote:
>> On Tue, Oct 3, 2017 at 5:00 AM, Antoine Pitrou wrote:
>>> On Mon, 2 Oct 2017 22:15:01 -0400
>>> Eric Snow wrote:
>>>> I'm still not convinced that sharing synchronization primitives is
>>>> important enough to be wor
On Tue, 3 Oct 2017 08:36:55 -0600
Eric Snow wrote:
> On Tue, Oct 3, 2017 at 5:00 AM, Antoine Pitrou wrote:
> > On Mon, 2 Oct 2017 22:15:01 -0400
> > Eric Snow wrote:
> >>
> >> I'm still not convinced that sharing synchronization primitives is
> >> important enough to be worth including it in t
On Tue, Oct 3, 2017 at 5:00 AM, Antoine Pitrou wrote:
> On Mon, 2 Oct 2017 22:15:01 -0400
> Eric Snow wrote:
>>
>> I'm still not convinced that sharing synchronization primitives is
>> important enough to be worth including it in the PEP. It can be added
>> later, or via an extension module in t
On Mon, 2 Oct 2017 22:15:01 -0400
Eric Snow wrote:
>
> I'm still not convinced that sharing synchronization primitives is
> important enough to be worth including it in the PEP. It can be added
> later, or via an extension module in the meantime. To that end, I'll
> add a mechanism to the PEP f
On Wed, Sep 27, 2017 at 1:26 AM, Nick Coghlan wrote:
> It's also the case that unlike Go channels, which were designed from
> scratch on the basis of implementing pure CSP,
FWIW, Go's channels (and goroutines) don't implement pure CSP. They
provide a variant that the Go authors felt was more in-
On Mon, Sep 25, 2017 at 8:42 PM, Nathaniel Smith wrote:
> It's fairly reasonable to implement a mutex using a CSP-style
> unbuffered channel (send = acquire, receive = release). And the same
> trick turns a channel with a fixed-size buffer into a bounded
> semaphore. It won't be as efficient as a
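That channel-as-mutex trick can be sketched today with the stdlib's `queue.Queue`, whose fixed-size buffer behaves like the bounded channel described above (the `ChannelLock` name is made up for this example; it is not part of any proposed API):

```python
import queue
import threading

class ChannelLock:
    """A mutex built from a one-slot channel: put = acquire, get = release."""
    def __init__(self):
        self._ch = queue.Queue(maxsize=1)

    def acquire(self):
        self._ch.put(None)       # blocks while another holder's token is in

    def release(self):
        self._ch.get_nowait()    # frees the slot for the next acquirer

lock = ChannelLock()
counter = 0

def bump():
    global counter
    for _ in range(10_000):
        lock.acquire()
        counter += 1             # the read-modify-write is now protected
        lock.release()

threads = [threading.Thread(target=bump) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

A `Queue(maxsize=n)` generalizes this to a bounded semaphore with `n` permits, which is exactly the fixed-size-buffer observation quoted above.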
On Mon, Oct 2, 2017 at 9:31 PM, Eric Snow wrote:
> On DECREF there shouldn't be a problem except possibly with a small
> race between decrementing the refcount and checking for a refcount of
> 0. We could address that several different ways, including allowing
> the pending call to get queued onl
After having looked it over, I'm leaning toward supporting buffering,
as well as not blocking by default. Neither adds much complexity to
the implementation.
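The buffered, non-blocking behaviour can be illustrated with `queue.Queue`, which is roughly what a buffered channel's send side would feel like (the channel semantics here are my paraphrase, not the PEP's final API):

```python
import queue

ch = queue.Queue(maxsize=2)    # a channel with a two-message buffer
ch.put_nowait(b"one")          # send returns immediately: buffer has room
ch.put_nowait(b"two")
try:
    ch.put_nowait(b"three")    # buffer full: a blocking send would wait here
except queue.Full:
    pass                       # a non-blocking send reports failure instead

assert ch.get_nowait() == b"one"   # FIFO delivery on the receiving end
```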
On Sat, Sep 23, 2017 at 5:45 AM, Antoine Pitrou wrote:
> On Fri, 22 Sep 2017 19:09:01 -0600
> Eric Snow wrote:
>> > send() blocking until
On Thu, Sep 14, 2017 at 8:44 PM, Nick Coghlan wrote:
> Not really, because the only way to ensure object separation (i.e no
> refcounted objects accessible from multiple interpreters at once) with
> a bytes-based API would be to either:
>
> 1. Always copy (eliminating most of the low overhead comm
On 26 September 2017 at 17:04, Antoine Pitrou wrote:
> On Mon, 25 Sep 2017 17:42:02 -0700 Nathaniel Smith wrote:
>> Unbounded queues also introduce unbounded latency and memory usage in
>> realistic situations.
>
> This doesn't seem to pose much of a problem in common use cases, though.
> How many P
On 23 Sep 2017, at 3:09, Eric Snow wrote:
> [...]
> ``list_all()``::
>
>     Return a list of all existing interpreters.

See my naming proposal in the previous thread.
Sorry, your previous comment slipped through the cracks. You
suggested:
As for the naming, let's make it both unconfusing a
On Mon, 25 Sep 2017 17:42:02 -0700
Nathaniel Smith wrote:
> On Sat, Sep 23, 2017 at 2:45 AM, Antoine Pitrou wrote:
> >> As to "running_interpreters()" and "idle_interpreters()", I'm not sure
> >> what the benefit would be. You can compose either list manually with
> >> a simple comprehension:
>
On Sat, Sep 23, 2017 at 2:45 AM, Antoine Pitrou wrote:
>> As to "running_interpreters()" and "idle_interpreters()", I'm not sure
>> what the benefit would be. You can compose either list manually with
>> a simple comprehension:
>>
>> [interp for interp in interpreters.list_all() if interp.is_
On 2017-09-23 10:45, Antoine Pitrou wrote:
> Hi Eric,
> On Fri, 22 Sep 2017 19:09:01 -0600
> Eric Snow wrote:
>> Please elaborate. I'm interested in understanding what you mean here.
>> Do you have some subinterpreter-based concurrency improvements in
>> mind? What aspect of CSP is the PEP following too
Hi Eric,
On Fri, 22 Sep 2017 19:09:01 -0600
Eric Snow wrote:
>
> Please elaborate. I'm interested in understanding what you mean here.
> Do you have some subinterpreter-based concurrency improvements in
> mind? What aspect of CSP is the PEP following too faithfully?
See below the discussion
Thanks for the feedback, Antoine. Sorry for the delay; it's been a
busy week for me. I just pushed an updated PEP to the repo. Once
I've sorted out the question of passing bytes through channels I plan
on posting the PEP to the list again for another round of discussion.
In the meantime, I've re
Hi,
First, my high-level opinion about the PEP: the CSP model can probably
already be implemented using Queues. To me, the interesting promise of
subinterpreters is whether they allow removing the GIL while sharing memory
for big objects (such as NumPy arrays). This means the PEP should
probably foc
On 14 September 2017 at 11:44, Eric Snow wrote:
> About Subinterpreters
> =====================
>
> Shared data
> -----------
[snip]
> To make this work, the mutable shared state will be managed by the
> Python runtime, not by any of the interpreters. Initially we will
> support only one type o
On 15 September 2017 at 12:04, Nathaniel Smith wrote:
> On Thu, Sep 14, 2017 at 5:44 PM, Nick Coghlan wrote:
>> The reason we're OK with this is that it means that only reading a new
>> message from a channel (i.e creating a cross-interpreter view) or
>> discarding a previously read message (i.e.
On Thu, Sep 14, 2017 at 5:44 PM, Nick Coghlan wrote:
> On 14 September 2017 at 15:27, Nathaniel Smith wrote:
>> I don't get it. With bytes, you can either share objects or copy them and
>> the user can't tell the difference, so you can change your mind later if you
>> want.
>> But memoryviews req
On 14 September 2017 at 15:27, Nathaniel Smith wrote:
> On Sep 13, 2017 9:01 PM, "Nick Coghlan" wrote:
>
> On 14 September 2017 at 11:44, Eric Snow
> wrote:
>> send(obj):
>>
>>     Send the object to the receiving end of the channel. Wait until
>>     the object is received. If the ch
On Sep 13, 2017 9:01 PM, "Nick Coghlan" wrote:
On 14 September 2017 at 11:44, Eric Snow
wrote:
> send(obj):
>
>     Send the object to the receiving end of the channel. Wait until
>     the object is received. If the channel does not support the
>     object then TypeError is raise
On Wed, Sep 13, 2017 at 11:56 PM, Nick Coghlan wrote:
[..]
>> send(obj):
>>
>>     Send the object to the receiving end of the channel. Wait until
>>     the object is received. If the channel does not support the
>>     object then TypeError is raised. Currently only bytes are
>>
On 14 September 2017 at 11:44, Eric Snow wrote:
> I've updated PEP 554 in response to feedback. (thanks all!) There
> are a few unresolved points (some of them added to the Open Questions
> section), but the current PEP has changed enough that I wanted to get
> it out there first.
>
> Notably ch