On Mon, Jul 16, 2018 at 12:24 PM, Brett Cannon wrote:
>
> Since it isn't necessary for Python to function, I would say we probably
> don't want to pull it up. Then the maintenance burden grows much more.
>
It might make sense to put it on PyPI, though, if someone wants to take
responsibility for it.
On Thu, 12 Jul 2018 at 11:21 Andre Roberge wrote:
> In the CPython repository, there is an unparse module in the Tools section.
> https://github.com/python/cpython/blob/master/Tools/parser/unparse.py
>
> However, as it is not part of the standard library, it cannot be easily
> used; to do so, one
If you have, for example, an application with a GUI, the complete GUI
blocks every time bigger objects are pickled/unpickled.
Or if you have a server application with multiple network connections and
the application should have a guaranteed response time, then that is
impossible if one single client c
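The stall described above can be observed with a small experiment. This is a
hedged sketch, not code from the thread; it assumes CPython with the
C-accelerated _pickle module, and the data size is an arbitrary choice:

```python
import pickle
import threading
import time

# Build a reasonably large object so pickling takes a noticeable time.
data = [tuple(range(50)) for _ in range(100_000)]

out = []

def background_pickle():
    # The C pickler holds the GIL for the entire dumps() call.
    out.append(pickle.dumps(data))

t = threading.Thread(target=background_pickle)
last = time.perf_counter()
t.start()
stalls = []
while t.is_alive():
    # This loop stands in for a GUI event loop; each gap between
    # iterations is time the "GUI" thread could not run.
    now = time.perf_counter()
    stalls.append(now - last)
    last = now
t.join()
if stalls:
    print(f"worst main-thread stall: {max(stalls) * 1000:.1f} ms")
```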
Hi,
On Mon, 16 Jul 2018 19:56:34 +0200
Martin Bammer wrote:
> Hi,
>
> the old and slow Python implementation of pickle didn't block background
> threads.
>
> But the newer C-implementation blocks other threads while dump/load is
> running.
This is a fair comment.
Please open an issue on the
The GIL must be held to allocate memory for Python objects and to
invoke the Python code to deserialize user defined picklable objects.
I don't think there is a long span of time where the code could leave
the GIL released. The Python implementation is just pausing to let
other Python threads run,
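For reference, the pure-Python pickler mentioned here can still be selected
explicitly. A minimal sketch, assuming CPython, where pickle._Pickler is the
Python implementation that the C-accelerated Pickler shadows:

```python
import io
import pickle

def python_dumps(obj, protocol=pickle.HIGHEST_PROTOCOL):
    # pickle._Pickler is the pure-Python pickler; because it runs as
    # ordinary bytecode, the interpreter can switch to other threads
    # between instructions, unlike the C-accelerated _pickle.Pickler.
    buf = io.BytesIO()
    pickle._Pickler(buf, protocol).dump(obj)
    return buf.getvalue()
```

This is slower than pickle.dumps, but it does not hold the GIL for the whole
call, which matches the behaviour of the "old and slow" implementation
described in the thread.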
Hi,
the old and slow Python implementation of pickle didn't block background
threads.
But the newer C-implementation blocks other threads while dump/load is
running.
Wouldn't it be possible to allow other threads during this time?
Especially could load/loads release the GIL, because Python
On Mon, 16 Jul 2018 18:00:37 +0100
MRAB wrote:
> Could you explicitly share an object in a similar way to how you
> explicitly open a file?
>
> The shared object's refcount would be incremented and the sharing
> function would return a proxy to the shared object.
>
> Refcounting in the thread/
On 2018-07-16 05:24, Chris Angelico wrote:
On Mon, Jul 16, 2018 at 1:21 PM, Nathaniel Smith wrote:
On Sun, Jul 15, 2018 at 6:00 PM, Chris Angelico wrote:
On Mon, Jul 16, 2018 at 10:31 AM, Nathaniel Smith wrote:
On Sun, Jul 8, 2018 at 11:27 AM, David Foster wrote:
* The Actor model can be
On Mon, 16 Jul 2018 07:00:34 +0200
Stephan Houben
wrote:
> What about the following model: you have N Python interpreters, each with
> their own GIL. Each *Python* object belongs to precisely one interpreter.
This is roughly what Eric's subinterpreters approach tries to do.
Regards
Antoine.
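As a rough approximation of that model (isolated interpreters that
communicate by passing messages), here is a sketch using multiprocessing:
separate processes rather than subinterpreters, since the subinterpreter API
was still experimental at the time:

```python
from multiprocessing import Process, Queue

def worker(inbox, outbox):
    # Objects built in this worker belong to it alone; only serialized
    # copies cross the queues, mirroring "each object belongs to
    # precisely one interpreter".
    item = inbox.get()
    outbox.put(item * 2)

if __name__ == "__main__":
    inbox, outbox = Queue(), Queue()
    p = Process(target=worker, args=(inbox, outbox))
    p.start()
    inbox.put(21)
    print(outbox.get())  # 42
    p.join()
```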
On Sun, 15 Jul 2018 20:21:56 -0700
Nathaniel Smith wrote:
>
> If you need shared-memory threads, on multiple cores, for CPU-bound
> logic, where the logic is implemented in Python, then yeah, you
> basically need a free-threaded implementation of Python. Jython is
> such an implementation. PyPy c
Nick Coghlan wrote:
> It was never extended beyond Windows, and a Windows-only solution
> doesn't meet the needs of a lot of folks interested in more efficient
> exploitation of multiple local CPU cores.
On the other hand, Windows has a higher need for a better multi-core
story. A reasonable Unix-