On Thu, 2 Mar 2023 12:45:50 +1100, Chris Angelico
declaimed the following:
>
>As have all CPUs since; it's the only way to implement locks (push the
>locking all the way down to the CPU level).
>
Xerox Sigma (circa 1970): Modify and Test (byte/halfword/word)
Granted, that was a
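A test-and-set (or modify-and-test) instruction atomically writes a flag and hands back its previous value, which is all a lock needs. Pure Python exposes no such instruction, so the sketch below merely emulates the primitive with a Lock to illustrate the semantics being described; it is not how CPython's own locks are built.

    import threading
    import time

    class EmulatedTestAndSet:
        """Stand-in for an atomic test-and-set; the inner Lock fakes the
        atomicity that real hardware provides in a single instruction."""

        def __init__(self):
            self._flag = False
            self._guard = threading.Lock()

        def test_and_set(self):
            # Atomically: old = flag; flag = True; return old
            with self._guard:
                old = self._flag
                self._flag = True
                return old

        def clear(self):
            with self._guard:
                self._flag = False

    class SpinLock:
        """A lock built on nothing but the test-and-set primitive."""

        def __init__(self):
            self._tas = EmulatedTestAndSet()

        def acquire(self):
            while self._tas.test_and_set():  # spin until we flip False to True
                time.sleep(0)                # give other threads a chance

        def release(self):
            self._tas.clear()

    counter = 0
    lock = SpinLock()

    def work():
        global counter
        for _ in range(10_000):
            lock.acquire()
            counter += 1
            lock.release()

    threads = [threading.Thread(target=work) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)  # 40000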
On 2023-03-02, Chris Angelico wrote:
> On Thu, 2 Mar 2023 at 08:01, <2qdxy4rzwzuui...@potatochowder.com> wrote:
>> On 2023-03-01 at 14:35:35 -0500,
>> avi.e.gr...@gmail.com wrote:
>> > What would have happened if all processors had been required to have
>> > some low level instruction that effecti
On Thu, 2 Mar 2023 at 13:02, Weatherby,Gerard wrote:
>
> So I guess we know what would have happened.
>
Yep. It's not what I was talking about, but it's also a very important
concurrency management feature.
ChrisA
So I guess we know what would have happened.
From: Python-list on behalf of Chris Angelico
Sent: Wednesday, March 1, 2023 8:45:50 PM
To: python-list@python.org
Subject: Re: Look free ID genertion (was: Is there
On Thu, 2 Mar 2023 at 08:01, <2qdxy4rzwzuui...@potatochowder.com> wrote:
>
> On 2023-03-01 at 14:35:35 -0500,
> avi.e.gr...@gmail.com wrote:
>
> > What would have happened if all processors had been required to have
> > some low level instruction that effectively did something in an atomic
> > way
On 2023-03-01 at 14:35:35 -0500,
avi.e.gr...@gmail.com wrote:
> What would have happened if all processors had been required to have
> some low level instruction that effectively did something in an atomic
> way that allowed a way for anyone using any language running on that
> machine a way to do
On Thu, 2 Mar 2023 at 06:37, wrote:
>
> If a workaround like itertools.count.__next__() is used because it will not
> be interrupted as it is implemented in C, then I have to ask if it would
> make sense for Python to supply something similar in the standard library
> for the sole purpose of a use
very directly using the atomic operation directly.
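No such helper exists as a dedicated standard-library object today (itertools.count, mentioned below in Dieter Maurer's message, is the usual workaround). A minimal sketch of what one might look like, just a Lock wrapped around an integer with hypothetical names, would be:

    import threading

    class AtomicCounter:
        """Hypothetical helper: hands out unique, monotonically increasing IDs."""

        def __init__(self, start=0):
            self._value = start
            self._lock = threading.Lock()

        def next_id(self):
            # The lock makes the read, increment and return indivisible.
            with self._lock:
                self._value += 1
                return self._value

    ids = AtomicCounter()
    print(ids.next_id(), ids.next_id())  # 1 2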
-----Original Message-----
From: Python-list On Behalf Of Dieter Maurer
Sent: Wednesday, March 1, 2023 1:43 PM
To: Chris Angelico
Cc: python-list@python.org
Subject: Look free ID genertion (was: Is there a more efficient threading
lock?)
Chris
Chris Angelico wrote at 2023-3-1 12:58 +1100:
> ...
> The
>atomicity would be more useful in that context as it would give
>lock-free ID generation, which doesn't work in Python.
I have seen `itertools.count` for that.
This works because its `__next__` is implemented in "C" and
therefore will not
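A small sketch of that pattern: because count.__next__ runs entirely in C, each next() call hands a value to exactly one caller under CPython's GIL, so ID generation needs no explicit lock.

    import itertools
    import threading

    _ids = itertools.count(1)

    def new_id():
        # A single call into C code; under CPython's GIL no two threads
        # can receive the same value (per the explanation above).
        return next(_ids)

    buckets = [[] for _ in range(8)]

    def worker(out):
        out.extend(new_id() for _ in range(100_000))

    threads = [threading.Thread(target=worker, args=(b,)) for b in buckets]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    all_ids = [i for b in buckets for i in b]
    print(len(all_ids), len(set(all_ids)))  # both 800000: no duplicates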
On Wed, 1 Mar 2023 at 10:04, Barry wrote:
>
> > Though it's still probably not as useful as you might hope. In C, if I
> > can do "int id = counter++;" atomically, it would guarantee me a new
> > ID that no other thread could ever have.
>
> C does not have to do that atomically. In fact it is free
> Though it's still probably not as useful as you might hope. In C, if I
> can do "int id = counter++;" atomically, it would guarantee me a new
> ID that no other thread could ever have.
C does not have to do that atomically. In fact it is free to use lots of
instructions to build the int value.
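The same point holds at the bytecode level in CPython: counter += 1 is likewise several separate steps rather than one. A quick way to see this (the exact opcodes vary by version):

    import dis

    counter = 0

    def bump():
        global counter
        counter += 1

    dis.dis(bump)
    # Prints a sequence along the lines of LOAD_GLOBAL, LOAD_CONST,
    # BINARY_OP (INPLACE_ADD on older versions) and STORE_GLOBAL:
    # a read, an add and a write, any of which could in principle be
    # separated by a thread switch.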
On Mon, 27 Feb 2023 at 17:28, Michael Speer wrote:
>
> https://github.com/python/cpython/commit/4958f5d69dd2bf86866c43491caf72f774ddec97
>
> It's a quirk of implementation. The scheduler currently only checks if it
> needs to release the GIL after the POP_JUMP_IF_FALSE, POP_JUMP_IF_TRUE,
> JUMP_AB
https://stackoverflow.com/questions/69993959/python-threads-difference-for-3-10-and-others
https://github.com/python/cpython/commit/4958f5d69dd2bf86866c43491caf72f774ddec97
It's a quirk of implementation. The scheduler currently only checks if it
needs to release the GIL after the POP_JUMP_IF_FAL
I wanted to provide an example showing that your claimed atomicity is simply
wrong, but I found there is something different in the 3.10+ CPython
implementations.
I've tested the code at the bottom of this message using a few docker
python images, and it appears there is a difference starting in 3.10.0
p
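The test code at the bottom of that message is cut off in this archive view. A minimal stand-in for that kind of experiment (not the original code) is below; on 3.9 and earlier the printed total typically comes up short, while on 3.10+ the loop tends to finish intact because of the eval-loop change discussed above.

    import sys
    import threading

    N_THREADS = 8
    N_INCREMENTS = 200_000
    counter = 0

    def bump():
        global counter
        for _ in range(N_INCREMENTS):
            counter += 1  # unsynchronised read-modify-write

    threads = [threading.Thread(target=bump) for _ in range(N_THREADS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print(sys.version)
    print(f"expected {N_THREADS * N_INCREMENTS}, got {counter}")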
On Mon, 27 Feb 2023 at 10:42, Jon Ribbens via Python-list
wrote:
>
> On 2023-02-26, Chris Angelico wrote:
> > On Sun, 26 Feb 2023 at 16:16, Jon Ribbens via Python-list
> > wrote:
> >> On 2023-02-25, Paul Rubin wrote:
> >> > The GIL is an evil thing, but it has been around for so long that most
>
> And yet, it appears that *something* changed between Python 2 and Python 3
> such that it *is* atomic:
I haven't looked, but something to check in the source is opcode
prediction. It's possible that after the BINARY_OP executes, opcode
prediction jumps straight to the STORE_FAST opcode, avoiding t
On 2023-02-26, Chris Angelico wrote:
> On Sun, 26 Feb 2023 at 16:16, Jon Ribbens via Python-list
> wrote:
>> On 2023-02-25, Paul Rubin wrote:
>> > The GIL is an evil thing, but it has been around for so long that most
>> > of us have gotten used to it, and some user code actually relies on it.
>>
On 2023-02-26, Barry Scott wrote:
> On 25/02/2023 23:45, Jon Ribbens via Python-list wrote:
>> I think it is the case that x += 1 is atomic but foo.x += 1 is not.
>
> No that is not true, and has never been true.
>
>:>>> def x(a):
>:... a += 1
>:...
>:>>>
>:>>> dis.dis(x)
> 1 0 RESU
On 25/02/2023 23:45, Jon Ribbens via Python-list wrote:
I think it is the case that x += 1 is atomic but foo.x += 1 is not.
No that is not true, and has never been true.
:>>> def x(a):
:...     a += 1
:...
:>>>
:>>> dis.dis(x)
  1           0 RESUME                   0

  2           2 LOAD_FAST
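For the attribute case specifically, the disassembly makes the point even more strongly: foo.x += 1 goes through LOAD_ATTR and STORE_ATTR, which can run arbitrary Python (properties, __getattr__, __setattr__), so it can never be a single indivisible step. A quick check (exact opcodes vary by version):

    import dis

    class Foo:
        x = 0

    foo = Foo()

    def attr_inc():
        foo.x += 1

    dis.dis(attr_inc)
    # Roughly: LOAD_GLOBAL foo, LOAD_ATTR x, LOAD_CONST 1, BINARY_OP (+=),
    # STORE_ATTR x. The attribute load and store may invoke user-defined
    # code, for example if x is a property.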
On Sun, 26 Feb 2023 at 16:27, Dennis Lee Bieber wrote:
>
> On Sat, 25 Feb 2023 15:41:52 -0600, Skip Montanaro
> declaimed the following:
>
>
> >concurrent.futures.ThreadPoolExecutor() with the default number of workers (
> >os.cpu_count() * 1.5, or 12 threads on my system) to process each month,
On Sun, 26 Feb 2023 at 16:16, Jon Ribbens via Python-list
wrote:
>
> On 2023-02-25, Paul Rubin wrote:
> > The GIL is an evil thing, but it has been around for so long that most
> > of us have gotten used to it, and some user code actually relies on it.
> > For example, with the GIL in place, a st
On Sat, 25 Feb 2023 15:41:52 -0600, Skip Montanaro
declaimed the following:
>concurrent.futures.ThreadPoolExecutor() with the default number of workers (
>os.cpu_count() * 1.5, or 12 threads on my system) to process each month, so
>12 active threads at a time. Given that the process is pretty mu
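As a small aside on the worker count mentioned there: on CPython 3.8+ the documented ThreadPoolExecutor default is min(32, os.cpu_count() + 4), which also works out to 12 on an eight-core machine. One way to confirm what a given build uses (peeking at a private attribute purely for illustration):

    import concurrent.futures
    import os

    with concurrent.futures.ThreadPoolExecutor() as pool:
        # _max_workers is private; printed here only to confirm the default.
        print(os.cpu_count(), pool._max_workers)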
On 2023-02-25, Paul Rubin wrote:
> Jon Ribbens writes:
>>> 1) you generally want to use RLock rather than Lock
>> Why?
>
> So that a thread that tries to acquire it twice doesn't block itself,
> etc. Look at the threading lib docs for more info.
Yes, I know what the docs say, I was asking why y
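A minimal illustration of the "blocks itself" point: the same thread may take an RLock more than once, whereas re-acquiring a plain Lock from the thread that already holds it deadlocks.

    import threading

    rlock = threading.RLock()

    def inner():
        with rlock:  # second acquisition by the same thread: fine with RLock
            print("inner got the RLock")

    def outer():
        with rlock:  # first acquisition
            inner()

    outer()

    # With lock = threading.Lock() instead, the with-statement in inner()
    # would block forever, because a plain Lock does not track ownership.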
On 2023-02-25, Paul Rubin wrote:
> Skip Montanaro writes:
>> from threading import Lock
>
> 1) you generally want to use RLock rather than Lock
Why?
> 2) I have generally felt that using locks at the app level at all is an
> antipattern. The main way I've stayed sane in multi-threaded Python
>
Re sqlite and threads. The C API can be compiled to be thread safe, from my
reading of the sqlite docs. What I have not checked is how Python’s bundled
sqlite is compiled. There are claims Python’s sqlite is not thread safe.
Barry
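For reference, the bundled module reports its thread-safety level through the DB-API attribute sqlite3.threadsafety (on recent CPython this reflects how the underlying SQLite library was compiled), and sqlite3.connect() refuses cross-thread use of a connection unless check_same_thread=False is passed. A quick way to see what a given build reports:

    import sqlite3

    print(sqlite3.sqlite_version)  # version of the bundled SQLite library
    print(sqlite3.threadsafety)    # PEP 249 thread-safety level of the module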
On 2/25/2023 4:41 PM, Skip Montanaro wrote:
Thanks for the responses.
Peter wrote:
Which OS is this?
MacOS Ventura 13.1, M1 MacBook Pro (eight cores).
Thomas wrote:
> I'm no expert on locks, but you don't usually want to keep a lock while
> some long-running computation goes on. You wan
“I'm no expert on locks, but you don't usually want to keep a lock while
some long-running computation goes on. You want the computation to be
done by a separate thread, put its results somewhere, and then notify
the choreographing thread that the result is ready.”
Maybe. There are so many poss
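A sketch of the pattern Thomas describes, assuming nothing about the real program beyond "do the slow call in a worker, put the result somewhere, and let the coordinating thread pick it up"; a queue.Queue provides both the "somewhere" and the notification.

    import queue
    import threading

    results = queue.Queue()

    def slow_computation(item):
        # Stand-in for the long-running, lock-guarded library call.
        return item * item

    def worker(items):
        for item in items:
            results.put(slow_computation(item))
        results.put(None)  # sentinel: this worker is done

    t = threading.Thread(target=worker, args=(range(5),))
    t.start()

    # The coordinating thread blocks in get() until a result is ready,
    # instead of holding a lock while the computation runs.
    while (value := results.get()) is not None:
        print(value)

    t.join()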
Thanks for the responses.
Peter wrote:
> Which OS is this?
MacOS Ventura 13.1, M1 MacBook Pro (eight cores).
Thomas wrote:
> I'm no expert on locks, but you don't usually want to keep a lock while
> some long-running computation goes on. You want the computation to be
> done by a separate thr
On 2023-02-25 09:52:15 -0600, Skip Montanaro wrote:
> BLOB_LOCK = Lock()
>
> def get_terms(text):
>     with BLOB_LOCK:
>         phrases = TextBlob(text, np_extractor=EXTRACTOR).noun_phrases
>     for phrase in phrases:
>         yield phrase
>
> When I monitor the application using py-spy, that
On 2/25/2023 10:52 AM, Skip Montanaro wrote:
I have a multi-threaded program which calls out to a non-thread-safe
library (not mine) in a couple places. I guard against multiple
threads executing code there using threading.Lock. The code is
straightforward:
from threading import Lock
# Somethin
On 2023-02-25 09:52:15 -0600, Skip Montanaro wrote:
> I have a multi-threaded program which calls out to a non-thread-safe
> library (not mine) in a couple places. I guard against multiple
> threads executing code there using threading.Lock. The code is
> straightforward:
>
> from threading import
I have a multi-threaded program which calls out to a non-thread-safe
library (not mine) in a couple places. I guard against multiple
threads executing code there using threading.Lock. The code is
straightforward:
from threading import Lock
# Something in textblob and/or nltk doesn't play nice wit