[Python-checkins] [3.13] gh-125041: test_zlib: For s390x HW acceleration, only skip checking the compressed bytes (GH-125042) (#125527)

2024-10-16 Thread encukou
https://github.com/python/cpython/commit/e3ae56468254004544b0a02f069a78214c964c36
commit: e3ae56468254004544b0a02f069a78214c964c36
branch: 3.13
author: Miss Islington (bot) <[email protected]>
committer: encukou 
date: 2024-10-16T13:33:47+02:00
summary:

[3.13] gh-125041: test_zlib: For s390x HW acceleration, only skip checking the compressed bytes (GH-125042) (#125527)

(cherry picked from commit cc5a225cdc2a5d4e035dd08d59cef39182c10a6c)

Co-authored-by: Petr Viktorin 

files:
A Misc/NEWS.d/next/Tests/2024-10-07-14-13-38.gh-issue-125041.PKLWDf.rst
M Lib/test/support/__init__.py
M Lib/test/test_zlib.py

diff --git a/Lib/test/support/__init__.py b/Lib/test/support/__init__.py
index 6413af50a7c0be..7938b46012c853 100644
--- a/Lib/test/support/__init__.py
+++ b/Lib/test/support/__init__.py
@@ -2588,9 +2588,9 @@ def exceeds_recursion_limit():
 return get_c_recursion_limit() * 3
 
 
-#Windows doesn't have os.uname() but it doesn't support s390x.
-skip_on_s390x = unittest.skipIf(hasattr(os, 'uname') and os.uname().machine == 's390x',
-                                'skipped on s390x')
+# Windows doesn't have os.uname() but it doesn't support s390x.
+is_s390x = hasattr(os, 'uname') and os.uname().machine == 's390x'
+skip_on_s390x = unittest.skipIf(is_s390x, 'skipped on s390x')
 
 Py_TRACE_REFS = hasattr(sys, 'getobjects')
 
diff --git a/Lib/test/test_zlib.py b/Lib/test/test_zlib.py
index ef02c64f886f8a..8b4bb8750f8f5c 100644
--- a/Lib/test/test_zlib.py
+++ b/Lib/test/test_zlib.py
@@ -6,7 +6,7 @@
 import pickle
 import random
 import sys
-from test.support import bigmemtest, _1G, _4G, skip_on_s390x
+from test.support import bigmemtest, _1G, _4G, is_s390x
 
 
 zlib = import_helper.import_module('zlib')
@@ -33,8 +33,9 @@ def _zlib_runtime_version_tuple(zlib_version=zlib.ZLIB_RUNTIME_VERSION):
 ZLIB_RUNTIME_VERSION_TUPLE = _zlib_runtime_version_tuple()
 
 
-# bpo-46623: On s390x, when a hardware accelerator is used, using different
-# ways to compress data with zlib can produce different compressed data.
+# bpo-46623: When a hardware accelerator is used (currently only on s390x),
+# using different ways to compress data with zlib can produce different
+# compressed data.
 # Simplified test_pair() code:
 #
 #   def func1(data):
@@ -57,8 +58,10 @@ def _zlib_runtime_version_tuple(zlib_version=zlib.ZLIB_RUNTIME_VERSION):
 #
 #   zlib.decompress(func1(data)) == zlib.decompress(func2(data)) == data
 #
-# Make the assumption that s390x always has an accelerator to simplify the skip
-# condition.
+# To simplify the skip condition, make the assumption that s390x always has an
+# accelerator, and nothing else has it.
+HW_ACCELERATED = is_s390x
+
 
 class VersionTestCase(unittest.TestCase):
 
@@ -223,12 +226,14 @@ def test_keywords(self):
  bufsize=zlib.DEF_BUF_SIZE),
  HAMLET_SCENE)
 
-@skip_on_s390x
 def test_speech128(self):
 # compress more data
 data = HAMLET_SCENE * 128
 x = zlib.compress(data)
-self.assertEqual(zlib.compress(bytearray(data)), x)
+# With hardware acceleration, the compressed bytes
+# might not be identical.
+if not HW_ACCELERATED:
+self.assertEqual(zlib.compress(bytearray(data)), x)
 for ob in x, bytearray(x):
 self.assertEqual(zlib.decompress(ob), data)
 
@@ -275,7 +280,6 @@ def test_64bit_compress(self, size):
 
 class CompressObjectTestCase(BaseCompressTestCase, unittest.TestCase):
 # Test compression object
-@skip_on_s390x
 def test_pair(self):
 # straightforward compress/decompress objects
 datasrc = HAMLET_SCENE * 128
@@ -286,7 +290,10 @@ def test_pair(self):
 x1 = co.compress(data)
 x2 = co.flush()
self.assertRaises(zlib.error, co.flush) # second flush should not work
-self.assertEqual(x1 + x2, datazip)
+# With hardware acceleration, the compressed bytes might not
+# be identical.
+if not HW_ACCELERATED:
+self.assertEqual(x1 + x2, datazip)
 for v1, v2 in ((x1, x2), (bytearray(x1), bytearray(x2))):
 dco = zlib.decompressobj()
 y1 = dco.decompress(v1 + v2)
diff --git a/Misc/NEWS.d/next/Tests/2024-10-07-14-13-38.gh-issue-125041.PKLWDf.rst b/Misc/NEWS.d/next/Tests/2024-10-07-14-13-38.gh-issue-125041.PKLWDf.rst
new file mode 100644
index 00..c7181eb9c1f3a9
--- /dev/null
+++ b/Misc/NEWS.d/next/Tests/2024-10-07-14-13-38.gh-issue-125041.PKLWDf.rst
@@ -0,0 +1,3 @@
+Re-enable skipped tests for :mod:`zlib` on the s390x architecture: only skip
+checks of the compressed bytes, which can be different between zlib's
+software implementation and the hardware-accelerated implementation.
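
The property the updated tests rely on can be demonstrated in isolation: two
different compression paths may produce different byte streams, yet both must
decompress back to the original input. A minimal round-trip sketch in plain
Python (independent of the patch, standard zlib module only):

    import zlib

    data = b"to be or not to be " * 128

    # Path 1: one-shot compression.
    one_shot = zlib.compress(data)

    # Path 2: incremental compression through a compression object.
    co = zlib.compressobj()
    incremental = co.compress(data) + co.flush()

    # With a hardware accelerator the two byte streams may legitimately
    # differ, so the tests no longer compare them on s390x ...
    # ... but both must round-trip to the original data, which is still checked.
    assert zlib.decompress(one_shot) == data
    assert zlib.decompress(incremental) == data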


[Python-checkins] [3.13] gh-125444: Fix illegal instruction for older Arm architectures (GH-125574) (GH-125595)

2024-10-16 Thread colesbury
https://github.com/python/cpython/commit/18b9079ddbc149d6b99c922630c246812e4d8ae7
commit: 18b9079ddbc149d6b99c922630c246812e4d8ae7
branch: 3.13
author: Miss Islington (bot) <[email protected]>
committer: colesbury 
date: 2024-10-16T14:48:40Z
summary:

gh-125444: Fix illegal instruction for older Arm architectures (GH-125574) (GH-125595)

On Arm v5 it is not possible to get the thread ID via the c13 register,
hence the illegal instruction. The c13 register has provided the thread ID
since the Arm v6K architecture variant; the other Arm v6 variants (T2, Z and
base) do not provide it. For simplicity, v5 and v6 are grouped together and
the instruction is used only on Arm v7 and later.
(cherry picked from commit feda9aa73ab95d17a291db22c416146f8e70edeb)

Co-authored-by: Diego Russo 

files:
A Misc/NEWS.d/next/Core_and_Builtins/2024-10-16-12-12-39.gh-issue-125444.9tG2X6.rst
M Include/internal/mimalloc/mimalloc/prim.h
M Include/object.h

diff --git a/Include/internal/mimalloc/mimalloc/prim.h b/Include/internal/mimalloc/mimalloc/prim.h
index 8a60d528458e6c..322ab29e6b41c2 100644
--- a/Include/internal/mimalloc/mimalloc/prim.h
+++ b/Include/internal/mimalloc/mimalloc/prim.h
@@ -151,9 +151,9 @@ static inline mi_threadid_t _mi_prim_thread_id(void) mi_attr_noexcept {
 // If you test on another platform and it works please send a PR :-)
// see also https://akkadia.org/drepper/tls.pdf for more info on the TLS register.
 #elif defined(__GNUC__) && ( \
-   (defined(__GLIBC__)   && (defined(__x86_64__) || defined(__i386__) || defined(__arm__) || defined(__aarch64__))) \
+   (defined(__GLIBC__)   && (defined(__x86_64__) || defined(__i386__) || (defined(__arm__) && __ARM_ARCH >= 7) || defined(__aarch64__))) \
 || (defined(__APPLE__)   && (defined(__x86_64__) || defined(__aarch64__))) \
-|| (defined(__BIONIC__)  && (defined(__x86_64__) || defined(__i386__) || defined(__arm__) || defined(__aarch64__))) \
+|| (defined(__BIONIC__)  && (defined(__x86_64__) || defined(__i386__) || (defined(__arm__) && __ARM_ARCH >= 7) || defined(__aarch64__))) \
 || (defined(__FreeBSD__) && (defined(__x86_64__) || defined(__i386__) || defined(__aarch64__))) \
 || (defined(__OpenBSD__) && (defined(__x86_64__) || defined(__i386__) || defined(__aarch64__))) \
   )
diff --git a/Include/object.h b/Include/object.h
index 78aa7ad0f459ff..b53f9acfebdb0c 100644
--- a/Include/object.h
+++ b/Include/object.h
@@ -259,7 +259,7 @@ _Py_ThreadId(void)
 __asm__("movq %%gs:0, %0" : "=r" (tid));  // x86_64 macOSX uses GS
 #elif defined(__x86_64__)
__asm__("movq %%fs:0, %0" : "=r" (tid));  // x86_64 Linux, BSD uses FS
-#elif defined(__arm__)
+#elif defined(__arm__) && __ARM_ARCH >= 7
 __asm__ ("mrc p15, 0, %0, c13, c0, 3\nbic %0, %0, #3" : "=r" (tid));
 #elif defined(__aarch64__) && defined(__APPLE__)
 __asm__ ("mrs %0, tpidrro_el0" : "=r" (tid));
diff --git a/Misc/NEWS.d/next/Core_and_Builtins/2024-10-16-12-12-39.gh-issue-125444.9tG2X6.rst b/Misc/NEWS.d/next/Core_and_Builtins/2024-10-16-12-12-39.gh-issue-125444.9tG2X6.rst
new file mode 100644
index 00..13c1e745edf8d5
--- /dev/null
+++ b/Misc/NEWS.d/next/Core_and_Builtins/2024-10-16-12-12-39.gh-issue-125444.9tG2X6.rst
@@ -0,0 +1 @@
+Fix illegal instruction for older Arm architectures. Patch by Diego Russo, testing by Ross Burton.



[Python-checkins] gh-125217: Turn off optimization around _PyEval_EvalFrameDefault to avoid MSVC crash (#125477)

2024-10-16 Thread mdboom
https://github.com/python/cpython/commit/51410d8bdcfe0fd215f94a098dc6cd0919c648a1
commit: 51410d8bdcfe0fd215f94a098dc6cd0919c648a1
branch: main
author: Michael Droettboom 
committer: mdboom 
date: 2024-10-16T12:51:15Z
summary:

gh-125217: Turn off optimization around _PyEval_EvalFrameDefault to avoid MSVC crash (#125477)

files:
M Python/ceval.c

diff --git a/Python/ceval.c b/Python/ceval.c
index f4e0add3034707..43776e773e0deb 100644
--- a/Python/ceval.c
+++ b/Python/ceval.c
@@ -761,6 +761,16 @@ _PyObjectArray_Free(PyObject **array, PyObject **scratch)
  * so consume 3 units of C stack */
 #define PY_EVAL_C_STACK_UNITS 2
 
+#if defined(_MSC_VER) && defined(_Py_USING_PGO) && defined(_Py_JIT)
+/* _PyEval_EvalFrameDefault is too large to optimize for speed with
+   PGO on MSVC when the JIT is enabled. Disable that optimization
+   around this function only. If this is fixed upstream, we should
+   gate this on the version of MSVC.
+ */
+#  pragma optimize("t", off)
+/* This setting is reversed below following _PyEval_EvalFrameDefault */
+#endif
+
 PyObject* _Py_HOT_FUNCTION
_PyEval_EvalFrameDefault(PyThreadState *tstate, _PyInterpreterFrame *frame, int throwflag)
 {
@@ -1136,6 +1146,10 @@ _PyEval_EvalFrameDefault(PyThreadState *tstate, _PyInterpreterFrame *frame, int
 
 }
 
+#if defined(_MSC_VER) && defined(_Py_USING_PGO) && defined(_Py_JIT)
+#  pragma optimize("", on)
+#endif
+
 #if defined(__GNUC__)
 #  pragma GCC diagnostic pop
 #elif defined(_MSC_VER) /* MS_WINDOWS */



[Python-checkins] gh-125451: Fix deadlock in ProcessPoolExecutor shutdown (#125492)

2024-10-16 Thread colesbury
https://github.com/python/cpython/commit/760872efecb95017db8e38a8eda614bf23d2a22c
commit: 760872efecb95017db8e38a8eda614bf23d2a22c
branch: main
author: Sam Gross 
committer: colesbury 
date: 2024-10-16T11:39:17-04:00
summary:

gh-125451: Fix deadlock in ProcessPoolExecutor shutdown (#125492)

There was a deadlock when `ProcessPoolExecutor` shuts down at the same
time that a queueing thread handles an error processing a task.

Don't use `_shutdown_lock` to protect the `_ThreadWakeup` pipes -- use
an internal lock instead. This fixes the ordering deadlock where the
`ExecutorManagerThread` holds the `_shutdown_lock` and joins the
queueing thread, while the queueing thread is attempting to acquire the
`_shutdown_lock` while closing the `_ThreadWakeup`.

files:
A Misc/NEWS.d/next/Library/2024-10-14-17-29-34.gh-issue-125451.fmP3T9.rst
M Lib/concurrent/futures/process.py
M Lib/test/test_concurrent_futures/test_shutdown.py

diff --git a/Lib/concurrent/futures/process.py b/Lib/concurrent/futures/process.py
index 7092b4757b5429..42eee72bc1457f 100644
--- a/Lib/concurrent/futures/process.py
+++ b/Lib/concurrent/futures/process.py
@@ -68,27 +68,31 @@
 class _ThreadWakeup:
 def __init__(self):
 self._closed = False
+self._lock = threading.Lock()
 self._reader, self._writer = mp.Pipe(duplex=False)
 
 def close(self):
-# Please note that we do not take the shutdown lock when
+# Please note that we do not take the self._lock when
 # calling clear() (to avoid deadlocking) so this method can
 # only be called safely from the same thread as all calls to
-# clear() even if you hold the shutdown lock. Otherwise we
+# clear() even if you hold the lock. Otherwise we
 # might try to read from the closed pipe.
-if not self._closed:
-self._closed = True
-self._writer.close()
-self._reader.close()
+with self._lock:
+if not self._closed:
+self._closed = True
+self._writer.close()
+self._reader.close()
 
 def wakeup(self):
-if not self._closed:
-self._writer.send_bytes(b"")
+with self._lock:
+if not self._closed:
+self._writer.send_bytes(b"")
 
 def clear(self):
-if not self._closed:
-while self._reader.poll():
-self._reader.recv_bytes()
+if self._closed:
+raise RuntimeError('operation on closed _ThreadWakeup')
+while self._reader.poll():
+self._reader.recv_bytes()
 
 
 def _python_exit():
@@ -167,10 +171,8 @@ def __init__(self, work_id, fn, args, kwargs):
 
 class _SafeQueue(Queue):
 """Safe Queue set exception to the future object linked to a job"""
-def __init__(self, max_size=0, *, ctx, pending_work_items, shutdown_lock,
- thread_wakeup):
+def __init__(self, max_size=0, *, ctx, pending_work_items, thread_wakeup):
 self.pending_work_items = pending_work_items
-self.shutdown_lock = shutdown_lock
 self.thread_wakeup = thread_wakeup
 super().__init__(max_size, ctx=ctx)
 
@@ -179,8 +181,7 @@ def _on_queue_feeder_error(self, e, obj):
 tb = format_exception(type(e), e, e.__traceback__)
 e.__cause__ = _RemoteTraceback('\n"""\n{}"""'.format(''.join(tb)))
 work_item = self.pending_work_items.pop(obj.work_id, None)
-with self.shutdown_lock:
-self.thread_wakeup.wakeup()
+self.thread_wakeup.wakeup()
 # work_item can be None if another process terminated. In this
 # case, the executor_manager_thread fails all work_items
 # with BrokenProcessPool
@@ -296,12 +297,10 @@ def __init__(self, executor):
 # if there is no pending work item.
 def weakref_cb(_,
thread_wakeup=self.thread_wakeup,
-   shutdown_lock=self.shutdown_lock,
mp_util_debug=mp.util.debug):
 mp_util_debug('Executor collected: triggering callback for'
   ' QueueManager wakeup')
-with shutdown_lock:
-thread_wakeup.wakeup()
+thread_wakeup.wakeup()
 
 self.executor_reference = weakref.ref(executor, weakref_cb)
 
@@ -429,11 +428,6 @@ def wait_result_broken_or_wakeup(self):
 elif wakeup_reader in ready:
 is_broken = False
 
-# No need to hold the _shutdown_lock here because:
-# 1. we're the only thread to use the wakeup reader
-# 2. we're also the only thread to call thread_wakeup.close()
-# 3. we want to avoid a possible deadlock when both reader and writer
-#would block (gh-105829)
 self.thread_wakeup.clear()
 
 return result_item, is_broken, cause
@@ -721,10 +715,9 @@ def __init__(self, max_workers=None, mp_context=None,
 # as it co
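
Stripped of the diff noise, the core of the change is that the wakeup pipe now
carries its own lock, so wakeup() and close() may race between the queueing
thread and the ExecutorManagerThread without ever touching the executor's
_shutdown_lock. A simplified sketch of that pattern (names shortened; not the
exact stdlib code):

    import multiprocessing as mp
    import threading

    class WakeupPipe:
        def __init__(self):
            self._closed = False
            self._lock = threading.Lock()   # internal lock, not _shutdown_lock
            self._reader, self._writer = mp.Pipe(duplex=False)

        def wakeup(self):
            # Safe to call from any thread, even while another thread closes us.
            with self._lock:
                if not self._closed:
                    self._writer.send_bytes(b"")

        def close(self):
            with self._lock:
                if not self._closed:
                    self._closed = True
                    self._writer.close()
                    self._reader.close()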

[Python-checkins] gh-125444: Fix illegal instruction for older Arm architectures (#125574)

2024-10-16 Thread colesbury
https://github.com/python/cpython/commit/feda9aa73ab95d17a291db22c416146f8e70edeb
commit: feda9aa73ab95d17a291db22c416146f8e70edeb
branch: main
author: Diego Russo 
committer: colesbury 
date: 2024-10-16T09:13:07-04:00
summary:

gh-125444: Fix illegal instruction for older Arm architectures (#125574)

On Arm v5 it is not possible to get the thread ID via the c13 register,
hence the illegal instruction. The c13 register has provided the thread ID
since the Arm v6K architecture variant; the other Arm v6 variants (T2, Z and
base) do not provide it. For simplicity, v5 and v6 are grouped together and
the instruction is used only on Arm v7 and later.

files:
A Misc/NEWS.d/next/Core_and_Builtins/2024-10-16-12-12-39.gh-issue-125444.9tG2X6.rst
M Include/internal/mimalloc/mimalloc/prim.h
M Include/object.h

diff --git a/Include/internal/mimalloc/mimalloc/prim.h b/Include/internal/mimalloc/mimalloc/prim.h
index 8a60d528458e6c..322ab29e6b41c2 100644
--- a/Include/internal/mimalloc/mimalloc/prim.h
+++ b/Include/internal/mimalloc/mimalloc/prim.h
@@ -151,9 +151,9 @@ static inline mi_threadid_t _mi_prim_thread_id(void) mi_attr_noexcept {
 // If you test on another platform and it works please send a PR :-)
// see also https://akkadia.org/drepper/tls.pdf for more info on the TLS register.
 #elif defined(__GNUC__) && ( \
-   (defined(__GLIBC__)   && (defined(__x86_64__) || defined(__i386__) || defined(__arm__) || defined(__aarch64__))) \
+   (defined(__GLIBC__)   && (defined(__x86_64__) || defined(__i386__) || (defined(__arm__) && __ARM_ARCH >= 7) || defined(__aarch64__))) \
 || (defined(__APPLE__)   && (defined(__x86_64__) || defined(__aarch64__))) \
-|| (defined(__BIONIC__)  && (defined(__x86_64__) || defined(__i386__) || defined(__arm__) || defined(__aarch64__))) \
+|| (defined(__BIONIC__)  && (defined(__x86_64__) || defined(__i386__) || (defined(__arm__) && __ARM_ARCH >= 7) || defined(__aarch64__))) \
 || (defined(__FreeBSD__) && (defined(__x86_64__) || defined(__i386__) || defined(__aarch64__))) \
 || (defined(__OpenBSD__) && (defined(__x86_64__) || defined(__i386__) || defined(__aarch64__))) \
   )
diff --git a/Include/object.h b/Include/object.h
index 5be4dedadc20eb..7e1b0966fc5e34 100644
--- a/Include/object.h
+++ b/Include/object.h
@@ -192,7 +192,7 @@ _Py_ThreadId(void)
 __asm__("movq %%gs:0, %0" : "=r" (tid));  // x86_64 macOSX uses GS
 #elif defined(__x86_64__)
__asm__("movq %%fs:0, %0" : "=r" (tid));  // x86_64 Linux, BSD uses FS
-#elif defined(__arm__)
+#elif defined(__arm__) && __ARM_ARCH >= 7
 __asm__ ("mrc p15, 0, %0, c13, c0, 3\nbic %0, %0, #3" : "=r" (tid));
 #elif defined(__aarch64__) && defined(__APPLE__)
 __asm__ ("mrs %0, tpidrro_el0" : "=r" (tid));
diff --git a/Misc/NEWS.d/next/Core_and_Builtins/2024-10-16-12-12-39.gh-issue-125444.9tG2X6.rst b/Misc/NEWS.d/next/Core_and_Builtins/2024-10-16-12-12-39.gh-issue-125444.9tG2X6.rst
new file mode 100644
index 00..13c1e745edf8d5
--- /dev/null
+++ b/Misc/NEWS.d/next/Core_and_Builtins/2024-10-16-12-12-39.gh-issue-125444.9tG2X6.rst
@@ -0,0 +1 @@
+Fix illegal instruction for older Arm architectures. Patch by Diego Russo, testing by Ross Burton.



[Python-checkins] [3.13] CI: Bump Python to 3.13 and mypy to 1.12 in mypy workflow (GH-… (#125596)

2024-10-16 Thread Eclips4
https://github.com/python/cpython/commit/3fda8a824678ea5409509a22c09ab435acccd8c3
commit: 3fda8a824678ea5409509a22c09ab435acccd8c3
branch: 3.13
author: Kirill Podoprigora 
committer: Eclips4 
date: 2024-10-16T15:31:00Z
summary:

[3.13] CI: Bump Python to 3.13 and mypy to 1.12 in mypy workflow (GH-… (#125596)

[3.13] CI: Bump Python to 3.13 and mypy to 1.12 in mypy workflow (GH-125592)

(cherry picked from commit d83fcf8371f2f33c7797bc8f5423a8bca8c46e5c)

files:
M .github/workflows/mypy.yml
M Tools/requirements-dev.txt

diff --git a/.github/workflows/mypy.yml b/.github/workflows/mypy.yml
index 1b2d998182e0f7..e5b05302b5ac27 100644
--- a/.github/workflows/mypy.yml
+++ b/.github/workflows/mypy.yml
@@ -53,7 +53,7 @@ jobs:
   - uses: actions/checkout@v4
   - uses: actions/setup-python@v5
 with:
-  python-version: "3.11"
+  python-version: "3.13"
   cache: pip
   cache-dependency-path: Tools/requirements-dev.txt
   - run: pip install -r Tools/requirements-dev.txt
diff --git a/Tools/requirements-dev.txt b/Tools/requirements-dev.txt
index 1767727373918f..a4261ff0a38d1b 100644
--- a/Tools/requirements-dev.txt
+++ b/Tools/requirements-dev.txt
@@ -1,6 +1,6 @@
 # Requirements file for external linters and checks we run on
 # Tools/clinic, Tools/cases_generator/, and Tools/peg_generator/ in CI
-mypy==1.10.0
+mypy==1.12
 
 # needed for peg_generator:
 types-psutil==5.9.5.20240423



[Python-checkins] gh-124872: Replace enter/exit events with "switched" (#125532)

2024-10-16 Thread ambv
https://github.com/python/cpython/commit/bee112a94d688c8048ddeddaa7bbd5150aecad11
commit: bee112a94d688c8048ddeddaa7bbd5150aecad11
branch: main
author: Kirill Podoprigora 
committer: ambv 
date: 2024-10-16T13:53:21+02:00
summary:

gh-124872: Replace enter/exit events with "switched" (#125532)

Users want to know when the current context switches to a different
context object.  Right now this happens when and only when a context
is entered or exited, so the enter and exit events are synonymous with
"switched".  However, if the changes proposed for gh-99633 are
implemented, the current context will also switch for reasons other
than context enter or exit.  Since users actually care about context
switches and not enter or exit, replace the enter and exit events with
a single switched event.

The former exit event was emitted just before exiting the context.
The new switched event is emitted after the context is exited to match
the semantics users expect of an event with a past-tense name.  If
users need the ability to clean up before the switch takes effect,
another event type can be added in the future.  It is not added here
because YAGNI.

I skipped 0 in the enum as a matter of practice.  Skipping 0 makes it
easier to troubleshoot when code forgets to set zeroed memory, and it
aligns with best practices for other tools (e.g.,
https://protobuf.dev/programming-guides/dos-donts/#unspecified-enum).

Co-authored-by: Richard Hansen 
Co-authored-by: Victor Stinner 
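
From Python code, the switch points the new event corresponds to are easiest
to see with contextvars.Context.run(): entering the context switches the
current context, and returning from run() switches it back, with the
notification delivered after the exit. A small sketch of the Python-level
behaviour a watcher would observe (the watcher API itself is C-only):

    import contextvars

    request_id = contextvars.ContextVar("request_id", default=None)
    ctx = contextvars.Context()

    def handler():
        request_id.set(42)            # only visible while ctx is current
        return request_id.get()

    assert ctx.run(handler) == 42     # switch in, run, switch back out
    assert request_id.get() is None   # the original context is current again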

files:
M Doc/c-api/contextvars.rst
M Include/cpython/context.h
M Lib/test/test_capi/test_watchers.py
M Modules/_testcapi/watchers.c
M Python/context.c
M Tools/c-analyzer/cpython/ignored.tsv

diff --git a/Doc/c-api/contextvars.rst b/Doc/c-api/contextvars.rst
index 8eba54a80dc80d..b7c6550ff34aac 100644
--- a/Doc/c-api/contextvars.rst
+++ b/Doc/c-api/contextvars.rst
@@ -123,16 +123,10 @@ Context object management functions:
 
Enumeration of possible context object watcher events:
 
-   - ``Py_CONTEXT_EVENT_ENTER``: A context has been entered, causing the
- :term:`current context` to switch to it.  The object passed to the watch
- callback is the now-current :class:`contextvars.Context` object.  Each
- enter event will eventually have a corresponding exit event for the same
- context object after any subsequently entered contexts have themselves been
- exited.
-   - ``Py_CONTEXT_EVENT_EXIT``: A context is about to be exited, which will
- cause the :term:`current context` to switch back to what it was before the
- context was entered.  The object passed to the watch callback is the
- still-current :class:`contextvars.Context` object.
+   - ``Py_CONTEXT_SWITCHED``: The :term:`current context` has switched to a
+ different context.  The object passed to the watch callback is the
+ now-current :class:`contextvars.Context` object, or None if no context is
+ current.
 
.. versionadded:: 3.14
 
diff --git a/Include/cpython/context.h b/Include/cpython/context.h
index 3c9be7873b9399..3a7a4b459c09ad 100644
--- a/Include/cpython/context.h
+++ b/Include/cpython/context.h
@@ -29,20 +29,11 @@ PyAPI_FUNC(int) PyContext_Exit(PyObject *);
 
 typedef enum {
 /*
- * A context has been entered, causing the "current context" to switch to
- * it.  The object passed to the watch callback is the now-current
- * contextvars.Context object.  Each enter event will eventually have a
- * corresponding exit event for the same context object after any
- * subsequently entered contexts have themselves been exited.
+ * The current context has switched to a different context.  The object
+ * passed to the watch callback is the now-current contextvars.Context
+ * object, or None if no context is current.
  */
-Py_CONTEXT_EVENT_ENTER,
-/*
- * A context is about to be exited, which will cause the "current context"
- * to switch back to what it was before the context was entered.  The
- * object passed to the watch callback is the still-current
- * contextvars.Context object.
- */
-Py_CONTEXT_EVENT_EXIT,
+Py_CONTEXT_SWITCHED = 1,
 } PyContextEvent;
 
 /*
diff --git a/Lib/test/test_capi/test_watchers.py b/Lib/test/test_capi/test_watchers.py
index f21d2627c6094b..4bb764bf9d0963 100644
--- a/Lib/test/test_capi/test_watchers.py
+++ b/Lib/test/test_capi/test_watchers.py
@@ -577,68 +577,66 @@ class TestContextObjectWatchers(unittest.TestCase):
 def context_watcher(self, which_watcher):
 wid = _testcapi.add_context_watcher(which_watcher)
 try:
-yield wid
+switches = _testcapi.get_context_switches(which_watcher)
+except ValueError:
+switches = None
+try:
+yield switches
 finally:
 _testcapi.clear_context_watcher(wid)
 
-def assert_event_counts(self, exp_enter_0, exp_exit_0,
-exp_enter_1, exp_exit_1):
-self.assertEqual(
-

[Python-checkins] gh-125584: Require network resource in ``test_urllib2.HandlerTests.test_ftp_error`` (#125586)

2024-10-16 Thread Eclips4
https://github.com/python/cpython/commit/e4d90be84536746a966478acc4c0cf43a201f492
commit: e4d90be84536746a966478acc4c0cf43a201f492
branch: main
author: Michał Górny 
committer: Eclips4 
date: 2024-10-16T13:24:41Z
summary:

gh-125584: Require network resource in ``test_urllib2.HandlerTests.test_ftp_error`` (#125586)

files:
M Lib/test/test_urllib2.py

diff --git a/Lib/test/test_urllib2.py b/Lib/test/test_urllib2.py
index 19179fdc9508ca..b90ccc2f125b93 100644
--- a/Lib/test/test_urllib2.py
+++ b/Lib/test/test_urllib2.py
@@ -794,6 +794,7 @@ def connect_ftp(self, user, passwd, host, port, dirs,
 self.assertEqual(headers.get("Content-type"), mimetype)
 self.assertEqual(int(headers["Content-length"]), len(data))
 
+@support.requires_resource("network")
 def test_ftp_error(self):
 class ErrorFTPHandler(urllib.request.FTPHandler):
 def __init__(self, exception):
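
For reference, the decorator added here comes from test.support; it makes
regrtest skip the test unless the named resource has been enabled. A minimal,
self-contained illustration (a hypothetical test, not the one in the patch):

    import unittest
    from test import support

    class ExampleNetworkTests(unittest.TestCase):
        # Skipped by regrtest unless run with ``-u network`` (or ``-u all``).
        @support.requires_resource("network")
        def test_needs_network(self):
            self.assertTrue(True)

    if __name__ == "__main__":
        unittest.main()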



[Python-checkins] CI: Bump Python to 3.13 and mypy to 1.12 in mypy workflow (#125592)

2024-10-16 Thread Eclips4
https://github.com/python/cpython/commit/d83fcf8371f2f33c7797bc8f5423a8bca8c46e5c
commit: d83fcf8371f2f33c7797bc8f5423a8bca8c46e5c
branch: main
author: Kirill Podoprigora 
committer: Eclips4 
date: 2024-10-16T14:27:19Z
summary:

CI: Bump Python to 3.13 and mypy to 1.12 in mypy workflow (#125592)

* Bump mypy to 1.12 & Python to 3.13

* Remove unnecessary `type: ignore`

files:
M .github/workflows/mypy.yml
M Tools/clinic/libclinic/converter.py
M Tools/requirements-dev.txt

diff --git a/.github/workflows/mypy.yml b/.github/workflows/mypy.yml
index 1b2d998182e0f7..e5b05302b5ac27 100644
--- a/.github/workflows/mypy.yml
+++ b/.github/workflows/mypy.yml
@@ -53,7 +53,7 @@ jobs:
   - uses: actions/checkout@v4
   - uses: actions/setup-python@v5
 with:
-  python-version: "3.11"
+  python-version: "3.13"
   cache: pip
   cache-dependency-path: Tools/requirements-dev.txt
   - run: pip install -r Tools/requirements-dev.txt
diff --git a/Tools/clinic/libclinic/converter.py b/Tools/clinic/libclinic/converter.py
index 2abf06dc4e89a2..86853bb4fba253 100644
--- a/Tools/clinic/libclinic/converter.py
+++ b/Tools/clinic/libclinic/converter.py
@@ -545,9 +545,7 @@ def closure(f: CConverterClassT) -> CConverterClassT:
 if not kwargs:
 added_f = f
 else:
-# type ignore due to a mypy regression :(
-# https://github.com/python/mypy/issues/17646
-added_f = functools.partial(f, **kwargs)  # type: ignore[misc]
+added_f = functools.partial(f, **kwargs)
 if format_unit:
 legacy_converters[format_unit] = added_f
 return f
diff --git a/Tools/requirements-dev.txt b/Tools/requirements-dev.txt
index 408a9ea6607f9e..57f0b982b00f5d 100644
--- a/Tools/requirements-dev.txt
+++ b/Tools/requirements-dev.txt
@@ -1,6 +1,6 @@
 # Requirements file for external linters and checks we run on
 # Tools/clinic, Tools/cases_generator/, and Tools/peg_generator/ in CI
-mypy==1.11.2
+mypy==1.12
 
 # needed for peg_generator:
 types-psutil==6.0.0.20240901



[Python-checkins] [3.12] gh-125041: test_zlib: For s390x HW acceleration, only skip checking the compressed bytes (GH-125042) (GH-125526)

2024-10-16 Thread encukou
https://github.com/python/cpython/commit/cbd50a4bdc7ec474221334324a09e5f2053adea6
commit: cbd50a4bdc7ec474221334324a09e5f2053adea6
branch: 3.12
author: Miss Islington (bot) <[email protected]>
committer: encukou 
date: 2024-10-16T14:44:37+02:00
summary:

[3.12] gh-125041: test_zlib: For s390x HW acceleration, only skip checking the compressed bytes (GH-125042) (GH-125526)

(cherry picked from commit cc5a225cdc2a5d4e035dd08d59cef39182c10a6c)

Co-authored-by: Petr Viktorin 

files:
A Misc/NEWS.d/next/Tests/2024-10-07-14-13-38.gh-issue-125041.PKLWDf.rst
M Lib/test/support/__init__.py
M Lib/test/test_zlib.py

diff --git a/Lib/test/support/__init__.py b/Lib/test/support/__init__.py
index 78f410e1455852..ba57eb307c0b8f 100644
--- a/Lib/test/support/__init__.py
+++ b/Lib/test/support/__init__.py
@@ -2450,9 +2450,9 @@ def adjust_int_max_str_digits(max_digits):
 else:
 C_RECURSION_LIMIT = 1
 
-#Windows doesn't have os.uname() but it doesn't support s390x.
-skip_on_s390x = unittest.skipIf(hasattr(os, 'uname') and os.uname().machine == 's390x',
-                                'skipped on s390x')
+# Windows doesn't have os.uname() but it doesn't support s390x.
+is_s390x = hasattr(os, 'uname') and os.uname().machine == 's390x'
+skip_on_s390x = unittest.skipIf(is_s390x, 'skipped on s390x')
 
 _BASE_COPY_SRC_DIR_IGNORED_NAMES = frozenset({
 # SRC_DIR/.git
diff --git a/Lib/test/test_zlib.py b/Lib/test/test_zlib.py
index 0a13986a847f0a..8654b93ec64ac8 100644
--- a/Lib/test/test_zlib.py
+++ b/Lib/test/test_zlib.py
@@ -7,7 +7,7 @@
 import pickle
 import random
 import sys
-from test.support import bigmemtest, _1G, _4G, skip_on_s390x
+from test.support import bigmemtest, _1G, _4G, is_s390x
 
 
 zlib = import_helper.import_module('zlib')
@@ -34,8 +34,9 @@ def _zlib_runtime_version_tuple(zlib_version=zlib.ZLIB_RUNTIME_VERSION):
 ZLIB_RUNTIME_VERSION_TUPLE = _zlib_runtime_version_tuple()
 
 
-# bpo-46623: On s390x, when a hardware accelerator is used, using different
-# ways to compress data with zlib can produce different compressed data.
+# bpo-46623: When a hardware accelerator is used (currently only on s390x),
+# using different ways to compress data with zlib can produce different
+# compressed data.
 # Simplified test_pair() code:
 #
 #   def func1(data):
@@ -58,8 +59,10 @@ def _zlib_runtime_version_tuple(zlib_version=zlib.ZLIB_RUNTIME_VERSION):
 #
 #   zlib.decompress(func1(data)) == zlib.decompress(func2(data)) == data
 #
-# Make the assumption that s390x always has an accelerator to simplify the skip
-# condition.
+# To simplify the skip condition, make the assumption that s390x always has an
+# accelerator, and nothing else has it.
+HW_ACCELERATED = is_s390x
+
 
 class VersionTestCase(unittest.TestCase):
 
@@ -224,12 +227,14 @@ def test_keywords(self):
  bufsize=zlib.DEF_BUF_SIZE),
  HAMLET_SCENE)
 
-@skip_on_s390x
 def test_speech128(self):
 # compress more data
 data = HAMLET_SCENE * 128
 x = zlib.compress(data)
-self.assertEqual(zlib.compress(bytearray(data)), x)
+# With hardware acceleration, the compressed bytes
+# might not be identical.
+if not HW_ACCELERATED:
+self.assertEqual(zlib.compress(bytearray(data)), x)
 for ob in x, bytearray(x):
 self.assertEqual(zlib.decompress(ob), data)
 
@@ -276,7 +281,6 @@ def test_64bit_compress(self, size):
 
 class CompressObjectTestCase(BaseCompressTestCase, unittest.TestCase):
 # Test compression object
-@skip_on_s390x
 def test_pair(self):
 # straightforward compress/decompress objects
 datasrc = HAMLET_SCENE * 128
@@ -287,7 +291,10 @@ def test_pair(self):
 x1 = co.compress(data)
 x2 = co.flush()
self.assertRaises(zlib.error, co.flush) # second flush should not work
-self.assertEqual(x1 + x2, datazip)
+# With hardware acceleration, the compressed bytes might not
+# be identical.
+if not HW_ACCELERATED:
+self.assertEqual(x1 + x2, datazip)
 for v1, v2 in ((x1, x2), (bytearray(x1), bytearray(x2))):
 dco = zlib.decompressobj()
 y1 = dco.decompress(v1 + v2)
diff --git a/Misc/NEWS.d/next/Tests/2024-10-07-14-13-38.gh-issue-125041.PKLWDf.rst b/Misc/NEWS.d/next/Tests/2024-10-07-14-13-38.gh-issue-125041.PKLWDf.rst
new file mode 100644
index 00..c7181eb9c1f3a9
--- /dev/null
+++ b/Misc/NEWS.d/next/Tests/2024-10-07-14-13-38.gh-issue-125041.PKLWDf.rst
@@ -0,0 +1,3 @@
+Re-enable skipped tests for :mod:`zlib` on the s390x architecture: only skip
+checks of the compressed bytes, which can be different between zlib's
+software implementation and the hardware-accelerated implementation.


[Python-checkins] [3.12] gh-124958: fix asyncio.TaskGroup and _PyFuture refcycles (#124959) (#125466)

2024-10-16 Thread 1st1
https://github.com/python/cpython/commit/32d457941e8b39c0300e02632f932d1556b7beee
commit: 32d457941e8b39c0300e02632f932d1556b7beee
branch: 3.12
author: Thomas Grainger 
committer: 1st1 
date: 2024-10-16T21:45:59-07:00
summary:

[3.12] gh-124958: fix asyncio.TaskGroup and _PyFuture refcycles (#124959) (#125466)

gh-124958: fix asyncio.TaskGroup and _PyFuture refcycles (#124959)

files:
A Misc/NEWS.d/next/Library/2024-10-04-08-46-00.gh-issue-124958.rea9-x.rst
M Lib/asyncio/futures.py
M Lib/asyncio/taskgroups.py
M Lib/test/test_asyncio/test_futures.py
M Lib/test/test_asyncio/test_taskgroups.py

diff --git a/Lib/asyncio/futures.py b/Lib/asyncio/futures.py
index fd486f02c67c8e..0c530bbdbcf2d8 100644
--- a/Lib/asyncio/futures.py
+++ b/Lib/asyncio/futures.py
@@ -194,8 +194,7 @@ def result(self):
 the future is done and has an exception set, this exception is raised.
 """
 if self._state == _CANCELLED:
-exc = self._make_cancelled_error()
-raise exc
+raise self._make_cancelled_error()
 if self._state != _FINISHED:
 raise exceptions.InvalidStateError('Result is not ready.')
 self.__log_traceback = False
@@ -212,8 +211,7 @@ def exception(self):
 InvalidStateError.
 """
 if self._state == _CANCELLED:
-exc = self._make_cancelled_error()
-raise exc
+raise self._make_cancelled_error()
 if self._state != _FINISHED:
 raise exceptions.InvalidStateError('Exception is not set.')
 self.__log_traceback = False
diff --git a/Lib/asyncio/taskgroups.py b/Lib/asyncio/taskgroups.py
index d264e51f1fd4e6..aada3ffa8e0f29 100644
--- a/Lib/asyncio/taskgroups.py
+++ b/Lib/asyncio/taskgroups.py
@@ -66,6 +66,20 @@ async def __aenter__(self):
 return self
 
 async def __aexit__(self, et, exc, tb):
+tb = None
+try:
+return await self._aexit(et, exc)
+finally:
+# Exceptions are heavy objects that can have object
+# cycles (bad for GC); let's not keep a reference to
+# a bunch of them. It would be nicer to use a try/finally
+# in __aexit__ directly but that introduced some diff noise
+self._parent_task = None
+self._errors = None
+self._base_error = None
+exc = None
+
+async def _aexit(self, et, exc):
 self._exiting = True
 
 if (exc is not None and
@@ -126,25 +140,34 @@ async def __aexit__(self, et, exc, tb):
 assert not self._tasks
 
 if self._base_error is not None:
-raise self._base_error
+try:
+raise self._base_error
+finally:
+exc = None
 
 # Propagate CancelledError if there is one, except if there
 # are other errors -- those have priority.
-if propagate_cancellation_error and not self._errors:
-raise propagate_cancellation_error
+try:
+if propagate_cancellation_error and not self._errors:
+try:
+raise propagate_cancellation_error
+finally:
+exc = None
+finally:
+propagate_cancellation_error = None
 
 if et is not None and et is not exceptions.CancelledError:
 self._errors.append(exc)
 
 if self._errors:
-# Exceptions are heavy objects that can have object
-# cycles (bad for GC); let's not keep a reference to
-# a bunch of them.
 try:
-me = BaseExceptionGroup('unhandled errors in a TaskGroup', self._errors)
-raise me from None
+raise BaseExceptionGroup(
+'unhandled errors in a TaskGroup',
+self._errors,
+) from None
 finally:
-self._errors = None
+exc = None
+
 
 def create_task(self, coro, *, name=None, context=None):
 """Create a new task in this group and return it.
diff --git a/Lib/test/test_asyncio/test_futures.py b/Lib/test/test_asyncio/test_futures.py
index 47daa0e9f410a8..050d33f4fab3ed 100644
--- a/Lib/test/test_asyncio/test_futures.py
+++ b/Lib/test/test_asyncio/test_futures.py
@@ -640,6 +640,28 @@ def __del__(self):
 fut = self._new_future(loop=self.loop)
 fut.set_result(Evil())
 
+def test_future_cancelled_result_refcycles(self):
+f = self._new_future(loop=self.loop)
+f.cancel()
+exc = None
+try:
+f.result()
+except asyncio.CancelledError as e:
+exc = e
+self.assertIsNotNone(exc)
+self.assertListEqual(gc.get_referrers(exc), [])
+
+def test_future_cancelled_exception_refcycles(self):
+f = self._new_future(loop=self.loop)
+f.cancel()
+exc = None
+try:
+f.exception()
+exce
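
The recurring pattern in this patch deserves spelling out: an exception raised
in a frame ends up referenced by its own traceback, and the traceback
references the frame, so any local still bound to the exception closes a
reference cycle that only the cyclic GC can reclaim. Clearing the local in a
finally block at the raise site breaks the cycle. A minimal illustration of
the idea (not the stdlib code itself):

    def reraise(exc):
        # exc.__traceback__ will reference this frame; leaving ``exc`` bound
        # here would keep a frame -> exc -> traceback -> frame cycle alive,
        # so the local is cleared once the raise is underway.
        try:
            raise exc
        finally:
            exc = None

    try:
        reraise(ValueError("boom"))
    except ValueError as caught:
        print(type(caught).__name__, caught)
    # ``except ... as caught`` unbinds ``caught`` at the end of the block
    # for exactly the same reason.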

[Python-checkins] gh-125615: Fix grammar nit in tutorial's interactive interpreter appendix (GH-125617)

2024-10-16 Thread zware
https://github.com/python/cpython/commit/aab3210271136ad8e8fecd927b806602c463e1f2
commit: aab3210271136ad8e8fecd927b806602c463e1f2
branch: main
author: Cornelius Roemer 
committer: zware 
date: 2024-10-16T15:53:30-05:00
summary:

gh-125615: Fix grammar nit in tutorial's interactive interpreter appendix (GH-125617)

Replace "without ... nor" with "with neither ... nor"

files:
M Doc/tutorial/appendix.rst

diff --git a/Doc/tutorial/appendix.rst b/Doc/tutorial/appendix.rst
index da664f2f360ff1..6a1611afadb57c 100644
--- a/Doc/tutorial/appendix.rst
+++ b/Doc/tutorial/appendix.rst
@@ -20,7 +20,7 @@ This one supports color, multiline editing, history browsing, and
 paste mode.  To disable color, see :ref:`using-on-controlling-color` for
 details.  Function keys provide some additional functionality.
 :kbd:`F1` enters the interactive help browser :mod:`pydoc`.
-:kbd:`F2` allows for browsing command-line history without output nor the
+:kbd:`F2` allows for browsing command-line history with neither output nor the
 :term:`>>>` and :term:`...` prompts. :kbd:`F3` enters "paste mode", which
 makes pasting larger blocks of code easier. Press :kbd:`F3` to return to
 the regular prompt.



[Python-checkins] [3.13] gh-125615: Fix grammar nit in tutorial's interactive interpreter appendix (GH-125619)

2024-10-16 Thread zware
https://github.com/python/cpython/commit/ca9bbafb492815bcb10c4c9e4eba080c60c81e2a
commit: ca9bbafb492815bcb10c4c9e4eba080c60c81e2a
branch: 3.13
author: Miss Islington (bot) <[email protected]>
committer: zware 
date: 2024-10-16T20:58:49Z
summary:

[3.13] gh-125615: Fix grammar nit in tutorial's interactive interpreter appendix (GH-125619)

Replace "without ... nor" with "with neither ... nor"

(cherry picked from commit aab3210271136ad8e8fecd927b806602c463e1f2)

Authored-by: Cornelius Roemer 

files:
M Doc/tutorial/appendix.rst

diff --git a/Doc/tutorial/appendix.rst b/Doc/tutorial/appendix.rst
index da664f2f360ff1..6a1611afadb57c 100644
--- a/Doc/tutorial/appendix.rst
+++ b/Doc/tutorial/appendix.rst
@@ -20,7 +20,7 @@ This one supports color, multiline editing, history browsing, and
 paste mode.  To disable color, see :ref:`using-on-controlling-color` for
 details.  Function keys provide some additional functionality.
 :kbd:`F1` enters the interactive help browser :mod:`pydoc`.
-:kbd:`F2` allows for browsing command-line history without output nor the
+:kbd:`F2` allows for browsing command-line history with neither output nor the
 :term:`>>>` and :term:`...` prompts. :kbd:`F3` enters "paste mode", which
 makes pasting larger blocks of code easier. Press :kbd:`F3` to return to
 the regular prompt.



[Python-checkins] gh-115382: Fix cross compiles when host and target use same SOABI

2024-10-16 Thread FFY00
https://github.com/python/cpython/commit/aecbc2e6f40f8066f478c2d0f3be5b550e36cfd3
commit: aecbc2e6f40f8066f478c2d0f3be5b550e36cfd3
branch: main
author: Vincent Fazio 
committer: FFY00 
date: 2024-10-16T23:01:42+01:00
summary:

gh-115382: Fix cross compiles when host and target use same SOABI

Co-authored-by: Erlend E. Aasland 

files:
A Misc/NEWS.d/next/Build/2024-03-03-20-28-23.gh-issue-115382.97hJFE.rst
M Lib/sysconfig/__init__.py
M Lib/test/libregrtest/main.py
M Lib/test/pythoninfo.py
M configure
M configure.ac

diff --git a/Lib/sysconfig/__init__.py b/Lib/sysconfig/__init__.py
index 80aef3447117e5..43f9276799b848 100644
--- a/Lib/sysconfig/__init__.py
+++ b/Lib/sysconfig/__init__.py
@@ -340,7 +340,20 @@ def _init_posix(vars):
 """Initialize the module as appropriate for POSIX systems."""
 # _sysconfigdata is generated at build time, see _generate_posix_vars()
 name = _get_sysconfigdata_name()
-_temp = __import__(name, globals(), locals(), ['build_time_vars'], 0)
+
+# For cross builds, the path to the target's sysconfigdata must be specified
+# so it can be imported. It cannot be in PYTHONPATH, as foreign modules in
+# sys.path can cause crashes when loaded by the host interpreter.
+# Rely on truthiness as a valueless env variable is still an empty string.
+# See OS X note in _generate_posix_vars re _sysconfigdata.
+if (path := os.environ.get('_PYTHON_SYSCONFIGDATA_PATH')):
+from importlib.machinery import FileFinder, SourceFileLoader, SOURCE_SUFFIXES
+from importlib.util import module_from_spec
+spec = FileFinder(path, (SourceFileLoader, SOURCE_SUFFIXES)).find_spec(name)
+_temp = module_from_spec(spec)
+spec.loader.exec_module(_temp)
+else:
+_temp = __import__(name, globals(), locals(), ['build_time_vars'], 0)
 build_time_vars = _temp.build_time_vars
 vars.update(build_time_vars)
 
diff --git a/Lib/test/libregrtest/main.py b/Lib/test/libregrtest/main.py
index f693a788048694..2ef4349552bf5f 100644
--- a/Lib/test/libregrtest/main.py
+++ b/Lib/test/libregrtest/main.py
@@ -594,6 +594,7 @@ def _add_cross_compile_opts(self, regrtest_opts):
 '_PYTHON_PROJECT_BASE',
 '_PYTHON_HOST_PLATFORM',
 '_PYTHON_SYSCONFIGDATA_NAME',
+"_PYTHON_SYSCONFIGDATA_PATH",
 'PYTHONPATH'
 }
 old_environ = os.environ
diff --git a/Lib/test/pythoninfo.py b/Lib/test/pythoninfo.py
index 05a28bda2d38ba..0b2e4b1c1988c4 100644
--- a/Lib/test/pythoninfo.py
+++ b/Lib/test/pythoninfo.py
@@ -334,6 +334,7 @@ def format_groups(groups):
 "_PYTHON_HOST_PLATFORM",
 "_PYTHON_PROJECT_BASE",
 "_PYTHON_SYSCONFIGDATA_NAME",
+"_PYTHON_SYSCONFIGDATA_PATH",
 "__PYVENV_LAUNCHER__",
 
 # Sanitizer options
diff --git a/Misc/NEWS.d/next/Build/2024-03-03-20-28-23.gh-issue-115382.97hJFE.rst b/Misc/NEWS.d/next/Build/2024-03-03-20-28-23.gh-issue-115382.97hJFE.rst
new file mode 100644
index 00..f8d19651fc5854
--- /dev/null
+++ b/Misc/NEWS.d/next/Build/2024-03-03-20-28-23.gh-issue-115382.97hJFE.rst
@@ -0,0 +1 @@
+Fix cross compile failures when the host and target SOABIs match.
diff --git a/configure b/configure
index 17c70d25f9e70c..b11f41d5379958 100755
--- a/configure
+++ b/configure
@@ -3708,7 +3708,7 @@ fi
 fi
 ac_cv_prog_PYTHON_FOR_REGEN=$with_build_python
 PYTHON_FOR_FREEZE="$with_build_python"
-PYTHON_FOR_BUILD='_PYTHON_PROJECT_BASE=$(abs_builddir) _PYTHON_HOST_PLATFORM=$(_PYTHON_HOST_PLATFORM) PYTHONPATH=$(shell test -f pybuilddir.txt && echo $(abs_builddir)/`cat pybuilddir.txt`:)$(srcdir)/Lib _PYTHON_SYSCONFIGDATA_NAME=_sysconfigdata_$(ABIFLAGS)_$(MACHDEP)_$(MULTIARCH) '$with_build_python
+PYTHON_FOR_BUILD='_PYTHON_PROJECT_BASE=$(abs_builddir) _PYTHON_HOST_PLATFORM=$(_PYTHON_HOST_PLATFORM) PYTHONPATH=$(srcdir)/Lib _PYTHON_SYSCONFIGDATA_NAME=_sysconfigdata_$(ABIFLAGS)_$(MACHDEP)_$(MULTIARCH) _PYTHON_SYSCONFIGDATA_PATH=$(shell test -f pybuilddir.txt && echo $(abs_builddir)/`cat pybuilddir.txt`) '$with_build_python
{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $with_build_python" >&5
 printf "%s\n" "$with_build_python" >&6; }
 
diff --git a/configure.ac b/configure.ac
index 56daa8b0f79bc0..d5bc739c34c90f 100644
--- a/configure.ac
+++ b/configure.ac
@@ -164,7 +164,7 @@ AC_ARG_WITH([build-python],
 dnl Build Python interpreter is used for regeneration and freezing.
 ac_cv_prog_PYTHON_FOR_REGEN=$with_build_python
 PYTHON_FOR_FREEZE="$with_build_python"
-PYTHON_FOR_BUILD='_PYTHON_PROJECT_BASE=$(abs_builddir) _PYTHON_HOST_PLATFORM=$(_PYTHON_HOST_PLATFORM) PYTHONPATH=$(shell test -f pybuilddir.txt && echo $(abs_builddir)/`cat pybuilddir.txt`:)$(srcdir)/Lib _PYTHON_SYSCONFIGDATA_NAME=_sysconfigdata_$(ABIFLAGS)_$(MACHDEP)_$(MULTIARCH) '$with_build_python
+PYTHON_FOR_BUILD='_PYTHON_PROJECT_BASE=$(abs_builddir) 
_PYTHON_HOST_PLATFORM=$(
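
The sysconfig hunk above boils down to importing the target's _sysconfigdata_*
module from one explicit directory without putting that directory on sys.path,
where its foreign extension modules could crash the host interpreter. The same
importlib machinery works standalone; a sketch with a hypothetical module name
and environment value:

    import os
    from importlib.machinery import FileFinder, SourceFileLoader, SOURCE_SUFFIXES
    from importlib.util import module_from_spec

    def load_from_dir(name, path):
        """Import module ``name`` from directory ``path`` only."""
        finder = FileFinder(path, (SourceFileLoader, SOURCE_SUFFIXES))
        spec = finder.find_spec(name)
        if spec is None:
            raise ImportError(f"{name!r} not found in {path!r}")
        module = module_from_spec(spec)
        spec.loader.exec_module(module)
        return module

    # Mirrors the _PYTHON_SYSCONFIGDATA_PATH handling; the module name below
    # is only an example, not a value taken from the patch.
    if path := os.environ.get("_PYTHON_SYSCONFIGDATA_PATH"):
        data = load_from_dir("_sysconfigdata__linux_x86_64-linux-gnu", path)
        print(len(data.build_time_vars), "build-time variables")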

[Python-checkins] gh-125550: Enable py.exe to detect Store installs of 3.14 (GH-125551)

2024-10-16 Thread zooba
https://github.com/python/cpython/commit/8e7b2a1161744c7d3d90966a65ed6ae1019a65cb
commit: 8e7b2a1161744c7d3d90966a65ed6ae1019a65cb
branch: main
author: Steve Dower 
committer: zooba 
date: 2024-10-16T23:05:20+01:00
summary:

gh-125550: Enable py.exe to detect Store installs of 3.14 (GH-125551)

files:
A Misc/NEWS.d/next/Windows/2024-10-15-21-28-43.gh-issue-125550.hmGWCP.rst
M PC/launcher2.c

diff --git a/Misc/NEWS.d/next/Windows/2024-10-15-21-28-43.gh-issue-125550.hmGWCP.rst b/Misc/NEWS.d/next/Windows/2024-10-15-21-28-43.gh-issue-125550.hmGWCP.rst
new file mode 100644
index 00..c3ae00c74b3d91
--- /dev/null
+++ b/Misc/NEWS.d/next/Windows/2024-10-15-21-28-43.gh-issue-125550.hmGWCP.rst
@@ -0,0 +1,2 @@
+Enable the :ref:`launcher` to detect Python 3.14 installs from the Windows
+Store.
diff --git a/PC/launcher2.c b/PC/launcher2.c
index b372044e353202..befcbe30600f2c 100644
--- a/PC/launcher2.c
+++ b/PC/launcher2.c
@@ -1962,6 +1962,7 @@ struct AppxSearchInfo {
 
 struct AppxSearchInfo APPX_SEARCH[] = {
 // Releases made through the Store
+{ L"PythonSoftwareFoundation.Python.3.14_qbz5n2kfra8p0", L"3.14", 10 },
 { L"PythonSoftwareFoundation.Python.3.13_qbz5n2kfra8p0", L"3.13", 10 },
 { L"PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0", L"3.12", 10 },
 { L"PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0", L"3.11", 10 },
@@ -1970,8 +1971,9 @@ struct AppxSearchInfo APPX_SEARCH[] = {
 { L"PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0", L"3.8", 10 },
 
 // Side-loadable releases. Note that the publisher ID changes whenever we
-// renew our code-signing certificate, so the newer ID has a higher
-// priority (lower sortKey)
+// change our code signing certificate subject, so the newer IDs have higher
+// priorities (lower sortKey)
+{ L"PythonSoftwareFoundation.Python.3.14_3847v3x7pw1km", L"3.14", 11 },
 { L"PythonSoftwareFoundation.Python.3.13_3847v3x7pw1km", L"3.13", 11 },
 { L"PythonSoftwareFoundation.Python.3.12_3847v3x7pw1km", L"3.12", 11 },
 { L"PythonSoftwareFoundation.Python.3.11_3847v3x7pw1km", L"3.11", 11 },
@@ -2054,7 +2056,8 @@ struct StoreSearchInfo {
 
 
 struct StoreSearchInfo STORE_SEARCH[] = {
-{ L"3", /* 3.12 */ L"9NCVDN91XZQP" },
+{ L"3", /* 3.13 */ L"9PNRBTZXMB4Z" },
+{ L"3.14", L"9NTRHQCBBPR8" },
 { L"3.13", L"9PNRBTZXMB4Z" },
 { L"3.12", L"9NCVDN91XZQP" },
 { L"3.11", L"9NRWMJP3717K" },



[Python-checkins] gh-125550: Enable py.exe to detect Store installs of 3.14 (GH-125551)

2024-10-16 Thread zooba
https://github.com/python/cpython/commit/42b8e52de41fc82f2985ddda9d40112b6e28a80c
commit: 42b8e52de41fc82f2985ddda9d40112b6e28a80c
branch: 3.12
author: Miss Islington (bot) <[email protected]>
committer: zooba 
date: 2024-10-16T22:25:16Z
summary:

gh-125550: Enable py.exe to detect Store installs of 3.14 (GH-125551)

(cherry picked from commit 8e7b2a1161744c7d3d90966a65ed6ae1019a65cb)

Co-authored-by: Steve Dower 

files:
A Misc/NEWS.d/next/Windows/2024-10-15-21-28-43.gh-issue-125550.hmGWCP.rst
M PC/launcher2.c

diff --git a/Misc/NEWS.d/next/Windows/2024-10-15-21-28-43.gh-issue-125550.hmGWCP.rst b/Misc/NEWS.d/next/Windows/2024-10-15-21-28-43.gh-issue-125550.hmGWCP.rst
new file mode 100644
index 00..c3ae00c74b3d91
--- /dev/null
+++ b/Misc/NEWS.d/next/Windows/2024-10-15-21-28-43.gh-issue-125550.hmGWCP.rst
@@ -0,0 +1,2 @@
+Enable the :ref:`launcher` to detect Python 3.14 installs from the Windows
+Store.
diff --git a/PC/launcher2.c b/PC/launcher2.c
index f331aab3f51e56..7a78ed1655a9ee 100644
--- a/PC/launcher2.c
+++ b/PC/launcher2.c
@@ -1938,6 +1938,7 @@ struct AppxSearchInfo {
 
 struct AppxSearchInfo APPX_SEARCH[] = {
 // Releases made through the Store
+{ L"PythonSoftwareFoundation.Python.3.14_qbz5n2kfra8p0", L"3.14", 10 },
 { L"PythonSoftwareFoundation.Python.3.13_qbz5n2kfra8p0", L"3.13", 10 },
 { L"PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0", L"3.12", 10 },
 { L"PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0", L"3.11", 10 },
@@ -1946,8 +1947,9 @@ struct AppxSearchInfo APPX_SEARCH[] = {
 { L"PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0", L"3.8", 10 },
 
 // Side-loadable releases. Note that the publisher ID changes whenever we
-// renew our code-signing certificate, so the newer ID has a higher
-// priority (lower sortKey)
+// change our code signing certificate subject, so the newer IDs have higher
+// priorities (lower sortKey)
+{ L"PythonSoftwareFoundation.Python.3.14_3847v3x7pw1km", L"3.14", 11 },
 { L"PythonSoftwareFoundation.Python.3.13_3847v3x7pw1km", L"3.13", 11 },
 { L"PythonSoftwareFoundation.Python.3.12_3847v3x7pw1km", L"3.12", 11 },
 { L"PythonSoftwareFoundation.Python.3.11_3847v3x7pw1km", L"3.11", 11 },
@@ -2030,7 +2032,8 @@ struct StoreSearchInfo {
 
 
 struct StoreSearchInfo STORE_SEARCH[] = {
-{ L"3", /* 3.12 */ L"9NCVDN91XZQP" },
+{ L"3", /* 3.13 */ L"9PNRBTZXMB4Z" },
+{ L"3.14", L"9NTRHQCBBPR8" },
 { L"3.13", L"9PNRBTZXMB4Z" },
 { L"3.12", L"9NCVDN91XZQP" },
 { L"3.11", L"9NRWMJP3717K" },



[Python-checkins] gh-125550: Enable py.exe to detect Store installs of 3.14 (GH-125551)

2024-10-16 Thread zooba
https://github.com/python/cpython/commit/06dc0bc6bfd7a9b47cb8a526709b7f3bd68deabb
commit: 06dc0bc6bfd7a9b47cb8a526709b7f3bd68deabb
branch: 3.13
author: Miss Islington (bot) <[email protected]>
committer: zooba 
date: 2024-10-16T22:32:21Z
summary:

gh-125550: Enable py.exe to detect Store installs of 3.14 (GH-125551)

(cherry picked from commit 8e7b2a1161744c7d3d90966a65ed6ae1019a65cb)

Co-authored-by: Steve Dower 

files:
A Misc/NEWS.d/next/Windows/2024-10-15-21-28-43.gh-issue-125550.hmGWCP.rst
M PC/launcher2.c

diff --git a/Misc/NEWS.d/next/Windows/2024-10-15-21-28-43.gh-issue-125550.hmGWCP.rst b/Misc/NEWS.d/next/Windows/2024-10-15-21-28-43.gh-issue-125550.hmGWCP.rst
new file mode 100644
index 00..c3ae00c74b3d91
--- /dev/null
+++ b/Misc/NEWS.d/next/Windows/2024-10-15-21-28-43.gh-issue-125550.hmGWCP.rst
@@ -0,0 +1,2 @@
+Enable the :ref:`launcher` to detect Python 3.14 installs from the Windows
+Store.
diff --git a/PC/launcher2.c b/PC/launcher2.c
index b372044e353202..befcbe30600f2c 100644
--- a/PC/launcher2.c
+++ b/PC/launcher2.c
@@ -1962,6 +1962,7 @@ struct AppxSearchInfo {
 
 struct AppxSearchInfo APPX_SEARCH[] = {
 // Releases made through the Store
+{ L"PythonSoftwareFoundation.Python.3.14_qbz5n2kfra8p0", L"3.14", 10 },
 { L"PythonSoftwareFoundation.Python.3.13_qbz5n2kfra8p0", L"3.13", 10 },
 { L"PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0", L"3.12", 10 },
 { L"PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0", L"3.11", 10 },
@@ -1970,8 +1971,9 @@ struct AppxSearchInfo APPX_SEARCH[] = {
 { L"PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0", L"3.8", 10 },
 
 // Side-loadable releases. Note that the publisher ID changes whenever we
-// renew our code-signing certificate, so the newer ID has a higher
-// priority (lower sortKey)
+// change our code signing certificate subject, so the newer IDs have higher
+// priorities (lower sortKey)
+{ L"PythonSoftwareFoundation.Python.3.14_3847v3x7pw1km", L"3.14", 11 },
 { L"PythonSoftwareFoundation.Python.3.13_3847v3x7pw1km", L"3.13", 11 },
 { L"PythonSoftwareFoundation.Python.3.12_3847v3x7pw1km", L"3.12", 11 },
 { L"PythonSoftwareFoundation.Python.3.11_3847v3x7pw1km", L"3.11", 11 },
@@ -2054,7 +2056,8 @@ struct StoreSearchInfo {
 
 
 struct StoreSearchInfo STORE_SEARCH[] = {
-{ L"3", /* 3.12 */ L"9NCVDN91XZQP" },
+{ L"3", /* 3.13 */ L"9PNRBTZXMB4Z" },
+{ L"3.14", L"9NTRHQCBBPR8" },
 { L"3.13", L"9PNRBTZXMB4Z" },
 { L"3.12", L"9NCVDN91XZQP" },
 { L"3.11", L"9NRWMJP3717K" },



[Python-checkins] [3.13] gh-125451: Fix deadlock in ProcessPoolExecutor shutdown (GH-125492) (GH-125598)

2024-10-16 Thread colesbury
https://github.com/python/cpython/commit/4fc4067796d223ed63240a2db425855ddbc3dd24
commit: 4fc4067796d223ed63240a2db425855ddbc3dd24
branch: 3.13
author: Miss Islington (bot) <[email protected]>
committer: colesbury 
date: 2024-10-16T14:03:17-04:00
summary:

[3.13] gh-125451: Fix deadlock in ProcessPoolExecutor shutdown (GH-125492) (GH-125598)

There was a deadlock when `ProcessPoolExecutor` shuts down at the same
time that a queueing thread handles an error processing a task.

Don't use `_shutdown_lock` to protect the `_ThreadWakeup` pipes -- use
an internal lock instead. This fixes the ordering deadlock where the
`ExecutorManagerThread` holds the `_shutdown_lock` and joins the
queueing thread, while the queueing thread is attempting to acquire the
`_shutdown_lock` while closing the `_ThreadWakeup`.
(cherry picked from commit 760872efecb95017db8e38a8eda614bf23d2a22c)

Co-authored-by: Sam Gross 

files:
A Misc/NEWS.d/next/Library/2024-10-14-17-29-34.gh-issue-125451.fmP3T9.rst
M Lib/concurrent/futures/process.py

diff --git a/Lib/concurrent/futures/process.py b/Lib/concurrent/futures/process.py
index bb4892ebdfedf5..d73ef716134175 100644
--- a/Lib/concurrent/futures/process.py
+++ b/Lib/concurrent/futures/process.py
@@ -68,27 +68,31 @@
 class _ThreadWakeup:
 def __init__(self):
 self._closed = False
+self._lock = threading.Lock()
 self._reader, self._writer = mp.Pipe(duplex=False)
 
 def close(self):
-# Please note that we do not take the shutdown lock when
+# Please note that we do not take the self._lock when
 # calling clear() (to avoid deadlocking) so this method can
 # only be called safely from the same thread as all calls to
-# clear() even if you hold the shutdown lock. Otherwise we
+# clear() even if you hold the lock. Otherwise we
 # might try to read from the closed pipe.
-if not self._closed:
-self._closed = True
-self._writer.close()
-self._reader.close()
+with self._lock:
+if not self._closed:
+self._closed = True
+self._writer.close()
+self._reader.close()
 
 def wakeup(self):
-if not self._closed:
-self._writer.send_bytes(b"")
+with self._lock:
+if not self._closed:
+self._writer.send_bytes(b"")
 
 def clear(self):
-if not self._closed:
-while self._reader.poll():
-self._reader.recv_bytes()
+if self._closed:
+raise RuntimeError('operation on closed _ThreadWakeup')
+while self._reader.poll():
+self._reader.recv_bytes()
 
 
 def _python_exit():
@@ -167,10 +171,8 @@ def __init__(self, work_id, fn, args, kwargs):
 
 class _SafeQueue(Queue):
 """Safe Queue set exception to the future object linked to a job"""
-def __init__(self, max_size=0, *, ctx, pending_work_items, shutdown_lock,
- thread_wakeup):
+def __init__(self, max_size=0, *, ctx, pending_work_items, thread_wakeup):
 self.pending_work_items = pending_work_items
-self.shutdown_lock = shutdown_lock
 self.thread_wakeup = thread_wakeup
 super().__init__(max_size, ctx=ctx)
 
@@ -179,8 +181,7 @@ def _on_queue_feeder_error(self, e, obj):
 tb = format_exception(type(e), e, e.__traceback__)
 e.__cause__ = _RemoteTraceback('\n"""\n{}"""'.format(''.join(tb)))
 work_item = self.pending_work_items.pop(obj.work_id, None)
-with self.shutdown_lock:
-self.thread_wakeup.wakeup()
+self.thread_wakeup.wakeup()
 # work_item can be None if another process terminated. In this
 # case, the executor_manager_thread fails all work_items
 # with BrokenProcessPool
@@ -296,12 +297,10 @@ def __init__(self, executor):
 # if there is no pending work item.
 def weakref_cb(_,
thread_wakeup=self.thread_wakeup,
-   shutdown_lock=self.shutdown_lock,
mp_util_debug=mp.util.debug):
 mp_util_debug('Executor collected: triggering callback for'
   ' QueueManager wakeup')
-with shutdown_lock:
-thread_wakeup.wakeup()
+thread_wakeup.wakeup()
 
 self.executor_reference = weakref.ref(executor, weakref_cb)
 
@@ -429,11 +428,6 @@ def wait_result_broken_or_wakeup(self):
 elif wakeup_reader in ready:
 is_broken = False
 
-# No need to hold the _shutdown_lock here because:
-# 1. we're the only thread to use the wakeup reader
-# 2. we're also the only thread to call thread_wakeup.close()
-# 3. we want to avoid a possible deadlock when both reader and writer
-#would block (gh-105829)
 self.thread_wakeup.clear()
 
 ret

[Python-checkins] [3.12] gh-125451: Fix deadlock in ProcessPoolExecutor shutdown (GH-125492) (#125599)

2024-10-16 Thread colesbury
https://github.com/python/cpython/commit/4256847190e3f87ec357a1a4e8d9eb5c57367d5e
commit: 4256847190e3f87ec357a1a4e8d9eb5c57367d5e
branch: 3.12
author: Sam Gross 
committer: colesbury 
date: 2024-10-16T14:03:32-04:00
summary:

[3.12] gh-125451: Fix deadlock in ProcessPoolExecutor shutdown (GH-125492) (#125599)

There was a deadlock when a `ProcessPoolExecutor` shut down at the same
time that a queueing thread was handling an error while processing a task.

Don't use `_shutdown_lock` to protect the `_ThreadWakeup` pipes -- use
an internal lock instead. This fixes the ordering deadlock in which the
`ExecutorManagerThread` holds the `_shutdown_lock` and joins the
queueing thread, while the queueing thread is trying to acquire the
`_shutdown_lock` in order to close the `_ThreadWakeup`.
(cherry picked from commit 760872efecb95017db8e38a8eda614bf23d2a22c)
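
Reduced to a toy sketch (hypothetical names, not the actual CPython
classes), the shape of the fix is a wakeup object that guards its own
pipe with a private lock, so callers never need the executor-wide
shutdown lock to wake or close it:

    import multiprocessing as mp
    import threading

    class WakeupPipe:
        """Toy wakeup object: it owns a private lock for its pipe, so
        callers never need any other (executor-wide) lock to use it."""

        def __init__(self):
            self._closed = False
            self._lock = threading.Lock()   # internal lock, never shared
            self._reader, self._writer = mp.Pipe(duplex=False)

        def wakeup(self):
            with self._lock:
                if not self._closed:
                    self._writer.send_bytes(b"")

        def close(self):
            with self._lock:
                if not self._closed:
                    self._closed = True
                    self._writer.close()
                    self._reader.close()

    # Any thread may call wakeup() or close() without holding anything
    # else, which removes the lock-ordering cycle with a shutdown lock.
    w = WakeupPipe()
    w.wakeup()
    w.close()

Because the private lock is only held around these short pipe operations,
it cannot take part in a cycle with `_shutdown_lock`.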

files:
A Misc/NEWS.d/next/Library/2024-10-14-17-29-34.gh-issue-125451.fmP3T9.rst
M Lib/concurrent/futures/process.py

diff --git a/Lib/concurrent/futures/process.py b/Lib/concurrent/futures/process.py
index 0e452883963c17..ff7c17efaab694 100644
--- a/Lib/concurrent/futures/process.py
+++ b/Lib/concurrent/futures/process.py
@@ -68,27 +68,31 @@
 class _ThreadWakeup:
 def __init__(self):
 self._closed = False
+self._lock = threading.Lock()
 self._reader, self._writer = mp.Pipe(duplex=False)
 
 def close(self):
-# Please note that we do not take the shutdown lock when
+# Please note that we do not take the self._lock when
 # calling clear() (to avoid deadlocking) so this method can
 # only be called safely from the same thread as all calls to
-# clear() even if you hold the shutdown lock. Otherwise we
+# clear() even if you hold the lock. Otherwise we
 # might try to read from the closed pipe.
-if not self._closed:
-self._closed = True
-self._writer.close()
-self._reader.close()
+with self._lock:
+if not self._closed:
+self._closed = True
+self._writer.close()
+self._reader.close()
 
 def wakeup(self):
-if not self._closed:
-self._writer.send_bytes(b"")
+with self._lock:
+if not self._closed:
+self._writer.send_bytes(b"")
 
 def clear(self):
-if not self._closed:
-while self._reader.poll():
-self._reader.recv_bytes()
+if self._closed:
+raise RuntimeError('operation on closed _ThreadWakeup')
+while self._reader.poll():
+self._reader.recv_bytes()
 
 
 def _python_exit():
@@ -167,10 +171,8 @@ def __init__(self, work_id, fn, args, kwargs):
 
 class _SafeQueue(Queue):
 """Safe Queue set exception to the future object linked to a job"""
-def __init__(self, max_size=0, *, ctx, pending_work_items, shutdown_lock,
- thread_wakeup):
+def __init__(self, max_size=0, *, ctx, pending_work_items, thread_wakeup):
 self.pending_work_items = pending_work_items
-self.shutdown_lock = shutdown_lock
 self.thread_wakeup = thread_wakeup
 super().__init__(max_size, ctx=ctx)
 
@@ -179,8 +181,7 @@ def _on_queue_feeder_error(self, e, obj):
 tb = format_exception(type(e), e, e.__traceback__)
 e.__cause__ = _RemoteTraceback('\n"""\n{}"""'.format(''.join(tb)))
 work_item = self.pending_work_items.pop(obj.work_id, None)
-with self.shutdown_lock:
-self.thread_wakeup.wakeup()
+self.thread_wakeup.wakeup()
 # work_item can be None if another process terminated. In this
 # case, the executor_manager_thread fails all work_items
 # with BrokenProcessPool
@@ -305,12 +306,10 @@ def __init__(self, executor):
 # will wake up the queue management thread so that it can terminate
 # if there is no pending work item.
 def weakref_cb(_,
-   thread_wakeup=self.thread_wakeup,
-   shutdown_lock=self.shutdown_lock):
+   thread_wakeup=self.thread_wakeup):
 mp.util.debug('Executor collected: triggering callback for'
   ' QueueManager wakeup')
-with shutdown_lock:
-thread_wakeup.wakeup()
+thread_wakeup.wakeup()
 
 self.executor_reference = weakref.ref(executor, weakref_cb)
 
@@ -438,11 +437,6 @@ def wait_result_broken_or_wakeup(self):
 elif wakeup_reader in ready:
 is_broken = False
 
-# No need to hold the _shutdown_lock here because:
-# 1. we're the only thread to use the wakeup reader
-# 2. we're also the only thread to call thread_wakeup.close()
-# 3. we want to avoid a possible deadlock when both reader and writer
-#would block (gh-105829)
 self.thread_wakeup.clear()
 
 return resu

[Python-checkins] gh-125620: Remove unnecessary import of subprocess in spawnv_passfds (#125624)

2024-10-16 Thread gpshead
https://github.com/python/cpython/commit/a38fef4439139743e3334c1d69f24cafdf4d71da
commit: a38fef4439139743e3334c1d69f24cafdf4d71da
branch: main
author: Furkan Onder 
committer: gpshead 
date: 2024-10-16T22:42:29Z
summary:

gh-125620: Remove unnecessary import of subprocess in spawnv_passfds (#125624)

Remove unnecessary import of subprocess in multiprocessing.util.spawnv_passfds.

files:
M Lib/multiprocessing/util.py

diff --git a/Lib/multiprocessing/util.py b/Lib/multiprocessing/util.py
index d48ef8a86b34e1..b7192042b9cf47 100644
--- a/Lib/multiprocessing/util.py
+++ b/Lib/multiprocessing/util.py
@@ -438,7 +438,6 @@ def _flush_std_streams():
 
 def spawnv_passfds(path, args, passfds):
 import _posixsubprocess
-import subprocess
 passfds = tuple(sorted(map(int, passfds)))
 errpipe_read, errpipe_write = os.pipe()
 try:

[Python-checkins] gh-124694: Add concurrent.futures.InterpreterPoolExecutor (gh-124548)

2024-10-16 Thread ericsnowcurrently
https://github.com/python/cpython/commit/a5a7f5e16d8c3938d266703ea8fba8ffee3e3ae5
commit: a5a7f5e16d8c3938d266703ea8fba8ffee3e3ae5
branch: main
author: Eric Snow 
committer: ericsnowcurrently 
date: 2024-10-16T16:50:46-06:00
summary:

gh-124694: Add concurrent.futures.InterpreterPoolExecutor (gh-124548)

This is an implementation of InterpreterPoolExecutor that builds on 
ThreadPoolExecutor.

(Note that this is not tied to PEP 734, which is strictly about adding a new 
stdlib module.)
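
A rough usage sketch, assuming the new class follows ThreadPoolExecutor's
constructor and the standard Executor interface (which "builds on
ThreadPoolExecutor" suggests); it needs a CPython build that includes this
change:

    import math
    from concurrent import futures

    # Hypothetical example, not taken from the patch.  Each worker thread
    # runs its tasks in its own subinterpreter, so callables and arguments
    # must be able to cross the interpreter boundary (currently they are
    # pickled, per the future-improvements list below).
    with futures.InterpreterPoolExecutor(max_workers=2) as pool:
        fut = pool.submit(pow, 2, 32)                 # runs in a subinterpreter
        print(fut.result())                           # 4294967296
        print(list(pool.map(math.sqrt, [1, 4, 9])))   # [1.0, 2.0, 3.0]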

Possible future improvements:

* support passing a script for the initializer or to submit()
* support passing (most) arbitrary functions without pickling
* support passing closures
* optionally exec functions against __main__ instead of their original
module

files:
A Lib/concurrent/futures/interpreter.py
A Lib/test/test_concurrent_futures/test_interpreter_pool.py
A Misc/NEWS.d/next/Library/2024-09-27-15-42-55.gh-issue-124694.uUy32y.rst
M Doc/library/asyncio-dev.rst
M Doc/library/asyncio-eventloop.rst
M Doc/library/asyncio-llapi-index.rst
M Doc/library/concurrent.futures.rst
M Doc/whatsnew/3.14.rst
M Lib/concurrent/futures/__init__.py
M Lib/concurrent/futures/thread.py
M Lib/test/test_concurrent_futures/executor.py
M Lib/test/test_concurrent_futures/util.py

diff --git a/Doc/library/asyncio-dev.rst b/Doc/library/asyncio-dev.rst
index a9c3a0183bb72d..44b507a986 100644
--- a/Doc/library/asyncio-dev.rst
+++ b/Doc/library/asyncio-dev.rst
@@ -103,7 +103,8 @@ To handle signals the event loop must be
 run in the main thread.
 
 The :meth:`loop.run_in_executor` method can be used with a
-:class:`concurrent.futures.ThreadPoolExecutor` to execute
+:class:`concurrent.futures.ThreadPoolExecutor` or
+:class:`~concurrent.futures.InterpreterPoolExecutor` to execute
 blocking code in a different OS thread without blocking the OS thread
 that the event loop runs in.
 
@@ -128,7 +129,8 @@ if a function performs a CPU-intensive calculation for 1 second,
 all concurrent asyncio Tasks and IO operations would be delayed
 by 1 second.
 
-An executor can be used to run a task in a different thread or even in
+An executor can be used to run a task in a different thread,
+including in a different interpreter, or even in
 a different process to avoid blocking the OS thread with the
 event loop.  See the :meth:`loop.run_in_executor` method for more
 details.
diff --git a/Doc/library/asyncio-eventloop.rst b/Doc/library/asyncio-eventloop.rst
index 943683f6b8a7f6..14fd153f640f05 100644
--- a/Doc/library/asyncio-eventloop.rst
+++ b/Doc/library/asyncio-eventloop.rst
@@ -1305,6 +1305,12 @@ Executing code in thread or process pools
   pool, cpu_bound)
   print('custom process pool', result)
 
+  # 4. Run in a custom interpreter pool:
+  with concurrent.futures.InterpreterPoolExecutor() as pool:
+  result = await loop.run_in_executor(
+  pool, cpu_bound)
+  print('custom interpreter pool', result)
+
   if __name__ == '__main__':
   asyncio.run(main())
 
@@ -1329,7 +1335,8 @@ Executing code in thread or process pools
 
Set *executor* as the default executor used by :meth:`run_in_executor`.
*executor* must be an instance of
-   :class:`~concurrent.futures.ThreadPoolExecutor`.
+   :class:`~concurrent.futures.ThreadPoolExecutor`, which includes
+   :class:`~concurrent.futures.InterpreterPoolExecutor`.
 
.. versionchanged:: 3.11
   *executor* must be an instance of
diff --git a/Doc/library/asyncio-llapi-index.rst b/Doc/library/asyncio-llapi-index.rst
index 3e21054aa4fe9e..f5af888f31f186 100644
--- a/Doc/library/asyncio-llapi-index.rst
+++ b/Doc/library/asyncio-llapi-index.rst
@@ -96,7 +96,7 @@ See also the main documentation section about the
   - Invoke a callback *at* the given time.
 
 
-.. rubric:: Thread/Process Pool
+.. rubric:: Thread/Interpreter/Process Pool
 .. list-table::
 :widths: 50 50
 :class: full-width-table
diff --git a/Doc/library/concurrent.futures.rst b/Doc/library/concurrent.futures.rst
index ce72127127c7a6..45a73705f10e92 100644
--- a/Doc/library/concurrent.futures.rst
+++ b/Doc/library/concurrent.futures.rst
@@ -15,9 +15,10 @@ The :mod:`concurrent.futures` module provides a high-level interface for
 asynchronously executing callables.
 
 The asynchronous execution can be performed with threads, using
-:class:`ThreadPoolExecutor`, or separate processes, using
-:class:`ProcessPoolExecutor`.  Both implement the same interface, which is
-defined by the abstract :class:`Executor` class.
+:class:`ThreadPoolExecutor` or :class:`InterpreterPoolExecutor`,
+or separate processes, using :class:`ProcessPoolExecutor`.
+Each implements the same interface, which is defined
+by the abstract :class:`Executor` class.
 
 .. include:: ../includes/wasm-notavail.rst
 
@@ -63,7 +64,8 @@ Executor Objects
   setting *chunksize* to a positive integer.  For very long iterables,
   using a large value for *chunksize* can sign