[issue45417] Enum creation non-linear in the number of values
Armin Rigo added the comment: The timing is clearly quadratic:

    number of attributes    time
    1500                    0.24s
    3000                    0.94s
    6000                    3.74s
    12000                   15.57s

Pressing Ctrl-C in the middle of the execution of the largest examples points directly to the cause: when we consider the next attribute, we loop over all previous ones at enum.py:238. -- nosy: +arigo ___ Python tracker <https://bugs.python.org/issue45417> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
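The quadratic growth is easy to reproduce with a harness along these lines (a sketch of my own; the script behind the timings above is not shown in the report, and `make_enum` is a hypothetical helper name):

```python
import time
from enum import Enum

def make_enum(n):
    # Build an Enum with n members M0..M{n-1} via the functional API,
    # with distinct values so no aliases are created.
    return Enum("Big", {"M%d" % i: i for i in range(n)})

# Doubling the member count should roughly quadruple the time on an
# affected interpreter (on a fixed one, growth is close to linear).
for n in (1500, 3000, 6000):
    t0 = time.perf_counter()
    make_enum(n)
    print(n, round(time.perf_counter() - t0, 3))
```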
[issue38659] enum classes cause slow startup time
Armin Rigo added the comment: Nobody seemed to mention it so I might as well: defining a regular Enum class takes an amount of time that is clearly quadratic in the number of attributes. That means that the problem is not Python-versus-C or small speed-ups or adding secret APIs to do the simple case faster. The problem is in the algorithm, which needs to be fixed somewhere. My timings:

    number of attributes    time
    1500                    0.24s
    3000                    0.94s
    6000                    3.74s
    12000                   15.57s

-- nosy: +arigo ___ Python tracker <https://bugs.python.org/issue38659> ___
[issue41097] confusing BufferError: Existing exports of data: object cannot be re-sized
Change by Armin Rigo : -- nosy: -arigo ___ Python tracker <https://bugs.python.org/issue41097> ___
[issue41097] confusing BufferError: Existing exports of data: object cannot be re-sized
New submission from Armin Rigo : The behavior (tested in 3.6 and 3.9) of io.BytesIO().getbuffer() gives an unexpected exception message:

    >>> b = io.BytesIO()
    >>> b.write(b'abc')
    3
    >>> buf = b.getbuffer()
    >>> b.seek(0)
    0
    >>> b.write(b'?')    # or anything up to 3 bytes
    BufferError: Existing exports of data: object cannot be re-sized

The error message pretends that the problem is in resizing the BytesIO object, but the write() is not actually causing any resize. I am not sure if the bug is a wrong error message (and all writes are supposed to be forbidden) or a wrongly forbidden write() (after all, we can use the buffer itself to write into the same area of memory). -- components: Interpreter Core messages: 372237 nosy: arigo priority: normal severity: normal stage: test needed status: open title: confusing BufferError: Existing exports of data: object cannot be re-sized type: behavior ___ Python tracker <https://bugs.python.org/issue41097> ___
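The last remark can be checked directly: mutating the same bytes through the exported memoryview is accepted, while write() on the BytesIO is refused (a small sketch of the report's point):

```python
import io

b = io.BytesIO()
b.write(b'abc')
buf = b.getbuffer()

# write() is refused while the export is alive, even without a resize...
try:
    b.seek(0)
    b.write(b'?')
except BufferError:
    pass

# ...but writing through the exported buffer itself works fine:
buf[0:1] = b'?'
buf.release()
```

After releasing the export, the underlying bytes are b'?bc' either way.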
[issue38091] Import deadlock detection causes deadlock
Change by Armin Rigo : -- keywords: +patch pull_requests: +16996 stage: -> patch review pull_request: https://github.com/python/cpython/pull/17518 ___ Python tracker <https://bugs.python.org/issue38091> ___
[issue36229] Avoid unnecessary copies for list, set, and bytearray ops.
Change by Armin Rigo : -- nosy: -arigo ___ Python tracker <https://bugs.python.org/issue36229> ___
[issue38106] Race in PyThread_release_lock - can lead to memory corruption and deadlock
Armin Rigo added the comment: I agree with your analysis. I guess you (or someone) needs to write an explicit pull request, even if it just contains 187aa545165d cherry-picked. (I'm not a core dev any more nowadays) -- ___ Python tracker <https://bugs.python.org/issue38106> ___
[issue38106] Race in PyThread_release_lock - can lead to memory corruption and deadlock
Armin Rigo added the comment: I may be wrong, but I believe that the bug requires using the C API (not just pure Python code). This is because Python-level lock objects have their own lifetime, and should never be freed while another thread is in PyThread_release_lock() with them. Nevertheless, the example shows that using this C API "correctly" is very hard. Most direct users of the C API could run into the same problem in theory. -- ___ Python tracker <https://bugs.python.org/issue38106> ___
[issue29535] datetime hash is deterministic in some cases
Change by Armin Rigo : -- nosy: -arigo ___ Python tracker <https://bugs.python.org/issue29535> ___
[issue34880] About the "assert" bytecode
Change by Armin Rigo : -- nosy: -arigo ___ Python tracker <https://bugs.python.org/issue34880> ___
[issue7946] Convoy effect with I/O bound threads and New GIL
Armin Rigo added the comment: Note that PyPy has implemented a GIL which does not suffer from this problem, possibly using a simpler approach than the patches here do. The idea is described and implemented here: https://bitbucket.org/pypy/pypy/src/default/rpython/translator/c/src/thread_gil.c -- nosy: +arigo ___ Python tracker <https://bugs.python.org/issue7946> ___
[issue1875] "if 0: return" not raising SyntaxError
Change by Armin Rigo : -- nosy: -arigo ___ Python tracker <https://bugs.python.org/issue1875> ___
[issue36229] Avoid unnecessary copies for list, set, and bytearray ops.
Armin Rigo added the comment: ...or PySequence_Concat() instead of PyNumber_Add(); same reasoning. -- ___ Python tracker <https://bugs.python.org/issue36229> ___
[issue36229] Avoid unnecessary copies for list, set, and bytearray ops.
Armin Rigo added the comment: This patch is based on the following reasoning: if 'a' is a list and the reference count of 'a' is equal to 1, then we can mutate 'a' in place in a call to 'a->ob_type->tp_as_sequence->list_concat'. Typically that is called from 'PyNumber_Add(a, b)'. The patch is only correct for the case where PyNumber_Add() is called from Python/ceval.c. It is clearly wrong if you consider calls to PyNumber_Add() from random C extension modules. Some extension modules' authors would be very surprised if the following code starts giving nonsense:

    PyObject *a = PyList_New(0);
    PyObject *b = PyNumber_Add(a, some_other_list);
    /* here, OF COURSE a must still be an empty list and b != a */

By comparison, if you consider the hack that I'm guilty of doing long ago to improve string concatenation, you'll see that it is done entirely inside ceval.c, and not in stringobject.c or unicodeobject.c. For this reason I consider the whole patch, as written now, as bogus. -- ___ Python tracker <https://bugs.python.org/issue36229> ___
[issue36124] Provide convenient C API for storing per-interpreter state
Armin Rigo added the comment: PyModule_GetState() requires having the module object that corresponds to the given interpreter state. I'm not sure how a C extension module is supposed to get its own module object corresponding to the current interpreter state, without getting it from the caller in some way.

The mess in cffi's call_python.c would be much reduced if we had bpo-36124 (fixed to call Py_CLEAR(), see comment in bpo-36124). If you want to point out a different approach that might work too, that's OK too. It's just that the current approach was arrived at after multiple generations of crash reports, which makes me uneasy about changing it in more subtle ways than just killing it in favor of a careful PyInterpreterState_GetDict().

If you want to see details of the current hacks, I can explain https://bitbucket.org/cffi/cffi/src/d765c36df047cf9d5e766777049c4107e1f4cb00/c/call_python.c : The goal is that we are given a finite (but unknown at compile-time) number of 'externpy' data structures, and for each pair (externpy, interp) the user can assign a callable 'PyObject *'. The annoying part of the logic is that we have a C-exposed callback function (line 204) which is called with a pointer to one of these 'externpy' structures, and we need to look up the right 'PyObject *' to call.

At line 255 we just got the GIL and need to check if the 'PyThreadState_GET()->interp' is equal to the one previously seen (an essential optimization: we can't do complicated logic in the fast path). We hack by checking for 'interp->modules' because that's a PyObject. The previous time this code was invoked, we stored a reference to 'interp->modules' in the C structure 'externpy', with an incref. So this fast-path pointer comparison is always safe (no freed object whose address can be accidentally reused). This test will quickly pass if this function is called in the same 'interp' many times in a row.

The slow path is in _update_cache_to_call_python(), which calls _get_interpstate_dict(), whose only purpose is to return a dictionary that depends on 'interp'. Note how we need to be very careful about various cases, like shutdown. _get_interpstate_dict() can fail and return NULL, but it cannot give a fatal error. That's why we couldn't call, say, PyImport_GetModuleDict(), because this gives a fatal error if 'interp' is being shut down at the moment.

Overall, the logic uses both 'interp->modules' and 'interp->builtins'. The 'modules' is used only for the pointer equality check, because that's an object that is not supposed to be freed until the very last moment. The 'builtins' is used to store the special name "__cffi_backend_extern_py" in it, because we can't store that in 'interp->modules' directly without crashing various 3rd-party Python code if this special key shows up in 'sys.modules'. The value corresponding to this special name is a dictionary {PyLong_FromVoidPtr(externpy): infotuple-describing-the-final-callable}. -- ___ Python tracker <https://bugs.python.org/issue36124> ___
[issue35886] Move PyInterpreterState into Include/internal/pycore_pystate.h
Armin Rigo added the comment: Done. -- ___ Python tracker <https://bugs.python.org/issue35886> ___
[issue35886] Move PyInterpreterState into Include/internal/pycore_pystate.h
Armin Rigo added the comment: Cool. Also, no bugfix release of cffi was planned, but I can make one if you think it might help for testing the current pre-release of CPython 3.8. -- ___ Python tracker <https://bugs.python.org/issue35886> ___
[issue35886] Move PyInterpreterState into Include/internal/pycore_pystate.h
Armin Rigo added the comment: @nick the C sources produced by cffi don't change. When they are compiled, they use Py_LIMITED_API so you can continue using a single compiled module version for any recent-enough Python 3.x. The required fix is only inside the cffi module itself. -- ___ Python tracker <https://bugs.python.org/issue35886> ___
[issue35886] Move PyInterpreterState into Include/internal/pycore_pystate.h
Armin Rigo added the comment: Just so you know, when we look at the changes to CPython, the easiest fix is to add these lines in cffi:

    #if PY_VERSION_HEX >= 0x0308
    # define Py_BUILD_CORE
    # include "internal/pycore_pystate.h"
    # undef Py_BUILD_CORE
    #endif

But if we're looking for a cleaner way to fix this, then cffi's needs could be covered by adding a general-purpose-storage dict to PyInterpreterState (like PyThreadState->dict, but per sub-interpreter instead of per thread), and an API function to get it. -- ___ Python tracker <https://bugs.python.org/issue35886> ___
[issue34880] About the "assert" bytecode
Armin Rigo added the comment: A middle ground might be to copy the behavior of ``__import__``: it is loaded from the builtins module where specific hacks can change its definition, but it is not loaded from the globals. -- nosy: +arigo ___ Python tracker <https://bugs.python.org/issue34880> ___
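The ``__import__`` behavior referred to above can be demonstrated: the import statement looks its hook up on the builtins module each time, so replacing it there takes effect immediately (a sketch; `hook` and `seen` are illustrative names of my own):

```python
import builtins

seen = []
orig = builtins.__import__

def hook(name, *args, **kwargs):
    # Record every module name passing through the import machinery,
    # then delegate to the real __import__.
    seen.append(name)
    return orig(name, *args, **kwargs)

builtins.__import__ = hook
try:
    import json   # routed through our replacement hook
finally:
    builtins.__import__ = orig
```

Note that a same-named ``__import__`` in module globals would not be consulted by the import statement, which is exactly the lookup rule the comment proposes to mirror for ``assert``.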
[issue9134] sre bug: lastmark_save/restore
Change by Armin Rigo : -- nosy: -arigo ___ Python tracker <https://bugs.python.org/issue9134> ___
[issue25750] tp_descr_get(self, obj, type) is called without owning a reference to "self"
Change by Armin Rigo : -- nosy: -arigo ___ Python tracker <https://bugs.python.org/issue25750> ___
[issue34123] ambiguous documentation for dict.popitem
Armin Rigo added the comment: Agreed with Raymond. -- ___ Python tracker <https://bugs.python.org/issue34123> ___
[issue16899] Add support for C99 complex type (_Complex) as ctypes.c_complex
Armin Rigo added the comment: cffi supports complex numbers since release 1.11---for direct calls using the API mode. That means that neither CFFI's ABI mode, nor ctypes, can possibly work yet. The problem is still libffi's own support, which is still not implemented (apart from one very uncommon CPU architecture, the s390). -- ___ Python tracker <https://bugs.python.org/issue16899> ___
[issue1617161] Instance methods compare equal when their self's are equal
Change by Armin Rigo : -- nosy: -arigo ___ Python tracker <https://bugs.python.org/issue1617161> ___
[issue1366311] SRE engine should release the GIL when/if possible
Change by Armin Rigo : -- nosy: -arigo ___ Python tracker <https://bugs.python.org/issue1366311> ___
[issue32922] dbm.open() encodes filename with default encoding rather than the filesystem encoding
Armin Rigo added the comment: It's not a new feature. See for example all functions from posixmodule.c: it should at least be PyArg_ParseTuple(args, "et", Py_FileSystemDefaultEncoding, &char_star_variable). -- nosy: +arigo ___ Python tracker <https://bugs.python.org/issue32922> ___
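At the Python level, the counterpart of the "et" converter mentioned above is ``os.fsencode()``: it encodes names with the filesystem encoding (``sys.getfilesystemencoding()``) rather than the default encoding (a sketch; the filename is a made-up example):

```python
import os

name = "café.db"            # hypothetical database filename with a non-ASCII char
encoded = os.fsencode(name) # bytes, encoded with the *filesystem* encoding

# fsdecode() is the exact inverse, so the round-trip is lossless:
roundtrip = os.fsdecode(encoded)
```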
[issue17852] Built-in module _io can lose data from buffered files at exit
Change by Armin Rigo : -- nosy: -arigo ___ Python tracker <https://bugs.python.org/issue17852> ___
[issue1617161] Instance methods compare equal when their self's are equal
Armin Rigo added the comment: Sorry, I think it is pointless to spend effort to keep a relatively uncontroversial and small patch up-to-date, when it was not accepted in 9 years. Someone else would need to take it up. -- ___ Python tracker <https://bugs.python.org/issue1617161> ___
[issue10544] yield expression inside generator expression does nothing
Change by Armin Rigo : -- nosy: -arigo ___ Python tracker <https://bugs.python.org/issue10544> ___
[issue30744] Local variable assignment is broken when combined with threads + tracing + closures
Armin Rigo added the comment: Guido: you must be tired and forgot that locals() is a regular function :-) The compiler cannot recognize it reliably. Moreover, if f_locals can be modified outside a tracing hook, then we have the same problem in a cross-function way, e.g. if function f1() calls function f2() which does sys._getframe(1).f_locals['foo'] = 42. -- ___ Python tracker <https://bugs.python.org/issue30744> ___
[issue30744] Local variable assignment is broken when combined with threads + tracing + closures
Armin Rigo added the comment: Thanks Nick for the clarification. Yes, that's what I meant: supporting such code in simple JITs is a nightmare. Perhaps more importantly, I am sure that if Python starts supporting random mutation of locals outside tracing hooks, then it would open the door to various hacks that are best not done at all, from a code quality point of view. -- ___ Python tracker <https://bugs.python.org/issue30744> ___
[issue30744] Local variable assignment is broken when combined with threads + tracing + closures
Armin Rigo added the comment: FWIW, a Psyco-level JIT compiler can support reads from locals() or f_locals, but writes are harder. The need to support writes would likely become another hard step on the way towards adding some simple JIT support to CPython in the future, should you decide you ever want to go that way. (It is not a problem for PyPy but PyPy is not a simple JIT.) Well, I guess CPython is not ever going down that path anyway. -- ___ Python tracker <https://bugs.python.org/issue30744> ___
[issue29943] PySlice_GetIndicesEx change broke ABI in 3.5 and 3.6 branches
Armin Rigo added the comment: An update to Serhiy's proposed fix:

    #if PY_VERSION_HEX < 0x0307 && defined(PySlice_GetIndicesEx)
    #if !defined(PYPY_VERSION)
    #undef PySlice_GetIndicesEx
    #endif
    #endif

All PyXxx functions are macros on PyPy, and undefining a macro just makes everything go wrong. And there is not much PyPy can do about that. -- nosy: +arigo ___ Python tracker <https://bugs.python.org/issue29943> ___
[issue31265] Remove doubly-linked list from C OrderedDict
Armin Rigo added the comment: I would side with Inada in thinking they both give the same amortized complexity, but beyond that, benchmarks are the real answer. There is little value in keeping the current implementation of OrderedDict *if* benchmarks show that it is rarely faster. -- ___ Python tracker <https://bugs.python.org/issue31265> ___
[issue31119] Signal tripped flags need memory barriers
Armin Rigo added the comment: For reference, no, it can't happen on x86 or x86-64. I've found that this simplified model actually works for reasoning about ordering at the hardware level: think about each core as a cache-less machine that always *reads* from the central RAM, but that has a delay for writes---i.e. writes issued by a core are queued internally, and actually sent to the central RAM at some unspecified later time, but in order. (Of course that model fails on other machines like ARM.) -- nosy: +arigo ___ Python tracker <http://bugs.python.org/issue31119> ___
[issue31105] Cyclic GC threshold may need tweaks
New submission from Armin Rigo: The cyclic GC uses a simple and somewhat naive policy to know when it must run. It is based on counting "+1" for every call to _PyObject_GC_Alloc(). Explicit calls to PyObject_GC_Del() are counted as "-1". The cyclic GC will only be executed after the count reaches 700. There is then a scheme with multiple generations, but the point is that nothing is done at all before _PyObject_GC_Alloc() has been called 700 times. The problem is that each of these _PyObject_GC_Alloc() can be directly or indirectly responsible for a large quantity of memory. Take this example:

    while True:
        l = [None] * 10000000    # 80 MB, on 64-bit
        l[-1] = l
        del l

This loop actually consumes 700 times 80 MB, which is unexpected to say the least, and looks like a very fast memory leak. The same program on 32-bit architectures simply runs out of virtual address space and fails with a MemoryError---even if we lower the length of the list to 10**9/700 = 1428571. The same problem exists whenever a single object is "large", we allocate and forget many such objects in sequence, and they are kept alive by a cycle. This includes the case where the large object is not part of a cycle, but merely referenced from a cycle. Examples of "large" objects with potentially short lifetimes, maybe more natural than large lists, would include bz2 objects (17MB each) or Numpy arrays.

To fix it, the basic idea would be to have the "large" allocations count for more than "+1" in _PyObject_GC_Alloc(). Maybe they would also need to decrease the count by the same amount in PyObject_GC_Del(), though that may be less important. Still, I am unsure about how it could be implemented. Maybe a new C API is needed, which could then be used by a few built-in types (lists, bz2 objects, numpy arrays...) where the bulk of the memory allocation is not actually done by _PyObject_GC_Alloc() but by a separate call. I am thinking about something like PyMem_AddPressure(size), which would simply increase the count by a number based on 'size'. -- components: Interpreter Core messages: 299656 nosy: arigo priority: normal severity: normal status: open title: Cyclic GC threshold may need tweaks versions: Python 2.7, Python 3.3, Python 3.4, Python 3.5, Python 3.6, Python 3.7 ___ Python tracker <http://bugs.python.org/issue31105> ___
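The 700 discussed above is the generation-0 threshold, which is exposed and tunable from Python. Lowering it is a blunt per-process workaround for the "large object" problem, pending something like the proposed PyMem_AddPressure() (a sketch; the specific values are illustrative):

```python
import gc

# Inspect the collection thresholds; the first one is the generation-0
# allocation count after which a collection is considered (700 by default
# on CPython).
old = gc.get_threshold()

# Collect generation 0 more eagerly, trading CPU time for peak memory:
gc.set_threshold(100, old[1], old[2])

# ...and restore the previous configuration.
gc.set_threshold(*old)
```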
[issue30966] multiprocessing.queues.SimpleQueue leaks 2 fds
New submission from Armin Rigo: multiprocessing.queues.SimpleQueue should have a close() method. This is needed to explicitly release the two file descriptors of the Pipe used internally. Without it, the file descriptors leak if a reference to the SimpleQueue object happens to stay around for longer than expected (e.g. in a reference cycle, or with PyPy). I think the following would do:

    diff -r 0b72fd1a7641 lib-python/2.7/multiprocessing/queues.py
    --- a/lib-python/2.7/multiprocessing/queues.py  Sun Jul 16 13:41:28 2017 +0200
    +++ b/lib-python/2.7/multiprocessing/queues.py  Wed Jul 19 10:45:03 2017 +0200
    @@ -358,6 +358,11 @@
             self._wlock = Lock()
             self._make_methods()
     
    +    def close(self):
    +        # PyPy extension: CPython doesn't have this method!
    +        self._reader.close()
    +        self._writer.close()
    +
         def empty(self):
             return not self._reader.poll()

-- messages: 298645 nosy: arigo priority: normal severity: normal status: open title: multiprocessing.queues.SimpleQueue leaks 2 fds versions: Python 2.7, Python 3.3, Python 3.4, Python 3.5, Python 3.6, Python 3.7 ___ Python tracker <http://bugs.python.org/issue30966> ___
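CPython eventually grew the proposed method: SimpleQueue.close() exists since Python 3.9 and releases the two pipe file descriptors deterministically (a minimal usage sketch on a current Python):

```python
from multiprocessing import SimpleQueue

q = SimpleQueue()
q.put("hello")           # written to the underlying pipe
assert q.get() == "hello"

# Explicitly release the reader/writer fds, instead of relying on the
# object being garbage-collected promptly:
q.close()
```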
[issue30879] os.listdir(bytes) gives a list of bytes, but os.listdir(buffer) gives a list of unicodes
Armin Rigo added the comment: I've also been pointed to https://bugs.python.org/issue26800 . -- ___ Python tracker <http://bugs.python.org/issue30879> ___
[issue30879] os.listdir(bytes) gives a list of bytes, but os.listdir(buffer) gives a list of unicodes
New submission from Armin Rigo: The ``os`` functions generally accept any buffer-supporting object as file names, and interpret it as if ``bytes()`` had been called on it. However, ``os.listdir(x)`` uses the type of ``x`` to know if it should return a list of bytes or a list of unicodes---and the condition seems to be ``isinstance(x, bytes)``. So we get this kind of inconsistent behaviour:

    >>> os.listdir(b".")
    [b'python', b'Include', b'python-config.py', ...]
    >>> os.listdir(bytearray(b"."))
    ['python', 'Include', 'python-config.py', ...]

-- components: Library (Lib) messages: 297960 nosy: arigo priority: normal severity: normal status: open title: os.listdir(bytes) gives a list of bytes, but os.listdir(buffer) gives a list of unicodes type: behavior versions: Python 3.5, Python 3.7 ___ Python tracker <http://bugs.python.org/issue30879> ___
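The documented contract at stake is that the *type* of the argument decides the type of the results; bytearray was the odd case out. For the two documented cases the mirroring is easy to check:

```python
import os

# str in -> str out:
str_names = os.listdir(".")

# bytes in -> bytes out:
bytes_names = os.listdir(b".")
```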
[issue30744] Local variable assignment is broken when combined with threads + tracing + closures
Armin Rigo added the comment: (Note: x.py is for Python 2.7; for 3.x, of course, replace ``.next()`` with ``.__next__()``. The result is the same) -- ___ Python tracker <http://bugs.python.org/issue30744> ___
[issue30744] Local variable assignment is broken when combined with threads + tracing + closures
Armin Rigo added the comment: A version of the same problem without threads, using generators instead to get the bug deterministically. Prints 1, 1, 1, 1 on CPython and 1, 2, 3, 3 on PyPy; in both cases we would rather expect 1, 2, 3, 4. -- Added file: http://bugs.python.org/file46972/x.py ___ Python tracker <http://bugs.python.org/issue30744> ___
[issue30744] Local variable assignment is broken when combined with threads + tracing + closures
Changes by Armin Rigo : -- nosy: +arigo ___ Python tracker <http://bugs.python.org/issue30744> ___
[issue30480] samefile and sameopenfile fail for WebDAV mapped drives
Armin Rigo added the comment: Another example of this misbehaviour: there are cases where ``os.stat()`` will internally fail to obtain the whole stat info (in some cases related to permissions) and silently fall back to the same behaviour as Python 2.7. In particular, it will return a result with ``st_dev == st_ino == 0``. Of course, ``os.path.samefile()`` will then consider all such files as "the same one", which is nonsense. -- nosy: +arigo ___ Python tracker <http://bugs.python.org/issue30480> ___
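The failure mode follows from how samefile() is implemented: it reduces to ``os.path.samestat()``, which compares the (st_ino, st_dev) pair of the two stat results. If the OS fallback reports zeros for both fields, every such pair compares equal (a small sketch of that comparison):

```python
import os

s1 = os.stat(".")
s2 = os.stat(".")

# samestat() is essentially this pairwise comparison:
same = (s1.st_ino == s2.st_ino and s1.st_dev == s2.st_dev)

# With st_dev == st_ino == 0 on *both* sides, 'same' would also be True,
# even for unrelated files -- the nonsense described above.
```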
[issue29535] datetime hash is deterministic in some cases
Changes by Armin Rigo : -- pull_requests: +2016 ___ Python tracker <http://bugs.python.org/issue29535> ___
[issue24340] co_stacksize estimate can be highly off
Changes by Armin Rigo : -- versions: +Python 3.7 ___ Python tracker <http://bugs.python.org/issue24340> ___
[issue28647] python --help: -u is misdocumented as binary mode
Changes by Armin Rigo : -- nosy: -arigo ___ Python tracker <http://bugs.python.org/issue28647> ___
[issue18943] argparse: default args in mutually exclusive groups
Changes by Armin Rigo : -- nosy: -arigo ___ Python tracker <http://bugs.python.org/issue18943> ___
[issue30347] itertools.groupby() can fail a C assert()
New submission from Armin Rigo: This triggers an assert() failure on debug-mode Python (or a leak in release Python):

    from itertools import groupby

    def f(n):
        print("enter:", n)
        if n == 5:
            list(b)
        print("leave:", n)
        return n != 6

    for (k, b) in groupby(range(10), f):
        print(list(b))

With current trunk we get:

    python: ./Modules/itertoolsmodule.c:303: _grouper_next: Assertion `gbo->currkey == NULL' failed.

-- components: Interpreter Core messages: 293517 nosy: arigo priority: normal severity: normal status: open title: itertools.groupby() can fail a C assert() type: crash versions: Python 2.7, Python 3.7 ___ Python tracker <http://bugs.python.org/issue30347> ___
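For contrast with the pathological reentrant consumption above, the intended use of groupby() consumes each group iterator while its group is current, not from inside the key function (a sketch with made-up data):

```python
from itertools import groupby

data = [1, 1, 2, 2, 2, 3]

# Each (key, group) pair is consumed before advancing to the next group:
summary = [(k, len(list(g))) for k, g in groupby(data)]
```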
[issue30080] Add the --duplicate option for timeit
Changes by Armin Rigo : -- nosy: -arigo ___ Python tracker <http://bugs.python.org/issue30080> ___
[issue29694] race condition in pathlib mkdir with flags parents=True
Armin Rigo added the comment: https://github.com/python/cpython/pull/1089 (I fixed the problem with my CLA check. Now https://cpython-devguide.readthedocs.io/pullrequest.html#licensing says "you can ask for the CLA check to be run again" but doesn't tell how to do that, so as far as I can tell, I have to ask e.g. here.) -- ___ Python tracker <http://bugs.python.org/issue29694> ___
[issue29694] race condition in pathlib mkdir with flags parents=True
Armin Rigo added the comment: Update: a review didn't show any other similar problems (pathlib.py is a thin layer after all). Applied the fix and test (x2.diff) inside PyPy. -- ___ Python tracker <http://bugs.python.org/issue29694> ___
[issue29694] race condition in pathlib mkdir with flags parents=True
Armin Rigo added the comment: Maybe we should review pathlib.py for this kind of issues and first apply the fixes and new tests inside PyPy. That sounds like a better way to get things done for these rare issues, where CPython is understandably reluctant to do much changes. Note that the PyPy version of the stdlib already contains fixes that have not been merged back to CPython (or only very slowly), though so far they are the kind of issues that trigger more often on PyPy than on CPython, like GC issues. -- ___ Python tracker <http://bugs.python.org/issue29694> ___
[issue29694] race condition in pathlib mkdir with flags parents=True
Armin Rigo added the comment: Changes including a test. The test should check all combinations of "concurrent" creation of the directory. It hacks around at pathlib._normal_accessor.mkdir (patching "os.mkdir" has no effect, as the built-in function was already extracted and stored inside pathlib._normal_accessor). -- Added file: http://bugs.python.org/file46764/x2.diff ___ Python tracker <http://bugs.python.org/issue29694> ___
[issue29694] race condition in pathlib mkdir with flags parents=True
Armin Rigo added the comment: It's a mess to write a test, because the exact semantics of .mkdir() are not defined as far as I can tell. This patch is a best-effort attempt at making .mkdir() work in the presence of common parallel filesystem changes, that is, other processes that would create the same directories at the same time. This patch is by no means an attempt at being a complete solution for similar problems. The exact semantics have probably never been discussed at all. For example, what should occur if a parent directory is removed just after .mkdir() created it? I'm not suggesting to discuss these issues now, but to simply leave them open. I'm trying instead to explain why writing a test is a mess (more than "just" creating another thread and creating/removing directories very fast while the main thread calls .mkdir()), because we have no exact notion of what should work and what shouldn't. -- ___ Python tracker <http://bugs.python.org/issue29694> ___
[issue29694] race condition in pathlib mkdir with flags parents=True
Armin Rigo added the comment: A different bug in the same code: if someone creates the directory itself between the two calls to ``self._accessor.mkdir(self, mode)``, then the function will fail with an exception even if ``exist_ok=True``. Attached is a patch that fixes both cases. -- keywords: +patch nosy: +arigo Added file: http://bugs.python.org/file46707/x1.diff ___ Python tracker <http://bugs.python.org/issue29694> ___
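The fix for this class of race is to attempt the mkdir and inspect the failure afterwards, instead of checking for existence first (a check-then-act race). A minimal sketch of that idea, as a hypothetical standalone helper (not the actual x1.diff, which patches pathlib internals):

```python
import os

def mkdir_racefree(path, mode=0o777, exist_ok=False):
    """Create a directory, tolerating another process creating it
    between our call and the kernel's check.

    Hypothetical helper illustrating the attempt-then-inspect pattern.
    """
    try:
        os.mkdir(path, mode)
    except FileExistsError:
        # Someone else created it concurrently.  That is only an error
        # if the caller insisted on exclusive creation, or if the path
        # now exists but is not a directory.
        if not exist_ok or not os.path.isdir(path):
            raise
```

Because the existence check happens only after mkdir() has already failed, no window remains in which a concurrent creator can turn `exist_ok=True` into a spurious exception.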
[issue29602] complex() on object with __complex__ function loses sign of zero imaginary part
Armin Rigo added the comment: 4 lines before the new "ci.real = cr.imag;", we have "cr.imag = 0.0; /* Shut up compiler warning */". The comment is now wrong: we really need to set cr.imag to 0.0. -- ___ Python tracker <http://bugs.python.org/issue29602> ___
[issue29602] complex() on object with __complex__ function loses sign of zero imaginary part
Armin Rigo added the comment: Maybe I should be more explicit: what seems strange to me is that some complex numbers have a repr that, when entered in the source, produces a different result. For example, if you want the result ``(-0-0j)`` you have to enter something different. However, I missed the fact that calling explicitly ``complex(a, b)`` with a and b being floats always gives exactly a+bj with the correct signs. So I retract my comments. -- ___ Python tracker <http://bugs.python.org/issue29602> ___
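The retraction rests on the distinction between building a complex with the two-float constructor (which preserves the signs of zeroes) and re-entering its repr as source (which goes through subtraction and loses them). A quick check, assuming CPython's IEEE-754 float semantics:

```python
import math

# complex(a, b) with two floats builds exactly a+bj, zero signs included:
z = complex(-0.0, -0.0)
print(repr(z))                                           # (-0-0j)
print(math.copysign(1.0, z.real), math.copysign(1.0, z.imag))

# ...whereas re-entering that repr evaluates (-0) - 0j: "-0" is integer
# negation (giving plain 0), and 0.0 - 0.0 is +0.0, so both signs vanish:
w = eval(repr(z))
print(math.copysign(1.0, w.real), math.copysign(1.0, w.imag))
```

`math.copysign(1.0, x)` is used because `-0.0 == 0.0`, so a plain equality test cannot see the difference.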
[issue29602] complex() on object with __complex__ function loses sign of zero imaginary part
Armin Rigo added the comment: CPython 2.7 and 3.5 have issues with the sign of zeroes even without any custom class:

    >>> -(0j)        # this is -(0+0j)
    (-0-0j)
    >>> (-0-0j)      # but this equals the difference between 0 and 0+0j
    0j
    >>> (-0.0-0j)    # this is the difference between -0.0 and 0+0j
    (-0+0j)
    >>> -0j
    -0j        # <- on CPython 2.7
    (-0-0j)    # <- on CPython 3.5

It's unclear if the signs of the two potential zeroes in a complex number have a meaning, but the C standard considers these cases for all functions in the <complex.h> header. -- nosy: +arigo ___ Python tracker <http://bugs.python.org/issue29602> ___
[issue29534] _decimal difference with _pydecimal
Armin Rigo added the comment: Sorry! It should be repr(a) inside the print. Here is the fixed version:

    class X(Decimal):
        def __init__(self, a):
            print('__init__:', repr(a))

    X.from_float(42.5)   # __init__: Decimal('42.5')
    X.from_float(42)     # with _pydecimal: __init__: 42
                         # with _decimal:   __init__: Decimal('42')

-- ___ Python tracker <http://bugs.python.org/issue29534> ___
[issue29535] datetime hash is deterministic in some cases
Armin Rigo added the comment: That's not what the docs say. E.g. https://docs.python.org/3/reference/datamodel.html#object.__hash__ says:

    By default, the __hash__() values of str, bytes and datetime objects are “salted” with an unpredictable random value. Although they remain constant within an individual Python process, they are not predictable between repeated invocations of Python.

Moreover, this command really prints changing numbers:

    ~/svn/python/3.7-debug/python -c "import datetime; print(hash(datetime.datetime(2016,10,10,0,0,0,0)))"

-- ___ Python tracker <http://bugs.python.org/issue29535> ___
[issue29535] datetime hash is deterministic in some cases
New submission from Armin Rigo: The documentation on the hash randomization says that date, time and datetime have a hash based on strings, that is therefore nondeterministic in several runs of Python. I may either be missing a caveat, or the actual implementation does not follow its promise in case a timezone is attached to the datetime or time object: ~/svn/python/3.7-debug/python -c "import datetime;print(hash(d atetime.datetime(2016,10,10,0,0,0,0,datetime.timezone(datetime.timedelta(0, 36000)" (this gives -6021186165085109055 all the time) ~/svn/python/3.7-debug/python -c "import datetime;print(hash(datetime.time(0,0,0,0, datetime.timezone(datetime.timedelta(0, 36000)" (this gives -3850122659820237607 all the time) -- messages: 287601 nosy: arigo priority: normal severity: normal status: open title: datetime hash is deterministic in some cases type: security versions: Python 3.5, Python 3.7 ___ Python tracker <http://bugs.python.org/issue29535> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue29534] _decimal difference with _pydecimal
New submission from Armin Rigo: A difference in behavior between _decimal and _pydecimal (it seems that _decimal is more consistent in this case): class X(Decimal): def __init__(self, a): print('__init__:', a) X.from_float(42.5) # __init__: Decimal('42.5') X.from_float(42) # with _pydecimal: __init__: 42 # with _decimal: __init__: Decimal('42') -- messages: 287600 nosy: arigo priority: normal severity: normal status: open title: _decimal difference with _pydecimal type: behavior versions: Python 3.5, Python 3.7 ___ Python tracker <http://bugs.python.org/issue29534> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue16899] Add support for C99 complex type (_Complex) as ctypes.c_complex
Armin Rigo added the comment:

* Tom: the issue is unrelated to cffi, but both ctypes and cffi could proceed to support C complexes, now that libffi support has been added.

* Mark: the problem is that the text you quote from the C standard fixes the representation of a complex in memory, but doesn't say anything about directly passing a complex as argument or return value to a function call. Platforms use custom ways to do that. The text you quote says a complex is an array of two real numbers; but passing an array as argument to a function works by passing a pointer to the first element. Typically, this is not how complexes are passed: instead, some pointerless form of "passing two real numbers" is used.

-- nosy: +arigo ___ Python tracker <http://bugs.python.org/issue16899> ___
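The memory-representation guarantee quoted above does give a usable workaround today: since a complex has the same layout as an array of two reals (real part first), a two-field ctypes Structure matches complexes stored in structs, arrays, or behind pointers. The class name here is a hypothetical stand-in for the missing ctypes.c_complex, and as explained above it does NOT cover passing a complex by value, which follows platform-specific calling conventions:

```python
import ctypes

class c_double_complex(ctypes.Structure):
    """Sketch of a C99 'double _Complex' as seen *in memory*:
    two doubles, real part first (C99 6.2.5p13)."""
    _fields_ = [("real", ctypes.c_double),
                ("imag", ctypes.c_double)]

    def as_complex(self):
        # Convert to a Python complex for convenient use.
        return complex(self.real, self.imag)

z = c_double_complex(1.5, -2.0)
```

Such a structure can be used as a field type or behind a POINTER when binding a C API that stores complexes in data structures, but not as a direct argument or return type of a by-value call.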
[issue10544] yield expression inside generator expression does nothing
Armin Rigo added the comment: Let's see if the discussion goes anywhere or if this issue remains in limbo for the next 7 years. In the meantime, if I may humbly make a suggestion: whether the final decision is to give SyntaxError or change the semantics, one or a few intermediate versions with a SyntaxWarning might be a good idea. -- ___ Python tracker <http://bugs.python.org/issue10544> ___
[issue10544] yield expression inside generator expression does nothing
Armin Rigo added the comment: Just to add my comment to this 7-years-old never-resolved issue: in PyPy 3.5, which behaves like Python 3.x in this respect, I made the following constructions give a warning.

    def wrong_listcomp():
        return [(yield 42) for i in j]

    def wrong_gencomp():
        return ((yield 42) for i in j)

    def wrong_dictcomp():
        return {(yield 42): 2 for i in j}

    def wrong_setcomp():
        return {(yield 42) for i in j}

    SyntaxWarning: 'yield' inside a list or generator comprehension behaves unexpectedly (http://bugs.python.org/issue10544)

The motivation is that none of the constructions above gives the "expected" result. In more detail:

- wrong_listcomp() doesn't even return a list at all. It's possible to have a clue about why this occurs, but I would say that it is just plain wrong given the ``return [...]`` part of the syntax. The same is true for wrong_dictcomp() and wrong_setcomp().

- wrong_gencomp() returns a generator as expected. However, it is a generator that yields two elements for each i in j: first 42, and then whatever was ``send()`` into the generator. I would say that it is in contradiction with the general idea that this syntax should give a generator that yields one item for each i in j.

In fact, when the user writes such code they might be expecting the "yield" to apply to the function level instead of the genexpr level---but none of the functions above end up being themselves generators. For completeness, I think there is no problem with "await" instead of "yield" in Python 3.6. How about fixing CPython to raise SyntaxWarning or even SyntaxError? -- nosy: +arigo ___ Python tracker <http://bugs.python.org/issue10544> ___
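For the record, CPython eventually took the SyntaxError route suggested here: 3.7 deprecated yield inside comprehensions and generator expressions, and 3.8 rejects it outright, so the constructions above no longer compile there. This can be checked with compile() (the outcome depends on the interpreter version running the check):

```python
# One of the four "wrong" functions from the message above.
src = "def wrong_listcomp():\n    return [(yield 42) for i in j]\n"

try:
    compile(src, "<example>", "exec")
    outcome = "compiled"        # pre-3.8 behavior
except SyntaxError:
    outcome = "SyntaxError"     # CPython >= 3.8

print(outcome)
```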
[issue11992] sys.settrace doesn't disable tracing if a local trace function returns None
Armin Rigo added the comment: Confirmed. More interestingly, nowadays (at least in 3.5) test_pdb.py depends on this bug. If we really clear f->f_trace when the trace function returns None, then test_pdb_until_command_for_generator() fails. This is because pdb.py incorrectly thinks there is no breakpoint in the generator function, and returns None. This doesn't actually clear anything, and so it works anyway. I'd suggest fixing the documentation to reflect the actual behavior of all versions from 2.3 to (at least) 3.5. -- nosy: +arigo ___ Python tracker <http://bugs.python.org/issue11992> ___
[issue29006] 2.7.13 _sqlite more prone to "database table is locked"
Armin Rigo added the comment: larry: unless someone else comments, I think now that the current status of 3.5.3 is fine enough (nothing was done in this branch, and the problem I describe and just fixed in PyPy can be left for later). The revert dd13098a5dc2 needs to be itself reverted in the 2.7 branch. -- ___ Python tracker <http://bugs.python.org/issue29006> ___
[issue29006] 2.7.13 _sqlite more prone to "database table is locked"
Armin Rigo added the comment: Managed to write a patch in PyPy that seems to pass all tests including the new one, including on Windows. I now think that dd13098a5dc2 should be backed out (i.e. 030e100f048a should be kept). Reference to the PyPy changes: https://bitbucket.org/pypy/pypy/commits/235e8a3889790042b3f148bcf04891b27f97a1fc Maybe something similar should be added to CPython, to avoid the unexpected "database table is locked" case; but such a change should probably be done only in trunk, because the <= 2.6 experience seems to suggest it is rare enough in practice. -- ___ Python tracker <http://bugs.python.org/issue29006> ___
[issue29006] 2.7.13 _sqlite more prone to "database table is locked"
Armin Rigo added the comment: ...hah, of course the commit dd13098a5dc2 also reverted away the new test. That test fails. Sorry about that, and feel free to redo that commit. It's just one more case in which the implicit refcounting is used, but I guess as it fixes a real issue it's a good idea anyway and should be fixed first in PyPy. -- ___ Python tracker <http://bugs.python.org/issue29006> ___
[issue29006] 2.7.13 _sqlite more prone to "database table is locked"
Armin Rigo added the comment: Gian-Carlo is right: I can modify the 2.6 tests in the same way as I described, and then I get the same error with python2.6. So it seems that all of 2.6 was prone to the same issue, and it was never found, but went away in 2.7 accidentally. That seems to mean that reverting 030e100f048a was not really necessary (beyond the issue with PyPy/Jython/IronPython). According to http://bugs.python.org/issue23129 it was not a good idea to revert 030e100f048a. But I'm surprised that the test added with 030e100f048a continues to pass with the reversion. Maybe we should investigate why it does. (Again, don't rely on me for that, because I don't know sqlite.) If 030e100f048a stays in, I'll probably figure out a hack to avoid this pitfall with PyPy. -- ___ Python tracker <http://bugs.python.org/issue29006> ___
[issue29006] 2.7.13 _sqlite more prone to "database table is locked"
Armin Rigo added the comment: If I had a say, I would indeed revert 030e100f048a (2.7 branch) and 81f614dd8136 (3.5 branch) as well as forward-port the revert to 3.6 and trunk. Then we wait for someone that really knows why the change was done in the first place. -- ___ Python tracker <http://bugs.python.org/issue29006> ___
[issue28885] Python 3.5.2 strange-behavior issues (from PyPy)
Armin Rigo added the comment:

> Armin, it would help if you report all cases as separate issues.

I asked on python-dev before creating these three issues, and got the opposite answer. If you decide it was a bad idea after all, I will open separate issues in the future. -- ___ Python tracker <http://bugs.python.org/issue28885> ___
[issue28885] Python 3.5.2 strange-behavior issues (from PyPy)
Armin Rigo added the comment: (S6)

    'xxx' % b'foo' == 'xxx'
    b'xxx' % b'foo'    raises TypeError

The first case is because PyMapping_Check() is true on b'foo', so it works like 'xxx' % {...}, which always just returns 'xxx'. The second case is because _PyBytes_Format() contains more special cases, for bytes and bytearray, which are not present in PyUnicode_Format(). -- ___ Python tracker <http://bugs.python.org/issue28885> ___
[issue29096] unicode_concatenate() optimization is not signal-safe (not atomic)
Changes by Armin Rigo : Added file: http://bugs.python.org/file46153/patch1.diff ___ Python tracker <http://bugs.python.org/issue29096> ___
[issue29096] unicode_concatenate() optimization is not signal-safe (not atomic)
Changes by Armin Rigo : Removed file: http://bugs.python.org/file46150/patch1.diff ___ Python tracker <http://bugs.python.org/issue29096> ___
[issue29096] unicode_concatenate() optimization is not signal-safe (not atomic)
Changes by Armin Rigo : Added file: http://bugs.python.org/file46151/patch2.diff ___ Python tracker <http://bugs.python.org/issue29096> ___
[issue29096] unicode_concatenate() optimization is not signal-safe (not atomic)
Armin Rigo added the comment: The signal handler is called between the INPLACE_ADD and the following STORE_FAST opcode, never from string_concatenate() itself. A fix would be to make sure signal handlers are not called between these two opcodes. See the minimal, proof-of-concept patch #1. A possibly better fix, which at least should match the SETUP_FINALLY handling in the ticker handler, is attached as patch #2. (Patches against 2.7.13) -- keywords: +patch Added file: http://bugs.python.org/file46150/patch1.diff ___ Python tracker <http://bugs.python.org/issue29096> ___
[issue29006] 2.7.13 _sqlite more prone to "database table is locked"
Armin Rigo added the comment: Tried that, but reverted because on Windows CheckTypeMapUsage() would fail with SQLITE_MISUSE ("ProgrammingError: database table is locked"). For now PyPy will not implement this 2.7.13 change. I really suspect you can get the same problems on CPython in some cases, as described. -- ___ Python tracker <http://bugs.python.org/issue29006> ___
[issue29006] 2.7.13 _sqlite more prone to "database table is locked"
Armin Rigo added the comment: Or maybe it would be enough to change commit() so that if Sqlite fails with "table is locked", pysqlite would reset all cursors and then try again? -- ___ Python tracker <http://bugs.python.org/issue29006> ___
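The reset-and-retry idea suggested in that message can be sketched at the Python level; commit_with_reset is a hypothetical helper, not pysqlite API, and assumes the caller keeps track of its live cursors:

```python
import sqlite3

def commit_with_reset(con, live_cursors):
    """If commit() fails because older cursors still hold sqlite
    statements open ("table is locked"), close them and retry once.
    Hypothetical sketch of the reset-and-retry suggestion."""
    try:
        con.commit()
    except sqlite3.OperationalError:
        for cur in live_cursors:
            cur.close()   # resets the underlying sqlite statement
        con.commit()

# Usage mirroring the failing pattern: a cursor deliberately kept alive.
con = sqlite3.connect(":memory:")
con.execute("create table foo(x)")
keepalive = con.execute("select * from foo")   # the cursor stays alive
con.execute("insert into foo values (1)")
commit_with_reset(con, [keepalive])
```

Doing this inside pysqlite's commit() itself (at the C level) is what the message proposes; the sketch only shows the control flow.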
[issue29006] 2.7.13 _sqlite more prone to "database table is locked"
New submission from Armin Rigo: 2.7.13 did a small change to the sqlite commit() method, http://bugs.python.org/issue10513, which I think is causing troubles. I noticed the problem in PyPy, which (with the same change) fails another test in Lib/sqlite3/test/regression.py, CheckTypeMapUsage(), containing this code: ... con.execute(SELECT) con.execute("drop table foo") ... The first execute() method creates a cursor; assuming it is not promptly deleted, its mere existence causes the second execute() method to fail inside Sqlite with "OperationalError: database table is locked". As a second step, I could reproduce the problem in CPython by changing the test like this: ... keepalive = con.execute(SELECT)# the cursor stays alive con.execute("drop table foo") ... The reason is that in the commit() done by the second execute(), we no longer reset all this connection's cursors before proceeding. But depending on the operation, Sqlite may then complain that the "table is locked" by these old cursors. In other words, this new situation introduced in 2.7.13 potentially makes a few complicated cases crash by "table is locked" on CPython, where they would work fine previously---which is bad IMHO. About PyPy, many more cases would crash, to the point that we may have no choice but not implement this 2.7.13 change at all. -- messages: 283569 nosy: arigo priority: normal severity: normal status: open title: 2.7.13 _sqlite more prone to "database table is locked" versions: Python 2.7 ___ Python tracker <http://bugs.python.org/issue29006> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue28885] Python 3.5.2 strange-behavior issues (from PyPy)
Armin Rigo added the comment: (S5) gregory: actually there is also fchown/chown in the same situation. -- ___ Python tracker <http://bugs.python.org/issue28885> ___
[issue28883] Python 3.5.2 crashers (from PyPy)
Changes by Armin Rigo : -- type: -> crash ___ Python tracker <http://bugs.python.org/issue28883> ___
[issue28885] Python 3.5.2 strange-behavior issues (from PyPy)
Changes by Armin Rigo : -- type: -> behavior ___ Python tracker <http://bugs.python.org/issue28885> ___
[issue28884] Python 3.5.2 non-segfaulting bugs (from PyPy)
Changes by Armin Rigo : -- type: -> behavior ___ Python tracker <http://bugs.python.org/issue28884> ___
[issue28885] Python 3.5.2 strange-behavior issues (from PyPy)
Changes by Armin Rigo : -- versions: +Python 3.5 ___ Python tracker <http://bugs.python.org/issue28885> ___
[issue28885] Python 3.5.2 strange-behavior issues (from PyPy)
Armin Rigo added the comment: (S2) argument clinic turns the "bool" specifier into PyObject_IsTrue(), accepting any argument whatsoever. This can easily get very confusing for the user, e.g. after messing up the number of arguments. For example: os.symlink("/path1", "/path2", "/path3") doesn't fail, it just considers the 3rd argument as some true value. -- ___ Python tracker <http://bugs.python.org/issue28885> ___
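The os.symlink confusion can be reproduced directly (POSIX only, and assuming the clinic "bool" converter for target_is_directory still accepts arbitrary objects, as it did when this was reported):

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "path1")
    dst = os.path.join(tmp, "path2")
    open(src, "w").close()
    # The third positional parameter is really target_is_directory.
    # Only its truth value is used, so the stray "/path3" -- probably a
    # mistyped extra path -- is silently treated as True.
    os.symlink(src, dst, "/path3")
    created = os.path.islink(dst)

print(created)
```

A stricter converter (e.g. clinic's `bool(accept={int})`) would reject the string and surface the mistake immediately.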
[issue28885] Python 3.5.2 strange-behavior issues (from PyPy)
Armin Rigo added the comment: (S1) ceval.c: GET_AITER: calls _PyCoro_GetAwaitableIter(), which might get an exception from calling the user-defined __await__() or checking what it returns; such an exception is completely eaten. -- ___ Python tracker <http://bugs.python.org/issue28885> ___
[issue28885] Python 3.5.2 strange-behavior issues (from PyPy)
Armin Rigo added the comment: (S5) pep 475: unclear why 'os.fchmod(fd)' retries automatically when it gets EINTR but the otherwise-equivalent 'os.chmod(fd)' does not. (The documentation says they are fully equivalent, so someone is wrong.) -- ___ Python tracker <http://bugs.python.org/issue28885> ___
[issue28885] Python 3.5.2 strange-behavior issues (from PyPy)
Armin Rigo added the comment: (S3) hash({}.values()) works (but hash({}.keys()) correctly gives TypeError). That's a bit confusing and, as far as I can tell, always pointless. Also, related: d.keys()==d.keys() but d.values()!=d.values(). -- ___ Python tracker <http://bugs.python.org/issue28885> ___
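The asymmetry comes from dict_keys defining set-like equality (and being explicitly unhashable), while dict_values defines neither __eq__ nor __hash__ and so falls back to identity semantics. A short demonstration:

```python
d = {1: "a", 2: "b"}

# keys views compare like sets and are (correctly) unhashable:
try:
    hash(d.keys())
    keys_hashable = True
except TypeError:
    keys_hashable = False

# values views fall back to object identity: hashable (pointlessly),
# and two freshly-created views of the same dict compare unequal.
values_hash = hash(d.values())

print(keys_hashable)              # False
print(d.keys() == d.keys())       # True  (set-style comparison)
print(d.values() == d.values())   # False (identity comparison)
```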
[issue28885] Python 3.5.2 strange-behavior issues (from PyPy)
Armin Rigo added the comment: (S4) if you write ``from .a import b`` inside the Python prompt, or in a module not in any package, then you get a SystemError(!) with an error message that is unlikely to help newcomers. -- ___ Python tracker <http://bugs.python.org/issue28885> ___
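With no parent package there is nothing for "." to resolve against. On the 3.5.2 described here this surfaced as a SystemError; later CPython versions report the friendlier ImportError ("attempted relative import with no known parent package"), so this sketch accepts either:

```python
# exec() is used so the relative import is attempted at run time in the
# current (non-package) module rather than rejected while reading this file.
try:
    exec("from . import anything")
    err = None
except (SystemError, ImportError) as e:
    err = e

print(type(err).__name__)
```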
[issue28885] Python 3.5.2 strange-behavior issues (from PyPy)
New submission from Armin Rigo: As discussed on python-dev, I am creating omnibus issues from the lists of crashers, of wrong-according-to-the-docs, and of strange-behavior-only issues that I found while developing Python 3.5.2 support for PyPy. These occur with CPython 3.5.2 but most of them are likely still here in trunk. This is the issue containing the "strange behaviors" and some of them, or possibly most, will turn out to be my own feelings only and not python-dev's, which is fine by me. -- messages: 282537 nosy: arigo priority: normal severity: normal status: open title: Python 3.5.2 strange-behavior issues (from PyPy) ___ Python tracker <http://bugs.python.org/issue28885> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue28884] Python 3.5.2 non-segfaulting bugs (from PyPy)
Armin Rigo added the comment: (B10) Follow-up on issue #25388: running ``python x.py`` if x.py contains the following bytes...

    * ``b"#\xfd\n"`` => we get a SyntaxError: Non-UTF-8 code
    * ``b"# coding: utf-8\n#\xfd\n"`` => we get no error!

-- ___ Python tracker <http://bugs.python.org/issue28884> ___
[issue28884] Python 3.5.2 non-segfaulting bugs (from PyPy)
Armin Rigo added the comment: (B9) CPython 3.5.2: this ``nonlocal`` seems not to have a reasonable effect (note that if we use a different name instead of ``__class__``, this example correctly complains that there is no binding in the outer scope of ``Y``)::

    class Y:
        class X:
            nonlocal __class__
            __class__ = 42
            print(locals()['__class__'])   # 42
            print(__class__)               # but this is a NameError

-- ___ Python tracker <http://bugs.python.org/issue28884> ___
[issue28884] Python 3.5.2 non-segfaulting bugs (from PyPy)
Armin Rigo added the comment: (B8) also discussed in connection with https://bugs.python.org/issue28427 : weak dicts (both kinds) and weak sets have an implementation of __len__ which doesn't give the "expected" result on PyPy, and in some cases on CPython too. I'm not sure what is expected and what is not. Here is an example on CPython 3.5.2+ (using a thread to run the weakref callbacks only, not to explicitly inspect or modify 'd')::

    import weakref, _thread
    from queue import Queue

    queue = Queue()

    def subthread(queue):
        while True:
            queue.get()

    _thread.start_new_thread(subthread, (queue,))

    class X:
        pass

    d = weakref.WeakValueDictionary()
    while True:
        x = X()
        d[52] = x
        queue.put(x)
        del x
        while list(d) != []:
            pass
        assert len(d) == 0   # we've checked that list(d)==[], but this may fail

On CPython I've seen the assert fail only after editing the function WeakValueDictionary.__init__.remove() to add ``time.sleep(0.01)`` as the first line. Otherwise I guess the timings happen to make that test pass. -- ___ Python tracker <http://bugs.python.org/issue28884> ___
[issue28884] Python 3.5.2 non-segfaulting bugs (from PyPy)
Armin Rigo added the comment: (B7) frame.clear() does not clear f_locals, unlike what a test says (Lib/test/test_frame.py)::

    def test_locals_clear_locals(self):
        # Test f_locals before and after clear() (to exercise caching)
        f, outer, inner = self.make_frames()
        outer.f_locals
        inner.f_locals
        outer.clear()
        inner.clear()
        self.assertEqual(outer.f_locals, {})
        self.assertEqual(inner.f_locals, {})

This test passes, but the C-level PyFrameObject has got a strong reference to f_locals, which is only updated (to be empty) if the Python code tries to read this attribute. In the normal case, code that calls clear() but doesn't read f_locals afterwards will still leak everything contained in the C-level f_locals field. This can be shown by this failing test::

    import sys

    def g():
        x = 42
        return sys._getframe()

    frame = g()
    d = frame.f_locals
    frame.clear()
    print(d)
    assert d == {}   # fails! but 'assert d is frame.f_locals' passes,
                     # which shows that this dict is kept alive by
                     # 'frame'; and we've seen that it is non-empty
                     # as long as we don't read frame.f_locals.

-- ___ Python tracker <http://bugs.python.org/issue28884> ___
[issue28884] Python 3.5.2 non-segfaulting bugs (from PyPy)
Armin Rigo added the comment: (B5) this is an old issue that was forgotten twice on the issue tracker: ``class C: __new__=int.__new__`` and ``class C(int): __new__=object.__new__`` can each be instantiated, even though they shouldn't. This is because ``__new__`` is completely ignored if it is set to any built-in function that uses ``tp_new_wrapper`` as its C code (many of the built-in types' ``__new__`` are like that). http://bugs.python.org/issue1694663#msg75957, http://bugs.python.org/issue5322#msg84112. In (at least) CPython 3.5, a few classes work only thanks to abuse of this bug: for example, ``io.UnsupportedOperation.__new__(io.UnsupportedOperation)`` doesn't work, but that was not noticed because ``io.UnsupportedOperation()`` mistakenly works. -- ___ Python tracker <http://bugs.python.org/issue28884> ___
[issue28884] Python 3.5.2 non-segfaulting bugs (from PyPy)
Armin Rigo added the comment: (B6) this program fails the check for no sys.exc_info(), even though at the point this assert runs (called from the <== line) we are not in any except/finally block. This is a generalization of test_exceptions:test_generator_doesnt_retain_old_exc::

    import sys

    def g():
        try:
            raise ValueError
        except ValueError:
            yield 1
        assert sys.exc_info() == (None, None, None)
        yield 2

    gen = g()
    try:
        raise IndexError
    except IndexError:
        assert next(gen) is 1
    assert next(gen) is 2   # <==

-- ___ Python tracker <http://bugs.python.org/issue28884> ___
[issue28883] Python 3.5.2 crashers (from PyPy)
Changes by Armin Rigo : -- Removed message: http://bugs.python.org/msg282524 ___ Python tracker <http://bugs.python.org/issue28883> ___
[issue28884] Python 3.5.2 non-segfaulting bugs (from PyPy)
Armin Rigo added the comment: (B2) fcntl.ioctl(x, y, buf, mutate_flag): mutate_flag is there for the case of buf being a read-write buffer, which is then mutated in-place. But if we call with a read-only buffer, mutate_flag is ignored (instead of rejecting a True value)---ioctl(x, y, "foo", True) will not actually mutate the string "foo", but the True is completely ignored. (I think this is a bug introduced during the Argument Clinic refactoring.) -- ___ Python tracker <http://bugs.python.org/issue28884> ___