[issue46965] Enable informing callee it's awaited via vector call flag
Vladimir Matveev added the comment:

- Introducing dedicated opcodes for each kind of awaited call is definitely an option. In fact, the first implementation used that approach; however, as Dino mentioned, it was more of a logistical issue (several spots produced .pyc files, so the compiler needed to be up to date across all of them).
- There was some perf win from rewriting `gather` in C; however, the main reason for us to do it was the ability to be await-aware, which we made available only in C (returning a completed waithandle is not exposed to Python either) to reduce the scope. Providing the ability to follow the await-aware protocol (read the indicator that a call is awaited + return a completed waithandle for eagerly completed calls) in pure Python is definitely possible.
- To provide some context on why it was beneficial: a typical implementation of an endpoint in IG is an async function that in turn calls into numerous other async functions to generate an output.
- `gather` is used all over the place when there are no sequential dependencies between calls.
- The number of unique pieces of data ultimately fetched by async calls is not very big, i.e. the same fragment of information can be requested by different async calls, which makes memoization a very attractive strategy to reduce I/O and heavyweight computations.
- Memoized pieces of data are effectively represented by completed futures, and it is very common to have `gather` accept either a memoized value or a coroutine object that will complete synchronously by awaiting a memoized value. Before making `gather` await-aware, it always had to follow the standard process and convert awaitables into tasks that are queued into the event loop for execution. In our workload, task creation/queueing added a noticeable overhead. With await-aware `gather` we can execute coroutine objects eagerly and, if they were not suspended, bypass task creation entirely.
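To illustrate the memoization pattern described above, here is a minimal sketch (the names `_cache`, `fetch_fragment`, and `endpoint` are hypothetical, not the actual IG code): a cache maps keys to completed futures, and `gather` receives coroutines that resolve synchronously by awaiting those futures — exactly the case where an await-aware `gather` can skip task creation.

```python
import asyncio

_cache = {}  # hypothetical per-request memoization cache

async def fetch_fragment(key):
    # First caller creates (and here, immediately completes) the future;
    # later callers simply await the already-completed future.
    if key not in _cache:
        fut = asyncio.get_running_loop().create_future()
        fut.set_result(f"data for {key}")  # stands in for real I/O
        _cache[key] = fut
    return await _cache[key]

async def endpoint():
    # Independent calls request overlapping fragments; with an await-aware
    # gather, coroutines that never suspend could bypass task creation.
    return await asyncio.gather(
        fetch_fragment("user"),
        fetch_fragment("media"),
        fetch_fragment("user"),  # served from the memoized future
    )

print(asyncio.run(endpoint()))
```

With stock asyncio each of the three coroutines is still wrapped in a task; the point of the await-aware protocol is that none of them suspends, so that wrapping is pure overhead.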
```
import asyncio
import time

async def step(i):
    if i == 0:
        return
    await asyncio.gather(*[step(i - 1) for _ in range(6)])

async def main():
    t0 = time.perf_counter()
    await step(6)
    t1 = time.perf_counter()
    print(f"{t1 - t0} s")

N = 0

def create_task(loop, coro):
    global N
    N += 1
    return asyncio.Task(coro, loop=loop)

loop = asyncio.get_event_loop()
loop.set_task_factory(create_task)
loop.run_until_complete(main())
print(f"{N} tasks created")

# Cinder
# 0.028410961851477623 s
# 1 tasks created

# Python 3.8
# 1.2157012447714806 s
# 55987 tasks created
```

-- ___ Python tracker <https://bugs.python.org/issue46965> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue46965] Enable informing callee it's awaited via vector call flag
Change by Vladimir Matveev : -- nosy: +v2m ___ Python tracker <https://bugs.python.org/issue46965> ___
[issue43563] Use dedicated opcodes to speed up calls/attribute lookups with super() as receiver
Vladimir Matveev added the comment: Apologies for the delay in replying: in more concrete numbers, for the IG codebase enabling this optimization resulted in a 0.2% CPU win. -- ___ Python tracker <https://bugs.python.org/issue43563> ___
[issue43563] Use dedicated opcodes to speed up calls/attribute lookups with super() as receiver
Vladimir Matveev added the comment:

>Currently, super() is decoupled from the core language. It is just a builtin that provides customized attribute lookup. This PR makes super() more tightly integrated with the core language, treating it as if it were a keyword and part of the grammar. Also note, users can currently create their own versions of super(), shadowing the builtin super().

This is true, however:
- this patch does not block people from introducing a custom version of `super`, so this scenario still works. The idea was to streamline the common case - based on digging into the Instagram codebase and its transitive dependencies (which is a reasonably large amount of code), every spot where `super()` appears in sources assumes `super` to be the builtin, and for a pretty common use-case its cost is noticeable in the profiler.
- zero-argument `super()` is still a bit magical since it requires compiler support to create the cell for `__class__` and assumes a certain shape of the frame object, so this patch is a step forward with better compiler support and removal of the runtime dependency on the frame.

> Do you have any evidence that the overhead of super() is significant in real programs

I do see a non-negligible cost of allocation/initialization of the `super` object in IG profiling data.

-- ___ Python tracker <https://bugs.python.org/issue43563> ___
[issue43563] Use dedicated opcodes to speed up calls/attribute lookups with super() as receiver
Change by Vladimir Matveev : -- keywords: +patch pull_requests: +23696 stage: -> patch review pull_request: https://github.com/python/cpython/pull/24936 ___ Python tracker <https://bugs.python.org/issue43563> ___
[issue43563] Use dedicated opcodes to speed up calls/attribute lookups with super() as receiver
New submission from Vladimir Matveev : Calling methods and looking up attributes when the receiver is `super()` has extra cost compared to a regular attribute lookup. It mainly comes from the need to allocate and initialize an instance of `super`, which for the zero-argument case also includes peeking into the frame/code object for the `__class__` cell and the first argument. In addition, because `PySuper_Type` has a custom implementation of tp_getattro, `_PyObject_GetMethod` would always return a bound method.

```
import timeit

setup = """
class A:
    def f(self): pass

class B(A):
    def f(self):
        super().f()

    def g(self):
        A.f(self)

b = B()
"""

print(timeit.timeit("b.f()", setup=setup, number=2000))
print(timeit.timeit("b.g()", setup=setup, number=2000))

# 7.329449548968114
# 3.892987059080042
```

One option to improve this could be to make the compiler/interpreter aware of super calls so they can be treated specially. The attached patch introduces two new opcodes, LOAD_METHOD_SUPER and LOAD_ATTR_SUPER, intended to be counterparts to LOAD_METHOD and LOAD_ATTR for cases when the receiver is super with either zero or two arguments. The immediate argument for both LOAD_METHOD_SUPER and LOAD_ATTR_SUPER is a pair consisting of:
0: index of the method/attribute in co_names
1: Py_True if super was originally called with 0 arguments, Py_False otherwise.
Both LOAD_METHOD_SUPER and LOAD_ATTR_SUPER expect 3 elements on the stack:
TOS3: global_super
TOS2: type
TOS1: self/cls
The result of LOAD_METHOD_SUPER is the same as LOAD_METHOD; the result of LOAD_ATTR_SUPER is the same as LOAD_ATTR. At runtime both LOAD_METHOD_SUPER and LOAD_ATTR_SUPER check whether `global_super` is `PySuper_Type` to handle situations when `super` is patched. If `global_super` is `PySuper_Type` then a dedicated routine can perform the lookup for the provided `__class__` and `cls/self` without allocating a new `super` instance.
If `global_super` is different from `PySuper_Type` then the runtime will fall back to the original logic using `global_super` and the original number of arguments captured in the immediate. Benchmark results with the patch: 4.381768501014449 and 3.9492998640052974. -- components: Interpreter Core messages: 389114 nosy: v2m priority: normal severity: normal status: open title: Use dedicated opcodes to speed up calls/attribute lookups with super() as receiver versions: Python 3.10 ___ Python tracker <https://bugs.python.org/issue43563> ___
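A rough pure-Python model of what the fast path performs (the helper name `load_method_super` is hypothetical; the real implementation is in C, handles the instance and class receivers, and avoids allocating the `super` object — this sketch covers only instance receivers):

```python
import builtins

def load_method_super(global_super, type_, self_obj, name):
    # Fast path: `super` was not patched, so look the attribute up
    # directly on the MRO past `type_` without building a super object.
    if global_super is builtins.super:
        mro = type(self_obj).__mro__
        for klass in mro[mro.index(type_) + 1:]:
            if name in vars(klass):
                attr = vars(klass)[name]
                if hasattr(attr, "__get__"):
                    # bind descriptors (e.g. plain functions) to the instance
                    return attr.__get__(self_obj, type(self_obj))
                return attr
        raise AttributeError(name)
    # Fallback: a shadowed/patched super keeps its original semantics.
    return getattr(global_super(type_, self_obj), name)

class A:
    def f(self):
        return "A.f"

class B(A):
    def f(self):
        return "B.f"

b = B()
print(load_method_super(super, B, b, "f")())  # resolves past B to A.f
```

The fallback branch is what preserves the patched-`super` scenario mentioned in the discussion.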
[issue42085] Add dedicated slot for sending values
Change by Vladimir Matveev : -- pull_requests: +22267 pull_request: https://github.com/python/cpython/pull/23374 ___ Python tracker <https://bugs.python.org/issue42085> ___
[issue42113] Replace _asyncio.TaskWakeupMethWrapper with PyCFunction
Change by Vladimir Matveev : -- keywords: +patch pull_requests: +21817 stage: -> patch review pull_request: https://github.com/python/cpython/pull/22875 ___ Python tracker <https://bugs.python.org/issue42113> ___
[issue42113] Replace _asyncio.TaskWakeupMethWrapper with PyCFunction
New submission from Vladimir Matveev : `TaskWakeupMethWrapper` looks like a more limited version of `PyCFunction`, so it can be replaced with one. Pros: removes a bunch of code and uses a better calling convention. Cons: the `wakeup` object will now expose slightly more properties, but I'm not sure whether this is bad. -- components: asyncio messages: 379258 nosy: asvetlov, v2m, yselivanov priority: normal severity: normal status: open title: Replace _asyncio.TaskWakeupMethWrapper with PyCFunction versions: Python 3.10 ___ Python tracker <https://bugs.python.org/issue42113> ___
[issue42085] Add dedicated slot for sending values
Change by Vladimir Matveev : -- type: -> performance ___ Python tracker <https://bugs.python.org/issue42085> ___
[issue42085] Add dedicated slot for sending values
Change by Vladimir Matveev : -- keywords: +patch pull_requests: +21739 stage: -> patch review pull_request: https://github.com/python/cpython/pull/22780 ___ Python tracker <https://bugs.python.org/issue42085> ___
[issue42085] Add dedicated slot for sending values
New submission from Vladimir Matveev : https://bugs.python.org/issue41756 introduced PyIter_Send as a common entrypoint for sending values; however, currently the fast path that does not use the StopIteration exception is only available for generators/coroutines. It would be quite nice if this machinery were extensible so that other types (both internal and 3rd party) could opt into the exception-free way of returning values without needing to update the implementation of PyIter_Send. One way of solving this is adding a new slot with a signature that matches PyIter_Send. With it:
- it should be possible to implement this slot for coroutines/generators and remove the special-casing for them in PyIter_Send
- an implementation of this slot can be provided for internal types (i.e. FutureIter in _asynciomodule.c) - results of this experiment can be found below
- external native extensions can provide an efficient implementation of coroutines (i.e. Cython could benefit from it)

Microbenchmark to demonstrate the difference in accessing the value of a fulfilled Future without and with the dedicated slot:

```
import asyncio
import time

N = 1

async def run():
    fut = asyncio.Future()
    fut.set_result(42)
    t0 = time.time()
    for _ in range(N):
        await fut
    t1 = time.time()
    print(f"Time: {t1 - t0} s")

asyncio.run(run())
```

Time: 8.365560054779053 s - without the slot
Time: 5.799655914306641 s - with the slot

-- components: Interpreter Core messages: 378999 nosy: v2m priority: normal severity: normal status: open title: Add dedicated slot for sending values versions: Python 3.10 ___ Python tracker <https://bugs.python.org/issue42085> ___
[issue41756] Do not always use exceptions to return result from coroutine
Change by Vladimir Matveev : -- pull_requests: +21649 stage: resolved -> patch review pull_request: https://github.com/python/cpython/pull/22677 ___ Python tracker <https://bugs.python.org/issue41756> ___
[issue41756] Do not always use exceptions to return result from coroutine
Change by Vladimir Matveev : -- pull_requests: +21639 pull_request: https://github.com/python/cpython/pull/22663 ___ Python tracker <https://bugs.python.org/issue41756> ___
[issue41756] Do not always use exceptions to return result from coroutine
Change by Vladimir Matveev : -- pull_requests: +21473 stage: resolved -> patch review pull_request: https://github.com/python/cpython/pull/22443 ___ Python tracker <https://bugs.python.org/issue41756> ___
[issue41756] Do not always use exceptions to return result from coroutine
Vladimir Matveev added the comment: No, I don't think so, but I can definitely make one. A few questions first:
- having PySendResult as the result type of PyIter_Send seems ok; however, the prefix of each concrete value (PYGEN_*) is not aligned with the prefix of the function itself (PyIter_)
- should it also deal with tstate->c_tracefunc (probably not) or just be something like

```
PySendResult
PyIter_Send(PyObject *iter, PyObject *arg, PyObject **result)
{
    _Py_IDENTIFIER(send);
    assert(result != NULL);
    if (PyGen_CheckExact(iter) || PyCoro_CheckExact(iter)) {
        return PyGen_Send((PyGenObject *)iter, arg, result);
    }
    if (arg == Py_None && Py_TYPE(iter)->tp_iternext != NULL) {
        *result = Py_TYPE(iter)->tp_iternext(iter);
    }
    else {
        *result = _PyObject_CallMethodIdOneArg(iter, &PyId_send, arg);
    }
    if (*result == NULL) {
        if (_PyGen_FetchStopIterationValue(result) == 0) {
            return PYGEN_RETURN;
        }
        return PYGEN_ERROR;
    }
    return PYGEN_NEXT;
}
```

-- ___ Python tracker <https://bugs.python.org/issue41756> ___
[issue41756] Do not always use exceptions to return result from coroutine
Vladimir Matveev added the comment: Serhiy, AFAIR PyIter_Send in my PR appears only as a rename from the placeholder `Name_TBD`, and it was still specific to PyGenObjects. Do you mean something like what was listed in https://bugs.python.org/msg377007 ? -- ___ Python tracker <https://bugs.python.org/issue41756> ___
[issue41756] Do not always use exceptions to return result from coroutine
Vladimir Matveev added the comment: Yes, it should be -- ___ Python tracker <https://bugs.python.org/issue41756> ___
[issue41756] Do not always use exceptions to return result from coroutine
Vladimir Matveev added the comment: Sounds like a good middle ground to start: add `PySendResult` and `PySendResult PyGen_Send(PyGenObject*, PyObject*, PyObject**)` specific to generators and coroutines. Subsequent changes could introduce `PySendResult PyIter_Send(PyObject*, PyObject*, PyObject**)` that would be more generic (call tp_iternext, invoke "send", or maybe in the future use a dedicated slot for the "send" operation so that i.e. asyncio.Future or Cython coroutines could benefit from the same optimization). -- ___ Python tracker <https://bugs.python.org/issue41756> ___
[issue41756] Do not always use exceptions to return result from coroutine
Vladimir Matveev added the comment: so to summarize, the proposed function signature:

```
PySendResult PyIter_Send(PyObject *obj, PyObject *arg, PyObject **result);
```

For generators/coroutines the function will delegate to a specialized implementation that does not raise a StopIteration exception. For types that provide `tp_iternext`, if arg is Py_None the function will invoke `Py_TYPE(obj)->tp_iternext(obj)`. In all other cases the function will try to call the `send` method. Regardless of the case, the function will not raise StopIteration and will always return a status/result pair. Does this sound correct? -- ___ Python tracker <https://bugs.python.org/issue41756> ___
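The summarized contract can be modeled in pure Python (an illustrative sketch only; the real function is part of the C API and works on PyObject pointers):

```python
def iter_send(obj, arg):
    """Model of the proposed PyIter_Send: return a (status, value) pair
    and never let StopIteration escape. Any other exception propagates,
    which corresponds to the PYGEN_ERROR case."""
    try:
        if arg is None and hasattr(type(obj), "__next__"):
            value = next(obj)       # the tp_iternext path
        else:
            value = obj.send(arg)   # the generic "send" path
    except StopIteration as e:
        return ("RETURN", e.value)  # final result, no exception surfaced
    return ("NEXT", value)          # an intermediate yielded value

def gen():
    got = yield 1
    return got * 2

g = gen()
print(iter_send(g, None))   # ('NEXT', 1)
print(iter_send(g, 21))     # ('RETURN', 42)
```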
[issue41756] Do not always use exceptions to return result from coroutine
Vladimir Matveev added the comment: I guess `PyIter_Send` would imply that this function should work for all inputs (like in https://bugs.python.org/msg377007) which also sounds reasonable. -- ___ Python tracker <https://bugs.python.org/issue41756> ___
[issue41756] Do not always use exceptions to return result from coroutine
Vladimir Matveev added the comment: Also, should it be specific to generators/coroutines and accept PyGenObject*, or should it try to handle multiple cases and expose the result for them in a uniform way, i.e.

```
if (PyGen_CheckExact(gen) || PyCoro_CheckExact(gen)) {
    /* use coroutine/generator specific code that avoids raising exceptions */
    *result = ...
    return PYGEN_RETURN;
}
PyObject *ret;
if (arg == Py_None) {
    ret = Py_TYPE(gen)->tp_iternext(gen);
}
else {
    ret = _PyObject_CallMethodIdOneArg(gen, &PyId_send, arg);
}
if (ret != NULL) {
    *result = ret;
    return PYGEN_YIELD;
}
if (_PyGen_FetchStopIterationValue(result) == 0) {
    return PYGEN_RETURN;
}
return PYGEN_ERROR;
```

-- ___ Python tracker <https://bugs.python.org/issue41756> ___
[issue41756] Do not always use exceptions to return result from coroutine
Vladimir Matveev added the comment: If I understand the proposed shape of the API correctly, it was not supposed to return the exception via "result", so the contract for the new `PyGen_Send` function is something like:

Return value | result   | Comment
PYGEN_RETURN | not NULL | Returned value
PYGEN_NEXT   | not NULL | Yielded value
PYGEN_ERROR  | NULL     | Regular PyErr_* functions should be used to work with the error case

-- ___ Python tracker <https://bugs.python.org/issue41756> ___
[issue41756] Do not always use exceptions to return result from coroutine
Change by Vladimir Matveev : -- keywords: +patch pull_requests: +21255 stage: -> patch review pull_request: https://github.com/python/cpython/pull/22196 ___ Python tracker <https://bugs.python.org/issue41756> ___
[issue41756] Do not always use exceptions to return result from coroutine
New submission from Vladimir Matveev : Currently async functions are more expensive to use compared to their sync counterparts. A simple microbenchmark shows that the difference can be quite significant:

```
import time

def f(a):
    if a == 0:
        return 0
    return f(a - 1)

async def g(a):
    if a == 0:
        return 0
    return await g(a - 1)

N = 10
C = 200

t0 = time.time()
for _ in range(N):
    f(C)
t1 = time.time()
for _ in range(N):
    try:
        g(C).send(None)
    except StopIteration:
        pass
t2 = time.time()

print(f"Sync functions: {t1 - t0} s")
print(f"Coroutines: {t2 - t1} s")
```

Results from master on my machine:
Sync functions: 2.8642687797546387 s
Coroutines: 9.172159910202026 s

NOTE: Due to the viral nature of async functions their number in a codebase can become quite significant, so having hundreds of them in a single call stack is not uncommon. One of the reasons for such a performance gap is that async functions always return their results by raising a StopIteration exception, which is not cheap. This can be avoided if, in addition to `_PyGen_Send` always returning the result via an exception, we had another function that allows distinguishing whether the value returned from a generator is a final result (the return case) or a yielded value. In the linked PR I've added the function `_PyGen_SendNoStopIteration` with this behavior and updated ceval.c and _asynciomodule.c to use it instead of `_PyGen_Send`, which resulted in a measurable difference:
Sync functions: 2.8861589431762695 s
Coroutines: 5.730362176895142 s

-- messages: 376698 nosy: v2m, yselivanov priority: normal severity: normal status: open title: Do not always use exceptions to return result from coroutine type: performance versions: Python 3.10 ___ Python tracker <https://bugs.python.org/issue41756> ___
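The StopIteration-based return protocol described above is visible directly from pure Python: driving a coroutine with `send` makes its return value arrive inside the exception instance.

```python
async def tiny():
    return 42

coro = tiny()
try:
    coro.send(None)        # drive the coroutine; it finishes immediately
except StopIteration as exc:
    # the return value rides on the StopIteration instance
    print(exc.value)       # 42
```

Constructing and raising this exception on every awaited call is the per-frame cost that `_PyGen_SendNoStopIteration` avoids.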
[issue35568] Expose the C raise() function in the signal module, for use on Windows
Change by Vladimir Matveev : -- keywords: +patch pull_requests: +10606, 10607, 10608 stage: -> patch review ___ Python tracker <https://bugs.python.org/issue35568> ___
[issue35568] Expose the C raise() function in the signal module, for use on Windows
Change by Vladimir Matveev : -- nosy: +v2m ___ Python tracker <https://bugs.python.org/issue35568> ___
[issue14094] ntpath.realpath() should use GetFinalPathNameByHandle()
Change by Vladimir Matveev : -- keywords: +patch pull_requests: +10488 stage: needs patch -> patch review ___ Python tracker <https://bugs.python.org/issue14094> ___
[issue14094] ntpath.realpath() should use GetFinalPathNameByHandle()
Vladimir Matveev added the comment: I can give it a try. -- nosy: +v2m ___ Python tracker <https://bugs.python.org/issue14094> ___
[issue31446] _winapi.CreateProcess (used by subprocess) is not thread-safe
Change by Vladimir Matveev : -- keywords: +patch pull_requests: +10371 stage: -> patch review ___ Python tracker <https://bugs.python.org/issue31446> ___
[issue23057] [Windows] asyncio: support signal handlers on Windows (feature request)
Change by Vladimir Matveev : -- keywords: +patch pull_requests: +10367 stage: needs patch -> patch review ___ Python tracker <https://bugs.python.org/issue23057> ___
[issue35306] OSError [WinError 123] when testing if pathlib.Path('*') (asterisks) exists
Change by Vladimir Matveev : -- keywords: +patch pull_requests: +10365 stage: -> patch review ___ Python tracker <https://bugs.python.org/issue35306> ___
[issue34872] investigate task/future cancellation in asynciomodule.c
Change by Vladimir Matveev : -- nosy: +v2m ___ Python tracker <https://bugs.python.org/issue34872> ___
[issue34603] ctypes on Windows: error calling C function that returns a struct containing 3 bools
Change by Vladimir Matveev : -- pull_requests: +8843 ___ Python tracker <https://bugs.python.org/issue34603> ___
[issue34688] Segfault in pandas that works fine on 3.7
Vladimir Matveev added the comment: A somewhat shortened repro that fails with the same error on master:

```
import pandas
import numpy

now = pandas.Timestamp.now()
arr = numpy.array([['a', now] for i in range(0, 3)])
arr.sum(0)
```

-- nosy: +v2m ___ Python tracker <https://bugs.python.org/issue34688> ___
[issue34603] ctypes on Windows: error calling C function that returns a struct containing 3 bools
Change by Vladimir Matveev : -- keywords: +patch pull_requests: +8690 stage: -> patch review ___ Python tracker <https://bugs.python.org/issue34603> ___
[issue34603] ctypes on Windows: error calling C function that returns a struct containing 3 bools
Vladimir Matveev added the comment: I think the problem is that the FFI layer assumes the MSVC compiler will pass any structure smaller than 8 bytes in registers, whereas that is not always true: "To be returned by value in RAX, user-defined types must have a length of 1, 2, 4, 8, 16, 32, or 64 bits" (from https://msdn.microsoft.com/en-us/library/7572ztz4.aspx). I have a fix; now adding tests. -- nosy: +v2m ___ Python tracker <https://bugs.python.org/issue34603> ___
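The quoted rule can be expressed as a small predicate (illustrative only; the actual check lives in the ctypes/libffi C sources, and the function name here is hypothetical):

```python
def returned_in_rax(size_bytes):
    # MSVC x64 convention: a POD aggregate comes back in RAX only when
    # its size is exactly 1, 2, 4, or 8 bytes (i.e. 8-64 bits; the
    # smaller bit sizes in the quote all round up to one byte).
    # Anything else - e.g. the 3-byte struct of three bools from this
    # issue - is returned through a hidden memory pointer instead.
    return size_bytes in (1, 2, 4, 8)

print(returned_in_rax(3))  # the 3-bool struct: not register-returned
print(returned_in_rax(8))  # an 8-byte struct: register-returned
```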
[issue34606] Unable to read zip file with extra
Vladimir Matveev added the comment: In this particular case it looks like the crux of the problem was that compression encodes extra fields only if either zip64 is set or the length of the field is larger than a threshold, but decompression always tries to decode them. The attached PR makes decoding conditional on the presence of the zip64 end-of-central-directory record. -- nosy: +v2m ___ Python tracker <https://bugs.python.org/issue34606> ___
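For context, each ZIP extra-field entry is a record of a 2-byte header id, a 2-byte data size, and the data itself; a minimal parser (illustrative, not the zipfile module's actual code) shows the structure whose unconditional decoding caused this issue:

```python
import struct

def parse_extra(extra):
    # Split a raw extra-field blob into (header_id, data) records.
    records = []
    while len(extra) >= 4:
        header_id, size = struct.unpack('<HH', extra[:4])
        records.append((header_id, extra[4:4 + size]))
        extra = extra[4 + size:]
    return records

# A Zip64 extended-information record (header id 0x0001) carrying a
# single 8-byte uncompressed size that does not fit in 32 bits.
blob = struct.pack('<HHQ', 0x0001, 8, 2**32 + 5)
print(parse_extra(blob))  # one record with id 0x0001 and 8 bytes of data
```

A reader that assumes the 0x0001 record is always present (rather than only when zip64 applies) will misparse archives whose writers only emit it past the threshold.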
[issue34486] "RuntimeError: release unlocked lock" when starting a thread
Vladimir Matveev added the comment: To bring in an analogy: C# has a lock statement that allows running a block of code while holding a mutual-exclusion lock on some object.

```
lock(o) {
}
```

is compiled as

```
object _lock = o;
bool _lockTaken = false;
try {
    System.Threading.Monitor.Enter(_lock, out _lockTaken);
    ...
}
finally {
    if (_lockTaken) {
        System.Threading.Monitor.Exit(_lock);
    }
}
```

In C#, System.ThreadAbortException can be raised at an arbitrary point in code, and yet the lock statement needs to enforce the invariant "if the lock is taken it will be released". In order to do so:
- lock acquisition is performed inside the try block; as a side effect it sets the value of '_lockTaken' passed as an out parameter, and these two actions are performed atomically and cannot be interrupted by the asynchronous exception
- the lock is released in the finally block only if it was previously taken. Also, finally blocks in .NET cannot be interrupted by asynchronous exceptions, so the call to Monitor.Exit is guaranteed to run if control flow has entered the matching try block.

I feel that something similar could be used to solve this issue as well. Discussion on issue29988 has already mentioned adding special semantics to __enter__/__exit__ methods or marking bytecode ranges as atomic to make sure they are not interrupted. While the former is specific to with statements, the latter can probably be generalized to support finally blocks as well. -- ___ Python tracker <https://bugs.python.org/issue34486> ___
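A Python rendering of the same pattern (a sketch for illustration; note that unlike .NET, CPython today cannot guarantee that the acquire and the flag assignment are atomic with respect to asynchronous exceptions such as KeyboardInterrupt, which is exactly the gap being discussed):

```python
import threading

lock = threading.Lock()

def run_locked(fn):
    taken = False
    try:
        lock.acquire()
        taken = True           # in .NET these two steps cannot be split
        return fn()
    finally:
        if taken:              # release only what was actually acquired
            lock.release()

print(run_locked(lambda: "done"))
assert not lock.locked()       # the lock is released on the way out
```

An async exception delivered between `lock.acquire()` and `taken = True` would still leak the lock here, which is why the analogy argues for interpreter-level support.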
[issue34602] python3 resource.setrlimit strange behaviour under macOS
Vladimir Matveev added the comment: I can repro it with the given sample file:

```
vladima-mbp $ cat test.c
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/time.h>
#include <sys/resource.h>

int main() {
    struct rlimit rl;
    if (getrlimit(RLIMIT_STACK, &rl) < 0) {
        perror("getrlimit");
        exit(1);
    }
    rl.rlim_cur = rl.rlim_max;
    if (setrlimit(RLIMIT_STACK, &rl) < 0) {
        perror("setrlimit");
        exit(1);
    }
    return 0;
}
vladima-mbp $ gcc -Wl,-stack_size,100 -o test test.c
vladima-mbp $ ./test
setrlimit: Invalid argument
```

Similar settings were added to Python in https://github.com/python/cpython/commit/335ab5b66f4 -- nosy: +v2m ___ Python tracker <https://bugs.python.org/issue34602> ___
[issue34606] Unable to read zip file with extra
Change by Vladimir Matveev : -- pull_requests: +8561 ___ Python tracker <https://bugs.python.org/issue34606> ___
[issue34276] urllib.parse doesn't round-trip file URI's with multiple leading slashes
Vladimir Matveev added the comment: file URI scheme is covered by RFC8089, specifically https://tools.ietf.org/html/rfc8089#appendix-E.3.2. -- nosy: +v2m ___ Python tracker <https://bugs.python.org/issue34276> ___
[issue34486] "RuntimeError: release unlocked lock" when starting a thread
Vladimir Matveev added the comment: I agree. From the code in threading.Condition.wait it looks like if it is interrupted either after calling _release_save but before entering the try block, or in the finally block before calling _acquire_restore, it will leave the lock in a non-acquired state. The first part can in theory be solved if _release_save is moved into the try block and, instead of returning saved_state as a result, it accepts a reference to the saved_state local and sets it in C code. The second part looks more interesting... :) -- nosy: +v2m ___ Python tracker <https://bugs.python.org/issue34486> ___
[issue34200] importlib: python -m test test_pkg -m test_7 fails randomly
Vladimir Matveev added the comment: I've tried to repro this on a Mac, a Windows box and a Windows VM - it works fine in all cases. -- nosy: +v2m ___ Python tracker <https://bugs.python.org/issue34200> ___
[issue6700] inspect.getsource() returns incorrect source lines at the module level
Change by Vladimir Matveev : -- pull_requests: +8338 ___ Python tracker <https://bugs.python.org/issue6700> ___