[Python-Dev] Re: What to do about invalid escape sequences
> When you take a text string and create a string literal to represent
> it, sometimes you have to modify it to become syntactically valid.

Even simpler: use r""" instead of """. The only case where that won't work is when you need actual escape sequences, but I find that very rare in practice for triple-quoted strings.

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/LV5STHINBEREK2Y43OQLFUOBQPN2AXZC/
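A quick illustration of the difference (a standard-Python example, not taken from the original thread):

```python
# In a normal triple-quoted string, "\n" is an escape sequence (a newline),
# so the string spans two lines:
plain = """line1\nline2"""
assert "\n" in plain
assert len(plain.splitlines()) == 2

# In a raw triple-quoted string, backslashes are kept literally,
# so nothing needs to be escaped (handy for regexes, Windows paths, etc.):
raw = r"""line1\nline2"""
assert "\\n" in raw
assert len(raw.splitlines()) == 1
```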
[Python-Dev] Re: Long-term deprecation policy
On 2019-07-17 02:34, Brett Cannon wrote:
> I prefer removal for ease of maintenance (people always want to update
> code even if it's deprecated), and to help make sure people who don't
> read the docs but discover something via the REPL or something and
> don't run with warnings on do not accidentally come to rely on
> something that's deprecated.

I see what you mean, but it doesn't really answer my question. I was asking about a scenario where you deliberately plan a long deprecation period because you know in advance that you cannot remove the functionality soon (because of PEP 384, or because it's used a lot, for example the collections ABCs).
[Python-Dev] Re: Long-term deprecation policy
On 2019-07-16 15:33, Inada Naoki wrote:
>> We currently have a deprecation policy saying that functions
>> deprecated in version N cannot be removed before version N+2. That's
>> a reasonable policy but some deprecation purists insist that it MUST
>> (instead of MAY) be removed in version N+2. Following this reasoning,
>> we cannot deprecate something that we cannot remove.
>
> Really? Any example?

* https://bugs.python.org/issue29548#msg287775
* https://discuss.python.org/t/pendingdeprecationwarning-is-really-useful/1038/10 and following
[Python-Dev] Long-term deprecation policy
I have seen multiple discussions where somebody wants to deprecate a useless function but somebody else complains that we cannot do that because the function in question cannot be removed (because of backwards compatibility). See https://bugs.python.org/issue29548 for an example.

We currently have a deprecation policy saying that functions deprecated in version N cannot be removed before version N+2. That's a reasonable policy, but some deprecation purists insist that it MUST (instead of MAY) be removed in version N+2. Following this reasoning, we cannot deprecate something that we cannot remove.

Personally, I think that this reasoning is flawed: even if we cannot *remove* a function, we can still *deprecate* it. That way, we send a message that the function shouldn't be used anymore. And it makes it easier to remove it in the (far) future: if the function was deprecated for a while, we have a valid reason to remove it. The longer it has been deprecated, the less likely it is to still be used, which makes it easier to remove eventually.

So I suggest embracing such long-term deprecations, where we deprecate something without planning in advance when it will be removed. This is actually how most other open source projects that I know handle deprecations.

I'd like to know the opinion of the Python core devs here.

Cheers,
Jeroen.
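As a minimal sketch of what such a long-term deprecation looks like in practice (an illustrative example; the function names are hypothetical, not from CPython):

```python
import warnings

def new_function():
    return 42

def old_function():
    """Deprecated, but kept indefinitely for backwards compatibility."""
    warnings.warn(
        "old_function() is deprecated; use new_function() instead",
        DeprecationWarning,
        stacklevel=2,
    )
    return new_function()

# The function keeps working, but callers running with warnings enabled
# (e.g. "python -W error") are told to migrate:
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    assert old_function() == 42
assert caught[0].category is DeprecationWarning
```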
[Python-Dev] Re: Keyword arguments with non-string names
I realized something that makes this even more tricky: dicts are mutable. So even if the dict contains only string keys at call time, it could theoretically be changed by the time the keywords are parsed.

So for calling conventions passing dicts, I would leave it to the callee to sanity-check the dict (this is the status quo). For the vectorcall/FASTCALL calling convention, the situation is a lot better: the call arguments are immutable and there are not many places where vectorcall calls are made with keywords. So we could check it on the caller side. I'll try to implement that.

Jeroen.
[Python-Dev] Re: Keyword arguments with non-string names
Thanks for the pointer, but that's more about allowing strings which are not valid identifiers. I'm talking about passing non-strings, and more specifically about the C protocol. For Python functions, non-string keyword arguments are already disallowed, but that's because of the implementation of the "function" object; it's not enforced by the CPython core.

Jeroen.
[Python-Dev] Re: Keyword arguments with non-string names
On 2019-07-09 14:36, Jeroen Demeyer wrote:
> So this leads to the question: should the interpreter check the keys
> of a **kwargs dict?

Some pointers:
- https://bugs.python.org/issue8419
- https://bugs.python.org/issue29360
[Python-Dev] Keyword arguments with non-string names
When passing **kwargs to a callable, the expectation is that kwargs is a dict with string keys. The interpreter enforces that it's a dict, but it does NOT check the types of the keys. It's currently the job of the called function to check that. In some cases, this check is not applied:

    >>> from collections import OrderedDict
    >>> OrderedDict(**{1: 2})
    OrderedDict([(1, 2)])

So this leads to the question: should the interpreter check the keys of a **kwargs dict? I don't have an answer myself; I'm just asking the question because it comes up in https://github.com/python/cpython/pull/13930 and https://github.com/python/cpython/pull/14589

Jeroen.
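For Python-level functions, by contrast, the check does happen (as a follow-up message in this thread notes, it is done by the "function" object's implementation, not by the interpreter core). A quick demonstration:

```python
def f(**kwargs):
    return kwargs

# String keys are accepted as usual:
assert f(**{"a": 1}) == {"a": 1}

# Non-string keys are rejected when the callee is a Python function:
try:
    f(**{1: 2})
except TypeError:
    rejected = True
else:
    rejected = False
assert rejected
```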
[Python-Dev] Re: Change SystemError to NOT inherit from Exception
On 2019-07-02 23:29, Brett Cannon wrote:
> But two, this would be a semantic shift of what classes directly
> inherit from `BaseException`.

It depends how you interpret that. I always interpreted classes inheriting directly from BaseException as exceptions that you almost never want to catch in an "except Exception" block.

> Adding `SystemError` to that list would make it a unique error
> condition that doesn't inherit from `Exception`.

I would argue that the various exception classes inheriting from BaseException are already quite unique: a KeyboardInterrupt is very different from a GeneratorExit, for example.
[Python-Dev] Change SystemError to NOT inherit from Exception
A SystemError is typically raised from C to indicate a serious bug in the application which shouldn't normally be caught and handled. It's used, for example, for NULL arguments where a Python object is expected. So in some sense, SystemError is the Python equivalent of a segmentation fault. Since these exceptions should typically not be handled in a try/except Exception block, I suggest making SystemError inherit directly from BaseException instead of Exception.
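The status quo and the proposed change can be seen directly from the exception hierarchy (standard Python, checking current behaviour):

```python
# Today, SystemError is an ordinary Exception subclass, so a generic
# "except Exception" handler catches it:
assert issubclass(SystemError, Exception)

# The classes that already inherit directly from BaseException are
# exactly the ones a bare "except Exception" deliberately lets through:
for exc in (KeyboardInterrupt, SystemExit, GeneratorExit):
    assert issubclass(exc, BaseException)
    assert not issubclass(exc, Exception)

# The proposal would move SystemError into that second group; with the
# current hierarchy, this handler catches it:
try:
    raise SystemError("NULL argument where an object was expected")
except Exception as e:
    caught = type(e)
assert caught is SystemError
```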
[Python-Dev] Re: PyAPI_FUNC() is needed to private APIs?
On 2019-06-13 18:03, Inada Naoki wrote:
> We don't provide a method calling API which uses the same optimization
> as LOAD_METHOD. It may look like this:
>
>     /* methname is Unicode, nargs > 0, and args[0] is self. */
>     PyObject_VectorCallMethod(PyObject *methname, PyObject **args,
>                               Py_ssize_t nargs, PyObject *kwds)

I agree that this would be useful. Minor nitpick: we spell "Vectorcall" with a lower-case "c". There should also be a _Py_Identifier variant _PyObject_VectorcallMethodId.

The implementation should be like vectorcall_method from Objects/typeobject.c, except that _PyObject_GetMethod should be used instead of lookup_method() (the difference is that the code for special methods like __add__ only looks at the attributes of the type, not the instance).

> (Would you try adding this? Or may I?)

Of course you may. Just let me know if you're working on it.
[Python-Dev] Re: PyAPI_FUNC() is needed to private APIs?
On 2019-06-13 17:11, Steve Dower wrote:
> The cost of that convenience is that we can never optimise internals
> because they are now public API.

I think that the opposite is true, actually: the reason that people access internals is that there is no public API doing what they want. Having more public API should *reduce* the need for accessing internals. For example, _PyObject_GetMethod is not public API, but it's useful functionality. So Cython is forced to reinvent _PyObject_GetMethod (i.e. copy that function verbatim from the CPython sources), which requires accessing internals.
[Python-Dev] Re: PyAPI_FUNC() is needed to private APIs?
On 2019-06-13 15:36, Victor Stinner wrote:
> The main risk is that people start to use it

If people use it, that should be taken as a sign that the function is useful and deserves to be public API.
[Python-Dev] Re: Documenting METH_FASTCALL
On 2019-06-13 14:21, Victor Stinner wrote:
> We may deprecate it and document that VECTORCALL should be preferred.

Not really. Vectorcall and METH_FASTCALL solve different problems on different levels. METH_FASTCALL is used specifically in PyMethodDef, in other words for instances of "method_descriptor" or "builtin_function_or_method". It has no meaning outside of that. Vectorcall is meant for classes implementing callables. It is used for example to implement the *classes* "function", "method", "method_descriptor" and "builtin_function_or_method".

The expectation is that most manually written C extensions will be happy with the functionality exposed by "method_descriptor" and "builtin_function_or_method" and can use METH_FASTCALL. Cython, on the other hand, already has a custom class for callables, and the expectation is that this class will implement vectorcall.
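The distinction between the two levels can be seen from Python: METH_FASTCALL lives in PyMethodDef entries backing the two built-in callable classes, while vectorcall is implemented by callable *classes* themselves (a quick standard-Python illustration):

```python
# The two classes backed by PyMethodDef (and hence METH_FASTCALL):
assert type(len).__name__ == "builtin_function_or_method"
assert type(str.join).__name__ == "method_descriptor"

# Vectorcall, by contrast, is implemented by the classes of callables,
# including plain Python functions and bound methods:
def f():
    pass
assert type(f).__name__ == "function"

class C:
    def m(self):
        pass
assert type(C().m).__name__ == "method"
```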
[Python-Dev] Documenting METH_FASTCALL
Hello,

Has the time come to document METH_FASTCALL? This was introduced in Python 3.6 for positional arguments only and extended in Python 3.7 to also support keyword arguments. No changes have been made since then. The vectorcall protocol (PEP 590) uses a calling convention based on METH_FASTCALL, so I expect METH_FASTCALL to stay as it is. PEP 590 also means that CPython is now, even more than before, optimized for METH_FASTCALL instead of METH_VARARGS.

Whether or not METH_FASTCALL should be added to the limited API is another question. I'm only talking about adding it to the documentation.

Jeroen.
[Python-Dev] Re: Using vectorcall for tp_new and tp_init
On 2019-06-07 20:42, Terry Reedy wrote:
> On 6/7/2019 6:41 AM, Jeroen Demeyer wrote:
>> [...]
>> 5. slot_tp_init calls type(obj).__init__, prepending obj to the args
>> tuple. A new object obj is returned.
>
> My understanding is that the argument obj is just mutated, which is
> one reason why a separate __new__ is needed.

Yes indeed. I just accidentally copy-pasted that sentence :-)
[Python-Dev] Using vectorcall for tp_new and tp_init
Hello,

I'm starting this thread to brainstorm about using vectorcall to speed up creating instances of Python classes. Currently the following happens when creating an instance of a Python class X using X(...), assuming that __new__ and __init__ are Python functions and that the metaclass of X is simply "type":

1. type_call (the tp_call wrapper for "type") is invoked with arguments (X, args, kwargs).
2. type_call calls slot_tp_new with arguments (X, args, kwargs).
3. slot_tp_new calls X.__new__, prepending X to the args tuple. A new object obj is returned.
4. type_call calls slot_tp_init with arguments (obj, args, kwargs).
5. slot_tp_init calls type(obj).__init__, prepending obj to the args tuple.

In the worst case, no less than 6 temporary objects are needed just to pass arguments around (the numbers refer to the steps above):

1. an args tuple and kwargs dict for tp_call;
3. an args array with X prepended and a kwnames tuple for __new__;
5. an args array with obj prepended and a kwnames tuple for __init__.

This is clearly not as efficient as it could be. An obvious solution would be to introduce variants of tp_new and tp_init using the vectorcall protocol. Assuming PY_VECTORCALL_ARGUMENTS_OFFSET is used, all 6 temporary allocations could be dropped. The implementation could be in the form of two new slots, tp_vector_new and tp_vector_init. Since we're just dealing with type slots here (as opposed to offsets in an object structure), this should be easier to implement than PEP 590 itself.

Jeroen.
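In pure Python, the call sequence above corresponds roughly to the following (an explanatory sketch of type_call's semantics, not actual CPython code):

```python
class Meta(type):
    def __call__(cls, *args, **kwargs):
        # Steps 2-3: tp_new, with cls prepended to the arguments.
        obj = cls.__new__(cls, *args, **kwargs)
        # Steps 4-5: tp_init, with obj prepended; __init__ only runs
        # if __new__ actually returned an instance of cls.
        if isinstance(obj, cls):
            type(obj).__init__(obj, *args, **kwargs)
        return obj

class X(metaclass=Meta):
    def __init__(self, value):
        self.value = value

x = X(10)
assert x.value == 10
```

Every positional/keyword argument here travels through the tuple/dict pair of tp_call before being re-packed for __new__ and __init__, which is the overhead the proposal aims to remove.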
Re: [Python-Dev] PEP 590: vectorcall without tp_call
On 2019-05-29 16:00, Christian Heimes wrote:
> You could add a check to PyType_Ready() and have it either return an
> error or fix tp_call.

Yes, but the question is: which of these two alternatives? I would vote for fixing tp_call, but Petr voted for an error.

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PEP 590: vectorcall without tp_call
On 2019-05-29 15:29, Petr Viktorin wrote:
> That sounds like a good idea for PyType_FromSpec.

I don't think we're planning to support vectorcall in PyType_FromSpec for now. That's maybe for 3.9, when vectorcall is no longer provisional.

> For static types I either wouldn't bother at all, or only check in
> debug builds and fail with Py_FatalError.

So basically an assert(...)?
[Python-Dev] PEP 590: vectorcall without tp_call
Hello,

I have one implementation question about vectorcall which is not specified in PEP 590: what should happen if a type implements vectorcall (i.e. _Py_TPFLAGS_HAVE_VECTORCALL is set) but doesn't set tp_call (i.e. tp_call == NULL)? I see the following possibilities:

1. Ignore this problem/assume that it won't happen. This would be bad, since callable(obj) would be False even though obj() would succeed.
2. Raise SystemError.
3. Automatically set tp_call to PyVectorcall_Call.

I would vote for 3, since it's the most user-friendly option. There is also no way it could be wrong: it ensures that tp_call and vectorcall are consistent.

Any opinions?

Jeroen.
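The problem with option 1 can be illustrated at the Python level: callable() only inspects the *type's* call slot (tp_call / __call__), so an object whose type lacks it is reported as not callable (a standard-Python illustration of the slot lookup, not the C scenario itself):

```python
class A:
    pass

a = A()
assert not callable(a)  # type(a) has no __call__ slot

# Filling the slot on the *class* makes existing instances callable,
# and callable() immediately reflects that:
A.__call__ = lambda self: 42
assert callable(a)
assert a() == 42
```

With a vectorcall-only type and tp_call left NULL, callable(obj) would report False even though a call through the vectorcall pointer would succeed; hence the inconsistency.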
[Python-Dev] Missing testcase for bpo-34125
Could somebody please merge https://github.com/python/cpython/pull/13461 ? It adds a missing testcase for bpo-34125. This is testing code which is affected by PEP 590, so missing this test might accidentally break CPython if we screw up implementing PEP 590.

Thanks,
Jeroen.
Re: [Python-Dev] PEP 580/590 discussion
On 2019-05-10 00:07, Petr Viktorin wrote:
> METH_FASTCALL is currently not documented, and it should be renamed
> before it's documented. Names with "fast" or "new" generally don't
> age well.

Just to make sure that we're understanding correctly, is your proposal to do the following:

- remove the name METH_FASTCALL
- remove the calling convention METH_FASTCALL without METH_KEYWORDS
- rename METH_FASTCALL|METH_KEYWORDS -> METH_VECTORCALL
Re: [Python-Dev] PEP 580/590 discussion
On 2019-05-09 20:30, Petr Viktorin wrote:
> But, if you apply the robustness principle to vectorcallfunc, it
> should accept empty tuples.

Sure, if the callee wants to accept empty tuples anyway, it can do that. That's the robustness principle. But us *forcing* the callee to accept empty tuples is certainly not. Basically my point is: with a little bit of effort in CPython, we can make things simpler for all users of vectorcall. Why not do that? Seriously, what's the argument for *not* applying this change?

Jeroen.
Re: [Python-Dev] PEP 580/590 discussion
On 2019-05-09 20:30, Petr Viktorin wrote:
> The underlying C function should not need to know how to extract
> "self" from the function object, or how to handle the argument
> offsetting. Those should be implementation details.

Maybe you misunderstood my proposal. I want to allow both, for extra flexibility:

- METH_FASTCALL (possibly combined with METH_KEYWORDS) continues to work as before. If you don't want to care about the implementation details of vectorcall, this is the right thing to use.
- METH_VECTORCALL (using exactly the vectorcallfunc signature) is a new calling convention for applications that want the lowest possible overhead at the cost of being slightly harder to use.

Personally, I consider the discussion about who is supposed to check that a function returns NULL if and only if an error occurred a tiny detail which shouldn't dictate the design. There are two solutions for this: either we move that check one level up and do it for all vectorcall functions, or we keep the existing checks in place but don't do that check for METH_VECTORCALL (this is already more specialized anyway, so dropping the check doesn't hurt much). We could also decide to enable this check only for debug builds, especially if debug builds are going to be easier to use thanks to Victor Stinner's work.

I see the value in having METH_VECTORCALL equivalent to the existing METH_FASTCALL|METH_KEYWORDS. But why invent a new name for that? METH_FASTCALL|METH_KEYWORDS already works. The alias METH_VECTORCALL could only make things more confusing (having two ways to specify exactly the same thing). Or am I missing something?

Jeroen.
Re: [Python-Dev] PEP 580/590 discussion
On 2019-05-09 23:09, Brett Cannon wrote:
> Any reason the above are all "Vectorcall" and not "VectorCall"? You
> seem to potentially have that capitalization for "PyCall_MakeVectorCall"
> as mentioned below, which seems to be asking for typos if there's going
> to be two ways to do it. :)

"PyCall_MakeVectorCall" is a typo for "PyVectorcall_Call" (https://github.com/python/peps/pull/1037). Everything else uses "Vectorcall" or "VECTORCALL". In text, we use "vectorcall" without a space.
Re: [Python-Dev] PEP 580/590 discussion
On 2019-05-09 20:30, Petr Viktorin wrote:
> ### Making things private
>
> For Python 3.8, the public API should be private, so the API can get
> some contact with the real world. I'd especially like to be able to
> learn from Cython's experience using it. That would mean:
>
> * _PyObject_Vectorcall
> * _PyCall_MakeVectorCall
> * _PyVectorcall_NARGS
> * _METH_VECTORCALL
> * _Py_TPFLAGS_HAVE_VECTORCALL
> * _Py_TPFLAGS_METHOD_DESCRIPTOR

Do we really have to underscore the names? Would there be a way to mark this API as provisional and subject to change without changing the names? If it turns out that PEP 590 was perfect after all, then we're just breaking stuff in Python 3.9 (when removing the underscores) for no reason. Alternatively, could we keep the underscored names as official API in Python 3.9?
[Python-Dev] Stable ABI or not for PyTypeObject?
Hello,

I have a simple question for which there doesn't seem to be a good answer: is the layout of PyTypeObject considered to be part of the stable ABI?

Officially, the answer is certainly "no" (see PEP 384). However, unofficially the answer might be "yes". At least, the last time that an incompatible change was made to PyTypeObject (adding tp_finalize in Python 3.4, PEP 442), care was taken not to break the ABI by using the Py_TPFLAGS_HAVE_FINALIZE flag. There is some discussion about this on https://bugs.python.org/issue32388

The implementation of PEP 590 is going to make another ABI-breaking change. So should we add a new Py_TPFLAGS_HAVE_... flag for that or not?

Jeroen.
Re: [Python-Dev] PEP 580/590 discussion
Hello Petr,

Thanks for your time. I suggest that you (or somebody else) officially reject PEP 580. I'll start working on reformulating PEP 590, adding some elements from PEP 580. At the same time, I'll work on the implementation of PEP 590. I want to implement Mark's idea of having a separate wrapper for each old-style calling convention.

In the meantime, we can continue the discussion about the details, for example whether to store the flags inside the instance (I don't have an answer for that right now; I'll need to think about it).

Petr, did you discuss with the Steering Council? It would be good to have some kind of pre-approval that PEP 590 and its implementation will be accepted. I want to work on PEP 590, but I'm not the right person to "defend" it (I know that it's worse in some ways than PEP 580).

Jeroen.
Re: [Python-Dev] PEP 580/590 discussion
On 2019-05-06 00:04, Petr Viktorin wrote:
> - Single bound method class for all kinds of function classes: This
>   would be a cleaner design, yes, but I don't see a pressing need.
>   As PEP 579 says, "this is a compounding issue", not a goal. As I
>   recall, that is the only major reason for CCALL_DEFARG.

Just a minor correction here: I guess that you mean CCALL_SELFARG. The flag CCALL_DEFARG is for passing the PyCCallDef* in PEP 580, which is mostly equivalent to passing the callable object in PEP 590.

The signature of PEP 580 is

    func(const PyCCallDef *def, PyObject *self,
         PyObject *const *args, Py_ssize_t nargs, PyObject *kwnames)

and with PEP 590 it is

    func(PyObject *callable,
         PyObject *const *args, Py_ssize_t nargs, PyObject *kwnames)

with the additional special role for the PY_VECTORCALL_ARGUMENTS_OFFSET bit (which is meant to solve the problem of "self" in a different way).
Re: [Python-Dev] Please merge : bpo-34848
On 2019-05-03 14:24, Victor Stinner wrote:
> Hi Srinivas, I merged your obvious doc fix, thanks.

Can you please do the same for https://github.com/python/cpython/pull/12784 ?
[Python-Dev] PEP 580/590 proposals summary
Hello all,

If we want to have a chance of implementing PEP 580/590 in Python 3.8, we shouldn't wait too long to make a decision on which proposal to accept. As a summary, below I'll list the four proposals together with a star "score" for 3 criteria (there is no obvious best proposal, all have advantages and disadvantages):

- complexity: more stars means a protocol which is simpler to document and understand.
- implementation: more stars means a simpler implementation of CPython (not just of the protocol itself, but also the code using the protocol).
- performance: more stars means better performance for *existing* code. I'm using a minimum of 3 stars here, since the difference is not that big between the proposals.

Criteria that I am NOT considering:

- The performance for *new* code or the performance of wrappers generated by Argument Clinic: all proposals score excellently here.
- Complexity of implementations of external classes: this is hard to judge, since it depends a lot on what those external classes (outside of CPython) want to do.
- The work to implement the proposal in CPython: this is a one-time-only thing that I'm volunteering to do anyway.
- Extensibility of the protocol: first of all, it's hard to define what this means exactly. Second, using Petr's idea of putting the flags inside the instance, every proposal becomes extensible at little cost.

Proposals:

(A) PEP 580
    complexity: *
    implementation: *
    performance: *

(B) compromise: PEP 580 with a single calling convention
    complexity: ***
    implementation:
    performance:

(C) PEP 590 with a single bound method class
    complexity: *
    implementation: ***
    performance: ***

(D) PEP 590
    complexity: *
    implementation: *
    performance:

I consider Petr's proposal (a more extensible variant of PEP 590 with flags inside the instance) a minor variation of PEP 590 for this purpose, and there is no need to score it differently than "plain" PEP 590.

I tried to make this as unbiased as possible, even though I must admit that this is not really possible. I'm considering not just the PEPs and the existing implementations as written, but also ideas that haven't been implemented yet, such as:

- proposals (A)-(C): rationalization of classes, in particular having a single class for bound methods (just like in PyPy).
- proposals (B)-(D): Mark Shannon's idea of having a dedicated vectorcall wrapper for each calling convention (one for METH_O, one for METH_VARARGS|METH_KEYWORDS, ...).
- using the protocol also for slot wrappers like object.__eq__

I'm NOT considering Petr's proposal of removing support for other calling conventions like METH_VARARGS, because that won't happen any time soon.

Cheers,
Jeroen
Re: [Python-Dev] PEP 580 and PEP 590 comparison.
On 2019-04-27 11:26, Mark Shannon wrote:
> Performance improvements include, but aren't limited to:
>
> 1. Much faster calls to common classes: range(), set(), type(),
>    list(), etc.

That's not specific to PEP 590. It can be done with any proposal. I know that there is the ABI issue with PEP 580, but that's not as big a problem as you seem to think (see my last e-mail).

> 2. Modifying argument clinic to produce C functions compatible with
>    vectorcall, allowing the interpreter to call the C function
>    directly, with no additional overhead beyond the vectorcall call
>    sequence.

This is a very good point. Doing this will certainly reduce the overhead of PEP 590 over PEP 580.

> 3. Customization of the C code for function objects depending on the
>    Python code. This would probably be limited to treating closures
>    and generator functions differently, but optimizing other aspects
>    of the Python function call is possible.

I'm not entirely sure what you mean, but I'm pretty sure that it's not specific to PEP 590.

Jeroen.
Re: [Python-Dev] PEP 580 and PEP 590 comparison.
On 2019-04-27 11:26, Mark Shannon wrote: Specifically, and this is important, PEP 580 cannot implement efficient calls to class objects without breaking the ABI. First of all, the layout of PyTypeObject isn't actually part of the stable ABI (see PEP 384). So we wouldn't be breaking anything by extending PyTypeObject. Second, even if you don't buy this argument and you really think that we should guarantee ABI compatibility, we can still solve that in PEP 580 by special-casing instances of "type". Sure, that's an annoyance, but it's not a fundamental obstacle. Jeroen.
Re: [Python-Dev] PEP 580/590 discussion
On 2019-04-27 14:07, Mark Shannon wrote: class D(C): __call__(self, ...): ... and then create an instance `d = D()`; then calling d will have two contradictory behaviours: the one installed by C in the function pointer and the one specified by D.__call__ It's true that the function pointer in D will be wrong, but it's also irrelevant, since the function pointer won't be used: class D won't have the flag Py_TPFLAGS_HAVE_CCALL/Py_TPFLAGS_HAVE_VECTORCALL set.
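A small Python-level analogue may make the argument concrete. This is only a sketch of the dispatch logic (the names `_have_vectorcall` and `_vectorcall` are invented here; the real mechanism lives in C): because the per-class flag is cleared when a subclass defines __call__, the stale function pointer is simply never consulted.

```python
# Model: a per-class flag (analogue of Py_TPFLAGS_HAVE_VECTORCALL)
# selects the fast path; a subclass overriding __call__ does not
# inherit the flag, so its inherited "function pointer" is dead code.

class C:
    _have_vectorcall = True   # flag set: fast path is valid for C

    def _vectorcall(self):    # analogue of the installed function pointer
        return "fast path"

    def __call__(self):
        return "tp_call path"

class D(C):
    _have_vectorcall = False  # flag cleared because D overrides __call__

    def __call__(self):
        return "D.__call__"

def call(obj):
    # analogue of the interpreter's dispatch
    if type(obj)._have_vectorcall:
        return obj._vectorcall()
    return obj()

assert call(C()) == "fast path"
assert call(D()) == "D.__call__"  # the stale pointer is never used
```

The point is that correctness depends only on the flag not being inherited, not on the pointer itself being fixed up.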
Re: [Python-Dev] PEP 590 discussion
On 2019-04-25 23:11, Petr Viktorin wrote: My thoughts are not the roadmap, of course :) I asked about methods because we should be aware of the consequences when choosing between PEP 580 and PEP 590 (or some compromise). There are basically 3 different ways of dealing with bound methods:
(A) put methods inside the protocol. This is PEP 580 and my 580/590 compromise proposal. The disadvantage here is complexity in the protocol.
(B) don't put methods inside the protocol and use a single generic method class types.MethodType. This is the status quo for Python functions. It has the disadvantage of being slightly slower: there is an additional level of indirection when calling a bound method object.
(C) don't put methods inside the protocol but use multiple method classes, one for every function class. This is the status quo for functions implemented in C. This has the disadvantage of code duplication.
I think that the choice between PEP 580 and 590 should be made together with a choice of one of the above options. For example, I really don't like the code duplication of (C), so I would prefer PEP 590 with (B) over PEP 590 with (C).
Re: [Python-Dev] PEP 580/590 discussion
Hello, after reading the various comments and thinking about it more, let me propose a real compromise between PEP 580 and PEP 590. My proposal is: take the general framework of PEP 580 but support only a single calling convention like PEP 590. The single calling convention supported would be what is currently specified by the flag combination CCALL_DEFARG|CCALL_FASTCALL|CCALL_KEYWORDS. This way, the flags CCALL_VARARGS, CCALL_FASTCALL, CCALL_O, CCALL_NOARGS, CCALL_KEYWORDS, CCALL_DEFARG can be dropped. This calling convention is very similar to the calling convention of PEP 590, except that: - the callable is replaced by a pointer to a PyCCallDef (the structure from PEP 580, but possibly without cc_parent) - there is a self argument like PEP 580. This implies support for the CCALL_SELFARG flag from PEP 580 and no support for the PY_VECTORCALL_ARGUMENTS_OFFSET trick of PEP 590. Background: I added support for all those calling conventions in PEP 580 because I didn't want to make any compromise regarding performance. When writing PEP 580, I assumed that any kind of performance regression would be a reason to reject PEP 580. However, it seems now that you're willing to accept PEP 590 instead which does introduce performance regressions in certain code paths. So that suggests that we could keep the good parts of PEP 580 but reduce its complexity by having a single calling convention like PEP 590. If you compare this compromise to PEP 590, the main difference is dealing with bound methods. Personally, I really like the idea of having a *single* bound method class which would be used by all kinds of function classes without any loss of performance (not only in CPython itself, but also by Cython and other C extensions). To support that, we need something like the PyCCallRoot structure from PEP 580, together with the special handling for self. 
About cc_parent and CCALL_OBJCLASS: I prefer to keep those because they allow merging the classes for bare functions (not inside a class) and unbound methods (functions inside a class). Concretely, that could reduce code duplication between builtin_function_or_method and method_descriptor. But I'm also fine with removing cc_parent and CCALL_OBJCLASS. In any case, we can decide that later. What do you think? Jeroen.
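The shape of the proposed single calling convention can be sketched at the Python level. This is an assumption-laden model, not the real C API: the record name `CCallDef`, the field names, and the flag constant are taken loosely from PEP 580, and the call signature mirrors the proposal above (every callee receives its def record, an explicit self, the argument vector, and keyword names).

```python
# Model of the compromise: one CCallDef-like record per callee, and a
# single code path (analogue of PyCCall_FastCall) for every call.
from dataclasses import dataclass
from typing import Callable

CCALL_SELFARG = 1  # hypothetical flag constant, for illustration only

@dataclass(frozen=True)
class CCallDef:
    func: Callable  # analogue of the C function pointer
    flags: int = 0

def ccall(def_, self_obj, args, kwnames=None):
    # One dispatch path for every callee, regardless of its class.
    return def_.func(def_, self_obj, args, kwnames)

# A toy callee written against the single convention:
def concat(def_, self_obj, args, kwnames):
    return (self_obj, *args)

d = CCallDef(concat, CCALL_SELFARG)
assert ccall(d, "self", (1, 2)) == ("self", 1, 2)
```

The design point being illustrated: because every callee uses the same signature, callers (and a generic bound method class) need no per-class knowledge at all.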
Re: [Python-Dev] PEP 580/590 discussion
On 2019-04-25 00:24, Petr Viktorin wrote: I believe we can achieve that by having PEP 590's (o+offset) point not just to a function pointer, but to a {function pointer; flags} struct with flags defined for two optimizations: What's the rationale for putting the flags in the instance? Do you expect the flags to be different between one instance and another instance of the same class? Both type flags and nargs bits are very limited resources. Type flags are only a limited resource if you think that all flags ever added to a type must be put into tp_flags. There is nothing wrong with adding new fields tp_extraflags or tp_vectorcall_flags to a type. What I don't like about it is that it has the extensions built-in; mandatory for all callers/callees. I don't agree with the above sentence about PEP 580:
- callers should use APIs like PyCCall_FastCall() and shouldn't need to worry about the implementation details at all.
- callees can opt out of all the extensions by not setting any special flags and setting cr_self to a non-NULL value. When using the flags CCALL_FASTCALL | CCALL_KEYWORDS, implementing the callee is exactly the same as in PEP 590.
As in PEP 590, any class that uses this mechanism shall not be usable as a base class. Can we please lift this restriction? There is really no reason for it. I'm not aware of any similar restriction anywhere in CPython. Note that allowing subclassing is not the same as inheriting the protocol. As a compromise, we could simply never inherit the protocol. Jeroen.
Re: [Python-Dev] PEP 590 discussion
On 2019-04-25 00:24, Petr Viktorin wrote: PEP 590 defines a new simple/fast protocol for its users, and instead of making existing complexity faster and easier to use, it's left to be deprecated/phased out (or kept in existing classes for backwards compatibility). It makes it possible for future code to be faster/simpler. Can you elaborate on what you mean by this deprecating/phasing out? What's your view on dealing with method classes (not necessarily right now, but in the future)? Do you think that having separate method classes like method-wrapper (for example [].__add__) is good or bad? Since the way PEP 580 and PEP 590 deal with bound method classes is very different, I would like to know the roadmap for this. Jeroen.
Re: [Python-Dev] Use C extensions compiled in release mode on a Python compiled in debug mode
On 2019-04-24 01:44, Victor Stinner wrote: I would like to be able to run C extensions compiled in release mode on a Python compiled in debug mode That seems like a very good idea. I would certainly use the debug mode while developing CPython or C extensions.
Re: [Python-Dev] PEP 590 discussion
On 2019-04-03 07:33, Jeroen Demeyer wrote: Access to the class isn't possible currently and also not with PEP 590. But it's easy enough to fix that: PEP 573 adds a new METH_METHOD flag to change the signature of the C function (not the vectorcall wrapper). PEP 580 supports this "out of the box" because I'm reusing the class also to do type checks. But this shouldn't be an argument for or against either PEP. Actually, in the answer above I only considered "is implementing PEP 573 possible?" but I did not consider the complexity of doing that. And in line with what I claimed about complexity before, I think that PEP 580 scores better in this regard. Take PEP 580 and assume for the sake of argument that it didn't already have the cc_parent field. Then adding support for PEP 573 is easy: just add the cc_parent field to the C call protocol structure and set that field when initializing a method_descriptor. C functions can use the METH_DEFARG flag to get access to the PyCCallDef structure, which gives cc_parent. Implementing PEP 573 for a custom function class takes no extra effort: it doesn't require any changes to that class, except for correctly initializing the cc_parent field. Since PEP 580 has built-in support for methods, nothing special needs to be done to support methods too. With PEP 590 on the other hand, every single class which is involved in PEP 573 must be changed and every single vectorcall wrapper supporting PEP 573 must be changed. This is not limited to the function class itself; the corresponding method class (for example, builtin_function_or_method for method_descriptor) also needs to be changed. Jeroen
Re: [Python-Dev] PEP 590 discussion
On 2019-04-14 13:30, Mark Shannon wrote: PY_VECTORCALL_ARGUMENTS_OFFSET exists so that callables that make onward calls with an additional argument can do so efficiently. The obvious example is bound-methods, but classes are at least as important. cls(*args) -> cls.__new__(cls, *args) -> cls.__init__(self, *args) But tp_new and tp_init take the "cls" and "self" as separate arguments, not as part of *args. So I don't see why you need PY_VECTORCALL_ARGUMENTS_OFFSET for this. The updated minimal implementation now uses `const` arguments. Code that uses args[-1] must explicitly cast away the const. https://github.com/markshannon/cpython/blob/vectorcall-minimal/Objects/classobject.c#L55 That's better indeed. Jeroen.
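For readers unfamiliar with the trick being discussed, here is a Python-level model of the PY_VECTORCALL_ARGUMENTS_OFFSET idea (the names and buffer layout here are illustrative, not the C API): the caller guarantees one scratch slot just before the argument vector, so a forwarder such as a bound method can prepend `self` in place instead of allocating and copying a new argument array.

```python
# Model: the caller allocates [scratch, arg1, arg2, ...] and passes the
# index where the real arguments start; a bound-method forwarder writes
# self into the scratch slot and makes the onward call with zero copies.

def underlying_func(args):
    # stands in for the wrapped C function receiving (self, *args)
    return list(args)

def bound_method_call(self_obj, argbuf, start):
    # argbuf[start - 1] is guaranteed scratch space by the caller
    argbuf[start - 1] = self_obj
    return underlying_func(argbuf[start - 1:])

buf = [None, "a", "b"]  # slot 0 is the scratch slot
assert bound_method_call("self", buf, 1) == ["self", "a", "b"]
```

In CPython the same idea is expressed by setting a high bit in the nargs word and (as noted above) casting away the `const` on the argument vector before writing to args[-1].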
Re: [Python-Dev] PEP 580 and PEP 590 comparison.
On 2019-04-14 13:34, Mark Shannon wrote: I'll address capability first. I don't think that comparing "capability" makes a lot of sense, since neither PEP 580 nor PEP 590 adds any new capabilities to CPython. They are meant to allow doing things faster, not to allow more things. And yes, the C call protocol can be implemented on top of the vectorcall protocol and conversely, but that doesn't mean much. Now performance. Currently the PEP 590 implementation is intentionally minimal. It does nothing for performance. So we're missing some information here. What kind of performance improvements are possible with PEP 590 which are not in the reference implementation? The benchmark Jeroen provides is a micro-benchmark that calls the same functions repeatedly. This is trivial and unrealistic. Well, it depends what you want to measure... I'm trying to measure precisely the thing that makes PEP 580 and PEP 590 different from the status quo, so in that sense those benchmarks are very relevant. I think that the following 3 statements are objectively true:
(A) Both PEP 580 and PEP 590 add a new calling convention which is just as fast as builtin functions (and hence faster than tp_call).
(B) Both PEP 580 and PEP 590 keep roughly the same performance as the status quo for existing function/method calls.
(C) While the performance of PEP 580 and PEP 590 is roughly the same, PEP 580 is slightly faster (based on the reference implementations linked from PEP 580 and PEP 590).
Two caveats concerning (C):
- the difference may be too small to matter. Relatively, it's a few percent of the call time, but in absolute numbers it's less than 10 CPU clock cycles.
- there might be possible improvements to the reference implementation of either PEP 580 or PEP 590. I don't expect big differences though.
To repeat an example from an earlier email, which may have been overlooked, this code reduces the time to create ranges and small lists by about 30%. That's just a special case of the general fact (A) above, using the new calling convention for "type". It's an argument in favor of both PEP 580 and PEP 590, not of PEP 590 specifically. Jeroen.
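The kind of micro-benchmark the thread keeps referring to can be reproduced with a few lines of `timeit`. This is a sketch of the measurement methodology only: the absolute numbers on any one machine mean little, and the ~30% figure quoted above comes from comparing two interpreter builds (master vs. a patched branch), which this snippet by itself cannot do.

```python
# Time repeated calls to common classes; to evaluate PEP 580/590,
# the same script would be run on master and on the patched build
# and the per-call times compared.
import timeit

t_range = timeit.timeit("range(10)", number=100_000)
t_list = timeit.timeit("list((1, 2, 3))", number=100_000)
print(f"range(10):       {t_range:.4f}s for 100k calls")
print(f"list((1, 2, 3)): {t_list:.4f}s for 100k calls")
```

Such loops are exactly the "calls the same functions repeatedly" pattern Mark criticizes; Jeroen's reply is that isolating the call overhead is the point.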
[Python-Dev] Removing PID check from signal handler
The signal handler (that receives signals from the OS) in Python starts with a check: if (getpid() == main_pid) Looking at the comments, the intent was to check for the main *thread*, but this is checking the *process* id. So this condition is basically always true. Therefore, I suggest removing it in https://bugs.python.org/issue36601 If you have any objections or comments, I suggest posting them to that bpo. Jeroen.
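The underlying fact is easy to demonstrate: getpid() identifies the *process*, so every thread in the process sees the same value, which is why the check above can never distinguish the main thread.

```python
# getpid() is per-process, not per-thread: a worker thread observes
# exactly the same PID as the main thread, so `getpid() == main_pid`
# is always true within one process.
import os
import threading

main_pid = os.getpid()
pids = []
t = threading.Thread(target=lambda: pids.append(os.getpid()))
t.start()
t.join()
assert pids[0] == main_pid
print("worker thread PID equals main PID:", pids[0] == main_pid)
```

(A per-thread identity would instead come from something like threading.get_ident().)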
Re: [Python-Dev] PEP 590 discussion
Petr, I realize that you are in a difficult position. You'll end up disappointing either me or Mark... I don't know if the steering council or somebody else has a good idea to deal with this situation. Jeroen has time Speaking of time, maybe I should clarify that I have time until the end of August: I am working for the OpenDreamKit grant, which allows me to work basically full-time on open source software development but that ends at the end of August. Here again, I mostly want to know if the details are there for deeper reasons, or just points to polish. I would say: mostly shallow details. The subclassing thing would be good to resolve, but I don't see any difference between PEP 580 and PEP 590 there. In PEP 580, I wrote a strategy for dealing with subclassing. I believe that it works and that exactly the same idea would work for PEP 590 too. Of course, I may be overlooking something... I don't have good general experience with premature extensibility, so I'd not count this as a plus. Fair enough. I also see it more as a "nice to have", not as a big plus.
Re: [Python-Dev] PEP 590 discussion
On 2019-04-10 18:25, Petr Viktorin wrote: Hello! I've had time for a more thorough reading of PEP 590 and the reference implementation. Thank you for the work! And thank you for the review! I'd now describe the fundamental difference between PEP 580 and PEP 590 as:
- PEP 580 tries to optimize all existing calling conventions
- PEP 590 tries to optimize (and expose) the most general calling convention (i.e. fastcall)
And PEP 580 has better performance overall, even for METH_FASTCALL. See this thread: https://mail.python.org/pipermail/python-dev/2019-April/156954.html Since these PEPs are all about performance, I consider this a very relevant argument in favor of PEP 580. PEP 580 also does a number of other things, as listed in PEP 579. But I think PEP 590 does not block future PEPs for the other items. On the other hand, PEP 580 has a much more mature implementation -- and that's where it picked up real-world complexity. About complexity, please read what I wrote in https://mail.python.org/pipermail/python-dev/2019-March/156853.html I claim that the complexity in the protocol of PEP 580 is a good thing, as it removes complexity from other places, in particular from the users of the protocol (better to have a complex protocol that's simple to use than a simple protocol that's complex to use). As a more concrete example of the simplicity that PEP 580 could bring, CPython currently has 2 classes for bound methods implemented in C:
- "builtin_function_or_method" for normal C methods
- "method-wrapper" for slot wrappers like __eq__ or __add__
With PEP 590, these classes would need to stay separate to get maximal performance. With PEP 580, just one class for bound methods would be sufficient and there wouldn't be any performance loss. And this extends to custom third-party function/method classes, for example as implemented by Cython. PEP 590's METH_VECTORCALL is designed to handle all existing use cases, rather than mirroring the existing METH_* varieties.
But both PEPs require the callable's code to be modified, so requiring it to switch calling conventions shouldn't be a problem. Agreed. Jeroen's analysis from https://mail.python.org/pipermail/python-dev/2018-July/154238.html seems to miss a step at the top:
a. CALL_FUNCTION* / CALL_METHOD opcode calls
b. _PyObject_FastCallKeywords(), which calls
c. _PyCFunction_FastCallKeywords(), which calls
d. _PyMethodDef_RawFastCallKeywords(), which calls
e. the actual C function (*ml_meth)()
I think it's more useful to say that both PEPs bridge a->e (via _Py_VectorCall or PyCCall_Call). Not quite. For a builtin_function_or_method, we have with PEP 580:
a. call_function() calls
d. PyCCall_FastCall(), which calls
e. the actual C function
and with PEP 590 it's more like:
a. call_function() calls
c. _PyCFunction_FastCallKeywords(), which calls
d. _PyMethodDef_RawFastCallKeywords(), which calls
e. the actual C function
Level c. above is the vectorcall wrapper, which is a level that PEP 580 doesn't have. The way `const` is handled in the function signatures strikes me as too fragile for public API. That's a detail which shouldn't influence the acceptance of either PEP. Why not have a per-type pointer, and for types that need it (like PyTypeObject), make it dispatch to an instance-specific function? That would be exactly https://bugs.python.org/issue29259 I'll let Mark comment on this. Minor things:
- "Continued prohibition of callable classes as base classes" -- this section reads as final. Would you be OK wording this as something other PEPs can tackle?
- "PyObject_VectorCall" -- this looks extraneous, and the reference implementation doesn't need it so far. Can it be removed, or justified?
- METH_VECTORCALL is *not* strictly "equivalent to the currently undocumented METH_FASTCALL | METH_KEYWORDS flags" (it has the ARGUMENTS_OFFSET complication).
- I'd like to officially call this PEP "Vectorcall", see https://github.com/python/peps/pull/984
Those are indeed details which shouldn't influence the acceptance of either PEP. If you go with PEP 590, then we should discuss this further. Mark, what are your plans for next steps with PEP 590? If a volunteer wanted to help you push this forward, what would be the best thing to work on? Personally, I think what we need now is a decision between PEP 580 and PEP 590 (there is still the possibility of rejecting both, but I really hope that this won't happen). There is a lot of work that still needs to be done after either PEP is accepted, such as:
- finish and merge the reference implementation
- document everything
- use the protocol in more classes where it makes sense (for example, staticmethod, wrapper_descriptor)
- use this in Cython
- handle more issues from PEP 579
I volunteer to put my time into this, regardless of which PEP is accepted. Of course, I still think that PEP 580 is better, but I also want this functionality
Re: [Python-Dev] Deprecating "instance method" class
On 2019-04-08 17:08, Robert White wrote: So we're making pretty heavy use of PyInstanceMethod_New in our python binding library that we've written for a bunch of in house tools. If this isn't the best / correct way to go about adding methods to objects, what should we be using instead? First of all, the consensus in this thread is not to deprecate instancemethod. Second, it depends what you mean by "adding methods to objects"; that's vaguely formulated. Do you mean adding methods at run-time (a.k.a. monkey-patching) to a pre-existing class? And is the process of adding methods done in C or in Python? Do you only need PyInstanceMethod_New() or also other PyInstanceMethod_XXX functions/macros?
Re: [Python-Dev] Deprecating "instance method" class
On 2019-04-07 09:48, Serhiy Storchaka wrote: total_ordering monkeypatches the decorated class. I'm planning to implement in C methods that implement __gt__ in terms of __lt__ etc. Yes, I understood that. I'm just saying: if you want to make it fast, that's not the best solution. The fastest would be to implement tp_richcompare from scratch (instead of relying on slot_tp_richcompare dispatching to methods).
Re: [Python-Dev] Deprecating "instance method" class
On 2019-04-05 21:58, Brett Cannon wrote: Then we can consider improving the documentation if there are performance implications. Sure, we could write in the docs something like "Don't use this, this is not what you want. It's slow and there are better alternatives like method descriptors". Should I do that (with better wording of course)? since we don't have nearly as good of a deprecation setup as we do in Python code. I don't get this. One can easily raise a DeprecationWarning from C code, there is plenty of code already doing that. Jeroen.
Re: [Python-Dev] Deprecating "instance method" class
On 2019-04-05 17:46, Guido van Rossum wrote: This API is doing no harm, it's not a maintenance burden What if the following happens? 1. For some reason (possibly because of this thread), people discover instancemethod and start using it. 2. People realize that it's slow. 3. It needs to be made more efficient, causing new code bloat and maintenance burden. clearly *some* folks have a use for it. I'm not convinced. I don't think that instancemethod is the right solution for functools.total_ordering for example. Jeroen.
Re: [Python-Dev] Deprecating "instance method" class
On 2019-04-05 19:53, Serhiy Storchaka wrote: At Python level we can monkeypatch __gt__, but not tp_richcompare. Sure, but you're planning to use C anyway so that's not really an argument.
Re: [Python-Dev] Deprecating "instance method" class
On 2019-04-05 15:13, Serhiy Storchaka wrote: It is easy to implement a function in C. Why does it need to be a PyCFunction? You could put an actual method descriptor in the class. In other words, use PyDescr_NewMethod() instead of PyCFunction_New() + PyInstanceMethod_New(). It's probably going to be faster too, since the instancemethod adds an unoptimized extra level of indirection. Yes, this is what I want to do. I did not do this only because implementing method-like functions which do not belong to a concrete class implemented in C is not conventional. Sure, you could implement separate methods like __gt__ in C, but that's still less efficient than just implementing a specific tp_richcompare for total_ordering and then having the usual wrapper descriptors for __gt__.
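For readers who have never met the "instance method" object: its behaviour can be modelled at the Python level in a few lines. This is only an illustrative analogue (the real object is implemented in C); it shows both what PyInstanceMethod_New provides, binding any callable to an instance the way a plain function binds, and where the extra level of indirection per call comes from.

```python
# Python-level model of the instancemethod wrapper: a non-data
# descriptor that binds an arbitrary callable to the instance.
import types

class instancemethod:
    def __init__(self, func):
        self.func = func

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self.func
        # Each attribute access builds a bound method around self.func:
        # this wrapper layer is the indirection discussed above.
        return types.MethodType(self.func, obj)

class Adder:
    # binds even though a lambda stored this way would bind anyway;
    # the C-level object exists for callables that do NOT bind natively
    add = instancemethod(lambda self, x: self.base + x)

    def __init__(self, base):
        self.base = base

a = Adder(10)
assert a.add(5) == 15
```

A real method descriptor created with PyDescr_NewMethod() skips this wrapper layer, which is why it is suggested as the faster alternative.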
Re: [Python-Dev] Deprecating "instance method" class
On 2019-04-05 14:10, Serhiy Storchaka wrote: it can be used to implement accelerated versions of separate methods instead of the whole class. Could you elaborate? I'm curious what you mean. I'm going to use it to further optimize total_ordering. There are so many ways in which total_ordering is inefficient. If you really want it to be efficient, you should just implement it in C.
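For context, this is what functools.total_ordering does: it synthesizes the missing rich-comparison methods from __lt__ (or another ordering method) and __eq__. Each synthesized comparison goes through an extra Python-level function call, which is the inefficiency being discussed.

```python
# total_ordering derives __gt__, __ge__, __le__ from __lt__ and __eq__;
# the derived methods are ordinary Python functions monkey-patched onto
# the class, so each call pays an extra dispatch compared to a native
# tp_richcompare implementation.
from functools import total_ordering

@total_ordering
class Version:
    def __init__(self, n):
        self.n = n

    def __eq__(self, other):
        return self.n == other.n

    def __lt__(self, other):
        return self.n < other.n

assert Version(2) > Version(1)   # __gt__ was synthesized from __lt__/__eq__
assert Version(1) <= Version(2)  # __le__ likewise
```

Implementing the derived comparisons in C (or, as suggested above, a dedicated tp_richcompare) removes that per-call Python-level dispatch.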
Re: [Python-Dev] Deprecating "instance method" class
On 2019-04-05 00:57, Greg Ewing wrote: If it's designed for use by things outside of CPython, how can you be sure nothing is using it? Of course I'm not sure. However: 1. So far, nobody in this thread knows of any code using it. 2. So far, nobody in this thread knows any use case for it. And if we end up deprecating it and that turns out to be a mistake, we can easily revert the deprecation.
Re: [Python-Dev] Deprecating "instance method" class
On 2019-04-04 14:09, Christian Heimes wrote: I couldn't find any current code that uses PyInstanceMethod_New. Let's deprecate the feature and schedule it for removal in 3.10. Done at https://github.com/python/cpython/pull/12685
[Python-Dev] Deprecating "instance method" class
During my investigations related to low-level function/method classes, I came across the "instance method" class. There is a C API for it: https://docs.python.org/3.7/c-api/method.html However, it's not used/exposed anywhere in CPython, except as _testcapi.instancemethod (for testing its functionality). This class was discussed at https://mail.python.org/pipermail/python-3000/2007-December/011456.html and implemented in https://bugs.python.org/issue1587 Reading that old thread, there are use cases presented related to classic classes, wrapping Kogut (http://kokogut.sourceforge.net/kogut.html) objects and Pyrex. But classic classes no longer exist and the latter two use cases aren't actually needed if you read the thread to the end. So there are no surviving use cases from that thread. Does anybody know of actual use cases or any code in the wild using it? To me, the fact that it's only exposed in the C API is a good sign that it's not really useful. So, should we deprecate the instance method class? Jeroen.
[Python-Dev] PEP 590 vs. bpo-29259
As I'm reading the PEP 590 reference implementation, it strikes me how similar it is to https://bugs.python.org/issue29259 The main difference is that bpo-29259 has a per-class pointer tp_fastcall instead of a per-object pointer. But actually, the PEP 590 reference implementation does not make much use of the per-object pointer: for all classes except "type", the vectorcall wrapper is the same for all objects of a given type. One thing that bpo-29259 did not realize is that existing optimizations could be dropped in favor of using tp_fastcall. For example, bpo-29259 has code like
if (PyFunction_Check(callable)) { return _PyFunction_FastCallKeywords(...); }
if (PyCFunction_Check(callable)) { return _PyCFunction_FastCallKeywords(...); }
else if (PyType_HasFeature(..., Py_TPFLAGS_HAVE_FASTCALL) ...)
but the first 2 branches are superfluous given the third. Anyway, this is just putting PEP 590 a bit in perspective. It doesn't say anything about the merits of PEP 590. Jeroen.
Re: [Python-Dev] PEP 580/590 discussion
On 2019-04-02 21:38, Mark Shannon wrote: Hi, On 01/04/2019 6:31 am, Jeroen Demeyer wrote: I added benchmarks for PEP 590: https://gist.github.com/jdemeyer/f0d63be8f30dc34cc989cd11d43df248 Thanks. As expected, calls to C functions perform about the same for both PEPs and master, as they are using almost the same calling convention under the hood. While they are "about the same", in general PEP 580 is slightly faster than master and PEP 590. And PEP 590 actually has a minor slow-down for METH_VARARGS calls. I think that this happens because PEP 580 has fewer levels of indirection than PEP 590. The vectorcall protocol (PEP 590) replaces a slower level (tp_call) with a faster level (vectorcall), while PEP 580 just removes that level entirely: it calls the C function directly. This shows that PEP 580 is really meant to have maximal performance in all cases, accidentally even making existing code faster. Jeroen.
Re: [Python-Dev] PEP 590 discussion
In one of the ways to call C functions in PEP 580, the function gets access to:
- the arguments,
- "self", the object,
- the class that the method was found in (which is not necessarily type(self))
I still have to read the details, but when combined with the LOAD_METHOD/CALL_METHOD optimization (avoiding creation of a "bound method" object), it seems impossible to do this efficiently with just the callable's code and callable's object. It is possible, and relatively straightforward. Access to the class isn't possible currently and also not with PEP 590. But it's easy enough to fix that: PEP 573 adds a new METH_METHOD flag to change the signature of the C function (not the vectorcall wrapper). PEP 580 supports this "out of the box" because I'm reusing the class also to do type checks. But this shouldn't be an argument for or against either PEP.
Re: [Python-Dev] PEP 580/590 discussion
I added benchmarks for PEP 590: https://gist.github.com/jdemeyer/f0d63be8f30dc34cc989cd11d43df248
[Python-Dev] PEP 580/590 discussion
On 2019-03-30 17:30, Mark Shannon wrote: 2. The claim that PEP 580 allows "certain optimizations because other code can make assumptions" is flawed. In general, the caller cannot make assumptions about the callee or vice-versa. Python is a dynamic language. PEP 580 is meant for extension classes, not Python classes. Extension classes are not dynamic. When you implement tp_call in a given way, the user cannot change it. So if a class implements the C call protocol or the vectorcall protocol, callers can make assumptions about what that means. PEP 579 is mainly a list of supposed flaws with the 'builtin_function_or_method' class. The general thrust of PEP 579 seems to be that builtin-functions and builtin-methods should be more flexible and extensible than they are. I don't agree. If you want different behaviour, then use a different object. Don't try and cram all this extra behaviour into a pre-existing object. I think that there is a misunderstanding here. I fully agree with the "use a different object" solution. This isn't a new solution: it's already possible to implement those different objects (Cython does it). It's just that this solution comes at a performance cost and that's what we want to avoid. I'll reiterate that PEP 590 is more general than PEP 580 and that once the callable's code has access to the callable object (as both PEPs allow) then anything is possible. You can't get more extensible than that. I would argue the opposite: PEP 590 defines a fixed protocol that is not easy to extend. PEP 580 on the other hand uses a new data structure PyCCallDef which could easily be extended in the future (this will intentionally never be part of the stable ABI, so we can do that). I have also argued before that the generality of PEP 590 is a bad thing rather than a good thing: by defining a more rigid protocol as in PEP 580, more optimizations are possible. PEP 580 has the same limitation for the same reasons.
The limitation is necessary for correctness if an object supports calls via `__call__` and through another calling convention. I don't think that this limitation is needed in either PEP. As I explained at the top of this email, it can easily be solved by not using the protocol for Python classes. What is wrong with my proposal in PEP 580: https://www.python.org/dev/peps/pep-0580/#inheritance Jeroen.
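The "extension classes are not dynamic" point above is visible from Python: built-in (extension) types reject monkey-patching, so a caller really can rely on their call behavior, while a Python class can be changed at any time:

```python
# Built-in/extension types are immutable from Python, so their call
# behavior cannot be swapped out at runtime.
try:
    type(len).__call__ = lambda *args, **kwargs: None
except TypeError as exc:
    print("rejected:", exc)

# A Python class, by contrast, is fully dynamic:
class C:
    def __call__(self):
        return "original"

C.__call__ = lambda self: "patched"
assert C()() == "patched"
```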
Re: [Python-Dev] A request for PEP announcement format [was: PEP 570]
On 2019-03-29 04:08, Stephen J. Turnbull wrote: In this case, it's here: > https://discuss.python.org/t/pep-570-Python-Positional-Only-Parameters/1078 So, are we supposed to discuss PEPs on discuss.python.org now? That's fine for me; should I create a thread like that for PEP 580 too?
Re: [Python-Dev] PEP 556 threaded garbage collection & linear recursion in gc
On 2019-03-28 01:38, Tim Peters wrote: The bad news is that the trashcan mechanism is excruciating, a long-time source of subtle bugs of its own :-( It just happens that I created a PR to fix some of the trashcan problems: see https://bugs.python.org/issue35983 and https://github.com/python/cpython/pull/11841
Re: [Python-Dev] BDFL-Delegate appointments for several PEPs
On 2019-03-27 14:50, Petr Viktorin wrote: The pre-PEP claims speedups of 2% in initial experiments, with expected overall performance gain of 4% for the standard benchmark suite. That's pretty big. I re-did my earlier benchmarks for PEP 580 and these are the results: https://gist.github.com/jdemeyer/f0d63be8f30dc34cc989cd11d43df248 In general, the PEP 580 timings seem slightly better than vanilla CPython, similar to what Mark got. I'm speculating that the speedup in both cases comes from the removal of type checks and dispatching depending on that, and instead using a single protocol that directly does what needs to be done. Jeroen.
[Python-Dev] PEP 576bis discussion
For lack of a better name, I'm using the name PEP 576bis to refer to https://github.com/markshannon/peps/blob/new-calling-convention/pep-.rst (This is why this should get a PEP number soon, even if the PEP is not completely done yet.) On 2019-03-27 14:50, Petr Viktorin wrote: The pre-PEP is simpler than PEP 580, because it solves simpler issues. I'll need to confirm that it won't paint us into a corner -- that there's a way to address all the issues in PEP 579 in the future. One potential issue is calling bound methods (in the duck typing sense) when the LOAD_METHOD optimization is *not* used. This would happen for example when storing a bound method object somewhere and then calling it (possibly repeatedly). Perhaps that's not a very common thing and we should just live with that. However, since __self__ is part of the PEP 580 protocol, it allows calling a bound method object without any performance penalty compared to calling the underlying function directly. Similarly, a follow-up of PEP 580 could allow zero-overhead calling of static/class methods (I didn't put this in PEP 580 because it's already too long). As far as I can see, PEP 580 claims not much improvement in CPython, but rather large improvements for extensions (Mistune with Cython). Cython is indeed the main reason for PEP 580. The pre-PEP has a complication around offsetting arguments by 1 to allow bound methods to forward calls cheaply. I honestly don't understand what this "offset by one" means or why it's useful. It should be better explained in the PEP. The pre-PEP's "any third-party class implementing the new call interface will not be usable as a base class" looks quite limiting. I agree, this is pretty bad. However, I don't think that there is a need for this limitation. PEP 580 solves this by only inheriting the Py_TPFLAGS_HAVE_CCALL flag in specific cases. PEP 576bis could do something similar.
Finally, I don't agree with this sentence from PEP 576bis: PEP 580 is specifically targeted at function-like objects, and doesn't support other callables like classes, partial functions, or proxies. It's true that classes are not supported (and I wonder how PEP 576bis deals with that; it would be good to explain that more explicitly) but other callables are not a problem. Jeroen.
[Python-Dev] PEP 576/580: on the complexity of function calls
Now that the discussion on PEP 576/580 has been opened again, let me write something about the complexity of function calls (*), which is probably the most frequently given reason against PEP 580.

An important fact is the following: *the status quo is complex*. Over time, many performance improvements have been made to function calls. Each of these was a relatively small incremental change (for example, METH_FASTCALL with *args only was added before METH_FASTCALL|METH_KEYWORDS with *args and **kwargs). In the end, all these small changes add up to quite a bit of complexity. The fact that this complexity isn't documented anywhere and that it's distributed over several .c files in the CPython sources makes it perhaps not obvious that it's there.

Neither PEP 576 nor PEP 580 tries to remove this complexity. Indeed, the complexity is there for good reasons, as it improves performance of function calls in many ways. But the PEPs handle it in very different ways.

On the one hand, PEP 580 concentrates all the complexity in the protocol. So the protocol looks complex, even though most of it is really just formulating existing complexity. More importantly, since the complexity is moved to the protocol, it becomes quite easy to use PEP 580 in a class: you don't need to understand the implementation of PEP 580 for that.

On the other hand, PEP 576 keeps the existing complexity out of the protocol. This means that the implementations of classes using PEP 576 become more complex, as the existing complexity needs to be implemented somewhere. In fact, with PEP 576 the existing complexity needs to be implemented in many places, leading for example to code duplication between builtin_function_or_method and method_descriptor. This kind of code duplication would again occur for third-party method-like classes.

Note that everything I said above about PEP 576 also applies to the not-yet-PEP https://github.com/markshannon/peps/blob/new-calling-convention/pep-.rst

Best wishes, Jeroen.
(*) With "function calls", I mean most importantly calls of instances of builtin_function_or_method, method, method_descriptor and function. But since PEP 576/580 are meant for third-party function/method classes, those should also be considered.
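The "existing complexity" described above is driven by the many shapes a call can take: each of the calls below looks uniform from Python, but each historically took a different path through the C-level calling machinery (which is exactly what the incremental METH_FASTCALL changes optimized):

```python
# Different call shapes that the C-level calling machinery must handle.
def f(a, b=2, *args, **kwargs):
    return a, b, args, kwargs

assert f(1) == (1, 2, (), {})                  # positional only
assert f(1, 3) == (1, 3, (), {})               # default overridden
assert f(1, 3, 4, 5) == (1, 3, (4, 5), {})     # excess positional
assert f(1, b=3, c=4) == (1, 3, (), {"c": 4})  # keyword arguments
assert f(*[1, 3], **{"c": 4}) == (1, 3, (), {"c": 4})  # unpacking
```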
Re: [Python-Dev] BDFL-Delegate appointments for several PEPs
On 2019-03-24 16:22, Mark Shannon wrote: The draft can be found here: https://github.com/markshannon/peps/blob/new-calling-convention/pep-.rst I think that this is basically a better version of PEP 576. The idea is the same as PEP 576, but the details are better. Since it's not fundamentally different from PEP 576, I think that this comparison still stands: https://mail.python.org/pipermail/python-dev/2018-July/154238.html
Re: [Python-Dev] BDFL-Delegate appointments for several PEPs
On 2019-03-24 16:22, Mark Shannon wrote: Hi Petr, Regarding PEPs 576 and 580. Over the new year, I did a thorough analysis of possible approaches to possible calling conventions for use in the CPython ecosystems and came up with a new PEP. The draft can be found here: https://github.com/markshannon/peps/blob/new-calling-convention/pep-.rst Thanks for that. Is this new PEP meant to supersede PEP 576? I'd like to have a testable branch, before formally submitting the PEP, but I'd thought you should be aware of the PEP. If you want to bring up this PEP now during the PEP 576/580 discussion, maybe it's best to formally submit it now? Having an official PEP number might simplify the discussion. If it turns out to be a bad idea after all, you can still withdraw it. In the meantime, I remind you that PEP 576 also doesn't have a complete reference implementation (the PEP links to a "reference implementation" but it doesn't correspond to the text of the PEP). Jeroen.
Re: [Python-Dev] Removing PendingDeprecationWarning
On 2019-03-22 11:33, Serhiy Storchaka wrote: What is wrong with PendingDeprecationWarning? It serves the same purpose as DeprecationWarning: it indicates that a feature is planned to be removed in the future. There is no point in having two warnings with exactly the same meaning. What problem do you want to solve at the cost of removing this feature? 1. Typically, a PendingDeprecationWarning is meant to be promoted to a DeprecationWarning in some future release. It takes a minor effort to do that and it may be forgotten. It's just simpler to use DeprecationWarning from the start. 2. Whenever somebody wants to deprecate something, that person has to decide between the two. That's just more complicated than it needs to be. And I can easily ask the converse question: what problem do you want to solve by including that feature?
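Point 1 can be made concrete: switching a warning's category in a later release means touching the code again, whereas a DeprecationWarning issued from the start needs no follow-up. A minimal sketch (the function names are hypothetical):

```python
import warnings

def old_api():
    # Deprecated from the start: no later PendingDeprecationWarning ->
    # DeprecationWarning promotion to remember.
    warnings.warn("old_api() is deprecated; use new_api() instead",
                  DeprecationWarning, stacklevel=2)
    return 42

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = old_api()

assert result == 42
assert caught[0].category is DeprecationWarning
```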
Re: [Python-Dev] Remove tempfile.mktemp()
On 2019-03-20 12:45, Victor Stinner wrote: You can watch the /tmp directory using inotify and "discover" immediately the "secret" filename, it doesn't depend on the amount of entropy used to generate the filename. That's not the problem. The security issue here is guessing the filename *before* it's created and putting a different file or symlink in place. So I actually do think that mktemp() could be made secure by using a longer name generated by a secure random generator.
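A hedged sketch of that idea: an unguessable name from a cryptographic RNG, combined with O_EXCL so that a pre-placed file or symlink makes the open fail instead of being followed:

```python
import os
import secrets
import tempfile

# 128 bits of randomness: not guessable in advance, unlike mktemp()'s
# short default suffix.
name = os.path.join(tempfile.gettempdir(), "tmp-" + secrets.token_hex(16))

# O_CREAT | O_EXCL fails with FileExistsError if an attacker pre-created
# the path (even as a symlink), closing the race that mktemp() leaves open.
fd = os.open(name, os.O_CREAT | os.O_EXCL | os.O_WRONLY, 0o600)
try:
    os.write(fd, b"data")
finally:
    os.close(fd)
os.unlink(name)
```

This is essentially what tempfile.mkstemp() already does internally; the point under discussion is whether a name-only API could be hardened the same way.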
Re: [Python-Dev] PEP 12 updated with templates for header fields and sections
On 2019-03-08 09:29, Victor Stinner wrote: I would like to suggest to add URLs to the first messages of all threads about a PEP... Like this? https://www.python.org/dev/peps/pep-0580/#discussion
Re: [Python-Dev] PEPs from non-core devs now need a sponsor
On 2019-03-05 18:14, Steve Dower wrote: However, if you don't have *a single* core developer on board from python-ideas, chances are the whole team is going to reject the proposal. Sure, I couldn't agree more. But this is something that a PEP mentor (instead of sponsor) also could deal with. Any potential mentor would quickly dismiss the PEP as having no chance and that would work just fine. The problem with the "sponsor" idea is that the sponsor must come from the group of core devs supporting the PEP. What if all core devs supporting it don't have time to act as sponsor or just don't care enough? On the other hand, if there is some support for an idea, then anybody should be able to mentor even if the mentor doesn't personally support the idea. I guess the mentor shouldn't be opposed either, but there is a large gray zone of -0/+0 in between where mentors could come from. Jeroen.
Re: [Python-Dev] PEPs from non-core devs now need a sponsor
On 2019-03-05 14:05, Calvin Spealman wrote: I'm worried this creates a gatekeeping perception that will scare away contributors. +1 I also expressed this worry at https://github.com/python/peps/pull/903 You could keep the positive part of the sponsoring idea (somebody acting as mentor) but drop the negative part (make it a hard requirement to find a sponsor supporting the proposal before the proposal can even become a draft PEP).
Re: [Python-Dev] PEPs from non-core devs now need a sponsor
Does this apply to existing draft PEPs or only new ones?
Re: [Python-Dev] Register-based VM [Was: Possible performance regression]
Let me just say that the code for METH_FASTCALL function/method calls is optimized for a stack layout: a piece of the stack is used directly for calling METH_FASTCALL functions (without copying any PyObject* pointers). So this would probably be slower with a register-based VM (which doesn't imply that it's a bad idea, it's just a single point to take into account).
Re: [Python-Dev] Making PyInterpreterState an opaque type
On 2019-02-21 12:18, Victor Stinner wrote: What I also would like to see is the creation of a group of people who work on the C API to discuss each change and test these changes properly. I don't think that we should "discuss each change"; we should first have an overall plan. It doesn't make a lot of sense to take small steps if we have no clue where we're heading. I am aware of https://pythoncapi.readthedocs.io/new_api.html but we should first make that into an accepted PEP.
Re: [Python-Dev] Making PyInterpreterState an opaque type
On 2019-02-19 04:04, Steve Dower wrote: On 18Feb.2019 1324, Jeroen Demeyer wrote: For a very concrete example, was it really necessary to put _PyTuple_ITEMS in (4)? That's used in _functoolsmodule.c. Especially given that the very similar PySequence_Fast_ITEMS is in (2), that seems like a strange and arbitrary limiting choice. The reason to do this is that we can "guarantee" that we've fixed all users when we change the internal representation. I think that CPython should then at least "eat its own dog food" and not use any of the internal functions/macros when implementing the stdlib. As I said before: if a function/macro is useful for implementing stdlib functionality like "functools" or "json", it's probably useful for external modules too.
Re: [Python-Dev] Making PyInterpreterState an opaque type
On 2019-02-19 04:04, Steve Dower wrote: Otherwise, the internal memory layout becomes part of the public ABI Of course, the ABI (not API) depends on the internal memory layout. Why is this considered a problem? If you want a fixed ABI, use API level (1) from my last post. If you want a fixed API but not ABI, use level (2). If you really want stuff to be broken at any time, use (3) or (4). This is why I don't see the need to make a difference between (3) and (4): neither of them makes any guarantees about stability.
Re: [Python-Dev] Making PyInterpreterState an opaque type
On 2019-02-18 21:17, Eric Snow wrote: Historically our approach to keeping API private was to use underscore prefixes and to leave them out of the documentation (along with guarding with "#ifndef Py_LIMITED_API"). However, this has led to occasional confusion and breakage, and even to leaking things into the stable ABI that shouldn't have been. Lately we've been working on making the distinction between internal and public API (and stable ABI) more clear and less prone to accidental exposure. Victor has done a lot of work in this area. So I'd like to understand your objection. First of all, if everybody can actually #define Py_BUILD_CORE and get access to the complete API, I don't mind so much. But then it's important that this actually keeps working (i.e. that those headers will always be installed). Still, do we really need so many levels of API:
(1) stable API (with #define Py_LIMITED_API)
(2) public documented API
(3) private undocumented API (the default exposed API)
(4) internal API (with #define Py_BUILD_CORE)
I would argue to fold (4) into (3). Applications using (3) already know that they are living dangerously by using private API. I'm afraid of hiding actually useful private macros under Py_BUILD_CORE. For example, Modules/_functoolsmodule.c and Modules/_json.c use API functions from (4). But if an API function is useful for implementing functools or json, then it's probably also useful for external extension modules: what if I want to implement something similar to functools or json, why shouldn't I be allowed to use those same API functions? For a very concrete example, was it really necessary to put _PyTuple_ITEMS in (4)? That's used in _functoolsmodule.c. Especially given that the very similar PySequence_Fast_ITEMS is in (2), that seems like a strange and arbitrary limiting choice. Jeroen.
Re: [Python-Dev] Making PyInterpreterState an opaque type
On 2019-02-16 00:37, Eric Snow wrote: One thing that would help simplify changes in this area is if PyInterpreterState were defined in Include/internal. How would that help anything? I don't like the idea (in general, I'm not talking about PyInterpreterState specifically) that external modules should be second-class citizens compared to modules inside CPython. If you want to break the undocumented API, just break it. I don't mind. But I don't see why it's required to move the include to Include/internal for that.
[Python-Dev] Reviewing PEP 580
Hello, I would like to propose to the new steering council to review PEP 580. Is there already a process for that? Thanks, Jeroen.
[Python-Dev] Windows porting: request to review PR #880
Can somebody please review https://github.com/python/cpython/pull/880 That addresses a severe problem on Windows making it impossible to build any C++ extension module with some compilers.
Re: [Python-Dev] General concerns about C API changes
On 2018-11-14 04:06, Raymond Hettinger wrote: With cross module function calls, I'm less confident about what is happening If the functions are "static inline" (as opposed to plain "inline"), those aren't really cross-module function calls. Because the functions are "static" and defined in a header file, every module has its own copy of the function. If the function is not inlined in the end, this would inflate the compiled size because you end up with multiple compilations of the same code in the CPython library. It would not affect correct functioning in any way though. If the function *is* inlined, then the result should be no different from using a macro. Jeroen.
Re: [Python-Dev] Python Language Governance Proposals
On 2018-10-26 19:17, Brett Cannon wrote: But since you're asking about wanting to "review PEPs", you can review them now. Unfortunately not everybody agrees on that... See https://mail.python.org/pipermail/python-dev/2018-October/155441.html in particular I really hope that I won't have to wait 5 more months before a decision can be made on PEP 580.
Re: [Python-Dev] Python Language Governance Proposals
What is the timeframe for the installation of the new governance? In other words, when will it be possible to review PEPs?
Re: [Python-Dev] Petr Viktorin as BDFL-Delegate for PEP 580
On 2018-10-03 23:27, Guido van Rossum wrote: IMO changes to the C API should be taken just as seriously -- the potential for breaking the world is just about the same (since most serious Python applications use C extensions that risk breaking). Of course we are taking this seriously. I want this to be taken as seriously as any other PEP and any other BDFL-Delegate appointment in the past. To be clear: I'm not trying to rush my PEP though. It has been discussed and I have made changes to it based on comments. In fact, this is the second PEP with the same subject, I withdrew the first one, PEP 575. At some point in the past I asked one person to become BDFL-Delegate but he did not answer. And now recently Petr Viktorin made some insightful comments on it, so I asked him and he agreed.
Re: [Python-Dev] Petr Viktorin as BDFL-Delegate for PEP 580
On 2018-10-03 18:55, Barry Warsaw wrote: Correct. It’s entirely possible that the different governance models will have different ways to pick delegates. And how does that affect *today*'s decision? The new governance model will only take effect 1 January (assuming that everything goes as planned). As long as there is no new governance model yet, can't we just continue the PEP 1 process which has worked for many years? I know that we cannot literally apply PEP 1 because there is no BDFL, but we can certainly continue the spirit of PEP 1 if the other core developers agree with the BDFL-Delegate.
Re: [Python-Dev] Petr Viktorin as BDFL-Delegate for PEP 580
On 2018-10-03 17:06, Łukasz Langa wrote: That's the only reason why PEP 544 is not yet accepted for example. Did you actually try to get PEP 544 accepted or to appoint a BDFL-Delegate? I don't find any discussions about PEP 544 after the stepping down of the BDFL.
Re: [Python-Dev] Petr Viktorin as BDFL-Delegate for PEP 580
On 2018-10-03 17:12, Wes Turner wrote: > AFAIU, there is not yet a documented process for BDFL-delegate assignment. PEP 1 says: """ However, whenever a new PEP is put forward, any core developer that believes they are suitably experienced to make the final decision on that PEP may offer to serve as the BDFL's delegate (or "PEP czar") for that PEP. If their self-nomination is accepted by the other core developers and the BDFL, then they will have the authority to approve (or reject) that PEP. """ I know that it says "core developers and the BDFL". However, if the core developers agree that Petr can become BDFL-Delegate, I don't see why that wouldn't be possible. Jeroen.
[Python-Dev] Petr Viktorin as BDFL-Delegate for PEP 580
Hello, I would like to propose Petr Viktorin as BDFL-Delegate for PEP 580, titled "The C call protocol". He has co-authored several PEPs (PEP 394, PEP 489, PEP 534, PEP 547, PEP 573), several of which involve extension modules. Petr has agreed to become BDFL-Delegate for PEP 580 if asked. Also Antoine Pitrou, INADA Naoki and Nick Coghlan have approved Petr being BDFL-Delegate. I am well aware of the current governance issues, but several people have mentioned that the BDFL-Delegate process can still continue for now. I created a PR for the peps repository at https://github.com/python/peps/pull/797 Cheers, Jeroen Demeyer.
Re: [Python-Dev] Documenting the private C API (was Re: Questions about signal handling.)
On 2018-09-25 16:01, Barry Warsaw wrote: Maybe this is better off discussed in doc-sig but I think we need to consider documenting the private C API. Even the *public* C API is not fully documented. For example, none of the PyCFunction_... functions appears in the documentation.
[Python-Dev] Request review of bpo-34125/GH-8416
The gist of bpo-34125 is that the following two statements behave differently with respect to sys.setprofile() profiling:
>>> list.append([], None)
>>> list.append([], None, **{})
More precisely: the former call is profiled, but the latter is not. The fix at GH-8416 is simply to make this consistent by also profiling the latter. Victor Stinner did not want to accept the pull request because he wanted "a wider discussion on function calls". I think that GH-8416 is a simple bugfix which can be merged anyway and which won't make future "discussions on function calls" any harder. In any case, it would be good to have any *decision* (accepted or rejected) on that PR. I find the current uncertainty worse than a decision either way. Right now, my reference implementation of PEP 580 conflicts with that branch and I would like to resolve that conflict after GH-8416 has been decided. Links: https://bugs.python.org/issue34125 https://github.com/python/cpython/pull/8416 Thanks, Jeroen Demeyer.
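The inconsistency can be observed with a small profiling hook; on interpreters affected by bpo-34125 the second call simply produces no 'c_call' event, while on fixed interpreters both calls do:

```python
import sys

events = []

def profiler(frame, event, arg):
    # For C-function calls, 'arg' is the function object being called.
    if event == "c_call":
        events.append(arg.__name__)

sys.setprofile(profiler)
list.append([], None)        # always profiled
list.append([], None, **{})  # not profiled on interpreters with the bug
sys.setprofile(None)

print(events)
```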
Re: [Python-Dev] PEP 579 and PEP 580: refactoring C functions and methods
On 2018-09-13 02:26, Petr Viktorin wrote: The reference to PEP 573 is premature. It seems to me that PEP 580 helps with the use case of PEP 573. In fact, it implements part of what PEP 573 proposes. So I don't see the problem with the reference to PEP 573. Even if the implementation of PEP 573 changes, the problem statement will remain and that's what I'm referring to. If you agree I can summarize rationale for "parent", as much as it concerns 580. Sure. I still think that we should refer to PEP 573, but maybe we can summarize it also in PEP 580. # Using tp_print The tp_print gimmick is my biggest worry. AFAIK there's no guarantee that a function pointer and Py_ssize_t are the same size. I'm not actually claiming anywhere that it is the same size. # Descriptor behavior I'd say "SHOULD" rather than "MUST" here. The section describes how to implement expected/reasonable behavior, but I see no need to limit that. There *is* actually an important reason to limit it: it allows code to make assumptions about what __get__ does. This enables optimizations which wouldn't be possible otherwise. If you cannot be sure what __get__ does, then you cannot optimize obj.method(x) to type(obj).method(obj, x). "if func supports the C call protocol, then func.__set__ must not be implemented." -- also, __delete__ should not be implemented, right? Indeed. I write Python but I think C API, so for me these are both really tp_descr_set. PyCCall_FASTCALL is not a macro, shouldn't it be named PyCCall_FastCall? What's the convention for that anyway? I assumed that capital letters meant a "really know what you are doing" function which could segfault if used badly. For me, whether something is a function or macro is just an implementation detail (which can change between Python versions) which should not affect the naming. # C API functions The function PyCFunction_GetFlags is, for better or worse, part of the stable ABI. We shouldn't just give up on it.
> I'm fine with documenting that it shouldn't be used, but for functions defined using PyCFunction_New etc. it should continue behaving as before. One solution could be to preserve the "definition time" METH_* flags in the 0xFFF bits of cc_flags and use the other bits for CCALL_*.

I'm fine with that if you insist. However, it would be a silly solution which formally satisfies the "stable ABI" requirement without actually helping.

I agree with your other points that I didn't reply to and will make some edits to PEP 580.

Jeroen.

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
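The __get__ argument above can be illustrated at the Python level (a rough analogy only: PEP 580 is about the C-level protocol, and the class and method here are made up for the example):

```python
class A:
    def method(self, x):
        return x + 1

obj = A()

# If the interpreter can rely on method.__get__ doing standard
# bound-method binding, these two calls are interchangeable, so it
# may skip creating the intermediate bound-method object entirely:
r1 = obj.method(41)
r2 = type(obj).method(obj, 41)
assert r1 == r2  # both give 42
```

An exotic __get__ that returned something other than a normal bound method would break this equivalence, which is why the protocol wants to pin the behavior down with "MUST".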
Re: [Python-Dev] Let's change to C API!
On 2018-08-23 07:36, Antoine Pitrou wrote:
> - the bootstrap problem (Cython self-compiles with CPython)

This is not a big problem: we can make sure that all stdlib dependencies of Cython either have PEP 399 pure Python implementations or we keep them in pure C.

> - the dependency / versioning problem (Cython is a large quick-evolving third-party package that we can't decently vendor)

Is that a real problem? You're sort of doing the same thing with pip already.

> - the maintenance problem (how to ensure we can change small things in the C API, especially semi-private ones, without having to submit PRs to Cython as well)

Why don't you want to submit PRs to Cython? If you're saying "I don't want to wait for the next stable release of Cython", you could use development versions of Cython for development versions of CPython.

> - the debugging problem (Cython's generated C code is unreadable, unlike Argument Clinic's, which can make debugging annoying)

Luckily, you don't need to read the C code most of the time. And it's also a matter of experience: I can read Cython-generated C code just fine.

Jeroen.
Re: [Python-Dev] Can we split PEP 489 (extension module init) ?
> Would this be better than a flag + raising an error on init?

Exactly. PEP 489 only says "Extensions using the new initialization scheme are expected to support subinterpreters". What's wrong with raising an exception when the module is initialized a second time?

Jeroen.
Re: [Python-Dev] Error message for wrong number of arguments
On 2018-07-30 17:28, Nick Coghlan wrote:
> I would, and I think it would make sense for the PEP to cite improving consistency (and reducing code duplication?) in that regard as an advantage of the PEP.

I'm not sure which PEP you are referring to (PEP 580 or a new PEP?).

After thinking a bit about the issue of error messages, I realized that PEP 580 would make this easier to fix (to be clear: there are ways to fix it without PEP 580, I'm just saying that PEP 580 makes it easier). There are two related reasons for this:

* The existing code which calls the actual underlying C function doesn't have access to the Python-level function object. So it can't know whether it's a function (where self doesn't count) or a method (where self counts).

* Armin Rigo suggested using a new flag to indicate this difference: that would certainly work for Argument Clinic (just have Argument Clinic add that flag). For methods defined without Argument Clinic, we cannot require such a new flag though. We could still add the flag at runtime, but it's far from clear whether we can freely change the flags inside a PyMethodDef at runtime (at least, no existing code that I know of does that).

PEP 580 solves the first issue by having the function object available, and it solves the second issue by not relying on PyMethodDef at all for calling functions/methods. The second point especially can be generalized: PEP 580 makes the implementation of functions/methods much less rigid, and therefore easier to change. So maybe this can be seen as yet another advantage of PEP 580.

Jeroen.
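The "self counts or not" asymmetry is easy to observe from Python. In the sketch below (the class is made up, and the exact TypeError wording differs between CPython versions, so treat the quoted messages as approximate), the pure-Python method's error counts self while the built-in's does not:

```python
class MyList:
    def insert(self, index, obj):
        pass

# Pure-Python method: the TypeError counts self, e.g.
# "insert() takes 3 positional arguments but 4 were given".
try:
    MyList().insert(0, 1, 2)
except TypeError as e:
    py_msg = str(e)

# Built-in method: the TypeError does not count self, e.g.
# "insert expected 2 arguments, got 3".
try:
    [].insert(0, 1, 2)
except TypeError as e:
    c_msg = str(e)

print(py_msg)
print(c_msg)
```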
Re: [Python-Dev] Error message for wrong number of arguments
Actually, scratch that, I posted too soon. There is also a block

/*[clinic input]
class list "PyListObject *" "&PyList_Type"
[clinic start generated code]*/

So it could work.
Re: [Python-Dev] Error message for wrong number of arguments
Actually, I just realized that it's not really possible to fix the error messages for built-in methods. The problem is that Argument Clinic does not know whether a function or a method is being handled. For example, there is no indication at all that this is a method (note that the name "list.insert" might refer to a function "insert" inside a module called "list" or a method "insert" of a class "list"):

/*[clinic input]
list.insert

    index: Py_ssize_t
    object: object
    /

Insert object before index.
[clinic start generated code]*/
Re: [Python-Dev] [PEP 576/580] Comparing PEP 576 and 580
On 2018-07-31 11:12, INADA Naoki wrote:
> No PEP will be accepted in the next few months, because we don't have a process for accepting PEPs right now.

Is that certain? I haven't been following the process discussions, so I'm just asking the question. For example, given that you are already looking at PEP 580, would it be possible for you to handle PEP 580 as official BDFL-Delegate (even if there is no BDFL)?

> So it's not worth waiting for the PEP to be accepted before starting a proof of concept.

First of all, it's too early for a proof of concept of native C calling. We first have to design the theory before we can start implementing anything. But even if we could start writing a proof of concept, I would really prefer doing that on top of PEP 580. There are two reasons for this:

1. If PEP 580 is rejected, I very much doubt that the native C calling protocol will be accepted.

2. It would be good to use PEP 580 as a framework for the implementation. Otherwise we have to implement it twice: once before PEP 580 with the proof of concept and then again after PEP 580 with the "real" implementation.

Jeroen.
Re: [Python-Dev] Let's change to C API!
On 2018-07-31 15:34, Victor Stinner wrote:
> But I never used Cython nor cffi, so I'm not sure which one is the most appropriate depending on the use case.

Cython is a build-time tool, while cffi is a run-time tool. But Cython does a lot more than just FFI: it is a Python-to-C compiler which can be used for FFI but also for many other things.

> A major "rewrite" of such a large code base is very difficult since people want to push new things in parallel. Or is it maybe possible to do it incrementally?

Yes, that's not a problem: you can easily mix pure Python code, Cython code and C code. I think that this kind of mixing is an important part of Cython's philosophy: for stuff where you don't care about performance, use Python. For most stuff where you do care, use Cython. For very specialized code which cannot easily be translated to Cython, use C.

Jeroen.
Re: [Python-Dev] [PEP 576/580] Comparing PEP 576 and 580
On 2018-07-31 12:10, INADA Naoki wrote:
> After spending several days reading PEP 580 and your implementation, I think I can implement it. I think it's not easy, but it's not impossible either.

The signature of "extended_call_ptr" in PEP 576 is almost the same as the signature of a CCALL_FUNCARG|CCALL_FASTCALL|CCALL_KEYWORDS function in PEP 580 (the only difference is a "self" argument, which can be ignoreded if you don't need it). So, if you can implement it using PEP 576, it's not a big step to implement it using PEP 580.
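The actual C signatures are defined in the two PEPs; as a loose Python-level analogy of the point above (all names here are invented for the sketch), an entry point that also receives self can be a thin wrapper around one that doesn't, simply ignoring the extra argument:

```python
def pep576_style_call(callable_obj, args, kwnames):
    # Stand-in for a PEP 576-style entry point: no separate "self".
    return sum(args)

def with_ignored_self(func):
    # Stand-in for a PEP 580-style entry point, which additionally
    # receives "self"; dropping it bridges the two calling shapes.
    def ccall(callable_obj, self, args, kwnames):
        return func(callable_obj, args, kwnames)
    return ccall

entry = with_ignored_self(pep576_style_call)
print(entry(None, None, (1, 2, 3), None))  # 6
```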