Sebastian Berg added the comment:
Thanks, so there should already be a lock in place (sorry, I missed that). But
somehow we seem to get around it?
Do you know what may cause the locking logic to fail in this case? Recursive
imports in NumPy itself? Or Cython using low-level C-API?
I.e
Sebastian Berg added the comment:
To add to this: it would seem to me that the side-effects of importing should
be guaranteed to only be called once?
However, IO or other operations could be part of the import side-effects and
release the GIL. So even a simple, pure-Python, package could
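The "executed once" guarantee can be checked from pure Python. A sketch using a throwaway module (the name `side_effect_mod` and the counter on `builtins` are made up for illustration) imported concurrently from several threads; the per-module import lock should ensure the module body, and thus its side effects, runs exactly once:

```python
import os
import sys
import tempfile
import threading

# Create a tiny module whose import side effect counts how often the
# module body actually executes.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "side_effect_mod.py"), "w") as f:
    f.write(
        "import builtins\n"
        "builtins._import_count = getattr(builtins, '_import_count', 0) + 1\n"
    )
sys.path.insert(0, tmp)

def do_import():
    import side_effect_mod  # noqa: F401

threads = [threading.Thread(target=do_import) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

import builtins
assert builtins._import_count == 1  # the side effect ran exactly once
```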
Sebastian Berg added the comment:
While I have a repro for Python, I think the pre-release of Cython already
fixes it (and I just did not regenerate the C sources when trying, I guess. A
`git clean` to the rescue...).
--
___
Python tracker
Sebastian Berg added the comment:
Not reopening for now, but I will note again that (AFAIK) Cython uses
`PyEval_EvalCodeEx`, and the docs say that it is no longer used internally by
CPython.
So it seems pretty plausible that the bug is in `PyEval_EvalCodeEx` and not the
generated Cython
Change by Sebastian Berg :
--
stage: -> resolved
status: open -> closed
Python tracker <https://bugs.python.org/issue46451>
Sebastian Berg added the comment:
Thanks for having a look. I have confirmed this is related to Cython (no
pandas/NumPy involved) – repro at https://github.com/seberg/bpo46451. What
happens under the hood in Cython is probably:
https://github.com/cython/cython/blob/master/Cython/Utility
Change by Sebastian Berg :
--
title: Possibly bad interaction with tracing and cython? -> Tracing causes
module globals to be mutated when calling functions from C
Sebastian Berg added the comment:
Ahh, a further data-point. The name from the module scope that is overwritten
IS a parameter name used in the function locals. Strangely, if I modify the
tracing to print more:
stop = 0

def trace(frame, event, arg):
    global stop
    if stop >
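A complete, minimal version of such a trace function (the `stop` cutoff and the recorded `events` list are illustrative stand-ins for the truncated original):

```python
import sys

stop = 0
events = []

def trace(frame, event, arg):
    # Record a bit more context than usual: which function and which
    # event fired. Returning `trace` keeps line-level tracing enabled.
    global stop
    if stop > 10:      # hypothetical cutoff so the output stays short
        return None
    stop += 1
    events.append((frame.f_code.co_name, event))
    return trace

def f(x):
    return x + 1

sys.settrace(trace)
f(1)
sys.settrace(None)

assert ("f", "call") in events
```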
New submission from Sebastian Berg :
Starting here, but there could be Cython interaction or something else in
theory. But when running the following:
* Python 3.10.1 (not 3.9.9, debug version or not)
* Setting a tracing function (not setting a trace-function will fix the issue)
* Running
Sebastian Berg added the comment:
Btw. huge thanks for looking into this! Let me know if I can try to help out
(I can make do with static metatypes, but my story seems much clearer if I
could say: Well with Py 3.11 you can, and probably should, do it dynamically.).
I had lost a lot of
Sebastian Berg added the comment:
Well, what we need is a way to say: I am calling `type.__new__` (i.e.
PyType_FromSpec) on purpose from (effectively) my own `mytype.__new__`?
That is, because right now I assume we want to protect users from calling
PyType_FromSpec thinking that it is
Sebastian Berg added the comment:
Fully agree! In the end, `PyType_FromSpec` replaces `type.__new__()` (and
init I guess) when working in C. In Python, we would call `type.__new__`
(maybe via super) from the `metatype.__new__`, but right now, in C, the
metatype cannot reliably use
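The Python-level pattern being described, a metaclass whose `__new__` cooperatively calls `type.__new__` via `super()`, looks like this (a sketch; the `tag` attribute is a made-up customization, and `FromSpec` has no direct C equivalent of this today):

```python
class Meta(type):
    def __new__(mcls, name, bases, ns):
        # The metaclass calls type.__new__ (via super) on purpose,
        # then customizes the freshly created class.
        cls = super().__new__(mcls, name, bases, ns)
        cls.tag = "made-by-meta"
        return cls

class C(metaclass=Meta):
    pass

assert type(C) is Meta
assert C.tag == "made-by-meta"
```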
Sebastian Berg added the comment:
It is probably early, but I am starting to like the idea of a "C MetaClass
factory" helper/indicator.
It seems to me that yes, at least `tp_new` cannot be called reasonably for a
class that is created in C, it is just too confusing/awkward to t
Sebastian Berg added the comment:
Sorry, I need some time to dive back into this, so some things might be garbled
:). Yes, I do agree supporting a custom `tp_new` here seems incredibly tricky.
I have not thought about the implications of this, though.
> guess the safest option is to f
New submission from Sebastian Berg :
`PyType_FromSpec` fails to take care of metaclasses.
https://bugs.python.org/issue15870
asks to create a new API to pass in the metaclass. This issue is only about
"inheriting" the metaclass of the bases correctly. Currently, Pytho
Sebastian Berg added the comment:
Yeah, I will try and have a look. I had posted the patch, because the test
looked like a bit of a larger chunk of work ;).
> And I'm surprised that you're surprised :)
:). I am coming from a completely different angle, probably. Just if you
Sebastian Berg added the comment:
> But if tp_name is created dynamically, it could lead to a dangling pointer.
I would guess this is probably just an oversight/bug since the main aim was to
move towards heap-types, and opened an issue: https://bugs.python.org/issue45
New submission from Sebastian Berg :
As noted in the issue: https://bugs.python.org/issue15870#msg402800
`PyType_FromSpec` assumes that the `name` passed is persistent for the program
lifetime. This seems wrong/unnecessary: We are creating a heap-type, a
heap-type's name is stored
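The Python-level behaviour this argues for: a dynamically created (heap) type owns a copy of its name, so the caller's string need not outlive the call. A sketch:

```python
# Build the name dynamically, create the class, then drop our reference;
# the heap type keeps its own copy of the name.
name = "Dyn" + "amic"
cls = type(name, (), {})
del name
assert cls.__name__ == "Dynamic"
assert cls.__qualname__ == "Dynamic"
```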
Sebastian Berg added the comment:
Just to note, that there are two – somewhat distinct – issues here in my
opinion:
1. `FromSpec` does not scan `bases` for the correct metaclass, which it could;
this could even be considered a bug?
2. You cannot pass in a desired metaclass, which may require
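Point 1 is exactly what the class statement (via `type.__new__`) already does in Python. A sketch of the "most derived metaclass" scan that `FromSpec` could mirror (this roughly follows CPython's private `_PyType_CalculateMetaclass` helper; the function name here is made up):

```python
class Meta(type):
    pass

class Base(metaclass=Meta):
    pass

def most_derived_metaclass(bases):
    # Pick the most derived metaclass among the bases, or fail on a
    # genuine metaclass conflict.
    winner = type
    for base in bases:
        meta = type(base)
        if issubclass(meta, winner):
            winner = meta
        elif not issubclass(winner, meta):
            raise TypeError("metaclass conflict among bases")
    return winner

assert most_derived_metaclass((Base,)) is Meta
assert most_derived_metaclass((object,)) is type
```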
Sebastian Berg added the comment:
I can make a PR from the patch (and add the `Py_tp_metaclass` slot if desired)
with a basic test here, if that is what is blocking things.
Fixing the type and size of the allocation (as the patch does) would allow me
to give people a way to create a new
Sebastian Berg added the comment:
I am still fighting with this (and the issues surrounding it) for NumPy. The
main point is that my new DTypes in NumPy are metaclasses that extend the
(heap)type struct.
That just feels right and matches the structure perfectly, but seems to get
awkward
Sebastian Berg added the comment:
Thanks for looking into it!
`cpow` is indeed complicated. We had some discussion in NumPy about `0**y`
branch cuts (I did not yet finish it, because thinking about the branch cuts is
tricky).
It is almost reassuring that the C standard also hedges out
Sebastian Berg added the comment:
(Sorry for the spam. I think we can/should just hard-code the expected values
in the NumPy test-suite. So this is not actually an issue for NumPy and
probably just warrants a double-check that the behaviour change is desirable
Sebastian Berg added the comment:
Hmm, sorry, I overshot/misread :(.
The thing that the NumPy test-suite trips over is that:
`c_powi(inf+0j, 3)`
seems to not raise, but:
`_Py_c_pow(inf+0j, 3.+0j)`
(or nan+0.j rather than inf+0j)
does seem to raise (returning `nan+nanj` in both cases
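The two paths can be probed from Python: a small integer exponent (historically the `c_powi` path) versus a complex exponent (the general `_Py_c_pow` path). Since the exact outcome differs by Python version, this sketch only classifies the kind of outcome rather than asserting a value (`cpow_outcome` is a made-up helper name):

```python
def cpow_outcome(base, exp):
    # Classify: does the power return a value, or raise?
    try:
        return ("value", base ** exp)
    except (OverflowError, ValueError) as e:
        return (type(e).__name__, None)

inf = float("inf")
print(cpow_outcome(complex(inf, 0.0), 3))       # integer-exponent path
print(cpow_outcome(complex(inf, 0.0), 3 + 0j))  # complex-exponent path
```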
Sebastian Berg added the comment:
The fix broke NumPy (see also https://github.com/numpy/numpy/pull/19612)
It seems incorrect. After all, it doesn't matter much whether the float can be
converted to an integer correctly (or even if it returns an undefined value),
so long as `int_
New submission from Sebastian Berg :
`pybytes_concat` currently uses the following code to get the data:
va.len = -1;
vb.len = -1;
if (PyObject_GetBuffer(a, &va, PyBUF_SIMPLE) != 0 ||
    PyObject_GetBuffer(b, &vb, PyBUF_SIMPLE) != 0) {
    PyErr_Format(PyExc_T
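The Python-level equivalent of acquiring two `PyBUF_SIMPLE` buffers (via `memoryview`) shows the cleanup discipline the C code needs: every successfully acquired buffer must be released exactly once, even if the second acquisition fails. A sketch:

```python
a = bytearray(b"hello ")
b = b"world"

# The with-statement releases each acquired buffer exactly once,
# mirroring what the C code must do with PyBuffer_Release().
with memoryview(a) as va, memoryview(b) as vb:
    result = bytes(va) + bytes(vb)

assert result == b"hello world"
```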
Change by Sebastian Berg :
--
nosy: +seberg
Python tracker <https://bugs.python.org/issue40522>
Sebastian Berg added the comment:
In NumPy ufuncs and datatype casting (iteration) we have the following setup:
user-code (releasing GIL) -> NumPy API -> 3rd-party C-function
(in the ufunc code, numpy is the one releasing the GIL, although others, such
as numba probably hook
New submission from Sebastian Berg :
In https://bugs.python.org/issue40826 it was defined that
`PyOS_InterruptOccurred()` can only be called with a GIL. NumPy had a few
places with very unsafe sigint handling (not thread-safe). But generally when
we are in a situation that catching sigints
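For illustration, the Python-level analogue of catching a SIGINT with a custom handler (the `flag` dict and `handler` name are illustrative); in CPython, Python-level signal handlers always run in the main thread, between bytecode instructions, which is the serialization the C-level discussion is about:

```python
import signal

flag = {"hit": False}

def handler(signum, frame):
    # Runs in the main thread at a bytecode boundary.
    flag["hit"] = True

old = signal.signal(signal.SIGINT, handler)
try:
    signal.raise_signal(signal.SIGINT)  # deliver SIGINT to this process
finally:
    signal.signal(signal.SIGINT, old)   # restore the previous handler

assert flag["hit"]
```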
Sebastian Berg added the comment:
Ok, I will just close it. It is painfully clear that e.g. `mmap` uses it this
way to prohibit closing, and also `memoryview` has all the machinery necessary
to do counting of how many exports, etc. exists.
I admit, this still rubs me the wrong way, and I
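The export-counting machinery mentioned for `memoryview` is visible from Python with `bytearray`, which refuses to resize while a buffer is exported. A sketch:

```python
ba = bytearray(b"abc")
m = memoryview(ba)   # bytearray's export count is now 1

try:
    ba += b"d"       # resizing while exported must fail
except BufferError:
    resized_while_exported = False
else:
    resized_while_exported = True

m.release()          # drop the export
ba += b"d"           # now resizing is fine

assert not resized_while_exported
assert ba == b"abcd"
```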
Sebastian Berg added the comment:
I went through Python, `array` seems to not break the logic. pickling has a
comment which specifically wants to run into the argument parsing corner case
above (I am not sure that it is really important). However,
`Modules/_testbuffer.c` (which is just test
Sebastian Berg added the comment:
Hmmm, it seems I had missed this chunk of PEP 3118 before:
> Exporters will need to define a bf_releasebuffer function if they can
> re-allocate their memory, strides, shape, suboffsets, or format variables
> which they might share through t
New submission from Sebastian Berg :
The current documentation of ``PyBuffer_Release()`` and the PEP is a bit fuzzy
about what the function can and cannot do.
When an object exposes the buffer interface, I believe it should always return
a `view` (in NumPy speak) of its own data, i.e. the
Change by Sebastian Berg :
--
keywords: +patch
pull_requests: +17412
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/18017
New submission from Sebastian Berg :
This is mainly an information request, so sorry if it's a bit beside the point
(I do not mind if you just close it). But it seemed a bit too specific to get
answers in most places...
In Python you use argument clinic, which supports `METH_FASTCALL`, that
Sebastian Berg added the comment:
Thanks for the quick responses.
@Victor Stinner, I suppose you could change `numbers.Complex.__bool__()` by
adding the no-op bool to make it: `bool(self != 0)`.
But I am not sure I feel it is necessary. NumPy is a bit strange in that it
uses its own
Sebastian Berg added the comment:
Fair enough, we had code that does it the other way, so it seemed "too obvious"
since the current check seems mainly useful with few kwargs. However, a single
kwarg is super common in Python, while many seem super rare (in any argument
clini
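A Python sketch of the identity-then-equality lookup strategy being discussed (the helper name `find_keyword` is made up): interned keyword names almost always match by identity, so a full identity pass first, with equality only as a fallback, avoids paying the comparison cost per name:

```python
def find_keyword(kwnames, key):
    # Fast pass: interned strings usually compare equal by identity.
    for i, name in enumerate(kwnames):
        if name is key:
            return i
    # Slow pass: only reached for non-interned (or subclassed str) keys.
    for i, name in enumerate(kwnames):
        if name == key:
            return i
    return -1

kwnames = ("axis", "dtype", "out")
assert find_keyword(kwnames, "axis") == 0
# A dynamically built (non-interned) string still matches via equality:
assert find_keyword(kwnames, "".join(["o", "ut"])) == 2
assert find_keyword(kwnames, "missing") == -1
```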
New submission from Sebastian Berg :
The keyword argument extraction/finding function seems to have a performance
bug/enhancement (unless I am missing something here). It reads:
```
for (i = 0; i < nkwargs; i++) {
    PyObject *kwname = PyTuple_GET_ITEM(kwnames, i);
    /*
```
Change by Sebastian Berg :
--
keywords: +patch
pull_requests: +17049
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/17576
Sebastian Berg added the comment:
I applaud the stricter rules in general, as Mark noted nicely, the issue is
that `__index__` is maybe a strange way to achieve that for bools (it is not
like `123` is a clean bool)? `__nonzero__` coerces to bools, there is no
`__bool__` to convert to bool
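The distinction in Python terms: `__index__` happily treats `True` as the integer 1 (since `bool` subclasses `int`), while truth testing via `__bool__` is the only genuine conversion *to* a bool. A sketch:

```python
import operator

# bool is an int subclass, so __index__ yields 1/0 for True/False ...
assert operator.index(True) == 1
assert operator.index(False) == 0

# ... while an arbitrary integer like 123 is not a "clean" bool;
# truth testing is the coercion in the other direction.
assert bool(123) is True
assert bool(0) is False
```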
Change by Sebastian Berg :
--
nosy: +seberg
Python tracker <https://bugs.python.org/issue37980>
Sebastian Berg added the comment:
To make warning testing saner, in NumPy we added basically my own version of
`catch_warnings` on steroids, which needed/will need changing because of this.
Unless I missed it somewhere, this change should maybe be put into the release
notes to make it a bit
Sebastian Berg added the comment:
I do not have an opinion either way, but I do think there is a small difference
here compared to the other issue. With the other issue, there are cases where
we cannot set the strides correctly.
If you ask numpy directly whether the array is contiguous (or
Sebastian Berg added the comment:
NumPy does not understand suboffsets. The buffers we create will always have
them NULL. The other way around, to be honest, I think NumPy is probably
ignoring the whole fact that they might exist at all :/. That really needs to be
fixed if it is the case
Sebastian Berg added the comment:
Antoine, sounds good to me, I don't mind this being in Python sooner rather
than later; for NumPy itself it does not matter, I think. I just wanted to warn
that there were problems when we first tried to switch in NumPy, which, if I
remember correctly, i
Sebastian Berg added the comment:
@pitrou, yes of course. This would make Python do the same thing as NumPy does
(currently only with that compile flag given).
About the time schedule, I think I will try to see if some other numpy dev has
an opinion. Plus, should look into documenting it for
Sebastian Berg added the comment:
NumPy 1.9 was only released recently, so 1.10 might be a while. If no
problems show up during release or until then, we will likely switch it
by then. But that could end up being a year from now, so I am not sure
if 3.6 might not fit better. The problems
Sebastian Berg added the comment:
Yeah, the code does much the same as the old NumPy code (at least most of the
same funny little things, though I seem to remember the old NumPy code had
something yet a bit weirder; I would have to check).
To be honest, I do not know. It isn't implausible
Changes by Sebastian Berg :
Added file: http://bugs.python.org/file36680/contiguous.py
Python tracker <http://bugs.python.org/issue22445>
Changes by Sebastian Berg :
Added file: http://bugs.python.org/file36678/contiguous.py
Changes by Sebastian Berg :
Added file: http://bugs.python.org/file36677/relaxed-strides-checking.patch
Sebastian Berg added the comment:
I am very sorry. The attached patch fixes this (not sure if quite right, but if
anything should be more general than necessary). One test fails, but it looks
like exactly the intended change.
--
Added file: http://bugs.python.org/file36676/relaxed
Sebastian Berg added the comment:
An extra dimension is certainly not irrelevant! The strides *are* valid
and NumPy currently actually commonly creates such arrays when slicing.
The question is whether or not we want to ignore them for contiguity
checks even if they have no effect on the memory
Sebastian Berg added the comment:
Well, the 9223372036854775807 is certainly no good for production code and we
would never have it in a release version, it is just there currently to expose
if there are more problems. However I don't care what happens on overflow (as
long as it is n
Sebastian Berg added the comment:
To be clear, the important part here, is that to me all elements *can* be
accessed using that scheme. It is not correct to assume that `stride[-1]` or
`stride[0]` is actually equal to `itemsize`.
In other words, you have to be able to pass the pointer to the
Sebastian Berg added the comment:
#12845 should be closed; it seems like a bug in some old version. The definition
now is simply that the array is contiguous if you can legally access it in a
contiguous fashion. Which means first stride is itemsize, second is
itemsize*shape[0] for Fortran
New submission from Sebastian Berg:
In NumPy we decided some time ago that if you have a multi-dimensional buffer,
shaped for example 1x10, then this buffer should be considered both C- and
F-contiguous. Currently, some buffers which can be used validly in a contiguous
fashion are rejected
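`memoryview` can illustrate the 1 x 10 case from Python. Note this sketch only asserts the C-contiguity flag, since whether CPython reports such a buffer as *both* C- and F-contiguous depends on whether the relaxed check discussed here is in effect:

```python
# A 1 x 10 buffer of bytes: a single contiguous run of memory.
m = memoryview(bytes(10)).cast("B", shape=[1, 10])

assert m.c_contiguous                    # one contiguous C-order run
print("f_contiguous:", m.f_contiguous)   # strict stride rules may say False
```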
Sebastian Berg added the comment:
Thanks, yes, you are right; I should have googled a bit more anyway. Though I did
not find much on the hashable vs unhashable itself, so if I ever stumble across
it again, I will write a mail...
Sebastian Berg added the comment:
This is closed, and maybe I am missing something. But from a general point of
view, why does hashing of NaN not raise an error as it did for decimals, i.e.
why was this not resolved exactly the other way around? I am mostly just
wondering about this; it is not
New submission from Sebastian Berg:
`warnings.simplefilter` does not validate that the category passed in is
actually a class. This means that an invalid category leads to a `TypeError`
whenever a warning would otherwise occur, due to the `issubclass` check failing.
It is a very small thing, but
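The failure mode described can be demonstrated (this assumes the behaviour of the pure-Python `warnings` module at the time of the report; a later fix might instead reject the category up front, which the sketch also tolerates):

```python
import warnings

failure_point = None
with warnings.catch_warnings():
    try:
        # If validation were added, this call would raise immediately.
        warnings.simplefilter("ignore", category="not a class")
    except (TypeError, AssertionError):
        failure_point = "at simplefilter()"
    else:
        try:
            # Otherwise the bad filter only blows up once a warning
            # actually fires and issubclass() sees a non-class.
            warnings.warn("boom")
        except TypeError:
            failure_point = "at warn()"

print("invalid category rejected:", failure_point)
assert failure_point is not None
```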