gmarkall commented on issue #47128: URL: https://github.com/apache/arrow/issues/47128#issuecomment-3085663610
Oops, I forgot to write up the remaining issue - it is:

```
______________________________________ test_numba_memalloc[float32-numba.cuda] _______________________________________

c = 1, dtype = dtype('float32')

    @pytest.mark.parametrize("c", range(len(context_choice_ids)), ids=context_choice_ids)
    @pytest.mark.parametrize("dtype", dtypes, ids=dtypes)
    def test_numba_memalloc(c, dtype):
        ctx, nb_ctx = context_choices[c]
        dtype = np.dtype(dtype)
        # Allocate memory using numba context
        # Warning: this will not be reflected in pyarrow context manager
        # (e.g bytes_allocated does not change)
        size = 10
        mem = nb_ctx.memalloc(size * dtype.itemsize)
        darr = DeviceNDArray((size,), (dtype.itemsize,), dtype, gpu_data=mem)
        darr[:5] = 99
        darr[5:] = 88
        np.testing.assert_equal(darr.copy_to_host()[:5], 99)
        np.testing.assert_equal(darr.copy_to_host()[5:], 88)
        # wrap numba allocated memory with CudaBuffer
>       cbuf = cuda.CudaBuffer.from_numba(mem)

pyarrow/tests/test_cuda_numba_interop.py:178:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

>       if mem.device_pointer.value is None and mem.size==0:
E       AttributeError: 'cuda.bindings.driver.CUdeviceptr' object has no attribute 'value'
```

`memalloc` returns `c_void_p` objects with the ctypes bindings and `CUdeviceptr` objects with the NVIDIA bindings. This is obviously an unpleasant inconsistency, but I've not yet thought of a straightforward fix: other libraries have already adapted to the inconsistency (e.g. https://github.com/rapidsai/rmm), so simply making the return type consistent would break them.
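
In the meantime, a consumer-side workaround seems possible. The sketch below is only an illustration (the helper name is made up, not an existing pyarrow or Numba API): it normalizes the device pointer to a plain `int` before the null check, which sidesteps the `.value` difference between the two bindings.

```python
import ctypes


def device_pointer_as_int(mem):
    """Return the device pointer held by a Numba MemoryPointer as an int,
    whether Numba is using the ctypes bindings (c_void_p) or the NVIDIA
    cuda-python bindings (CUdeviceptr)."""
    ptr = mem.device_pointer
    if isinstance(ptr, ctypes.c_void_p):
        # ctypes bindings: .value is None for a NULL pointer
        return ptr.value or 0
    # NVIDIA bindings: CUdeviceptr converts to a plain integer via int()
    return int(ptr)


# The failing check in CudaBuffer.from_numba could then be expressed as e.g.:
#     if device_pointer_as_int(mem) == 0 and mem.size == 0:
```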