Good to hear. We have an RAII-style helper that we use in C++ to make
it easier to acquire and release the GIL in functions that need it:

https://github.com/apache/arrow/blob/master/cpp/src/arrow/python/common.h#L74
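
The idea is just a small RAII wrapper around the PyGILState calls. A
minimal sketch of the pattern (GilGuard is an illustrative name here,
not necessarily the exact class in common.h):

#include <Python.h>

class GilGuard {
 public:
  // Acquire the GIL on construction, release it on destruction, so the
  // GIL is released on every return path.
  GilGuard() : state_(PyGILState_Ensure()) {}
  ~GilGuard() { PyGILState_Release(state_); }

  // Non-copyable: a copy would release the same GIL state twice.
  GilGuard(const GilGuard&) = delete;
  GilGuard& operator=(const GilGuard&) = delete;

 private:
  PyGILState_STATE state_;
};

With something like this, a function only needs to declare a GilGuard
at the top instead of pairing Ensure/Release calls by hand.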

On Wed, Oct 28, 2020 at 2:08 PM James Thomas <jamesjoetho...@gmail.com> wrote:
>
> Thanks, Wes. Wrapping my C++ function with
>
> PyGILState_STATE state = PyGILState_Ensure();
> ...
> PyGILState_Release(state);
>
> fixed the issue.
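
For completeness, the fixed function ends up looking roughly like the
sketch below: the original cube.cpp from further down in the thread,
with the PyGILState calls added.

#include <Python.h>
#include <arrow/python/pyarrow.h>
#include <arrow/api.h>

extern "C" void print_is_array(PyObject *);

void print_is_array(PyObject *obj) {
  // ctypes drops the GIL before calling into native code, so re-acquire
  // it before touching any CPython or pyarrow APIs.
  PyGILState_STATE state = PyGILState_Ensure();
  arrow::py::import_pyarrow();
  printf("is_array: %d\n", arrow::py::is_array(obj));
  PyGILState_Release(state);
}

A stricter version would also check the return value of
import_pyarrow() before calling is_array().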
>
> On Wed, Oct 28, 2020 at 6:21 AM Wes McKinney <wesmck...@gmail.com> wrote:
>>
>> I haven't tried it myself, but my guess is that your C++ function
>> does not acquire the GIL. When ctypes invokes a native function, it
>> releases the GIL.
>>
>> On Wed, Oct 28, 2020 at 4:29 AM James Thomas <jamesjoetho...@gmail.com> 
>> wrote:
>> >
>> > Hi,
>> >
>> > I am trying to run the following simple example after pip installing 
>> > pandas and pyarrow:
>> >
>> > ---cube.cpp---
>> > #include <Python.h>
>> > #include <arrow/python/pyarrow.h>
>> > #include <arrow/api.h>
>> >
>> > extern "C" void print_is_array(PyObject *);
>> >
>> > void print_is_array(PyObject *obj) {
>> >   arrow::py::import_pyarrow();
>> >   printf("is_array: %d\n", arrow::py::is_array(obj));
>> > }
>> >
>> > ---cube.py---
>> > import ctypes
>> > import pandas as pd
>> > import pyarrow as pa
>> >
>> > c_lib = ctypes.CDLL("./libcube.so")
>> > df = pd.DataFrame({"a": [1, 2, 3]})
>> > table = pa.Table.from_pandas(df)
>> > c_lib.print_is_array(ctypes.py_object(table))
>> >
>> > ---build.sh---
>> > #!/bin/bash
>> > python3 -c 'import pyarrow; pyarrow.create_library_symlinks()'
>> > INC=$(python3 -c 'import pyarrow; print(pyarrow.get_include())')
>> > LIB=$(python3 -c 'import pyarrow; print(pyarrow.get_library_dirs()[0])')
>> > g++ -I$INC -I/usr/include/python3.6m -fPIC cube.cpp -shared -o libcube.so \
>> >   -L$LIB -larrow -larrow_python
>> >
>> > When I run build.sh and then do python3 cube.py, I am seeing a segfault at 
>> > the import_pyarrow() statement in cube.cpp. Am I doing something wrong 
>> > here?
>> >
>> > Thanks,
>> > James
