Hi Cindy,
I don't think cuDF supports Arrow 7.0.0 yet. Even the master branch is still
pinned to 6.0.1 at the moment.
https://github.com/rapidsai/cudf/blob/6bcfc104051f926f46467fc55c456a9b012fc4af/conda/environments/cudf_dev_cuda11.5.yml#L20
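
If pyarrow 7.0.0 ended up in the environment on top of a released cuDF (e.g. from a later pip install), the usual fix is to reinstall cuDF through conda and let the solver pull in the pyarrow build it was compiled against, rather than pinning pyarrow by hand. A rough sketch, assuming a conda environment and the standard RAPIDS channels (adjust to your CUDA/cuDF versions):

    # let conda resolve the pyarrow version that matches the cudf build
    conda install -c rapidsai -c nvidia -c conda-forge cudf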

On Wed, Mar 2, 2022 at 7:21 PM Cindy McMullen <[email protected]> wrote:

> pyarrow-7.0.0
>
>
> /usr/local/cuda/bin/nvcc --version
> nvcc: NVIDIA (R) Cuda compiler driver
> Copyright (c) 2005-2020 NVIDIA Corporation
> Built on Thu_Jun_11_22:26:38_PDT_2020
> Cuda compilation tools, release 11.0, V11.0.194
> Build cuda_11.0_bu.TC445_37.28540450_0
>
>
>
> On Wed, Mar 2, 2022 at 9:09 AM Keith Kraus <[email protected]>
> wrote:
>
>> Hey Cindy,
>>
>> What versions of cuDF and PyArrow do you have installed? I've typically
>> seen this pop up when there's a mismatched version.
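>>
>> If you're not sure what's installed, something like this will show both
>> (assuming a conda environment; pip list works similarly for pip installs):
>>
>>     conda list | grep -iE "cudf|arrow"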
>>
>> -Keith
>>
>> On Tue, Mar 1, 2022 at 8:28 PM Cindy McMullen <[email protected]>
>> wrote:
>>
>>> Hi -
>>>
>>> I'm trying to use the DGL (Deep Graph Library) DGLDataset API with the
>>> RAPIDS cuDF DataFrame API. I'm getting this error:
>>>
>>> module 'pyarrow.lib' has no attribute '_CRecordBatchReader'
>>>
>>>
>>> Wonder if you see anything obvious in the stack trace that might help me 
>>> debug?
>>>
>>>
>>>
>>> Here's the full stack trace:
>>>
>>>
>>> <ipython-input-5-81f2b1833437> in __init__(self)
>>>      10   """
>>>      11   def __init__(self):
>>> ---> 12     super(UserSimsSingleFileDataset, self).__init__(name='UserSimsDataset', verbose=False)
>>>      13
>>>      14     # One quirk of DGLDataset is that process() and __len__ (load, save) are called immediately after super(),
>>>
>>> /opt/conda/lib/python3.7/site-packages/dgl/data/dgl_dataset.py in __init__(self, name, url, raw_dir, save_dir, hash_key, force_reload, verbose)
>>>      91             self._save_dir = save_dir
>>>      92
>>> ---> 93         self._load()
>>>      94
>>>      95     def download(self):
>>>
>>> /opt/conda/lib/python3.7/site-packages/dgl/data/dgl_dataset.py in _load(self)
>>>     176         if not load_flag:
>>>     177             self._download()
>>> --> 178             self.process()
>>>     179             self.save()
>>>     180             if self.verbose:
>>>
>>> <ipython-input-5-81f2b1833437> in process(self)
>>>      20
>>>      21     import gcsfs
>>> ---> 22     import cudf
>>>      23     self.rows_per_batch = 10000
>>>      24     gs = gcsfs.GCSFileSystem()
>>>
>>> /opt/conda/lib/python3.7/site-packages/cudf/__init__.py in <module>
>>>       9 import rmm
>>>      10
>>> ---> 11 from cudf import core, datasets, testing
>>>      12 from cudf._version import get_versions
>>>      13 from cudf.api.extensions import (
>>>
>>> /opt/conda/lib/python3.7/site-packages/cudf/core/__init__.py in <module>
>>>       1 # Copyright (c) 2018-2020, NVIDIA CORPORATION.
>>>       2
>>> ----> 3 from cudf.core import buffer, column, column_accessor, common
>>>       4 from cudf.core.buffer import Buffer
>>>       5 from cudf.core.dataframe import DataFrame, from_pandas, merge
>>>
>>> /opt/conda/lib/python3.7/site-packages/cudf/core/column/__init__.py in <module>
>>>       1 # Copyright (c) 2020-2021, NVIDIA CORPORATION.
>>>       2
>>> ----> 3 from cudf.core.column.categorical import CategoricalColumn
>>>       4 from cudf.core.column.column import (
>>>       5     ColumnBase,
>>>
>>> /opt/conda/lib/python3.7/site-packages/cudf/core/column/categorical.py in <module>
>>>      20
>>>      21 import cudf
>>> ---> 22 from cudf import _lib as libcudf
>>>      23 from cudf._lib.scalar import as_device_scalar
>>>      24 from cudf._lib.transform import bools_to_mask
>>>
>>> /opt/conda/lib/python3.7/site-packages/cudf/_lib/__init__.py in <module>
>>>       2 import numpy as np
>>>       3
>>> ----> 4 from . import (
>>>       5     avro,
>>>       6     binaryop,
>>>
>>> cudf/_lib/gpuarrow.pyx in init cudf._lib.gpuarrow()
>>>
>>> AttributeError: module 'pyarrow.lib' has no attribute '_CRecordBatchReader'
>>>
>>>

-- 
Niranda Perera
https://niranda.dev/
@n1r44 <https://twitter.com/N1R44>
