[ https://issues.apache.org/jira/browse/ARROW-5952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16896980#comment-16896980 ]

Joris Van den Bossche commented on ARROW-5952:
----------------------------------------------

[~nugend] Thanks for the report!

Looking at your example, I noticed that the original table and the table read 
from the file are different in the following way (building on the code from 
above):

{code}
new_table = pa.ipc.open_file('bar').read_all()
original_column = table.column(0)
new_column = new_table.column(0)
{code}

{code}
>>> original_column 
<pyarrow.lib.ChunkedArray object at 0x7f06cc5d1e40>
[

  -- dictionary:
0 nulls
  -- indices:
    []
]
>>> new_column 
<pyarrow.lib.ChunkedArray object at 0x7f06c7029540>
[

]
>>> len(original_column)
0
>>> len(new_column)
0
>>> len(original_column.chunks)
1
>>> len(new_column.chunks)
0
{code}

So both columns have the same type and a length of 0, but the original column 
has 1 chunk of length 0, while the table read from the file through IPC has a 
column consisting of a ChunkedArray with 0 chunks.

So in that way, we can reproduce the segfault with a small snippet as follows:

{code}
>>> import pyarrow as pa
>>> array_0_chunks = pa.chunked_array([], pa.dictionary(pa.int8(), pa.null()))
>>> table = pa.table({'a': array_0_chunks}) 
>>> table.to_pandas()                                                           
>>>                                                                             
>>>                                                    
Segmentation fault (core dumped)
{code}

I am not fully sure whether the IPC reader is supposed to create an array with 
0 chunks, but I suppose such a ChunkedArray with 0 chunks is a valid object 
(for simple types such as int64 it also does not segfault). 
The segfault on the pandas conversion is certainly a bug, though: other 
arrow->python conversions such as {{to_pydict}} seem to work, so it might be 
specific to the pandas conversion code.

> [Python] Segfault when reading empty table with category as pandas dataframe
> ----------------------------------------------------------------------------
>
>                 Key: ARROW-5952
>                 URL: https://issues.apache.org/jira/browse/ARROW-5952
>             Project: Apache Arrow
>          Issue Type: Bug
>          Components: Python
>    Affects Versions: 0.14.0, 0.14.1
>         Environment: Linux 3.10.0-327.36.3.el7.x86_64
> Python 3.6.8
> Pandas 0.24.2
> Pyarrow 0.14.0
>            Reporter: Daniel Nugent
>            Priority: Major
>
> I have two short sample programs which demonstrate the issue:
> {code:java}
> import pyarrow as pa
> import pandas as pd
> empty = pd.DataFrame({'foo':[]},dtype='category')
> table = pa.Table.from_pandas(empty)
> outfile = pa.output_stream('bar')
> writer = pa.RecordBatchFileWriter(outfile,table.schema)
> writer.write(table)
> writer.close()
> {code}
> {code:java}
> import pyarrow as pa
> pa.ipc.open_file('bar').read_pandas()
> Segmentation fault
> {code}
> My apologies if this was already reported elsewhere, I searched but could not 
> find an issue which seemed to refer to the same behavior.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)
