Dima Ryazanov created ARROW-1940:
------------------------------------

             Summary: [Python] Extra metadata gets added after multiple conversions between pd.DataFrame and pa.Table
                 Key: ARROW-1940
                 URL: https://issues.apache.org/jira/browse/ARROW-1940
             Project: Apache Arrow
          Issue Type: Bug
          Components: Python
    Affects Versions: 0.8.0
            Reporter: Dima Ryazanov
            Priority: Minor
         Attachments: fail.py

We have a unit test that verifies that loading a dataframe from a .parq file 
and saving it back with no changes produces the same result as the original 
file. It started failing with pyarrow 0.8.0.

After digging into it, I discovered that after the first conversion from 
pd.DataFrame to pa.Table, the table contains the following metadata (among 
other things):

{code}
"column_indexes": [{"metadata": null, "field_name": null, "name": null, 
"numpy_type": "object", "pandas_type": "bytes"}]
{code}

However, after converting it to pd.DataFrame and back into a pa.Table for the 
second time, the metadata gets an encoding field:

{code}
"column_indexes": [{"metadata": {"encoding": "UTF-8"}, "field_name": null, 
"name": null, "numpy_type": "object", "pandas_type": "unicode"}]
{code}

See the attached file for a test case.

Specifically, a dataframe->table->dataframe->table round trip produces a different result from a single dataframe->table conversion, which I think is unexpected.



