[ https://issues.apache.org/jira/browse/ARROW-7706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17320194#comment-17320194 ]

Joris Van den Bossche edited comment on ARROW-7706 at 4/13/21, 2:03 PM:
------------------------------------------------------------------------

For reference here, I opened a new issue ARROW-12358 on this topic for the new 
Datasets API, which currently behaves differently from pandas' {{to_parquet}} 
and {{pyarrow.parquet.write_to_dataset}}: {{pyarrow.dataset.write_dataset}} by 
default uses a fixed filename template, so in practice it will often overwrite 
existing data rather than silently double it (which is probably also not the 
desired default behaviour).
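A minimal sketch of that default behaviour (paths and values are illustrative):

{code:python}
import pyarrow as pa
import pyarrow.dataset as ds

table = pa.table({'col_a': [1, 1, 2], 'value': ['x', 'y', 'z']})
part = ds.partitioning(pa.schema([('col_a', pa.int64())]), flavor='hive')

# First call creates e.g. /tmp/dataset/col_a=1/part-0.parquet
ds.write_dataset(table, '/tmp/dataset', format='parquet', partitioning=part)
# Second call generates the same "part-{i}" filenames, so it replaces the
# existing files instead of adding new ones alongside them.
ds.write_dataset(table, '/tmp/dataset', format='parquet', partitioning=part)
{code}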



> [Python] saving a dataframe to the same partitioned location silently doubles 
> the data
> --------------------------------------------------------------------------------------
>
>                 Key: ARROW-7706
>                 URL: https://issues.apache.org/jira/browse/ARROW-7706
>             Project: Apache Arrow
>          Issue Type: Bug
>          Components: Python
>    Affects Versions: 0.15.1
>            Reporter: Tsvika Shapira
>            Priority: Major
>              Labels: dataset, dataset-parquet-write, parquet
>
> When a user saves a dataframe:
> {code:python}
> import pandas as pd
>
> df1 = pd.DataFrame({'col_a': [1, 1, 2], 'col_b': ['x', 'y', 'z']})  # example data
> df1.to_parquet('/tmp/table', partition_cols=['col_a'], engine='pyarrow')
> {code}
> it will create hive-style sub-directories named {{col_a=val1}}, {{col_a=val2}} 
> in {{/tmp/table}}. Each of them will contain one or more Parquet files with 
> random filenames.
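> After the first write, the layout looks roughly like this (a sketch; the 
> names shown stand in for the randomly generated ones):
> {code:python}
> import os
> # One hive-style directory per value of col_a, each holding a single
> # file with a randomly generated name.
> print(sorted(os.listdir('/tmp/table')))   # ['col_a=1', 'col_a=2']
> print(os.listdir('/tmp/table/col_a=1'))   # ['<random-1>.parquet']
> {code}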
> If a user runs the same command again, the code will reuse the existing 
> sub-directories but write new files with different (random) filenames. As a 
> result, any data loaded from this folder will be wrong: each row will be 
> present twice.
> For example, when using
> {code:python}
> df1.to_parquet('/tmp/table', partition_cols=['col_a'], engine='pyarrow')  # second time
> df2 = pd.read_parquet('/tmp/table', engine='pyarrow')
> assert len(df1) == len(df2)  # fails: df2 holds each row of df1 twice
> {code}
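> One way to see what happened on disk (again a sketch; the file names are 
> placeholders for the random ones):
> {code:python}
> import os
> # After the second write, each partition directory contains one file per
> # run; read_parquet reads all of them, so every row appears twice.
> for part_dir in sorted(os.listdir('/tmp/table')):
>     print(part_dir, os.listdir(os.path.join('/tmp/table', part_dir)))
> # col_a=1 ['<random-1>.parquet', '<random-2>.parquet']
> {code}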
> This is a subtle change in the data that can pass unnoticed.
>  
> I would expect the code to prevent the user from using a non-empty 
> destination as a partitioned target; an explicit overwrite flag could also 
> be useful.
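> A guard along those lines could look like this (a hypothetical wrapper, not 
> an existing pandas or pyarrow option):
> {code:python}
> import os
> import shutil
>
> def safe_to_parquet(df, path, overwrite=False, **kwargs):
>     # Refuse a non-empty partitioned target unless the caller opts in.
>     if os.path.isdir(path) and os.listdir(path):
>         if not overwrite:
>             raise FileExistsError(f"{path} is not empty; pass overwrite=True")
>         shutil.rmtree(path)
>     df.to_parquet(path, **kwargs)
> {code}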


