[ https://issues.apache.org/jira/browse/ARROW-8208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17067016#comment-17067016 ]

Joris Van den Bossche commented on ARROW-8208:
----------------------------------------------

This is now implemented, and also already available in the Python bindings to 
the new Datasets framework.

In the released pyarrow 0.16.0, you can use the datasets API to filter on 
non-partition-key columns; it looks like this (an example I ran locally with 
NYC taxi data):

{code}
import pyarrow.dataset as ds

dataset = ds.dataset("nyc-taxi-data/", format="parquet", partitioning="hive")

dataset.to_table(filter=ds.field("passenger_count") > 8)
{code}

So the above is already possible with pyarrow 0.16. In the upcoming pyarrow 
0.17, we will also provide this functionality through the existing 
{{ParquetDataset}} API that you were using, but that is still work in progress 
(ARROW-8039, https://github.com/apache/arrow/pull/6303).
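For context on why such filters can skip whole row groups: Parquet stores 
min/max statistics per row group, so a reader can prune any group whose 
statistics rule out a match. A minimal pure-Python sketch of that pruning idea 
(hypothetical helper and fake statistics for illustration; this is not 
pyarrow's actual implementation):

```python
# Illustrative sketch: decide which row groups a filter can skip, using
# per-row-group (min, max) statistics like those stored in Parquet metadata.
# Hypothetical helper, NOT pyarrow code.

def row_group_matches(stats, column, op, value):
    """Return True if the row group MIGHT contain matching rows."""
    lo, hi = stats[column]          # (min_value, max_value) for the column
    if op == "=":
        return lo <= value <= hi
    if op == ">":
        return hi > value
    if op == "<":
        return lo < value
    return True                     # unknown op: cannot prune safely

# Two fake row groups with statistics for a "passenger_count" column.
row_groups = [
    {"passenger_count": (1, 4)},    # row group 0: max is 4
    {"passenger_count": (2, 9)},    # row group 1: max is 9
]

keep = [i for i, stats in enumerate(row_groups)
        if row_group_matches(stats, "passenger_count", ">", 8)]
print(keep)  # only row group 1 can contain passenger_count > 8
```

Note the conservative semantics: statistics can only prove a group contains 
no matches; rows inside a kept group must still be filtered individually.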

> [PYTHON] Row Group Filtering With ParquetDataset
> ------------------------------------------------
>
>                 Key: ARROW-8208
>                 URL: https://issues.apache.org/jira/browse/ARROW-8208
>             Project: Apache Arrow
>          Issue Type: New Feature
>            Reporter: Christophe Clienti
>            Priority: Major
>              Labels: dataset, dataset-parquet-read
>
> Hello,
> I tried to use the row_group filtering at the file level with an instance of 
> ParquetDataset without success.
> I've tested the workaround proposed here:
>  [https://github.com/pandas-dev/pandas/issues/26551#issuecomment-497039883]
> But I wonder if it can work on a file as I get an exception with the 
> following code:
> {code:python}
> ParquetDataset('data.parquet',
>                filters=[('ticker', '=', 'AAPL')]).read().to_pandas()
> {code}
> {noformat}
> AttributeError: 'NoneType' object has no attribute 'filter_accepts_partition'
> {noformat}
> I read the documentation, and the filtering seems to work only on partitioned 
> datasets. Moreover, I found some related information in the following JIRA 
> ticket: ARROW-1796
> So I'm not sure whether a ParquetDataset can use row-group statistics to 
> filter specific row groups in a file (whether it is part of a dataset or not).
> As mentioned in ARROW-1796, I tried with fastparquet, and after fixing a bug 
> (statistics.min instead of statistics.min_value), I was able to apply the 
> row-group filtering.
> Today, with pyarrow, I'm forced to filter the row groups in each file 
> manually, which prevents me from using the ParquetDataset partition-filtering 
> functionality.
> Row groups are really useful because they prevent filling the filesystem 
> with many small files...



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
