[
https://issues.apache.org/jira/browse/PARQUET-98?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14353436#comment-14353436
]
Alex Levenson commented on PARQUET-98:
--------------------------------------
Yes, I think that is the main difference. However, I'm a bit surprised it has
this much performance impact -- for Parquet to skip a value, it essentially has
to deserialize it, so even for the values we aren't reading, we still have to
skip past them.
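To make that concrete, here is an illustrative sketch (not actual library code)
of why skipping isn't free: the predicate can only see a value after the column
reader has decoded it. {{ColumnReader}} and {{Binary}} are from parquet-column
in the 1.6.x line; the loop structure is mine.
{code:java}
import parquet.column.ColumnReader;
import parquet.io.api.Binary;

// Illustrative only: even for rows the filter ends up dropping, the value
// has to be deserialized before the predicate can be evaluated against it.
class FilterCostSketch {
  static long countMatches(ColumnReader reader, Binary wanted) {
    long matches = 0;
    for (long i = 0, n = reader.getTotalValueCount(); i < n; i++) {
      if (wanted.equals(reader.getBinary())) { // decode first, then test
        matches++;
      }
      reader.consume(); // advance to the next value either way
    }
    return matches;
  }
}
{code}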
> filter2 API performance regression
> ----------------------------------
>
> Key: PARQUET-98
> URL: https://issues.apache.org/jira/browse/PARQUET-98
> Project: Parquet
> Issue Type: Bug
> Reporter: Viktor Szathmáry
>
> The new filter API seems to be much slower (or perhaps I'm using it wrong :)
> Code using an UnboundRecordFilter:
> {code:java}
> ColumnRecordFilter.column(column,
>     ColumnPredicates.applyFunctionToBinary(
>         input -> Binary.fromString(value).equals(input)));
> {code}
> vs. code using FilterPredicate:
> {code:java}
> eq(binaryColumn(column), Binary.fromString(value));
> {code}
> The latter is twice as slow on the same Parquet file (built using 1.6.0rc2).
> Note: the reader is constructed using
> {code:java}
> ParquetReader.builder(new ProtoReadSupport(), path).withFilter(filter).build()
> {code}
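> A fuller sketch of the wiring, for reference (the path, column name, and
> value are placeholders; imports assume the pre-Apache {{parquet.*}} package
> layout of the 1.6.x line, and the predicate is wrapped via
> {{FilterCompat.get}} before being handed to the builder):
> {code:java}
> import org.apache.hadoop.fs.Path;
> import parquet.filter2.compat.FilterCompat;
> import parquet.hadoop.ParquetReader;
> import parquet.io.api.Binary;
> import parquet.proto.ProtoReadSupport;
>
> import static parquet.filter2.predicate.FilterApi.binaryColumn;
> import static parquet.filter2.predicate.FilterApi.eq;
>
> public class Filter2Wiring {
>   public static void main(String[] args) throws Exception {
>     FilterCompat.Filter filter = FilterCompat.get(
>         eq(binaryColumn("my_column"), Binary.fromString("my_value")));
>     ParquetReader reader = ParquetReader
>         .builder(new ProtoReadSupport(), new Path("/tmp/data.parquet"))
>         .withFilter(filter)
>         .build();
>     try {
>       while (reader.read() != null) {
>         // drain matching records
>       }
>     } finally {
>       reader.close();
>     }
>   }
> }
> {code}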
> The new filter-API-based approach seems to create a whole lot more garbage
> (perhaps due to reconstructing all the rows?).
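> A crude way to reproduce the comparison is a drain-the-reader timing loop
> along these lines (illustrative sketch; {{makeReader(filter)}} is a
> hypothetical helper standing in for the builder call above):
> {code:java}
> // Illustrative timing sketch. makeReader(filter) is a hypothetical helper
> // wrapping the ParquetReader.builder(...) call shown above; running with
> // -verbose:gc also makes any extra allocation visible.
> static long timeRead(FilterCompat.Filter filter) throws Exception {
>   long start = System.nanoTime();
>   ParquetReader reader = makeReader(filter);
>   try {
>     while (reader.read() != null) {
>       // drain all matching records
>     }
>   } finally {
>     reader.close();
>   }
>   return System.nanoTime() - start;
> }
> {code}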
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)