[ 
https://issues.apache.org/jira/browse/SPARK-11153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15005023#comment-15005023
 ] 

Cheng Lian edited comment on SPARK-11153 at 11/15/15 11:43 AM:
---------------------------------------------------------------

Good question. We tried, see [PR 
#9225|https://github.com/apache/spark/pull/9225], but ultimately decided not to. 
There are several reasons:

# Parquet-mr 1.8.1 is not in very good shape, and we don't have much time left 
to test it before the 1.6 release. The two major issues we found are:
#- We observed a performance regression for full Parquet table scans, but the 
cause is still unknown.
#- Parquet-mr 1.8.1 introduced PARQUET-363, which causes a performance 
regression for queries like {{SELECT COUNT(1) FROM t}}. (This issue can be 
worked around with a hack, though.)
# Parquet-mr 1.8.1 hasn't been widely deployed yet (e.g., Hive 1.2.1 still 
uses 1.6.0), which means that most Parquet files out there still suffer from 
the corrupted statistics issue. Thus, staying on parquet-mr 1.7.0 in Spark 1.6 
while disabling filter push-down for string/binary columns doesn't have much 
negative impact. (A sketch of the user-level workaround follows below.)
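
For reference, here is a minimal sketch of the user-level workaround: turning 
off Parquet filter push-down for the whole session via the existing 
{{spark.sql.parquet.filterPushdown}} option. The table path and column name 
below are placeholders for illustration.

{code:scala}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

val sc = new SparkContext(new SparkConf().setAppName("pushdown-workaround"))
val sqlContext = new SQLContext(sc)

// Disable Parquet filter push-down for this session so that potentially
// corrupted min/max statistics are never consulted. All row groups are read
// and filters are evaluated by Spark itself, trading scan speed for
// correctness.
sqlContext.setConf("spark.sql.parquet.filterPushdown", "false")

// "/path/to/table" and "name" are placeholders, not real paths/columns.
val df = sqlContext.read.parquet("/path/to/table")
df.filter(df("name") === "some value").show()
{code}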



> Turns off Parquet filter push-down for string and binary columns
> ----------------------------------------------------------------
>
>                 Key: SPARK-11153
>                 URL: https://issues.apache.org/jira/browse/SPARK-11153
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.5.0, 1.5.1
>            Reporter: Cheng Lian
>            Assignee: Cheng Lian
>            Priority: Blocker
>             Fix For: 1.5.2, 1.6.0
>
>
> Due to PARQUET-251, {{BINARY}} columns in existing Parquet files may be 
> written with corrupted statistics information. This information is used by 
> the filter push-down optimization. Since Spark 1.5 turns on Parquet filter 
> push-down by default, we may end up with wrong query results. PARQUET-251 has 
> been fixed in parquet-mr 1.8.1, but Spark 1.5 is still using 1.7.0.
>
> Note that corrupted Parquet files of this kind can be produced by any 
> Parquet data model.
>
> This affects all Spark SQL data types that can be mapped to Parquet 
> {{BINARY}}, namely:
> - {{StringType}}
> - {{BinaryType}}
> - {{DecimalType}} (though Spark SQL doesn't currently support pushing down 
>   {{DecimalType}} columns)
>
> To avoid wrong query results, we should disable filter push-down for columns 
> of {{StringType}} and {{BinaryType}} until we upgrade to parquet-mr 1.8.
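
To make the mitigation concrete, here is a simplified, hypothetical sketch of 
such a guard (illustrative only, not the actual Spark patch): it rejects any 
filter that references a column stored as Parquet {{BINARY}}, so those filters 
stay out of the push-down set and are evaluated by Spark after the scan.

{code:scala}
import org.apache.spark.sql.sources._
import org.apache.spark.sql.types._

// Hypothetical guard: decide whether a data source Filter is safe to convert
// into a Parquet predicate. Filters referencing StringType/BinaryType columns
// (stored as Parquet BINARY) are rejected, so corrupted min/max statistics
// (PARQUET-251) can never cause row groups to be skipped incorrectly.
object BinaryPushDownGuard {
  private def binaryBacked(schema: StructType, attr: String): Boolean =
    schema.exists(f =>
      f.name == attr && (f.dataType == StringType || f.dataType == BinaryType))

  def safeToPushDown(schema: StructType)(filter: Filter): Boolean = filter match {
    case EqualTo(a, _)            => !binaryBacked(schema, a)
    case LessThan(a, _)           => !binaryBacked(schema, a)
    case LessThanOrEqual(a, _)    => !binaryBacked(schema, a)
    case GreaterThan(a, _)        => !binaryBacked(schema, a)
    case GreaterThanOrEqual(a, _) => !binaryBacked(schema, a)
    case In(a, _)                 => !binaryBacked(schema, a)
    case And(l, r)                => safeToPushDown(schema)(l) && safeToPushDown(schema)(r)
    case Or(l, r)                 => safeToPushDown(schema)(l) && safeToPushDown(schema)(r)
    case Not(child)               => safeToPushDown(schema)(child)
    case _                        => false // conservative: don't push anything else down
  }
}
{code}

Only filters for which {{safeToPushDown}} returns true would be handed to 
Parquet; everything else would still be evaluated by Spark after the scan.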


