Github user omalley commented on the issue:

    https://github.com/apache/spark/pull/20511
  
    I'm frustrated with the direction this has gone.
    
    The new reader is much better than the old one, which uses Hive 1.2. ORC 
1.4.3 had a pair of important, but neither large nor complex, fixes. Yet because 
of those fixes, the entire new reader is now being disabled by default in the 
upcoming Spark 2.2.
    
    In particular, the Hive 1.2 ORC code has the following known problems:
    * HIVE-11312 - Char predicate pushdown can incorrectly filter out all rows.
    * HIVE-13083 - Decimal columns can incorrectly suppress the isNonNull 
stream.
    * ORC-101 - Predicate pushdown on bloom filters uses the default charset 
rather than UTF-8.
    * ORC-135 - Predicate pushdown on timestamps doesn't correct for time zones.
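
    For anyone who wants to opt back into the new reader despite the default, a 
sketch of the relevant session settings (assuming the `spark.sql.orc.impl` 
switch that ships alongside the new reader; verify the exact config names 
against your Spark version's documentation):

    ```scala
    // Select the new native ORC reader instead of the Hive 1.2 path.
    spark.conf.set("spark.sql.orc.impl", "native")  // "hive" falls back to the old reader

    // Enable ORC predicate pushdown, which the fixes above make safe to use.
    spark.conf.set("spark.sql.orc.filterPushdown", "true")
    ```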

