Github user dongjoon-hyun commented on the issue:

    https://github.com/apache/spark/pull/18991
  
    Hi, @gatorsmile , @cloud-fan, @rxin , and @omalley .
    
    #19060 shows that the behavior of Apache ORC 1.4.0 predicate push-down is 
correct. #19060 will add more test cases for data source certification. 
In particular, if you want me to add more test cases for ORC predicate push-down, 
please let me know.
    
    So, back to the original issue: I'm not aware of the old case in which 
the old ORC incorrectly filtered out extra rows, but the new Apache ORC 1.4.0 
looks ready for this now. Can we turn on ORC predicate push-down by default in 
Apache Spark?
    
    Enabling it by default will give users more opportunity to test it before 
Apache Spark 2.3.0 (in December). I'm sure the Apache ORC community will help 
us validate this feature so we can get its benefits.


