Github user gatorsmile commented on the issue:

    https://github.com/apache/spark/pull/21320
  
    The feature has been in development for almost two years, and I would feel 
sorry to see it miss the Spark 2.4 release. Personally, I think we should not 
block merging this PR into the Spark 2.4 release, even if the solution might 
still have a few issues to address. This PR only adds a rule and does not touch 
any common code, so Spark users face zero risk if we simply turn off the conf 
`spark.sql.nestedSchemaPruning.enabled` by default. 
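    
    For users who want to opt in once this lands, the flag could be flipped in 
`spark-defaults.conf` (a sketch; the conf key is the one this PR governs, shown 
here with the proposed default reversed by the user):
    
    ```properties
    # Opt in to nested schema pruning (left off by default per this PR)
    spark.sql.nestedSchemaPruning.enabled true
    ```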
    
    More importantly, if we merge it now, we can collect feedback from Spark 
users who are waiting for this feature and fix any remaining holes in the next 
releases. 
    
    @mallman Could you remove the changes made in `ParquetRowConverter.scala` and 
also turn off `spark.sql.nestedSchemaPruning.enabled` by default in this PR?

