aokolnychyi commented on a change in pull request #26751: [SPARK-30107][SQL] Expose nested schema pruning to all V2 sources
URL: https://github.com/apache/spark/pull/26751#discussion_r354285161
 
 

 ##########
 File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/PushDownUtils.scala
 ##########
 @@ -76,28 +78,48 @@ object PushDownUtils extends PredicateHelper {
    * @return the created `ScanConfig`(since column pruning is the last step of operator pushdown),
    *         and new output attributes after column pruning.
    */
-  // TODO: nested column pruning.
   def pruneColumns(
       scanBuilder: ScanBuilder,
       relation: DataSourceV2Relation,
-      exprs: Seq[Expression]): (Scan, Seq[AttributeReference]) = {
+      projects: Seq[NamedExpression],
+      filters: Seq[Expression]): (Scan, Seq[AttributeReference]) = {
     scanBuilder match {
 +      case r: SupportsPushDownRequiredColumns if SQLConf.get.nestedSchemaPruningEnabled =>
+        val rootFields = SchemaPruning.identifyRootFields(projects, filters)
+        val prunedSchema = if (rootFields.nonEmpty) {
 
 Review comment:
   This check was needed to detect whether any nested column was requested; the old rule would not apply otherwise. Here the situation is different: we need to prune top-level columns even if no nested attributes are requested.
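   For illustration only (not code from this PR): a minimal Scala sketch of the idea that top-level columns should still be pruned when the projection and filters reference no nested fields. The helper `pruneTopLevelOnly` and its signature are hypothetical.

```scala
import org.apache.spark.sql.catalyst.expressions.{Expression, NamedExpression}
import org.apache.spark.sql.types.StructType

object TopLevelPruningSketch {
  // Hypothetical helper: keep only the top-level columns that the projection
  // or the pushed filters actually reference. Even when no nested fields are
  // referenced, this kind of pruning is still worthwhile.
  def pruneTopLevelOnly(
      relationSchema: StructType,
      projects: Seq[NamedExpression],
      filters: Seq[Expression]): StructType = {
    // Names of all attributes referenced by the projection list and the filters.
    val referenced = (projects ++ filters).flatMap(_.references.map(_.name)).toSet
    // Drop unreferenced top-level columns; nested pruning is a separate concern.
    StructType(relationSchema.filter(field => referenced.contains(field.name)))
  }
}
```

   The point is only that an empty set of nested root fields is not a reason to skip column pruning altogether.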


