Github user mallman commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22905#discussion_r229729687
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/DataSourceScanExec.scala ---
    @@ -306,7 +306,15 @@ case class FileSourceScanExec(
           withOptPartitionCount
         }
     
    -    withSelectedBucketsCount
    +    val withOptColumnCount = relation.fileFormat match {
    +      case columnar: ColumnarFileFormat =>
    +        val sqlConf = relation.sparkSession.sessionState.conf
    +        val columnCount = columnar.columnCountForSchema(sqlConf, requiredSchema)
    +        withSelectedBucketsCount + ("ColumnCount" -> columnCount.toString)
    --- End diff ---
    
    You can "guess-timate" the physical column count by counting the leaf 
fields in the `ReadSchema` metadata value, but the true answer is an 
implementation issue of the file format. For example, in the implementation of 
`ColumnarFileFormat` for Parquet, we convert the the Catalyst schema to the 
Parquet schema before counting columns. I suppose a similar approach would be 
required for ORC and other columnar formats.
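
    For illustration only, here is a rough sketch of the two counting
    approaches (not the actual code in this PR). It leans on Parquet's
    `MessageType.getPaths` and Spark's `SparkToParquetSchemaConverter`, which
    may be package-private to Spark's Parquet support, so a real
    implementation would live alongside `ParquetFileFormat`:

    ```scala
    import org.apache.spark.sql.execution.datasources.parquet.SparkToParquetSchemaConverter
    import org.apache.spark.sql.internal.SQLConf
    import org.apache.spark.sql.types.{ArrayType, DataType, MapType, StructType}

    object ColumnCountSketch {
      // "Guess-timate": count the leaf fields of the Catalyst schema directly.
      def leafFieldCount(dataType: DataType): Int = dataType match {
        case s: StructType => s.fields.map(f => leafFieldCount(f.dataType)).sum
        case a: ArrayType  => leafFieldCount(a.elementType)
        case m: MapType    => leafFieldCount(m.keyType) + leafFieldCount(m.valueType)
        case _             => 1
      }

      // Format-specific answer for Parquet: convert the Catalyst schema to a
      // Parquet MessageType and count its physical (leaf) columns.
      def parquetColumnCount(conf: SQLConf, schema: StructType): Int =
        new SparkToParquetSchemaConverter(conf).convert(schema).getPaths.size
    }
    ```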
    
    That being said, this new metadata value isn't really meant to provide
    new and essential information, _per se_. Its purpose is to provide
    easy-to-read, practical information that's useful for quickly validating
    that schema pruning is working as expected. For example, seeing that a
    query is reading all 423 columns from a table instead of 15 tells us
    pretty quickly that schema pruning is not working (unless we really are
    trying to read the entire table schema). I've found the `ReadSchema`
    value to be difficult to read in practice because of its terse syntax,
    and because its printout is truncated.
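
    As a usage example (the table and column names below are made up), one
    way to eyeball this after the change:

    ```scala
    // Illustrative only: check whether schema pruning kicked in by inspecting
    // the scan node's metadata in the physical plan.
    val df = spark.table("contacts").select("name.first")
    df.explain()
    // In the FileScan line of the output, a "ColumnCount" entry would appear
    // alongside "ReadSchema"; a count near the full table width (e.g. 423
    // instead of 15) is a quick sign that pruning is not taking effect.
    ```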

