Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22750#discussion_r226819049
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/DataSourceScanExec.scala ---
    @@ -168,10 +168,11 @@ case class FileSourceScanExec(
     
       // Note that some vals referring the file-based relation are lazy intentionally
       // so that this plan can be canonicalized on executor side too. See SPARK-23731.
    -  override lazy val supportsBatch: Boolean = relation.fileFormat.supportBatch(
    -    relation.sparkSession, StructType.fromAttributes(output))
    +  override lazy val supportsBatch: Boolean = {
    +    relation.fileFormat.supportBatch(relation.sparkSession, schema)
    +  }
     
    -  override lazy val needsUnsafeRowConversion: Boolean = {
    +  private lazy val needsUnsafeRowConversion: Boolean = {
         if (relation.fileFormat.isInstanceOf[ParquetSource]) {
    --- End diff --
    
    Our Parquet reader has one feature that ORC lacks: if the vectorized reader is on but whole-stage codegen is off, we can still read Parquet in batches and return `ColumnarRow`s. For ORC, if whole-stage codegen is off, we also turn off the vectorized reader, so ORC never returns `ColumnarRow`s.
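    The asymmetry described above can be sketched as a small decision function. This is only an illustrative model of the behavior being discussed; the names (`usesVectorizedReader`, `Format`, etc.) are hypothetical and not actual Spark internals:
    
    ```scala
    // Illustrative sketch, not real Spark code: which formats keep the
    // vectorized (batch) reader when whole-stage codegen is disabled.
    object VectorizationSketch {
      sealed trait Format
      case object Parquet extends Format
      case object Orc extends Format
    
      // Parquet can still read in batches (returning ColumnarRows) with
      // codegen off; ORC ties its vectorized reader to codegen being on.
      def usesVectorizedReader(
          format: Format,
          vectorizedReaderOn: Boolean,
          wholeStageCodegenOn: Boolean): Boolean = format match {
        case Parquet => vectorizedReaderOn
        case Orc     => vectorizedReaderOn && wholeStageCodegenOn
      }
    }
    ```
    
    Under this model, only `Parquet` stays vectorized when codegen is off, which is the feature the comment says has regressed.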
    
    However, I just found that this Parquet reader feature is gone. Let's send a new PR to either fix it or remove it completely.

