cloud-fan commented on a change in pull request #29045:
URL: https://github.com/apache/spark/pull/29045#discussion_r454538413



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/orc/OrcUtils.scala
##########
@@ -116,47 +116,53 @@ object OrcUtils extends Logging {
   }
 
   /**
-   * Returns the requested column ids from the given ORC file. Column id can be -1, which means the
-   * requested column doesn't exist in the ORC file. Returns None if the given ORC file is empty.
+   * @return Returns the requested column ids from the given ORC file and Boolean flag to use actual
+   * schema or result schema. Column id can be -1, which means the requested column doesn't
+   * exist in the ORC file. Returns None if the given ORC file is empty.
    */
   def requestedColumnIds(
       isCaseSensitive: Boolean,
       dataSchema: StructType,
       requiredSchema: StructType,
       reader: Reader,
-      conf: Configuration): Option[Array[Int]] = {
+      conf: Configuration): (Option[Array[Int]], Boolean) = {
+    var sendActualSchema = false
     val orcFieldNames = reader.getSchema.getFieldNames.asScala
     if (orcFieldNames.isEmpty) {
      // SPARK-8501: Some old empty ORC files always have an empty schema stored in their footer.
-      None
+      (None, sendActualSchema)
     } else {
       if (orcFieldNames.forall(_.startsWith("_col"))) {
        // This is a ORC file written by Hive, no field names in the physical schema, assume the
        // physical schema maps to the data scheme by index.
        assert(orcFieldNames.length <= dataSchema.length, "The given data schema " +
          s"${dataSchema.catalogString} has less fields than the actual ORC physical schema, " +
           "no idea which columns were dropped, fail to read.")
-        Some(requiredSchema.fieldNames.map { name =>
+        (Some(requiredSchema.fieldNames.map { name =>
           val index = dataSchema.fieldIndex(name)
           if (index < orcFieldNames.length) {
+            // for ORC file written by Hive, no field names
+            // in the physical schema, there is a need to send the
+            // entire dataSchema instead of required schema
+            sendActualSchema = true
             index
           } else {
             -1

Review comment:
       I tried the test locally, and saw warning messages like
   ```
   10:45:52.783 WARN org.apache.orc.impl.SchemaEvolution: Column names are missing from this file. This is caused by a writer earlier than HIVE-4243. The reader will reconcile schemas based on index. File type: struct<_col1:int,_col2:string,_col3:int>, reader type: struct<_col2:string>
   ```
   
   I think we can't do column pruning anyway if the physical file schema is `_col0`, ... We can always return true in this branch.
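   
   To make the suggestion concrete, here is a minimal Scala sketch of what the Hive `_col` branch could look like if the flag is returned unconditionally. The helper name `hiveWrittenFileColumnIds` is hypothetical and only mirrors that branch of `requestedColumnIds` from the diff above; it is not the actual PR code.
   
   ```scala
   import org.apache.spark.sql.types.StructType
   
   // Sketch of the suggestion: when every physical field name is a positional
   // "_colN" name (file written by a Hive version before HIVE-4243), column
   // pruning is not possible, so this branch can return `true` unconditionally
   // instead of flipping a mutable `sendActualSchema` flag inside the map.
   def hiveWrittenFileColumnIds(
       orcFieldNames: Seq[String],
       dataSchema: StructType,
       requiredSchema: StructType): (Option[Array[Int]], Boolean) = {
     assert(orcFieldNames.length <= dataSchema.length, "The given data schema " +
       s"${dataSchema.catalogString} has less fields than the actual ORC physical schema, " +
       "no idea which columns were dropped, fail to read.")
     val ids = requiredSchema.fieldNames.map { name =>
       // Map each required field to its positional index in the data schema,
       // or -1 if the file does not physically contain that position.
       val index = dataSchema.fieldIndex(name)
       if (index < orcFieldNames.length) index else -1
     }
     // Always ask the caller to send the actual (full) data schema in this branch.
     (Some(ids), true)
   }
   ```
   
   With that shape, the per-column `sendActualSchema = true` mutation in the diff becomes unnecessary, and the non-Hive branch stays free to return `false`.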



