cloud-fan commented on a change in pull request #29045:
URL: https://github.com/apache/spark/pull/29045#discussion_r454201305
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/orc/OrcUtils.scala
##########
@@ -116,47 +116,53 @@ object OrcUtils extends Logging {
   }

   /**
-   * Returns the requested column ids from the given ORC file. Column id can be -1, which means the
-   * requested column doesn't exist in the ORC file. Returns None if the given ORC file is empty.
+   * @return Returns the requested column ids from the given ORC file and Boolean flag to use actual
+   *         schema or result schema. Column id can be -1, which means the requested column doesn't
+   *         exist in the ORC file. Returns None if the given ORC file is empty.
    */
   def requestedColumnIds(
       isCaseSensitive: Boolean,
       dataSchema: StructType,
       requiredSchema: StructType,
       reader: Reader,
-      conf: Configuration): Option[Array[Int]] = {
+      conf: Configuration): (Option[Array[Int]], Boolean) = {
+    var sendActualSchema = false
     val orcFieldNames = reader.getSchema.getFieldNames.asScala

Review comment:
   Please correct me if I'm wrong:
   1. the physical ORC file schema is `_col0`, ...
   2. the table schema in the metastore is `d_date_sk`, ...
   3. the query requires only `d_year`

   I don't understand why the query fails. `requestedColumnIds` will be `[6]`, so the ORC reader will read the `_col6` column. Everything should be fine.
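   To make the scenario concrete, here is a minimal, self-contained sketch of the positional resolution I'm describing. `RequestedColumnIdsSketch` and `positionalColumnIds` are hypothetical names, not the actual Spark code; the real `requestedColumnIds` also handles case sensitivity, the `_colN` detection, and the empty-file case.

   ```scala
   import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

   object RequestedColumnIdsSketch {
     // Hypothetical simplification (not the real OrcUtils.requestedColumnIds):
     // when the physical ORC field names are just _col0, _col1, ..., each
     // requested column is resolved by its position in the metastore schema.
     // An id of -1 means the requested column doesn't exist there.
     def positionalColumnIds(dataSchema: StructType, requiredSchema: StructType): Array[Int] = {
       val tableFields = dataSchema.fieldNames
       requiredSchema.fieldNames.map(name => tableFields.indexOf(name))
     }

     def main(args: Array[String]): Unit = {
       // Metastore table schema (abridged TPC-DS date_dim): d_year sits at index 6.
       val dataSchema = StructType(Seq(
         StructField("d_date_sk", IntegerType),
         StructField("d_date_id", StringType),
         StructField("d_date", StringType),
         StructField("d_month_seq", IntegerType),
         StructField("d_week_seq", IntegerType),
         StructField("d_quarter_seq", IntegerType),
         StructField("d_year", IntegerType)))
       // The query needs only d_year.
       val requiredSchema = StructType(Seq(StructField("d_year", IntegerType)))
       // Prints "6": the reader should then fetch _col6 from the file.
       println(positionalColumnIds(dataSchema, requiredSchema).mkString(","))
     }
   }
   ```

   Under these assumptions the mapping `d_year -> 6 -> _col6` is exactly what I'd expect the reader to do, which is why I don't see where the failure comes from.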