SaurabhChawla100 commented on a change in pull request #29045:
URL: https://github.com/apache/spark/pull/29045#discussion_r454359574



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/orc/OrcUtils.scala
##########
@@ -116,47 +116,53 @@ object OrcUtils extends Logging {
   }
 
   /**
-   * Returns the requested column ids from the given ORC file. Column id can be -1, which means the
-   * requested column doesn't exist in the ORC file. Returns None if the given ORC file is empty.
+   * @return Returns the requested column ids from the given ORC file and Boolean flag to use actual
+   * schema or result schema. Column id can be -1, which means the requested column doesn't
+   * exist in the ORC file. Returns None if the given ORC file is empty.
    */
   def requestedColumnIds(
       isCaseSensitive: Boolean,
       dataSchema: StructType,
       requiredSchema: StructType,
       reader: Reader,
-      conf: Configuration): Option[Array[Int]] = {
+      conf: Configuration): (Option[Array[Int]], Boolean) = {
+    var sendActualSchema = false
     val orcFieldNames = reader.getSchema.getFieldNames.asScala
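
   (For context: a minimal, self-contained sketch of how the new `(Option[Array[Int]], Boolean)` return value could be consumed. The helper name and the println reporting below are assumptions for illustration, not this PR's actual call site.)
   
   ```
   // Hypothetical helper; only the (Option[Array[Int]], Boolean) shape comes from the diff.
   def handleRequestedIds(result: (Option[Array[Int]], Boolean)): Unit = {
     val (colIdsOpt, sendActualSchema) = result
     colIdsOpt match {
       case Some(colIds) =>
         // A column id of -1 means the requested column is missing from the file;
         // sendActualSchema == true means the file's physical schema should be used
         // instead of the result schema.
         println(s"ids=${colIds.mkString(",")}, useActualSchema=$sendActualSchema")
       case None =>
         // None signals an empty ORC file: nothing to read.
         println("empty ORC file, nothing to read")
     }
   }
   
   handleRequestedIds((Some(Array(0, 1, -1)), true))
   ```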

Review comment:
      Yes, if that were the case it would be the same as data written by a Spark application using ORC, and it would follow the code flow that already works today for Spark ORC data source tables.
   
   So this fails only for ORC data created by Hive. If I create the data with the Spark ORC data source instead, the error does not occur:
   ```
   val u = """select * from date_dim limit 5"""
   
   // Write the data with Spark's ORC data source.
   spark.sql(u).write.format("orc").save("/Users/tpcdsdata/testFS/testorc")
   
   val table = """CREATE TABLE `date_dim345` (
        |   `d_date_sk` INT,
        |   `d_date_id` STRING,
        |   `d_date` TIMESTAMP,
        |   `d_month_seq` INT,
        |   `d_week_seq` INT,
        |   `d_quarter_seq` INT,
        |   `d_year` INT,
        |   `d_dow` INT,
        |   `d_moy` INT,
        |   `d_dom` INT,
        |   `d_qoy` INT,
        |   `d_fy_year` INT,
        |   `d_fy_quarter_seq` INT,
        |   `d_fy_week_seq` INT,
        |   `d_day_name` STRING,
        |   `d_quarter_name` STRING,
        |   `d_holiday` STRING,
        |   `d_weekend` STRING,
        |   `d_following_holiday` STRING,
        |   `d_first_dom` INT,
        |   `d_last_dom` INT,
        |   `d_same_day_ly` INT,
        |   `d_same_day_lq` INT,
        |   `d_current_day` STRING,
        |   `d_current_week` STRING,
        |   `d_current_month` STRING,
        |   `d_current_quarter` STRING,
        |   `d_current_year` STRING)
        | USING orc
        | LOCATION '/Users/tpcdsdata/testFS/testorc/'""".stripMargin
   
   spark.sql(table).collect
   val u = """select date_dim345.d_date_id from date_dim345 limit 5"""
   spark.sql(u).collect
   ```
   Now `orcFieldNames` holds the actual field names from the physical ORC file, not `_col1`, `_col2`, etc.:
   ```
   orcFieldNames = {Wrappers$JListWrapper@19940} "Wrappers$JListWrapper" size = 28
    0 = "d_date_sk"
    1 = "d_date_id"
    2 = "d_date"
    3 = "d_month_seq"
    4 = "d_week_seq"
    5 = "d_quarter_seq"
    6 = "d_year"
    7 = "d_dow"
    8 = "d_moy"
    9 = "d_dom"
    10 = "d_qoy"
    11 = "d_fy_year"
   ```
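   
   For comparison, here is a minimal sketch of distinguishing Hive-style positional names from real field names. The regex predicate is an assumption for illustration, not necessarily the exact check used in this PR:
   
   ```
   // Hive-written ORC files typically carry positional names (_col0, _col1, ...),
   // while Spark-written files carry the real field names shown above.
   def looksHivePositional(orcFieldNames: Seq[String]): Boolean =
     orcFieldNames.nonEmpty && orcFieldNames.forall(_.matches("_col\\d+"))
   
   println(looksHivePositional(Seq("_col0", "_col1", "_col2"))) // true: Hive-written layout
   println(looksHivePositional(Seq("d_date_sk", "d_date_id")))  // false: Spark-written layout
   ```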