Github user gengliangwang commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22320#discussion_r214828936
  
    --- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/CreateHiveTableAsSelectCommand.scala ---
    @@ -63,13 +63,14 @@ case class CreateHiveTableAsSelectCommand(
             query,
             overwrite = false,
             ifPartitionNotExists = false,
    -        outputColumns = outputColumns).run(sparkSession, child)
    +        outputColumnNames = outputColumnNames).run(sparkSession, child)
         } else {
           // TODO ideally, we should get the output data ready first and then
          // add the relation into catalog, just in case of failure occurs while data
           // processing.
           assert(tableDesc.schema.isEmpty)
     -      catalog.createTable(tableDesc.copy(schema = query.schema), ignoreIfExists = false)
     +      val schema = DataWritingCommand.logicalPlanSchemaWithNames(query, outputColumnNames)
     +      catalog.createTable(tableDesc.copy(schema = schema), ignoreIfExists = false)
    --- End diff ---
    
    The schema naming needs to be consistent with `outputColumnNames` here: the schema registered in the catalog should take its field names from `outputColumnNames` rather than from `query.schema`.
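    
    To illustrate the point, the renaming amounts to something like the following (a minimal sketch only; `schemaWithNames` is a hypothetical helper name, not the actual `DataWritingCommand.logicalPlanSchemaWithNames` implementation in this PR):
    
    ```scala
    import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
    import org.apache.spark.sql.types.{StructField, StructType}
    
    // Sketch: build a schema whose field names follow the given output column
    // names, keeping each output attribute's data type, nullability and metadata.
    def schemaWithNames(query: LogicalPlan, outputColumnNames: Seq[String]): StructType = {
      assert(query.output.length == outputColumnNames.length)
      StructType(query.output.zip(outputColumnNames).map { case (attr, name) =>
        StructField(name, attr.dataType, attr.nullable, attr.metadata)
      })
    }
    ```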


---
