Github user dongjoon-hyun commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19545#discussion_r146090102
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
    @@ -235,11 +235,10 @@ case class AlterTableAddColumnsCommand(
           DataSource.lookupDataSource(catalogTable.provider.get).newInstance() match {
             // For datasource table, this command can only support the following File format.
             // TextFileFormat only default to one column "value"
    -        // OrcFileFormat can not handle difference between user-specified schema and
    -        // inferred schema yet. TODO, once this issue is resolved , we can add Orc back.
             // Hive type is already considered as hive serde table, so the logic will not
             // come in here.
             case _: JsonFileFormat | _: CSVFileFormat | _: ParquetFileFormat =>
    +        case s if s.getClass.getCanonicalName.endsWith("OrcFileFormat") =>
    --- End diff --
    
    After implementing OrcFileFormat based on Apache ORC, we can move `OrcFileFormat` from the `sql/hive` module into the `sql/core` module.
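    The string-based guard in the diff exists because `sql/core` cannot reference the `sql/hive` module's `OrcFileFormat` type at compile time, so the case matches on the runtime class name instead of the static type. A minimal sketch of that pattern, using stand-in classes (not Spark's real hierarchy):

    ```scala
    // Stand-in hierarchy for illustration only.
    trait FileFormat
    class JsonFileFormat extends FileFormat
    class TextFileFormat extends FileFormat
    // Pretend this class lives in a module we cannot depend on at compile time:
    class OrcFileFormat extends FileFormat

    def supportsAddColumns(format: FileFormat): Boolean = format match {
      // Types visible to this module can be matched directly.
      case _: JsonFileFormat => true
      // Matching the class name as a string avoids a compile-time
      // dependency on the OrcFileFormat type itself.
      case s if s.getClass.getCanonicalName.endsWith("OrcFileFormat") => true
      case _ => false
    }
    ```

    Once `OrcFileFormat` is available in `sql/core`, the guard can be replaced by an ordinary typed pattern (`case _: OrcFileFormat =>`), which is both safer and clearer.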


---
