linfey90 commented on code in PR #6456: URL: https://github.com/apache/hudi/pull/6456#discussion_r953253259
########## hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/spark/sql/hudi/command/CreateHoodieTableCommand.scala: ##########
@@ -120,10 +119,8 @@ object CreateHoodieTableCommand {
     val tableType = tableConfig.getTableType.name()
     val inputFormat = tableType match {
-      case DataSourceWriteOptions.COW_TABLE_TYPE_OPT_VAL =>
+      case DataSourceWriteOptions.COW_TABLE_TYPE_OPT_VAL | DataSourceWriteOptions.MOR_TABLE_TYPE_OPT_VAL =>

Review Comment:
   In Hive queries, the original table name is used rather than the `_rt`/`_ro` suffixed table names, so at this point we choose to skip the `_ro` table. I also think Hive offline tasks should query the read-optimized table, so the default inputFormat should be `HoodieParquetInputFormat`. If there are other considerations behind the current default, I will compare and adjust this when working on Hive sync metadata.
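The behavior under discussion can be sketched as a small self-contained Scala snippet. This is a hedged sketch, not the actual Hudi source: the constant names mirror `DataSourceWriteOptions.COW_TABLE_TYPE_OPT_VAL` / `MOR_TABLE_TYPE_OPT_VAL` from the diff, and the string values (`COPY_ON_WRITE`, `MERGE_ON_READ`) are Hudi's standard table-type names; the fully qualified inputFormat class name is assumed from the comment's reference to `HoodieParquetInputFormat`.

```scala
// Sketch of the match in CreateHoodieTableCommand after this PR:
// both table types map to the same (read-optimized) Hive inputFormat.
object InputFormatSketch {
  // Stand-ins for DataSourceWriteOptions.COW_TABLE_TYPE_OPT_VAL / MOR_TABLE_TYPE_OPT_VAL
  val CowTableType = "COPY_ON_WRITE"
  val MorTableType = "MERGE_ON_READ"

  // With the change above, MOR tables registered under the original table name
  // default to the read-optimized inputFormat (no _rt/_ro suffix involved).
  def inputFormatFor(tableType: String): String = tableType match {
    case CowTableType | MorTableType =>
      "org.apache.hudi.hadoop.HoodieParquetInputFormat" // assumed FQCN
    case other =>
      throw new IllegalArgumentException(s"Unknown table type: $other")
  }

  def main(args: Array[String]): Unit =
    println(inputFormatFor(MorTableType))
}
```

The point of the `case A | B =>` alternative pattern is that a MOR table no longer falls through to a different (real-time) inputFormat by default; Hive offline jobs read the optimized view unless an `_rt` table is explicitly synced.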