danny0405 commented on code in PR #8501:
URL: https://github.com/apache/hudi/pull/8501#discussion_r1177669046


##########
hudi-flink-datasource/hudi-flink/src/main/java/org/apache/hudi/table/HoodieTableFactory.java:
##########
@@ -401,4 +408,35 @@ private static void inferAvroSchema(Configuration conf, LogicalType rowType) {
       conf.setString(FlinkOptions.SOURCE_AVRO_SCHEMA, inferredSchema);
     }
   }
+
+  /**
+   * Validates the user-provided source schema against the table's committed Avro schema.
+   *
+   * @param conf The configuration
+   */
+  private static void validateSourceSchema(Configuration conf) {
+    final HoodieTableMetaClient metaClient = StreamerUtil.metaClientForReader(conf, HadoopConfigurations.getHadoopConf(conf));
+    final TableSchemaResolver schemaResolver = new TableSchemaResolver(metaClient);
+    final Schema requiredSchema = StreamerUtil.getSourceSchema(conf);
+    final Schema srcSchema;
+
+    try {
+      srcSchema = schemaResolver.getTableAvroSchema();
+    } catch (Exception e) {
+      LOG.warn("Skipping validation for requiredSchema as table avro schema could not be fetched", e);
+      return;
+    }
+
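
The hunk above is truncated before the actual comparison. A minimal sketch of how
such a check might continue, assuming Avro's reader/writer compatibility API is
used (illustrative only, not the PR's actual code):

      // Check whether the DDL (reader) schema can read data written with the
      // table (writer) schema; fail fast otherwise.
      SchemaCompatibility.SchemaPairCompatibility compatibility =
          SchemaCompatibility.checkReaderWriterCompatibility(requiredSchema, srcSchema);
      if (compatibility.getType() != SchemaCompatibility.SchemaCompatibilityType.COMPATIBLE) {
        throw new HoodieValidationException(
            "Required schema is not compatible with the table schema: " + compatibility.getDescription());
      }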

Review Comment:
   I don't understand why the schema would mismatch for these queries. You need a catalog to manage the table and its schema together; querying a table with the same path but a wrong schema is meaningless. And is the wrong schema hand-written by the user each time? Why not just fetch the schema through the catalog?
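
   A rough sketch of the catalog-based fetch suggested above, written against the
   generic Flink Catalog API (the class name, helper name, and parameters are
   hypothetical, not Hudi's actual code):

      import org.apache.flink.table.api.Schema;
      import org.apache.flink.table.catalog.Catalog;
      import org.apache.flink.table.catalog.ObjectPath;
      import org.apache.flink.table.catalog.exceptions.TableNotExistException;

      final class CatalogSchemaFetch {
        // Resolve the table through the catalog so the query schema always matches
        // what the catalog manages, instead of a hand-written DDL schema.
        static Schema fetchSchema(Catalog catalog, String db, String table)
            throws TableNotExistException {
          return catalog.getTable(new ObjectPath(db, table)).getUnresolvedSchema();
        }
      }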


