[ https://issues.apache.org/jira/browse/HUDI-5572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
HunterXHunter updated HUDI-5572:
--------------------------------
    Description: 
When we use Spark to initialize a Hudi table, .hoodie#hoodie.properties#hoodie.table.create.schema carries the information 'name=$tablename_record' and 'namespace'='hoodie.$tablename'. Flink, however, does not carry this information when writing, so `validateSchema` reports an incompatibility. I think we should skip checking the compatibility of Schema#name for Flink writes.

!image-2023-01-18-11-51-12-914.png|width=851,height=399!


> Flink write need to skip check the compatibility of Schema#name
> ---------------------------------------------------------------
>
>                 Key: HUDI-5572
>                 URL: https://issues.apache.org/jira/browse/HUDI-5572
>             Project: Apache Hudi
>          Issue Type: Bug
>            Reporter: HunterXHunter
>            Priority: Major
>         Attachments: image-2023-01-18-11-51-12-914.png
>

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
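To illustrate the mismatch described above, here is a minimal Java sketch of a record-schema compatibility check that can optionally ignore the schema's name/namespace. This is a hypothetical stand-in, not Hudi's actual `validateSchema` code, and the `RecordSchema`/`isCompatible` names are invented for the example; it only shows why a Spark-created schema named `hoodie.$tablename.$tablename_record` would fail a strict check against a Flink-written schema with the same fields but a different name.

```java
import java.util.Map;

public class SchemaCheck {
    // A tiny stand-in for an Avro-style record schema: full name + field types.
    record RecordSchema(String fullName, Map<String, String> fields) {}

    // Returns true when the field structures match. The name/namespace is only
    // compared when checkName is true; skipping it (checkName = false) is the
    // behavior proposed in this issue for Flink writes.
    static boolean isCompatible(RecordSchema table, RecordSchema writer, boolean checkName) {
        if (checkName && !table.fullName().equals(writer.fullName())) {
            return false;
        }
        return table.fields().equals(writer.fields());
    }

    public static void main(String[] args) {
        // Schema as Spark records it in hoodie.table.create.schema
        // ('namespace'='hoodie.t1', 'name'='t1_record').
        RecordSchema fromSpark = new RecordSchema(
            "hoodie.t1.t1_record", Map.of("id", "long", "name", "string"));
        // Flink's writer schema: same fields, no table-derived name.
        RecordSchema fromFlink = new RecordSchema(
            "record", Map.of("id", "long", "name", "string"));

        // Strict check fails on the name alone; skipping the name passes.
        System.out.println(isCompatible(fromSpark, fromFlink, true));  // false
        System.out.println(isCompatible(fromSpark, fromFlink, false)); // true
    }
}
```

The fields are identical in both schemas, so only the name comparison separates the two results, which matches the failure mode the description attributes to `validateSchema`.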