txl2017 commented on issue #6007:
URL: https://github.com/apache/hudi/issues/6007#issuecomment-1172032212

   > > @txl2017 could you also provide the full commands for Hive sync and spark-submit/spark-shell for reproducing the issue?
   > 
   > 'partition-path-field'='systemdate', 'keygen-class'='ComplexTimeKeyGenerator', 'hive-sync-partition-extractor-class'='org.apache.hudi.hive.MultiPartKeysValueExtractor'.
   > The systemdate field has the format yyyy-MM-dd HH:mm:ss, e.g. '2022-06-29 12:00:00', and the partition format is yyyyMMdd.
   > With this config we sync to Hive and then query the Hive RO table with systemdate=20220629 from Hive, Flink and Spark: Hive and Flink return the correct result, but Spark returns nothing.
   > If we drop the systemdate=20220629 condition, the rows come back with systemdate='2022-06-29 12:00:00', not '20220629', so we think that is why the Spark query on systemdate=20220629 hits nothing.
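
   For context, the partition value is derived from systemdate roughly like this (a minimal sketch of the intended mapping only, not the actual ComplexTimeKeyGenerator code; the class and method names below are made up for illustration):

       import java.time.LocalDateTime;
       import java.time.format.DateTimeFormatter;

       public class PartitionPathSketch {
           // Assumed input format of the 'systemdate' field, e.g. "2022-06-29 12:00:00".
           private static final DateTimeFormatter FIELD_FORMAT =
                   DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
           // Assumed output format of the Hive partition value, e.g. "20220629".
           private static final DateTimeFormatter PARTITION_FORMAT =
                   DateTimeFormatter.ofPattern("yyyyMMdd");

           public static String toPartitionPath(String systemdate) {
               return LocalDateTime.parse(systemdate, FIELD_FORMAT).format(PARTITION_FORMAT);
           }

           public static void main(String[] args) {
               // Prints "20220629": the value the Hive and Flink queries match,
               // while the systemdate column read back by Spark still holds the
               // full timestamp string '2022-06-29 12:00:00'.
               System.out.println(toPartitionPath("2022-06-29 12:00:00"));
           }
       }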
   
   After that, we tried another way to work around this.
   We changed the Hudi source code in the StreamWriteOperatorCoordinator class:
     private void initHiveSync() {
       this.hiveSyncExecutor = NonThrownExecutor.builder(LOG).waitForTasksFinish(true).build();
       // When our custom key generator is configured, sync the Hive partitions
       // under the field name 'dt' instead of the original partition path field.
       if (ZvosComplexTimeKeyGenerator.class.getName().equals(conf.getString(FlinkOptions.KEYGEN_CLASS_NAME))) {
         conf.set(FlinkOptions.PARTITION_PATH_FIELD, "dt");
       }
       this.hiveSyncContext = HiveSyncContext.create(conf);
     }
   This updates the partition field to 'dt' when syncing to Hive.
   In this case Spark fails with "org.apache.spark.sql.AnalysisException: cannot resolve 'dt' given input columns", while Hive and Flink are still fine.
   
   Note: 'dt' is a field that does not exist in the record.
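
   Roughly, the Spark side can be reproduced with queries like the following (a minimal sketch, not our exact job; the database and table names are placeholders):

       import org.apache.spark.sql.SparkSession;

       public class SparkRepro {
           public static void main(String[] args) {
               SparkSession spark = SparkSession.builder()
                       .appName("hudi-ro-query")
                       .enableHiveSupport()
                       .getOrCreate();

               // Before the change: returns no rows, because the systemdate column read
               // from the data files is '2022-06-29 12:00:00', not '20220629'.
               spark.sql("SELECT * FROM db.hudi_tbl_ro WHERE systemdate = '20220629'").show();

               // After the partition field is synced as 'dt', filtering on it gives the
               // error we saw: org.apache.spark.sql.AnalysisException: cannot resolve 'dt'
               // given input columns, presumably because 'dt' is not in the table schema
               // that Spark resolves against.
               spark.sql("SELECT * FROM db.hudi_tbl_ro WHERE dt = '20220629'").show();

               spark.stop();
           }
       }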
   
   That is the story of how we hit this error with Spark. Thanks for your reply!

