rohan-uptycs commented on code in PR #8503: URL: https://github.com/apache/hudi/pull/8503#discussion_r1174790151
##########
hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/index/bucket/HoodieSparkConsistentBucketIndex.java:
##########
@@ -275,4 +278,46 @@ public Option<HoodieRecordLocation> getRecordLocation(HoodieKey key) {
       throw new HoodieIndexException("Failed to getBucket as hashing node has no file group");
     }
   }
+
+  /**
+   * Updates the default metadata file (00000000000000.hashing_meta) with the latest committed
+   * metadata file, so that the default file stays in sync with the latest commit.
+   *
+   * @param table the Hudi table whose hashing metadata should be updated
+   */
+  public void updateMetadata(HoodieTable table) {
+    Map<String, Boolean> partitionVisitedMap = new HashMap<>();
+    HoodieTimeline hoodieTimeline = table.getActiveTimeline().getCompletedReplaceTimeline();
+    hoodieTimeline.getInstants().forEach(instant -> {
+      Option<Pair<HoodieInstant, HoodieClusteringPlan>> instantPlanPair =

Review Comment:
   No, I think every clustering operation on the consistent hashing index engine creates a new metadata file named **<instant_time>.hashing_meta**. This particular piece of code brings **00000000000000.hashing_meta** in sync with **<instant_time>.hashing_meta** before archival is triggered, because the replace commit might get archived off the active timeline.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@hudi.apache.org

For queries about this service, please contact Infrastructure at: us...@infra.apache.org
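To make the sync step concrete, here is a minimal, self-contained sketch (not Hudi's actual implementation; the class name `HashingMetaSync` and the file layout are assumptions for illustration). It models a metadata directory containing one `<instant_time>.hashing_meta` file per clustering commit, finds the latest instant by lexicographic order (instant times are zero-padded timestamps, so lexicographic order matches chronological order), and copies its contents over the default `00000000000000.hashing_meta`, so the default file still reflects the latest hashing metadata after the replace commit is archived off the active timeline:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.Comparator;
import java.util.Optional;
import java.util.stream.Stream;

// Hypothetical, simplified model of the sync described in the review comment:
// copy the newest <instant_time>.hashing_meta onto the default
// 00000000000000.hashing_meta before older instants are pruned.
public class HashingMetaSync {
  static final String DEFAULT_INSTANT = "00000000000000";
  static final String SUFFIX = ".hashing_meta";

  /** Syncs the default file with the latest instant's file; returns the instant synced, if any. */
  public static Optional<String> syncDefault(Path metaDir) throws IOException {
    try (Stream<Path> files = Files.list(metaDir)) {
      Optional<Path> latest = files
          .filter(p -> p.getFileName().toString().endsWith(SUFFIX))
          .filter(p -> !p.getFileName().toString().startsWith(DEFAULT_INSTANT))
          // Zero-padded instant times: lexicographic max == most recent commit.
          .max(Comparator.comparing(p -> p.getFileName().toString()));
      if (latest.isPresent()) {
        Files.copy(latest.get(), metaDir.resolve(DEFAULT_INSTANT + SUFFIX),
            StandardCopyOption.REPLACE_EXISTING);
        String name = latest.get().getFileName().toString();
        return Optional.of(name.substring(0, name.length() - SUFFIX.length()));
      }
      return Optional.empty();
    }
  }

  public static void main(String[] args) throws IOException {
    Path dir = Files.createTempDirectory("hashing_meta");
    Files.writeString(dir.resolve(DEFAULT_INSTANT + SUFFIX), "initial");
    Files.writeString(dir.resolve("20230401120000" + SUFFIX), "plan-v1");
    Files.writeString(dir.resolve("20230402120000" + SUFFIX), "plan-v2");
    Optional<String> synced = syncDefault(dir);
    System.out.println("synced instant: " + synced.orElse("none"));
    System.out.println("default now: " + Files.readString(dir.resolve(DEFAULT_INSTANT + SUFFIX)));
  }
}
```

In the real index, the "latest metadata" would come from the completed replace timeline (as in the quoted `getCompletedReplaceTimeline()` call) rather than a directory listing, but the invariant is the same: the default file must always equal the newest committed hashing metadata.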