Hi all: We plan to use Hudi to sync MySQL binlog data. A Flink ETL task will consume binlog records from Kafka and save the data to Hudi every hour. The binlog records are also grouped by hour, and all records of one hour will be saved in a single commit. The data transmission pipeline looks like: binlog -> kafka -> flink -> parquet.
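To make the hourly grouping concrete, here is a minimal sketch (not our actual Flink job) of how each binlog record's timestamp could map to an hourly commit window, with the window id formatted like a Hudi instant time. The record shape and `commit_window` helper are illustrative assumptions, not existing Hudi or Flink APIs.

```python
from datetime import datetime, timezone

def commit_window(ts_millis: int) -> str:
    """Map a binlog record timestamp (epoch millis) to its hourly window.

    All records whose timestamps fall inside the same hour would land in
    one Hudi commit; the window id is formatted like a Hudi instant time,
    truncated to the hour. This helper is a hypothetical illustration.
    """
    dt = datetime.fromtimestamp(ts_millis / 1000, tz=timezone.utc)
    return dt.strftime("%Y%m%d%H0000")

# Three example binlog records; the first two fall in the same hour.
records = [
    {"id": 1, "ts": 1609459230000},  # 2021-01-01 00:00:30 UTC
    {"id": 2, "ts": 1609461000000},  # 2021-01-01 00:30:00 UTC
    {"id": 3, "ts": 1609462830000},  # 2021-01-01 01:00:30 UTC
]

# Group record ids by their hourly commit window.
commits: dict[str, list[int]] = {}
for r in records:
    commits.setdefault(commit_window(r["ts"]), []).append(r["id"])

# commits now holds one entry per hourly commit:
# {"20210101000000": [1, 2], "20210101010000": [3]}
```

With this grouping, querying the table "as of" a given hourly instant would mean reading only the commits whose window id is less than or equal to that instant.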
After the data is synced to Hudi, we want to query the historical hourly versions of the Hudi table in Hive SQL. Here is a more detailed description of our issue, along with a simple design of Time Travel for Hudi; the design is under development and testing: https://docs.google.com/document/d/1r0iwUsklw9aKSDMzZaiq43dy57cSJSAqT9KCvgjbtUo/edit?usp=sharing I opened an issue here: https://issues.apache.org/jira/browse/HUDI-1460 We need to support the Time Travel ability soon for our business needs. We have also seen RFC-07. We would be glad to receive any suggestions or have a discussion.
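For context on what Hive can already do today: Hudi's Hive integration exposes incremental pulls via session properties, which is close to, but not the same as, the snapshot-as-of semantics we want. A rough sketch, using a hypothetical table name `hudi_binlog_table`:

```sql
-- Existing Hudi incremental-query knobs in Hive (table name is hypothetical):
SET hoodie.hudi_binlog_table.consume.mode=INCREMENTAL;
SET hoodie.hudi_binlog_table.consume.start.timestamp=20210101000000;
SET hoodie.hudi_binlog_table.consume.max.commits=1;

-- Pulls only rows written after the given commit time:
SELECT * FROM hudi_binlog_table
WHERE `_hoodie_commit_time` > '20210101000000';
```

This answers "what changed after instant X", whereas Time Travel needs "the full table state as of instant X", which is what the design doc above proposes.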
