hudi-bot opened a new issue, #17124:
URL: https://github.com/apache/hudi/issues/17124
After the data table write statuses are collected and are ready to be written to the metadata table (MDT), if the parallelism of the write status RDD is too high (for instance, when hundreds of thousands of files were touched), the workload profile stages of the MDT DAG can take tens of minutes. Put up a PR with a small fix that repartitions the write status RDD down to a configurable maximum number of partitions to reduce latencies.
For instance, this is what we did in a POC. The following code was added to `execute()` in
`hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/table/action/commit/BaseSparkCommitActionExecutor.java`:

```java
if (table.isMetadataTable()
    && config.getProps().getInteger("hoodie.metadata.temp.repartition.parallelism", 0) > 0) {
  inputRecords = inputRecords.repartition(
      config.getProps().getInteger("hoodie.metadata.temp.repartition.parallelism", 16801));
}
```
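The cap-selection logic behind the POC can be sketched without a Spark dependency. This is a minimal, hypothetical illustration (the helper `targetPartitions` and its min-with-cap refinement are not from the PR): it reads the temporary `hoodie.metadata.temp.repartition.parallelism` key from a plain `Properties`, treats `0` as disabled, and only ever shrinks the partition count, never widens it.

```java
import java.util.Properties;

public class RepartitionCap {

  // Hypothetical helper: pick the target partition count for the MDT write
  // status RDD, given its current parallelism and a configured cap (0 = off).
  static int targetPartitions(int currentPartitions, Properties props) {
    int cap = Integer.parseInt(
        props.getProperty("hoodie.metadata.temp.repartition.parallelism", "0"));
    if (cap <= 0) {
      return currentPartitions; // feature disabled: leave the RDD untouched
    }
    // Only shrink very wide RDDs; an RDD already below the cap keeps its
    // parallelism instead of being repartitioned up to the cap.
    return Math.min(currentPartitions, cap);
  }

  public static void main(String[] args) {
    Properties props = new Properties();
    // Disabled by default: parallelism is unchanged.
    System.out.println(targetPartitions(200_000, props)); // prints 200000

    // With a cap of 16801, a very wide RDD is shrunk to the cap.
    props.setProperty("hoodie.metadata.temp.repartition.parallelism", "16801");
    System.out.println(targetPartitions(200_000, props)); // prints 16801

    // An RDD narrower than the cap is left as-is.
    System.out.println(targetPartitions(512, props)); // prints 512
  }
}
```

In the real executor the returned value would feed `inputRecords.repartition(...)`; the sketch only isolates the configuration-gated sizing decision.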
## JIRA info
- Link: https://issues.apache.org/jira/browse/HUDI-9665
- Type: Improvement
- Fix version(s):
- 1.1.0