TheR1sing3un commented on code in PR #12537:
URL: https://github.com/apache/hudi/pull/12537#discussion_r1900523149
##########
hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/client/clustering/run/strategy/MultipleSparkJobExecutionStrategy.java:
##########
@@ -325,56 +310,16 @@ private HoodieData<HoodieRecord<T>> readRecordsForGroup(JavaSparkContext jsc, Ho
   private HoodieData<HoodieRecord<T>> readRecordsForGroupWithLogs(JavaSparkContext jsc,
                                                                   List<ClusteringOperation> clusteringOps,
                                                                   String instantTime) {
-    HoodieWriteConfig config = getWriteConfig();
-    HoodieTable table = getHoodieTable();
-    // NOTE: It's crucial to make sure that we don't capture whole "this" object into the
-    //       closure, as this might lead to issues attempting to serialize its nested fields
-    StorageConfiguration<?> storageConf = table.getStorageConf();
-    HoodieTableConfig tableConfig = table.getMetaClient().getTableConfig();
-    String bootstrapBasePath = tableConfig.getBootstrapBasePath().orElse(null);
-    Option<String[]> partitionFields = tableConfig.getPartitionFields();
-
     int readParallelism = Math.min(writeConfig.getClusteringGroupReadParallelism(), clusteringOps.size());
     return HoodieJavaRDD.of(jsc.parallelize(clusteringOps, readParallelism).mapPartitions(clusteringOpsPartition -> {
       List<Supplier<ClosableIterator<HoodieRecord<T>>>> suppliers = new ArrayList<>();
       clusteringOpsPartition.forEachRemaining(clusteringOp -> {
         Supplier<ClosableIterator<HoodieRecord<T>>> iteratorSupplier = () -> {
-          long maxMemoryPerCompaction = IOUtils.getMaxMemoryPerCompaction(new SparkTaskContextSupplier(), config);
Review Comment:
The deleted code was simply moved to SparkJobExecutionStrategy, which now provides a common reading method.
Review Comment:
These deleted code simply moved to the SparkJobExecutionStrategy used to
provide a common reading method
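
For readers skimming the diff: the pattern being preserved here is that each file group read sits behind a `Supplier`, so nothing is opened until the partition iterator is actually consumed. A minimal, self-contained sketch of that lazy-supplier concatenation in plain Java (no Hudi types; all names below are illustrative, not the actual API):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;
import java.util.function.Supplier;

// Minimal stand-in for the supplier-backed concatenation used in the
// strategy: each Supplier opens its iterator lazily, one source at a time.
class LazySupplierIterator<T> implements Iterator<T> {
  private final Iterator<Supplier<Iterator<T>>> suppliers;
  private Iterator<T> current;

  LazySupplierIterator(List<Supplier<Iterator<T>>> list) {
    this.suppliers = list.iterator();
  }

  @Override
  public boolean hasNext() {
    // Only invoke the next supplier when the current source is exhausted.
    while ((current == null || !current.hasNext()) && suppliers.hasNext()) {
      current = suppliers.next().get();
    }
    return current != null && current.hasNext();
  }

  @Override
  public T next() {
    if (!hasNext()) {
      throw new NoSuchElementException();
    }
    return current.next();
  }

  public static void main(String[] args) {
    List<Supplier<Iterator<Integer>>> sources = new ArrayList<>();
    sources.add(() -> List.of(1, 2).iterator());
    sources.add(() -> List.of(3).iterator());
    new LazySupplierIterator<>(sources).forEachRemaining(System.out::println);
  }
}
```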
##########
hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/client/clustering/run/strategy/MultipleSparkJobExecutionStrategy.java:
##########
@@ -387,70 +332,20 @@ private HoodieData<HoodieRecord<T>> readRecordsForGroupWithLogs(JavaSparkContext
    */
   private HoodieData<HoodieRecord<T>> readRecordsForGroupBaseFiles(JavaSparkContext jsc,
                                                                    List<ClusteringOperation> clusteringOps) {
-    StorageConfiguration<?> storageConf = getHoodieTable().getStorageConf();
-    HoodieWriteConfig writeConfig = getWriteConfig();
-
-    // NOTE: It's crucial to make sure that we don't capture whole "this" object into the
-    //       closure, as this might lead to issues attempting to serialize its nested fields
-    HoodieTableConfig tableConfig = getHoodieTable().getMetaClient().getTableConfig();
-    String bootstrapBasePath = tableConfig.getBootstrapBasePath().orElse(null);
-    Option<String[]> partitionFields = tableConfig.getPartitionFields();
-
     int readParallelism = Math.min(writeConfig.getClusteringGroupReadParallelism(), clusteringOps.size());
     return HoodieJavaRDD.of(jsc.parallelize(clusteringOps, readParallelism)
         .mapPartitions(clusteringOpsPartition -> {
           List<Supplier<ClosableIterator<HoodieRecord<T>>>> iteratorGettersForPartition = new ArrayList<>();
           clusteringOpsPartition.forEachRemaining(clusteringOp -> {
-            Supplier<ClosableIterator<HoodieRecord<T>>> recordIteratorGetter = () -> {
Review Comment:
The deleted code was simply moved to SparkJobExecutionStrategy, which now provides a common reading method.
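
One thing worth double-checking in the moved code: the deleted NOTE about not capturing the whole "this" object still applies wherever the logic now lives, since a lambda that touches an instance field drags the enclosing object into Spark's closure serialization. A minimal illustration of the pattern (plain Spark; the `Strategy` class below is a hypothetical stand-in, not the actual Hudi code):

```java
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

import java.util.List;

// Hypothetical stand-in for an execution strategy; NOT the Hudi class.
class Strategy {
  private final String basePath = "/tmp/table"; // instance state

  JavaRDD<String> read(JavaSparkContext jsc, List<String> files) {
    // Copy instance state into a local first so the lambda captures only
    // this String, not the whole (possibly non-serializable) Strategy.
    String localBasePath = this.basePath;
    return jsc.parallelize(files)
        .map(f -> localBasePath + "/" + f); // captures localBasePath only
  }
}
```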