yihua commented on code in PR #8342:
URL: https://github.com/apache/hudi/pull/8342#discussion_r1169386885


##########
hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/client/clustering/run/strategy/MultipleSparkJobExecutionStrategy.java:
##########
@@ -327,21 +332,52 @@ private HoodieData<HoodieRecord<T>> readRecordsForGroupBaseFiles(JavaSparkContex
 
     // NOTE: It's crucial to make sure that we don't capture whole "this" object into the
     //       closure, as this might lead to issues attempting to serialize its nested fields
+    HoodieTableConfig tableConfig = getHoodieTable().getMetaClient().getTableConfig();
+    String bootstrapBasePath = tableConfig.getBootstrapBasePath().orElse(null);
+    Option<String[]> partitionFields = tableConfig.getPartitionFields();
+    String timeZoneId = jsc.getConf().get("timeZone", SQLConf.get().sessionLocalTimeZone());
+    boolean shouldValidateColumns = jsc.getConf().getBoolean("spark.sql.sources.validatePartitionColumns", true);
+
     return HoodieJavaRDD.of(jsc.parallelize(clusteringOps, clusteringOps.size())
        .mapPartitions(clusteringOpsPartition -> {
          List<Iterator<HoodieRecord<T>>> iteratorsForPartition = new ArrayList<>();
          clusteringOpsPartition.forEachRemaining(clusteringOp -> {
            try {
              Schema readerSchema = HoodieAvroUtils.addMetadataFields(new Schema.Parser().parse(writeConfig.getSchema()));
              HoodieFileReader baseFileReader = HoodieFileReaderFactory.getReaderFactory(recordType).getFileReader(hadoopConf.get(), new Path(clusteringOp.getDataFilePath()));

Review Comment:
   We should skip this for the bootstrap file group.
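   
   A hedged sketch of that skip (the guard mirrors the `bootstrapFilePath` check added later in this hunk; treating it as the bootstrap marker is an assumption):
   ```java
   // Sketch only: don't open the regular base-file reader for a bootstrap
   // file group, since the bootstrap branch below builds its own readers
   // for the skeleton and bootstrap data files.
   HoodieFileReader baseFileReader = null;
   if (!(StringUtils.nonEmpty(clusteringOp.getBootstrapFilePath())
       && StringUtils.nonEmpty(bootstrapBasePath))) {
     baseFileReader = HoodieFileReaderFactory.getReaderFactory(recordType)
         .getFileReader(hadoopConf.get(), new Path(clusteringOp.getDataFilePath()));
   }
   ```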



##########
hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/client/clustering/run/strategy/MultipleSparkJobExecutionStrategy.java:
##########
@@ -327,21 +332,52 @@ private HoodieData<HoodieRecord<T>> readRecordsForGroupBaseFiles(JavaSparkContex
 
     // NOTE: It's crucial to make sure that we don't capture whole "this" object into the
     //       closure, as this might lead to issues attempting to serialize its nested fields
+    HoodieTableConfig tableConfig = getHoodieTable().getMetaClient().getTableConfig();
+    String bootstrapBasePath = tableConfig.getBootstrapBasePath().orElse(null);
+    Option<String[]> partitionFields = tableConfig.getPartitionFields();
+    String timeZoneId = jsc.getConf().get("timeZone", SQLConf.get().sessionLocalTimeZone());
+    boolean shouldValidateColumns = jsc.getConf().getBoolean("spark.sql.sources.validatePartitionColumns", true);
+
     return HoodieJavaRDD.of(jsc.parallelize(clusteringOps, clusteringOps.size())
        .mapPartitions(clusteringOpsPartition -> {
          List<Iterator<HoodieRecord<T>>> iteratorsForPartition = new ArrayList<>();
          clusteringOpsPartition.forEachRemaining(clusteringOp -> {
            try {
              Schema readerSchema = HoodieAvroUtils.addMetadataFields(new Schema.Parser().parse(writeConfig.getSchema()));
              HoodieFileReader baseFileReader = HoodieFileReaderFactory.getReaderFactory(recordType).getFileReader(hadoopConf.get(), new Path(clusteringOp.getDataFilePath()));
+              // handle bootstrap path
+              if (StringUtils.nonEmpty(clusteringOp.getBootstrapFilePath()) && StringUtils.nonEmpty(bootstrapBasePath)) {

Review Comment:
   Do we need to provide the same fix for the MOR table in `readRecordsForGroupWithLogs(jsc, clusteringOps, instantTime)`? E.g., when clustering is applied to a bootstrap file group with a bootstrap data file, skeleton file, and log files.
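   
   If so, a minimal sketch of reusing the same guard there (only the names already visible in this hunk are real; `createBootstrapFileReader` is a hypothetical helper wrapping the bootstrap branch above):
   ```java
   // Sketch (assumption): select the base-file reader the same way inside
   // readRecordsForGroupWithLogs, so a bootstrap file group with log files
   // also stitches the skeleton file to its bootstrap data file before the
   // log records are merged on top of it.
   HoodieFileReader baseFileReader;
   if (StringUtils.nonEmpty(clusteringOp.getBootstrapFilePath())
       && StringUtils.nonEmpty(bootstrapBasePath)) {
     baseFileReader = createBootstrapFileReader(  // hypothetical helper
         clusteringOp, partitionFields, timeZoneId, shouldValidateColumns);
   } else {
     baseFileReader = HoodieFileReaderFactory.getReaderFactory(recordType)
         .getFileReader(hadoopConf.get(), new Path(clusteringOp.getDataFilePath()));
   }
   // ...then feed baseFileReader into the existing log-merging iterator...
   ```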



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@hudi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
