stream2000 commented on code in PR #9199:
URL: https://github.com/apache/hudi/pull/9199#discussion_r1280502151


##########
hudi-spark-datasource/hudi-spark/src/test/java/org/apache/hudi/functional/TestSparkConsistentBucketClustering.java:
##########
@@ -303,16 +312,19 @@ public void testConcurrentWrite() throws IOException {
     // Concurrent is not blocked by the clustering
     writeData(HoodieActiveTimeline.createNewInstantTime(), 2000, true);
     // The records are immediately visible when the writer completes
-    Assertions.assertEquals(4000, readRecords(dataGen.getPartitionPaths()).size());
+    Assertions.assertEquals(4000, readRecords().size());
     // Clustering finished, check the number of records (there will be file group switch in the background)
     writeClient.cluster(clusteringTime, true);
-    Assertions.assertEquals(4000, readRecords(dataGen.getPartitionPaths()).size());
+    Assertions.assertEquals(4000, readRecords().size());
   }
 
-  private List<GenericRecord> readRecords(String[] partitions) {
-    return HoodieMergeOnReadTestUtils.getRecordsUsingInputFormat(hadoopConf,
-        Arrays.stream(partitions).map(p -> Paths.get(basePath, p).toString()).collect(Collectors.toList()),
-        basePath, new JobConf(hadoopConf), true, false);
+  private List<Row> readRecords() {

Review Comment:
   For reviewers: reading the data written by the bulk insert row writer through the input format hits the same issue as #8838, so I read the data back with Spark instead of the input format here.
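   A minimal sketch of what such a Spark-based helper could look like (hypothetical, not copied from this PR; the sparkSession field and basePath are assumed to be available on the test class):

   import java.util.List;
   import org.apache.spark.sql.Dataset;
   import org.apache.spark.sql.Row;

   // Hypothetical sketch: read the whole table back through Spark instead of
   // the Hive input format. Assumes the test class exposes a SparkSession
   // (sparkSession) and the table base path (basePath).
   private List<Row> readRecords() {
     Dataset<Row> rows = sparkSession.read().format("hudi").load(basePath);
     return rows.collectAsList();
   }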


