cshuo commented on code in PR #18074:
URL: https://github.com/apache/hudi/pull/18074#discussion_r2757875143


##########
hudi-flink-datasource/hudi-flink/src/test/java/org/apache/hudi/table/TestHoodieTableSource.java:
##########
@@ -788,4 +788,48 @@ void testNewHoodieSourceWithMaxCompactionMemory() throws Exception {
     HoodieTableSource tableSource = createHoodieTableSource(conf);
     assertNotNull(tableSource, "HoodieTableSource with custom compaction memory should be created");
   }
+
+  @Test
+  void testPartitionPrunerInStreamingMode() throws Exception {
+    beforeEach();
+    Configuration conf = TestConfigurations.getDefaultConf(tempFile.getAbsolutePath());
+    conf.set(FlinkOptions.READ_SOURCE_V2_ENABLED, true);
+    conf.set(FlinkOptions.READ_AS_STREAMING, true);
+
+    // Apply partition filter
+    FieldReferenceExpression partRef = new FieldReferenceExpression("partition", DataTypes.STRING(), 4, 4);
+    ValueLiteralExpression partLiteral = new ValueLiteralExpression("par1", DataTypes.STRING().notNull());
+    CallExpression partFilter = CallExpression.permanent(
+        BuiltInFunctionDefinitions.EQUALS,
+        Arrays.asList(partRef, partLiteral),
+        DataTypes.BOOLEAN());
+
+    HoodieTableSource tableSource = createHoodieTableSource(conf);
+    tableSource.applyFilters(Arrays.asList(partFilter));
+
+    assertNotNull(tableSource, "HoodieTableSource with partition pruner should 
be created");
+  }
+
+  @Test
+  void testPartitionPrunerNotSetInBatchMode() throws Exception {
+    beforeEach();
+    Configuration conf = TestConfigurations.getDefaultConf(tempFile.getAbsolutePath());
+    conf.set(FlinkOptions.READ_SOURCE_V2_ENABLED, true);
+    conf.set(FlinkOptions.READ_AS_STREAMING, false);
+
+    // Apply partition filter
+    FieldReferenceExpression partRef = new FieldReferenceExpression("partition", DataTypes.STRING(), 4, 4);
+    ValueLiteralExpression partLiteral = new ValueLiteralExpression("par1", DataTypes.STRING().notNull());
+    CallExpression partFilter = CallExpression.permanent(
+        BuiltInFunctionDefinitions.EQUALS,
+        Arrays.asList(partRef, partLiteral),
+        DataTypes.BOOLEAN());
+
+    HoodieTableSource tableSource = createHoodieTableSource(conf);
+    tableSource.applyFilters(Arrays.asList(partFilter));
+
+    assertNotNull(tableSource, "HoodieTableSource in batch mode should work 
without partition pruner");
+    // Verify that partition pruning still works in batch mode through 
FileIndex
+    assertEquals(1, tableSource.getReadPartitions().size(), "Partition should 
be pruned in batch mode");

Review Comment:
   For source V2 batch mode, HoodieSource does not go through the table-source FileIndex; partition pruning is applied by IncrementalInputSplits via scanContext.partitionPruner(). The getReadPartitions() assertion here therefore exercises the FileIndex path, which the V2 source does not read from.
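   
   A quick sketch of an assertion that would target that layer instead, reusing the test's setup above. Note the getScanContext() accessor and the pruner's filter(Collection) signature are assumptions for illustration, not guaranteed test hooks:
   
   ```java
   // Sketch only: verify pruning where the V2 source actually applies it.
   // ASSUMED, not an existing API: a test-visible getScanContext() accessor on
   // HoodieTableSource, and a filter(Collection<String>) method on the pruner.
   HoodieTableSource tableSource = createHoodieTableSource(conf);
   tableSource.applyFilters(Arrays.asList(partFilter));
   
   PartitionPruners.PartitionPruner pruner =
       tableSource.getScanContext().partitionPruner(); // hypothetical accessor
   assertNotNull(pruner, "EQUALS filter on the partition field should install a pruner");
   assertEquals(
       Collections.singleton("par1"),
       pruner.filter(Arrays.asList("par1", "par2")), // assumed signature
       "pruner should retain only the matching partition");
   ```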


