nsivabalan commented on code in PR #5501:
URL: https://github.com/apache/hudi/pull/5501#discussion_r871398787


##########
hudi-integ-test/src/main/java/org/apache/hudi/integ/testsuite/dag/nodes/BaseValidateDatasetNode.java:
##########
@@ -157,14 +157,19 @@ public void execute(ExecutionContext context, int curItrCount) throws Exception
     }
   }
 
-  private Dataset<Row> getInputDf(ExecutionContext context, SparkSession session, String inputPath) {
+  private Dataset<Row> getInputDf(ExecutionContext context, SparkSession session, String inputPath, String partitionsToSkipWithValidate) {
     String recordKeyField = context.getWriterContext().getProps().getString(DataSourceWriteOptions.RECORDKEY_FIELD().key());
     String partitionPathField = context.getWriterContext().getProps().getString(DataSourceWriteOptions.PARTITIONPATH_FIELD().key());
     // todo: fix hard coded fields from configs.
     // read input and resolve insert, updates, etc.
     Dataset<Row> inputDf = session.read().format("avro").load(inputPath);
+    Dataset<Row> trimmedDf = inputDf;
+    if (!config.partitonsToSkipWithValidate().isEmpty()) {
+      trimmedDf = inputDf.filter("instr(" + partitionPathField + ", '" + config.partitonsToSkipWithValidate() + "') != 1");

Review Comment:
   Then we would have to convert to the timestamp-based partition path and then do the filtering. Here we are filtering on the input dataframe itself, so there isn't much we can do here.
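   For reference, a minimal, self-contained sketch of the filter semantics being discussed (the column name and skipped value are illustrative, not taken from the PR):

       import org.apache.spark.sql.Dataset;
       import org.apache.spark.sql.Row;
       import org.apache.spark.sql.SparkSession;

       public class InstrFilterSketch {
         public static void main(String[] args) {
           SparkSession spark = SparkSession.builder()
               .master("local[1]")
               .appName("instr-filter-sketch")
               .getOrCreate();

           // Inline table with one partition-path column, standing in for the
           // input dataframe read in getInputDf.
           Dataset<Row> df = spark.sql(
               "SELECT * FROM VALUES ('2015/03/16'), ('2016/03/15') AS t(partition_path)");

           // instr(col, s) returns the 1-based position of s within col, or 0
           // if absent, so "!= 1" drops every row whose partition path starts
           // with the skipped value.
           df.filter("instr(partition_path, '2015') != 1").show(); // keeps only 2016/03/15

           spark.stop();
         }
       }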


