xushiyan commented on a change in pull request #2873:
URL: https://github.com/apache/hudi/pull/2873#discussion_r619699121



##########
File path: hudi-hadoop-mr/src/test/java/org/apache/hudi/hadoop/realtime/TestHoodieRealtimeRecordReader.java
##########
@@ -237,13 +231,13 @@ public void testUnMergedReader() throws Exception {
     // create a split with baseFile (parquet file written earlier) and new log file(s)
     String logFilePath = writer.getLogFile().getPath().toString();
     HoodieRealtimeFileSplit split = new HoodieRealtimeFileSplit(
-        new FileSplit(new Path(partitionDir + "/fileid0_1-0-1_" + instantTime + ".parquet"), 0, 1, jobConf),
-        basePath.toString(), Collections.singletonList(logFilePath), newCommitTime);
+        new FileSplit(new Path(partitionDir + "/fileid0_1-0-1_" + instantTime + ".parquet"), 0, 1, baseJobConf),
+        basePath.toUri().toString(), Collections.singletonList(logFilePath), newCommitTime);

Review comment:
       I could spend more time digging into the Azure environment and why the path must carry a scheme, but given this is a blocker, shall we move on and keep a note about it?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

