wecharyu commented on code in PR #9121: URL: https://github.com/apache/hudi/pull/9121#discussion_r1264405704
##########
hudi-common/src/main/java/org/apache/hudi/metadata/FileSystemBackedTableMetadata.java:
##########

@@ -106,42 +107,33 @@ private List<String> getPartitionPathWithPathPrefix(String relativePathPrefix) t
     // TODO: Get the parallelism from HoodieWriteConfig
     int listingParallelism = Math.min(DEFAULT_LISTING_PARALLELISM, pathsToList.size());

-    // List all directories in parallel
+    // List all directories in parallel:
+    // if the current directory contains partition metadata, add it to the result;
+    // if it does not, add its subdirectories to the queue to be processed.
     engineContext.setJobStatus(this.getClass().getSimpleName(), "Listing all partitions with prefix " + relativePathPrefix);
-    List<FileStatus> dirToFileListing = engineContext.flatMap(pathsToList, path -> {
+    // "result" below holds a list of pairs: the first entry optionally holds a deduced partition path,
+    // and the second entry optionally holds a directory path to be processed further.
+    List<Pair<Option<String>, Option<Path>>> result = engineContext.flatMap(pathsToList, path -> {
       FileSystem fileSystem = path.getFileSystem(hadoopConf.get());
-      return Arrays.stream(fileSystem.listStatus(path));
+      if (HoodiePartitionMetadata.hasPartitionMetadata(fileSystem, path)) {
+        return Stream.of(Pair.of(Option.of(FSUtils.getRelativePartitionPath(new Path(datasetBasePath), path)), Option.empty()));

Review Comment:
   Thanks for your careful check; this case is handled in `HoodiePartitionMetadata#hasPartitionMetadata`:
   https://github.com/apache/hudi/blob/37d3d8ef504794d64fb87c838bf58bafa8acaa16/hudi-common/src/main/java/org/apache/hudi/common/model/HoodiePartitionMetadata.java#L302-L306

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
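For context on the pattern the diff implements: partition discovery walks the table's directory tree breadth-first, treating a directory that contains a partition metadata marker as a partition and otherwise enqueuing its subdirectories. The following is a minimal, illustrative sketch of that traversal over a local file tree using `java.nio.file` — it is not the Hudi implementation (which distributes the listing via `engineContext.flatMap` over the Hadoop `FileSystem`), and the `PartitionDiscovery` class and `MARKER` constant are names invented for this example.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.stream.Stream;

// Illustrative sketch only: breadth-first partition discovery.
// A directory is considered a partition when it contains a marker file;
// otherwise its subdirectories are enqueued for further listing.
public class PartitionDiscovery {

  // Marker file name assumed for this sketch.
  static final String MARKER = ".hoodie_partition_metadata";

  public static List<String> listPartitions(Path basePath) throws IOException {
    List<String> partitions = new ArrayList<>();
    Deque<Path> pathsToList = new ArrayDeque<>();
    pathsToList.add(basePath);
    while (!pathsToList.isEmpty()) {
      Path dir = pathsToList.poll();
      if (Files.exists(dir.resolve(MARKER))) {
        // Partition found: record its path relative to the table base.
        partitions.add(basePath.relativize(dir).toString());
      } else {
        // Not a partition: descend into subdirectories.
        try (Stream<Path> children = Files.list(dir)) {
          children.filter(Files::isDirectory).forEach(pathsToList::add);
        }
      }
    }
    return partitions;
  }

  public static void main(String[] args) throws IOException {
    Path base = Files.createTempDirectory("table");
    Path partition = Files.createDirectories(base.resolve("2023").resolve("07"));
    Files.createFile(partition.resolve(MARKER));
    System.out.println(listPartitions(base)); // e.g. [2023/07]
  }
}
```

In the actual patch, each worker returns a `Pair<Option<String>, Option<Path>>` instead of mutating a shared queue, so the "found partition" and "descend further" outcomes can be merged after the parallel `flatMap` step.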