umehrot2 commented on a change in pull request #1924:
URL: https://github.com/apache/hudi/pull/1924#discussion_r466779443



##########
File path: hudi-client/src/main/java/org/apache/hudi/table/action/bootstrap/BootstrapUtils.java
##########
@@ -41,37 +48,87 @@
    * Returns leaf folders with files under a path.
    * @param fs  File System
    * @param basePathStr Base Path to look for leaf folders
-   * @param filePathFilter  Filters to skip directories/paths
+   * @param jsc Java spark context
    * @return list of partition paths with files under them.
    * @throws IOException
    */
   public static List<Pair<String, List<HoodieFileStatus>>> getAllLeafFoldersWithFiles(FileSystem fs, String basePathStr,
-                                                                                      PathFilter filePathFilter) throws IOException {
+      JavaSparkContext jsc) throws IOException {
     final Path basePath = new Path(basePathStr);
     final Map<Integer, List<String>> levelToPartitions = new HashMap<>();
    final Map<String, List<HoodieFileStatus>> partitionToFiles = new HashMap<>();
-    FSUtils.processFiles(fs, basePathStr, (status) -> {
-      if (status.isFile() && filePathFilter.accept(status.getPath())) {
-        String relativePath = FSUtils.getRelativePartitionPath(basePath, status.getPath().getParent());
-        List<HoodieFileStatus> statusList = partitionToFiles.get(relativePath);
-        if (null == statusList) {
-          Integer level = (int) relativePath.chars().filter(ch -> ch == '/').count();
-          List<String> dirs = levelToPartitions.get(level);
-          if (null == dirs) {
-            dirs = new ArrayList<>();
-            levelToPartitions.put(level, dirs);
+    PathFilter filePathFilter = getFilePathFilter();
+    PathFilter metaPathFilter = getExcludeMetaPathFilter();
+
+    FileStatus[] topLevelStatuses = fs.listStatus(new Path(basePathStr));
+    List<String> subDirectories = new ArrayList<>();
+
+    List<Pair<HoodieFileStatus, Pair<Integer, String>>> result = new ArrayList<>();

Review comment:
       Only the outer structure is somewhat similar: both first list and act on the top-level files, and then use the `spark context` to perform the same action on sub-directories in parallel. But the inner logic is different, and the values being collected are different.
   
   If we really want to re-use the common outer logic, it would require exploring extracting the inner logic into `serializable functions` that also work with the `spark context`. So, to avoid over-complicating this PR, I can explore this separately if that's okay. I have created a new Jira, https://issues.apache.org/jira/browse/HUDI-1158, which lists the two optimizations we discussed w.r.t. the parallel listing behavior:
   
   - The parallelization should happen at the leaf partition directory level, not just at the top directory level
   - Extract out the common code paths
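   The "serializable functions" extraction mentioned above could look roughly like the sketch below. All names here are hypothetical, and it substitutes a plain `Serializable` functional interface plus `parallelStream()` for the actual `JavaSparkContext.parallelize(...).map(...).collect()` call, just to show the shape of the refactoring:
   
   ```java
   import java.io.Serializable;
   import java.util.Arrays;
   import java.util.List;
   import java.util.stream.Collectors;
   
   public class ListingSketch {
   
     // A functional interface that also extends Serializable, so Spark could
     // ship instances of it to executors (hypothetical name).
     @FunctionalInterface
     interface SerializableListing<T, R> extends Serializable {
       R apply(T input);
     }
   
     // Shared outer logic: apply the same listing action to every
     // sub-directory in parallel, collecting one result per directory.
     static <R> List<R> listInParallel(List<String> subDirectories,
                                       SerializableListing<String, R> action) {
       // With Spark this would be:
       //   jsc.parallelize(subDirectories).map(action::apply).collect();
       return subDirectories.parallelStream()
           .map(action::apply)
           .collect(Collectors.toList());
     }
   
     public static void main(String[] args) {
       List<String> dirs = Arrays.asList("2020/01/01", "2020/01/02");
       // Inner logic extracted as a serializable function: here it just
       // computes the partition depth, like the level computation in the diff.
       SerializableListing<String, Integer> depth =
           p -> (int) p.chars().filter(ch -> ch == '/').count();
       System.out.println(listInParallel(dirs, depth));  // prints [2, 2]
     }
   }
   ```
   
   Because the inner logic is passed in as a parameter, both the bootstrap listing and other callers could share the same outer loop while collecting different value types.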




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

