[ 
https://issues.apache.org/jira/browse/HDFS-15987?focusedWorklogId=604526&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-604526
 ]

ASF GitHub Bot logged work on HDFS-15987:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 01/Jun/21 14:17
            Start Date: 01/Jun/21 14:17
    Worklog Time Spent: 10m 
      Work Description: Hexiaoqiao commented on a change in pull request #2918:
URL: https://github.com/apache/hadoop/pull/2918#discussion_r643135023



##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageTextWriter.java
##########
@@ -640,6 +657,19 @@ long getParentId(long id) throws IOException {
   private void output(Configuration conf, FileSummary summary,
       FileInputStream fin, ArrayList<FileSummary.Section> sections)
       throws IOException {
+    ArrayList<FileSummary.Section> allINodeSubSections =
+        getINodeSubSections(sections);
+    if (numThreads > 1 && !parallelOut.equals("-") &&

Review comment:
       Same as the above comments.

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageTextWriter.java
##########
@@ -649,14 +679,123 @@ private void output(Configuration conf, FileSummary summary,
         is = FSImageUtil.wrapInputStreamForCompression(conf,
             summary.getCodec(), new BufferedInputStream(new LimitInputStream(
                 fin, section.getLength())));
-        outputINodes(is);
+        INodeSection s = INodeSection.parseDelimitedFrom(is);
+        LOG.info("Found {} INodes in the INode section", s.getNumInodes());
+        int count = outputINodes(is, out);
+        LOG.info("Outputted {} INodes.", count);
       }
     }
     afterOutput();
     long timeTaken = Time.monotonicNow() - startTime;
     LOG.debug("Time to output inodes: {}ms", timeTaken);
   }
 
+  /**
+   * STEP1: Process sub-sections with multiple threads.
+   * Given n (1<n<=k) threads to process k sections,
+   * E.g. 10 sections and 4 threads, grouped as follows:
+   * |---------------------------------------------------------------|
+   * | (0    1    2)    (3    4    5)    (6    7)     (8    9)       |
+   * | thread[0]        thread[1]        thread[2]    thread[3]      |
+   * |---------------------------------------------------------------|
+   *
+   * STEP2: Merge files.
+   */
+  private void outputInParallel(Configuration conf, FileSummary summary,

Review comment:
       Makes sense to me. I left some nit comments inline.
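
For reference, the grouping in the STEP1 diagram (k sections split across n threads in contiguous runs, with the first k % n threads taking one extra section) could be sketched as below; this is an illustrative reconstruction, not the patch's actual code:

{code:java}
import java.util.ArrayList;
import java.util.List;

/** Illustrative sketch of the STEP1 grouping, not the patch's code. */
class SectionGrouping {
  /**
   * Splits k sections into n contiguous groups. The first (k % n) groups
   * get one extra section, so 10 sections and 4 threads yield groups of
   * sizes 3, 3, 2, 2 -- matching the diagram above.
   */
  static <T> List<List<T>> group(List<T> sections, int numThreads) {
    int k = sections.size();
    int base = k / numThreads;
    int extra = k % numThreads;
    List<List<T>> groups = new ArrayList<>();
    int start = 0;
    for (int i = 0; i < numThreads; i++) {
      int size = base + (i < extra ? 1 : 0);
      groups.add(new ArrayList<>(sections.subList(start, start + size)));
      start += size;
    }
    return groups;
  }
}
{code}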

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/OfflineImageViewerPB.java
##########
@@ -132,6 +133,7 @@ private static Options buildOptions() {
     options.addOption("delimiter", true, "");
     options.addOption("sp", false, "");
     options.addOption("t", "temp", true, "");
+    options.addOption("threads", true, "");

Review comment:
       I am concerned that the parameter `-threads` will collide with `-t`: it
could be parsed as `-t hreads` here. It would be safer to rename the option to
something else to avoid the ambiguity.
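
The concern can be checked with a small Commons CLI snippet like the one below (purely illustrative: the class name is made up, and the tool's actual parser may resolve the token differently):

{code:java}
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.Options;
import org.apache.commons.cli.ParseException;
import org.apache.commons.cli.PosixParser;

/** Hypothetical check for the -t / -threads collision. */
public class OptionCollisionCheck {
  public static void main(String[] args) throws ParseException {
    Options options = new Options();
    options.addOption("t", "temp", true, "temporary dir");
    options.addOption("threads", true, "number of threads");

    // PosixParser may "burst" a short-option token: because -t takes an
    // argument, "-threads" can be read as -t with the value "hreads".
    // Running this check verifies the actual behavior.
    CommandLine cmd = new PosixParser().parse(
        options, new String[] {"-threads", "4"});

    System.out.println("t       = " + cmd.getOptionValue("t"));
    System.out.println("threads = " + cmd.getOptionValue("threads"));
  }
}
{code}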

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageDelimitedTextWriter.java
##########
@@ -146,7 +146,13 @@ public String build() {
   PBImageDelimitedTextWriter(PrintStream out, String delimiter,
                              String tempPath, boolean printStoragePolicy)
       throws IOException {
-    super(out, delimiter, tempPath);
+    this(out, delimiter, tempPath, printStoragePolicy, 1, "-");

Review comment:
       I am confused about why we use the static string "-" rather than null.
Is there any other consideration?
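
If the sentinel is kept, one illustrative alternative to a bare "-" is a named constant (hypothetical names, not the patch's code), which makes the meaning explicit at the call site:

{code:java}
/** Hypothetical sketch; names are illustrative, not the patch's code. */
class DelimitedWriterSketch {
  /** "-" conventionally means "write to stdout" in the oiv tool's -o option. */
  static final String OUTPUT_STDOUT = "-";

  DelimitedWriterSketch(String tempPath) {
    // Single-threaded default: one thread, stdout sentinel spelled out.
    this(tempPath, 1, OUTPUT_STDOUT);
  }

  DelimitedWriterSketch(String tempPath, int numThreads, String parallelOut) {
    // ... construction elided ...
  }
}
{code}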






Issue Time Tracking
-------------------

    Worklog Id:     (was: 604526)
    Time Spent: 2h  (was: 1h 50m)

> Improve oiv tool to parse fsimage file in parallel with delimited format
> ------------------------------------------------------------------------
>
>                 Key: HDFS-15987
>                 URL: https://issues.apache.org/jira/browse/HDFS-15987
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Hongbing Wang
>            Assignee: Hongbing Wang
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 2h
>  Remaining Estimate: 0h
>
> The purpose of this Jira is to improve the oiv tool to parse an fsimage file
> with sub-sections (see HDFS-14617) in parallel with the Delimited format.
> 1. Serial parsing is time-consuming
> The time to serially parse a large fsimage in the Delimited format (e.g. `hdfs
> oiv -p Delimited -t <tmp> ...`) breaks down as follows:
> {code:java}
> 1) Loading string table:                 -> Not time consuming.
> 2) Loading inode references:             -> Not time consuming
> 3) Loading directories in INode section: -> Slightly time consuming (3%)
> 4) Loading INode directory section:      -> A bit time consuming (11%)
> 5) Output:                               -> Very time consuming (86%){code}
> Therefore, the output stage benefits the most from parallelization.
> 2. How to output in parallel
> The sub-sections are grouped in order; each thread processes one group and
> writes to its own output file, and the per-thread output files are merged at
> the end.
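> A minimal sketch of the merge step is shown below; the file naming and
> helper here are assumptions for illustration, not the patch's code:
> {code:java}
> import java.io.IOException;
> import java.io.OutputStream;
> import java.nio.file.Files;
> import java.nio.file.Path;
> import java.nio.file.Paths;
>
> class MergeSketch {
>   // Concatenate the per-thread temp files, in thread order, into the
>   // final output file, then delete the temp files.
>   static void mergeOutputs(String outFile, int numThreads)
>       throws IOException {
>     try (OutputStream merged = Files.newOutputStream(Paths.get(outFile))) {
>       for (int i = 0; i < numThreads; i++) {
>         Path part = Paths.get(outFile + ".tmp." + i);
>         Files.copy(part, merged);
>         Files.delete(part);
>       }
>     }
>   }
> }
> {code}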
> 3. The results of a test
> {code:java}
>  input fsimage file info:
>  3.4G, 12 sub-sections, 55976500 INodes
>  -----------------------------------------
>  Threads  TotalTime  OutputTime  MergeTime
>  1        18m37s     16m18s      –
>  4        8m7s       4m49s       41s{code}


