[ https://issues.apache.org/jira/browse/HDFS-7784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15796824#comment-15796824 ]

Gang Xie commented on HDFS-7784:
--------------------------------

Regarding the GC activity, the jstat output is below. It did cause some 
long GC pauses, but compared with the GC observed during a full block 
report, it looks OK. 


jstat -gcutil 10885 5000 1000
  S0     S1     E      O      P     YGC     YGCT    FGC    FGCT     GCT   
  0.00 100.00  67.94  89.32  69.63    313  188.870     3    3.130  192.000
  0.00 100.00  67.94  89.32  69.63    313  188.870     3    3.130  192.000
  0.00 100.00  67.95  89.32  69.63    313  188.870     3    3.130  192.000
  0.00 100.00  81.32  89.32  70.61    313  188.870     3    3.130  192.000
100.00   0.00  19.44  89.68  70.62    314  192.495     3    3.130  195.626
  0.00  64.43  60.41  90.04  70.62    315  192.938     3    3.130  196.068
 56.75   7.26 100.00  90.27  70.62    317  193.167     3    3.130  196.297
  2.27   0.00  43.16  90.38  70.62    318  193.653     3    3.130  196.783
  0.00   0.68  91.15  90.38  70.62    319  193.729     3    3.130  196.859
  0.00   0.05  38.53  90.38  70.62    321  193.875     3    3.130  197.005
  0.01   0.00  82.04  90.38  70.62    322  193.951     3    3.130  197.081
  0.00   0.00  19.95  90.38  70.62    324  194.084     3    3.130  197.214
  0.00   0.00   0.00  90.38  70.62    326  194.235     4    3.130  197.365
  0.00   0.00  98.27  90.33  70.62    326  194.235     4    5.240  199.475
  0.00   0.00  40.11  90.27  70.62    328  194.372     4    5.240  199.612
  0.00   0.00  90.25  90.20  70.62    329  194.449     4    5.240  199.689
  0.00   0.00  30.08  90.13  70.62    331  194.605     4    5.240  199.845
  0.00   0.00  74.21  90.05  70.62    332  194.676     4    5.240  199.916
  0.00   0.00  14.04  89.95  70.62    334  194.819     4    5.240  200.059
  0.00   0.00  62.17  89.85  70.62    335  194.894     4    5.240  200.134
  0.00   0.00   4.01  89.79  70.62    337  195.042     4    5.240  200.282
  0.00   0.00  48.13  89.74  60.00    338  195.116     4    5.240  200.356
  0.00   0.00  80.22  89.74  60.00    339  195.192     5    5.241  200.433
  0.00   0.00   4.01  89.74  60.00    341  195.349     5    5.241  200.590
  0.00   0.00  24.07  89.74  60.00    342  195.423     5    5.241  200.664
  0.00   0.00  50.14  89.74  60.00    343  195.498     5    5.241  200.739
  0.00   0.00  96.27  89.74  60.00    344  195.571     5    5.241  200.813
  0.00   0.00  38.11  89.74  60.00    346  195.708     5    5.241  200.949
  0.00   0.00  86.24  89.74  60.00    347  195.785     5    5.241  201.026


Total time for which application threads were stopped: 1.6167710 seconds
Total time for which application threads were stopped: 9.6578530 seconds
Total time for which application threads were stopped: 1.0820690 seconds
Total time for which application threads were stopped: 1.1189530 seconds
Total time for which application threads were stopped: 1.2096840 seconds
Total time for which application threads were stopped: 8.6128080 seconds
Total time for which application threads were stopped: 7.5763860 seconds
Total time for which application threads were stopped: 2.1393520 seconds
Total time for which application threads were stopped: 1.9607400 seconds
Total time for which application threads were stopped: 3.0785030 seconds
Total time for which application threads were stopped: 2.7774960 seconds
Total time for which application threads were stopped: 4.5180250 seconds
Total time for which application threads were stopped: 1.9637590 seconds
Total time for which application threads were stopped: 1.8422970 seconds
Total time for which application threads were stopped: 1.9868880 seconds
Total time for which application threads were stopped: 2.2927440 seconds
Total time for which application threads were stopped: 2.7141160 seconds
Total time for which application threads were stopped: 2.9030460 seconds
Total time for which application threads were stopped: 5.2282350 seconds
Total time for which application threads were stopped: 3.6261510 seconds
Total time for which application threads were stopped: 2.1100760 seconds


> load fsimage in parallel
> ------------------------
>
>                 Key: HDFS-7784
>                 URL: https://issues.apache.org/jira/browse/HDFS-7784
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>            Reporter: Walter Su
>            Assignee: Walter Su
>            Priority: Minor
>              Labels: BB2015-05-TBR
>         Attachments: HDFS-7784.001.patch, test-20150213.pdf
>
>
> When a single NameNode holds a huge number of files, without using federation, 
> the startup/restart speed is slow. The fsimage loading step takes most of the 
> time. fsimage loading can be separated into two parts: deserialization and object 
> construction (mostly map insertion). Deserialization takes most of the CPU 
> time, so we can do the deserialization in parallel and add to the hashmap 
> serially. This will significantly reduce the NN start time.
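The split described above (parallel deserialization, serial map insertion) can be sketched roughly as follows. This is an illustrative toy, not the actual HDFS-7784 patch: the `Inode` class, `deserialize` stand-in, and section layout are all hypothetical, standing in for the real protobuf section decoding and the NameNode's inode map.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Toy sketch of the HDFS-7784 idea: CPU-heavy deserialization runs on a
// thread pool, while insertion into the (non-thread-safe) map stays on a
// single thread. All names here are illustrative, not real HDFS APIs.
public class ParallelLoadSketch {
    static final class Inode {
        final long id;
        final String name;
        Inode(long id, String name) { this.id = id; this.name = name; }
    }

    // Stand-in for CPU-heavy deserialization of one fsimage section.
    static List<Inode> deserialize(byte[] section, long baseId) {
        List<Inode> out = new ArrayList<>();
        for (int i = 0; i < section.length; i++) {
            out.add(new Inode(baseId + i, "inode-" + (baseId + i)));
        }
        return out;
    }

    static Map<Long, Inode> load(List<byte[]> sections) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            // 1) Deserialize all sections in parallel.
            List<Future<List<Inode>>> futures = new ArrayList<>();
            long base = 0;
            for (byte[] s : sections) {
                final long b = base;
                futures.add(pool.submit(() -> deserialize(s, b)));
                base += s.length;
            }
            // 2) Insert into the map serially, on this thread only.
            Map<Long, Inode> inodeMap = new HashMap<>();
            for (Future<List<Inode>> f : futures) {
                for (Inode in : f.get()) {
                    inodeMap.put(in.id, in);
                }
            }
            return inodeMap;
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        // Three fake "sections" of 3, 2, and 4 records.
        Map<Long, Inode> map =
            load(List.of(new byte[3], new byte[2], new byte[4]));
        System.out.println(map.size()); // prints 9
    }
}
```

Because `Future.get()` returns the sections in submission order, the serial insertion phase sees a deterministic order even though the decoding itself is concurrent.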



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
