[ https://issues.apache.org/jira/browse/HDFS-15610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Karthik Palanisamy updated HDFS-15610:
--------------------------------------
    Description: 
There is kernel overhead during a datanode upgrade. If a datanode has millions of 
blocks and 10+ disks, the block-layout migration becomes very expensive during 
its hardlink operation. Slowness is observed when running with a large number of 
hardlink threads (dfs.datanode.block.id.layout.upgrade.threads, default is 12 
threads per disk), and the upgrade runs for 2+ hours.

i.e., 10 disks * 12 threads = 120 threads in total.

Small test:

RHEL7, 32 cores, 20 GB RAM, 8 GB DN heap
||dfs.datanode.block.id.layout.upgrade.threads||Blocks||Disks||Time taken||
|12|3.3 Million|1|2 minutes and 59 seconds|
|6|3.3 Million|1|2 minutes and 35 seconds|
|3|3.3 Million|1|2 minutes and 51 seconds|

Ran the same test twice; the results were about 95% consistent (only a few 
seconds' difference between iterations). Using 6 threads is faster than 12 
threads because of the per-thread overhead.
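
Until the default is lowered, the thread count can also be tuned per cluster 
before the upgrade. A minimal hdfs-site.xml sketch (assuming the property is 
simply overridden in the datanode configuration; the value 6 is only the setting 
used in the test above, not a recommended default):

{code:xml}
<!-- hdfs-site.xml on the datanode: lower the per-disk hardlink thread count
     used by the block-layout upgrade from the default of 12 to 6 -->
<property>
  <name>dfs.datanode.block.id.layout.upgrade.threads</name>
  <value>6</value>
</property>
{code}

With 10 disks this would cap the upgrade at 10*6 = 60 hardlink threads instead of 120.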


> Reduce datanode upgrade/hardlink thread
> ---------------------------------------
>
>                 Key: HDFS-15610
>                 URL: https://issues.apache.org/jira/browse/HDFS-15610
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 3.0.0, 3.1.4
>            Reporter: Karthik Palanisamy
>            Assignee: Karthik Palanisamy
>            Priority: Major
>


