[ https://issues.apache.org/jira/browse/HDFS-16270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

JiangHua Zhu resolved HDFS-16270.
---------------------------------
    Resolution: Not A Problem

> Improve NNThroughputBenchmark#printUsage() related to block size
> ----------------------------------------------------------------
>
>                 Key: HDFS-16270
>                 URL: https://issues.apache.org/jira/browse/HDFS-16270
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: benchmarks, namenode
>            Reporter: JiangHua Zhu
>            Assignee: JiangHua Zhu
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> When running NNThroughputBenchmark with incorrect usage, we get a prompt
> message. E.g.:
> '
> If connecting to a remote NameNode with -fs option, 
> dfs.namenode.fs-limits.min-block-size should be set to 16.
> 21/10/13 11:55:32 INFO util.ExitUtil: Exiting with status -1: ExitException
> '
> This behavior itself is fine.
> However, even when 'dfs.blocksize' has already been set before execution,
> for example:
> conf.setInt(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, 16);
> we still get the above prompt, which is wrong.
> In addition, the hint should refer to 'dfs.blocksize' rather than
> 'dfs.namenode.fs-limits.min-block-size', because the NNThroughputBenchmark
> constructor has already set 'dfs.namenode.fs-limits.min-block-size' to 0 in
> advance.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
