[ https://issues.apache.org/jira/browse/HDFS-9847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15469770#comment-15469770 ]

Mingliang Liu commented on HDFS-9847:
-------------------------------------

On a new cluster built from the current {{trunk}} code, I get the following log 
message:
{code}
$ hdfs dfs -ls /
2016-09-06 23:46:36,344 INFO Configuration.deprecation: No unit for 
dfs.client.datanode-restart.timeout(30) assuming SECONDS
{code}

Do we have a follow-up JIRA to address this, or is it by design? I'm sure 
{{dfs.client.datanode-restart.timeout}} is using the default value (it is not 
explicitly configured).
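
As far as I can tell, the message is emitted by {{Configuration#getTimeDuration}} when the configured value has no unit suffix: it falls back to the caller-supplied unit and logs the "No unit for ... assuming ..." line. A minimal sketch of the two cases (the explicit {{conf.set}} calls are just for illustration; this is not the exact code path the shell command takes):
{code}
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.conf.Configuration;

public class TimeDurationExample {
  public static void main(String[] args) {
    // Empty Configuration so the example does not depend on *-default.xml.
    Configuration conf = new Configuration(false);

    // Value with an explicit unit suffix: parsed without any log message.
    conf.set("dfs.client.datanode-restart.timeout", "30s");
    long withUnit = conf.getTimeDuration(
        "dfs.client.datanode-restart.timeout", 30, TimeUnit.SECONDS);

    // Bare number: the default unit (SECONDS here) is assumed, and the
    // "No unit for ...(30) assuming SECONDS" INFO message is logged.
    conf.set("dfs.client.datanode-restart.timeout", "30");
    long bare = conf.getTimeDuration(
        "dfs.client.datanode-restart.timeout", 30, TimeUnit.SECONDS);

    System.out.println(withUnit + " " + bare);  // 30 30
  }
}
{code}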

> HDFS configuration should accept time units
> -------------------------------------------
>
>                 Key: HDFS-9847
>                 URL: https://issues.apache.org/jira/browse/HDFS-9847
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>    Affects Versions: 2.7.1
>            Reporter: Yiqun Lin
>            Assignee: Yiqun Lin
>             Fix For: 3.0.0-alpha2
>
>         Attachments: HDFS-9847-branch-2.001.patch, 
> HDFS-9847-branch-2.002.patch, HDFS-9847-nothrow.001.patch, 
> HDFS-9847-nothrow.002.patch, HDFS-9847-nothrow.003.patch, 
> HDFS-9847-nothrow.004.patch, HDFS-9847.001.patch, HDFS-9847.002.patch, 
> HDFS-9847.003.patch, HDFS-9847.004.patch, HDFS-9847.005.patch, 
> HDFS-9847.006.patch, HDFS-9847.007.patch, HDFS-9847.008.patch, 
> branch-2-delta.002.txt, timeduration-w-y.patch
>
>
> HDFS-9821 discusses letting existing keys accept friendly time units, e.g. 60s, 
> 5m, 1d, 6w, etc. However, some configuration key names already contain a time 
> unit, like {{dfs.blockreport.intervalMsec}}, so we can make the other 
> configurations, whose names do not include a time unit, accept friendly time 
> units. The time unit {{seconds}} is frequently used in HDFS, so we can update 
> those configurations first.



