thanks Hardik,

I did a bit of reading on the 'stale' state. HDFS-3703 describes stale as
a state between dead and alive, and says the timeout for marking a node as
dead is 10.5 minutes (10 minutes 30 seconds).
But can this be configured?

Please help.
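(For reference, my understanding is that the NameNode derives the dead-node
timeout from two settings as 2 * dfs.namenode.heartbeat.recheck-interval +
10 * dfs.heartbeat.interval. A minimal sketch with the default values,
assuming that formula is right:

```python
# Sketch of the dead-node timeout calculation, assuming the NameNode
# uses: 2 * recheck interval + 10 * heartbeat interval.
heartbeat_interval_ms = 3 * 1000      # dfs.heartbeat.interval, 3 s -> ms
recheck_interval_ms = 5 * 60 * 1000   # dfs.namenode.heartbeat.recheck-interval, 300000 ms

dead_timeout_ms = 2 * recheck_interval_ms + 10 * heartbeat_interval_ms
print(dead_timeout_ms)           # 630000 ms
print(dead_timeout_ms / 60000)   # 10.5 minutes
```

If that formula holds, it would explain the 10-minutes-30-seconds figure,
and raising either property would raise the dead-node timeout.)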


On Wed, Jan 1, 2014 at 2:46 AM, Hardik Pandya <smarty.ju...@gmail.com> wrote:

> <property>
>   <name>dfs.heartbeat.interval</name>
>   <value>3</value>
>   <description>Determines datanode heartbeat interval in
> seconds.</description>
> </property>
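>
> For the dead timeout specifically, I believe the knob in Hadoop 2 is
> dfs.namenode.heartbeat.recheck-interval in hdfs-site.xml (the property
> name and default are from memory, so please verify against your own
> hdfs-default.xml), something like:
>
> <property>
>   <name>dfs.namenode.heartbeat.recheck-interval</name>
>   <value>300000</value>
>   <description>Heartbeat recheck interval in milliseconds. The namenode
>   uses this together with dfs.heartbeat.interval when deciding whether
>   a datanode is dead.</description>
> </property>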
>
> and maybe you are looking for
>
>
> <property>
>   <name>dfs.namenode.stale.datanode.interval</name>
>   <value>30000</value>
>   <description>
>     Default time interval for marking a datanode as "stale", i.e., if
>     the namenode has not received heartbeat msg from a datanode for
>     more than this time interval, the datanode will be marked and treated
>     as "stale" by default. The stale interval cannot be too small since
>     otherwise this may cause too frequent change of stale states.
>     We thus set a minimum stale interval value (the default value is 3
>     times of heartbeat interval) and guarantee that the stale interval
>     cannot be less than the minimum value.
>   </description>
> </property>
>
>
> On Fri, Dec 27, 2013 at 10:10 PM, Vishnu Viswanath <
> vishnu.viswanat...@gmail.com> wrote:
>
>> Well, I couldn't find any property in
>> http://hadoop.apache.org/docs/r1.2.1/hdfs-default.html that sets the
>> time interval to consider a node as dead.
>>
>> I saw there is a property dfs.namenode.heartbeat.recheck-interval or
>> heartbeat.recheck.interval, but I couldn't find it there. Is it removed,
>> or am I looking in the wrong place?
>>
>>
>> On Sat, Dec 28, 2013 at 7:36 AM, Chris Embree <cemb...@gmail.com> wrote:
>>
>>> Maybe I'm just grouchy tonight... it seems all of these questions can
>>> be answered by RTFM.
>>> http://hadoop.apache.org/docs/current2/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html
>>>
>>> What's the balance between encouraging new-to-Hadoop users to learn,
>>> and OMG!?
>>>
>>>
>>> On Fri, Dec 27, 2013 at 8:58 PM, Vishnu Viswanath <
>>> vishnu.viswanat...@gmail.com> wrote:
>>>
>>>> Hi all,
>>>>
>>>> Can someone tell me these:
>>>>
>>>> 1) Which property in the hadoop conf sets the time limit to consider a
>>>> node as dead?
>>>> 2) After detecting a node as dead, after how much time does hadoop
>>>> replicate its blocks to another node?
>>>> 3) If the dead node comes alive again, in how much time does hadoop
>>>> identify a block as over-replicated, and when does it delete that block?
>>>>
>>>> Regards,
>>>>
>>>
>>>
>>
>