The reason is that when you write something to HDFS, it guarantees that it
will be written to the specified number of replicas. So if your replication
factor is 2 and one of your nodes (out of 2) is down, then it cannot
guarantee the write.

The way to handle this is to have a cluster of more than 2 nodes, basically
one large enough that the chance of enough nodes going down at once is low.
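If you do want a single live datanode to be enough, the replication factor
has to be lowered instead, either cluster-wide via dfs.replication in
hdfs-site.xml or per file through the Java API. A rough sketch of the
per-file approach (the path and values are just illustrative, not taken from
your setup):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SetReplicationExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Default replication for newly created files; can also be set
        // cluster-wide in hdfs-site.xml.
        conf.setInt("dfs.replication", 1);

        FileSystem fs = FileSystem.get(conf);
        // Lower the replication of an existing file to 1 so a single
        // live datanode can satisfy the replica count.
        fs.setReplication(new Path("/user/example/data.txt"), (short) 1);
        fs.close();
    }
}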

More here:
http://www.bigdataplanet.info/2013/10/Hadoop-Tutorial-Part-4-Write-Operations-in-HDFS.html

Regards,
Shahab


On Mon, Jul 28, 2014 at 12:44 PM, Satyam Singh <satyam.si...@ericsson.com>
wrote:

> @vikas I initially set it to 2, but after that I took one DN down. So you
> are saying that I should have set the replication factor to 1 from the
> start, even though I had 2 DNs active initially? If so, what is the reason?
>
> On 07/28/2014 10:02 PM, Vikas Srivastava wrote:
>
>> What replication factor have you set for the cluster?
>>
>> It should be 1 in your case.
>>
>> On Jul 28, 2014 9:26 PM, Satyam Singh <satyam.si...@ericsson.com> wrote:
>>
>>> Hello,
>>>
>>>
>>> I have a Hadoop cluster setup with one namenode and two datanodes,
>>> and I continuously write/read/delete through HDFS on the namenode via a
>>> Hadoop client.
>>>
>>> Then I kill one of the datanodes. The other one is still working, but
>>> all write requests to HDFS are failing.
>>>
>>> I want to overcome this scenario because, under live traffic, any
>>> datanode might go down. How do we handle those cases?
>>>
>>> Has anybody faced this issue, or am I doing something wrong in my setup?
>>>
>>> Thanks in advance.
>>>
>>>
>>> Warm Regards,
>>> Satyam
>>>
>>
>
