Hello,
I have a Hadoop cluster set up with one namenode and two datanodes, and I
continuously write/read/delete over HDFS through a Hadoop client talking to
the namenode.
When I kill one of the datanodes, the other one is still up, but after that
every write request fails.
I want writes to keep succeeding as long as at least one datanode is alive.
Can you make sure you still have enough HDFS space once you kill this DN? If
not, HDFS will automatically enter safemode when it detects there is no HDFS
space available. The error messages in the logs should have some hints on this.
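One quick way to check both things at once is the namenode's JMX servlet
(`/jmx` on the namenode web port, e.g. 50070 on Hadoop 1.x/2.x), which exposes
the FSNamesystemState bean. Below is a minimal sketch that summarizes such a
response; the sample JSON and its values are made up for illustration, and a
real cluster would be queried over HTTP instead:

```python
import json

# Hypothetical sample of the namenode's JMX output; on a live cluster you
# would fetch http://<namenode>:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystemState
sample = json.loads("""
{"beans": [{
  "name": "Hadoop:service=NameNode,name=FSNamesystemState",
  "CapacityRemaining": 52428800000,
  "NumLiveDataNodes": 1,
  "NumDeadDataNodes": 1,
  "FSState": "Operational"
}]}
""")

def diagnose(jmx):
    """Summarize free space and datanode liveness from FSNamesystemState."""
    state = jmx["beans"][0]
    return {
        # FSState reports "Operational" normally, something else in safemode
        "safemode": state["FSState"] != "Operational",
        "remaining_gb": state["CapacityRemaining"] / 1024**3,
        "live_datanodes": state["NumLiveDataNodes"],
    }

print(diagnose(sample))
```

If `safemode` comes back true or `remaining_gb` is near zero, that would
confirm the space theory; otherwise the replication factor is the more likely
culprit.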
Cheers.
On 28 Jul 2014, at 16:56, Satyam Singh
Yes, there is plenty of space available at that instant.
I am not sure, but I have read somewhere that the number of live datanodes
must be >= the replication factor set on the namenode at any point in time;
if the live datanodes drop below the replication factor, these write/read
failures occur.
In my case the replication factor is 2 but only one datanode is live.
@vikas I initially set it to 2, but after that I took one DN down. So you are
saying I should set the replication factor to 1 from the start, even though I
have 2 DNs active initially? If so, what is the reason?
On 07/28/2014 10:02 PM, Vikas Srivastava wrote:
What replication factor have you set for your cluster?
The reason is that when you write something to HDFS, it guarantees that the
data will be written to the specified number of replicas. So if your
replication factor is 2 and one of your nodes (out of 2) is down, it cannot
guarantee the write.
The way to handle this is to have a cluster with more datanodes than the
replication factor.
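The guarantee described above can be sketched as a toy model (this is an
illustration of the rule, not HDFS's actual placement code): the namenode must
find one distinct datanode per requested replica, so a write pipeline can only
be built when at least that many datanodes are alive.

```python
def can_allocate_pipeline(live_datanodes: int, replication_factor: int) -> bool:
    """Toy model: a block write needs one distinct datanode per replica,
    so the pipeline can only be built when enough datanodes are alive."""
    return live_datanodes >= replication_factor

# Two DNs live, replication 2: the pipeline can be built.
assert can_allocate_pipeline(2, 2)
# One DN killed, replication still 2: the write cannot be guaranteed.
assert not can_allocate_pipeline(1, 2)
```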
If you have 2 DNs live initially and replication set to 2, that is perfectly
fine, but once you kill one DN there is no place to put the second replica of
new files (or of old files), which is what causes the failures when writing
blocks.
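One way to act on this (assuming you really want to run with a single live
datanode) is to lower the default replication factor in hdfs-site.xml; the
value of 1 below is just an example for this 2-DN setup:

```xml
<!-- hdfs-site.xml: new files will then need only one live datanode -->
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
```

Note this only affects newly written files; files already in HDFS keep the
replication factor they were written with, which `hadoop fs -setrep` can lower
after the fact.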
On Jul 28, 2014 10:15 PM, Satyam Singh satyam.si...@ericsson.com wrote: