It's allowable to decommission multiple nodes at the same time.
Just write all the hostnames to be decommissioned to the
exclude file and run "bin/hadoop dfsadmin -refreshNodes".
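The steps above look roughly like this (the exclude-file path is hypothetical; use whatever dfs.hosts.exclude points at in your hdfs-site.xml):

```shell
# Append the hostnames to decommission to the NameNode's exclude file.
# /etc/hadoop/conf/excludes is an example path, not a Hadoop default.
echo "datanode01.example.com" >> /etc/hadoop/conf/excludes
echo "datanode02.example.com" >> /etc/hadoop/conf/excludes

# Tell the NameNode to re-read the include/exclude files and begin
# draining blocks off the listed nodes.
bin/hadoop dfsadmin -refreshNodes

# Check progress: the nodes should move from "Decommission in progress"
# to "Decommissioned" in the report once their blocks are re-replicated.
bin/hadoop dfsadmin -report
```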

However, you need to ensure that the decommissioned DataNodes are a
minority of all the DataNodes in the cluster, and that the block
replication factor can still be satisfied after decommissioning.

For example, job submission files default to mapred.submit.replication=10.
So if you have fewer than 10 DataNodes left after decommissioning, those
blocks can never reach 10 replicas and the decommission process will hang.
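One way around this (a sketch; the path is hypothetical) is to lower the replication factor of any over-replicated files before decommissioning, so the remaining nodes can satisfy it:

```shell
# Reduce replication to 3 for files that were written with replication 10.
# -w waits until the replication change actually completes.
# /user/hadoop/.staging is an example path, not a fixed Hadoop location.
bin/hadoop fs -setrep -w 3 -R /user/hadoop/.staging
```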


2013/4/1 varun kumar <varun....@gmail.com>

> How many nodes do you have and replication factor for it.
>
