Hello,
Is it possible to avoid the block re-replication caused by datanode decommissioning?
I want to stop datanodes without moving or copying data blocks, even if some
blocks are then left with a smaller replication factor than configured.
Thank you!!!
I have a running Hadoop/HBase cluster.
When I want to change Hadoop parameters without stopping the cluster, can I
use the org.apache.hadoop.conf.Configuration API?
I wrote the following Java source, but it didn't do anything.
-
import org.apache.hadoop.conf.Configuration;
import o
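
The snippet above is cut off, so here is a minimal sketch of the kind of code I mean,
assuming a client-side Configuration object; the class name and the dfs.replication
value are only illustrative:

import org.apache.hadoop.conf.Configuration;

public class RuntimeConfChange {
    public static void main(String[] args) {
        // Loads core-site.xml / hdfs-site.xml found on the classpath.
        Configuration conf = new Configuration();

        // Example property change; "dfs.replication" is only an illustration.
        conf.set("dfs.replication", "2");

        // Only this in-memory Configuration object is affected; daemons that
        // are already running keep the configuration they were started with.
        System.out.println("dfs.replication = " + conf.get("dfs.replication"));
    }
}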
Konstantin is right.
Anyway, did you add the namenode address to the "masters" file under the conf directory?
Is it possible to add more data directories at runtime by changing the
dfs.data.dir configuration property?
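
For reference, dfs.data.dir takes a comma-separated list of local directories in
hdfs-site.xml; the paths below are only placeholders (whether a running datanode
picks up such a change without a restart is exactly the open question here):

<property>
  <name>dfs.data.dir</name>
  <value>/data/1/dfs/dn,/data/2/dfs/dn</value>
</property>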
Regards,
Lee, Jin Yeon