jason hadoop wrote:
> You can decommission the datanode, and then un-decommission it.
Thanks Jason, I went off and figured out what decommissioning a datanode
means, and it looks like a very neat idea. Decommissioning requires that
the node be listed in the exclude file referenced by the
dfs.hosts.exclude property; the administrator then runs the
"dfsadmin -refreshNodes" command (a sketch follows at the end of this
message).

However, I will need to do some reconfiguring first, because the local
storage has exactly the same path on all my datanodes. Essentially, if I
change dfs.data.dir to take away the path to the local storage, it will
take it away on all the datanodes. So I wonder whether this advice
uncovers a problem with my cluster configuration.

When I first installed Hadoop on the cluster, since most settings looked
the same for all nodes, I decided to use identical local storage paths
everywhere. That made it easy to keep the configuration files in one
directory and create symlinks from all the Hadoop home folders to that
single configuration directory. Is this what people usually do, or have
I gone in a completely wrong direction?
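For the archives, here is my understanding of the decommission steps as
a rough sketch. The dfs.hosts.exclude property and the dfsadmin
subcommands come from the Hadoop docs; the file paths and the hostname
are placeholders I made up:

    <!-- On the namenode, in hadoop-site.xml (the path to the exclude
         file below is just a placeholder): -->
    <property>
      <name>dfs.hosts.exclude</name>
      <value>/usr/local/hadoop/conf/excludes</value>
    </property>

    # Add the datanode's hostname to the exclude file (hostname is a
    # placeholder) and tell the namenode to re-read it:
    echo "datanode5" >> /usr/local/hadoop/conf/excludes
    bin/hadoop dfsadmin -refreshNodes

    # Wait until the node shows up as decommissioned in the report,
    # swap the storage, then remove the hostname from the exclude file
    # and run -refreshNodes again to bring the node back:
    bin/hadoop dfsadmin -report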
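And here is the kind of per-node override I think I would need if I
broke the symlink on just the one node and gave it its own config file.
The path is again made up for illustration; as I understand it,
dfs.data.dir takes a comma-separated list of directories, so the new
disk could even be listed alongside the old one during the transition:

    <!-- hadoop-site.xml on the one datanode being changed; the other
         nodes keep using the shared, symlinked copy: -->
    <property>
      <name>dfs.data.dir</name>
      <value>/mnt/newdisk/hdfs/data</value>
    </property>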