Another option is to stop the node's relevant Hadoop services (including
e.g. Spark, Impala, etc. if applicable), move the existing local storage
aside, mount the desired file system, and move the data over. Then just
restart Hadoop on that node. As long as this does not take too long and
nothing forces those blocks to be written in the meantime, you will be fine.
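Roughly, something like the following on each affected node (a rough sketch
only, not a recipe: /data/1/dfs/dn stands in for whatever dfs.datanode.data.dir
points at, /dev/sdX1 and the hdfs:hadoop owner are placeholders, and the exact
daemon commands vary by release and distribution):

  # stop the datanode (plus any Spark/Impala/YARN workers on the host)
  hdfs --daemon stop datanode        # Hadoop 3.x; older: hadoop-daemon.sh stop datanode
  # set the old data aside, mount the real device, copy the blocks back
  mv /data/1/dfs/dn /data/1/dfs/dn.old
  mount /dev/sdX1 /data/1/dfs/dn     # or add the fstab entry and run 'mount -a'
  cp -a /data/1/dfs/dn.old/. /data/1/dfs/dn/
  chown -R hdfs:hadoop /data/1/dfs/dn   # match whatever owns your datanode dirs
  # bring the datanode back before the namenode starts re-replicating its blocks
  hdfs --daemon start datanode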





Daemeon C.M. Reiydelle
USA (+1) 415.501.0198
London (+44) (0) 20 8144 9872


On Thu, Jul 6, 2017 at 9:17 AM, Brian Jeltema <bdjelt...@gmail.com> wrote:

> I recently discovered that I made a mistake setting up some cluster nodes
> and didn’t
> attach storage to some mount points for HDFS. To fix this, I presume I
> should decommission
> the relevant nodes, fix the mounts, then recommission the nodes.
>
> My question is, when the nodes are recommissioned, will the HDFS storage
> automatically be reset to ‘empty’, or do I need to perform some sort of
> explicit
> initialization on those volumes before returning the nodes to active
> status.
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: user-h...@hadoop.apache.org
>
>
