Hi Jerry,
Refer to the following links:
http://www.michael-noll.com/blog/2011/08/23/performing-an-hdfs-upgrade-of-an-hadoop-cluster/
http://wiki.apache.org/hadoop/Hadoop_Upgrade

Notes:
1. The Hadoop version used in those docs may differ from yours, but they
are good references for understanding the basic flow.
2. I suggest setting up a test cluster that mimics your production
environment, and trying the upgrade there before touching production.
3. Back up your NameNode metadata; it may help you recover.
4. The official rollback functionality does not work in Hadoop 1.0.1, so be
prepared in case the upgrade fails.
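As a rough sketch, the flow in those links boils down to something like the
following (paths and exact commands are assumptions -- check your
hdfs-site.xml for the real dfs.name.dir, and the docs for your versions):

```shell
# 1. Quiesce the cluster and stop all daemons (run as the hadoop user)
stop-all.sh

# 2. Back up the NameNode metadata (fsimage + edits) -- path is an example,
#    use your actual dfs.name.dir from hdfs-site.xml
tar czf namenode-meta-backup.tar.gz /path/to/dfs/name

# 3. Swap in the new Hadoop binaries/configs, then start HDFS in upgrade mode
start-dfs.sh -upgrade

# 4. Watch the upgrade, verify your data, and only then finalize
#    (finalizing is irreversible -- rollback is impossible afterwards)
hadoop dfsadmin -upgradeProgress status
hadoop dfsadmin -finalizeUpgrade
```

Don't finalize until you've confirmed the data is intact, since that is the
point of no return.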


On Thu, Mar 6, 2014 at 12:47 PM, Jerry Zhang <emacs2...@hotmail.com> wrote:

> Hi there
>
> We plan to migrate a 30-node Hadoop 1.0.1 cluster to version 2.3.0.
> We don't have extra machines to set up a separate new cluster, so we hope
> to do an "in-place" migration by replacing the components on the existing
> machines. So the questions are:
>
> 1)      Is it possible to do an "in-place" migration, while keeping all
> data in HDFS safely?
>
> 2)      If it is yes, is there any doc/guidance to do this?
>
> 3)      Is the 2.3.0 MR API binary compatible with that of 1.0.1?
>
> Any information is highly appreciated.
>
> Jerry Zhang
>



-- 
Cheers
-MJ
