[ 
https://issues.apache.org/jira/browse/HDFS-5553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fengdong Yu updated HDFS-5553:
------------------------------

    Description: 
As HDFS-5550 describes, the journal nodes don't upgrade, so I changed their 
VERSION files manually to match the NN's VERSION.
Then I ran the upgrade and got this exception.

My steps were as follows:
It was a fresh cluster running hadoop-2.0.1 before the upgrade.

0) Install the hadoop-2.2.0 package on all nodes.
1) stop-dfs.sh on the active NN.
2) Disable HA in core-site.xml and hdfs-site.xml on the active NN and SNN.
3) start-dfs.sh -upgrade -clusterId test-cluster on the active NN (only one NN now).
4) stop-dfs.sh after the active NN has started successfully.
5) Re-enable HA in core-site.xml and hdfs-site.xml on the active NN and SNN.
6) Change all journal nodes' VERSION files manually to match the NN's VERSION.
7) rm -f 'dfs.journalnode.edits.dir'/test-cluster/current/* (keeping only VERSION here).
8) Delete all data under 'dfs.namenode.name.dir' on the SNN.
9) From the active NN, scp -r 'dfs.namenode.name.dir' to the SNN.
10) start-dfs.sh
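
The manual VERSION sync in steps 6)-7) can be sketched as below. Everything here is a mock stand-in (the directory layout, hostless local paths, and field values are assumptions, not the real cluster's files, which live under 'dfs.namenode.name.dir' and 'dfs.journalnode.edits.dir'); it only illustrates carrying the NN's post-upgrade layoutVersion and cTime into a journal node's VERSION and then clearing the stale edit segments while keeping VERSION:

```shell
set -e
NN_VER=./nn/current/VERSION            # stands in for dfs.namenode.name.dir/current/VERSION
JN_CUR=./jn/test-cluster/current       # stands in for dfs.journalnode.edits.dir/test-cluster/current

# --- mock data so the sketch is runnable as-is (assumed values) ---
mkdir -p ./nn/current "$JN_CUR"
printf 'namespaceID=1001\nclusterID=test-cluster\ncTime=1385000000000\nlayoutVersion=-47\n' > "$NN_VER"
printf 'namespaceID=1001\nclusterID=test-cluster\ncTime=0\nlayoutVersion=-40\n' > "$JN_CUR/VERSION"
touch "$JN_CUR/edits_0000001-0000010"  # stale pre-upgrade edit segment

# Step 6): carry the NN's post-upgrade fields into the JN's VERSION.
for field in layoutVersion cTime; do
  val=$(grep "^${field}=" "$NN_VER")
  sed -i "s/^${field}=.*/${val}/" "$JN_CUR/VERSION"
done

# Step 7): remove everything under current/ except the adjusted VERSION.
find "$JN_CUR" -type f ! -name VERSION -delete
```

On a real cluster this would have to run on (or against) every journal node, and only the fields that differ after the upgrade should be touched.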






  was:
As HDFS-5550 describes, the journal nodes don't upgrade, so I changed their 
VERSION files manually to match the NN's VERSION.
Then I ran the upgrade and got this exception.

My steps were as follows:
It was a fresh cluster running hadoop-2.0.1 before the upgrade.

0) Install the hadoop-2.2.0 package on all nodes.
1) stop-dfs.sh on the active NN.
2) Disable HA in core-site.xml and hdfs-site.xml on the active NN and SNN.
3) start-dfs.sh -upgrade -clusterId test-cluster on the active NN (only one NN now).
4) stop-dfs.sh after the active NN has started successfully.
5) Change all journal nodes' VERSION files manually to match the NN's VERSION.
6) rm -f 'dfs.journalnode.edits.dir'/test-cluster/current/* (keeping only VERSION here).
7) Delete all data under 'dfs.namenode.name.dir' on the SNN.
8) From the active NN, scp -r 'dfs.namenode.name.dir' to the SNN.
9) start-dfs.sh







> SNN crashed because edit log has gap after upgrade
> --------------------------------------------------
>
>                 Key: HDFS-5553
>                 URL: https://issues.apache.org/jira/browse/HDFS-5553
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: ha, hdfs-client
>    Affects Versions: 3.0.0, 2.2.0
>            Reporter: Fengdong Yu
>            Priority: Blocker
>



--
This message was sent by Atlassian JIRA
(v6.1#6144)
