Lucy, 

I’m sorry, I’m only doing HDFS HA, not federated HDFS.

mn

On Sep 29, 2014, at 9:24 PM, 清如许 <475053...@qq.com> wrote:

> Hi, Matt
> 
> Thank you very much for your response!
> 
> There were some mistakes in my description, as I wrote that mail in a hurry. I 
> put those properties in hdfs-site.xml, not core-site.xml.
> 
> There are four namenodes because I am also using HDFS federation, so there are 
> two nameservices in the property
> <name>dfs.nameservices</name>
> and each nameservice has two namenodes.
> 
> If I configure HA only (a single nameservice), everything is OK, and HAAdmin 
> can determine the namenodes nn1 and nn3.
> 
> But if I configure two nameservices and set namenodes nn1,nn3 for nameservice1 
> and nn2,nn4 for nameservice2, I can start these namenodes successfully, and 
> they are all in standby state at the beginning. But when I try to 
> transition one namenode to active with the command
> hdfs haadmin -transitionToActive nn1
> HAAdmin throws an exception, as it cannot determine any of the four 
> namenodes (nn1,nn2,nn3,nn4).
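
For what it's worth, I believe haadmin needs to be told which nameservice to operate on when more than one is configured; the federation docs mention an -ns flag for this. An untested sketch, using the nameservice and namenode ids from the configuration in this thread:

```shell
# With federation + HA, pass the nameservice id so haadmin can
# resolve the namenode id within the right nameservice.
hdfs haadmin -ns ns1 -transitionToActive nn1
hdfs haadmin -ns ns2 -transitionToActive nn2
```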
> 
> Have you configured HA together with federation before, and do you know what 
> might cause this problem?
> 
> Thanks,
> Lucy
> 
> ------------------ Original ------------------
> From:  "Matt Narrell";<matt.narr...@gmail.com>;
> Send time: Monday, Sep 29, 2014 6:28 AM
> To: "user"<user@hadoop.apache.org>;
> Subject:  Re: Failed to active namenode when config HA
> 
> I’m pretty sure HDFS HA is limited to two name nodes (not four), designated 
> active and standby.  Secondly, I believe these properties should be in 
> hdfs-site.xml NOT core-site.xml.
> 
> Furthermore, I think your HDFS nameservices are misconfigured.  Consider the 
> following:
> 
> <?xml version="1.0"?>
> <configuration>
>   <property>
>     <name>dfs.replication</name>
>     <value>3</value>
>   </property>
>   <property>
>     <name>dfs.namenode.name.dir</name>
>     <value>file:/var/data/hadoop/hdfs/nn</value>
>   </property>
>   <property>
>     <name>dfs.datanode.data.dir</name>
>     <value>file:/var/data/hadoop/hdfs/dn</value>
>   </property>
> 
>     <property>
>       <name>dfs.ha.automatic-failover.enabled</name>
>       <value>true</value>
>     </property>
>     <property>
>       <name>dfs.nameservices</name>
>       <value>hdfs-cluster</value>
>     </property>
> 
>     <property>
>       <name>dfs.ha.namenodes.hdfs-cluster</name>
>       <value>nn1,nn2</value>
>     </property>
>       <property>
>         <name>dfs.namenode.rpc-address.hdfs-cluster.nn1</name>
>         <value>namenode1:8020</value>
>       </property>
>       <property>
>         <name>dfs.namenode.http-address.hdfs-cluster.nn1</name>
>         <value>namenode1:50070</value>
>       </property>
>       <property>
>         <name>dfs.namenode.rpc-address.hdfs-cluster.nn2</name>
>         <value>namenode2:8020</value>
>       </property>
>       <property>
>         <name>dfs.namenode.http-address.hdfs-cluster.nn2</name>
>         <value>namenode2:50070</value>
>       </property>
> 
>     <property>
>       <name>dfs.namenode.shared.edits.dir</name>
>       
> <value>qjournal://journalnode1:8485;journalnode2:8485;journalnode3:8485/hdfs-cluster</value>
>     </property>
> 
>     <property>
>       <name>dfs.client.failover.proxy.provider.hdfs-cluster</name>
>       
> <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
>     </property>
> 
>     <property>
>       <name>dfs.ha.fencing.methods</name>
>       <value>sshfence</value>
>     </property>
>     <property>
>       <name>dfs.ha.fencing.ssh.private-key-files</name>
>       <value>/home/hadoop/.ssh/id_rsa</value>
>     </property>
> </configuration>
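
One companion fragment that often trips people up (a sketch, assuming the nameservice id from the config above): core-site.xml should point fs.defaultFS at the logical nameservice, not at any single namenode host, so clients fail over transparently:

```xml
<!-- core-site.xml: the default filesystem refers to the logical
     nameservice id ("hdfs-cluster" above), not to one namenode. -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://hdfs-cluster</value>
</property>
```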
> 
> mn
> 
> On Sep 28, 2014, at 12:56 PM, 清如许 <475053...@qq.com> wrote:
> 
> > Hi,
> > 
> > I'm new to Hadoop and met some problems when configuring HA.
> > Below are some important configuration entries from core-site.xml
> > 
> >   <property>
> >     <name>dfs.nameservices</name>
> >     <value>ns1,ns2</value>
> >   </property>
> >   <property>
> >     <name>dfs.ha.namenodes.ns1</name>
> >     <value>nn1,nn3</value>
> >   </property>
> >   <property>
> >     <name>dfs.ha.namenodes.ns2</name>
> >     <value>nn2,nn4</value>
> >   </property>
> >   <property>
> >     <name>dfs.namenode.rpc-address.ns1.nn1</name>
> >     <value>namenode1:9000</value>
> >   </property>
> >   <property>
> >     <name>dfs.namenode.rpc-address.ns1.nn3</name>
> >     <value>namenode3:9000</value>
> >   </property>
> >   <property>
> >     <name>dfs.namenode.rpc-address.ns2.nn2</name>
> >     <value>namenode2:9000</value>
> >   </property>
> >   <property>
> >     <name>dfs.namenode.rpc-address.ns2.nn4</name>
> >     <value>namenode4:9000</value>
> >   </property>
> >   <property>
> >     <name>dfs.namenode.shared.edits.dir</name>
> >     
> > <value>qjournal://datanode2:8485;datanode3:8485;datanode4:8485/ns1</value>
> >   </property>
> >   <property>
> >     <name>dfs.client.failover.proxy.provider.ns1</name>
> >     
> > <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
> >   </property>
> >   <property>
> >     <name>dfs.ha.fencing.methods</name>
> >     <value>sshfence</value>
> >   </property>
> >   <property>
> >     <name>dfs.ha.fencing.ssh.private-key-files</name>
> >     <value>/home/hduser/.ssh/id_rsa</value>
> >   </property>
> >   <property>
> >     <name>dfs.ha.fencing.ssh.connect-timeout</name>
> >     <value>30000</value>
> >   </property>
> >   <property>
> >     <name>dfs.journalnode.edits.dir</name>
> >     <value>/home/hduser/mydata/hdfs/journalnode</value>
> >   </property>
> > 
> > (The two nameservices ns1,ns2 are for configuring federation later. In this 
> > step, I only want to launch ns1 on namenode1 and namenode3.)
> > 
> > After configuration, I did the following steps:
> > first, I started the journalnodes on datanode2, datanode3, and datanode4;
> > second, I formatted namenode1 and started the namenode on it;
> > then I ran 'hdfs namenode -bootstrapStandby' on the other namenode and 
> > started the namenode on it.
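
The steps above would look roughly like this (an untested sketch; daemon script names and paths vary between Hadoop releases):

```shell
# On each journalnode host (datanode2, datanode3, datanode4):
hadoop-daemon.sh start journalnode

# On the first namenode (namenode1): format, then start.
hdfs namenode -format
hadoop-daemon.sh start namenode

# On the standby (namenode3): copy the formatted state, then start.
hdfs namenode -bootstrapStandby
hadoop-daemon.sh start namenode
```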
> > 
> > Everything seemed fine, except that no namenode was active, so I tried to 
> > activate one by running 
> > hdfs haadmin -transitionToActive nn1 on namenode1
> > but strangely it says "Illegal argument: Unable to determine the 
> > nameservice id."
> > 
> > Could anyone tell me why it cannot determine nn1 from my configuration?
> > Is there something wrong in my configuration?
> > 
> > Thanks a lot!!!
> > 
> > 
