Hi Arthur,

The hive.metastore.uris property should contain the URI of the remote
metastore, for example thrift://hivemetahost:9083, where hivemetahost is
the host running the Hive metastore. Configuration properties related to
NameNode HA should be part of the "configProps" section, and they should be
the same as those in your hdfs-site.xml. Example configuration needed for
NameNode HA can be found here [1]; a sketch follows below.
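
For illustration, a minimal sketch of what the storage plugin configuration
might look like with the HA properties added. The nameservice name
"mycluster", the hosts, and the port 8020 are placeholders, not values from
your cluster; the actual keys and values must match your hdfs-site.xml:

{
  "type": "hive",
  "enabled": true,
  "configProps": {
    "hive.metastore.uris": "thrift://hivemetahost:9083",
    "hive.metastore.sasl.enabled": "false",
    "fs.defaultFS": "hdfs://mycluster",
    "dfs.nameservices": "mycluster",
    "dfs.ha.namenodes.mycluster": "nn1,nn2",
    "dfs.namenode.rpc-address.mycluster.nn1": "nn1:8020",
    "dfs.namenode.rpc-address.mycluster.nn2": "nn2:8020",
    "dfs.client.failover.proxy.provider.mycluster": "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"
  }
}

With fs.defaultFS pointing at the nameservice rather than a single host, the
HDFS client can fail over between nn1 and nn2 transparently instead of being
pinned to whichever namenode happens to be in standby.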

[1]
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1.1/bk_system-admin-guide/content/ch_hadoop-ha-3-1.html

Thanks
Venki

On Thu, Jul 16, 2015 at 2:18 PM, Sudheesh Katkam <skat...@maprtech.com>
wrote:

> Can you try just “thrift://nn2:9083” (and not include the failover
> namenode) for the “hive.metastore.uris” property?
>
> Thank you,
> Sudheesh
>
> > On Jul 16, 2015, at 1:43 AM, Arthur Chan <arthur.hk.c...@gmail.com>
> wrote:
> >
> > Does anyone have an idea of what might be wrong in my Drill setup?
> >
> > On Tue, Jul 14, 2015 at 4:21 PM, Arthur Chan <arthur.hk.c...@gmail.com>
> > wrote:
> >
> >> Hi,
> >>
> >> I have an HDFS HA setup with two namenodes (nn1 and nn2)
> >>
> >>
> >> When namenode nn1 fails over to nn2 and I query Hive, I get the
> >> following error:
> >>
> >> Query Failed: An Error Occurred
> >>
> >> org.apache.drill.common.exceptions.UserRemoteException: SYSTEM ERROR:
> >> org.apache.hadoop.ipc.RemoteException: Operation category READ is not
> >> supported in state standby
> >>   at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
> >>   at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1719)
> >>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1350)
> >>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:4132)
> >>   at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:838)
> >>   at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:821)
> >>   at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> >>   at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
> >>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
> >>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
> >>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
> >>   at java.security.AccessController.doPrivileged(Native Method)
> >>   at javax.security.auth.Subject.doAs(Subject.java:422)
> >>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> >>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2035)
> >>
> >>
> >> {
> >>   "type": "hive",
> >>   "enabled": true,
> >>   "configProps": {
> >>     "hive.metastore.uris": "thrift://nn1:9083,thrift://nn2:9083",
> >>     "hive.metastore.sasl.enabled": "false"
> >>   }
> >> }
> >>
> >>
> >> Any ideas on how to resolve this issue? Please help!
> >>
>
>
