Re: Hive Error on restart when public IP is changed

2018-02-09 Thread Benoit Perroud
It can be configured with a hostname if your internal DNS resolution works.
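On EC2, the private (internal) DNS name is the stable choice: it is derived from the private IP, which does not change across a stop/start, unlike the public DNS name. A small illustrative sketch (the helper name is mine, and the suffix rule assumes the default VPC naming scheme, where us-east-1 uses ec2.internal and other regions use region.compute.internal):

```python
def ec2_private_dns(private_ip: str, region: str = "us-east-1") -> str:
    """Construct the EC2-internal DNS name for a private IP.

    Assumes the default VPC naming scheme: us-east-1 uses the
    ec2.internal suffix, other regions use <region>.compute.internal.
    """
    host = "ip-" + private_ip.replace(".", "-")
    suffix = "ec2.internal" if region == "us-east-1" else f"{region}.compute.internal"
    return f"{host}.{suffix}"

# The private IP from the exception in this thread:
print(ec2_private_dns("172.31.55.219"))  # ip-172-31-55-219.ec2.internal
```

Because this name only changes if the private IP changes, using it in `hadoop.proxyuser.hive.hosts` avoids the per-restart manual edit.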




> On 09 Feb 2018, at 12:37, Satyanarayana Jampa wrote:
> 
> Yes, changing the public DNS to the local hostname/IP works. I would like to 
> know if this can be configured to the local hostname (FQDN) during 
> installation itself, so that it need not be changed manually on every restart 
> of the AWS server or whenever the public IP changes.
> 
> Thanks,
> Satya.
> From: Benoit Perroud [mailto:ben...@noisette.ch]
> Sent: 09 February 2018 15:32
> To: user@ambari.apache.org
> Subject: Re: Hive Error on restart when public IP is changed
> 
> The ip listed in the exception is the instance private ip.
> 
> I would change
> 
> > hadoop.proxyuser.hive.hosts: ec2-54-197-36-23.compute-1.amazonaws.com
> 
> to
> 
> > hadoop.proxyuser.hive.hosts: 172.31.55.219
> 
> If this still doesn’t work, remove the IP and put * instead.
> 
> A small warning here: I would not open Hive to the whole world and rely on 
> host filtering alone, thinking it’s secure.
> 
> 
> 
> 
> 
> 
> On 09 Feb 2018, at 09:59, Satyanarayana Jampa <sja...@innominds.com> wrote:
> 
> Hi,
> 
> The below error is observed after restarting the single node AWS machine.
> 
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
>  Unauthorized connection for super-user: hive from IP 172.31.55.219
> at org.apache.hadoop.ipc.Client.call(Client.java:1427)
> at org.apache.hadoop.ipc.Client.call(Client.java:1358)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
> at com.sun.proxy.$Proxy15.getFileInfo(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
> 
> Scenario:
> 1.   Install the HDP on a single node AWS box.
> 2.   We can see the below configuration after installation:
> a. Services -> HDFS -> Configs -> Advanced -> Custom core-site
>    i.  hadoop.proxyuser.hive.hosts: ec2-54-197-36-23.compute-1.amazonaws.com
>    ii. hadoop.proxyuser.hive.groups: *
> 3.   Once we restart the AWS machine, the public IP of the machine changes, 
> so the public DNS name that was picked up automatically during installation 
> for the “hadoop.proxyuser.hive.hosts” property becomes invalid, hence the 
> error.
> 
> Can someone please let me know how to overcome this situation?
> 
> Thanks,
> Satya.



signature.asc
Description: Message signed with OpenPGP using GPGMail


RE: Hive Error on restart when public IP is changed

2018-02-09 Thread Satyanarayana Jampa
Yes, changing the public DNS to the local hostname/IP works. I would like to 
know if this can be configured to the local hostname (FQDN) during installation 
itself, so that it need not be changed manually on every restart of the AWS 
server or whenever the public IP changes.

Thanks,
Satya.
From: Benoit Perroud [mailto:ben...@noisette.ch]
Sent: 09 February 2018 15:32
To: user@ambari.apache.org
Subject: Re: Hive Error on restart when public IP is changed

The ip listed in the exception is the instance private ip.

I would change

> hadoop.proxyuser.hive.hosts: ec2-54-197-36-23.compute-1.amazonaws.com

to

> hadoop.proxyuser.hive.hosts: 172.31.55.219

If this still doesn’t work, remove the IP and put * instead.

A small warning here: I would not open Hive to the whole world and rely on 
host filtering alone, thinking it’s secure.






On 09 Feb 2018, at 09:59, Satyanarayana Jampa <sja...@innominds.com> wrote:

Hi,

The below error is observed after restarting the single node AWS machine.

Caused by: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
 Unauthorized connection for super-user: hive from IP 172.31.55.219
at org.apache.hadoop.ipc.Client.call(Client.java:1427)
at org.apache.hadoop.ipc.Client.call(Client.java:1358)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy15.getFileInfo(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)

Scenario:
1.   Install the HDP on a single node AWS box.
2.   We can see the below configuration after installation:
a. Services -> HDFS -> Configs -> Advanced -> Custom core-site
   i.  hadoop.proxyuser.hive.hosts: ec2-54-197-36-23.compute-1.amazonaws.com
   ii. hadoop.proxyuser.hive.groups: *
3.   Once we restart the AWS machine, the public IP of the machine changes, so 
the public DNS name that was picked up automatically during installation for 
the “hadoop.proxyuser.hive.hosts” property becomes invalid, hence the error.

Can someone please let me know how to overcome this situation?

Thanks,
Satya.



Re: Hive Error on restart when public IP is changed

2018-02-09 Thread Benoit Perroud
The ip listed in the exception is the instance private ip.

I would change

> hadoop.proxyuser.hive.hosts: ec2-54-197-36-23.compute-1.amazonaws.com

to

> hadoop.proxyuser.hive.hosts: 172.31.55.219

If this still doesn’t work, remove the IP and put * instead.

A small warning here: I would not open Hive to the whole world and rely on 
host filtering alone, thinking it’s secure.
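In Ambari terms the change above is made under Services -> HDFS -> Configs -> Custom core-site; on disk it corresponds to a core-site.xml fragment like the following sketch, using the private IP quoted in this thread (HDFS needs a restart for the change to take effect):

```xml
<!-- core-site.xml: allow the hive superuser to impersonate users only
     from this host. 172.31.55.219 is the instance private IP from this
     thread; a stable private DNS name also works if internal resolution
     is reliable. -->
<property>
  <name>hadoop.proxyuser.hive.hosts</name>
  <value>172.31.55.219</value>
</property>
<property>
  <name>hadoop.proxyuser.hive.groups</name>
  <value>*</value>
</property>
```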






> On 09 Feb 2018, at 09:59, Satyanarayana Jampa wrote:
> 
> Hi,
> 
> The below error is observed after restarting the single node AWS machine.
> 
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
>  Unauthorized connection for super-user: hive from IP 172.31.55.219
> at org.apache.hadoop.ipc.Client.call(Client.java:1427)
> at org.apache.hadoop.ipc.Client.call(Client.java:1358)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
> at com.sun.proxy.$Proxy15.getFileInfo(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
> 
> Scenario:
> 1.   Install the HDP on a single node AWS box.
> 2.   We can see the below configuration after installation:
> a. Services -> HDFS -> Configs -> Advanced -> Custom core-site
>    i.  hadoop.proxyuser.hive.hosts: ec2-54-197-36-23.compute-1.amazonaws.com
>    ii. hadoop.proxyuser.hive.groups: *
> 3.   Once we restart the AWS machine, the public IP of the machine changes, 
> so the public DNS name that was picked up automatically during installation 
> for the “hadoop.proxyuser.hive.hosts” property becomes invalid, hence the 
> error.
> 
> Can someone please let me know how to overcome this situation?
> 
> Thanks,
> Satya.





Hive Error on restart when public IP is changed

2018-02-09 Thread Satyanarayana Jampa
Hi,

The below error is observed after restarting the single node AWS machine.

Caused by: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
 Unauthorized connection for super-user: hive from IP 172.31.55.219
at org.apache.hadoop.ipc.Client.call(Client.java:1427)
at org.apache.hadoop.ipc.Client.call(Client.java:1358)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy15.getFileInfo(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)

Scenario:

1.   Install the HDP on a single node AWS box.

2.   We can see the below configuration after installation:

a. Services -> HDFS -> Configs -> Advanced -> Custom core-site
   i.  hadoop.proxyuser.hive.hosts: ec2-54-197-36-23.compute-1.amazonaws.com
   ii. hadoop.proxyuser.hive.groups: *

3.   Once we restart the AWS machine, the public IP of the machine changes, so 
the public DNS name that was picked up automatically during installation for 
the "hadoop.proxyuser.hive.hosts" property becomes invalid, hence the error.

Can someone please let me know how to overcome this situation?

Thanks,
Satya.