ambari UI cannot connect to oracle database due to "cannot find jdbc driver"

2018-05-09 Thread Lian Jiang
Hi,

I am setting up Ranger in HDP 2.6 using pre-existing Oracle databases. In
the Ambari UI, I tested the connection to the Oracle database, but it failed
because Ambari cannot find ojdbc8.jar. The error message asks me to set up
Ambari using:

ambari-server setup --jdbc-db=oracle --jdbc-driver=/usr/share/java/ojdbc8.jar

My Ambari server is currently set up with the default Postgres database, and
I don't want to change that just because of Ranger. How can I make the Ambari
server find ojdbc8.jar so that the Ambari UI can connect to my Oracle DB?
Thanks for any hints.
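
A note on the suggested fix: as far as I know, running that setup command only
registers the Oracle JDBC driver with Ambari so it can be distributed to
cluster hosts; it does not switch Ambari's own backing database away from
Postgres. A minimal sketch, assuming ojdbc8.jar has already been downloaded to
the Ambari server host:

# copy the driver to a stable location on the Ambari server host
cp ojdbc8.jar /usr/share/java/ojdbc8.jar
# register the driver with Ambari; this does not change Ambari's own database
ambari-server setup --jdbc-db=oracle --jdbc-driver=/usr/share/java/ojdbc8.jar

After this, the "Test Connection" button in the Ranger configuration page
should be able to load the driver.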


Re: make ambari create kerberos users in custom format

2018-05-09 Thread Lian Jiang
Thanks, guys. That is very helpful.

The problem was resolved by changing the keyring cache to a file cache. Cheers!


Re: make ambari create kerberos users in custom format

2018-05-09 Thread Robert Levas
Lian…

It appears you have a couple of issues here – neither of which is related to
the Ambari-generated auth-to-local rule.

1) The realm name needs to be in all uppercase characters, so test_kdc.com is
incorrect; it needs to be TEST_KDC.COM. If the KDC is configured to use the
lowercase version, it needs to be changed to use the uppercase version.
Uppercase realm names are technically only a convention, but the underlying
Kerberos libraries expect an all-uppercase realm, and issues are seen when
that is not the case.

2) The Kerberos ticket cache needs to be a file rather than a keyring. This
is a Hadoop limitation: it does not know how to access cached tickets in a
keyring. I am not sure of the details, but I do know that you need to make
sure the ticket cache is a file. This is typically the default for the MIT
Kerberos library; however, it can be set in the krb5.conf file under the
[libdefaults] section using

default_ccache_name = /tmp/krb5cc_%{uid}

or more explicitly

default_ccache_name = FILE:/tmp/krb5cc_%{uid}
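
For reference, a minimal sketch of what that [libdefaults] section might look
like (TEST_KDC.COM is just this thread's example realm; adjust both values to
your environment):

[libdefaults]
  default_realm = TEST_KDC.COM
  # force a file-based ticket cache instead of the kernel keyring
  default_ccache_name = FILE:/tmp/krb5cc_%{uid}

After editing krb5.conf, run kdestroy and then kinit again so a fresh
file-based cache is created.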

After fixing these issues, you should have better luck with a cluster where 
Kerberos is enabled.

Related to this, if you wish to test out the auth-to-local rules on a host
where Hadoop is set up (NameNode, DataNode, etc.), you can execute the
following command:

hadoop org.apache.hadoop.security.HadoopKerberosName <principal>

For example:

hadoop org.apache.hadoop.security.HadoopKerberosName hdfs-spark_cluster@TEST_KDC.COM
Name: joe_u...@example.com to hdfs

For more information on auth-to-local rules, see my article on the Hortonworks 
community site - 
https://community.hortonworks.com/articles/14463/auth-to-local-rules-syntax.html.

I hope this helps…
Rob


From: Lian Jiang 
Reply-To: "user@ambari.apache.org" 
Date: Monday, May 7, 2018 at 7:14 PM
To: "user@ambari.apache.org" 
Subject: make ambari create kerberos users in custom format

Hi,

I am using HDP 2.6 and have enabled Kerberos. The rules generated by Ambari
include:

RULE:[1:$1@$0](hdfs-spark_cluster@test_kdc.com)s/.*/hdfs/

Also, klist shows the hdfs user is mapped correctly to the rule:

[hdfs@test-namenode ~]$ klist
Ticket cache: KEYRING:persistent:1012:1012
Default principal: hdfs-spark_cluster@test_kdc.com

User hdfs-spark_cluster is associated with the hdfs keytab:

[hdfs@test-namenode ~]$ kinit -V -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-spark_cluster
Using existing cache: persistent:1012:1012
Using principal: hdfs-spark_cluster@test_kdc.com
Using keytab: /etc/security/keytabs/hdfs.headless.keytab
Authenticated to Kerberos v5

However, hdfs is NOT associated with this hdfs keytab:

[hdfs@test-namenode ~]$ kinit -V -kt /etc/security/keytabs/hdfs.headless.keytab hdfs
Using new cache: persistent:1012:krb_ccache_V36KQXp
Using principal: hdfs@test_kdc.com
Using keytab: /etc/security/keytabs/hdfs.headless.keytab
kinit: Keytab contains no suitable keys for hdfs@test_kdc.com while getting initial credentials

As you can see, kinit maps hdfs to hdfs@test_kdc.com instead of
hdfs-spark_cluster@test_kdc.com.

I guess this is the reason I got "Failed to find any Kerberos tgt" when doing
"hdfs dfs -ls".

I don't know why Ambari creates Kerberos users in the format
"hdfs-{CLUSTERNAME}@{REALMNAME}" instead of "hdfs@{REALMNAME}".

Should I follow
https://community.hortonworks.com/articles/79574/build-a-cluster-with-custom-principal-names-using.html
to force Ambari to create hdfs@test_kdc.com instead of
hdfs-spark_cluster@test_kdc.com? Or am I missing something else?

Thanks for any help.



Re: make ambari create kerberos users in custom format

2018-05-09 Thread David Quiroga
The formatting of the principal name is just a property, and while it could
be changed, I believe the cluster name is usually added as a principal suffix
in case there are multiple clusters in the same Kerberos realm.

If the principal were only hdfs@domain, then multiple clusters would share
the same KDC entry. Regenerating the keytab in one cluster might cause issues
in the others. It is also a potential security risk, as it would allow
cross-cluster access.

This kinit is correct:
kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-spark_cluster

Using principal: hdfs-spark_cluster@test_kdc.com

When you present that principal, you will be treated as hdfs because of the
rule RULE:[1:$1@$0](hdfs-spark_cluster@test_kdc.com)s/.*/hdfs/
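
As a sketch of why the suffix matters, two hypothetical clusters (cluster_a
and cluster_b are made-up names) sharing the realm TEST_KDC.COM would each
get a distinct principal, with separate rules mapping both to the local hdfs
user:

RULE:[1:$1@$0](hdfs-cluster_a@TEST_KDC.COM)s/.*/hdfs/
RULE:[1:$1@$0](hdfs-cluster_b@TEST_KDC.COM)s/.*/hdfs/

Each cluster then carries its own keytab, so regenerating one cluster's
keytab cannot break, or grant access to, the other.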


Re: metrics can not work as expected

2018-05-09 Thread xiang . dai
I reinstalled and tested, and found that ZooKeeper does indeed fail:

2018-05-09 14:55:09,187 - File['/var/lib/ambari-agent/tmp/zkSmoke.out'] {'action': ['delete']}
2018-05-09 14:55:09,188 - File['/var/lib/ambari-agent/tmp/zkSmoke.sh'] {'content': StaticFile('zkSmoke.sh'), 'mode': 0755}
2018-05-09 14:55:09,189 - Writing File['/var/lib/ambari-agent/tmp/zkSmoke.sh'] because it doesn't exist
2018-05-09 14:55:09,189 - Changing permission for /var/lib/ambari-agent/tmp/zkSmoke.sh from 644 to 755
2018-05-09 14:55:09,189 - Execute['/var/lib/ambari-agent/tmp/zkSmoke.sh /usr/hdp/current/zookeeper-client/bin/zkCli.sh ambari-qa /etc/zookeeper/conf 2181 False kinit no_keytab no_principal /var/lib/ambari-agent/tmp/zkSmoke.out'] {'logoutput': True, 'path': ['/usr/sbin:/sbin:/usr/local/bin:/bin:/usr/bin'], 'tries': 3, 'try_sleep': 5}
zk_node1=dx-app.novalocal
log4j:WARN No appenders could be found for logger (org.apache.zookeeper.ZooKeeper).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" java.lang.IllegalArgumentException: A HostProvider may not be empty!
at org.apache.zookeeper.client.StaticHostProvider.<init>(StaticHostProvider.java:63)
at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:446)
at org.apache.zookeeper.ZooKeeperMain.connectToZK(ZooKeeperMain.java:279)
at org.apache.zookeeper.ZooKeeperMain.<init>(ZooKeeperMain.java:293)
at org.apache.zookeeper.ZooKeeperMain.main(ZooKeeperMain.java:286)
log4j:WARN No appenders could be found for logger (org.apache.zookeeper.ZooKeeper).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" java.lang.IllegalArgumentException: A HostProvider may not be empty!
at org.apache.zookeeper.client.StaticHostProvider.<init>(StaticHostProvider.java:63)
at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:446)
at org.apache.zookeeper.ZooKeeperMain.connectToZK(ZooKeeperMain.java:279)
at org.apache.zookeeper.ZooKeeperMain.<init>(ZooKeeperMain.java:293)
at org.apache.zookeeper.ZooKeeperMain.main(ZooKeeperMain.java:286)
Running test on host dx-app.novalocal
Connecting to dx-app.novalocal:2181
log4j:WARN No appenders could be found for logger (org.apache.zookeeper.ZooKeeper).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" java.lang.IllegalArgumentException: A HostProvider may not be empty!
at org.apache.zookeeper.client.StaticHostProvider.<init>(StaticHostProvider.java:63)
at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:446)
at org.apache.zookeeper.ZooKeeperMain.connectToZK(ZooKeeperMain.java:279)
at org.apache.zookeeper.ZooKeeperMain.<init>(ZooKeeperMain.java:293)
at org.apache.zookeeper.ZooKeeperMain.main(ZooKeeperMain.java:286)

I know little about ZooKeeper. What is wrong with it here?
Maybe it starts correctly but cannot work as expected.
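
For what it's worth, "A HostProvider may not be empty!" is typically thrown
when the ZooKeeper client ends up with no resolvable server addresses, i.e.
the connect string is empty or the hostname does not resolve. A quick sanity
sketch from the failing host (dx-app.novalocal and port 2181 are taken from
the log above):

# does the ZooKeeper hostname resolve on this host?
getent hosts dx-app.novalocal
# a healthy ZooKeeper answers "imok" to the four-letter-word check
echo ruok | nc dx-app.novalocal 2181
# try an explicit client connection
/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server dx-app.novalocal:2181 ls /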

From: "Siddharth Wagle"  
To: "user"  
Sent: Wednesday, May 9, 2018 1:26:39 PM 
Subject: Re: metrics can not work as expected 



Yes, the default that Ambari sets should just work in most cases. I am not
sure about the move part; configs are usually cluster-specific, so it is hard
to know for sure without diving deeper.

The wiki has some useful info that you could take a look at:

https://cwiki.apache.org/confluence/display/AMBARI/Troubleshooting+Guide
https://cwiki.apache.org/confluence/display/AMBARI/Known+Issues
https://cwiki.apache.org/confluence/display/AMBARI/Configuration

- Sid

From: David Quiroga  
Sent: Tuesday, May 8, 2018 9:46 PM 
To: user@ambari.apache.org 
Subject: Re: metrics can not work as expected 
In most cases Ambari Metrics runs its own separate HBase and ZooKeeper
instances. The default ports of the Ambari Metrics ZooKeeper are typically in
the 60,000 range. I would expect the Ambari defaults to do the trick, which I
suspect relates to Sid's question.
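
If you need to confirm which port the AMS-embedded ZooKeeper is actually
listening on, a hedged sketch (61181 is a commonly cited default for the
embedded instance, but verify the clientPort value under Ambari Metrics >
Configs; <collector-host> is a placeholder for your collector hostname):

# a healthy AMS-embedded ZooKeeper answers "imok"
echo ruok | nc <collector-host> 61181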


On Sun, May 6, 2018 at 5:07 AM, <xiang@sky-data.cn> wrote:

I just downloaded them and made my own repo including them.
Then I installed them with the Ambari UI.


From: "Siddharth Wagle" <swa...@hortonworks.com>
To: "user" <user@ambari.apache.org>
Sent: Saturday, May 5, 2018 11:07:12 PM
Subject: Re: metrics can not work as expected

The Collector is not able to reach ZooKeeper. Are you not installing AMS
using Ambari?

- Sid


From: xiang@sky-data.cn <xiang@sky-data.cn>
Sent: Saturday, May 5, 2018 1:27 AM
To: user
Subject: metrics can not work as expected
Hi!

I tested the installation on my VirtualBox VM and it worked well; then I
moved it to a server which runs many services.

During install, it failed at the service status check.

I checked ambari-metrics-collector.log and found the error below:

2018-05-05 08:16:03,536 INFO org.apache.phoenix.metrics.Metrics: Initializing metrics system: phoenix