[ 
https://issues.apache.org/jira/browse/AMBARI-10493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14519072#comment-14519072
 ] 

Hari Sekhon commented on AMBARI-10493:
--------------------------------------

An existing cluster continues to function, but adding services is problematic: 
because Ambari doesn't recognize the cluster as kerberized, it doesn't 
configure the Kerberos principal or keytab (leaving both as 'none'), yet it 
still tries to kinit during service deployment, which obviously fails:
{code}stderr: 
2015-04-29 10:52:56,867 - Error while executing command 'start':
Traceback (most recent call last):
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
 line 214, in execute
    method(env)
  File 
"/var/lib/ambari-agent/cache/common-services/SPARK/1.2.0.2.2/package/scripts/job_history_server.py",
 line 73, in start
    Execute(spark_kinit_cmd, user=params.spark_user)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
line 148, in __init__
    self.env.run()
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
line 152, in run
    self.run_action(resource, action)
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
line 118, in run_action
    provider_action()
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py",
 line 274, in action_run
    raise ex
Fail: Execution of '/usr/bin/kinit -kt none none; ' returned 1. kinit: Client 
'none@LOCALDOMAIN' not found in Kerberos database while getting initial 
credentials
 stdout:
2015-04-29 10:52:35,410 - u"Group['hadoop']" {'ignore_failures': False}
2015-04-29 10:52:35,411 - Modifying group hadoop
2015-04-29 10:52:35,475 - u"Group['users']" {'ignore_failures': False}
2015-04-29 10:52:35,477 - Modifying group users
2015-04-29 10:52:35,524 - u"Group['spark']" {'ignore_failures': False}
2015-04-29 10:52:35,526 - Modifying group spark
2015-04-29 10:52:35,575 - u"User['hive']" {'gid': 'hadoop', 'ignore_failures': 
False, 'groups': [u'hadoop']}
2015-04-29 10:52:35,575 - Modifying user hive
2015-04-29 10:52:35,627 - u"User['oozie']" {'gid': 'hadoop', 'ignore_failures': 
False, 'groups': [u'users']}
2015-04-29 10:52:35,631 - Modifying user oozie
2015-04-29 10:52:35,680 - u"User['ambari-qa']" {'gid': 'hadoop', 
'ignore_failures': False, 'groups': [u'users']}
2015-04-29 10:52:35,680 - Modifying user ambari-qa
2015-04-29 10:52:35,727 - u"User['hdfs']" {'gid': 'hadoop', 'ignore_failures': 
False, 'groups': [u'hadoop']}
2015-04-29 10:52:35,727 - Modifying user hdfs
2015-04-29 10:52:35,774 - u"User['spark']" {'gid': 'hadoop', 'ignore_failures': 
False, 'groups': [u'hadoop']}
2015-04-29 10:52:35,777 - Modifying user spark
2015-04-29 10:52:35,826 - u"User['mapred']" {'gid': 'hadoop', 
'ignore_failures': False, 'groups': [u'hadoop']}
2015-04-29 10:52:35,828 - Modifying user mapred
2015-04-29 10:52:35,877 - u"User['tez']" {'gid': 'hadoop', 'ignore_failures': 
False, 'groups': [u'users']}
2015-04-29 10:52:35,880 - Modifying user tez
2015-04-29 10:52:35,929 - u"User['zookeeper']" {'gid': 'hadoop', 
'ignore_failures': False, 'groups': [u'hadoop']}
2015-04-29 10:52:35,932 - Modifying user zookeeper
2015-04-29 10:52:35,983 - u"User['kafka']" {'gid': 'hadoop', 'ignore_failures': 
False, 'groups': [u'hadoop']}
2015-04-29 10:52:35,985 - Modifying user kafka
2015-04-29 10:52:36,034 - u"User['sqoop']" {'gid': 'hadoop', 'ignore_failures': 
False, 'groups': [u'hadoop']}
2015-04-29 10:52:36,036 - Modifying user sqoop
2015-04-29 10:52:36,085 - u"User['yarn']" {'gid': 'hadoop', 'ignore_failures': 
False, 'groups': [u'hadoop']}
2015-04-29 10:52:36,087 - Modifying user yarn
2015-04-29 10:52:36,136 - u"User['hcat']" {'gid': 'hadoop', 'ignore_failures': 
False, 'groups': [u'hadoop']}
2015-04-29 10:52:36,138 - Modifying user hcat
2015-04-29 10:52:36,187 - u"User['ams']" {'gid': 'hadoop', 'ignore_failures': 
False, 'groups': [u'hadoop']}
2015-04-29 10:52:36,190 - Modifying user ams
2015-04-29 10:52:36,243 - 
u"File['/var/lib/ambari-agent/data/tmp/changeUid.sh']" {'content': 
StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2015-04-29 10:52:36,568 - 
u"Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh ambari-qa 
/tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa']"
 {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2015-04-29 10:52:36,618 - Skipping 
u"Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh ambari-qa 
/tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa']"
 due to not_if
2015-04-29 10:52:36,619 - u"Group['hdfs']" {'ignore_failures': False}
2015-04-29 10:52:36,619 - Modifying group hdfs
2015-04-29 10:52:36,669 - u"User['hdfs']" {'ignore_failures': False, 'groups': 
[u'hadoop', 'access_credit_admin', 'access_rmtradestats', 'access_intelhedger', 
'access_credit_ets', 'access_blackbird', 'access_client1st', 'access_tradeweb', 
'access_onetick', 'access_dexo', 'access_news', 'access_ramp', 'data-admins', 
'fx_etrading', 'access_cdp', 'access_mrx', 'access_cb', 'hadoop', 'first', 
'cia', 'mis', 'hadoop', 'hdfs', u'hdfs']}
2015-04-29 10:52:36,670 - Modifying user hdfs
2015-04-29 10:52:36,763 - u"Directory['/etc/hadoop']" {'mode': 0755}
2015-04-29 10:52:36,930 - u"Directory['/etc/hadoop/conf.empty']" {'owner': 
'root', 'group': 'hadoop', 'recursive': True}
2015-04-29 10:52:37,107 - u"Link['/etc/hadoop/conf']" {'not_if': 'ls 
/etc/hadoop/conf', 'to': '/etc/hadoop/conf.empty'}
2015-04-29 10:52:37,160 - Skipping u"Link['/etc/hadoop/conf']" due to not_if
2015-04-29 10:52:37,188 - u"File['/etc/hadoop/conf/hadoop-env.sh']" {'content': 
InlineTemplate(...), 'owner': 'root', 'group': 'hadoop'}
2015-04-29 10:52:37,473 - u"Execute['('setenforce', '0')']" {'sudo': True, 
'only_if': 'test -f /selinux/enforce'}
2015-04-29 10:52:37,552 - Skipping u"Execute['('setenforce', '0')']" due to 
only_if
2015-04-29 10:52:37,553 - u"Directory['/var/log/hadoop']" {'owner': 'root', 
'mode': 0775, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2015-04-29 10:52:38,021 - u"Directory['/var/run/hadoop']" {'owner': 'root', 
'group': 'root', 'recursive': True, 'cd_access': 'a'}
2015-04-29 10:52:38,509 - u"Directory['/tmp/hadoop-hdfs']" {'owner': 'hdfs', 
'recursive': True, 'cd_access': 'a'}
2015-04-29 10:52:39,144 - 
u"File['/etc/hadoop/conf/commons-logging.properties']" {'content': 
Template('commons-logging.properties.j2'), 'owner': 'root'}
2015-04-29 10:52:39,599 - u"File['/etc/hadoop/conf/health_check']" {'content': 
Template('health_check-v2.j2'), 'owner': 'root'}
2015-04-29 10:52:39,957 - u"File['/etc/hadoop/conf/log4j.properties']" 
{'content': '...', 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2015-04-29 10:52:40,264 - 
u"File['/etc/hadoop/conf/hadoop-metrics2.properties']" {'content': 
Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs'}
2015-04-29 10:52:40,714 - u"File['/etc/hadoop/conf/task-log4j.properties']" 
{'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2015-04-29 10:52:41,637 - call['hdp-select status hadoop-client'] {'timeout': 
20}
2015-04-29 10:52:41,744 - u"Directory['/var/run/spark']" {'owner': 'spark', 
'group': 'hadoop', 'recursive': True}
2015-04-29 10:52:42,021 - u"Directory['/var/log/spark']" {'owner': 'spark', 
'group': 'hadoop', 'recursive': True}
2015-04-29 10:52:42,291 - u"HdfsDirectory['/user/spark']" {'security_enabled': 
True, 'keytab': '/etc/security/keytabs/hdfs.headless.keytab', 'conf_dir': 
'/etc/hadoop/conf', 'hdfs_user': 'hdfs', 'kinit_path_local': '/usr/bin/kinit', 
'mode': 0775, 'owner': 'spark', 'bin_dir': 
'/usr/hdp/current/hadoop-client/bin', 'action': ['create']}
2015-04-29 10:52:42,298 - u"Execute['/usr/bin/kinit -kt 
/etc/security/keytabs/hdfs.headless.keytab hdfs']" {'user': 'hdfs'}
2015-04-29 10:52:42,555 - u"Execute['hadoop --config /etc/hadoop/conf fs -mkdir 
-p /user/spark && hadoop --config /etc/hadoop/conf fs -chmod  775 /user/spark 
&& hadoop --config /etc/hadoop/conf fs -chown  spark /user/spark']" {'not_if': 
"ambari-sudo.sh su hdfs -l -s /bin/bash -c 'hadoop --config /etc/hadoop/conf fs 
-ls /user/spark'", 'user': 'hdfs', 'path': 
['/usr/hdp/current/hadoop-client/bin']}
2015-04-29 10:52:54,167 - u"File['/etc/spark/conf/spark-env.sh']" {'content': 
InlineTemplate(...), 'owner': 'spark', 'group': 'spark'}
2015-04-29 10:52:54,515 - Writing u"File['/etc/spark/conf/spark-env.sh']" 
because contents don't match
2015-04-29 10:52:54,793 - u"File['/etc/spark/conf/log4j.properties']" 
{'content': '...', 'owner': 'spark', 'group': 'spark'}
2015-04-29 10:52:55,249 - u"File['/etc/spark/conf/metrics.properties']" 
{'content': InlineTemplate(...), 'owner': 'spark', 'group': 'spark'}
2015-04-29 10:52:55,575 - u"File['/etc/spark/conf/java-opts']" {'content': '  
-Dhdp.version=2.2.4.2-2', 'owner': 'spark', 'group': 'spark'}
2015-04-29 10:52:55,957 - u"XmlConfig['hive-site.xml']" {'owner': 'spark', 
'group': 'spark', 'mode': 0644, 'conf_dir': '/etc/spark/conf', 
'configurations': ...}
2015-04-29 10:52:55,967 - Generating config: /etc/spark/conf/hive-site.xml
2015-04-29 10:52:55,967 - u"File['/etc/spark/conf/hive-site.xml']" {'owner': 
'spark', 'content': InlineTemplate(...), 'group': 'spark', 'mode': 0644, 
'encoding': 'UTF-8'}
2015-04-29 10:52:56,355 - Writing u"File['/etc/spark/conf/hive-site.xml']" 
because contents don't match
2015-04-29 10:52:56,634 - u"Execute['/usr/bin/kinit -kt none none; ']" {'user': 
'spark'}
2015-04-29 10:52:56,867 - Error while executing command 'start':
Traceback (most recent call last):
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
 line 214, in execute
    method(env)
  File 
"/var/lib/ambari-agent/cache/common-services/SPARK/1.2.0.2.2/package/scripts/job_history_server.py",
 line 73, in start
    Execute(spark_kinit_cmd, user=params.spark_user)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
line 148, in __init__
    self.env.run()
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
line 152, in run
    self.run_action(resource, action)
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
line 118, in run_action
    provider_action()
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py",
 line 274, in action_run
    raise ex
Fail: Execution of '/usr/bin/kinit -kt none none; ' returned 1. kinit: Client 
'none@LOCALDOMAIN' not found in Kerberos database while getting initial 
credentials
2015-04-29 10:52:56,960 - Command: /usr/bin/hdp-select status 
spark-historyserver > /tmp/tmpbOQTvL
Output: spark-historyserver - 2.2.4.2-2{code}
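
The root cause is visible in the failed command: both the keytab and the principal were substituted as the literal string 'none'. A minimal sketch of the kind of guard that would surface this as a clear configuration error instead of a cryptic kinit failure (the helper name and check are my own, not Ambari's actual code; job_history_server.py builds spark_kinit_cmd from its params module):

```python
# Hypothetical sketch: validate the Kerberos parameters before building the
# kinit command, instead of blindly substituting placeholder values.
def build_spark_kinit_cmd(kinit_path, keytab, principal):
    # Treat missing or placeholder values as a configuration error so the
    # failure is reported clearly, rather than as a kinit lookup error.
    for name, value in (("keytab", keytab), ("principal", principal)):
        if not value or value == "none":
            raise ValueError(
                "Kerberos %s is not configured (got %r); "
                "check the cluster's security settings" % (name, value))
    return "%s -kt %s %s; " % (kinit_path, keytab, principal)
```

With the broken configuration shown in the log above, this would fail fast with an explanatory message rather than executing '/usr/bin/kinit -kt none none'.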

> Ambari 2.0 doesn't recognize Kerberos on existing cluster after upgrade
> -----------------------------------------------------------------------
>
>                 Key: AMBARI-10493
>                 URL: https://issues.apache.org/jira/browse/AMBARI-10493
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-server, security
>    Affects Versions: 2.0.0
>         Environment: HDP 2.2.0
>            Reporter: Hari Sekhon
>            Priority: Critical
>
> After upgrading to Ambari 2.0 (from 1.7), Ambari wants to manage Kerberos 
> but doesn't recognize the cluster as already kerberized, nor does it appear 
> to be able to simply use the existing keytabs as we have historically done. 
> Instead it wants to redeploy them from an MIT KDC as part of the 
> enable-Kerberos process, which would obviously mess up my already deployed 
> kerberized cluster, which runs off FreeIPA (each IPA server includes an MIT 
> KDC, but managing it via the kadmin interface isn't supported).
> There doesn't seem to be an obvious way to get Ambari to re-enable or 
> recognize that Kerberos is deployed and the services are kerberized. The 
> current configurations still appear intact, with the Kerberos settings in 
> place, but Ambari does not recognize that Kerberos is deployed, and I'm 
> concerned this will eventually mess up my existing cluster or deploy new 
> services without Kerberos.
> Hari Sekhon
> http://www.linkedin.com/in/harisekhon



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)