[ https://issues.apache.org/jira/browse/AMBARI-9504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14308049#comment-14308049 ]

Hudson commented on AMBARI-9504:
--------------------------------

SUCCESS: Integrated in Ambari-trunk-Commit #1699 (See [https://builds.apache.org/job/Ambari-trunk-Commit/1699/])
AMBARI-9504. Configuration parameter 'dfs.journalnode.kerberos.principal' was not found in configurations dictionary. Kerberized cluster (alejandro) (afernandez: http://git-wip-us.apache.org/repos/asf?p=ambari.git&a=commit&h=f5c7f1eb17da4bfb26798edfa9000eed8434e5c3)
* ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/params.py


> Configuration parameter 'dfs.journalnode.kerberos.principal' was not found in configurations dictionary. Kerberized cluster
> ---------------------------------------------------------------------------------------------------------------------------
>
>                 Key: AMBARI-9504
>                 URL: https://issues.apache.org/jira/browse/AMBARI-9504
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-server
>    Affects Versions: 2.0.0
>            Reporter: Alejandro Fernandez
>            Assignee: Alejandro Fernandez
>             Fix For: 2.0.0
>
>         Attachments: AMBARI-9504.patch
>
>
> The cluster was deployed from a Blueprint.
> HDFS_CLIENT install and DATANODE restart failed on the secured cluster with this error:
> Fail: Configuration parameter 'dfs.journalnode.kerberos.principal' was not found in configurations dictionary!
> {code}
> 2015-02-04 07:16:24,098 - Error while executing command 'restart':
> Traceback (most recent call last):
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 184, in execute
>     method(env)
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 338, in restart
>     self.stop(env)
>   File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py", line 67, in stop
>     import params
>   File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/params.py", line 245, in <module>
>     _jn_principal_name = _jn_principal_name.replace('_HOST', hostname.lower())
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/config_dictionary.py", line 79, in __getattr__
>     raise Fail("Configuration parameter '" + self.name + "' was not found in configurations dictionary!")
> Fail: Configuration parameter 'dfs.journalnode.kerberos.principal' was not found in configurations dictionary!
>     "stdout" : "2015-02-04 07:16:22,978 - u\"Group['hadoop']\" 
> {'ignore_failures': False}
> 2015-02-04 07:16:22,980 - Modifying group hadoop
> 2015-02-04 07:16:23,067 - u\"Group['users']\" {'ignore_failures': False}
> 2015-02-04 07:16:23,067 - Modifying group users
> 2015-02-04 07:16:23,132 - u\"Group['hdfs']\" {'ignore_failures': False}
> 2015-02-04 07:16:23,133 - Modifying group hdfs
> 2015-02-04 07:16:23,221 - u\"Group['knox']\" {'ignore_failures': False}
> 2015-02-04 07:16:23,221 - Modifying group knox
> 2015-02-04 07:16:23,284 - u\"User['hive']\" {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'hadoop']}
> 2015-02-04 07:16:23,285 - Modifying user hive
> 2015-02-04 07:16:23,304 - u\"User['oozie']\" {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'users']}
> 2015-02-04 07:16:23,305 - Modifying user oozie
> 2015-02-04 07:16:23,321 - u\"User['ambari-qa']\" {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'users']}
> 2015-02-04 07:16:23,321 - Modifying user ambari-qa
> 2015-02-04 07:16:23,340 - u\"User['flume']\" {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'hadoop']}
> 2015-02-04 07:16:23,341 - Modifying user flume
> 2015-02-04 07:16:23,363 - u\"User['hdfs']\" {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'hdfs']}
> 2015-02-04 07:16:23,364 - Modifying user hdfs
> 2015-02-04 07:16:23,383 - u\"User['knox']\" {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'hadoop']}
> 2015-02-04 07:16:23,384 - Modifying user knox
> 2015-02-04 07:16:23,396 - u\"User['storm']\" {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'hadoop']}
> 2015-02-04 07:16:23,396 - Modifying user storm
> 2015-02-04 07:16:23,413 - u\"User['mapred']\" {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'hadoop']}
> 2015-02-04 07:16:23,413 - Modifying user mapred
> 2015-02-04 07:16:23,432 - u\"User['hbase']\" {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'hadoop']}
> 2015-02-04 07:16:23,433 - Modifying user hbase
> 2015-02-04 07:16:23,452 - u\"User['tez']\" {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'users']}
> 2015-02-04 07:16:23,454 - Modifying user tez
> 2015-02-04 07:16:23,473 - u\"User['zookeeper']\" {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'hadoop']}
> 2015-02-04 07:16:23,473 - Modifying user zookeeper
> 2015-02-04 07:16:23,491 - u\"User['falcon']\" {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'hadoop']}
> 2015-02-04 07:16:23,492 - Modifying user falcon
> 2015-02-04 07:16:23,511 - u\"User['sqoop']\" {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'hadoop']}
> 2015-02-04 07:16:23,511 - Modifying user sqoop
> 2015-02-04 07:16:23,532 - u\"User['yarn']\" {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'hadoop']}
> 2015-02-04 07:16:23,532 - Modifying user yarn
> 2015-02-04 07:16:23,552 - u\"User['hcat']\" {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'hadoop']}
> 2015-02-04 07:16:23,553 - Modifying user hcat
> 2015-02-04 07:16:23,572 - u\"User['ams']\" {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'hadoop']}
> 2015-02-04 07:16:23,573 - Modifying user ams
> 2015-02-04 07:16:23,586 - 
> u\"File['/var/lib/ambari-agent/data/tmp/changeUid.sh']\" {'content': 
> StaticFile('changeToSecureUid.sh'), 'mode': 0555}
> 2015-02-04 07:16:23,603 - 
> u\"Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh ambari-qa 
> /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa']\"
>  {'not_if': 'test $(id -u ambari-qa) -gt 1000'}
> 2015-02-04 07:16:23,609 - Skipping 
> u\"Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh ambari-qa 
> /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa']\"
>  due to not_if
> 2015-02-04 07:16:23,610 - u\"Directory['/hadoop/hbase']\" {'owner': 'hbase', 
> 'recursive': True, 'mode': 0775, 'cd_access': 'a'}
> 2015-02-04 07:16:23,645 - 
> u\"File['/var/lib/ambari-agent/data/tmp/changeUid.sh']\" {'content': 
> StaticFile('changeToSecureUid.sh'), 'mode': 0555}
> 2015-02-04 07:16:23,663 - 
> u\"Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh hbase 
> /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/hadoop/hbase']\" 
> {'not_if': 'test $(id -u hbase) -gt 1000'}
> 2015-02-04 07:16:23,669 - Skipping 
> u\"Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh hbase 
> /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/hadoop/hbase']\" due to 
> not_if
> 2015-02-04 07:16:23,669 - u\"Directory['/etc/hadoop']\" {'mode': 0755}
> 2015-02-04 07:16:23,670 - u\"Directory['/etc/hadoop/conf.empty']\" {'owner': 
> 'root', 'group': 'hadoop', 'recursive': True}
> 2015-02-04 07:16:23,671 - u\"Link['/etc/hadoop/conf']\" {'not_if': 'ls 
> /etc/hadoop/conf', 'to': '/etc/hadoop/conf.empty'}
> 2015-02-04 07:16:23,676 - Skipping u\"Link['/etc/hadoop/conf']\" due to not_if
> 2015-02-04 07:16:23,692 - u\"File['/etc/hadoop/conf/hadoop-env.sh']\" 
> {'content': InlineTemplate(...), 'owner': 'root', 'group': 'hadoop'}
> 2015-02-04 07:16:23,723 - u\"Execute['('setenforce', '0')']\" {'sudo': True, 
> 'only_if': 'test -f /selinux/enforce'}
> 2015-02-04 07:16:23,747 - Skipping u\"Execute['('setenforce', '0')']\" due to 
> only_if
> 2015-02-04 07:16:23,751 - 
> u\"Directory['/usr/hdp/current/hadoop-client/lib/native/Linux-i386-32']\" 
> {'recursive': True}
> 2015-02-04 07:16:23,752 - 
> u\"Directory['/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64']\" 
> {'recursive': True}
> 2015-02-04 07:16:23,753 - 
> u\"Link['/usr/hdp/current/hadoop-client/lib/native/Linux-i386-32/libsnappy.so']\"
>  {'to': '/usr/hdp/current/hadoop-client/lib/libsnappy.so'}
> 2015-02-04 07:16:23,753 - 
> u\"Link['/usr/hdp/current/hadoop-client/lib/native/Linux-i386-32/libsnappy.so']\"
>  replacing old symlink to /grid/0/hdp/2.2.1.0-2340/hadoop/lib/libsnappy.so
> 2015-02-04 07:16:23,765 - Warning: linking to nonexistent location 
> /usr/hdp/current/hadoop-client/lib/libsnappy.so
> 2015-02-04 07:16:23,765 - Creating symbolic 
> u\"Link['/usr/hdp/current/hadoop-client/lib/native/Linux-i386-32/libsnappy.so']\"
> 2015-02-04 07:16:23,778 - 
> u\"Link['/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64/libsnappy.so']\"
>  {'to': '/usr/hdp/current/hadoop-client/lib64/libsnappy.so'}
> 2015-02-04 07:16:23,780 - 
> u\"Link['/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64/libsnappy.so']\"
>  replacing old symlink to /grid/0/hdp/2.2.1.0-2340/hadoop/lib64/libsnappy.so
> 2015-02-04 07:16:23,792 - Warning: linking to nonexistent location 
> /usr/hdp/current/hadoop-client/lib64/libsnappy.so
> 2015-02-04 07:16:23,793 - Creating symbolic 
> u\"Link['/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64/libsnappy.so']\"
> 2015-02-04 07:16:23,807 - u\"Directory['/var/log/hadoop']\" {'owner': 'root', 
> 'group': 'hadoop', 'mode': 0775, 'recursive': True}
> 2015-02-04 07:16:23,808 - u\"Directory['/var/run/hadoop']\" {'owner': 'root', 
> 'group': 'root', 'recursive': True}
> 2015-02-04 07:16:23,809 - u\"Directory['/tmp/hadoop-hdfs']\" {'owner': 
> 'hdfs', 'recursive': True}
> 2015-02-04 07:16:23,815 - 
> u\"File['/etc/hadoop/conf/commons-logging.properties']\" {'content': 
> Template('commons-logging.properties.j2'), 'owner': 'root'}
> 2015-02-04 07:16:23,833 - u\"File['/etc/hadoop/conf/health_check']\" 
> {'content': Template('health_check-v2.j2'), 'owner': 'root'}
> 2015-02-04 07:16:23,848 - u\"File['/etc/hadoop/conf/log4j.properties']\" 
> {'content': '...', 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
> 2015-02-04 07:16:23,879 - 
> u\"File['/etc/hadoop/conf/hadoop-metrics2.properties']\" {'content': 
> Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs'}
> 2015-02-04 07:16:23,895 - u\"File['/etc/hadoop/conf/task-log4j.properties']\" 
> {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
> 2015-02-04 07:16:24,098 - Error while executing command 'restart':
> Traceback (most recent call last):
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 184, in execute
>     method(env)
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 338, in restart
>     self.stop(env)
>   File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py", line 67, in stop
>     import params
>   File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/params.py", line 245, in <module>
>     _jn_principal_name = _jn_principal_name.replace('_HOST', hostname.lower())
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/config_dictionary.py", line 79, in __getattr__
>     raise Fail("Configuration parameter '" + self.name + "' was not found in configurations dictionary!")
> Fail: Configuration parameter 'dfs.journalnode.kerberos.principal' was not found in configurations dictionary!
> {code}
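> The traceback above shows the failure mechanism: params.py dereferenced 'dfs.journalnode.kerberos.principal' unconditionally, and Ambari's config dictionary raises Fail on a missing key instead of returning None, so module import itself aborts on hosts whose blueprint omits the JournalNode principal. The patch touches only params.py (not shown in this message); as a sketch of the general fix pattern, the lookup can be guarded with a plain dict-style get so the '_HOST' substitution only runs when the key exists. The helper name and the plain-dict stand-in for Ambari's config object are hypothetical, for illustration only:
>
> {code}
> # Hypothetical sketch of the guard pattern (not the actual AMBARI-9504 patch):
> # only substitute '_HOST' when the principal is actually configured, so a
> # missing key yields None instead of aborting the params module import.
>
> def resolve_jn_principal(hdfs_site, hostname):
>     """Return the JournalNode principal with _HOST filled in, or None.
>
>     hdfs_site -- a plain dict standing in for Ambari's hdfs-site config
>     hostname  -- the agent's fully qualified host name
>     """
>     principal = hdfs_site.get('dfs.journalnode.kerberos.principal')
>     if principal is None:
>         return None  # non-HA or non-kerberized layout: nothing to resolve
>     return principal.replace('_HOST', hostname.lower())
> {code}
>
> With this shape, a DataNode-only host simply gets None back, and the rest of params.py can skip JournalNode-specific settings instead of failing the restart.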



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)