-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31005/#review72383
-----------------------------------------------------------

Ship it!


Ship It!

- Vitalyi Brodetskyi


On Feb. 13, 2015, 4:49 p.m., Andrew Onischuk wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/31005/
> -----------------------------------------------------------
> 
> (Updated Feb. 13, 2015, 4:49 p.m.)
> 
> 
> Review request for Ambari and Vitalyi Brodetskyi.
> 
> 
> Bugs: AMBARI-9632
>     https://issues.apache.org/jira/browse/AMBARI-9632
> 
> 
> Repository: ambari
> 
> 
> Description
> -------
> 
> Steps to reproduce:
> 1. Create a 3-node --extradisk Ubuntu cluster on GCE.
> 2. Install the Ambari server.
> 3. Install all services except Spark.
> 
> Note that mysql start failures are then observed after installation:
> 
>     
>     
>     
>     2015-02-13 02:09:42,604 - Error while executing command 'start':
>     Traceback (most recent call last):
>       File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 208, in execute
>         method(env)
>       File 
> "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/mysql_server.py",
>  line 49, in start
>         mysql_service(daemon_name=params.daemon_name, action='start')
>       File 
> "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/mysql_service.py",
>  line 42, in mysql_service
>         sudo = True,
>       File 
> "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 
> 148, in __init__
>         self.env.run()
>       File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 152, in run
>         self.run_action(resource, action)
>       File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 118, in run_action
>         provider_action()
>       File 
> "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py",
>  line 276, in action_run
>         raise ex
>     Fail: Execution of 'service mysql start' returned 1. start: Job is 
> already running: mysql
>     
>     
>     
>     
>     2015-02-13 02:09:41,627 - 
> u"Directory['/var/lib/ambari-agent/data/tmp/AMBARI-artifacts/']" 
> {'recursive': True}
>     2015-02-13 02:09:41,628 - 
> u"File['/var/lib/ambari-agent/data/tmp/AMBARI-artifacts//UnlimitedJCEPolicyJDK7.zip']"
>  {'content': 
> DownloadSource('http://richard-ubuntu-1.c.pramod-thangali.internal:8080/resources//UnlimitedJCEPolicyJDK7.zip')}
>     2015-02-13 02:09:41,628 - Not downloading the file from 
> http://richard-ubuntu-1.c.pramod-thangali.internal:8080/resources//UnlimitedJCEPolicyJDK7.zip,
>  because /var/lib/ambari-agent/data/tmp/UnlimitedJCEPolicyJDK7.zip already 
> exists
>     2015-02-13 02:09:41,667 - u"Group['hadoop']" {'ignore_failures': False}
>     2015-02-13 02:09:41,668 - Modifying group hadoop
>     2015-02-13 02:09:41,730 - u"Group['users']" {'ignore_failures': False}
>     2015-02-13 02:09:41,730 - Modifying group users
>     2015-02-13 02:09:41,761 - u"Group['knox']" {'ignore_failures': False}
>     2015-02-13 02:09:41,761 - Modifying group knox
>     2015-02-13 02:09:41,788 - u"User['hive']" {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'hadoop']}
>     2015-02-13 02:09:41,788 - Modifying user hive
>     2015-02-13 02:09:41,800 - u"User['oozie']" {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'users']}
>     2015-02-13 02:09:41,800 - Modifying user oozie
>     2015-02-13 02:09:41,813 - u"User['ambari-qa']" {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'users']}
>     2015-02-13 02:09:41,813 - Modifying user ambari-qa
>     2015-02-13 02:09:41,824 - u"User['flume']" {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'hadoop']}
>     2015-02-13 02:09:41,825 - Modifying user flume
>     2015-02-13 02:09:41,836 - u"User['hdfs']" {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'hadoop']}
>     2015-02-13 02:09:41,837 - Modifying user hdfs
>     2015-02-13 02:09:41,849 - u"User['knox']" {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'hadoop']}
>     2015-02-13 02:09:41,849 - Modifying user knox
>     2015-02-13 02:09:41,861 - u"User['storm']" {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'hadoop']}
>     2015-02-13 02:09:41,861 - Modifying user storm
>     2015-02-13 02:09:41,873 - u"User['mapred']" {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'hadoop']}
>     2015-02-13 02:09:41,874 - Modifying user mapred
>     2015-02-13 02:09:41,885 - u"User['hbase']" {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'hadoop']}
>     2015-02-13 02:09:41,886 - Modifying user hbase
>     2015-02-13 02:09:41,899 - u"User['tez']" {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'users']}
>     2015-02-13 02:09:41,900 - Modifying user tez
>     2015-02-13 02:09:41,911 - u"User['zookeeper']" {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'hadoop']}
>     2015-02-13 02:09:41,912 - Modifying user zookeeper
>     2015-02-13 02:09:41,923 - u"User['kafka']" {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'hadoop']}
>     2015-02-13 02:09:41,923 - Modifying user kafka
>     2015-02-13 02:09:41,935 - u"User['falcon']" {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'hadoop']}
>     2015-02-13 02:09:41,935 - Modifying user falcon
>     2015-02-13 02:09:41,948 - u"User['sqoop']" {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'hadoop']}
>     2015-02-13 02:09:41,949 - Modifying user sqoop
>     2015-02-13 02:09:41,964 - u"User['yarn']" {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'hadoop']}
>     2015-02-13 02:09:41,964 - Modifying user yarn
>     2015-02-13 02:09:41,982 - u"User['hcat']" {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'hadoop']}
>     2015-02-13 02:09:41,983 - Modifying user hcat
>     2015-02-13 02:09:42,004 - u"User['ams']" {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'hadoop']}
>     2015-02-13 02:09:42,004 - Modifying user ams
>     2015-02-13 02:09:42,024 - 
> u"File['/var/lib/ambari-agent/data/tmp/changeUid.sh']" {'content': 
> StaticFile('changeToSecureUid.sh'), 'mode': 0555}
>     2015-02-13 02:09:42,047 - 
> u"Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh ambari-qa 
> /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa']"
>  {'not_if': 'test $(id -u ambari-qa) -gt 1000'}
>     2015-02-13 02:09:42,054 - Skipping 
> u"Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh ambari-qa 
> /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa']"
>  due to not_if
>     2015-02-13 02:09:42,054 - u"Directory['/grid/0/hadoop/hbase']" {'owner': 
> 'hbase', 'recursive': True, 'mode': 0775, 'cd_access': 'a'}
>     2015-02-13 02:09:42,105 - 
> u"File['/var/lib/ambari-agent/data/tmp/changeUid.sh']" {'content': 
> StaticFile('changeToSecureUid.sh'), 'mode': 0555}
>     2015-02-13 02:09:42,116 - 
> u"Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh hbase 
> /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/grid/0/hadoop/hbase']" 
> {'not_if': 'test $(id -u hbase) -gt 1000'}
>     2015-02-13 02:09:42,120 - Skipping 
> u"Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh hbase 
> /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/grid/0/hadoop/hbase']" 
> due to not_if
>     2015-02-13 02:09:42,121 - u"Group['hdfs']" {'ignore_failures': False}
>     2015-02-13 02:09:42,121 - Modifying group hdfs
>     2015-02-13 02:09:42,138 - u"User['hdfs']" {'groups': [u'hadoop', 
> 'hadoop', 'hdfs', u'hdfs']}
>     2015-02-13 02:09:42,138 - Modifying user hdfs
>     2015-02-13 02:09:42,151 - u"Directory['/etc/hadoop']" {'mode': 0755}
>     2015-02-13 02:09:42,151 - u"Directory['/etc/hadoop/conf.empty']" 
> {'owner': 'hdfs', 'group': 'hadoop', 'recursive': True}
>     2015-02-13 02:09:42,152 - u"Link['/etc/hadoop/conf']" {'not_if': 'ls 
> /etc/hadoop/conf', 'to': '/etc/hadoop/conf.empty'}
>     2015-02-13 02:09:42,155 - Skipping u"Link['/etc/hadoop/conf']" due to 
> not_if
>     2015-02-13 02:09:42,165 - u"File['/etc/hadoop/conf/hadoop-env.sh']" 
> {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
>     2015-02-13 02:09:42,186 - u"Execute['('setenforce', '0')']" {'sudo': 
> True, 'only_if': 'test -f /selinux/enforce'}
>     2015-02-13 02:09:42,203 - Skipping u"Execute['('setenforce', '0')']" due 
> to only_if
>     2015-02-13 02:09:42,203 - u"Directory['/var/log/hadoop']" {'owner': 
> 'root', 'mode': 0775, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
>     2015-02-13 02:09:42,230 - u"Directory['/var/run/hadoop']" {'owner': 
> 'root', 'group': 'root', 'recursive': True, 'cd_access': 'a'}
>     2015-02-13 02:09:42,258 - u"Directory['/tmp/hadoop-hdfs']" {'owner': 
> 'hdfs', 'recursive': True, 'cd_access': 'a'}
>     2015-02-13 02:09:42,279 - 
> u"File['/etc/hadoop/conf/commons-logging.properties']" {'content': 
> Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
>     2015-02-13 02:09:42,291 - u"File['/etc/hadoop/conf/health_check']" 
> {'content': Template('health_check-v2.j2'), 'owner': 'hdfs'}
>     2015-02-13 02:09:42,303 - u"File['/etc/hadoop/conf/log4j.properties']" 
> {'content': '...', 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
>     2015-02-13 02:09:42,320 - 
> u"File['/etc/hadoop/conf/hadoop-metrics2.properties']" {'content': 
> Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs'}
>     2015-02-13 02:09:42,330 - 
> u"File['/etc/hadoop/conf/task-log4j.properties']" {'content': 
> StaticFile('task-log4j.properties'), 'mode': 0755}
>     2015-02-13 02:09:42,482 - u"Execute['('service', 'mysql', 'start')']" 
> {'logoutput': True, 'not_if': "pgrep -l '^mysql$'", 'sudo': True}
>     start: Job is already running: mysql
>     2015-02-13 02:09:42,604 - Error while executing command 'start':
>     Traceback (most recent call last):
>       File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 208, in execute
>         method(env)
>       File 
> "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/mysql_server.py",
>  line 49, in start
>         mysql_service(daemon_name=params.daemon_name, action='start')
>       File 
> "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/mysql_service.py",
>  line 42, in mysql_service
>         sudo = True,
>       File 
> "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 
> 148, in __init__
>         self.env.run()
>       File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 152, in run
>         self.run_action(resource, action)
>       File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 118, in run_action
>         provider_action()
>       File 
> "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py",
>  line 276, in action_run
>         raise ex
>     Fail: Execution of 'service mysql start' returned 1. start: Job is 
> already running: mysql
> 
> 
> Diffs
> -----
> 
>   
> ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/mysql_server.py
>  40ddb86 
>   
> ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/mysql_service.py
>  136fe03 
>   
> ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/params.py
>  b10706a 
>   
> ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/status_params.py
>  66de02a 
>   ambari-server/src/test/python/stacks/2.0.6/HIVE/test_mysql_server.py 
> d0d701f 
> 
> Diff: https://reviews.apache.org/r/31005/diff/
> 
> 
> Testing
> -------
> 
> mvn clean test
> 
> 
> Thanks,
> 
> Andrew Onischuk
> 
>
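The log above shows the `Execute['('service', 'mysql', 'start')']` resource running even though mysql was already up: its `not_if` guard (`pgrep -l '^mysql$'`) did not match the running process, so upstart was asked to start the job again and exited non-zero with "start: Job is already running: mysql". One way to make such a start action idempotent is to treat that specific upstart failure as success. The helper below is a hypothetical sketch of that check, not the actual change in this diff:

```python
def mysql_start_succeeded(returncode, output):
    """Decide whether 'service mysql start' effectively succeeded.

    Upstart exits non-zero when the job is already active and prints
    "start: Job is already running: mysql". For an idempotent start
    action, that case should count as success rather than a failure.
    (Hypothetical helper for illustration; not the patch under review.)
    """
    return returncode == 0 or "already running" in output.lower()
```

A service script could run the start command, then pass the exit code and combined stdout/stderr to this check before deciding to raise a `Fail`.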