[ https://issues.apache.org/jira/browse/AMBARI-15712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15226086#comment-15226086 ]

Hadoop QA commented on AMBARI-15712:
------------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12797052/AMBARI-15712.patch
  against trunk revision .

    {color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/Ambari-trunk-test-patch/6206//console

This message is automatically generated.
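For anyone hitting the same -1 locally: the usual cause is that the attachment no
longer applies to current trunk. A quick dry-run before re-uploading (a minimal
sketch, assuming a git clone of Ambari with the trunk branch checked out; the
attachment URL is the one from the QA report above):

    # fetch the attachment and dry-run the apply without touching the working tree
    git checkout trunk && git pull
    curl -sSLO http://issues.apache.org/jira/secure/attachment/12797052/AMBARI-15712.patch
    git apply --check -p1 AMBARI-15712.patch || git apply --check -p0 AMBARI-15712.patch

If neither strip level applies cleanly, the patch needs to be regenerated against
current trunk before the QA bot can test it.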

> Flume Handler Start fails while installing without HDFS
> -------------------------------------------------------
>
>                 Key: AMBARI-15712
>                 URL: https://issues.apache.org/jira/browse/AMBARI-15712
>             Project: Ambari
>          Issue Type: Bug
>            Reporter: Andrew Onischuk
>            Assignee: Andrew Onischuk
>             Fix For: 2.2.2
>
>         Attachments: AMBARI-15712.patch
>
>
> STR:  
> Install Ambari  
> Install a cluster with only Flume  
> Flume Handler start fails. This was not seen in previous runs, so it may be
> intermittent (see the sketch after the quoted task output below).
>     
>     
>     
>     {
>       "href" : 
> "http://172.22.74.255:8080/api/v1/clusters/cl1/requests/5/tasks/14";,
>       "Tasks" : {
>         "attempt_cnt" : 1,
>         "cluster_name" : "cl1",
>         "command" : "START",
>         "command_detail" : "FLUME_HANDLER START",
>         "end_time" : 1459753941446,
>         "error_log" : "/var/lib/ambari-agent/data/errors-14.txt",
>         "exit_code" : 1,
>         "host_name" : "os-d7-ngzvlu-ambari-se-serv-2-2.novalocal",
>         "id" : 14,
>         "output_log" : "/var/lib/ambari-agent/data/output-14.txt",
>         "request_id" : 5,
>         "role" : "FLUME_HANDLER",
>         "stage_id" : 0,
>         "start_time" : 1459753934012,
>         "status" : "FAILED",
>         "stderr" : "Traceback (most recent call last):\n  File 
> \"/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-START/scripts/hook.py\",
>  line 39, in <module>\n    BeforeStartHook().execute()\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py\",
>  line 219, in execute\n    method(env)\n  File 
> \"/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-START/scripts/hook.py\",
>  line 36, in hook\n    create_topology_script_and_mapping()\n  File 
> \"/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-START/scripts/rack_awareness.py\",
>  line 43, in create_topology_script_and_mapping\n    
> create_topology_mapping()\n  File 
> \"/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-START/scripts/rack_awareness.py\",
>  line 32, in create_topology_mapping\n    only_if=format(\"test -d 
> {net_topology_script_dir}\"))\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/base.py\", line 
> 154, in __init__\n    self.env.run()\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/environment.py\", 
> line 160, in run\n    self.run_action(resource, action)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/environment.py\", 
> line 124, in run_action\n    provider_action()\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py\",
>  line 108, in action_create\n    self.resource.group, 
> mode=self.resource.mode, cd_access=self.resource.cd_access)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py\",
>  line 40, in _ensure_metadata\n    if user or group:\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/libraries/script/config_dictionary.py\",
>  line 81, in __getattr__\n    raise Fail(\"Configuration parameter '\" + 
> self.name + \"' was not found in configurations 
> dictionary!\")\nresource_management.core.exceptions.Fail: Configuration 
> parameter 'hadoop-env' was not found in configurations dictionary!",
>         "stdout" : "2016-04-04 07:12:19,785 - Group['hadoop'] {}\n2016-04-04 
> 07:12:19,787 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': 
> True, 'groups': ['users']}\n2016-04-04 07:12:19,788 - User['flume'] {'gid': 
> 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}\n2016-04-04 
> 07:12:19,789 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': 
> StaticFile('changeToSecureUid.sh'), 'mode': 0555}\n2016-04-04 07:12:19,791 - 
> Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa 
> /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa']
>  {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}\n2016-04-04 
> 07:12:19,796 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh 
> ambari-qa 
> /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa']
>  due to not_if\n2016-04-04 07:12:19,807 - Execute[('setenforce', '0')] 
> {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep 
> -q Disabled)', 'sudo': True, 'only_if': 'test -f 
> /selinux/enforce'}\n2016-04-04 07:12:19,813 - Skipping Execute[('setenforce', 
> '0')] due to not_if\n2016-04-04 07:12:19,822 - 
> File['/etc/hadoop/conf/topology_mappings.data'] {'owner': [EMPTY], 'content': 
> Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 
> 'group': 'hadoop'}\n2016-04-04 07:12:19,827 - Writing 
> File['/etc/hadoop/conf/topology_mappings.data'] because it doesn't exist",
>         "structured_out" : { }
>       }
>     }
>     
> Artifacts: <http://linux-jenkins.qe.hortonworks.com/home/jenkins/qe-artifacts/os-d7-ngzvlu-ambari-se-serv-2/ambari-serv-1459776340/artifacts/screenshots/com.hw.ambari.ui.tests.heavyweights.TestFlumeOnlyInstall/Test10_installHDP/_4_7_14_36_Checking_smoke_test_for__FLUME_service_failed/>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
