[jira] [Commented] (AMBARI-15389) Intermittent YARN service check failures during and post EU

2016-03-31 Thread Hudson (JIRA)

[ https://issues.apache.org/jira/browse/AMBARI-15389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15220188#comment-15220188 ]

Hudson commented on AMBARI-15389:
---------------------------------

SUCCESS: Integrated in Ambari-trunk-Commit #4572 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/4572/])
AMBARI-15389. Intermittent YARN service check failures during and post (hiveww: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=1fb2aab41c35cbe85fbb9545505872199fef8822])
* ambari-web/app/utils/configs/mount_points_based_initializer_mixin.js


> Intermittent YARN service check failures during and post EU
> ------------------------------------------------------------
>
> Key: AMBARI-15389
> URL: https://issues.apache.org/jira/browse/AMBARI-15389
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.2.2
>Reporter: Dmitry Lysnichenko
>Assignee: Antonenko Alexander
> Fix For: 2.2.2
>
> Attachments: AMBARI-15389.patch, AMBARI-15389.patch, 
> AMBARI-15389_2.2.patch
>
>
> Build # - Ambari 2.2.1.1 - #63
> Observed this issue in a couple of recent EU runs where the YARN service 
> check reports failure:
> a. In one test, the EU ran from HDP 2.3.4.0 to 2.4.0.0 and the YARN service 
> check reported a failure during the EU itself; a retry of the operation led 
> to the service check succeeding.
> b. In another test, the YARN service check reported a failure when run after 
> the EU; when I ran it again, it succeeded.
> It looks like some corner-case condition causes this issue to be hit (a 
> retry-based mitigation is sketched after the log below).
> {code}
> stderr:   /var/lib/ambari-agent/data/errors-822.txt
> Traceback (most recent call last):
> File 
> "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py",
>  line 142, in <module>
> ServiceCheck().execute()
> File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 219, in execute
> method(env)
> File 
> "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py",
>  line 104, in service_check
> user=params.smokeuser,
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 70, in inner
> result = function(command, **kwargs)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 92, in checked_call
> tries=tries, try_sleep=try_sleep)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 140, in _call_wrapper
> result = _call(command, **kwargs_copy)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 291, in _call
> raise Fail(err_msg)
> resource_management.core.exceptions.Fail: Execution of '/usr/bin/kinit -kt 
> /etc/security/keytabs/smokeuser.headless.keytab ambari...@example.com; yarn 
> org.apache.hadoop.yarn.applications.distributedshell.Client -shell_command ls 
> -num_containers 1 -jar 
> /usr/hdp/current/hadoop-yarn-client/hadoop-yarn-applications-distributedshell.jar'
>  returned 2.  Hortonworks #
> This is MOTD message, added for testing in qe infra
> 16/03/03 02:33:51 INFO impl.TimelineClientImpl: Timeline service address: 
> http://host:8188/ws/v1/timeline/
> 16/03/03 02:33:51 INFO distributedshell.Client: Initializing Client
> 16/03/03 02:33:51 INFO distributedshell.Client: Running Client
> 16/03/03 02:33:51 INFO client.RMProxy: Connecting to ResourceManager at 
> host-9-5.test/127.0.0.254:8050
> 16/03/03 02:33:53 INFO distributedshell.Client: Got Cluster metric info from 
> ASM, numNodeManagers=3
> 16/03/03 02:33:53 INFO distributedshell.Client: Got Cluster node info from ASM
> 16/03/03 02:33:53 INFO distributedshell.Client: Got node report from ASM for, 
> nodeId=host:25454, nodeAddresshost:8042, nodeRackName/default-rack, 
> nodeNumContainers1
> 16/03/03 02:33:53 INFO distributedshell.Client: Got node report from ASM for, 
> nodeId=host-9-5.test:25454, nodeAddresshost-9-5.test:8042, 
> nodeRackName/default-rack, nodeNumContainers0
> 16/03/03 02:33:53 INFO distributedshell.Client: Got node report from ASM for, 
> nodeId=host-9-1.test:25454, nodeAddresshost-9-1.test:8042, 
> nodeRackName/default-rack, nodeNumContainers0
> 16/03/03 02:33:53 INFO distributedshell.Client: Queue info, 
> queueName=default, queueCurrentCapacity=0.08336, queueMaxCapacity=1.0, 
> queueApplicationCount=0, queueChildQueueCount=0
> 16/03/03 02:33:53 INFO distributedshell.Client: User ACL Info for Queue, 
> queueName=root, userAcl=SUBMIT_APPLICATIONS
> 16/03/03 02:33:53 INFO distributedshell.Client: User ACL Info for Queue, 
> queueName=default, userAcl=SUBMIT_APPLICATIONS
> 16/03/03 02:33:53 INFO distributedshell.Client: Max mem capabililty of 
> resources in this cluster 10240
> 16/03/03 02:33:53 INFO distributedshell.Client: Max virtual cores capabililty 
> of resources in this cluster 1
> 16/03/03 
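
The failing command quoted above is the standard YARN smoke test: kinit as the smoke user, then submit a one-container distributed-shell application. The fix commits on this issue touch service_check.py, and the traceback shows that resource_management's checked_call already accepts tries and try_sleep. Below is a minimal sketch of a retry-based version of the check, assuming a hypothetical smokeuser_principal parameter; it illustrates the idea only and is not the actual AMBARI-15389 patch.

{code}
# Illustrative sketch only -- not the actual AMBARI-15389 patch.
# params.smokeuser appears in the traceback above; smokeuser_principal is a
# hypothetical attribute standing in for the real Kerberos principal.
from resource_management.core.resources.system import Execute

def yarn_smoke_check(params):
    # kinit as the smoke user, then submit a one-container distributed-shell
    # application, mirroring the failing command from the log above.
    cmd = ("/usr/bin/kinit -kt /etc/security/keytabs/smokeuser.headless.keytab "
           "%s; "
           "yarn org.apache.hadoop.yarn.applications.distributedshell.Client "
           "-shell_command ls -num_containers 1 "
           "-jar /usr/hdp/current/hadoop-yarn-client/"
           "hadoop-yarn-applications-distributedshell.jar"
           % params.smokeuser_principal)
    # Retry so a transient non-zero exit (the intermittent 'returned 2' above)
    # does not immediately fail the check; tries/try_sleep are the kwargs
    # visible in the traceback's checked_call.
    Execute(cmd,
            user=params.smokeuser,
            tries=3,
            try_sleep=5,
            logoutput=True)
{code}

Retrying inside the agent script would match the observed behavior that a manual re-run of the service check succeeds.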

[jira] [Commented] (AMBARI-15389) Intermittent YARN service check failures during and post EU

2016-03-31 Thread Hudson (JIRA)

[ https://issues.apache.org/jira/browse/AMBARI-15389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15220147#comment-15220147 ]

Hudson commented on AMBARI-15389:
---------------------------------

SUCCESS: Integrated in Ambari-branch-2.2 #575 (See 
[https://builds.apache.org/job/Ambari-branch-2.2/575/])
AMBARI-15389. Intermittent YARN service check failures during and post (hiveww: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=78eafc7b4a7b189ef895fad1e500d84d30c4c0c4])
* ambari-web/app/utils/configs/config_property_helper.js


> Intermittent YARN service check failures during and post EU
> ------------------------------------------------------------
>
> Key: AMBARI-15389
> URL: https://issues.apache.org/jira/browse/AMBARI-15389
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.2.2
>Reporter: Dmitry Lysnichenko
>Assignee: Antonenko Alexander
> Fix For: 2.2.2
>
> Attachments: AMBARI-15389.patch, AMBARI-15389.patch, 
> AMBARI-15389_2.2.patch

[jira] [Commented] (AMBARI-15389) Intermittent YARN service check failures during and post EU

2016-03-23 Thread Hudson (JIRA)

[ https://issues.apache.org/jira/browse/AMBARI-15389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15209454#comment-15209454 ]

Hudson commented on AMBARI-15389:
---------------------------------

FAILURE: Integrated in Ambari-trunk-Commit #4535 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/4535/])
AMBARI-15389. Intermittent YARN service check failures during and post 
(dlysnichenko: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=7213fda8d2e7de435592b6180d7d13282500ffa6])
* 
ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py
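
This commit modifies the YARN service check script itself. For context, the retry semantics implied by the traceback, where tries and try_sleep flow from checked_call through _call_wrapper into _call, look roughly like the standalone sketch below; this is a simplification for illustration, not resource_management's actual code.

{code}
# Simplified, standalone model of checked_call's retry behavior -- an
# illustration of the semantics, not resource_management's real implementation.
import subprocess
import time

class Fail(Exception):
    pass

def checked_call(command, tries=1, try_sleep=0):
    """Run `command` in a shell, retrying on non-zero exit; raise Fail if the
    final attempt still fails (assumes tries >= 1)."""
    for attempt in range(1, tries + 1):
        result = subprocess.run(command, shell=True,
                                capture_output=True, text=True)
        if result.returncode == 0:
            return result.returncode, result.stdout
        if attempt < tries:
            time.sleep(try_sleep)
    raise Fail("Execution of '%s' returned %d. %s"
               % (command, result.returncode, result.stderr))
{code}

With tries=1 (the default), this degenerates to the single attempt whose failure is shown in the log above.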


> Intermittent YARN service check failures during and post EU
> ------------------------------------------------------------
>
> Key: AMBARI-15389
> URL: https://issues.apache.org/jira/browse/AMBARI-15389
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.2.2
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
> Fix For: 2.2.2
>
> Attachments: AMBARI-15389.patch

[jira] [Commented] (AMBARI-15389) Intermittent YARN service check failures during and post EU

2016-03-11 Thread Hudson (JIRA)

[ https://issues.apache.org/jira/browse/AMBARI-15389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15191392#comment-15191392 ]

Hudson commented on AMBARI-15389:
---------------------------------

SUCCESS: Integrated in Ambari-branch-2.2 #502 (See 
[https://builds.apache.org/job/Ambari-branch-2.2/502/])
AMBARI-15389 Intermittent YARN service check failures during and post EU 
(dlysnichenko: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=77dc81f63a830a8fe9a99284d31dcfec98ac98d3])
* 
ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py


> Intermittent YARN service check failures during and post EU
> ------------------------------------------------------------
>
> Key: AMBARI-15389
> URL: https://issues.apache.org/jira/browse/AMBARI-15389
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.2.2
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
> Fix For: 2.2.2
>
> Attachments: AMBARI-15389.patch