[jira] [Updated] (AMBARI-15394) Add second parameter to App.format.role()

2016-03-11 Thread Zhe (Joe) Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-15394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe (Joe) Wang updated AMBARI-15394:

Affects Version/s: 2.4.0

> Add second parameter to App.format.role()
> -
>
> Key: AMBARI-15394
> URL: https://issues.apache.org/jira/browse/AMBARI-15394
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.4.0, 2.2.2
>Reporter: Zhe (Joe) Wang
>Assignee: Zhe (Joe) Wang
> Fix For: 2.4.0, 2.2.2
>
> Attachments: AMBARI-15394.0.patch, AMBARI-15394_branch_2_2.0.patch
>
>
> Because a service can have the same key as its component (e.g. Pig), 
> App.format.role() needs a second parameter to indicate which one it should return.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-15394) Add second parameter to App.format.role()

2016-03-11 Thread Zhe (Joe) Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-15394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe (Joe) Wang updated AMBARI-15394:

Fix Version/s: 2.4.0

> Add second parameter to App.format.role()
> -
>
> Key: AMBARI-15394
> URL: https://issues.apache.org/jira/browse/AMBARI-15394
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.4.0, 2.2.2
>Reporter: Zhe (Joe) Wang
>Assignee: Zhe (Joe) Wang
> Fix For: 2.4.0, 2.2.2
>
> Attachments: AMBARI-15394.0.patch, AMBARI-15394_branch_2_2.0.patch
>
>
> Because a service can have the same key as its component (e.g. Pig), 
> App.format.role() needs a second parameter to indicate which one it should return.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-15394) Add second parameter to App.format.role()

2016-03-11 Thread Zhe (Joe) Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-15394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe (Joe) Wang updated AMBARI-15394:

Attachment: AMBARI-15394.0.patch

Modified unit test. Local ambari-web test passed.
24564 tests complete (21 seconds)
145 tests pending
Manual testing done.

> Add second parameter to App.format.role()
> -
>
> Key: AMBARI-15394
> URL: https://issues.apache.org/jira/browse/AMBARI-15394
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.4.0, 2.2.2
>Reporter: Zhe (Joe) Wang
>Assignee: Zhe (Joe) Wang
> Fix For: 2.4.0, 2.2.2
>
> Attachments: AMBARI-15394.0.patch, AMBARI-15394_branch_2_2.0.patch
>
>
> Because a service can have the same key as its component (e.g. Pig), 
> App.format.role() needs a second parameter to indicate which one it should return.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-15395) Enhance blueprint support for using references

2016-03-11 Thread Shantanu Mundkur (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-15395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shantanu Mundkur updated AMBARI-15395:
--
Description: 
An exported blueprint should provide ready portability, i.e. an exported 
blueprint should be usable without changes to deploy another cluster; some elements 
that are masked or omitted could still use tokens or placeholders. These have 
been called references in previous JIRAs. A reference follows a convention 
that indicates that it is a reference, using a keyword and a pattern, e.g.
ReferenceName:configType:configVersion:propertyName

References would be good indicators of properties that users could choose to 
customize before deploying the cluster. They could also indicate the need for a 
"global" default for that property in the cluster template. Examples:
Passwords
Hostnames 
External databases

Currently Ambari has a concept of SECRET references. E.g.
SECRET:hive-site:2:hive.server2.keystore.password

These are used to mask the password when a blueprint is exported. However, 
it would be useful to export such an entry, but with a reference in place of the value.

Similarly one could have,
HOST:kerberos-env:-1:kdc_host
and so forth.

For any reference, the cluster template would contain a corresponding 
property whose value would be substituted for the reference during 
deployment if the registered blueprint had such references. Currently such 
behavior is used if a property of type password is not specified 
(default_password). Such references could be used to tag properties, flagging 
them as the ones that users must customize or include in the cluster 
template. They could also serve as a way to annotate/comment parts of the blueprint 
JSON.
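
As a rough illustration of how such a reference could be parsed and resolved against cluster-template properties (a minimal sketch, not part of this proposal; the function names and the property-lookup scheme below are hypothetical):

{code}
# Hypothetical sketch: parse a reference token such as
# "SECRET:hive-site:2:hive.server2.keystore.password" or
# "HOST:kerberos-env:-1:kdc_host" and substitute a value supplied in the
# cluster template. Names and the lookup key format are illustrative only.
from collections import namedtuple

Reference = namedtuple("Reference", ["keyword", "config_type", "config_version", "property_name"])

KNOWN_KEYWORDS = ("SECRET", "HOST")

def parse_reference(value):
    """Return a Reference if the value matches the keyword:type:version:property pattern."""
    parts = value.split(":")
    if len(parts) == 4 and parts[0] in KNOWN_KEYWORDS:
        return Reference(parts[0], parts[1], int(parts[2]), parts[3])
    return None

def resolve(value, cluster_template_properties):
    """Replace a reference with the value provided in the cluster template, if any."""
    ref = parse_reference(value)
    if ref is None:
        return value  # an ordinary property value, leave untouched
    key = "%s:%s:%s" % (ref.config_type, ref.config_version, ref.property_name)
    return cluster_template_properties.get(key, value)

# Example: the cluster template supplies the real password for the exported reference.
props = {"hive-site:2:hive.server2.keystore.password": "s3cret"}
print(resolve("SECRET:hive-site:2:hive.server2.keystore.password", props))
{code}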

> Enhance blueprint support for using references
> --
>
> Key: AMBARI-15395
> URL: https://issues.apache.org/jira/browse/AMBARI-15395
> Project: Ambari
>  Issue Type: Story
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Shantanu Mundkur
>
> An exported blueprint should provide ready portability, i.e. an exported 
> blueprint should be usable without changes to deploy another cluster; some elements 
> that are masked or omitted could still use tokens or placeholders. These have 
> been called references in previous JIRAs. A reference follows a convention 
> that indicates that it is a reference, using a keyword and a pattern, e.g.
> ReferenceName:configType:configVersion:propertyName
> References would be good indicators of properties that users could choose to 
> customize before deploying the cluster. They could also indicate the need for a 
> "global" default for that property in the cluster template. Examples:
> Passwords
> Hostnames 
> External databases
> Currently Ambari has a concept of SECRET references. E.g.
> SECRET:hive-site:2:hive.server2.keystore.password
> These are used to mask the password when a blueprint is exported. 
> However, it would be useful to export such an entry, but with a reference in place of the value.
> Similarly one could have,
> HOST:kerberos-env:-1:kdc_host
> and so forth.
> For any reference, the cluster template would contain a corresponding 
> property whose value would be substituted for the reference during 
> deployment if the registered blueprint had such references. Currently such 
> behavior is used if a property of type password is not specified 
> (default_password). Such references could be used to tag properties, flagging 
> them as the ones that users must customize or include in the cluster 
> template. They could also serve as a way to annotate/comment parts of the 
> blueprint JSON.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-15380) PXF alerts not working on secured HDFS HA clusters

2016-03-11 Thread bhuvnesh chaudhary (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-15380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bhuvnesh chaudhary updated AMBARI-15380:

Attachment: AMBARI-15380-1.patch

> PXF alerts not working on secured HDFS HA clusters
> --
>
> Key: AMBARI-15380
> URL: https://issues.apache.org/jira/browse/AMBARI-15380
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: trunk, 2.2.0
>Reporter: bhuvnesh chaudhary
>Assignee: bhuvnesh chaudhary
> Fix For: trunk, 2.2.0
>
> Attachments: AMBARI-15380-1.patch, AMBARI-15380.patch
>
>
> PXF alerts are not working on secured HDFS HA clusters. When the cluster is 
> HA, the API should reach out to the active NameNode; however, it currently 
> goes to localhost.
> Updated the logic to determine the active NameNode and use it.
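
One common way to find the active NameNode in an HA pair is to ask each NameNode's JMX endpoint for its HA state; the following is a minimal sketch of that idea, not the actual patch, and the list of addresses is a hypothetical input (a secured cluster would additionally need SPNEGO authentication):

{code}
# Hedged sketch: pick the active NameNode by querying the FSNamesystem JMX bean,
# whose "tag.HAState" attribute is "active" or "standby".
import json
import urllib2

def get_active_namenode(namenode_http_addresses):
    for address in namenode_http_addresses:  # e.g. ["nn1.example.com:50070", "nn2.example.com:50070"]
        url = "http://%s/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem" % address
        try:
            data = json.load(urllib2.urlopen(url, timeout=10))
        except Exception:
            continue  # NameNode unreachable, try the next one
        beans = data.get("beans", [])
        if beans and beans[0].get("tag.HAState") == "active":
            return address
    return None
{code}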



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (AMBARI-15395) Enhance blueprint support for using references

2016-03-11 Thread Shantanu Mundkur (JIRA)
Shantanu Mundkur created AMBARI-15395:
-

 Summary: Enhance blueprint support for using references
 Key: AMBARI-15395
 URL: https://issues.apache.org/jira/browse/AMBARI-15395
 Project: Ambari
  Issue Type: Story
  Components: ambari-server
Affects Versions: 2.4.0
Reporter: Shantanu Mundkur






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-15393) Add stderr output of Ambari auto-recovery commands in agent log

2016-03-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-15393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15191530#comment-15191530
 ] 

Hadoop QA commented on AMBARI-15393:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12792873/AMBARI-15393.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/Ambari-trunk-test-patch/5833//testReport/
Console output: 
https://builds.apache.org/job/Ambari-trunk-test-patch/5833//console

This message is automatically generated.

> Add stderr output of Ambari auto-recovery commands in agent log
> ---
>
> Key: AMBARI-15393
> URL: https://issues.apache.org/jira/browse/AMBARI-15393
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-agent
>Affects Versions: 2.2.1
>Reporter: Sandor Magyari
>Assignee: Sandor Magyari
>Priority: Critical
> Fix For: 2.2.2
>
> Attachments: AMBARI-15393.patch
>
>
> Users rely on Ambari auto-recovery logic to recover from component start 
> failures during cluster create. The idea is to improve reliability (through 
> retries) by sacrificing some of the latency.
> In some cases we see that cluster creates fail because component start fails 
> and auto-recovery is unable to start those components for up to 2 hrs, most 
> often on headnodes for HIVE_SERVER, OOZIE_SERVER, and NAMENODE components.
> The problem is that these kinds of failures are hard to investigate later, as 
> auto-recovery output is neither sent to the server side nor saved in the 
> ambari-agent logs; it is only stored on the agent. 
> The solution is to add a new option, log_auto_execute_errors, to the logging 
> section of ambari-agent.ini. When it is enabled, the agent will append the stderr 
> of auto-recovery commands to the agent log.
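
A minimal sketch of the behavior described above, assuming the option lives in the [logging] section of /etc/ambari-agent/conf/ambari-agent.ini; the function names and exact key handling are illustrative, not the patch itself:

{code}
# Hedged sketch: append the stderr of a failed auto-recovery command to the agent
# log only when log_auto_execute_errors is enabled in ambari-agent.ini.
import logging
import ConfigParser

logger = logging.getLogger(__name__)

def log_auto_execute_errors_enabled(ini_path="/etc/ambari-agent/conf/ambari-agent.ini"):
    config = ConfigParser.ConfigParser()
    config.read(ini_path)
    try:
        return config.getboolean("logging", "log_auto_execute_errors")
    except (ConfigParser.NoSectionError, ConfigParser.NoOptionError, ValueError):
        return False  # option absent or malformed: keep the old behavior

def report_auto_recovery_result(command, exit_code, stderr):
    if exit_code != 0 and log_auto_execute_errors_enabled():
        logger.error("Auto-recovery command %s failed (exit %s), stderr:\n%s",
                     command, exit_code, stderr)
{code}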



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-15382) Alert for updating HAWQ namespace after enabling HDFS HA

2016-03-11 Thread Lav Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-15382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lav Jain updated AMBARI-15382:
--
Attachment: AMBARI-15382.v2.patch

> Alert for updating HAWQ namespace after enabling HDFS HA 
> -
>
> Key: AMBARI-15382
> URL: https://issues.apache.org/jira/browse/AMBARI-15382
> Project: Ambari
>  Issue Type: Improvement
>  Components: ambari-web
>Affects Versions: trunk, 2.2.0
>Reporter: Lav Jain
>Assignee: Lav Jain
>Priority: Minor
> Fix For: 2.2.2
>
> Attachments: AMBARI-15382.branch22.patch, 
> AMBARI-15382.branch22.v2.patch, AMBARI-15382.patch, AMBARI-15382.v2.patch, 
> Screen Shot 2016-03-10 at 5.06.47 PM.png, Screen Shot 2016-03-10 at 5.13.41 
> PM.png
>
>
> Alert the user on step 1 of the wizard (conditionally, if HAWQ is installed). Repeat 
> the message when the NameNode HA wizard is done.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-15338) After exporting blueprint from ranger enabled cluster ranger.service.https.attrib.keystore.pass is exported

2016-03-11 Thread Amruta Borkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-15338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amruta Borkar updated AMBARI-15338:
---
Status: Open  (was: Patch Available)

> After exporting blueprint from ranger enabled cluster 
> ranger.service.https.attrib.keystore.pass is exported
> ---
>
> Key: AMBARI-15338
> URL: https://issues.apache.org/jira/browse/AMBARI-15338
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server, blueprints
>Affects Versions: 2.2.0
>Reporter: Amruta Borkar
>Assignee: Amruta Borkar
> Attachments: AMBARI-15338-trunk.patch, AMBARI-15338.patch
>
>
> After exporting a blueprint from a Ranger-enabled cluster, 
> ranger.service.https.attrib.keystore.pass is also included, and it needs to be 
> removed before the same blueprint can be used to create another cluster.
> Error shown when the same blueprint is used:
> {
>   "status" : 400,
>   "message" : "Blueprint configuration validation failed: Secret references 
> are not allowed in blueprints, replace following properties with real 
> passwords:\n  Config:ranger-admin-site 
> Property:ranger.service.https.attrib.keystore.pass\n"
> }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-15391) Ambari upgrade to 2.2.2 failed

2016-03-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-15391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15191490#comment-15191490
 ] 

Hadoop QA commented on AMBARI-15391:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12792870/AMBARI-15391.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
ambari-server.

Test results: 
https://builds.apache.org/job/Ambari-trunk-test-patch/5832//testReport/
Console output: 
https://builds.apache.org/job/Ambari-trunk-test-patch/5832//console

This message is automatically generated.

> Ambari upgrade to 2.2.2 failed
> --
>
> Key: AMBARI-15391
> URL: https://issues.apache.org/jira/browse/AMBARI-15391
> Project: Ambari
>  Issue Type: Bug
>Reporter: Andrew Onischuk
>Assignee: Andrew Onischuk
> Fix For: 2.2.2
>
> Attachments: AMBARI-15391.patch
>
>
> **STR**  
> 1) Install Ambari-2.2.1  
> 2) Deploy Zookeeper only  
> 3) Upgrade to Ambari-2.2.2
> **Result**:
> 
> 
> ERROR: Error executing schema upgrade, please check the server logs.
> ERROR: Ambari server upgrade failed. Please look at 
> /var/log/ambari-server/ambari-server.log, for more details.
> ERROR: Exiting with exit code 11. 
> REASON: Schema upgrade failed.
> 
> 
> 
> 11 Mar 2016 13:54:08,586  INFO [main] UpgradeCatalog222:357 - Updating 
> HDFS widget definition.
> 11 Mar 2016 13:54:08,590 ERROR [main] SchemaUpgradeHelper:230 - Upgrade 
> failed.
> java.util.NoSuchElementException
> at java.util.Vector$Itr.next(Vector.java:1140)
> at 
> org.apache.ambari.server.upgrade.UpgradeCatalog222.updateHDFSWidgetDefinition(UpgradeCatalog222.java:380)
> at 
> org.apache.ambari.server.upgrade.UpgradeCatalog222.executeDMLUpdates(UpgradeCatalog222.java:152)
> at 
> org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeData(AbstractUpgradeCatalog.java:662)
> at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeDMLUpdates(SchemaUpgradeHelper.java:228)
> at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:305)
> 11 Mar 2016 13:54:08,600 ERROR [main] SchemaUpgradeHelper:316 - Exception 
> occurred during upgrade, failed
> org.apache.ambari.server.AmbariException
> at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeDMLUpdates(SchemaUpgradeHelper.java:231)
> at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:305)
> Caused by: java.util.NoSuchElementException
> at java.util.Vector$Itr.next(Vector.java:1140)
> at 
> org.apache.ambari.server.upgrade.UpgradeCatalog222.updateHDFSWidgetDefinition(UpgradeCatalog222.java:380)
> at 
> org.apache.ambari.server.upgrade.UpgradeCatalog222.executeDMLUpdates(UpgradeCatalog222.java:152)
> at 
> org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeData(AbstractUpgradeCatalog.java:662)
> at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeDMLUpdates(SchemaUpgradeHelper.java:228)
> ... 1 more
> 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-12906) Alert notifications are created even if credential fields are left empty

2016-03-11 Thread Qin Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-12906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qin Liu updated AMBARI-12906:
-
Fix Version/s: trunk

> Alert notifications are created even if credential fields are left empty
> 
>
> Key: AMBARI-12906
> URL: https://issues.apache.org/jira/browse/AMBARI-12906
> Project: Ambari
>  Issue Type: Bug
>  Components: alerts
>Affects Versions: 2.1.0
>Reporter: Chandana Mirashi
>Assignee: Qin Liu
> Fix For: trunk
>
> Attachments: AMBARI-12906.patch
>
>
> User should not be allowed to save changes if the credential fields are left 
> empty when creating new Alert Notifications.
> Steps to reproduce:
> 1. Create New Alert Notification
> 2. Enter Name = test
> 3. Tick the checkbox for 'Use Authentication'
> 4. Keep username and password empty
> 5. Click on Save
> 6. A new alert notification is created.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-15330) Bubble up errors during RU/EU

2016-03-11 Thread Alejandro Fernandez (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-15330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Fernandez updated AMBARI-15330:
-
Attachment: AMBARI-15330.trunk.patch

> Bubble up errors during RU/EU
> -
>
> Key: AMBARI-15330
> URL: https://issues.apache.org/jira/browse/AMBARI-15330
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Alejandro Fernandez
>Assignee: Alejandro Fernandez
> Fix For: 2.4.0
>
> Attachments: AMBARI-15330.trunk.patch
>
>
> During RU/EU, we need a way to bubble up the error of the current item that 
> failed. This is useful for quickly getting a human-readable error that other UIs 
> can retrieve.
> It can print a human-readable error, plus stdout and stderr.
> This would become part of the upgrade endpoint. e.g,
> api/v1/clusters/$name/upgrade_summary/$request_id
> {code}
> {
> attempt_cnt: 1,
> cluster_name: "c1",
> request_id: 1,
> fail_reason: "Failed calling RESTART ZOOKEEPER/ZOOKEEPER_SERVER on host 
> c6401.ambari.apache.org",
> // Notice that the rest are inherited from the failed task if it exists.
> command: "CUSTOM_COMMAND",
> command_detail: "RESTART ZOOKEEPER/ZOOKEEPER_SERVER",
> custom_command_name: "RESTART",
> end_time: -1,
> error_log: "/var/lib/ambari-agent/data/errors-1234.txt",
> exit_code: 1,
> host_name: "c6401.ambari.apache.org",
> id: 1234,
> output_log: "/var/lib/ambari-agent/data/output-1234.txt",
> role: "ZOOKEEPER_SERVER",
> stage_id: 1,
> start_time: 123456789,
> status: "HOLDING_FAILED",
> stdout: "",
> stderr: ""
> }
> {code}
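
For illustration, a client could read the proposed endpoint as sketched below; the Ambari host, default port 8080, and credentials are placeholders, and the field names follow the example payload above:

{code}
# Hedged example of consuming the proposed upgrade_summary endpoint.
import base64
import json
import urllib2

def get_upgrade_failure(ambari_host, cluster_name, request_id, user="admin", password="admin"):
    url = ("http://%s:8080/api/v1/clusters/%s/upgrade_summary/%s"
           % (ambari_host, cluster_name, request_id))
    request = urllib2.Request(url)
    request.add_header("Authorization",
                       "Basic " + base64.b64encode("%s:%s" % (user, password)))
    summary = json.load(urllib2.urlopen(request))
    # fail_reason carries the human-readable error; the remaining fields are
    # inherited from the failed task, as in the sample payload above.
    return summary.get("fail_reason")

print(get_upgrade_failure("c6401.ambari.apache.org", "c1", 1))
{code}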



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-15330) Bubble up errors during RU/EU

2016-03-11 Thread Alejandro Fernandez (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-15330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Fernandez updated AMBARI-15330:
-
Attachment: (was: AMBARI-15330.trunk.patch)

> Bubble up errors during RU/EU
> -
>
> Key: AMBARI-15330
> URL: https://issues.apache.org/jira/browse/AMBARI-15330
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Alejandro Fernandez
>Assignee: Alejandro Fernandez
> Fix For: 2.4.0
>
>
> During RU/EU, we need a way to bubble up the error of the current item that 
> failed. This is useful for quickly getting a human-readable error that other UIs 
> can retrieve.
> It can print a human-readable error, plus stdout and stderr.
> This would become part of the upgrade endpoint. e.g,
> api/v1/clusters/$name/upgrade_summary/$request_id
> {code}
> {
> attempt_cnt: 1,
> cluster_name: "c1",
> request_id: 1,
> fail_reason: "Failed calling RESTART ZOOKEEPER/ZOOKEEPER_SERVER on host 
> c6401.ambari.apache.org",
> // Notice that the rest are inherited from the failed task if it exists.
> command: "CUSTOM_COMMAND",
> command_detail: "RESTART ZOOKEEPER/ZOOKEEPER_SERVER",
> custom_command_name: "RESTART",
> end_time: -1,
> error_log: "/var/lib/ambari-agent/data/errors-1234.txt",
> exit_code: 1,
> host_name: "c6401.ambari.apache.org",
> id: 1234,
> output_log: "/var/lib/ambari-agent/data/output-1234.txt",
> role: "ZOOKEEPER_SERVER",
> stage_id: 1,
> start_time: 123456789,
> status: "HOLDING_FAILED",
> stdout: "",
> stderr: ""
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (AMBARI-15379) UI Label changes for Ranger KMS and NFSGateway not uniform across pages

2016-03-11 Thread Zhe (Joe) Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-15379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe (Joe) Wang resolved AMBARI-15379.
-
Resolution: Fixed
  Assignee: Zhe (Joe) Wang

> UI Label changes for Ranger KMS and NFSGateway not uniform across pages
> ---
>
> Key: AMBARI-15379
> URL: https://issues.apache.org/jira/browse/AMBARI-15379
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.2.2
>Reporter: Dhanya Balasundaran
>Assignee: Zhe (Joe) Wang
> Fix For: 2.2.2
>
>
> Recently "Ranger KMS Server" and "NFSGateway" have been changed at few places 
> as "Ranger Kms Server" and "Nfs Gateway" and are now not uniform across the 
> pages as they used to be hence creating problem in Automation tests.
> Below pages are expected to have uniform Label convention
> While adding Service in case of both Ranger KMS and NFS Gateway
> Also in install wizard assignMasters page
> Add Service wizard assignMaster page
> (Ranger/HDFS) service page > summary
> Hosts Page > add > NFSGateway
> Host Page where Ranger KMS Server and NFSGateway are hosted (labels on Host 
> details page)
> Actual: Add service showing Ranger Kms Server and Nfs Gateway on few Pages
> Expected: All Labels should be uniform (Original Labels were : "Ranger KMS 
> Server" and "NFSGateway" which were uniform across)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-12906) Alert notifications are created even if credential fields are left empty

2016-03-11 Thread Qin Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-12906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15191446#comment-15191446
 ] 

Qin Liu commented on AMBARI-12906:
--

Has anyone had a chance to review the patch?
I am new to Ambari and this is the first JIRA I'm working on.  Would really 
appreciate some feedback.  Thanks.

> Alert notifications are created even if credential fields are left empty
> 
>
> Key: AMBARI-12906
> URL: https://issues.apache.org/jira/browse/AMBARI-12906
> Project: Ambari
>  Issue Type: Bug
>  Components: alerts
>Affects Versions: 2.1.0
>Reporter: Chandana Mirashi
>Assignee: Qin Liu
> Attachments: AMBARI-12906.patch
>
>
> User should not be allowed to save changes if the credential fields are left 
> empty when creating new Alert Notifications.
> Steps to reproduce:
> 1. Create New Alert Notification
> 2. Enter Name = test
> 3. Tick the checkbox for 'Use Authentication'
> 4. Keep username and password empty
> 5. Click on Save
> 6. A new alert notification is created.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-15311) Update descriptions for HAWQ and PXF configurations

2016-03-11 Thread bhuvnesh chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-15311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15191445#comment-15191445
 ] 

bhuvnesh chaudhary commented on AMBARI-15311:
-

Committed to trunk : commit c15b088ca3c0b0e5a69340ba9e95eeae3dbc3aad

> Update descriptions for HAWQ and PXF configurations
> ---
>
> Key: AMBARI-15311
> URL: https://issues.apache.org/jira/browse/AMBARI-15311
> Project: Ambari
>  Issue Type: Sub-task
>  Components: stacks
>Reporter: Matt
>Assignee: Goutam Tadi
>Priority: Minor
> Fix For: trunk, 2.2.2
>
> Attachments: AMBARI-15311-trunk.patch, AMBARI-15311.v1-trunk.patch
>
>
> Update the descriptions for HAWQ and PXF configurations so that they are more 
> readable and convey the right message to the user.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-15388) Upgrade XML should be pushed down as much as possible to the services

2016-03-11 Thread Tim Thorpe (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-15388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Thorpe updated AMBARI-15388:

Status: Patch Available  (was: In Progress)

> Upgrade XML should be pushed down as much as possible to the services
> -
>
> Key: AMBARI-15388
> URL: https://issues.apache.org/jira/browse/AMBARI-15388
> Project: Ambari
>  Issue Type: Improvement
>  Components: ambari-server
>Affects Versions: trunk
>Reporter: Tim Thorpe
>Assignee: Tim Thorpe
> Fix For: trunk
>
> Attachments: AMBARI-15388.patch
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> Currently the upgrade is defined as a series of xml files specific to the 
> current stack version and the target stack version.  Each upgrade xml defines 
> the overall sequence of the upgrade and what needs to be done for each 
> service.  It would be both easier to maintain and easier to add new services if 
> the services themselves could specify what should be done during their 
> upgrade.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-15388) Upgrade XML should be pushed down as much as possible to the services

2016-03-11 Thread Tim Thorpe (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-15388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Thorpe updated AMBARI-15388:

Attachment: AMBARI-15388.patch

Patch splitting the upgrade xml.

> Upgrade XML should be pushed down as much as possible to the services
> -
>
> Key: AMBARI-15388
> URL: https://issues.apache.org/jira/browse/AMBARI-15388
> Project: Ambari
>  Issue Type: Improvement
>  Components: ambari-server
>Affects Versions: trunk
>Reporter: Tim Thorpe
>Assignee: Tim Thorpe
> Fix For: trunk
>
> Attachments: AMBARI-15388.patch
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> Currently the upgrade is defined as a series of xml files specific to the 
> current stack version and the target stack version.  Each upgrade xml defines 
> the overall sequence of the upgrade and what needs to be done for each 
> service.  It would be both easier to maintain and easier to add new services if 
> the services themselves could specify what should be done during their 
> upgrade.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-15338) After exporting blueprint from ranger enabled cluster ranger.service.https.attrib.keystore.pass is exported

2016-03-11 Thread Amruta Borkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-15338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amruta Borkar updated AMBARI-15338:
---
Status: Patch Available  (was: In Progress)

> After exporting blueprint from ranger enabled cluster 
> ranger.service.https.attrib.keystore.pass is exported
> ---
>
> Key: AMBARI-15338
> URL: https://issues.apache.org/jira/browse/AMBARI-15338
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server, blueprints
>Affects Versions: 2.2.0
>Reporter: Amruta Borkar
>Assignee: Amruta Borkar
> Attachments: AMBARI-15338-trunk.patch, AMBARI-15338.patch
>
>
> After exporting a blueprint from a Ranger-enabled cluster, 
> ranger.service.https.attrib.keystore.pass is also included, and it needs to be 
> removed before the same blueprint can be used to create another cluster.
> Error shown when the same blueprint is used:
> {
>   "status" : 400,
>   "message" : "Blueprint configuration validation failed: Secret references 
> are not allowed in blueprints, replace following properties with real 
> passwords:\n  Config:ranger-admin-site 
> Property:ranger.service.https.attrib.keystore.pass\n"
> }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-15338) After exporting blueprint from ranger enabled cluster ranger.service.https.attrib.keystore.pass is exported

2016-03-11 Thread Amruta Borkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-15338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amruta Borkar updated AMBARI-15338:
---
Status: Open  (was: Patch Available)

> After exporting blueprint from ranger enabled cluster 
> ranger.service.https.attrib.keystore.pass is exported
> ---
>
> Key: AMBARI-15338
> URL: https://issues.apache.org/jira/browse/AMBARI-15338
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server, blueprints
>Affects Versions: 2.2.0
>Reporter: Amruta Borkar
>Assignee: Amruta Borkar
> Attachments: AMBARI-15338-trunk.patch, AMBARI-15338.patch
>
>
> After exporting a blueprint from a Ranger-enabled cluster, 
> ranger.service.https.attrib.keystore.pass is also included, and it needs to be 
> removed before the same blueprint can be used to create another cluster.
> Error shown when the same blueprint is used:
> {
>   "status" : 400,
>   "message" : "Blueprint configuration validation failed: Secret references 
> are not allowed in blueprints, replace following properties with real 
> passwords:\n  Config:ranger-admin-site 
> Property:ranger.service.https.attrib.keystore.pass\n"
> }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-15228) Ambari overwrites permissions on HDFS directories

2016-03-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-15228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15191391#comment-15191391
 ] 

Hudson commented on AMBARI-15228:
-

SUCCESS: Integrated in Ambari-branch-2.2 #502 (See 
[https://builds.apache.org/job/Ambari-branch-2.2/502/])
AMBARI-15228. Ambari overwrites permissions on HDFS directories (aonishuk: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=4277dbe8e356b309f6e1ecb98e1ddffb974e4692])
* ambari-server/src/test/python/stacks/2.3/MAHOUT/test_mahout_service_check.py
* 
ambari-server/src/test/python/stacks/2.0.6/YARN/test_mapreduce2_service_check.py
* ambari-server/src/test/python/stacks/2.0.6/HDFS/test_namenode.py
* ambari-server/src/test/python/stacks/2.3/SPARK/test_spark_thrift_server.py
* ambari-server/src/test/python/stacks/2.1/YARN/test_apptimelineserver.py
* ambari-server/src/test/python/stacks/2.2/PIG/test_pig_service_check.py
* ambari-server/src/test/python/stacks/2.0.6/HBASE/test_hbase_master.py
* ambari-server/src/test/python/stacks/2.0.6/PIG/test_pig_service_check.py
* ambari-server/src/test/python/stacks/2.2/SPARK/test_job_history_server.py
* ambari-server/src/test/python/stacks/2.0.6/OOZIE/test_oozie_server.py
* 
ambari-server/src/main/resources/stacks/HDP/2.0.6/configuration/cluster-env.xml
* ambari-server/src/test/python/stacks/2.0.6/HDFS/test_service_check.py
* ambari-server/src/test/python/stacks/2.0.6/HIVE/test_hive_service_check.py
* ambari-server/src/test/python/stacks/2.3/HAWQ/test_hawqmaster.py
* ambari-server/src/test/python/stacks/2.1/TEZ/test_service_check.py
* ambari-server/src/test/python/stacks/2.0.6/YARN/test_historyserver.py
* ambari-server/src/test/python/stacks/2.1/FALCON/test_falcon_server.py
* ambari-server/src/test/python/stacks/2.0.6/HIVE/test_hive_server.py
* ambari-server/src/test/python/stacks/2.3/configs/hawq_default.json
* ambari-server/src/test/python/stacks/2.0.6/OOZIE/test_service_check.py
* 
ambari-server/src/test/python/stacks/2.0.6/AMBARI_METRICS/test_metrics_collector.py
* ambari-server/src/test/python/stacks/2.3/YARN/test_ats_1_5.py


> Ambari overwrites permissions on HDFS directories
> -
>
> Key: AMBARI-15228
> URL: https://issues.apache.org/jira/browse/AMBARI-15228
> Project: Ambari
>  Issue Type: Bug
>Reporter: Andrew Onischuk
>Assignee: Andrew Onischuk
> Fix For: 2.2.2
>
> Attachments: AMBARI-15228.patch
>
>
> Ambari is overriding permissions on default HDFS directories such as /app-
> logs, /apps/hive/warehouse, and /tmp.  
> This allows any user to write to those locations, preventing access to them 
> from being controlled via Ranger/HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-15389) Intermittent YARN service check failures during and post EU

2016-03-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-15389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15191392#comment-15191392
 ] 

Hudson commented on AMBARI-15389:
-

SUCCESS: Integrated in Ambari-branch-2.2 #502 (See 
[https://builds.apache.org/job/Ambari-branch-2.2/502/])
AMBARI-15389 Intermittent YARN service check failures during and post EU 
(dlysnichenko: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=77dc81f63a830a8fe9a99284d31dcfec98ac98d3])
* 
ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py


> Intermittent YARN service check failures during and post EU
> ---
>
> Key: AMBARI-15389
> URL: https://issues.apache.org/jira/browse/AMBARI-15389
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.2.2
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
> Fix For: 2.2.2
>
> Attachments: AMBARI-15389.patch
>
>
> Build # - Ambari 2.2.1.1 - #63
> Observed this issue in a couple of EU runs recently where YARN service check 
> reports failure
> a. In one test, the EU ran from HDP 2.3.4.0 to 2.4.0.0 and YARN service check 
> reported failure during EU itself; a retry of the operation led to service 
> check being successful
> b. In another test post EU when YARN service check was run, it reported 
> failure; afterwards when I ran it again - success
> Looks like there is some corner condition which causes this issue to be hit
> {code}
> stderr:   /var/lib/ambari-agent/data/errors-822.txt
> Traceback (most recent call last):
> File 
> "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py",
>  line 142, in 
> ServiceCheck().execute()
> File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 219, in execute
> method(env)
> File 
> "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py",
>  line 104, in service_check
> user=params.smokeuser,
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 70, in inner
> result = function(command, **kwargs)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 92, in checked_call
> tries=tries, try_sleep=try_sleep)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 140, in _call_wrapper
> result = _call(command, **kwargs_copy)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 291, in _call
> raise Fail(err_msg)
> resource_management.core.exceptions.Fail: Execution of '/usr/bin/kinit -kt 
> /etc/security/keytabs/smokeuser.headless.keytab ambari...@example.com; yarn 
> org.apache.hadoop.yarn.applications.distributedshell.Client -shell_command ls 
> -num_containers 1 -jar 
> /usr/hdp/current/hadoop-yarn-client/hadoop-yarn-applications-distributedshell.jar'
>  returned 2.  Hortonworks #
> This is MOTD message, added for testing in qe infra
> 16/03/03 02:33:51 INFO impl.TimelineClientImpl: Timeline service address: 
> http://host:8188/ws/v1/timeline/
> 16/03/03 02:33:51 INFO distributedshell.Client: Initializing Client
> 16/03/03 02:33:51 INFO distributedshell.Client: Running Client
> 16/03/03 02:33:51 INFO client.RMProxy: Connecting to ResourceManager at 
> host-9-5.test/127.0.0.254:8050
> 16/03/03 02:33:53 INFO distributedshell.Client: Got Cluster metric info from 
> ASM, numNodeManagers=3
> 16/03/03 02:33:53 INFO distributedshell.Client: Got Cluster node info from ASM
> 16/03/03 02:33:53 INFO distributedshell.Client: Got node report from ASM for, 
> nodeId=host:25454, nodeAddresshost:8042, nodeRackName/default-rack, 
> nodeNumContainers1
> 16/03/03 02:33:53 INFO distributedshell.Client: Got node report from ASM for, 
> nodeId=host-9-5.test:25454, nodeAddresshost-9-5.test:8042, 
> nodeRackName/default-rack, nodeNumContainers0
> 16/03/03 02:33:53 INFO distributedshell.Client: Got node report from ASM for, 
> nodeId=host-9-1.test:25454, nodeAddresshost-9-1.test:8042, 
> nodeRackName/default-rack, nodeNumContainers0
> 16/03/03 02:33:53 INFO distributedshell.Client: Queue info, 
> queueName=default, queueCurrentCapacity=0.08336, queueMaxCapacity=1.0, 
> queueApplicationCount=0, queueChildQueueCount=0
> 16/03/03 02:33:53 INFO distributedshell.Client: User ACL Info for Queue, 
> queueName=root, userAcl=SUBMIT_APPLICATIONS
> 16/03/03 02:33:53 INFO distributedshell.Client: User ACL Info for Queue, 
> queueName=default, userAcl=SUBMIT_APPLICATIONS
> 16/03/03 02:33:53 INFO distributedshell.Client: Max mem capabililty of 
> resources in this cluster 10240
> 16/03/03 02:33:53 INFO distributedshell.Client: Max virtual cores capabililty 
> of resources in this cluster 1
> 16/03/03 02:33:53 INFO 

[jira] [Commented] (AMBARI-15374) Check To Ensure That All Components Are On The Same Version Before Upgrading

2016-03-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-15374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15191393#comment-15191393
 ] 

Hudson commented on AMBARI-15374:
-

SUCCESS: Integrated in Ambari-branch-2.2 #502 (See 
[https://builds.apache.org/job/Ambari-branch-2.2/502/])
AMBARI-15374. Check To Ensure That All Components Are On The Same 
(dlysnichenko: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=b400a051051a7f621e22b0048e91b7d6d8f9105e])
* 
ambari-server/src/test/java/org/apache/ambari/server/checks/ServiceComponentHostVersionMatchCheckTest.java
* 
ambari-server/src/main/java/org/apache/ambari/server/checks/CheckDescription.java
* 
ambari-server/src/main/java/org/apache/ambari/server/checks/UpgradeCheckGroup.java
* 
ambari-server/src/main/java/org/apache/ambari/server/checks/ServiceComponentHostVersionMatchCheck.java


> Check To Ensure That All Components Are On The Same Version Before Upgrading
> 
>
> Key: AMBARI-15374
> URL: https://issues.apache.org/jira/browse/AMBARI-15374
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Affects Versions: 2.2.2
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
> Fix For: 2.2.2
>
> Attachments: AMBARI-15374.patch
>
>
> Before beginning an upgrade, there should be a pre-upgrade check which 
> ensures that all known, versionable components are reporting the same 
> version. If any host component is reporting a version which does not match 
> the current repository version, then a warning should be produced. The 
> warning should provide information on the failed components, the expected 
> version, and the version they are reporting.
> Note that this is only for Ambari 2.2.x; Ambari 2.4.0 uses a new workflow for 
> how versions are recorded and stored during upgrade and does not need this.
> There are two ways to go about this:
> - Compare each service component host version to that of the {{CURRENT}} 
> {{repo_version}}. However, there could be cases where the {{repo_version}} 
> has not been calculated yet.
> - Compare each service component host version to each other.
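
The second approach amounts to a simple consistency check over the reported versions; below is a minimal, language-neutral sketch of that idea (the input shape is hypothetical, and the real check is implemented in ServiceComponentHostVersionMatchCheck on the server side):

{code}
# Hedged sketch of "compare each service component host version to each other":
# collect the distinct reported versions and warn when there is more than one.
def check_component_versions(reported_versions):
    """reported_versions maps (host, component) -> reported version string."""
    distinct = set(v for v in reported_versions.values() if v)
    if len(distinct) <= 1:
        return None  # all versionable components agree
    details = sorted("%s/%s reports %s" % (host, component, version)
                     for (host, component), version in reported_versions.items())
    return ("Components report different versions (%s): %s"
            % (", ".join(sorted(distinct)), "; ".join(details)))

print(check_component_versions({
    ("c6401.ambari.apache.org", "NAMENODE"): "2.3.4.0-3485",
    ("c6402.ambari.apache.org", "DATANODE"): "2.3.4.7-4",
}))
{code}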



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-15372) Zookeeper service check fails after removing a host

2016-03-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-15372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15191394#comment-15191394
 ] 

Hudson commented on AMBARI-15372:
-

SUCCESS: Integrated in Ambari-branch-2.2 #502 (See 
[https://builds.apache.org/job/Ambari-branch-2.2/502/])
AMBARI-15372. Zookeeper service check fails after removing a host  (aonishuk: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=94f5bb7d52d887989792c67bf51b8c645436379e])
* 
ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariManagementControllerImpl.java
* 
ambari-server/src/test/java/org/apache/ambari/server/controller/AmbariManagementControllerTest.java


> Zookeeper service check fails after removing a host 
> 
>
> Key: AMBARI-15372
> URL: https://issues.apache.org/jira/browse/AMBARI-15372
> Project: Ambari
>  Issue Type: Bug
>Reporter: Andrew Onischuk
>Assignee: Andrew Onischuk
> Fix For: 2.2.2
>
> Attachments: AMBARI-15372.patch
>
>
> This was reproduced in the rerun.  
> STR:  
> 1) Move all the masters from the host to be deleted (in this case, SNamenode 
> was moved)  
> 2) Remove the host  
> 3) Run service checks
> Mapreduce service check fails here.
> Here is the live cluster: I am extending its life till 72 hours:  
>   
>  HDP/84552/>
> Artifacts:  artifacts/os-r6-phtdlu-ambari-rare-17r/ambari-rare-1457129950/artifacts/screen
> shots/com.hw.ambari.ui.tests.heavyweights.TestDeleteHostFromOriginalClusterAft
> erMoveMaster/test2_deleteAdditionalHostFromClusterAfterMoveMaster/_4_22_12_8_C
> hecking_smoke_test_for__ZOOKEEPER_service_failed/>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-15390) Ambari names jar 'ojdbc6.jar' even though it is actually ojdbc7.jar

2016-03-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-15390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15191390#comment-15191390
 ] 

Hadoop QA commented on AMBARI-15390:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12792869/AMBARI-15390.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
ambari-server:

  
org.apache.ambari.server.controller.AmbariManagementControllerImplTest

Test results: 
https://builds.apache.org/job/Ambari-trunk-test-patch/5831//testReport/
Console output: 
https://builds.apache.org/job/Ambari-trunk-test-patch/5831//console

This message is automatically generated.

> Ambari names jar 'ojdbc6.jar' even though it is actually ojdbc7.jar
> ---
>
> Key: AMBARI-15390
> URL: https://issues.apache.org/jira/browse/AMBARI-15390
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.1.2
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Critical
> Fix For: 2.4.0
>
> Attachments: AMBARI-15390.patch
>
>
> PROBLEM:
> After setting up the JDBC driver with 
> ambari-server setup --jdbc-db=oracle --jdbc-driver=/tmp/ojdbc7.jar
> and verifying that it is properly placed in /var/lib/ambari-server/resources, and 
> also cleaning up any ojdbc*.jar in the cluster, restarting the HDFS service makes 
> Ambari copy the jar but name it 'ojdbc6.jar'.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-15228) Ambari overwrites permissions on HDFS directories

2016-03-11 Thread bhuvnesh chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-15228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15191365#comment-15191365
 ] 

bhuvnesh chaudhary commented on AMBARI-15228:
-

Hello [~aonishuk] - Thank you very much. will test.

> Ambari overwrites permissions on HDFS directories
> -
>
> Key: AMBARI-15228
> URL: https://issues.apache.org/jira/browse/AMBARI-15228
> Project: Ambari
>  Issue Type: Bug
>Reporter: Andrew Onischuk
>Assignee: Andrew Onischuk
> Fix For: 2.2.2
>
> Attachments: AMBARI-15228.patch
>
>
> Ambari is overriding permissions on default HDFS directories such as /app-
> logs, /apps/hive/warehouse, and /tmp.  
> This allows any user to write to those locations, preventing access to them 
> from being controlled via Ranger/HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-15387) Pxf service status is currently red

2016-03-11 Thread bhuvnesh chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-15387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15191358#comment-15191358
 ] 

bhuvnesh chaudhary commented on AMBARI-15387:
-

Hello [~aonishuk] Thank you for the suggestion and patch, let me take it 
forward.

> Pxf service status is currently red
> ---
>
> Key: AMBARI-15387
> URL: https://issues.apache.org/jira/browse/AMBARI-15387
> Project: Ambari
>  Issue Type: Bug
>Affects Versions: 2.2.2
>Reporter: Andrew Onischuk
>Assignee: bhuvnesh chaudhary
> Fix For: 2.2.2
>
> Attachments: AMBARI-15387.patch
>
>
> {noformat}
> [root@c6401 ~]# /usr/bin/python 
> /var/lib/ambari-agent/cache/common-services/PXF/3.0.0/package/scripts/pxf.py 
> STATUS /var/lib/ambari-agent/data/status_command.json 
> /var/lib/ambari-agent/cache/common-services/PXF/3.0.0/package 
> /var/lib/ambari-agent/data/structured-out-status.json DEBUG 
> /var/lib/ambari-agent/tmp
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/common-services/PXF/3.0.0/package/scripts/pxf.py",
>  line 136, in 
> Pxf().execute()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 219, in execute
> method(env)
>   File 
> "/var/lib/ambari-agent/cache/common-services/PXF/3.0.0/package/scripts/pxf.py",
>  line 67, in status
> self.__execute_service_command("status")
>   File 
> "/var/lib/ambari-agent/cache/common-services/PXF/3.0.0/package/scripts/pxf.py",
>  line 77, in __execute_service_command
> import params
>   File 
> "/var/lib/ambari-agent/cache/common-services/PXF/3.0.0/package/scripts/params.py",
>  line 88, in 
> immutable_paths = get_not_managed_resources())
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/get_not_managed_resources.py",
>  line 36, in get_not_managed_resources
> not_managed_hdfs_path_list = 
> json.loads(config['hostLevelParams']['not_managed_hdfs_path_list'])[:]
>   File "/usr/lib64/python2.6/json/__init__.py", line 307, in loads
> return _default_decoder.decode(s)
>   File "/usr/lib64/python2.6/json/decoder.py", line 319, in decode
> obj, end = self.raw_decode(s, idx=_w(s, 0).end())
> TypeError: expected string or buffer
> {noformat}
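
The TypeError indicates json.loads() being handed a value that is already a list rather than a string; a defensive variant of that call could look like the hedged sketch below (illustrative only, not the actual fix in AMBARI-15387.patch):

{code}
# Hedged sketch: tolerate not_managed_hdfs_path_list arriving either as a JSON
# string or as an already-deserialized list, avoiding
# "TypeError: expected string or buffer".
import json

def get_not_managed_hdfs_paths(config):
    value = config['hostLevelParams']['not_managed_hdfs_path_list']
    if isinstance(value, basestring):
        return json.loads(value)[:]
    return list(value)
{code}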



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-15390) Ambari names jar 'ojdbc6.jar' even though it is actually ojdbc7.jar

2016-03-11 Thread Vitaly Brodetskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-15390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitaly Brodetskyi updated AMBARI-15390:
---
Attachment: AMBARI-15390.patch

> Ambari names jar 'ojdbc6.jar' even though it is actually ojdbc7.jar
> ---
>
> Key: AMBARI-15390
> URL: https://issues.apache.org/jira/browse/AMBARI-15390
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.1.2
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Critical
> Fix For: 2.4.0
>
> Attachments: AMBARI-15390.patch
>
>
> PROBLEM:
> After setting up the JDBC driver with 
> ambari-server setup --jdbc-db=oracle --jdbc-driver=/tmp/ojdbc7.jar
> and verifying that it is properly placed in /var/lib/ambari-server/resources, and 
> also cleaning up any ojdbc*.jar in the cluster, restarting the HDFS service makes 
> Ambari copy the jar but name it 'ojdbc6.jar'.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (AMBARI-15390) Ambari names jar 'ojdbc6.jar' even though it is actually ojdbc7.jar

2016-03-11 Thread Vitaly Brodetskyi (JIRA)
Vitaly Brodetskyi created AMBARI-15390:
--

 Summary: Ambari names jar 'ojdbc6.jar' even though it is actually 
ojdbc7.jar
 Key: AMBARI-15390
 URL: https://issues.apache.org/jira/browse/AMBARI-15390
 Project: Ambari
  Issue Type: Bug
  Components: ambari-server
Affects Versions: 2.1.2
Reporter: Vitaly Brodetskyi
Assignee: Vitaly Brodetskyi
Priority: Critical
 Fix For: 2.4.0


PROBLEM:
After setting up the JDBC driver with 
ambari-server setup --jdbc-db=oracle --jdbc-driver=/tmp/ojdbc7.jar
and verifying that it is properly placed in /var/lib/ambari-server/resources, and 
also cleaning up any ojdbc*.jar in the cluster, restarting the HDFS service makes 
Ambari copy the jar but name it 'ojdbc6.jar'.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-15389) Intermittent YARN service check failures during and post EU

2016-03-11 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-15389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-15389:

Fix Version/s: 2.2.2

> Intermittent YARN service check failures during and post EU
> ---
>
> Key: AMBARI-15389
> URL: https://issues.apache.org/jira/browse/AMBARI-15389
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.2.2
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
> Fix For: 2.2.2
>
> Attachments: AMBARI-15389.patch
>
>
> Build # - Ambari 2.2.1.1 - #63
> Observed this issue in a couple of EU runs recently where YARN service check 
> reports failure
> a. In one test, the EU ran from HDP 2.3.4.0 to 2.4.0.0 and YARN service check 
> reported failure during EU itself; a retry of the operation led to service 
> check being successful
> b. In another test post EU when YARN service check was run, it reported 
> failure; afterwards when I ran it again - success
> Looks like there is some corner condition which causes this issue to be hit
> {code}
> stderr:   /var/lib/ambari-agent/data/errors-822.txt
> Traceback (most recent call last):
> File 
> "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py",
>  line 142, in 
> ServiceCheck().execute()
> File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 219, in execute
> method(env)
> File 
> "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py",
>  line 104, in service_check
> user=params.smokeuser,
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 70, in inner
> result = function(command, **kwargs)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 92, in checked_call
> tries=tries, try_sleep=try_sleep)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 140, in _call_wrapper
> result = _call(command, **kwargs_copy)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 291, in _call
> raise Fail(err_msg)
> resource_management.core.exceptions.Fail: Execution of '/usr/bin/kinit -kt 
> /etc/security/keytabs/smokeuser.headless.keytab ambari...@example.com; yarn 
> org.apache.hadoop.yarn.applications.distributedshell.Client -shell_command ls 
> -num_containers 1 -jar 
> /usr/hdp/current/hadoop-yarn-client/hadoop-yarn-applications-distributedshell.jar'
>  returned 2.  Hortonworks #
> This is MOTD message, added for testing in qe infra
> 16/03/03 02:33:51 INFO impl.TimelineClientImpl: Timeline service address: 
> http://host:8188/ws/v1/timeline/
> 16/03/03 02:33:51 INFO distributedshell.Client: Initializing Client
> 16/03/03 02:33:51 INFO distributedshell.Client: Running Client
> 16/03/03 02:33:51 INFO client.RMProxy: Connecting to ResourceManager at 
> host-9-5.test/127.0.0.254:8050
> 16/03/03 02:33:53 INFO distributedshell.Client: Got Cluster metric info from 
> ASM, numNodeManagers=3
> 16/03/03 02:33:53 INFO distributedshell.Client: Got Cluster node info from ASM
> 16/03/03 02:33:53 INFO distributedshell.Client: Got node report from ASM for, 
> nodeId=host:25454, nodeAddresshost:8042, nodeRackName/default-rack, 
> nodeNumContainers1
> 16/03/03 02:33:53 INFO distributedshell.Client: Got node report from ASM for, 
> nodeId=host-9-5.test:25454, nodeAddresshost-9-5.test:8042, 
> nodeRackName/default-rack, nodeNumContainers0
> 16/03/03 02:33:53 INFO distributedshell.Client: Got node report from ASM for, 
> nodeId=host-9-1.test:25454, nodeAddresshost-9-1.test:8042, 
> nodeRackName/default-rack, nodeNumContainers0
> 16/03/03 02:33:53 INFO distributedshell.Client: Queue info, 
> queueName=default, queueCurrentCapacity=0.08336, queueMaxCapacity=1.0, 
> queueApplicationCount=0, queueChildQueueCount=0
> 16/03/03 02:33:53 INFO distributedshell.Client: User ACL Info for Queue, 
> queueName=root, userAcl=SUBMIT_APPLICATIONS
> 16/03/03 02:33:53 INFO distributedshell.Client: User ACL Info for Queue, 
> queueName=default, userAcl=SUBMIT_APPLICATIONS
> 16/03/03 02:33:53 INFO distributedshell.Client: Max mem capabililty of 
> resources in this cluster 10240
> 16/03/03 02:33:53 INFO distributedshell.Client: Max virtual cores capabililty 
> of resources in this cluster 1
> 16/03/03 02:33:53 INFO distributedshell.Client: Copy App Master jar from 
> local filesystem and add to local environment
> 16/03/03 02:33:53 INFO distributedshell.Client: Set the environment for the 
> application master
> 16/03/03 02:33:53 INFO distributedshell.Client: Setting up app master command
> 16/03/03 02:33:53 INFO distributedshell.Client: Completed setting up app 
> master command {{JAVA_HOME}}/bin/java 

[jira] [Updated] (AMBARI-15389) Intermittent YARN service check failures during and post EU

2016-03-11 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-15389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-15389:

Attachment: AMBARI-15389.patch

> Intermittent YARN service check failures during and post EU
> ---
>
> Key: AMBARI-15389
> URL: https://issues.apache.org/jira/browse/AMBARI-15389
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
> Attachments: AMBARI-15389.patch
>
>
> Build # - Ambari 2.2.1.1 - #63
> Observed this issue in a couple of EU runs recently where YARN service check 
> reports failure
> a. In one test, the EU ran from HDP 2.3.4.0 to 2.4.0.0 and YARN service check 
> reported failure during EU itself; a retry of the operation led to service 
> check being successful
> b. In another test post EU when YARN service check was run, it reported 
> failure; afterwards when I ran it again - success
> Looks like there is some corner condition which causes this issue to be hit
> {code}
> stderr:   /var/lib/ambari-agent/data/errors-822.txt
> Traceback (most recent call last):
> File 
> "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py",
>  line 142, in 
> ServiceCheck().execute()
> File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 219, in execute
> method(env)
> File 
> "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py",
>  line 104, in service_check
> user=params.smokeuser,
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 70, in inner
> result = function(command, **kwargs)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 92, in checked_call
> tries=tries, try_sleep=try_sleep)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 140, in _call_wrapper
> result = _call(command, **kwargs_copy)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 291, in _call
> raise Fail(err_msg)
> resource_management.core.exceptions.Fail: Execution of '/usr/bin/kinit -kt 
> /etc/security/keytabs/smokeuser.headless.keytab ambari...@example.com; yarn 
> org.apache.hadoop.yarn.applications.distributedshell.Client -shell_command ls 
> -num_containers 1 -jar 
> /usr/hdp/current/hadoop-yarn-client/hadoop-yarn-applications-distributedshell.jar'
>  returned 2.  Hortonworks #
> This is MOTD message, added for testing in qe infra
> 16/03/03 02:33:51 INFO impl.TimelineClientImpl: Timeline service address: 
> http://host:8188/ws/v1/timeline/
> 16/03/03 02:33:51 INFO distributedshell.Client: Initializing Client
> 16/03/03 02:33:51 INFO distributedshell.Client: Running Client
> 16/03/03 02:33:51 INFO client.RMProxy: Connecting to ResourceManager at 
> host-9-5.test/127.0.0.254:8050
> 16/03/03 02:33:53 INFO distributedshell.Client: Got Cluster metric info from 
> ASM, numNodeManagers=3
> 16/03/03 02:33:53 INFO distributedshell.Client: Got Cluster node info from ASM
> 16/03/03 02:33:53 INFO distributedshell.Client: Got node report from ASM for, 
> nodeId=host:25454, nodeAddresshost:8042, nodeRackName/default-rack, 
> nodeNumContainers1
> 16/03/03 02:33:53 INFO distributedshell.Client: Got node report from ASM for, 
> nodeId=host-9-5.test:25454, nodeAddresshost-9-5.test:8042, 
> nodeRackName/default-rack, nodeNumContainers0
> 16/03/03 02:33:53 INFO distributedshell.Client: Got node report from ASM for, 
> nodeId=host-9-1.test:25454, nodeAddresshost-9-1.test:8042, 
> nodeRackName/default-rack, nodeNumContainers0
> 16/03/03 02:33:53 INFO distributedshell.Client: Queue info, 
> queueName=default, queueCurrentCapacity=0.08336, queueMaxCapacity=1.0, 
> queueApplicationCount=0, queueChildQueueCount=0
> 16/03/03 02:33:53 INFO distributedshell.Client: User ACL Info for Queue, 
> queueName=root, userAcl=SUBMIT_APPLICATIONS
> 16/03/03 02:33:53 INFO distributedshell.Client: User ACL Info for Queue, 
> queueName=default, userAcl=SUBMIT_APPLICATIONS
> 16/03/03 02:33:53 INFO distributedshell.Client: Max mem capabililty of 
> resources in this cluster 10240
> 16/03/03 02:33:53 INFO distributedshell.Client: Max virtual cores capabililty 
> of resources in this cluster 1
> 16/03/03 02:33:53 INFO distributedshell.Client: Copy App Master jar from 
> local filesystem and add to local environment
> 16/03/03 02:33:53 INFO distributedshell.Client: Set the environment for the 
> application master
> 16/03/03 02:33:53 INFO distributedshell.Client: Setting up app master command
> 16/03/03 02:33:53 INFO distributedshell.Client: Completed setting up app 
> master command {{JAVA_HOME}}/bin/java -Xmx10m 
> 

[jira] [Updated] (AMBARI-15389) Intermittent YARN service check failures during and post EU

2016-03-11 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-15389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-15389:

Status: Patch Available  (was: Open)

> Intermittent YARN service check failures during and post EU
> ---
>
> Key: AMBARI-15389
> URL: https://issues.apache.org/jira/browse/AMBARI-15389
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
> Attachments: AMBARI-15389.patch
>
>
> Build # - Ambari 2.2.1.1 - #63
> Observed this issue in a couple of EU runs recently where YARN service check 
> reports failure
> a. In one test, the EU ran from HDP 2.3.4.0 to 2.4.0.0 and YARN service check 
> reported failure during EU itself; a retry of the operation led to service 
> check being successful
> b. In another test post EU when YARN service check was run, it reported 
> failure; afterwards when I ran it again - success
> Looks like there is some corner condition which causes this issue to be hit
> {code}
> stderr:   /var/lib/ambari-agent/data/errors-822.txt
> Traceback (most recent call last):
> File 
> "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py",
>  line 142, in 
> ServiceCheck().execute()
> File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 219, in execute
> method(env)
> File 
> "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py",
>  line 104, in service_check
> user=params.smokeuser,
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 70, in inner
> result = function(command, **kwargs)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 92, in checked_call
> tries=tries, try_sleep=try_sleep)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 140, in _call_wrapper
> result = _call(command, **kwargs_copy)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 291, in _call
> raise Fail(err_msg)
> resource_management.core.exceptions.Fail: Execution of '/usr/bin/kinit -kt 
> /etc/security/keytabs/smokeuser.headless.keytab ambari...@example.com; yarn 
> org.apache.hadoop.yarn.applications.distributedshell.Client -shell_command ls 
> -num_containers 1 -jar 
> /usr/hdp/current/hadoop-yarn-client/hadoop-yarn-applications-distributedshell.jar'
>  returned 2.  Hortonworks #
> This is MOTD message, added for testing in qe infra
> 16/03/03 02:33:51 INFO impl.TimelineClientImpl: Timeline service address: 
> http://host:8188/ws/v1/timeline/
> 16/03/03 02:33:51 INFO distributedshell.Client: Initializing Client
> 16/03/03 02:33:51 INFO distributedshell.Client: Running Client
> 16/03/03 02:33:51 INFO client.RMProxy: Connecting to ResourceManager at 
> host-9-5.test/127.0.0.254:8050
> 16/03/03 02:33:53 INFO distributedshell.Client: Got Cluster metric info from 
> ASM, numNodeManagers=3
> 16/03/03 02:33:53 INFO distributedshell.Client: Got Cluster node info from ASM
> 16/03/03 02:33:53 INFO distributedshell.Client: Got node report from ASM for, 
> nodeId=host:25454, nodeAddresshost:8042, nodeRackName/default-rack, 
> nodeNumContainers1
> 16/03/03 02:33:53 INFO distributedshell.Client: Got node report from ASM for, 
> nodeId=host-9-5.test:25454, nodeAddresshost-9-5.test:8042, 
> nodeRackName/default-rack, nodeNumContainers0
> 16/03/03 02:33:53 INFO distributedshell.Client: Got node report from ASM for, 
> nodeId=host-9-1.test:25454, nodeAddresshost-9-1.test:8042, 
> nodeRackName/default-rack, nodeNumContainers0
> 16/03/03 02:33:53 INFO distributedshell.Client: Queue info, 
> queueName=default, queueCurrentCapacity=0.08336, queueMaxCapacity=1.0, 
> queueApplicationCount=0, queueChildQueueCount=0
> 16/03/03 02:33:53 INFO distributedshell.Client: User ACL Info for Queue, 
> queueName=root, userAcl=SUBMIT_APPLICATIONS
> 16/03/03 02:33:53 INFO distributedshell.Client: User ACL Info for Queue, 
> queueName=default, userAcl=SUBMIT_APPLICATIONS
> 16/03/03 02:33:53 INFO distributedshell.Client: Max mem capabililty of 
> resources in this cluster 10240
> 16/03/03 02:33:53 INFO distributedshell.Client: Max virtual cores capabililty 
> of resources in this cluster 1
> 16/03/03 02:33:53 INFO distributedshell.Client: Copy App Master jar from 
> local filesystem and add to local environment
> 16/03/03 02:33:53 INFO distributedshell.Client: Set the environment for the 
> application master
> 16/03/03 02:33:53 INFO distributedshell.Client: Setting up app master command
> 16/03/03 02:33:53 INFO distributedshell.Client: Completed setting up app 
> master command {{JAVA_HOME}}/bin/java -Xmx10m 
> 

[jira] [Created] (AMBARI-15389) Intermittent YARN service check failures during and post EU

2016-03-11 Thread Dmitry Lysnichenko (JIRA)
Dmitry Lysnichenko created AMBARI-15389:
---

 Summary: Intermittent YARN service check failures during and post 
EU
 Key: AMBARI-15389
 URL: https://issues.apache.org/jira/browse/AMBARI-15389
 Project: Ambari
  Issue Type: Bug
Reporter: Dmitry Lysnichenko
Assignee: Dmitry Lysnichenko
 Attachments: AMBARI-15389.patch


Build # - Ambari 2.2.1.1 - #63

Observed this issue in a couple of recent EU runs where the YARN service check 
reports failure:
a. In one test, the EU ran from HDP 2.3.4.0 to 2.4.0.0 and the YARN service check 
reported failure during the EU itself; a retry of the operation led to the service 
check succeeding.

b. In another test, the YARN service check was run after the EU and reported 
failure; when I ran it again afterwards it succeeded.

It looks like there is some corner-case condition that causes this failure to be hit.
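
As an illustration only (not necessarily what the attached patch changes): the 
traceback below shows that resource_management's checked_call() already accepts 
tries/try_sleep arguments, so a transient failure like this could be retried at 
the service-check level. The helper below is a hedged sketch; the parameter names 
(smokeuser, kinit_cmd, dshell_jar) are hypothetical.

{code}
# Illustrative retry of the distributed-shell smoke test using the
# tries/try_sleep parameters that checked_call() exposes (see traceback below).
from resource_management.core.shell import checked_call

def run_yarn_service_check(smokeuser, kinit_cmd, dshell_jar):
    # obtain the Kerberos ticket first so a ticket problem is not retried blindly
    checked_call(kinit_cmd, user=smokeuser)
    cmd = ("yarn org.apache.hadoop.yarn.applications.distributedshell.Client "
           "-shell_command ls -num_containers 1 -jar {0}").format(dshell_jar)
    # retry the smoke test a few times before declaring the check failed
    return checked_call(cmd, user=smokeuser, tries=3, try_sleep=10)
{code}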

{code}
stderr:   /var/lib/ambari-agent/data/errors-822.txt

Traceback (most recent call last):
File 
"/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py",
 line 142, in 
ServiceCheck().execute()
File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
 line 219, in execute
method(env)
File 
"/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py",
 line 104, in service_check
user=params.smokeuser,
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 
70, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 
92, in checked_call
tries=tries, try_sleep=try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 
140, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 
291, in _call
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of '/usr/bin/kinit -kt 
/etc/security/keytabs/smokeuser.headless.keytab ambari...@example.com; yarn 
org.apache.hadoop.yarn.applications.distributedshell.Client -shell_command ls 
-num_containers 1 -jar 
/usr/hdp/current/hadoop-yarn-client/hadoop-yarn-applications-distributedshell.jar'
 returned 2.  Hortonworks #
This is MOTD message, added for testing in qe infra
16/03/03 02:33:51 INFO impl.TimelineClientImpl: Timeline service address: 
http://host:8188/ws/v1/timeline/
16/03/03 02:33:51 INFO distributedshell.Client: Initializing Client
16/03/03 02:33:51 INFO distributedshell.Client: Running Client
16/03/03 02:33:51 INFO client.RMProxy: Connecting to ResourceManager at 
host-9-5.test/127.0.0.254:8050
16/03/03 02:33:53 INFO distributedshell.Client: Got Cluster metric info from 
ASM, numNodeManagers=3
16/03/03 02:33:53 INFO distributedshell.Client: Got Cluster node info from ASM
16/03/03 02:33:53 INFO distributedshell.Client: Got node report from ASM for, 
nodeId=host:25454, nodeAddresshost:8042, nodeRackName/default-rack, 
nodeNumContainers1
16/03/03 02:33:53 INFO distributedshell.Client: Got node report from ASM for, 
nodeId=host-9-5.test:25454, nodeAddresshost-9-5.test:8042, 
nodeRackName/default-rack, nodeNumContainers0
16/03/03 02:33:53 INFO distributedshell.Client: Got node report from ASM for, 
nodeId=host-9-1.test:25454, nodeAddresshost-9-1.test:8042, 
nodeRackName/default-rack, nodeNumContainers0
16/03/03 02:33:53 INFO distributedshell.Client: Queue info, queueName=default, 
queueCurrentCapacity=0.08336, queueMaxCapacity=1.0, 
queueApplicationCount=0, queueChildQueueCount=0
16/03/03 02:33:53 INFO distributedshell.Client: User ACL Info for Queue, 
queueName=root, userAcl=SUBMIT_APPLICATIONS
16/03/03 02:33:53 INFO distributedshell.Client: User ACL Info for Queue, 
queueName=default, userAcl=SUBMIT_APPLICATIONS
16/03/03 02:33:53 INFO distributedshell.Client: Max mem capabililty of 
resources in this cluster 10240
16/03/03 02:33:53 INFO distributedshell.Client: Max virtual cores capabililty 
of resources in this cluster 1
16/03/03 02:33:53 INFO distributedshell.Client: Copy App Master jar from local 
filesystem and add to local environment
16/03/03 02:33:53 INFO distributedshell.Client: Set the environment for the 
application master
16/03/03 02:33:53 INFO distributedshell.Client: Setting up app master command
16/03/03 02:33:53 INFO distributedshell.Client: Completed setting up app master 
command {{JAVA_HOME}}/bin/java -Xmx10m 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster 
--container_memory 10 --container_vcores 1 --num_containers 1 --priority 0 
1>/AppMaster.stdout 2>/AppMaster.stderr
16/03/03 02:33:53 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 290 
for ambari-qa on 127.0.0.235:8020
16/03/03 02:33:53 INFO distributedshell.Client: Got dt for 
hdfs://host-9-1.test:8020; Kind: HDFS_DELEGATION_TOKEN, Service: 
127.0.0.235:8020, 

[jira] [Updated] (AMBARI-15389) Intermittent YARN service check failures during and post EU

2016-03-11 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-15389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-15389:

Component/s: ambari-server

> Intermittent YARN service check failures during and post EU
> ---
>
> Key: AMBARI-15389
> URL: https://issues.apache.org/jira/browse/AMBARI-15389
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
> Attachments: AMBARI-15389.patch
>
>
> Build # - Ambari 2.2.1.1 - #63
> Observed this issue in a couple of EU runs recently where YARN service check 
> reports failure
> a. In one test, the EU ran from HDP 2.3.4.0 to 2.4.0.0 and YARN service check 
> reported failure during EU itself; a retry of the operation led to service 
> check being successful
> b. In another test post EU when YARN service check was run, it reported 
> failure; afterwards when I ran it again - success
> Looks like there is some corner condition which causes this issue to be hit
> {code}
> stderr:   /var/lib/ambari-agent/data/errors-822.txt
> Traceback (most recent call last):
> File 
> "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py",
>  line 142, in 
> ServiceCheck().execute()
> File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 219, in execute
> method(env)
> File 
> "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py",
>  line 104, in service_check
> user=params.smokeuser,
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 70, in inner
> result = function(command, **kwargs)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 92, in checked_call
> tries=tries, try_sleep=try_sleep)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 140, in _call_wrapper
> result = _call(command, **kwargs_copy)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 291, in _call
> raise Fail(err_msg)
> resource_management.core.exceptions.Fail: Execution of '/usr/bin/kinit -kt 
> /etc/security/keytabs/smokeuser.headless.keytab ambari...@example.com; yarn 
> org.apache.hadoop.yarn.applications.distributedshell.Client -shell_command ls 
> -num_containers 1 -jar 
> /usr/hdp/current/hadoop-yarn-client/hadoop-yarn-applications-distributedshell.jar'
>  returned 2.  Hortonworks #
> This is MOTD message, added for testing in qe infra
> 16/03/03 02:33:51 INFO impl.TimelineClientImpl: Timeline service address: 
> http://host:8188/ws/v1/timeline/
> 16/03/03 02:33:51 INFO distributedshell.Client: Initializing Client
> 16/03/03 02:33:51 INFO distributedshell.Client: Running Client
> 16/03/03 02:33:51 INFO client.RMProxy: Connecting to ResourceManager at 
> host-9-5.test/127.0.0.254:8050
> 16/03/03 02:33:53 INFO distributedshell.Client: Got Cluster metric info from 
> ASM, numNodeManagers=3
> 16/03/03 02:33:53 INFO distributedshell.Client: Got Cluster node info from ASM
> 16/03/03 02:33:53 INFO distributedshell.Client: Got node report from ASM for, 
> nodeId=host:25454, nodeAddresshost:8042, nodeRackName/default-rack, 
> nodeNumContainers1
> 16/03/03 02:33:53 INFO distributedshell.Client: Got node report from ASM for, 
> nodeId=host-9-5.test:25454, nodeAddresshost-9-5.test:8042, 
> nodeRackName/default-rack, nodeNumContainers0
> 16/03/03 02:33:53 INFO distributedshell.Client: Got node report from ASM for, 
> nodeId=host-9-1.test:25454, nodeAddresshost-9-1.test:8042, 
> nodeRackName/default-rack, nodeNumContainers0
> 16/03/03 02:33:53 INFO distributedshell.Client: Queue info, 
> queueName=default, queueCurrentCapacity=0.08336, queueMaxCapacity=1.0, 
> queueApplicationCount=0, queueChildQueueCount=0
> 16/03/03 02:33:53 INFO distributedshell.Client: User ACL Info for Queue, 
> queueName=root, userAcl=SUBMIT_APPLICATIONS
> 16/03/03 02:33:53 INFO distributedshell.Client: User ACL Info for Queue, 
> queueName=default, userAcl=SUBMIT_APPLICATIONS
> 16/03/03 02:33:53 INFO distributedshell.Client: Max mem capabililty of 
> resources in this cluster 10240
> 16/03/03 02:33:53 INFO distributedshell.Client: Max virtual cores capabililty 
> of resources in this cluster 1
> 16/03/03 02:33:53 INFO distributedshell.Client: Copy App Master jar from 
> local filesystem and add to local environment
> 16/03/03 02:33:53 INFO distributedshell.Client: Set the environment for the 
> application master
> 16/03/03 02:33:53 INFO distributedshell.Client: Setting up app master command
> 16/03/03 02:33:53 INFO distributedshell.Client: Completed setting up app 
> master command {{JAVA_HOME}}/bin/java -Xmx10m 
> 

[jira] [Updated] (AMBARI-15387) Pxf service status is currently red

2016-03-11 Thread Andrew Onischuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-15387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Onischuk updated AMBARI-15387:
-
Attachment: AMBARI-15387.patch

> Pxf service status is currently red
> ---
>
> Key: AMBARI-15387
> URL: https://issues.apache.org/jira/browse/AMBARI-15387
> Project: Ambari
>  Issue Type: Bug
>Affects Versions: 2.2.2
>Reporter: Andrew Onischuk
>Assignee: bhuvnesh chaudhary
> Fix For: 2.2.2
>
> Attachments: AMBARI-15387.patch
>
>
> {noformat}
> [root@c6401 ~]# /usr/bin/python 
> /var/lib/ambari-agent/cache/common-services/PXF/3.0.0/package/scripts/pxf.py 
> STATUS /var/lib/ambari-agent/data/status_command.json 
> /var/lib/ambari-agent/cache/common-services/PXF/3.0.0/package 
> /var/lib/ambari-agent/data/structured-out-status.json DEBUG 
> /var/lib/ambari-agent/tmp
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/common-services/PXF/3.0.0/package/scripts/pxf.py",
>  line 136, in 
> Pxf().execute()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 219, in execute
> method(env)
>   File 
> "/var/lib/ambari-agent/cache/common-services/PXF/3.0.0/package/scripts/pxf.py",
>  line 67, in status
> self.__execute_service_command("status")
>   File 
> "/var/lib/ambari-agent/cache/common-services/PXF/3.0.0/package/scripts/pxf.py",
>  line 77, in __execute_service_command
> import params
>   File 
> "/var/lib/ambari-agent/cache/common-services/PXF/3.0.0/package/scripts/params.py",
>  line 88, in 
> immutable_paths = get_not_managed_resources())
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/get_not_managed_resources.py",
>  line 36, in get_not_managed_resources
> not_managed_hdfs_path_list = 
> json.loads(config['hostLevelParams']['not_managed_hdfs_path_list'])[:]
>   File "/usr/lib64/python2.6/json/__init__.py", line 307, in loads
> return _default_decoder.decode(s)
>   File "/usr/lib64/python2.6/json/decoder.py", line 319, in decode
> obj, end = self.raw_decode(s, idx=_w(s, 0).end())
> TypeError: expected string or buffer
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-15387) Pxf service status is currently red

2016-03-11 Thread Andrew Onischuk (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-15387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15191184#comment-15191184
 ] 

Andrew Onischuk commented on AMBARI-15387:
--

For status commands we do not add not_managed_hdfs_path_list to hostLevelParams. 
In many services we have separate status_params.py and params.py files; during a 
status check only status_params.py is imported.
We can do the same for PXF.

The attached patch fixes the issue.
I am not sure how to test the PXF service, which is why I am assigning this to you 
to fix and commit.
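
A minimal sketch of the split described above, so it is clear why importing 
params.py fails during STATUS. File contents, the class body, and the pid-file 
path are illustrative assumptions; the real change is the attached 
AMBARI-15387.patch.

{code}
# --- status_params.py: only what a STATUS command needs ---------------------
from resource_management.libraries.script.script import Script

config = Script.get_config()
pxf_pid_file = "/var/run/pxf/pxf-service.pid"   # hypothetical location

# --- pxf.py: the status() handler imports status_params, not params ---------
from resource_management.libraries.functions.check_process_status import check_process_status

class Pxf(Script):
    def status(self, env):
        # params.py is never imported here, so hostLevelParams entries that
        # status commands do not carry (e.g. not_managed_hdfs_path_list)
        # are never touched and the TypeError above cannot occur.
        import status_params
        env.set_params(status_params)
        check_process_status(status_params.pxf_pid_file)
{code}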

> Pxf service status is currently red
> ---
>
> Key: AMBARI-15387
> URL: https://issues.apache.org/jira/browse/AMBARI-15387
> Project: Ambari
>  Issue Type: Bug
>Affects Versions: 2.2.2
>Reporter: Andrew Onischuk
>Assignee: bhuvnesh chaudhary
> Fix For: 2.2.2
>
> Attachments: AMBARI-15387.patch
>
>
> {noformat}
> [root@c6401 ~]# /usr/bin/python 
> /var/lib/ambari-agent/cache/common-services/PXF/3.0.0/package/scripts/pxf.py 
> STATUS /var/lib/ambari-agent/data/status_command.json 
> /var/lib/ambari-agent/cache/common-services/PXF/3.0.0/package 
> /var/lib/ambari-agent/data/structured-out-status.json DEBUG 
> /var/lib/ambari-agent/tmp
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/common-services/PXF/3.0.0/package/scripts/pxf.py",
>  line 136, in 
> Pxf().execute()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 219, in execute
> method(env)
>   File 
> "/var/lib/ambari-agent/cache/common-services/PXF/3.0.0/package/scripts/pxf.py",
>  line 67, in status
> self.__execute_service_command("status")
>   File 
> "/var/lib/ambari-agent/cache/common-services/PXF/3.0.0/package/scripts/pxf.py",
>  line 77, in __execute_service_command
> import params
>   File 
> "/var/lib/ambari-agent/cache/common-services/PXF/3.0.0/package/scripts/params.py",
>  line 88, in 
> immutable_paths = get_not_managed_resources())
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/get_not_managed_resources.py",
>  line 36, in get_not_managed_resources
> not_managed_hdfs_path_list = 
> json.loads(config['hostLevelParams']['not_managed_hdfs_path_list'])[:]
>   File "/usr/lib64/python2.6/json/__init__.py", line 307, in loads
> return _default_decoder.decode(s)
>   File "/usr/lib64/python2.6/json/decoder.py", line 319, in decode
> obj, end = self.raw_decode(s, idx=_w(s, 0).end())
> TypeError: expected string or buffer
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (AMBARI-15387) Pxf service status is currently red

2016-03-11 Thread Andrew Onischuk (JIRA)
Andrew Onischuk created AMBARI-15387:


 Summary: Pxf service status is currently red
 Key: AMBARI-15387
 URL: https://issues.apache.org/jira/browse/AMBARI-15387
 Project: Ambari
  Issue Type: Bug
Affects Versions: 2.2.2
Reporter: Andrew Onischuk
Assignee: bhuvnesh chaudhary
 Fix For: 2.2.2


{noformat}
[root@c6401 ~]# /usr/bin/python 
/var/lib/ambari-agent/cache/common-services/PXF/3.0.0/package/scripts/pxf.py 
STATUS /var/lib/ambari-agent/data/status_command.json 
/var/lib/ambari-agent/cache/common-services/PXF/3.0.0/package 
/var/lib/ambari-agent/data/structured-out-status.json DEBUG 
/var/lib/ambari-agent/tmp
Traceback (most recent call last):
  File 
"/var/lib/ambari-agent/cache/common-services/PXF/3.0.0/package/scripts/pxf.py", 
line 136, in 
Pxf().execute()
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
 line 219, in execute
method(env)
  File 
"/var/lib/ambari-agent/cache/common-services/PXF/3.0.0/package/scripts/pxf.py", 
line 67, in status
self.__execute_service_command("status")
  File 
"/var/lib/ambari-agent/cache/common-services/PXF/3.0.0/package/scripts/pxf.py", 
line 77, in __execute_service_command
import params
  File 
"/var/lib/ambari-agent/cache/common-services/PXF/3.0.0/package/scripts/params.py",
 line 88, in 
immutable_paths = get_not_managed_resources())
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/functions/get_not_managed_resources.py",
 line 36, in get_not_managed_resources
not_managed_hdfs_path_list = 
json.loads(config['hostLevelParams']['not_managed_hdfs_path_list'])[:]
  File "/usr/lib64/python2.6/json/__init__.py", line 307, in loads
return _default_decoder.decode(s)
  File "/usr/lib64/python2.6/json/decoder.py", line 319, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
TypeError: expected string or buffer
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (AMBARI-15386) Hive View : Upload Table : Tables are not delete in case of exception.

2016-03-11 Thread Nitiraj Singh Rathore (JIRA)
Nitiraj Singh Rathore created AMBARI-15386:
--

 Summary: Hive View : Upload Table : Tables are not delete in case 
of exception.
 Key: AMBARI-15386
 URL: https://issues.apache.org/jira/browse/AMBARI-15386
 Project: Ambari
  Issue Type: Bug
  Components: ambari-views
Affects Versions: 2.2.0
Reporter: Nitiraj Singh Rathore
Assignee: Nitiraj Singh Rathore
 Fix For: 2.3.0


Let's say you try to create a table named T1 using the Hive View upload table 
feature.
If an error occurs after the creation of T1 and the temporary table, but before 
the data is transferred to T1,
then the flow stops and neither T1 nor the temporary table is deleted from Hive.
This can cause errors on the next attempt, or leave lots of temporary tables 
in the Hive database.
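
A sketch of the cleanup behaviour the report asks for. The real Hive View code is 
Java; the execute_hql callable and all names below are purely hypothetical stand-ins 
for whatever statement executor the view uses.

{code}
def upload_table(execute_hql, table, temp_table, create_ddl, temp_ddl, insert_dml):
    """Create the target and temporary tables, copy the data, and drop whatever
    was created if any step fails so the next attempt starts clean."""
    created = []
    try:
        execute_hql(create_ddl)        # CREATE TABLE <table> ...
        created.append(table)
        execute_hql(temp_ddl)          # CREATE TABLE <temp_table> ...
        created.append(temp_table)
        execute_hql(insert_dml)        # INSERT INTO <table> SELECT ... FROM <temp_table>
        execute_hql("DROP TABLE IF EXISTS {0}".format(temp_table))
    except Exception:
        # roll back anything created so far instead of leaving orphan tables
        for t in reversed(created):
            execute_hql("DROP TABLE IF EXISTS {0}".format(t))
        raise
{code}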




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-15371) Increase the default http header size of the Ambari server to 64k

2016-03-11 Thread Myroslav Papirkovskyy (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-15371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Myroslav Papirkovskyy updated AMBARI-15371:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Unit test failure is not related.
Pushed to trunk and branch-2.2

> Increase the default http header size of the Ambari server to 64k
> -
>
> Key: AMBARI-15371
> URL: https://issues.apache.org/jira/browse/AMBARI-15371
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.2.2
>Reporter: Myroslav Papirkovskyy
>Assignee: Myroslav Papirkovskyy
>Priority: Critical
> Fix For: 2.2.2
>
> Attachments: AMBARI-15371.patch, AMBARI-15371_branch-2.2.patch
>
>
> Increase the default http header size to 64k for Ambari server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-15375) NPE while gathering JMX ports from configs

2016-03-11 Thread Myroslav Papirkovskyy (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-15375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Myroslav Papirkovskyy updated AMBARI-15375:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Pushed to trunk and branch-2.2

> NPE while gathering JMX ports from configs
> --
>
> Key: AMBARI-15375
> URL: https://issues.apache.org/jira/browse/AMBARI-15375
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.2.2
>Reporter: Myroslav Papirkovskyy
>Assignee: Myroslav Papirkovskyy
>Priority: Critical
> Fix For: 2.2.2
>
> Attachments: AMBARI-15375.patch, AMBARI-15375_branch-2.2.patch
>
>
> Sometimes getting JMX ports from configs throws NPE exception.
> {noformat}
> org.apache.ambari.server.controller.spi.SystemException: Caught exception 
> getting JMX metrics : null
>   at 
> org.apache.ambari.server.controller.metrics.ThreadPoolEnabledPropertyProvider.rethrowSystemException(ThreadPoolEnabledPropertyProvider.java:245)
>   at 
> org.apache.ambari.server.controller.metrics.ThreadPoolEnabledPropertyProvider.populateResources(ThreadPoolEnabledPropertyProvider.java:155)
>   at 
> org.apache.ambari.server.controller.internal.StackDefinedPropertyProvider.populateResources(StackDefinedPropertyProvider.java:200)
>   at 
> org.apache.ambari.server.controller.internal.ClusterControllerImpl.populateResources(ClusterControllerImpl.java:146)
>   at 
> org.apache.ambari.server.api.query.QueryImpl.queryForResources(QueryImpl.java:406)
>   at 
> org.apache.ambari.server.api.query.QueryImpl.execute(QueryImpl.java:216)
>   at 
> org.apache.ambari.server.api.handlers.ReadHandler.handleRequest(ReadHandler.java:68)
>   at 
> org.apache.ambari.server.api.services.BaseRequest.process(BaseRequest.java:135)
>   at 
> org.apache.ambari.server.api.services.BaseService.handleRequest(BaseService.java:106)
>   at 
> org.apache.ambari.server.api.services.BaseService.handleRequest(BaseService.java:75)
>   at 
> org.apache.ambari.server.api.services.HostComponentService.getHostComponent(HostComponentService.java:89)
>   at sun.reflect.GeneratedMethodAccessor106.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
>   at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
>   at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
>   at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
>   at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
>   at 
> com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:137)
>   at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
>   at 
> com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:137)
>   at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
>   at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
>   at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
>   at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
>   at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
>   at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:540)
>   at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:715)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
>   at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:684)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1496)
>   at 
> 

[jira] [Updated] (AMBARI-15372) Zookeeper service check fails after removing a host

2016-03-11 Thread Andrew Onischuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-15372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Onischuk updated AMBARI-15372:
-
Attachment: AMBARI-15372.patch

> Zookeeper service check fails after removing a host 
> 
>
> Key: AMBARI-15372
> URL: https://issues.apache.org/jira/browse/AMBARI-15372
> Project: Ambari
>  Issue Type: Bug
>Reporter: Andrew Onischuk
>Assignee: Andrew Onischuk
> Fix For: 2.2.2
>
> Attachments: AMBARI-15372.patch
>
>
> This was reproduced in the rerun.  
> STR:  
> 1) Move all the masters from the host to be deleted (in this case, SNamenode
> was moved)  
> 2) Remove the host  
> 3) Run service checks
> Mapreduce service check fails here.
> Here is the live cluster: I am extending its life till 72 hours:  
>   
>  HDP/84552/>
> Artifacts:  artifacts/os-r6-phtdlu-ambari-rare-17r/ambari-rare-1457129950/artifacts/screen
> shots/com.hw.ambari.ui.tests.heavyweights.TestDeleteHostFromOriginalClusterAft
> erMoveMaster/test2_deleteAdditionalHostFromClusterAfterMoveMaster/_4_22_12_8_C
> hecking_smoke_test_for__ZOOKEEPER_service_failed/>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-15372) Zookeeper service check fails after removing a host

2016-03-11 Thread Andrew Onischuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-15372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Onischuk updated AMBARI-15372:
-
Attachment: (was: AMBARI-15372.patch)

> Zookeeper service check fails after removing a host 
> 
>
> Key: AMBARI-15372
> URL: https://issues.apache.org/jira/browse/AMBARI-15372
> Project: Ambari
>  Issue Type: Bug
>Reporter: Andrew Onischuk
>Assignee: Andrew Onischuk
> Fix For: 2.2.2
>
>
> This was reproduced in the rerun.  
> STR:  
> 1) Move all the masters from the host to be deleted (in this case, SNamenode
> was moved)  
> 2) Remove the host  
> 3) Run service checks
> Mapreduce service check fails here.
> Here is the live cluster: I am extending its life till 72 hours:  
>   
>  HDP/84552/>
> Artifacts:  artifacts/os-r6-phtdlu-ambari-rare-17r/ambari-rare-1457129950/artifacts/screen
> shots/com.hw.ambari.ui.tests.heavyweights.TestDeleteHostFromOriginalClusterAft
> erMoveMaster/test2_deleteAdditionalHostFromClusterAfterMoveMaster/_4_22_12_8_C
> hecking_smoke_test_for__ZOOKEEPER_service_failed/>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-14556) Role based access control UX fixes

2016-03-11 Thread Mahadev konar (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mahadev konar updated AMBARI-14556:
---
Fix Version/s: 2.4.0

> Role based access control UX fixes
> --
>
> Key: AMBARI-14556
> URL: https://issues.apache.org/jira/browse/AMBARI-14556
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-admin
>Reporter: Richard Zang
>Assignee: Richard Zang
> Fix For: 2.4.0
>
> Attachments: AMBARI-14556.patch
>
>
> Order of blocks in manage access page needs to be reversed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-15385) Counter values need to request for rate function to maintain parity

2016-03-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-15385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15190927#comment-15190927
 ] 

Hadoop QA commented on AMBARI-15385:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12792797/AMBARI-15385.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
ambari-funtest ambari-server ambari-web.

Test results: 
https://builds.apache.org/job/Ambari-trunk-test-patch/5829//testReport/
Console output: 
https://builds.apache.org/job/Ambari-trunk-test-patch/5829//console

This message is automatically generated.

> Counter values need to request for rate function to maintain parity
> ---
>
> Key: AMBARI-15385
> URL: https://issues.apache.org/jira/browse/AMBARI-15385
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.2.2
>Reporter: Andrii Tkach
>Assignee: Andrii Tkach
>Priority: Critical
> Fix For: 2.2.2
>
> Attachments: AMBARI-15385.patch
>
>
> The AMS API returns Counter metrics as-is, i.e. as a monotonically increasing 
> sequence. To maintain parity with the previous implementation, Counter metrics 
> need to be requested with *._rate* as the aggregate function.
> List of Counter metrics can be obtained by making a call to AMS metadata API:
> http://localhost:6188/ws/v1/timeline/metrics/metadata
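
A hedged sketch of the request pattern described above: discover Counter metrics 
from the metadata endpoint and append ._rate when requesting them. The query 
parameters (metricNames, appId, hostname) and the metadata field names 
("metricname", "type") are assumptions about a typical AMS collector, not taken 
from this patch.

{code}
import requests

AMS = "http://localhost:6188/ws/v1/timeline"

# field names below ("metricname", "type") are assumed
metadata = requests.get(AMS + "/metrics/metadata").json()
counters = [m["metricname"]
            for metrics in metadata.values()
            for m in metrics
            if m.get("type") == "COUNTER"]

# ask for the rate instead of the raw monotonically increasing values
params = {
    "metricNames": ",".join(name + "._rate" for name in counters[:5]),
    "appId": "HOST",                          # hypothetical appId
    "hostname": "c6401.ambari.apache.org",    # hypothetical host
}
print(requests.get(AMS + "/metrics", params=params).json())
{code}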



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-15385) Counter values need to request for rate function to maintain parity

2016-03-11 Thread Andrii Tkach (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-15385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15190902#comment-15190902
 ] 

Andrii Tkach commented on AMBARI-15385:
---

  24564 tests complete (30 seconds)
  145 tests pending

> Counter values need to request for rate function to maintain parity
> ---
>
> Key: AMBARI-15385
> URL: https://issues.apache.org/jira/browse/AMBARI-15385
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.2.2
>Reporter: Andrii Tkach
>Assignee: Andrii Tkach
>Priority: Critical
> Fix For: 2.2.2
>
> Attachments: AMBARI-15385.patch
>
>
> The AMS API returns Counter metrics as-is, i.e. as a monotonically increasing 
> sequence. To maintain parity with the previous implementation, Counter metrics 
> need to be requested with *._rate* as the aggregate function.
> List of Counter metrics can be obtained by making a call to AMS metadata API:
> http://localhost:6188/ws/v1/timeline/metrics/metadata



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-15385) Counter values need to request for rate function to maintain parity

2016-03-11 Thread Andrii Tkach (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-15385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrii Tkach updated AMBARI-15385:
--
Status: Patch Available  (was: Open)

> Counter values need to request for rate function to maintain parity
> ---
>
> Key: AMBARI-15385
> URL: https://issues.apache.org/jira/browse/AMBARI-15385
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.2.2
>Reporter: Andrii Tkach
>Assignee: Andrii Tkach
>Priority: Critical
> Fix For: 2.2.2
>
> Attachments: AMBARI-15385.patch
>
>
> The AMS API returns Counter metrics as-is, i.e. as a monotonically increasing 
> sequence. To maintain parity with the previous implementation, Counter metrics 
> need to be requested with *._rate* as the aggregate function.
> List of Counter metrics can be obtained by making a call to AMS metadata API:
> http://localhost:6188/ws/v1/timeline/metrics/metadata



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-15294) HBase RegionServer circular decommission

2016-03-11 Thread Krisztian Horvath (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-15294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krisztian Horvath updated AMBARI-15294:
---
Description: 
Provision a cluster with the attached blueprint with more than 2 hosts in the 
host_group_slave_1 host group. Put a few RegionServers into maintenance mode and 
try to decommission one of the RSs. It will try to find valid targets to move 
the data to, but it includes RSs that are in maintenance mode.
With a larger number of decommissions, e.g. 20, this leads to circular data 
movements, as RSs that are decommissioning copy their data to other RSs that 
are also decommissioning.
In the log you can see the selected targets and 80% of them are in maintenance 
mode. 15 nodes decommission 1 RS.


  was:
rovision a cluster with the attached blueprint with more than 2 hosts in the 
host_group_slave_1 host group. Put a few RegionServers to maintenance mode and 
try to decommission one of the RSs. It will try to find valid targets to move 
the data to, but it includes RSs in maintenance mode.
In case of larger number of decommission like 20, it leads to circular data 
movements as RSs which are decommissioning will copy their data to other RSs 
which are also decommissioning.
In the log you can see the selected targets and 80% of them is in maintenance 
mode. 15 nodes decommission 1 RS.



> HBase RegionServer circular decommission
> 
>
> Key: AMBARI-15294
> URL: https://issues.apache.org/jira/browse/AMBARI-15294
> Project: Ambari
>  Issue Type: Bug
>Affects Versions: 2.2.1
>Reporter: Mahadev konar
>
> Provision a cluster with the attached blueprint with more than 2 hosts in the 
> host_group_slave_1 host group. Put a few RegionServers into maintenance mode 
> and try to decommission one of the RSs. It will try to find valid targets to 
> move the data to, but it includes RSs that are in maintenance mode.
> With a larger number of decommissions, e.g. 20, this leads to circular data 
> movements, as RSs that are decommissioning copy their data to other RSs that 
> are also decommissioning.
> In the log you can see the selected targets and 80% of them are in maintenance 
> mode. 15 nodes decommission 1 RS.
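
No patch is attached to this thread, so the following is only a sketch of the 
target-selection rule the report implies: hosts in maintenance mode, or hosts 
that are themselves being decommissioned, should not be offered as data-movement 
targets. All names are hypothetical.

{code}
def valid_decommission_targets(region_servers, decommissioning, in_maintenance):
    """Pick RegionServers that may receive regions from a decommissioning RS.

    region_servers:  iterable of all live RS host names
    decommissioning: set of hosts currently being decommissioned
    in_maintenance:  set of hosts in maintenance mode
    """
    excluded = set(decommissioning) | set(in_maintenance)
    return [rs for rs in region_servers if rs not in excluded]

# e.g. with 15 slaves, 12 in maintenance and 1 decommissioning, only the
# remaining healthy hosts come back as valid targets
{code}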



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-15355) After EU from 2.2 -> 2.3 HBase region server started failed with memory error

2016-03-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-15355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15190859#comment-15190859
 ] 

Hudson commented on AMBARI-15355:
-

FAILURE: Integrated in Ambari-branch-2.2 #499 (See 
[https://builds.apache.org/job/Ambari-branch-2.2/499/])
AMBARI-15355. After EU from 2.2 -> 2.3 HBase region server started 
(dlysnichenko: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git=commit=13b5f2d20814e51bb7927cfda1ef87b0ef652ddc])
* ambari-server/src/test/python/stacks/2.3/common/test_stack_advisor.py
* ambari-server/src/main/resources/stacks/HDP/2.2/services/stack_advisor.py


> After EU from 2.2 -> 2.3 HBase region server started failed with memory error
> -
>
> Key: AMBARI-15355
> URL: https://issues.apache.org/jira/browse/AMBARI-15355
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.2.2
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
> Fix For: 2.2.2
>
> Attachments: AMBARI-15355.patch
>
>
> On one of the clusters i did EU from 2.2.x to 2.3.x.
> During upgrade there were problems with HBase service checks for region 
> servers and thus upgrade is paused.
> Region server start is failing with error
> {code}
> 2016-03-03 19:55:31,203 ERROR [regionserver:16020] 
> regionserver.HRegionServer: Failed init
> java.lang.OutOfMemoryError: Direct buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:658)
> at java.nio.DirectByteBuffer.(DirectByteBuffer.java:123)
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:306)
> at 
> org.apache.hadoop.hbase.util.ByteBufferArray.(ByteBufferArray.java:65)
> at 
> org.apache.hadoop.hbase.io.hfile.bucket.ByteBufferIOEngine.(ByteBufferIOEngine.java:47)
> at 
> org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.getIOEngineFromName(BucketCache.java:307)
> at 
> org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.(BucketCache.java:217)
> at 
> org.apache.hadoop.hbase.io.hfile.CacheConfig.getBucketCache(CacheConfig.java:614)
> at org.apache.hadoop.hbase.io.hfile.CacheConfig.getL2(CacheConfig.java:553)
> at 
> org.apache.hadoop.hbase.io.hfile.CacheConfig.instantiateBlockCache(CacheConfig.java:637)
> at org.apache.hadoop.hbase.io.hfile.CacheConfig.(CacheConfig.java:231)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1361)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:899)
> at java.lang.Thread.run(Thread.java:745)
> 2016-03-03 19:55:31,206 FATAL [regionserver:16020] 
> regionserver.RSRpcServices: Run out of memory; RSRpcServices will abort 
> itself immediately
> java.lang.OutOfMemoryError: Direct buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:658)
> at java.nio.DirectByteBuffer.(DirectByteBuffer.java:123)
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:306)
> at 
> org.apache.hadoop.hbase.util.ByteBufferArray.(ByteBufferArray.java:65)
> at 
> org.apache.hadoop.hbase.io.hfile.bucket.ByteBufferIOEngine.(ByteBufferIOEngine.java:47)
> at 
> org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.getIOEngineFromName(BucketCache.java:307)
> at 
> org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.(BucketCache.java:217)
> at 
> org.apache.hadoop.hbase.io.hfile.CacheConfig.getBucketCache(CacheConfig.java:614)
> at org.apache.hadoop.hbase.io.hfile.CacheConfig.getL2(CacheConfig.java:553)
> at 
> org.apache.hadoop.hbase.io.hfile.CacheConfig.instantiateBlockCache(CacheConfig.java:637)
> at org.apache.hadoop.hbase.io.hfile.CacheConfig.(CacheConfig.java:231)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1361)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:899)
> at java.lang.Thread.run(Thread.java:745)
> 2016-03-03 19:55:35,138 INFO  [main] zookeeper.ZooKeeper: Client 
> environment:zookeeper.version=3.4.6-3485--1, built on 12/16/2015 02:35 GMT
> {code}
> This was seen on the following cluster: 
> https://s.c:8443/#/main/services/HBASE/configs
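
The commit referenced above touched HDP 2.2's stack_advisor.py, but the change 
itself is not shown in this thread. The standalone sketch below only illustrates 
the sizing relationship behind the error in the log: with an off-heap BucketCache 
(ByteBufferIOEngine), the JVM's direct-memory limit must cover the bucket cache 
plus some headroom, otherwise allocation fails with "Direct buffer memory". The 
function and headroom value are assumptions, not the patch's logic.

{code}
def recommended_max_direct_memory_mb(bucketcache_size_mb, headroom_mb=256):
    """Minimum -XX:MaxDirectMemorySize (in MB) needed when an off-heap
    BucketCache of the given size is configured; the cache is allocated from
    direct buffers, so the limit must exceed it with some headroom left for
    other direct-buffer users."""
    if bucketcache_size_mb <= 0:
        return 0   # no off-heap cache configured, no extra direct memory needed
    return bucketcache_size_mb + headroom_mb

# e.g. a 4096 MB bucket cache needs at least ~4352 MB of direct memory
print(recommended_max_direct_memory_mb(4096))
{code}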



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-15355) After EU from 2.2 -> 2.3 HBase region server started failed with memory error

2016-03-11 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-15355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-15355:

Fix Version/s: 2.2.2

> After EU from 2.2 -> 2.3 HBase region server started failed with memory error
> -
>
> Key: AMBARI-15355
> URL: https://issues.apache.org/jira/browse/AMBARI-15355
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.2.2
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
> Fix For: 2.2.2
>
> Attachments: AMBARI-15355.patch
>
>
> On one of the clusters i did EU from 2.2.x to 2.3.x.
> During upgrade there were problems with HBase service checks for region 
> servers and thus upgrade is paused.
> Region server start is failing with error
> {code}
> 2016-03-03 19:55:31,203 ERROR [regionserver:16020] 
> regionserver.HRegionServer: Failed init
> java.lang.OutOfMemoryError: Direct buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:658)
> at java.nio.DirectByteBuffer.(DirectByteBuffer.java:123)
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:306)
> at 
> org.apache.hadoop.hbase.util.ByteBufferArray.(ByteBufferArray.java:65)
> at 
> org.apache.hadoop.hbase.io.hfile.bucket.ByteBufferIOEngine.(ByteBufferIOEngine.java:47)
> at 
> org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.getIOEngineFromName(BucketCache.java:307)
> at 
> org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.(BucketCache.java:217)
> at 
> org.apache.hadoop.hbase.io.hfile.CacheConfig.getBucketCache(CacheConfig.java:614)
> at org.apache.hadoop.hbase.io.hfile.CacheConfig.getL2(CacheConfig.java:553)
> at 
> org.apache.hadoop.hbase.io.hfile.CacheConfig.instantiateBlockCache(CacheConfig.java:637)
> at org.apache.hadoop.hbase.io.hfile.CacheConfig.(CacheConfig.java:231)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1361)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:899)
> at java.lang.Thread.run(Thread.java:745)
> 2016-03-03 19:55:31,206 FATAL [regionserver:16020] 
> regionserver.RSRpcServices: Run out of memory; RSRpcServices will abort 
> itself immediately
> java.lang.OutOfMemoryError: Direct buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:658)
> at java.nio.DirectByteBuffer.(DirectByteBuffer.java:123)
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:306)
> at 
> org.apache.hadoop.hbase.util.ByteBufferArray.(ByteBufferArray.java:65)
> at 
> org.apache.hadoop.hbase.io.hfile.bucket.ByteBufferIOEngine.(ByteBufferIOEngine.java:47)
> at 
> org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.getIOEngineFromName(BucketCache.java:307)
> at 
> org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.(BucketCache.java:217)
> at 
> org.apache.hadoop.hbase.io.hfile.CacheConfig.getBucketCache(CacheConfig.java:614)
> at org.apache.hadoop.hbase.io.hfile.CacheConfig.getL2(CacheConfig.java:553)
> at 
> org.apache.hadoop.hbase.io.hfile.CacheConfig.instantiateBlockCache(CacheConfig.java:637)
> at org.apache.hadoop.hbase.io.hfile.CacheConfig.(CacheConfig.java:231)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1361)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:899)
> at java.lang.Thread.run(Thread.java:745)
> 2016-03-03 19:55:35,138 INFO  [main] zookeeper.ZooKeeper: Client 
> environment:zookeeper.version=3.4.6-3485--1, built on 12/16/2015 02:35 GMT
> {code}
> This was seen on the following cluster: 
> https://s.c:8443/#/main/services/HBASE/configs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-15330) Bubble up errors during RU/EU

2016-03-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-15330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15190747#comment-15190747
 ] 

Hadoop QA commented on AMBARI-15330:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12792599/AMBARI-15330.trunk.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:red}-1 javac{color:red}.  The patch appears to cause the build to 
fail.

Console output: 
https://builds.apache.org/job/Ambari-trunk-test-patch/5828//console

This message is automatically generated.

> Bubble up errors during RU/EU
> -
>
> Key: AMBARI-15330
> URL: https://issues.apache.org/jira/browse/AMBARI-15330
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Alejandro Fernandez
>Assignee: Alejandro Fernandez
> Fix For: 2.4.0
>
> Attachments: AMBARI-15330.trunk.patch
>
>
> During RU/EU, we need a way to bubble up an error for the current item that 
> failed. This is useful for quickly getting a human-readable error that other UIs 
> can retrieve.
> It can carry a human-readable error, plus stdout and stderr.
> This would become part of the upgrade endpoint, e.g.
> api/v1/clusters/$name/upgrade_summary/$request_id
> {code}
> {
> attempt_cnt: 1,
> cluster_name: "c1",
> request_id: 1,
> fail_reason: "Failed calling RESTART ZOOKEEPER/ZOOKEEPER_SERVER on host 
> c6401.ambari.apache.org",
> // Notice that the rest are inherited from the failed task if it exists.
> command: "CUSTOM_COMMAND",
> command_detail: "RESTART ZOOKEEPER/ZOOKEEPER_SERVER",
> custom_command_name: "RESTART",
> end_time: -1,
> error_log: "/var/lib/ambari-agent/data/errors-1234.txt",
> exit_code: 1,
> host_name: "c6401.ambari.apache.org",
> id: 1234,
> output_log: "/var/lib/ambari-agent/data/output-1234.txt",
> role: "ZOOKEEPER_SERVER",
> stage_id: 1,
> start_time: 123456789,
> status: "HOLDING_FAILED",
> stdout: "",
> stderr: ""
> }
> {code}
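
A hedged sketch of how a UI or script might consume the proposed endpoint, 
assuming the response body is the JSON object shown above. The Ambari URL, 
credentials, cluster name, and request id are placeholders.

{code}
import requests

AMBARI = "http://ambari.example.com:8080"   # placeholder server
summary = requests.get(
    "{0}/api/v1/clusters/{1}/upgrade_summary/{2}".format(AMBARI, "c1", 1),
    auth=("admin", "admin"),                # placeholder credentials
    headers={"X-Requested-By": "ambari"},
).json()

if summary.get("fail_reason"):
    # surface the human-readable reason plus where the full logs live
    print(summary["fail_reason"])
    print("stderr log:", summary.get("error_log"))
    print("stdout log:", summary.get("output_log"))
{code}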



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)