Re: Review Request 45254: Apply the stack featurization prototype detailed on AMBARI-13364 to TEZ service

2016-03-23 Thread Jayush Luniya

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/45254/#review125191
---




ambari-server/src/main/resources/common-services/TEZ/0.4.0.2.1/package/scripts/pre_upgrade.py
 (line 25)


Remove unused compare_versions import



ambari-server/src/main/resources/common-services/TEZ/0.4.0.2.1/package/scripts/service_check.py
 (line 28)


Remove unused compare_versions import



ambari-server/src/main/resources/common-services/TEZ/0.4.0.2.1/package/scripts/tez_client.py
 (line 36)


Remove unused compare_versions import
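
The three comments above point at the same cleanup. As a rough sketch of what
the featurization change implies (the exact import line in the TEZ scripts and
the feature name below are assumptions), AMBARI-13364 replaces explicit version
comparisons with declarative stack-feature checks, which leaves the old import
dead:

{code}
# Hedged sketch; assumes Ambari's resource_management library is on the
# path and a command configuration is loaded, as inside a stack script.
from resource_management.libraries.functions.version import compare_versions  # now unused -> remove
from resource_management.libraries.functions.stack_features import check_stack_feature

def should_use_upgrade_path(stack_version):
    # Old style, which needed compare_versions:
    #   return compare_versions(stack_version, "2.2.0.0") >= 0
    # New style under stack featurization:
    return check_stack_feature("rolling_upgrade", stack_version)
{code}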


- Jayush Luniya


On March 24, 2016, 12:22 a.m., Juanjo  Marron wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/45254/
> ---
> 
> (Updated March 24, 2016, 12:22 a.m.)
> 
> 
> Review request for Ambari, Alejandro Fernandez and Jayush Luniya.
> 
> 
> Bugs: AMBARI-15137
> https://issues.apache.org/jira/browse/AMBARI-15137
> 
> 
> Repository: ambari
> 
> 
> Description
> ---
> 
> Apply the stack featurization prototype detailed on AMBARI-13364 to TEZ 
> service
> 
> 
> Diffs
> -
> 
>   
> ambari-server/src/main/resources/common-services/TEZ/0.4.0.2.1/package/scripts/params_linux.py
>  0165c0b 
>   
> ambari-server/src/main/resources/common-services/TEZ/0.4.0.2.1/package/scripts/pre_upgrade.py
>  1faedf9 
>   
> ambari-server/src/main/resources/common-services/TEZ/0.4.0.2.1/package/scripts/service_check.py
>  c0c66af 
>   
> ambari-server/src/main/resources/common-services/TEZ/0.4.0.2.1/package/scripts/tez_client.py
>  e770d9b 
> 
> Diff: https://reviews.apache.org/r/45254/diff/
> 
> 
> Testing
> ---
> 
> TEZ fresh installation
> 
> 
> Thanks,
> 
> Juanjo  Marron
> 
>



Re: Review Request 45253: AMBARI-15544: Creating multinode cluster using Blueprints fails.

2016-03-23 Thread Sumit Mohanty

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/45253/#review125147
---


Ship it!




Ship It!

- Sumit Mohanty


On March 23, 2016, 10:45 p.m., Nahappan Somasundaram wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/45253/
> ---
> 
> (Updated March 23, 2016, 10:45 p.m.)
> 
> 
> Review request for Ambari, Jonathan Hurley, Nate Cole, Sumit Mohanty, 
> Sebastian Toader, and Sid Wagle.
> 
> 
> Bugs: AMBARI-15544
> https://issues.apache.org/jira/browse/AMBARI-15544
> 
> 
> Repository: ambari
> 
> 
> Description
> ---
> 
> AMBARI-15544: Creating multinode cluster using Blueprints fails.
> 
> ** Issue **:
> 
> This issue happens when there are multiple agents running before deployment 
> happens. During registration, there is no cluster, so the recovery 
> configuration is not obtained. Subsequently, when hosts become a part of a 
> cluster, agents attempt to get the recovery configuration during the 
> heartbeats. The first agent successfully gets the configuration because the 
> timestamp map is empty. When the next agent heartbeats, it checks to see if 
> the configuration is stale. While there is an entry for the cluster name in 
> the timestamp map created by the previous agent, there is no hostname entry 
> for the current agent which causes the timestamp returned for that hostname 
> to be null.
> 
> ** Fix **:
> 
> Check the returned Timestamp object for null before accessing it.
> 
> 
> Diffs
> -
> 
>   
> ambari-server/src/main/java/org/apache/ambari/server/agent/RecoveryConfigHelper.java
>  dca4a9b9a32377a2d7d620d6f939e7250cc40590 
> 
> Diff: https://reviews.apache.org/r/45253/diff/
> 
> 
> Testing
> ---
> 
> ** 1. mvn clean install -DskipTests **
> 
> [INFO] 
> 
> [INFO] Reactor Summary:
> [INFO]
> [INFO] Ambari Main ... SUCCESS [5.199s]
> [INFO] Apache Ambari Project POM . SUCCESS [0.037s]
> [INFO] Ambari Web  SUCCESS [31.250s]
> [INFO] Ambari Views .. SUCCESS [1.140s]
> [INFO] Ambari Admin View . SUCCESS [5.628s]
> [INFO] ambari-metrics  SUCCESS [0.355s]
> [INFO] Ambari Metrics Common . SUCCESS [0.476s]
> [INFO] Ambari Metrics Hadoop Sink  SUCCESS [1.067s]
> [INFO] Ambari Metrics Flume Sink . SUCCESS [0.562s]
> [INFO] Ambari Metrics Kafka Sink . SUCCESS [0.595s]
> [INFO] Ambari Metrics Storm Sink . SUCCESS [1.437s]
> [INFO] Ambari Metrics Collector .. SUCCESS [6.724s]
> [INFO] Ambari Metrics Monitor  SUCCESS [2.089s]
> [INFO] Ambari Metrics Grafana  SUCCESS [0.862s]
> [INFO] Ambari Metrics Assembly ... SUCCESS [1:17.652s]
> [INFO] Ambari Server . SUCCESS [2:31.754s]
> [INFO] Ambari Functional Tests ... SUCCESS [1.281s]
> [INFO] Ambari Agent .. SUCCESS [22.491s]
> [INFO] Ambari Client . SUCCESS [0.061s]
> [INFO] Ambari Python Client .. SUCCESS [1.008s]
> [INFO] Ambari Groovy Client .. SUCCESS [2.175s]
> [INFO] Ambari Shell .. SUCCESS [0.058s]
> [INFO] Ambari Python Shell ... SUCCESS [0.694s]
> [INFO] Ambari Groovy Shell ... SUCCESS [1.028s]
> [INFO] 
> 
> [INFO] BUILD SUCCESS
> [INFO] 
> 
> [INFO] Total time: 5:16.311s
> [INFO] Finished at: Wed Mar 23 15:30:45 PDT 2016
> [INFO] Final Memory: 261M/1167M
> [INFO] 
> 
> 
> ** 2. Manual tests **
> 
> Deployed a cluster with 3 nodes, registered a blueprint and template. Noticed 
> that the second agent now gets a **true** value for 
> **isConfigStale(clusterName, hostName, timestamp)**, allowing it to get the 
> recovery configuration.
> 
> ** 3. Unit tests **
> 
> 
> ---
>  T E S T S
> ---
> Picked up _JAVA_OPTIONS: -Xmx2048m -XX:MaxPermSize=512m 
> -Djava.awt.headless=true
> Running 

Re: Review Request 44972: Improve error logging for install errors during blueprint deployments.

2016-03-23 Thread Amruta Borkar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/44972/
---

(Updated March 23, 2016, 10:54 p.m.)


Review request for Ambari, Di Li and Sid Wagle.


Changes
---

Made similar changes to address the issue in trunk. Trunk test output is 
attached.


Bugs: AMBARI-15412
https://issues.apache.org/jira/browse/AMBARI-15412


Repository: ambari


Description
---

Improve error logging for install errors during blueprint deployments. 

Currently, a severe error during install of a service component gets logged as a 
WARNing.

E.g.:

09 Mar 2016 12:11:45,881 WARN [qtp-ambari-agent-146] HeartBeatHandler:603 - 
Operation failed - may be retried. Service component host: KAFKA_BROKER, host: 
hdtest159.svl.ibm.com Action id12-0
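
The actual change is in Java's HeartBeatHandler; purely as a hedged sketch of 
the intent (the function and parameter names here are hypothetical), the point 
is to pick the log level from whether the failure may be retried:

{code}
import logging

log = logging.getLogger("HeartBeatHandler")

def report_failed_operation(component, host, action_id, retryable):
    # Mirrors the message format quoted above.
    msg = ("Operation failed - may be retried. Service component host: %s, "
           "host: %s Action id %s" % (component, host, action_id))
    if retryable:
        log.warning(msg)
    else:
        # A severe, non-retryable install error should surface as ERROR,
        # not WARN, so operators notice it.
        log.error(msg.replace(" - may be retried", ""))
{code}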


Diffs
-

  
ambari-server/src/main/java/org/apache/ambari/server/agent/HeartBeatHandler.java
 24fea22 

Diff: https://reviews.apache.org/r/44972/diff/


Testing
---

There are no JUnit test cases, but a screenshot showing the modified output is 
attached.


File Attachments (updated)


Output
  
https://reviews.apache.org/media/uploaded/files/2016/03/17/923d218f-7ca4-4439-b42f-743511936f94__AMBARI-15412_output.png
AMBARI-15412_branch-2.2.patch
  
https://reviews.apache.org/media/uploaded/files/2016/03/23/80355c33-2d5e-45f0-8f3d-1640f4386f05__AMBARI-15412_branch-2.2.patch
trunk patch
  
https://reviews.apache.org/media/uploaded/files/2016/03/23/5c7a7d2f-fcf7-4858-9626-6a08ec9c7ba7__AMBARI-15412-trunk.patch
trunk--output
  
https://reviews.apache.org/media/uploaded/files/2016/03/23/41b8327f-40ae-4a2f-b9c3-4e85eb9004c4__trunk-output.png


Thanks,

Amruta Borkar



Review Request 45253: AMBARI-15544: Creating multinode cluster using Blueprints fails.

2016-03-23 Thread Nahappan Somasundaram

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/45253/
---

Review request for Ambari, Jonathan Hurley, Nate Cole, Sumit Mohanty, Sebastian 
Toader, and Sid Wagle.


Bugs: AMBARI-15544
https://issues.apache.org/jira/browse/AMBARI-15544


Repository: ambari


Description
---

AMBARI-15544: Creating multinode cluster using Blueprints fails.

** Issue **:

This issue happens when there are multiple agents running before deployment 
happens. During registration, there is no cluster, so the recovery 
configuration is not obtained. Subsequently, when hosts become a part of a 
cluster, agents attempt to get the recovery configuration during the 
heartbeats. The first agent successfully gets the configuration because the 
timestamp map is empty. When the next agent heartbeats, it checks to see if the 
configuration is stale. While there is an entry for the cluster name in the 
timestamp map created by the previous agent, there is no hostname entry for the 
current agent which causes the timestamp returned for that hostname to be null.

** Fix **:

Check the returned Timestamp object for null before accessing it.
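
The fix itself is in Java (RecoveryConfigHelper.java); below is a minimal 
Python sketch of the pattern described above, with hypothetical names:

{code}
def is_config_stale(timestamp_map, cluster_name, host_name, timestamp):
    # timestamp_map: {cluster_name: {host_name: last_sent_timestamp}}
    hosts = timestamp_map.get(cluster_name)
    if hosts is None:
        return True  # no entry for this cluster yet -> send the config
    last_sent = hosts.get(host_name)
    if last_sent is None:
        # The failing case: a previous agent created the cluster entry, but
        # this host has no timestamp yet. Guard against None instead of
        # comparing it directly.
        return True
    return last_sent != timestamp
{code}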


Diffs
-

  
ambari-server/src/main/java/org/apache/ambari/server/agent/RecoveryConfigHelper.java
 dca4a9b9a32377a2d7d620d6f939e7250cc40590 

Diff: https://reviews.apache.org/r/45253/diff/


Testing
---

** 1. mvn clean install -DskipTests **

[INFO] 
[INFO] Reactor Summary:
[INFO]
[INFO] Ambari Main ... SUCCESS [5.199s]
[INFO] Apache Ambari Project POM . SUCCESS [0.037s]
[INFO] Ambari Web  SUCCESS [31.250s]
[INFO] Ambari Views .. SUCCESS [1.140s]
[INFO] Ambari Admin View . SUCCESS [5.628s]
[INFO] ambari-metrics  SUCCESS [0.355s]
[INFO] Ambari Metrics Common . SUCCESS [0.476s]
[INFO] Ambari Metrics Hadoop Sink  SUCCESS [1.067s]
[INFO] Ambari Metrics Flume Sink . SUCCESS [0.562s]
[INFO] Ambari Metrics Kafka Sink . SUCCESS [0.595s]
[INFO] Ambari Metrics Storm Sink . SUCCESS [1.437s]
[INFO] Ambari Metrics Collector .. SUCCESS [6.724s]
[INFO] Ambari Metrics Monitor  SUCCESS [2.089s]
[INFO] Ambari Metrics Grafana  SUCCESS [0.862s]
[INFO] Ambari Metrics Assembly ... SUCCESS [1:17.652s]
[INFO] Ambari Server . SUCCESS [2:31.754s]
[INFO] Ambari Functional Tests ... SUCCESS [1.281s]
[INFO] Ambari Agent .. SUCCESS [22.491s]
[INFO] Ambari Client . SUCCESS [0.061s]
[INFO] Ambari Python Client .. SUCCESS [1.008s]
[INFO] Ambari Groovy Client .. SUCCESS [2.175s]
[INFO] Ambari Shell .. SUCCESS [0.058s]
[INFO] Ambari Python Shell ... SUCCESS [0.694s]
[INFO] Ambari Groovy Shell ... SUCCESS [1.028s]
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 5:16.311s
[INFO] Finished at: Wed Mar 23 15:30:45 PDT 2016
[INFO] Final Memory: 261M/1167M
[INFO] 

** 2. Manual tests **

Deployed a cluster with 3 nodes, registered a blueprint and template. Noticed 
that the second agent now gets a **true** value for 
**isConfigStale(clusterName, hostName, timestamp)**, allowing it to get the 
recovery configuration.

** 3. Unit tests **


---
 T E S T S
---
Picked up _JAVA_OPTIONS: -Xmx2048m -XX:MaxPermSize=512m -Djava.awt.headless=true
Running org.apache.ambari.server.agent.TestHeartbeatHandler
Tests run: 24, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 54.831 sec - 
in org.apache.ambari.server.agent.TestHeartbeatHandler
Picked up _JAVA_OPTIONS: -Xmx2048m -XX:MaxPermSize=512m -Djava.awt.headless=true
Running org.apache.ambari.server.configuration.RecoveryConfigHelperTest
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.43 sec - in 
org.apache.ambari.server.configuration.RecoveryConfigHelperTest

Results :

Tests run: 31, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] BUILD SUCCESS

Re: Review Request 45247: YARN Queue should be refreshed when enabling/disabling Interactive Query

2016-03-23 Thread Jaimin Jetly


> On March 23, 2016, 9:11 p.m., Yusaku Sako wrote:
> > How are we handling errors?  Seems like we chain a bunch of calls.  The 
> > user would need to see what went wrong.
> > Also, can we get into partial failure scenarios?

>> How are we handling errors?
We are using the App.ajax coded endpoint, which has a default failure handler for 
server errors: 
https://github.com/apache/ambari/blob/trunk/ambari-web/app/utils/ajax/ajax.js#L3054
 
>> Also, can we get into partial failure scenarios?
   I don't see that happening in the usual scenario, but it might happen for some 
unknown reason if any one of the APIs fails, in which case a popup from the error 
handler will be shown with the error message, and the ajax chain for the APIs will 
be broken from that point.
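
In other words, the chain is strictly sequential, and the default failure 
handler both surfaces the server error and halts the remaining calls. A 
minimal sketch of that behavior (in Python; the real code uses App.ajax in 
ambari-web):

{code}
def run_chain(calls, show_error_popup):
    # Each element of 'calls' issues one API request and raises on failure.
    for call in calls:
        try:
            call()
        except Exception as err:
            show_error_popup(err)  # default failure handler: show the error
            break                  # remaining calls in the chain never run
{code}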


- Jaimin


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/45247/#review125132
---


On March 23, 2016, 8:58 p.m., Jaimin Jetly wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/45247/
> ---
> 
> (Updated March 23, 2016, 8:58 p.m.)
> 
> 
> Review request for Ambari, Srimanth Gunturi and Yusaku Sako.
> 
> 
> Bugs: AMBARI-15539
> https://issues.apache.org/jira/browse/AMBARI-15539
> 
> 
> Repository: ambari
> 
> 
> Description
> ---
> 
> Capacity scheduler will be changed as part of saving Hive configs related to 
> adding/deleting Hive Interactive Server, so the YARN queue has to be refreshed 
> before issuing the Hive Interactive Server Start/Delete command.
> 
> 
> Diffs
> -
> 
>   ambari-web/app/mixins/main/service/configs/component_actions_by_configs.js 
> 7857411 
> 
> Diff: https://reviews.apache.org/r/45247/diff/
> 
> 
> Testing
> ---
> 
> Tested the patch on a cluster
> Verified that all ambari-web unit tests pass with the patch:
> 
> 
>   24653 tests complete (24 seconds)
>   145 tests pending
> 
> 
> Thanks,
> 
> Jaimin Jetly
> 
>



Re: Review Request 45247: YARN Queue should be refreshed when enabling/disabling Interactive Query

2016-03-23 Thread Yusaku Sako

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/45247/#review125132
---



How are we handling errors?  Seems like we chain a bunch of calls.  The user 
would need to see what went wrong.
Also, can we get into partial failure scenarios?

- Yusaku Sako


On March 23, 2016, 8:58 p.m., Jaimin Jetly wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/45247/
> ---
> 
> (Updated March 23, 2016, 8:58 p.m.)
> 
> 
> Review request for Ambari, Srimanth Gunturi and Yusaku Sako.
> 
> 
> Bugs: AMBARI-15539
> https://issues.apache.org/jira/browse/AMBARI-15539
> 
> 
> Repository: ambari
> 
> 
> Description
> ---
> 
> Capacity scheduler will be changed as part of saving Hive configs related to 
> adding/deleting Hive Interactive Server, so the YARN queue has to be refreshed 
> before issuing the Hive Interactive Server Start/Delete command.
> 
> 
> Diffs
> -
> 
>   ambari-web/app/mixins/main/service/configs/component_actions_by_configs.js 
> 7857411 
> 
> Diff: https://reviews.apache.org/r/45247/diff/
> 
> 
> Testing
> ---
> 
> Tested the patch on a cluster
> Verified that all ambari-web unit tests pass with the patch:
> 
> 
>   24653 tests complete (24 seconds)
>   145 tests pending
> 
> 
> Thanks,
> 
> Jaimin Jetly
> 
>



Re: Review Request 45247: YARN Queue should be refreshed when enabling/disabling Interactive Query

2016-03-23 Thread Jaimin Jetly

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/45247/
---

(Updated March 23, 2016, 8:58 p.m.)


Review request for Ambari, Srimanth Gunturi and Yusaku Sako.


Summary (updated)
-

YARN Queue should be refreshed when enabling/disabling Interactive Query


Bugs: AMBARI-15539
https://issues.apache.org/jira/browse/AMBARI-15539


Repository: ambari


Description
---

Capacity scheduler will be changed as part of saving Hive configs related to 
adding/deleting Hive Interactive Server, so the YARN queue has to be refreshed 
before issuing the Hive Interactive Server Start/Delete command.
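
The patch itself lives in ambari-web; purely to illustrate the required 
ordering, here is a Python sketch against the standard Ambari custom-command 
REST endpoint (the cluster, host, and credential values are placeholders):

{code}
import requests

AMBARI = "http://ambari-host:8080/api/v1/clusters/cl1"  # placeholder
AUTH = ("admin", "admin")                                # placeholder
HEADERS = {"X-Requested-By": "ambari"}

def refresh_yarn_queues():
    # Issue the REFRESHQUEUES custom command against the ResourceManager so
    # the saved capacity-scheduler changes take effect.
    body = {
        "RequestInfo": {
            "command": "REFRESHQUEUES",
            "context": "Refresh YARN Capacity Scheduler",
            "parameters/forceRefreshConfigTags": "capacity-scheduler",
        },
        "Requests/resource_filters": [{
            "service_name": "YARN",
            "component_name": "RESOURCEMANAGER",
            "hosts": "rm-host.example.com",              # placeholder
        }],
    }
    r = requests.post(AMBARI + "/requests", json=body, auth=AUTH,
                      headers=HEADERS)
    r.raise_for_status()

# Ordering: refresh_yarn_queues() first, then issue the Hive Interactive
# Server Start/Delete command.
{code}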


Diffs
-

  ambari-web/app/mixins/main/service/configs/component_actions_by_configs.js 
7857411 

Diff: https://reviews.apache.org/r/45247/diff/


Testing
---

Tested the patch on a cluster
Verified that all ambari-web unit tests pass with the patch:


  24653 tests complete (24 seconds)
  145 tests pending


Thanks,

Jaimin Jetly



Re: Review Request 45247: YARN Queue should be refreshed when adding/deleting Hive Interactive Server

2016-03-23 Thread Jaimin Jetly

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/45247/
---

(Updated March 23, 2016, 8:57 p.m.)


Review request for Ambari, Srimanth Gunturi and Yusaku Sako.


Changes
---

Uploaded a 2nd patch, which optimizes the code to issue the "YARN Queue refresh" 
command only when the user saves changes to capacity-scheduler.xml as recommended 
by the stack advisor.


Bugs: AMBARI-15539
https://issues.apache.org/jira/browse/AMBARI-15539


Repository: ambari


Description
---

Capacity scheduler will be changed as part of saving Hive configs related to 
adding/deleting Hive Interactive Server, so the YARN queue has to be refreshed 
before issuing the Hive Interactive Server Start/Delete command.


Diffs (updated)
-

  ambari-web/app/mixins/main/service/configs/component_actions_by_configs.js 
7857411 

Diff: https://reviews.apache.org/r/45247/diff/


Testing
---

Tested the patch on a cluster
Verified that all ambari-web unit tests pass with the patch:


  24653 tests complete (24 seconds)
  145 tests pending


Thanks,

Jaimin Jetly



Re: Review Request 45250: AMBARI-15540 : NAMENODE critical alert is present [Percentage standard deviation] after upgrade from 2.0.2/ 2.2.1.0 etc to 2.2.2.0 and disabling security

2016-03-23 Thread Aravindan Vijayan

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/45250/
---

(Updated March 23, 2016, 8:46 p.m.)


Review request for Ambari, Dmytro Sen, Sumit Mohanty, and Sid Wagle.


Bugs: AMBARI-15540
https://issues.apache.org/jira/browse/AMBARI-15540


Repository: ambari


Description
---

STR:
1) Deploy old version
2) Enable MIT security
3) Perform Ambari-only upgrade
4) Disable security
5) Enable NN, RM HA
6) Enable/Disable security
7) Check alerts for HDFS

*Issue was absent for*
{code}
2.2.2.0-285
2b1e7b75d1970f26b0a033cdfac8b53b457b83ff
{code}

*Actual result:*
 NAMENODE critical alert is present [Percentage standard deviation] after 
upgrade from 2.0.2/ 2.2.1.0 etc  to 2.2.2.0 and disabling security
{code}
{
  "href" : "https://<>:8443/api/v1/clusters/cl1/alerts/201",
  "Alert" : {
"cluster_name" : "cl1",
"component_name" : "NAMENODE",
"definition_id" : 102,
"definition_name" : "namenode_client_rpc_processing_latency_daily",
"host_name" : "<>",
"id" : 201,
"instance" : null,
"label" : "NameNode Client RPC Processing Latency (Daily)",
"latest_timestamp" : 1458643485805,
"maintenance_state" : "OFF",
"original_timestamp" : 1458643485805,
"scope" : "ANY",
"service_name" : "HDFS",
"state" : "CRITICAL",
"text" : "CRITICAL. Percentage standard deviation value 218.17% is 
beyond the critical threshold of 200.00%"
  }
},
{code}


Diffs
-

  
ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/alerts/alert_metrics_deviation.py
 f62c4a3 

Diff: https://reviews.apache.org/r/45250/diff/


Testing (updated)
---

Manual testing done


Thanks,

Aravindan Vijayan



Re: Review Request 45250: AMBARI-15540 : NAMENODE critical alert is present [Percentage standard deviation] after upgrade from 2.0.2/ 2.2.1.0 etc to 2.2.2.0 and disabling security

2016-03-23 Thread Sid Wagle

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/45250/#review125116
---


Ship it!




Ship It!

- Sid Wagle


On March 23, 2016, 8:26 p.m., Aravindan Vijayan wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/45250/
> ---
> 
> (Updated March 23, 2016, 8:26 p.m.)
> 
> 
> Review request for Ambari, Dmytro Sen, Sumit Mohanty, and Sid Wagle.
> 
> 
> Bugs: AMBARI-15540
> https://issues.apache.org/jira/browse/AMBARI-15540
> 
> 
> Repository: ambari
> 
> 
> Description
> ---
> 
> STR:
> 1) Deploy old version
> 2) Enable MIT security
> 3) Perform Ambari-only upgrade
> 4) Disable security
> 5) Enable NN, RM HA
> 6) Enable/Disable security
> 7) Check alerts for HDFS
> 
> *Issue was absent for*
> {code}
> 2.2.2.0-285
> 2b1e7b75d1970f26b0a033cdfac8b53b457b83ff
> {code}
> 
> *Actual result:*
>  NAMENODE critical alert is present [Percentage standard deviation] after 
> upgrade from 2.0.2/ 2.2.1.0 etc  to 2.2.2.0 and disabling security
> {code}
> {
>   "href" : "https://<>:8443/api/v1/clusters/cl1/alerts/201",
>   "Alert" : {
> "cluster_name" : "cl1",
> "component_name" : "NAMENODE",
> "definition_id" : 102,
> "definition_name" : "namenode_client_rpc_processing_latency_daily",
> "host_name" : "<>",
> "id" : 201,
> "instance" : null,
> "label" : "NameNode Client RPC Processing Latency (Daily)",
> "latest_timestamp" : 1458643485805,
> "maintenance_state" : "OFF",
> "original_timestamp" : 1458643485805,
> "scope" : "ANY",
> "service_name" : "HDFS",
> "state" : "CRITICAL",
> "text" : "CRITICAL. Percentage standard deviation value 218.17% is 
> beyond the critical threshold of 200.00%"
>   }
> },
> {code}
> 
> 
> Diffs
> -
> 
>   
> ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/alerts/alert_metrics_deviation.py
>  f62c4a3 
> 
> Diff: https://reviews.apache.org/r/45250/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Aravindan Vijayan
> 
>



Review Request 45250: AMBARI-15540 : NAMENODE critical alert is present [Percentage standard deviation] after upgrade from 2.0.2/ 2.2.1.0 etc to 2.2.2.0 and disabling security

2016-03-23 Thread Aravindan Vijayan

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/45250/
---

Review request for Ambari, Dmytro Sen, Sumit Mohanty, and Sid Wagle.


Bugs: AMBARI-15540
https://issues.apache.org/jira/browse/AMBARI-15540


Repository: ambari


Description
---

STR:
1) Deploy old version
2) Enable MIT security
3) Perform Ambari-only upgrade
4) Disable security
5) Enable NN, RM HA
6) Enable/Disable security
7) Check alerts for HDFS

*Issue was absent for*
{code}
2.2.2.0-285
2b1e7b75d1970f26b0a033cdfac8b53b457b83ff
{code}

*Actual result:*
 NAMENODE critical alert is present [Percentage standard deviation] after 
upgrade from 2.0.2/ 2.2.1.0 etc  to 2.2.2.0 and disabling security
{code}
{
  "href" : "https://<>:8443/api/v1/clusters/cl1/alerts/201",
  "Alert" : {
"cluster_name" : "cl1",
"component_name" : "NAMENODE",
"definition_id" : 102,
"definition_name" : "namenode_client_rpc_processing_latency_daily",
"host_name" : "<>",
"id" : 201,
"instance" : null,
"label" : "NameNode Client RPC Processing Latency (Daily)",
"latest_timestamp" : 1458643485805,
"maintenance_state" : "OFF",
"original_timestamp" : 1458643485805,
"scope" : "ANY",
"service_name" : "HDFS",
"state" : "CRITICAL",
"text" : "CRITICAL. Percentage standard deviation value 218.17% is 
beyond the critical threshold of 200.00%"
  }
},
{code}


Diffs
-

  
ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/alerts/alert_metrics_deviation.py
 f62c4a3 

Diff: https://reviews.apache.org/r/45250/diff/


Testing
---


Thanks,

Aravindan Vijayan



Review Request 45247: YARN Queue should be refreshed when adding/deleting Hive Interactive Server

2016-03-23 Thread Jaimin Jetly

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/45247/
---

Review request for Ambari, Srimanth Gunturi and Yusaku Sako.


Bugs: AMBARI-15539
https://issues.apache.org/jira/browse/AMBARI-15539


Repository: ambari


Description
---

Capacity scheduler will be changed as part of saving Hive configs related to 
adding/deleting Hive Interactive Server, so the YARN queue has to be refreshed 
before issuing the Hive Interactive Server Start/Delete command.


Diffs
-

  ambari-web/app/mixins/main/service/configs/component_actions_by_configs.js 
7857411 

Diff: https://reviews.apache.org/r/45247/diff/


Testing
---

Tested the patch on a cluster
Verified that all ambari-web unit tests pass with the patch:


  24653 tests complete (24 seconds)
  145 tests pending


Thanks,

Jaimin Jetly



Re: Review Request 45056: Blueprint install using config_recommendation_strategy is not functional

2016-03-23 Thread Robert Levas

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/45056/#review125100
---


Ship it!




Ship It!

- Robert Levas


On March 18, 2016, 6:31 p.m., Shantanu Mundkur wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/45056/
> ---
> 
> (Updated March 18, 2016, 6:31 p.m.)
> 
> 
> Review request for Ambari, Oliver Szabo and Robert Levas.
> 
> 
> Bugs: AMBARI-15454
> https://issues.apache.org/jira/browse/AMBARI-15454
> 
> 
> Repository: ambari
> 
> 
> Description
> ---
> 
> Blueprint install using config_recommendation_strategy seems to hang for a 
> long time (a couple of hours?) and ends up logging exceptions continually to 
> ambari-server.log. At the same time, many hundreds of directories are seen 
> getting created under /var/run/ambari-server/stack-recommendations (I have 
> seen over 800-900). If you keep it running, eventually the cluster install 
> seems to start but fails miserably, at least during the start, and some of it 
> makes it obvious that configuration recommendations were NOT applied. You see 
> errors during startup hinting that the JVM options used (for the Datanode 
> etc.) were unreasonable.
> 
> Note that both the blueprint and cluster templates used empty configurations. 
> Example:
> 
> .
> .
>"configurations" : [],
>"host_groups": [
> {
>  "name": "host-group-1",
>  "configurations" : [],
>  "cardinality" : "1",
>  "components": [
>   { "name": "APP_TIMELINE_SERVER" },
>   { "name": "DATANODE" },
>   { "name": "FALCON_CLIENT" },
>   { "name": "FALCON_SERVER" },
>   { "name": "FLUME_HANDLER" },
>   { "name": "HBASE_CLIENT" },
>   { "name": "HBASE_MASTER" },
>   { "name": "HBASE_REGIONSERVER" },
> .
> .
> .
> 
> The cluster template was:
> { 
> "blueprint": "1node",
> "config_recommendation_strategy" : "ONLY_STACK_DEFAULTS_APPLY",
> "default_password": "myPassword1",
> "host_groups": [
> {
> "name": "host-group-1",
> "hosts": [
> {
> "fqdn": "mynode.ibm.com"
> }
> ]
> }
> ] 
> }
> 
> 
> Diffs
> -
> 
>   
> ambari-server/src/main/java/org/apache/ambari/server/controller/internal/BlueprintConfigurationProcessor.java
>  f5e7578 
>   
> ambari-server/src/test/java/org/apache/ambari/server/controller/internal/BlueprintConfigurationProcessorTest.java
>  68d5755 
> 
> Diff: https://reviews.apache.org/r/45056/diff/
> 
> 
> Testing
> ---
> 
> 1) ambari-server unit tests
> 
> Results :
> 
> Tests run: 3968, Failures: 0, Errors: 0, Skipped: 33
> 2) Added unit test to 
> ambari-server/src/test/java/org/apache/ambari/server/controller/internal/BlueprintConfigurationProcessorTest.java
>
> 3) Deployed clusters 1-5 nodes specifying "config_recommendation_strategy" : 
> "ONLY_STACK_DEFAULTS_APPLY"
> 
> 
> Thanks,
> 
> Shantanu Mundkur
> 
>



Re: Review Request 45056: Blueprint install using config_recommendation_strategy is not functional

2016-03-23 Thread Shantanu Mundkur


> On March 19, 2016, 1:39 p.m., Oliver Szabo wrote:
> > Ship It!
> > 
> > In the near future I will create a patch for fixing some issues with stack 
> > advisor blueprint support (e.g.: same kind of error can happen if hosts are 
> > not registered during stack advisor processing)
> 
> Shantanu Mundkur wrote:
> Thanks Oliver. Once Robert has reviewed the change I would request one of 
> you to push the change into trunk as I do not have the privileges to do so. 
> Can I assume it would get included into 2.4 when the branch is created? Thank 
> you.

Hello Robert,

Would you be able to review the change as well? I would also request you to 
push the change into trunk as I do not have the privileges to do so. I 
appreciate your time on this. Thank you.


- Shantanu


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/45056/#review124404
---


On March 18, 2016, 10:31 p.m., Shantanu Mundkur wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/45056/
> ---
> 
> (Updated March 18, 2016, 10:31 p.m.)
> 
> 
> Review request for Ambari, Oliver Szabo and Robert Levas.
> 
> 
> Bugs: AMBARI-15454
> https://issues.apache.org/jira/browse/AMBARI-15454
> 
> 
> Repository: ambari
> 
> 
> Description
> ---
> 
> Blueprint install using config_recommendation_strategy seems to hang for a 
> long time (a couple of hours?) and ends up logging exceptions continually to 
> ambari-server.log. At the same time, many hundreds of directories are seen 
> getting created under /var/run/ambari-server/stack-recommendations (I have 
> seen over 800-900). If you keep it running, eventually the cluster install 
> seems to start but fails miserably, at least during the start, and some of it 
> makes it obvious that configuration recommendations were NOT applied. You see 
> errors during startup hinting that the JVM options used (for the Datanode 
> etc.) were unreasonable.
> 
> Note that both the blueprint and cluster templates used empty configurations. 
> Example:
> 
> .
> .
>"configurations" : [],
>"host_groups": [
> {
>  "name": "host-group-1",
>  "configurations" : [],
>  "cardinality" : "1",
>  "components": [
>   { "name": "APP_TIMELINE_SERVER" },
>   { "name": "DATANODE" },
>   { "name": "FALCON_CLIENT" },
>   { "name": "FALCON_SERVER" },
>   { "name": "FLUME_HANDLER" },
>   { "name": "HBASE_CLIENT" },
>   { "name": "HBASE_MASTER" },
>   { "name": "HBASE_REGIONSERVER" },
> .
> .
> .
> 
> The cluster template was:
> { 
> "blueprint": "1node",
> "config_recommendation_strategy" : "ONLY_STACK_DEFAULTS_APPLY",
> "default_password": "myPassword1",
> "host_groups": [
> {
> "name": "host-group-1",
> "hosts": [
> {
> "fqdn": "mynode.ibm.com"
> }
> ]
> }
> ] 
> }
> 
> 
> Diffs
> -
> 
>   
> ambari-server/src/main/java/org/apache/ambari/server/controller/internal/BlueprintConfigurationProcessor.java
>  f5e7578 
>   
> ambari-server/src/test/java/org/apache/ambari/server/controller/internal/BlueprintConfigurationProcessorTest.java
>  68d5755 
> 
> Diff: https://reviews.apache.org/r/45056/diff/
> 
> 
> Testing
> ---
> 
> 1) ambari-server unit tests
> 
> Results :
> 
> Tests run: 3968, Failures: 0, Errors: 0, Skipped: 33
> 2) Added unit test to 
> ambari-server/src/test/java/org/apache/ambari/server/controller/internal/BlueprintConfigurationProcessorTest.java
>
> 3) Deployed clusters 1-5 nodes specifying "config_recommendation_strategy" : 
> "ONLY_STACK_DEFAULTS_APPLY"
> 
> 
> Thanks,
> 
> Shantanu Mundkur
> 
>



Review Request 45226: HBASE start, Check ZooKeeper and other ones was failed after upgrade to ambari 2.2.2.0 (resource_management.core.exceptions.Fail) from 2.1.1/2.1.2 etc

2016-03-23 Thread Dmitro Lisnichenko

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/45226/
---

Review request for Ambari and Vitalyi Brodetskyi.


Bugs: AMBARI-15536
https://issues.apache.org/jira/browse/AMBARI-15536


Repository: ambari


Description
---

STR: *Gateway* : http://host:8080/#/main/services/HBASE/summary - for 2.1.2
1) Deploy old version
2) Perform Ambari-only upgrade

Actual result:
HBASE start, Check ZooKeeper, and other checks failed after upgrade to ambari 
2.2.2.0 (resource_management.core.exceptions.Fail) from 2.1.1/2.1.2 etc


{code}
stderr:   /var/lib/ambari-agent/data/errors-916.txt

Traceback (most recent call last):
File 
"/var/lib/ambari-agent/cache/common-services/ZOOKEEPER/3.4.5.2.0/package/scripts/service_check.py",
 line 73, in <module>
ZookeeperServiceCheck().execute()
File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
 line 219, in execute
method(env)
File 
"/var/lib/ambari-agent/cache/common-services/ZOOKEEPER/3.4.5.2.0/package/scripts/service_check.py",
 line 59, in service_check
logoutput=True
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 
154, in __init__
self.env.run()
File 
"/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
line 160, in run
self.run_action(resource, action)
File 
"/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
line 124, in run_action
provider_action()
File 
"/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py",
 line 238, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 
70, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 
92, in checked_call
tries=tries, try_sleep=try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 
140, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 
291, in _call
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 
'/var/lib/ambari-agent/tmp/zkSmoke.sh 
/usr/hdp/current/zookeeper-client/bin/zkCli.sh c-smoke 
/usr/hdp/current/zookeeper-client/conf 2181 False /usr/bin/kinit no_keytab 
no_principal /var/lib/ambari-agent/tmp/zkSmoke.out' returned 3. 
zk_node1=os-s11-3-usjyms-upg-sanity-202-1.test
log4j:WARN No appenders could be found for logger 
(org.apache.zookeeper.ZooKeeper).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
info.
Exception in thread "main" 
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = 
ConnectionLoss for /zk_smoketest
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.delete(ZooKeeper.java:873)
at org.apache.zookeeper.ZooKeeperMain.processZKCmd(ZooKeeperMain.java:703)
at org.apache.zookeeper.ZooKeeperMain.processCmd(ZooKeeperMain.java:591)
at org.apache.zookeeper.ZooKeeperMain.executeLine(ZooKeeperMain.java:363)
at org.apache.zookeeper.ZooKeeperMain.run(ZooKeeperMain.java:323)
at org.apache.zookeeper.ZooKeeperMain.main(ZooKeeperMain.java:282)
log4j:WARN No appenders could be found for logger 
(org.apache.zookeeper.ZooKeeper).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
info.
Exception in thread "main" 
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = 
ConnectionLoss for /zk_smoketest
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:783)
at org.apache.zookeeper.ZooKeeperMain.processZKCmd(ZooKeeperMain.java:698)
at org.apache.zookeeper.ZooKeeperMain.processCmd(ZooKeeperMain.java:591)
at org.apache.zookeeper.ZooKeeperMain.executeLine(ZooKeeperMain.java:363)
at org.apache.zookeeper.ZooKeeperMain.run(ZooKeeperMain.java:323)
at org.apache.zookeeper.ZooKeeperMain.main(ZooKeeperMain.java:282)
Running test on host os-s11-3-usjyms-upg-sanity-202-1.test
Connecting to os-s11-3-usjyms-upg-sanity-202-1.test:2181
log4j:WARN No appenders could be found for logger 
(org.apache.zookeeper.ZooKeeper).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
info.
Welcome to ZooKeeper!
JLine support is enabled
[zk: os-s11-3-usjyms-upg-sanity-202-1.test:2181(CONNECTING) 0] get /zk_smoketest
Exception in thread "main" 

Re: Review Request 43049: Oozie should update war after adding Falcon

2016-03-23 Thread Alejandro Fernandez

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/43049/#review125070
---


Ship it!




Ship It!

- Alejandro Fernandez


On March 23, 2016, 4:04 p.m., Andrew Onischuk wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/43049/
> ---
> 
> (Updated March 23, 2016, 4:04 p.m.)
> 
> 
> Review request for Ambari and Dmitro Lisnichenko.
> 
> 
> Bugs: AMBARI-14863
> https://issues.apache.org/jira/browse/AMBARI-14863
> 
> 
> Repository: ambari
> 
> 
> Description
> ---
> 
> After adding Falcon, falcon-oozie-el-extension-*.jar is added to oozie-
> server/libext  
> Oozie war should be updated.
> 
> 
> Diffs
> -
> 
>   
> ambari-common/src/main/python/resource_management/libraries/functions/oozie_prepare_war.py
>  PRE-CREATION 
>   
> ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie.py
>  5587380 
>   
> ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie_server_upgrade.py
>  d26b89d 
>   
> ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/params_linux.py
>  f92d90c 
>   ambari-server/src/main/resources/scripts/Ambaripreupload.py cbec3cf 
>   ambari-server/src/test/python/stacks/2.0.6/OOZIE/test_oozie_server.py 
> ba61b3d 
> 
> Diff: https://reviews.apache.org/r/43049/diff/
> 
> 
> Testing
> ---
> 
> mvn clean test
> 
> 
> Thanks,
> 
> Andrew Onischuk
> 
>
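
Regarding the prepare-war step described in the quoted request above: the new 
oozie_prepare_war.py presumably boils down to rebuilding the war once the 
Falcon jar is in libext. A hedged sketch of that step (paths follow the usual 
HDP layout; the exact command and flags in the patch may differ):

{code}
from resource_management.core.resources.system import Execute

# After falcon-oozie-el-extension-*.jar lands in
# /usr/hdp/current/oozie-server/libext, repackage the Oozie war so the new
# jar is actually served.
Execute("cd /var/tmp/oozie && "
        "/usr/hdp/current/oozie-server/bin/oozie-setup.sh prepare-war",
        user="oozie")
{code}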



Re: Review Request 45220: /tmp hdfs folder created with mode 0777

2016-03-23 Thread Alejandro Fernandez

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/45220/#review125069
---




ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/package/scripts/yarn.py
 (line 75)


Shouldn't this just create the directory with parents allowed, rather than 
checking explicitly for /tmp?
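
That is, something along these lines (a sketch inside a stack script where 
params is in scope; HdfsResource creates missing parents for a directory, and 
the owner/mode values are assumptions):

{code}
import params  # assumes the usual stack-script context

# Create /tmp (and any missing parents) explicitly instead of special-casing it.
params.HdfsResource("/tmp",
                    type="directory",
                    action="create_on_execute",
                    owner=params.hdfs_user,
                    mode=0777)
params.HdfsResource(None, action="execute")  # flush the queued HDFS operations
{code}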


- Alejandro Fernandez


On March 23, 2016, 4:40 p.m., Laszlo Puskas wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/45220/
> ---
> 
> (Updated March 23, 2016, 4:40 p.m.)
> 
> 
> Review request for Ambari, Andrew Onischuk, Sumit Mohanty, and Sebastian 
> Toader.
> 
> 
> Bugs: AMBARI-15531
> https://issues.apache.org/jira/browse/AMBARI-15531
> 
> 
> Repository: ambari
> 
> 
> Description
> ---
> 
> When a cluster is created via blueprint, the /tmp folder may be created 
> implicitly by various components, with permissions that prevent other 
> components from writing to it. In the specific case described by the linked 
> issue, the folder was created when starting the historyserver, and later on 
> the folder couldn't be written to by the hiveserver2. (Ideally the folder is 
> expected to be created when the namenode starts.)
> 
> The patch fixes the specific case described in the bug; however, a more 
> robust/generic solution is needed to properly sort out the problem.
> 
> 
> Diffs
> -
> 
>   
> ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/package/scripts/yarn.py
>  05e19cf 
> 
> Diff: https://reviews.apache.org/r/45220/diff/
> 
> 
> Testing
> ---
> 
> Unit tests in progress.
> Manual testing under way
> 
> 
> Thanks,
> 
> Laszlo Puskas
> 
>



Re: Review Request 45218: Atlas Integration : Rename Atlas Configurations (2.5 stack definition)

2016-03-23 Thread Alejandro Fernandez

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/45218/#review125068
---


Ship it!




Ship It!

- Alejandro Fernandez


On March 23, 2016, 4:07 p.m., Tom Beerbower wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/45218/
> ---
> 
> (Updated March 23, 2016, 4:07 p.m.)
> 
> 
> Review request for Ambari, John Speidel, Nate Cole, and Sumit Mohanty.
> 
> 
> Bugs: AMBARI-15431
> https://issues.apache.org/jira/browse/AMBARI-15431
> 
> 
> Repository: ambari
> 
> 
> Description
> ---
> 
> Move changes for AMBARI-15431 from 2.6 stack definition to 2.5 stack 
> definition.
> 
> Atlas configuration name application.properties has been changed to 
> atlas-application.properties to avoid name conflicts with other services. 
> See https://issues.apache.org/jira/browse/ATLAS-392.
> 
> Ambari scripts for Atlas currently use the configuration name 
> application.properties for all stack levels. Stacks which include Atlas > 0.5 
> should use the configuration name atlas-application.properties.
> 
> 
> Diffs
> -
> 
>   
> ambari-server/src/main/resources/stacks/HDP/2.5/services/ATLAS/configuration/atlas-env.xml
>  PRE-CREATION 
>   ambari-server/src/main/resources/stacks/HDP/2.5/services/ATLAS/metainfo.xml 
> 66aea9d 
>   
> ambari-server/src/main/resources/stacks/HDP/2.6/services/ATLAS/configuration/atlas-env.xml
>  42503b5 
>   ambari-server/src/main/resources/stacks/HDP/2.6/services/ATLAS/metainfo.xml 
> af1a047 
>   ambari-server/src/test/python/stacks/2.5/ATLAS/test_atlas_server.py 
> PRE-CREATION 
>   ambari-server/src/test/python/stacks/2.5/configs/default.json PRE-CREATION 
>   ambari-server/src/test/python/stacks/2.6/ATLAS/test_atlas_server.py 8e51ea0 
>   ambari-server/src/test/python/stacks/2.6/configs/default.json 2e1bc68 
> 
> Diff: https://reviews.apache.org/r/45218/diff/
> 
> 
> Testing
> ---
> 
> Manual test install Atlas (HDP 2.5 stack).  Verify configuration.
> 
> mvn clean test
> 
> all pass
> 
> 
> Thanks,
> 
> Tom Beerbower
> 
>



Re: Review Request 45218: Atlas Integration : Rename Atlas Configurations (2.5 stack definition)

2016-03-23 Thread Nate Cole

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/45218/#review125063
---


Ship it!




Ship It!

- Nate Cole


On March 23, 2016, 12:07 p.m., Tom Beerbower wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/45218/
> ---
> 
> (Updated March 23, 2016, 12:07 p.m.)
> 
> 
> Review request for Ambari, John Speidel, Nate Cole, and Sumit Mohanty.
> 
> 
> Bugs: AMBARI-15431
> https://issues.apache.org/jira/browse/AMBARI-15431
> 
> 
> Repository: ambari
> 
> 
> Description
> ---
> 
> Move changes for AMBARI-15431 from 2.6 stack definition to 2.5 stack 
> definition.
> 
> Atlas configuration name application.properties has been changed to 
> atlas-application.properties to avoid name conflicts with other services. 
> See https://issues.apache.org/jira/browse/ATLAS-392.
> 
> Ambari scripts for Atlas currently use the configuration name 
> application.properties for all stack levels. Stacks which include Atlas > 0.5 
> should use the configuration name atlas-application.properties.
> 
> 
> Diffs
> -
> 
>   
> ambari-server/src/main/resources/stacks/HDP/2.5/services/ATLAS/configuration/atlas-env.xml
>  PRE-CREATION 
>   ambari-server/src/main/resources/stacks/HDP/2.5/services/ATLAS/metainfo.xml 
> 66aea9d 
>   
> ambari-server/src/main/resources/stacks/HDP/2.6/services/ATLAS/configuration/atlas-env.xml
>  42503b5 
>   ambari-server/src/main/resources/stacks/HDP/2.6/services/ATLAS/metainfo.xml 
> af1a047 
>   ambari-server/src/test/python/stacks/2.5/ATLAS/test_atlas_server.py 
> PRE-CREATION 
>   ambari-server/src/test/python/stacks/2.5/configs/default.json PRE-CREATION 
>   ambari-server/src/test/python/stacks/2.6/ATLAS/test_atlas_server.py 8e51ea0 
>   ambari-server/src/test/python/stacks/2.6/configs/default.json 2e1bc68 
> 
> Diff: https://reviews.apache.org/r/45218/diff/
> 
> 
> Testing
> ---
> 
> Manual test install Atlas (HDP 2.5 stack).  Verify configuration.
> 
> mvn clean test
> 
> all pass
> 
> 
> Thanks,
> 
> Tom Beerbower
> 
>



Review Request 45191: HAWQ - exchange keys should be done only from HAWQMASTER

2016-03-23 Thread bhuvnesh chaudhary

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/45191/
---

Review request for Ambari, Alejandro Fernandez, jun aoki, Jayush Luniya, and 
Oleksandr Diachenko.


Bugs: AMBARI-15524
https://issues.apache.org/jira/browse/AMBARI-15524


Repository: ambari


Description
---

HAWQ - exchange keys should be done only from HAWQMASTER. Currently, both the 
standby and the master exchange keys; however, it does not need to be done twice.
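
A minimal sketch of the guard (the function and variable names below are 
assumptions about the HAWQ scripts, not the actual patch):

{code}
import params  # assumes the usual stack-script context

def setup_passwordless_ssh():
    # Run the key exchange only on the active HAWQ master; previously the
    # standby repeated the same exchange needlessly.
    if params.hostname == params.hawqmaster_host:
        exchange_ssh_keys(params.hawq_all_hosts)  # hypothetical helper
{code}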


Diffs
-

  
ambari-server/src/main/resources/common-services/HAWQ/2.0.0/package/scripts/common.py
 0631144 
  
ambari-server/src/main/resources/common-services/HAWQ/2.0.0/package/scripts/hawqmaster.py
 2c3493a 
  
ambari-server/src/main/resources/common-services/HAWQ/2.0.0/package/scripts/hawqsegment.py
 1891ede 
  
ambari-server/src/main/resources/common-services/HAWQ/2.0.0/package/scripts/hawqstandby.py
 0f52b9e 
  
ambari-server/src/main/resources/common-services/HAWQ/2.0.0/package/scripts/master_helper.py
 330b6c0 
  ambari-server/src/test/python/stacks/2.3/HAWQ/test_hawqmaster.py 3907ad9 
  ambari-server/src/test/python/stacks/2.3/HAWQ/test_hawqsegment.py 8049821 
  ambari-server/src/test/python/stacks/2.3/HAWQ/test_hawqstandby.py 039d109 

Diff: https://reviews.apache.org/r/45191/diff/


Testing
---

Yes, manually.


Thanks,

bhuvnesh chaudhary



Re: Review Request 44712: Intermittent YARN service check failures during and post EU

2016-03-23 Thread Andrew Onischuk

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/44712/#review125059
---


Ship it!




Ship It!

- Andrew Onischuk


On March 23, 2016, 5:04 p.m., Dmitro Lisnichenko wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/44712/
> ---
> 
> (Updated March 23, 2016, 5:04 p.m.)
> 
> 
> Review request for Ambari and Andrew Onischuk.
> 
> 
> Bugs: AMBARI-15389
> https://issues.apache.org/jira/browse/AMBARI-15389
> 
> 
> Repository: ambari
> 
> 
> Description
> ---
> 
> Build # - Ambari 2.2.1.1 - #63
> 
> Observed this issue in a couple of EU runs recently where YARN service check 
> reports failure
> a. In one test, the EU ran from HDP 2.3.4.0 to 2.4.0.0 and YARN service check 
> reported failure during EU itself; a retry of the operation led to service 
> check being successful
> 
> b. In another test post EU when YARN service check was run, it reported 
> failure; afterwards when I ran it again - success
> 
> Looks like there is some corner condition which causes this issue to be hit
> 
> {code}
> stderr:   /var/lib/ambari-agent/data/errors-822.txt
> 
> Traceback (most recent call last):
> File 
> "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py",
>  line 142, in <module>
> ServiceCheck().execute()
> File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 219, in execute
> method(env)
> File 
> "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py",
>  line 104, in service_check
> user=params.smokeuser,
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 70, in inner
> result = function(command, **kwargs)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 92, in checked_call
> tries=tries, try_sleep=try_sleep)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 140, in _call_wrapper
> result = _call(command, **kwargs_copy)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 291, in _call
> raise Fail(err_msg)
> resource_management.core.exceptions.Fail: Execution of '/usr/bin/kinit -kt 
> /etc/security/keytabs/smokeuser.headless.keytab ambari...@example.com; yarn 
> org.apache.hadoop.yarn.applications.distributedshell.Client -shell_command ls 
> -num_containers 1 -jar 
> /usr/hdp/current/hadoop-yarn-client/hadoop-yarn-applications-distributedshell.jar'
>  returned 2.  Hortonworks #
> This is MOTD message, added for testing in qe infra
> 16/03/03 02:33:51 INFO impl.TimelineClientImpl: Timeline service address: 
> http://host:8188/ws/v1/timeline/
> 16/03/03 02:33:51 INFO distributedshell.Client: Initializing Client
> 16/03/03 02:33:51 INFO distributedshell.Client: Running Client
> 16/03/03 02:33:51 INFO client.RMProxy: Connecting to ResourceManager at 
> host-9-5.test/127.0.0.254:8050
> 16/03/03 02:33:53 INFO distributedshell.Client: Got Cluster metric info from 
> ASM, numNodeManagers=3
> 16/03/03 02:33:53 INFO distributedshell.Client: Got Cluster node info from ASM
> 16/03/03 02:33:53 INFO distributedshell.Client: Got node report from ASM for, 
> nodeId=host:25454, nodeAddresshost:8042, nodeRackName/default-rack, 
> nodeNumContainers1
> 16/03/03 02:33:53 INFO distributedshell.Client: Got node report from ASM for, 
> nodeId=host-9-5.test:25454, nodeAddresshost-9-5.test:8042, 
> nodeRackName/default-rack, nodeNumContainers0
> 16/03/03 02:33:53 INFO distributedshell.Client: Got node report from ASM for, 
> nodeId=host-9-1.test:25454, nodeAddresshost-9-1.test:8042, 
> nodeRackName/default-rack, nodeNumContainers0
> 16/03/03 02:33:53 INFO distributedshell.Client: Queue info, 
> queueName=default, queueCurrentCapacity=0.08336, queueMaxCapacity=1.0, 
> queueApplicationCount=0, queueChildQueueCount=0
> 16/03/03 02:33:53 INFO distributedshell.Client: User ACL Info for Queue, 
> queueName=root, userAcl=SUBMIT_APPLICATIONS
> 16/03/03 02:33:53 INFO distributedshell.Client: User ACL Info for Queue, 
> queueName=default, userAcl=SUBMIT_APPLICATIONS
> 16/03/03 02:33:53 INFO distributedshell.Client: Max mem capabililty of 
> resources in this cluster 10240
> 16/03/03 02:33:53 INFO distributedshell.Client: Max virtual cores capabililty 
> of resources in this cluster 1
> 16/03/03 02:33:53 INFO distributedshell.Client: Copy App Master jar from 
> local filesystem and add to local environment
> 16/03/03 02:33:53 INFO distributedshell.Client: Set the environment for the 
> application master
> 16/03/03 02:33:53 INFO distributedshell.Client: Setting up app master command
> 16/03/03 02:33:53 INFO distributedshell.Client: Completed 

Re: Review Request 44712: Intermittent YARN service check failures during and post EU

2016-03-23 Thread Dmitro Lisnichenko

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/44712/
---

(Updated March 23, 2016, 7:04 p.m.)


Review request for Ambari and Andrew Onischuk.


Changes
---

Increase timeout even more to see if it fixes an issue
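
For context, the service check drives the distributed-shell job through 
Execute, so the bump is presumably along these lines (the values are 
illustrative, not the patch's actual numbers):

{code}
from resource_management.core.resources.system import Execute

# Sketch: give the distributed-shell client more headroom and a couple of
# retries to ride out slow ResourceManager startup after EU.
Execute(yarn_distributed_shell_cmd,  # assumed built earlier in the script
        user=params.smokeuser,
        path=params.execute_path,
        timeout=600,
        tries=3,
        try_sleep=10)
{code}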


Bugs: AMBARI-15389
https://issues.apache.org/jira/browse/AMBARI-15389


Repository: ambari


Description
---

Build # - Ambari 2.2.1.1 - #63

Observed this issue in a couple of EU runs recently where YARN service check 
reports failure
a. In one test, the EU ran from HDP 2.3.4.0 to 2.4.0.0 and YARN service check 
reported failure during EU itself; a retry of the operation led to service 
check being successful

b. In another test post EU when YARN service check was run, it reported 
failure; afterwards when I ran it again - success

Looks like there is some corner condition which causes this issue to be hit

{code}
stderr:   /var/lib/ambari-agent/data/errors-822.txt

Traceback (most recent call last):
File 
"/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py",
 line 142, in <module>
ServiceCheck().execute()
File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
 line 219, in execute
method(env)
File 
"/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py",
 line 104, in service_check
user=params.smokeuser,
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 
70, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 
92, in checked_call
tries=tries, try_sleep=try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 
140, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 
291, in _call
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of '/usr/bin/kinit -kt 
/etc/security/keytabs/smokeuser.headless.keytab ambari...@example.com; yarn 
org.apache.hadoop.yarn.applications.distributedshell.Client -shell_command ls 
-num_containers 1 -jar 
/usr/hdp/current/hadoop-yarn-client/hadoop-yarn-applications-distributedshell.jar'
 returned 2.  Hortonworks #
This is MOTD message, added for testing in qe infra
16/03/03 02:33:51 INFO impl.TimelineClientImpl: Timeline service address: 
http://host:8188/ws/v1/timeline/
16/03/03 02:33:51 INFO distributedshell.Client: Initializing Client
16/03/03 02:33:51 INFO distributedshell.Client: Running Client
16/03/03 02:33:51 INFO client.RMProxy: Connecting to ResourceManager at 
host-9-5.test/127.0.0.254:8050
16/03/03 02:33:53 INFO distributedshell.Client: Got Cluster metric info from 
ASM, numNodeManagers=3
16/03/03 02:33:53 INFO distributedshell.Client: Got Cluster node info from ASM
16/03/03 02:33:53 INFO distributedshell.Client: Got node report from ASM for, 
nodeId=host:25454, nodeAddresshost:8042, nodeRackName/default-rack, 
nodeNumContainers1
16/03/03 02:33:53 INFO distributedshell.Client: Got node report from ASM for, 
nodeId=host-9-5.test:25454, nodeAddresshost-9-5.test:8042, 
nodeRackName/default-rack, nodeNumContainers0
16/03/03 02:33:53 INFO distributedshell.Client: Got node report from ASM for, 
nodeId=host-9-1.test:25454, nodeAddresshost-9-1.test:8042, 
nodeRackName/default-rack, nodeNumContainers0
16/03/03 02:33:53 INFO distributedshell.Client: Queue info, queueName=default, 
queueCurrentCapacity=0.08336, queueMaxCapacity=1.0, 
queueApplicationCount=0, queueChildQueueCount=0
16/03/03 02:33:53 INFO distributedshell.Client: User ACL Info for Queue, 
queueName=root, userAcl=SUBMIT_APPLICATIONS
16/03/03 02:33:53 INFO distributedshell.Client: User ACL Info for Queue, 
queueName=default, userAcl=SUBMIT_APPLICATIONS
16/03/03 02:33:53 INFO distributedshell.Client: Max mem capabililty of 
resources in this cluster 10240
16/03/03 02:33:53 INFO distributedshell.Client: Max virtual cores capabililty 
of resources in this cluster 1
16/03/03 02:33:53 INFO distributedshell.Client: Copy App Master jar from local 
filesystem and add to local environment
16/03/03 02:33:53 INFO distributedshell.Client: Set the environment for the 
application master
16/03/03 02:33:53 INFO distributedshell.Client: Setting up app master command
16/03/03 02:33:53 INFO distributedshell.Client: Completed setting up app master 
command {{JAVA_HOME}}/bin/java -Xmx10m 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster 
--container_memory 10 --container_vcores 1 --num_containers 1 --priority 0 
1>/AppMaster.stdout 2>/AppMaster.stderr
16/03/03 02:33:53 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 290 
for ambari-qa on 127.0.0.235:8020
16/03/03 02:33:53 INFO distributedshell.Client: Got dt for 
hdfs://host-9-1.test:8020; Kind: 

Re: Review Request 45220: /tmp hdfs folder created with mode 0777

2016-03-23 Thread Andrew Onischuk

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/45220/#review125055
---


Ship it!




If unit tests passed, +1.

- Andrew Onischuk


On March 23, 2016, 4:40 p.m., Laszlo Puskas wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/45220/
> ---
> 
> (Updated March 23, 2016, 4:40 p.m.)
> 
> 
> Review request for Ambari, Andrew Onischuk, Sumit Mohanty, and Sebastian 
> Toader.
> 
> 
> Bugs: AMBARI-15531
> https://issues.apache.org/jira/browse/AMBARI-15531
> 
> 
> Repository: ambari
> 
> 
> Description
> ---
> 
> When a cluster is created via blueprint, the /tmp folder may be created 
> implicitly by various components, with permissions that prevent other 
> components from writing to it. In the specific case described by the linked 
> issue, the folder was created when starting the historyserver, and later on 
> the folder couldn't be written to by the hiveserver2. (Ideally the folder is 
> expected to be created when the namenode starts.)
> 
> The patch fixes the specific case described in the bug; however, a more 
> robust/generic solution is needed to properly sort out the problem.
> 
> 
> Diffs
> -
> 
>   
> ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/package/scripts/yarn.py
>  05e19cf 
> 
> Diff: https://reviews.apache.org/r/45220/diff/
> 
> 
> Testing
> ---
> 
> Unit tests in progress.
> Manual testing under way
> 
> 
> Thanks,
> 
> Laszlo Puskas
> 
>



Re: Review Request 45219: Unable to restart Falcon server

2016-03-23 Thread Dmitro Lisnichenko

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/45219/#review125054
---


Ship it!




Ship It!

- Dmitro Lisnichenko


On March 23, 2016, 6:14 p.m., Andrew Onischuk wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/45219/
> ---
> 
> (Updated March 23, 2016, 6:14 p.m.)
> 
> 
> Review request for Ambari and Dmitro Lisnichenko.
> 
> 
> Bugs: AMBARI-15534
> https://issues.apache.org/jira/browse/AMBARI-15534
> 
> 
> Repository: ambari
> 
> 
> Description
> ---
> 
> This is consistently noticed on WASB environment.
> 
> 
> Diffs
> -
> 
>   
> contrib/fast-hdfs-resource/src/main/java/org/apache/ambari/fast_hdfs_resource/Resource.java
>  9ef7660 
> 
> Diff: https://reviews.apache.org/r/45219/diff/
> 
> 
> Testing
> ---
> 
> mvn clean test
> 
> 
> Thanks,
> 
> Andrew Onischuk
> 
>



Review Request 45219: Unable to restart Falcon server

2016-03-23 Thread Andrew Onischuk

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/45219/
---

Review request for Ambari and Dmitro Lisnichenko.


Bugs: AMBARI-15534
https://issues.apache.org/jira/browse/AMBARI-15534


Repository: ambari


Description
---

This is consistently noticed in WASB environments.


Diffs
-

  
contrib/fast-hdfs-resource/src/main/java/org/apache/ambari/fast_hdfs_resource/Resource.java
 9ef7660 

Diff: https://reviews.apache.org/r/45219/diff/


Testing
---

mvn clean test


Thanks,

Andrew Onischuk



Re: Review Request 43049: Oozie should update war after adding Falcon

2016-03-23 Thread Dmitro Lisnichenko

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/43049/#review125051
---


Ship it!




Ship It!

- Dmitro Lisnichenko


On March 23, 2016, 6:04 p.m., Andrew Onischuk wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/43049/
> ---
> 
> (Updated March 23, 2016, 6:04 p.m.)
> 
> 
> Review request for Ambari and Dmitro Lisnichenko.
> 
> 
> Bugs: AMBARI-14863
> https://issues.apache.org/jira/browse/AMBARI-14863
> 
> 
> Repository: ambari
> 
> 
> Description
> ---
> 
> After adding Falcon, falcon-oozie-el-extension-*.jar is added to 
> oozie-server/libext, so the Oozie war should be updated.
> 
> 
> Diffs
> -
> 
>   
> ambari-common/src/main/python/resource_management/libraries/functions/oozie_prepare_war.py
>  PRE-CREATION 
>   
> ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie.py
>  5587380 
>   
> ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie_server_upgrade.py
>  d26b89d 
>   
> ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/params_linux.py
>  f92d90c 
>   ambari-server/src/main/resources/scripts/Ambaripreupload.py cbec3cf 
>   ambari-server/src/test/python/stacks/2.0.6/OOZIE/test_oozie_server.py 
> ba61b3d 
> 
> Diff: https://reviews.apache.org/r/43049/diff/
> 
> 
> Testing
> ---
> 
> mvn clean test
> 
> 
> Thanks,
> 
> Andrew Onischuk
> 
>
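
To make "updating the war" concrete, a rough sketch assuming the standard 
oozie-setup.sh entry point; the path, variable names, and user are 
illustrative, not the literal patch:

    # Hedged sketch: re-run Oozie's war preparation so that jars dropped
    # into libext (such as falcon-oozie-el-extension-*.jar) are packaged
    # into oozie.war. All names here are assumptions for illustration.
    from resource_management.core.resources.system import Execute
    from resource_management.libraries.functions.format import format

    import params  # Ambari script-local params module (assumed)

    oozie_setup_sh = "/usr/hdp/current/oozie-server/bin/oozie-setup.sh"  # illustrative
    Execute(format("{oozie_setup_sh} prepare-war"),
            user=params.oozie_user)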



Review Request 45218: Atlas Integration : Rename Atlas Configurations (2.5 stack definition)

2016-03-23 Thread Tom Beerbower

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/45218/
---

Review request for Ambari, John Speidel, Nate Cole, and Sumit Mohanty.


Bugs: AMBARI-15431
https://issues.apache.org/jira/browse/AMBARI-15431


Repository: ambari


Description
---

Move the changes for AMBARI-15431 from the 2.6 stack definition to the 2.5 stack definition.

Atlas configuration name application.properties has been changed to 
atlas-application.properties to avoid name conflicts with other services. 
See https://issues.apache.org/jira/browse/ATLAS-392.

Ambari scripts for Atlas currently use the configuration name 
application.properties for all stack levels. Stacks that include Atlas > 0.5 
should use the configuration name atlas-application.properties.


Diffs
-

  
ambari-server/src/main/resources/stacks/HDP/2.5/services/ATLAS/configuration/atlas-env.xml
 PRE-CREATION 
  ambari-server/src/main/resources/stacks/HDP/2.5/services/ATLAS/metainfo.xml 
66aea9d 
  
ambari-server/src/main/resources/stacks/HDP/2.6/services/ATLAS/configuration/atlas-env.xml
 42503b5 
  ambari-server/src/main/resources/stacks/HDP/2.6/services/ATLAS/metainfo.xml 
af1a047 
  ambari-server/src/test/python/stacks/2.5/ATLAS/test_atlas_server.py 
PRE-CREATION 
  ambari-server/src/test/python/stacks/2.5/configs/default.json PRE-CREATION 
  ambari-server/src/test/python/stacks/2.6/ATLAS/test_atlas_server.py 8e51ea0 
  ambari-server/src/test/python/stacks/2.6/configs/default.json 2e1bc68 

Diff: https://reviews.apache.org/r/45218/diff/


Testing
---

Manual test: install Atlas (HDP 2.5 stack); verify configuration.

mvn clean test

all pass


Thanks,

Tom Beerbower
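
To make the stack-level distinction concrete, a hedged sketch of how a script 
could pick the file name from the Atlas version, using compare_versions from 
resource_management; `atlas_version` and the cut-off check are illustrative:

    # Hedged sketch: stacks shipping Atlas > 0.5 use the renamed file to
    # avoid the application.properties name clash (see ATLAS-392).
    from resource_management.libraries.functions.version import compare_versions

    atlas_version = "0.7.0"  # illustrative value

    if compare_versions(atlas_version, "0.5") > 0:
        conf_file = "atlas-application.properties"
    else:
        conf_file = "application.properties"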



Re: Review Request 44725: After exporting blueprint from ranger enabled cluster ranger.service.https.attrib.keystore.pass is exported

2016-03-23 Thread Robert Levas


> On March 23, 2016, 8:31 a.m., Robert Levas wrote:
> > ambari-server/src/main/java/org/apache/ambari/server/controller/internal/BlueprintConfigurationProcessor.java,
> >  line 2723
> > 
> >
> > This is a great idea. I am surprised we haven't done this already.  
> > However, I don't see where this new filter class is being used, so how was 
> > the issue in the description addressed?
> 
> Robert Nettleton wrote:
> Hi Rob, the Blueprint export filters have been around for a while, check 
> out:
> 
> 
> org.apache.ambari.server.controller.internal.BlueprintConfigurationProcessor#shouldPropertyBeExcludedForBlueprintExport
> 
> Basically, this method iterates over the registered filters to determine 
> if a property should be excluded. 
> 
> Thanks.

Thanks for the clarification, Bob. Dropping my issue.


- Robert
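
The filter mechanism Bob describes is easy to picture; here is a hedged 
Python rendering of the idea (the real implementation is Java, in 
BlueprintConfigurationProcessor), with every name invented for illustration:

    # Hedged sketch of the export-filter idea: each registered filter is
    # a predicate, and a property is excluded from the exported blueprint
    # as soon as any filter rejects it. This is not Ambari's actual API.
    def passwords_filter(prop_name, prop_value):
        # reject properties whose names end with "pass"
        return not prop_name.endswith("pass")

    export_filters = [passwords_filter]

    def should_exclude_for_export(prop_name, prop_value):
        return any(not f(prop_name, prop_value) for f in export_filters)

    # ranger.service.https.attrib.keystore.pass would be excluded:
    print(should_exclude_for_export(
        "ranger.service.https.attrib.keystore.pass", "secret"))  # True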


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/44725/#review125010
---


On March 22, 2016, 9:40 p.m., Amruta Borkar wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/44725/
> ---
> 
> (Updated March 22, 2016, 9:40 p.m.)
> 
> 
> Review request for Ambari, Robert Levas and Robert Nettleton.
> 
> 
> Bugs: AMBARI-15338
> https://issues.apache.org/jira/browse/AMBARI-15338
> 
> 
> Repository: ambari
> 
> 
> Description
> ---
> 
> After exporting a blueprint from a Ranger-enabled cluster, 
> ranger.service.https.attrib.keystore.pass is included in the export and 
> needs to be removed before the same blueprint can be used to create 
> another cluster.
> Error shown when the same blueprint is used:
> { "status" : 400, "message" : "Blueprint configuration validation failed: 
> Secret references are not allowed in blueprints, replace following properties 
> with real passwords:\n Config:ranger-admin-site 
> Property:ranger.service.https.attrib.keystore.pass\n" }
> 
> 
> Diffs
> -
> 
>   
> ambari-server/src/main/java/org/apache/ambari/server/controller/internal/BlueprintConfigurationProcessor.java
>  4230862 
>   
> ambari-server/src/test/java/org/apache/ambari/server/controller/internal/BlueprintConfigurationProcessorTest.java
>  0f62b2c 
> 
> Diff: https://reviews.apache.org/r/44725/diff/
> 
> 
> Testing
> ---
> 
> Modified test cases to verify that properties ending with "pass" are 
> filtered, while properties that contain 'pass' elsewhere in the name are 
> not filtered.
> 
> 
> Thanks,
> 
> Amruta Borkar
> 
>



Re: Review Request 45215: HDFS Alerts for AMS Throw 'invalid literal for int() with base 10: '50.0''

2016-03-23 Thread Sid Wagle

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/45215/#review125036
---


Ship it!




Ship It!

- Sid Wagle


On March 23, 2016, 3:08 p.m., Jonathan Hurley wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/45215/
> ---
> 
> (Updated March 23, 2016, 3:08 p.m.)
> 
> 
> Review request for Ambari, Nate Cole and Sid Wagle.
> 
> 
> Bugs: AMBARI-15533
> https://issues.apache.org/jira/browse/AMBARI-15533
> 
> 
> Repository: ambari
> 
> 
> Description
> ---
> 
> SCRIPT alerts get stuck in UNKNOWN status with the response message 
> "invalid literal for int() with base 10: '50.0'".
> 
> The error is thrown only after a PUT alertDefinition call updates a few 
> parameters of the alert definition, since that call turns the numeric 
> values into strings.
> 
> The scripts need to safely cast their parameters; the fix is in the script 
> here.
> 
> 
> Diffs
> -
> 
>   
> ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/alerts/alert_metrics_deviation.py
>  f62c4a3 
> 
> Diff: https://reviews.apache.org/r/45215/diff/
> 
> 
> Testing
> ---
> 
> Deployed on a cluster exhibiting the cast problem.
> 
> 
> Thanks,
> 
> Jonathan Hurley
> 
>
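
The "safe cast" in question is the classic int("50.0") pitfall; a minimal 
sketch of the pattern, not the literal patch:

    # Hedged sketch: int("50.0") raises ValueError ("invalid literal for
    # int() with base 10: '50.0'") because int() does not parse float
    # strings. Casting through float first accepts "50" and "50.0" alike.
    def to_int(value, default=None):
        try:
            return int(float(value))
        except (TypeError, ValueError):
            return default

    print(to_int("50.0"))  # 50
    print(to_int("50"))    # 50
    print(to_int(None))    # None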



Re: Review Request 45208: Cleanup LDAP sync process

2016-03-23 Thread Oliver Szabo

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/45208/
---

(Updated March 23, 2016, 1:28 p.m.)


Review request for Ambari, Daniel Gergely, Robert Levas, and Sebastian Toader.


Changes
---

Fixed review issues; testing done (new changes tested manually with the 
AmbariLdapDataPopulatorTest class)


Bugs: AMBARI-15383
https://issues.apache.org/jira/browse/AMBARI-15383


Repository: ambari


Description
---

Clean up the LDAP sync process:
- speed up sync with "--all" (do not check nested groups recursively; Ambari 
gathers all of the groups first, so it is enough to just process them 
sequentially)
- fix an issue where uppercase member attributes (in ambari.properties) don't 
work well with the queries


Diffs (updated)
-

  
ambari-server/src/main/java/org/apache/ambari/server/security/ldap/AmbariLdapDataPopulator.java
 75df9cc 
  
ambari-server/src/test/java/org/apache/ambari/server/security/ldap/AmbariLdapDataPopulatorTest.java
 3ea 

Diff: https://reviews.apache.org/r/45208/diff/


Testing (updated)
---

Testing done.
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 1:02:31.664s
[INFO] Finished at: Wed Mar 23 13:45:38 CET 2016
[INFO] Final Memory: 38M/607M
[INFO] 


Thanks,

Oliver Szabo
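
On the uppercase-attribute point, a hedged sketch of the underlying idea: 
LDAP attribute names compare case-insensitively, so a configured name should 
be normalized before matching it against server results; every name below is 
illustrative:

    # Hedged sketch: an uppercase attribute name from ambari.properties
    # (e.g. "MEMBER") should still match the lowercased key an LDAP
    # server returns. Normalizing once avoids failed lookups.
    def get_attribute_values(entry_attributes, configured_name):
        wanted = configured_name.lower()
        for name, values in entry_attributes.items():
            if name.lower() == wanted:
                return values
        return []

    attrs = {"member": ["uid=alice,ou=people,dc=example,dc=com"]}
    print(get_attribute_values(attrs, "MEMBER"))  # finds the member DNs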



Re: Review Request 45208: Cleanup LDAP sync process

2016-03-23 Thread Robert Levas

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/45208/#review125019
---


Fix it, then Ship it!





ambari-server/src/main/java/org/apache/ambari/server/security/ldap/AmbariLdapDataPopulator.java
 (line 347)


Could `allMode` be renamed to something more self-explanatory? Maybe 
`recursive`, and then reverse the logic as needed.



ambari-server/src/main/java/org/apache/ambari/server/security/ldap/AmbariLdapDataPopulator.java
 (line 361)


You might want to reverse the clauses here, since the boolean check is 
faster than the `contains` check and may fail first.


- Robert Levas
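
The clause-reversal suggestion above is plain short-circuit evaluation; a 
tiny hedged sketch with illustrative names:

    # Hedged sketch: with `and`, Python evaluates left to right and stops
    # at the first false operand, so the cheap boolean flag should come
    # first and the costlier membership check second.
    def process_nested_group(name):
        print("processing %s" % name)

    recursive = False
    group_names = set(["admins", "devs"])
    name = "admins"

    # cheap flag first; the set lookup runs only when recursive is True
    if recursive and name in group_names:
        process_nested_group(name)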


On March 23, 2016, 7:45 a.m., Oliver Szabo wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/45208/
> ---
> 
> (Updated March 23, 2016, 7:45 a.m.)
> 
> 
> Review request for Ambari, Daniel Gergely, Robert Levas, and Sebastian Toader.
> 
> 
> Bugs: AMBARI-15383
> https://issues.apache.org/jira/browse/AMBARI-15383
> 
> 
> Repository: ambari
> 
> 
> Description
> ---
> 
> Clean up the LDAP sync process:
> - speed up sync with "--all" (do not check nested groups recursively; Ambari 
> gathers all of the groups first, so it is enough to just process them 
> sequentially)
> - fix an issue where uppercase member attributes (in ambari.properties) 
> don't work well with the queries
> 
> 
> Diffs
> -
> 
>   
> ambari-server/src/main/java/org/apache/ambari/server/security/ldap/AmbariLdapDataPopulator.java
>  75df9cc 
>   
> ambari-server/src/test/java/org/apache/ambari/server/security/ldap/AmbariLdapDataPopulatorTest.java
>  3ea 
> 
> Diff: https://reviews.apache.org/r/45208/diff/
> 
> 
> Testing
> ---
> 
> Testing is in progress...
> 
> 
> Thanks,
> 
> Oliver Szabo
> 
>