[jira] [Updated] (AMBARI-14714) [Umbrella] Multi Everything Architecture

2017-03-15 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-14714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-14714:
---
Description: 
*Multi Stack Services*
_Scenario: Deploy HDP & HDF services in same cluster_
- Deploy HDFS from HDP and Kafka, Storm, NiFI from HDF in the same cluster.

*Multiple Service Instances*
_Scenario: Multi service instances on same version_
- Cluster includes instance of ZooKeeper vX which is being used by HDFS, YARN.
- User wants to add instance of ZooKeeper vX which is being used by STORM and 
KAFKA

_Scenario: Multi service instances on different versions_
- Cluster includes instance of SPARK vX.
- User wants to add additional instance of SPARK vY.

*Multi Host Component Instances*
_Scenario: Multi component instances from a service instance on the same host_
- Single host with 128GB RAM, 16 (actual) Cores, 12*4TB disks.
- Single instance of KAFKA broker is unable to utilize all the resources on the 
host. 
- User wants to scale up performance by deploying multiple instances of the 
KAFKA brokers/host.

*Multi Cluster*
_Scenario: Manage multiple Hadoop clusters under single Ambari Server_
- Customer has multiple small Hadoop clusters and would like to manage and 
monitor them with a single Ambari Server instance.

*Multi Yarn Hosted Services*
_Scenario: HBase on YARN_
- Deploy second instance of HBase as a long running YARN service.
- Manage YARN hosted service similar to traditional hosted services. 
- First class support for Yarn hosted services.

_Scenario: Credit Fraud Detection YARN Assembly_
- YARN Assembly can have its own ZK, KAFKA etc.
- Manage YARN Assemblies as first-class citizen.







  was:
*Multi Stack Services*
Scenario: Deploy HDP & HDF services in same cluster
- Deploy HDFS from HDP and Kafka, Storm, NiFI from HDF in the same cluster.

*Multiple Service Instances*
Scenario: Multi service instances on same version
- Cluster includes instance of ZooKeeper vX which is being used by HDFS, YARN.
- User wants to add instance of ZooKeeper vX which is being used by STORM and 
KAFKA

Scenario: Multi service instances on different versions
- Cluster includes instance of SPARK vX.
- User wants to add additional instance of SPARK vY.

*Multi Host Component Instances*
Scenario: Multi component instances from a service instance on the same host
- Single host with 128GB RAM, 16 (actual) Cores, 12*4TB disks.
- Single instance of KAFKA broker is unable to utilize all the resources on the 
host. 
- User wants to scale up performance by deploying multiple instances of the 
KAFKA brokers/host.

*Multi Cluster*
Scenario: Manage multiple Hadoop clusters under single Ambari Server
- Customer has multiple small Hadoop clusters and would like to manage and 
monitor them with a single Ambari Server instance.

*Multi Yarn Hosted Services*
Scenario: HBase on YARN
- Deploy second instance of HBase as a long running YARN service.
- Manage YARN hosted service similar to traditional hosted services. 
- First class support for Yarn hosted services.

Scenario: Credit Fraud Detection YARN Assembly
- YARN Assembly can have its own ZK, KAFKA etc.
- Manage YARN Assemblies as first-class citizen.








> [Umbrella] Multi Everything Architecture
> 
>
> Key: AMBARI-14714
> URL: https://issues.apache.org/jira/browse/AMBARI-14714
> Project: Ambari
>  Issue Type: Epic
>  Components: ambari-agent, ambari-server, ambari-upgrade, ambari-web, 
> stacks
>Affects Versions: 2.5.0
>Reporter: Jeff Sposetti
>Assignee: Jayush Luniya
>Priority: Critical
> Fix For: 3.0.0
>
>
> *Multi Stack Services*
> _Scenario: Deploy HDP & HDF services in same cluster_
> - Deploy HDFS from HDP and Kafka, Storm, NiFI from HDF in the same cluster.
> *Multiple Service Instances*
> _Scenario: Multi service instances on same version_
> - Cluster includes instance of ZooKeeper vX which is being used by HDFS, YARN.
> - User wants to add instance of ZooKeeper vX which is being used by STORM and 
> KAFKA
> _Scenario: Multi service instances on different versions_
> - Cluster includes instance of SPARK vX.
> - User wants to add additional instance of SPARK vY.
> *Multi Host Component Instances*
> _Scenario: Multi component instances from a service instance on the same host_
> - Single host with 128GB RAM, 16 (actual) Cores, 12*4TB disks.
> - Single instance of KAFKA broker is unable to utilize all the resources on 
> the host. 
> - User wants to scale up performance by deploying multiple instances of the 
> KAFKA brokers/host.
> *Multi Cluster*
> _Scenario: Manage multiple Hadoop clusters under single Ambari Server_
> - Customer has multiple small Hadoop clusters and would like to manage and 
> monitor them with a single Ambari Server instance.
> *Multi Yarn Hosted Services*
> _Scenario: HBase on YARN_

[jira] [Updated] (AMBARI-19621) Mpack Based Operations Model

2017-03-15 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-19621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-19621:
---
Epic Name: Mpack V2  (was: V2Mpack)

> Mpack Based Operations Model
> 
>
> Key: AMBARI-19621
> URL: https://issues.apache.org/jira/browse/AMBARI-19621
> Project: Ambari
>  Issue Type: Epic
>  Components: ambari-agent, ambari-server, ambari-web
>Affects Versions: 3.0.0
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
> Fix For: 3.0.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (AMBARI-14714) [Umbrella] Multi Everything Architecture

2017-03-15 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-14714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-14714:
---
Epic Name: Multi Everything Architecture  (was: MultiEverythingArchitecture)

> [Umbrella] Multi Everything Architecture
> 
>
> Key: AMBARI-14714
> URL: https://issues.apache.org/jira/browse/AMBARI-14714
> Project: Ambari
>  Issue Type: Epic
>  Components: ambari-agent, ambari-server, ambari-upgrade, ambari-web, 
> stacks
>Affects Versions: 2.5.0
>Reporter: Jeff Sposetti
>Assignee: Jayush Luniya
>Priority: Critical
> Fix For: 3.0.0
>
>
> *Multi Stack Services*
> Scenario: Deploy HDP & HDF services in same cluster
> - Deploy HDFS from HDP and Kafka, Storm, NiFI from HDF in the same cluster.
> *Multiple Service Instances*
> Scenario: Multi service instances on same version
> - Cluster includes instance of ZooKeeper vX which is being used by HDFS, YARN.
> - User wants to add instance of ZooKeeper vX which is being used by STORM and 
> KAFKA
> Scenario: Multi service instances on different versions
> - Cluster includes instance of SPARK vX.
> - User wants to add additional instance of SPARK vY.
> *Multi Host Component Instances*
> Scenario: Multi component instances from a service instance on the same host
> - Single host with 128GB RAM, 16 (actual) Cores, 12*4TB disks.
> - Single instance of KAFKA broker is unable to utilize all the resources on 
> the host. 
> - User wants to scale up performance by deploying multiple instances of the 
> KAFKA brokers/host.
> *Multi Cluster*
> Scenario: Manage multiple Hadoop clusters under single Ambari Server
> - Customer has multiple small Hadoop clusters and would like to manage and 
> monitor them with a single Ambari Server instance.
> *Multi Yarn Hosted Services*
> Scenario: HBase on YARN
> - Deploy second instance of HBase as a long running YARN service.
> - Manage YARN hosted service similar to traditional hosted services. 
> - First class support for Yarn hosted services.
> Scenario: Credit Fraud Detection YARN Assembly
> - YARN Assembly can have its own ZK, KAFKA etc.
> - Manage YARN Assemblies as first-class citizen.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (AMBARI-20463) Multi Service Instance Support

2017-03-15 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-20463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-20463:
---
Epic Name: Multi Service Instance  (was: Multi Service Instance Support)

> Multi Service Instance Support
> --
>
> Key: AMBARI-20463
> URL: https://issues.apache.org/jira/browse/AMBARI-20463
> Project: Ambari
>  Issue Type: Epic
>  Components: ambari-agent, ambari-server, ambari-web
>Affects Versions: 3.0.0
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
> Fix For: 3.0.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (AMBARI-14714) [Umbrella] Multi Everything Architecture

2017-03-15 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-14714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-14714:
---
Epic Name: MultiEverythingArchitecture

> [Umbrella] Multi Everything Architecture
> 
>
> Key: AMBARI-14714
> URL: https://issues.apache.org/jira/browse/AMBARI-14714
> Project: Ambari
>  Issue Type: Epic
>  Components: ambari-agent, ambari-server, ambari-upgrade, ambari-web, 
> stacks
>Affects Versions: 2.5.0
>Reporter: Jeff Sposetti
>Assignee: Jayush Luniya
>Priority: Critical
> Fix For: 3.0.0
>
>
> *Multi Stack Services*
> Scenario: Deploy HDP & HDF services in same cluster
> - Deploy HDFS from HDP and Kafka, Storm, NiFI from HDF in the same cluster.
> *Multiple Service Instances*
> Scenario: Multi service instances on same version
> - Cluster includes instance of ZooKeeper vX which is being used by HDFS, YARN.
> - User wants to add instance of ZooKeeper vX which is being used by STORM and 
> KAFKA
> Scenario: Multi service instances on different versions
> - Cluster includes instance of SPARK vX.
> - User wants to add additional instance of SPARK vY.
> *Multi Host Component Instances*
> Scenario: Multi component instances from a service instance on the same host
> - Single host with 128GB RAM, 16 (actual) Cores, 12*4TB disks.
> - Single instance of KAFKA broker is unable to utilize all the resources on 
> the host. 
> - User wants to scale up performance by deploying multiple instances of the 
> KAFKA brokers/host.
> *Multi Cluster*
> Scenario: Manage multiple Hadoop clusters under single Ambari Server
> - Customer has multiple small Hadoop clusters and would like to manage and 
> monitor them with a single Ambari Server instance.
> *Multi Yarn Hosted Services*
> Scenario: HBase on YARN
> - Deploy second instance of HBase as a long running YARN service.
> - Manage YARN hosted service similar to traditional hosted services. 
> - First class support for Yarn hosted services.
> Scenario: Credit Fraud Detection YARN Assembly
> - YARN Assembly can have its own ZK, KAFKA etc.
> - Manage YARN Assemblies as first-class citizen.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (AMBARI-14714) [Umbrella] Multi Everything Architecture

2017-03-15 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-14714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-14714:
---
Component/s: ambari-web
 ambari-upgrade
 ambari-server
 ambari-agent

> [Umbrella] Multi Everything Architecture
> 
>
> Key: AMBARI-14714
> URL: https://issues.apache.org/jira/browse/AMBARI-14714
> Project: Ambari
>  Issue Type: Epic
>  Components: ambari-agent, ambari-server, ambari-upgrade, ambari-web, 
> stacks
>Affects Versions: 2.5.0
>Reporter: Jeff Sposetti
>Assignee: Jayush Luniya
>Priority: Critical
> Fix For: 3.0.0
>
>
> *Multi Stack Services*
> Scenario: Deploy HDP & HDF services in same cluster
> - Deploy HDFS from HDP and Kafka, Storm, NiFI from HDF in the same cluster.
> *Multiple Service Instances*
> Scenario: Multi service instances on same version
> - Cluster includes instance of ZooKeeper vX which is being used by HDFS, YARN.
> - User wants to add instance of ZooKeeper vX which is being used by STORM and 
> KAFKA
> Scenario: Multi service instances on different versions
> - Cluster includes instance of SPARK vX.
> - User wants to add additional instance of SPARK vY.
> *Multi Host Component Instances*
> Scenario: Multi component instances from a service instance on the same host
> - Single host with 128GB RAM, 16 (actual) Cores, 12*4TB disks.
> - Single instance of KAFKA broker is unable to utilize all the resources on 
> the host. 
> - User wants to scale up performance by deploying multiple instances of the 
> KAFKA brokers/host.
> *Multi Cluster*
> Scenario: Manage multiple Hadoop clusters under single Ambari Server
> - Customer has multiple small Hadoop clusters and would like to manage and 
> monitor them with a single Ambari Server instance.
> *Multi Yarn Hosted Services*
> Scenario: HBase on YARN
> - Deploy second instance of HBase as a long running YARN service.
> - Manage YARN hosted service similar to traditional hosted services. 
> - First class support for Yarn hosted services.
> Scenario: Credit Fraud Detection YARN Assembly
> - YARN Assembly can have its own ZK, KAFKA etc.
> - Manage YARN Assemblies as first-class citizen.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (AMBARI-14714) [Umbrella] Multi Everything Architecture

2017-03-15 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-14714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-14714:
---
Affects Version/s: (was: 2.4.0)
   2.5.0

> [Umbrella] Multi Everything Architecture
> 
>
> Key: AMBARI-14714
> URL: https://issues.apache.org/jira/browse/AMBARI-14714
> Project: Ambari
>  Issue Type: Epic
>  Components: ambari-agent, ambari-server, ambari-upgrade, ambari-web, 
> stacks
>Affects Versions: 2.5.0
>Reporter: Jeff Sposetti
>Assignee: Jayush Luniya
>Priority: Critical
> Fix For: 3.0.0
>
>
> *Multi Stack Services*
> Scenario: Deploy HDP & HDF services in same cluster
> - Deploy HDFS from HDP and Kafka, Storm, NiFI from HDF in the same cluster.
> *Multiple Service Instances*
> Scenario: Multi service instances on same version
> - Cluster includes instance of ZooKeeper vX which is being used by HDFS, YARN.
> - User wants to add instance of ZooKeeper vX which is being used by STORM and 
> KAFKA
> Scenario: Multi service instances on different versions
> - Cluster includes instance of SPARK vX.
> - User wants to add additional instance of SPARK vY.
> *Multi Host Component Instances*
> Scenario: Multi component instances from a service instance on the same host
> - Single host with 128GB RAM, 16 (actual) Cores, 12*4TB disks.
> - Single instance of KAFKA broker is unable to utilize all the resources on 
> the host. 
> - User wants to scale up performance by deploying multiple instances of the 
> KAFKA brokers/host.
> *Multi Cluster*
> Scenario: Manage multiple Hadoop clusters under single Ambari Server
> - Customer has multiple small Hadoop clusters and would like to manage and 
> monitor them with a single Ambari Server instance.
> *Multi Yarn Hosted Services*
> Scenario: HBase on YARN
> - Deploy second instance of HBase as a long running YARN service.
> - Manage YARN hosted service similar to traditional hosted services. 
> - First class support for Yarn hosted services.
> Scenario: Credit Fraud Detection YARN Assembly
> - YARN Assembly can have its own ZK, KAFKA etc.
> - Manage YARN Assemblies as first-class citizen.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (AMBARI-14714) [Umbrella] Multi Everything Architecture

2017-03-15 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-14714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-14714:
---
Description: 
*Multi Stack Services*
Scenario: Deploy HDP & HDF services in same cluster
- Deploy HDFS from HDP and Kafka, Storm, NiFI from HDF in the same cluster.

*Multiple Service Instances*
Scenario: Multi service instances on same version
- Cluster includes instance of ZooKeeper vX which is being used by HDFS, YARN.
- User wants to add instance of ZooKeeper vX which is being used by STORM and 
KAFKA

Scenario: Multi service instances on different versions
- Cluster includes instance of SPARK vX.
- User wants to add additional instance of SPARK vY.

*Multi Host Component Instances*
Scenario: Multi component instances from a service instance on the same host
- Single host with 128GB RAM, 16 (actual) Cores, 12*4TB disks.
- Single instance of KAFKA broker is unable to utilize all the resources on the 
host. 
- User wants to scale up performance by deploying multiple instances of the 
KAFKA brokers/host.

*Multi Cluster*
Scenario: Manage multiple Hadoop clusters under single Ambari Server
- Customer has multiple small Hadoop clusters and would like to manage and 
monitor them with a single Ambari Server instance.

*Multi Yarn Hosted Services*
Scenario: HBase on YARN
- Deploy second instance of HBase as a long running YARN service.
- Manage YARN hosted service similar to traditional hosted services. 
- First class support for Yarn hosted services.

Scenario: Credit Fraud Detection YARN Assembly
- YARN Assembly can have its own ZK, KAFKA etc.
- Manage YARN Assemblies as first-class citizen.







  was:
*Multi Stack Services*
Scenario: Deploy HDP & HDF services in same cluster
- Deploy HDFS from HDP and Kafka, Storm, NiFI from HDF in the same cluster.

*Multiple Service Instances*
Scenario: Multi service instances on same version
- Cluster includes instance of ZooKeeper vX which is being used by HDFS, YARN.
- User wants to add instance of ZooKeeper vX which is being used by STORM and 
KAFKA

Scenario: Multi service instances on different versions
- Cluster includes instance of SPARK vX.
- User wants to add additional instance of SPARK vY.

*Multi Host Component Instances*
Scenario: Multi component instances from a service instance on the same host
- Single host with 128GB RAM, 16 (actual) Cores, 12*4TB disks.
- Single instance of KAFKA broker is unable to utilize all the resources on the 
host. 
- User wants to scale up performance by deploying multiple instances of the 
KAFKA brokers/host.

*Multi Cluster*
Scenario: Manage multiple Hadoop clusters under single Ambari Server
- Customer has multiple small Hadoop clusters and would like to manage and 
monitor them with a single Ambari Server instance.

*Multi Yarn Hosted Services*
Scenario: HBase on YARN
- Deploy second instance of HBase as a long running YARN service.
- Manage YARN hosted service similar to traditional hosted services. 
- First class support for Yarn hosted services.
Scenario: Credit Fraud Detection YARN Assembly
- YARN Assembly can have its own ZK, KAFKA etc.
- Manage YARN Assemblies as first-class citizen.








> [Umbrella] Multi Everything Architecture
> 
>
> Key: AMBARI-14714
> URL: https://issues.apache.org/jira/browse/AMBARI-14714
> Project: Ambari
>  Issue Type: Epic
>  Components: stacks
>Affects Versions: 2.4.0
>Reporter: Jeff Sposetti
>Assignee: Jayush Luniya
>Priority: Critical
> Fix For: 3.0.0
>
>
> *Multi Stack Services*
> Scenario: Deploy HDP & HDF services in same cluster
> - Deploy HDFS from HDP and Kafka, Storm, NiFI from HDF in the same cluster.
> *Multiple Service Instances*
> Scenario: Multi service instances on same version
> - Cluster includes instance of ZooKeeper vX which is being used by HDFS, YARN.
> - User wants to add instance of ZooKeeper vX which is being used by STORM and 
> KAFKA
> Scenario: Multi service instances on different versions
> - Cluster includes instance of SPARK vX.
> - User wants to add additional instance of SPARK vY.
> *Multi Host Component Instances*
> Scenario: Multi component instances from a service instance on the same host
> - Single host with 128GB RAM, 16 (actual) Cores, 12*4TB disks.
> - Single instance of KAFKA broker is unable to utilize all the resources on 
> the host. 
> - User wants to scale up performance by deploying multiple instances of the 
> KAFKA brokers/host.
> *Multi Cluster*
> Scenario: Manage multiple Hadoop clusters under single Ambari Server
> - Customer has multiple small Hadoop clusters and would like to manage and 
> monitor them with a single Ambari Server instance.
> *Multi Yarn Hosted Services*
> Scenario: HBase on YARN
> - Deploy second instance of HBase as a long running YARN service.
> - Manage YARN 

[jira] [Updated] (AMBARI-14714) [Umbrella] Multi Everything Architecture

2017-03-15 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-14714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-14714:
---
Description: 
*Multi Stack Services*
Scenario: Deploy HDP & HDF services in same cluster
- Deploy HDFS from HDP and Kafka, Storm, NiFI from HDF in the same cluster.

*Multiple Service Instances*
Scenario: Multi service instances on same version
- Cluster includes instance of ZooKeeper vX which is being used by HDFS, YARN.
- User wants to add instance of ZooKeeper vX which is being used by STORM and 
KAFKA
Scenario: Multi service instances on different versions
- Cluster includes instance of SPARK vX.
- User wants to add additional instance of SPARK vY.

*Multi Host Component Instances*
Scenario: Multi component instances from a service instance on the same host
- Single host with 128GB RAM, 16 (actual) Cores, 12*4TB disks.
- Single instance of KAFKA broker is unable to utilize all the resources on the 
host. 
- User wants to scale up performance by deploying multiple instances of the 
KAFKA brokers/host.

*Multi Cluster*
Scenario: Manage multiple Hadoop clusters under single Ambari Server
- Customer has multiple small Hadoop clusters and would like to manage and 
monitor them with a single Ambari Server instance.

*Multi Yarn Hosted Services*
Scenario: HBase on YARN
- Deploy second instance of HBase as a long running YARN service.
- Manage YARN hosted service similar to traditional hosted services. 
- First class support for Yarn hosted services.
Scenario: Credit Fraud Detection YARN Assembly
- YARN Assembly can have its own ZK, KAFKA etc.
- Manage YARN Assemblies as first-class citizen.







  was:
*Multi Stack Services*
# Scenario: Deploy HDP & HDF services in same cluster
- Deploy HDF services in an HDP cluster (Nifi from HDF, HDFS from HDP stack).

*Multiple Service Instances*
# Scenario: Multi service instances on same version
- Cluster includes instance of ZooKeeper vX which is being used by HDFS, YARN.
- User wants to add instance of ZooKeeper vX which is being used by STORM and 
KAFKA
# Scenario: Multi service instances on different versions
- Cluster includes instance of SPARK vX.
- User wants to add additional instance of SPARK vY.


# Deploy cluster with multiple instances of a service
# 
Provide an ability to handle multiple instances of a Service in a given 
cluster. In addition, provide ability for a Stack definition to handle multiple 
versions of a given Service (which than can have 0 or more instances in a given 
cluster).


> [Umbrella] Multi Everything Architecture
> 
>
> Key: AMBARI-14714
> URL: https://issues.apache.org/jira/browse/AMBARI-14714
> Project: Ambari
>  Issue Type: Epic
>  Components: stacks
>Affects Versions: 2.4.0
>Reporter: Jeff Sposetti
>Assignee: Jayush Luniya
>Priority: Critical
> Fix For: 3.0.0
>
>
> *Multi Stack Services*
> Scenario: Deploy HDP & HDF services in same cluster
> - Deploy HDFS from HDP and Kafka, Storm, NiFI from HDF in the same cluster.
> *Multiple Service Instances*
> Scenario: Multi service instances on same version
> - Cluster includes instance of ZooKeeper vX which is being used by HDFS, YARN.
> - User wants to add instance of ZooKeeper vX which is being used by STORM and 
> KAFKA
> Scenario: Multi service instances on different versions
> - Cluster includes instance of SPARK vX.
> - User wants to add additional instance of SPARK vY.
> *Multi Host Component Instances*
> Scenario: Multi component instances from a service instance on the same host
> - Single host with 128GB RAM, 16 (actual) Cores, 12*4TB disks.
> - Single instance of KAFKA broker is unable to utilize all the resources on 
> the host. 
> - User wants to scale up performance by deploying multiple instances of the 
> KAFKA brokers/host.
> *Multi Cluster*
> Scenario: Manage multiple Hadoop clusters under single Ambari Server
> - Customer has multiple small Hadoop clusters and would like to manage and 
> monitor them with a single Ambari Server instance.
> *Multi Yarn Hosted Services*
> Scenario: HBase on YARN
> - Deploy second instance of HBase as a long running YARN service.
> - Manage YARN hosted service similar to traditional hosted services. 
> - First class support for Yarn hosted services.
> Scenario: Credit Fraud Detection YARN Assembly
> - YARN Assembly can have its own ZK, KAFKA etc.
> - Manage YARN Assemblies as first-class citizen.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (AMBARI-14714) [Umbrella] Multi Everything Architecture

2017-03-15 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-14714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-14714:
---
Description: 
*Multi Stack Services*
Scenario: Deploy HDP & HDF services in same cluster
- Deploy HDFS from HDP and Kafka, Storm, NiFI from HDF in the same cluster.

*Multiple Service Instances*
Scenario: Multi service instances on same version
- Cluster includes instance of ZooKeeper vX which is being used by HDFS, YARN.
- User wants to add instance of ZooKeeper vX which is being used by STORM and 
KAFKA

Scenario: Multi service instances on different versions
- Cluster includes instance of SPARK vX.
- User wants to add additional instance of SPARK vY.

*Multi Host Component Instances*
Scenario: Multi component instances from a service instance on the same host
- Single host with 128GB RAM, 16 (actual) Cores, 12*4TB disks.
- Single instance of KAFKA broker is unable to utilize all the resources on the 
host. 
- User wants to scale up performance by deploying multiple instances of the 
KAFKA brokers/host.

*Multi Cluster*
Scenario: Manage multiple Hadoop clusters under single Ambari Server
- Customer has multiple small Hadoop clusters and would like to manage and 
monitor them with a single Ambari Server instance.

*Multi Yarn Hosted Services*
Scenario: HBase on YARN
- Deploy second instance of HBase as a long running YARN service.
- Manage YARN hosted service similar to traditional hosted services. 
- First class support for Yarn hosted services.
Scenario: Credit Fraud Detection YARN Assembly
- YARN Assembly can have its own ZK, KAFKA etc.
- Manage YARN Assemblies as first-class citizen.







  was:
*Multi Stack Services*
Scenario: Deploy HDP & HDF services in same cluster
- Deploy HDFS from HDP and Kafka, Storm, NiFI from HDF in the same cluster.

*Multiple Service Instances*
Scenario: Multi service instances on same version
- Cluster includes instance of ZooKeeper vX which is being used by HDFS, YARN.
- User wants to add instance of ZooKeeper vX which is being used by STORM and 
KAFKA
Scenario: Multi service instances on different versions
- Cluster includes instance of SPARK vX.
- User wants to add additional instance of SPARK vY.

*Multi Host Component Instances*
Scenario: Multi component instances from a service instance on the same host
- Single host with 128GB RAM, 16 (actual) Cores, 12*4TB disks.
- Single instance of KAFKA broker is unable to utilize all the resources on the 
host. 
- User wants to scale up performance by deploying multiple instances of the 
KAFKA brokers/host.

*Multi Cluster*
Scenario: Manage multiple Hadoop clusters under single Ambari Server
- Customer has multiple small Hadoop clusters and would like to manage and 
monitor them with a single Ambari Server instance.

*Multi Yarn Hosted Services*
Scenario: HBase on YARN
- Deploy second instance of HBase as a long running YARN service.
- Manage YARN hosted service similar to traditional hosted services. 
- First class support for Yarn hosted services.
Scenario: Credit Fraud Detection YARN Assembly
- YARN Assembly can have its own ZK, KAFKA etc.
- Manage YARN Assemblies as first-class citizen.








> [Umbrella] Multi Everything Architecture
> 
>
> Key: AMBARI-14714
> URL: https://issues.apache.org/jira/browse/AMBARI-14714
> Project: Ambari
>  Issue Type: Epic
>  Components: stacks
>Affects Versions: 2.4.0
>Reporter: Jeff Sposetti
>Assignee: Jayush Luniya
>Priority: Critical
> Fix For: 3.0.0
>
>
> *Multi Stack Services*
> Scenario: Deploy HDP & HDF services in same cluster
> - Deploy HDFS from HDP and Kafka, Storm, NiFI from HDF in the same cluster.
> *Multiple Service Instances*
> Scenario: Multi service instances on same version
> - Cluster includes instance of ZooKeeper vX which is being used by HDFS, YARN.
> - User wants to add instance of ZooKeeper vX which is being used by STORM and 
> KAFKA
> Scenario: Multi service instances on different versions
> - Cluster includes instance of SPARK vX.
> - User wants to add additional instance of SPARK vY.
> *Multi Host Component Instances*
> Scenario: Multi component instances from a service instance on the same host
> - Single host with 128GB RAM, 16 (actual) Cores, 12*4TB disks.
> - Single instance of KAFKA broker is unable to utilize all the resources on 
> the host. 
> - User wants to scale up performance by deploying multiple instances of the 
> KAFKA brokers/host.
> *Multi Cluster*
> Scenario: Manage multiple Hadoop clusters under single Ambari Server
> - Customer has multiple small Hadoop clusters and would like to manage and 
> monitor them with a single Ambari Server instance.
> *Multi Yarn Hosted Services*
> Scenario: HBase on YARN
> - Deploy second instance of HBase as a long running YARN service.
> - Manage YARN 

[jira] [Updated] (AMBARI-14714) [Umbrella] Multi Everything Architecture

2017-03-15 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-14714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-14714:
---
Description: 
*Multi Stack Services*
# Scenario: Deploy HDP & HDF services in same cluster
- Deploy HDF services in an HDP cluster (Nifi from HDF, HDFS from HDP stack).

*Multiple Service Instances*
# Scenario: Multi service instances on same version
- Cluster includes instance of ZooKeeper vX which is being used by HDFS, YARN.
- User wants to add instance of ZooKeeper vX which is being used by STORM and 
KAFKA
# Scenario: Multi service instances on different versions
- Cluster includes instance of SPARK vX.
- User wants to add additional instance of SPARK vY.


# Deploy cluster with multiple instances of a service
# 
Provide an ability to handle multiple instances of a Service in a given 
cluster. In addition, provide ability for a Stack definition to handle multiple 
versions of a given Service (which than can have 0 or more instances in a given 
cluster).

  was:Provide an ability to handle multiple instances of a Service in a given 
cluster. In addition, provide ability for a Stack definition to handle multiple 
versions of a given Service (which than can have 0 or more instances in a given 
cluster).


> [Umbrella] Multi Everything Architecture
> 
>
> Key: AMBARI-14714
> URL: https://issues.apache.org/jira/browse/AMBARI-14714
> Project: Ambari
>  Issue Type: Epic
>  Components: stacks
>Affects Versions: 2.4.0
>Reporter: Jeff Sposetti
>Assignee: Jayush Luniya
>Priority: Critical
> Fix For: 3.0.0
>
>
> *Multi Stack Services*
> # Scenario: Deploy HDP & HDF services in same cluster
> - Deploy HDF services in an HDP cluster (Nifi from HDF, HDFS from HDP stack).
> *Multiple Service Instances*
> # Scenario: Multi service instances on same version
> - Cluster includes instance of ZooKeeper vX which is being used by HDFS, YARN.
> - User wants to add instance of ZooKeeper vX which is being used by STORM and 
> KAFKA
> # Scenario: Multi service instances on different versions
> - Cluster includes instance of SPARK vX.
> - User wants to add additional instance of SPARK vY.
> # Deploy cluster with multiple instances of a service
> # 
> Provide an ability to handle multiple instances of a Service in a given 
> cluster. In addition, provide ability for a Stack definition to handle 
> multiple versions of a given Service (which than can have 0 or more instances 
> in a given cluster).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (AMBARI-20463) Multi Service Instance Support

2017-03-15 Thread Jayush Luniya (JIRA)
Jayush Luniya created AMBARI-20463:
--

 Summary: Multi Service Instance Support
 Key: AMBARI-20463
 URL: https://issues.apache.org/jira/browse/AMBARI-20463
 Project: Ambari
  Issue Type: Epic
  Components: ambari-agent, ambari-server, ambari-web
Affects Versions: 3.0.0
Reporter: Jayush Luniya
Assignee: Jayush Luniya
 Fix For: 3.0.0






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (AMBARI-14714) [Umbrella] Multi Everything Architecture

2017-03-15 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-14714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-14714:
---
Summary: [Umbrella] Multi Everything Architecture  (was: [Umbrella] Multi 
Instance Architecture)

> [Umbrella] Multi Everything Architecture
> 
>
> Key: AMBARI-14714
> URL: https://issues.apache.org/jira/browse/AMBARI-14714
> Project: Ambari
>  Issue Type: Epic
>  Components: stacks
>Affects Versions: 2.4.0
>Reporter: Jeff Sposetti
>Assignee: Jayush Luniya
>Priority: Critical
> Fix For: 3.0.0
>
>
> Provide an ability to handle multiple instances of a Service in a given 
> cluster. In addition, provide ability for a Stack definition to handle 
> multiple versions of a given Service (which than can have 0 or more instances 
> in a given cluster).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (AMBARI-19429) Create an ODPi stack definition

2017-03-14 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-19429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925568#comment-15925568
 ] 

Jayush Luniya commented on AMBARI-19429:


[~rshaposhnik]
Please can you apply the patch 
https://issues.apache.org/jira/secure/attachment/12858833/AMBARI-19429-mpack_trunk.patch,
 build the ODPi mpack and try to install ODPi with the built ODPi stack

Instructions to build and deploy ODPi mpack:

*Building ODPi Management Pack*
{code}
1. cd contrib/management-packs/odpi-ambari-mpack
2. mvn clean package
3. ls -lh target/odpi-ambari-mpack-1.0.0.0-SNAPSHOT.tar.gz
{code}
The ODPi Mpack is built at target/odpi-ambari-mpack-1.0.0.0-SNAPSHOT.tar.gz

*Installing ODPi Managment Pack*
{code}
yum install ambari-server
ambari-server install-mpack 
--mpack=/path/to/odpi-ambari-mpack-1.0.0.0-SNAPSHOT.tar.gz --purge --verbose
ambari-server setup
ambari-server start
{code}
Installing the ODPi mpack with --purge flag will remove the HDP stack 
definition and slipstream in the ODPi stack definition. When you log into 
Ambari Web you will only see ODPi 2.0 stack as an option. 




> Create an ODPi stack definition
> ---
>
> Key: AMBARI-19429
> URL: https://issues.apache.org/jira/browse/AMBARI-19429
> Project: Ambari
>  Issue Type: Improvement
>  Components: stacks
>Affects Versions: 2.4.2
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Fix For: trunk
>
> Attachments: AMBARI-19429-mpack_trunk.patch, AMBARI-19429.patch2.gz, 
> AMBARI-19429.patch.gz, AMBARI-19429_trunk.patch
>
>
> ODPi is a nonprofit organization committed to simplification & 
> standardization of the big data ecosystem with common reference 
> specifications and test suites. As part of its mission, ODPi has been 
> developing a series of specifications for how to integrate upstream Apache 
> projects into the coherent platform. Part of this standardization effort is 
> maintenance of the ODPi core stack definition which today includes:
>* Apache Zookeeper
>* Apache Hadoop
>* Apache Hive
> and has been maintained as a custom stack on ODPi side:
> 
> https://github.com/odpi/bigtop/tree/odpi-master/bigtop-packages/src/common/ambari/ODPi/1.0
> In conjunction with merge effort for Apache Bigtop BIGTOP-2666 I'd like to 
> propose that instead of migrating the stack definition to Bigtop, we should 
> actually migrate it to Ambari.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (AMBARI-19429) Create an ODPi stack definition

2017-03-14 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-19429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925568#comment-15925568
 ] 

Jayush Luniya edited comment on AMBARI-19429 at 3/15/17 5:48 AM:
-

[~rshaposhnik]
Please can you apply the patch 
https://issues.apache.org/jira/secure/attachment/12858833/AMBARI-19429-mpack_trunk.patch,
 build the ODPi mpack and try to install ODPi with the built ODPi stack

Instructions to build and deploy ODPi mpack:

*Building ODPi Management Pack*
{code}
1. cd contrib/management-packs/odpi-ambari-mpack
2. mvn clean package
3. ls -lh target/odpi-ambari-mpack-1.0.0.0-SNAPSHOT.tar.gz
{code}
The ODPi Mpack is built at target/odpi-ambari-mpack-1.0.0.0-SNAPSHOT.tar.gz

*Installing ODPi Managment Pack*
{code}
1. yum install ambari-server
2. ambari-server install-mpack 
--mpack=/path/to/odpi-ambari-mpack-1.0.0.0-SNAPSHOT.tar.gz --purge --verbose
3. ambari-server setup
4. ambari-server start
{code}
Installing the ODPi mpack with --purge flag will remove the HDP stack 
definition and slipstream in the ODPi stack definition. When you log into 
Ambari Web you will only see ODPi 2.0 stack as an option. 





was (Author: jluniya):
[~rshaposhnik]
Please can you apply the patch 
https://issues.apache.org/jira/secure/attachment/12858833/AMBARI-19429-mpack_trunk.patch,
 build the ODPi mpack and try to install ODPi with the built ODPi stack

Instructions to build and deploy ODPi mpack:

*Building ODPi Management Pack*
{code}
1. cd contrib/management-packs/odpi-ambari-mpack
2. mvn clean package
3. ls -lh target/odpi-ambari-mpack-1.0.0.0-SNAPSHOT.tar.gz
{code}
The ODPi Mpack is built at target/odpi-ambari-mpack-1.0.0.0-SNAPSHOT.tar.gz

*Installing ODPi Managment Pack*
{code}
yum install ambari-server
ambari-server install-mpack 
--mpack=/path/to/odpi-ambari-mpack-1.0.0.0-SNAPSHOT.tar.gz --purge --verbose
ambari-server setup
ambari-server start
{code}
Installing the ODPi mpack with --purge flag will remove the HDP stack 
definition and slipstream in the ODPi stack definition. When you log into 
Ambari Web you will only see ODPi 2.0 stack as an option. 




> Create an ODPi stack definition
> ---
>
> Key: AMBARI-19429
> URL: https://issues.apache.org/jira/browse/AMBARI-19429
> Project: Ambari
>  Issue Type: Improvement
>  Components: stacks
>Affects Versions: 2.4.2
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Fix For: trunk
>
> Attachments: AMBARI-19429-mpack_trunk.patch, AMBARI-19429.patch2.gz, 
> AMBARI-19429.patch.gz, AMBARI-19429_trunk.patch
>
>
> ODPi is a nonprofit organization committed to simplification & 
> standardization of the big data ecosystem with common reference 
> specifications and test suites. As part of its mission, ODPi has been 
> developing a series of specifications for how to integrate upstream Apache 
> projects into the coherent platform. Part of this standardization effort is 
> maintenance of the ODPi core stack definition which today includes:
>* Apache Zookeeper
>* Apache Hadoop
>* Apache Hive
> and has been maintained as a custom stack on ODPi side:
> 
> https://github.com/odpi/bigtop/tree/odpi-master/bigtop-packages/src/common/ambari/ODPi/1.0
> In conjunction with merge effort for Apache Bigtop BIGTOP-2666 I'd like to 
> propose that instead of migrating the stack definition to Bigtop, we should 
> actually migrate it to Ambari.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (AMBARI-19429) Create an ODPi stack definition

2017-03-14 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-19429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-19429:
---
Attachment: AMBARI-19429-mpack_trunk.patch

> Create an ODPi stack definition
> ---
>
> Key: AMBARI-19429
> URL: https://issues.apache.org/jira/browse/AMBARI-19429
> Project: Ambari
>  Issue Type: Improvement
>  Components: stacks
>Affects Versions: 2.4.2
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Fix For: trunk
>
> Attachments: AMBARI-19429-mpack_trunk.patch, AMBARI-19429.patch2.gz, 
> AMBARI-19429.patch.gz, AMBARI-19429_trunk.patch
>
>
> ODPi is a nonprofit organization committed to simplification & 
> standardization of the big data ecosystem with common reference 
> specifications and test suites. As part of its mission, ODPi has been 
> developing a series of specifications for how to integrate upstream Apache 
> projects into the coherent platform. Part of this standardization effort is 
> maintenance of the ODPi core stack definition which today includes:
>* Apache Zookeeper
>* Apache Hadoop
>* Apache Hive
> and has been maintained as a custom stack on ODPi side:
> 
> https://github.com/odpi/bigtop/tree/odpi-master/bigtop-packages/src/common/ambari/ODPi/1.0
> In conjunction with merge effort for Apache Bigtop BIGTOP-2666 I'd like to 
> propose that instead of migrating the stack definition to Bigtop, we should 
> actually migrate it to Ambari.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (AMBARI-20331) testSafeCreateCommandNotExisting UT fails

2017-03-06 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-20331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya reassigned AMBARI-20331:
--

Assignee: Madhuvanthi Radhakrishnan

> testSafeCreateCommandNotExisting UT fails
> -
>
> Key: AMBARI-20331
> URL: https://issues.apache.org/jira/browse/AMBARI-20331
> Project: Ambari
>  Issue Type: Bug
>  Components: test
>Reporter: Yesha Vora
>Assignee: Madhuvanthi Radhakrishnan
>
> testSafeCreateCommandNotExisting is failing with below error.
> {code}
> Error Message
> org/apache/commons/io/Charsets
> Stacktrace
> java.lang.NoClassDefFoundError: org/apache/commons/io/Charsets
>   at 
> org.apache.ambari.server.credentialapi.CredentialUtilTest.executeCommand(CredentialUtilTest.java:215)
>   at 
> org.apache.ambari.server.credentialapi.CredentialUtilTest.testSafeCreateCommandNotExisting(CredentialUtilTest.java:350)
> Caused by: java.lang.ClassNotFoundException: org.apache.commons.io.Charsets
>   at 
> org.apache.ambari.server.credentialapi.CredentialUtilTest.executeCommand(CredentialUtilTest.java:215)
>   at 
> org.apache.ambari.server.credentialapi.CredentialUtilTest.testSafeCreateCommandNotExisting(CredentialUtilTest.java:350){code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (AMBARI-16880) Add common log rotation settings to Smart Config

2017-03-06 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-16880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya resolved AMBARI-16880.

Resolution: Fixed

> Add common log rotation settings to Smart Config
> 
>
> Key: AMBARI-16880
> URL: https://issues.apache.org/jira/browse/AMBARI-16880
> Project: Ambari
>  Issue Type: New Feature
>Reporter: Paul Codding
>Assignee: Madhuvanthi Radhakrishnan
> Fix For: 2.5.0
>
>
> Common log4j configurations for the rolling file appender used by components 
> like Hive, HDFS, Kafka, etc. should be easily configurable as Smart Config 
> fields.  
> Specifically configurations like:
> * MaxBackupIndex
> * MaxFileSize
> These fields should be exposed in each component in a Logging section that 
> has input fields such as:
> * Maximum Backup File Size: 100 MB
> * Maximum Number of Backup Files: 10



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (AMBARI-16880) Add common log rotation settings to Smart Config

2017-03-06 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-16880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-16880:
---
Fix Version/s: (was: 3.0.0)
   2.5.0

> Add common log rotation settings to Smart Config
> 
>
> Key: AMBARI-16880
> URL: https://issues.apache.org/jira/browse/AMBARI-16880
> Project: Ambari
>  Issue Type: New Feature
>Reporter: Paul Codding
>Assignee: Madhuvanthi Radhakrishnan
> Fix For: 2.5.0
>
>
> Common log4j configurations for the rolling file appender used by components 
> like Hive, HDFS, Kafka, etc. should be easily configurable as Smart Config 
> fields.  
> Specifically configurations like:
> * MaxBackupIndex
> * MaxFileSize
> These fields should be exposed in each component in a Logging section that 
> has input fields such as:
> * Maximum Backup File Size: 100 MB
> * Maximum Number of Backup Files: 10



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (AMBARI-16880) Add common log rotation settings to Smart Config

2017-03-06 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-16880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya reassigned AMBARI-16880:
--

Assignee: Madhuvanthi Radhakrishnan

> Add common log rotation settings to Smart Config
> 
>
> Key: AMBARI-16880
> URL: https://issues.apache.org/jira/browse/AMBARI-16880
> Project: Ambari
>  Issue Type: New Feature
>Reporter: Paul Codding
>Assignee: Madhuvanthi Radhakrishnan
> Fix For: 3.0.0, 2.5.0
>
>
> Common log4j configurations for the rolling file appender used by components 
> like Hive, HDFS, Kafka, etc. should be easily configurable as Smart Config 
> fields.  
> Specifically configurations like:
> * MaxBackupIndex
> * MaxFileSize
> These fields should be exposed in each component in a Logging section that 
> has input fields such as:
> * Maximum Backup File Size: 100 MB
> * Maximum Number of Backup Files: 10



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (AMBARI-20264) HiveServer2 Interactive start failed after WE enable

2017-03-01 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-20264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891713#comment-15891713
 ] 

Jayush Luniya commented on AMBARI-20264:


Trunk
commit 7cf7fcf556bed2ecc765828a9e2954908fdaf8a8
Author: Jayush Luniya 
Date:   Wed Mar 1 22:52:51 2017 -0800

AMBARI-20264: HiveServer2 Interactive start failed after WE enable (jluniya)

branch-2.5
commit eb8526d5fdab12616cc229874c5a1fa2f58f07c9
Author: Jayush Luniya 
Date:   Wed Mar 1 22:52:51 2017 -0800

AMBARI-20264: HiveServer2 Interactive start failed after WE enable (jluniya)

> HiveServer2 Interactive start failed after WE enable
> 
>
> Key: AMBARI-20264
> URL: https://issues.apache.org/jira/browse/AMBARI-20264
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: 2.5.0
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
> Fix For: 2.5.0
>
> Attachments: AMBARI-20264.patch
>
>
> {code}
> hive --service llapstatus -w -r 0.8 -i 2 -t 200
>  Hortonworks #
> This is MOTD message, added for testing in qe infra
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/grid/0/hdp/2.6.0.0-572/hive2/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/grid/0/hdp/2.6.0.0-572/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
> WARN conf.HiveConf: HiveConf hive.llap.daemon.vcpus.per.instance expects INT 
> type value
> LLAPSTATUS WatchMode with timeout=200 s
> 
> LLAP Starting up with AppId=application_1488363522091_0001.
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> {
>   "amInfo" : {
> "appName" : "llap0",
> "appType" : "org-apache-slider",
> "appId" : "application_1488363522091_0001",
> "containerId" : "container_e01_1488363522091_0001_01_01",
> "hostname" : "ctr-e129-1487033772569-30632-01-04.hwx.site",
> "amWebUrl" : 
> "http://ctr-e129-1487033772569-30632-01-04.hwx.site:32864/;
>   },
>   "state" : "LAUNCHING",
>   "originalConfigurationPath" : 
> "hdfs://ctr-e129-1487033772569-30632-01-02.hwx.site:8020/user/cstm-hive/.slider/cluster/llap0/snapshot",
>   "generatedConfigurationPath" : 
> 

[jira] [Updated] (AMBARI-20264) HiveServer2 Interactive start failed after WE enable

2017-03-01 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-20264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-20264:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> HiveServer2 Interactive start failed after WE enable
> 
>
> Key: AMBARI-20264
> URL: https://issues.apache.org/jira/browse/AMBARI-20264
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: 2.5.0
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
> Fix For: 2.5.0
>
> Attachments: AMBARI-20264.patch
>
>
> {code}
> hive --service llapstatus -w -r 0.8 -i 2 -t 200
>  Hortonworks #
> This is MOTD message, added for testing in qe infra
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/grid/0/hdp/2.6.0.0-572/hive2/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/grid/0/hdp/2.6.0.0-572/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
> WARN conf.HiveConf: HiveConf hive.llap.daemon.vcpus.per.instance expects INT 
> type value
> LLAPSTATUS WatchMode with timeout=200 s
> 
> LLAP Starting up with AppId=application_1488363522091_0001.
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> {
>   "amInfo" : {
> "appName" : "llap0",
> "appType" : "org-apache-slider",
> "appId" : "application_1488363522091_0001",
> "containerId" : "container_e01_1488363522091_0001_01_01",
> "hostname" : "ctr-e129-1487033772569-30632-01-04.hwx.site",
> "amWebUrl" : 
> "http://ctr-e129-1487033772569-30632-01-04.hwx.site:32864/;
>   },
>   "state" : "LAUNCHING",
>   "originalConfigurationPath" : 
> "hdfs://ctr-e129-1487033772569-30632-01-02.hwx.site:8020/user/cstm-hive/.slider/cluster/llap0/snapshot",
>   "generatedConfigurationPath" : 
> "hdfs://ctr-e129-1487033772569-30632-01-02.hwx.site:8020/user/cstm-hive/.slider/cluster/llap0/generated",
>   "desiredInstances" : 1,
>   "liveInstances" : 0,
>   "appStartTime" : 1488363931836,
>   "runningThresholdAchieved" : false
> }
> WARN cli.LlapStatusServiceDriver: Watch timeout 200s exhausted before desired 
> state RUNNING is attained.
> 2017-03-01 10:28:51,909 - LLAP app 'llap0' current state is LAUNCHING.
> 

[jira] [Updated] (AMBARI-20264) HiveServer2 Interactive start failed after WE enable

2017-03-01 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-20264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-20264:
---
Attachment: AMBARI-20264.patch

> HiveServer2 Interactive start failed after WE enable
> 
>
> Key: AMBARI-20264
> URL: https://issues.apache.org/jira/browse/AMBARI-20264
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: 2.5.0
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
> Fix For: 2.5.0
>
> Attachments: AMBARI-20264.patch
>
>
> {code}
> hive --service llapstatus -w -r 0.8 -i 2 -t 200
>  Hortonworks #
> This is MOTD message, added for testing in qe infra
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/grid/0/hdp/2.6.0.0-572/hive2/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/grid/0/hdp/2.6.0.0-572/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
> WARN conf.HiveConf: HiveConf hive.llap.daemon.vcpus.per.instance expects INT 
> type value
> LLAPSTATUS WatchMode with timeout=200 s
> 
> LLAP Starting up with AppId=application_1488363522091_0001.
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> {
>   "amInfo" : {
> "appName" : "llap0",
> "appType" : "org-apache-slider",
> "appId" : "application_1488363522091_0001",
> "containerId" : "container_e01_1488363522091_0001_01_01",
> "hostname" : "ctr-e129-1487033772569-30632-01-04.hwx.site",
> "amWebUrl" : 
> "http://ctr-e129-1487033772569-30632-01-04.hwx.site:32864/;
>   },
>   "state" : "LAUNCHING",
>   "originalConfigurationPath" : 
> "hdfs://ctr-e129-1487033772569-30632-01-02.hwx.site:8020/user/cstm-hive/.slider/cluster/llap0/snapshot",
>   "generatedConfigurationPath" : 
> "hdfs://ctr-e129-1487033772569-30632-01-02.hwx.site:8020/user/cstm-hive/.slider/cluster/llap0/generated",
>   "desiredInstances" : 1,
>   "liveInstances" : 0,
>   "appStartTime" : 1488363931836,
>   "runningThresholdAchieved" : false
> }
> WARN cli.LlapStatusServiceDriver: Watch timeout 200s exhausted before desired 
> state RUNNING is attained.
> 2017-03-01 10:28:51,909 - LLAP app 'llap0' current state is LAUNCHING.
> 2017-03-01 10:28:51,909 - LLAP app 

[jira] [Updated] (AMBARI-20264) HiveServer2 Interactive start failed after WE enable

2017-03-01 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-20264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-20264:
---
Status: Patch Available  (was: In Progress)

> HiveServer2 Interactive start failed after WE enable
> 
>
> Key: AMBARI-20264
> URL: https://issues.apache.org/jira/browse/AMBARI-20264
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: 2.5.0
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
> Fix For: 2.5.0
>
> Attachments: AMBARI-20264.patch
>
>
> {code}
> hive --service llapstatus -w -r 0.8 -i 2 -t 200
>  Hortonworks #
> This is MOTD message, added for testing in qe infra
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/grid/0/hdp/2.6.0.0-572/hive2/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/grid/0/hdp/2.6.0.0-572/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
> WARN conf.HiveConf: HiveConf hive.llap.daemon.vcpus.per.instance expects INT 
> type value
> LLAPSTATUS WatchMode with timeout=200 s
> 
> LLAP Starting up with AppId=application_1488363522091_0001.
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
> instances
> 
> {
>   "amInfo" : {
> "appName" : "llap0",
> "appType" : "org-apache-slider",
> "appId" : "application_1488363522091_0001",
> "containerId" : "container_e01_1488363522091_0001_01_01",
> "hostname" : "ctr-e129-1487033772569-30632-01-04.hwx.site",
> "amWebUrl" : 
> "http://ctr-e129-1487033772569-30632-01-04.hwx.site:32864/;
>   },
>   "state" : "LAUNCHING",
>   "originalConfigurationPath" : 
> "hdfs://ctr-e129-1487033772569-30632-01-02.hwx.site:8020/user/cstm-hive/.slider/cluster/llap0/snapshot",
>   "generatedConfigurationPath" : 
> "hdfs://ctr-e129-1487033772569-30632-01-02.hwx.site:8020/user/cstm-hive/.slider/cluster/llap0/generated",
>   "desiredInstances" : 1,
>   "liveInstances" : 0,
>   "appStartTime" : 1488363931836,
>   "runningThresholdAchieved" : false
> }
> WARN cli.LlapStatusServiceDriver: Watch timeout 200s exhausted before desired 
> state RUNNING is attained.
> 2017-03-01 10:28:51,909 - LLAP app 'llap0' current state is LAUNCHING.
> 2017-03-01 10:28:51,909 - 

[jira] [Updated] (AMBARI-20264) HiveServer2 Interactive start failed after WE enable

2017-03-01 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-20264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-20264:
---
Description: 
{code}
hive --service llapstatus -w -r 0.8 -i 2 -t 200
 Hortonworks #
This is MOTD message, added for testing in qe infra
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/grid/0/hdp/2.6.0.0-572/hive2/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/grid/0/hdp/2.6.0.0-572/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
WARN conf.HiveConf: HiveConf hive.llap.daemon.vcpus.per.instance expects INT 
type value

LLAPSTATUS WatchMode with timeout=200 s

LLAP Starting up with AppId=application_1488363522091_0001.

LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
instances

LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
instances

LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
instances

LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
instances

LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
instances

LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
instances

LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
instances

LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
instances

LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
instances

LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
instances

LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
instances

LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
instances

LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
instances





{
  "amInfo" : {
"appName" : "llap0",
"appType" : "org-apache-slider",
"appId" : "application_1488363522091_0001",
"containerId" : "container_e01_1488363522091_0001_01_01",
"hostname" : "ctr-e129-1487033772569-30632-01-04.hwx.site",
"amWebUrl" : "http://ctr-e129-1487033772569-30632-01-04.hwx.site:32864/;
  },
  "state" : "LAUNCHING",
  "originalConfigurationPath" : 
"hdfs://ctr-e129-1487033772569-30632-01-02.hwx.site:8020/user/cstm-hive/.slider/cluster/llap0/snapshot",
  "generatedConfigurationPath" : 
"hdfs://ctr-e129-1487033772569-30632-01-02.hwx.site:8020/user/cstm-hive/.slider/cluster/llap0/generated",
  "desiredInstances" : 1,
  "liveInstances" : 0,
  "appStartTime" : 1488363931836,
  "runningThresholdAchieved" : false
}
WARN cli.LlapStatusServiceDriver: Watch timeout 200s exhausted before desired 
state RUNNING is attained.
2017-03-01 10:28:51,909 - LLAP app 'llap0' current state is LAUNCHING.
2017-03-01 10:28:51,909 - LLAP app 'llap0' current state is LAUNCHING.
2017-03-01 10:28:51,909 - LLAP app 'llap0' deployment unsuccessful.

Command failed after 1 tries
{code}

Need to increase the retry count and hence the total_timeout value used for 
llapstatus check.


  was:
hive --service llapstatus -w -r 0.8 -i 2 -t 200
 Hortonworks #
This is MOTD message, added for testing in qe infra
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/grid/0/hdp/2.6.0.0-572/hive2/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 

[jira] [Created] (AMBARI-20264) HiveServer2 Interactive start failed after WE enable

2017-03-01 Thread Jayush Luniya (JIRA)
Jayush Luniya created AMBARI-20264:
--

 Summary: HiveServer2 Interactive start failed after WE enable
 Key: AMBARI-20264
 URL: https://issues.apache.org/jira/browse/AMBARI-20264
 Project: Ambari
  Issue Type: Bug
  Components: stacks
Affects Versions: 2.5.0
Reporter: Jayush Luniya
Assignee: Jayush Luniya
 Fix For: 2.5.0


hive --service llapstatus -w -r 0.8 -i 2 -t 200
 Hortonworks #
This is MOTD message, added for testing in qe infra
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/grid/0/hdp/2.6.0.0-572/hive2/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/grid/0/hdp/2.6.0.0-572/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
WARN conf.HiveConf: HiveConf hive.llap.daemon.vcpus.per.instance expects INT 
type value

LLAPSTATUS WatchMode with timeout=200 s

LLAP Starting up with AppId=application_1488363522091_0001.

LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
instances

LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
instances

LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
instances

LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
instances

LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
instances

LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
instances

LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
instances

LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
instances

LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
instances

LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
instances

LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
instances

LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
instances

LLAP Starting up with AppId=application_1488363522091_0001. Started 0/1 
instances





{
  "amInfo" : {
"appName" : "llap0",
"appType" : "org-apache-slider",
"appId" : "application_1488363522091_0001",
"containerId" : "container_e01_1488363522091_0001_01_01",
"hostname" : "ctr-e129-1487033772569-30632-01-04.hwx.site",
"amWebUrl" : "http://ctr-e129-1487033772569-30632-01-04.hwx.site:32864/;
  },
  "state" : "LAUNCHING",
  "originalConfigurationPath" : 
"hdfs://ctr-e129-1487033772569-30632-01-02.hwx.site:8020/user/cstm-hive/.slider/cluster/llap0/snapshot",
  "generatedConfigurationPath" : 
"hdfs://ctr-e129-1487033772569-30632-01-02.hwx.site:8020/user/cstm-hive/.slider/cluster/llap0/generated",
  "desiredInstances" : 1,
  "liveInstances" : 0,
  "appStartTime" : 1488363931836,
  "runningThresholdAchieved" : false
}
WARN cli.LlapStatusServiceDriver: Watch timeout 200s exhausted before desired 
state RUNNING is attained.
2017-03-01 10:28:51,909 - LLAP app 'llap0' current state is LAUNCHING.
2017-03-01 10:28:51,909 - LLAP app 'llap0' current state is LAUNCHING.
2017-03-01 10:28:51,909 - LLAP app 'llap0' deployment unsuccessful.

Command failed after 1 tries





--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (AMBARI-19429) Create an ODPi stack definition

2017-03-01 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-19429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891253#comment-15891253
 ] 

Jayush Luniya commented on AMBARI-19429:


[~rshaposhnik]
# Why are the ODPi service definitions not using common-services (ex: HIVE)? 
# Has anyone tested ODPi stack with latest Ambari? 
# Why not put ODPi as a management pack?

{code}
ls 
/Users/jluniya/trunk/ambari/ambari-server/src/main/resources/stacks/ODPi/2.0/services/HIVE/package/scripts
__init__.py hive_interactive.py 
hive_service_interactive.py params_linux.py 
webhcat.py
hcat.py hive_metastore.py   
mysql_server.py params_windows.py   
webhcat_server.py
hcat_client.py  hive_server.py  
mysql_service.pyservice_check.py
webhcat_service.py
hcat_service_check.py   hive_server_interactive.py  
mysql_users.py  setup_ranger_hive.py
webhcat_service_check.py
hive.py hive_server_upgrade.py  
mysql_utils.py  setup_ranger_hive_interactive.py
hive_client.py  hive_service.py 
params.py   status_params.py
{code}

> Create an ODPi stack definition
> ---
>
> Key: AMBARI-19429
> URL: https://issues.apache.org/jira/browse/AMBARI-19429
> Project: Ambari
>  Issue Type: Improvement
>  Components: stacks
>Affects Versions: 2.4.2
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Fix For: trunk
>
> Attachments: AMBARI-19429.patch2.gz, AMBARI-19429.patch.gz
>
>
> ODPi is a nonprofit organization committed to simplification & 
> standardization of the big data ecosystem with common reference 
> specifications and test suites. As part of its mission, ODPi has been 
> developing a series of specifications for how to integrate upstream Apache 
> projects into the coherent platform. Part of this standardization effort is 
> maintenance of the ODPi core stack definition which today includes:
>* Apache Zookeeper
>* Apache Hadoop
>* Apache Hive
> and has been maintained as a custom stack on ODPi side:
> 
> https://github.com/odpi/bigtop/tree/odpi-master/bigtop-packages/src/common/ambari/ODPi/1.0
> In conjunction with merge effort for Apache Bigtop BIGTOP-2666 I'd like to 
> propose that instead of migrating the stack definition to Bigtop, we should 
> actually migrate it to Ambari.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (AMBARI-20260) Misc errors in Ambari Server log that need to be cleaned up

2017-03-01 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-20260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-20260:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Misc errors in Ambari Server log that need to be cleaned up
> ---
>
> Key: AMBARI-20260
> URL: https://issues.apache.org/jira/browse/AMBARI-20260
> Project: Ambari
>  Issue Type: Bug
>Affects Versions: 2.5.0
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: AMBARI-20260.patch
>
>
> Misc ERRORs in Ambari Server log:
> {noformat}
> 27 Feb 2017 21:06:07,574 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_BROKER doesn't advertise 
> version, however ServiceHostComponent DRUID_BROKER on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:09,568 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_COORDINATOR doesn't 
> advertise version, however ServiceHostComponent DRUID_COORDINATOR on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:10,570 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_HISTORICAL doesn't 
> advertise version, however ServiceHostComponent DRUID_HISTORICAL on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:11,580 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_MIDDLEMANAGER doesn't 
> advertise version, however ServiceHostComponent DRUID_MIDDLEMANAGER on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:13,570 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_OVERLORD doesn't advertise 
> version, however ServiceHostComponent DRUID_OVERLORD on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:14,611 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_ROUTER doesn't advertise 
> version, however ServiceHostComponent DRUID_ROUTER on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:15,589 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_SUPERSET doesn't advertise 
> version, however ServiceHostComponent DRUID_SUPERSET on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> ERROR [ambari-client-thread-33] ClusterImpl:2882 - No service found for 
> config types '[cluster-env]', service config version not created
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (AMBARI-20260) Misc errors in Ambari Server log that need to be cleaned up

2017-03-01 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-20260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-20260:
---
Affects Version/s: 2.5.0

> Misc errors in Ambari Server log that need to be cleaned up
> ---
>
> Key: AMBARI-20260
> URL: https://issues.apache.org/jira/browse/AMBARI-20260
> Project: Ambari
>  Issue Type: Bug
>Affects Versions: 2.5.0
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: AMBARI-20260.patch
>
>
> Misc ERRORs in Ambari Server log:
> {noformat}
> 27 Feb 2017 21:06:07,574 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_BROKER doesn't advertise 
> version, however ServiceHostComponent DRUID_BROKER on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:09,568 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_COORDINATOR doesn't 
> advertise version, however ServiceHostComponent DRUID_COORDINATOR on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:10,570 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_HISTORICAL doesn't 
> advertise version, however ServiceHostComponent DRUID_HISTORICAL on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:11,580 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_MIDDLEMANAGER doesn't 
> advertise version, however ServiceHostComponent DRUID_MIDDLEMANAGER on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:13,570 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_OVERLORD doesn't advertise 
> version, however ServiceHostComponent DRUID_OVERLORD on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:14,611 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_ROUTER doesn't advertise 
> version, however ServiceHostComponent DRUID_ROUTER on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:15,589 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_SUPERSET doesn't advertise 
> version, however ServiceHostComponent DRUID_SUPERSET on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> ERROR [ambari-client-thread-33] ClusterImpl:2882 - No service found for 
> config types '[cluster-env]', service config version not created
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (AMBARI-20260) Misc errors in Ambari Server log that need to be cleaned up

2017-03-01 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-20260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-20260:
---
Fix Version/s: 3.0.0

> Misc errors in Ambari Server log that need to be cleaned up
> ---
>
> Key: AMBARI-20260
> URL: https://issues.apache.org/jira/browse/AMBARI-20260
> Project: Ambari
>  Issue Type: Bug
>Affects Versions: 2.5.0
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: AMBARI-20260.patch
>
>
> Misc ERRORs in Ambari Server log:
> {noformat}
> 27 Feb 2017 21:06:07,574 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_BROKER doesn't advertise 
> version, however ServiceHostComponent DRUID_BROKER on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:09,568 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_COORDINATOR doesn't 
> advertise version, however ServiceHostComponent DRUID_COORDINATOR on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:10,570 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_HISTORICAL doesn't 
> advertise version, however ServiceHostComponent DRUID_HISTORICAL on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:11,580 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_MIDDLEMANAGER doesn't 
> advertise version, however ServiceHostComponent DRUID_MIDDLEMANAGER on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:13,570 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_OVERLORD doesn't advertise 
> version, however ServiceHostComponent DRUID_OVERLORD on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:14,611 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_ROUTER doesn't advertise 
> version, however ServiceHostComponent DRUID_ROUTER on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:15,589 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_SUPERSET doesn't advertise 
> version, however ServiceHostComponent DRUID_SUPERSET on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> ERROR [ambari-client-thread-33] ClusterImpl:2882 - No service found for 
> config types '[cluster-env]', service config version not created
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (AMBARI-20260) Misc errors in Ambari Server log that need to be cleaned up

2017-03-01 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-20260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891227#comment-15891227
 ] 

Jayush Luniya commented on AMBARI-20260:


No tests required. Committed to trunk

commit 54374e9781e3d8cda52f36ecf328546f9fc9c69d
Author: Jayush Luniya 
Date:   Wed Mar 1 14:39:04 2017 -0800

AMBARI-20260: Misc errors in Ambari Server log that need to be cleaned up 
(jluniya)

> Misc errors in Ambari Server log that need to be cleaned up
> ---
>
> Key: AMBARI-20260
> URL: https://issues.apache.org/jira/browse/AMBARI-20260
> Project: Ambari
>  Issue Type: Bug
>Affects Versions: 2.5.0
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: AMBARI-20260.patch
>
>
> Misc ERRORs in Ambari Server log:
> {noformat}
> 27 Feb 2017 21:06:07,574 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_BROKER doesn't advertise 
> version, however ServiceHostComponent DRUID_BROKER on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:09,568 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_COORDINATOR doesn't 
> advertise version, however ServiceHostComponent DRUID_COORDINATOR on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:10,570 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_HISTORICAL doesn't 
> advertise version, however ServiceHostComponent DRUID_HISTORICAL on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:11,580 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_MIDDLEMANAGER doesn't 
> advertise version, however ServiceHostComponent DRUID_MIDDLEMANAGER on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:13,570 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_OVERLORD doesn't advertise 
> version, however ServiceHostComponent DRUID_OVERLORD on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:14,611 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_ROUTER doesn't advertise 
> version, however ServiceHostComponent DRUID_ROUTER on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:15,589 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_SUPERSET doesn't advertise 
> version, however ServiceHostComponent DRUID_SUPERSET on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> ERROR [ambari-client-thread-33] ClusterImpl:2882 - No service found for 
> config types '[cluster-env]', service config version not created
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (AMBARI-20260) Misc errors in Ambari Server log that need to be cleaned up

2017-03-01 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-20260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-20260:
---
Attachment: AMBARI-20260.patch

> Misc errors in Ambari Server log that need to be cleaned up
> ---
>
> Key: AMBARI-20260
> URL: https://issues.apache.org/jira/browse/AMBARI-20260
> Project: Ambari
>  Issue Type: Bug
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
>Priority: Minor
> Attachments: AMBARI-20260.patch
>
>
> Misc ERRORs in Ambari Server log:
> {noformat}
> 27 Feb 2017 21:06:07,574 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_BROKER doesn't advertise 
> version, however ServiceHostComponent DRUID_BROKER on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:09,568 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_COORDINATOR doesn't 
> advertise version, however ServiceHostComponent DRUID_COORDINATOR on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:10,570 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_HISTORICAL doesn't 
> advertise version, however ServiceHostComponent DRUID_HISTORICAL on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:11,580 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_MIDDLEMANAGER doesn't 
> advertise version, however ServiceHostComponent DRUID_MIDDLEMANAGER on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:13,570 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_OVERLORD doesn't advertise 
> version, however ServiceHostComponent DRUID_OVERLORD on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:14,611 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_ROUTER doesn't advertise 
> version, however ServiceHostComponent DRUID_ROUTER on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:15,589 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_SUPERSET doesn't advertise 
> version, however ServiceHostComponent DRUID_SUPERSET on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> ERROR [ambari-client-thread-33] ClusterImpl:2882 - No service found for 
> config types '[cluster-env]', service config version not created
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (AMBARI-20260) Misc errors in Ambari Server log that need to be cleaned up

2017-03-01 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-20260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890760#comment-15890760
 ] 

Jayush Luniya commented on AMBARI-20260:


[~afernandez] [~sumitmohanty] can you review the patch?

> Misc errors in Ambari Server log that need to be cleaned up
> ---
>
> Key: AMBARI-20260
> URL: https://issues.apache.org/jira/browse/AMBARI-20260
> Project: Ambari
>  Issue Type: Bug
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
>Priority: Minor
> Attachments: AMBARI-20260.patch
>
>
> Misc ERRORs in Ambari Server log:
> {noformat}
> 27 Feb 2017 21:06:07,574 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_BROKER doesn't advertise 
> version, however ServiceHostComponent DRUID_BROKER on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:09,568 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_COORDINATOR doesn't 
> advertise version, however ServiceHostComponent DRUID_COORDINATOR on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:10,570 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_HISTORICAL doesn't 
> advertise version, however ServiceHostComponent DRUID_HISTORICAL on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:11,580 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_MIDDLEMANAGER doesn't 
> advertise version, however ServiceHostComponent DRUID_MIDDLEMANAGER on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:13,570 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_OVERLORD doesn't advertise 
> version, however ServiceHostComponent DRUID_OVERLORD on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:14,611 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_ROUTER doesn't advertise 
> version, however ServiceHostComponent DRUID_ROUTER on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:15,589 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_SUPERSET doesn't advertise 
> version, however ServiceHostComponent DRUID_SUPERSET on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> ERROR [ambari-client-thread-33] ClusterImpl:2882 - No service found for 
> config types '[cluster-env]', service config version not created
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (AMBARI-20260) Misc errors in Ambari Server log that need to be cleaned up

2017-03-01 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-20260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-20260:
---
Status: Patch Available  (was: In Progress)

> Misc errors in Ambari Server log that need to be cleaned up
> ---
>
> Key: AMBARI-20260
> URL: https://issues.apache.org/jira/browse/AMBARI-20260
> Project: Ambari
>  Issue Type: Bug
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
>Priority: Minor
> Attachments: AMBARI-20260.patch
>
>
> Misc ERRORs in Ambari Server log:
> {noformat}
> 27 Feb 2017 21:06:07,574 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_BROKER doesn't advertise 
> version, however ServiceHostComponent DRUID_BROKER on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:09,568 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_COORDINATOR doesn't 
> advertise version, however ServiceHostComponent DRUID_COORDINATOR on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:10,570 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_HISTORICAL doesn't 
> advertise version, however ServiceHostComponent DRUID_HISTORICAL on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:11,580 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_MIDDLEMANAGER doesn't 
> advertise version, however ServiceHostComponent DRUID_MIDDLEMANAGER on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:13,570 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_OVERLORD doesn't advertise 
> version, however ServiceHostComponent DRUID_OVERLORD on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:14,611 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_ROUTER doesn't advertise 
> version, however ServiceHostComponent DRUID_ROUTER on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:15,589 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_SUPERSET doesn't advertise 
> version, however ServiceHostComponent DRUID_SUPERSET on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> ERROR [ambari-client-thread-33] ClusterImpl:2882 - No service found for 
> config types '[cluster-env]', service config version not created
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (AMBARI-20260) Misc errors in Ambari Server log that need to be cleaned up

2017-03-01 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-20260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-20260:
---
Summary: Misc errors in Ambari Server log that need to be cleaned up  (was: 
0 Misc errors in Ambari Server log that need to be cleaned up)

> Misc errors in Ambari Server log that need to be cleaned up
> ---
>
> Key: AMBARI-20260
> URL: https://issues.apache.org/jira/browse/AMBARI-20260
> Project: Ambari
>  Issue Type: Bug
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
>Priority: Minor
>
> Misc ERRORs in Ambari Server log:
> {noformat}
> 27 Feb 2017 21:06:07,574 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_BROKER doesn't advertise 
> version, however ServiceHostComponent DRUID_BROKER on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:09,568 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_COORDINATOR doesn't 
> advertise version, however ServiceHostComponent DRUID_COORDINATOR on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:10,570 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_HISTORICAL doesn't 
> advertise version, however ServiceHostComponent DRUID_HISTORICAL on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:11,580 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_MIDDLEMANAGER doesn't 
> advertise version, however ServiceHostComponent DRUID_MIDDLEMANAGER on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:13,570 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_OVERLORD doesn't advertise 
> version, however ServiceHostComponent DRUID_OVERLORD on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:14,611 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_ROUTER doesn't advertise 
> version, however ServiceHostComponent DRUID_ROUTER on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> 27 Feb 2017 21:06:15,589 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:128 - ServiceComponent DRUID_SUPERSET doesn't advertise 
> version, however ServiceHostComponent DRUID_SUPERSET on host 
> alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
> Skipping version update
> ERROR [ambari-client-thread-33] ClusterImpl:2882 - No service found for 
> config types '[cluster-env]', service config version not created
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (AMBARI-20260) 0 Misc errors in Ambari Server log that need to be cleaned up

2017-03-01 Thread Jayush Luniya (JIRA)
Jayush Luniya created AMBARI-20260:
--

 Summary: 0 Misc errors in Ambari Server log that need to be 
cleaned up
 Key: AMBARI-20260
 URL: https://issues.apache.org/jira/browse/AMBARI-20260
 Project: Ambari
  Issue Type: Bug
Reporter: Jayush Luniya
Assignee: Jayush Luniya
Priority: Minor


Misc ERRORs in Ambari Server log:
{noformat}
27 Feb 2017 21:06:07,574 ERROR [ambari-heartbeat-processor-0] 
StackVersionListener:128 - ServiceComponent DRUID_BROKER doesn't advertise 
version, however ServiceHostComponent DRUID_BROKER on host 
alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
Skipping version update
27 Feb 2017 21:06:09,568 ERROR [ambari-heartbeat-processor-0] 
StackVersionListener:128 - ServiceComponent DRUID_COORDINATOR doesn't advertise 
version, however ServiceHostComponent DRUID_COORDINATOR on host 
alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
Skipping version update
27 Feb 2017 21:06:10,570 ERROR [ambari-heartbeat-processor-0] 
StackVersionListener:128 - ServiceComponent DRUID_HISTORICAL doesn't advertise 
version, however ServiceHostComponent DRUID_HISTORICAL on host 
alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
Skipping version update
27 Feb 2017 21:06:11,580 ERROR [ambari-heartbeat-processor-0] 
StackVersionListener:128 - ServiceComponent DRUID_MIDDLEMANAGER doesn't 
advertise version, however ServiceHostComponent DRUID_MIDDLEMANAGER on host 
alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
Skipping version update
27 Feb 2017 21:06:13,570 ERROR [ambari-heartbeat-processor-0] 
StackVersionListener:128 - ServiceComponent DRUID_OVERLORD doesn't advertise 
version, however ServiceHostComponent DRUID_OVERLORD on host 
alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
Skipping version update
27 Feb 2017 21:06:14,611 ERROR [ambari-heartbeat-processor-0] 
StackVersionListener:128 - ServiceComponent DRUID_ROUTER doesn't advertise 
version, however ServiceHostComponent DRUID_ROUTER on host 
alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
Skipping version update
27 Feb 2017 21:06:15,589 ERROR [ambari-heartbeat-processor-0] 
StackVersionListener:128 - ServiceComponent DRUID_SUPERSET doesn't advertise 
version, however ServiceHostComponent DRUID_SUPERSET on host 
alejandro-3.c.pramod-thangali.internal advertised version as 2.6.0.0-559. 
Skipping version update


ERROR [ambari-client-thread-33] ClusterImpl:2882 - No service found for config 
types '[cluster-env]', service config version not created
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (AMBARI-20212) Remove HDP version check in KAFKA service

2017-02-27 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-20212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15887071#comment-15887071
 ] 

Jayush Luniya commented on AMBARI-20212:


+1 on addendum. Committed to trunk and branch-2.5

> Remove HDP version check in KAFKA service
> -
>
> Key: AMBARI-20212
> URL: https://issues.apache.org/jira/browse/AMBARI-20212
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Madhuvanthi Radhakrishnan
>Assignee: Madhuvanthi Radhakrishnan
> Fix For: 2.5.0
>
> Attachments: AMBARI-20212_addendum.patch, AMBARI-20212.patch, 
> AMBARI-20212_trunk.patch
>
>
> Need to fix following code in KAFKA.
> {code}
>   if compare_versions(src_version, '2.3.4.0') < 0 and 
> compare_versions(dst_version, '2.3.4.0') >= 0:
> # Calling the acl migration script requires the configs to be present.
> self.configure(env, upgrade_type=upgrade_type)
> upgrade.run_migration(env, upgrade_type)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (AMBARI-20034) USER to GROUP mapping (hdfs_user -> hadoop_group) should be stack driven

2017-02-24 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-20034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15883227#comment-15883227
 ] 

Jayush Luniya commented on AMBARI-20034:


+1 on addendum patch and committed it to trunk and branch-2.5

> USER to GROUP mapping (hdfs_user -> hadoop_group) should be stack driven
> 
>
> Key: AMBARI-20034
> URL: https://issues.apache.org/jira/browse/AMBARI-20034
> Project: Ambari
>  Issue Type: Bug
>Reporter: Madhuvanthi Radhakrishnan
>Assignee: Madhuvanthi Radhakrishnan
> Fix For: 2.5.0
>
> Attachments: AMBARI-20034_2.5.patch, AMBARI-20034.patch, 
> AMBARI_20034_trunk_addendum.patch, AMBARI-20034_trunk.patch
>
>
> There is a hard coded logic to creating the user-group mapping for services. 
> This presents an issue for custom services. Fix is to make it stack driven by 
> connecting user to its groups within the configuration/stack definition itself



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (AMBARI-14746) Upgrade of Spark from HDP 2.2 to 2.3 and 2.2 to 2.4 is missing

2017-02-17 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-14746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15872513#comment-15872513
 ] 

Jayush Luniya commented on AMBARI-14746:


cc: [~bikassaha]
I believe this is a stale JIRA. Can you confirm?

> Upgrade of Spark from HDP 2.2 to 2.3 and 2.2 to 2.4 is missing
> --
>
> Key: AMBARI-14746
> URL: https://issues.apache.org/jira/browse/AMBARI-14746
> Project: Ambari
>  Issue Type: Improvement
>Reporter: Jeff Zhang
>Assignee: Jeff Zhang
> Fix For: 3.0.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (AMBARI-19841) Add 'yarn.client.failover-proxy-provider' in yarn-site.xml by default for all HA enabled clusters

2017-02-09 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-19841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15860063#comment-15860063
 ] 

Jayush Luniya commented on AMBARI-19841:


Committed Addendum patch

Trunk
commit 259f31ae11840c0b807e1aa9623df20d7b382da8
Author: Jayush Luniya 
Date:   Thu Feb 9 11:37:35 2017 -0800

AMBARI-19841: Add 'yarn.client.failover-proxy-provider' in yarn-site.xml by 
default for all HA enabled clusters - addendum (Madhuvanthi Radhakrishnan via 
jluniya)

Branch-2.5
commit 2ea4a0328781e63dbde55d5efb2174f61cb3b743
Author: Jayush Luniya 
Date:   Thu Feb 9 11:37:35 2017 -0800

AMBARI-19841: Add 'yarn.client.failover-proxy-provider' in yarn-site.xml by 
default for all HA enabled clusters - addendum (Madhuvanthi Radhakrishnan via 
jluniya)

> Add 'yarn.client.failover-proxy-provider' in yarn-site.xml by default for all 
> HA enabled clusters
> -
>
> Key: AMBARI-19841
> URL: https://issues.apache.org/jira/browse/AMBARI-19841
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Madhuvanthi Radhakrishnan
>Assignee: Madhuvanthi Radhakrishnan
> Fix For: 2.5.0
>
>
> 
> When HA is enabled, the class to be used by Clients, AMs and
>   NMs to failover to the Active RM. It should extend
>   org.apache.hadoop.yarn.client.RMFailoverProxyProvider
> yarn.client.failover-proxy-provider
> 
> org.apache.hadoop.yarn.client.RequestHedgingRMFailoverProxyProvider
>   
> needs to be added to yarn-site.xml for HDP 2.6/fenton



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (AMBARI-19841) Add 'yarn.client.failover-proxy-provider' in yarn-site.xml by default for all HA enabled clusters

2017-02-09 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-19841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya resolved AMBARI-19841.

Resolution: Fixed

> Add 'yarn.client.failover-proxy-provider' in yarn-site.xml by default for all 
> HA enabled clusters
> -
>
> Key: AMBARI-19841
> URL: https://issues.apache.org/jira/browse/AMBARI-19841
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Madhuvanthi Radhakrishnan
>Assignee: Madhuvanthi Radhakrishnan
> Fix For: 2.5.0
>
>
> 
> When HA is enabled, the class to be used by Clients, AMs and
>   NMs to failover to the Active RM. It should extend
>   org.apache.hadoop.yarn.client.RMFailoverProxyProvider
> yarn.client.failover-proxy-provider
> 
> org.apache.hadoop.yarn.client.RequestHedgingRMFailoverProxyProvider
>   
> needs to be added to yarn-site.xml for HDP 2.6/fenton



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Reopened] (AMBARI-19841) Add 'yarn.client.failover-proxy-provider' in yarn-site.xml by default for all HA enabled clusters

2017-02-09 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-19841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya reopened AMBARI-19841:


> Add 'yarn.client.failover-proxy-provider' in yarn-site.xml by default for all 
> HA enabled clusters
> -
>
> Key: AMBARI-19841
> URL: https://issues.apache.org/jira/browse/AMBARI-19841
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Madhuvanthi Radhakrishnan
>Assignee: Madhuvanthi Radhakrishnan
> Fix For: 2.5.0
>
>
> 
> When HA is enabled, the class to be used by Clients, AMs and
>   NMs to failover to the Active RM. It should extend
>   org.apache.hadoop.yarn.client.RMFailoverProxyProvider
> yarn.client.failover-proxy-provider
> 
> org.apache.hadoop.yarn.client.RequestHedgingRMFailoverProxyProvider
>   
> needs to be added to yarn-site.xml for HDP 2.6/fenton



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (AMBARI-14671) Reevaluate the use of threadpools in Ambari code base

2017-02-01 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-14671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-14671:
---
Fix Version/s: (was: 2.5.0)
   3.0.0

> Reevaluate the use of threadpools in Ambari code base
> -
>
> Key: AMBARI-14671
> URL: https://issues.apache.org/jira/browse/AMBARI-14671
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.2.0
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
> Fix For: 3.0.0
>
> Attachments: DynamicScaling.tgz
>
>
> As part of investigation for BUG-43981, noticed that in many places in the 
> Ambari code base, the way we use the threadpool is not quite correct.  We 
> will never scale up the number of threads on high load when we use unbounded 
> queues. This could lead to performance bottlenecks especially if we are 
> configure ThreadPoolExecutor with corePoolSize=0 and maxPoolSize=10, only one 
> thread will ever be spawned. See observations below
> Observations: 
> 1. When a ThreadPoolExecutor object is created, the pool size is 0 (i.e. no 
> new threads are created then) unless prestartAllCoreThreads() is called. Also 
> if we set allowCoreThreadTimeOut(true), idle core threads will also be 
> reclaimed.
> 2. In our code base, I observed that we create a threadpool using unlimited 
> queue. However the pool will never scale up from coreThreads -> maxThreads as 
> the request will always get queued. 
> {code:java}
> LinkedBlockingQueue queue = new LinkedBlockingQueue(); // 
> unlimited Queue
> ThreadPoolExecutor threadPoolExecutor =
> new ThreadPoolExecutor(
> THREAD_POOL_CORE_SIZE,
> THREAD_POOL_MAX_SIZE,
> THREAD_POOL_TIMEOUT_MILLIS,
> TimeUnit.MILLISECONDS,
> queue);
> {code}
> http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ThreadPoolExecutor.html
> {quote}
> Unbounded queues. Using an unbounded queue (for example a LinkedBlockingQueue 
> without a predefined capacity) will cause new tasks to wait in the queue when 
> all corePoolSize threads are busy. Thus, no more than corePoolSize threads 
> will ever be created. (And the value of the maximumPoolSize therefore doesn't 
> have any effect.) This may be appropriate when each task is completely 
> independent of others, so tasks cannot affect each others execution; for 
> example, in a web page server. While this style of queuing can be useful in 
> smoothing out transient bursts of requests, it admits the possibility of 
> unbounded work queue growth when commands continue to arrive on average 
> faster than they can be processed.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (AMBARI-19733) Regression in Spark2 keytab and {{stack_root}} for Livy2

2017-02-01 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-19733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-19733:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Regression in Spark2 keytab and {{stack_root}} for Livy2
> 
>
> Key: AMBARI-19733
> URL: https://issues.apache.org/jira/browse/AMBARI-19733
> Project: Ambari
>  Issue Type: Bug
>Reporter: Bikas Saha
>Assignee: Bikas Saha
> Fix For: 2.5.0
>
> Attachments: AMBARI-19733.1.patch, AMBARI-19733.2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (AMBARI-19733) Regression in Spark2 keytab and {{stack_root}} for Livy2

2017-02-01 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-19733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15849481#comment-15849481
 ] 

Jayush Luniya commented on AMBARI-19733:


Trunk
commit b01438c7dc82ec9f0263eed8b6c1575491e78f6b
Author: Jayush Luniya 
Date:   Wed Feb 1 21:54:08 2017 -0800

AMBARI-19733: Regression in Spark2 keytab and {{stack_root}} for Livy2 
(Bikas Saha via jluniya)

Branch-2.5
commit 2ccee3d2617f1e620dac49ced5414697d9928555
Author: Jayush Luniya 
Date:   Wed Feb 1 21:54:08 2017 -0800

AMBARI-19733: Regression in Spark2 keytab and {{stack_root}} for Livy2 
(Bikas Saha via jluniya)


> Regression in Spark2 keytab and {{stack_root}} for Livy2
> 
>
> Key: AMBARI-19733
> URL: https://issues.apache.org/jira/browse/AMBARI-19733
> Project: Ambari
>  Issue Type: Bug
>Reporter: Bikas Saha
>Assignee: Bikas Saha
> Fix For: 2.5.0
>
> Attachments: AMBARI-19733.1.patch, AMBARI-19733.2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (AMBARI-19690) NM Memory can end up being too high on nodes with many components

2017-01-25 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-19690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838301#comment-15838301
 ] 

Jayush Luniya commented on AMBARI-19690:


Trunk
commit 6a8115572b328785532aed27c1dc44a1bac17a01
Author: Jayush Luniya 
Date:   Wed Jan 25 09:40:56 2017 -0800

AMBARI-19690: NM Memory can end up being too high on nodes with many 
components (jluniya)

branch-2.5
commit b84a32b374adbbb97ee0141d4fd8deb3ec2fbcee
Author: Jayush Luniya 
Date:   Wed Jan 25 10:21:57 2017 -0800

AMBARI-19690: NM Memory can end up being too high on nodes with many 
components (jluniya)

> NM Memory can end up being too high on nodes with many components
> -
>
> Key: AMBARI-19690
> URL: https://issues.apache.org/jira/browse/AMBARI-19690
> Project: Ambari
>  Issue Type: Improvement
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
> Attachments: AMBARI-19690.patch
>
>
> Ambari's Stack Advisor has a static method to compute OS/component overheads 
> when computing the YARN config 'yarn.nodemanager.resource.memory-mb' for NM
> We should add validation to check if there are any NodeManagers that would 
> have high memory usage based on co-located service components and report a 
> warning. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-19690) NM Memory can end up being too high on nodes with many components

2017-01-25 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-19690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-19690:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> NM Memory can end up being too high on nodes with many components
> -
>
> Key: AMBARI-19690
> URL: https://issues.apache.org/jira/browse/AMBARI-19690
> Project: Ambari
>  Issue Type: Improvement
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
> Attachments: AMBARI-19690.patch
>
>
> Ambari's Stack Advisor has a static method to compute OS/component overheads 
> when computing the YARN config 'yarn.nodemanager.resource.memory-mb' for NM
> We should add validation to check if there are any NodeManagers that would 
> have high memory usage based on co-located service components and report a 
> warning. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-19601) Warnings on ambari server upgrade related to HDF paths

2017-01-19 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-19601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-19601:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Warnings on ambari server upgrade related to HDF paths
> --
>
> Key: AMBARI-19601
> URL: https://issues.apache.org/jira/browse/AMBARI-19601
> Project: Ambari
>  Issue Type: Bug
>Affects Versions: 2.4.2
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
> Fix For: 2.5.0
>
> Attachments: AMBARI-19601.patch
>
>
> Ambari 2.4.1.0 and HDF 2.0.1.0 was upgraded to Ambari 2.4.2.0 and HDF 2.1 
> mpack
> Saw the following warnings on ambari-server upgrade
> {code}
> [root@arpit-hdf-eu-5 ~]# ambari-server upgrade
> Using python  /usr/bin/python
> Upgrading ambari-server
> Updating properties in ambari.properties ...
> WARNING: Original file ambari-env.sh kept
> WARNING: Original file krb5JAASLogin.conf kept
> File krb5JAASLogin.conf updated.
> Fixing database objects owner
> Ambari Server configured for MySQL. Confirm you have made a backup of the 
> Ambari Server database [y/n] (y)?
> Upgrading database schema
> Adjusting ambari-server permissions and ownership...
> WARNING: Command chown  -R -L root /var/lib/ambari-server returned exit code 
> /var/lib/ambari-server with message: chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/repos': 
> No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/properties':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/metainfo.xml':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/ZOOKEEPER':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/LOGSEARCH':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/KAFKA':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/stack_advisor.py':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/AMBARI_INFRA':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/STORM':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/AMBARI_METRICS':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/KERBEROS':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/NIFI':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/RANGER':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/widgets.json':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/kerberos.json':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/configuration':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/role_command_order.json':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/hooks': 
> No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/common-services_21_11_16_21_33.old/NIFI/1.0.0':
>  No such file or directory
> Ambari Server 'upgrade' completed successfully.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-19601) Warnings on ambari server upgrade related to HDF paths

2017-01-19 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-19601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15831021#comment-15831021
 ] 

Jayush Luniya commented on AMBARI-19601:


Trunk
commit 77b5776617bb448019250d7785c103dbde4ecd24
Author: Jayush Luniya 
Date:   Thu Jan 19 17:52:21 2017 -0800

AMBARI-19601: Warnings on ambari server upgrade related to HDF paths 
(jluniya)

branch-2.5
commit 9dc0e75e0f3886730e2d64b855d37dc459c13642
Author: Jayush Luniya 
Date:   Thu Jan 19 17:52:21 2017 -0800

AMBARI-19601: Warnings on ambari server upgrade related to HDF paths 
(jluniya)

> Warnings on ambari server upgrade related to HDF paths
> --
>
> Key: AMBARI-19601
> URL: https://issues.apache.org/jira/browse/AMBARI-19601
> Project: Ambari
>  Issue Type: Bug
>Affects Versions: 2.4.2
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
> Fix For: 2.5.0
>
> Attachments: AMBARI-19601.patch
>
>
> Ambari 2.4.1.0 and HDF 2.0.1.0 was upgraded to Ambari 2.4.2.0 and HDF 2.1 
> mpack
> Saw the following warnings on ambari-server upgrade
> {code}
> [root@arpit-hdf-eu-5 ~]# ambari-server upgrade
> Using python  /usr/bin/python
> Upgrading ambari-server
> Updating properties in ambari.properties ...
> WARNING: Original file ambari-env.sh kept
> WARNING: Original file krb5JAASLogin.conf kept
> File krb5JAASLogin.conf updated.
> Fixing database objects owner
> Ambari Server configured for MySQL. Confirm you have made a backup of the 
> Ambari Server database [y/n] (y)?
> Upgrading database schema
> Adjusting ambari-server permissions and ownership...
> WARNING: Command chown  -R -L root /var/lib/ambari-server returned exit code 
> /var/lib/ambari-server with message: chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/repos': 
> No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/properties':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/metainfo.xml':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/ZOOKEEPER':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/LOGSEARCH':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/KAFKA':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/stack_advisor.py':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/AMBARI_INFRA':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/STORM':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/AMBARI_METRICS':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/KERBEROS':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/NIFI':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/RANGER':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/widgets.json':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/kerberos.json':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/configuration':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/role_command_order.json':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/hooks': 
> No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/common-services_21_11_16_21_33.old/NIFI/1.0.0':
>  No such file or directory
> Ambari Server 'upgrade' completed successfully.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (AMBARI-19541) Add log rotation settings - handle HDP upgrade scenario

2017-01-19 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-19541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya resolved AMBARI-19541.

Resolution: Fixed

> Add log rotation settings - handle HDP upgrade scenario
> ---
>
> Key: AMBARI-19541
> URL: https://issues.apache.org/jira/browse/AMBARI-19541
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Madhuvanthi Radhakrishnan
>Assignee: Madhuvanthi Radhakrishnan
> Attachments: AMBARI-19541_addendum.patch, AMBARI-19541_trunk.patch
>
>
> This jira will have the upgrade pack work for the following services
> YARN
> HDFS
> HBASE
> ZOOKEEPER
> OOZIE
> FALCON
> ATLAS
> RANGER
> RANGER-KMS
> KAFKA
> KNOX



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (AMBARI-19621) Management Pack v2

2017-01-18 Thread Jayush Luniya (JIRA)
Jayush Luniya created AMBARI-19621:
--

 Summary: Management Pack v2
 Key: AMBARI-19621
 URL: https://issues.apache.org/jira/browse/AMBARI-19621
 Project: Ambari
  Issue Type: Epic
  Components: ambari-agent, ambari-server, ambari-web
Affects Versions: 3.0.0
Reporter: Jayush Luniya
Assignee: Jayush Luniya
 Fix For: 3.0.0






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-14714) [Umbrella] Multi Instance Architecture

2017-01-18 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-14714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-14714:
---
Summary: [Umbrella] Multi Instance Architecture  (was: Stacks: Support 
Service Multi-Version and Multi-Instance)

> [Umbrella] Multi Instance Architecture
> --
>
> Key: AMBARI-14714
> URL: https://issues.apache.org/jira/browse/AMBARI-14714
> Project: Ambari
>  Issue Type: Epic
>  Components: stacks
>Affects Versions: 2.4.0
>Reporter: Jeff Sposetti
>Assignee: Jayush Luniya
>Priority: Critical
> Fix For: 3.0.0
>
>
> Provide an ability to handle multiple instances of a Service in a given 
> cluster. In addition, provide ability for a Stack definition to handle 
> multiple versions of a given Service (which than can have 0 or more instances 
> in a given cluster).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-17355) POC: BE changes for first class support for Yarn hosted services

2017-01-18 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-17355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15829183#comment-15829183
 ] 

Jayush Luniya commented on AMBARI-17355:


POC changes were committed to branch-yarnapps-dev

> POC: BE changes for first class support for Yarn hosted services
> 
>
> Key: AMBARI-17355
> URL: https://issues.apache.org/jira/browse/AMBARI-17355
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
>Priority: Critical
>
> JIRA for backend proof of concept work to provide first class support for 
> Yarn hosted services.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-17354) POC: FE changes for first class support for Yarn hosted services

2017-01-18 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-17354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15829182#comment-15829182
 ] 

Jayush Luniya commented on AMBARI-17354:


POC changes were committed to branch-yarnapps-dev

> POC: FE changes for first class support for Yarn hosted services
> 
>
> Key: AMBARI-17354
> URL: https://issues.apache.org/jira/browse/AMBARI-17354
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-web
>Reporter: Jayush Luniya
>Assignee: Jaimin Jetly
>Priority: Critical
>
> JIRA for front end proof of concept work to provide first class support for 
> Yarn hosted services.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (AMBARI-17355) POC: BE changes for first class support for Yarn hosted services

2017-01-18 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-17355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya resolved AMBARI-17355.

Resolution: Fixed

> POC: BE changes for first class support for Yarn hosted services
> 
>
> Key: AMBARI-17355
> URL: https://issues.apache.org/jira/browse/AMBARI-17355
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
>Priority: Critical
>
> JIRA for backend proof of concept work to provide first class support for 
> Yarn hosted services.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (AMBARI-17354) POC: FE changes for first class support for Yarn hosted services

2017-01-18 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-17354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya resolved AMBARI-17354.

Resolution: Fixed

> POC: FE changes for first class support for Yarn hosted services
> 
>
> Key: AMBARI-17354
> URL: https://issues.apache.org/jira/browse/AMBARI-17354
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-web
>Reporter: Jayush Luniya
>Assignee: Jaimin Jetly
>Priority: Critical
>
> JIRA for front end proof of concept work to provide first class support for 
> Yarn hosted services.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (AMBARI-18613) Minor fixes for HDF mpack

2017-01-18 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya resolved AMBARI-18613.

Resolution: Fixed

> Minor fixes for HDF mpack
> -
>
> Key: AMBARI-18613
> URL: https://issues.apache.org/jira/browse/AMBARI-18613
> Project: Ambari
>  Issue Type: Bug
>  Components: contrib
>Affects Versions: trunk
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
> Fix For: trunk
>
> Attachments: AMBARI-18613.patch
>
>
> # Change to regex to obtain common_name_for_certificate
> # Update repoinfo.xml



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-19601) Warnings on ambari server upgrade related to HDF paths

2017-01-17 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-19601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15827555#comment-15827555
 ] 

Jayush Luniya commented on AMBARI-19601:


UT failures not related to the patch.

> Warnings on ambari server upgrade related to HDF paths
> --
>
> Key: AMBARI-19601
> URL: https://issues.apache.org/jira/browse/AMBARI-19601
> Project: Ambari
>  Issue Type: Bug
>Affects Versions: 2.4.2
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
> Fix For: 2.5.0
>
> Attachments: AMBARI-19601.patch
>
>
> Ambari 2.4.1.0 and HDF 2.0.1.0 was upgraded to Ambari 2.4.2.0 and HDF 2.1 
> mpack
> Saw the following warnings on ambari-server upgrade
> {code}
> [root@arpit-hdf-eu-5 ~]# ambari-server upgrade
> Using python  /usr/bin/python
> Upgrading ambari-server
> Updating properties in ambari.properties ...
> WARNING: Original file ambari-env.sh kept
> WARNING: Original file krb5JAASLogin.conf kept
> File krb5JAASLogin.conf updated.
> Fixing database objects owner
> Ambari Server configured for MySQL. Confirm you have made a backup of the 
> Ambari Server database [y/n] (y)?
> Upgrading database schema
> Adjusting ambari-server permissions and ownership...
> WARNING: Command chown  -R -L root /var/lib/ambari-server returned exit code 
> /var/lib/ambari-server with message: chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/repos': 
> No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/properties':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/metainfo.xml':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/ZOOKEEPER':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/LOGSEARCH':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/KAFKA':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/stack_advisor.py':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/AMBARI_INFRA':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/STORM':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/AMBARI_METRICS':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/KERBEROS':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/NIFI':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/RANGER':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/widgets.json':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/kerberos.json':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/configuration':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/role_command_order.json':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/hooks': 
> No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/common-services_21_11_16_21_33.old/NIFI/1.0.0':
>  No such file or directory
> Ambari Server 'upgrade' completed successfully.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-19601) Warnings on ambari server upgrade related to HDF paths

2017-01-17 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-19601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-19601:
---
Status: Patch Available  (was: In Progress)

> Warnings on ambari server upgrade related to HDF paths
> --
>
> Key: AMBARI-19601
> URL: https://issues.apache.org/jira/browse/AMBARI-19601
> Project: Ambari
>  Issue Type: Bug
>Affects Versions: 2.4.2
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
> Fix For: 2.5.0
>
> Attachments: AMBARI-19601.patch
>
>
> Ambari 2.4.1.0 and HDF 2.0.1.0 was upgraded to Ambari 2.4.2.0 and HDF 2.1 
> mpack
> Saw the following warnings on ambari-server upgrade
> {code}
> [root@arpit-hdf-eu-5 ~]# ambari-server upgrade
> Using python  /usr/bin/python
> Upgrading ambari-server
> Updating properties in ambari.properties ...
> WARNING: Original file ambari-env.sh kept
> WARNING: Original file krb5JAASLogin.conf kept
> File krb5JAASLogin.conf updated.
> Fixing database objects owner
> Ambari Server configured for MySQL. Confirm you have made a backup of the 
> Ambari Server database [y/n] (y)?
> Upgrading database schema
> Adjusting ambari-server permissions and ownership...
> WARNING: Command chown  -R -L root /var/lib/ambari-server returned exit code 
> /var/lib/ambari-server with message: chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/repos': 
> No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/properties':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/metainfo.xml':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/ZOOKEEPER':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/LOGSEARCH':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/KAFKA':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/stack_advisor.py':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/AMBARI_INFRA':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/STORM':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/AMBARI_METRICS':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/KERBEROS':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/NIFI':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/RANGER':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/widgets.json':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/kerberos.json':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/configuration':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/role_command_order.json':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/hooks': 
> No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/common-services_21_11_16_21_33.old/NIFI/1.0.0':
>  No such file or directory
> Ambari Server 'upgrade' completed successfully.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-19601) Warnings on ambari server upgrade related to HDF paths

2017-01-17 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-19601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-19601:
---
Attachment: AMBARI-19601.patch

> Warnings on ambari server upgrade related to HDF paths
> --
>
> Key: AMBARI-19601
> URL: https://issues.apache.org/jira/browse/AMBARI-19601
> Project: Ambari
>  Issue Type: Bug
>Affects Versions: 2.4.2
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
> Fix For: 2.5.0
>
> Attachments: AMBARI-19601.patch
>
>
> Ambari 2.4.1.0 and HDF 2.0.1.0 was upgraded to Ambari 2.4.2.0 and HDF 2.1 
> mpack
> Saw the following warnings on ambari-server upgrade
> {code}
> [root@arpit-hdf-eu-5 ~]# ambari-server upgrade
> Using python  /usr/bin/python
> Upgrading ambari-server
> Updating properties in ambari.properties ...
> WARNING: Original file ambari-env.sh kept
> WARNING: Original file krb5JAASLogin.conf kept
> File krb5JAASLogin.conf updated.
> Fixing database objects owner
> Ambari Server configured for MySQL. Confirm you have made a backup of the 
> Ambari Server database [y/n] (y)?
> Upgrading database schema
> Adjusting ambari-server permissions and ownership...
> WARNING: Command chown  -R -L root /var/lib/ambari-server returned exit code 
> /var/lib/ambari-server with message: chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/repos': 
> No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/properties':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/metainfo.xml':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/ZOOKEEPER':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/LOGSEARCH':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/KAFKA':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/stack_advisor.py':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/AMBARI_INFRA':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/STORM':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/AMBARI_METRICS':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/KERBEROS':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/NIFI':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/RANGER':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/widgets.json':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/kerberos.json':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/configuration':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/role_command_order.json':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/hooks': 
> No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/common-services_21_11_16_21_33.old/NIFI/1.0.0':
>  No such file or directory
> Ambari Server 'upgrade' completed successfully.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (AMBARI-19601) Warnings on ambari server upgrade related to HDF paths

2017-01-17 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-19601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya reassigned AMBARI-19601:
--

Assignee: Jayush Luniya

> Warnings on ambari server upgrade related to HDF paths
> --
>
> Key: AMBARI-19601
> URL: https://issues.apache.org/jira/browse/AMBARI-19601
> Project: Ambari
>  Issue Type: Bug
>Affects Versions: 2.4.2
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
> Fix For: 2.5.0
>
>
> Ambari 2.4.1.0 and HDF 2.0.1.0 was upgraded to Ambari 2.4.2.0 and HDF 2.1 
> mpack
> Saw the following warnings on ambari-server upgrade
> {code}
> [root@arpit-hdf-eu-5 ~]# ambari-server upgrade
> Using python  /usr/bin/python
> Upgrading ambari-server
> Updating properties in ambari.properties ...
> WARNING: Original file ambari-env.sh kept
> WARNING: Original file krb5JAASLogin.conf kept
> File krb5JAASLogin.conf updated.
> Fixing database objects owner
> Ambari Server configured for MySQL. Confirm you have made a backup of the 
> Ambari Server database [y/n] (y)?
> Upgrading database schema
> Adjusting ambari-server permissions and ownership...
> WARNING: Command chown  -R -L root /var/lib/ambari-server returned exit code 
> /var/lib/ambari-server with message: chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/repos': 
> No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/properties':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/metainfo.xml':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/ZOOKEEPER':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/LOGSEARCH':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/KAFKA':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/stack_advisor.py':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/AMBARI_INFRA':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/STORM':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/AMBARI_METRICS':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/KERBEROS':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/NIFI':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/RANGER':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/widgets.json':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/kerberos.json':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/configuration':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/role_command_order.json':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/hooks': 
> No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/common-services_21_11_16_21_33.old/NIFI/1.0.0':
>  No such file or directory
> Ambari Server 'upgrade' completed successfully.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-19601) Warnings on ambari server upgrade related to HDF paths

2017-01-17 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-19601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-19601:
---
Affects Version/s: 2.4.2

> Warnings on ambari server upgrade related to HDF paths
> --
>
> Key: AMBARI-19601
> URL: https://issues.apache.org/jira/browse/AMBARI-19601
> Project: Ambari
>  Issue Type: Bug
>Affects Versions: 2.4.2
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
> Fix For: 2.5.0
>
>
> Ambari 2.4.1.0 and HDF 2.0.1.0 was upgraded to Ambari 2.4.2.0 and HDF 2.1 
> mpack
> Saw the following warnings on ambari-server upgrade
> {code}
> [root@arpit-hdf-eu-5 ~]# ambari-server upgrade
> Using python  /usr/bin/python
> Upgrading ambari-server
> Updating properties in ambari.properties ...
> WARNING: Original file ambari-env.sh kept
> WARNING: Original file krb5JAASLogin.conf kept
> File krb5JAASLogin.conf updated.
> Fixing database objects owner
> Ambari Server configured for MySQL. Confirm you have made a backup of the 
> Ambari Server database [y/n] (y)?
> Upgrading database schema
> Adjusting ambari-server permissions and ownership...
> WARNING: Command chown  -R -L root /var/lib/ambari-server returned exit code 
> /var/lib/ambari-server with message: chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/repos': 
> No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/properties':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/metainfo.xml':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/ZOOKEEPER':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/LOGSEARCH':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/KAFKA':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/stack_advisor.py':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/AMBARI_INFRA':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/STORM':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/AMBARI_METRICS':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/KERBEROS':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/NIFI':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/RANGER':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/widgets.json':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/kerberos.json':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/configuration':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/role_command_order.json':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/hooks': 
> No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/common-services_21_11_16_21_33.old/NIFI/1.0.0':
>  No such file or directory
> Ambari Server 'upgrade' completed successfully.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-19601) Warnings on ambari server upgrade related to HDF paths

2017-01-17 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-19601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-19601:
---
Fix Version/s: 2.5.0

> Warnings on ambari server upgrade related to HDF paths
> --
>
> Key: AMBARI-19601
> URL: https://issues.apache.org/jira/browse/AMBARI-19601
> Project: Ambari
>  Issue Type: Bug
>Affects Versions: 2.4.2
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
> Fix For: 2.5.0
>
>
> Ambari 2.4.1.0 and HDF 2.0.1.0 was upgraded to Ambari 2.4.2.0 and HDF 2.1 
> mpack
> Saw the following warnings on ambari-server upgrade
> {code}
> [root@arpit-hdf-eu-5 ~]# ambari-server upgrade
> Using python  /usr/bin/python
> Upgrading ambari-server
> Updating properties in ambari.properties ...
> WARNING: Original file ambari-env.sh kept
> WARNING: Original file krb5JAASLogin.conf kept
> File krb5JAASLogin.conf updated.
> Fixing database objects owner
> Ambari Server configured for MySQL. Confirm you have made a backup of the 
> Ambari Server database [y/n] (y)?
> Upgrading database schema
> Adjusting ambari-server permissions and ownership...
> WARNING: Command chown  -R -L root /var/lib/ambari-server returned exit code 
> /var/lib/ambari-server with message: chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/repos': 
> No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/properties':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/metainfo.xml':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/ZOOKEEPER':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/LOGSEARCH':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/KAFKA':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/stack_advisor.py':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/AMBARI_INFRA':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/STORM':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/AMBARI_METRICS':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/KERBEROS':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/NIFI':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/RANGER':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/widgets.json':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/kerberos.json':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/configuration':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/role_command_order.json':
>  No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/hooks': 
> No such file or directory
> chown: cannot dereference 
> `/var/lib/ambari-server/resources/common-services_21_11_16_21_33.old/NIFI/1.0.0':
>  No such file or directory
> Ambari Server 'upgrade' completed successfully.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (AMBARI-19601) Warnings on ambari server upgrade related to HDF paths

2017-01-17 Thread Jayush Luniya (JIRA)
Jayush Luniya created AMBARI-19601:
--

 Summary: Warnings on ambari server upgrade related to HDF paths
 Key: AMBARI-19601
 URL: https://issues.apache.org/jira/browse/AMBARI-19601
 Project: Ambari
  Issue Type: Bug
Reporter: Jayush Luniya


Ambari 2.4.1.0 and HDF 2.0.1.0 was upgraded to Ambari 2.4.2.0 and HDF 2.1 mpack

Saw the following warnings on ambari-server upgrade

{code}
[root@arpit-hdf-eu-5 ~]# ambari-server upgrade
Using python  /usr/bin/python
Upgrading ambari-server
Updating properties in ambari.properties ...
WARNING: Original file ambari-env.sh kept
WARNING: Original file krb5JAASLogin.conf kept
File krb5JAASLogin.conf updated.
Fixing database objects owner
Ambari Server configured for MySQL. Confirm you have made a backup of the 
Ambari Server database [y/n] (y)?
Upgrading database schema
Adjusting ambari-server permissions and ownership...
WARNING: Command chown  -R -L root /var/lib/ambari-server returned exit code 
/var/lib/ambari-server with message: chown: cannot dereference 
`/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/repos': No 
such file or directory
chown: cannot dereference 
`/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/properties':
 No such file or directory
chown: cannot dereference 
`/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/metainfo.xml':
 No such file or directory
chown: cannot dereference 
`/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/ZOOKEEPER':
 No such file or directory
chown: cannot dereference 
`/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/LOGSEARCH':
 No such file or directory
chown: cannot dereference 
`/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/KAFKA':
 No such file or directory
chown: cannot dereference 
`/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/stack_advisor.py':
 No such file or directory
chown: cannot dereference 
`/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/AMBARI_INFRA':
 No such file or directory
chown: cannot dereference 
`/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/STORM':
 No such file or directory
chown: cannot dereference 
`/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/AMBARI_METRICS':
 No such file or directory
chown: cannot dereference 
`/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/KERBEROS':
 No such file or directory
chown: cannot dereference 
`/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/NIFI':
 No such file or directory
chown: cannot dereference 
`/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/services/RANGER':
 No such file or directory
chown: cannot dereference 
`/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/widgets.json':
 No such file or directory
chown: cannot dereference 
`/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/kerberos.json':
 No such file or directory
chown: cannot dereference 
`/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/configuration':
 No such file or directory
chown: cannot dereference 
`/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/role_command_order.json':
 No such file or directory
chown: cannot dereference 
`/var/lib/ambari-server/resources/stacks_21_11_16_21_33.old/HDF/2.0/hooks': No 
such file or directory
chown: cannot dereference 
`/var/lib/ambari-server/resources/common-services_21_11_16_21_33.old/NIFI/1.0.0':
 No such file or directory

Ambari Server 'upgrade' completed successfully.
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18842) Provide support for removing an mpack on the command line

2016-11-23 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18842:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Provide support for removing an mpack on the command line
> -
>
> Key: AMBARI-18842
> URL: https://issues.apache.org/jira/browse/AMBARI-18842
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
> Fix For: 2.5.0
>
> Attachments: AMBARI-18842.3.patch
>
>
> # Provide command line support to remove a management pack.
> # Removal should work for add-on service mpacks.
> # Removal command should remove all versions the mpack to be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18842) Provide support for removing an mpack on the command line

2016-11-23 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15690977#comment-15690977
 ] 

Jayush Luniya commented on AMBARI-18842:


Trunk
commit bd2cd4ab61798a44a670797a9348b6e818b7701b
Author: Jayush Luniya 
Date:   Wed Nov 23 10:30:00 2016 -0800

AMBARI-18842: Provide support for removing an mpack on the command line 
(jluniya)

Branch-2.5
commit d5337a5a3baf21516471b1070edcb52e2fe254d8
Author: Jayush Luniya 
Date:   Wed Nov 23 10:30:00 2016 -0800

AMBARI-18842: Provide support for removing an mpack on the command line 
(jluniya)

> Provide support for removing an mpack on the command line
> -
>
> Key: AMBARI-18842
> URL: https://issues.apache.org/jira/browse/AMBARI-18842
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
> Fix For: 2.5.0
>
> Attachments: AMBARI-18842.3.patch
>
>
> # Provide command line support to remove a management pack.
> # Removal should work for add-on service mpacks.
> # Removal command should remove all versions the mpack to be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18842) Provide support for removing an mpack on the command line

2016-11-23 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15690972#comment-15690972
 ] 

Jayush Luniya commented on AMBARI-18842:


Hadoop QA test failure is not related to this patch. 
{code}
--
Failed tests:
FAIL: test_start_secured (test_webhcat_server.TestWebHCatServer)
--
Traceback (most recent call last):
  File 
"/home/jenkins/jenkins-slave/workspace/Ambari-trunk-test-patch/ambari/ambari-common/src/test/python/mock/mock.py",
 line 1199, in patched
return func(*args, **keywargs)
  File 
"/home/jenkins/jenkins-slave/workspace/Ambari-trunk-test-patch/ambari/ambari-server/src/test/python/stacks/2.0.6/HIVE/test_webhcat_server.py",
 line 134, in test_start_secured
self.assert_configure_secured()
  File 
"/home/jenkins/jenkins-slave/workspace/Ambari-trunk-test-patch/ambari/ambari-server/src/test/python/stacks/2.0.6/HIVE/test_webhcat_server.py",
 line 257, in assert_configure_secured
user = 'hcat',
  File 
"/home/jenkins/jenkins-slave/workspace/Ambari-trunk-test-patch/ambari/ambari-server/src/test/python/stacks/utils/RMFTestCase.py",
 line 280, in assertResourceCalled
self.assertEquals(resource_type, resource.__class__.__name__)
AssertionError: 'Execute' != 'XmlConfig'

--
{code}

> Provide support for removing an mpack on the command line
> -
>
> Key: AMBARI-18842
> URL: https://issues.apache.org/jira/browse/AMBARI-18842
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
> Fix For: 2.5.0
>
> Attachments: AMBARI-18842.3.patch
>
>
> # Provide command line support to remove a management pack.
> # Removal should work for add-on service mpacks.
> # Removal command should remove all versions the mpack to be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18842) Provide support for removing an mpack on the command line

2016-11-23 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18842:
---
Status: Patch Available  (was: Open)

> Provide support for removing an mpack on the command line
> -
>
> Key: AMBARI-18842
> URL: https://issues.apache.org/jira/browse/AMBARI-18842
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
> Fix For: 2.5.0
>
> Attachments: AMBARI-18842.3.patch
>
>
> # Provide command line support to remove a management pack.
> # Removal should work for add-on service mpacks.
> # Removal command should remove all versions the mpack to be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18842) Provide support for removing an mpack on the command line

2016-11-23 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18842:
---
Status: Open  (was: Patch Available)

> Provide support for removing an mpack on the command line
> -
>
> Key: AMBARI-18842
> URL: https://issues.apache.org/jira/browse/AMBARI-18842
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
> Fix For: 2.5.0
>
> Attachments: AMBARI-18842.3.patch
>
>
> # Provide command line support to remove a management pack.
> # Removal should work for add-on service mpacks.
> # Removal command should remove all versions the mpack to be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18842) Provide support for removing an mpack on the command line

2016-11-23 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18842:
---
Attachment: (was: AMBARI-18842.patch)

> Provide support for removing an mpack on the command line
> -
>
> Key: AMBARI-18842
> URL: https://issues.apache.org/jira/browse/AMBARI-18842
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
> Fix For: 2.5.0
>
> Attachments: AMBARI-18842.3.patch
>
>
> # Provide command line support to remove a management pack.
> # Removal should work for add-on service mpacks.
> # Removal command should remove all versions the mpack to be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18637) Management pack purge option should warn user and ask for confirmation before purging

2016-11-16 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15671058#comment-15671058
 ] 

Jayush Luniya commented on AMBARI-18637:


Build failures not related to the addendum patch as it is a python code change. 

mvn clean install -DskipSurefireTests
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 5:16.144s
[INFO] Finished at: Wed Nov 16 08:38:04 PST 2016
[INFO] Final Memory: 184M/1730M
[INFO] 

> Management pack purge option should warn user and ask for confirmation before 
> purging
> -
>
> Key: AMBARI-18637
> URL: https://issues.apache.org/jira/browse/AMBARI-18637
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
> Fix For: 2.4.2
>
> Attachments: AMBARI-18637.addendum.patch, AMBARI-18637.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (AMBARI-18637) Management pack purge option should warn user and ask for confirmation before purging

2016-11-16 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya resolved AMBARI-18637.

Resolution: Fixed

> Management pack purge option should warn user and ask for confirmation before 
> purging
> -
>
> Key: AMBARI-18637
> URL: https://issues.apache.org/jira/browse/AMBARI-18637
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
> Fix For: 2.4.2
>
> Attachments: AMBARI-18637.addendum.patch, AMBARI-18637.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (AMBARI-18637) Management pack purge option should warn user and ask for confirmation before purging

2016-11-16 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya reopened AMBARI-18637:


> Management pack purge option should warn user and ask for confirmation before 
> purging
> -
>
> Key: AMBARI-18637
> URL: https://issues.apache.org/jira/browse/AMBARI-18637
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
> Fix For: 2.4.2
>
> Attachments: AMBARI-18637.addendum.patch, AMBARI-18637.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18431) Storm Ambari view - Fixes to DAG, kafka offset info , Misc fixes.

2016-11-15 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668479#comment-15668479
 ] 

Jayush Luniya commented on AMBARI-18431:


cc: [~swagle]

> Storm Ambari view - Fixes to DAG, kafka offset info , Misc fixes.
> -
>
> Key: AMBARI-18431
> URL: https://issues.apache.org/jira/browse/AMBARI-18431
> Project: Ambari
>  Issue Type: Improvement
>Reporter: Sriharsha Chintalapani
>Assignee: Sriharsha Chintalapani
> Fix For: 2.5.0
>
> Attachments: 
> 0001-Changed-topology-DAG-Fixed-Kafka-Spout-Lag-data-form.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18431) Storm Ambari view - Fixes to DAG, kafka offset info , Misc fixes.

2016-11-15 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668476#comment-15668476
 ] 

Jayush Luniya commented on AMBARI-18431:


[~sriharsha]
We are almost about to kickoff a 2.4.2 release and given this is an 
improvement, it doesnt meet the bar for 2.4.2. We can push this in for 2.5.


> Storm Ambari view - Fixes to DAG, kafka offset info , Misc fixes.
> -
>
> Key: AMBARI-18431
> URL: https://issues.apache.org/jira/browse/AMBARI-18431
> Project: Ambari
>  Issue Type: Improvement
>Reporter: Sriharsha Chintalapani
>Assignee: Sriharsha Chintalapani
> Fix For: 2.5.0
>
> Attachments: 
> 0001-Changed-topology-DAG-Fixed-Kafka-Spout-Lag-data-form.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18431) Storm Ambari view - Fixes to DAG, kafka offset info , Misc fixes.

2016-11-15 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18431:
---
Fix Version/s: (was: trunk)
   2.5.0

> Storm Ambari view - Fixes to DAG, kafka offset info , Misc fixes.
> -
>
> Key: AMBARI-18431
> URL: https://issues.apache.org/jira/browse/AMBARI-18431
> Project: Ambari
>  Issue Type: Improvement
>Reporter: Sriharsha Chintalapani
>Assignee: Sriharsha Chintalapani
> Fix For: 2.5.0
>
> Attachments: 
> 0001-Changed-topology-DAG-Fixed-Kafka-Spout-Lag-data-form.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18380) For rolling upgrade of Kafka 0.10.0.1 we need to configs for backward compatibility

2016-11-15 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668430#comment-15668430
 ] 

Jayush Luniya commented on AMBARI-18380:


[~afernandez]
Are you fixing this in 2.4.2? Can we move this out to 2.5.0?

> For rolling upgrade of Kafka 0.10.0.1 we need to configs for backward 
> compatibility 
> 
>
> Key: AMBARI-18380
> URL: https://issues.apache.org/jira/browse/AMBARI-18380
> Project: Ambari
>  Issue Type: Bug
>Reporter: Sriharsha Chintalapani
>Priority: Blocker
> Fix For: 2.4.2
>
>
> following properties need to be set
> {code}
> inter.broker.protocol.version=0.9.0.0
> log.message.format.version=0.9.0.0
> {code}
> after upgrade is done we should delte the inter.broker.protocol.version.
> Users should remove log.message.format.version once they update their clients.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18380) For rolling upgrade of Kafka 0.10.0.1 we need to configs for backward compatibility

2016-11-15 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18380:
---
Assignee: Alejandro Fernandez

> For rolling upgrade of Kafka 0.10.0.1 we need to configs for backward 
> compatibility 
> 
>
> Key: AMBARI-18380
> URL: https://issues.apache.org/jira/browse/AMBARI-18380
> Project: Ambari
>  Issue Type: Bug
>Reporter: Sriharsha Chintalapani
>Assignee: Alejandro Fernandez
>Priority: Blocker
> Fix For: 2.4.2
>
>
> following properties need to be set
> {code}
> inter.broker.protocol.version=0.9.0.0
> log.message.format.version=0.9.0.0
> {code}
> after upgrade is done we should delte the inter.broker.protocol.version.
> Users should remove log.message.format.version once they update their clients.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18794) Remove PHD stack from Ambari source code

2016-11-15 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668305#comment-15668305
 ] 

Jayush Luniya commented on AMBARI-18794:


Moving out of 2.4.2

> Remove PHD stack from Ambari source code
> 
>
> Key: AMBARI-18794
> URL: https://issues.apache.org/jira/browse/AMBARI-18794
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Reporter: Matt
>Assignee: Matt
>Priority: Minor
> Fix For: trunk, 2.5.0
>
>
> PHD stack is no longer used. Hence it should be removed from Ambari 2.4+ 
> branches.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18794) Remove PHD stack from Ambari source code

2016-11-15 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18794:
---
Fix Version/s: (was: 2.4.2)

> Remove PHD stack from Ambari source code
> 
>
> Key: AMBARI-18794
> URL: https://issues.apache.org/jira/browse/AMBARI-18794
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Reporter: Matt
>Assignee: Matt
>Priority: Minor
> Fix For: trunk, 2.5.0
>
>
> PHD stack is no longer used. Hence it should be removed from Ambari 2.4+ 
> branches.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18770) Service check and Ambari Alerting for RM fails against Yarn with HA and SPNEGO

2016-11-15 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18770:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Resolving since its already committed.

> Service check and Ambari Alerting for RM fails against Yarn with HA and 
> SPNEGO
> ---
>
> Key: AMBARI-18770
> URL: https://issues.apache.org/jira/browse/AMBARI-18770
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-agent, ambari-server
>Affects Versions: 2.4.0, 2.4.2
> Environment: Hortonworks HDP 2.5 and HDP 2.4
>Reporter: Greg Senia
>Assignee: Attila Magyar
> Fix For: 2.4.2
>
> Attachments: AMBARI-18770.patch, AMBARI-18770_branch-2.4.patch, 
> AMBARI-18770_branch-2.5.patch, AMBARI-18770_trunk.patch
>
>
> If both HA and SPNEGO is configured for the cluster, the service check and 
> ambari Alerting for RM fails for Yarn.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18597) Rename service to "Microsoft R Server" and component to "Microsoft R Node Client"

2016-11-15 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18597:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Looks like its already committed to 2.4.2 as well.

> Rename service to "Microsoft R Server" and component to "Microsoft R Node 
> Client"
> -
>
> Key: AMBARI-18597
> URL: https://issues.apache.org/jira/browse/AMBARI-18597
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Balázs Bence Sári
>Assignee: Balázs Bence Sári
>Priority: Critical
> Fix For: trunk, 2.5.0, 2.4.2
>
> Attachments: AMBARI-18597-rename-to-node-client.patch
>
>
> Rename service to "Microsoft R Server" and component to "Microsoft R Node 
> Client" in the Microsoft R management pack.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18666) Move HAWQ and PXF RCO from stacks to common-services

2016-11-15 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668296#comment-15668296
 ] 

Jayush Luniya commented on AMBARI-18666:


Moving this out of 2.4.2 as its not a blocker for 2.4.2

> Move HAWQ and PXF RCO from stacks to common-services
> 
>
> Key: AMBARI-18666
> URL: https://issues.apache.org/jira/browse/AMBARI-18666
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: trunk, 2.5.0, 2.4.2
>Reporter: Matt
>Assignee: Matt
>Priority: Minor
> Fix For: trunk, 2.5.0
>
> Attachments: AMBARI-18666-trunk-orig.patch
>
>
> Move HAWQ and PXF RCO from stacks to common-services



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18666) Move HAWQ and PXF RCO from stacks to common-services

2016-11-15 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18666:
---
Fix Version/s: (was: 2.4.2)

> Move HAWQ and PXF RCO from stacks to common-services
> 
>
> Key: AMBARI-18666
> URL: https://issues.apache.org/jira/browse/AMBARI-18666
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: trunk, 2.5.0, 2.4.2
>Reporter: Matt
>Assignee: Matt
>Priority: Minor
> Fix For: trunk, 2.5.0
>
> Attachments: AMBARI-18666-trunk-orig.patch
>
>
> Move HAWQ and PXF RCO from stacks to common-services



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18431) Storm Ambari view - Fixes to DAG, kafka offset info , Misc fixes.

2016-11-15 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18431:
---
Fix Version/s: (was: 2.4.2)
   trunk

> Storm Ambari view - Fixes to DAG, kafka offset info , Misc fixes.
> -
>
> Key: AMBARI-18431
> URL: https://issues.apache.org/jira/browse/AMBARI-18431
> Project: Ambari
>  Issue Type: Improvement
>Reporter: Sriharsha Chintalapani
>Assignee: Sriharsha Chintalapani
> Fix For: trunk
>
> Attachments: 
> 0001-Changed-topology-DAG-Fixed-Kafka-Spout-Lag-data-form.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18578) Grafana fails to start after deployment

2016-11-15 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18578:
---
Fix Version/s: (was: 2.4.2)
   2.5.0

> Grafana fails to start after deployment
> ---
>
> Key: AMBARI-18578
> URL: https://issues.apache.org/jira/browse/AMBARI-18578
> Project: Ambari
>  Issue Type: Bug
>Reporter: Andrew Onischuk
>Assignee: Andrew Onischuk
> Fix For: 2.5.0
>
>
> Noticed this issue in Ambari system tests clusters that run on YCloud, that
> Grafana fails to start after deployment with the below error
> 
> 
> 
> "stderr" : "Traceback (most recent call last):\n  File 
> \"/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_grafana.py\",
>  line 67, in \nAmsGrafana().execute()\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py\",
>  line 280, in execute\nmethod(env)\n  File 
> \"/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_grafana.py\",
>  line 46, in start\nuser=params.ams_user\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/base.py\", line 
> 155, in __init__\nself.env.run()\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/environment.py\", 
> line 160, in run\nself.run_action(resource, action)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/environment.py\", 
> line 124, in run_action\nprovider_action()\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py\",
>  line 273, in action_run\ntries=self.resource.tries, 
> try_sleep=self.resource.try_sleep)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
> 71, in inner\nresult = function(command, **kwargs)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
> 93, in checked_call\ntries=tries, try_sleep=try_sleep)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
> 141, in _call_wrapper\nresult = _call(command, **kwargs_copy)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
> 294, in _call\nraise 
> Fail(err_msg)\nresource_management.core.exceptions.Fail: Execution of 
> '/usr/sbin/ambari-metrics-grafana start' returned 1.  Hortonworks 
> #\nThis is MOTD message, added for testing in qe infra\nStarting 
> Ambari Metrics Grafana:  FAILED",
> 
> 
> Please find the artifacts for the run [here](http://qelog.hortonworks.com/log
> /nat-yc-ambari-2-4-2-0-amb-r7-dmyu-ambari-config-6/test-logs/ambari-config/art
> ifacts/screenshots/com.hw.ambari.ui.tests.installer.InstallHadoop/install/_8_2
> 3_23_7_Component__Phoenix_Query_Server__is_not__STARTED__on_host__ctr_e45_1475
> 874954070_0581_01_/lastAvailableRequests.txt)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (AMBARI-18317) ambari-agent script does not check for unset variables. (Leading to chown root:root /)

2016-11-15 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya resolved AMBARI-18317.

Resolution: Fixed

> ambari-agent script does not check for unset variables. (Leading to chown 
> root:root /)
> --
>
> Key: AMBARI-18317
> URL: https://issues.apache.org/jira/browse/AMBARI-18317
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-agent
>Affects Versions: 2.4.0
> Environment: Ubuntu 14.04
> Hortonworks provided package: ambari-agent 4.2.0.1-1
>Reporter: Ryan Walder
>Assignee: Andrew Onischuk
> Fix For: 2.4.2
>
>
> Using the following config (unchanged from a previous install, so missing 
> options relevant to 2.4.0.1) with ambari-agent 2.4.0.1 causes the 
> ambari-agent script to chown the entire filesystem as root.
> {noformat}
> [logging]
> syslog_enabled=0
> [agent]
> ping_port=8670
> data_cleanup_max_size_MB=100
> prefix=/var/lib/ambari-agent/data
> cache_dir=/var/lib/ambari-agent/cache
> tolerate_download_failures=true
> parallel_execution=0
> data_cleanup_interval=86400
> tolerate_download_failuresf=false
> data_cleanup_max_age=2592000
> loglevel=INFO
> run_as_user=root
> [server]
> secured_url_port=8441
> hostname=cs-vagrant-hadoop-ambarimaster-01.gel.zone
> url_port=8440
> [services]
> pidLookupPath=/var/run/
> [heartbeat]
> dirs=/etc/hadoop,/etc/hadoop/conf,/etc/hbase,/etc/hcatalog,/etc/hive,/etc/oozie,/etc/sqoop,/etc/ganglia,/var/run/hadoop,/var/run/zookeeper,/var/run/hbase,/var/run/templeton,/var/run/oozie,/var/log/hadoop,/var/log/zookeeper,/var/log/hbase,/var/run/templeton,/var/log/hive
> log_lines_count=300
> state_interval=6
> [security]
> server_crt=ca.crt
> keysdir=/var/lib/ambari-agent/keys
> passphrase_env_var_name=AMBARI_PASSPHRASE
> ryanwalder@ryanwlaptop:~$ vi old 
> ryanwalder@ryanwlaptop:~$ vi old 
> ryanwalder@ryanwlaptop:~$ vi old 
> ryanwalder@ryanwlaptop:~$ cat old 
> [logging]
> syslog_enabled=0
> [agent]
> ping_port=8670
> data_cleanup_max_size_MB=100
> prefix=/var/lib/ambari-agent/data
> cache_dir=/var/lib/ambari-agent/cache
> tolerate_download_failures=true
> parallel_execution=0
> data_cleanup_interval=86400
> tolerate_download_failuresf=false
> data_cleanup_max_age=2592000
> loglevel=INFO
> run_as_user=root
> [server]
> secured_url_port=8441
> hostname=cs-vagrant-hadoop-ambarimaster-01.gel.zone
> url_port=8440
> [services]
> pidLookupPath=/var/run/
> [heartbeat]
> dirs=/etc/hadoop,/etc/hadoop/conf,/etc/hbase,/etc/hcatalog,/etc/hive,/etc/oozie,/etc/sqoop,/etc/ganglia,/var/run/hadoop,/var/run/zookeeper,/var/run/hbase,/var/run/templeton,/var/run/oozie,/var/log/hadoop,/var/log/zookeeper,/var/log/hbase,/var/run/templeton,/var/log/hive
> log_lines_count=300
> state_interval=6
> [security]
> server_crt=ca.crt
> keysdir=/var/lib/ambari-agent/keys
> passphrase_env_var_name=AMBARI_PASSPHRASE
> {noformat}
> It looks like the following lines are to blame
> {noformat}
> ambari-sudo.sh chown -R $current_user "$AMBARI_PID_DIR/"
> ambari-sudo.sh mkdir -p "$AMBARI_AGENT_LOG_DIR"
> ambari-sudo.sh chown -R $current_user:$current_group 
> "$AMBARI_AGENT_LOG_DIR/"
> {noformat}
> No checking for unset variables in 2016? Top notch.
> http://www.davidpashley.com/articles/writing-robust-shell-scripts/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18317) ambari-agent script does not check for unset variables. (Leading to chown root:root /)

2016-11-15 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668271#comment-15668271
 ] 

Jayush Luniya commented on AMBARI-18317:


Looks like this is already in 2.4. Resolving JIRA.

commit ec16e4975f03294e2edacbd6884798c0cd5fdd23
Author: Andrew Onishuk 
Date:   Mon Sep 19 19:37:21 2016 +0300

AMBARI-18360. ambari-agent check for unset variables (AMBARI-18317) 
(aonishuk)

> ambari-agent script does not check for unset variables. (Leading to chown 
> root:root /)
> --
>
> Key: AMBARI-18317
> URL: https://issues.apache.org/jira/browse/AMBARI-18317
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-agent
>Affects Versions: 2.4.0
> Environment: Ubuntu 14.04
> Hortonworks provided package: ambari-agent 4.2.0.1-1
>Reporter: Ryan Walder
>Assignee: Andrew Onischuk
> Fix For: 2.4.2
>
>
> Using the following config (unchanged from a previous install, so missing 
> options relevant to 2.4.0.1) with ambari-agent 2.4.0.1 causes the 
> ambari-agent script to chown the entire filesystem as root.
> {noformat}
> [logging]
> syslog_enabled=0
> [agent]
> ping_port=8670
> data_cleanup_max_size_MB=100
> prefix=/var/lib/ambari-agent/data
> cache_dir=/var/lib/ambari-agent/cache
> tolerate_download_failures=true
> parallel_execution=0
> data_cleanup_interval=86400
> tolerate_download_failuresf=false
> data_cleanup_max_age=2592000
> loglevel=INFO
> run_as_user=root
> [server]
> secured_url_port=8441
> hostname=cs-vagrant-hadoop-ambarimaster-01.gel.zone
> url_port=8440
> [services]
> pidLookupPath=/var/run/
> [heartbeat]
> dirs=/etc/hadoop,/etc/hadoop/conf,/etc/hbase,/etc/hcatalog,/etc/hive,/etc/oozie,/etc/sqoop,/etc/ganglia,/var/run/hadoop,/var/run/zookeeper,/var/run/hbase,/var/run/templeton,/var/run/oozie,/var/log/hadoop,/var/log/zookeeper,/var/log/hbase,/var/run/templeton,/var/log/hive
> log_lines_count=300
> state_interval=6
> [security]
> server_crt=ca.crt
> keysdir=/var/lib/ambari-agent/keys
> passphrase_env_var_name=AMBARI_PASSPHRASE
> ryanwalder@ryanwlaptop:~$ vi old 
> ryanwalder@ryanwlaptop:~$ vi old 
> ryanwalder@ryanwlaptop:~$ vi old 
> ryanwalder@ryanwlaptop:~$ cat old 
> [logging]
> syslog_enabled=0
> [agent]
> ping_port=8670
> data_cleanup_max_size_MB=100
> prefix=/var/lib/ambari-agent/data
> cache_dir=/var/lib/ambari-agent/cache
> tolerate_download_failures=true
> parallel_execution=0
> data_cleanup_interval=86400
> tolerate_download_failuresf=false
> data_cleanup_max_age=2592000
> loglevel=INFO
> run_as_user=root
> [server]
> secured_url_port=8441
> hostname=cs-vagrant-hadoop-ambarimaster-01.gel.zone
> url_port=8440
> [services]
> pidLookupPath=/var/run/
> [heartbeat]
> dirs=/etc/hadoop,/etc/hadoop/conf,/etc/hbase,/etc/hcatalog,/etc/hive,/etc/oozie,/etc/sqoop,/etc/ganglia,/var/run/hadoop,/var/run/zookeeper,/var/run/hbase,/var/run/templeton,/var/run/oozie,/var/log/hadoop,/var/log/zookeeper,/var/log/hbase,/var/run/templeton,/var/log/hive
> log_lines_count=300
> state_interval=6
> [security]
> server_crt=ca.crt
> keysdir=/var/lib/ambari-agent/keys
> passphrase_env_var_name=AMBARI_PASSPHRASE
> {noformat}
> It looks like the following lines are to blame
> {noformat}
> ambari-sudo.sh chown -R $current_user "$AMBARI_PID_DIR/"
> ambari-sudo.sh mkdir -p "$AMBARI_AGENT_LOG_DIR"
> ambari-sudo.sh chown -R $current_user:$current_group 
> "$AMBARI_AGENT_LOG_DIR/"
> {noformat}
> No checking for unset variables in 2016? Top notch.
> http://www.davidpashley.com/articles/writing-robust-shell-scripts/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (AMBARI-18314) Users page after LDAP sync shows blank

2016-11-15 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya resolved AMBARI-18314.

Resolution: Incomplete

> Users page after LDAP sync shows blank
> --
>
> Key: AMBARI-18314
> URL: https://issues.apache.org/jira/browse/AMBARI-18314
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.1
>Reporter: Shreya Bhat
> Fix For: 2.5.0
>
>
> The network shows 500 Server error



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18314) Users page after LDAP sync shows blank

2016-11-15 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668254#comment-15668254
 ] 

Jayush Luniya commented on AMBARI-18314:


[~shreyabh...@gmail.com] There is not enough details on this JIRA. I will go 
ahead and close this JIRA for now, please reopen if you can repro and provide 
detailed logs. 

cc: [~rlevas]

> Users page after LDAP sync shows blank
> --
>
> Key: AMBARI-18314
> URL: https://issues.apache.org/jira/browse/AMBARI-18314
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.1
>Reporter: Shreya Bhat
> Fix For: 2.5.0
>
>
> The network shows 500 Server error



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18314) Users page after LDAP sync shows blank

2016-11-15 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18314:
---
Fix Version/s: (was: 2.4.2)
   2.5.0

> Users page after LDAP sync shows blank
> --
>
> Key: AMBARI-18314
> URL: https://issues.apache.org/jira/browse/AMBARI-18314
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.1
>Reporter: Shreya Bhat
> Fix For: 2.5.0
>
>
> The network shows 500 Server error



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (AMBARI-15537) Service Level Extensions for Add-On Services

2016-11-15 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-15537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya resolved AMBARI-15537.

Resolution: Fixed

> Service Level Extensions for Add-On Services
> 
>
> Key: AMBARI-15537
> URL: https://issues.apache.org/jira/browse/AMBARI-15537
> Project: Ambari
>  Issue Type: Epic
>Affects Versions: 2.1.0, 2.2.0
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
> Fix For: 2.4.2
>
>
> In order to make add-on services self contained we need to support following 
> extension points
> - Upgrade Pack Extensions
> - Stack Advisor Extensions
> - Role Command Order Extensions. This is covered in AMBARI-9363 (service 
> level RCO extension)
> - Repo Extensions



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18666) Move HAWQ and PXF RCO from stacks to common-services

2016-11-15 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668244#comment-15668244
 ] 

Jayush Luniya commented on AMBARI-18666:


[~mithmatt] What's the latest on this JIRA? Looks like its already committed. 
Can you close this JIRA or move it out of 2.4.2?

> Move HAWQ and PXF RCO from stacks to common-services
> 
>
> Key: AMBARI-18666
> URL: https://issues.apache.org/jira/browse/AMBARI-18666
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: trunk, 2.5.0, 2.4.2
>Reporter: Matt
>Assignee: Matt
>Priority: Minor
> Fix For: trunk, 2.5.0, 2.4.2
>
> Attachments: AMBARI-18666-trunk-orig.patch
>
>
> Move HAWQ and PXF RCO from stacks to common-services



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18637) Management pack purge option should warn user and ask for confirmation before purging

2016-11-15 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18637:
---
Attachment: AMBARI-18637.addendum.patch

> Management pack purge option should warn user and ask for confirmation before 
> purging
> -
>
> Key: AMBARI-18637
> URL: https://issues.apache.org/jira/browse/AMBARI-18637
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
> Fix For: 2.4.2
>
> Attachments: AMBARI-18637.addendum.patch, AMBARI-18637.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18842) Provide support for removing an mpack on the command line

2016-11-10 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18842:
---
Status: Patch Available  (was: In Progress)

> Provide support for removing an mpack on the command line
> -
>
> Key: AMBARI-18842
> URL: https://issues.apache.org/jira/browse/AMBARI-18842
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
> Fix For: 2.5.0
>
> Attachments: AMBARI-18842.patch
>
>
> # Provide command line support to remove a management pack.
> # Removal should work for add-on service mpacks.
> # Removal command should remove all versions the mpack to be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18842) Provide support for removing an mpack on the command line

2016-11-10 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18842:
---
Attachment: AMBARI-18842.patch

> Provide support for removing an mpack on the command line
> -
>
> Key: AMBARI-18842
> URL: https://issues.apache.org/jira/browse/AMBARI-18842
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
> Fix For: 2.5.0
>
> Attachments: AMBARI-18842.patch
>
>
> # Provide command line support to remove a management pack.
> # Removal should work for add-on service mpacks.
> # Removal command should remove all versions the mpack to be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18842) Provide support for removing an mpack on the command line

2016-11-10 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18842:
---
Summary: Provide support for removing an mpack on the command line  (was: 
Provide support for removing an mpack in the command line)

> Provide support for removing an mpack on the command line
> -
>
> Key: AMBARI-18842
> URL: https://issues.apache.org/jira/browse/AMBARI-18842
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
> Fix For: 2.5.0
>
>
> # Provide command line support to remove a management pack.
> # Removal should work for add-on service mpacks.
> # Removal command should remove all versions the mpack to be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (AMBARI-18842) Provide support for removing an mpack in the command line

2016-11-10 Thread Jayush Luniya (JIRA)
Jayush Luniya created AMBARI-18842:
--

 Summary: Provide support for removing an mpack in the command line
 Key: AMBARI-18842
 URL: https://issues.apache.org/jira/browse/AMBARI-18842
 Project: Ambari
  Issue Type: Task
  Components: ambari-server
Affects Versions: 2.4.0
Reporter: Jayush Luniya
Assignee: Jayush Luniya
 Fix For: 2.5.0


# Provide command line support to remove a management pack.
# Removal should work for add-on service mpacks.
# Removal command should remove all versions the mpack to be removed.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18774) Install Package for non-HDP stack fails with non-VDF repo version

2016-11-03 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15634115#comment-15634115
 ] 

Jayush Luniya commented on AMBARI-18774:


Trunk
commit 2da6fa789f6ac64c4e699469bde1185bb2ab126d
Author: Jayush Luniya 
Date:   Thu Nov 3 13:17:44 2016 -0700

AMBARI-18774: Install Package for non-HDP stack fails with non-VDF repo 
version (jluniya)

Branch-2.5
commit 298a50e91de43f4186d65b55e028e2a4b87f7f48
Author: Jayush Luniya 
Date:   Thu Nov 3 13:17:44 2016 -0700

AMBARI-18774: Install Package for non-HDP stack fails with non-VDF repo 
version (jluniya)

Branch-2.4
commit 4692a16bc230f563211ec02b7326cf13cb519366
Author: Jayush Luniya 
Date:   Thu Nov 3 13:17:44 2016 -0700

AMBARI-18774: Install Package for non-HDP stack fails with non-VDF repo 
version (jluniya)


> Install Package for non-HDP stack fails with non-VDF repo version
> -
>
> Key: AMBARI-18774
> URL: https://issues.apache.org/jira/browse/AMBARI-18774
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
>Priority: Critical
> Fix For: 2.4.2
>
> Attachments: AMBARI-18774.2.patch, AMBARI-18774.patch
>
>
> Install Package fails when registering a new non-HDP version without using a 
> VDF. This is because the repo version is saved as display name (i.e. 
> "HDF-2.1.0.0-30" instead of "2.1.0.0-30"). If a new version is registered 
> using VDF, we dont run into this issue as the version is set correctly.
> {code}
> Ambari cannot install version HDF-2.1.0.0.  Version 2.0.1.0-12 is already 
> installed.
> {code}
> Fix:
> Remove HDP specific hardcodings from RepositoryVersionEntity.java



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18774) Install Package for non-HDP stack fails with non-VDF repo version

2016-11-03 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18774:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Install Package for non-HDP stack fails with non-VDF repo version
> -
>
> Key: AMBARI-18774
> URL: https://issues.apache.org/jira/browse/AMBARI-18774
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
>Priority: Critical
> Fix For: 2.4.2
>
> Attachments: AMBARI-18774.2.patch, AMBARI-18774.patch
>
>
> Install Package fails when registering a new non-HDP version without using a 
> VDF. This is because the repo version is saved as display name (i.e. 
> "HDF-2.1.0.0-30" instead of "2.1.0.0-30"). If a new version is registered 
> using VDF, we dont run into this issue as the version is set correctly.
> {code}
> Ambari cannot install version HDF-2.1.0.0.  Version 2.0.1.0-12 is already 
> installed.
> {code}
> Fix:
> Remove HDP specific hardcodings from RepositoryVersionEntity.java



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


<    1   2   3   4   5   6   7   8   9   10   >