[jira] [Reopened] (AMBARI-15538) Support service-specific repo for add-on services

2016-09-15 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-15538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya reopened AMBARI-15538:


Please commit this to branch-2.4 too so that it is included in the 2.4.2 release.

> Support service-specific repo for add-on services
> -
>
> Key: AMBARI-15538
> URL: https://issues.apache.org/jira/browse/AMBARI-15538
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.1.0, 2.2.0, 2.4.0
>Reporter: Jayush Luniya
>Assignee: Balázs Bence Sári
>Priority: Critical
> Fix For: 2.5.0, 2.4.2
>
> Attachments: AMBARI-15538-custom-repos-patch6-branch-25.diff, 
> AMBARI-15538-custom-repos-patch6-trunk.diff
>
>
> The approach for custom services to specify their own repo location will be 
> to provide a {{/repos/repoinfo.xml}} inside the stack version they belong to. 
> This repo file will be loaded by Ambari during startup into the 
> {{/api/v1/stacks/HDP/versions/2.4/repository_versions}} repos. *Service repo 
> files have the restriction that their (repo-name, base-url) pairs must be 
> unique and must not conflict.* When conflicts do occur, the conflicting repos 
> will not be loaded into the stacks model.
> The management pack will then provide such a repos/ folder in 
> {{mpacks/custom-services/8.0.0/repos}}, which will be linked into the stacks/ 
> folder:
> {{ambari/ambari-server/src/main/resources/stacks/HDP/2.3/services/SERVICE_NAME/repos
>  -> mpacks/custom-services/8.0.0/repos}}
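
A minimal sketch of how such a service-level repos/ link could be created, assuming an mpack staging directory and the stack layout quoted above (the paths and the snippet itself are illustrative, not Ambari's actual mpack installer code):

{code}
import os

# Illustrative paths only; a real install resolves these from ambari.properties
# and the mpack metadata rather than hard-coding them.
mpack_repos = "/var/lib/ambari-server/resources/mpacks/custom-services/8.0.0/repos"
service_repos = ("/var/lib/ambari-server/resources/stacks/HDP/2.3/"
                 "services/SERVICE_NAME/repos")

# Link the mpack's repos/ folder into the stack's service definition so that
# Ambari picks up repos/repoinfo.xml for the add-on service at startup.
if not os.path.islink(service_repos):
    os.symlink(mpack_repos, service_repos)
{code}

On the next startup the merged repos then surface under the {{/api/v1/stacks/HDP/versions/2.4/repository_versions}} endpoint, provided the (repo-name, base-url) pairs do not conflict with existing ones.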



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-15538) Support service-specific repo for add-on services

2016-09-15 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-15538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15494453#comment-15494453
 ] 

Jayush Luniya commented on AMBARI-15538:


Ok. But at some stage we will add support for multiple service versions in a stack 
(e.g. Spark 1.6 and Spark 2.0 in HDP-2.x). We can add the option of a repo at the 
extension/mpack level; that can go in as a separate improvement. 

> Support service-specific repo for add-on services
> -
>
> Key: AMBARI-15538
> URL: https://issues.apache.org/jira/browse/AMBARI-15538
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.1.0, 2.2.0, 2.4.0
>Reporter: Jayush Luniya
>Assignee: Balázs Bence Sári
>Priority: Critical
> Fix For: 2.5.0, 2.4.2
>
> Attachments: AMBARI-15538-custom-repos-patch6-branch-25.diff, 
> AMBARI-15538-custom-repos-patch6-trunk.diff
>
>
> The approach for custom services to specify their own repo location will be 
> to provide a {{/repos/repoinfo.xml}} inside the stack version they belong to. 
> This repo file will be loaded by Ambari during startup into the 
> {{/api/v1/stacks/HDP/versions/2.4/repository_versions}} repos. *Service repo 
> files have the restriction that their (repo-name, base-url) pairs must be 
> unique and must not conflict.* When conflicts do occur, the conflicting repos 
> will not be loaded into the stacks model.
> The management pack will then provide such a repos/ folder in 
> {{mpacks/custom-services/8.0.0/repos}}, which will be linked into the stacks/ 
> folder:
> {{ambari/ambari-server/src/main/resources/stacks/HDP/2.3/services/SERVICE_NAME/repos
>  -> mpacks/custom-services/8.0.0/repos}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-15538) Support service-specific repo for add-on services

2016-09-15 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-15538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15494441#comment-15494441
 ] 

Jayush Luniya commented on AMBARI-15538:


Yes, reopened so it's not missed. cc: [~bsari]

> Support service-specific repo for add-on services
> -
>
> Key: AMBARI-15538
> URL: https://issues.apache.org/jira/browse/AMBARI-15538
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.1.0, 2.2.0, 2.4.0
>Reporter: Jayush Luniya
>Assignee: Balázs Bence Sári
>Priority: Critical
> Fix For: 2.5.0, 2.4.2
>
> Attachments: AMBARI-15538-custom-repos-patch6-branch-25.diff, 
> AMBARI-15538-custom-repos-patch6-trunk.diff
>
>
> The approach for custom services to specify their own repo location will be 
> to provide a {{/repos/repoinfo.xml}} inside the stack version they belong to. 
> This repo file will be loaded by Ambari during startup into the 
> {{/api/v1/stacks/HDP/versions/2.4/repository_versions}} repos. *Service repo 
> files have the restriction that their (repo-name, base-url) pairs must be 
> unique and must not conflict.* When conflicts do occur, the conflicting repos 
> will not be loaded into the stacks model.
> The management pack will then provide such a repos/ folder in 
> {{mpacks/custom-services/8.0.0/repos}}, which will be linked into the stacks/ 
> folder:
> {{ambari/ambari-server/src/main/resources/stacks/HDP/2.3/services/SERVICE_NAME/repos
>  -> mpacks/custom-services/8.0.0/repos}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-15538) Support service-specific repo for add-on services

2016-09-14 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-15538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-15538:
---
Priority: Critical  (was: Major)

> Support service-specific repo for add-on services
> -
>
> Key: AMBARI-15538
> URL: https://issues.apache.org/jira/browse/AMBARI-15538
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.1.0, 2.2.0, 2.4.0
>Reporter: Jayush Luniya
>Assignee: Balázs Bence Sári
>Priority: Critical
> Fix For: 2.5.0, 2.4.2
>
> Attachments: AMBARI-15538-custom-repos-patch6-trunk.diff
>
>
> The approach for custom services to specify their own repo location will be 
> to provide a {{/repos/repoinfo.xml}} inside the stack version they belong to. 
> This repo file will be loaded by Ambari during startup into the 
> {{/api/v1/stacks/HDP/versions/2.4/repository_versions}} repos. *Service repo 
> files have the restriction that their (repo-name, base-url) pairs must be 
> unique and must not conflict.* When conflicts do occur, the conflicting repos 
> will not be loaded into the stacks model.
> The management pack will then provide such a repos/ folder in 
> {{mpacks/custom-services/8.0.0/repos}}, which will be linked into the stacks/ 
> folder:
> {{ambari/ambari-server/src/main/resources/stacks/HDP/2.3/services/SERVICE_NAME/repos
>  -> mpacks/custom-services/8.0.0/repos}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18385) Add HDF management pack

2016-09-13 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15489471#comment-15489471
 ] 

Jayush Luniya commented on AMBARI-18385:


Trunk
commit 37e71db741cacb5acc4113131a27d2c1b7ac5791
Author: Jayush Luniya 
Date:   Tue Sep 13 22:26:38 2016 -0700

AMBARI-18385: Add HDF management pack (jluniya)

> Add HDF management pack
> ---
>
> Key: AMBARI-18385
> URL: https://issues.apache.org/jira/browse/AMBARI-18385
> Project: Ambari
>  Issue Type: Bug
>  Components: contrib
>Affects Versions: trunk
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
> Fix For: trunk
>
> Attachments: AMBARI-18385.patch
>
>
> Add HDF management pack to Ambari



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18385) Add HDF management pack

2016-09-13 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15489010#comment-15489010
 ] 

Jayush Luniya commented on AMBARI-18385:


[~sumitmohanty] [~mahadev]
Can you review the patch for adding HDF mpack?

> Add HDF management pack
> ---
>
> Key: AMBARI-18385
> URL: https://issues.apache.org/jira/browse/AMBARI-18385
> Project: Ambari
>  Issue Type: Bug
>  Components: contrib
>Affects Versions: trunk
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
> Fix For: trunk
>
> Attachments: AMBARI-18385.patch
>
>
> Add HDF management pack to Ambari



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (AMBARI-18385) Add HDF management pack

2016-09-13 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15489008#comment-15489008
 ] 

Jayush Luniya edited comment on AMBARI-18385 at 9/14/16 1:16 AM:
-

{code}
mvn clean apache-rat:check
cd contrib/management-packs
mvn clean package
mvn clean apache-rat:check
{code}


was (Author: jluniya):
{code}
 mvn clean apache-rat:check
cd contrib/management-packs
mvn clean package
 mvn clean apache-rat:check
{code}

> Add HDF management pack
> ---
>
> Key: AMBARI-18385
> URL: https://issues.apache.org/jira/browse/AMBARI-18385
> Project: Ambari
>  Issue Type: Bug
>  Components: contrib
>Affects Versions: trunk
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
> Fix For: trunk
>
> Attachments: AMBARI-18385.patch
>
>
> Add HDF management pack to Ambari



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18385) Add HDF management pack

2016-09-13 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15489008#comment-15489008
 ] 

Jayush Luniya commented on AMBARI-18385:


{code}
 mvn clean apache-rat:check
cd contrib/management-packs
mvn clean package
 mvn clean apache-rat:check
{code}

> Add HDF management pack
> ---
>
> Key: AMBARI-18385
> URL: https://issues.apache.org/jira/browse/AMBARI-18385
> Project: Ambari
>  Issue Type: Bug
>  Components: contrib
>Affects Versions: trunk
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
> Fix For: trunk
>
> Attachments: AMBARI-18385.patch
>
>
> Add HDF management pack to Ambari



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18385) Add HDF management pack

2016-09-13 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18385:
---
Attachment: AMBARI-18385.patch

> Add HDF management pack
> ---
>
> Key: AMBARI-18385
> URL: https://issues.apache.org/jira/browse/AMBARI-18385
> Project: Ambari
>  Issue Type: Bug
>  Components: contrib
>Affects Versions: trunk
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
> Fix For: trunk
>
> Attachments: AMBARI-18385.patch
>
>
> Add HDF management pack to Ambari



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (AMBARI-18385) Add HDF management pack

2016-09-13 Thread Jayush Luniya (JIRA)
Jayush Luniya created AMBARI-18385:
--

 Summary: Add HDF management pack
 Key: AMBARI-18385
 URL: https://issues.apache.org/jira/browse/AMBARI-18385
 Project: Ambari
  Issue Type: Bug
  Components: contrib
Affects Versions: trunk
Reporter: Jayush Luniya
Assignee: Jayush Luniya
 Fix For: trunk


Add HDF management pack to Ambari



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (AMBARI-15538) Support service-specific repo for add-on services

2016-09-13 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-15538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15488361#comment-15488361
 ] 

Jayush Luniya edited comment on AMBARI-15538 at 9/13/16 8:42 PM:
-

[~Tim Thorpe]
The latest patch submitted by [~bsari] addresses some of your comments (it 
correctly handles repos defined for a service definition in an extension, as 
well as stack inheritance). 

Let's say that in an extension we have MYSERVICE/1.0 -> HDP-2.4 and 
MYSERVICE/2.0 -> HDP-2.5. In that case we will need different repos for 
MYSERVICE/1.0 and MYSERVICE/2.0 (i.e. we will need to add repos at the service 
level). 

BTW, I don't see a way to add more than one version of a service in an 
extension. I think we need to support that. 


was (Author: jluniya):
[~Tim Thorpe]
Latest patch submitted by [~bsari] addresses some of your comments (it handles 
repos defined for a service definition for an extension correctly as well has 
handles stack inheritance correctly). 

Lets say that in an extension we have MYSERVICE/1.0 -> HDP-2.4 and 
MYSERVICE/2.0  -> HDP-2.5. In that case we will need to add different repos for 
MYSERVICE/1.0 and MYSERVICE/2.0 (i.e. we will need to add repos at service 
level). BTW, I don't see a way to add more than one versions of a service in an 
extension. I think we need to support that? 

> Support service-specific repo for add-on services
> -
>
> Key: AMBARI-15538
> URL: https://issues.apache.org/jira/browse/AMBARI-15538
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.1.0, 2.2.0, 2.4.0
>Reporter: Jayush Luniya
>Assignee: Balázs Bence Sári
> Fix For: 2.5.0, 2.4.2
>
> Attachments: AMBARI-15538-custom-repos-patch3-trunk.diff
>
>
> The approach for custom services to specify their own repo location will be 
> to provide a {{/repos/repoinfo.xml}} inside the stack version they belong to. 
> This repo file will be loaded by Ambari during startup into the 
> {{/api/v1/stacks/HDP/versions/2.4/repository_versions}} repos. *Service repo 
> files have the restriction that their (repo-name, base-url) pairs must be 
> unique and must not conflict.* When conflicts do occur, the conflicting repos 
> will not be loaded into the stacks model.
> The management pack will then provide such a repos/ folder in 
> {{mpacks/custom-services/8.0.0/repos}}, which will be linked into the stacks/ 
> folder:
> {{ambari/ambari-server/src/main/resources/stacks/HDP/2.3/services/SERVICE_NAME/repos
>  -> mpacks/custom-services/8.0.0/repos}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-15538) Support service-specific repo for add-on services

2016-09-13 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-15538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15488361#comment-15488361
 ] 

Jayush Luniya commented on AMBARI-15538:


[~Tim Thorpe]
The latest patch submitted by [~bsari] addresses some of your comments (it 
correctly handles repos defined for a service definition in an extension, as 
well as stack inheritance). 

Let's say that in an extension we have MYSERVICE/1.0 -> HDP-2.4 and 
MYSERVICE/2.0 -> HDP-2.5. In that case we will need different repos for 
MYSERVICE/1.0 and MYSERVICE/2.0 (i.e. we will need to add repos at the service 
level). BTW, I don't see a way to add more than one version of a service in an 
extension. I think we need to support that. 

> Support service-specific repo for add-on services
> -
>
> Key: AMBARI-15538
> URL: https://issues.apache.org/jira/browse/AMBARI-15538
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.1.0, 2.2.0, 2.4.0
>Reporter: Jayush Luniya
>Assignee: Balázs Bence Sári
> Fix For: 2.5.0, 2.4.2
>
> Attachments: AMBARI-15538-custom-repos-patch3-trunk.diff
>
>
> The approach for custom services to specify their own repo location will be 
> to provide a {{/repos/repoinfo.xml}} inside the stack version they belong to. 
> This repo file will be loaded by Ambari during startup into the 
> {{/api/v1/stacks/HDP/versions/2.4/repository_versions}} repos. *Service repo 
> files have the restriction that their (repo-name, base-url) pairs must be 
> unique and must not conflict.* When conflicts do occur, the conflicting repos 
> will not be loaded into the stacks model.
> The management pack will then provide such a repos/ folder in 
> {{mpacks/custom-services/8.0.0/repos}}, which will be linked into the stacks/ 
> folder:
> {{ambari/ambari-server/src/main/resources/stacks/HDP/2.3/services/SERVICE_NAME/repos
>  -> mpacks/custom-services/8.0.0/repos}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (AMBARI-17285) Custom service repos in repoinfo.xml got overwritten by public VDFs

2016-09-13 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-17285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya resolved AMBARI-17285.

Resolution: Duplicate

AMBARI-15538 should address this issue.

> Custom service repos in repoinfo.xml got overwritten by public VDFs
> ---
>
> Key: AMBARI-17285
> URL: https://issues.apache.org/jira/browse/AMBARI-17285
> Project: Ambari
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Alexander Denissov
>Assignee: Nate Cole
>Priority: Critical
> Fix For: 2.4.2
>
>
> Ambari 2.4 introduced Version Definition Files that break the functionality 
> of adding a custom service repo, since custom services do not have an entry 
> in the public VDF.
> In the case of HAWQ, the plugin is installed on the Ambari host and adds the 
> new repo information to the repoinfo.xml of all available stacks on the file 
> system. Once the Ambari cluster creation wizard queries the latest repo info 
> from the public URLs, it will get the info for all stack repos, but not the 
> custom ones. 
> So, the logic should be:
> 1. Use the default repoinfo (from the file system) as the base
> 2. Query the public VDF, if available
> 3. For each entry in the public VDF, overwrite the values in the default repoinfo
> 4. Entries in the default repoinfo that do not have corresponding entries in 
> the VDF should stay intact
> This way custom services can be added via a file edit and the latest 
> information can still be retrieved and applied for the standard stack.
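
A rough sketch of that merge order, assuming repo entries are keyed by repo id (the function name and data shapes below are hypothetical, not the actual Ambari implementation):

{code}
def merge_repoinfo(default_repos, vdf_repos):
    """default_repos / vdf_repos map repo id -> base_url.

    The file-system repoinfo.xml entries are the base (step 1); entries
    present in the public VDF overwrite them (steps 2-3); custom entries
    with no VDF counterpart, such as a HAWQ repo added by editing
    repoinfo.xml, stay intact (step 4).
    """
    merged = dict(default_repos)
    merged.update(vdf_repos)
    return merged

# The custom HAWQ repo survives while the HDP repo is refreshed from the VDF.
print(merge_repoinfo(
    {"HDP-2.4": "http://old.example/hdp", "HAWQ-2.0": "http://example/hawq"},
    {"HDP-2.4": "http://new.example/hdp"}))
{code}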



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-17728) Error message does not deliver when executing ambari-server command as a non-root user

2016-09-12 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-17728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15484609#comment-15484609
 ] 

Jayush Luniya commented on AMBARI-17728:


commit e4cb41e0ab469788180f3ac5741d331706b46ea0
Author: Jayush Luniya 
Date:   Mon Sep 12 09:50:49 2016 -0700

AMBARI-17728: Error message does not deliver when executing ambari-server 
command as a non-root use (wang yaoxin via jluniya)

> Error message does not deliver when executing ambari-server command as a 
> non-root user
> --
>
> Key: AMBARI-17728
> URL: https://issues.apache.org/jira/browse/AMBARI-17728
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk, 2.4.0
>Reporter: wangyaoxin
>Assignee: wangyaoxin
> Fix For: trunk
>
> Attachments: AMBARI-17728-1.patch, AMBARI-17728-2.patch, 
> AMBARI-17728.patch
>
>
> As a non-root user (e.g. hdfs), execute: ambari-server stop
> Actual output:  Using python  /usr/bin/python2.6 Stopping ambari-server
> Expected output: You can't perform this operation as non-sudoer user. Please, 
> re-login or configure sudo access for this user
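
A minimal illustration of the intended behaviour, assuming a plain effective-uid check (this is not the actual ambari-server start/stop code path, which also honours configured sudo access):

{code}
import os
import sys

def require_root_or_sudo():
    # Illustrative check only: real ambari-server also accepts configured sudoers.
    if os.geteuid() != 0:
        sys.exit("You can't perform this operation as non-sudoer user. "
                 "Please, re-login or configure sudo access for this user")

require_root_or_sudo()
print("Stopping ambari-server")
{code}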



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-17728) Error message does not deliver when executing ambari-server command as a non-root user

2016-09-12 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-17728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-17728:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Error message does not deliver when executing ambari-server command as a 
> non-root user
> --
>
> Key: AMBARI-17728
> URL: https://issues.apache.org/jira/browse/AMBARI-17728
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk, 2.4.0
>Reporter: wangyaoxin
>Assignee: wangyaoxin
> Fix For: trunk
>
> Attachments: AMBARI-17728-1.patch, AMBARI-17728-2.patch, 
> AMBARI-17728.patch
>
>
> As a non-root user (e.g. hdfs), execute: ambari-server stop
> Actual output:  Using python  /usr/bin/python2.6 Stopping ambari-server
> Expected output: You can't perform this operation as non-sudoer user. Please, 
> re-login or configure sudo access for this user



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-15538) Support service-specific repo for add-on services

2016-09-08 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-15538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-15538:
---
Fix Version/s: 2.4.2

> Support service-specific repo for add-on services
> -
>
> Key: AMBARI-15538
> URL: https://issues.apache.org/jira/browse/AMBARI-15538
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.1.0, 2.2.0, 2.4.0
>Reporter: Jayush Luniya
>Assignee: Balázs Bence Sári
> Fix For: 2.5.0, 2.4.2
>
> Attachments: AMBARI-15538-trunk-v1.patch
>
>
> The approach for custom services to specify their own repo location will be 
> to provide a {{/repos/repoinfo.xml}} inside the stack version they belong to. 
> This repo file will be loaded by Ambari during startup into the 
> {{/api/v1/stacks/HDP/versions/2.4/repository_versions}} repos. *Service repo 
> files have the restriction that their (repo-name, base-url) pairs must be 
> unique and must not conflict.* When conflicts do occur, the conflicting repos 
> will not be loaded into the stacks model.
> The management pack will then provide such a repos/ folder in 
> {{mpacks/custom-services/8.0.0/repos}}, which will be linked into the stacks/ 
> folder:
> {{ambari/ambari-server/src/main/resources/stacks/HDP/2.3/services/SERVICE_NAME/repos
>  -> mpacks/custom-services/8.0.0/repos}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18335) After upgrading cluster from HDP-2.4.x to HDP-2.5.x and added atlas service - missing kafka security properties

2016-09-08 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18335:
---
Priority: Blocker  (was: Major)

> After upgrading cluster from HDP-2.4.x to HDP-2.5.x and added atlas service - 
> missing kafka security properties
> ---
>
> Key: AMBARI-18335
> URL: https://issues.apache.org/jira/browse/AMBARI-18335
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Robert Levas
>Assignee: Robert Levas
>Priority: Blocker
>  Labels: kerberos_descriptor, upgrade
> Fix For: 2.4.1
>
> Attachments: AMBARI-18335_branch-2.4_01.patch, 
> AMBARI-18335_branch-2.4_02.patch, AMBARI-18335_trunk_01.patch, 
> AMBARI-18335_trunk_02.patch
>
>
> Steps to repro:
> * Install Ambari 2.2.2
> * Install HDP-2.4.x cluster with Atlas
> * Stop Atlas
> * Upgrade Ambari to 2.4
> * Delete Atlas service
> * Upgrade the cluster to HDP-2.5.x cluster
> * Add Atlas service.
> *The config properties below are missing from the atlas-application.properties 
> file for the Atlas, Storm, Falcon, and Hive services.*
> #atlas.jaas.KafkaClient.option.keyTab = 
> /etc/security/keytabs/atlas.service.keytab
> #atlas.jaas.KafkaClient.option.principal = atlas/_h...@example.com
> From HDP 2.4 to 2.5, the kerberos.json file for Atlas changed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18335) After upgrading cluster from HDP-2.4.x to HDP-2.5.x and added atlas service - missing kafka security properties

2016-09-08 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18335:
---
Fix Version/s: (was: 2.4.2)
   2.4.1

> After upgrading cluster from HDP-2.4.x to HDP-2.5.x and added atlas service - 
> missing kafka security properties
> ---
>
> Key: AMBARI-18335
> URL: https://issues.apache.org/jira/browse/AMBARI-18335
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Robert Levas
>Assignee: Robert Levas
>  Labels: kerberos_descriptor, upgrade
> Fix For: 2.4.1
>
> Attachments: AMBARI-18335_branch-2.4_01.patch, 
> AMBARI-18335_branch-2.4_02.patch, AMBARI-18335_trunk_01.patch, 
> AMBARI-18335_trunk_02.patch
>
>
> Steps to repro:
> * Install Ambari 2.2.2
> * Install HDP-2.4.x cluster with Atlas
> * Stop Atlas
> * Upgrade Ambari to 2.4
> * Delete Atlas service
> * Upgrade the cluster to HDP-2.5.x cluster
> * Add Atlas service.
> *The config properties below are missing from the atlas-application.properties 
> file for the Atlas, Storm, Falcon, and Hive services.*
> #atlas.jaas.KafkaClient.option.keyTab = 
> /etc/security/keytabs/atlas.service.keytab
> #atlas.jaas.KafkaClient.option.principal = atlas/_h...@example.com
> From HDP 2.4 to 2.5, the kerberos.json file for Atlas changed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18285) Ambari upgrade from 2.4.0.x version fails.

2016-09-08 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18285:
---
Priority: Blocker  (was: Major)

> Ambari upgrade from 2.4.0.x version fails. 
> ---
>
> Key: AMBARI-18285
> URL: https://issues.apache.org/jira/browse/AMBARI-18285
> Project: Ambari
>  Issue Type: Bug
>Affects Versions: 2.4.1
>Reporter: Andrew Onischuk
>Assignee: Andrew Onischuk
>Priority: Blocker
> Fix For: 2.4.1
>
> Attachments: AMBARI-18285.patch
>
>
> 
> nat-s11-4-aiws-1226-5:~ # ambari-server upgrade -v
> Using python  /usr/bin/python
> Upgrading ambari-server
> Updating properties in ambari.properties ...
> WARNING: Can not find ambari.properties.rpmsave file from previous 
> version, skipping import of settings
> INFO: Can not find ambari-env.sh.rpmsave file from previous version, 
> skipping restore of environment settings. ambari-env.sh may not include any 
> user customization.
> INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
> INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
> INFO: No mpack replay logs found. Skipping replaying mpack commands
> INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
> INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
> Fixing database objects owner
> INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
> INFO: about to run command: ['ambari-sudo.sh', 'su', 'postgres', '-', 
> '--command=/var/lib/ambari-server/resources/scripts/change_owner.sh -d ambari 
> -s ambari -o \'"ambari"\'']
> INFO: Fixed database objects owner
> INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
> Traceback (most recent call last):
>   File "/usr/sbin/ambari-server.py", line 754, in 
> mainBody()
>   File "/usr/sbin/ambari-server.py", line 725, in mainBody
> main(options, args, parser)
>   File "/usr/sbin/ambari-server.py", line 678, in main
> action_obj.execute()
>   File "/usr/sbin/ambari-server.py", line 69, in execute
> self.fn(*self.args, **self.kwargs)
>   File "/usr/lib/python2.6/site-packages/ambari_server/serverUpgrade.py", 
> line 370, in upgrade
> retcode = run_schema_upgrade(args)
>   File "/usr/lib/python2.6/site-packages/ambari_server/serverUpgrade.py", 
> line 249, in run_schema_upgrade
> db_title = get_db_type(get_ambari_properties()).title
> AttributeError: 'NoneType' object has no attribute 'title'
> 
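
The traceback shows that {{get_db_type(...)}} can return {{None}} (the database type cannot be determined from the loaded properties), and the chained {{.title}} access then raises {{AttributeError}}. A hedged sketch of the failure mode, with the helpers named in the traceback stubbed out so it is self-contained (this is not the actual serverUpgrade.py code or its patch):

{code}
# Stubs standing in for the ambari_server.serverUpgrade helpers in the traceback.
def get_ambari_properties():
    return {}          # properties loaded, but without usable db settings

def get_db_type(properties):
    return None        # db type cannot be determined -> None, as in the traceback

def run_schema_upgrade():
    db_type = get_db_type(get_ambari_properties())
    if db_type is None:                      # guard that avoids the AttributeError
        print("Unable to determine database type from ambari.properties")
        return -1
    return db_type.title                     # original code dereferenced this unconditionally

print(run_schema_upgrade())
{code}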



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18316) Ambari upgrade to Ambari 2.4.0 failed during DB upgrade due to incorrect TEZ view regex

2016-09-08 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18316:
---
Priority: Blocker  (was: Major)

> Ambari upgrade to Ambari 2.4.0 failed during DB upgrade due to incorrect TEZ 
> view regex
> ---
>
> Key: AMBARI-18316
> URL: https://issues.apache.org/jira/browse/AMBARI-18316
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: DIPAYAN BHOWMICK
>Assignee: DIPAYAN BHOWMICK
>Priority: Blocker
> Fix For: 2.4.1
>
> Attachments: AMBARI-18316.branch-2.4.patch
>
>
> Ambari upgrade to Ambari 2.4.0 failed during DB upgrade due to incorrect TEZ 
> view regular expression.  
> *Steps to Reproduce*
> # Install Ambari 2.2.0
> # Install cluster with TEZ
> # Change {{tez-site/tez.tez-ui.history-url.base}} from something like 
> {{http://c6501.ambari.apache.org:8080/#/main/views/TEZ/0.7.0.2.3.4.0-460/TEZ_CLUSTER_INSTANCE}}
>  to 
> {{http://c6501.ambari.apache.org:8080/#/main/views/TEZ/0.7.0.2.3.4.0-460/tezv1}}
>  
> ** Notice "TEZ_CLUSTER_INSTANCE" was changed to "tezv1"
> # Upgrade Ambari to 2.4.0.1
> # Execute {{ambari-server upgrade}}
> # See error
> {noformat}
> Using python /usr/bin/python 
> Upgrading ambari-server 
> Updating properties in ambari.properties ... 
> WARNING: Original file ambari-env.sh kept 
> Fixing database objects owner 
> Ambari Server configured for Embedded Postgres. Confirm you have made a 
> backup of the Ambari Server database [y/n] (y)? y 
> Upgrading database schema 
> Error output from schema upgrade command: 
> Exception in thread "main" org.apache.ambari.server.AmbariException: Cannot 
> prepare the new value for property: 'tez.tez-ui.history-url.base' using the 
> old value: 
> 'https://c6501.ambari.apache.org:8080/#/main/views/TEZ/0.7.0.2.3.4.0-460/tezv1'
>  
> at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeDMLUpdates(SchemaUpgradeHelper.java:237)
>  
> at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:353)
>  
> Caused by: org.apache.ambari.server.AmbariException: Cannot prepare the new 
> value for property: 'tez.tez-ui.history-url.base' using the old value: 
> 'https://c6501.ambari.apache.org:8080/#/main/views/TEZ/0.7.0.2.3.4.0-460/tezv1'
>  
> at 
> org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.getUpdatedTezHistoryUrlBase(AbstractUpgradeCatalog.java:951)
>  
> at 
> org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.updateTezHistoryUrlBase(AbstractUpgradeCatalog.java:923)
>  
> at 
> org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeData(AbstractUpgradeCatalog.java:900)
>  
> at 
> org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeDMLUpdates(SchemaUpgradeHelper.java:234)
>  
> ... 1 more 
> {noformat}
> *Cause*
> The cause for this error is in the regular expression below:
> {code:title=At or around 
> org/apache/ambari/server/upgrade/AbstractUpgradeCatalog.java:986}
> String pattern = "(.*\\/TEZ\\/)(.*)(\\/TEZ_CLUSTER_INSTANCE)";
> {code}
> This pattern assumes the URL ends with "TEZ_CLUSTER_INSTANCE"; however, the 
> user may have changed that segment, causing the match to fail and an 
> exception to be thrown.
> *Workaround*
> If the Ambari server package has not yet been upgraded
> # Edit the {{tez-site/tez.tez-ui.history-url.base}} config to match the 
> pattern
> # Perform the upgrade
> # Edit the {{tez-site/tez.tez-ui.history-url.base}} config to fix the URL as 
> needed
> If the Ambari server package has been upgraded
> # If {{ambari-server upgrade}} has been executed and failed, restore the 
> database
> # Using some database access utility (example, Toad), edit the  
> {{config_data}} column of the {{clusterconfig}} table for the record that 
> represents the _desired_ version of the {{tez-site}} config to match the 
> pattern
> # Perform the upgrade
> # Edit the {{tez-site/tez.tez-ui.history-url.base}} config to fix the URL as 
> needed
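
For illustration, the same pattern expressed in Python shows why the renamed instance no longer matches (the Java pattern quoted in the Cause section above is the authoritative one):

{code}
import re

# Same structure as the Java pattern in AbstractUpgradeCatalog quoted above.
pattern = re.compile(r"(.*/TEZ/)(.*)(/TEZ_CLUSTER_INSTANCE)")

default_url = ("http://c6501.ambari.apache.org:8080/#/main/views/TEZ/"
               "0.7.0.2.3.4.0-460/TEZ_CLUSTER_INSTANCE")
renamed_url = ("http://c6501.ambari.apache.org:8080/#/main/views/TEZ/"
               "0.7.0.2.3.4.0-460/tezv1")

print(bool(pattern.match(default_url)))  # True  - the upgrade can rewrite the URL
print(bool(pattern.match(renamed_url)))  # False - the catalog throws AmbariException
{code}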



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18294) Ambari Server Start/Stop fails on Centos 7.1+

2016-09-08 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18294:
---
Priority: Blocker  (was: Major)

> Ambari Server Start/Stop fails on Centos 7.1+
> -
>
> Key: AMBARI-18294
> URL: https://issues.apache.org/jira/browse/AMBARI-18294
> Project: Ambari
>  Issue Type: Bug
>Reporter: Andrew Onischuk
>Assignee: Andrew Onischuk
>Priority: Blocker
> Fix For: 2.4.1
>
> Attachments: AMBARI-18294.patch
>
>
> Brand new install on Centos 7.2
> Ambari Server 'setup' completed successfully.
> [root@c7001 ~]# ambari-server start
> Using python  /usr/bin/python
> Starting ambari-server
> Ambari Server running with administrator privileges.
> Organizing resource files at /var/lib/ambari-server/resources...
> Ambari database consistency check started...
> No errors were found.
> Ambari database consistency check finished
> Server PID at: /var/run/ambari-server/ambari-server.pid
> Server out at: /var/log/ambari-server/ambari-server.out
> Server log at: /var/log/ambari-server/ambari-server.log
> Waiting for server start
> Ambari Server 'start' completed successfully.
> [root@c7001 ~]# cat /var/run/ambari-server/ambari-server.pid
> 12302
> 12303
> [root@c7001 ~]# ps aux | grep AmbariServer
> root 12302  0.0  0.0 113116   644 pts/0S20:42   0:00 /bin/sh -c 
> ulimit -n 1 ; /usr/jdk64/jdk1.8.0_77/bin/java -server -XX:NewRatio=3 
> -XX:+UseConcMarkSweepGC -XX:-UseGCOverheadLimit 
> -XX:CMSInitiatingOccupancyFraction=60 -XX:+CMSClassUnloadingEnabled 
> -Dsun.zip.disableMemoryMapping=true   -Xms512m -Xmx2048m -XX:MaxPermSize=128m 
> -Djava.security.auth.login.config=$ROOT/etc/ambari-server/conf/krb5JAASLogin.conf
>  -Djava.security.krb5.conf=/etc/krb5.conf 
> -Djavax.security.auth.useSubjectCredsOnly=false -cp 
> '/etc/ambari-server/conf:/usr/lib/ambari-server/*:/usr/share/java/postgresql-jdbc.jar'
>  org.apache.ambari.server.controller.AmbariServer > 
> /var/log/ambari-server/ambari-server.out 2>&1 || echo $? > 
> /var/run/ambari-server/ambari-server.exitcode &
> root 12303 87.7 14.8 4377224 431856 pts/0  Sl   20:42   0:29 
> /usr/jdk64/jdk1.8.0_77/bin/java -server -XX:NewRatio=3 
> -XX:+UseConcMarkSweepGC -XX:-UseGCOverheadLimit 
> -XX:CMSInitiatingOccupancyFraction=60 -XX:+CMSClassUnloadingEnabled 
> -Dsun.zip.disableMemoryMapping=true -Xms512m -Xmx2048m -XX:MaxPermSize=128m 
> -Djava.security.auth.login.config=/etc/ambari-server/conf/krb5JAASLogin.conf 
> -Djava.security.krb5.conf=/etc/krb5.conf 
> -Djavax.security.auth.useSubjectCredsOnly=false -cp 
> /etc/ambari-server/conf:/usr/lib/ambari-server/*:/usr/share/java/postgresql-jdbc.jar
>  org.apache.ambari.server.controller.AmbariServer



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18335) After upgrading cluster from HDP-2.4.x to HDP-2.5.x and added atlas service - missing kafka security properties

2016-09-08 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18335:
---
Fix Version/s: (was: 2.4.1)
   2.4.2

> After upgrading cluster from HDP-2.4.x to HDP-2.5.x and added atlas service - 
> missing kafka security properties
> ---
>
> Key: AMBARI-18335
> URL: https://issues.apache.org/jira/browse/AMBARI-18335
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Robert Levas
>Assignee: Robert Levas
>  Labels: kerberos_descriptor, upgrade
> Fix For: 2.4.2
>
> Attachments: AMBARI-18335_branch-2.4_01.patch, 
> AMBARI-18335_branch-2.4_02.patch, AMBARI-18335_trunk_01.patch, 
> AMBARI-18335_trunk_02.patch
>
>
> Steps to repro:
> * Install Ambari 2.2.2
> * Install HDP-2.4.x cluster with Atlas
> * Stop Atlas
> * Upgrade Ambari to 2.4
> * Delete Atlas service
> * Upgrade the cluster to HDP-2.5.x cluster
> * Add Atlas service.
> *The config properties below are missing from the atlas-application.properties 
> file for the Atlas, Storm, Falcon, and Hive services.*
> #atlas.jaas.KafkaClient.option.keyTab = 
> /etc/security/keytabs/atlas.service.keytab
> #atlas.jaas.KafkaClient.option.principal = atlas/_h...@example.com
> From HDP 2.4 to 2.5, the kerberos.json file for Atlas changed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-16226) Execute topology tasks in parallel by hosts.

2016-09-08 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-16226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-16226:
---
Fix Version/s: (was: 2.4.1)
   2.4.2

> Execute topology tasks in parallel by hosts.
> 
>
> Key: AMBARI-16226
> URL: https://issues.apache.org/jira/browse/AMBARI-16226
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Reporter: Sebastian Toader
> Fix For: 2.4.2
>
>
> Currently, when a cluster is created using Blueprints, the 
> PersistHostResourcesTask, RegisterWithConfigGroupTask, InstallHostTask and 
> StartHostTask topology tasks are created for each host, in this order. These 
> tasks are then executed by a single-threaded executor 
> (TopologyManager.executor) as hosts are being assigned to the cluster.
> Since TopologyManager is a singleton, this leads to sequential execution of 
> topology tasks on a single thread. The execution of each individual 
> topology task involves db operations under the hood. If for any reason there 
> is some latency introduced by the db operations (e.g. the db server is not 
> local but a remote one is used) then this latency builds up into a 
> considerable delay if there are many hosts to execute topology tasks for.
> Executing the topology tasks in parallel will reduce the delay in this case.
> Since topology tasks for a host must be executed in order, only tasks that 
> belong to different hosts can run in parallel. E.g. the PersistHostResourcesTask, 
> RegisterWithConfigGroupTask, InstallHostTask and StartHostTask topology tasks 
> would be executed sequentially by one thread for host1 and by another thread 
> for host2.
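
A minimal sketch of the proposed per-host parallelism, assuming one single-threaded executor per host so each host's four tasks stay ordered while different hosts proceed concurrently (names are illustrative, not the actual TopologyManager code):

{code}
from concurrent.futures import ThreadPoolExecutor

TASKS = ["PersistHostResourcesTask", "RegisterWithConfigGroupTask",
         "InstallHostTask", "StartHostTask"]

def run_task(host, task):
    print("%s: %s" % (host, task))  # stands in for the real DB-backed task

# One single-threaded executor per host preserves per-host ordering.
executors = {host: ThreadPoolExecutor(max_workers=1)
             for host in ["host1", "host2"]}
futures = [executors[host].submit(run_task, host, task)
           for host in executors for task in TASKS]
for future in futures:
    future.result()
for executor in executors.values():
    executor.shutdown()
{code}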



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-16218) Identify Starts (e.g. NN/Oozie) that are the longest and optimize work done during start

2016-09-08 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-16218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-16218:
---
Fix Version/s: (was: 2.4.1)
   2.4.2

> Identify Starts (e.g. NN/Oozie) that are the longest and optimize work done 
> during start
> 
>
> Key: AMBARI-16218
> URL: https://issues.apache.org/jira/browse/AMBARI-16218
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Reporter: Sebastian Toader
> Fix For: 2.4.2
>
>
> There are some components that take longer to start than the others. Identify 
> the long poles and optimize their start. Candidates are NameNode and Oozie.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-16219) Agent computes the DAG due to RCO and executes

2016-09-08 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-16219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-16219:
---
Fix Version/s: (was: 2.4.1)
   2.4.2

> Agent computes the DAG due to RCO and executes
> --
>
> Key: AMBARI-16219
> URL: https://issues.apache.org/jira/browse/AMBARI-16219
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Reporter: Sebastian Toader
> Fix For: 2.4.2
>
>
> Make agents independent of the server: the server sends all the details of 
> all the commands in one shot, and the agents compute the DAG and execute the 
> commands without any further coordination from the server. At least for the 
> initial start, this can reduce the time taken significantly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18314) Users page after LDAP sync shows blank

2016-09-08 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18314:
---
Fix Version/s: (was: 2.4.1)
   2.4.2

> Users page after LDAP sync shows blank
> --
>
> Key: AMBARI-18314
> URL: https://issues.apache.org/jira/browse/AMBARI-18314
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.1
>Reporter: Shreya Bhat
> Fix For: 2.4.2
>
>
> The network request returns a 500 Server error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-16221) AMS/LogSearch service should be outside the cluster and shared across clusters

2016-09-08 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-16221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-16221:
---
Fix Version/s: (was: 2.4.1)
   2.4.2

> AMS/LogSearch service should be outside the cluster and shared across clusters
> --
>
> Key: AMBARI-16221
> URL: https://issues.apache.org/jira/browse/AMBARI-16221
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Reporter: Sebastian Toader
> Fix For: 2.4.2
>
>
> AMS and LogSearch can be shared services used across clusters. 
> Cluster deployments can simply deploy the agents (AMS Monitor, Sinks, and Log 
> Feeders) that communicate with the shared services. In some ways, logs are 
> already being collected by log4j sinks and syslog handlers. Metrics should 
> be collected in the same way by a shared service and made accessible to 
> the user.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-16215) Ambari starts the right component when multi instances are deployed (e.g. AMS)

2016-09-08 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-16215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-16215:
---
Fix Version/s: (was: 2.4.1)
   2.4.2

> Ambari starts the right component when multi instances are deployed (e.g. AMS)
> --
>
> Key: AMBARI-16215
> URL: https://issues.apache.org/jira/browse/AMBARI-16215
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Reporter: Sebastian Toader
> Fix For: 2.4.2
>
>
> Currently, components that are HA through externally managed failover (the 
> AMS Collector and JHS/ATS) are deployed with the desired state INSTALLED. 
> Ambari can auto-start them on a specific host during deployment if it can be 
> guaranteed that the host is available for the duration of the initial 
> cluster deployment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-17285) Custom service repos in repoinfo.xml got overwritten by public VDFs

2016-09-08 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-17285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-17285:
---
Fix Version/s: (was: 2.4.1)
   2.4.2

> Custom service repos in repoinfo.xml got overwritten by public VDFs
> ---
>
> Key: AMBARI-17285
> URL: https://issues.apache.org/jira/browse/AMBARI-17285
> Project: Ambari
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Alexander Denissov
>Assignee: Nate Cole
>Priority: Critical
> Fix For: 2.4.2
>
>
> Ambari 2.4 introduced Version Definition Files that break the functionality 
> of adding a custom service repo, since custom services do not have an entry 
> in the public VDF.
> In the case of HAWQ, the plugin is installed on the Ambari host and adds the 
> new repo information to the repoinfo.xml of all available stacks on the file 
> system. Once the Ambari cluster creation wizard queries the latest repo info 
> from the public URLs, it will get the info for all stack repos, but not the 
> custom ones. 
> So, the logic should be:
> 1. Use the default repoinfo (from the file system) as the base
> 2. Query the public VDF, if available
> 3. For each entry in the public VDF, overwrite the values in the default repoinfo
> 4. Entries in the default repoinfo that do not have corresponding entries in 
> the VDF should stay intact
> This way custom services can be added via a file edit and the latest 
> information can still be retrieved and applied for the standard stack.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-16213) Initial Start All should be similar to a later Start All

2016-09-08 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-16213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-16213:
---
Fix Version/s: (was: 2.4.1)
   2.4.2

> Initial Start All should be similar to a later Start All
> 
>
> Key: AMBARI-16213
> URL: https://issues.apache.org/jira/browse/AMBARI-16213
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Reporter: Sebastian Toader
> Fix For: 2.4.2
>
>
> Starting a cluster in a pre-provisioned environment should need minimal setup 
> activities, so the first start should not be different from any start from a 
> full stop. In practice, however, it is. Typical differences are:
> * Files and folders do not exist and are created (config files, log folders, 
> etc.)
> * Several services perform initialization - AMS/HDFS/OOZIE
> Investigate which operations from the initial Start-All can be moved to a 
> sys-provisioning step and what can be optimized in general.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18340) Kafka acls setup is failing as part of atlas start

2016-09-08 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18340:
---
Fix Version/s: (was: 2.4.1)
   2.4.2

> Kafka acls setup is failing as part of atlas start
> --
>
> Key: AMBARI-18340
> URL: https://issues.apache.org/jira/browse/AMBARI-18340
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-agent
>Affects Versions: 2.4.1
>Reporter: Ayub Khan
>Assignee: Ayub Khan
>Priority: Critical
> Fix For: 2.4.2
>
> Attachments: AMBARI-18340.patch
>
>
> As part of AMBARI-18321, kafka-acls setup was added, and it is failing.
> {noformat}
> 2016-09-08 06:16:23,280 - Execute['kinit -kt 
> /etc//keytabs/hbase.headless.keytab hb...@example.com; cat 
> /var/lib/ambari-agent/tmp/atlas_hbase_setup.rb | hbase shell -n'] {'tries': 
> 5, 'user': 'hbase', 'try_sleep': 10}
> 2016-09-08 06:16:31,867 - Execute['kinit -kt 
> /etc//keytabs/kafka.service.keytab 
> kafka/atlas-secure-no-ranger-2.openstacklo...@example.com; bash 
> /var/lib/ambari-agent/tmp/atlas_kafka_acl.sh'] {'tries': 5, 'user': 'kafka', 
> 'try_sleep': 10}
> 2016-09-08 06:16:32,016 - Retrying after 10 seconds. Reason: Execution of 
> 'kinit -kt /etc//keytabs/kafka.service.keytab 
> kafka/atlas-secure-no-ranger-2.openstacklo...@example.com; bash 
> /var/lib/ambari-agent/tmp/atlas_kafka_acl.sh' returned 127. bash: 
> /var/lib/ambari-agent/tmp/atlas_kafka_acl.sh: No such file or directory
> 2016-09-08 06:16:42,219 - Retrying after 10 seconds. Reason: Execution of 
> 'kinit -kt /etc//keytabs/kafka.service.keytab 
> kafka/atlas-secure-no-ranger-2.openstacklo...@example.com; bash 
> /var/lib/ambari-agent/tmp/atlas_kafka_acl.sh' returned 127. bash: 
> /var/lib/ambari-agent/tmp/atlas_kafka_acl.sh: No such file or directory
> 2016-09-08 06:16:52,429 - Retrying after 10 seconds. Reason: Execution of 
> 'kinit -kt /etc//keytabs/kafka.service.keytab 
> kafka/atlas-secure-no-ranger-2.openstacklo...@example.com; bash 
> /var/lib/ambari-agent/tmp/atlas_kafka_acl.sh' returned 127. bash: 
> /var/lib/ambari-agent/tmp/atlas_kafka_acl.sh: No such file or directory
> 2016-09-08 06:17:02,642 - Retrying after 10 seconds. Reason: Execution of 
> 'kinit -kt /etc//keytabs/kafka.service.keytab 
> kafka/atlas-secure-no-ranger-2.openstacklo...@example.com; bash 
> /var/lib/ambari-agent/tmp/atlas_kafka_acl.sh' returned 127. bash: 
> /var/lib/ambari-agent/tmp/atlas_kafka_acl.sh: No such file or directory
> 2016-09-08 06:17:12,839 - Execute['source /etc/atlas/conf/atlas-env.sh ; 
> /usr/hdp/current/atlas-server/bin/atlas_start.py'] {'not_if': 'ls 
> /var/run/atlas/atlas.pid >/dev/null 2>&1 && ps -p `cat 
> /var/run/atlas/atlas.pid` >/dev/null 2>&1', 'user': 'atlas'}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18282) By default 2.3.ECS stack should be disabled for HDP

2016-08-30 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15450473#comment-15450473
 ] 

Jayush Luniya commented on AMBARI-18282:


+1

> By default 2.3.ECS stack should be disabled for HDP
> ---
>
> Key: AMBARI-18282
> URL: https://issues.apache.org/jira/browse/AMBARI-18282
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: 2.4.1
>Reporter: Sumit Mohanty
>Assignee: Sumit Mohanty
> Fix For: trunk, 2.4.1, 2.5.0
>
> Attachments: AMBARI-18282.patch
>
>
> By default 2.3.ECS stack should be disabled for HDP



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18252) Storm service check fails after disabling kerberos

2016-08-26 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18252:
---
Fix Version/s: (was: 2.4.0)
   trunk

> Storm service check fails after disabling kerberos
> --
>
> Key: AMBARI-18252
> URL: https://issues.apache.org/jira/browse/AMBARI-18252
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Shreya Bhat
>Priority: Blocker
>  Labels: regression, system_test
> Fix For: trunk
>
>
> Storm service check fails after disabling kerberos with the stderr :
> {code}
> fail(err_msg)\nresource_management.core.exceptions.Fail: Execution of 'storm 
> jar /tmp/wordCount.jar storm.starter.WordCountTopology 
> WordCountid16acec54_date272316' returned 1. 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (AMBARI-18239) oozie.py is reading invalid 'version' attribute which results in not copying required atlas hook jars

2016-08-23 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya resolved AMBARI-18239.

Resolution: Fixed

> oozie.py is reading invalid 'version' attribute which results in not copying 
> required atlas hook jars
> -
>
> Key: AMBARI-18239
> URL: https://issues.apache.org/jira/browse/AMBARI-18239
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk, 2.4.0
>Reporter: Ayub Khan
>Assignee: Jayush Luniya
>Priority: Critical
> Fix For: trunk
>
> Attachments: AMBARI-18239.patch, AMBARI-18239.trunk.patch
>
>
> *Oozie server start output from ambari-agent shows this error - 
> "2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this 
> Oozie server doesn't contain directory /usr/hdp/None/atlas/hook/hive/"*
> {noformat}
> 2016-08-23 07:21:53,147 - call returned (0, '')
> 2016-08-23 07:21:53,148 - 
> Execute['/usr/hdp/current/oozie-server/bin/oozie-setup.sh sharelib create -fs 
> hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020 -locallib 
> /usr/hdp/current/oozie-server/share'] {'path': 
> [u'/usr/hdp/current/oozie-server/bin:/usr/hdp/current/hadoop-client/bin'], 
> 'user': 'oozie'}
> 2016-08-23 07:23:33,091 - HdfsResource['/user/oozie/share'] 
> {'security_enabled': True, 'hadoop_bin_dir': 
> '/usr/hdp/current/hadoop-client/bin', 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
> 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 'user': 'hdfs', 
> 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
> 'recursive_chmod': True, 'action': ['create_on_execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 
> 'immutable_paths': [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', 
> u'/mr-history/done', u'/apps/falcon'], 'mode': 0755}
> 2016-08-23 07:23:33,093 - Execute['/usr/bin/kinit -kt 
> /etc/security/keytabs/hdfs.headless.keytab h...@example.com'] {'user': 'hdfs'}
> 2016-08-23 07:23:33,170 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 
> 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET --negotiate -u : 
> '"'"'http://nat-r7-pcds-falcon-multi-9.openstacklocal:20070/webhdfs/v1/user/oozie/share?op=GETFILESTATUS=hdfs'"'"'
>  1>/tmp/tmp2xvm99 2>/tmp/tmpwdKIRi''] {'logoutput': None, 'quiet': False}
> 2016-08-23 07:23:33,259 - call returned (0, '')
> 2016-08-23 07:23:33,261 - HdfsResource[None] {'security_enabled': True, 
> 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
> 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 
> 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
> 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'immutable_paths': 
> [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', u'/mr-history/done', 
> u'/apps/falcon']}
> 2016-08-23 07:23:33,261 - Execute['cd /var/tmp/oozie && 
> /usr/hdp/current/oozie-server/bin/oozie-start.sh'] {'environment': 
> {'OOZIE_CONFIG': u'/usr/hdp/current/oozie-server/conf'}, 'not_if': 
> "ambari-sudo.sh su oozie -l -s /bin/bash -c 'ls /var/run/oozie/oozie.pid 
> >/dev/null 2>&1 && ps -p `cat /var/run/oozie/oozie.pid` >/dev/null 2>&1'", 
> 'user': 'oozie'}
> 2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this Oozie 
> server doesn't contain directory /usr/hdp/None/atlas/hook/hive/
> {noformat}
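
The {{/usr/hdp/None/...}} path is the tell-tale of an unset version being interpolated into the hook directory. A hedged illustration of the mechanism only (not the actual oozie.py code, and not the fix, which reads the correct 'version' attribute):

{code}
# When the stack 'version' value is missing it is None, and Python string
# formatting turns it into the literal "None" path segment seen in the log.
def atlas_hook_dir(version):
    return "/usr/hdp/%s/atlas/hook/hive/" % version

print(atlas_hook_dir(None))            # /usr/hdp/None/atlas/hook/hive/
print(atlas_hook_dir("2.5.0.0-1245"))  # hypothetical correctly-resolved version
{code}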



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18239) oozie.py is reading invalid 'version' attribute which results in not copying required atlas hook jars

2016-08-23 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15433926#comment-15433926
 ] 

Jayush Luniya commented on AMBARI-18239:


Trunk
commit 6b969905215f32ef333ec9d7b01f43af9c55ba47
Author: Jayush Luniya 
Date:   Tue Aug 23 17:23:38 2016 -0700

AMBARI-18239: oozie.py is reading invalid 'version' attribute which results 
in not copying required atlas hook jars (jluniya)

> oozie.py is reading invalid 'version' attribute which results in not copying 
> required atlas hook jars
> -
>
> Key: AMBARI-18239
> URL: https://issues.apache.org/jira/browse/AMBARI-18239
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk, 2.4.0
>Reporter: Ayub Khan
>Assignee: Jayush Luniya
>Priority: Critical
> Fix For: trunk
>
> Attachments: AMBARI-18239.patch, AMBARI-18239.trunk.patch
>
>
> *Oozie server start output from ambari-agent shows this error - 
> "2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this 
> Oozie server doesn't contain directory /usr/hdp/None/atlas/hook/hive/"*
> {noformat}
> 2016-08-23 07:21:53,147 - call returned (0, '')
> 2016-08-23 07:21:53,148 - 
> Execute['/usr/hdp/current/oozie-server/bin/oozie-setup.sh sharelib create -fs 
> hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020 -locallib 
> /usr/hdp/current/oozie-server/share'] {'path': 
> [u'/usr/hdp/current/oozie-server/bin:/usr/hdp/current/hadoop-client/bin'], 
> 'user': 'oozie'}
> 2016-08-23 07:23:33,091 - HdfsResource['/user/oozie/share'] 
> {'security_enabled': True, 'hadoop_bin_dir': 
> '/usr/hdp/current/hadoop-client/bin', 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
> 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 'user': 'hdfs', 
> 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
> 'recursive_chmod': True, 'action': ['create_on_execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 
> 'immutable_paths': [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', 
> u'/mr-history/done', u'/apps/falcon'], 'mode': 0755}
> 2016-08-23 07:23:33,093 - Execute['/usr/bin/kinit -kt 
> /etc/security/keytabs/hdfs.headless.keytab h...@example.com'] {'user': 'hdfs'}
> 2016-08-23 07:23:33,170 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 
> 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET --negotiate -u : 
> '"'"'http://nat-r7-pcds-falcon-multi-9.openstacklocal:20070/webhdfs/v1/user/oozie/share?op=GETFILESTATUS=hdfs'"'"'
>  1>/tmp/tmp2xvm99 2>/tmp/tmpwdKIRi''] {'logoutput': None, 'quiet': False}
> 2016-08-23 07:23:33,259 - call returned (0, '')
> 2016-08-23 07:23:33,261 - HdfsResource[None] {'security_enabled': True, 
> 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
> 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 
> 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
> 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'immutable_paths': 
> [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', u'/mr-history/done', 
> u'/apps/falcon']}
> 2016-08-23 07:23:33,261 - Execute['cd /var/tmp/oozie && 
> /usr/hdp/current/oozie-server/bin/oozie-start.sh'] {'environment': 
> {'OOZIE_CONFIG': u'/usr/hdp/current/oozie-server/conf'}, 'not_if': 
> "ambari-sudo.sh su oozie -l -s /bin/bash -c 'ls /var/run/oozie/oozie.pid 
> >/dev/null 2>&1 && ps -p `cat /var/run/oozie/oozie.pid` >/dev/null 2>&1'", 
> 'user': 'oozie'}
> 2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this Oozie 
> server doesn't contain directory /usr/hdp/None/atlas/hook/hive/
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18239) oozie.py is reading invalid 'version' attribute which results in not copying required atlas hook jars

2016-08-23 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15433900#comment-15433900
 ] 

Jayush Luniya commented on AMBARI-18239:


The previous patch wouldn't address the issue. New patch attached.
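
For context, a rough sketch of the kind of guard the fix needs (this is not the
attached patch; the parameter names below are illustrative, not actual oozie.py
identifiers): resolve the stack version defensively so the Atlas hook directory
can never be formatted as /usr/hdp/None/atlas/hook/hive/.

{code}
# Hedged sketch only -- not the committed patch. 'command_params' and
# 'installed_stack_version' are hypothetical stand-ins for whatever oozie.py
# actually reads; the point is to avoid formatting a literal None into the path.
import os

def atlas_hive_hook_dir(command_params, installed_stack_version):
    # 'version' may be absent (or the string 'None') when the command was not
    # generated by an upgrade; fall back to the installed stack version.
    version = command_params.get('version')
    if not version or version == 'None':
        version = installed_stack_version
    return '/usr/hdp/{0}/atlas/hook/hive/'.format(version)

if __name__ == '__main__':
    params = {'version': None}  # the condition that triggered the bug
    hook_dir = atlas_hive_hook_dir(params, '2.5.0.0-1234')
    print(hook_dir, 'exists:', os.path.isdir(hook_dir))
{code}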

> oozie.py is reading invalid 'version' attribute which results in not copying 
> required atlas hook jars
> -
>
> Key: AMBARI-18239
> URL: https://issues.apache.org/jira/browse/AMBARI-18239
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk, 2.4.0
>Reporter: Ayub Khan
>Assignee: Jayush Luniya
>Priority: Critical
> Fix For: trunk
>
> Attachments: AMBARI-18239.patch, AMBARI-18239.trunk.patch
>
>
> *Oozie server start output from ambari-agent shows this error - 
> "2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this 
> Oozie server doesn't contain directory /usr/hdp/None/atlas/hook/hive/"*
> {noformat}
> 2016-08-23 07:21:53,147 - call returned (0, '')
> 2016-08-23 07:21:53,148 - 
> Execute['/usr/hdp/current/oozie-server/bin/oozie-setup.sh sharelib create -fs 
> hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020 -locallib 
> /usr/hdp/current/oozie-server/share'] {'path': 
> [u'/usr/hdp/current/oozie-server/bin:/usr/hdp/current/hadoop-client/bin'], 
> 'user': 'oozie'}
> 2016-08-23 07:23:33,091 - HdfsResource['/user/oozie/share'] 
> {'security_enabled': True, 'hadoop_bin_dir': 
> '/usr/hdp/current/hadoop-client/bin', 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
> 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 'user': 'hdfs', 
> 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
> 'recursive_chmod': True, 'action': ['create_on_execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 
> 'immutable_paths': [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', 
> u'/mr-history/done', u'/apps/falcon'], 'mode': 0755}
> 2016-08-23 07:23:33,093 - Execute['/usr/bin/kinit -kt 
> /etc/security/keytabs/hdfs.headless.keytab h...@example.com'] {'user': 'hdfs'}
> 2016-08-23 07:23:33,170 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 
> 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET --negotiate -u : 
> '"'"'http://nat-r7-pcds-falcon-multi-9.openstacklocal:20070/webhdfs/v1/user/oozie/share?op=GETFILESTATUS=hdfs'"'"'
>  1>/tmp/tmp2xvm99 2>/tmp/tmpwdKIRi''] {'logoutput': None, 'quiet': False}
> 2016-08-23 07:23:33,259 - call returned (0, '')
> 2016-08-23 07:23:33,261 - HdfsResource[None] {'security_enabled': True, 
> 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
> 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 
> 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
> 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'immutable_paths': 
> [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', u'/mr-history/done', 
> u'/apps/falcon']}
> 2016-08-23 07:23:33,261 - Execute['cd /var/tmp/oozie && 
> /usr/hdp/current/oozie-server/bin/oozie-start.sh'] {'environment': 
> {'OOZIE_CONFIG': u'/usr/hdp/current/oozie-server/conf'}, 'not_if': 
> "ambari-sudo.sh su oozie -l -s /bin/bash -c 'ls /var/run/oozie/oozie.pid 
> >/dev/null 2>&1 && ps -p `cat /var/run/oozie/oozie.pid` >/dev/null 2>&1'", 
> 'user': 'oozie'}
> 2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this Oozie 
> server doesn't contain directory /usr/hdp/None/atlas/hook/hive/
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18239) oozie.py is reading invalid 'version' attribute which results in not copying required atlas hook jars

2016-08-23 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18239:
---
Attachment: (was: AMBARI-18239.patch)

> oozie.py is reading invalid 'version' attribute which results in not copying 
> required atlas hook jars
> -
>
> Key: AMBARI-18239
> URL: https://issues.apache.org/jira/browse/AMBARI-18239
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk, 2.4.0
>Reporter: Ayub Khan
>Assignee: Jayush Luniya
>Priority: Critical
> Fix For: trunk
>
> Attachments: AMBARI-18239.patch, AMBARI-18239.trunk.patch
>
>
> *Oozie server start output from ambari-agent shows this error - 
> "2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this 
> Oozie server doesn't contain directory /usr/hdp/None/atlas/hook/hive/"*
> {noformat}
> 2016-08-23 07:21:53,147 - call returned (0, '')
> 2016-08-23 07:21:53,148 - 
> Execute['/usr/hdp/current/oozie-server/bin/oozie-setup.sh sharelib create -fs 
> hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020 -locallib 
> /usr/hdp/current/oozie-server/share'] {'path': 
> [u'/usr/hdp/current/oozie-server/bin:/usr/hdp/current/hadoop-client/bin'], 
> 'user': 'oozie'}
> 2016-08-23 07:23:33,091 - HdfsResource['/user/oozie/share'] 
> {'security_enabled': True, 'hadoop_bin_dir': 
> '/usr/hdp/current/hadoop-client/bin', 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
> 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 'user': 'hdfs', 
> 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
> 'recursive_chmod': True, 'action': ['create_on_execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 
> 'immutable_paths': [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', 
> u'/mr-history/done', u'/apps/falcon'], 'mode': 0755}
> 2016-08-23 07:23:33,093 - Execute['/usr/bin/kinit -kt 
> /etc/security/keytabs/hdfs.headless.keytab h...@example.com'] {'user': 'hdfs'}
> 2016-08-23 07:23:33,170 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 
> 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET --negotiate -u : 
> '"'"'http://nat-r7-pcds-falcon-multi-9.openstacklocal:20070/webhdfs/v1/user/oozie/share?op=GETFILESTATUS=hdfs'"'"'
>  1>/tmp/tmp2xvm99 2>/tmp/tmpwdKIRi''] {'logoutput': None, 'quiet': False}
> 2016-08-23 07:23:33,259 - call returned (0, '')
> 2016-08-23 07:23:33,261 - HdfsResource[None] {'security_enabled': True, 
> 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
> 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 
> 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
> 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'immutable_paths': 
> [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', u'/mr-history/done', 
> u'/apps/falcon']}
> 2016-08-23 07:23:33,261 - Execute['cd /var/tmp/oozie && 
> /usr/hdp/current/oozie-server/bin/oozie-start.sh'] {'environment': 
> {'OOZIE_CONFIG': u'/usr/hdp/current/oozie-server/conf'}, 'not_if': 
> "ambari-sudo.sh su oozie -l -s /bin/bash -c 'ls /var/run/oozie/oozie.pid 
> >/dev/null 2>&1 && ps -p `cat /var/run/oozie/oozie.pid` >/dev/null 2>&1'", 
> 'user': 'oozie'}
> 2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this Oozie 
> server doesn't contain directory /usr/hdp/None/atlas/hook/hive/
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18239) oozie.py is reading invalid 'version' attribute which results in not copying required atlas hook jars

2016-08-23 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18239:
---
Attachment: AMBARI-18239.trunk.patch

> oozie.py is reading invalid 'version' attribute which results in not copying 
> required atlas hook jars
> -
>
> Key: AMBARI-18239
> URL: https://issues.apache.org/jira/browse/AMBARI-18239
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk, 2.4.0
>Reporter: Ayub Khan
>Assignee: Jayush Luniya
>Priority: Critical
> Fix For: trunk
>
> Attachments: AMBARI-18239.patch, AMBARI-18239.trunk.patch
>
>
> *Oozie server start output from ambari-agent shows this error - 
> "2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this 
> Oozie server doesn't contain directory /usr/hdp/None/atlas/hook/hive/"*
> {noformat}
> 2016-08-23 07:21:53,147 - call returned (0, '')
> 2016-08-23 07:21:53,148 - 
> Execute['/usr/hdp/current/oozie-server/bin/oozie-setup.sh sharelib create -fs 
> hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020 -locallib 
> /usr/hdp/current/oozie-server/share'] {'path': 
> [u'/usr/hdp/current/oozie-server/bin:/usr/hdp/current/hadoop-client/bin'], 
> 'user': 'oozie'}
> 2016-08-23 07:23:33,091 - HdfsResource['/user/oozie/share'] 
> {'security_enabled': True, 'hadoop_bin_dir': 
> '/usr/hdp/current/hadoop-client/bin', 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
> 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 'user': 'hdfs', 
> 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
> 'recursive_chmod': True, 'action': ['create_on_execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 
> 'immutable_paths': [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', 
> u'/mr-history/done', u'/apps/falcon'], 'mode': 0755}
> 2016-08-23 07:23:33,093 - Execute['/usr/bin/kinit -kt 
> /etc/security/keytabs/hdfs.headless.keytab h...@example.com'] {'user': 'hdfs'}
> 2016-08-23 07:23:33,170 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 
> 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET --negotiate -u : 
> '"'"'http://nat-r7-pcds-falcon-multi-9.openstacklocal:20070/webhdfs/v1/user/oozie/share?op=GETFILESTATUS=hdfs'"'"'
>  1>/tmp/tmp2xvm99 2>/tmp/tmpwdKIRi''] {'logoutput': None, 'quiet': False}
> 2016-08-23 07:23:33,259 - call returned (0, '')
> 2016-08-23 07:23:33,261 - HdfsResource[None] {'security_enabled': True, 
> 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
> 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 
> 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
> 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'immutable_paths': 
> [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', u'/mr-history/done', 
> u'/apps/falcon']}
> 2016-08-23 07:23:33,261 - Execute['cd /var/tmp/oozie && 
> /usr/hdp/current/oozie-server/bin/oozie-start.sh'] {'environment': 
> {'OOZIE_CONFIG': u'/usr/hdp/current/oozie-server/conf'}, 'not_if': 
> "ambari-sudo.sh su oozie -l -s /bin/bash -c 'ls /var/run/oozie/oozie.pid 
> >/dev/null 2>&1 && ps -p `cat /var/run/oozie/oozie.pid` >/dev/null 2>&1'", 
> 'user': 'oozie'}
> 2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this Oozie 
> server doesn't contain directory /usr/hdp/None/atlas/hook/hive/
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18239) oozie.py is reading invalid 'version' attribute which results in not copying required atlas hook jars

2016-08-23 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18239:
---
Attachment: AMBARI-18239.patch

> oozie.py is reading invalid 'version' attribute which results in not copying 
> required atlas hook jars
> -
>
> Key: AMBARI-18239
> URL: https://issues.apache.org/jira/browse/AMBARI-18239
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk, 2.4.0
>Reporter: Ayub Khan
>Assignee: Jayush Luniya
>Priority: Critical
> Fix For: trunk
>
> Attachments: AMBARI-18239.patch, AMBARI-18239.patch
>
>
> *Oozie server start output from ambari-agent shows this error - 
> "2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this 
> Oozie server doesn't contain directory /usr/hdp/None/atlas/hook/hive/"*
> {noformat}
> 2016-08-23 07:21:53,147 - call returned (0, '')
> 2016-08-23 07:21:53,148 - 
> Execute['/usr/hdp/current/oozie-server/bin/oozie-setup.sh sharelib create -fs 
> hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020 -locallib 
> /usr/hdp/current/oozie-server/share'] {'path': 
> [u'/usr/hdp/current/oozie-server/bin:/usr/hdp/current/hadoop-client/bin'], 
> 'user': 'oozie'}
> 2016-08-23 07:23:33,091 - HdfsResource['/user/oozie/share'] 
> {'security_enabled': True, 'hadoop_bin_dir': 
> '/usr/hdp/current/hadoop-client/bin', 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
> 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 'user': 'hdfs', 
> 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
> 'recursive_chmod': True, 'action': ['create_on_execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 
> 'immutable_paths': [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', 
> u'/mr-history/done', u'/apps/falcon'], 'mode': 0755}
> 2016-08-23 07:23:33,093 - Execute['/usr/bin/kinit -kt 
> /etc/security/keytabs/hdfs.headless.keytab h...@example.com'] {'user': 'hdfs'}
> 2016-08-23 07:23:33,170 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 
> 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET --negotiate -u : 
> '"'"'http://nat-r7-pcds-falcon-multi-9.openstacklocal:20070/webhdfs/v1/user/oozie/share?op=GETFILESTATUS=hdfs'"'"'
>  1>/tmp/tmp2xvm99 2>/tmp/tmpwdKIRi''] {'logoutput': None, 'quiet': False}
> 2016-08-23 07:23:33,259 - call returned (0, '')
> 2016-08-23 07:23:33,261 - HdfsResource[None] {'security_enabled': True, 
> 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
> 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 
> 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
> 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'immutable_paths': 
> [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', u'/mr-history/done', 
> u'/apps/falcon']}
> 2016-08-23 07:23:33,261 - Execute['cd /var/tmp/oozie && 
> /usr/hdp/current/oozie-server/bin/oozie-start.sh'] {'environment': 
> {'OOZIE_CONFIG': u'/usr/hdp/current/oozie-server/conf'}, 'not_if': 
> "ambari-sudo.sh su oozie -l -s /bin/bash -c 'ls /var/run/oozie/oozie.pid 
> >/dev/null 2>&1 && ps -p `cat /var/run/oozie/oozie.pid` >/dev/null 2>&1'", 
> 'user': 'oozie'}
> 2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this Oozie 
> server doesn't contain directory /usr/hdp/None/atlas/hook/hive/
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18239) oozie.py is reading invalid 'version' attribute which results in not copying required atlas hook jars

2016-08-23 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18239:
---
Priority: Critical  (was: Major)

> oozie.py is reading invalid 'version' attribute which results in not copying 
> required atlas hook jars
> -
>
> Key: AMBARI-18239
> URL: https://issues.apache.org/jira/browse/AMBARI-18239
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk, 2.4.0
>Reporter: Ayub Khan
>Assignee: Jayush Luniya
>Priority: Critical
> Fix For: trunk
>
> Attachments: AMBARI-18239.patch
>
>
> *Oozie server start output from ambari-agent shows this error - 
> "2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this 
> Oozie server doesn't contain directory /usr/hdp/None/atlas/hook/hive/"*
> {noformat}
> 2016-08-23 07:21:53,147 - call returned (0, '')
> 2016-08-23 07:21:53,148 - 
> Execute['/usr/hdp/current/oozie-server/bin/oozie-setup.sh sharelib create -fs 
> hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020 -locallib 
> /usr/hdp/current/oozie-server/share'] {'path': 
> [u'/usr/hdp/current/oozie-server/bin:/usr/hdp/current/hadoop-client/bin'], 
> 'user': 'oozie'}
> 2016-08-23 07:23:33,091 - HdfsResource['/user/oozie/share'] 
> {'security_enabled': True, 'hadoop_bin_dir': 
> '/usr/hdp/current/hadoop-client/bin', 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
> 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 'user': 'hdfs', 
> 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
> 'recursive_chmod': True, 'action': ['create_on_execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 
> 'immutable_paths': [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', 
> u'/mr-history/done', u'/apps/falcon'], 'mode': 0755}
> 2016-08-23 07:23:33,093 - Execute['/usr/bin/kinit -kt 
> /etc/security/keytabs/hdfs.headless.keytab h...@example.com'] {'user': 'hdfs'}
> 2016-08-23 07:23:33,170 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 
> 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET --negotiate -u : 
> '"'"'http://nat-r7-pcds-falcon-multi-9.openstacklocal:20070/webhdfs/v1/user/oozie/share?op=GETFILESTATUS=hdfs'"'"'
>  1>/tmp/tmp2xvm99 2>/tmp/tmpwdKIRi''] {'logoutput': None, 'quiet': False}
> 2016-08-23 07:23:33,259 - call returned (0, '')
> 2016-08-23 07:23:33,261 - HdfsResource[None] {'security_enabled': True, 
> 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
> 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 
> 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
> 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'immutable_paths': 
> [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', u'/mr-history/done', 
> u'/apps/falcon']}
> 2016-08-23 07:23:33,261 - Execute['cd /var/tmp/oozie && 
> /usr/hdp/current/oozie-server/bin/oozie-start.sh'] {'environment': 
> {'OOZIE_CONFIG': u'/usr/hdp/current/oozie-server/conf'}, 'not_if': 
> "ambari-sudo.sh su oozie -l -s /bin/bash -c 'ls /var/run/oozie/oozie.pid 
> >/dev/null 2>&1 && ps -p `cat /var/run/oozie/oozie.pid` >/dev/null 2>&1'", 
> 'user': 'oozie'}
> 2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this Oozie 
> server doesn't contain directory /usr/hdp/None/atlas/hook/hive/
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (AMBARI-18239) oozie.py is reading invalid 'version' attribute which results in not copying required atlas hook jars

2016-08-23 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya reassigned AMBARI-18239:
--

Assignee: Jayush Luniya  (was: Ayub Khan)

> oozie.py is reading invalid 'version' attribute which results in not copying 
> required atlas hook jars
> -
>
> Key: AMBARI-18239
> URL: https://issues.apache.org/jira/browse/AMBARI-18239
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk, 2.4.0
>Reporter: Ayub Khan
>Assignee: Jayush Luniya
> Fix For: trunk
>
> Attachments: AMBARI-18239.patch
>
>
> *Oozie server start output from ambari-agent shows this error - 
> "2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this 
> Oozie server doesn't contain directory /usr/hdp/None/atlas/hook/hive/"*
> {noformat}
> 2016-08-23 07:21:53,147 - call returned (0, '')
> 2016-08-23 07:21:53,148 - 
> Execute['/usr/hdp/current/oozie-server/bin/oozie-setup.sh sharelib create -fs 
> hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020 -locallib 
> /usr/hdp/current/oozie-server/share'] {'path': 
> [u'/usr/hdp/current/oozie-server/bin:/usr/hdp/current/hadoop-client/bin'], 
> 'user': 'oozie'}
> 2016-08-23 07:23:33,091 - HdfsResource['/user/oozie/share'] 
> {'security_enabled': True, 'hadoop_bin_dir': 
> '/usr/hdp/current/hadoop-client/bin', 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
> 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 'user': 'hdfs', 
> 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
> 'recursive_chmod': True, 'action': ['create_on_execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 
> 'immutable_paths': [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', 
> u'/mr-history/done', u'/apps/falcon'], 'mode': 0755}
> 2016-08-23 07:23:33,093 - Execute['/usr/bin/kinit -kt 
> /etc/security/keytabs/hdfs.headless.keytab h...@example.com'] {'user': 'hdfs'}
> 2016-08-23 07:23:33,170 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 
> 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET --negotiate -u : 
> '"'"'http://nat-r7-pcds-falcon-multi-9.openstacklocal:20070/webhdfs/v1/user/oozie/share?op=GETFILESTATUS=hdfs'"'"'
>  1>/tmp/tmp2xvm99 2>/tmp/tmpwdKIRi''] {'logoutput': None, 'quiet': False}
> 2016-08-23 07:23:33,259 - call returned (0, '')
> 2016-08-23 07:23:33,261 - HdfsResource[None] {'security_enabled': True, 
> 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
> 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 
> 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
> 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'immutable_paths': 
> [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', u'/mr-history/done', 
> u'/apps/falcon']}
> 2016-08-23 07:23:33,261 - Execute['cd /var/tmp/oozie && 
> /usr/hdp/current/oozie-server/bin/oozie-start.sh'] {'environment': 
> {'OOZIE_CONFIG': u'/usr/hdp/current/oozie-server/conf'}, 'not_if': 
> "ambari-sudo.sh su oozie -l -s /bin/bash -c 'ls /var/run/oozie/oozie.pid 
> >/dev/null 2>&1 && ps -p `cat /var/run/oozie/oozie.pid` >/dev/null 2>&1'", 
> 'user': 'oozie'}
> 2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this Oozie 
> server doesn't contain directory /usr/hdp/None/atlas/hook/hive/
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18239) oozie.py is reading invalid 'version' attribute which results in not copying required atlas hook jars

2016-08-23 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18239:
---
Status: Open  (was: Patch Available)

> oozie.py is reading invalid 'version' attribute which results in not copying 
> required atlas hook jars
> -
>
> Key: AMBARI-18239
> URL: https://issues.apache.org/jira/browse/AMBARI-18239
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk, 2.4.0
>Reporter: Ayub Khan
>Assignee: Jayush Luniya
> Fix For: trunk
>
> Attachments: AMBARI-18239.patch
>
>
> *Oozie server start output from ambari-agent shows this error - 
> "2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this 
> Oozie server doesn't contain directory /usr/hdp/None/atlas/hook/hive/"*
> {noformat}
> 2016-08-23 07:21:53,147 - call returned (0, '')
> 2016-08-23 07:21:53,148 - 
> Execute['/usr/hdp/current/oozie-server/bin/oozie-setup.sh sharelib create -fs 
> hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020 -locallib 
> /usr/hdp/current/oozie-server/share'] {'path': 
> [u'/usr/hdp/current/oozie-server/bin:/usr/hdp/current/hadoop-client/bin'], 
> 'user': 'oozie'}
> 2016-08-23 07:23:33,091 - HdfsResource['/user/oozie/share'] 
> {'security_enabled': True, 'hadoop_bin_dir': 
> '/usr/hdp/current/hadoop-client/bin', 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
> 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 'user': 'hdfs', 
> 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
> 'recursive_chmod': True, 'action': ['create_on_execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 
> 'immutable_paths': [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', 
> u'/mr-history/done', u'/apps/falcon'], 'mode': 0755}
> 2016-08-23 07:23:33,093 - Execute['/usr/bin/kinit -kt 
> /etc/security/keytabs/hdfs.headless.keytab h...@example.com'] {'user': 'hdfs'}
> 2016-08-23 07:23:33,170 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 
> 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET --negotiate -u : 
> '"'"'http://nat-r7-pcds-falcon-multi-9.openstacklocal:20070/webhdfs/v1/user/oozie/share?op=GETFILESTATUS=hdfs'"'"'
>  1>/tmp/tmp2xvm99 2>/tmp/tmpwdKIRi''] {'logoutput': None, 'quiet': False}
> 2016-08-23 07:23:33,259 - call returned (0, '')
> 2016-08-23 07:23:33,261 - HdfsResource[None] {'security_enabled': True, 
> 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
> 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 
> 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
> 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'immutable_paths': 
> [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', u'/mr-history/done', 
> u'/apps/falcon']}
> 2016-08-23 07:23:33,261 - Execute['cd /var/tmp/oozie && 
> /usr/hdp/current/oozie-server/bin/oozie-start.sh'] {'environment': 
> {'OOZIE_CONFIG': u'/usr/hdp/current/oozie-server/conf'}, 'not_if': 
> "ambari-sudo.sh su oozie -l -s /bin/bash -c 'ls /var/run/oozie/oozie.pid 
> >/dev/null 2>&1 && ps -p `cat /var/run/oozie/oozie.pid` >/dev/null 2>&1'", 
> 'user': 'oozie'}
> 2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this Oozie 
> server doesn't contain directory /usr/hdp/None/atlas/hook/hive/
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18244) Add Service for Atlas did not call conf-select, so failed to find /etc/atlas/conf/users-credentials.properties

2016-08-23 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15433890#comment-15433890
 ] 

Jayush Luniya commented on AMBARI-18244:


+1

> Add Service for Atlas did not call conf-select, so failed to find 
> /etc/atlas/conf/users-credentials.properties 
> ---
>
> Key: AMBARI-18244
> URL: https://issues.apache.org/jira/browse/AMBARI-18244
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: 2.4.0
>Reporter: Alejandro Fernandez
>Assignee: Alejandro Fernandez
> Fix For: trunk
>
> Attachments: AMBARI-18244.patch
>
>
> STR:
> * Install Ambari 2.4.0.0 with HDP 2.5.0.0 and basic services except Atlas
> * Add Atlas service
> On the Atlas server host, the file 
> /etc/atlas/conf/users-credentials.properties is missing. This is because 
> conf-select was not called after the service was added, as it did not yet 
> contain a mapping for Atlas.
> Right now,
> {noformat}
> ls -la /etc/atlas/conf/  (this is a dir)
> -rw-r--r-- 1 root  root207 Aug 22 14:57 users-credentials.properties
> ls -la /usr/hdp/current/atlas-client
> lrwxrwxrwx 1 root root 27 Aug 23 23:24 /usr/hdp/current/atlas-client -> 
> /usr/hdp/2.5.0.0-1237/atlas
> # This is incorrect
> ls -la /usr/hdp/2.5.0.0-1237/atlas/conf 
> lrwxrwxrwx 1 root root 15 Aug 23 23:24 /usr/hdp/2.5.0.0-1237/atlas/conf -> 
> /etc/atlas/conf
> {noformat}
> To fix this, we need to have /etc/atlas/conf -> 
> /usr/hdp/current/atlas-client/conf and /usr/hdp/2.5.0.0-1237/atlas/conf -> 
> /etc/atlas/2.5.0.0-1237/0
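
To make the intended layout concrete, here is a hedged sketch of the symlink
chain described above (paths copied from the description; this is illustrative
and is not the conf-select implementation itself):

{code}
# Hedged sketch of the target layout from the description above; for a test box
# only, run as root. It does not reproduce what conf-select does internally.
import os
import shutil

STACK_VERSION = '2.5.0.0-1237'
desired = [
    ('/etc/atlas/conf', '/usr/hdp/current/atlas-client/conf'),
    ('/usr/hdp/%s/atlas/conf' % STACK_VERSION, '/etc/atlas/%s/0' % STACK_VERSION),
]

for link, target in desired:
    if os.path.islink(link):
        os.remove(link)                      # drop the stale symlink
    elif os.path.isdir(link):
        shutil.move(link, link + '.backup')  # a real directory must be moved aside first
    os.symlink(target, link)
    print('%s -> %s' % (link, target))
{code}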



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18217) Zeppelin service check fails after enabling SSL for Zeppelin

2016-08-22 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18217:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Zeppelin service check fails after enabling SSL for Zeppelin
> 
>
> Key: AMBARI-18217
> URL: https://issues.apache.org/jira/browse/AMBARI-18217
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Yesha Vora
>Assignee: Renjith Kamath
>Priority: Blocker
>  Labels: 240RMApproved
> Fix For: 2.4.0
>
> Attachments: AMBARI-18217_trunk+branch-2.4_v1.patch
>
>
> Zeppelin service is running fine after enabling Zeppelin SSL. However, the 
> service check fails with the error below.
> {code}
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/service_check.py",
>  line 39, in 
> ZeppelinServiceCheck().execute()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 280, in execute
> method(env)
>   File 
> "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/service_check.py",
>  line 36, in service_check
> logoutput=True)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
> line 155, in __init__
> self.env.run()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 160, in run
> self.run_action(resource, action)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 124, in run_action
> provider_action()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py",
>  line 273, in action_run
> tries=self.resource.tries, try_sleep=self.resource.try_sleep)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 71, in inner
> result = function(command, **kwargs)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 93, in checked_call
> tries=tries, try_sleep=try_sleep)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 141, in _call_wrapper
> result = _call(command, **kwargs_copy)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 294, in _call
> raise Fail(err_msg)
> resource_management.core.exceptions.Fail: Execution of 'curl -s -o /dev/null 
> -w'%{http_code}' --negotiate -u: -k xxx:9995 | grep 200' returned 1.{code}
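
A hedged sketch of what an SSL-aware check amounts to (zeppelin_ssl,
zeppelin_host and the port are illustrative parameters, not the actual
params.py attributes): derive the scheme from the SSL flag before building the
curl probe instead of hard-coding http.

{code}
# Hedged sketch only; the parameter names are placeholders, not Zeppelin's real
# configuration property names.
def service_check_command(zeppelin_host, zeppelin_port, zeppelin_ssl):
    scheme = 'https' if zeppelin_ssl else 'http'
    url = '{0}://{1}:{2}'.format(scheme, zeppelin_host, zeppelin_port)
    # -k tolerates self-signed certificates; grep 200 keeps the original pass criterion
    return ("curl -s -o /dev/null -w'%{{http_code}}' --negotiate -u: -k {0} "
            "| grep 200").format(url)

if __name__ == '__main__':
    print(service_check_command('zeppelin.example.com', 9995, zeppelin_ssl=True))
{code}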



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18217) Zeppelin service check fails after enabling SSL for Zeppelin

2016-08-22 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15431297#comment-15431297
 ] 

Jayush Luniya commented on AMBARI-18217:


Branch-2.4
commit f21b94b071a1dabb680bb0b1e1449bfbc7c24354
Author: Jayush Luniya 
Date:   Mon Aug 22 10:46:59 2016 -0700

AMBARI-18217: Zeppelin service check fails after enabling SSL for Zeppelin 
(Renjith Kamath via jluniya)

> Zeppelin service check fails after enabling SSL for Zeppelin
> 
>
> Key: AMBARI-18217
> URL: https://issues.apache.org/jira/browse/AMBARI-18217
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Yesha Vora
>Assignee: Renjith Kamath
>Priority: Blocker
>  Labels: 240RMApproved
> Fix For: 2.4.0
>
> Attachments: AMBARI-18217_trunk+branch-2.4_v1.patch
>
>
> Zeppelin service is running fine after enabling Zeppelin SSL. However, the 
> service check fails with the error below.
> {code}
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/service_check.py",
>  line 39, in 
> ZeppelinServiceCheck().execute()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 280, in execute
> method(env)
>   File 
> "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/service_check.py",
>  line 36, in service_check
> logoutput=True)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
> line 155, in __init__
> self.env.run()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 160, in run
> self.run_action(resource, action)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 124, in run_action
> provider_action()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py",
>  line 273, in action_run
> tries=self.resource.tries, try_sleep=self.resource.try_sleep)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 71, in inner
> result = function(command, **kwargs)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 93, in checked_call
> tries=tries, try_sleep=try_sleep)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 141, in _call_wrapper
> result = _call(command, **kwargs_copy)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 294, in _call
> raise Fail(err_msg)
> resource_management.core.exceptions.Fail: Execution of 'curl -s -o /dev/null 
> -w'%{http_code}' --negotiate -u: -k xxx:9995 | grep 200' returned 1.{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18217) Zeppelin service check fails after enabling SSL for Zeppelin

2016-08-22 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15431296#comment-15431296
 ] 

Jayush Luniya commented on AMBARI-18217:


Trunk

commit a56891e781480993c92333006b6ccf5b22bb3e54
Author: Jayush Luniya 
Date:   Mon Aug 22 10:46:59 2016 -0700

AMBARI-18217: Zeppelin service check fails after enabling SSL for Zeppelin 
(Renjith Kamath via jluniya)

> Zeppelin service check fails after enabling SSL for Zeppelin
> 
>
> Key: AMBARI-18217
> URL: https://issues.apache.org/jira/browse/AMBARI-18217
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Yesha Vora
>Assignee: Renjith Kamath
>Priority: Blocker
>  Labels: 240RMApproved
> Fix For: 2.4.0
>
> Attachments: AMBARI-18217_trunk+branch-2.4_v1.patch
>
>
> Zeppelin service is running fine after enabling Zeppelin SSL. However, the 
> service check fails with the error below.
> {code}
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/service_check.py",
>  line 39, in 
> ZeppelinServiceCheck().execute()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 280, in execute
> method(env)
>   File 
> "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/service_check.py",
>  line 36, in service_check
> logoutput=True)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
> line 155, in __init__
> self.env.run()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 160, in run
> self.run_action(resource, action)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 124, in run_action
> provider_action()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py",
>  line 273, in action_run
> tries=self.resource.tries, try_sleep=self.resource.try_sleep)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 71, in inner
> result = function(command, **kwargs)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 93, in checked_call
> tries=tries, try_sleep=try_sleep)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 141, in _call_wrapper
> result = _call(command, **kwargs_copy)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 294, in _call
> raise Fail(err_msg)
> resource_management.core.exceptions.Fail: Execution of 'curl -s -o /dev/null 
> -w'%{http_code}' --negotiate -u: -k xxx:9995 | grep 200' returned 1.{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (AMBARI-18217) Zeppelin service check fails after enabling SSL for Zeppelin

2016-08-22 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15431282#comment-15431282
 ] 

Jayush Luniya edited comment on AMBARI-18217 at 8/22/16 5:52 PM:
-

Putting this JIRA back into 2.4 release as this is a blocker for Zeppelin with 
SSL.


was (Author: jluniya):
Putting this JIRA back into 2.4 release.

> Zeppelin service check fails after enabling SSL for Zeppelin
> 
>
> Key: AMBARI-18217
> URL: https://issues.apache.org/jira/browse/AMBARI-18217
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Yesha Vora
>Assignee: Renjith Kamath
>Priority: Blocker
>  Labels: 240RMApproved
> Fix For: 2.4.0
>
> Attachments: AMBARI-18217_trunk+branch-2.4_v1.patch
>
>
> Zeppelin service is running fine after enabling Zeppelin SSL. However, the 
> service check fails with the error below.
> {code}
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/service_check.py",
>  line 39, in 
> ZeppelinServiceCheck().execute()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 280, in execute
> method(env)
>   File 
> "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/service_check.py",
>  line 36, in service_check
> logoutput=True)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
> line 155, in __init__
> self.env.run()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 160, in run
> self.run_action(resource, action)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 124, in run_action
> provider_action()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py",
>  line 273, in action_run
> tries=self.resource.tries, try_sleep=self.resource.try_sleep)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 71, in inner
> result = function(command, **kwargs)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 93, in checked_call
> tries=tries, try_sleep=try_sleep)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 141, in _call_wrapper
> result = _call(command, **kwargs_copy)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 294, in _call
> raise Fail(err_msg)
> resource_management.core.exceptions.Fail: Execution of 'curl -s -o /dev/null 
> -w'%{http_code}' --negotiate -u: -k xxx:9995 | grep 200' returned 1.{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18217) Zeppelin service check fails after enabling SSL for Zeppelin

2016-08-22 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18217:
---
Priority: Blocker  (was: Critical)

> Zeppelin service check fails after enabling SSL for Zeppelin
> 
>
> Key: AMBARI-18217
> URL: https://issues.apache.org/jira/browse/AMBARI-18217
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Yesha Vora
>Assignee: Renjith Kamath
>Priority: Blocker
>  Labels: 240RMApproved
> Fix For: 2.4.0
>
> Attachments: AMBARI-18217_trunk+branch-2.4_v1.patch
>
>
> Zeppelin service is running fine after enabling Zeppelin SSL. However, the 
> service check fails with the error below.
> {code}
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/service_check.py",
>  line 39, in 
> ZeppelinServiceCheck().execute()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 280, in execute
> method(env)
>   File 
> "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/service_check.py",
>  line 36, in service_check
> logoutput=True)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
> line 155, in __init__
> self.env.run()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 160, in run
> self.run_action(resource, action)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 124, in run_action
> provider_action()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py",
>  line 273, in action_run
> tries=self.resource.tries, try_sleep=self.resource.try_sleep)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 71, in inner
> result = function(command, **kwargs)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 93, in checked_call
> tries=tries, try_sleep=try_sleep)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 141, in _call_wrapper
> result = _call(command, **kwargs_copy)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 294, in _call
> raise Fail(err_msg)
> resource_management.core.exceptions.Fail: Execution of 'curl -s -o /dev/null 
> -w'%{http_code}' --negotiate -u: -k xxx:9995 | grep 200' returned 1.{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18217) Zeppelin service check fails after enabling SSL for Zeppelin

2016-08-22 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15431282#comment-15431282
 ] 

Jayush Luniya commented on AMBARI-18217:


Putting this JIRA back into 2.4 release.

> Zeppelin service check fails after enabling SSL for Zeppelin
> 
>
> Key: AMBARI-18217
> URL: https://issues.apache.org/jira/browse/AMBARI-18217
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Yesha Vora
>Assignee: Renjith Kamath
>Priority: Critical
>  Labels: 240RMApproved
> Fix For: 2.4.0
>
> Attachments: AMBARI-18217_trunk+branch-2.4_v1.patch
>
>
> Zeppelin service is running fine after enabling Zeppelin SSL. However, the 
> service check fails with the error below.
> {code}
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/service_check.py",
>  line 39, in 
> ZeppelinServiceCheck().execute()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 280, in execute
> method(env)
>   File 
> "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/service_check.py",
>  line 36, in service_check
> logoutput=True)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
> line 155, in __init__
> self.env.run()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 160, in run
> self.run_action(resource, action)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 124, in run_action
> provider_action()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py",
>  line 273, in action_run
> tries=self.resource.tries, try_sleep=self.resource.try_sleep)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 71, in inner
> result = function(command, **kwargs)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 93, in checked_call
> tries=tries, try_sleep=try_sleep)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 141, in _call_wrapper
> result = _call(command, **kwargs_copy)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 294, in _call
> raise Fail(err_msg)
> resource_management.core.exceptions.Fail: Execution of 'curl -s -o /dev/null 
> -w'%{http_code}' --negotiate -u: -k xxx:9995 | grep 200' returned 1.{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18217) Zeppelin service check fails after enabling SSL for Zeppelin

2016-08-22 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18217:
---
Labels: 240RMApproved  (was: )

> Zeppelin service check fails after enabling SSL for Zeppelin
> 
>
> Key: AMBARI-18217
> URL: https://issues.apache.org/jira/browse/AMBARI-18217
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Yesha Vora
>Assignee: Renjith Kamath
>Priority: Critical
>  Labels: 240RMApproved
> Fix For: 2.4.0
>
> Attachments: AMBARI-18217_trunk+branch-2.4_v1.patch
>
>
> Zeppelin service is running fine after enabling Zeppelin SSL. However, the 
> service check fails with the error below.
> {code}
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/service_check.py",
>  line 39, in 
> ZeppelinServiceCheck().execute()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 280, in execute
> method(env)
>   File 
> "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/service_check.py",
>  line 36, in service_check
> logoutput=True)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
> line 155, in __init__
> self.env.run()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 160, in run
> self.run_action(resource, action)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 124, in run_action
> provider_action()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py",
>  line 273, in action_run
> tries=self.resource.tries, try_sleep=self.resource.try_sleep)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 71, in inner
> result = function(command, **kwargs)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 93, in checked_call
> tries=tries, try_sleep=try_sleep)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 141, in _call_wrapper
> result = _call(command, **kwargs_copy)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 294, in _call
> raise Fail(err_msg)
> resource_management.core.exceptions.Fail: Execution of 'curl -s -o /dev/null 
> -w'%{http_code}' --negotiate -u: -k xxx:9995 | grep 200' returned 1.{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18217) Zeppelin service check fails after enabling SSL for Zeppelin

2016-08-22 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18217:
---
Fix Version/s: (was: trunk)
   2.4.0

> Zeppelin service check fails after enabling SSL for Zeppelin
> 
>
> Key: AMBARI-18217
> URL: https://issues.apache.org/jira/browse/AMBARI-18217
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Yesha Vora
>Assignee: Renjith Kamath
>Priority: Critical
>  Labels: 240RMApproved
> Fix For: 2.4.0
>
> Attachments: AMBARI-18217_trunk+branch-2.4_v1.patch
>
>
> Zeppelin service is running fine after enabling Zeppelin SSL. However, the 
> service check fails with the error below.
> {code}
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/service_check.py",
>  line 39, in 
> ZeppelinServiceCheck().execute()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 280, in execute
> method(env)
>   File 
> "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/service_check.py",
>  line 36, in service_check
> logoutput=True)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
> line 155, in __init__
> self.env.run()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 160, in run
> self.run_action(resource, action)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 124, in run_action
> provider_action()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py",
>  line 273, in action_run
> tries=self.resource.tries, try_sleep=self.resource.try_sleep)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 71, in inner
> result = function(command, **kwargs)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 93, in checked_call
> tries=tries, try_sleep=try_sleep)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 141, in _call_wrapper
> result = _call(command, **kwargs_copy)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 294, in _call
> raise Fail(err_msg)
> resource_management.core.exceptions.Fail: Execution of 'curl -s -o /dev/null 
> -w'%{http_code}' --negotiate -u: -k xxx:9995 | grep 200' returned 1.{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18219) Ambari should use oozied.sh for stopping oozie so that optional catalina args can be provided

2016-08-20 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18219:
---
Labels: 240RMApproved  (was: )

> Ambari should use oozied.sh for stopping oozie so that optional catalina args 
> can be provided
> -
>
> Key: AMBARI-18219
> URL: https://issues.apache.org/jira/browse/AMBARI-18219
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Venkat Ranganathan
>Assignee: Venkat Ranganathan
>Priority: Blocker
>  Labels: 240RMApproved
> Fix For: 2.4.0
>
> Attachments: AMBARI-18219.patch
>
>
> In some scenarios, the Oozie stop can take longer, and if an Oozie start is 
> attempted before the stop completes it can fail with "address already in use".
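
A minimal sketch of what the change implies (the CATALINA_OPTS value here is
illustrative, and oozied.sh is assumed to be the standard Oozie control script
shipped under the HDP layout): stop Oozie through oozied.sh so that optional
Catalina arguments can be passed along, rather than stopping Tomcat directly.

{code}
# Hedged sketch only; the extra Catalina option is an illustrative example.
import os
import subprocess

def stop_oozie(oozie_home='/usr/hdp/current/oozie-server',
               extra_catalina_opts='-Xmx1024m'):
    env = dict(os.environ,
               OOZIE_CONFIG=os.path.join(oozie_home, 'conf'),
               CATALINA_OPTS=extra_catalina_opts)  # optional args forwarded to Catalina
    # Per the summary above, stopping via oozied.sh (rather than killing Tomcat
    # directly) is what allows optional Catalina arguments to be supplied.
    subprocess.check_call([os.path.join(oozie_home, 'bin', 'oozied.sh'), 'stop'],
                          env=env)

if __name__ == '__main__':
    stop_oozie()
{code}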



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18221) Oozie server start fails while enabling wire encryption

2016-08-20 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18221:
---
Labels: 240RMApproved  (was: )

> Oozie server start fails while enabling wire encryption
> ---
>
> Key: AMBARI-18221
> URL: https://issues.apache.org/jira/browse/AMBARI-18221
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: 2.4.0
>Reporter: Sumit Mohanty
>Assignee: Sumit Mohanty
>Priority: Blocker
>  Labels: 240RMApproved
> Fix For: 2.4.0
>
> Attachments: AMBARI-18221.patch
>
>
> Oozie start fails with the following after wire encryption is enabled.
> {code}
> 2016-08-20 17:16:11,341 - Execute['cd /var/tmp/oozie && 
> /usr/hdp/current/oozie-server/bin/oozie-start.sh'] {'environment': 
> {'OOZIE_CONFIG': '/usr/hdp/current/oozie-server/conf'}, 'not_if': 
> "ambari-sudo.sh su oozie -l -s /bin/bash -c 'ls /var/run/oozie/oozie.pid 
> >/dev/null 2>&1 && ps -p `cat /var/run/oozie/oozie.pid` >/dev/null 2>&1'", 
> 'user': 'oozie'}
> 2016-08-20 17:16:15,494 - Found 3 files/directories inside Atlas Hive hook 
> directory /usr/hdp/2.5.0.0-1234/atlas/hook/hive/
> 2016-08-20 17:16:15,701 - call['source 
> /usr/hdp/current/oozie-server/conf/oozie-env.sh ; oozie admin -shareliblist 
> hive | grep "\[Available ShareLib\]" -A 5'] {'logoutput': True, 'tries': 10, 
> 'user': 'oozie', 'try_sleep': 5}
> Error: IO_ERROR : java.io.IOException: Error while connecting Oozie server. 
> No of retries = 1. Exception = Could not authenticate, Authentication failed, 
> URL: 
> http://nat-s11-4-bjps-stackdeploy-4.openstacklocal:11000/oozie/versions?user.name=oozie,
>  status: 302, message: Found
> 2016-08-20 17:16:34,257 - Retrying after 5 seconds. Reason: Execution of 
> 'source /usr/hdp/current/oozie-server/conf/oozie-env.sh ; oozie admin 
> -shareliblist hive | grep "\[Available ShareLib\]" -A 5' returned 1. Error: 
> IO_ERROR : java.io.IOException: Error while connecting Oozie server. No of 
> retries = 1. Exception = Could not authenticate, Authentication failed, URL: 
> http://nat-s11-4-bjps-stackdeploy-4.openstacklocal:11000/oozie/versions?user.name=oozie,
>  status: 302, message: Found
> {code}
> Looks like the Oozie URL used is still pointing to HTTP instead of HTTPS.
> This is a result of the call {{oozie admin -shareliblist hive}}, which defaults 
> to the HTTP URL. So the calls need to be modified to include {{-oozie oozie_url}}.
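
A sketch of the adjusted call; the HTTPS host and port are placeholders and should
come from {{oozie.base.url}} in oozie-site:
{code}
# Illustrative: pass the HTTPS endpoint explicitly instead of relying on the HTTP default.
source /usr/hdp/current/oozie-server/conf/oozie-env.sh
oozie admin -oozie https://oozie-host.example.com:11443/oozie -shareliblist hive \
  | grep "\[Available ShareLib\]" -A 5
{code}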



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18213) RU: Storm components were stopped during RU and can not be started

2016-08-19 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18213:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> RU: Storm components were stopped during RU and can not be started
> --
>
> Key: AMBARI-18213
> URL: https://issues.apache.org/jira/browse/AMBARI-18213
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
>  Labels: 240RMApproved
> Fix For: 2.4.0
>
> Attachments: AMBARI-18213.patch
>
>
> STR:
> # Install cluster 2.4.2.0-258 on Ambari 2.2.2.0
> # Enable HA
> # Enable security
> # Upgrade Ambari to 2.4.0
> # Perform RU to 2.5.0.0-1208
> Deeper study shows that the Kerberos descriptor JSON in the database ("artifact" 
> table) still contains values and properties that are valid only for the 2.4 stack.
> So the issue workflow should look like:
> - Old stack version is installed
> - Kerberos descriptor gets saved to database
> - Security is enabled
> - Stack upgrade is performed
> - Keytab regeneration is performed, and it populates service config with 
> obsolete property values
> The issue happens at the "Stack upgrade is performed" step: we never update the 
> Kerberos descriptor JSON in the database to correspond to the new stack.
> From nimbus.out
> {code}Exception in thread "main" java.lang.ExceptionInInitializerError
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:270)
> at clojure.lang.RT.classForName(RT.java:2154)
> at clojure.lang.RT.classForName(RT.java:2163)
> at clojure.lang.RT.loadClassForName(RT.java:2182)
> at clojure.lang.RT.load(RT.java:436)
> at clojure.lang.RT.load(RT.java:412)
> at clojure.core$load$fn__5448.invoke(core.clj:5866)
> at clojure.core$load.doInvoke(core.clj:5865)
> at clojure.lang.RestFn.invoke(RestFn.java:408)
> at clojure.core$load_one.invoke(core.clj:5671)
> at clojure.core$load_lib$fn__5397.invoke(core.clj:5711)
> at clojure.core$load_lib.doInvoke(core.clj:5710)
> at clojure.lang.RestFn.applyTo(RestFn.java:142)
> at clojure.core$apply.invoke(core.clj:632)
> at clojure.core$load_libs.doInvoke(core.clj:5749)
> at clojure.lang.RestFn.applyTo(RestFn.java:137)
> at clojure.core$apply.invoke(core.clj:632)
> at clojure.core$require.doInvoke(core.clj:5832)
> at clojure.lang.RestFn.invoke(RestFn.java:408)
> at 
> org.apache.storm.daemon.nimbus$loading__5340__auto8560.invoke(nimbus.clj:16)
> at org.apache.storm.daemon.nimbus__init.load(Unknown Source)
> at org.apache.storm.daemon.nimbus__init.(Unknown Source)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:270)
> at clojure.lang.RT.classForName(RT.java:2154)
> at clojure.lang.RT.classForName(RT.java:2163)
> at clojure.lang.RT.loadClassForName(RT.java:2182)
> at clojure.lang.RT.load(RT.java:436)
> at clojure.lang.RT.load(RT.java:412)
> at clojure.core$load$fn__5448.invoke(core.clj:5866)
> at clojure.core$load.doInvoke(core.clj:5865)
> at clojure.lang.RestFn.invoke(RestFn.java:408)
> at clojure.lang.Var.invoke(Var.java:379)
> at org.apache.storm.daemon.nimbus.(Unknown Source)
> Caused by: java.lang.ClassNotFoundException: 
> backtype.storm.security.auth.authorizer.SimpleACLAuthorizer
> at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:190)
> at 
> org.apache.storm.daemon.common$mk_authorization_handler.invoke(common.clj:412)
> at org.apache.storm.ui.core__init.load(Unknown Source)
> at org.apache.storm.ui.core__init.(Unknown Source)
> ... 35 more
> {code}
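
The ClassNotFoundException above is for the pre-rename package; in the Storm that
ships with the newer stack the same authorizer lives under org.apache.storm. A
sketch of the value the regenerated descriptor and configs should carry
(illustrative, not the committed patch):
{code}
# Illustrative nimbus configuration:
# stale 2.4-era value:  backtype.storm.security.auth.authorizer.SimpleACLAuthorizer
# expected 2.5 value:   org.apache.storm.security.auth.authorizer.SimpleACLAuthorizer
nimbus.authorizer: "org.apache.storm.security.auth.authorizer.SimpleACLAuthorizer"
{code}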



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18213) RU: Storm components were stopped during RU and can not be started

2016-08-19 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15429122#comment-15429122
 ] 

Jayush Luniya commented on AMBARI-18213:


Looks like this is already committed, but the Apache JIRA # is missing from the commit message.

Trunk
commit ae47ae96c70c4fbc672d1539a07999a2819ea998
Author: Nate Cole 
Date:   Fri Aug 19 16:58:23 2016 -0400

Storm components were stopped during RU and can not be started (Dmitry 
Lysnichenko via ncole)

Branch-2.4

commit c8282d6917311aba1eb52885030177210fcc54a0
Author: Nate Cole 
Date:   Fri Aug 19 16:58:52 2016 -0400

Storm components were stopped during RU and can not be started (Dmitry 
Lysnichenko via ncole)

> RU: Storm components were stopped during RU and can not be started
> --
>
> Key: AMBARI-18213
> URL: https://issues.apache.org/jira/browse/AMBARI-18213
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
>  Labels: 240RMApproved
> Fix For: 2.4.0
>
> Attachments: AMBARI-18213.patch
>
>
> STR:
> # Install cluster 2.4.2.0-258 on Ambari 2.2.2.0
> # Enable HA
> # Enable security
> # Upgrade Ambari to 2.4.0
> # Perform RU to 2.5.0.0-1208
> Deeper study shows that the Kerberos descriptor JSON in the database ("artifact" 
> table) still contains values and properties that are valid only for the 2.4 stack.
> So the issue workflow should look like:
> - Old stack version is installed
> - Kerberos descriptor gets saved to database
> - Security is enabled
> - Stack upgrade is performed
> - Keytab regeneration is performed, and it populates service config with 
> obsolete property values
> The issue happens at the "Stack upgrade is performed" step: we never update the 
> Kerberos descriptor JSON in the database to correspond to the new stack.
> From nimbus.out
> {code}Exception in thread "main" java.lang.ExceptionInInitializerError
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:270)
> at clojure.lang.RT.classForName(RT.java:2154)
> at clojure.lang.RT.classForName(RT.java:2163)
> at clojure.lang.RT.loadClassForName(RT.java:2182)
> at clojure.lang.RT.load(RT.java:436)
> at clojure.lang.RT.load(RT.java:412)
> at clojure.core$load$fn__5448.invoke(core.clj:5866)
> at clojure.core$load.doInvoke(core.clj:5865)
> at clojure.lang.RestFn.invoke(RestFn.java:408)
> at clojure.core$load_one.invoke(core.clj:5671)
> at clojure.core$load_lib$fn__5397.invoke(core.clj:5711)
> at clojure.core$load_lib.doInvoke(core.clj:5710)
> at clojure.lang.RestFn.applyTo(RestFn.java:142)
> at clojure.core$apply.invoke(core.clj:632)
> at clojure.core$load_libs.doInvoke(core.clj:5749)
> at clojure.lang.RestFn.applyTo(RestFn.java:137)
> at clojure.core$apply.invoke(core.clj:632)
> at clojure.core$require.doInvoke(core.clj:5832)
> at clojure.lang.RestFn.invoke(RestFn.java:408)
> at 
> org.apache.storm.daemon.nimbus$loading__5340__auto8560.invoke(nimbus.clj:16)
> at org.apache.storm.daemon.nimbus__init.load(Unknown Source)
> at org.apache.storm.daemon.nimbus__init.(Unknown Source)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:270)
> at clojure.lang.RT.classForName(RT.java:2154)
> at clojure.lang.RT.classForName(RT.java:2163)
> at clojure.lang.RT.loadClassForName(RT.java:2182)
> at clojure.lang.RT.load(RT.java:436)
> at clojure.lang.RT.load(RT.java:412)
> at clojure.core$load$fn__5448.invoke(core.clj:5866)
> at clojure.core$load.doInvoke(core.clj:5865)
> at clojure.lang.RestFn.invoke(RestFn.java:408)
> at clojure.lang.Var.invoke(Var.java:379)
> at org.apache.storm.daemon.nimbus.(Unknown Source)
> Caused by: java.lang.ClassNotFoundException: 
> backtype.storm.security.auth.authorizer.SimpleACLAuthorizer
> at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:190)
> at 
> org.apache.storm.daemon.common$mk_authorization_handler.invoke(common.clj:412)
> at org.apache.storm.ui.core__init.load(Unknown Source)
> at org.apache.storm.ui.core__init.(Unknown Source)
> ... 35 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18208) Bug Fixing in HueMigration View

2016-08-19 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18208:
---
Fix Version/s: (was: 2.4.0)
   trunk

> Bug Fixing in HueMigration View
> ---
>
> Key: AMBARI-18208
> URL: https://issues.apache.org/jira/browse/AMBARI-18208
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-views
>Affects Versions: 2.4.0
> Environment: JDK 1.8
> Centos 6.4
> Ambari DB:-Mysql,Postgress
> Hue DB:-Sqlite,Mysql
>Reporter: Pradarttana
>  Labels: None
> Fix For: trunk
>
> Attachments: AMBARI-18208.1_trunk.patch, AMBARI-18208_trunk.patch
>
>
> 1. Hive History Query Insertion in hive 1.5 and hive 1.0 (due to DB schema 
> changes) 
> 2. Not able to migrate with kerberos enabled
> 3. UI Validation bugs
> 4. Mysql and Postgres Hue error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18208) Bug Fixing in HueMigration View

2016-08-19 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15429113#comment-15429113
 ] 

Jayush Luniya commented on AMBARI-18208:


Moving this out of 2.4.0 since it's not a blocker. We should not commit this to 
branch-2.4.

> Bug Fixing in HueMigration View
> ---
>
> Key: AMBARI-18208
> URL: https://issues.apache.org/jira/browse/AMBARI-18208
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-views
>Affects Versions: 2.4.0
> Environment: JDK 1.8
> Centos 6.4
> Ambari DB:-Mysql,Postgress
> Hue DB:-Sqlite,Mysql
>Reporter: Pradarttana
>  Labels: None
> Fix For: trunk
>
> Attachments: AMBARI-18208.1_trunk.patch, AMBARI-18208_trunk.patch
>
>
> 1. Hive History Query Insertion in hive 1.5 and hive 1.0 (due to DB schema 
> changes) 
> 2. Not able to migrate with kerberos enabled
> 3. UI Validation bugs
> 4. Mysql and Postgres Hue error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18213) RU: Storm components were stopped during RU and can not be started

2016-08-19 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18213:
---
Affects Version/s: 2.4.0

> RU: Storm components were stopped during RU and can not be started
> --
>
> Key: AMBARI-18213
> URL: https://issues.apache.org/jira/browse/AMBARI-18213
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
>  Labels: 240RMApproved
> Fix For: 2.4.0
>
> Attachments: AMBARI-18213.patch
>
>
> STR:
> # Install cluster 2.4.2.0-258 on Ambari 2.2.2.0
> # Enable HA
> # Enable security
> # Upgrade Ambari to 2.4.0
> # Perform RU to 2.5.0.0-1208
> Deeper study shows that the Kerberos descriptor JSON in the database ("artifact" 
> table) still contains values and properties that are valid only for the 2.4 stack.
> So the issue workflow should look like:
> - Old stack version is installed
> - Kerberos descriptor gets saved to database
> - Security is enabled
> - Stack upgrade is performed
> - Keytab regeneration is performed, and it populates service config with 
> obsolete property values
> The issue happens at the "Stack upgrade is performed" step: we never update the 
> Kerberos descriptor JSON in the database to correspond to the new stack.
> From nimbus.out
> {code}Exception in thread "main" java.lang.ExceptionInInitializerError
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:270)
> at clojure.lang.RT.classForName(RT.java:2154)
> at clojure.lang.RT.classForName(RT.java:2163)
> at clojure.lang.RT.loadClassForName(RT.java:2182)
> at clojure.lang.RT.load(RT.java:436)
> at clojure.lang.RT.load(RT.java:412)
> at clojure.core$load$fn__5448.invoke(core.clj:5866)
> at clojure.core$load.doInvoke(core.clj:5865)
> at clojure.lang.RestFn.invoke(RestFn.java:408)
> at clojure.core$load_one.invoke(core.clj:5671)
> at clojure.core$load_lib$fn__5397.invoke(core.clj:5711)
> at clojure.core$load_lib.doInvoke(core.clj:5710)
> at clojure.lang.RestFn.applyTo(RestFn.java:142)
> at clojure.core$apply.invoke(core.clj:632)
> at clojure.core$load_libs.doInvoke(core.clj:5749)
> at clojure.lang.RestFn.applyTo(RestFn.java:137)
> at clojure.core$apply.invoke(core.clj:632)
> at clojure.core$require.doInvoke(core.clj:5832)
> at clojure.lang.RestFn.invoke(RestFn.java:408)
> at 
> org.apache.storm.daemon.nimbus$loading__5340__auto8560.invoke(nimbus.clj:16)
> at org.apache.storm.daemon.nimbus__init.load(Unknown Source)
> at org.apache.storm.daemon.nimbus__init.(Unknown Source)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:270)
> at clojure.lang.RT.classForName(RT.java:2154)
> at clojure.lang.RT.classForName(RT.java:2163)
> at clojure.lang.RT.loadClassForName(RT.java:2182)
> at clojure.lang.RT.load(RT.java:436)
> at clojure.lang.RT.load(RT.java:412)
> at clojure.core$load$fn__5448.invoke(core.clj:5866)
> at clojure.core$load.doInvoke(core.clj:5865)
> at clojure.lang.RestFn.invoke(RestFn.java:408)
> at clojure.lang.Var.invoke(Var.java:379)
> at org.apache.storm.daemon.nimbus.(Unknown Source)
> Caused by: java.lang.ClassNotFoundException: 
> backtype.storm.security.auth.authorizer.SimpleACLAuthorizer
> at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:190)
> at 
> org.apache.storm.daemon.common$mk_authorization_handler.invoke(common.clj:412)
> at org.apache.storm.ui.core__init.load(Unknown Source)
> at org.apache.storm.ui.core__init.(Unknown Source)
> ... 35 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18213) RU: Storm components were stopped during RU and can not be started

2016-08-19 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18213:
---
Priority: Blocker  (was: Major)

> RU: Storm components were stopped during RU and can not be started
> --
>
> Key: AMBARI-18213
> URL: https://issues.apache.org/jira/browse/AMBARI-18213
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
>  Labels: 240RMApproved
> Fix For: 2.4.0
>
> Attachments: AMBARI-18213.patch
>
>
> STR:
> # Install cluster 2.4.2.0-258 on Ambari 2.2.2.0
> # Enable HA
> # Enable security
> # Upgrade Ambari to 2.4.0
> # Perform RU to 2.5.0.0-1208
> Deeper study shows that the Kerberos descriptor JSON in the database ("artifact" 
> table) still contains values and properties that are valid only for the 2.4 stack.
> So the issue workflow should look like:
> - Old stack version is installed
> - Kerberos descriptor gets saved to database
> - Security is enabled
> - Stack upgrade is performed
> - Keytab regeneration is performed, and it populates service config with 
> obsolete property values
> The issue happens at the "Stack upgrade is performed" step: we never update the 
> Kerberos descriptor JSON in the database to correspond to the new stack.
> From nimbus.out
> {code}Exception in thread "main" java.lang.ExceptionInInitializerError
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:270)
> at clojure.lang.RT.classForName(RT.java:2154)
> at clojure.lang.RT.classForName(RT.java:2163)
> at clojure.lang.RT.loadClassForName(RT.java:2182)
> at clojure.lang.RT.load(RT.java:436)
> at clojure.lang.RT.load(RT.java:412)
> at clojure.core$load$fn__5448.invoke(core.clj:5866)
> at clojure.core$load.doInvoke(core.clj:5865)
> at clojure.lang.RestFn.invoke(RestFn.java:408)
> at clojure.core$load_one.invoke(core.clj:5671)
> at clojure.core$load_lib$fn__5397.invoke(core.clj:5711)
> at clojure.core$load_lib.doInvoke(core.clj:5710)
> at clojure.lang.RestFn.applyTo(RestFn.java:142)
> at clojure.core$apply.invoke(core.clj:632)
> at clojure.core$load_libs.doInvoke(core.clj:5749)
> at clojure.lang.RestFn.applyTo(RestFn.java:137)
> at clojure.core$apply.invoke(core.clj:632)
> at clojure.core$require.doInvoke(core.clj:5832)
> at clojure.lang.RestFn.invoke(RestFn.java:408)
> at 
> org.apache.storm.daemon.nimbus$loading__5340__auto8560.invoke(nimbus.clj:16)
> at org.apache.storm.daemon.nimbus__init.load(Unknown Source)
> at org.apache.storm.daemon.nimbus__init.(Unknown Source)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:270)
> at clojure.lang.RT.classForName(RT.java:2154)
> at clojure.lang.RT.classForName(RT.java:2163)
> at clojure.lang.RT.loadClassForName(RT.java:2182)
> at clojure.lang.RT.load(RT.java:436)
> at clojure.lang.RT.load(RT.java:412)
> at clojure.core$load$fn__5448.invoke(core.clj:5866)
> at clojure.core$load.doInvoke(core.clj:5865)
> at clojure.lang.RestFn.invoke(RestFn.java:408)
> at clojure.lang.Var.invoke(Var.java:379)
> at org.apache.storm.daemon.nimbus.(Unknown Source)
> Caused by: java.lang.ClassNotFoundException: 
> backtype.storm.security.auth.authorizer.SimpleACLAuthorizer
> at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:190)
> at 
> org.apache.storm.daemon.common$mk_authorization_handler.invoke(common.clj:412)
> at org.apache.storm.ui.core__init.load(Unknown Source)
> at org.apache.storm.ui.core__init.(Unknown Source)
> ... 35 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18213) RU: Storm components were stopped during RU and can not be started

2016-08-19 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18213:
---
Labels: 240RMApproved  (was: )

> RU: Storm components were stopped during RU and can not be started
> --
>
> Key: AMBARI-18213
> URL: https://issues.apache.org/jira/browse/AMBARI-18213
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
>  Labels: 240RMApproved
> Fix For: 2.4.0
>
> Attachments: AMBARI-18213.patch
>
>
> STR:
> # Install cluster 2.4.2.0-258 on Ambari 2.2.2.0
> # Enable HA
> # Enable security
> # Upgrade Ambari to 2.4.0
> # Perform RU to 2.5.0.0-1208
> Deeper study shows that the Kerberos descriptor JSON in the database ("artifact" 
> table) still contains values and properties that are valid only for the 2.4 stack.
> So the issue workflow should look like:
> - Old stack version is installed
> - Kerberos descriptor gets saved to database
> - Security is enabled
> - Stack upgrade is performed
> - Keytab regeneration is performed, and it populates service config with 
> obsolete property values
> The issue happens at the "Stack upgrade is performed" step: we never update the 
> Kerberos descriptor JSON in the database to correspond to the new stack.
> From nimbus.out
> {code}Exception in thread "main" java.lang.ExceptionInInitializerError
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:270)
> at clojure.lang.RT.classForName(RT.java:2154)
> at clojure.lang.RT.classForName(RT.java:2163)
> at clojure.lang.RT.loadClassForName(RT.java:2182)
> at clojure.lang.RT.load(RT.java:436)
> at clojure.lang.RT.load(RT.java:412)
> at clojure.core$load$fn__5448.invoke(core.clj:5866)
> at clojure.core$load.doInvoke(core.clj:5865)
> at clojure.lang.RestFn.invoke(RestFn.java:408)
> at clojure.core$load_one.invoke(core.clj:5671)
> at clojure.core$load_lib$fn__5397.invoke(core.clj:5711)
> at clojure.core$load_lib.doInvoke(core.clj:5710)
> at clojure.lang.RestFn.applyTo(RestFn.java:142)
> at clojure.core$apply.invoke(core.clj:632)
> at clojure.core$load_libs.doInvoke(core.clj:5749)
> at clojure.lang.RestFn.applyTo(RestFn.java:137)
> at clojure.core$apply.invoke(core.clj:632)
> at clojure.core$require.doInvoke(core.clj:5832)
> at clojure.lang.RestFn.invoke(RestFn.java:408)
> at 
> org.apache.storm.daemon.nimbus$loading__5340__auto8560.invoke(nimbus.clj:16)
> at org.apache.storm.daemon.nimbus__init.load(Unknown Source)
> at org.apache.storm.daemon.nimbus__init.(Unknown Source)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:270)
> at clojure.lang.RT.classForName(RT.java:2154)
> at clojure.lang.RT.classForName(RT.java:2163)
> at clojure.lang.RT.loadClassForName(RT.java:2182)
> at clojure.lang.RT.load(RT.java:436)
> at clojure.lang.RT.load(RT.java:412)
> at clojure.core$load$fn__5448.invoke(core.clj:5866)
> at clojure.core$load.doInvoke(core.clj:5865)
> at clojure.lang.RestFn.invoke(RestFn.java:408)
> at clojure.lang.Var.invoke(Var.java:379)
> at org.apache.storm.daemon.nimbus.(Unknown Source)
> Caused by: java.lang.ClassNotFoundException: 
> backtype.storm.security.auth.authorizer.SimpleACLAuthorizer
> at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:190)
> at 
> org.apache.storm.daemon.common$mk_authorization_handler.invoke(common.clj:412)
> at org.apache.storm.ui.core__init.load(Unknown Source)
> at org.apache.storm.ui.core__init.(Unknown Source)
> ... 35 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18188) Typo in stack_advisor.py for KAFKA

2016-08-18 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15427345#comment-15427345
 ] 

Jayush Luniya commented on AMBARI-18188:


commit 2d6ade5a21f2c055ef73a5b8d321a4265c41c6f9
Author: Jayush Luniya 
Date:   Thu Aug 18 16:13:33 2016 -0700

Revert "AMBARI-18188. KAFKA is spelled as 'KAKFA' in stack_advisor.py 
because of which the function validate (Anita Jebaraj via rlevas)"

This reverts commit 3c51317e62efbdc213dab92f076c95126465387f.

> Typo in stack_advisor.py for KAFKA
> --
>
> Key: AMBARI-18188
> URL: https://issues.apache.org/jira/browse/AMBARI-18188
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk
>Reporter: Anita Gnanamalar Jebaraj
>Assignee: Anita Gnanamalar Jebaraj
> Fix For: trunk, 2.4.0
>
> Attachments: AMBARI-18188.patch
>
>
> KAFKA is misspelled as 'KAKFA' in stack_advisor.py, so the function 
> validateKAFKAConfigurations is never called.
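
A minimal sketch of why the typo is silent; this is illustrative Python, not the
actual stack_advisor.py contents:
{code}
# Illustrative only: a validator map keyed by service name never matches "KAFKA"
# when the key is misspelled, so Kafka-specific validation is silently skipped.
validators = {
    "KAKFA": "validateKAFKAConfigurations",  # typo -- the key should be "KAFKA"
}

def get_validator(service_name):
    # Returns None for "KAFKA", so no error is raised and no validation runs.
    return validators.get(service_name)
{code}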



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18188) Typo in stack_advisor.py for KAFKA

2016-08-18 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15427332#comment-15427332
 ] 

Jayush Luniya commented on AMBARI-18188:


-1 for 2.4.0. Backing this out from the 2.4 branch as it does not meet the 
agreed-upon bug bar.

> Typo in stack_advisor.py for KAFKA
> --
>
> Key: AMBARI-18188
> URL: https://issues.apache.org/jira/browse/AMBARI-18188
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk
>Reporter: Anita Gnanamalar Jebaraj
>Assignee: Anita Gnanamalar Jebaraj
> Fix For: trunk, 2.4.0
>
> Attachments: AMBARI-18188.patch
>
>
> KAFKA is misspelled as 'KAKFA' in stack_advisor.py, so the function 
> validateKAFKAConfigurations is never called.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (AMBARI-18197) HBase region server start fails after disabling kerberos

2016-08-18 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya resolved AMBARI-18197.

Resolution: Duplicate

> HBase region server start fails after disabling kerberos
> 
>
> Key: AMBARI-18197
> URL: https://issues.apache.org/jira/browse/AMBARI-18197
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Shreya Bhat
>  Labels: system_test
> Fix For: 2.4.0
>
>
> STR :
> 1. Install cluster
> 2. Enable kerberos
> 3. Disable kerberos
> The disable process fails at the Start Services step.
> Error:
> {code}
> "Traceback (most recent call last):\n  File 
> \"/var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/hbase_regionserver.py\",
>  line 198, in \nHbaseRegionServer().execute()\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py\",
>  line 280, in execute\nmethod(env)\n  File 
> \"/var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/hbase_regionserver.py\",
>  line 124, in start\nself.post_start(env, upgrade_type=upgrade_type)\n  
> File 
> \"/var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/hbase_regionserver.py\",
>  line 89, in post_start\nself.apply_atlas_acl(params.hbase_user)\n  File 
> \"/var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/hbase_regionserver.py\",
>  line 114, in apply_atlas_acl\nshell.checked_call(format(\"{kinit_cmd}; 
> {perm_cmd}\"), user=params.hbase_user, tries=10, try_sleep=10)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
> 71, in inner\nresult = function(command, **kwargs)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
> 93, in checked_call\ntries=tries, try_sleep=try_sleep)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
> 141, in _call_wrapper\nresult = _call(command, **kwargs_copy)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
> 294, in _call\nraise 
> Fail(err_msg)\nresource_management.core.exceptions.Fail: Execution of 
> '/usr/bin/kinit -kt /etc/security/keytabs/hbase.service.keytab 
> hbase/nat-s11-4-kdxs-ambari-rbac-1-1.openstacklo...@hwqe.hortonworks.com; 
> echo \"grant 'atlas', 'RWXCA', 'atlas_titan'\" | hbase shell -n' returned 1. 
> ERROR ArgumentError: Can't find a table: atlas_titan",
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (AMBARI-18195) HST server down after deploy

2016-08-18 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya resolved AMBARI-18195.

Resolution: Invalid

SmartSense was explicitly stopped.

> HST server down after deploy
> 
>
> Key: AMBARI-18195
> URL: https://issues.apache.org/jira/browse/AMBARI-18195
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Shreya Bhat
>Priority: Blocker
>  Labels: system_test
> Fix For: 2.4.0
>
>
> Deploy type : UI
> Error in hst-server logs :
> {code}
> 18 Aug 2016 03:50:07,620  INFO [main] HttpSecurityBeanDefinitionParser:264 - 
> Checking sorted filter chain: [Root bean: class 
> [org.springframework.security.web.context.SecurityContextPersistenceFilter]; 
> scope=; abstract=false; lazyInit=false; autowireMode=0; dependencyCheck=0; 
> autowireCandidate=true; primary=false; factoryBeanName=null; 
> factoryMethodName=null; initMethodName=null; destroyMethodName=null, order = 
> 300, Root bean: class 
> [org.springframework.security.web.authentication.www.BasicAuthenticationFilter];
>  scope=; abstract=false; lazyInit=false; autowireMode=0; dependencyCheck=0; 
> autowireCandidate=true; primary=false; factoryBeanName=null; 
> factoryMethodName=null; initMethodName=null; destroyMethodName=null, order = 
> 1200, , order = 1201, Root bean: class 
> [org.springframework.security.web.savedrequest.RequestCacheAwareFilter]; 
> scope=; abstract=false; lazyInit=false; autowireMode=0; dependencyCheck=0; 
> autowireCandidate=true; primary=false; factoryBeanName=null; 
> factoryMethodName=null; initMethodName=null; destroyMethodName=null, order = 
> 1300, Root bean: class 
> [org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter];
>  scope=; abstract=false; lazyInit=false; autowireMode=0; dependencyCheck=0; 
> autowireCandidate=true; primary=false; factoryBeanName=null; 
> factoryMethodName=null; initMethodName=null; destroyMethodName=null, order = 
> 1400, Root bean: class 
> [org.springframework.security.web.authentication.AnonymousAuthenticationFilter];
>  scope=; abstract=false; lazyInit=false; autowireMode=0; dependencyCheck=0; 
> autowireCandidate=true; primary=false; factoryBeanName=null; 
> factoryMethodName=null; initMethodName=null; destroyMethodName=null, order = 
> 1700, Root bean: class 
> [org.springframework.security.web.session.SessionManagementFilter]; scope=; 
> abstract=false; lazyInit=false; autowireMode=0; dependencyCheck=0; 
> autowireCandidate=true; primary=false; factoryBeanName=null; 
> factoryMethodName=null; initMethodName=null; destroyMethodName=null, order = 
> 1800, Root bean: class 
> [org.springframework.security.web.access.ExceptionTranslationFilter]; scope=; 
> abstract=false; lazyInit=false; autowireMode=0; dependencyCheck=0; 
> autowireCandidate=true; primary=false; factoryBeanName=null; 
> factoryMethodName=null; initMethodName=null; destroyMethodName=null, order = 
> 1900, 
> ,
>  order = 2000]
> 18 Aug 2016 03:50:07,772  INFO [main] DefaultSecurityFilterChain:28 - 
> Creating filter chain: 
> org.springframework.security.web.util.AnyRequestMatcher@1, 
> [org.springframework.security.web.context.SecurityContextPersistenceFilter@58496c97,
>  
> org.springframework.security.web.authentication.www.BasicAuthenticationFilter@ad3324b,
>  
> com.hortonworks.support.tools.server.security.authorization.SupportToolAuthorizationFilter@3872bc37,
>  
> org.springframework.security.web.savedrequest.RequestCacheAwareFilter@1a87b51,
>  
> org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@12968227,
>  
> org.springframework.security.web.authentication.AnonymousAuthenticationFilter@144ab54,
>  org.springframework.security.web.session.SessionManagementFilter@2cfa2c4f, 
> org.springframework.security.web.access.ExceptionTranslationFilter@6ecab872, 
> org.springframework.security.web.access.intercept.FilterSecurityInterceptor@48eb9836]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18142) Define keytab/principal for Spark Thrift Server

2016-08-18 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18142:
---
Labels: 240RMApproved  (was: )

> Define keytab/principal for Spark Thrift Server
> ---
>
> Key: AMBARI-18142
> URL: https://issues.apache.org/jira/browse/AMBARI-18142
> Project: Ambari
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Jeff Zhang
>Assignee: Jeff Zhang
>Priority: Critical
>  Labels: 240RMApproved
> Fix For: 2.4.0
>
> Attachments: AMBARI-18142-1.patch, AMBARI-18142-2.patch, 
> AMBARI-18142.addendum.patch
>
>
> PROBLEM: As of now the Spark Thrift Server seems to pick up the TGT from the 
> cache on startup and will not renew the ticket when it expires. This leaves the 
> Spark Thrift Server process valid for only 7 days. 
> Users need to create a script to manually renew the ticket, which is 
> inconvenient.
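
A sketch of the kind of properties the summary refers to, letting the Thrift Server
re-login from a keytab instead of relying on the ticket cache; the principal and
keytab path are placeholders:
{code}
# Illustrative spark-thrift-sparkconf entries; actual values depend on the cluster realm.
spark.yarn.principal=hive/_HOST@EXAMPLE.COM
spark.yarn.keytab=/etc/security/keytabs/hive.service.keytab
{code}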



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18157) Hive service check is failing after cluster Kerberization

2016-08-18 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15426994#comment-15426994
 ] 

Jayush Luniya commented on AMBARI-18157:


2.4
commit f5b3c238b84c4225603412d69b46bcc8243d56fb
Author: Jayush Luniya 
Date:   Thu Aug 18 12:01:35 2016 -0700

AMBARI-18157: Hive service check is failing after cluster Kerberization 
(jluniya)

> Hive service check is failing after cluster Kerberization
> -
>
> Key: AMBARI-18157
> URL: https://issues.apache.org/jira/browse/AMBARI-18157
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Shreya Bhat
>Assignee: Jayush Luniya
>Priority: Blocker
>  Labels: 240RMApproved, system_test
> Fix For: 2.4.0
>
> Attachments: AMBARI-18157-branch24.patch, AMBARI-18157-trunk.patch
>
>
> {code}
> resource_management.core.exceptions.Fail: Execution of 
> '/var/lib/ambari-agent/tmp/templetonSmoke.sh $HOSTNAME 50111 
> idtest.ambari-qa.1471340206.23.pig 
> /etc/security/keytabs/smokeuser.headless.keytab true /usr/bin/kinit 
> ambari-qa@REALM_NAME /var/lib/ambari-agent/tmp' returned 1. Templeton Smoke 
> Test (ddl cmd): Failed. : {"error":"Unauthorized connection for super-user: 
> HTTP/HOST_NAME@REALM_NAME from IP HOST_IP"}http_code <500>
> {code}
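
The "Unauthorized connection for super-user: HTTP/..." message is the standard
Hadoop proxy-user rejection; the WebHCat smoke test depends on core-site entries of
the shape below. The host value is a placeholder, and this is shown for context
rather than as the committed fix:
{code}
<property>
  <name>hadoop.proxyuser.HTTP.hosts</name>
  <value>webhcat-host.example.com</value>
</property>
<property>
  <name>hadoop.proxyuser.HTTP.groups</name>
  <value>*</value>
</property>
{code}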



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18157) Hive service check is failing after cluster Kerberization

2016-08-18 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18157:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Hive service check is failing after cluster Kerberization
> -
>
> Key: AMBARI-18157
> URL: https://issues.apache.org/jira/browse/AMBARI-18157
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Shreya Bhat
>Assignee: Jayush Luniya
>Priority: Blocker
>  Labels: 240RMApproved, system_test
> Fix For: 2.4.0
>
> Attachments: AMBARI-18157-branch24.patch, AMBARI-18157-trunk.patch
>
>
> {code}
> resource_management.core.exceptions.Fail: Execution of 
> '/var/lib/ambari-agent/tmp/templetonSmoke.sh $HOSTNAME 50111 
> idtest.ambari-qa.1471340206.23.pig 
> /etc/security/keytabs/smokeuser.headless.keytab true /usr/bin/kinit 
> ambari-qa@REALM_NAME /var/lib/ambari-agent/tmp' returned 1. Templeton Smoke 
> Test (ddl cmd): Failed. : {"error":"Unauthorized connection for super-user: 
> HTTP/HOST_NAME@REALM_NAME from IP HOST_IP"}http_code <500>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18157) Hive service check is failing after cluster Kerberization

2016-08-18 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15426992#comment-15426992
 ] 

Jayush Luniya commented on AMBARI-18157:


Trunk
commit 0e5fb8ee8e205ccb9da889e44ba77ffbde3f86dd
Author: Jayush Luniya 
Date:   Thu Aug 18 12:01:35 2016 -0700

AMBARI-18157: Hive service check is failing after cluster Kerberization 
(jluniya)

> Hive service check is failing after cluster Kerberization
> -
>
> Key: AMBARI-18157
> URL: https://issues.apache.org/jira/browse/AMBARI-18157
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Shreya Bhat
>Assignee: Jayush Luniya
>Priority: Blocker
>  Labels: 240RMApproved, system_test
> Fix For: 2.4.0
>
> Attachments: AMBARI-18157-branch24.patch, AMBARI-18157-trunk.patch
>
>
> {code}
> resource_management.core.exceptions.Fail: Execution of 
> '/var/lib/ambari-agent/tmp/templetonSmoke.sh $HOSTNAME 50111 
> idtest.ambari-qa.1471340206.23.pig 
> /etc/security/keytabs/smokeuser.headless.keytab true /usr/bin/kinit 
> ambari-qa@REALM_NAME /var/lib/ambari-agent/tmp' returned 1. Templeton Smoke 
> Test (ddl cmd): Failed. : {"error":"Unauthorized connection for super-user: 
> HTTP/HOST_NAME@REALM_NAME from IP HOST_IP"}http_code <500>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (AMBARI-18157) Hive service check is failing after cluster Kerberization

2016-08-18 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya reassigned AMBARI-18157:
--

Assignee: Jayush Luniya  (was: Dmitry Lysnichenko)

> Hive service check is failing after cluster Kerberization
> -
>
> Key: AMBARI-18157
> URL: https://issues.apache.org/jira/browse/AMBARI-18157
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Shreya Bhat
>Assignee: Jayush Luniya
>Priority: Blocker
>  Labels: 240RMApproved, system_test
> Fix For: 2.4.0
>
> Attachments: AMBARI-18157-branch24.patch, AMBARI-18157-trunk.patch
>
>
> {code}
> resource_management.core.exceptions.Fail: Execution of 
> '/var/lib/ambari-agent/tmp/templetonSmoke.sh $HOSTNAME 50111 
> idtest.ambari-qa.1471340206.23.pig 
> /etc/security/keytabs/smokeuser.headless.keytab true /usr/bin/kinit 
> ambari-qa@REALM_NAME /var/lib/ambari-agent/tmp' returned 1. Templeton Smoke 
> Test (ddl cmd): Failed. : {"error":"Unauthorized connection for super-user: 
> HTTP/HOST_NAME@REALM_NAME from IP HOST_IP"}http_code <500>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18202) On UI Sometimes: after clicking delete property button, property does not deleted instead Configuration Group window is opened

2016-08-18 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18202:
---
Labels: 240RMApproved  (was: )

> On UI Sometimes: after clicking delete property button, property does not 
> deleted instead Configuration Group window is opened
> --
>
> Key: AMBARI-18202
> URL: https://issues.apache.org/jira/browse/AMBARI-18202
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.4.0
>Reporter: Antonenko Alexander
>Assignee: Antonenko Alexander
>Priority: Blocker
>  Labels: 240RMApproved
> Fix For: 2.4.0
>
> Attachments: AMBARI-18202.patch
>
>
> The Configuration Group window is opened instead of the property being deleted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18184) Hive Metastore restart failed during EU with 'Internal credentials cache error' while running kinit

2016-08-18 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18184:
---
Labels: 240RMApproved  (was: )

> Hive Metastore restart failed during EU with 'Internal credentials cache 
> error' while running kinit
> ---
>
> Key: AMBARI-18184
> URL: https://issues.apache.org/jira/browse/AMBARI-18184
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Jonathan Hurley
>Assignee: Jonathan Hurley
>Priority: Blocker
>  Labels: 240RMApproved
> Fix For: 2.4.0
>
> Attachments: AMBARI-18184.patch
>
>
> ambari-server --hash
> 8250e90dc9ebcf1bd3dbac9b9eca8a6e21e073c9
> ambari-server-2.4.0.0-1127.x86_64
> Observed this issue in one EU run with the steps below:
> # Install HDP-2.4.0.0 cluster with Ambari 2.2.1.1 (secure, HA cluster)
> # Upgrade Ambari to 2.4.0.0
> # Perform EU to 2.4.2.0 and let it complete
> # Start EU to 2.5.0.0
> Observed below error during Hive Metastore restart
> {code}
> Traceback (most recent call last):
>   File 
> \"/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py\",
>  line 254, in 
> HiveMetastore().execute()
>   File 
> \"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py\",
>  line 280, in execute
> method(env)
>   File 
> \"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py\",
>  line 696, in restart
> self.pre_upgrade_restart(env, upgrade_type=upgrade_type)
>   File 
> \"/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py\",
>  line 114, in pre_upgrade_restart
> self.upgrade_schema(env)
>   File 
> \"/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py\",
>  line 193, in upgrade_schema
> Execute(kinit_command,user=params.smokeuser)
>   File \"/usr/lib/python2.6/site-packages/resource_management/core/base.py\", 
> line 155, in __init__
> self.env.run()
>   File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/environment.py\", 
> line 160, in run
> self.run_action(resource, action)
>   File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/environment.py\", 
> line 124, in run_action
> provider_action()
>   File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py\",
>  line 273, in action_run
> tries=self.resource.tries, try_sleep=self.resource.try_sleep)
>   File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
> 71, in inner
> result = function(command, **kwargs)
>   File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
> 93, in checked_call
> tries=tries, try_sleep=try_sleep)
>   File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
> 141, in _call_wrapper
> result = _call(command, **kwargs_copy)
>   File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
> 294, in _call
> raise Fail(err_msg)
> resource_management.core.exceptions.Fail: Execution of '/usr/bin/kinit -kt 
> /etc/security/keytabs/smokeuser.headless.keytab ambari...@example.com; ' 
> returned 1. kinit: Internal credentials cache error while storing credentials 
> while getting initial credentials"
> {code}
> *A retry of the above failed task was successful and then EU proceeded to 
> completion*



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18189) All storm commands failed

2016-08-17 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18189:
---
Labels: 240RMApproved  (was: )

> All storm commands failed
> -
>
> Key: AMBARI-18189
> URL: https://issues.apache.org/jira/browse/AMBARI-18189
> Project: Ambari
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Blocker
>  Labels: 240RMApproved
> Fix For: 2.4.0
>
> Attachments: AMBARI-18189.patch
>
>
> The cluster is running with an AD + MIT setup. This looks to be an issue with 
> the AD Kerberos, as we are getting:
> {code}
> Caused by: sun.security.krb5.KrbException: Clock skew too great (37)
> at sun.security.krb5.KrbAsRep.(KrbAsRep.java:76) ~[?:1.7.0_101]
> at sun.security.krb5.KrbAsReqBuilder.send(KrbAsReqBuilder.java:316) 
> ~[?:1.7.0_101]
> at sun.security.krb5.KrbAsReqBuilder.action(KrbAsReqBuilder.java:361) 
> ~[?:1.7.0_101]
> at 
> com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:735)
>  ~[?:1.7.0_101]
> at 
> com.sun.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:584) 
> ~[?:1.7.0_101]
> {code}
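
"Clock skew too great" means a host clock drifted past the KDC's permitted window.
Keeping the nodes NTP-synced is the usual remediation; the krb5.conf setting below
(illustrative, 300 seconds is the default) only controls how much drift is
tolerated:
{code}
# Illustrative /etc/krb5.conf excerpt
[libdefaults]
  clockskew = 300
{code}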



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18157) Hive service check is failing after RU

2016-08-17 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15425008#comment-15425008
 ] 

Jayush Luniya commented on AMBARI-18157:


[~dmitriusan] can you help with getting this fix committed whenever it's ready?

> Hive service check is failing after RU
> --
>
> Key: AMBARI-18157
> URL: https://issues.apache.org/jira/browse/AMBARI-18157
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Shreya Bhat
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
>  Labels: 240RMApproved, system_test
> Fix For: 2.4.0
>
>
> {code}
> resource_management.core.exceptions.Fail: Execution of 
> '/var/lib/ambari-agent/tmp/templetonSmoke.sh $HOSTNAME 50111 
> idtest.ambari-qa.1471340206.23.pig 
> /etc/security/keytabs/smokeuser.headless.keytab true /usr/bin/kinit 
> ambari-qa@REALM_NAME /var/lib/ambari-agent/tmp' returned 1. Templeton Smoke 
> Test (ddl cmd): Failed. : {"error":"Unauthorized connection for super-user: 
> HTTP/HOST_NAME@REALM_NAME from IP HOST_IP"}http_code <500>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18157) Hive service check is failing after RU

2016-08-17 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18157:
---
Assignee: Dmitry Lysnichenko

> Hive service check is failing after RU
> --
>
> Key: AMBARI-18157
> URL: https://issues.apache.org/jira/browse/AMBARI-18157
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Shreya Bhat
>Assignee: Dmitry Lysnichenko
>Priority: Blocker
>  Labels: 240RMApproved, system_test
> Fix For: 2.4.0
>
>
> {code}
> resource_management.core.exceptions.Fail: Execution of 
> '/var/lib/ambari-agent/tmp/templetonSmoke.sh $HOSTNAME 50111 
> idtest.ambari-qa.1471340206.23.pig 
> /etc/security/keytabs/smokeuser.headless.keytab true /usr/bin/kinit 
> ambari-qa@REALM_NAME /var/lib/ambari-agent/tmp' returned 1. Templeton Smoke 
> Test (ddl cmd): Failed. : {"error":"Unauthorized connection for super-user: 
> HTTP/HOST_NAME@REALM_NAME from IP HOST_IP"}http_code <500>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18157) Hive service check is failing after RU

2016-08-17 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18157:
---
External issue ID:   (was: BUG-64471)

> Hive service check is failing after RU
> --
>
> Key: AMBARI-18157
> URL: https://issues.apache.org/jira/browse/AMBARI-18157
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Shreya Bhat
>Priority: Blocker
>  Labels: 240RMApproved, system_test
> Fix For: 2.4.0
>
>
> {code}
> resource_management.core.exceptions.Fail: Execution of 
> '/var/lib/ambari-agent/tmp/templetonSmoke.sh $HOSTNAME 50111 
> idtest.ambari-qa.1471340206.23.pig 
> /etc/security/keytabs/smokeuser.headless.keytab true /usr/bin/kinit 
> ambari-qa@REALM_NAME /var/lib/ambari-agent/tmp' returned 1. Templeton Smoke 
> Test (ddl cmd): Failed. : {"error":"Unauthorized connection for super-user: 
> HTTP/HOST_NAME@REALM_NAME from IP HOST_IP"}http_code <500>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18157) Hive service check is failing after RU

2016-08-17 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18157:
---
Labels: 240RMApproved system_test  (was: system_test)

> Hive service check is failing after RU
> --
>
> Key: AMBARI-18157
> URL: https://issues.apache.org/jira/browse/AMBARI-18157
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Shreya Bhat
>Priority: Blocker
>  Labels: 240RMApproved, system_test
> Fix For: 2.4.0
>
>
> {code}
> resource_management.core.exceptions.Fail: Execution of 
> '/var/lib/ambari-agent/tmp/templetonSmoke.sh $HOSTNAME 50111 
> idtest.ambari-qa.1471340206.23.pig 
> /etc/security/keytabs/smokeuser.headless.keytab true /usr/bin/kinit 
> ambari-qa@REALM_NAME /var/lib/ambari-agent/tmp' returned 1. Templeton Smoke 
> Test (ddl cmd): Failed. : {"error":"Unauthorized connection for super-user: 
> HTTP/HOST_NAME@REALM_NAME from IP HOST_IP"}http_code <500>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18157) Hive service check is failing after RU

2016-08-17 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18157:
---
Fix Version/s: (was: 2.5.0)
   2.4.0

> Hive service check is failing after RU
> --
>
> Key: AMBARI-18157
> URL: https://issues.apache.org/jira/browse/AMBARI-18157
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Shreya Bhat
>Priority: Blocker
>  Labels: system_test
> Fix For: 2.4.0
>
>
> {code}
> resource_management.core.exceptions.Fail: Execution of 
> '/var/lib/ambari-agent/tmp/templetonSmoke.sh $HOSTNAME 50111 
> idtest.ambari-qa.1471340206.23.pig 
> /etc/security/keytabs/smokeuser.headless.keytab true /usr/bin/kinit 
> ambari-qa@REALM_NAME /var/lib/ambari-agent/tmp' returned 1. Templeton Smoke 
> Test (ddl cmd): Failed. : {"error":"Unauthorized connection for super-user: 
> HTTP/HOST_NAME@REALM_NAME from IP HOST_IP"}http_code <500>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18182) After the upgrade of jetty, zeppelin-view fails to load

2016-08-17 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18182:
---
Labels: 240RMApproved  (was: )

> After the upgrade of jetty, zeppelin-view fails to load
> ---
>
> Key: AMBARI-18182
> URL: https://issues.apache.org/jira/browse/AMBARI-18182
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-views
>Affects Versions: 2.4.0
>Reporter: Prabhjyot Singh
>Assignee: Prabhjyot Singh
>Priority: Blocker
>  Labels: 240RMApproved
> Fix For: trunk, 2.4.0
>
> Attachments: AMBARI-18182_branch-2.4.patch
>
>
> Observed this issue after upgrading the Ambari server from 2.2.2.0 to 2.4.0.0: 
> after running the "ambari-server start" command, the logs show the errors 
> below:
> {code}
> HTTP ERROR 500
> Problem accessing /views/ZEPPELIN/1.0.0/Zepplin/. Reason:
> Server Error
> Caused by:
> java.lang.ClassCastException: org.apache.jasper.runtime.ELContextImpl cannot 
> be cast to org.apache.jasper.runtime.ELContextImpl
>   at 
> org.apache.jasper.runtime.PageContextImpl.evaluateExpression(PageContextImpl.java:994)
>   at 
> org.apache.jsp.WEB_002dINF.index_jsp._jspService(org.apache.jsp.WEB_002dINF.index_jsp:72)
>   at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:109)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>   at 
> org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:389)
>   at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:486)
>   at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:380)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>   at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:684)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:501)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:575)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1086)
>   at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:427)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1020)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
>   at org.eclipse.jetty.server.Dispatcher.forward(Dispatcher.java:276)
>   at org.eclipse.jetty.server.Dispatcher.forward(Dispatcher.java:103)
>   at 
> org.apache.ambari.view.zeppelin.ZeppelinServlet.doGet(ZeppelinServlet.java:55)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>   at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:684)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1507)
>   at 
> org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.jav
> {code}
> [~jhurley] helped look at the environment and mentioned that this is due to 
> https://github.com/apache/ambari/commit/c03b6d4b01fbc336c296c9a1a92ca1308cba6ffc



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18178) yarn capacity scheduler queue issue

2016-08-17 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18178:
---
Labels: 240RMApproved  (was: )

> yarn capacity scheduler queue issue
> ---
>
> Key: AMBARI-18178
> URL: https://issues.apache.org/jira/browse/AMBARI-18178
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-views
>Affects Versions: 2.4.0
>Reporter: Akhil PB
>Assignee: Akhil PB
>Priority: Blocker
>  Labels: 240RMApproved
> Fix For: 2.4.0
>
> Attachments: AMBARI-18178.001_branch-2.4.patch
>
>
> Configure the YARN queues using the YARN Queue Manager view. (The configuration 
> below shows how the queues were configured via the UI; they were not set through 
> the configs under the "YARN" service.)
> {code}
> yarn.scheduler.capacity.root.hive.capacity=80
> yarn.scheduler.capacity.root.hive.maximum-capacity=80
> yarn.scheduler.capacity.root.queues=default,hive
> yarn.scheduler.capacity.root.hive.queues=microstrategy,tpcds,tpch
> yarn.scheduler.capacity.root.default.capacity=20
> yarn.scheduler.capacity.root.hive.tpcds.capacity=25
> yarn.scheduler.capacity.root.hive.tpcds.maximum-capacity=25
> yarn.scheduler.capacity.root.hive.tpch.capacity=25
> yarn.scheduler.capacity.root.hive.tpch.maximum-capacity=25
> yarn.scheduler.capacity.root.hive.microstrategy.capacity=50
> yarn.scheduler.capacity.root.hive.microstrategy.maximum-capacity=50
> {code}
> The error below is thrown while refreshing the queues.
> {code}
> esource_management.core.exceptions.Fail: Execution of '/usr/bin/kinit -kt 
> /etc/security/keytabs/rm.service.keytab 
> rm/ctr-e25-1471039652053-0001-01-02.hwx.s...@hwx.vibgyor.com; export 
> HADOOP_LIBEXEC_DIR=/usr/hdp/current/hadoop-client/libexec && 
> /usr/hdp/current/hadoop-yarn-resourcemanager/bin/yarn rmadmin -refreshQueues' 
> returned 255. 16/08/16 05:01:39 INFO 
> client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
> 16/08/16 05:01:39 WARN retry.RetryInvocationHandler: Exception while invoking 
> ResourceManagerAdministrationProtocolPBClientImpl.refreshQueues over rm2. Not 
> retrying because try once and fail.
> org.apache.hadoop.yarn.exceptions.YarnException: java.io.IOException: Failed 
> to re-init queues
>   at 
> org.apache.hadoop.yarn.ipc.RPCUtil.getRemoteException(RPCUtil.java:38)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.logAndWrapException(AdminService.java:762)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshQueues(AdminService.java:398)
>   at 
> org.apache.hadoop.yarn.server.api.impl.pb.service.ResourceManagerAdministrationProtocolPBServiceImpl.refreshQueues(ResourceManagerAdministrationProtocolPBServiceImpl.java:102)
>   at 
> org.apache.hadoop.yarn.proto.ResourceManagerAdministrationProtocol$ResourceManagerAdministrationProtocolService$2.callBlockingMethod(ResourceManagerAdministrationProtocol.java:239)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)
> Caused by: java.io.IOException: Failed to re-init queues
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.reinitialize(CapacityScheduler.java:364)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshQueues(AdminService.java:388)
>   ... 10 more
> Caused by: java.lang.NumberFormatException: For input string: "1.0"
>   at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
>   at java.lang.Integer.parseInt(Integer.java:580)
>   at java.lang.Integer.parseInt(Integer.java:615)
>   at org.apache.hadoop.conf.Configuration.getInt(Configuration.java:1258)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration.getMaximumSystemApplications(CapacitySchedulerConfiguration.java:289)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.setupQueueConfigs(LeafQueue.java:166)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.<init>(LeafQueue.java:139)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.parseQueue(CapacityScheduler.java:629)
>   at 
> 

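The root cause visible in the trace is that an int-typed capacity-scheduler property reached the ResourceManager with the float-formatted value "1.0": {{Configuration.getInt}} delegates to {{Integer.parseInt}}, which rejects it. A minimal sketch (a hypothetical pre-check, not Ambari or view code; the property name is taken from {{getMaximumSystemApplications}} in the trace) of validating such values before calling {{rmadmin -refreshQueues}}:
{code}
# Hypothetical pre-check (not Ambari/view code): int-typed capacity-scheduler
# properties must hold integral strings, because the ResourceManager parses
# them with Integer.parseInt and rejects values such as "1.0".
INT_TYPED_PROPERTIES = ("yarn.scheduler.capacity.maximum-applications",)

def non_integral(props, int_props=INT_TYPED_PROPERTIES):
    bad = {}
    for name in int_props:
        value = props.get(name)
        if value is None:
            continue
        try:
            int(value.strip())
        except ValueError:
            bad[name] = value
    return bad

# A value like "1.0" is flagged here instead of failing refreshQueues later.
print(non_integral({"yarn.scheduler.capacity.maximum-applications": "1.0"}))
{code}
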
[jira] [Updated] (AMBARI-18172) Hive Service check is failing after moving webhcat server

2016-08-17 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18172:
---
Labels: 240RMApproved  (was: )

> Hive Service check is failing after moving webhcat server
> -
>
> Key: AMBARI-18172
> URL: https://issues.apache.org/jira/browse/AMBARI-18172
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.4.0
>Reporter: Zhe (Joe) Wang
>Assignee: Zhe (Joe) Wang
>Priority: Blocker
>  Labels: 240RMApproved
> Fix For: 2.4.0
>
> Attachments: AMBARI-18172.v0.patch, AMBARI-18172.v1.patch
>
>
> Moving the WebHCat Server should update the proxyuser host entries in 
> core-site for the webhcat user in a non-kerberized environment. Not doing so 
> causes this issue.
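
For illustration, a minimal sketch of the kind of core-site update the move requires; the {{hadoop.proxyuser.<user>.hosts}} naming follows the standard Hadoop proxyuser convention and the user name below is an assumption, not taken from the patch:
{code}
# Sketch only: recompute a proxyuser hosts entry so it covers the host the
# WebHCat Server was moved to. Property/user names are assumptions.
def updated_proxyuser_hosts(core_site, proxy_user, webhcat_hosts):
    key = "hadoop.proxyuser.%s.hosts" % proxy_user
    current = core_site.get(key, "")
    if current.strip() == "*":
        return {}  # wildcard already allows every host
    hosts = set(h for h in current.split(",") if h)
    hosts.update(webhcat_hosts)
    return {key: ",".join(sorted(hosts))}

print(updated_proxyuser_hosts(
    {"hadoop.proxyuser.hcat.hosts": "old-host.example.com"},
    "hcat", ["new-host.example.com"]))
{code}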



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-16028) Namenode marked as INITIAL standby could potentially never start if other namenode is down

2016-08-17 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-16028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-16028:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Namenode marked as INITIAL standby could potentially never start if other 
> namenode is down
> --
>
> Key: AMBARI-16028
> URL: https://issues.apache.org/jira/browse/AMBARI-16028
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: 2.2.0
>Reporter: Jayush Luniya
>Assignee: Jayush Luniya
>Priority: Critical
> Fix For: 2.2-next
>
> Attachments: AMBARI-16028-trunk.patch
>
>
> *Issue:*
> # During Namenode HA blueprint deployment, we configure the name nodes to 
> start in active/standby mode based on the following properties
> {code}
>  {
> "hadoop-env": {
>   "properties" : {
> "dfs_ha_initial_namenode_active" : "host1",
> "dfs_ha_initial_namenode_standby" : "host2”
>   }
> }
>   }
> {code}
> # The current logic is to always bootstrap the name node marked as standby. 
> # This will lead to the NameNode marked as standby never starting in the 
> following situation:
> - Cluster is deployed successfully
> - Both NameNodes are stopped
> - Start the NameNode marked as standby. The NameNode will never start.
> - This is because the standby NameNode will try to bootstrap again. 
> - However, to bootstrap a NameNode, an active NameNode is required. Per the 
> HDFS logic, the first step of bootstrapping is to connect to the active 
> NameNode. 
> - Also, there is no need to bootstrap here, as the NameNode should already be 
> bootstrapped and should come back up as "Active" (see the sketch below).
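
A minimal sketch (not the actual Ambari change) of the guard described in the last two points: only run {{hdfs namenode -bootstrapStandby}} when the standby's metadata directories are still empty, so a restart of an already-bootstrapped standby does not depend on a live active NameNode.
{code}
# Minimal sketch (not the Ambari patch): skip bootstrapping when the standby's
# NameNode metadata directories already contain a formatted namespace.
import os
import subprocess

def start_standby_namenode(name_dirs,
                           bootstrap_cmd=("hdfs", "namenode",
                                          "-bootstrapStandby", "-nonInteractive")):
    already_formatted = all(
        os.path.isdir(os.path.join(d, "current")) and os.listdir(os.path.join(d, "current"))
        for d in name_dirs)
    if not already_formatted:
        # Bootstrapping copies the namespace from the active NameNode,
        # so this branch requires the active NameNode to be up.
        subprocess.check_call(bootstrap_cmd)
    # ...then start the NameNode process as usual (omitted).
{code}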



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18172) Hive Service check is failing after moving webhcat server

2016-08-17 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15423973#comment-15423973
 ] 

Jayush Luniya commented on AMBARI-18172:


Please send email to d...@apache.ambari.org for visibility and formal approval 
before committing to branch-2.4

> Hive Service check is failing after moving webhcat server
> -
>
> Key: AMBARI-18172
> URL: https://issues.apache.org/jira/browse/AMBARI-18172
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.4.0
>Reporter: Zhe (Joe) Wang
>Assignee: Zhe (Joe) Wang
>Priority: Blocker
> Fix For: 2.4.0
>
> Attachments: AMBARI-18172.v0.patch
>
>
> Moving the WebHCat Server should update the proxyuser host entries in 
> core-site for the webhcat user in a non-kerberized environment. Not doing so 
> causes this issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18157) Hive service check is failing after RU

2016-08-17 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18157:
---
Fix Version/s: (was: 2.4.0)
   2.5.0

> Hive service check is failing after RU
> --
>
> Key: AMBARI-18157
> URL: https://issues.apache.org/jira/browse/AMBARI-18157
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Shreya Bhat
>Priority: Blocker
>  Labels: system_test
> Fix For: 2.5.0
>
>
> {code}
> resource_management.core.exceptions.Fail: Execution of 
> '/var/lib/ambari-agent/tmp/templetonSmoke.sh $HOSTNAME 50111 
> idtest.ambari-qa.1471340206.23.pig 
> /etc/security/keytabs/smokeuser.headless.keytab true /usr/bin/kinit 
> ambari-qa@REALM_NAME /var/lib/ambari-agent/tmp' returned 1. Templeton Smoke 
> Test (ddl cmd): Failed. : {"error":"Unauthorized connection for super-user: 
> HTTP/HOST_NAME@REALM_NAME from IP HOST_IP"}http_code <500>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18134) Hive metastore stop failed

2016-08-17 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18134:
---
Fix Version/s: (was: 2.4.0)
   2.5.0

> Hive metastore stop failed
> --
>
> Key: AMBARI-18134
> URL: https://issues.apache.org/jira/browse/AMBARI-18134
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Shreya Bhat
>Priority: Blocker
>  Labels: system_test
> Fix For: 2.5.0
>
>
> Hive metastore start is failing after install with:
> {code}
>  "Traceback (most recent call last):\n  File 
> \"/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py\",
>  line 254, in \nHiveMetastore().execute()\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py\",
>  line 280, in execute\nmethod(env)\n  File 
> \"/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py\",
>  line 59, in start\nself.configure(env)\n  File 
> \"/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py\",
>  line 73, in configure\nhive(name = 'metastore')\n  File 
> \"/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py\", line 
> 89, in thunk\nreturn fn(*args, **kwargs)\n  File 
> \"/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive.py\",
>  line 320, in hive\nuser = params.hive_user\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/base.py\", line 
> 155, in __init__\nself.env.run()\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/environment.py\", 
> line 160, in run\nself.run_action(resource, action)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/environment.py\", 
> line 124, in run_action\nprovider_action()\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py\",
>  line 273, in action_run\ntries=self.resource.tries, 
> try_sleep=self.resource.try_sleep)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
> 71, in inner\nresult = function(command, **kwargs)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
> 93, in checked_call\ntries=tries, try_sleep=try_sleep)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
> 141, in _call_wrapper\nresult = _call(command, **kwargs_copy)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
> 294, in _call\nraise 
> Fail(err_msg)\nresource_management.core.exceptions.Fail: Execution of 'export 
> HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server ; 
> /usr/hdp/current/hive-server2-hive2/bin/schematool -initSchema -dbType mysql 
> -userName hive -passWord [PROTECTED] -verbose' returned 1. SLF4J: Class path 
> contains multiple SLF4J bindings.\nSLF4J: Found binding in 
> [jar:file:/grid/0/hdp/2.5.0.0-1189/hive2/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]\nSLF4J:
>  Found binding in 
> [jar:file:/grid/0/hdp/2.5.0.0-1189/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]\nSLF4J:
>  See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.\nSLF4J: Actual binding is of type 
> [org.apache.logging.slf4j.Log4jLoggerFactory]\nMetastore connection URL:\t 
> jdbc:mysql://nat-s11-4-sies-ambari-hosts-6-3.openstacklocal/hive\nMetastore 
> Connection Driver :\t com.mysql.jdbc.Driver\nMetastore connection User:\t 
> hive\norg.apache.hadoop.hive.metastore.HiveMetaException: Failed to load 
> driver\nUnderlying cause: java.lang.ClassNotFoundException : 
> com.mysql.jdbc.Driver\norg.apache.hadoop.hive.metastore.HiveMetaException: 
> Failed to load driver\n\tat 
> org.apache.hive.beeline.HiveSchemaHelper.getConnectionToMetastore(HiveSchemaHelper.java:82)\n\tat
>  
> org.apache.hive.beeline.HiveSchemaTool.getConnectionToMetastore(HiveSchemaTool.java:133)\n\tat
>  
> org.apache.hive.beeline.HiveSchemaTool.testConnectionToMetastore(HiveSchemaTool.java:187)\n\tat
>  org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:291)\n\tat 
> org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:277)\n\tat 
> org.apache.hive.beeline.HiveSchemaTool.main(HiveSchemaTool.java:526)\n\tat 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\n\tat 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\n\tat
>  
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n\tat
>  java.lang.reflect.Method.invoke(Method.java:498)\n\tat 
> 

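The underlying failure is {{ClassNotFoundException: com.mysql.jdbc.Driver}}: the MySQL JDBC driver is not on the schematool classpath. The driver is normally registered with {{ambari-server setup --jdbc-db=mysql --jdbc-driver=/path/to/mysql-connector-java.jar}}; a minimal sketch of a pre-flight check (the directories below are assumptions for illustration, not Ambari's actual classpath):
{code}
# Sketch only: look for the MySQL connector jar in directories schematool is
# likely to pick up. The directory list is an assumption.
import glob
import os

def mysql_connector_jars(lib_dirs=("/usr/hdp/current/hive-server2-hive2/lib",
                                   "/usr/share/java")):
    found = []
    for d in lib_dirs:
        found.extend(glob.glob(os.path.join(d, "mysql-connector-java*.jar")))
    return found

jars = mysql_connector_jars()
print(jars or "mysql-connector-java*.jar not found; "
              "schematool will fail to load com.mysql.jdbc.Driver")
{code}
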
[jira] [Updated] (AMBARI-18080) EU/RU merges empty storm.topology.submission.notifier.plugin.class which cause service check to fail

2016-08-17 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18080:
---
Fix Version/s: (was: 2.4.0)
   2.5.0

> EU/RU merges empty storm.topology.submission.notifier.plugin.class which 
> cause service check to fail
> 
>
> Key: AMBARI-18080
> URL: https://issues.apache.org/jira/browse/AMBARI-18080
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Dmytro Grinenko
>Priority: Blocker
> Fix For: 2.5.0
>
>
> *Steps:*
> # Deploy HDP-2.2.9.0 cluster with Ambari 2.2.1.1
> # Upgrade Ambari to 2.4.0.0 (at this point 
> storm.topology.submission.notifier.plugin.class does not exist)
> # Perform EU to HDP-2.4.2.0 (this is where 
> storm.topology.submission.notifier.plugin.class gets added)
> # Perform another EU to 2.5.0.0 and observe the same error during the Storm 
> service check



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18173) HSI is not started even when "Enable Interactive Query" is set to Yes in installer wizard

2016-08-16 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18173:
---
Labels: 240RMApproved  (was: )

> HSI is not started even when "Enable Interactive Query" is set to Yes in 
> installer wizard
> -
>
> Key: AMBARI-18173
> URL: https://issues.apache.org/jira/browse/AMBARI-18173
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.4.0
>Reporter: Jaimin D Jetly
>Assignee: Jaimin D Jetly
>Priority: Blocker
>  Labels: 240RMApproved
> Fix For: 2.4.0
>
> Attachments: AMBARI-18173.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18167) RU: Kafka brokers restart was stopped during downgrade cluster

2016-08-16 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15423391#comment-15423391
 ] 

Jayush Luniya commented on AMBARI-18167:


+1 for the patch. The Hadoop QA failure is not related.

> RU: Kafka brokers restart was stopped during downgrade cluster
> --
>
> Key: AMBARI-18167
> URL: https://issues.apache.org/jira/browse/AMBARI-18167
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Mugdha Varadkar
>Assignee: Mugdha Varadkar
>Priority: Blocker
>  Labels: 240RMApproved
> Fix For: 2.4.0
>
> Attachments: AMBARI-18167.patch
>
>
> Kafka broker restart failed due to below error:
> {noformat}
> raise Fail("Configuration parameter '" + self.name + "' was not found in 
> configurations dictionary!")
> resource_management.core.exceptions.Fail: Configuration parameter 
> 'xasecure.audit.destination.db' was not found in configurations dictionary!
> {noformat}
> Solution:
> During RU downgrade,  runs on downgrade as well, which deleted 
> {{xasecure.audit.destination.db}} config property. 
> {noformat}
> 
>id="hdp_2_5_0_0_remove_ranger_kafka_audit_db" />
> 
> {noformat}
> Need to override it with a blank  element.
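
On the agent side, the immediate failure is a strict lookup in the configurations dictionary raising {{Fail}} for a property that the downgrade removed. A minimal sketch of the tolerant-lookup pattern (in Ambari scripts this is what the {{default()}} helper provides; the config type below is an assumption for illustration):
{code}
# Sketch only: fall back to a default instead of failing when a property such
# as xasecure.audit.destination.db was deleted by an upgrade/downgrade pack.
def lookup(configurations, config_type, name, default_value=None):
    return configurations.get(config_type, {}).get(name, default_value)

configs = {"ranger-kafka-audit": {}}  # property already removed by the downgrade
print(lookup(configs, "ranger-kafka-audit", "xasecure.audit.destination.db", "false"))
{code}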



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18167) RU: Kafka brokers restart was stopped during downgrade cluster

2016-08-16 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18167:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> RU: Kafka brokers restart was stopped during downgrade cluster
> --
>
> Key: AMBARI-18167
> URL: https://issues.apache.org/jira/browse/AMBARI-18167
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Mugdha Varadkar
>Assignee: Mugdha Varadkar
>Priority: Blocker
>  Labels: 240RMApproved
> Fix For: 2.4.0
>
> Attachments: AMBARI-18167.patch
>
>
> Kafka broker restart failed due to below error:
> {noformat}
> raise Fail("Configuration parameter '" + self.name + "' was not found in 
> configurations dictionary!")
> resource_management.core.exceptions.Fail: Configuration parameter 
> 'xasecure.audit.destination.db' was not found in configurations dictionary!
> {noformat}
> Solution:
> During RU downgrade,  runs on downgrade as well, which deleted 
> {{xasecure.audit.destination.db}} config property. 
> {noformat}
> 
>id="hdp_2_5_0_0_remove_ranger_kafka_audit_db" />
> 
> {noformat}
> Need to override it with a blank  element.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18167) RU: Kafka brokers restart was stopped during downgrade cluster

2016-08-16 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15423396#comment-15423396
 ] 

Jayush Luniya commented on AMBARI-18167:


Trunk
commit 43457696137486daa338f39cef5e899554f025b3
Author: Jayush Luniya 
Date:   Tue Aug 16 14:18:13 2016 -0700

AMBARI-18167: RU: Kafka brokers restart was stopped during downgrade 
cluster (Mugdha Varadkar via jluniya)

Branch-2.4
commit 473733c4e73a5328f75798936e8bdf89096d03db
Author: Jayush Luniya 
Date:   Tue Aug 16 14:18:13 2016 -0700

AMBARI-18167: RU: Kafka brokers restart was stopped during downgrade 
cluster (Mugdha Varadkar via jluniya)

> RU: Kafka brokers restart was stopped during downgrade cluster
> --
>
> Key: AMBARI-18167
> URL: https://issues.apache.org/jira/browse/AMBARI-18167
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Mugdha Varadkar
>Assignee: Mugdha Varadkar
>Priority: Blocker
>  Labels: 240RMApproved
> Fix For: 2.4.0
>
> Attachments: AMBARI-18167.patch
>
>
> Kafka broker restart failed due to below error:
> {noformat}
> raise Fail("Configuration parameter '" + self.name + "' was not found in 
> configurations dictionary!")
> resource_management.core.exceptions.Fail: Configuration parameter 
> 'xasecure.audit.destination.db' was not found in configurations dictionary!
> {noformat}
> Solution:
> During RU downgrade,  runs on downgrade as well, which deleted 
> {{xasecure.audit.destination.db}} config property. 
> {noformat}
> 
>id="hdp_2_5_0_0_remove_ranger_kafka_audit_db" />
> 
> {noformat}
> Need to override it with a blank  element.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18151) Oozie Hive actions fail when Atlas is installed since Atlas Hive Hooks need to be copied to Oozie Share Lib in HDFS

2016-08-16 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18151:
---
Labels: 240RMApproved  (was: )

> Oozie Hive actions fail when Atlas is installed since Atlas Hive Hooks need 
> to be copied to Oozie Share Lib in HDFS
> ---
>
> Key: AMBARI-18151
> URL: https://issues.apache.org/jira/browse/AMBARI-18151
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: 2.4.0
>Reporter: Alejandro Fernandez
>Assignee: Alejandro Fernandez
>Priority: Blocker
>  Labels: 240RMApproved
> Fix For: 2.4.0
>
> Attachments: AMBARI-18151.patch
>
>
> After the Falcon-Atlas hook has been enabled, the following properties are 
> added.
> startup.properties
> {noformat}
> *.application.services=org.apache.falcon.security.AuthenticationInitializationService,\
>   org.apache.falcon.workflow.WorkflowJobEndNotificationService, \
>   org.apache.falcon.service.ProcessSubscriberService,\
>   org.apache.falcon.extensions.ExtensionService,\
>   org.apache.falcon.service.LifecyclePolicyMap,\
>   org.apache.falcon.entity.store.ConfigurationStore,\
>   org.apache.falcon.rerun.service.RetryService,\
>   org.apache.falcon.rerun.service.LateRunService,\
>   org.apache.falcon.service.LogCleanupService,\
>   org.apache.falcon.metadata.MetadataMappingService,\
> org.apache.atlas.falcon.service.AtlasService
> {noformat}
> falcon-env.sh
> {noformat}
> # Add the Atlas Falcon hook to the Falcon classpath
> export 
> FALCON_EXTRA_CLASS_PATH=/usr/hdp/current/atlas-client/hook/falcon/*:${FALCON_EXTRA_CLASS_PATH}
> {noformat} 
> Whenever Oozie submits Hive actions, they fail and the application logs show
> {noformat}
> hive.exec.post.hooks Class not found:org.apache.atlas.hive.hook.HiveHook
> FAILED: Hive Internal Error: 
> java.lang.ClassNotFoundException(org.apache.atlas.hive.hook.HiveHook)
> java.lang.ClassNotFoundException: org.apache.atlas.hive.hook.HiveHook
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:348)
>   at org.apache.hadoop.hive.ql.hooks.HookUtils.getHooks(HookUtils.java:60)
>   at org.apache.hadoop.hive.ql.Driver.getHooks(Driver.java:1384)
>   at org.apache.hadoop.hive.ql.Driver.getHooks(Driver.java:1368)
>   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1595)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1289)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1156)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1146)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:216)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:168)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:379)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:314)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:412)
>   at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:428)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:717)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:684)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:624)
>   at org.apache.oozie.action.hadoop.HiveMain.runHive(HiveMain.java:335)
>   at org.apache.oozie.action.hadoop.HiveMain.run(HiveMain.java:312)
>   at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:51)
>   at org.apache.oozie.action.hadoop.HiveMain.main(HiveMain.java:69)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:242)
>   at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
>   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at 

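The {{ClassNotFoundException}} means the Atlas Hive hook jars are not on the classpath Oozie builds for Hive actions from its share lib. A minimal sketch (the hook directory, share-lib path, and Oozie URL are assumptions, not the actual patch) of copying the hook jars into the share lib and refreshing it:
{code}
# Sketch only: push the Atlas Hive hook jars into the Oozie share lib and
# refresh it. All paths/URLs below are placeholders for illustration.
import glob
import subprocess

ATLAS_HIVE_HOOK_DIR = "/usr/hdp/current/atlas-client/hook/hive"                # assumed hook location
OOZIE_SHARELIB_HIVE = "/user/oozie/share/lib/lib_20160816000000/hive"          # hypothetical timestamped dir
OOZIE_URL = "http://oozie-host.example.com:11000/oozie"                        # hypothetical

def copy_atlas_hook_to_sharelib():
    jars = glob.glob(ATLAS_HIVE_HOOK_DIR + "/*.jar")
    subprocess.check_call(["hdfs", "dfs", "-put", "-f"] + jars + [OOZIE_SHARELIB_HIVE])
    # Tell Oozie to reload the share lib so new actions see the jars.
    subprocess.check_call(["oozie", "admin", "-oozie", OOZIE_URL, "-sharelibupdate"])
{code}
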
[jira] [Updated] (AMBARI-18165) Hbase RegionServer start fails

2016-08-16 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18165:
---
Labels: 240RMApproved  (was: )

> Hbase RegionServer start fails
> --
>
> Key: AMBARI-18165
> URL: https://issues.apache.org/jira/browse/AMBARI-18165
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Vitaly Brodetskyi
>Assignee: Vitaly Brodetskyi
>Priority: Blocker
>  Labels: 240RMApproved
> Fix For: 2.4.0
>
> Attachments: AMBARI-18165.patch
>
>
> {code}
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/hbase_regionserver.py",
>  line 198, in 
> HbaseRegionServer().execute()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 280, in execute
> method(env)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 720, in restart
> self.start(env, upgrade_type=upgrade_type)
>   File 
> "/var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/hbase_regionserver.py",
>  line 124, in start
> self.post_start(env, upgrade_type=upgrade_type)
>   File 
> "/var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/hbase_regionserver.py",
>  line 89, in post_start
> self.apply_atlas_acl(params.hbase_user)
>   File 
> "/var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/hbase_regionserver.py",
>  line 114, in apply_atlas_acl
> shell.checked_call(format("{kinit_cmd}; {perm_cmd}"), 
> user=params.hbase_user, tries=10, try_sleep=10)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 71, in inner
> result = function(command, **kwargs)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 93, in checked_call
> tries=tries, try_sleep=try_sleep)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 141, in _call_wrapper
> result = _call(command, **kwargs_copy)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 294, in _call
> raise Fail(err_msg)
> resource_management.core.exceptions.Fail: Execution of '/usr/bin/kinit -kt 
> /etc/security/keytabs/hbase.service.keytab 
> hbase/nat-r6-gjss-ambari-blueprints-4re-1-1.openstacklo...@example.com; echo 
> "grant 'atlas', 'RWXCA', 'atlas_titan'" | hbase shell -n' returned 1. 
>  Hortonworks #
> This is MOTD message, added for testing in qe infra
> ERROR ArgumentError: Can't find a table: atlas_titan
> {code}
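
The grant fails because the {{atlas_titan}} table does not exist yet when the RegionServer post-start hook runs. A minimal sketch (not the actual patch; kinit handling omitted) of making the ACL step tolerant by checking for the table first with the hbase shell:
{code}
# Sketch only: skip the grant when atlas_titan has not been created yet
# (e.g. Atlas has not started), instead of failing the RegionServer start.
import subprocess

def grant_if_table_exists(table="atlas_titan", user="atlas", perms="RWXCA"):
    exists_out = subprocess.check_output(
        "echo \"exists '%s'\" | hbase shell -n" % table, shell=True)
    if b"does exist" in exists_out:
        subprocess.check_call(
            "echo \"grant '%s', '%s', '%s'\" | hbase shell -n" % (user, perms, table),
            shell=True)
    # else: table not there yet; leave the ACL to a later retry.
{code}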



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18135) Enable Namenode HA failing at install journal nodes with cluster operator user

2016-08-16 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18135:
---
Labels: 240RMApproved rbac  (was: rbac)

> Enable Namenode HA failing at install journal nodes with cluster operator user
> --
>
> Key: AMBARI-18135
> URL: https://issues.apache.org/jira/browse/AMBARI-18135
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Shreya Bhat
>Assignee: Myroslav Papirkovskyi
>Priority: Blocker
>  Labels: 240RMApproved, rbac
> Fix For: 2.4.0
>
> Attachments: AMBARI-18135.patch
>
>
> Enabling NameNode HA fails at the JournalNode install step for a cluster 
> operator user. Looking at the network tab in the Chrome browser, there appears 
> to be a 403 response.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (AMBARI-18135) Enable Namenode HA failing at install journal nodes with cluster operator user

2016-08-16 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18135:
---
Comment: was deleted

(was: Pushing out non-blocker JIRAs from 2.4.0 to 2.5.0
)

> Enable Namenode HA failing at install journal nodes with cluster operator user
> --
>
> Key: AMBARI-18135
> URL: https://issues.apache.org/jira/browse/AMBARI-18135
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Shreya Bhat
>Assignee: Myroslav Papirkovskyi
>Priority: Blocker
>  Labels: 240RMApproved, rbac
> Fix For: 2.4.0
>
> Attachments: AMBARI-18135.patch
>
>
> Enabling NameNode HA fails at the JournalNode install step for a cluster 
> operator user. Looking at the network tab in the Chrome browser, there appears 
> to be a 403 response.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18167) RU: Kafka brokers restart was stopped during downgrade cluster

2016-08-16 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18167:
---
Labels: 240RMApproved  (was: )

> RU: Kafka brokers restart was stopped during downgrade cluster
> --
>
> Key: AMBARI-18167
> URL: https://issues.apache.org/jira/browse/AMBARI-18167
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Mugdha Varadkar
>Assignee: Mugdha Varadkar
>Priority: Blocker
>  Labels: 240RMApproved
> Fix For: 2.4.0
>
> Attachments: AMBARI-18167.patch
>
>
> Kafka broker restart failed due to below error:
> {noformat}
> raise Fail("Configuration parameter '" + self.name + "' was not found in 
> configurations dictionary!")
> resource_management.core.exceptions.Fail: Configuration parameter 
> 'xasecure.audit.destination.db' was not found in configurations dictionary!
> {noformat}
> Solution:
> During RU downgrade,  runs on downgrade as well, which deleted 
> {{xasecure.audit.destination.db}} config property. 
> {noformat}
> 
>id="hdp_2_5_0_0_remove_ranger_kafka_audit_db" />
> 
> {noformat}
> Need to override it with a blank  element.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18168) Create Atlas log4j changes for failed notifications

2016-08-16 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18168:
---
Labels: 240RMApproved  (was: )

> Create Atlas log4j changes for failed notifications
> ---
>
> Key: AMBARI-18168
> URL: https://issues.apache.org/jira/browse/AMBARI-18168
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Nahappan Somasundaram
>Assignee: Nahappan Somasundaram
>Priority: Blocker
>  Labels: 240RMApproved
> Fix For: 2.4.0
>
> Attachments: rb51144.patch
>
>
> Create Atlas log4j in 
> ambari-server/src/main/resources/common-services/ATLAS/0.7.0.2.5/configuration/atlas-log4j.xml
>  which is based on the earlier version, but makes the changes specified in
> https://issues.apache.org/jira/browse/ATLAS-
> No need to change EU/RU config packs since HDP 2.3/2.4 -> HDP 2.5 requires 
> removing Atlas service and all of its configs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-16764) When "Use Local Repository" option is selected, the stack cannot be changed

2016-08-16 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-16764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15423174#comment-15423174
 ] 

Jayush Luniya commented on AMBARI-16764:


[~onechiporenko]
Moving this out of 2.4.0 to 2.5.0. Please retriage if required. 

> When "Use Local Repository" option is selected, the stack cannot be changed
> ---
>
> Key: AMBARI-16764
> URL: https://issues.apache.org/jira/browse/AMBARI-16764
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.4.0
>Reporter: Oleg Nechiporenko
>Assignee: Oleg Nechiporenko
>Priority: Critical
> Fix For: 2.5.0
>
>
> Clicking on a stack version tab on the "Select Stack" page does nothing when 
> the "Use Local Repository" option is selected.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-16764) When "Use Local Repository" option is selected, the stack cannot be changed

2016-08-16 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-16764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-16764:
---
Fix Version/s: (was: 2.4.0)
   2.5.0

> When "Use Local Repository" option is selected, the stack cannot be changed
> ---
>
> Key: AMBARI-16764
> URL: https://issues.apache.org/jira/browse/AMBARI-16764
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.4.0
>Reporter: Oleg Nechiporenko
>Assignee: Oleg Nechiporenko
>Priority: Critical
> Fix For: 2.5.0
>
>
> Clicking on a stack version tab on the "Select Stack" page does nothing when 
> the "Use Local Repository" option is selected.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18037) Supervisor service check and restart is failing

2016-08-16 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15423061#comment-15423061
 ] 

Jayush Luniya commented on AMBARI-18037:


[~shreyabh...@gmail.com]
Can you assign the JIRA to the developer who is working on this?

> Supervisor service check and restart is failing
> ---
>
> Key: AMBARI-18037
> URL: https://issues.apache.org/jira/browse/AMBARI-18037
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Shreya Bhat
>Priority: Blocker
>  Labels: system_test
> Fix For: 2.4.0
>
>
> Storm service checks (and restarts) fail with:
> "Configuration parameter '\" + self.name + \"' was not found in 
> configurations dictionary!\")\nresource_management.core.exceptions.Fail: 
> Configuration parameter 'storm_principal_name' was not found in 
> configurations dictionary!",
> Looks related to: https://issues.apache.org/jira/browse/AMBARI-17772



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18134) Hive metastore stop failed

2016-08-16 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15423057#comment-15423057
 ] 

Jayush Luniya commented on AMBARI-18134:


[~shreyabh...@gmail.com]
Can you assign the JIRA to the developer who is working on this?


> Hive metastore stop failed
> --
>
> Key: AMBARI-18134
> URL: https://issues.apache.org/jira/browse/AMBARI-18134
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Shreya Bhat
>Priority: Blocker
>  Labels: system_test
> Fix For: 2.4.0
>
>
> Hive metastore start is failing after install with:
> {code}
>  "Traceback (most recent call last):\n  File 
> \"/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py\",
>  line 254, in \nHiveMetastore().execute()\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py\",
>  line 280, in execute\nmethod(env)\n  File 
> \"/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py\",
>  line 59, in start\nself.configure(env)\n  File 
> \"/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py\",
>  line 73, in configure\nhive(name = 'metastore')\n  File 
> \"/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py\", line 
> 89, in thunk\nreturn fn(*args, **kwargs)\n  File 
> \"/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive.py\",
>  line 320, in hive\nuser = params.hive_user\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/base.py\", line 
> 155, in __init__\nself.env.run()\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/environment.py\", 
> line 160, in run\nself.run_action(resource, action)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/environment.py\", 
> line 124, in run_action\nprovider_action()\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py\",
>  line 273, in action_run\ntries=self.resource.tries, 
> try_sleep=self.resource.try_sleep)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
> 71, in inner\nresult = function(command, **kwargs)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
> 93, in checked_call\ntries=tries, try_sleep=try_sleep)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
> 141, in _call_wrapper\nresult = _call(command, **kwargs_copy)\n  File 
> \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 
> 294, in _call\nraise 
> Fail(err_msg)\nresource_management.core.exceptions.Fail: Execution of 'export 
> HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server ; 
> /usr/hdp/current/hive-server2-hive2/bin/schematool -initSchema -dbType mysql 
> -userName hive -passWord [PROTECTED] -verbose' returned 1. SLF4J: Class path 
> contains multiple SLF4J bindings.\nSLF4J: Found binding in 
> [jar:file:/grid/0/hdp/2.5.0.0-1189/hive2/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]\nSLF4J:
>  Found binding in 
> [jar:file:/grid/0/hdp/2.5.0.0-1189/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]\nSLF4J:
>  See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.\nSLF4J: Actual binding is of type 
> [org.apache.logging.slf4j.Log4jLoggerFactory]\nMetastore connection URL:\t 
> jdbc:mysql://nat-s11-4-sies-ambari-hosts-6-3.openstacklocal/hive\nMetastore 
> Connection Driver :\t com.mysql.jdbc.Driver\nMetastore connection User:\t 
> hive\norg.apache.hadoop.hive.metastore.HiveMetaException: Failed to load 
> driver\nUnderlying cause: java.lang.ClassNotFoundException : 
> com.mysql.jdbc.Driver\norg.apache.hadoop.hive.metastore.HiveMetaException: 
> Failed to load driver\n\tat 
> org.apache.hive.beeline.HiveSchemaHelper.getConnectionToMetastore(HiveSchemaHelper.java:82)\n\tat
>  
> org.apache.hive.beeline.HiveSchemaTool.getConnectionToMetastore(HiveSchemaTool.java:133)\n\tat
>  
> org.apache.hive.beeline.HiveSchemaTool.testConnectionToMetastore(HiveSchemaTool.java:187)\n\tat
>  org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:291)\n\tat 
> org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:277)\n\tat 
> org.apache.hive.beeline.HiveSchemaTool.main(HiveSchemaTool.java:526)\n\tat 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\n\tat 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\n\tat
>  
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n\tat
>  

[jira] [Commented] (AMBARI-18157) Hive service check is failing after RU

2016-08-16 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15423058#comment-15423058
 ] 

Jayush Luniya commented on AMBARI-18157:


[~shreyabh...@gmail.com]
Can you assign the JIRA to the developer who is working on this?

> Hive service check is failing after RU
> --
>
> Key: AMBARI-18157
> URL: https://issues.apache.org/jira/browse/AMBARI-18157
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Shreya Bhat
>Priority: Blocker
>  Labels: system_test
> Fix For: 2.4.0
>
>
> {code}
> resource_management.core.exceptions.Fail: Execution of 
> '/var/lib/ambari-agent/tmp/templetonSmoke.sh $HOSTNAME 50111 
> idtest.ambari-qa.1471340206.23.pig 
> /etc/security/keytabs/smokeuser.headless.keytab true /usr/bin/kinit 
> ambari-qa@REALM_NAME /var/lib/ambari-agent/tmp' returned 1. Templeton Smoke 
> Test (ddl cmd): Failed. : {"error":"Unauthorized connection for super-user: 
> HTTP/HOST_NAME@REALM_NAME from IP HOST_IP"}http_code <500>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

