[jira] [Updated] (AMBARI-18248) Parsing of /var/log/messages and /var/log/secure (Log Feeder)

2016-08-23 Thread Hayat Behlim (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hayat Behlim updated AMBARI-18248:
--
Fix Version/s: 2.5.0

> Parsing of /var/log/messages and /var/log/secure (Log Feeder)
> -
>
> Key: AMBARI-18248
> URL: https://issues.apache.org/jira/browse/AMBARI-18248
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-logsearch
>Affects Versions: 2.4.0
>Reporter: Hayat Behlim
>Assignee: Hayat Behlim
>Priority: Minor
> Fix For: 2.5.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18248) Parsing of /var/log/messages and /var/log/secure (Log Feeder)

2016-08-23 Thread Hayat Behlim (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hayat Behlim updated AMBARI-18248:
--
Affects Version/s: 2.4.0

> Parsing of /var/log/messages and /var/log/secure (Log Feeder)
> -
>
> Key: AMBARI-18248
> URL: https://issues.apache.org/jira/browse/AMBARI-18248
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-logsearch
>Affects Versions: 2.4.0
>Reporter: Hayat Behlim
>Assignee: Hayat Behlim
>Priority: Minor
> Fix For: 2.5.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (AMBARI-18248) Parsing of /var/log/messages and /var/log/secure (Log Feeder)

2016-08-23 Thread Hayat Behlim (JIRA)
Hayat Behlim created AMBARI-18248:
-

 Summary: Parsing of /var/log/messages and /var/log/secure (Log 
Feeder)
 Key: AMBARI-18248
 URL: https://issues.apache.org/jira/browse/AMBARI-18248
 Project: Ambari
  Issue Type: Bug
  Components: ambari-logsearch
Reporter: Hayat Behlim
Assignee: Hayat Behlim
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-16278) Give more time for HBase system tables to be assigned

2016-08-23 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-16278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated AMBARI-16278:

Description: 
We have observed extended cluster downtime due to HBase system tables not being 
assigned at cluster startup.

The default values for the following two parameters are too low:

hbase.regionserver.executor.openregion.threads (default: 3)
hbase.master.namespace.init.timeout (default: 30)

We set hbase.regionserver.executor.openregion.threads=200 and 
hbase.master.namespace.init.timeout=240 in some cases to work around 
HBASE-14190.

Ambari can use 20 for hbase.regionserver.executor.openregion.threads and 
240 for hbase.master.namespace.init.timeout as default values.

  was:
We have observed extended cluster downtime due to HBase system tables not being 
assigned at cluster startup.

The default values for the following two parameters are too low:

hbase.regionserver.executor.openregion.threads (default: 3)
hbase.master.namespace.init.timeout (default: 30)

We set hbase.regionserver.executor.openregion.threads=200 and 
hbase.master.namespace.init.timeout=240 in some cases to work around 
HBASE-14190.

Ambari can use 20 for hbase.regionserver.executor.openregion.threads and 
240 for hbase.master.namespace.init.timeout as default values.



> Give more time for HBase system tables to be assigned
> -
>
> Key: AMBARI-16278
> URL: https://issues.apache.org/jira/browse/AMBARI-16278
> Project: Ambari
>  Issue Type: Improvement
>Reporter: Ted Yu
>
> We have observed extended cluster downtime due to HBase system tables not 
> being assigned at cluster startup.
> The default values for the following two parameters are too low:
> hbase.regionserver.executor.openregion.threads (default: 3)
> hbase.master.namespace.init.timeout (default: 30)
> We set hbase.regionserver.executor.openregion.threads=200 and 
> hbase.master.namespace.init.timeout=240 in some cases to work around 
> HBASE-14190.
> Ambari can use 20 for hbase.regionserver.executor.openregion.threads and 
> 240 for hbase.master.namespace.init.timeout as default values.
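
For illustration, a minimal Python sketch of filling in the proposed defaults 
when the operator has not set them explicitly (an illustrative sketch, not 
Ambari's actual stack-advisor code):

{code}
# Proposed defaults from the description above.
RECOMMENDED_DEFAULTS = {
    'hbase.regionserver.executor.openregion.threads': '20',
    'hbase.master.namespace.init.timeout': '240',
}

def apply_recommended_defaults(hbase_site):
    # Only fill in values that the operator has not set explicitly.
    for name, value in RECOMMENDED_DEFAULTS.items():
        hbase_site.setdefault(name, value)
    return hbase_site
{code}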



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-17346) Dependent components should be shutdown before stopping hdfs

2016-08-23 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-17346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated AMBARI-17346:

Description: 
Sometimes an admin shuts down HDFS first, then HBase.

By the time HBase is shut down, no data can be persisted (including metadata). 
This results in a large number of inconsistencies when the HBase cluster is 
brought back up.

Before HDFS is shut down, the components that depend on it should be shut down 
first.

  was:
Sometimes an admin shuts down HDFS first, then HBase.

By the time HBase is shut down, no data can be persisted (including metadata). 
This results in a large number of inconsistencies when the HBase cluster is 
brought back up.

Before HDFS is shut down, the components that depend on it should be shut down 
first.


> Dependent components should be shutdown before stopping hdfs
> 
>
> Key: AMBARI-17346
> URL: https://issues.apache.org/jira/browse/AMBARI-17346
> Project: Ambari
>  Issue Type: Bug
>Reporter: Ted Yu
>
> Sometimes an admin shuts down HDFS first, then HBase.
> By the time HBase is shut down, no data can be persisted (including metadata). 
> This results in a large number of inconsistencies when the HBase cluster is 
> brought back up.
> Before HDFS is shut down, the components that depend on it should be shut down 
> first.
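
As an illustration of the proposed ordering, a Python sketch that stops 
HDFS-dependent services before HDFS itself through Ambari's REST API (the 
host, credentials, and cluster name are placeholders, and the service list is 
an assumption):

{code}
import requests

AMBARI = 'http://ambari-host:8080/api/v1/clusters/mycluster'
AUTH = ('admin', 'admin')
HEADERS = {'X-Requested-By': 'ambari'}
# Setting state=INSTALLED asks Ambari to stop a service.
STOP = {'Body': {'ServiceInfo': {'state': 'INSTALLED'}}}

# Stop HBase (and any other HDFS-dependent services) first, HDFS last.
for service in ['HBASE', 'HDFS']:
    requests.put('%s/services/%s' % (AMBARI, service),
                 json=STOP, auth=AUTH, headers=HEADERS)
{code}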



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18064) Decrease the number of retry count for check_ranger_login_urllib2

2016-08-23 Thread Sumit Mohanty (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sumit Mohanty updated AMBARI-18064:
---
Assignee: JaySenSharma

> Decrease the number of retry count for check_ranger_login_urllib2
> -
>
> Key: AMBARI-18064
> URL: https://issues.apache.org/jira/browse/AMBARI-18064
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-agent
>Affects Versions: trunk
> Environment: All
>Reporter: JaySenSharma
>Assignee: JaySenSharma
>  Labels: patch-available
> Fix For: trunk
>
> Attachments: AMBARI-18064.patch, output-297.txt
>
>
> If Ranger Admin is down, then while starting any service from Ambari the 
> agent keeps retrying 75 times at an interval of 8 seconds (600 seconds total, 
> i.e. 10 minutes) before it finally starts the service, e.g. the Kafka Broker.
> The following logging can be seen in the Ambari console when Ranger Admin is 
> down and a Kafka Broker start request is triggered (attaching the 
> "/var/lib/ambari-agent/data/output-297.txt" log):
> Snippet of the retry attempts:
> {code}
> 2016-08-08 13:45:27,802 - HdfsResource[None] {'security_enabled': False, 
> 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 
> 'default_fs': 'hdfs://jss1.example.com:8020', 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 
> 'action': ['execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'immutable_paths': 
> [u'/apps/hive/warehouse', u'/mr-history/done', u'/app-logs', u'/tmp']}
> 2016-08-08 13:45:27,853 - RangeradminV2: Skip ranger admin if it's down !
> 2016-08-08 13:45:27,858 - Will retry 74 time(s), caught exception: Connection 
> failed to Ranger Admin. Reason - [Errno 111] Connection refused.. Sleeping 
> for 8 sec(s)
> 2016-08-08 13:45:35,869 - Will retry 73 time(s), caught exception: Connection 
> failed to Ranger Admin. Reason - [Errno 111] Connection refused.. Sleeping 
> for 8 sec(s)
> .
> .
> .
> 2016-08-08 13:55:04,653 - Will retry 2 time(s), caught exception: Connection 
> failed to Ranger Admin. Reason - [Errno 111] Connection refused.. Sleeping 
> for 8 sec(s)
> 2016-08-08 13:55:12,665 - Will retry 1 time(s), caught exception: Connection 
> failed to Ranger Admin. Reason - [Errno 111] Connection refused.. Sleeping 
> for 8 sec(s)
> 2016-08-08 13:55:20,676 - Connection failed to Ranger Admin. Reason - [Errno 
> 111] Connection refused.
> 2016-08-08 13:55:20,683 - 
> File['/usr/hdp/current/kafka-broker/config/ranger-security.xml'] {'content': 
> InlineTemplate(...), 'owner': 'kafka', 'group': 'hadoop', 'mode': 0644}
> {code}
> *What is Needed?*
> It is not worth waiting 600 seconds (10 minutes) of retries before starting 
> the service (Kafka Broker or any other component). The retry count can be 
> reduced from 75 attempts to 15.
> *What was the previous behavior?*
> Before [AMBARI-14710|https://issues.apache.org/jira/browse/AMBARI-14710] 
> the retry count was set to 15, which was more appropriate.
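
For reference, a generic Python sketch of the retry loop described above (an 
illustration, not the actual check_ranger_login_urllib2 code); with times=15 
and sleep_time=8 the worst-case wait drops from 600 seconds to 120 seconds:

{code}
import time

def retry(times=15, sleep_time=8):
    # Generic retry decorator: try up to 'times' attempts, sleeping
    # 'sleep_time' seconds between failures; the last attempt lets the
    # exception propagate to the caller.
    def decorator(func):
        def wrapper(*args, **kwargs):
            for remaining in range(times - 1, 0, -1):
                try:
                    return func(*args, **kwargs)
                except Exception as ex:
                    print('Will retry %d time(s), caught exception: %s. '
                          'Sleeping for %d sec(s)' % (remaining, ex, sleep_time))
                    time.sleep(sleep_time)
            return func(*args, **kwargs)
        return wrapper
    return decorator
{code}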



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18064) Decrease the number of retry count for check_ranger_login_urllib2

2016-08-23 Thread Sumit Mohanty (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15434156#comment-15434156
 ] 

Sumit Mohanty commented on AMBARI-18064:


LGTM, +1

> Decrease the number of retry count for check_ranger_login_urllib2
> -
>
> Key: AMBARI-18064
> URL: https://issues.apache.org/jira/browse/AMBARI-18064
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-agent
>Affects Versions: trunk
> Environment: All
>Reporter: JaySenSharma
>Assignee: JaySenSharma
>  Labels: patch-available
> Fix For: trunk
>
> Attachments: AMBARI-18064.patch, output-297.txt
>
>
> If Ranger Admin is down, then while starting any service from Ambari the 
> agent keeps retrying 75 times at an interval of 8 seconds (600 seconds total, 
> i.e. 10 minutes) before it finally starts the service, e.g. the Kafka Broker.
> The following logging can be seen in the Ambari console when Ranger Admin is 
> down and a Kafka Broker start request is triggered (attaching the 
> "/var/lib/ambari-agent/data/output-297.txt" log):
> Snippet of the retry attempts:
> {code}
> 2016-08-08 13:45:27,802 - HdfsResource[None] {'security_enabled': False, 
> 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 
> 'default_fs': 'hdfs://jss1.example.com:8020', 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 
> 'action': ['execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'immutable_paths': 
> [u'/apps/hive/warehouse', u'/mr-history/done', u'/app-logs', u'/tmp']}
> 2016-08-08 13:45:27,853 - RangeradminV2: Skip ranger admin if it's down !
> 2016-08-08 13:45:27,858 - Will retry 74 time(s), caught exception: Connection 
> failed to Ranger Admin. Reason - [Errno 111] Connection refused.. Sleeping 
> for 8 sec(s)
> 2016-08-08 13:45:35,869 - Will retry 73 time(s), caught exception: Connection 
> failed to Ranger Admin. Reason - [Errno 111] Connection refused.. Sleeping 
> for 8 sec(s)
> .
> .
> .
> 2016-08-08 13:55:04,653 - Will retry 2 time(s), caught exception: Connection 
> failed to Ranger Admin. Reason - [Errno 111] Connection refused.. Sleeping 
> for 8 sec(s)
> 2016-08-08 13:55:12,665 - Will retry 1 time(s), caught exception: Connection 
> failed to Ranger Admin. Reason - [Errno 111] Connection refused.. Sleeping 
> for 8 sec(s)
> 2016-08-08 13:55:20,676 - Connection failed to Ranger Admin. Reason - [Errno 
> 111] Connection refused.
> 2016-08-08 13:55:20,683 - 
> File['/usr/hdp/current/kafka-broker/config/ranger-security.xml'] {'content': 
> InlineTemplate(...), 'owner': 'kafka', 'group': 'hadoop', 'mode': 0644}
> {code}
> *What is Needed?*
> It is not worth waiting 600 seconds (10 minutes) of retries before starting 
> the service (Kafka Broker or any other component). The retry count can be 
> reduced from 75 attempts to 15.
> *What was the previous behavior?*
> Before [AMBARI-14710|https://issues.apache.org/jira/browse/AMBARI-14710] 
> the retry count was set to 15, which was more appropriate.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-17999) Typo in property name "yarn.nodemanager.log.retain-second", should be "seconds"

2016-08-23 Thread Ying Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-17999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ying Zhang updated AMBARI-17999:

Description: In Ambari code, the property name 
"yarn.nodemanager.log.retain-second" is wrong. It should be 
"yarn.nodemanager.log.retain-seconds" instead, which is the property name being 
looked up in Hadoop code.  (was: In yarn-site.xml, the property name 
"yarn.nodemanager.log.retain-second" is wrong; it should be 
"yarn.nodemanager.log.retain-seconds" instead.)

> Typo in property name "yarn.nodemanager.log.retain-second", should be 
> "seconds"
> ---
>
> Key: AMBARI-17999
> URL: https://issues.apache.org/jira/browse/AMBARI-17999
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.2.0
>Reporter: Ying Zhang
>Priority: Minor
> Attachments: AMBARI-17999.patch, AMBARI-17999.rebased.patch
>
>
> In Ambari code, property name "yarn.nodemanager.log.retain-second" is wrong. 
> It should be "yarn.nodemanager.log.retain-seconds" instead, which is the 
> property name being looked up in Hadoop code.
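
To make the rename concrete, a small Python sketch (illustrative, not the 
actual patch) that maps the misspelled key to the name Hadoop looks up:

{code}
def fix_retain_seconds(yarn_site):
    # Rename the misspelled property so Hadoop's lookup of
    # yarn.nodemanager.log.retain-seconds finds the configured value.
    wrong = 'yarn.nodemanager.log.retain-second'
    right = 'yarn.nodemanager.log.retain-seconds'
    if wrong in yarn_site and right not in yarn_site:
        yarn_site[right] = yarn_site.pop(wrong)
    return yarn_site
{code}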



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-17999) Typo in property name "yarn.nodemanager.log.retain-second", should be "seconds"

2016-08-23 Thread Ying Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-17999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ying Zhang updated AMBARI-17999:

Attachment: AMBARI-17999.rebased.patch

> Typo in property name "yarn.nodemanager.log.retain-second", should be 
> "seconds"
> ---
>
> Key: AMBARI-17999
> URL: https://issues.apache.org/jira/browse/AMBARI-17999
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.2.0
>Reporter: Ying Zhang
>Priority: Minor
> Attachments: AMBARI-17999.patch, AMBARI-17999.rebased.patch
>
>
> In yarn-site.xml, the property name "yarn.nodemanager.log.retain-second" is 
> wrong; it should be "yarn.nodemanager.log.retain-seconds" instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-17999) Typo in property name "yarn.nodemanager.log.retain-second", should be "seconds"

2016-08-23 Thread Ying Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-17999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15434103#comment-15434103
 ] 

Ying Zhang commented on AMBARI-17999:
-

[~Tim Thorpe], thanks for the reply. I've run "mvn clean install" locally. 
There is one test failure, 
"org.apache.hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebApp.testAboutPage",
 which seems unrelated.

> Typo in property name "yarn.nodemanager.log.retain-second", should be 
> "seconds"
> ---
>
> Key: AMBARI-17999
> URL: https://issues.apache.org/jira/browse/AMBARI-17999
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.2.0
>Reporter: Ying Zhang
>Priority: Minor
> Attachments: AMBARI-17999.patch
>
>
> In yarn-site.xml, the property name "yarn.nodemanager.log.retain-second" is 
> wrong; it should be "yarn.nodemanager.log.retain-seconds" instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18244) Add Service for Atlas did not call conf-select, so failed to find /etc/atlas/conf/users-credentials.properties

2016-08-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15434098#comment-15434098
 ] 

Hudson commented on AMBARI-18244:
-

SUCCESS: Integrated in Jenkins build Ambari-trunk-Commit #5580 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/5580/])
AMBARI-18244. Add Service for Atlas did not call conf-select, so failed 
(afernandez: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=5b6971ae47a826f773b8ccd37dadc049e48a6a68])
* (edit) 
ambari-common/src/main/python/resource_management/libraries/functions/conf_select.py


> Add Service for Atlas did not call conf-select, so failed to find 
> /etc/atlas/conf/users-credentials.properties 
> ---
>
> Key: AMBARI-18244
> URL: https://issues.apache.org/jira/browse/AMBARI-18244
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: 2.4.0
>Reporter: Alejandro Fernandez
>Assignee: Alejandro Fernandez
> Fix For: trunk
>
> Attachments: AMBARI-18244.patch
>
>
> STR:
> * Install Ambari 2.4.0.0 with HDP 2.5.0.0 and basic services except Atlas
> * Add Atlas service
> On the Atlas server host, the file 
> /etc/atlas/conf/users-credentials.properties is missing. This is because 
> conf-select was not called after the service was added, since conf-select did 
> not contain a mapping for Atlas.
> Right now,
> {noformat}
> ls -la /etc/atlas/conf/  (this is a dir)
> -rw-r--r-- 1 root  root207 Aug 22 14:57 users-credentials.properties
> ls -la /usr/hdp/current/atlas-client
> lrwxrwxrwx 1 root root 27 Aug 23 23:24 /usr/hdp/current/atlas-client -> 
> /usr/hdp/2.5.0.0-1237/atlas
> # This is incorrect
> ls -la /usr/hdp/2.5.0.0-1237/atlas/conf 
> lrwxrwxrwx 1 root root 15 Aug 23 23:24 /usr/hdp/2.5.0.0-1237/atlas/conf -> 
> /etc/atlas/conf
> {noformat}
> To fix this, we need to have /etc/atlas/conf -> 
> /usr/hdp/current/atlas-client/conf and /usr/hdp/2.5.0.0-1237/atlas/conf -> 
> /etc/atlas/2.5.0.0-1237/0
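
A Python sketch of the target symlink layout described above (illustrative 
only; the actual fix lives in conf_select.py, and the version string is the 
example build from the report):

{code}
import os

def fix_atlas_conf_links(version='2.5.0.0-1237'):
    # Assumes the old conf dir/links have already been moved aside.
    # /usr/hdp/<version>/atlas/conf -> /etc/atlas/<version>/0
    os.symlink('/etc/atlas/%s/0' % version,
               '/usr/hdp/%s/atlas/conf' % version)
    # /etc/atlas/conf -> /usr/hdp/current/atlas-client/conf
    os.symlink('/usr/hdp/current/atlas-client/conf', '/etc/atlas/conf')
{code}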



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18239) oozie.py is reading invalid 'version' attribute which results in not copying required atlas hook jars

2016-08-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15434099#comment-15434099
 ] 

Hudson commented on AMBARI-18239:
-

SUCCESS: Integrated in Jenkins build Ambari-trunk-Commit #5580 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/5580/])
AMBARI-18239: oozie.py is reading invalid 'version' attribute which (jluniya: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=6b969905215f32ef333ec9d7b01f43af9c55ba47])
* (edit) 
ambari-common/src/main/python/resource_management/libraries/functions/copy_tarball.py
* (edit) 
ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie.py


> oozie.py is reading invalid 'version' attribute which results in not copying 
> required atlas hook jars
> -
>
> Key: AMBARI-18239
> URL: https://issues.apache.org/jira/browse/AMBARI-18239
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk, 2.4.0
>Reporter: Ayub Khan
>Assignee: Jayush Luniya
>Priority: Critical
> Fix For: trunk
>
> Attachments: AMBARI-18239.patch, AMBARI-18239.trunk.patch
>
>
> *Oozie server start output by ambari-agent is showing this error - 
> "2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this 
> Oozie server doesn't contain directory /usr/hdp/None/atlas/hook/hive/"*
> {noformat}
> 2016-08-23 07:21:53,147 - call returned (0, '')
> 2016-08-23 07:21:53,148 - 
> Execute['/usr/hdp/current/oozie-server/bin/oozie-setup.sh sharelib create -fs 
> hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020 -locallib 
> /usr/hdp/current/oozie-server/share'] {'path': 
> [u'/usr/hdp/current/oozie-server/bin:/usr/hdp/current/hadoop-client/bin'], 
> 'user': 'oozie'}
> 2016-08-23 07:23:33,091 - HdfsResource['/user/oozie/share'] 
> {'security_enabled': True, 'hadoop_bin_dir': 
> '/usr/hdp/current/hadoop-client/bin', 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
> 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 'user': 'hdfs', 
> 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
> 'recursive_chmod': True, 'action': ['create_on_execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 
> 'immutable_paths': [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', 
> u'/mr-history/done', u'/apps/falcon'], 'mode': 0755}
> 2016-08-23 07:23:33,093 - Execute['/usr/bin/kinit -kt 
> /etc/security/keytabs/hdfs.headless.keytab h...@example.com'] {'user': 'hdfs'}
> 2016-08-23 07:23:33,170 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 
> 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET --negotiate -u : 
> '"'"'http://nat-r7-pcds-falcon-multi-9.openstacklocal:20070/webhdfs/v1/user/oozie/share?op=GETFILESTATUS=hdfs'"'"'
>  1>/tmp/tmp2xvm99 2>/tmp/tmpwdKIRi''] {'logoutput': None, 'quiet': False}
> 2016-08-23 07:23:33,259 - call returned (0, '')
> 2016-08-23 07:23:33,261 - HdfsResource[None] {'security_enabled': True, 
> 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
> 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 
> 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
> 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'immutable_paths': 
> [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', u'/mr-history/done', 
> u'/apps/falcon']}
> 2016-08-23 07:23:33,261 - Execute['cd /var/tmp/oozie && 
> /usr/hdp/current/oozie-server/bin/oozie-start.sh'] {'environment': 
> {'OOZIE_CONFIG': u'/usr/hdp/current/oozie-server/conf'}, 'not_if': 
> "ambari-sudo.sh su oozie -l -s /bin/bash -c 'ls /var/run/oozie/oozie.pid 
> >/dev/null 2>&1 && ps -p `cat /var/run/oozie/oozie.pid` >/dev/null 2>&1'", 
> 'user': 'oozie'}
> 2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this Oozie 
> server doesn't contain directory /usr/hdp/None/atlas/hook/hive/
> {noformat}
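
The '/usr/hdp/None/...' path indicates a missing version being formatted into 
the directory name; a Python sketch of a defensive lookup (the names here are 
illustrative, not the actual oozie.py change):

{code}
import os

def atlas_hook_dir(command_version, stack_version_formatted):
    # Fall back to the formatted stack version when the command's
    # 'version' attribute is absent, so 'None' never ends up in the
    # path as /usr/hdp/None/atlas/hook/hive/.
    effective = command_version or stack_version_formatted
    if effective is None:
        raise ValueError('cannot determine stack version for Atlas hook dir')
    return os.path.join('/usr/hdp', effective, 'atlas', 'hook', 'hive')
{code}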



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18243) WFManager view is broken in trunk builds

2016-08-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15434100#comment-15434100
 ] 

Hudson commented on AMBARI-18243:
-

SUCCESS: Integrated in Jenkins build Ambari-trunk-Commit #5580 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/5580/])
AMBARI-18243. WFManager view is broken in trunk builds. (Venkat (yusaku: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=2a36fc5ec56c5b697d7c815d3b8147a1e9df41c8])
* (edit) contrib/views/wfmanager/pom.xml


> WFManager view is broken in trunk builds
> 
>
> Key: AMBARI-18243
> URL: https://issues.apache.org/jira/browse/AMBARI-18243
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-views
>Affects Versions: trunk
>Reporter: Venkat Ranganathan
>Assignee: Venkat Ranganathan
> Fix For: trunk
>
> Attachments: AMBARI-18243-2.patch, AMBARI-18243.patch
>
>
> The parent version and some of the dependency versions were hard-coded to 
> 2.4.0.0.0 - fixing this



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18247) Capacity scheduler's dependent config suggestion is incomprehensible

2016-08-23 Thread Zhe (Joe) Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe (Joe) Wang updated AMBARI-18247:

Status: Patch Available  (was: Open)

> Capacity scheduler's dependent config suggestion is incomprehensible
> 
>
> Key: AMBARI-18247
> URL: https://issues.apache.org/jira/browse/AMBARI-18247
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.4.0
>Reporter: Zhe (Joe) Wang
>Assignee: Zhe (Joe) Wang
> Fix For: trunk
>
> Attachments: AMBARI-18247.v0.patch
>
>
> The dependent configs dialog shows recommended values for the capacity 
> scheduler in a form that is hard to read. It would be good to break down the 
> specific changes into separate rows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18247) Capacity scheduler's dependent config suggestion is incomprehensible

2016-08-23 Thread Zhe (Joe) Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe (Joe) Wang updated AMBARI-18247:

Attachment: AMBARI-18247.v0.patch

Modified unit test.
Local ambari-web test passed.
29244 tests complete (25 seconds)
  154 tests pending
Manual testing done.

> Capacity scheduler's dependent config suggestion is incomprehensible
> 
>
> Key: AMBARI-18247
> URL: https://issues.apache.org/jira/browse/AMBARI-18247
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.4.0
>Reporter: Zhe (Joe) Wang
>Assignee: Zhe (Joe) Wang
> Fix For: trunk
>
> Attachments: AMBARI-18247.v0.patch
>
>
> The dependent configs dialog shows recommended values for the capacity 
> scheduler in a form that is hard to read. It would be good to break down the 
> specific changes into separate rows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (AMBARI-18247) Capacity scheduler's dependent config suggestion is incomprehensible

2016-08-23 Thread Zhe (Joe) Wang (JIRA)
Zhe (Joe) Wang created AMBARI-18247:
---

 Summary: Capacity scheduler's dependent config suggestion is 
incomprehensible
 Key: AMBARI-18247
 URL: https://issues.apache.org/jira/browse/AMBARI-18247
 Project: Ambari
  Issue Type: Bug
  Components: ambari-web
Affects Versions: 2.4.0
Reporter: Zhe (Joe) Wang
Assignee: Zhe (Joe) Wang
 Fix For: trunk


The dependent configs dialog shows recommended values for the capacity 
scheduler in a form that is hard to read. It would be good to break down the 
specific changes into separate rows.
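
A minimal sketch of the proposed breakdown, assuming the recommendation 
arrives as one newline-separated blob of key=value pairs (the real ambari-web 
change is JavaScript; Python is used here only to show the logic):

{code}
def to_rows(recommended_blob):
    # Split the single capacity-scheduler text blob into one
    # (property, value) row per line for display in the dialog.
    rows = []
    for line in recommended_blob.strip().splitlines():
        name, _, value = line.partition('=')
        rows.append((name, value))
    return rows
{code}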




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (AMBARI-18179) Ambari Blueprint - Namenode HA should check for ZKFC

2016-08-23 Thread Amruta Borkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amruta Borkar reassigned AMBARI-18179:
--

Assignee: Amruta Borkar

> Ambari Blueprint - Namenode HA should check for ZKFC
> 
>
> Key: AMBARI-18179
> URL: https://issues.apache.org/jira/browse/AMBARI-18179
> Project: Ambari
>  Issue Type: Bug
>  Components: blueprints
>Affects Versions: 2.2.2
> Environment: HDP - 2.3.4.7
> Ambari - 2.2.2.0
>Reporter: Kuldeep Kulkarni
>Assignee: Amruta Borkar
> Attachments: cluster_config.json, generate_json.sh, hostmap.json, 
> repo-utils.json, repo.json
>
>
> I was trying NN HA with an Ambari blueprint, and the blueprint was accepted 
> even though the ZKFC service was missing from it.
> P.S. - I did not use validate_topology=false.
> Please find the blueprint attached.
> I referred to my own article for the curl commands:
> http://crazyadmins.com/automate-hdp-installation-using-ambari-blueprints-part-2/
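
As an illustration of the missing check, a Python sketch that rejects an HA 
blueprint without ZKFC (the JSON structure follows Ambari's documented 
host_groups format; the validation itself is hypothetical):

{code}
def validate_namenode_ha(blueprint):
    # Collect every component placed by the blueprint's host groups.
    components = [c['name']
                  for group in blueprint.get('host_groups', [])
                  for c in group.get('components', [])]
    # Two NAMENODEs imply NameNode HA, which also requires ZKFC.
    if components.count('NAMENODE') >= 2 and 'ZKFC' not in components:
        raise ValueError('NameNode HA requires ZKFC in the blueprint')
{code}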



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18246) Clean up Log Feeder skeleton

2016-08-23 Thread Miklos Gergely (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated AMBARI-18246:

Status: Patch Available  (was: In Progress)

> Clean up Log Feeder skeleton
> 
>
> Key: AMBARI-18246
> URL: https://issues.apache.org/jira/browse/AMBARI-18246
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-logsearch
>Affects Versions: 2.5.0
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
> Fix For: 2.5.0
>
> Attachments: AMBARI-18246.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18246) Clean up Log Feeder skeleton

2016-08-23 Thread Miklos Gergely (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated AMBARI-18246:

Attachment: AMBARI-18246.patch

> Clean up Log Feeder skeleton
> 
>
> Key: AMBARI-18246
> URL: https://issues.apache.org/jira/browse/AMBARI-18246
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-logsearch
>Affects Versions: 2.5.0
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
> Fix For: 2.5.0
>
> Attachments: AMBARI-18246.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (AMBARI-18239) oozie.py is reading invalid 'version' attribute which results in not copying required atlas hook jars

2016-08-23 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya resolved AMBARI-18239.

Resolution: Fixed

> oozie.py is reading invalid 'version' attribute which results in not copying 
> required atlas hook jars
> -
>
> Key: AMBARI-18239
> URL: https://issues.apache.org/jira/browse/AMBARI-18239
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk, 2.4.0
>Reporter: Ayub Khan
>Assignee: Jayush Luniya
>Priority: Critical
> Fix For: trunk
>
> Attachments: AMBARI-18239.patch, AMBARI-18239.trunk.patch
>
>
> *Oozie server start output by ambari-agent is showing this error - 
> "2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this 
> Oozie server doesn't contain directory /usr/hdp/None/atlas/hook/hive/"*
> {noformat}
> 2016-08-23 07:21:53,147 - call returned (0, '')
> 2016-08-23 07:21:53,148 - 
> Execute['/usr/hdp/current/oozie-server/bin/oozie-setup.sh sharelib create -fs 
> hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020 -locallib 
> /usr/hdp/current/oozie-server/share'] {'path': 
> [u'/usr/hdp/current/oozie-server/bin:/usr/hdp/current/hadoop-client/bin'], 
> 'user': 'oozie'}
> 2016-08-23 07:23:33,091 - HdfsResource['/user/oozie/share'] 
> {'security_enabled': True, 'hadoop_bin_dir': 
> '/usr/hdp/current/hadoop-client/bin', 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
> 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 'user': 'hdfs', 
> 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
> 'recursive_chmod': True, 'action': ['create_on_execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 
> 'immutable_paths': [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', 
> u'/mr-history/done', u'/apps/falcon'], 'mode': 0755}
> 2016-08-23 07:23:33,093 - Execute['/usr/bin/kinit -kt 
> /etc/security/keytabs/hdfs.headless.keytab h...@example.com'] {'user': 'hdfs'}
> 2016-08-23 07:23:33,170 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 
> 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET --negotiate -u : 
> '"'"'http://nat-r7-pcds-falcon-multi-9.openstacklocal:20070/webhdfs/v1/user/oozie/share?op=GETFILESTATUS=hdfs'"'"'
>  1>/tmp/tmp2xvm99 2>/tmp/tmpwdKIRi''] {'logoutput': None, 'quiet': False}
> 2016-08-23 07:23:33,259 - call returned (0, '')
> 2016-08-23 07:23:33,261 - HdfsResource[None] {'security_enabled': True, 
> 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
> 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 
> 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
> 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'immutable_paths': 
> [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', u'/mr-history/done', 
> u'/apps/falcon']}
> 2016-08-23 07:23:33,261 - Execute['cd /var/tmp/oozie && 
> /usr/hdp/current/oozie-server/bin/oozie-start.sh'] {'environment': 
> {'OOZIE_CONFIG': u'/usr/hdp/current/oozie-server/conf'}, 'not_if': 
> "ambari-sudo.sh su oozie -l -s /bin/bash -c 'ls /var/run/oozie/oozie.pid 
> >/dev/null 2>&1 && ps -p `cat /var/run/oozie/oozie.pid` >/dev/null 2>&1'", 
> 'user': 'oozie'}
> 2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this Oozie 
> server doesn't contain directory /usr/hdp/None/atlas/hook/hive/
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18239) oozie.py is reading invalid 'version' attribute which results in not copying required atlas hook jars

2016-08-23 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15433926#comment-15433926
 ] 

Jayush Luniya commented on AMBARI-18239:


Trunk
commit 6b969905215f32ef333ec9d7b01f43af9c55ba47
Author: Jayush Luniya 
Date:   Tue Aug 23 17:23:38 2016 -0700

AMBARI-18239: oozie.py is reading invalid 'version' attribute which results 
in not copying required atlas hook jars (jluniya)

> oozie.py is reading invalid 'version' attribute which results in not copying 
> required atlas hook jars
> -
>
> Key: AMBARI-18239
> URL: https://issues.apache.org/jira/browse/AMBARI-18239
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk, 2.4.0
>Reporter: Ayub Khan
>Assignee: Jayush Luniya
>Priority: Critical
> Fix For: trunk
>
> Attachments: AMBARI-18239.patch, AMBARI-18239.trunk.patch
>
>
> *Oozie server start output by ambari-agent is showing this error - 
> "2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this 
> Oozie server doesn't contain directory /usr/hdp/None/atlas/hook/hive/"*
> {noformat}
> 2016-08-23 07:21:53,147 - call returned (0, '')
> 2016-08-23 07:21:53,148 - 
> Execute['/usr/hdp/current/oozie-server/bin/oozie-setup.sh sharelib create -fs 
> hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020 -locallib 
> /usr/hdp/current/oozie-server/share'] {'path': 
> [u'/usr/hdp/current/oozie-server/bin:/usr/hdp/current/hadoop-client/bin'], 
> 'user': 'oozie'}
> 2016-08-23 07:23:33,091 - HdfsResource['/user/oozie/share'] 
> {'security_enabled': True, 'hadoop_bin_dir': 
> '/usr/hdp/current/hadoop-client/bin', 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
> 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 'user': 'hdfs', 
> 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
> 'recursive_chmod': True, 'action': ['create_on_execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 
> 'immutable_paths': [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', 
> u'/mr-history/done', u'/apps/falcon'], 'mode': 0755}
> 2016-08-23 07:23:33,093 - Execute['/usr/bin/kinit -kt 
> /etc/security/keytabs/hdfs.headless.keytab h...@example.com'] {'user': 'hdfs'}
> 2016-08-23 07:23:33,170 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 
> 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET --negotiate -u : 
> '"'"'http://nat-r7-pcds-falcon-multi-9.openstacklocal:20070/webhdfs/v1/user/oozie/share?op=GETFILESTATUS=hdfs'"'"'
>  1>/tmp/tmp2xvm99 2>/tmp/tmpwdKIRi''] {'logoutput': None, 'quiet': False}
> 2016-08-23 07:23:33,259 - call returned (0, '')
> 2016-08-23 07:23:33,261 - HdfsResource[None] {'security_enabled': True, 
> 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
> 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 
> 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
> 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'immutable_paths': 
> [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', u'/mr-history/done', 
> u'/apps/falcon']}
> 2016-08-23 07:23:33,261 - Execute['cd /var/tmp/oozie && 
> /usr/hdp/current/oozie-server/bin/oozie-start.sh'] {'environment': 
> {'OOZIE_CONFIG': u'/usr/hdp/current/oozie-server/conf'}, 'not_if': 
> "ambari-sudo.sh su oozie -l -s /bin/bash -c 'ls /var/run/oozie/oozie.pid 
> >/dev/null 2>&1 && ps -p `cat /var/run/oozie/oozie.pid` >/dev/null 2>&1'", 
> 'user': 'oozie'}
> 2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this Oozie 
> server doesn't contain directory /usr/hdp/None/atlas/hook/hive/
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18239) oozie.py is reading invalid 'version' attribute which results in not copying required atlas hook jars

2016-08-23 Thread Mahadev konar (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15433921#comment-15433921
 ] 

Mahadev konar commented on AMBARI-18239:


+1 for the patch.

> oozie.py is reading invalid 'version' attribute which results in not copying 
> required atlas hook jars
> -
>
> Key: AMBARI-18239
> URL: https://issues.apache.org/jira/browse/AMBARI-18239
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk, 2.4.0
>Reporter: Ayub Khan
>Assignee: Jayush Luniya
>Priority: Critical
> Fix For: trunk
>
> Attachments: AMBARI-18239.patch, AMBARI-18239.trunk.patch
>
>
> *Oozie server start output by ambari-agent is showing this error - 
> "2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this 
> Oozie server doesn't contain directory /usr/hdp/None/atlas/hook/hive/"*
> {noformat}
> 2016-08-23 07:21:53,147 - call returned (0, '')
> 2016-08-23 07:21:53,148 - 
> Execute['/usr/hdp/current/oozie-server/bin/oozie-setup.sh sharelib create -fs 
> hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020 -locallib 
> /usr/hdp/current/oozie-server/share'] {'path': 
> [u'/usr/hdp/current/oozie-server/bin:/usr/hdp/current/hadoop-client/bin'], 
> 'user': 'oozie'}
> 2016-08-23 07:23:33,091 - HdfsResource['/user/oozie/share'] 
> {'security_enabled': True, 'hadoop_bin_dir': 
> '/usr/hdp/current/hadoop-client/bin', 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
> 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 'user': 'hdfs', 
> 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
> 'recursive_chmod': True, 'action': ['create_on_execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 
> 'immutable_paths': [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', 
> u'/mr-history/done', u'/apps/falcon'], 'mode': 0755}
> 2016-08-23 07:23:33,093 - Execute['/usr/bin/kinit -kt 
> /etc/security/keytabs/hdfs.headless.keytab h...@example.com'] {'user': 'hdfs'}
> 2016-08-23 07:23:33,170 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 
> 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET --negotiate -u : 
> '"'"'http://nat-r7-pcds-falcon-multi-9.openstacklocal:20070/webhdfs/v1/user/oozie/share?op=GETFILESTATUS=hdfs'"'"'
>  1>/tmp/tmp2xvm99 2>/tmp/tmpwdKIRi''] {'logoutput': None, 'quiet': False}
> 2016-08-23 07:23:33,259 - call returned (0, '')
> 2016-08-23 07:23:33,261 - HdfsResource[None] {'security_enabled': True, 
> 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
> 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 
> 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
> 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'immutable_paths': 
> [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', u'/mr-history/done', 
> u'/apps/falcon']}
> 2016-08-23 07:23:33,261 - Execute['cd /var/tmp/oozie && 
> /usr/hdp/current/oozie-server/bin/oozie-start.sh'] {'environment': 
> {'OOZIE_CONFIG': u'/usr/hdp/current/oozie-server/conf'}, 'not_if': 
> "ambari-sudo.sh su oozie -l -s /bin/bash -c 'ls /var/run/oozie/oozie.pid 
> >/dev/null 2>&1 && ps -p `cat /var/run/oozie/oozie.pid` >/dev/null 2>&1'", 
> 'user': 'oozie'}
> 2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this Oozie 
> server doesn't contain directory /usr/hdp/None/atlas/hook/hive/
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18239) oozie.py is reading invalid 'version' attribute which results in not copying required atlas hook jars

2016-08-23 Thread Alejandro Fernandez (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15433916#comment-15433916
 ] 

Alejandro Fernandez commented on AMBARI-18239:
--

+1 for [^AMBARI-18239.patch]

> oozie.py is reading invalid 'version' attribute which results in not copying 
> required atlas hook jars
> -
>
> Key: AMBARI-18239
> URL: https://issues.apache.org/jira/browse/AMBARI-18239
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk, 2.4.0
>Reporter: Ayub Khan
>Assignee: Jayush Luniya
>Priority: Critical
> Fix For: trunk
>
> Attachments: AMBARI-18239.patch, AMBARI-18239.trunk.patch
>
>
> *Oozie server start output by ambari-agent is showing this error - 
> "2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this 
> Oozie server doesn't contain directory /usr/hdp/None/atlas/hook/hive/"*
> {noformat}
> 2016-08-23 07:21:53,147 - call returned (0, '')
> 2016-08-23 07:21:53,148 - 
> Execute['/usr/hdp/current/oozie-server/bin/oozie-setup.sh sharelib create -fs 
> hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020 -locallib 
> /usr/hdp/current/oozie-server/share'] {'path': 
> [u'/usr/hdp/current/oozie-server/bin:/usr/hdp/current/hadoop-client/bin'], 
> 'user': 'oozie'}
> 2016-08-23 07:23:33,091 - HdfsResource['/user/oozie/share'] 
> {'security_enabled': True, 'hadoop_bin_dir': 
> '/usr/hdp/current/hadoop-client/bin', 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
> 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 'user': 'hdfs', 
> 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
> 'recursive_chmod': True, 'action': ['create_on_execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 
> 'immutable_paths': [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', 
> u'/mr-history/done', u'/apps/falcon'], 'mode': 0755}
> 2016-08-23 07:23:33,093 - Execute['/usr/bin/kinit -kt 
> /etc/security/keytabs/hdfs.headless.keytab h...@example.com'] {'user': 'hdfs'}
> 2016-08-23 07:23:33,170 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 
> 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET --negotiate -u : 
> '"'"'http://nat-r7-pcds-falcon-multi-9.openstacklocal:20070/webhdfs/v1/user/oozie/share?op=GETFILESTATUS=hdfs'"'"'
>  1>/tmp/tmp2xvm99 2>/tmp/tmpwdKIRi''] {'logoutput': None, 'quiet': False}
> 2016-08-23 07:23:33,259 - call returned (0, '')
> 2016-08-23 07:23:33,261 - HdfsResource[None] {'security_enabled': True, 
> 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
> 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 
> 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
> 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'immutable_paths': 
> [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', u'/mr-history/done', 
> u'/apps/falcon']}
> 2016-08-23 07:23:33,261 - Execute['cd /var/tmp/oozie && 
> /usr/hdp/current/oozie-server/bin/oozie-start.sh'] {'environment': 
> {'OOZIE_CONFIG': u'/usr/hdp/current/oozie-server/conf'}, 'not_if': 
> "ambari-sudo.sh su oozie -l -s /bin/bash -c 'ls /var/run/oozie/oozie.pid 
> >/dev/null 2>&1 && ps -p `cat /var/run/oozie/oozie.pid` >/dev/null 2>&1'", 
> 'user': 'oozie'}
> 2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this Oozie 
> server doesn't contain directory /usr/hdp/None/atlas/hook/hive/
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (AMBARI-18246) Clean up Log Feeder skeleton

2016-08-23 Thread Miklos Gergely (JIRA)
Miklos Gergely created AMBARI-18246:
---

 Summary: Clean up Log Feeder skeleton
 Key: AMBARI-18246
 URL: https://issues.apache.org/jira/browse/AMBARI-18246
 Project: Ambari
  Issue Type: Bug
  Components: ambari-logsearch
Affects Versions: 2.5.0
Reporter: Miklos Gergely
Assignee: Miklos Gergely
 Fix For: 2.5.0






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18239) oozie.py is reading invalid 'version' attribute which results in not copying required atlas hook jars

2016-08-23 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15433900#comment-15433900
 ] 

Jayush Luniya commented on AMBARI-18239:


The previous patch wouldn't address the issue. New patch attached.

> oozie.py is reading invalid 'version' attribute which results in not copying 
> required atlas hook jars
> -
>
> Key: AMBARI-18239
> URL: https://issues.apache.org/jira/browse/AMBARI-18239
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk, 2.4.0
>Reporter: Ayub Khan
>Assignee: Jayush Luniya
>Priority: Critical
> Fix For: trunk
>
> Attachments: AMBARI-18239.patch, AMBARI-18239.trunk.patch
>
>
> *Oozie server start output by ambari-agent is showing this error - 
> "2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this 
> Oozie server doesn't contain directory /usr/hdp/None/atlas/hook/hive/"*
> {noformat}
> 2016-08-23 07:21:53,147 - call returned (0, '')
> 2016-08-23 07:21:53,148 - 
> Execute['/usr/hdp/current/oozie-server/bin/oozie-setup.sh sharelib create -fs 
> hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020 -locallib 
> /usr/hdp/current/oozie-server/share'] {'path': 
> [u'/usr/hdp/current/oozie-server/bin:/usr/hdp/current/hadoop-client/bin'], 
> 'user': 'oozie'}
> 2016-08-23 07:23:33,091 - HdfsResource['/user/oozie/share'] 
> {'security_enabled': True, 'hadoop_bin_dir': 
> '/usr/hdp/current/hadoop-client/bin', 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
> 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 'user': 'hdfs', 
> 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
> 'recursive_chmod': True, 'action': ['create_on_execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 
> 'immutable_paths': [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', 
> u'/mr-history/done', u'/apps/falcon'], 'mode': 0755}
> 2016-08-23 07:23:33,093 - Execute['/usr/bin/kinit -kt 
> /etc/security/keytabs/hdfs.headless.keytab h...@example.com'] {'user': 'hdfs'}
> 2016-08-23 07:23:33,170 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 
> 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET --negotiate -u : 
> '"'"'http://nat-r7-pcds-falcon-multi-9.openstacklocal:20070/webhdfs/v1/user/oozie/share?op=GETFILESTATUS=hdfs'"'"'
>  1>/tmp/tmp2xvm99 2>/tmp/tmpwdKIRi''] {'logoutput': None, 'quiet': False}
> 2016-08-23 07:23:33,259 - call returned (0, '')
> 2016-08-23 07:23:33,261 - HdfsResource[None] {'security_enabled': True, 
> 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
> 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 
> 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
> 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'immutable_paths': 
> [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', u'/mr-history/done', 
> u'/apps/falcon']}
> 2016-08-23 07:23:33,261 - Execute['cd /var/tmp/oozie && 
> /usr/hdp/current/oozie-server/bin/oozie-start.sh'] {'environment': 
> {'OOZIE_CONFIG': u'/usr/hdp/current/oozie-server/conf'}, 'not_if': 
> "ambari-sudo.sh su oozie -l -s /bin/bash -c 'ls /var/run/oozie/oozie.pid 
> >/dev/null 2>&1 && ps -p `cat /var/run/oozie/oozie.pid` >/dev/null 2>&1'", 
> 'user': 'oozie'}
> 2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this Oozie 
> server doesn't contain directory /usr/hdp/None/atlas/hook/hive/
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18239) oozie.py is reading invalid 'version' attribute which results in not copying required atlas hook jars

2016-08-23 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18239:
---
Attachment: (was: AMBARI-18239.patch)

> oozie.py is reading invalid 'version' attribute which results in not copying 
> required atlas hook jars
> -
>
> Key: AMBARI-18239
> URL: https://issues.apache.org/jira/browse/AMBARI-18239
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk, 2.4.0
>Reporter: Ayub Khan
>Assignee: Jayush Luniya
>Priority: Critical
> Fix For: trunk
>
> Attachments: AMBARI-18239.patch, AMBARI-18239.trunk.patch
>
>
> *Oozie server start output by ambari-agent is showing this error - 
> "2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this 
> Oozie server doesn't contain directory /usr/hdp/None/atlas/hook/hive/"*
> {noformat}
> 2016-08-23 07:21:53,147 - call returned (0, '')
> 2016-08-23 07:21:53,148 - 
> Execute['/usr/hdp/current/oozie-server/bin/oozie-setup.sh sharelib create -fs 
> hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020 -locallib 
> /usr/hdp/current/oozie-server/share'] {'path': 
> [u'/usr/hdp/current/oozie-server/bin:/usr/hdp/current/hadoop-client/bin'], 
> 'user': 'oozie'}
> 2016-08-23 07:23:33,091 - HdfsResource['/user/oozie/share'] 
> {'security_enabled': True, 'hadoop_bin_dir': 
> '/usr/hdp/current/hadoop-client/bin', 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
> 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 'user': 'hdfs', 
> 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
> 'recursive_chmod': True, 'action': ['create_on_execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 
> 'immutable_paths': [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', 
> u'/mr-history/done', u'/apps/falcon'], 'mode': 0755}
> 2016-08-23 07:23:33,093 - Execute['/usr/bin/kinit -kt 
> /etc/security/keytabs/hdfs.headless.keytab h...@example.com'] {'user': 'hdfs'}
> 2016-08-23 07:23:33,170 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 
> 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET --negotiate -u : 
> '"'"'http://nat-r7-pcds-falcon-multi-9.openstacklocal:20070/webhdfs/v1/user/oozie/share?op=GETFILESTATUS=hdfs'"'"'
>  1>/tmp/tmp2xvm99 2>/tmp/tmpwdKIRi''] {'logoutput': None, 'quiet': False}
> 2016-08-23 07:23:33,259 - call returned (0, '')
> 2016-08-23 07:23:33,261 - HdfsResource[None] {'security_enabled': True, 
> 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
> 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 
> 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
> 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'immutable_paths': 
> [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', u'/mr-history/done', 
> u'/apps/falcon']}
> 2016-08-23 07:23:33,261 - Execute['cd /var/tmp/oozie && 
> /usr/hdp/current/oozie-server/bin/oozie-start.sh'] {'environment': 
> {'OOZIE_CONFIG': u'/usr/hdp/current/oozie-server/conf'}, 'not_if': 
> "ambari-sudo.sh su oozie -l -s /bin/bash -c 'ls /var/run/oozie/oozie.pid 
> >/dev/null 2>&1 && ps -p `cat /var/run/oozie/oozie.pid` >/dev/null 2>&1'", 
> 'user': 'oozie'}
> 2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this Oozie 
> server doesn't contain directory /usr/hdp/None/atlas/hook/hive/
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18244) Add Service for Atlas did not call conf-select, so failed to find /etc/atlas/conf/users-credentials.properties

2016-08-23 Thread Alejandro Fernandez (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Fernandez updated AMBARI-18244:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Pushed to trunk, commit 5b6971ae47a826f773b8ccd37dadc049e48a6a68

> Add Service for Atlas did not call conf-select, so failed to find 
> /etc/atlas/conf/users-credentials.properties 
> ---
>
> Key: AMBARI-18244
> URL: https://issues.apache.org/jira/browse/AMBARI-18244
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: 2.4.0
>Reporter: Alejandro Fernandez
>Assignee: Alejandro Fernandez
> Fix For: trunk
>
> Attachments: AMBARI-18244.patch
>
>
> STR:
> * Install Ambari 2.4.0.0 with HDP 2.5.0.0 and basic services except Atlas
> * Add Atlas service
> On the Atlas server host, the file 
> /etc/atlas/conf/users-credentials.properties is missing. This is because 
> conf-select was not called on it after the service was added because it did 
> not contain a mapping for Atlas.
> Right now,
> {noformat}
> ls -la /etc/atlas/conf/  (this is a dir)
> -rw-r--r-- 1 root  root207 Aug 22 14:57 users-credentials.properties
> ls -la /usr/hdp/current/atlas-client
> lrwxrwxrwx 1 root root 27 Aug 23 23:24 /usr/hdp/current/atlas-client -> 
> /usr/hdp/2.5.0.0-1237/atlas
> # This is incorrect
> ls -la /usr/hdp/2.5.0.0-1237/atlas/conf 
> lrwxrwxrwx 1 root root 15 Aug 23 23:24 /usr/hdp/2.5.0.0-1237/atlas/conf -> 
> /etc/atlas/conf
> {noformat}
> To fix this, we need to have /etc/atlas/conf -> 
> /usr/hdp/current/atlas-client/conf and /usr/hdp/2.5.0.0-1237/atlas/conf -> 
> /etc/atlas/2.5.0.0-1237/0



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18239) oozie.py is reading invalid 'version' attribute which results in not copying required atlas hook jars

2016-08-23 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18239:
---
Attachment: AMBARI-18239.trunk.patch

> oozie.py is reading invalid 'version' attribute which results in not copying 
> required atlas hook jars
> -
>
> Key: AMBARI-18239
> URL: https://issues.apache.org/jira/browse/AMBARI-18239
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk, 2.4.0
>Reporter: Ayub Khan
>Assignee: Jayush Luniya
>Priority: Critical
> Fix For: trunk
>
> Attachments: AMBARI-18239.patch, AMBARI-18239.trunk.patch
>
>
> *Oozie server start output by ambari-agent is showing this error - 
> "2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this 
> Oozie server doesn't contain directory /usr/hdp/None/atlas/hook/hive/"*
> {noformat}
> 2016-08-23 07:21:53,147 - call returned (0, '')
> 2016-08-23 07:21:53,148 - 
> Execute['/usr/hdp/current/oozie-server/bin/oozie-setup.sh sharelib create -fs 
> hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020 -locallib 
> /usr/hdp/current/oozie-server/share'] {'path': 
> [u'/usr/hdp/current/oozie-server/bin:/usr/hdp/current/hadoop-client/bin'], 
> 'user': 'oozie'}
> 2016-08-23 07:23:33,091 - HdfsResource['/user/oozie/share'] 
> {'security_enabled': True, 'hadoop_bin_dir': 
> '/usr/hdp/current/hadoop-client/bin', 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
> 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 'user': 'hdfs', 
> 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
> 'recursive_chmod': True, 'action': ['create_on_execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 
> 'immutable_paths': [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', 
> u'/mr-history/done', u'/apps/falcon'], 'mode': 0755}
> 2016-08-23 07:23:33,093 - Execute['/usr/bin/kinit -kt 
> /etc/security/keytabs/hdfs.headless.keytab h...@example.com'] {'user': 'hdfs'}
> 2016-08-23 07:23:33,170 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 
> 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET --negotiate -u : 
> '"'"'http://nat-r7-pcds-falcon-multi-9.openstacklocal:20070/webhdfs/v1/user/oozie/share?op=GETFILESTATUS=hdfs'"'"'
>  1>/tmp/tmp2xvm99 2>/tmp/tmpwdKIRi''] {'logoutput': None, 'quiet': False}
> 2016-08-23 07:23:33,259 - call returned (0, '')
> 2016-08-23 07:23:33,261 - HdfsResource[None] {'security_enabled': True, 
> 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
> 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 
> 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
> 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'immutable_paths': 
> [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', u'/mr-history/done', 
> u'/apps/falcon']}
> 2016-08-23 07:23:33,261 - Execute['cd /var/tmp/oozie && 
> /usr/hdp/current/oozie-server/bin/oozie-start.sh'] {'environment': 
> {'OOZIE_CONFIG': u'/usr/hdp/current/oozie-server/conf'}, 'not_if': 
> "ambari-sudo.sh su oozie -l -s /bin/bash -c 'ls /var/run/oozie/oozie.pid 
> >/dev/null 2>&1 && ps -p `cat /var/run/oozie/oozie.pid` >/dev/null 2>&1'", 
> 'user': 'oozie'}
> 2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this Oozie 
> server doesn't contain directory /usr/hdp/None/atlas/hook/hive/
> {noformat}
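
A hedged sketch of the defensive lookup the fix implies: never format a missing 
'version' into the hook path. The function and key names are assumptions 
modeled on typical Ambari command JSON, not the actual patch.

{code}
# Sketch only: resolve the stack version with a fallback so a plain START
# command never yields /usr/hdp/None/atlas/hook/hive/.
def atlas_hook_dir(config):
    # 'version' is normally populated only for upgrade commands; a plain
    # START leaves it unset, which previously became the string "None".
    version = config.get("commandParams", {}).get("version")
    if not version:
        version = config.get("hostLevelParams", {}).get("current_version")
    if not version:
        raise ValueError("cannot determine stack version for the Atlas hook dir")
    return "/usr/hdp/%s/atlas/hook/hive/" % version

# A START command without commandParams/version still resolves correctly:
cmd = {"commandParams": {}, "hostLevelParams": {"current_version": "2.5.0.0-1237"}}
print(atlas_hook_dir(cmd))  # -> /usr/hdp/2.5.0.0-1237/atlas/hook/hive/
{code}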



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18239) oozie.py is reading invalid 'version' attribute which results in not copying required atlas hook jars

2016-08-23 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18239:
---
Attachment: AMBARI-18239.patch

> oozie.py is reading invalid 'version' attribute which results in not copying 
> required atlas hook jars
> -
>
> Key: AMBARI-18239
> URL: https://issues.apache.org/jira/browse/AMBARI-18239
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk, 2.4.0
>Reporter: Ayub Khan
>Assignee: Jayush Luniya
>Priority: Critical
> Fix For: trunk
>
> Attachments: AMBARI-18239.patch, AMBARI-18239.patch
>
>
> *Oozie Server start output from ambari-agent shows this error - 
> "2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this 
> Oozie server doesn't contain directory /usr/hdp/None/atlas/hook/hive/"*
> {noformat}
> 2016-08-23 07:21:53,147 - call returned (0, '')
> 2016-08-23 07:21:53,148 - 
> Execute['/usr/hdp/current/oozie-server/bin/oozie-setup.sh sharelib create -fs 
> hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020 -locallib 
> /usr/hdp/current/oozie-server/share'] {'path': 
> [u'/usr/hdp/current/oozie-server/bin:/usr/hdp/current/hadoop-client/bin'], 
> 'user': 'oozie'}
> 2016-08-23 07:23:33,091 - HdfsResource['/user/oozie/share'] 
> {'security_enabled': True, 'hadoop_bin_dir': 
> '/usr/hdp/current/hadoop-client/bin', 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
> 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 'user': 'hdfs', 
> 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
> 'recursive_chmod': True, 'action': ['create_on_execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 
> 'immutable_paths': [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', 
> u'/mr-history/done', u'/apps/falcon'], 'mode': 0755}
> 2016-08-23 07:23:33,093 - Execute['/usr/bin/kinit -kt 
> /etc/security/keytabs/hdfs.headless.keytab h...@example.com'] {'user': 'hdfs'}
> 2016-08-23 07:23:33,170 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 
> 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET --negotiate -u : 
> '"'"'http://nat-r7-pcds-falcon-multi-9.openstacklocal:20070/webhdfs/v1/user/oozie/share?op=GETFILESTATUS=hdfs'"'"'
>  1>/tmp/tmp2xvm99 2>/tmp/tmpwdKIRi''] {'logoutput': None, 'quiet': False}
> 2016-08-23 07:23:33,259 - call returned (0, '')
> 2016-08-23 07:23:33,261 - HdfsResource[None] {'security_enabled': True, 
> 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
> 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 
> 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
> 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'immutable_paths': 
> [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', u'/mr-history/done', 
> u'/apps/falcon']}
> 2016-08-23 07:23:33,261 - Execute['cd /var/tmp/oozie && 
> /usr/hdp/current/oozie-server/bin/oozie-start.sh'] {'environment': 
> {'OOZIE_CONFIG': u'/usr/hdp/current/oozie-server/conf'}, 'not_if': 
> "ambari-sudo.sh su oozie -l -s /bin/bash -c 'ls /var/run/oozie/oozie.pid 
> >/dev/null 2>&1 && ps -p `cat /var/run/oozie/oozie.pid` >/dev/null 2>&1'", 
> 'user': 'oozie'}
> 2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this Oozie 
> server doesn't contain directory /usr/hdp/None/atlas/hook/hive/
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18236) Fix package structure in Logfeeder

2016-08-23 Thread Miklos Gergely (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated AMBARI-18236:

Attachment: AMBARI-18236.patch

> Fix package structure in Logfeeder
> --
>
> Key: AMBARI-18236
> URL: https://issues.apache.org/jira/browse/AMBARI-18236
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-logsearch
>Affects Versions: 2.5.0
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
> Fix For: 2.5.0
>
> Attachments: AMBARI-18236.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18239) oozie.py is reading invalid 'version' attribute which results in not copying required atlas hook jars

2016-08-23 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18239:
---
Priority: Critical  (was: Major)

> oozie.py is reading invalid 'version' attribute which results in not copying 
> required atlas hook jars
> -
>
> Key: AMBARI-18239
> URL: https://issues.apache.org/jira/browse/AMBARI-18239
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk, 2.4.0
>Reporter: Ayub Khan
>Assignee: Jayush Luniya
>Priority: Critical
> Fix For: trunk
>
> Attachments: AMBARI-18239.patch
>
>
> *Oozie Server start output from ambari-agent shows this error - 
> "2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this 
> Oozie server doesn't contain directory /usr/hdp/None/atlas/hook/hive/"*
> {noformat}
> 2016-08-23 07:21:53,147 - call returned (0, '')
> 2016-08-23 07:21:53,148 - 
> Execute['/usr/hdp/current/oozie-server/bin/oozie-setup.sh sharelib create -fs 
> hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020 -locallib 
> /usr/hdp/current/oozie-server/share'] {'path': 
> [u'/usr/hdp/current/oozie-server/bin:/usr/hdp/current/hadoop-client/bin'], 
> 'user': 'oozie'}
> 2016-08-23 07:23:33,091 - HdfsResource['/user/oozie/share'] 
> {'security_enabled': True, 'hadoop_bin_dir': 
> '/usr/hdp/current/hadoop-client/bin', 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
> 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 'user': 'hdfs', 
> 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
> 'recursive_chmod': True, 'action': ['create_on_execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 
> 'immutable_paths': [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', 
> u'/mr-history/done', u'/apps/falcon'], 'mode': 0755}
> 2016-08-23 07:23:33,093 - Execute['/usr/bin/kinit -kt 
> /etc/security/keytabs/hdfs.headless.keytab h...@example.com'] {'user': 'hdfs'}
> 2016-08-23 07:23:33,170 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 
> 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET --negotiate -u : 
> '"'"'http://nat-r7-pcds-falcon-multi-9.openstacklocal:20070/webhdfs/v1/user/oozie/share?op=GETFILESTATUS=hdfs'"'"'
>  1>/tmp/tmp2xvm99 2>/tmp/tmpwdKIRi''] {'logoutput': None, 'quiet': False}
> 2016-08-23 07:23:33,259 - call returned (0, '')
> 2016-08-23 07:23:33,261 - HdfsResource[None] {'security_enabled': True, 
> 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
> 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 
> 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
> 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'immutable_paths': 
> [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', u'/mr-history/done', 
> u'/apps/falcon']}
> 2016-08-23 07:23:33,261 - Execute['cd /var/tmp/oozie && 
> /usr/hdp/current/oozie-server/bin/oozie-start.sh'] {'environment': 
> {'OOZIE_CONFIG': u'/usr/hdp/current/oozie-server/conf'}, 'not_if': 
> "ambari-sudo.sh su oozie -l -s /bin/bash -c 'ls /var/run/oozie/oozie.pid 
> >/dev/null 2>&1 && ps -p `cat /var/run/oozie/oozie.pid` >/dev/null 2>&1'", 
> 'user': 'oozie'}
> 2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this Oozie 
> server doesn't contain directory /usr/hdp/None/atlas/hook/hive/
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (AMBARI-18239) oozie.py is reading invalid 'version' attribute which results in not copying required atlas hook jars

2016-08-23 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya reassigned AMBARI-18239:
--

Assignee: Jayush Luniya  (was: Ayub Khan)

> oozie.py is reading invalid 'version' attribute which results in not copying 
> required atlas hook jars
> -
>
> Key: AMBARI-18239
> URL: https://issues.apache.org/jira/browse/AMBARI-18239
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk, 2.4.0
>Reporter: Ayub Khan
>Assignee: Jayush Luniya
> Fix For: trunk
>
> Attachments: AMBARI-18239.patch
>
>
> *Oozie Server start output from ambari-agent shows this error - 
> "2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this 
> Oozie server doesn't contain directory /usr/hdp/None/atlas/hook/hive/"*
> {noformat}
> 2016-08-23 07:21:53,147 - call returned (0, '')
> 2016-08-23 07:21:53,148 - 
> Execute['/usr/hdp/current/oozie-server/bin/oozie-setup.sh sharelib create -fs 
> hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020 -locallib 
> /usr/hdp/current/oozie-server/share'] {'path': 
> [u'/usr/hdp/current/oozie-server/bin:/usr/hdp/current/hadoop-client/bin'], 
> 'user': 'oozie'}
> 2016-08-23 07:23:33,091 - HdfsResource['/user/oozie/share'] 
> {'security_enabled': True, 'hadoop_bin_dir': 
> '/usr/hdp/current/hadoop-client/bin', 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
> 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 'user': 'hdfs', 
> 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
> 'recursive_chmod': True, 'action': ['create_on_execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 
> 'immutable_paths': [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', 
> u'/mr-history/done', u'/apps/falcon'], 'mode': 0755}
> 2016-08-23 07:23:33,093 - Execute['/usr/bin/kinit -kt 
> /etc/security/keytabs/hdfs.headless.keytab h...@example.com'] {'user': 'hdfs'}
> 2016-08-23 07:23:33,170 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 
> 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET --negotiate -u : 
> '"'"'http://nat-r7-pcds-falcon-multi-9.openstacklocal:20070/webhdfs/v1/user/oozie/share?op=GETFILESTATUS=hdfs'"'"'
>  1>/tmp/tmp2xvm99 2>/tmp/tmpwdKIRi''] {'logoutput': None, 'quiet': False}
> 2016-08-23 07:23:33,259 - call returned (0, '')
> 2016-08-23 07:23:33,261 - HdfsResource[None] {'security_enabled': True, 
> 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
> 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 
> 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
> 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'immutable_paths': 
> [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', u'/mr-history/done', 
> u'/apps/falcon']}
> 2016-08-23 07:23:33,261 - Execute['cd /var/tmp/oozie && 
> /usr/hdp/current/oozie-server/bin/oozie-start.sh'] {'environment': 
> {'OOZIE_CONFIG': u'/usr/hdp/current/oozie-server/conf'}, 'not_if': 
> "ambari-sudo.sh su oozie -l -s /bin/bash -c 'ls /var/run/oozie/oozie.pid 
> >/dev/null 2>&1 && ps -p `cat /var/run/oozie/oozie.pid` >/dev/null 2>&1'", 
> 'user': 'oozie'}
> 2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this Oozie 
> server doesn't contain directory /usr/hdp/None/atlas/hook/hive/
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (AMBARI-18245) Upgrade node to version 4.x

2016-08-23 Thread Yusaku Sako (JIRA)
Yusaku Sako created AMBARI-18245:


 Summary: Upgrade node to version 4.x
 Key: AMBARI-18245
 URL: https://issues.apache.org/jira/browse/AMBARI-18245
 Project: Ambari
  Issue Type: Bug
  Components: ambari-admin, ambari-views, ambari-web
Affects Versions: trunk
Reporter: Yusaku Sako
Assignee: Zhe (Joe) Wang
 Fix For: trunk


We are currently using Node 0.10, which is very old and goes EOL on 2016-10-01: 
https://github.com/nodejs/LTS

We should look into upgrading to Node 4.x:

* Upgrade Node on Ambari Web 
* Upgrade Node on Ambari Admin
* Upgrade Node on contrib/views/* modules



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18245) Upgrade node to version 4.x

2016-08-23 Thread Yusaku Sako (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yusaku Sako updated AMBARI-18245:
-
Issue Type: Task  (was: Bug)

> Upgrade node to version 4.x
> ---
>
> Key: AMBARI-18245
> URL: https://issues.apache.org/jira/browse/AMBARI-18245
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-admin, ambari-views, ambari-web
>Affects Versions: trunk
>Reporter: Yusaku Sako
>Assignee: Zhe (Joe) Wang
> Fix For: trunk
>
>
> We are currently using Node 0.10, which is very old and goes EOL on 2016-10-01: 
> https://github.com/nodejs/LTS
> We should look into upgrading to Node 4.x:
> * Upgrade Node on Ambari Web 
> * Upgrade Node on Ambari Admin
> * Upgrade Node on contrib/views/* modules



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18239) oozie.py is reading invalid 'version' attribute which results in not copying required atlas hook jars

2016-08-23 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-18239:
---
Status: Open  (was: Patch Available)

> oozie.py is reading invalid 'version' attribute which results in not copying 
> required atlas hook jars
> -
>
> Key: AMBARI-18239
> URL: https://issues.apache.org/jira/browse/AMBARI-18239
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk, 2.4.0
>Reporter: Ayub Khan
>Assignee: Jayush Luniya
> Fix For: trunk
>
> Attachments: AMBARI-18239.patch
>
>
> *Oozie Server start output from ambari-agent shows this error - 
> "2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this 
> Oozie server doesn't contain directory /usr/hdp/None/atlas/hook/hive/"*
> {noformat}
> 2016-08-23 07:21:53,147 - call returned (0, '')
> 2016-08-23 07:21:53,148 - 
> Execute['/usr/hdp/current/oozie-server/bin/oozie-setup.sh sharelib create -fs 
> hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020 -locallib 
> /usr/hdp/current/oozie-server/share'] {'path': 
> [u'/usr/hdp/current/oozie-server/bin:/usr/hdp/current/hadoop-client/bin'], 
> 'user': 'oozie'}
> 2016-08-23 07:23:33,091 - HdfsResource['/user/oozie/share'] 
> {'security_enabled': True, 'hadoop_bin_dir': 
> '/usr/hdp/current/hadoop-client/bin', 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
> 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 'user': 'hdfs', 
> 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
> 'recursive_chmod': True, 'action': ['create_on_execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 
> 'immutable_paths': [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', 
> u'/mr-history/done', u'/apps/falcon'], 'mode': 0755}
> 2016-08-23 07:23:33,093 - Execute['/usr/bin/kinit -kt 
> /etc/security/keytabs/hdfs.headless.keytab h...@example.com'] {'user': 'hdfs'}
> 2016-08-23 07:23:33,170 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 
> 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET --negotiate -u : 
> '"'"'http://nat-r7-pcds-falcon-multi-9.openstacklocal:20070/webhdfs/v1/user/oozie/share?op=GETFILESTATUS=hdfs'"'"'
>  1>/tmp/tmp2xvm99 2>/tmp/tmpwdKIRi''] {'logoutput': None, 'quiet': False}
> 2016-08-23 07:23:33,259 - call returned (0, '')
> 2016-08-23 07:23:33,261 - HdfsResource[None] {'security_enabled': True, 
> 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
> 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 
> 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
> 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'immutable_paths': 
> [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', u'/mr-history/done', 
> u'/apps/falcon']}
> 2016-08-23 07:23:33,261 - Execute['cd /var/tmp/oozie && 
> /usr/hdp/current/oozie-server/bin/oozie-start.sh'] {'environment': 
> {'OOZIE_CONFIG': u'/usr/hdp/current/oozie-server/conf'}, 'not_if': 
> "ambari-sudo.sh su oozie -l -s /bin/bash -c 'ls /var/run/oozie/oozie.pid 
> >/dev/null 2>&1 && ps -p `cat /var/run/oozie/oozie.pid` >/dev/null 2>&1'", 
> 'user': 'oozie'}
> 2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this Oozie 
> server doesn't contain directory /usr/hdp/None/atlas/hook/hive/
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18244) Add Service for Atlas did not call conf-select, so failed to find /etc/atlas/conf/users-credentials.properties

2016-08-23 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15433890#comment-15433890
 ] 

Jayush Luniya commented on AMBARI-18244:


+1

> Add Service for Atlas did not call conf-select, so failed to find 
> /etc/atlas/conf/users-credentials.properties 
> ---
>
> Key: AMBARI-18244
> URL: https://issues.apache.org/jira/browse/AMBARI-18244
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: 2.4.0
>Reporter: Alejandro Fernandez
>Assignee: Alejandro Fernandez
> Fix For: trunk
>
> Attachments: AMBARI-18244.patch
>
>
> STR:
> * Install Ambari 2.4.0.0 with HDP 2.5.0.0 and basic services except Atlas
> * Add Atlas service
> On the Atlas server host, the file 
> /etc/atlas/conf/users-credentials.properties is missing. This is because 
> conf-select was not called after the service was added: its mapping did not 
> include Atlas.
> Right now,
> {noformat}
> ls -la /etc/atlas/conf/  (this is a dir)
> -rw-r--r-- 1 root root 207 Aug 22 14:57 users-credentials.properties
> ls -la /usr/hdp/current/atlas-client
> lrwxrwxrwx 1 root root 27 Aug 23 23:24 /usr/hdp/current/atlas-client -> 
> /usr/hdp/2.5.0.0-1237/atlas
> # This is incorrect
> ls -la /usr/hdp/2.5.0.0-1237/atlas/conf 
> lrwxrwxrwx 1 root root 15 Aug 23 23:24 /usr/hdp/2.5.0.0-1237/atlas/conf -> 
> /etc/atlas/conf
> {noformat}
> To fix this, we need to have /etc/atlas/conf -> 
> /usr/hdp/current/atlas-client/conf and /usr/hdp/2.5.0.0-1237/atlas/conf -> 
> /etc/atlas/2.5.0.0-1237/0



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18244) Add Service for Atlas did not call conf-select, so failed to find /etc/atlas/conf/users-credentials.properties

2016-08-23 Thread Nate Cole (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15433879#comment-15433879
 ] 

Nate Cole commented on AMBARI-18244:


+1

> Add Service for Atlas did not call conf-select, so failed to find 
> /etc/atlas/conf/users-credentials.properties 
> ---
>
> Key: AMBARI-18244
> URL: https://issues.apache.org/jira/browse/AMBARI-18244
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: 2.4.0
>Reporter: Alejandro Fernandez
>Assignee: Alejandro Fernandez
> Fix For: trunk
>
> Attachments: AMBARI-18244.patch
>
>
> STR:
> * Install Ambari 2.4.0.0 with HDP 2.5.0.0 and basic services except Atlas
> * Add Atlas service
> On the Atlas server host, the file 
> /etc/atlas/conf/users-credentials.properties is missing. This is because 
> conf-select was not called after the service was added: its mapping did not 
> include Atlas.
> Right now,
> {noformat}
> ls -la /etc/atlas/conf/  (this is a dir)
> -rw-r--r-- 1 root root 207 Aug 22 14:57 users-credentials.properties
> ls -la /usr/hdp/current/atlas-client
> lrwxrwxrwx 1 root root 27 Aug 23 23:24 /usr/hdp/current/atlas-client -> 
> /usr/hdp/2.5.0.0-1237/atlas
> # This is incorrect
> ls -la /usr/hdp/2.5.0.0-1237/atlas/conf 
> lrwxrwxrwx 1 root root 15 Aug 23 23:24 /usr/hdp/2.5.0.0-1237/atlas/conf -> 
> /etc/atlas/conf
> {noformat}
> To fix this, we need to have /etc/atlas/conf -> 
> /usr/hdp/current/atlas-client/conf and /usr/hdp/2.5.0.0-1237/atlas/conf -> 
> /etc/atlas/2.5.0.0-1237/0



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18244) Add Service for Atlas did not call conf-select, so failed to find /etc/atlas/conf/users-credentials.properties

2016-08-23 Thread Jonathan Hurley (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15433881#comment-15433881
 ] 

Jonathan Hurley commented on AMBARI-18244:
--

+1

> Add Service for Atlas did not call conf-select, so failed to find 
> /etc/atlas/conf/users-credentials.properties 
> ---
>
> Key: AMBARI-18244
> URL: https://issues.apache.org/jira/browse/AMBARI-18244
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: 2.4.0
>Reporter: Alejandro Fernandez
>Assignee: Alejandro Fernandez
> Fix For: trunk
>
> Attachments: AMBARI-18244.patch
>
>
> STR:
> * Install Ambari 2.4.0.0 with HDP 2.5.0.0 and basic services except Atlas
> * Add Atlas service
> On the Atlas server host, the file 
> /etc/atlas/conf/users-credentials.properties is missing. This is because 
> conf-select was not called after the service was added: its mapping did not 
> include Atlas.
> Right now,
> {noformat}
> ls -la /etc/atlas/conf/  (this is a dir)
> -rw-r--r-- 1 root root 207 Aug 22 14:57 users-credentials.properties
> ls -la /usr/hdp/current/atlas-client
> lrwxrwxrwx 1 root root 27 Aug 23 23:24 /usr/hdp/current/atlas-client -> 
> /usr/hdp/2.5.0.0-1237/atlas
> # This is incorrect
> ls -la /usr/hdp/2.5.0.0-1237/atlas/conf 
> lrwxrwxrwx 1 root root 15 Aug 23 23:24 /usr/hdp/2.5.0.0-1237/atlas/conf -> 
> /etc/atlas/conf
> {noformat}
> To fix this, we need to have /etc/atlas/conf -> 
> /usr/hdp/current/atlas-client/conf and /usr/hdp/2.5.0.0-1237/atlas/conf -> 
> /etc/atlas/2.5.0.0-1237/0



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18244) Add Service for Atlas did not call conf-select, so failed to find /etc/atlas/conf/users-credentials.properties

2016-08-23 Thread Alejandro Fernandez (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15433877#comment-15433877
 ] 

Alejandro Fernandez commented on AMBARI-18244:
--

Python unit tests passed:
--
Total run:1123
Total errors:0
Total failures:0
OK

> Add Service for Atlas did not call conf-select, so failed to find 
> /etc/atlas/conf/users-credentials.properties 
> ---
>
> Key: AMBARI-18244
> URL: https://issues.apache.org/jira/browse/AMBARI-18244
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: 2.4.0
>Reporter: Alejandro Fernandez
>Assignee: Alejandro Fernandez
> Fix For: trunk
>
> Attachments: AMBARI-18244.patch
>
>
> STR:
> * Install Ambari 2.4.0.0 with HDP 2.5.0.0 and basic services except Atlas
> * Add Atlas service
> On the Atlas server host, the file 
> /etc/atlas/conf/users-credentials.properties is missing. This is because 
> conf-select was not called after the service was added: its mapping did not 
> include Atlas.
> Right now,
> {noformat}
> ls -la /etc/atlas/conf/  (this is a dir)
> -rw-r--r-- 1 root root 207 Aug 22 14:57 users-credentials.properties
> ls -la /usr/hdp/current/atlas-client
> lrwxrwxrwx 1 root root 27 Aug 23 23:24 /usr/hdp/current/atlas-client -> 
> /usr/hdp/2.5.0.0-1237/atlas
> # This is incorrect
> ls -la /usr/hdp/2.5.0.0-1237/atlas/conf 
> lrwxrwxrwx 1 root root 15 Aug 23 23:24 /usr/hdp/2.5.0.0-1237/atlas/conf -> 
> /etc/atlas/conf
> {noformat}
> To fix this, we need to have /etc/atlas/conf -> 
> /usr/hdp/current/atlas-client/conf and /usr/hdp/2.5.0.0-1237/atlas/conf -> 
> /etc/atlas/2.5.0.0-1237/0



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18244) Add Service for Atlas did not call conf-select, so failed to find /etc/atlas/conf/users-credentials.properties

2016-08-23 Thread Alejandro Fernandez (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Fernandez updated AMBARI-18244:
-
Attachment: AMBARI-18244.patch

> Add Service for Atlas did not call conf-select, so failed to find 
> /etc/atlas/conf/users-credentials.properties 
> ---
>
> Key: AMBARI-18244
> URL: https://issues.apache.org/jira/browse/AMBARI-18244
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: 2.4.0
>Reporter: Alejandro Fernandez
>Assignee: Alejandro Fernandez
> Fix For: trunk
>
> Attachments: AMBARI-18244.patch
>
>
> STR:
> * Install Ambari 2.4.0.0 with HDP 2.5.0.0 and basic services except Atlas
> * Add Atlas service
> On the Atlas server host, the file 
> /etc/atlas/conf/users-credentials.properties is missing. This is because 
> conf-select was not called after the service was added: its mapping did not 
> include Atlas.
> Right now,
> {noformat}
> ls -la /etc/atlas/conf/  (this is a dir)
> -rw-r--r-- 1 root root 207 Aug 22 14:57 users-credentials.properties
> ls -la /usr/hdp/current/atlas-client
> lrwxrwxrwx 1 root root 27 Aug 23 23:24 /usr/hdp/current/atlas-client -> 
> /usr/hdp/2.5.0.0-1237/atlas
> # This is incorrect
> ls -la /usr/hdp/2.5.0.0-1237/atlas/conf 
> lrwxrwxrwx 1 root root 15 Aug 23 23:24 /usr/hdp/2.5.0.0-1237/atlas/conf -> 
> /etc/atlas/conf
> {noformat}
> To fix this, we need to have /etc/atlas/conf -> 
> /usr/hdp/current/atlas-client/conf and /usr/hdp/2.5.0.0-1237/atlas/conf -> 
> /etc/atlas/2.5.0.0-1237/0



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18244) Add Service for Atlas did not call conf-select, so failed to find /etc/atlas/conf/users-credentials.properties

2016-08-23 Thread Alejandro Fernandez (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Fernandez updated AMBARI-18244:
-
Status: Patch Available  (was: Open)

> Add Service for Atlas did not call conf-select, so failed to find 
> /etc/atlas/conf/users-credentials.properties 
> ---
>
> Key: AMBARI-18244
> URL: https://issues.apache.org/jira/browse/AMBARI-18244
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: 2.4.0
>Reporter: Alejandro Fernandez
>Assignee: Alejandro Fernandez
> Fix For: trunk
>
> Attachments: AMBARI-18244.patch
>
>
> STR:
> * Install Ambari 2.4.0.0 with HDP 2.5.0.0 and basic services except Atlas
> * Add Atlas service
> On the Atlas server host, the file 
> /etc/atlas/conf/users-credentials.properties is missing. This is because 
> conf-select was not called after the service was added: its mapping did not 
> include Atlas.
> Right now,
> {noformat}
> ls -la /etc/atlas/conf/  (this is a dir)
> -rw-r--r-- 1 root root 207 Aug 22 14:57 users-credentials.properties
> ls -la /usr/hdp/current/atlas-client
> lrwxrwxrwx 1 root root 27 Aug 23 23:24 /usr/hdp/current/atlas-client -> 
> /usr/hdp/2.5.0.0-1237/atlas
> # This is incorrect
> ls -la /usr/hdp/2.5.0.0-1237/atlas/conf 
> lrwxrwxrwx 1 root root 15 Aug 23 23:24 /usr/hdp/2.5.0.0-1237/atlas/conf -> 
> /etc/atlas/conf
> {noformat}
> To fix this, we need to have /etc/atlas/conf -> 
> /usr/hdp/current/atlas-client/conf and /usr/hdp/2.5.0.0-1237/atlas/conf -> 
> /etc/atlas/2.5.0.0-1237/0



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18244) Add Service for Atlas did not call conf-select, so failed to find file users-credentials.properties

2016-08-23 Thread Alejandro Fernandez (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Fernandez updated AMBARI-18244:
-
Summary: Add Service for Atlas did not call conf-select, so failed to find 
file users-credentials.properties   (was: Add Service for Atlas did not call 
conf-select, so failed to copy file users-credentials.properties )

> Add Service for Atlas did not call conf-select, so failed to find file 
> users-credentials.properties 
> 
>
> Key: AMBARI-18244
> URL: https://issues.apache.org/jira/browse/AMBARI-18244
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: 2.4.0
>Reporter: Alejandro Fernandez
>Assignee: Alejandro Fernandez
> Fix For: trunk
>
>
> STR:
> * Install Ambari 2.4.0.0 with HDP 2.5.0.0 and basic services except Atlas
> * Add Atlas service
> On the Atlas server host, the file 
> /etc/atlas/conf/users-credentials.properties is missing. This is because 
> conf-select was not called after the service was added: its mapping did not 
> include Atlas.
> Right now,
> {noformat}
> ls -la /etc/atlas/conf/  (this is a dir)
> -rw-r--r-- 1 root root 207 Aug 22 14:57 users-credentials.properties
> ls -la /usr/hdp/current/atlas-client
> lrwxrwxrwx 1 root root 27 Aug 23 23:24 /usr/hdp/current/atlas-client -> 
> /usr/hdp/2.5.0.0-1237/atlas
> # This is incorrect
> ls -la /usr/hdp/2.5.0.0-1237/atlas/conf 
> lrwxrwxrwx 1 root root 15 Aug 23 23:24 /usr/hdp/2.5.0.0-1237/atlas/conf -> 
> /etc/atlas/conf
> {noformat}
> To fix this, we need to have /etc/atlas/conf -> 
> /usr/hdp/current/atlas-client/conf and /usr/hdp/2.5.0.0-1237/atlas/conf -> 
> /etc/atlas/2.5.0.0-1237/0



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18244) Add Service for Atlas did not call conf-select, so failed to find /etc/atlas/conf/users-credentials.properties

2016-08-23 Thread Alejandro Fernandez (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Fernandez updated AMBARI-18244:
-
Summary: Add Service for Atlas did not call conf-select, so failed to find 
/etc/atlas/conf/users-credentials.properties   (was: Add Service for Atlas did 
not call conf-select, so failed to find file users-credentials.properties )

> Add Service for Atlas did not call conf-select, so failed to find 
> /etc/atlas/conf/users-credentials.properties 
> ---
>
> Key: AMBARI-18244
> URL: https://issues.apache.org/jira/browse/AMBARI-18244
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: 2.4.0
>Reporter: Alejandro Fernandez
>Assignee: Alejandro Fernandez
> Fix For: trunk
>
>
> STR:
> * Install Ambari 2.4.0.0 with HDP 2.5.0.0 and basic services except Atlas
> * Add Atlas service
> On the Atlas server host, the file 
> /etc/atlas/conf/users-credentials.properties is missing. This is because 
> conf-select was not called after the service was added: its mapping did not 
> include Atlas.
> Right now,
> {noformat}
> ls -la /etc/atlas/conf/  (this is a dir)
> -rw-r--r-- 1 root root 207 Aug 22 14:57 users-credentials.properties
> ls -la /usr/hdp/current/atlas-client
> lrwxrwxrwx 1 root root 27 Aug 23 23:24 /usr/hdp/current/atlas-client -> 
> /usr/hdp/2.5.0.0-1237/atlas
> # This is incorrect
> ls -la /usr/hdp/2.5.0.0-1237/atlas/conf 
> lrwxrwxrwx 1 root root 15 Aug 23 23:24 /usr/hdp/2.5.0.0-1237/atlas/conf -> 
> /etc/atlas/conf
> {noformat}
> To fix this, we need to have /etc/atlas/conf -> 
> /usr/hdp/current/atlas-client/conf and /usr/hdp/2.5.0.0-1237/atlas/conf -> 
> /etc/atlas/2.5.0.0-1237/0



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18244) Add Service for Atlas did not call conf-select, so failed to copy file users-credentials.properties

2016-08-23 Thread Alejandro Fernandez (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Fernandez updated AMBARI-18244:
-
Description: 
STR:
* Install Ambari 2.2.2.0 with HDP 2.3.6.0
* Install Atlas
* Upgrade to Ambari 2.4.0.0
* Remove Atlas
* Stack Upgrade to HDP 2.5.0.0
* Re-add Atlas service

On the Atlas server host, the file /etc/atlas/conf/users-credentials.properties 
is missing. This is because conf-select was not called after the service was 
added: its mapping did not include Atlas.

Right now,
{noformat}
ls -la /etc/atlas/conf/  (this is a dir)
-rw-r--r-- 1 root root 207 Aug 22 14:57 users-credentials.properties

ls -la /usr/hdp/current/atlas-client
lrwxrwxrwx 1 root root 27 Aug 23 23:24 /usr/hdp/current/atlas-client -> 
/usr/hdp/2.5.0.0-1237/atlas

# This is incorrect
ls -la /usr/hdp/2.5.0.0-1237/atlas/conf 
lrwxrwxrwx 1 root root 15 Aug 23 23:24 /usr/hdp/2.5.0.0-1237/atlas/conf -> 
/etc/atlas/conf
{noformat}

To fix this, we need to have /etc/atlas/conf -> 
/usr/hdp/current/atlas-client/conf and /usr/hdp/2.5.0.0-1237/atlas/conf -> 
/etc/atlas/2.5.0.0-1237/0

  was:
STR:
* Install Ambari 2.2.2.0 with HDP 2.3.6.0
* Install Atlas
* Upgrade to Ambari 2.4.0.0
* Remove Atlas
* Stack Upgrade to HDP 2.5.0.0
* Re-add Atlas service

On the Atlas server host, the file /etc/atlas/conf/users-credentials.properties 
is missing. This is because conf-select was not called after the service was 
added: its mapping did not include Atlas.



> Add Service for Atlas did not call conf-select, so failed to copy file 
> users-credentials.properties 
> 
>
> Key: AMBARI-18244
> URL: https://issues.apache.org/jira/browse/AMBARI-18244
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: 2.4.0
>Reporter: Alejandro Fernandez
>Assignee: Alejandro Fernandez
> Fix For: trunk
>
>
> STR:
> * Install Ambari 2.2.2.0 with HDP 2.3.6.0
> * Install Atlas
> * Upgrade to Ambari 2.4.0.0
> * Remove Atlas
> * Stack Upgrade to HDP 2.5.0.0
> * Re-add Atlas service
> On the Atlas server host, the file 
> /etc/atlas/conf/users-credentials.properties is missing. This is because 
> conf-select was not called after the service was added: its mapping did not 
> include Atlas.
> Right now,
> {noformat}
> ls -la /etc/atlas/conf/  (this is a dir)
> -rw-r--r-- 1 root root 207 Aug 22 14:57 users-credentials.properties
> ls -la /usr/hdp/current/atlas-client
> lrwxrwxrwx 1 root root 27 Aug 23 23:24 /usr/hdp/current/atlas-client -> 
> /usr/hdp/2.5.0.0-1237/atlas
> # This is incorrect
> ls -la /usr/hdp/2.5.0.0-1237/atlas/conf 
> lrwxrwxrwx 1 root root 15 Aug 23 23:24 /usr/hdp/2.5.0.0-1237/atlas/conf -> 
> /etc/atlas/conf
> {noformat}
> To fix this, we need to have /etc/atlas/conf -> 
> /usr/hdp/current/atlas-client/conf and /usr/hdp/2.5.0.0-1237/atlas/conf -> 
> /etc/atlas/2.5.0.0-1237/0



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18243) WFManager view is broken in trunk builds

2016-08-23 Thread Yusaku Sako (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yusaku Sako updated AMBARI-18243:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to trunk.

> WFManager view is broken in trunk builds
> 
>
> Key: AMBARI-18243
> URL: https://issues.apache.org/jira/browse/AMBARI-18243
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-views
>Affects Versions: trunk
>Reporter: Venkat Ranganathan
>Assignee: Venkat Ranganathan
> Fix For: trunk
>
> Attachments: AMBARI-18243-2.patch, AMBARI-18243.patch
>
>
> The parent version and some of the dependency versions were hard-coded to 
> 2.4.0.0.0 - fixing this



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18243) WFManager view is broken in trunk builds

2016-08-23 Thread Aravindan Vijayan (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15433836#comment-15433836
 ] 

Aravindan Vijayan commented on AMBARI-18243:


+1

> WFManager view is broken in trunk builds
> 
>
> Key: AMBARI-18243
> URL: https://issues.apache.org/jira/browse/AMBARI-18243
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-views
>Affects Versions: trunk
>Reporter: Venkat Ranganathan
>Assignee: Venkat Ranganathan
> Fix For: trunk
>
> Attachments: AMBARI-18243-2.patch, AMBARI-18243.patch
>
>
> The parent version and some of the dependency versions were hard-coded to 
> 2.4.0.0.0 - fixing this



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18243) WFManager view is broken in trunk builds

2016-08-23 Thread Yusaku Sako (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15433826#comment-15433826
 ] 

Yusaku Sako commented on AMBARI-18243:
--

I tried the original patch on the cluster and got:

[ERROR] Failed to execute goal 
com.github.eirslett:frontend-maven-plugin:1.0:install-node-and-npm (install 
node and npm) on project wfmanager: The plugin 
com.github.eirslett:frontend-maven-plugin:1.0 requires Maven version 3.1.0 -> 
[Help 1]

So I'm attaching a new patch that uses version 0.0.16 of the plugin, which is 
compatible with Maven 3.0.5 (the Maven version the development guide says to 
use).

> WFManager view is broken in trunk builds
> 
>
> Key: AMBARI-18243
> URL: https://issues.apache.org/jira/browse/AMBARI-18243
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-views
>Affects Versions: trunk
>Reporter: Venkat Ranganathan
>Assignee: Venkat Ranganathan
> Fix For: trunk
>
> Attachments: AMBARI-18243-2.patch, AMBARI-18243.patch
>
>
> The parent version and some of the dependency versions were hard-coded to 
> 2.4.0.0.0 - fixing this



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18243) WFManager view is broken in trunk builds

2016-08-23 Thread Yusaku Sako (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yusaku Sako updated AMBARI-18243:
-
Attachment: AMBARI-18243-2.patch

> WFManager view is broken in trunk builds
> 
>
> Key: AMBARI-18243
> URL: https://issues.apache.org/jira/browse/AMBARI-18243
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-views
>Affects Versions: trunk
>Reporter: Venkat Ranganathan
>Assignee: Venkat Ranganathan
> Fix For: trunk
>
> Attachments: AMBARI-18243-2.patch, AMBARI-18243.patch
>
>
> The parent version and some of the dependency versions were hard-coded to 
> 2.4.0.0.0 - fixing this



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18243) WFManager view is broken in trunk builds

2016-08-23 Thread Yusaku Sako (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yusaku Sako updated AMBARI-18243:
-
Status: Patch Available  (was: Open)

> WFManager view is broken in trunk builds
> 
>
> Key: AMBARI-18243
> URL: https://issues.apache.org/jira/browse/AMBARI-18243
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-views
>Affects Versions: trunk
>Reporter: Venkat Ranganathan
>Assignee: Venkat Ranganathan
> Fix For: trunk
>
> Attachments: AMBARI-18243-2.patch, AMBARI-18243.patch
>
>
> The parent version and some of the dependency versions were hard-coded to 
> 2.4.0.0.0 - fixing this



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18240) During a Rolling Downgrade Oozie Long Running Jobs Can Fail

2016-08-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15433819#comment-15433819
 ] 

Hudson commented on AMBARI-18240:
-

SUCCESS: Integrated in Jenkins build Ambari-trunk-Commit #5578 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/5578/])
AMBARI-18240 - During a Rolling Downgrade Oozie Long Running Jobs Can Fail (jhurley: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=04a534ceacb1887c4666c97ea0d1a2670fe4a1cd])
* (edit) ambari-server/src/test/python/stacks/2.0.6/HDFS/test_namenode.py
* (edit) 
ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py


> During a Rolling Downgrade Oozie Long Running Jobs Can Fail
> ---
>
> Key: AMBARI-18240
> URL: https://issues.apache.org/jira/browse/AMBARI-18240
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Jonathan Hurley
>Assignee: Jonathan Hurley
>Priority: Blocker
> Fix For: trunk
>
> Attachments: AMBARI-18240.patch
>
>
> - Install HDP-2.3.2.0-2950 with Ambari 2.4.0
> - Begin a long-running job (LRJ) in Oozie
> - Start upgrading to HDP-2.5.0.0-1235
> - Before finalizing step, start downgrading to HDP-2.3.2.0-2950. 
> Sometimes, the LRJ will fail:
> {code}
> /usr/hdp/current/oozie-client/bin/oozie job -oozie 
> http://natr66-grls-dlm10toeriedwngdsec-r6-10.openstacklocal:11000/oozie   
> -info 001-160821214718970-oozie-oozi-C@248 
> ID : 001-160821214718970-oozie-oozi-C@248
> 
> Action Number: 248
> Console URL  : -
> Error Code   : -
> Error Message: -
> External ID  : 030-160822042035608-oozie-oozi-W
> External Status  : -
> Job ID   : 001-160821214718970-oozie-oozi-C
> Tracker URI  : -
> Created  : 2016-08-22 00:37 GMT
> Nominal Time : 2009-01-01 21:35 GMT
> Status   : FAILED
> Last Modified: 2016-08-22 05:15 GMT
> First Missing Dependency : -
> 
> [hrt_qa@natr66-grls-dlm10toeriedwngdsec-r6-21 ~]$  
> /usr/hdp/current/oozie-client/bin/oozie job -oozie 
> http://natr66-grls-dlm10toeriedwngdsec-r6-10.openstacklocal:11000/oozie   
> -info 030-160822042035608-oozie-oozi-W
> Job ID : 030-160822042035608-oozie-oozi-W
> 
> Workflow Name : wordcount
> App Path  : hdfs://nameservice/user/hrt_qa/test_oozie_long_running
> Status: FAILED
> Run   : 0
> User  : hrt_qa
> Group : -
> Created   : 2016-08-22 05:08 GMT
> Started   : 2016-08-22 05:08 GMT
> Last Modified : 2016-08-22 05:15 GMT
> Ended : 2016-08-22 05:15 GMT
> CoordAction ID: 001-160821214718970-oozie-oozi-C@248
> Actions
> 
> ID                                         Status  Ext ID                   Ext Status  Err Code
> 030-160822042035608-oozie-oozi-W@wc        FAILED  job_1471842441396_0002   FAILED      JA017
> 030-160822042035608-oozie-oozi-W@:start:   OK      -                        OK          -
> 
> {code}
> This is caused by an outage of both NameNodes during the downgrade. 
> - We have two NNs at the "Finalize Upgrade" state; 
> -- nn1 is standby (out of safemode)
> -- nn2 is active (out of safemode)
> - A downgrade begins and we restart nn1
> -- After the restart of nn1, it hasn't come online yet. Our code tries to 
> contact it and can't, so we move onto nn2.
> -- nn2 is online and active and out of safemode (because it hasn't been 
> downgraded yet), so we let the downgrade continue
> - The downgrade continues and we restart nn2
> -- However, nn1 is still coming online and isn't even standby yet
> Now we have an nn1 which isn't fully loaded and an nn2 which is restarting 
> and trying to figure out whether to be active or standby. It's during this 
> gap that the tests must be failing. 
> So, it seems like we need to be a little bit smarter about waiting for the 
> namenode to restart; we can't just look at the "active" one and say things 
> are OK because it might be the next one to restart. 

[jira] [Commented] (AMBARI-18243) WFManager view is broken in trunk builds

2016-08-23 Thread Venkat Ranganathan (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15433658#comment-15433658
 ] 

Venkat Ranganathan commented on AMBARI-18243:
-

Skipping RB

> WFManager view is broken in trunk builds
> 
>
> Key: AMBARI-18243
> URL: https://issues.apache.org/jira/browse/AMBARI-18243
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-views
>Affects Versions: trunk
>Reporter: Venkat Ranganathan
>Assignee: Venkat Ranganathan
> Fix For: trunk
>
> Attachments: AMBARI-18243.patch
>
>
> The parent version and some of the dependency versions were hard-coded to 
> 2.4.0.0.0 - fixing this



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18243) WFManager view is broken in trunk builds

2016-08-23 Thread Venkat Ranganathan (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Venkat Ranganathan updated AMBARI-18243:

Attachment: AMBARI-18243.patch

> WFManager view is broken in trunk builds
> 
>
> Key: AMBARI-18243
> URL: https://issues.apache.org/jira/browse/AMBARI-18243
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-views
>Affects Versions: trunk
>Reporter: Venkat Ranganathan
>Assignee: Venkat Ranganathan
> Fix For: trunk
>
> Attachments: AMBARI-18243.patch
>
>
> The parent version and some of the dependency versions were hard-coded to 
> 2.4.0.0.0 - fixing this



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18243) WFManager view is broken in trunk builds

2016-08-23 Thread Venkat Ranganathan (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Venkat Ranganathan updated AMBARI-18243:

Fix Version/s: trunk

> WFManager view is broken in trunk builds
> 
>
> Key: AMBARI-18243
> URL: https://issues.apache.org/jira/browse/AMBARI-18243
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-views
>Affects Versions: trunk
>Reporter: Venkat Ranganathan
>Assignee: Venkat Ranganathan
> Fix For: trunk
>
> Attachments: AMBARI-18243.patch
>
>
> The parent version and some of the dependency versions were hard-coded to 
> 2.4.0.0.0 - fixing this



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (AMBARI-18243) WFManager view is broken in trunk builds

2016-08-23 Thread Venkat Ranganathan (JIRA)
Venkat Ranganathan created AMBARI-18243:
---

 Summary: WFManager view is broken in trunk builds
 Key: AMBARI-18243
 URL: https://issues.apache.org/jira/browse/AMBARI-18243
 Project: Ambari
  Issue Type: Bug
  Components: ambari-views
Affects Versions: trunk
Reporter: Venkat Ranganathan
Assignee: Venkat Ranganathan


The parent version and some of the dependency versions were hard-coded to 
2.4.0.0.0 - fixing this



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18240) During a Rolling Downgrade Oozie Long Running Jobs Can Fail

2016-08-23 Thread Jonathan Hurley (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hurley updated AMBARI-18240:
-
Status: Patch Available  (was: Open)

> During a Rolling Downgrade Oozie Long Running Jobs Can Fail
> ---
>
> Key: AMBARI-18240
> URL: https://issues.apache.org/jira/browse/AMBARI-18240
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Jonathan Hurley
>Assignee: Jonathan Hurley
>Priority: Blocker
> Fix For: trunk
>
> Attachments: AMBARI-18240.patch
>
>
> - Install HDP-2.3.2.0-2950 with Ambari 2.4.0
> - Begin a long-running job (LRJ) in Oozie
> - Start upgrading to HDP-2.5.0.0-1235
> - Before finalizing step, start downgrading to HDP-2.3.2.0-2950. 
> Sometimes, the LRJ will fail:
> {code}
> /usr/hdp/current/oozie-client/bin/oozie job -oozie 
> http://natr66-grls-dlm10toeriedwngdsec-r6-10.openstacklocal:11000/oozie   
> -info 001-160821214718970-oozie-oozi-C@248 
> ID : 001-160821214718970-oozie-oozi-C@248
> 
> Action Number: 248
> Console URL  : -
> Error Code   : -
> Error Message: -
> External ID  : 030-160822042035608-oozie-oozi-W
> External Status  : -
> Job ID   : 001-160821214718970-oozie-oozi-C
> Tracker URI  : -
> Created  : 2016-08-22 00:37 GMT
> Nominal Time : 2009-01-01 21:35 GMT
> Status   : FAILED
> Last Modified: 2016-08-22 05:15 GMT
> First Missing Dependency : -
> 
> [hrt_qa@natr66-grls-dlm10toeriedwngdsec-r6-21 ~]$  
> /usr/hdp/current/oozie-client/bin/oozie job -oozie 
> http://natr66-grls-dlm10toeriedwngdsec-r6-10.openstacklocal:11000/oozie   
> -info 030-160822042035608-oozie-oozi-W
> Job ID : 030-160822042035608-oozie-oozi-W
> 
> Workflow Name : wordcount
> App Path  : hdfs://nameservice/user/hrt_qa/test_oozie_long_running
> Status: FAILED
> Run   : 0
> User  : hrt_qa
> Group : -
> Created   : 2016-08-22 05:08 GMT
> Started   : 2016-08-22 05:08 GMT
> Last Modified : 2016-08-22 05:15 GMT
> Ended : 2016-08-22 05:15 GMT
> CoordAction ID: 001-160821214718970-oozie-oozi-C@248
> Actions
> 
> ID                                         Status  Ext ID                   Ext Status  Err Code
> 030-160822042035608-oozie-oozi-W@wc        FAILED  job_1471842441396_0002   FAILED      JA017
> 030-160822042035608-oozie-oozi-W@:start:   OK      -                        OK          -
> 
> {code}
> This is caused by an outage of both NameNodes during the downgrade. 
> - We have two NNs at the "Finalize Upgrade" state; 
> -- nn1 is standby (out of safemode)
> -- nn2 is active (out of safemode)
> - A downgrade begins and we restart nn1
> -- After the restart of nn1, it hasn't come online yet. Our code tries to 
> contact it and can't, so we move onto nn2.
> -- nn2 is online and active and out of safemode (because it hasn't been 
> downgraded yet), so we let the downgrade continue
> - The downgrade continues and we restart nn2
> -- However, nn1 is still coming online and isn't even standby yet
> Now we have an nn1 which isn't fully loaded and an nn2 which is restarting 
> and trying to figure out whether to be active or standby. It's during this 
> gap that the tests must be failing. 
> So, it seems like we need to be a little bit smarter about waiting for the 
> namenode to restart; we can't just look at the "active" one and say things 
> are OK because it might be the next one to restart. 
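
A hedged sketch of the "wait for the restarted NameNode itself" idea (the 
function name and JMX query are illustrative assumptions; per the Hudson 
comment above, the real change lives in hdfs_namenode.py):

{code}
# Sketch only: poll the NameNode that was just restarted until it reports an
# HA state itself, instead of declaring success because the *other* NameNode
# happens to be active.
import json
import time
import urllib2  # agent scripts of this era run on Python 2

def wait_for_ha_state(nn_http_address, timeout=600, interval=10):
    url = "http://%s/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem" % nn_http_address
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            beans = json.load(urllib2.urlopen(url, timeout=5)).get("beans", [])
            if beans and beans[0].get("tag.HAState") in ("active", "standby"):
                return beans[0]["tag.HAState"]
        except Exception:
            pass  # still restarting; keep polling
        time.sleep(interval)
    raise RuntimeError("%s did not report an HA state in time" % nn_http_address)

# Restart nn1, then block on nn1 itself before moving on to nn2, so both
# NameNodes are never down (or still booting) at the same time.
# wait_for_ha_state("nn1.example.com:50070")
{code}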



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18240) During a Rolling Downgrade Oozie Long Running Jobs Can Fail

2016-08-23 Thread Jonathan Hurley (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hurley updated AMBARI-18240:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

{code}
commit 04a534ceacb1887c4666c97ea0d1a2670fe4a1cd (HEAD -> trunk, origin/trunk, 
origin/HEAD)
Author: Jonathan Hurley 
Date:   Tue Aug 23 12:03:19 2016 -0400

AMBARI-18240 - During a Rolling Downgrade Oozie Long Running Jobs Can Fail 
(jonathanhurley)
{code}

> During a Rolling Downgrade Oozie Long Running Jobs Can Fail
> ---
>
> Key: AMBARI-18240
> URL: https://issues.apache.org/jira/browse/AMBARI-18240
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Jonathan Hurley
>Assignee: Jonathan Hurley
>Priority: Blocker
> Fix For: trunk
>
> Attachments: AMBARI-18240.patch
>
>
> - Install HDP-2.3.2.0-2950 with Ambari 2.4.0
> - Begin a long-running job (LRJ) in Oozie
> - Start upgrading to HDP-2.5.0.0-1235
> - Before finalizing step, start downgrading to HDP-2.3.2.0-2950. 
> Sometimes, the LRJ will fail:
> {code}
> /usr/hdp/current/oozie-client/bin/oozie job -oozie 
> http://natr66-grls-dlm10toeriedwngdsec-r6-10.openstacklocal:11000/oozie   
> -info 001-160821214718970-oozie-oozi-C@248 
> ID : 001-160821214718970-oozie-oozi-C@248
> 
> Action Number: 248
> Console URL  : -
> Error Code   : -
> Error Message: -
> External ID  : 030-160822042035608-oozie-oozi-W
> External Status  : -
> Job ID   : 001-160821214718970-oozie-oozi-C
> Tracker URI  : -
> Created  : 2016-08-22 00:37 GMT
> Nominal Time : 2009-01-01 21:35 GMT
> Status   : FAILED
> Last Modified: 2016-08-22 05:15 GMT
> First Missing Dependency : -
> 
> [hrt_qa@natr66-grls-dlm10toeriedwngdsec-r6-21 ~]$  
> /usr/hdp/current/oozie-client/bin/oozie job -oozie 
> http://natr66-grls-dlm10toeriedwngdsec-r6-10.openstacklocal:11000/oozie   
> -info 030-160822042035608-oozie-oozi-W
> Job ID : 030-160822042035608-oozie-oozi-W
> 
> Workflow Name : wordcount
> App Path  : hdfs://nameservice/user/hrt_qa/test_oozie_long_running
> Status: FAILED
> Run   : 0
> User  : hrt_qa
> Group : -
> Created   : 2016-08-22 05:08 GMT
> Started   : 2016-08-22 05:08 GMT
> Last Modified : 2016-08-22 05:15 GMT
> Ended : 2016-08-22 05:15 GMT
> CoordAction ID: 001-160821214718970-oozie-oozi-C@248
> Actions
> 
> ID                                         Status  Ext ID                   Ext Status  Err Code
> 030-160822042035608-oozie-oozi-W@wc        FAILED  job_1471842441396_0002   FAILED      JA017
> 030-160822042035608-oozie-oozi-W@:start:   OK      -                        OK          -
> 
> {code}
> This is caused by an outage of both NameNodes during the downgrade. 
> - We have two NNs at the "Finalize Upgrade" state; 
> -- nn1 is standby (out of safemode)
> -- nn2 is active (out of safemode)
> - A downgrade begins and we restart nn1
> -- After the restart of nn1, it hasn't come online yet. Our code tries to 
> contact it and can't, so we move onto nn2.
> -- nn2 is online and active and out of safemode (because it hasn't been 
> downgraded yet), so we let the downgrade continue
> - The downgrade continues and we restart nn2
> -- However, nn1 is still coming online and isn't even standby yet
> Now we have an nn1 which isn't fully loaded and an nn2 which is restarting 
> and trying to figure out whether to be active or standby. It's during this 
> gap that the tests must be failing. 
> So, it seems like we need to be a little bit smarter about waiting for the 
> namenode to restart; we can't just look at the "active" one and say things 
> are OK because it might be the next one to restart. 

[jira] [Updated] (AMBARI-18240) During a Rolling Downgrade Oozie Long Running Jobs Can Fail

2016-08-23 Thread Jonathan Hurley (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hurley updated AMBARI-18240:
-
Attachment: AMBARI-18240.patch

> During a Rolling Downgrade Oozie Long Running Jobs Can Fail
> ---
>
> Key: AMBARI-18240
> URL: https://issues.apache.org/jira/browse/AMBARI-18240
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Jonathan Hurley
>Assignee: Jonathan Hurley
>Priority: Blocker
> Fix For: trunk
>
> Attachments: AMBARI-18240.patch
>
>
> - Install HDP-2.3.2.0-2950 with Ambari 2.4.0
> - Begin a long-running job (LRJ) in Oozie
> - Start upgrading to HDP-2.5.0.0-1235
> - Before the finalize step, start downgrading to HDP-2.3.2.0-2950. 
> Sometimes, the LRJ will fail:
> {code}
> /usr/hdp/current/oozie-client/bin/oozie job -oozie 
> http://natr66-grls-dlm10toeriedwngdsec-r6-10.openstacklocal:11000/oozie   
> -info 001-160821214718970-oozie-oozi-C@248 
> ID : 001-160821214718970-oozie-oozi-C@248
> 
> Action Number: 248
> Console URL  : -
> Error Code   : -
> Error Message: -
> External ID  : 030-160822042035608-oozie-oozi-W
> External Status  : -
> Job ID   : 001-160821214718970-oozie-oozi-C
> Tracker URI  : -
> Created  : 2016-08-22 00:37 GMT
> Nominal Time : 2009-01-01 21:35 GMT
> Status   : FAILED
> Last Modified: 2016-08-22 05:15 GMT
> First Missing Dependency : -
> 
> [hrt_qa@natr66-grls-dlm10toeriedwngdsec-r6-21 ~]$  
> /usr/hdp/current/oozie-client/bin/oozie job -oozie 
> http://natr66-grls-dlm10toeriedwngdsec-r6-10.openstacklocal:11000/oozie   
> -info 030-160822042035608-oozie-oozi-W
> Job ID : 030-160822042035608-oozie-oozi-W
> 
> Workflow Name : wordcount
> App Path  : hdfs://nameservice/user/hrt_qa/test_oozie_long_running
> Status: FAILED
> Run   : 0
> User  : hrt_qa
> Group : -
> Created   : 2016-08-22 05:08 GMT
> Started   : 2016-08-22 05:08 GMT
> Last Modified : 2016-08-22 05:15 GMT
> Ended : 2016-08-22 05:15 GMT
> CoordAction ID: 001-160821214718970-oozie-oozi-C@248
> Actions
> 
> ID
> StatusExt ID Ext Status Err Code  
> 
> 030-160822042035608-oozie-oozi-W@wc   
> FAILEDjob_1471842441396_0002 FAILED JA017 
> 
> 030-160822042035608-oozie-oozi-W@:start:  
> OK-  OK - 
> 
> {code}
> This is caused by an outage of both NameNodes during the downgrade. 
> - We have two NNs at the "Finalize Upgrade" state; 
> -- nn1 is standby (out of safemode)
> -- nn2 is active (out of safemode)
> - A downgrade begins and we restart nn1
> -- After the restart of nn1, it hasn't come online yet. Our code tries to 
> contact it and can't, so we move onto nn2.
> -- nn2 is online and active and out of safemode (because it hasn't been 
> downgraded yet), so we let the downgrade continue
> - The downgrade continues and we restart nn2
> -- However, nn1 is still coming online and isn't even standby yet
> Now we have an nn1 which isn't fully loaded and an nn2 which is restarting 
> and trying to figure out whether to be active or standby. It's during this 
> gap that the tests must be failing. 
> So, it seems like we need to be a little bit smarter about waiting for the 
> namenode to restart; we can't just look at the "active" one and say things 
> are OK because it might be the next one to restart. 
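A minimal sketch of the "smarter wait" described above, assuming a hypothetical helper that queries a NameNode's HA state (the committed fix is in the attached AMBARI-18240.patch; this is illustrative only):

{code}
import time

# Poll the restarted NameNode until it reports a usable HA state, instead of
# falling back to whichever NameNode currently claims to be active.
# get_ha_state is a hypothetical callable standing in for the JMX query Ambari
# issues against the NameNode's HTTP endpoint; it may raise while the NN boots.
def wait_for_namenode_ha_state(get_ha_state, timeout=600, interval=10):
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            state = get_ha_state()
        except IOError:
            state = None  # not reachable yet; keep waiting instead of moving on
        if state in ("active", "standby"):
            return state
        time.sleep(interval)
    raise Exception("NameNode did not reach active/standby in %d seconds" % timeout)
{code}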



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18187) AMS should work in SPNEGO enabled clusters.

2016-08-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15433528#comment-15433528
 ] 

Hudson commented on AMBARI-18187:
-

FAILURE: Integrated in Jenkins build Ambari-trunk-Commit #5577 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/5577/])
AMBARI-18187 : AMS should work in SPNEGO enabled clusters. (avijayan) 
(avijayan: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=527a118d5b5840be2bda74b41fefe7c56092a91c])
* (edit) 
ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/ams.py
Revert "AMBARI-18187 : AMS should work in SPNEGO enabled clusters. (avijayan: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=2897071c10ef0476749162927189c42c5fd57f84])
* (edit) 
ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/ams.py
AMBARI-18187 : AMS should work in SPNEGO enabled clusters. (avijayan) 
(avijayan: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=cb0d4e1868d89f9d567b05ac8e0c6c7e33bc1f4a])
* (edit) 
ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/ams.py


> AMS should work in SPNEGO enabled clusters.
> ---
>
> Key: AMBARI-18187
> URL: https://issues.apache.org/jira/browse/AMBARI-18187
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-metrics
>Affects Versions: trunk
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Critical
> Fix For: trunk
>
> Attachments: AMBARI-18187.patch
>
>
> AMS should work in SPNEGO enabled clusters, even if AMS does not natively 
> support SPNEGO.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18191) "Restart all required" services operation failed at Metrics Collector since HDFS was not yet up

2016-08-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15433527#comment-15433527
 ] 

Hudson commented on AMBARI-18191:
-

FAILURE: Integrated in Jenkins build Ambari-trunk-Commit #5577 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/5577/])
AMBARI-18191. Restart all required services operation failed at Metrics 
(swagle: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=2cee921c8572b85ab598448280d1a7be5bd50b4e])
* (edit) 
ambari-server/src/test/java/org/apache/ambari/server/metadata/RoleCommandOrderTest.java
* (edit) 
ambari-server/src/main/java/org/apache/ambari/server/metadata/RoleCommandOrder.java


> "Restart all required" services operation failed at Metrics Collector since 
> HDFS was not yet up
> ---
>
> Key: AMBARI-18191
> URL: https://issues.apache.org/jira/browse/AMBARI-18191
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-metrics
>Affects Versions: 2.4.0
>Reporter: Aravindan Vijayan
>Assignee: Siddharth Wagle
>Priority: Blocker
> Fix For: 2.5.0
>
> Attachments: AMBARI-18191.patch
>
>
> ambari-server --hash
> 4017036da951a10f519a578de934308cf866ba50
> *Steps*
> # Deploy HDP-2.3.6 cluster with Ambari 2.2.2.0 (AMS is configured in 
> distributed mode)
> # Upgrade Ambari to 2.4.0.0 and let it complete
> # Open Ambari web UI and hit "Restart all required" under Actions menu
> *Result*
> The operation fails while trying to restart Metrics Collector as it tried to 
> make a WebHDFS call while HDFS was not started:
> {code}
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_collector.py",
>  line 148, in 
> AmsCollector().execute()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 280, in execute
> method(env)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 725, in restart
> self.start(env)
>   File 
> "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_collector.py",
>  line 46, in start
> self.configure(env, action = 'start') # for security
>   File 
> "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_collector.py",
>  line 41, in configure
> hbase('master', action)
>   File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", 
> line 89, in thunk
> return fn(*args, **kwargs)
>   File 
> "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/hbase.py",
>  line 213, in hbase
> dfs_type=params.dfs_type
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
> line 155, in __init__
> self.env.run()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 160, in run
> self.run_action(resource, action)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 124, in run_action
> provider_action()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 459, in action_create_on_execute
> self.action_delayed("create")
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 456, in action_delayed
> self.get_hdfs_resource_executor().action_delayed(action_name, self)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 256, in action_delayed
> self._set_mode(self.target_status)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 363, in _set_mode
> self.util.run_command(self.main_resource.resource.target, 
> 'SETPERMISSION', method='PUT', permission=self.mode, assertable_result=False)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 179, in run_command
> _, out, err = get_user_call_output(cmd, user=self.run_user, 
> logoutput=self.logoutput, quiet=False)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/get_user_call_output.py",
>  line 61, in get_user_call_output
> raise Fail(err_msg)
> resource_management.core.exceptions.Fail: Execution of 'curl -sS -L -w 
> '%{http_code}' -X PUT --negotiate -u : 
> 'http://vsharma-eu-mt-5.openstacklocal:50070/webhdfs/v1/user/ams/hbase?op=SETPERMISSION=hdfs=775'
>  1>/tmp/tmp8twcZt 2>/tmp/tmpLPih9a' returned 7. curl: (7) couldn't connect to 
> host
> 401
> {code}
> Afterwards, restarted HDFS individually first and then hit "Restart all 
> Required" - the operation was successful
> Looks like the issue is because the order of restart is incorrect across the 
> hosts, hence the dependent services don't come up upfront

[jira] [Updated] (AMBARI-18242) Move service metadata into stack's service version folder

2016-08-23 Thread Alejandro Fernandez (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Fernandez updated AMBARI-18242:
-
Fix Version/s: trunk

> Move service metadata into stack's service version folder
> -
>
> Key: AMBARI-18242
> URL: https://issues.apache.org/jira/browse/AMBARI-18242
> Project: Ambari
>  Issue Type: Epic
>Affects Versions: 2.4.0
>Reporter: Alejandro Fernandez
> Fix For: trunk
>
>
> Today, there is a lot of hardcoded logic and metadata that belongs inside the 
> stack for a particular service and version.
> Instead, this logic currently lives in:
> 1. python files in common-services
> 2. Config Packs
> 3. Upgrade Packs
> 4. Stack Advisor
> Details
> 1.
> ambari/ambari-common/src/main/python/resource_management/libraries/functions/conf_select.py
> {code}
> _PACKAGE_DIRS = {
>   "accumulo": [
> {
>   "conf_dir": "/etc/accumulo/conf",
>   "current_dir": 
> "{0}/current/accumulo-client/conf".format(STACK_ROOT_PATTERN)
> }
>   ],
>   "falcon": [
> {
>   "conf_dir": "/etc/falcon/conf",
>   "current_dir": 
> "{0}/current/falcon-client/conf".format(STACK_ROOT_PATTERN)
> }
>   ],
> {code}
> 2. Any config-upgrade.xml
> E.g.,
> ambari/ambari-server/src/main/resources/stacks/HDP/2.5/upgrades/config-upgrade.xml
> {code}
> <services xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
>   <service name="HIVE">
>     <component name="HIVE_SERVER">
>       <changes>
>         <definition xsi:type="configure" id="hdp_2_5_0_0_hive_server_set_transport_mode">
>           <condition type="hive-site" key="hive.server2.transport.mode" value="binary">
>             <type>hive-site</type>
>             <key>hive.server2.thrift.port</key>
>             <value>10010</value>
>           </condition>
>           <condition type="hive-site" key="hive.server2.transport.mode" value="http">
>             <type>hive-site</type>
>             <key>hive.server2.http.port</key>
>             <value>10011</value>
>           </condition>
>         </definition>
>         <definition xsi:type="configure" id="hdp_2_5_0_0_hive_server_restore_transport_mode_on_downgrade">
>           <condition type="hive-site" key="hive.server2.transport.mode" value="binary">
>             <type>hive-site</type>
>             <key>hive.server2.thrift.port</key>
>             <value>1</value>
>           </condition>
>           <condition type="hive-site" key="hive.server2.transport.mode" value="http">
>             <type>hive-site</type>
>             <key>hive.server2.http.port</key>
>             <value>10001</value>
>           </condition>
>         </definition>
>       </changes>
>     </component>
>   </service>
> </services>
> {code}
> 3. Any upgrade pack
> E.g., 
> ambari/ambari-server/src/main/resources/stacks/HDP/2.5/upgrades/upgrade-2.5.xml
> {code}
> <task xsi:type="configure"
>       id="hdp_2_5_0_0_remove_empty_storm_topology_submission_notifier_plugin_class"/>
> <!-- the rest of this upgrade-pack snippet was stripped by the mail archiver -->
> {code}
> Plus, any of the Java classes that can be called as PreChecks
> 4. Stack Advisor functions
> {code}
> def validateAtlasConfigurations()
> def recommendFalconConfigurations()
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (AMBARI-18242) Move service metadata into stack's service version folder

2016-08-23 Thread Alejandro Fernandez (JIRA)
Alejandro Fernandez created AMBARI-18242:


 Summary: Move service metadata into stack's service version folder
 Key: AMBARI-18242
 URL: https://issues.apache.org/jira/browse/AMBARI-18242
 Project: Ambari
  Issue Type: Epic
Reporter: Alejandro Fernandez


Today, there is a lot of hardcoded logic and metadata that belongs inside the 
stack for a particular service and version.
Instead, this logic currently lives in:

1. python files in common-services
2. Config Packs
3. Upgrade Packs
4. Stack Advisor

Details
1.
ambari/ambari-common/src/main/python/resource_management/libraries/functions/conf_select.py
{code}
_PACKAGE_DIRS = {
  "accumulo": [
{
  "conf_dir": "/etc/accumulo/conf",
  "current_dir": 
"{0}/current/accumulo-client/conf".format(STACK_ROOT_PATTERN)
}
  ],
  "falcon": [
{
  "conf_dir": "/etc/falcon/conf",
  "current_dir": "{0}/current/falcon-client/conf".format(STACK_ROOT_PATTERN)
}
  ],
{code}

2. Any config-upgrade.xml
E.g.,
ambari/ambari-server/src/main/resources/stacks/HDP/2.5/upgrades/config-upgrade.xml
{code}
<services xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <service name="HIVE">
    <component name="HIVE_SERVER">
      <changes>
        <definition xsi:type="configure" id="hdp_2_5_0_0_hive_server_set_transport_mode">
          <condition type="hive-site" key="hive.server2.transport.mode" value="binary">
            <type>hive-site</type>
            <key>hive.server2.thrift.port</key>
            <value>10010</value>
          </condition>
          <condition type="hive-site" key="hive.server2.transport.mode" value="http">
            <type>hive-site</type>
            <key>hive.server2.http.port</key>
            <value>10011</value>
          </condition>
        </definition>
        <definition xsi:type="configure" id="hdp_2_5_0_0_hive_server_restore_transport_mode_on_downgrade">
          <condition type="hive-site" key="hive.server2.transport.mode" value="binary">
            <type>hive-site</type>
            <key>hive.server2.thrift.port</key>
            <value>1</value>
          </condition>
          <condition type="hive-site" key="hive.server2.transport.mode" value="http">
            <type>hive-site</type>
            <key>hive.server2.http.port</key>
            <value>10001</value>
          </condition>
        </definition>
      </changes>
    </component>
  </service>
</services>
{code}

3. Any upgrade pack
E.g., 
ambari/ambari-server/src/main/resources/stacks/HDP/2.5/upgrades/upgrade-2.5.xml
{code}
<task xsi:type="configure"
      id="hdp_2_5_0_0_remove_empty_storm_topology_submission_notifier_plugin_class"/>
<!-- the rest of this upgrade-pack snippet was stripped by the mail archiver -->
{code}
Plus, any of the Java classes that can be called as PreChecks

4. Stack Advisor functions
{code}
def validateAtlasConfigurations()
def recommendFalconConfigurations()
{code}
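For illustration, a hedged sketch of what consuming such metadata from the stack's 
service version folder could look like; the file name package_dirs.json and its 
layout are assumptions for this example, not part of the epic:

{code}
import json
import os

# Load per-service package directories from a metadata file shipped inside the
# stack's service version folder, replacing the hardcoded _PACKAGE_DIRS dict
# in conf_select.py. File name and layout are illustrative assumptions.
def load_package_dirs(stacks_root, stack_name, stack_version, service_name):
    metadata_file = os.path.join(stacks_root, stack_name, stack_version,
                                 "services", service_name, "package_dirs.json")
    with open(metadata_file) as f:
        return json.load(f)  # e.g. [{"conf_dir": "...", "current_dir": "..."}]

# Usage (paths illustrative):
# load_package_dirs("/var/lib/ambari-server/resources/stacks", "HDP", "2.5", "ACCUMULO")
{code}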



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18242) Move service metadata into stack's service version folder

2016-08-23 Thread Alejandro Fernandez (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Fernandez updated AMBARI-18242:
-
Affects Version/s: 2.4.0

> Move service metadata into stack's service version folder
> -
>
> Key: AMBARI-18242
> URL: https://issues.apache.org/jira/browse/AMBARI-18242
> Project: Ambari
>  Issue Type: Epic
>Affects Versions: 2.4.0
>Reporter: Alejandro Fernandez
> Fix For: trunk
>
>
> Today, there is a lot of hardcoded logic and metadata that belongs inside the 
> stack for a particular service and version.
> Instead, this logic currently lives in:
> 1. python files in common-services
> 2. Config Packs
> 3. Upgrade Packs
> 4. Stack Advisor
> Details
> 1.
> ambari/ambari-common/src/main/python/resource_management/libraries/functions/conf_select.py
> {code}
> _PACKAGE_DIRS = {
>   "accumulo": [
> {
>   "conf_dir": "/etc/accumulo/conf",
>   "current_dir": 
> "{0}/current/accumulo-client/conf".format(STACK_ROOT_PATTERN)
> }
>   ],
>   "falcon": [
> {
>   "conf_dir": "/etc/falcon/conf",
>   "current_dir": 
> "{0}/current/falcon-client/conf".format(STACK_ROOT_PATTERN)
> }
>   ],
> {code}
> 2. Any config-upgrade.xml
> E.g.,
> ambari/ambari-server/src/main/resources/stacks/HDP/2.5/upgrades/config-upgrade.xml
> {code}
> <services xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
>   <service name="HIVE">
>     <component name="HIVE_SERVER">
>       <changes>
>         <definition xsi:type="configure" id="hdp_2_5_0_0_hive_server_set_transport_mode">
>           <condition type="hive-site" key="hive.server2.transport.mode" value="binary">
>             <type>hive-site</type>
>             <key>hive.server2.thrift.port</key>
>             <value>10010</value>
>           </condition>
>           <condition type="hive-site" key="hive.server2.transport.mode" value="http">
>             <type>hive-site</type>
>             <key>hive.server2.http.port</key>
>             <value>10011</value>
>           </condition>
>         </definition>
>         <definition xsi:type="configure" id="hdp_2_5_0_0_hive_server_restore_transport_mode_on_downgrade">
>           <condition type="hive-site" key="hive.server2.transport.mode" value="binary">
>             <type>hive-site</type>
>             <key>hive.server2.thrift.port</key>
>             <value>1</value>
>           </condition>
>           <condition type="hive-site" key="hive.server2.transport.mode" value="http">
>             <type>hive-site</type>
>             <key>hive.server2.http.port</key>
>             <value>10001</value>
>           </condition>
>         </definition>
>       </changes>
>     </component>
>   </service>
> </services>
> {code}
> 3. Any upgrade pack
> E.g., 
> ambari/ambari-server/src/main/resources/stacks/HDP/2.5/upgrades/upgrade-2.5.xml
> {code}
> <task xsi:type="configure"
>       id="hdp_2_5_0_0_remove_empty_storm_topology_submission_notifier_plugin_class"/>
> <!-- the rest of this upgrade-pack snippet was stripped by the mail archiver -->
> {code}
> Plus, any of the Java classes that can be called as PreChecks
> 4. Stack Advisor functions
> {code}
> def validateAtlasConfigurations()
> def recommendFalconConfigurations()
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18241) A wrapper util to validate blueprint by submitting it via blueprint REST API

2016-08-23 Thread Di Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Di Li updated AMBARI-18241:
---
Attachment: AMBARI-18241.patch

> A wrapper util to validate blueprint by submitting it via blueprint REST API
> 
>
> Key: AMBARI-18241
> URL: https://issues.apache.org/jira/browse/AMBARI-18241
> Project: Ambari
>  Issue Type: Improvement
>Reporter: Di Li
>Assignee: Di Li
>Priority: Minor
> Attachments: AMBARI-18241.patch
>
>
> A wrapper utility to validate a blueprint by submitting it via the blueprint 
> REST API. It is to be included in the existing preinstall_checker.py in 
> contrib/utils.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (AMBARI-18241) A wrapper util to validate blueprint by submitting it via blueprint REST API

2016-08-23 Thread Di Li (JIRA)
Di Li created AMBARI-18241:
--

 Summary: A wrapper util to validate blueprint by submitting it via 
blueprint REST API
 Key: AMBARI-18241
 URL: https://issues.apache.org/jira/browse/AMBARI-18241
 Project: Ambari
  Issue Type: Improvement
Reporter: Di Li
Assignee: Di Li
Priority: Minor


A wrapper utility to validate a blueprint by submitting it via the blueprint 
REST API. It is to be included in the existing preinstall_checker.py in 
contrib/utils.
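A rough sketch of such a wrapper, assuming the standard blueprint REST endpoint 
and Python 2 (matching preinstall_checker.py's environment); error handling is 
simplified:

{code}
import urllib2

# POST the blueprint to /api/v1/blueprints/<name>; Ambari validates it against
# the stack definition and returns an error body if validation fails.
def validate_blueprint(ambari_url, user, password, name, blueprint_path):
    with open(blueprint_path) as f:
        body = f.read()
    request = urllib2.Request("%s/api/v1/blueprints/%s" % (ambari_url, name), body)
    request.add_header("X-Requested-By", "ambari")
    auth = ("%s:%s" % (user, password)).encode("base64").strip()
    request.add_header("Authorization", "Basic " + auth)
    try:
        urllib2.urlopen(request)
        return True, None
    except urllib2.HTTPError as e:
        return False, e.read()  # Ambari's validation message
{code}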



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-17159) Upon successful start, log the process id for daemons started

2016-08-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-17159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1549#comment-1549
 ] 

Hudson commented on AMBARI-17159:
-

FAILURE: Integrated in Jenkins build Ambari-trunk-Commit #5576 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/5576/])
AMBARI-17159. Upon successful start, log the process id for daemons 
(mpapyrkovskyy: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=bdae70187acac5de9890ee964357298121219487])
* (edit) 
ambari-server/src/main/resources/common-services/KAFKA/0.8.1/package/scripts/kafka_broker.py
* (edit) 
ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_collector.py
* (edit) 
ambari-server/src/main/resources/common-services/STORM/0.9.1/package/scripts/pacemaker.py
* (edit) 
ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/webhcat_server.py
* (edit) 
ambari-server/src/main/resources/common-services/SPARK/1.2.1/package/scripts/livy_server.py
* (edit) 
ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/status.py
* (edit) 
ambari-server/src/main/resources/common-services/SPARK/1.2.1/package/scripts/job_history_server.py
* (edit) 
ambari-server/src/main/resources/common-services/STORM/0.9.1/package/scripts/ui_server.py
* (edit) 
ambari-server/src/main/resources/common-services/ZOOKEEPER/3.4.5/package/scripts/zookeeper_server.py
* (edit) 
ambari-server/src/main/resources/common-services/RANGER/0.4.0/package/scripts/ranger_tagsync.py
* (edit) 
ambari-server/src/main/resources/common-services/FLUME/1.4.0.2.0/package/scripts/flume_handler.py
* (edit) 
ambari-common/src/main/python/resource_management/libraries/script/script.py
* (edit) 
ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/package/scripts/phoenix_queryserver.py
* (edit) 
ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/package/scripts/metadata_server.py
* (edit) 
ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/hive_server.py
* (edit) 
ambari-server/src/main/resources/common-services/SPARK2/2.0.0/package/scripts/spark_thrift_server.py
* (edit) 
ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py
* (edit) 
ambari-common/src/main/python/resource_management/libraries/functions/flume_agent_helper.py
* (edit) 
ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/package/scripts/application_timeline_server.py
* (edit) 
ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/package/scripts/resourcemanager.py
* (edit) 
ambari-server/src/main/resources/common-services/STORM/0.9.1/package/scripts/supervisor.py
* (edit) 
ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/package/scripts/hbase_regionserver.py
* (edit) 
ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/journalnode.py
* (edit) 
ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py
* (edit) 
ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie_server.py
* (edit) 
ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py
* (edit) 
ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_grafana.py
* (edit) 
ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/nfsgateway.py
* (edit) 
ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/snamenode.py
* (edit) 
ambari-server/src/main/resources/common-services/SPARK/1.2.1/package/scripts/spark_thrift_server.py
* (edit) 
ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/package/scripts/phoenix_service.py
* (edit) 
ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_monitor.py
* (edit) 
ambari-server/src/main/resources/common-services/FALCON/0.5.0.2.1/package/scripts/falcon_server.py
* (edit) 
ambari-server/src/main/resources/common-services/STORM/0.9.1/package/scripts/nimbus.py
* (edit) 
ambari-server/src/test/python/stacks/2.0.6/configs/default_ams_embedded.json
* (edit) 
ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/hive_server_interactive.py
* (edit) ambari-server/src/test/python/stacks/2.0.6/HDFS/test_namenode.py
* (edit) 
ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/hive_service.py
* (edit) 
ambari-server/src/main/resources/common-services/KNOX/0.5.0.2.2/package/scripts/knox_gateway.py
* (edit) 
ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/zkfc_slave.py
* (edit) 
ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/package/scripts/hbase_master.py
* (edit) 
ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/status_params.py
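The common thread across these edits is logging the process id after a successful 
start; a hedged sketch of that behavior (the helper name is illustrative, not the 
exact code from the commit):

{code}
from resource_management.core.logger import Logger

# After a component starts, read its pid file(s) - as defined in the service's
# status_params - and log the process id(s) for easier troubleshooting.
def log_process_ids(pid_files):
    for pid_file in pid_files:
        try:
            with open(pid_file) as f:
                Logger.info("Component started with pid(s): %s" % f.read().strip())
        except IOError:
            Logger.warning("Pid file %s was not found after start" % pid_file)
{code}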

[jira] [Commented] (AMBARI-17728) Error message does not deliver when executing ambari-server command as a non-root user

2016-08-23 Thread Alejandro Fernandez (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-17728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1544#comment-1544
 ] 

Alejandro Fernandez commented on AMBARI-17728:
--

+1 for [^AMBARI-17728-2.patch]

> Error message does not deliver when executing ambari-server command as a 
> non-root user
> --
>
> Key: AMBARI-17728
> URL: https://issues.apache.org/jira/browse/AMBARI-17728
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk, 2.4.0
>Reporter: wangyaoxin
>Assignee: wangyaoxin
> Fix For: trunk
>
> Attachments: AMBARI-17728-1.patch, AMBARI-17728-2.patch, 
> AMBARI-17728.patch
>
>
> non-root user: like hdfs
> execute: ambari-server stop
> shows: Using python /usr/bin/python2.6 Stopping ambari-server
> intended message: You can't perform this operation as non-sudoer user. Please, 
> re-login or configure sudo access for this user
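A sketch of the intended behavior, checking privileges before any "Stopping 
ambari-server" output so the message actually reaches the user (illustrative; 
the real check lives in ambari-server's Python entry point):

{code}
import os
import sys

# Fail fast for non-sudoer users instead of printing the normal progress text.
def ensure_sudo_access():
    if os.geteuid() != 0:
        sys.stderr.write("You can't perform this operation as non-sudoer user. "
                         "Please, re-login or configure sudo access for this user\n")
        sys.exit(1)
{code}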



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-17228) Blueprint deployments should support a "START_ONLY" provision_action for clusters

2016-08-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-17228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15433263#comment-15433263
 ] 

Hadoop QA commented on AMBARI-17228:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12825096/AMBARI_17228_v2.patch
  against trunk revision .

{color:red}-1 patch{color}.  Top-level trunk compilation may be broken.

Console output: 
https://builds.apache.org/job/Ambari-trunk-test-patch/8499//console

This message is automatically generated.

> Blueprint deployments should support a "START_ONLY" provision_action for 
> clusters
> -
>
> Key: AMBARI-17228
> URL: https://issues.apache.org/jira/browse/AMBARI-17228
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Robert Nettleton
>Assignee: Sandor Magyari
>Priority: Critical
> Fix For: 2.5.0
>
> Attachments: AMBARI_17228_v2.patch
>
>
> This JIRA tracks an extension to the work started in:
> AMBARI-16220
> In that JIRA, support is being added to allow Blueprint deployments that skip 
> the installation steps, and only attempt to start all components in the 
> cluster after the configuration phase has completed.
> In AMBARI-16220, the feature is configured via a property in 
> "ambari.properties", which configures this feature on the ambari-server 
> itself.
> This current JIRA tracks a suggestion to extend this feature, such that the 
> "START_ONLY" mode can be treated as a new type of "provision_action". 
> Using the property specified in ambari.properties isn't incorrect, but may be 
> inconvenient in the future for enhancements and maintenance.  
> In multi-cluster scenarios, it might be better to configure this at the 
> cluster-level.
> There has already been some work done to configure provisioning in a more 
> fine-grained way.
> Sid's patch for the following Blueprints feature:
> https://issues.apache.org/jira/browse/AMBARI-14283
> Shows how a "provision_action" can be selected for a given deployment.  
> This notion of a "provision_action" has also been extended to the component 
> level as well:
> https://issues.apache.org/jira/browse/AMBARI-14555
> Currently, this "provision_action" configuration is only used to select an 
> INSTALL_ONLY deployment, but  it could certainly be extended to have a 
> "START_ONLY" action as well.  This would have the benefit of being able to 
> choose this option on a per-cluster basis, and not require an ambari-server 
> restart if this feature is desired but not enabled by default. 
> The following code reference shows how the "provision_action" configuration 
> option is used by a Blueprints deployment:
> org.apache.ambari.server.topology.HostRequest#createTasks
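Under that proposal, selecting the action per cluster could look like the sketch 
below: a cluster creation template trimmed to the relevant field, where 
"START_ONLY" is the value this JIRA suggests, mirroring the existing 
"INSTALL_ONLY" from AMBARI-14283 (template contents illustrative):

{code}
import json
import urllib2

# Cluster creation template with a per-cluster provision_action (illustrative).
template = {
    "blueprint": "my-bp",
    "provision_action": "START_ONLY",
    "host_groups": [
        {"name": "host_group_1", "hosts": [{"fqdn": "c6401.ambari.apache.org"}]}
    ]
}

request = urllib2.Request("http://localhost:8080/api/v1/clusters/c1",
                          json.dumps(template))
request.add_header("X-Requested-By", "ambari")
# authentication omitted; urllib2.urlopen(request) would submit the deployment
{code}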



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18187) AMS should work in SPNEGO enabled clusters.

2016-08-23 Thread Aravindan Vijayan (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated AMBARI-18187:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Pushed to  trunk.

> AMS should work in SPNEGO enabled clusters.
> ---
>
> Key: AMBARI-18187
> URL: https://issues.apache.org/jira/browse/AMBARI-18187
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-metrics
>Affects Versions: trunk
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Critical
> Fix For: trunk
>
> Attachments: AMBARI-18187.patch
>
>
> AMS should work in SPNEGO enabled clusters, even if AMS does not natively 
> support SPNEGO.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18191) "Restart all required" services operation failed at Metrics Collector since HDFS was not yet up

2016-08-23 Thread Siddharth Wagle (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle updated AMBARI-18191:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Pushed to trunk.

> "Restart all required" services operation failed at Metrics Collector since 
> HDFS was not yet up
> ---
>
> Key: AMBARI-18191
> URL: https://issues.apache.org/jira/browse/AMBARI-18191
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-metrics
>Affects Versions: 2.4.0
>Reporter: Aravindan Vijayan
>Assignee: Siddharth Wagle
>Priority: Blocker
> Fix For: 2.5.0
>
> Attachments: AMBARI-18191.patch
>
>
> ambari-server --hash
> 4017036da951a10f519a578de934308cf866ba50
> *Steps*
> # Deploy HDP-2.3.6 cluster with Ambari 2.2.2.0 (AMS is configured in 
> distributed mode)
> # Upgrade Ambari to 2.4.0.0 and let it complete
> # Open Ambari web UI and hit "Restart all required" under Actions menu
> *Result*
> The operation fails while trying to restart Metrics Collector as it tried to 
> make a WebHDFS call while HDFS was not started:
> {code}
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_collector.py",
>  line 148, in 
> AmsCollector().execute()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 280, in execute
> method(env)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 725, in restart
> self.start(env)
>   File 
> "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_collector.py",
>  line 46, in start
> self.configure(env, action = 'start') # for security
>   File 
> "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_collector.py",
>  line 41, in configure
> hbase('master', action)
>   File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", 
> line 89, in thunk
> return fn(*args, **kwargs)
>   File 
> "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/hbase.py",
>  line 213, in hbase
> dfs_type=params.dfs_type
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
> line 155, in __init__
> self.env.run()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 160, in run
> self.run_action(resource, action)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 124, in run_action
> provider_action()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 459, in action_create_on_execute
> self.action_delayed("create")
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 456, in action_delayed
> self.get_hdfs_resource_executor().action_delayed(action_name, self)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 256, in action_delayed
> self._set_mode(self.target_status)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 363, in _set_mode
> self.util.run_command(self.main_resource.resource.target, 
> 'SETPERMISSION', method='PUT', permission=self.mode, assertable_result=False)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 179, in run_command
> _, out, err = get_user_call_output(cmd, user=self.run_user, 
> logoutput=self.logoutput, quiet=False)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/get_user_call_output.py",
>  line 61, in get_user_call_output
> raise Fail(err_msg)
> resource_management.core.exceptions.Fail: Execution of 'curl -sS -L -w 
> '%{http_code}' -X PUT --negotiate -u : 
> 'http://vsharma-eu-mt-5.openstacklocal:50070/webhdfs/v1/user/ams/hbase?op=SETPERMISSION=hdfs=775'
>  1>/tmp/tmp8twcZt 2>/tmp/tmpLPih9a' returned 7. curl: (7) couldn't connect to 
> host
> 401
> {code}
> Afterwards, restarted HDFS individually first and then hit "Restart all 
> Required" - the operation was successful
> Looks like the issue is because the order of restart is incorrect across the 
> hosts, hence the dependent services don't come up upfront
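The committed fix orders the restarts on the server side (RoleCommandOrder, per 
the commit above); purely to illustrate the failure mode, here is a guard that 
waits for WebHDFS to answer before issuing the kind of call that failed:

{code}
import time
import urllib2

# Wait until the NameNode's WebHDFS endpoint accepts connections; curl's exit
# code 7 in the log above means the connection was refused outright.
def wait_for_webhdfs(namenode_http, timeout=300, interval=10):
    url = "%s/webhdfs/v1/?op=GETFILESTATUS" % namenode_http
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            urllib2.urlopen(url, timeout=10)
            return True
        except urllib2.HTTPError:
            return True  # any HTTP response (even 401) means the NN is up
        except (urllib2.URLError, IOError):
            time.sleep(interval)
    return False

# wait_for_webhdfs("http://vsharma-eu-mt-5.openstacklocal:50070")
{code}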



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-17228) Blueprint deployments should support a "START_ONLY" provision_action for clusters

2016-08-23 Thread Sandor Magyari (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-17228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandor Magyari updated AMBARI-17228:

Attachment: AMBARI_17228_v2.patch

> Blueprint deployments should support a "START_ONLY" provision_action for 
> clusters
> -
>
> Key: AMBARI-17228
> URL: https://issues.apache.org/jira/browse/AMBARI-17228
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Robert Nettleton
>Assignee: Sandor Magyari
>Priority: Critical
> Fix For: 2.5.0
>
> Attachments: AMBARI_17228_v2.patch
>
>
> This JIRA tracks an extension to the work started in:
> AMBARI-16220
> In that JIRA, support is being added to allow Blueprint deployments that skip 
> the installation steps, and only attempt to start all components in the 
> cluster after the configuration phase has completed.
> In AMBARI-16220, the feature is configured via a property in 
> "ambari.properties", which configures this feature on the ambari-server 
> itself.
> This current JIRA tracks a suggestion to extend this feature, such that the 
> "START_ONLY" mode can be treated as a new type of "provision_action". 
> Using the property specified in ambari.properties isn't incorrect, but may be 
> inconvenient in the future for enhancements and maintenance.  
> In multi-cluster scenarios, it might be better to configure this at the 
> cluster-level.
> There has already been some work done to configure provisioning in a more 
> fine-grained way.
> Sid's patch for the following Blueprints feature:
> https://issues.apache.org/jira/browse/AMBARI-14283
> Shows how a "provision_action" can be selected for a given deployment.  
> This notion of a "provision_action" has also been extended to the component 
> level as well:
> https://issues.apache.org/jira/browse/AMBARI-14555
> Currently, this "provision_action" configuration is only used to select an 
> INSTALL_ONLY deployment, but  it could certainly be extended to have a 
> "START_ONLY" action as well.  This would have the benefit of being able to 
> choose this option on a per-cluster basis, and not require an ambari-server 
> restart if this feature is desired but not enabled by default. 
> The following code reference shows how the "provision_action" configuration 
> option is used by a Blueprints deployment:
> org.apache.ambari.server.topology.HostRequest#createTasks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-17228) Blueprint deployments should support a "START_ONLY" provision_action for clusters

2016-08-23 Thread Sandor Magyari (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-17228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandor Magyari updated AMBARI-17228:

Attachment: (was: AMBARI-17228.patch)

> Blueprint deployments should support a "START_ONLY" provision_action for 
> clusters
> -
>
> Key: AMBARI-17228
> URL: https://issues.apache.org/jira/browse/AMBARI-17228
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Robert Nettleton
>Assignee: Sandor Magyari
>Priority: Critical
> Fix For: 2.5.0
>
>
> This JIRA tracks an extension to the work started in:
> AMBARI-16220
> In that JIRA, support is being added to allow Blueprint deployments that skip 
> the installation steps, and only attempt to start all components in the 
> cluster after the configuration phase has completed.
> In AMBARI-16220, the feature is configured via a property in 
> "ambari.properties", which configures this feature on the ambari-server 
> itself.
> This current JIRA tracks a suggestion to extend this feature, such that the 
> "START_ONLY" mode can be treated as a new type of "provision_action". 
> Using the property specified in ambari.properties isn't incorrect, but may be 
> inconvenient in the future for enhancements and maintenance.  
> In multi-cluster scenarios, it might be better to configure this at the 
> cluster-level.
> There has already been some work done to configure provisioning in a more 
> fine-grained way.
> Sid's patch for the following Blueprints feature:
> https://issues.apache.org/jira/browse/AMBARI-14283
> Shows how a "provision_action" can be selected for a given deployment.  
> This notion of a "provision_action" has also been extended to the component 
> level as well:
> https://issues.apache.org/jira/browse/AMBARI-14555
> Currently, this "provision_action" configuration is only used to select an 
> INSTALL_ONLY deployment, but  it could certainly be extended to have a 
> "START_ONLY" action as well.  This would have the benefit of being able to 
> choose this option on a per-cluster basis, and not require an ambari-server 
> restart if this feature is desired but not enabled by default. 
> The following code reference shows how the "provision_action" configuration 
> option is used by a Blueprints deployment:
> org.apache.ambari.server.topology.HostRequest#createTasks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-17228) Blueprint deployments should support a "START_ONLY" provision_action for clusters

2016-08-23 Thread Sandor Magyari (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-17228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandor Magyari updated AMBARI-17228:

Attachment: AMBARI-17228.patch

> Blueprint deployments should support a "START_ONLY" provision_action for 
> clusters
> -
>
> Key: AMBARI-17228
> URL: https://issues.apache.org/jira/browse/AMBARI-17228
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Robert Nettleton
>Assignee: Sandor Magyari
>Priority: Critical
> Fix For: 2.5.0
>
> Attachments: AMBARI-17228.patch
>
>
> This JIRA tracks an extension to the work started in:
> AMBARI-16220
> In that JIRA, support is being added to allow Blueprint deployments that skip 
> the installation steps, and only attempt to start all components in the 
> cluster after the configuration phase has completed.
> In AMBARI-16220, the feature is configured via a property in 
> "ambari.properties", which configures this feature on the ambari-server 
> itself.
> This current JIRA tracks a suggestion to extend this feature, such that the 
> "START_ONLY" mode can be treated as a new type of "provision_action". 
> Using the property specified in ambari.properties isn't incorrect, but may be 
> inconvenient in the future for enhancements and maintenance.  
> In multi-cluster scenarios, it might be better to configure this at the 
> cluster-level.
> There has already been some work done to configure provisioning in a more 
> fine-grained way.
> Sid's patch for the following Blueprints feature:
> https://issues.apache.org/jira/browse/AMBARI-14283
> Shows how a "provision_action" can be selected for a given deployment.  
> This notion of a "provision_action" has also been extended to the component 
> level as well:
> https://issues.apache.org/jira/browse/AMBARI-14555
> Currently, this "provision_action" configuration is only used to select an 
> INSTALL_ONLY deployment, but  it could certainly be extended to have a 
> "START_ONLY" action as well.  This would have the benefit of being able to 
> choose this option on a per-cluster basis, and not require an ambari-server 
> restart if this feature is desired but not enabled by default. 
> The following code reference shows how the "provision_action" configuration 
> option is used by a Blueprints deployment:
> org.apache.ambari.server.topology.HostRequest#createTasks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-17228) Blueprint deployments should support a "START_ONLY" provision_action for clusters

2016-08-23 Thread Sandor Magyari (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-17228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandor Magyari updated AMBARI-17228:

Attachment: (was: AMBARI-17228.patch)

> Blueprint deployments should support a "START_ONLY" provision_action for 
> clusters
> -
>
> Key: AMBARI-17228
> URL: https://issues.apache.org/jira/browse/AMBARI-17228
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Robert Nettleton
>Assignee: Sandor Magyari
>Priority: Critical
> Fix For: 2.5.0
>
> Attachments: AMBARI-17228.patch
>
>
> This JIRA tracks an extension to the work started in:
> AMBARI-16220
> In that JIRA, support is being added to allow Blueprint deployments that skip 
> the installation steps, and only attempt to start all components in the 
> cluster after the configuration phase has completed.
> In AMBARI-16220, the feature is configured via a property in 
> "ambari.properties", which configures this feature on the ambari-server 
> itself.
> This current JIRA tracks a suggestion to extend this feature, such that the 
> "START_ONLY" mode can be treated as a new type of "provision_action". 
> Using the property specified in ambari.properties isn't incorrect, but may be 
> inconvenient in the future for enhancements and maintenance.  
> In multi-cluster scenarios, it might be better to configure this at the 
> cluster-level.
> There has already been some work done to configure provisioning in a more 
> fine-grained way.
> Sid's patch for the following Blueprints feature:
> https://issues.apache.org/jira/browse/AMBARI-14283
> Shows how a "provision_action" can be selected for a given deployment.  
> This notion of a "provision_action" has also been extended to the component 
> level as well:
> https://issues.apache.org/jira/browse/AMBARI-14555
> Currently, this "provision_action" configuration is only used to select an 
> INSTALL_ONLY deployment, but  it could certainly be extended to have a 
> "START_ONLY" action as well.  This would have the benefit of being able to 
> choose this option on a per-cluster basis, and not require an ambari-server 
> restart if this feature is desired but not enabled by default. 
> The following code reference shows how the "provision_action" configuration 
> option is used by a Blueprints deployment:
> org.apache.ambari.server.topology.HostRequest#createTasks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18196) Generate REST API docs with Swagger for Log Search

2016-08-23 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/AMBARI-18196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Olivér Szabó updated AMBARI-18196:
--
Attachment: AMBARI-18196_appendum1.patch

> Generate REST API docs with Swagger for Log Search
> --
>
> Key: AMBARI-18196
> URL: https://issues.apache.org/jira/browse/AMBARI-18196
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-logsearch
>Affects Versions: 2.4.0
>Reporter: Olivér Szabó
>Assignee: Olivér Szabó
> Fix For: 2.5.0
>
> Attachments: AMBARI-18196.patch, AMBARI-18196_appendum1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18196) Generate REST API docs with Swagger for Log Search

2016-08-23 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/AMBARI-18196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Olivér Szabó updated AMBARI-18196:
--
Attachment: (was: AMBARI-18196_appendum1.patch)

> Generate REST API docs with Swagger for Log Search
> --
>
> Key: AMBARI-18196
> URL: https://issues.apache.org/jira/browse/AMBARI-18196
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-logsearch
>Affects Versions: 2.4.0
>Reporter: Olivér Szabó
>Assignee: Olivér Szabó
> Fix For: 2.5.0
>
> Attachments: AMBARI-18196.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18196) Generate REST API docs with Swagger for Log Search

2016-08-23 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/AMBARI-18196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Olivér Szabó updated AMBARI-18196:
--
Attachment: (was: AMBARI-18196_appendum1.patch)

> Generate REST API docs with Swagger for Log Search
> --
>
> Key: AMBARI-18196
> URL: https://issues.apache.org/jira/browse/AMBARI-18196
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-logsearch
>Affects Versions: 2.4.0
>Reporter: Olivér Szabó
>Assignee: Olivér Szabó
> Fix For: 2.5.0
>
> Attachments: AMBARI-18196.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18196) Generate REST API docs with Swagger for Log Search

2016-08-23 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/AMBARI-18196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Olivér Szabó updated AMBARI-18196:
--
Attachment: AMBARI-18196_appendum1.patch

> Generate REST API docs with Swagger for Log Search
> --
>
> Key: AMBARI-18196
> URL: https://issues.apache.org/jira/browse/AMBARI-18196
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-logsearch
>Affects Versions: 2.4.0
>Reporter: Olivér Szabó
>Assignee: Olivér Szabó
> Fix For: 2.5.0
>
> Attachments: AMBARI-18196.patch, AMBARI-18196_appendum1.patch, 
> AMBARI-18196_appendum1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18237) Certain configuration files cannot be modified through Ambari api.

2016-08-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15433158#comment-15433158
 ] 

Hadoop QA commented on AMBARI-18237:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12825046/AMBARI-18237.patch
  against trunk revision .

{color:red}-1 patch{color}.  Top-level trunk compilation may be broken.

Console output: 
https://builds.apache.org/job/Ambari-trunk-test-patch/8498//console

This message is automatically generated.

> Certain configuration files cannot be modified through Ambari api.
> --
>
> Key: AMBARI-18237
> URL: https://issues.apache.org/jira/browse/AMBARI-18237
> Project: Ambari
>  Issue Type: Bug
>Reporter: Andrew Onischuk
>Assignee: Andrew Onischuk
> Fix For: 2.5.0
>
> Attachments: AMBARI-18237.patch
>
>
> Certain configuration files, like hadoop-metrics2.properties, are not exposed 
> through the Ambari API and therefore cannot be modified via Ambari REST API 
> calls. The Ambari QE team needs to modify these configuration files for HDP 
> stack tests.
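Once exposed as a regular config type, such a file could be edited like any other 
via the REST API; a hedged sketch using the standard desired_config PUT (the type 
name below is an assumption about how the file would be surfaced):

{code}
import json
import urllib2

# Set a new desired config for a type, e.g. "hadoop-metrics2.properties",
# via PUT /api/v1/clusters/<cluster>. Authentication omitted for brevity.
def set_desired_config(ambari_url, cluster, config_type, tag, properties):
    body = json.dumps({"Clusters": {"desired_config": {
        "type": config_type, "tag": tag, "properties": properties}}})
    request = urllib2.Request("%s/api/v1/clusters/%s" % (ambari_url, cluster), body)
    request.add_header("X-Requested-By", "ambari")
    request.get_method = lambda: "PUT"
    return urllib2.urlopen(request)
{code}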



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18239) oozie.py is reading invalid 'version' attribute which results in not copying required atlas hook jars

2016-08-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15433155#comment-15433155
 ] 

Hadoop QA commented on AMBARI-18239:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12825079/AMBARI-18239.patch
  against trunk revision .

{color:red}-1 patch{color}.  Top-level trunk compilation may be broken.

Console output: 
https://builds.apache.org/job/Ambari-trunk-test-patch/8497//console

This message is automatically generated.

> oozie.py is reading invalid 'version' attribute which results in not copying 
> required atlas hook jars
> -
>
> Key: AMBARI-18239
> URL: https://issues.apache.org/jira/browse/AMBARI-18239
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk, 2.4.0
>Reporter: Ayub Khan
>Assignee: Ayub Khan
> Fix For: trunk
>
> Attachments: AMBARI-18239.patch
>
>
> *Oozie server start output from ambari-agent shows this error - 
> "2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this 
> Oozie server doesn't contain directory /usr/hdp/None/atlas/hook/hive/"*
> {noformat}
> 2016-08-23 07:21:53,147 - call returned (0, '')
> 2016-08-23 07:21:53,148 - 
> Execute['/usr/hdp/current/oozie-server/bin/oozie-setup.sh sharelib create -fs 
> hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020 -locallib 
> /usr/hdp/current/oozie-server/share'] {'path': 
> [u'/usr/hdp/current/oozie-server/bin:/usr/hdp/current/hadoop-client/bin'], 
> 'user': 'oozie'}
> 2016-08-23 07:23:33,091 - HdfsResource['/user/oozie/share'] 
> {'security_enabled': True, 'hadoop_bin_dir': 
> '/usr/hdp/current/hadoop-client/bin', 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
> 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 'user': 'hdfs', 
> 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
> 'recursive_chmod': True, 'action': ['create_on_execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 
> 'immutable_paths': [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', 
> u'/mr-history/done', u'/apps/falcon'], 'mode': 0755}
> 2016-08-23 07:23:33,093 - Execute['/usr/bin/kinit -kt 
> /etc/security/keytabs/hdfs.headless.keytab h...@example.com'] {'user': 'hdfs'}
> 2016-08-23 07:23:33,170 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 
> 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET --negotiate -u : 
> '"'"'http://nat-r7-pcds-falcon-multi-9.openstacklocal:20070/webhdfs/v1/user/oozie/share?op=GETFILESTATUS=hdfs'"'"'
>  1>/tmp/tmp2xvm99 2>/tmp/tmpwdKIRi''] {'logoutput': None, 'quiet': False}
> 2016-08-23 07:23:33,259 - call returned (0, '')
> 2016-08-23 07:23:33,261 - HdfsResource[None] {'security_enabled': True, 
> 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
> 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 
> 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
> 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'immutable_paths': 
> [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', u'/mr-history/done', 
> u'/apps/falcon']}
> 2016-08-23 07:23:33,261 - Execute['cd /var/tmp/oozie && 
> /usr/hdp/current/oozie-server/bin/oozie-start.sh'] {'environment': 
> {'OOZIE_CONFIG': u'/usr/hdp/current/oozie-server/conf'}, 'not_if': 
> "ambari-sudo.sh su oozie -l -s /bin/bash -c 'ls /var/run/oozie/oozie.pid 
> >/dev/null 2>&1 && ps -p `cat /var/run/oozie/oozie.pid` >/dev/null 2>&1'", 
> 'user': 'oozie'}
> 2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this Oozie 
> server doesn't contain directory /usr/hdp/None/atlas/hook/hive/
> {noformat}
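A self-contained sketch of the guard the error implies (plain Python; in the real 
oozie.py these values come from the command JSON and params.py):

{code}
# 'version' is only populated in the command JSON during stack upgrades; when it
# is absent, naive formatting yields the bogus /usr/hdp/None/atlas/hook/hive/.
def atlas_hook_dir(command_params, current_stack_version):
    version = command_params.get("version") or current_stack_version
    return "/usr/hdp/%s/atlas/hook/hive/" % version

# atlas_hook_dir({}, "2.5.0.0-1235") -> '/usr/hdp/2.5.0.0-1235/atlas/hook/hive/'
{code}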



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18238) HBase Master doesn't start after upgrading from HDP 2.3.6 using Ambari

2016-08-23 Thread Dmytro Grinenko (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15433149#comment-15433149
 ] 

Dmytro Grinenko commented on AMBARI-18238:
--

Committed to trunk.

bdae701..e671d0f  trunk -> trunk

> HBase Master doesn't start after upgrading from HDP 2.3.6 using Ambari
> --
>
> Key: AMBARI-18238
> URL: https://issues.apache.org/jira/browse/AMBARI-18238
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk
>Reporter: Dmytro Grinenko
>Priority: Critical
> Fix For: trunk
>
> Attachments: AMBARI-18238.patch
>
>
> After upgrading, the HBase Master doesn't start and throws the error below.
> {code}
> 2016-08-21 05:49:27,298 INFO  [main] util.ServerCommandLine: 
> env:SERVER_GC_OPTS=-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps 
> -Xloggc:/var/log/hbase/gc.log-201608210549
> 2016-08-21 05:49:27,298 INFO  [main] util.ServerCommandLine: 
> env:HADOOP_CONF=/usr/hdp/2.5.0.0-1232/hadoop/conf
> 2016-08-21 05:49:27,298 INFO  [main] util.ServerCommandLine: 
> env:HOSTNAME=hcube1-1n03.eng.hortonworks.com
> 2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
> env:QTDIR=/usr/lib64/qt-3.3
> 2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
> env:NLSPATH=/usr/dt/lib/nls/msg/%L/%N.cat
> 2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
> env:HBASE_THRIFT_OPTS=
> 2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
> env:HBASE_HOME=/usr/hdp/current/hbase-master/bin/..
> 2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
> env:QTLIB=/usr/lib64/qt-3.3/lib
> 2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
> env:HOME=/home/hbase
> 2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
> env:MALLOC_ARENA_MAX=4
> 2016-08-21 05:49:27,301 INFO  [main] util.ServerCommandLine: vmName=Java 
> HotSpot(TM) 64-Bit Server VM, vmVendor=Oracle Corporation, vmVersion=25.60-b23
> 2016-08-21 05:49:27,302 INFO  [main] util.ServerCommandLine: 
> vmInputArguments=[-Dproc_master, -XX:OnOutOfMemoryError=kill -9 %p, 
> -Dhdp.version=2.5.0.0-1232, -XX:+UseConcMarkSweepGC, 
> -XX:ErrorFile=/var/log/hbase/hs_err_pid%p.log, 
> -Djava.security.auth.login.config=/usr/hdp/current/hbase-master/conf/hbase_client_jaas.conf,
>  -Djava.io.tmpdir=/tmp, -verbose:gc, -XX:+PrintGCDetails, 
> -XX:+PrintGCDateStamps, -Xloggc:/var/log/hbase/gc.log-201608210549, 
> -Xmx2048m, 
> -Djava.security.auth.login.config=/usr/hdp/current/hbase-master/conf/hbase_master_jaas.conf,
>  -Dhbase.log.dir=/var/log/hbase, 
> -Dhbase.log.file=hbase-hbase-master-hcube1-1n03.eng.hortonworks.com.log, 
> -Dhbase.home.dir=/usr/hdp/current/hbase-master/bin/.., -Dhbase.id.str=hbase, 
> -Dhbase.root.logger=INFO,RFA, 
> -Djava.library.path=:/usr/hdp/2.5.0.0-1232/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.5.0.0-1232/hadoop/lib/native,
>  -Dhbase.security.logger=INFO,RFAS]
> 2016-08-21 05:49:27,715 INFO  [main] regionserver.RSRpcServices: 
> master/hcube1-1n03.eng.hortonworks.com/172.18.129.3:16000 server-side 
> HConnection retries=350
> 2016-08-21 05:49:27,892 INFO  [main] ipc.SimpleRpcScheduler: Using fifo as 
> user call queue, count=12
> 2016-08-21 05:49:27,896 INFO  [main] ipc.PhoenixRpcSchedulerFactory: Using 
> custom Phoenix Index RPC Handling with index rpc priority 1000 and metadata 
> rpc priority 2000
> 2016-08-21 05:49:27,919 INFO  [main] ipc.RpcServer: 
> master/hcube1-1n03.eng.hortonworks.com/172.18.129.3:16000: started 10 
> reader(s).
> 2016-08-21 05:49:28,027 INFO  [main] impl.MetricsConfig: loaded properties 
> from hadoop-metrics2-hbase.properties
> 2016-08-21 05:49:28,063 INFO  [main] timeline.HadoopTimelineMetricsSink: 
> Initializing Timeline metrics sink.
> 2016-08-21 05:49:28,063 INFO  [main] timeline.HadoopTimelineMetricsSink: 
> Identified hostname = hcube1-1n03.eng.hortonworks.com, serviceName = hbase
> 2016-08-21 05:49:28,174 INFO  [main] timeline.HadoopTimelineMetricsSink: 
> Collector Uri: 
> http://hcube1-2n01.eng.hortonworks.com:6188/ws/v1/timeline/metrics
> 2016-08-21 05:49:28,189 INFO  [main] impl.MetricsSinkAdapter: Sink timeline 
> started
> 2016-08-21 05:49:28,278 INFO  [main] impl.MetricsSystemImpl: Scheduled 
> snapshot period at 10 second(s).
> 2016-08-21 05:49:28,278 INFO  [main] impl.MetricsSystemImpl: HBase metrics 
> system started
> 2016-08-21 05:49:28,649 INFO  [main] security.UserGroupInformation: Login 
> successful for user 
> hbase/hcube1-1n03.eng.hortonworks@hwx.stanleyhotel.com using keytab file 
> /etc/security/keytabs/hbase.service.keytab
> 2016-08-21 05:49:28,655 INFO  [main] hfile.CacheConfig: Allocating 
> LruBlockCache size=791.90 MB, blockSize=64 KB
> 2016-08-21 05:49:28,663 WARN  [main] hfile.CacheConfig: Configuration 
> 

[jira] [Updated] (AMBARI-18238) HBase Master doesn't start after upgrading from HDP 2.3.6 using Ambari

2016-08-23 Thread Dmytro Grinenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmytro Grinenko updated AMBARI-18238:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> HBase Master doesn't start after upgrading from HDP 2.3.6 using Ambari
> --
>
> Key: AMBARI-18238
> URL: https://issues.apache.org/jira/browse/AMBARI-18238
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk
>Reporter: Dmytro Grinenko
>Priority: Critical
> Fix For: trunk
>
> Attachments: AMBARI-18238.patch
>
>
> After upgrading, the HBase Master doesn't start and throws the error below.
> {code}
> 2016-08-21 05:49:27,298 INFO  [main] util.ServerCommandLine: 
> env:SERVER_GC_OPTS=-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps 
> -Xloggc:/var/log/hbase/gc.log-201608210549
> 2016-08-21 05:49:27,298 INFO  [main] util.ServerCommandLine: 
> env:HADOOP_CONF=/usr/hdp/2.5.0.0-1232/hadoop/conf
> 2016-08-21 05:49:27,298 INFO  [main] util.ServerCommandLine: 
> env:HOSTNAME=hcube1-1n03.eng.hortonworks.com
> 2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
> env:QTDIR=/usr/lib64/qt-3.3
> 2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
> env:NLSPATH=/usr/dt/lib/nls/msg/%L/%N.cat
> 2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
> env:HBASE_THRIFT_OPTS=
> 2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
> env:HBASE_HOME=/usr/hdp/current/hbase-master/bin/..
> 2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
> env:QTLIB=/usr/lib64/qt-3.3/lib
> 2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
> env:HOME=/home/hbase
> 2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
> env:MALLOC_ARENA_MAX=4
> 2016-08-21 05:49:27,301 INFO  [main] util.ServerCommandLine: vmName=Java 
> HotSpot(TM) 64-Bit Server VM, vmVendor=Oracle Corporation, vmVersion=25.60-b23
> 2016-08-21 05:49:27,302 INFO  [main] util.ServerCommandLine: 
> vmInputArguments=[-Dproc_master, -XX:OnOutOfMemoryError=kill -9 %p, 
> -Dhdp.version=2.5.0.0-1232, -XX:+UseConcMarkSweepGC, 
> -XX:ErrorFile=/var/log/hbase/hs_err_pid%p.log, 
> -Djava.security.auth.login.config=/usr/hdp/current/hbase-master/conf/hbase_client_jaas.conf,
>  -Djava.io.tmpdir=/tmp, -verbose:gc, -XX:+PrintGCDetails, 
> -XX:+PrintGCDateStamps, -Xloggc:/var/log/hbase/gc.log-201608210549, 
> -Xmx2048m, 
> -Djava.security.auth.login.config=/usr/hdp/current/hbase-master/conf/hbase_master_jaas.conf,
>  -Dhbase.log.dir=/var/log/hbase, 
> -Dhbase.log.file=hbase-hbase-master-hcube1-1n03.eng.hortonworks.com.log, 
> -Dhbase.home.dir=/usr/hdp/current/hbase-master/bin/.., -Dhbase.id.str=hbase, 
> -Dhbase.root.logger=INFO,RFA, 
> -Djava.library.path=:/usr/hdp/2.5.0.0-1232/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.5.0.0-1232/hadoop/lib/native,
>  -Dhbase.security.logger=INFO,RFAS]
> 2016-08-21 05:49:27,715 INFO  [main] regionserver.RSRpcServices: 
> master/hcube1-1n03.eng.hortonworks.com/172.18.129.3:16000 server-side 
> HConnection retries=350
> 2016-08-21 05:49:27,892 INFO  [main] ipc.SimpleRpcScheduler: Using fifo as 
> user call queue, count=12
> 2016-08-21 05:49:27,896 INFO  [main] ipc.PhoenixRpcSchedulerFactory: Using 
> custom Phoenix Index RPC Handling with index rpc priority 1000 and metadata 
> rpc priority 2000
> 2016-08-21 05:49:27,919 INFO  [main] ipc.RpcServer: 
> master/hcube1-1n03.eng.hortonworks.com/172.18.129.3:16000: started 10 
> reader(s).
> 2016-08-21 05:49:28,027 INFO  [main] impl.MetricsConfig: loaded properties 
> from hadoop-metrics2-hbase.properties
> 2016-08-21 05:49:28,063 INFO  [main] timeline.HadoopTimelineMetricsSink: 
> Initializing Timeline metrics sink.
> 2016-08-21 05:49:28,063 INFO  [main] timeline.HadoopTimelineMetricsSink: 
> Identified hostname = hcube1-1n03.eng.hortonworks.com, serviceName = hbase
> 2016-08-21 05:49:28,174 INFO  [main] timeline.HadoopTimelineMetricsSink: 
> Collector Uri: 
> http://hcube1-2n01.eng.hortonworks.com:6188/ws/v1/timeline/metrics
> 2016-08-21 05:49:28,189 INFO  [main] impl.MetricsSinkAdapter: Sink timeline 
> started
> 2016-08-21 05:49:28,278 INFO  [main] impl.MetricsSystemImpl: Scheduled 
> snapshot period at 10 second(s).
> 2016-08-21 05:49:28,278 INFO  [main] impl.MetricsSystemImpl: HBase metrics 
> system started
> 2016-08-21 05:49:28,649 INFO  [main] security.UserGroupInformation: Login 
> successful for user 
> hbase/hcube1-1n03.eng.hortonworks@hwx.stanleyhotel.com using keytab file 
> /etc/security/keytabs/hbase.service.keytab
> 2016-08-21 05:49:28,655 INFO  [main] hfile.CacheConfig: Allocating 
> LruBlockCache size=791.90 MB, blockSize=64 KB
> 2016-08-21 05:49:28,663 WARN  [main] hfile.CacheConfig: Configuration 
> 'hbase.bucketcache.percentage.in.combinedcache' is 

[jira] [Updated] (AMBARI-18238) HBase Master doesn't start after upgrading from HDP 2.3.6 using Ambari

2016-08-23 Thread Dmytro Grinenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmytro Grinenko updated AMBARI-18238:
-
Attachment: (was: AMBARI-18238.patch)

> HBase Master doesn't start after upgrading from HDP 2.3.6 using Ambari
> --
>
> Key: AMBARI-18238
> URL: https://issues.apache.org/jira/browse/AMBARI-18238
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk
>Reporter: Dmytro Grinenko
>Priority: Critical
> Fix For: trunk
>
> Attachments: AMBARI-18238.patch
>
>
> After upgrading, the HBase Master doesn't start and throws the error below.
> {code}
> 2016-08-21 05:49:27,298 INFO  [main] util.ServerCommandLine: 
> env:SERVER_GC_OPTS=-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps 
> -Xloggc:/var/log/hbase/gc.log-201608210549
> 2016-08-21 05:49:27,298 INFO  [main] util.ServerCommandLine: 
> env:HADOOP_CONF=/usr/hdp/2.5.0.0-1232/hadoop/conf
> 2016-08-21 05:49:27,298 INFO  [main] util.ServerCommandLine: 
> env:HOSTNAME=hcube1-1n03.eng.hortonworks.com
> 2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
> env:QTDIR=/usr/lib64/qt-3.3
> 2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
> env:NLSPATH=/usr/dt/lib/nls/msg/%L/%N.cat
> 2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
> env:HBASE_THRIFT_OPTS=
> 2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
> env:HBASE_HOME=/usr/hdp/current/hbase-master/bin/..
> 2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
> env:QTLIB=/usr/lib64/qt-3.3/lib
> 2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
> env:HOME=/home/hbase
> 2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
> env:MALLOC_ARENA_MAX=4
> 2016-08-21 05:49:27,301 INFO  [main] util.ServerCommandLine: vmName=Java 
> HotSpot(TM) 64-Bit Server VM, vmVendor=Oracle Corporation, vmVersion=25.60-b23
> 2016-08-21 05:49:27,302 INFO  [main] util.ServerCommandLine: 
> vmInputArguments=[-Dproc_master, -XX:OnOutOfMemoryError=kill -9 %p, 
> -Dhdp.version=2.5.0.0-1232, -XX:+UseConcMarkSweepGC, 
> -XX:ErrorFile=/var/log/hbase/hs_err_pid%p.log, 
> -Djava.security.auth.login.config=/usr/hdp/current/hbase-master/conf/hbase_client_jaas.conf,
>  -Djava.io.tmpdir=/tmp, -verbose:gc, -XX:+PrintGCDetails, 
> -XX:+PrintGCDateStamps, -Xloggc:/var/log/hbase/gc.log-201608210549, 
> -Xmx2048m, 
> -Djava.security.auth.login.config=/usr/hdp/current/hbase-master/conf/hbase_master_jaas.conf,
>  -Dhbase.log.dir=/var/log/hbase, 
> -Dhbase.log.file=hbase-hbase-master-hcube1-1n03.eng.hortonworks.com.log, 
> -Dhbase.home.dir=/usr/hdp/current/hbase-master/bin/.., -Dhbase.id.str=hbase, 
> -Dhbase.root.logger=INFO,RFA, 
> -Djava.library.path=:/usr/hdp/2.5.0.0-1232/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.5.0.0-1232/hadoop/lib/native,
>  -Dhbase.security.logger=INFO,RFAS]
> 2016-08-21 05:49:27,715 INFO  [main] regionserver.RSRpcServices: 
> master/hcube1-1n03.eng.hortonworks.com/172.18.129.3:16000 server-side 
> HConnection retries=350
> 2016-08-21 05:49:27,892 INFO  [main] ipc.SimpleRpcScheduler: Using fifo as 
> user call queue, count=12
> 2016-08-21 05:49:27,896 INFO  [main] ipc.PhoenixRpcSchedulerFactory: Using 
> custom Phoenix Index RPC Handling with index rpc priority 1000 and metadata 
> rpc priority 2000
> 2016-08-21 05:49:27,919 INFO  [main] ipc.RpcServer: 
> master/hcube1-1n03.eng.hortonworks.com/172.18.129.3:16000: started 10 
> reader(s).
> 2016-08-21 05:49:28,027 INFO  [main] impl.MetricsConfig: loaded properties 
> from hadoop-metrics2-hbase.properties
> 2016-08-21 05:49:28,063 INFO  [main] timeline.HadoopTimelineMetricsSink: 
> Initializing Timeline metrics sink.
> 2016-08-21 05:49:28,063 INFO  [main] timeline.HadoopTimelineMetricsSink: 
> Identified hostname = hcube1-1n03.eng.hortonworks.com, serviceName = hbase
> 2016-08-21 05:49:28,174 INFO  [main] timeline.HadoopTimelineMetricsSink: 
> Collector Uri: 
> http://hcube1-2n01.eng.hortonworks.com:6188/ws/v1/timeline/metrics
> 2016-08-21 05:49:28,189 INFO  [main] impl.MetricsSinkAdapter: Sink timeline 
> started
> 2016-08-21 05:49:28,278 INFO  [main] impl.MetricsSystemImpl: Scheduled 
> snapshot period at 10 second(s).
> 2016-08-21 05:49:28,278 INFO  [main] impl.MetricsSystemImpl: HBase metrics 
> system started
> 2016-08-21 05:49:28,649 INFO  [main] security.UserGroupInformation: Login 
> successful for user 
> hbase/hcube1-1n03.eng.hortonworks@hwx.stanleyhotel.com using keytab file 
> /etc/security/keytabs/hbase.service.keytab
> 2016-08-21 05:49:28,655 INFO  [main] hfile.CacheConfig: Allocating 
> LruBlockCache size=791.90 MB, blockSize=64 KB
> 2016-08-21 05:49:28,663 WARN  [main] hfile.CacheConfig: Configuration 
> 'hbase.bucketcache.percentage.in.combinedcache' is no longer respected. See 

[jira] [Updated] (AMBARI-18196) Generate REST API docs with Swagger for Log Search

2016-08-23 Thread Olivér Szabó (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Olivér Szabó updated AMBARI-18196:
--
Attachment: (was: AMBARI-18196_appendum1.patch)

> Generate REST API docs with Swagger for Log Search
> --
>
> Key: AMBARI-18196
> URL: https://issues.apache.org/jira/browse/AMBARI-18196
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-logsearch
>Affects Versions: 2.4.0
>Reporter: Olivér Szabó
>Assignee: Olivér Szabó
> Fix For: 2.5.0
>
> Attachments: AMBARI-18196.patch, AMBARI-18196_appendum1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18196) Generate REST API docs with Swagger for Log Search

2016-08-23 Thread Olivér Szabó (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Olivér Szabó updated AMBARI-18196:
--
Attachment: AMBARI-18196_appendum1.patch

> Generate REST API docs with Swagger for Log Search
> --
>
> Key: AMBARI-18196
> URL: https://issues.apache.org/jira/browse/AMBARI-18196
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-logsearch
>Affects Versions: 2.4.0
>Reporter: Olivér Szabó
>Assignee: Olivér Szabó
> Fix For: 2.5.0
>
> Attachments: AMBARI-18196.patch, AMBARI-18196_appendum1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (AMBARI-17159) Upon successful start, log the process id for daemons started

2016-08-23 Thread Myroslav Papirkovskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-17159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Myroslav Papirkovskyi resolved AMBARI-17159.

Resolution: Fixed

Pushed to trunk

> Upon successful start, log the process id for daemons started
> -
>
> Key: AMBARI-17159
> URL: https://issues.apache.org/jira/browse/AMBARI-17159
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-agent
>Affects Versions: 2.4.0
>Reporter: Myroslav Papirkovskyi
>Assignee: Myroslav Papirkovskyi
>Priority: Critical
> Fix For: trunk
>
> Attachments: AMBARI-17159.patch
>
>
> As part of successful start commands, let's log the process id of the daemons 
> that started. One option could be to call status() from start() with a 
> parameter that lets the implementation log all details.
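
A minimal sketch of that option, assuming a status() that reads the daemon's 
pid file (the pid-file path and the print-based logging below are 
illustrative, not the committed patch):

{code}
import os

PID_FILE = "/var/run/oozie/oozie.pid"  # example daemon pid file

def status(log_details=False):
    """Raise if the daemon is not running; optionally log its pid."""
    with open(PID_FILE) as f:          # missing pid file -> not running
        pid = int(f.read().strip())
    os.kill(pid, 0)                    # signal 0 only probes the process
    if log_details:
        print("Daemon running with pid %d (pid file %s)" % (pid, PID_FILE))

def start():
    # ... launch the daemon here ...
    status(log_details=True)           # confirm startup and log the pid
{code}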



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18104) Unit Tests Broken Due to AMBARI-18011

2016-08-23 Thread Myroslav Papirkovskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Myroslav Papirkovskyi updated AMBARI-18104:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Pushed to trunk

> Unit Tests Broken Due to AMBARI-18011
> -
>
> Key: AMBARI-18104
> URL: https://issues.apache.org/jira/browse/AMBARI-18104
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.0
>Reporter: Jonathan Hurley
>Assignee: Myroslav Papirkovskyi
>Priority: Blocker
> Fix For: trunk
>
> Attachments: AMBARI-18104.patch
>
>
> Builds on trunk have been failing due to AMBARI-18011:
> {code}
> Error Message
> Exception [EclipseLink-4002] (Eclipse Persistence Services - 
> 2.6.2.v20151217-774c696): org.eclipse.persistence.exceptions.DatabaseException
> Internal Exception: java.sql.SQLIntegrityConstraintViolationException: The 
> statement was aborted because it would have caused a duplicate key value in a 
> unique or primary key constraint or unique index identified by 
> 'SQL160810131838480' defined on 'REQUEST'.
> Error Code: 2
> Call: INSERT INTO request (request_id, cluster_id, command_name, create_time, 
> end_time, exclusive_execution, inputs, request_context, request_type, 
> start_time, status, request_schedule_id) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, 
> ?, ?, ?)
>  bind => [12 parameters bound]
> Query: 
> InsertObjectQuery(org.apache.ambari.server.orm.entities.StageEntity@2753fa)
> Stacktrace
> javax.persistence.RollbackException: 
> Exception [EclipseLink-4002] (Eclipse Persistence Services - 
> 2.6.2.v20151217-774c696): org.eclipse.persistence.exceptions.DatabaseException
> Internal Exception: java.sql.SQLIntegrityConstraintViolationException: The 
> statement was aborted because it would have caused a duplicate key value in a 
> unique or primary key constraint or unique index identified by 
> 'SQL160810131838480' defined on 'REQUEST'.
> Error Code: 2
> Call: INSERT INTO request (request_id, cluster_id, command_name, create_time, 
> end_time, exclusive_execution, inputs, request_context, request_type, 
> start_time, status, request_schedule_id) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, 
> ?, ?, ?)
>   bind => [12 parameters bound]
> Query: 
> InsertObjectQuery(org.apache.ambari.server.orm.entities.StageEntity@2753fa)
>   at 
> org.apache.ambari.server.controller.AmbariManagementControllerTest.testScheduleSmokeTest(AmbariManagementControllerTest.java:9759)
> Caused by: org.eclipse.persistence.exceptions.DatabaseException: 
> Internal Exception: java.sql.SQLIntegrityConstraintViolationException: The 
> statement was aborted because it would have caused a duplicate key value in a 
> unique or primary key constraint or unique index identified by 
> 'SQL160810131838480' defined on 'REQUEST'.
> Error Code: 2
> Call: INSERT INTO request (request_id, cluster_id, command_name, create_time, 
> end_time, exclusive_execution, inputs, request_context, request_type, 
> start_time, status, request_schedule_id) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, 
> ?, ?, ?)
>   bind => [12 parameters bound]
> Query: 
> InsertObjectQuery(org.apache.ambari.server.orm.entities.StageEntity@2753fa)
>   at 
> org.apache.ambari.server.controller.AmbariManagementControllerTest.testScheduleSmokeTest(AmbariManagementControllerTest.java:9759)
> Caused by: java.sql.SQLIntegrityConstraintViolationException: The statement 
> was aborted because it would have caused a duplicate key value in a unique or 
> primary key constraint or unique index identified by 'SQL160810131838480' 
> defined on 'REQUEST'.
>   at 
> org.apache.ambari.server.controller.AmbariManagementControllerTest.testScheduleSmokeTest(AmbariManagementControllerTest.java:9759)
> Caused by: org.apache.derby.impl.jdbc.EmbedSQLException: The statement was 
> aborted because it would have caused a duplicate key value in a unique or 
> primary key constraint or unique index identified by 'SQL160810131838480' 
> defined on 'REQUEST'.
>   at 
> org.apache.ambari.server.controller.AmbariManagementControllerTest.testScheduleSmokeTest(AmbariManagementControllerTest.java:9759)
> Caused by: org.apache.derby.iapi.error.StandardException: The statement was 
> aborted because it would have caused a duplicate key value in a unique or 
> primary key constraint or unique index identified by 'SQL160810131838480' 
> defined on 'REQUEST'.
>   at 
> org.apache.ambari.server.controller.AmbariManagementControllerTest.testScheduleSmokeTest(AmbariManagementControllerTest.java:9759)
> {code}
> {code}
> Exception [EclipseLink-4002] (Eclipse Persistence Services - 
> 2.6.2.v20151217-774c696): org.eclipse.persistence.exceptions.DatabaseException
> Internal Exception: 

[jira] [Updated] (AMBARI-18238) HBase Master doesn't start after upgrading from HDP 2.3.6 using Ambari

2016-08-23 Thread Dmytro Grinenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmytro Grinenko updated AMBARI-18238:
-
Attachment: AMBARI-18238.patch

> HBase Master doesn't start after upgrading from HDP 2.3.6 using Ambari
> --
>
> Key: AMBARI-18238
> URL: https://issues.apache.org/jira/browse/AMBARI-18238
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk
>Reporter: Dmytro Grinenko
>Priority: Critical
> Fix For: trunk
>
> Attachments: AMBARI-18238.patch, AMBARI-18238.patch
>
>
> After upgrading, the HBase Master doesn't start and throws the error below.
> {code}
> 2016-08-21 05:49:27,298 INFO  [main] util.ServerCommandLine: 
> env:SERVER_GC_OPTS=-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps 
> -Xloggc:/var/log/hbase/gc.log-201608210549
> 2016-08-21 05:49:27,298 INFO  [main] util.ServerCommandLine: 
> env:HADOOP_CONF=/usr/hdp/2.5.0.0-1232/hadoop/conf
> 2016-08-21 05:49:27,298 INFO  [main] util.ServerCommandLine: 
> env:HOSTNAME=hcube1-1n03.eng.hortonworks.com
> 2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
> env:QTDIR=/usr/lib64/qt-3.3
> 2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
> env:NLSPATH=/usr/dt/lib/nls/msg/%L/%N.cat
> 2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
> env:HBASE_THRIFT_OPTS=
> 2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
> env:HBASE_HOME=/usr/hdp/current/hbase-master/bin/..
> 2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
> env:QTLIB=/usr/lib64/qt-3.3/lib
> 2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
> env:HOME=/home/hbase
> 2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
> env:MALLOC_ARENA_MAX=4
> 2016-08-21 05:49:27,301 INFO  [main] util.ServerCommandLine: vmName=Java 
> HotSpot(TM) 64-Bit Server VM, vmVendor=Oracle Corporation, vmVersion=25.60-b23
> 2016-08-21 05:49:27,302 INFO  [main] util.ServerCommandLine: 
> vmInputArguments=[-Dproc_master, -XX:OnOutOfMemoryError=kill -9 %p, 
> -Dhdp.version=2.5.0.0-1232, -XX:+UseConcMarkSweepGC, 
> -XX:ErrorFile=/var/log/hbase/hs_err_pid%p.log, 
> -Djava.security.auth.login.config=/usr/hdp/current/hbase-master/conf/hbase_client_jaas.conf,
>  -Djava.io.tmpdir=/tmp, -verbose:gc, -XX:+PrintGCDetails, 
> -XX:+PrintGCDateStamps, -Xloggc:/var/log/hbase/gc.log-201608210549, 
> -Xmx2048m, 
> -Djava.security.auth.login.config=/usr/hdp/current/hbase-master/conf/hbase_master_jaas.conf,
>  -Dhbase.log.dir=/var/log/hbase, 
> -Dhbase.log.file=hbase-hbase-master-hcube1-1n03.eng.hortonworks.com.log, 
> -Dhbase.home.dir=/usr/hdp/current/hbase-master/bin/.., -Dhbase.id.str=hbase, 
> -Dhbase.root.logger=INFO,RFA, 
> -Djava.library.path=:/usr/hdp/2.5.0.0-1232/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.5.0.0-1232/hadoop/lib/native,
>  -Dhbase.security.logger=INFO,RFAS]
> 2016-08-21 05:49:27,715 INFO  [main] regionserver.RSRpcServices: 
> master/hcube1-1n03.eng.hortonworks.com/172.18.129.3:16000 server-side 
> HConnection retries=350
> 2016-08-21 05:49:27,892 INFO  [main] ipc.SimpleRpcScheduler: Using fifo as 
> user call queue, count=12
> 2016-08-21 05:49:27,896 INFO  [main] ipc.PhoenixRpcSchedulerFactory: Using 
> custom Phoenix Index RPC Handling with index rpc priority 1000 and metadata 
> rpc priority 2000
> 2016-08-21 05:49:27,919 INFO  [main] ipc.RpcServer: 
> master/hcube1-1n03.eng.hortonworks.com/172.18.129.3:16000: started 10 
> reader(s).
> 2016-08-21 05:49:28,027 INFO  [main] impl.MetricsConfig: loaded properties 
> from hadoop-metrics2-hbase.properties
> 2016-08-21 05:49:28,063 INFO  [main] timeline.HadoopTimelineMetricsSink: 
> Initializing Timeline metrics sink.
> 2016-08-21 05:49:28,063 INFO  [main] timeline.HadoopTimelineMetricsSink: 
> Identified hostname = hcube1-1n03.eng.hortonworks.com, serviceName = hbase
> 2016-08-21 05:49:28,174 INFO  [main] timeline.HadoopTimelineMetricsSink: 
> Collector Uri: 
> http://hcube1-2n01.eng.hortonworks.com:6188/ws/v1/timeline/metrics
> 2016-08-21 05:49:28,189 INFO  [main] impl.MetricsSinkAdapter: Sink timeline 
> started
> 2016-08-21 05:49:28,278 INFO  [main] impl.MetricsSystemImpl: Scheduled 
> snapshot period at 10 second(s).
> 2016-08-21 05:49:28,278 INFO  [main] impl.MetricsSystemImpl: HBase metrics 
> system started
> 2016-08-21 05:49:28,649 INFO  [main] security.UserGroupInformation: Login 
> successful for user 
> hbase/hcube1-1n03.eng.hortonworks@hwx.stanleyhotel.com using keytab file 
> /etc/security/keytabs/hbase.service.keytab
> 2016-08-21 05:49:28,655 INFO  [main] hfile.CacheConfig: Allocating 
> LruBlockCache size=791.90 MB, blockSize=64 KB
> 2016-08-21 05:49:28,663 WARN  [main] hfile.CacheConfig: Configuration 
> 'hbase.bucketcache.percentage.in.combinedcache' is no longer 

[jira] [Updated] (AMBARI-18104) Unit Tests Broken Due to AMBARI-18011

2016-08-23 Thread Myroslav Papirkovskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Myroslav Papirkovskyi updated AMBARI-18104:
---
Status: Patch Available  (was: Open)

> Unit Tests Broken Due to AMBARI-18011
> -
>
> Key: AMBARI-18104
> URL: https://issues.apache.org/jira/browse/AMBARI-18104
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.0
>Reporter: Jonathan Hurley
>Assignee: Myroslav Papirkovskyi
>Priority: Blocker
> Fix For: trunk
>
> Attachments: AMBARI-18104.patch
>
>
> Builds on trunk have been failing due to AMBARI-18011:
> {code}
> Error Message
> Exception [EclipseLink-4002] (Eclipse Persistence Services - 
> 2.6.2.v20151217-774c696): org.eclipse.persistence.exceptions.DatabaseException
> Internal Exception: java.sql.SQLIntegrityConstraintViolationException: The 
> statement was aborted because it would have caused a duplicate key value in a 
> unique or primary key constraint or unique index identified by 
> 'SQL160810131838480' defined on 'REQUEST'.
> Error Code: 2
> Call: INSERT INTO request (request_id, cluster_id, command_name, create_time, 
> end_time, exclusive_execution, inputs, request_context, request_type, 
> start_time, status, request_schedule_id) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, 
> ?, ?, ?)
>  bind => [12 parameters bound]
> Query: 
> InsertObjectQuery(org.apache.ambari.server.orm.entities.StageEntity@2753fa)
> Stacktrace
> javax.persistence.RollbackException: 
> Exception [EclipseLink-4002] (Eclipse Persistence Services - 
> 2.6.2.v20151217-774c696): org.eclipse.persistence.exceptions.DatabaseException
> Internal Exception: java.sql.SQLIntegrityConstraintViolationException: The 
> statement was aborted because it would have caused a duplicate key value in a 
> unique or primary key constraint or unique index identified by 
> 'SQL160810131838480' defined on 'REQUEST'.
> Error Code: 2
> Call: INSERT INTO request (request_id, cluster_id, command_name, create_time, 
> end_time, exclusive_execution, inputs, request_context, request_type, 
> start_time, status, request_schedule_id) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, 
> ?, ?, ?)
>   bind => [12 parameters bound]
> Query: 
> InsertObjectQuery(org.apache.ambari.server.orm.entities.StageEntity@2753fa)
>   at 
> org.apache.ambari.server.controller.AmbariManagementControllerTest.testScheduleSmokeTest(AmbariManagementControllerTest.java:9759)
> Caused by: org.eclipse.persistence.exceptions.DatabaseException: 
> Internal Exception: java.sql.SQLIntegrityConstraintViolationException: The 
> statement was aborted because it would have caused a duplicate key value in a 
> unique or primary key constraint or unique index identified by 
> 'SQL160810131838480' defined on 'REQUEST'.
> Error Code: 2
> Call: INSERT INTO request (request_id, cluster_id, command_name, create_time, 
> end_time, exclusive_execution, inputs, request_context, request_type, 
> start_time, status, request_schedule_id) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, 
> ?, ?, ?)
>   bind => [12 parameters bound]
> Query: 
> InsertObjectQuery(org.apache.ambari.server.orm.entities.StageEntity@2753fa)
>   at 
> org.apache.ambari.server.controller.AmbariManagementControllerTest.testScheduleSmokeTest(AmbariManagementControllerTest.java:9759)
> Caused by: java.sql.SQLIntegrityConstraintViolationException: The statement 
> was aborted because it would have caused a duplicate key value in a unique or 
> primary key constraint or unique index identified by 'SQL160810131838480' 
> defined on 'REQUEST'.
>   at 
> org.apache.ambari.server.controller.AmbariManagementControllerTest.testScheduleSmokeTest(AmbariManagementControllerTest.java:9759)
> Caused by: org.apache.derby.impl.jdbc.EmbedSQLException: The statement was 
> aborted because it would have caused a duplicate key value in a unique or 
> primary key constraint or unique index identified by 'SQL160810131838480' 
> defined on 'REQUEST'.
>   at 
> org.apache.ambari.server.controller.AmbariManagementControllerTest.testScheduleSmokeTest(AmbariManagementControllerTest.java:9759)
> Caused by: org.apache.derby.iapi.error.StandardException: The statement was 
> aborted because it would have caused a duplicate key value in a unique or 
> primary key constraint or unique index identified by 'SQL160810131838480' 
> defined on 'REQUEST'.
>   at 
> org.apache.ambari.server.controller.AmbariManagementControllerTest.testScheduleSmokeTest(AmbariManagementControllerTest.java:9759)
> {code}
> {code}
> Exception [EclipseLink-4002] (Eclipse Persistence Services - 
> 2.6.2.v20151217-774c696): org.eclipse.persistence.exceptions.DatabaseException
> Internal Exception: java.sql.SQLIntegrityConstraintViolationException: The 
> statement was aborted 

[jira] [Updated] (AMBARI-18104) Unit Tests Broken Due to AMBARI-18011

2016-08-23 Thread Myroslav Papirkovskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Myroslav Papirkovskyi updated AMBARI-18104:
---
Attachment: (was: AMBARI-18104.patch)

> Unit Tests Broken Due to AMBARI-18011
> -
>
> Key: AMBARI-18104
> URL: https://issues.apache.org/jira/browse/AMBARI-18104
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.0
>Reporter: Jonathan Hurley
>Assignee: Myroslav Papirkovskyi
>Priority: Blocker
> Fix For: trunk
>
> Attachments: AMBARI-18104.patch
>
>
> Builds on trunk have been failing due to AMBARI-18011:
> {code}
> Error Message
> Exception [EclipseLink-4002] (Eclipse Persistence Services - 
> 2.6.2.v20151217-774c696): org.eclipse.persistence.exceptions.DatabaseException
> Internal Exception: java.sql.SQLIntegrityConstraintViolationException: The 
> statement was aborted because it would have caused a duplicate key value in a 
> unique or primary key constraint or unique index identified by 
> 'SQL160810131838480' defined on 'REQUEST'.
> Error Code: 2
> Call: INSERT INTO request (request_id, cluster_id, command_name, create_time, 
> end_time, exclusive_execution, inputs, request_context, request_type, 
> start_time, status, request_schedule_id) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, 
> ?, ?, ?)
>  bind => [12 parameters bound]
> Query: 
> InsertObjectQuery(org.apache.ambari.server.orm.entities.StageEntity@2753fa)
> Stacktrace
> javax.persistence.RollbackException: 
> Exception [EclipseLink-4002] (Eclipse Persistence Services - 
> 2.6.2.v20151217-774c696): org.eclipse.persistence.exceptions.DatabaseException
> Internal Exception: java.sql.SQLIntegrityConstraintViolationException: The 
> statement was aborted because it would have caused a duplicate key value in a 
> unique or primary key constraint or unique index identified by 
> 'SQL160810131838480' defined on 'REQUEST'.
> Error Code: 2
> Call: INSERT INTO request (request_id, cluster_id, command_name, create_time, 
> end_time, exclusive_execution, inputs, request_context, request_type, 
> start_time, status, request_schedule_id) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, 
> ?, ?, ?)
>   bind => [12 parameters bound]
> Query: 
> InsertObjectQuery(org.apache.ambari.server.orm.entities.StageEntity@2753fa)
>   at 
> org.apache.ambari.server.controller.AmbariManagementControllerTest.testScheduleSmokeTest(AmbariManagementControllerTest.java:9759)
> Caused by: org.eclipse.persistence.exceptions.DatabaseException: 
> Internal Exception: java.sql.SQLIntegrityConstraintViolationException: The 
> statement was aborted because it would have caused a duplicate key value in a 
> unique or primary key constraint or unique index identified by 
> 'SQL160810131838480' defined on 'REQUEST'.
> Error Code: 2
> Call: INSERT INTO request (request_id, cluster_id, command_name, create_time, 
> end_time, exclusive_execution, inputs, request_context, request_type, 
> start_time, status, request_schedule_id) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, 
> ?, ?, ?)
>   bind => [12 parameters bound]
> Query: 
> InsertObjectQuery(org.apache.ambari.server.orm.entities.StageEntity@2753fa)
>   at 
> org.apache.ambari.server.controller.AmbariManagementControllerTest.testScheduleSmokeTest(AmbariManagementControllerTest.java:9759)
> Caused by: java.sql.SQLIntegrityConstraintViolationException: The statement 
> was aborted because it would have caused a duplicate key value in a unique or 
> primary key constraint or unique index identified by 'SQL160810131838480' 
> defined on 'REQUEST'.
>   at 
> org.apache.ambari.server.controller.AmbariManagementControllerTest.testScheduleSmokeTest(AmbariManagementControllerTest.java:9759)
> Caused by: org.apache.derby.impl.jdbc.EmbedSQLException: The statement was 
> aborted because it would have caused a duplicate key value in a unique or 
> primary key constraint or unique index identified by 'SQL160810131838480' 
> defined on 'REQUEST'.
>   at 
> org.apache.ambari.server.controller.AmbariManagementControllerTest.testScheduleSmokeTest(AmbariManagementControllerTest.java:9759)
> Caused by: org.apache.derby.iapi.error.StandardException: The statement was 
> aborted because it would have caused a duplicate key value in a unique or 
> primary key constraint or unique index identified by 'SQL160810131838480' 
> defined on 'REQUEST'.
>   at 
> org.apache.ambari.server.controller.AmbariManagementControllerTest.testScheduleSmokeTest(AmbariManagementControllerTest.java:9759)
> {code}
> {code}
> Exception [EclipseLink-4002] (Eclipse Persistence Services - 
> 2.6.2.v20151217-774c696): org.eclipse.persistence.exceptions.DatabaseException
> Internal Exception: java.sql.SQLIntegrityConstraintViolationException: The 
> statement was 

[jira] [Updated] (AMBARI-18104) Unit Tests Broken Due to AMBARI-18011

2016-08-23 Thread Myroslav Papirkovskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Myroslav Papirkovskyi updated AMBARI-18104:
---
Attachment: AMBARI-18104.patch

> Unit Tests Broken Due to AMBARI-18011
> -
>
> Key: AMBARI-18104
> URL: https://issues.apache.org/jira/browse/AMBARI-18104
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.0
>Reporter: Jonathan Hurley
>Assignee: Myroslav Papirkovskyi
>Priority: Blocker
> Fix For: trunk
>
> Attachments: AMBARI-18104.patch
>
>
> Builds on trunk have been failing due to AMBARI-18011:
> {code}
> Error Message
> Exception [EclipseLink-4002] (Eclipse Persistence Services - 
> 2.6.2.v20151217-774c696): org.eclipse.persistence.exceptions.DatabaseException
> Internal Exception: java.sql.SQLIntegrityConstraintViolationException: The 
> statement was aborted because it would have caused a duplicate key value in a 
> unique or primary key constraint or unique index identified by 
> 'SQL160810131838480' defined on 'REQUEST'.
> Error Code: 2
> Call: INSERT INTO request (request_id, cluster_id, command_name, create_time, 
> end_time, exclusive_execution, inputs, request_context, request_type, 
> start_time, status, request_schedule_id) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, 
> ?, ?, ?)
>  bind => [12 parameters bound]
> Query: 
> InsertObjectQuery(org.apache.ambari.server.orm.entities.StageEntity@2753fa)
> Stacktrace
> javax.persistence.RollbackException: 
> Exception [EclipseLink-4002] (Eclipse Persistence Services - 
> 2.6.2.v20151217-774c696): org.eclipse.persistence.exceptions.DatabaseException
> Internal Exception: java.sql.SQLIntegrityConstraintViolationException: The 
> statement was aborted because it would have caused a duplicate key value in a 
> unique or primary key constraint or unique index identified by 
> 'SQL160810131838480' defined on 'REQUEST'.
> Error Code: 2
> Call: INSERT INTO request (request_id, cluster_id, command_name, create_time, 
> end_time, exclusive_execution, inputs, request_context, request_type, 
> start_time, status, request_schedule_id) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, 
> ?, ?, ?)
>   bind => [12 parameters bound]
> Query: 
> InsertObjectQuery(org.apache.ambari.server.orm.entities.StageEntity@2753fa)
>   at 
> org.apache.ambari.server.controller.AmbariManagementControllerTest.testScheduleSmokeTest(AmbariManagementControllerTest.java:9759)
> Caused by: org.eclipse.persistence.exceptions.DatabaseException: 
> Internal Exception: java.sql.SQLIntegrityConstraintViolationException: The 
> statement was aborted because it would have caused a duplicate key value in a 
> unique or primary key constraint or unique index identified by 
> 'SQL160810131838480' defined on 'REQUEST'.
> Error Code: 2
> Call: INSERT INTO request (request_id, cluster_id, command_name, create_time, 
> end_time, exclusive_execution, inputs, request_context, request_type, 
> start_time, status, request_schedule_id) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, 
> ?, ?, ?)
>   bind => [12 parameters bound]
> Query: 
> InsertObjectQuery(org.apache.ambari.server.orm.entities.StageEntity@2753fa)
>   at 
> org.apache.ambari.server.controller.AmbariManagementControllerTest.testScheduleSmokeTest(AmbariManagementControllerTest.java:9759)
> Caused by: java.sql.SQLIntegrityConstraintViolationException: The statement 
> was aborted because it would have caused a duplicate key value in a unique or 
> primary key constraint or unique index identified by 'SQL160810131838480' 
> defined on 'REQUEST'.
>   at 
> org.apache.ambari.server.controller.AmbariManagementControllerTest.testScheduleSmokeTest(AmbariManagementControllerTest.java:9759)
> Caused by: org.apache.derby.impl.jdbc.EmbedSQLException: The statement was 
> aborted because it would have caused a duplicate key value in a unique or 
> primary key constraint or unique index identified by 'SQL160810131838480' 
> defined on 'REQUEST'.
>   at 
> org.apache.ambari.server.controller.AmbariManagementControllerTest.testScheduleSmokeTest(AmbariManagementControllerTest.java:9759)
> Caused by: org.apache.derby.iapi.error.StandardException: The statement was 
> aborted because it would have caused a duplicate key value in a unique or 
> primary key constraint or unique index identified by 'SQL160810131838480' 
> defined on 'REQUEST'.
>   at 
> org.apache.ambari.server.controller.AmbariManagementControllerTest.testScheduleSmokeTest(AmbariManagementControllerTest.java:9759)
> {code}
> {code}
> Exception [EclipseLink-4002] (Eclipse Persistence Services - 
> 2.6.2.v20151217-774c696): org.eclipse.persistence.exceptions.DatabaseException
> Internal Exception: java.sql.SQLIntegrityConstraintViolationException: The 
> statement was aborted 

[jira] [Assigned] (AMBARI-18104) Unit Tests Broken Due to AMBARI-18011

2016-08-23 Thread Myroslav Papirkovskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Myroslav Papirkovskyi reassigned AMBARI-18104:
--

Assignee: Myroslav Papirkovskyi  (was: Nahappan Somasundaram)

> Unit Tests Broken Due to AMBARI-18011
> -
>
> Key: AMBARI-18104
> URL: https://issues.apache.org/jira/browse/AMBARI-18104
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.5.0
>Reporter: Jonathan Hurley
>Assignee: Myroslav Papirkovskyi
>Priority: Blocker
> Fix For: trunk
>
>
> Builds on trunk have been failing due to AMBARI-18011:
> {code}
> Error Message
> Exception [EclipseLink-4002] (Eclipse Persistence Services - 
> 2.6.2.v20151217-774c696): org.eclipse.persistence.exceptions.DatabaseException
> Internal Exception: java.sql.SQLIntegrityConstraintViolationException: The 
> statement was aborted because it would have caused a duplicate key value in a 
> unique or primary key constraint or unique index identified by 
> 'SQL160810131838480' defined on 'REQUEST'.
> Error Code: 2
> Call: INSERT INTO request (request_id, cluster_id, command_name, create_time, 
> end_time, exclusive_execution, inputs, request_context, request_type, 
> start_time, status, request_schedule_id) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, 
> ?, ?, ?)
>  bind => [12 parameters bound]
> Query: 
> InsertObjectQuery(org.apache.ambari.server.orm.entities.StageEntity@2753fa)
> Stacktrace
> javax.persistence.RollbackException: 
> Exception [EclipseLink-4002] (Eclipse Persistence Services - 
> 2.6.2.v20151217-774c696): org.eclipse.persistence.exceptions.DatabaseException
> Internal Exception: java.sql.SQLIntegrityConstraintViolationException: The 
> statement was aborted because it would have caused a duplicate key value in a 
> unique or primary key constraint or unique index identified by 
> 'SQL160810131838480' defined on 'REQUEST'.
> Error Code: 2
> Call: INSERT INTO request (request_id, cluster_id, command_name, create_time, 
> end_time, exclusive_execution, inputs, request_context, request_type, 
> start_time, status, request_schedule_id) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, 
> ?, ?, ?)
>   bind => [12 parameters bound]
> Query: 
> InsertObjectQuery(org.apache.ambari.server.orm.entities.StageEntity@2753fa)
>   at 
> org.apache.ambari.server.controller.AmbariManagementControllerTest.testScheduleSmokeTest(AmbariManagementControllerTest.java:9759)
> Caused by: org.eclipse.persistence.exceptions.DatabaseException: 
> Internal Exception: java.sql.SQLIntegrityConstraintViolationException: The 
> statement was aborted because it would have caused a duplicate key value in a 
> unique or primary key constraint or unique index identified by 
> 'SQL160810131838480' defined on 'REQUEST'.
> Error Code: 2
> Call: INSERT INTO request (request_id, cluster_id, command_name, create_time, 
> end_time, exclusive_execution, inputs, request_context, request_type, 
> start_time, status, request_schedule_id) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, 
> ?, ?, ?)
>   bind => [12 parameters bound]
> Query: 
> InsertObjectQuery(org.apache.ambari.server.orm.entities.StageEntity@2753fa)
>   at 
> org.apache.ambari.server.controller.AmbariManagementControllerTest.testScheduleSmokeTest(AmbariManagementControllerTest.java:9759)
> Caused by: java.sql.SQLIntegrityConstraintViolationException: The statement 
> was aborted because it would have caused a duplicate key value in a unique or 
> primary key constraint or unique index identified by 'SQL160810131838480' 
> defined on 'REQUEST'.
>   at 
> org.apache.ambari.server.controller.AmbariManagementControllerTest.testScheduleSmokeTest(AmbariManagementControllerTest.java:9759)
> Caused by: org.apache.derby.impl.jdbc.EmbedSQLException: The statement was 
> aborted because it would have caused a duplicate key value in a unique or 
> primary key constraint or unique index identified by 'SQL160810131838480' 
> defined on 'REQUEST'.
>   at 
> org.apache.ambari.server.controller.AmbariManagementControllerTest.testScheduleSmokeTest(AmbariManagementControllerTest.java:9759)
> Caused by: org.apache.derby.iapi.error.StandardException: The statement was 
> aborted because it would have caused a duplicate key value in a unique or 
> primary key constraint or unique index identified by 'SQL160810131838480' 
> defined on 'REQUEST'.
>   at 
> org.apache.ambari.server.controller.AmbariManagementControllerTest.testScheduleSmokeTest(AmbariManagementControllerTest.java:9759)
> {code}
> {code}
> Exception [EclipseLink-4002] (Eclipse Persistence Services - 
> 2.6.2.v20151217-774c696): org.eclipse.persistence.exceptions.DatabaseException
> Internal Exception: java.sql.SQLIntegrityConstraintViolationException: The 
> statement was aborted because it 

[jira] [Created] (AMBARI-18240) During a Rolling Downgrade Oozie Long Running Jobs Can Fail

2016-08-23 Thread Jonathan Hurley (JIRA)
Jonathan Hurley created AMBARI-18240:


 Summary: During a Rolling Downgrade Oozie Long Running Jobs Can 
Fail
 Key: AMBARI-18240
 URL: https://issues.apache.org/jira/browse/AMBARI-18240
 Project: Ambari
  Issue Type: Bug
  Components: ambari-server
Affects Versions: 2.4.0
Reporter: Jonathan Hurley
Assignee: Jonathan Hurley
Priority: Blocker
 Fix For: trunk


- Install HDP-2.3.2.0-2950 with Ambari 2.4.0
- Begin a long-running job (LRJ) in Oozie
- Start upgrading to HDP-2.5.0.0-1235
- Before the finalize step, start downgrading to HDP-2.3.2.0-2950. 

Sometimes, the LRJ will fail:
{code}
/usr/hdp/current/oozie-client/bin/oozie job -oozie 
http://natr66-grls-dlm10toeriedwngdsec-r6-10.openstacklocal:11000/oozie   -info 
001-160821214718970-oozie-oozi-C@248 
ID : 001-160821214718970-oozie-oozi-C@248

Action Number: 248
Console URL  : -
Error Code   : -
Error Message: -
External ID  : 030-160822042035608-oozie-oozi-W
External Status  : -
Job ID   : 001-160821214718970-oozie-oozi-C
Tracker URI  : -
Created  : 2016-08-22 00:37 GMT
Nominal Time : 2009-01-01 21:35 GMT
Status   : FAILED
Last Modified: 2016-08-22 05:15 GMT
First Missing Dependency : -

[hrt_qa@natr66-grls-dlm10toeriedwngdsec-r6-21 ~]$  
/usr/hdp/current/oozie-client/bin/oozie job -oozie 
http://natr66-grls-dlm10toeriedwngdsec-r6-10.openstacklocal:11000/oozie   -info 
030-160822042035608-oozie-oozi-W
Job ID : 030-160822042035608-oozie-oozi-W

Workflow Name : wordcount
App Path  : hdfs://nameservice/user/hrt_qa/test_oozie_long_running
Status: FAILED
Run   : 0
User  : hrt_qa
Group : -
Created   : 2016-08-22 05:08 GMT
Started   : 2016-08-22 05:08 GMT
Last Modified : 2016-08-22 05:15 GMT
Ended : 2016-08-22 05:15 GMT
CoordAction ID: 001-160821214718970-oozie-oozi-C@248

Actions

ID
StatusExt ID Ext Status Err Code  

030-160822042035608-oozie-oozi-W@wc   
FAILEDjob_1471842441396_0002 FAILED JA017 

030-160822042035608-oozie-oozi-W@:start:  
OK-  OK - 

{code}

This is caused by an outage of both NameNodes during the downgrade. 

- We have two NNs at the "Finalize Upgrade" state; 
-- nn1 is standby (out of safemode)
-- nn2 is active (out of safemode)
- A downgrade begins and we restart nn1
-- After the restart, nn1 hasn't come online yet. Our code tries to 
contact it and can't, so we move on to nn2.
-- nn2 is online and active and out of safemode (because it hasn't been 
downgraded yet), so we let the downgrade continue
- The downgrade continues and we restart nn2
-- However, nn1 is still coming online and isn't even standby yet

Now we have an nn1 which isn't fully loaded and an nn2 which is restarting and 
trying to figure out whether to be active or standby. It's during this gap that 
the tests must be failing. 

So, it seems like we need to be a little bit smarter about waiting for the 
namenode to restart; we can't just look at the "active" one and say things are 
OK because it might be the next one to restart. 
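
A hedged sketch of that smarter wait: after restarting a NameNode, poll it 
with "hdfs haadmin -getServiceState" until it reports a definite HA state, 
instead of moving on as soon as some NameNode answers (the nn IDs and retry 
budget below are illustrative):

{code}
import subprocess
import time

def wait_for_ha_state(nn_id, retries=30, delay=10):
    """Block until the given NameNode reports 'active' or 'standby'."""
    for _ in range(retries):
        try:
            out = subprocess.check_output(
                ["hdfs", "haadmin", "-getServiceState", nn_id],
                stderr=subprocess.STDOUT).strip()
            if out in (b"active", b"standby"):
                return out.decode()
        except subprocess.CalledProcessError:
            pass  # RPC endpoint not up yet; the NameNode is still starting
        time.sleep(delay)
    raise RuntimeError("NameNode %s never reached an HA state" % nn_id)

# During the downgrade: wait_for_ha_state("nn1") before restarting nn2.
{code}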



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18239) oozie.py is reading invalid 'version' attribute which results in not copying required atlas hook jars

2016-08-23 Thread Ayub Khan (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayub Khan updated AMBARI-18239:
---
Attachment: AMBARI-18239.patch

[~sumitmohanty] / [~afernandez] Please review the patch.

> oozie.py is reading invalid 'version' attribute which results in not copying 
> required atlas hook jars
> -
>
> Key: AMBARI-18239
> URL: https://issues.apache.org/jira/browse/AMBARI-18239
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk, 2.4.0
>Reporter: Ayub Khan
>Assignee: Ayub Khan
> Fix For: trunk
>
> Attachments: AMBARI-18239.patch
>
>
> *Oozie server start output from ambari-agent shows this error (a guard 
> sketch follows the log below) - "2016-08-23 07:23:36,447 - ERROR. Atlas is 
> installed in cluster but this Oozie server doesn't contain directory 
> /usr/hdp/None/atlas/hook/hive/"*
> {noformat}
> 2016-08-23 07:21:53,147 - call returned (0, '')
> 2016-08-23 07:21:53,148 - 
> Execute['/usr/hdp/current/oozie-server/bin/oozie-setup.sh sharelib create -fs 
> hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020 -locallib 
> /usr/hdp/current/oozie-server/share'] {'path': 
> [u'/usr/hdp/current/oozie-server/bin:/usr/hdp/current/hadoop-client/bin'], 
> 'user': 'oozie'}
> 2016-08-23 07:23:33,091 - HdfsResource['/user/oozie/share'] 
> {'security_enabled': True, 'hadoop_bin_dir': 
> '/usr/hdp/current/hadoop-client/bin', 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
> 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 'user': 'hdfs', 
> 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
> 'recursive_chmod': True, 'action': ['create_on_execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 
> 'immutable_paths': [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', 
> u'/mr-history/done', u'/apps/falcon'], 'mode': 0755}
> 2016-08-23 07:23:33,093 - Execute['/usr/bin/kinit -kt 
> /etc/security/keytabs/hdfs.headless.keytab h...@example.com'] {'user': 'hdfs'}
> 2016-08-23 07:23:33,170 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 
> 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET --negotiate -u : 
> '"'"'http://nat-r7-pcds-falcon-multi-9.openstacklocal:20070/webhdfs/v1/user/oozie/share?op=GETFILESTATUS=hdfs'"'"'
>  1>/tmp/tmp2xvm99 2>/tmp/tmpwdKIRi''] {'logoutput': None, 'quiet': False}
> 2016-08-23 07:23:33,259 - call returned (0, '')
> 2016-08-23 07:23:33,261 - HdfsResource[None] {'security_enabled': True, 
> 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': 
> '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
> 'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 
> 'hdfs_resource_ignore_file': 
> '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
> 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
> 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': 
> '/usr/hdp/current/hadoop-client/conf', 'immutable_paths': 
> [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', u'/mr-history/done', 
> u'/apps/falcon']}
> 2016-08-23 07:23:33,261 - Execute['cd /var/tmp/oozie && 
> /usr/hdp/current/oozie-server/bin/oozie-start.sh'] {'environment': 
> {'OOZIE_CONFIG': u'/usr/hdp/current/oozie-server/conf'}, 'not_if': 
> "ambari-sudo.sh su oozie -l -s /bin/bash -c 'ls /var/run/oozie/oozie.pid 
> >/dev/null 2>&1 && ps -p `cat /var/run/oozie/oozie.pid` >/dev/null 2>&1'", 
> 'user': 'oozie'}
> 2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this Oozie 
> server doesn't contain directory /usr/hdp/None/atlas/hook/hive/
> {noformat}
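
An illustrative guard for the defect above: when the command JSON carries no 
'version', the interpolated path degenerates to /usr/hdp/None/..., so fall 
back to the current stack version (the helper below is hypothetical, not the 
actual oozie.py code):

{code}
def atlas_hook_dir(command_version, current_stack_version):
    # command_version may be None when the command JSON lacks 'version';
    # interpolating None would yield /usr/hdp/None/atlas/hook/hive/
    version = command_version if command_version else current_stack_version
    return "/usr/hdp/%s/atlas/hook/hive/" % version

# atlas_hook_dir(None, "2.5.0.0-1232")
# -> "/usr/hdp/2.5.0.0-1232/atlas/hook/hive/"
{code}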



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (AMBARI-18239) oozie.py is reading invalid 'version' attribute which results in not copying required atlas hook jars

2016-08-23 Thread Ayub Khan (JIRA)
Ayub Khan created AMBARI-18239:
--

 Summary: oozie.py is reading invalid 'version' attribute which 
results in not copying required atlas hook jars
 Key: AMBARI-18239
 URL: https://issues.apache.org/jira/browse/AMBARI-18239
 Project: Ambari
  Issue Type: Bug
  Components: ambari-server
Affects Versions: trunk, 2.4.0
Reporter: Ayub Khan
Assignee: Ayub Khan
 Fix For: trunk


*Oozie server start output from ambari-agent shows this error - "2016-08-23 
07:23:36,447 - ERROR. Atlas is installed in cluster but this Oozie server 
doesn't contain directory /usr/hdp/None/atlas/hook/hive/"*

{noformat}
2016-08-23 07:21:53,147 - call returned (0, '')
2016-08-23 07:21:53,148 - 
Execute['/usr/hdp/current/oozie-server/bin/oozie-setup.sh sharelib create -fs 
hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020 -locallib 
/usr/hdp/current/oozie-server/share'] {'path': 
[u'/usr/hdp/current/oozie-server/bin:/usr/hdp/current/hadoop-client/bin'], 
'user': 'oozie'}
2016-08-23 07:23:33,091 - HdfsResource['/user/oozie/share'] 
{'security_enabled': True, 'hadoop_bin_dir': 
'/usr/hdp/current/hadoop-client/bin', 'keytab': 
'/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 'user': 'hdfs', 
'hdfs_resource_ignore_file': 
'/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
'recursive_chmod': True, 'action': ['create_on_execute'], 'hadoop_conf_dir': 
'/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'immutable_paths': 
[u'/apps/hive/warehouse', u'/tmp', u'/app-logs', u'/mr-history/done', 
u'/apps/falcon'], 'mode': 0755}
2016-08-23 07:23:33,093 - Execute['/usr/bin/kinit -kt 
/etc/security/keytabs/hdfs.headless.keytab h...@example.com'] {'user': 'hdfs'}
2016-08-23 07:23:33,170 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl 
-sS -L -w '"'"'%{http_code}'"'"' -X GET --negotiate -u : 
'"'"'http://nat-r7-pcds-falcon-multi-9.openstacklocal:20070/webhdfs/v1/user/oozie/share?op=GETFILESTATUS=hdfs'"'"'
 1>/tmp/tmp2xvm99 2>/tmp/tmpwdKIRi''] {'logoutput': None, 'quiet': False}
2016-08-23 07:23:33,259 - call returned (0, '')
2016-08-23 07:23:33,261 - HdfsResource[None] {'security_enabled': True, 
'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': 
'/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': '', 'default_fs': 
'hdfs://nat-r7-pcds-falcon-multi-9.openstacklocal:8020', 
'hdfs_resource_ignore_file': 
'/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 
'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'h...@example.com', 
'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': 
'/usr/hdp/current/hadoop-client/conf', 'immutable_paths': 
[u'/apps/hive/warehouse', u'/tmp', u'/app-logs', u'/mr-history/done', 
u'/apps/falcon']}
2016-08-23 07:23:33,261 - Execute['cd /var/tmp/oozie && 
/usr/hdp/current/oozie-server/bin/oozie-start.sh'] {'environment': 
{'OOZIE_CONFIG': u'/usr/hdp/current/oozie-server/conf'}, 'not_if': 
"ambari-sudo.sh su oozie -l -s /bin/bash -c 'ls /var/run/oozie/oozie.pid 
>/dev/null 2>&1 && ps -p `cat /var/run/oozie/oozie.pid` >/dev/null 2>&1'", 
'user': 'oozie'}
2016-08-23 07:23:36,447 - ERROR. Atlas is installed in cluster but this Oozie 
server doesn't contain directory /usr/hdp/None/atlas/hook/hive/
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18238) HBase Master doesn't start after upgrading from HDP 2.3.6 using Ambari

2016-08-23 Thread Dmytro Grinenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmytro Grinenko updated AMBARI-18238:
-
Attachment: AMBARI-18238.patch

> HBase Master doesn't start after upgrading from HDP 2.3.6 using Ambari
> --
>
> Key: AMBARI-18238
> URL: https://issues.apache.org/jira/browse/AMBARI-18238
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk
>Reporter: Dmytro Grinenko
>Priority: Critical
> Fix For: trunk
>
> Attachments: AMBARI-18238.patch
>
>
> After upgrading, the HBase Master doesn't start and throws the error below.
> {code}
> 2016-08-21 05:49:27,298 INFO  [main] util.ServerCommandLine: 
> env:SERVER_GC_OPTS=-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps 
> -Xloggc:/var/log/hbase/gc.log-201608210549
> 2016-08-21 05:49:27,298 INFO  [main] util.ServerCommandLine: 
> env:HADOOP_CONF=/usr/hdp/2.5.0.0-1232/hadoop/conf
> 2016-08-21 05:49:27,298 INFO  [main] util.ServerCommandLine: 
> env:HOSTNAME=hcube1-1n03.eng.hortonworks.com
> 2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
> env:QTDIR=/usr/lib64/qt-3.3
> 2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
> env:NLSPATH=/usr/dt/lib/nls/msg/%L/%N.cat
> 2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
> env:HBASE_THRIFT_OPTS=
> 2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
> env:HBASE_HOME=/usr/hdp/current/hbase-master/bin/..
> 2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
> env:QTLIB=/usr/lib64/qt-3.3/lib
> 2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
> env:HOME=/home/hbase
> 2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
> env:MALLOC_ARENA_MAX=4
> 2016-08-21 05:49:27,301 INFO  [main] util.ServerCommandLine: vmName=Java 
> HotSpot(TM) 64-Bit Server VM, vmVendor=Oracle Corporation, vmVersion=25.60-b23
> 2016-08-21 05:49:27,302 INFO  [main] util.ServerCommandLine: 
> vmInputArguments=[-Dproc_master, -XX:OnOutOfMemoryError=kill -9 %p, 
> -Dhdp.version=2.5.0.0-1232, -XX:+UseConcMarkSweepGC, 
> -XX:ErrorFile=/var/log/hbase/hs_err_pid%p.log, 
> -Djava.security.auth.login.config=/usr/hdp/current/hbase-master/conf/hbase_client_jaas.conf,
>  -Djava.io.tmpdir=/tmp, -verbose:gc, -XX:+PrintGCDetails, 
> -XX:+PrintGCDateStamps, -Xloggc:/var/log/hbase/gc.log-201608210549, 
> -Xmx2048m, 
> -Djava.security.auth.login.config=/usr/hdp/current/hbase-master/conf/hbase_master_jaas.conf,
>  -Dhbase.log.dir=/var/log/hbase, 
> -Dhbase.log.file=hbase-hbase-master-hcube1-1n03.eng.hortonworks.com.log, 
> -Dhbase.home.dir=/usr/hdp/current/hbase-master/bin/.., -Dhbase.id.str=hbase, 
> -Dhbase.root.logger=INFO,RFA, 
> -Djava.library.path=:/usr/hdp/2.5.0.0-1232/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.5.0.0-1232/hadoop/lib/native,
>  -Dhbase.security.logger=INFO,RFAS]
> 2016-08-21 05:49:27,715 INFO  [main] regionserver.RSRpcServices: 
> master/hcube1-1n03.eng.hortonworks.com/172.18.129.3:16000 server-side 
> HConnection retries=350
> 2016-08-21 05:49:27,892 INFO  [main] ipc.SimpleRpcScheduler: Using fifo as 
> user call queue, count=12
> 2016-08-21 05:49:27,896 INFO  [main] ipc.PhoenixRpcSchedulerFactory: Using 
> custom Phoenix Index RPC Handling with index rpc priority 1000 and metadata 
> rpc priority 2000
> 2016-08-21 05:49:27,919 INFO  [main] ipc.RpcServer: 
> master/hcube1-1n03.eng.hortonworks.com/172.18.129.3:16000: started 10 
> reader(s).
> 2016-08-21 05:49:28,027 INFO  [main] impl.MetricsConfig: loaded properties 
> from hadoop-metrics2-hbase.properties
> 2016-08-21 05:49:28,063 INFO  [main] timeline.HadoopTimelineMetricsSink: 
> Initializing Timeline metrics sink.
> 2016-08-21 05:49:28,063 INFO  [main] timeline.HadoopTimelineMetricsSink: 
> Identified hostname = hcube1-1n03.eng.hortonworks.com, serviceName = hbase
> 2016-08-21 05:49:28,174 INFO  [main] timeline.HadoopTimelineMetricsSink: 
> Collector Uri: 
> http://hcube1-2n01.eng.hortonworks.com:6188/ws/v1/timeline/metrics
> 2016-08-21 05:49:28,189 INFO  [main] impl.MetricsSinkAdapter: Sink timeline 
> started
> 2016-08-21 05:49:28,278 INFO  [main] impl.MetricsSystemImpl: Scheduled 
> snapshot period at 10 second(s).
> 2016-08-21 05:49:28,278 INFO  [main] impl.MetricsSystemImpl: HBase metrics 
> system started
> 2016-08-21 05:49:28,649 INFO  [main] security.UserGroupInformation: Login 
> successful for user 
> hbase/hcube1-1n03.eng.hortonworks@hwx.stanleyhotel.com using keytab file 
> /etc/security/keytabs/hbase.service.keytab
> 2016-08-21 05:49:28,655 INFO  [main] hfile.CacheConfig: Allocating 
> LruBlockCache size=791.90 MB, blockSize=64 KB
> 2016-08-21 05:49:28,663 WARN  [main] hfile.CacheConfig: Configuration 
> 'hbase.bucketcache.percentage.in.combinedcache' is no longer respected. See 
> comments 

[jira] [Updated] (AMBARI-18238) HBase Master doesn't start after upgrading from HDP 2.3.6 using Ambari

2016-08-23 Thread Dmytro Grinenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmytro Grinenko updated AMBARI-18238:
-
Status: Patch Available  (was: Open)

> HBase Master doesn't start after upgrading from HDP 2.3.6 using Ambari
> --
>
> Key: AMBARI-18238
> URL: https://issues.apache.org/jira/browse/AMBARI-18238
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk
>Reporter: Dmytro Grinenko
>Priority: Critical
> Fix For: trunk
>
> Attachments: AMBARI-18238.patch
>
>
> After upgrading, the HBase Master doesn't start and throws the error below.
> {code}
> 2016-08-21 05:49:27,298 INFO  [main] util.ServerCommandLine: 
> env:SERVER_GC_OPTS=-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps 
> -Xloggc:/var/log/hbase/gc.log-201608210549
> 2016-08-21 05:49:27,298 INFO  [main] util.ServerCommandLine: 
> env:HADOOP_CONF=/usr/hdp/2.5.0.0-1232/hadoop/conf
> 2016-08-21 05:49:27,298 INFO  [main] util.ServerCommandLine: 
> env:HOSTNAME=hcube1-1n03.eng.hortonworks.com
> 2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
> env:QTDIR=/usr/lib64/qt-3.3
> 2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
> env:NLSPATH=/usr/dt/lib/nls/msg/%L/%N.cat
> 2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
> env:HBASE_THRIFT_OPTS=
> 2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
> env:HBASE_HOME=/usr/hdp/current/hbase-master/bin/..
> 2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
> env:QTLIB=/usr/lib64/qt-3.3/lib
> 2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
> env:HOME=/home/hbase
> 2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
> env:MALLOC_ARENA_MAX=4
> 2016-08-21 05:49:27,301 INFO  [main] util.ServerCommandLine: vmName=Java 
> HotSpot(TM) 64-Bit Server VM, vmVendor=Oracle Corporation, vmVersion=25.60-b23
> 2016-08-21 05:49:27,302 INFO  [main] util.ServerCommandLine: 
> vmInputArguments=[-Dproc_master, -XX:OnOutOfMemoryError=kill -9 %p, 
> -Dhdp.version=2.5.0.0-1232, -XX:+UseConcMarkSweepGC, 
> -XX:ErrorFile=/var/log/hbase/hs_err_pid%p.log, 
> -Djava.security.auth.login.config=/usr/hdp/current/hbase-master/conf/hbase_client_jaas.conf,
>  -Djava.io.tmpdir=/tmp, -verbose:gc, -XX:+PrintGCDetails, 
> -XX:+PrintGCDateStamps, -Xloggc:/var/log/hbase/gc.log-201608210549, 
> -Xmx2048m, 
> -Djava.security.auth.login.config=/usr/hdp/current/hbase-master/conf/hbase_master_jaas.conf,
>  -Dhbase.log.dir=/var/log/hbase, 
> -Dhbase.log.file=hbase-hbase-master-hcube1-1n03.eng.hortonworks.com.log, 
> -Dhbase.home.dir=/usr/hdp/current/hbase-master/bin/.., -Dhbase.id.str=hbase, 
> -Dhbase.root.logger=INFO,RFA, 
> -Djava.library.path=:/usr/hdp/2.5.0.0-1232/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.5.0.0-1232/hadoop/lib/native,
>  -Dhbase.security.logger=INFO,RFAS]
> 2016-08-21 05:49:27,715 INFO  [main] regionserver.RSRpcServices: 
> master/hcube1-1n03.eng.hortonworks.com/172.18.129.3:16000 server-side 
> HConnection retries=350
> 2016-08-21 05:49:27,892 INFO  [main] ipc.SimpleRpcScheduler: Using fifo as 
> user call queue, count=12
> 2016-08-21 05:49:27,896 INFO  [main] ipc.PhoenixRpcSchedulerFactory: Using 
> custom Phoenix Index RPC Handling with index rpc priority 1000 and metadata 
> rpc priority 2000
> 2016-08-21 05:49:27,919 INFO  [main] ipc.RpcServer: 
> master/hcube1-1n03.eng.hortonworks.com/172.18.129.3:16000: started 10 
> reader(s).
> 2016-08-21 05:49:28,027 INFO  [main] impl.MetricsConfig: loaded properties 
> from hadoop-metrics2-hbase.properties
> 2016-08-21 05:49:28,063 INFO  [main] timeline.HadoopTimelineMetricsSink: 
> Initializing Timeline metrics sink.
> 2016-08-21 05:49:28,063 INFO  [main] timeline.HadoopTimelineMetricsSink: 
> Identified hostname = hcube1-1n03.eng.hortonworks.com, serviceName = hbase
> 2016-08-21 05:49:28,174 INFO  [main] timeline.HadoopTimelineMetricsSink: 
> Collector Uri: 
> http://hcube1-2n01.eng.hortonworks.com:6188/ws/v1/timeline/metrics
> 2016-08-21 05:49:28,189 INFO  [main] impl.MetricsSinkAdapter: Sink timeline 
> started
> 2016-08-21 05:49:28,278 INFO  [main] impl.MetricsSystemImpl: Scheduled 
> snapshot period at 10 second(s).
> 2016-08-21 05:49:28,278 INFO  [main] impl.MetricsSystemImpl: HBase metrics 
> system started
> 2016-08-21 05:49:28,649 INFO  [main] security.UserGroupInformation: Login 
> successful for user 
> hbase/hcube1-1n03.eng.hortonworks@hwx.stanleyhotel.com using keytab file 
> /etc/security/keytabs/hbase.service.keytab
> 2016-08-21 05:49:28,655 INFO  [main] hfile.CacheConfig: Allocating 
> LruBlockCache size=791.90 MB, blockSize=64 KB
> 2016-08-21 05:49:28,663 WARN  [main] hfile.CacheConfig: Configuration 
> 'hbase.bucketcache.percentage.in.combinedcache' is no longer respected. See 
> 

[jira] [Created] (AMBARI-18238) HBase Master doesn't start after upgrading from HDP 2.3.6 using Ambari

2016-08-23 Thread Dmytro Grinenko (JIRA)
Dmytro Grinenko created AMBARI-18238:


 Summary: HBase Master doesn't start after upgrading from HDP 2.3.6 
using Ambari
 Key: AMBARI-18238
 URL: https://issues.apache.org/jira/browse/AMBARI-18238
 Project: Ambari
  Issue Type: Bug
  Components: ambari-server
Affects Versions: trunk
Reporter: Dmytro Grinenko
Priority: Critical
 Fix For: trunk


After upgrading, the HBase Master doesn't start and throws the error below.

{code}

2016-08-21 05:49:27,298 INFO  [main] util.ServerCommandLine: 
env:SERVER_GC_OPTS=-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps 
-Xloggc:/var/log/hbase/gc.log-201608210549
2016-08-21 05:49:27,298 INFO  [main] util.ServerCommandLine: 
env:HADOOP_CONF=/usr/hdp/2.5.0.0-1232/hadoop/conf
2016-08-21 05:49:27,298 INFO  [main] util.ServerCommandLine: 
env:HOSTNAME=hcube1-1n03.eng.hortonworks.com
2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
env:QTDIR=/usr/lib64/qt-3.3
2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
env:NLSPATH=/usr/dt/lib/nls/msg/%L/%N.cat
2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
env:HBASE_THRIFT_OPTS=
2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
env:HBASE_HOME=/usr/hdp/current/hbase-master/bin/..
2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
env:QTLIB=/usr/lib64/qt-3.3/lib
2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
env:HOME=/home/hbase
2016-08-21 05:49:27,299 INFO  [main] util.ServerCommandLine: 
env:MALLOC_ARENA_MAX=4
2016-08-21 05:49:27,301 INFO  [main] util.ServerCommandLine: vmName=Java 
HotSpot(TM) 64-Bit Server VM, vmVendor=Oracle Corporation, vmVersion=25.60-b23
2016-08-21 05:49:27,302 INFO  [main] util.ServerCommandLine: 
vmInputArguments=[-Dproc_master, -XX:OnOutOfMemoryError=kill -9 %p, 
-Dhdp.version=2.5.0.0-1232, -XX:+UseConcMarkSweepGC, 
-XX:ErrorFile=/var/log/hbase/hs_err_pid%p.log, 
-Djava.security.auth.login.config=/usr/hdp/current/hbase-master/conf/hbase_client_jaas.conf,
 -Djava.io.tmpdir=/tmp, -verbose:gc, -XX:+PrintGCDetails, 
-XX:+PrintGCDateStamps, -Xloggc:/var/log/hbase/gc.log-201608210549, -Xmx2048m, 
-Djava.security.auth.login.config=/usr/hdp/current/hbase-master/conf/hbase_master_jaas.conf,
 -Dhbase.log.dir=/var/log/hbase, 
-Dhbase.log.file=hbase-hbase-master-hcube1-1n03.eng.hortonworks.com.log, 
-Dhbase.home.dir=/usr/hdp/current/hbase-master/bin/.., -Dhbase.id.str=hbase, 
-Dhbase.root.logger=INFO,RFA, 
-Djava.library.path=:/usr/hdp/2.5.0.0-1232/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.5.0.0-1232/hadoop/lib/native,
 -Dhbase.security.logger=INFO,RFAS]
2016-08-21 05:49:27,715 INFO  [main] regionserver.RSRpcServices: 
master/hcube1-1n03.eng.hortonworks.com/172.18.129.3:16000 server-side 
HConnection retries=350
2016-08-21 05:49:27,892 INFO  [main] ipc.SimpleRpcScheduler: Using fifo as user 
call queue, count=12
2016-08-21 05:49:27,896 INFO  [main] ipc.PhoenixRpcSchedulerFactory: Using 
custom Phoenix Index RPC Handling with index rpc priority 1000 and metadata rpc 
priority 2000
2016-08-21 05:49:27,919 INFO  [main] ipc.RpcServer: 
master/hcube1-1n03.eng.hortonworks.com/172.18.129.3:16000: started 10 reader(s).
2016-08-21 05:49:28,027 INFO  [main] impl.MetricsConfig: loaded properties from 
hadoop-metrics2-hbase.properties
2016-08-21 05:49:28,063 INFO  [main] timeline.HadoopTimelineMetricsSink: 
Initializing Timeline metrics sink.
2016-08-21 05:49:28,063 INFO  [main] timeline.HadoopTimelineMetricsSink: 
Identified hostname = hcube1-1n03.eng.hortonworks.com, serviceName = hbase
2016-08-21 05:49:28,174 INFO  [main] timeline.HadoopTimelineMetricsSink: 
Collector Uri: 
http://hcube1-2n01.eng.hortonworks.com:6188/ws/v1/timeline/metrics
2016-08-21 05:49:28,189 INFO  [main] impl.MetricsSinkAdapter: Sink timeline 
started
2016-08-21 05:49:28,278 INFO  [main] impl.MetricsSystemImpl: Scheduled snapshot 
period at 10 second(s).
2016-08-21 05:49:28,278 INFO  [main] impl.MetricsSystemImpl: HBase metrics 
system started
2016-08-21 05:49:28,649 INFO  [main] security.UserGroupInformation: Login 
successful for user hbase/hcube1-1n03.eng.hortonworks@hwx.stanleyhotel.com 
using keytab file /etc/security/keytabs/hbase.service.keytab
2016-08-21 05:49:28,655 INFO  [main] hfile.CacheConfig: Allocating 
LruBlockCache size=791.90 MB, blockSize=64 KB
2016-08-21 05:49:28,663 WARN  [main] hfile.CacheConfig: Configuration 
'hbase.bucketcache.percentage.in.combinedcache' is no longer respected. See 
comments in http://hbase.apache.org/book.html#_changes_of_note
2016-08-21 05:49:28,674 INFO  [main] util.ByteBufferArray: Allocating buffers 
total=10 GB, sizePerBuffer=4 MB, count=2560, direct=true
2016-08-21 05:49:30,320 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: Failed construction of Master: class 
org.apache.hadoop.hbase.master.HMaster
at 
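
The stack trace above is truncated in this archive, but the last lines before 
the failure are suggestive: ByteBufferArray allocates 10 GB of direct buffers 
for the bucket cache, and HMaster construction fails immediately afterwards. 
One plausible reading (an illustrative assumption, not the content of the 
attached patch) is that an off-heap bucket cache sized for the old stack 
survived the upgrade while the JVM direct-memory ceiling did not keep pace, 
so the allocation fails at startup. A minimal sketch of the configuration 
involved, with illustrative values:

{code}
<!-- hbase-site.xml (sketch): off-heap bucket cache carried over from the old stack -->
<property>
  <name>hbase.bucketcache.ioengine</name>
  <value>offheap</value>
</property>
<property>
  <name>hbase.bucketcache.size</name>
  <!-- in MB; 10240 matches the "total=10 GB" allocation in the log -->
  <value>10240</value>
</property>
{code}

A cache this size would also need the direct-memory limit raised in 
hbase-env.sh, e.g. export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS 
-XX:MaxDirectMemorySize=11g", since the default limit tracks -Xmx (2048m in 
the vmInputArguments above). The WARN about 
hbase.bucketcache.percentage.in.combinedcache points the same way: HBase 1.x 
ignores that property, so sizes expressed through it before the upgrade would 
need to be restated via hbase.bucketcache.size.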

[jira] [Updated] (AMBARI-18231) Error around ATLAS_SERVER version advertisement

2016-08-23 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-18231:

Fix Version/s: (was: trunk)
   2.5.0

> Error around ATLAS_SERVER version advertisement
> ---
>
> Key: AMBARI-18231
> URL: https://issues.apache.org/jira/browse/AMBARI-18231
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
> Fix For: 2.5.0
>
> Attachments: AMBARI-18231.patch
>
>
> Noticed in the stack deploy runs:
> {code}
> 20 Aug 2016 11:20:07,519 ERROR [ambari-heartbeat-processor-0] 
> StackVersionListener:116 - ServiceComponent {0} doesn't advertise version, 
> however ServiceHostComponent ATLAS_SERVER on host ATLAS_SERVER advertised 
> version as host. Skipping version update
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-17159) Upon successful start, log the process id for daemons started

2016-08-23 Thread Myroslav Papirkovskyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-17159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Myroslav Papirkovskyi updated AMBARI-17159:
---
Fix Version/s: (was: 2.5.0)
   trunk

> Upon successful start, log the process id for daemons started
> -
>
> Key: AMBARI-17159
> URL: https://issues.apache.org/jira/browse/AMBARI-17159
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-agent
>Affects Versions: 2.4.0
>Reporter: Myroslav Papirkovskyi
>Assignee: Myroslav Papirkovskyi
>Priority: Critical
> Fix For: trunk
>
> Attachments: AMBARI-17159.patch
>
>
> As part of a successful start command, let's log the process id of the 
> daemon that was started. One option could be to call status() from start() 
> with a parameter that lets the implementation log all details.
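> A minimal sketch of that option, assuming a pid-file based daemon (the 
> log_pid parameter and read_pid helper are illustrative, not the attached 
> patch):
> {code}
> import logging
> import os
>
> logger = logging.getLogger(__name__)
>
> def read_pid(pid_file):
>     """Return the pid recorded in pid_file, or None if missing or stale."""
>     try:
>         with open(pid_file) as f:
>             pid = int(f.read().strip())
>         os.kill(pid, 0)  # signal 0 checks existence without killing
>         return pid
>     except (IOError, ValueError, OSError):
>         return None
>
> def status(pid_file, log_pid=False):
>     """Raise if the daemon is down; optionally log the pid that was found."""
>     pid = read_pid(pid_file)
>     if pid is None:
>         raise Exception("daemon is not running")
>     if log_pid:
>         logger.info("Daemon started successfully with pid %d", pid)
>
> def start(pid_file):
>     # ... launch the daemon and wait for pid_file to appear ...
>     status(pid_file, log_pid=True)  # reuses the liveness check to log the pid
> {code}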



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18154) Ambari Dashboard, Cluster load widget - Incorrect value in Nodes._avg metric

2016-08-23 Thread Sandeep Nemuri (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri updated AMBARI-18154:

Affects Version/s: 2.2.1
   2.2.0

> Ambari Dashboard, Cluster load widget - Incorrect value in Nodes._avg metric
> 
>
> Key: AMBARI-18154
> URL: https://issues.apache.org/jira/browse/AMBARI-18154
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-metrics
>Affects Versions: 2.2.0, 2.2.1, 2.2.2
>Reporter: Sandeep Nemuri
> Attachments: Cluster_Load.png
>
>
> *PROBLEM*:
> Under Ambari Dashboard -> Cluster metrics, the Nodes._avg and CPUs._avg 
> metrics show the same data.
> Below is a screenshot of the Cluster metrics for a cluster with 200+ 
> nodes, where Nodes._avg shows a maximum value of 18.5.
> !Cluster_Load.png|align=center,height=750,width=700!
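> One way to localize the bug is to query the Metrics Collector REST API 
> directly and compare the two series against what the widget draws. A 
> sketch, assuming a collector on port 6188 and that host-level series such 
> as cpu_num and load_one back the CPUs/Nodes lines (the exact metricNames 
> are an assumption):
> {code}
> import json
> import urllib2  # the 2016-era Ambari tooling is Python 2
>
> COLLECTOR = "http://ams-collector.example.com:6188"  # hypothetical host
>
> def fetch(metric_name, app_id="HOST"):
>     """Fetch one metric series from the AMS collector REST API."""
>     url = ("%s/ws/v1/timeline/metrics?metricNames=%s&appId=%s"
>            % (COLLECTOR, metric_name, app_id))
>     return json.load(urllib2.urlopen(url))
>
> # Compare the series presumed to back CPUs._avg and Nodes._avg
> for name in ("cpu_num", "load_one"):
>     for series in fetch(name).get("metrics", []):
>         points = series.get("metrics", {})  # {timestamp: value}
>         peak = max(points.values()) if points else None
>         print "%s max=%s" % (name, peak)
> {code}
> If the two series already peak at the same value at the collector, the bug 
> is upstream in the metric aggregation; if they differ, the dashboard 
> widget's mapping is suspect.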



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

