[jira] [Commented] (AMBARI-24407) Ambari: Add rpm support

2018-08-06 Thread Naresh Bhat (JIRA)


[ 
https://issues.apache.org/jira/browse/AMBARI-24407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16571177#comment-16571177
 ] 

Naresh Bhat commented on AMBARI-24407:
--

Updated the maven plugin version from 2.0.1 to 2.1.4, because the same version is 
already used in the assembly - 
https://github.com/apache/ambari/pull/1967/commits/e9bbac83641ee9f73641be1575436c55ec555c76

> Ambari: Add rpm support
> ---
>
> Key: AMBARI-24407
> URL: https://issues.apache.org/jira/browse/AMBARI-24407
> Project: Ambari
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.7.0
> Environment: The Ambari rpm was built and tested on an AArch64 machine 
> running CentOS Linux release 7.4.1708.
>Reporter: Naresh Bhat
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.7.0
>
> Attachments: 0001-ambari-Add-rpm-support.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The Ambari infra and logsearch packages are missing rpm support. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-17346) Dependent components should be shutdown before stopping hdfs

2018-08-06 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-17346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated AMBARI-17346:

Description: 
Sometimes an admin shuts down HDFS first, then HBase. 


By the time HBase is shut down, no data can be persisted (including metadata). 
This results in a large number of inconsistencies when the HBase cluster is 
brought back up.


Before HDFS is shut down, the components that depend on HDFS should be shut 
down first.

  was:
Sometimes admin shuts down hdfs first, then hbase. 

By the time hbase is shutdown, no data can be persisted (including metadata). 
This results in large number of inconsistencies when hbase cluster is brought 
back up.


Before hdfs is shutdown, the components dependent on hdfs should be shutdown 
first.


> Dependent components should be shutdown before stopping hdfs
> 
>
> Key: AMBARI-17346
> URL: https://issues.apache.org/jira/browse/AMBARI-17346
> Project: Ambari
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Major
>
> Sometimes an admin shuts down HDFS first, then HBase. 
> By the time HBase is shut down, no data can be persisted (including metadata). 
> This results in a large number of inconsistencies when the HBase cluster is 
> brought back up.
> Before HDFS is shut down, the components that depend on HDFS should be shut 
> down first.
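
For illustration only, here is a minimal sketch (not Ambari code, with a 
hypothetical dependency map) of deriving such a stop order, so that every 
dependent service is stopped before the service it depends on:

{code}
# Minimal sketch, not Ambari code: derive a stop order from a hypothetical
# dependency map so that every dependent is stopped before its dependency.
DEPENDS_ON = {
    'hbase': ['hdfs'],
    'hive': ['hdfs'],
    'hdfs': [],
}

def stop_order(services, depends_on):
    order, visited = [], set()

    def visit(svc):
        if svc in visited:
            return
        visited.add(svc)
        for other, deps in depends_on.items():
            if svc in deps:
                visit(other)   # stop everything that depends on svc first
        order.append(svc)

    for svc in services:
        visit(svc)
    return order

print(stop_order(['hdfs', 'hbase', 'hive'], DEPENDS_ON))
# ['hbase', 'hive', 'hdfs'] - hdfs goes down last
{code}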



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (AMBARI-18952) Register BackupObserver and BackupHFileCleaner

2018-08-06 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/AMBARI-18952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15884466#comment-15884466
 ] 

Ted Yu edited comment on AMBARI-18952 at 8/7/18 4:04 AM:
-

Note:

There is a cost when BackupObserver is registered: at the end of each bulk 
load, per region, it polls the backup table to check whether the underlying 
table has gone through a full backup.



was (Author: yuzhih...@gmail.com):
Note:
There is cost when BackupObserver is registered - it would poll backup table 
(at the end of bulk load per region) for whether the underlying table has gone 
thru full backup .


> Register BackupObserver and BackupHFileCleaner
> --
>
> Key: AMBARI-18952
> URL: https://issues.apache.org/jira/browse/AMBARI-18952
> Project: Ambari
>  Issue Type: Improvement
>Reporter: Ted Yu
>Priority: Major
>
> Over in HBASE-14417, two new classes were added.
> org.apache.hadoop.hbase.backup.BackupHFileCleaner should be registered 
> through hbase.master.hfilecleaner.plugins. It is responsible for keeping 
> bulk-loaded hfiles so that incremental backup can pick them up.
> org.apache.hadoop.hbase.backup.BackupObserver should be registered through 
> hbase.coprocessor.region.classes. It is notified when a bulk load completes 
> and writes records into the hbase:backup table.
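
Both registrations above are comma-separated class-list properties in 
hbase-site, so wiring them in amounts to an idempotent append to the property 
value; a hedged sketch (the helper below is hypothetical, not HBase or Ambari 
code):

{code}
# Hedged illustration; this helper is hypothetical, not HBase or Ambari code.
# Both settings above are comma-separated class lists in hbase-site, so
# registration amounts to an idempotent append to the property value.
def append_class(current_value, clazz):
    classes = [c.strip() for c in (current_value or '').split(',') if c.strip()]
    if clazz not in classes:
        classes.append(clazz)
    return ','.join(classes)

print(append_class('org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner',
                   'org.apache.hadoop.hbase.backup.BackupHFileCleaner'))
{code}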



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-20945) Configuration parameter 'hdfs-site' was not found error when AMS rootdir is on s3a

2018-08-06 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-20945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated AMBARI-20945:

Description: 
When I specify the AMS rootdir to be on s3a and restart AMS, I get the 
following error:
{code}
Traceback (most recent call last):
  File 
"/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_collector.py",
 line 86, in <module>
AmsCollector().execute()
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
 line 315, in execute
method(env)
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
 line 817, in restart
self.start(env)
  File 
"/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_collector.py",
 line 48, in start
self.configure(env, action = 'start') # for security
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
 line 118, in locking_configure
original_configure(obj, *args, **kw)
  File 
"/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_collector.py",
 line 43, in configure
hbase('master', action)
  File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", 
line 89, in thunk
return fn(*args, **kwargs)
  File 
"/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/hbase.py",
 line 222, in hbase
dfs_type=params.dfs_type
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
line 155, in __init__
self.env.run()
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
line 160, in run
self.run_action(resource, action)
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
line 119, in run_action
provider = provider_class(resource)
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
 line 503, in __init__
self.assert_parameter_is_set('hdfs_site')
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
 line 575, in assert_parameter_is_set
if not getattr(self.resource, parameter_name):
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/script/config_dictionary.py",
 line 73, in __getattr__
raise Fail("Configuration parameter '" + self.name + "' was not found in 
configurations dictionary!")
resource_management.core.exceptions.Fail: Configuration parameter 'hdfs-site' 
was not found in configurations dictionary!
{code}
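
The last frames explain the failure: config_dictionary.py raises Fail on 
attribute access when a config type was never attached to the command. A 
simplified sketch of that behavior, assuming the shape suggested by the 
traceback:

{code}
# Simplified sketch of the behavior in config_dictionary.py seen in the last
# frames: when a config type was never attached to the command (here
# 'hdfs-site', because the rootdir is on s3a), any attribute access raises
# Fail, which is what assert_parameter_is_set('hdfs_site') trips over.
class Fail(Exception):
    pass

class UnknownConfiguration(object):
    def __init__(self, name):
        self.name = name

    def __getattr__(self, parameter_name):
        raise Fail("Configuration parameter '" + self.name +
                   "' was not found in configurations dictionary!")

hdfs_site = UnknownConfiguration('hdfs-site')
try:
    hdfs_site.dfs_nameservices
except Fail as e:
    print(e)
{code}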

  was:
When I specify AMS rootdir to be on s3a and restart AMS, I would get the 
following error:

{code}
Traceback (most recent call last):
  File 
"/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_collector.py",
 line 86, in <module>
AmsCollector().execute()
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
 line 315, in execute
method(env)
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
 line 817, in restart
self.start(env)
  File 
"/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_collector.py",
 line 48, in start
self.configure(env, action = 'start') # for security
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
 line 118, in locking_configure
original_configure(obj, *args, **kw)
  File 
"/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_collector.py",
 line 43, in configure
hbase('master', action)
  File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", 
line 89, in thunk
return fn(*args, **kwargs)
  File 
"/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/hbase.py",
 line 222, in hbase
dfs_type=params.dfs_type
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
line 155, in __init__
self.env.run()
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
line 160, in run
self.run_action(resource, action)
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
line 119, in run_action
provider = provider_class(resource)
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
 line 503, in __init__
self.assert_parameter_is_set('hdfs_site')
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
 line 575, in assert_parameter_is_set
if not getattr(self.resource, parameter_name):
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/script/config_dictionary.py",
 line 73, in __getattr__
raise Fail("Configuration parameter '" + self.name + "' was not fo

[jira] [Commented] (AMBARI-24409) Infra Solr migration: Restore collection fails after EU on Custom Users + WE cluster. Error - Permission denied: u'/tmp/ranger/restore_core_pairs.json'

2018-08-06 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/AMBARI-24409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16570855#comment-16570855
 ] 

Hudson commented on AMBARI-24409:
-

SUCCESS: Integrated in Jenkins build Ambari-branch-2.7 #111 (See 
[https://builds.apache.org/job/Ambari-branch-2.7/111/])
AMBARI-24409. Infra Solr migration: Restore collection fails after EU on 
(github: 
[https://gitbox.apache.org/repos/asf?p=ambari.git&a=commit&h=e95d537b5339ea871d3860ddaddf9ff485c80f08])
* (edit) 
ambari-server/src/main/resources/common-services/AMBARI_INFRA_SOLR/0.1.0/package/scripts/command_commons.py


> Infra Solr migration: Restore collection fails after EU on Custom Users + WE 
> cluster. Error - Permission denied: u'/tmp/ranger/restore_core_pairs.json'
> ---
>
> Key: AMBARI-24409
> URL: https://issues.apache.org/jira/browse/AMBARI-24409
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-infra, ambari-server
>Affects Versions: 2.7.0
>Reporter: Olivér Szabó
>Assignee: Olivér Szabó
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.7.1
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> {code:java}
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/common-services/AMBARI_INFRA_SOLR/0.1.0/package/scripts/infra_solr.py",
>  line 171, in <module>
> InfraSolr().execute()
>   File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", 
> line 353, in execute
> method(env)
>   File 
> "/var/lib/ambari-agent/cache/common-services/AMBARI_INFRA_SOLR/0.1.0/package/scripts/infra_solr.py",
>  line 146, in restore
> restore_collection(env)
>   File 
> "/var/lib/ambari-agent/cache/common-services/AMBARI_INFRA_SOLR/0.1.0/package/scripts/collection.py",
>  line 103, in restore_collection
> core_pairs = command_commons.create_core_pairs(original_core_host_pairs, 
> new_core_host_pairs)
>   File 
> "/var/lib/ambari-agent/cache/common-services/AMBARI_INFRA_SOLR/0.1.0/package/scripts/command_commons.py",
>  line 289, in create_core_pairs
> with open(format("{index_location}/restore_core_pairs.json"), 'w') as 
> outfile:
> IOError: [Errno 13] Permission denied: u'/tmp/ranger/restore_core_pairs.json
> {code}
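
As a hedged sketch (the actual fix in command_commons.py may differ), one 
defensive variant of the failing write probes the target directory and falls 
back to a user-writable location:

{code}
# Hedged sketch only; the actual fix committed to command_commons.py may
# differ. The failure above is an EACCES on open() under a custom service
# user, so a defensive variant probes the target directory and falls back
# to a location the current user can write.
import errno
import os
import tempfile

def writable_target(index_location, filename='restore_core_pairs.json'):
    path = os.path.join(index_location, filename)
    try:
        if not os.path.isdir(index_location):
            os.makedirs(index_location)
        with open(path, 'w'):
            pass  # probe: can we create/truncate the file here?
        return path
    except (IOError, OSError) as e:
        if e.errno not in (errno.EACCES, errno.EPERM):
            raise
        return os.path.join(tempfile.mkdtemp(), filename)
{code}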



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (AMBARI-24409) Infra Solr migration: Restore collection fails after EU on Custom Users + WE cluster. Error - Permission denied: u'/tmp/ranger/restore_core_pairs.json'

2018-08-06 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/AMBARI-24409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16570811#comment-16570811
 ] 

Hudson commented on AMBARI-24409:
-

SUCCESS: Integrated in Jenkins build Ambari-trunk-Commit #9750 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/9750/])
AMBARI-24409. Infra Solr migration: Restore collection fails after EU on 
(github: 
[https://gitbox.apache.org/repos/asf?p=ambari.git&a=commit&h=d3b47d0fe8bd0ecfabba1b472d99def263310fdf])
* (edit) 
ambari-server/src/main/resources/common-services/AMBARI_INFRA_SOLR/0.1.0/package/scripts/command_commons.py


> Infra Solr migration: Restore collection fails after EU on Custom Users + WE 
> cluster. Error - Permission denied: u'/tmp/ranger/restore_core_pairs.json'
> ---
>
> Key: AMBARI-24409
> URL: https://issues.apache.org/jira/browse/AMBARI-24409
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-infra, ambari-server
>Affects Versions: 2.7.0
>Reporter: Olivér Szabó
>Assignee: Olivér Szabó
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.7.1
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> {code:java}
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/common-services/AMBARI_INFRA_SOLR/0.1.0/package/scripts/infra_solr.py",
>  line 171, in <module>
> InfraSolr().execute()
>   File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", 
> line 353, in execute
> method(env)
>   File 
> "/var/lib/ambari-agent/cache/common-services/AMBARI_INFRA_SOLR/0.1.0/package/scripts/infra_solr.py",
>  line 146, in restore
> restore_collection(env)
>   File 
> "/var/lib/ambari-agent/cache/common-services/AMBARI_INFRA_SOLR/0.1.0/package/scripts/collection.py",
>  line 103, in restore_collection
> core_pairs = command_commons.create_core_pairs(original_core_host_pairs, 
> new_core_host_pairs)
>   File 
> "/var/lib/ambari-agent/cache/common-services/AMBARI_INFRA_SOLR/0.1.0/package/scripts/command_commons.py",
>  line 289, in create_core_pairs
> with open(format("{index_location}/restore_core_pairs.json"), 'w') as 
> outfile:
> IOError: [Errno 13] Permission denied: u'/tmp/ranger/restore_core_pairs.json
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (AMBARI-24410) Remove conf-select Tool From Ambari Framework

2018-08-06 Thread Jonathan Hurley (JIRA)
Jonathan Hurley created AMBARI-24410:


 Summary: Remove conf-select Tool From Ambari Framework
 Key: AMBARI-24410
 URL: https://issues.apache.org/jira/browse/AMBARI-24410
 Project: Ambari
  Issue Type: Task
Affects Versions: 3.0.0
Reporter: Jonathan Hurley
Assignee: Jonathan Hurley
 Fix For: 3.0.0


Mpacks do not provide a replacement for {{conf-select}}, a utility which was 
previously used to provide parallel configurations so that two components (such 
as NameNode and DataNode) could use different configurations during an upgrade. 
It ensured that breaking configuration changes stayed isolated in their 
respective directories:

{noformat}
/usr/hdp/2.5.0.0/zookeeper/conf -> /etc/zookeeper/2.5.0.0/0
/usr/hdp/2.6.0.0/zookeeper/conf -> /etc/zookeeper/2.6.0.0/0
/usr/hdp/current/zookeeper-server -> /usr/hdp/2.5.0.0/zookeeper
{noformat}
 
When {{hdp-select}} was used to change the {{current}} symlink, the {{conf}} 
directories would “automatically” switch over. This allowed a complete 
separation of configurations and the ability to have parallel configurations on 
disk. 
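
To make the flip concrete, a minimal sketch (not hdp-select itself), using the 
example paths above:

{code}
# Minimal sketch, not hdp-select itself: flip the 'current' symlink so the
# versioned binaries (and thus their conf symlink) switch in one step.
import os

def select_version(link, target):
    tmp = link + '.tmp'
    if os.path.lexists(tmp):
        os.remove(tmp)          # clear a stale link from a failed run
    os.symlink(target, tmp)     # build the new link next to the old one
    os.rename(tmp, link)        # atomic swap: readers never see a gap

# e.g. select_version('/usr/hdp/current/zookeeper-server',
#                     '/usr/hdp/2.6.0.0/zookeeper')
{code}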
 
If Ambari is managing your configurations, then we know what to write and when 
to write it. Even in cases where breaking configuration changes were made, 
since Ambari kept both old and new configurations in its database, a downgrade 
would always write out the correct configurations for the version being 
downgraded to.
 
Let’s fast-forward to Ambari 3.0 and mpacks … The current structure does not 
allow for multiple configurations for a given service inside a service group:
 
{noformat}
instances
└── hdpcore
    ├── default -> /usr/hwx/instances/hdpcore/HDPCORE
    └── HDPCORE
        └── default
            ├── zookeeper
            │   └── zookeeper_server
            │       └── ZOOKEEPER
            │           ├── conf
            │           │   ├── configuration.xsl
            │           │   ├── log4j.properties
            │           │   ├── zoo.cfg
            │           │   ├── zookeeper-env.sh
            │           │   └── zoo_sample.cfg
{noformat}
 
Instead of symlinks scoped by a version, each service instance has a regular 
conf directory. A few points here:
- Since configurations are now at the component instance level, DataNode and 
NameNode won't share configs during an upgrade.
- For Ambari-managed configurations, this should be fine since we keep track of 
old and new versions. So, even on a downgrade, we'd know to write the correct 
values here.
- If Ambari is not managing a configuration file, say foo-site.xml, then:
-- We don't have to worry about copying it from one versioned directory to 
another. This seeding process is necessary in Ambari 2.x, but wouldn't be 
needed here.
-- If there was a change to the structure of a file which Ambari does not 
manage, then we have a problem on downgrade as Ambari wouldn’t know to replace 
anything. I suppose it’s the same issue on upgrade too since Ambari wouldn’t 
know to change the file either.





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (AMBARI-24397) Allow PATCH VDFs to Specify Services Which Are Not Installed in the Cluster

2018-08-06 Thread Jonathan Hurley (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-24397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hurley resolved AMBARI-24397.
--
Resolution: Fixed

> Allow PATCH VDFs to Specify Services Which Are Not Installed in the Cluster
> ---
>
> Key: AMBARI-24397
> URL: https://issues.apache.org/jira/browse/AMBARI-24397
> Project: Ambari
>  Issue Type: Task
>Affects Versions: 2.6.0
>Reporter: Jonathan Hurley
>Assignee: Jonathan Hurley
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 2.7.1
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> AMBARI-21832 limited the flexibility of a PATCH VDF by requiring that the 
> list of {{available-services}} match what is installed in the cluster. For 
> example, if a cluster contained ZooKeeper and Storm, a patch VDF which 
> specified Storm and Accumulo could not be registered.
> Ambari should allow registration of a VDF without restricting it to the 
> services which are currently installed in the cluster. In the above-mentioned 
> case, one concern is what happens if Accumulo is added after the 
> patch is applied. In that case, Ambari should add Accumulo from the parent 
> {{STANDARD}} repo. 
> When a patch is reverted, Ambari must now check to ensure that a service 
> included in that patch wasn't added after the patch was applied. Consider 
> this scenario:
> - Install a ZK only cluster
> - Register and patch using a VDF with ZK, STORM
> - Add Storm
> - Revert the patch
> - Re-apply the patch
> When the patch is re-applied, the hosts will not have the new storm packages 
> installed since the patch repository was distributed before Storm was a part 
> of the cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (AMBARI-24408) Update org.eclipse.jetty version to 9.4.11.v20180605 to avoid CVE issues

2018-08-06 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/AMBARI-24408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16570621#comment-16570621
 ] 

Hudson commented on AMBARI-24408:
-

SUCCESS: Integrated in Jenkins build Ambari-trunk-Commit #9749 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/9749/])
[AMBARI-24408] Update org.eclipse.jetty version to 9.4.11.v20180605 to (rlevas: 
[https://gitbox.apache.org/repos/asf?p=ambari.git&a=commit&h=836ea4c9135b138fda3b348c2e3dbbfa9fbd3d4c])
* (edit) ambari-server/pom.xml
* (edit) ambari-project/pom.xml


> Update org.eclipse.jetty version to 9.4.11.v20180605 to avoid CVE issues
> 
>
> Key: AMBARI-24408
> URL: https://issues.apache.org/jira/browse/AMBARI-24408
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.7.1
>Reporter: Robert Levas
>Assignee: Robert Levas
>Priority: Critical
>  Labels: cleanup, pull-request-available
> Fix For: 2.7.1
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Update org.eclipse.jetty version to 9.4.11.v20180605 to avoid CVE issues.
> See https://dev.eclipse.org/mhonarc/lists/jetty-announce/msg00123.html for 
> reported issues. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (AMBARI-24408) Update org.eclipse.jetty version to 9.4.11.v20180605 to avoid CVE issues

2018-08-06 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/AMBARI-24408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16570603#comment-16570603
 ] 

Hudson commented on AMBARI-24408:
-

SUCCESS: Integrated in Jenkins build Ambari-branch-2.7 #110 (See 
[https://builds.apache.org/job/Ambari-branch-2.7/110/])
[AMBARI-24408] Update org.eclipse.jetty version to 9.4.11.v20180605 to (rlevas: 
[https://gitbox.apache.org/repos/asf?p=ambari.git&a=commit&h=0d56d8b8f72c0a59251329de6f9d31d7e8b47cd3])
* (edit) ambari-server/pom.xml
* (edit) ambari-project/pom.xml


> Update org.eclipse.jetty version to 9.4.11.v20180605 to avoid CVE issues
> 
>
> Key: AMBARI-24408
> URL: https://issues.apache.org/jira/browse/AMBARI-24408
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.7.1
>Reporter: Robert Levas
>Assignee: Robert Levas
>Priority: Critical
>  Labels: cleanup, pull-request-available
> Fix For: 2.7.1
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Update org.eclipse.jetty version to 9.4.11.v20180605 to avoid CVE issues.
> See https://dev.eclipse.org/mhonarc/lists/jetty-announce/msg00123.html for 
> reported issues. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (AMBARI-24322) Log Search / Ambari upgrade: db config consistency check has warnings (*-logsearch-conf configs)

2018-08-06 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/AMBARI-24322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16570537#comment-16570537
 ] 

Hudson commented on AMBARI-24322:
-

SUCCESS: Integrated in Jenkins build Ambari-trunk-Commit #9748 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/9748/])
AMBARI-24322. Copy db config consistency fix to UpgradeCatalog271 as (github: 
[https://gitbox.apache.org/repos/asf?p=ambari.git&a=commit&h=c952a9090d1cbc3e18a6608f7e4db7791c7d83e8])
* (edit) 
ambari-server/src/test/java/org/apache/ambari/server/upgrade/UpgradeCatalog271Test.java
* (edit) 
ambari-server/src/main/java/org/apache/ambari/server/upgrade/UpgradeCatalog271.java


> Log Search / Ambari upgrade: db config consistency check has warnings 
> (*-logsearch-conf configs)
> ---
>
> Key: AMBARI-24322
> URL: https://issues.apache.org/jira/browse/AMBARI-24322
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.7.0
>Reporter: Olivér Szabó
>Assignee: Olivér Szabó
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.7.1
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> {code:java}
> 2018-07-11 15:09:54,135  WARN - You have config(s): 
> ams-logsearch-conf-version1,zookeeper-logsearch-conf-version1,hbase-logsearch-conf-version1,infra-logsearch-conf-version1,hdfs-logsearch-conf-version1,mapred-logsearch-conf-version1530850353441,logfeeder-custom-logsearch-conf-version1,atlas-logsearch-conf-version1,kafka-logsearch-conf-version1,yarn-logsearch-conf-version1530850353441
>  that is(are) not mapped (in serviceconfigmapping table) to any service!
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (AMBARI-24322) Log Search / Ambari upgrade: db config consistency check has warnings (*-logsearch-conf configs)

2018-08-06 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/AMBARI-24322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16570439#comment-16570439
 ] 

Hudson commented on AMBARI-24322:
-

SUCCESS: Integrated in Jenkins build Ambari-branch-2.7 #109 (See 
[https://builds.apache.org/job/Ambari-branch-2.7/109/])
AMBARI-24322. Copy db config consistency fix to UpgradeCatalog271 as (github: 
[https://gitbox.apache.org/repos/asf?p=ambari.git&a=commit&h=b1c8eb5a913a011f69afa2121417a0d1971033c1])
* (edit) 
ambari-server/src/test/java/org/apache/ambari/server/upgrade/UpgradeCatalog271Test.java
* (edit) 
ambari-server/src/main/java/org/apache/ambari/server/upgrade/UpgradeCatalog271.java


> Log Search / Ambari upgrade: db config consistency check has warnings 
> (*-logsearch-conf configs)
> ---
>
> Key: AMBARI-24322
> URL: https://issues.apache.org/jira/browse/AMBARI-24322
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.7.0
>Reporter: Olivér Szabó
>Assignee: Olivér Szabó
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.7.1
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> {code:java}
> 2018-07-11 15:09:54,135  WARN - You have config(s): 
> ams-logsearch-conf-version1,zookeeper-logsearch-conf-version1,hbase-logsearch-conf-version1,infra-logsearch-conf-version1,hdfs-logsearch-conf-version1,mapred-logsearch-conf-version1530850353441,logfeeder-custom-logsearch-conf-version1,atlas-logsearch-conf-version1,kafka-logsearch-conf-version1,yarn-logsearch-conf-version1530850353441
>  that is(are) not mapped (in serviceconfigmapping table) to any service!
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-24409) Infra Solr migration: Restore collection fails after EU on Custom Users + WE cluster. Error - Permission denied: u'/tmp/ranger/restore_core_pairs.json'

2018-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-24409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated AMBARI-24409:

Labels: pull-request-available  (was: )

> Infra Solr migration: Restore collection fails after EU on Custom Users + WE 
> cluster. Error - Permission denied: u'/tmp/ranger/restore_core_pairs.json'
> ---
>
> Key: AMBARI-24409
> URL: https://issues.apache.org/jira/browse/AMBARI-24409
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-infra, ambari-server
>Affects Versions: 2.7.0
>Reporter: Olivér Szabó
>Assignee: Olivér Szabó
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.7.1
>
>
> {code:java}
> Traceback (most recent call last):
>   File 
> "/var/lib/ambari-agent/cache/common-services/AMBARI_INFRA_SOLR/0.1.0/package/scripts/infra_solr.py",
>  line 171, in <module>
> InfraSolr().execute()
>   File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", 
> line 353, in execute
> method(env)
>   File 
> "/var/lib/ambari-agent/cache/common-services/AMBARI_INFRA_SOLR/0.1.0/package/scripts/infra_solr.py",
>  line 146, in restore
> restore_collection(env)
>   File 
> "/var/lib/ambari-agent/cache/common-services/AMBARI_INFRA_SOLR/0.1.0/package/scripts/collection.py",
>  line 103, in restore_collection
> core_pairs = command_commons.create_core_pairs(original_core_host_pairs, 
> new_core_host_pairs)
>   File 
> "/var/lib/ambari-agent/cache/common-services/AMBARI_INFRA_SOLR/0.1.0/package/scripts/command_commons.py",
>  line 289, in create_core_pairs
> with open(format("{index_location}/restore_core_pairs.json"), 'w') as 
> outfile:
> IOError: [Errno 13] Permission denied: u'/tmp/ranger/restore_core_pairs.json
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (AMBARI-24409) Infra Solr migration: Restore collection fails after EU on Custom Users + WE cluster. Error - Permission denied: u'/tmp/ranger/restore_core_pairs.json'

2018-08-06 Thread JIRA
Olivér Szabó created AMBARI-24409:
-

 Summary: Infra Solr migration: Restore collection fails after EU 
on Custom Users + WE cluster. Error - Permission denied: 
u'/tmp/ranger/restore_core_pairs.json'
 Key: AMBARI-24409
 URL: https://issues.apache.org/jira/browse/AMBARI-24409
 Project: Ambari
  Issue Type: Bug
  Components: ambari-infra, ambari-server
Affects Versions: 2.7.0
Reporter: Olivér Szabó
Assignee: Olivér Szabó
 Fix For: 2.7.1


{code:java}
Traceback (most recent call last):
  File 
"/var/lib/ambari-agent/cache/common-services/AMBARI_INFRA_SOLR/0.1.0/package/scripts/infra_solr.py",
 line 171, in <module>
InfraSolr().execute()
  File 
"/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", 
line 353, in execute
method(env)
  File 
"/var/lib/ambari-agent/cache/common-services/AMBARI_INFRA_SOLR/0.1.0/package/scripts/infra_solr.py",
 line 146, in restore
restore_collection(env)
  File 
"/var/lib/ambari-agent/cache/common-services/AMBARI_INFRA_SOLR/0.1.0/package/scripts/collection.py",
 line 103, in restore_collection
core_pairs = command_commons.create_core_pairs(original_core_host_pairs, 
new_core_host_pairs)
  File 
"/var/lib/ambari-agent/cache/common-services/AMBARI_INFRA_SOLR/0.1.0/package/scripts/command_commons.py",
 line 289, in create_core_pairs
with open(format("{index_location}/restore_core_pairs.json"), 'w') as 
outfile:
IOError: [Errno 13] Permission denied: u'/tmp/ranger/restore_core_pairs.json
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-24408) Update org.eclipse.jetty version to 9.4.11.v20180605 to avoid CVE issues

2018-08-06 Thread Robert Levas (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-24408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Levas updated AMBARI-24408:
--
Status: Patch Available  (was: In Progress)

> Update org.eclipse.jetty version to 9.4.11.v20180605 to avoid CVE issues
> 
>
> Key: AMBARI-24408
> URL: https://issues.apache.org/jira/browse/AMBARI-24408
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.7.1
>Reporter: Robert Levas
>Assignee: Robert Levas
>Priority: Critical
>  Labels: cleanup, pull-request-available
> Fix For: 2.7.1
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Update org.eclipse.jetty version to 9.4.11.v20180605 to avoid CVE issues.
> See https://dev.eclipse.org/mhonarc/lists/jetty-announce/msg00123.html for 
> reported issues. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-24408) Update org.eclipse.jetty version to 9.4.11.v20180605 to avoid CVE issues

2018-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-24408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated AMBARI-24408:

Labels: cleanup pull-request-available  (was: cleanup)

> Update org.eclipse.jetty version to 9.4.11.v20180605 to avoid CVE issues
> 
>
> Key: AMBARI-24408
> URL: https://issues.apache.org/jira/browse/AMBARI-24408
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.7.1
>Reporter: Robert Levas
>Assignee: Robert Levas
>Priority: Critical
>  Labels: cleanup, pull-request-available
> Fix For: 2.7.1
>
>
> Update org.eclipse.jetty version to 9.4.11.v20180605 to avoid CVE issues.
> See https://dev.eclipse.org/mhonarc/lists/jetty-announce/msg00123.html for 
> reported issues. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (AMBARI-24408) Update org.eclipse.jetty version to 9.4.11.v20180605 to avoid CVE issues

2018-08-06 Thread Robert Levas (JIRA)
Robert Levas created AMBARI-24408:
-

 Summary: Update org.eclipse.jetty version to 9.4.11.v20180605 to 
avoid CVE issues
 Key: AMBARI-24408
 URL: https://issues.apache.org/jira/browse/AMBARI-24408
 Project: Ambari
  Issue Type: Bug
  Components: ambari-server
Affects Versions: 2.7.1
Reporter: Robert Levas
Assignee: Robert Levas
 Fix For: 2.7.1


Update org.eclipse.jetty version to 9.4.11.v20180605 to avoid CVE issues.

See https://dev.eclipse.org/mhonarc/lists/jetty-announce/msg00123.html for 
reported issues. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-24375) Adding services when Kerberos is enabled incorrectly changes unrelated service configurations

2018-08-06 Thread Robert Levas (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-24375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Levas updated AMBARI-24375:
--
Status: Patch Available  (was: In Progress)

> Adding services when Kerberos is enabled incorrectly changes unrelated 
> service configurations
> -
>
> Key: AMBARI-24375
> URL: https://issues.apache.org/jira/browse/AMBARI-24375
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.7.0
>Reporter: Robert Levas
>Assignee: Robert Levas
>Priority: Critical
>  Labels: kerberos, pull-request-available, regresion
> Fix For: 2.7.1
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Adding services when Kerberos is enabled incorrectly changes unrelated 
> service configurations.  For example, 
> {{kerberos-env/service_check_principal_name}} is changed from 
> "{{$\{cluster_name|toLower()\}-$\{short_date\}}}" to a concrete value like 
> "{{c1-072818}}".
> This is a regression introduced by the resolution of 
> [AMBARI-23292|https://issues.apache.org/jira/browse/AMBARI-23292].
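
For illustration (the substitution function below is hypothetical, not 
Ambari's Kerberos variable processor), the bug class is evaluating the 
reference and persisting the result:

{code}
# Illustration only; this is not Ambari's Kerberos variable processor. The
# bug class described above is persisting the *evaluated* value of a
# variable reference instead of the reference itself.
import datetime
import re

def evaluate(template, variables):
    def repl(match):
        name, _, func = match.group(1).partition('|')
        value = variables[name]
        if func == 'toLower()':
            value = value.lower()
        return value
    return re.sub(r'\$\{([^}]+)\}', repl, template)

template = '${cluster_name|toLower()}-${short_date}'
variables = {'cluster_name': 'C1',
             'short_date': datetime.datetime.now().strftime('%m%d%y')}

print(evaluate(template, variables))  # e.g. 'c1-072818', for runtime use only
# The stored kerberos-env value must remain `template`, never the evaluated
# string, or the principal name is silently pinned to one date.
{code}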



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (AMBARI-24404) Button label appears incorrect during service deletion

2018-08-06 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/AMBARI-24404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16570309#comment-16570309
 ] 

Hudson commented on AMBARI-24404:
-

SUCCESS: Integrated in Jenkins build Ambari-trunk-Commit #9743 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/9743/])
AMBARI-24404. Button label appears incorrect during service deletion 
(aleksandrkovalenko: 
[https://gitbox.apache.org/repos/asf?p=ambari.git&a=commit&h=0b40da3954893592da02f6e421b5caa873f40212])
* (edit) ambari-web/app/controllers/main/service/item.js


> Button label appears incorrect during service deletion
> --
>
> Key: AMBARI-24404
> URL: https://issues.apache.org/jira/browse/AMBARI-24404
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.7.0
>Reporter: Aleksandr Kovalenko
>Assignee: Aleksandr Kovalenko
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 3.0.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Try to delete the Ranger service from a cluster; upon deletion, stack advisor 
> recommends changes to configs.
> This is fine; however, the button at the bottom of the dialog says 'DELETE'.
> It should instead say something like 'Proceed', and once the user clicks 
> Proceed, the delete pop-up should appear.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (AMBARI-24399) Components start failing with 'Holder DFSClient_NONMAPREDUCE does not have any open files' while adding Namespace

2018-08-06 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/AMBARI-24399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16570310#comment-16570310
 ] 

Hudson commented on AMBARI-24399:
-

SUCCESS: Integrated in Jenkins build Ambari-trunk-Commit #9743 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/9743/])
AMBARI-24399. Components start failing with 'Holder (aonishuk: 
[https://gitbox.apache.org/repos/asf?p=ambari.git&a=commit&h=04c53e35cf05de6e055604a26f57a1f9bf6683f3])
* (edit) 
ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py
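
The commit touches hdfs_resource.py. As a hedged sketch only (the committed 
change may differ), a generic mitigation for a transient WebHDFS failure like 
the lease error quoted below is a bounded retry:

{code}
# Hedged sketch only; the committed change to hdfs_resource.py may differ.
# A generic mitigation for a transient WebHDFS failure, such as the lease
# error quoted below, is a bounded retry around the call.
import time

def run_with_retries(command, tries=3, delay_sec=5,
                     transient_marker='does not have any open files'):
    last_error = None
    for _ in range(tries):
        try:
            return command()
        except Exception as e:  # in Ambari this would be a WebHDFSCallException
            last_error = e
            if transient_marker not in str(e):
                raise           # not the transient lease error: fail fast
            time.sleep(delay_sec)
    raise last_error
{code}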


> Components start failing with 'Holder DFSClient_NONMAPREDUCE does not have 
> any open files' while adding Namespace 
> --
>
> Key: AMBARI-24399
> URL: https://issues.apache.org/jira/browse/AMBARI-24399
> Project: Ambari
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Vivek Rathod
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.7.1
>
> Attachments: AMBARI-24399.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> STR: 
> Add a namespace from the UI. In the last step, restart the required services; 
> the hiveserver2 restart fails, although on retrying it comes back up.
> {code}
> Traceback (most recent call last):
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", 
> line 982, in restart
>  self.status(env)
>  File 
> "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_server.py",
>  line 79, in status
>  check_process_status(status_params.hive_pid)
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/check_process_status.py",
>  line 43, in check_process_status
>  raise ComponentIsNotRunning()
> ComponentIsNotRunning
> The above exception was the cause of the following exception:
> Traceback (most recent call last):
>  File 
> "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_server.py",
>  line 137, in <module>
>  HiveServer().execute()
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", 
> line 353, in execute
>  method(env)
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", 
> line 993, in restart
>  self.start(env, upgrade_type=upgrade_type)
>  File 
> "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_server.py",
>  line 50, in start
>  self.configure(env) # FOR SECURITY
>  File 
> "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_server.py",
>  line 45, in configure
>  hive(name='hiveserver2')
>  File 
> "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive.py",
>  line 119, in hive
>  setup_hiveserver2()
>  File 
> "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive.py",
>  line 167, in setup_hiveserver2
>  skip=params.sysprep_skip_copy_tarballs_hdfs)
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/copy_tarball.py",
>  line 516, in copy_to_hdfs
>  replace_existing_files=replace_existing_files,
>  File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, 
> in __init__
>  self.env.run()
>  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", 
> line 160, in run
>  self.run_action(resource, action)
>  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", 
> line 124, in run_action
>  provider_action()
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py",
>  line 654, in action_create_on_execute
>  self.action_delayed("create")
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py",
>  line 651, in action_delayed
>  self.get_hdfs_resource_executor().action_delayed(action_name, self)
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py",
>  line 354, in action_delayed
>  self.action_delayed_for_nameservice(nameservice, action_name, main_resource)
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py",
>  line 380, in action_delayed_for_nameservice
>  self._create_resource()
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py",
>  line 396, in _create_resource
>  self._create_file(self.main_resource.resource.target, 
> source=self.main_resource.resource.source, mode=self.mode)
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py",
>  line 511, in _create_file
>  self.util.run_command(target, 'CREATE', method='PUT', overwrite=True, 
> assertable_result=False, file_to_put=source, **kwargs)
>  File 
> "/usr/lib/ambar

[jira] [Commented] (AMBARI-24399) Components start failing with 'Holder DFSClient_NONMAPREDUCE does not have any open files' while adding Namespace

2018-08-06 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/AMBARI-24399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16570115#comment-16570115
 ] 

Hudson commented on AMBARI-24399:
-

SUCCESS: Integrated in Jenkins build Ambari-branch-2.7 #108 (See 
[https://builds.apache.org/job/Ambari-branch-2.7/108/])
AMBARI-24399. Components start failing with 'Holder (aonishuk: 
[https://gitbox.apache.org/repos/asf?p=ambari.git&a=commit&h=44384bf1e682c4c0bbfcb8881a0590dc93bc4414])
* (edit) 
ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py


> Components start failing with 'Holder DFSClient_NONMAPREDUCE does not have 
> any open files' while adding Namespace 
> --
>
> Key: AMBARI-24399
> URL: https://issues.apache.org/jira/browse/AMBARI-24399
> Project: Ambari
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Vivek Rathod
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.7.1
>
> Attachments: AMBARI-24399.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> STR: 
> Add a namespace from the UI. In the last step, restart the required services; 
> the hiveserver2 restart fails, although on retrying it comes back up.
> {code}
> Traceback (most recent call last):
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", 
> line 982, in restart
>  self.status(env)
>  File 
> "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_server.py",
>  line 79, in status
>  check_process_status(status_params.hive_pid)
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/check_process_status.py",
>  line 43, in check_process_status
>  raise ComponentIsNotRunning()
> ComponentIsNotRunning
> The above exception was the cause of the following exception:
> Traceback (most recent call last):
>  File 
> "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_server.py",
>  line 137, in <module>
>  HiveServer().execute()
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", 
> line 353, in execute
>  method(env)
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", 
> line 993, in restart
>  self.start(env, upgrade_type=upgrade_type)
>  File 
> "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_server.py",
>  line 50, in start
>  self.configure(env) # FOR SECURITY
>  File 
> "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_server.py",
>  line 45, in configure
>  hive(name='hiveserver2')
>  File 
> "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive.py",
>  line 119, in hive
>  setup_hiveserver2()
>  File 
> "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive.py",
>  line 167, in setup_hiveserver2
>  skip=params.sysprep_skip_copy_tarballs_hdfs)
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/copy_tarball.py",
>  line 516, in copy_to_hdfs
>  replace_existing_files=replace_existing_files,
>  File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, 
> in __init__
>  self.env.run()
>  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", 
> line 160, in run
>  self.run_action(resource, action)
>  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", 
> line 124, in run_action
>  provider_action()
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py",
>  line 654, in action_create_on_execute
>  self.action_delayed("create")
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py",
>  line 651, in action_delayed
>  self.get_hdfs_resource_executor().action_delayed(action_name, self)
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py",
>  line 354, in action_delayed
>  self.action_delayed_for_nameservice(nameservice, action_name, main_resource)
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py",
>  line 380, in action_delayed_for_nameservice
>  self._create_resource()
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py",
>  line 396, in _create_resource
>  self._create_file(self.main_resource.resource.target, 
> source=self.main_resource.resource.source, mode=self.mode)
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py",
>  line 511, in _create_file
>  self.util.run_command(target, 'CREATE', method='PUT', overwrite=True, 
> assertable_result=False, file_to_put=source, **kwargs)
>  File 
> "/usr/lib/ambari-agen

[jira] [Commented] (AMBARI-24405) Components in hosts page should be sorted by display name

2018-08-06 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/AMBARI-24405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16570047#comment-16570047
 ] 

Hudson commented on AMBARI-24405:
-

SUCCESS: Integrated in Jenkins build Ambari-trunk-Commit #9727 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/9727/])
AMBARI-24405. Components in hosts page should be sorted by display name 
(aleksandrkovalenko: 
[https://gitbox.apache.org/repos/asf?p=ambari.git&a=commit&h=1db1dd06be5d55f83427bbb1f6c152ecde90ed11])
* (edit) ambari-web/test/views/main/host/summary_test.js
* (edit) ambari-web/app/views/main/host/summary.js


> Components in hosts page should be sorted by display name
> -
>
> Key: AMBARI-24405
> URL: https://issues.apache.org/jira/browse/AMBARI-24405
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 3.0.0
>Reporter: Aleksandr Kovalenko
>Assignee: Aleksandr Kovalenko
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.0.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Components on the hosts page seem to be sorted by component type (master, 
> slave, client, etc.), and within each type they are not sorted by display 
> name: "Timeline" comes before "History".
> More importantly, when looking for a component, people search by its name, 
> not by its type. So all the components should be sorted alphabetically by 
> their display name irrespective of their type; otherwise it becomes painful 
> to find the component of interest (and one finally has to rely on the 
> browser's search capability).
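
For illustration (Python as a sketch here; the actual fix lands in 
ambari-web/app/views/main/host/summary.js), the requested ordering is a plain 
sort keyed on display name alone:

{code}
# Illustration only; the real fix is in ambari-web/app/views/main/host/summary.js.
# The requested ordering is a plain sort keyed on display name, ignoring type.
components = [
    {'displayName': 'Timeline Service', 'type': 'master'},
    {'displayName': 'History Server',   'type': 'master'},
    {'displayName': 'DataNode',         'type': 'slave'},
]

by_display_name = sorted(components, key=lambda c: c['displayName'].lower())
print([c['displayName'] for c in by_display_name])
# ['DataNode', 'History Server', 'Timeline Service']
{code}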



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMBARI-24399) Components start failing with 'Holder DFSClient_NONMAPREDUCE does not have any open files' while adding Namespace

2018-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-24399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated AMBARI-24399:

Labels: pull-request-available  (was: )

> Components start failing with 'Holder DFSClient_NONMAPREDUCE does not have 
> any open files' while adding Namespace 
> --
>
> Key: AMBARI-24399
> URL: https://issues.apache.org/jira/browse/AMBARI-24399
> Project: Ambari
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Vivek Rathod
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.7.1
>
> Attachments: AMBARI-24399.patch
>
>
> STR: 
> Add a namespace from the UI. In the last step, restart the required services; 
> the hiveserver2 restart fails, although on retrying it comes back up.
> {code}
> Traceback (most recent call last):
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", 
> line 982, in restart
>  self.status(env)
>  File 
> "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_server.py",
>  line 79, in status
>  check_process_status(status_params.hive_pid)
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/check_process_status.py",
>  line 43, in check_process_status
>  raise ComponentIsNotRunning()
> ComponentIsNotRunning
> The above exception was the cause of the following exception:
> Traceback (most recent call last):
>  File 
> "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_server.py",
>  line 137, in <module>
>  HiveServer().execute()
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", 
> line 353, in execute
>  method(env)
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", 
> line 993, in restart
>  self.start(env, upgrade_type=upgrade_type)
>  File 
> "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_server.py",
>  line 50, in start
>  self.configure(env) # FOR SECURITY
>  File 
> "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_server.py",
>  line 45, in configure
>  hive(name='hiveserver2')
>  File 
> "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive.py",
>  line 119, in hive
>  setup_hiveserver2()
>  File 
> "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive.py",
>  line 167, in setup_hiveserver2
>  skip=params.sysprep_skip_copy_tarballs_hdfs)
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/copy_tarball.py",
>  line 516, in copy_to_hdfs
>  replace_existing_files=replace_existing_files,
>  File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, 
> in __init__
>  self.env.run()
>  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", 
> line 160, in run
>  self.run_action(resource, action)
>  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", 
> line 124, in run_action
>  provider_action()
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py",
>  line 654, in action_create_on_execute
>  self.action_delayed("create")
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py",
>  line 651, in action_delayed
>  self.get_hdfs_resource_executor().action_delayed(action_name, self)
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py",
>  line 354, in action_delayed
>  self.action_delayed_for_nameservice(nameservice, action_name, main_resource)
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py",
>  line 380, in action_delayed_for_nameservice
>  self._create_resource()
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py",
>  line 396, in _create_resource
>  self._create_file(self.main_resource.resource.target, 
> source=self.main_resource.resource.source, mode=self.mode)
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py",
>  line 511, in _create_file
>  self.util.run_command(target, 'CREATE', method='PUT', overwrite=True, 
> assertable_result=False, file_to_put=source, **kwargs)
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py",
>  line 199, in run_command
>  return self._run_command(*args, **kwargs)
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py",
>  line 272, in _run_command
>  raise WebHDFSCallException(err_msg, result_dict)
> resource_management.libraries.providers.hdfs_resource.WebHDFSCallException: 
> Execution of 'curl -sS -L -w '%\{http_cod

[jira] [Updated] (AMBARI-24399) Components start failing with 'Holder DFSClient_NONMAPREDUCE does not have any open files' while adding Namespace

2018-08-06 Thread Andrew Onischuk (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-24399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Onischuk updated AMBARI-24399:
-
Attachment: AMBARI-24399.patch

> Components start failing with 'Holder DFSClient_NONMAPREDUCE does not have 
> any open files' while adding Namespace 
> --
>
> Key: AMBARI-24399
> URL: https://issues.apache.org/jira/browse/AMBARI-24399
> Project: Ambari
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Vivek Rathod
>Priority: Major
> Fix For: 2.7.1
>
> Attachments: AMBARI-24399.patch
>
>
> STR: 
> Add a namespace from the UI. In the last step, restart the required services; 
> the hiveserver2 restart fails, although on retrying it comes back up.
> {code}
> Traceback (most recent call last):
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", 
> line 982, in restart
>  self.status(env)
>  File 
> "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_server.py",
>  line 79, in status
>  check_process_status(status_params.hive_pid)
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/check_process_status.py",
>  line 43, in check_process_status
>  raise ComponentIsNotRunning()
> ComponentIsNotRunning
> The above exception was the cause of the following exception:
> Traceback (most recent call last):
>  File 
> "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_server.py",
>  line 137, in <module>
>  HiveServer().execute()
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", 
> line 353, in execute
>  method(env)
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", 
> line 993, in restart
>  self.start(env, upgrade_type=upgrade_type)
>  File 
> "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_server.py",
>  line 50, in start
>  self.configure(env) # FOR SECURITY
>  File 
> "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_server.py",
>  line 45, in configure
>  hive(name='hiveserver2')
>  File 
> "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive.py",
>  line 119, in hive
>  setup_hiveserver2()
>  File 
> "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive.py",
>  line 167, in setup_hiveserver2
>  skip=params.sysprep_skip_copy_tarballs_hdfs)
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/copy_tarball.py",
>  line 516, in copy_to_hdfs
>  replace_existing_files=replace_existing_files,
>  File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, 
> in __init__
>  self.env.run()
>  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", 
> line 160, in run
>  self.run_action(resource, action)
>  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", 
> line 124, in run_action
>  provider_action()
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py",
>  line 654, in action_create_on_execute
>  self.action_delayed("create")
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py",
>  line 651, in action_delayed
>  self.get_hdfs_resource_executor().action_delayed(action_name, self)
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py",
>  line 354, in action_delayed
>  self.action_delayed_for_nameservice(nameservice, action_name, main_resource)
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py",
>  line 380, in action_delayed_for_nameservice
>  self._create_resource()
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py",
>  line 396, in _create_resource
>  self._create_file(self.main_resource.resource.target, 
> source=self.main_resource.resource.source, mode=self.mode)
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py",
>  line 511, in _create_file
>  self.util.run_command(target, 'CREATE', method='PUT', overwrite=True, 
> assertable_result=False, file_to_put=source, **kwargs)
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py",
>  line 199, in run_command
>  return self._run_command(*args, **kwargs)
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py",
>  line 272, in _run_command
>  raise WebHDFSCallException(err_msg, result_dict)
> resource_management.libraries.providers.hdfs_resource.WebHDFSCallException: 
> Execution of 'curl -sS -L -w '%\{http_code}' -X PUT --data-binary 
> @/usr/hdp/3.0.1.0-30/hive

[jira] [Updated] (AMBARI-24399) Components start failing with 'Holder DFSClient_NONMAPREDUCE does not have any open files' while adding Namespace

2018-08-06 Thread Andrew Onischuk (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-24399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Onischuk updated AMBARI-24399:
-
Status: Patch Available  (was: Open)

> Components start failing with 'Holder DFSClient_NONMAPREDUCE does not have 
> any open files' while adding Namespace 
> --
>
> Key: AMBARI-24399
> URL: https://issues.apache.org/jira/browse/AMBARI-24399
> Project: Ambari
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Vivek Rathod
>Priority: Major
> Fix For: 2.7.1
>
> Attachments: AMBARI-24399.patch
>
>
> STR: 
> Add a namespace from the UI. In the last step, restart the required services; 
> the hiveserver2 restart fails, although on retrying it comes back up.
> {code}
> Traceback (most recent call last):
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", 
> line 982, in restart
>  self.status(env)
>  File 
> "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_server.py",
>  line 79, in status
>  check_process_status(status_params.hive_pid)
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/check_process_status.py",
>  line 43, in check_process_status
>  raise ComponentIsNotRunning()
> ComponentIsNotRunning
> The above exception was the cause of the following exception:
> Traceback (most recent call last):
>  File 
> "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_server.py",
>  line 137, in <module>
>  HiveServer().execute()
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", 
> line 353, in execute
>  method(env)
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", 
> line 993, in restart
>  self.start(env, upgrade_type=upgrade_type)
>  File 
> "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_server.py",
>  line 50, in start
>  self.configure(env) # FOR SECURITY
>  File 
> "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_server.py",
>  line 45, in configure
>  hive(name='hiveserver2')
>  File 
> "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive.py",
>  line 119, in hive
>  setup_hiveserver2()
>  File 
> "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive.py",
>  line 167, in setup_hiveserver2
>  skip=params.sysprep_skip_copy_tarballs_hdfs)
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/copy_tarball.py",
>  line 516, in copy_to_hdfs
>  replace_existing_files=replace_existing_files,
>  File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, 
> in __init__
>  self.env.run()
>  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", 
> line 160, in run
>  self.run_action(resource, action)
>  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", 
> line 124, in run_action
>  provider_action()
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py",
>  line 654, in action_create_on_execute
>  self.action_delayed("create")
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py",
>  line 651, in action_delayed
>  self.get_hdfs_resource_executor().action_delayed(action_name, self)
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py",
>  line 354, in action_delayed
>  self.action_delayed_for_nameservice(nameservice, action_name, main_resource)
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py",
>  line 380, in action_delayed_for_nameservice
>  self._create_resource()
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py",
>  line 396, in _create_resource
>  self._create_file(self.main_resource.resource.target, 
> source=self.main_resource.resource.source, mode=self.mode)
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py",
>  line 511, in _create_file
>  self.util.run_command(target, 'CREATE', method='PUT', overwrite=True, 
> assertable_result=False, file_to_put=source, **kwargs)
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py",
>  line 199, in run_command
>  return self._run_command(*args, **kwargs)
>  File 
> "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py",
>  line 272, in _run_command
>  raise WebHDFSCallException(err_msg, result_dict)
> resource_management.libraries.providers.hdfs_resource.WebHDFSCallException: 
> Execution of 'curl -sS -L -w '%\{http_code}' -X PUT --data-binary 
> @/usr/hdp/3.0.1.0-3

[jira] [Resolved] (AMBARI-24405) Components in hosts page should be sorted by display name

2018-08-06 Thread Aleksandr Kovalenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-24405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Kovalenko resolved AMBARI-24405.
--
Resolution: Fixed

> Components in hosts page should be sorted by display name
> -
>
> Key: AMBARI-24405
> URL: https://issues.apache.org/jira/browse/AMBARI-24405
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 3.0.0
>Reporter: Aleksandr Kovalenko
>Assignee: Aleksandr Kovalenko
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.0.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Components on the hosts page seem to be sorted by component type (master, 
> slave, client, etc.), and within each type they are not sorted by display 
> name: "Timeline" comes before "History".
> More importantly, when looking for a component, people search by its name, 
> not by its type. So all the components should be sorted alphabetically by 
> their display name irrespective of their type; otherwise it becomes painful 
> to find the component of interest (and one finally has to rely on the 
> browser's search capability).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (AMBARI-24404) Button label appears incorrect during service deletion

2018-08-06 Thread Aleksandr Kovalenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMBARI-24404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Kovalenko resolved AMBARI-24404.
--
Resolution: Fixed

> Button label appears incorrect during service deletion
> --
>
> Key: AMBARI-24404
> URL: https://issues.apache.org/jira/browse/AMBARI-24404
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.7.0
>Reporter: Aleksandr Kovalenko
>Assignee: Aleksandr Kovalenko
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 3.0.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Try to delete the Ranger service from a cluster; upon deletion, stack advisor 
> recommends changes to configs.
> This is fine; however, the button at the bottom of the dialog says 'DELETE'.
> It should instead say something like 'Proceed', and once the user clicks 
> Proceed, the delete pop-up should appear.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)