[jira] [Commented] (AMBARI-22506) Incorrect pie chart distribution

2017-12-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-22506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16305132#comment-16305132
 ] 

Hudson commented on AMBARI-22506:
-

FAILURE: Integrated in Jenkins build Ambari-branch-2.6 #558 (See 
[https://builds.apache.org/job/Ambari-branch-2.6/558/])
AMBARI-22506.Incorrect pie chart distribution(Venkata Sairam) 
(venkatasairam.lanka: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git=commit=29ffc7ef99711b49bade5eba141d8018cf197b48])
* (edit) 
ambari-server/src/main/resources/common-services/ZEPPELIN/0.7.0/package/scripts/master.py


> Incorrect pie chart distribution
> 
>
> Key: AMBARI-22506
> URL: https://issues.apache.org/jira/browse/AMBARI-22506
> Project: Ambari
>  Issue Type: Bug
>Affects Versions: 2.6.1
>Reporter: Prabhjyot Singh
>Assignee: Prabhjyot Singh
> Attachments: AMBARI-22506_trunk_v1.patch, AMBARI-22506_trunk_v2.patch
>
>
> Phoenix JDBC applies number formatting when the value is a decimal, as described 
> at https://phoenix.apache.org/tuning.html ("phoenix.query.numberFormat" with 
> "#,##0.###"), which causes a problem when displaying the graph.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-22506) Incorrect pie chart distribution

2017-12-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-22506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16305122#comment-16305122
 ] 

Hudson commented on AMBARI-22506:
-

SUCCESS: Integrated in Jenkins build Ambari-trunk-Commit #8562 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/8562/])
AMBARI-22506.Incorrect pie chart distribution(Venkata Sairam) 
(venkatasairam.lanka: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git=commit=c1b8cda9608180cd00384a8453e3e5f78a865cb2])
* (edit) 
ambari-server/src/main/resources/common-services/ZEPPELIN/0.7.0/package/scripts/master.py


> Incorrect pie chart distribution
> 
>
> Key: AMBARI-22506
> URL: https://issues.apache.org/jira/browse/AMBARI-22506
> Project: Ambari
>  Issue Type: Bug
>Affects Versions: 2.6.1
>Reporter: Prabhjyot Singh
>Assignee: Prabhjyot Singh
> Attachments: AMBARI-22506_trunk_v1.patch, AMBARI-22506_trunk_v2.patch
>
>
> Phoenix JDBC applies number formatting when the value is a decimal, as described 
> at https://phoenix.apache.org/tuning.html ("phoenix.query.numberFormat" with 
> "#,##0.###"), which causes a problem when displaying the graph.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (AMBARI-22702) Infra Manager: scheduled deleting of Infra Solr documents

2017-12-27 Thread Krisztian Kasa (JIRA)
Krisztian Kasa created AMBARI-22702:
---

 Summary: Infra Manager: scheduled deleting of Infra Solr documents
 Key: AMBARI-22702
 URL: https://issues.apache.org/jira/browse/AMBARI-22702
 Project: Ambari
  Issue Type: Improvement
  Components: ambari-infra
Affects Versions: 3.0.0
Reporter: Krisztian Kasa
Assignee: Krisztian Kasa
 Fix For: 3.0.0


Delete documents from Infra Solr based on a specified time interval.
In archiving mode, also delete documents that have been successfully exported and 
uploaded.
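As a rough sketch of what interval-based cleanup could look like, the snippet below issues 
a Solr delete-by-query for documents older than a retention window. The Solr URL, collection 
name, timestamp field and retention period are illustrative assumptions, not the actual 
Infra Manager implementation:

{code:python}
import requests

SOLR_URL = "http://localhost:8886/solr"   # assumed Infra Solr endpoint
COLLECTION = "hadoop_logs"                # assumed collection name
TIMESTAMP_FIELD = "logtime"               # assumed date field
RETENTION_DAYS = 30                       # assumed retention window

def delete_old_documents():
    # Delete everything older than the retention window via a delete-by-query.
    query = "%s:[* TO NOW-%dDAYS]" % (TIMESTAMP_FIELD, RETENTION_DAYS)
    resp = requests.post(
        "%s/%s/update?commit=true" % (SOLR_URL, COLLECTION),
        json={"delete": {"query": query}},
        timeout=60,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    delete_old_documents()
{code}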



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-22626) Zeppelin Interpreter settings are getting updated after zeppelin restart

2017-12-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-22626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16305111#comment-16305111
 ] 

Hudson commented on AMBARI-22626:
-

FAILURE: Integrated in Jenkins build Ambari-branch-2.6 #557 (See 
[https://builds.apache.org/job/Ambari-branch-2.6/557/])
AMBARI-22626.Zeppelin Interpreter settings are getting updated after 
(venkatasairam.lanka: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git=commit=eef99836ea8f118b7735b9941890386f72d43018])
* (delete) 
ambari-server/src/main/resources/common-services/ZEPPELIN/0.7.0/package/scripts/spark2_config_template.py
* (edit) 
ambari-server/src/main/resources/common-services/ZEPPELIN/0.7.0/package/scripts/master.py
* (edit) 
ambari-server/src/main/resources/common-services/ZEPPELIN/0.7.0/package/scripts/interpreter_json_template.py
* (delete) 
ambari-server/src/main/resources/common-services/ZEPPELIN/0.7.0/package/scripts/livy2_config_template.py


> Zeppelin Interpreter settings are getting updated after zeppelin restart
> 
>
> Key: AMBARI-22626
> URL: https://issues.apache.org/jira/browse/AMBARI-22626
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.6.1
> Environment: ambari-server --version
> 2.6.1.0-114
>Reporter: Supreeth Sharma
>Assignee: Prabhjyot Singh
>Priority: Critical
> Fix For: 2.6.2
>
> Attachments: AMBARI-22626_branch-2.6_v1.patch, 
> AMBARI-22626_trunk_v1.patch
>
>
> Zeppelin Interpreter settings are getting updated after zeppelin restart.
> Live cluster : 
> http://ctr-e135-1512069032975-20895-02-09.hwx.site:9995/#/interpreter
> Steps to reproduce :
> 1) Update the zeppelin.pyspark.python to /base/tools/python-2.7.14/bin/python 
> for spark2 interpreter
> 2) Restart zeppelin
> 3) After the restart, zeppelin.pyspark.python is overridden with the value 
> 'python'.
> The same is observed with the livy2 interpreter.
> However, the same steps for the spark interpreter work fine even after a Zeppelin 
> restart, so the issue appears to affect only the spark2 and livy2 interpreters.
> The following error is seen in the Zeppelin logs:
> {code}
> INFO [2017-12-11 12:31:25,876] ({main} LuceneSearch.java[addIndexDocs]:305) - 
> Indexing 20 notebooks took 472ms
>  INFO [2017-12-11 12:31:25,876] ({main} Notebook.java[]:129) - Notebook 
> indexing finished: 20 indexed in 0s
>  WARN [2017-12-11 12:31:25,879] ({main} Helium.java[loadConf]:101) - 
> /usr/hdp/current/zeppelin-server/conf/helium.json does not exists
>  WARN [2017-12-11 12:31:25,882] ({main} NotebookRepoSync.java[]:88) - 
> Failed to initialize org.apache.zeppelin.notebook.repo.FileSystemNotebookRepo 
> notebook storage class
> java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.zeppelin.notebook.repo.NotebookRepoSync.(NotebookRepoSync.java:83)
>   at 
> org.apache.zeppelin.server.ZeppelinServer.(ZeppelinServer.java:155)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> {code}
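One way to confirm whether the property was overridden after the restart is to read the 
interpreter settings back through Zeppelin's REST API. This is only a verification sketch; 
the host/port are assumptions and the exact JSON layout of interpreter properties can differ 
between Zeppelin versions:

{code:python}
import requests

ZEPPELIN_URL = "http://localhost:9995"   # assumed Zeppelin host:port

def get_pyspark_python(interpreter_name):
    # GET /api/interpreter/setting returns all interpreter settings in "body".
    settings = requests.get(ZEPPELIN_URL + "/api/interpreter/setting", timeout=30).json()["body"]
    for setting in settings:
        if setting.get("name") == interpreter_name:
            prop = setting.get("properties", {}).get("zeppelin.pyspark.python")
            # Depending on the Zeppelin version the property is a plain string or a dict.
            return prop.get("value") if isinstance(prop, dict) else prop
    return None

print("spark2:", get_pyspark_python("spark2"))
print("livy2:", get_pyspark_python("livy2"))
{code}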



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22701) hive CLI process leak on metastore alert

2017-12-27 Thread Hoc Phan (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoc Phan updated AMBARI-22701:
--
Description: 
alert_hive_metastore.py will cause orphan processes running over time. Below is 
one example:


{code:none}
1001 593317 593316  0 Dec24 ?00:00:00 -bash -c export  
PATH='/usr/sbin:/sbin:/usr/
lib/ambari-server/*:/sbin:/usr/sbin:/bin:/usr/bin:/var/lib/ambari-agent:/bin/:/usr/bin/:/usr/s
bin/:/usr/hdp/current/hive-metastore/bin' ; export 
HIVE_CONF_DIR="/usr/hdp/current/hive-metastore/conf/conf.server" ; hive 
--hiveconf hive.metastore.uris=thrift://demo.local:9083 
--hiveconf hive.metastore.client.connect.retry.delay=1 
--hiveconf hive.metastore.failure.retries=1 --hiveconf 
hive.metastore.connect.retries=1 --hiveconf 
hive.metastore.client.socket.timeout=14 --hiveconf 
hive.execution.engine=mr -e "show databases;"
{code}


There could be thousands of these accumulated over many months on the host running Hive 
Metastore. To check, run the following two commands:


{code:none}
ps -ef | grep "[s]how databases" | wc -l
ps h -Led -o user | sort | uniq -c | sort -n
{code}


This will hit the nproc limit and crash other services on the same host.
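For monitoring purposes, the same check can be scripted. The sketch below counts lingering 
"show databases" Hive CLI processes and warns above a threshold; the threshold and match 
string are illustrative assumptions and this is not part of the alert script itself:

{code:python}
import subprocess

MATCH = "show databases"   # marker used by the metastore alert command
THRESHOLD = 50             # arbitrary warning threshold

def count_lingering_hive_cli():
    # Equivalent of: ps -ef | grep "[s]how databases" | wc -l
    out = subprocess.check_output(["ps", "-eo", "pid,args"]).decode("utf-8", "replace")
    return sum(1 for line in out.splitlines() if MATCH in line)

count = count_lingering_hive_cli()
if count > THRESHOLD:
    print("WARNING: %d lingering Hive CLI alert processes" % count)
else:
    print("OK: %d Hive CLI alert processes" % count)
{code}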

The fixes are:
1. Swap to "hive" user instead of "ambari-qa" user: 
https://issues.apache.org/jira/browse/AMBARI-22142

2. Change hive CLI to beeline:
https://issues.apache.org/jira/browse/AMBARI-17006

For some reason, the Hive CLI processes don't get killed and keep "lingering" 
around.

Proposed fix in 
/var/lib/ambari-server/resources/common-services/HIVE/0.12.0.2.0/package/alerts

Instructions:

1. Add the following lines below "HIVE_METASTORE_URIS_KEY = 
'{{hive-site/hive.metastore.uris}}'"


{code:none}
HIVE_SERVER_THRIFT_PORT_KEY = '{{hive-site/hive.server2.thrift.port}}'
HIVE_SERVER_THRIFT_HTTP_PORT_KEY = '{{hive-site/hive.server2.thrift.http.port}}'
HIVE_SERVER_TRANSPORT_MODE_KEY = '{{hive-site/hive.server2.transport.mode}}'
THRIFT_PORT_DEFAULT = 1
HIVE_SERVER_TRANSPORT_MODE_DEFAULT = 'binary'
{code}


2. Change SMOKEUSER_DEFAULT = 'ambari-qa' to:


{code:none}
SMOKEUSER_DEFAULT = 'hive'
{code}


3. Replace   

{code:none}
return (SECURITY_ENABLED_KEY,SMOKEUSER_KEYTAB_KEY,SMOKEUSER_PRINCIPAL_KEY, 
HIVE_METASTORE_URIS_KEY, SMOKEUSER_KEY, KERBEROS_EXECUTABLE_SEARCH_PATHS_KEY, 
STACK_ROOT)
{code}


with this:


{code:none}
  return (SECURITY_ENABLED_KEY,SMOKEUSER_KEYTAB_KEY,SMOKEUSER_PRINCIPAL_KEY, 
HIVE_METASTORE_URIS_KEY, SMOKEUSER_KEY, KERBEROS_EXECUTABLE_SEARCH_PATHS_KEY, 
STACK_ROOT, HIVE_SERVER_THRIFT_PORT_KEY, HIVE_SERVER_THRIFT_HTTP_PORT_KEY, 
HIVE_SERVER_TRANSPORT_MODE_KEY)
{code}


4. Replace this


{code:none}
return (HIVE_METASTORE_URIS_KEY, HADOOPUSER_KEY)
{code}


with this:


{code:none}
  return (HIVE_SERVER_THRIFT_PORT_KEY, HIVE_SERVER_THRIFT_HTTP_PORT_KEY, 
HIVE_SERVER_TRANSPORT_MODE_KEY, HIVE_METASTORE_URIS_KEY, HADOOPUSER_KEY)
{code}



5. Comment out these lines, because they keep injecting the ambari-qa user back:


{code:none}
  #if SMOKEUSER_KEY in configurations:
  #  smokeuser = configurations[SMOKEUSER_KEY]
{code}


6. Replace this code block:



{code:none}
cmd = format("export HIVE_CONF_DIR='{conf_dir}' ; "
 "hive --hiveconf hive.metastore.uris={metastore_uri}\

 --hiveconf hive.metastore.client.connect.retry.delay=1\
 --hiveconf hive.metastore.failure.retries=1\
 --hiveconf hive.metastore.connect.retries=1\
 --hiveconf hive.metastore.client.socket.timeout=14\
 --hiveconf hive.execution.engine=mr -e 'show databases;'")
{code}


with this block:


{code:none}
transport_mode = HIVE_SERVER_TRANSPORT_MODE_DEFAULT
if HIVE_SERVER_TRANSPORT_MODE_KEY in configurations:
  transport_mode = configurations[HIVE_SERVER_TRANSPORT_MODE_KEY]

port = THRIFT_PORT_DEFAULT
if transport_mode.lower() == 'binary' and HIVE_SERVER_THRIFT_PORT_KEY in 
configurations:
  port = int(configurations[HIVE_SERVER_THRIFT_PORT_KEY])
elif transport_mode.lower() == 'http' and HIVE_SERVER_THRIFT_HTTP_PORT_KEY 
in configurations:
  port = int(configurations[HIVE_SERVER_THRIFT_HTTP_PORT_KEY])

cmd = format("export HIVE_CONF_DIR='{conf_dir}' ; "

 "beeline -u jdbc:hive2://{host_name}:{port}/\
 --hiveconf hive.metastore.client.connect.retry.delay=1\
 --hiveconf hive.metastore.failure.retries=1\
 --hiveconf hive.metastore.connect.retries=1\
 --hiveconf hive.metastore.client.socket.timeout=14\
 --hiveconf hive.execution.engine=mr -e 'show databases;'")
{code}


  was:
alert_hive_metastore.py will cause orphan processes running over time. Below is 
one example:


{code:none}
1001 593317 593316  0 Dec24 ?00:00:00 -bash -c export  
PATH='/usr/sbin:/sbin:/usr/  

[jira] [Commented] (AMBARI-22696) Whitelist execute latency from Storm Ambari metrics

2017-12-27 Thread Arun Mahadevan (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-22696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16304883#comment-16304883
 ] 

Arun Mahadevan commented on AMBARI-22696:
-

[~avijayan], can you review the patch and get it merged? Do you know which 
release of Ambari this would be in?

cc [~ggolani]

> Whitelist execute latency from Storm Ambari metrics
> ---
>
> Key: AMBARI-22696
> URL: https://issues.apache.org/jira/browse/AMBARI-22696
> Project: Ambari
>  Issue Type: Bug
>Affects Versions: trunk, 2.6.2
>Reporter: Arun Mahadevan
>Assignee: Jungtaek Lim
> Attachments: AMBARI-22696-branch-2.6.patch, AMBARI-22696.patch
>
>
> Whitelist execute latency from Storm Ambari metrics



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22701) hive CLI process leak on metastore alert

2017-12-27 Thread Hoc Phan (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoc Phan updated AMBARI-22701:
--
Description: 
alert_hive_metastore.py will cause orphan processes running over time. Below is 
one example:


{code:java}
1001 593317 593316  0 Dec24 ?00:00:00 -bash -c export  
PATH='/usr/sbin:/sbin:/usr/
lib/ambari-server/*:/sbin:/usr/sbin:/bin:/usr/bin:/var/lib/ambari-agent:/bin/:/usr/bin/:/usr/s
bin/:/usr/hdp/current/hive-metastore/bin' ; export 
HIVE_CONF_DIR="/usr/hdp/current/hive-metastore/conf/conf.server" ; hive 
--hiveconf hive.metastore.uris=thrift://demo.local:9083 
--hiveconf hive.metastore.client.connect.retry.delay=1 
--hiveconf hive.metastore.failure.retries=1 --hiveconf 
hive.metastore.connect.retries=1 --hiveconf 
hive.metastore.client.socket.timeout=14 --hiveconf 
hive.execution.engine=mr -e "show databases;"
{code}


There could be thousands of these accumulated over many months on the host running Hive 
Metastore. To check, run the following two commands:


{code:shell}
ps -ef | grep "[s]how databases" | wc -l
ps h -Led -o user | sort | uniq -c | sort -n
{code}


This will hit the nproc limit and crash other services on the same host.

The fixes are:
1. Swap to "hive" user instead of "ambari-qa" user: 
https://issues.apache.org/jira/browse/AMBARI-22142

2. Change hive CLI to beeline:
https://issues.apache.org/jira/browse/AMBARI-17006

For some reason, the Hive CLI processes don't get killed and keep "lingering" 
around.

Proposed fix in 
/var/lib/ambari-server/resources/common-services/HIVE/0.12.0.2.0/package/alerts

Instructions:

1. Add the following lines below "HIVE_METASTORE_URIS_KEY = 
'{{hive-site/hive.metastore.uris}}'"

HIVE_SERVER_THRIFT_PORT_KEY = '{{hive-site/hive.server2.thrift.port}}'
HIVE_SERVER_THRIFT_HTTP_PORT_KEY = '{{hive-site/hive.server2.thrift.http.port}}'
HIVE_SERVER_TRANSPORT_MODE_KEY = '{{hive-site/hive.server2.transport.mode}}'
THRIFT_PORT_DEFAULT = 1
HIVE_SERVER_TRANSPORT_MODE_DEFAULT = 'binary'

2. Change SMOKEUSER_DEFAULT = 'ambari-qa' to:

SMOKEUSER_DEFAULT = 'hive'

3. Replace   
return (SECURITY_ENABLED_KEY,SMOKEUSER_KEYTAB_KEY,SMOKEUSER_PRINCIPAL_KEY, 
HIVE_METASTORE_URIS_KEY, SMOKEUSER_KEY, KERBEROS_EXECUTABLE_SEARCH_PATHS_KEY, 
STACK_ROOT)

with this:

  return (SECURITY_ENABLED_KEY,SMOKEUSER_KEYTAB_KEY,SMOKEUSER_PRINCIPAL_KEY, 
HIVE_METASTORE_URIS_KEY, SMOKEUSER_KEY, KERBEROS_EXECUTABLE_SEARCH_PATHS_KEY, 
STACK_ROOT, HIVE_SERVER_THRIFT_PORT_KEY, HIVE_SERVER_THRIFT_HTTP_PORT_KEY, 
HIVE_SERVER_TRANSPORT_MODE_KEY)

4. Replace this

return (HIVE_METASTORE_URIS_KEY, HADOOPUSER_KEY)

with this:

  return (HIVE_SERVER_THRIFT_PORT_KEY, HIVE_SERVER_THRIFT_HTTP_PORT_KEY, 
HIVE_SERVER_TRANSPORT_MODE_KEY, HIVE_METASTORE_URIS_KEY, HADOOPUSER_KEY)


5. Comment out these lines, because they keep injecting the ambari-qa user back:

  #if SMOKEUSER_KEY in configurations:
  #  smokeuser = configurations[SMOKEUSER_KEY]

6. Replace this code block:


cmd = format("export HIVE_CONF_DIR='{conf_dir}' ; "
 "hive --hiveconf hive.metastore.uris={metastore_uri}\

 --hiveconf hive.metastore.client.connect.retry.delay=1\
 --hiveconf hive.metastore.failure.retries=1\
 --hiveconf hive.metastore.connect.retries=1\
 --hiveconf hive.metastore.client.socket.timeout=14\
 --hiveconf hive.execution.engine=mr -e 'show databases;'")

with this block:

transport_mode = HIVE_SERVER_TRANSPORT_MODE_DEFAULT
if HIVE_SERVER_TRANSPORT_MODE_KEY in configurations:
  transport_mode = configurations[HIVE_SERVER_TRANSPORT_MODE_KEY]

port = THRIFT_PORT_DEFAULT
if transport_mode.lower() == 'binary' and HIVE_SERVER_THRIFT_PORT_KEY in 
configurations:
  port = int(configurations[HIVE_SERVER_THRIFT_PORT_KEY])
elif transport_mode.lower() == 'http' and HIVE_SERVER_THRIFT_HTTP_PORT_KEY 
in configurations:
  port = int(configurations[HIVE_SERVER_THRIFT_HTTP_PORT_KEY])

cmd = format("export HIVE_CONF_DIR='{conf_dir}' ; "

 "beeline -u jdbc:hive2://{host_name}:{port}/\
 --hiveconf hive.metastore.client.connect.retry.delay=1\
 --hiveconf hive.metastore.failure.retries=1\
 --hiveconf hive.metastore.connect.retries=1\
 --hiveconf hive.metastore.client.socket.timeout=14\
 --hiveconf hive.execution.engine=mr -e 'show databases;'")

  was:
alert_hive_metastore.py will cause orphan processes running over time. Below is 
one example:


{code:java}
1001 593317 593316  0 Dec24 ?00:00:00 -bash -c export  
PATH='/usr/sbin:/sbin:/usr/
lib/ambari-server/*:/sbin:/usr/sbin:/bin:/usr/bin:/var/lib/ambari-agent:/bin/:/usr/bin/:/usr/s
bin/:/usr/hdp/current/hive-metastore/bin' ; export 

[jira] [Updated] (AMBARI-22701) hive CLI process leak on metastore alert

2017-12-27 Thread Hoc Phan (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoc Phan updated AMBARI-22701:
--
Description: 
alert_hive_metastore.py will cause orphan processes running over time. Below is 
one example:


{code:none}
1001 593317 593316  0 Dec24 ?00:00:00 -bash -c export  
PATH='/usr/sbin:/sbin:/usr/
lib/ambari-server/*:/sbin:/usr/sbin:/bin:/usr/bin:/var/lib/ambari-agent:/bin/:/usr/bin/:/usr/s
bin/:/usr/hdp/current/hive-metastore/bin' ; export 
HIVE_CONF_DIR="/usr/hdp/current/hive-metastore/conf/conf.server" ; hive 
--hiveconf hive.metastore.uris=thrift://demo.local:9083 
--hiveconf hive.metastore.client.connect.retry.delay=1 
--hiveconf hive.metastore.failure.retries=1 --hiveconf 
hive.metastore.connect.retries=1 --hiveconf 
hive.metastore.client.socket.timeout=14 --hiveconf 
hive.execution.engine=mr -e "show databases;"
{code}


There could be thousands of these accumulated over many months on the host running Hive 
Metastore. To check, run the following two commands:


{code:none}
ps -ef | grep "[s]how databases" | wc -l
ps h -Led -o user | sort | uniq -c | sort -n
{code}


This will hit the nproc limit and crash other services on the same host.

The fixes are:
1. Swap to "hive" user instead of "ambari-qa" user: 
https://issues.apache.org/jira/browse/AMBARI-22142

2. Change hive CLI to beeline:
https://issues.apache.org/jira/browse/AMBARI-17006

For some reason, the Hive CLI processes don't get killed and keep "lingering" 
around.

Proposed fix in 
/var/lib/ambari-server/resources/common-services/HIVE/0.12.0.2.0/package/alerts

Instructions:

1. Add the following lines below "HIVE_METASTORE_URIS_KEY = 
'{{hive-site/hive.metastore.uris}}'"

HIVE_SERVER_THRIFT_PORT_KEY = '{{hive-site/hive.server2.thrift.port}}'
HIVE_SERVER_THRIFT_HTTP_PORT_KEY = '{{hive-site/hive.server2.thrift.http.port}}'
HIVE_SERVER_TRANSPORT_MODE_KEY = '{{hive-site/hive.server2.transport.mode}}'
THRIFT_PORT_DEFAULT = 1
HIVE_SERVER_TRANSPORT_MODE_DEFAULT = 'binary'

2. Change SMOKEUSER_DEFAULT = 'ambari-qa' to:

SMOKEUSER_DEFAULT = 'hive'

3. Replace   
return (SECURITY_ENABLED_KEY,SMOKEUSER_KEYTAB_KEY,SMOKEUSER_PRINCIPAL_KEY, 
HIVE_METASTORE_URIS_KEY, SMOKEUSER_KEY, KERBEROS_EXECUTABLE_SEARCH_PATHS_KEY, 
STACK_ROOT)

with this:

  return (SECURITY_ENABLED_KEY,SMOKEUSER_KEYTAB_KEY,SMOKEUSER_PRINCIPAL_KEY, 
HIVE_METASTORE_URIS_KEY, SMOKEUSER_KEY, KERBEROS_EXECUTABLE_SEARCH_PATHS_KEY, 
STACK_ROOT, HIVE_SERVER_THRIFT_PORT_KEY, HIVE_SERVER_THRIFT_HTTP_PORT_KEY, 
HIVE_SERVER_TRANSPORT_MODE_KEY)

4. Replace this

return (HIVE_METASTORE_URIS_KEY, HADOOPUSER_KEY)

with this:

  return (HIVE_SERVER_THRIFT_PORT_KEY, HIVE_SERVER_THRIFT_HTTP_PORT_KEY, 
HIVE_SERVER_TRANSPORT_MODE_KEY, HIVE_METASTORE_URIS_KEY, HADOOPUSER_KEY)


5. Comment out these lines, because they keep injecting the ambari-qa user back:

  #if SMOKEUSER_KEY in configurations:
  #  smokeuser = configurations[SMOKEUSER_KEY]

6. Replace this code block:


cmd = format("export HIVE_CONF_DIR='{conf_dir}' ; "
 "hive --hiveconf hive.metastore.uris={metastore_uri}\

 --hiveconf hive.metastore.client.connect.retry.delay=1\
 --hiveconf hive.metastore.failure.retries=1\
 --hiveconf hive.metastore.connect.retries=1\
 --hiveconf hive.metastore.client.socket.timeout=14\
 --hiveconf hive.execution.engine=mr -e 'show databases;'")

with this block:

transport_mode = HIVE_SERVER_TRANSPORT_MODE_DEFAULT
if HIVE_SERVER_TRANSPORT_MODE_KEY in configurations:
  transport_mode = configurations[HIVE_SERVER_TRANSPORT_MODE_KEY]

port = THRIFT_PORT_DEFAULT
if transport_mode.lower() == 'binary' and HIVE_SERVER_THRIFT_PORT_KEY in 
configurations:
  port = int(configurations[HIVE_SERVER_THRIFT_PORT_KEY])
elif transport_mode.lower() == 'http' and HIVE_SERVER_THRIFT_HTTP_PORT_KEY 
in configurations:
  port = int(configurations[HIVE_SERVER_THRIFT_HTTP_PORT_KEY])

cmd = format("export HIVE_CONF_DIR='{conf_dir}' ; "

 "beeline -u jdbc:hive2://{host_name}:{port}/\
 --hiveconf hive.metastore.client.connect.retry.delay=1\
 --hiveconf hive.metastore.failure.retries=1\
 --hiveconf hive.metastore.connect.retries=1\
 --hiveconf hive.metastore.client.socket.timeout=14\
 --hiveconf hive.execution.engine=mr -e 'show databases;'")

  was:
alert_hive_metastore.py will cause orphan processes running over time. Below is 
one example:


{code:java}
1001 593317 593316  0 Dec24 ?00:00:00 -bash -c export  
PATH='/usr/sbin:/sbin:/usr/
lib/ambari-server/*:/sbin:/usr/sbin:/bin:/usr/bin:/var/lib/ambari-agent:/bin/:/usr/bin/:/usr/s
bin/:/usr/hdp/current/hive-metastore/bin' ; export 

[jira] [Updated] (AMBARI-22701) hive CLI process leak on metastore alert

2017-12-27 Thread Hoc Phan (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoc Phan updated AMBARI-22701:
--
Description: 
alert_hive_metastore.py will cause orphan processes running over time. Below is 
one example:


{code:java}
1001 593317 593316  0 Dec24 ?00:00:00 -bash -c export  
PATH='/usr/sbin:/sbin:/usr/
lib/ambari-server/*:/sbin:/usr/sbin:/bin:/usr/bin:/var/lib/ambari-agent:/bin/:/usr/bin/:/usr/s
bin/:/usr/hdp/current/hive-metastore/bin' ; export 
HIVE_CONF_DIR="/usr/hdp/current/hive-metastore/conf/conf.server" ; hive 
--hiveconf hive.metastore.uris=thrift://demo.local:9083 
--hiveconf hive.metastore.client.connect.retry.delay=1 
--hiveconf hive.metastore.failure.retries=1 --hiveconf 
hive.metastore.connect.retries=1 --hiveconf 
hive.metastore.client.socket.timeout=14 --hiveconf 
hive.execution.engine=mr -e "show databases;"
{code}


There could be thousands of these accumulated over many months on the host running Hive 
Metastore. To check, run the following two commands:


{code:bash}
ps -ef | grep "[s]how databases" | wc -l
ps h -Led -o user | sort | uniq -c | sort -n
{code}


This will hit the nproc limit and crash other services on the same host.

The fixes are:
1. Swap to "hive" user instead of "ambari-qa" user: 
https://issues.apache.org/jira/browse/AMBARI-22142

2. Change hive CLI to beeline:
https://issues.apache.org/jira/browse/AMBARI-17006

For some reason, the Hive CLI processes don't get killed and keep "lingering" 
around.

Proposed fix in 
/var/lib/ambari-server/resources/common-services/HIVE/0.12.0.2.0/package/alerts

Instructions:

1. Add the following lines below "HIVE_METASTORE_URIS_KEY = 
'{{hive-site/hive.metastore.uris}}'"

HIVE_SERVER_THRIFT_PORT_KEY = '{{hive-site/hive.server2.thrift.port}}'
HIVE_SERVER_THRIFT_HTTP_PORT_KEY = '{{hive-site/hive.server2.thrift.http.port}}'
HIVE_SERVER_TRANSPORT_MODE_KEY = '{{hive-site/hive.server2.transport.mode}}'
THRIFT_PORT_DEFAULT = 1
HIVE_SERVER_TRANSPORT_MODE_DEFAULT = 'binary'

2. Change SMOKEUSER_DEFAULT = 'ambari-qa' to:

SMOKEUSER_DEFAULT = 'hive'

3. Replace   
return (SECURITY_ENABLED_KEY,SMOKEUSER_KEYTAB_KEY,SMOKEUSER_PRINCIPAL_KEY, 
HIVE_METASTORE_URIS_KEY, SMOKEUSER_KEY, KERBEROS_EXECUTABLE_SEARCH_PATHS_KEY, 
STACK_ROOT)

with this:

  return (SECURITY_ENABLED_KEY,SMOKEUSER_KEYTAB_KEY,SMOKEUSER_PRINCIPAL_KEY, 
HIVE_METASTORE_URIS_KEY, SMOKEUSER_KEY, KERBEROS_EXECUTABLE_SEARCH_PATHS_KEY, 
STACK_ROOT, HIVE_SERVER_THRIFT_PORT_KEY, HIVE_SERVER_THRIFT_HTTP_PORT_KEY, 
HIVE_SERVER_TRANSPORT_MODE_KEY)

4. Replace this

return (HIVE_METASTORE_URIS_KEY, HADOOPUSER_KEY)

with this:

  return (HIVE_SERVER_THRIFT_PORT_KEY, HIVE_SERVER_THRIFT_HTTP_PORT_KEY, 
HIVE_SERVER_TRANSPORT_MODE_KEY, HIVE_METASTORE_URIS_KEY, HADOOPUSER_KEY)


5. Comment out these lines, because they keep injecting the ambari-qa user back:

  #if SMOKEUSER_KEY in configurations:
  #  smokeuser = configurations[SMOKEUSER_KEY]

6. Replace this code block:


cmd = format("export HIVE_CONF_DIR='{conf_dir}' ; "
 "hive --hiveconf hive.metastore.uris={metastore_uri}\

 --hiveconf hive.metastore.client.connect.retry.delay=1\
 --hiveconf hive.metastore.failure.retries=1\
 --hiveconf hive.metastore.connect.retries=1\
 --hiveconf hive.metastore.client.socket.timeout=14\
 --hiveconf hive.execution.engine=mr -e 'show databases;'")

with this block:

transport_mode = HIVE_SERVER_TRANSPORT_MODE_DEFAULT
if HIVE_SERVER_TRANSPORT_MODE_KEY in configurations:
  transport_mode = configurations[HIVE_SERVER_TRANSPORT_MODE_KEY]

port = THRIFT_PORT_DEFAULT
if transport_mode.lower() == 'binary' and HIVE_SERVER_THRIFT_PORT_KEY in 
configurations:
  port = int(configurations[HIVE_SERVER_THRIFT_PORT_KEY])
elif transport_mode.lower() == 'http' and HIVE_SERVER_THRIFT_HTTP_PORT_KEY 
in configurations:
  port = int(configurations[HIVE_SERVER_THRIFT_HTTP_PORT_KEY])

cmd = format("export HIVE_CONF_DIR='{conf_dir}' ; "

 "beeline -u jdbc:hive2://{host_name}:{port}/\
 --hiveconf hive.metastore.client.connect.retry.delay=1\
 --hiveconf hive.metastore.failure.retries=1\
 --hiveconf hive.metastore.connect.retries=1\
 --hiveconf hive.metastore.client.socket.timeout=14\
 --hiveconf hive.execution.engine=mr -e 'show databases;'")

  was:
alert_hive_metastore.py will cause orphan processes running over time. Below is 
one example:


{code:java}
1001 593317 593316  0 Dec24 ?00:00:00 -bash -c export  
PATH='/usr/sbin:/sbin:/usr/
lib/ambari-server/*:/sbin:/usr/sbin:/bin:/usr/bin:/var/lib/ambari-agent:/bin/:/usr/bin/:/usr/s
bin/:/usr/hdp/current/hive-metastore/bin' ; export 

[jira] [Updated] (AMBARI-22701) hive CLI process leak on metastore alert

2017-12-27 Thread Hoc Phan (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoc Phan updated AMBARI-22701:
--
Description: 
alert_hive_metastore.py will cause orphan processes running over time. Below is 
one example:


{code:java}
1001 593317 593316  0 Dec24 ?00:00:00 -bash -c export  
PATH='/usr/sbin:/sbin:/usr/
lib/ambari-server/*:/sbin:/usr/sbin:/bin:/usr/bin:/var/lib/ambari-agent:/bin/:/usr/bin/:/usr/s
bin/:/usr/hdp/current/hive-metastore/bin' ; export 
HIVE_CONF_DIR="/usr/hdp/current/hive-metastore/conf/conf.server" ; hive 
--hiveconf hive.metastore.uris=thrift://demo.local:9083 
--hiveconf hive.metastore.client.connect.retry.delay=1 
--hiveconf hive.metastore.failure.retries=1 --hiveconf 
hive.metastore.connect.retries=1 --hiveconf 
hive.metastore.client.socket.timeout=14 --hiveconf 
hive.execution.engine=mr -e "show databases;"
{code}


There could be thousands of these accumulated over many months on the host running Hive 
Metastore. To check, run the following two commands:

ps -ef | grep "[s]how databases" | wc -l
ps h -Led -o user | sort | uniq -c | sort -n

This will hit the nproc limit and crash other services on the same host.

The fixes are:
1. Swap to "hive" user instead of "ambari-qa" user: 
https://issues.apache.org/jira/browse/AMBARI-22142

2. Change hive CLI to beeline:
https://issues.apache.org/jira/browse/AMBARI-17006

For some reason, the Hive CLI processes don't get killed and keep "lingering" 
around.

Proposed fix in 
/var/lib/ambari-server/resources/common-services/HIVE/0.12.0.2.0/package/alerts

Instructions:

1. Add the following lines below "HIVE_METASTORE_URIS_KEY = 
'{{hive-site/hive.metastore.uris}}'"

HIVE_SERVER_THRIFT_PORT_KEY = '{{hive-site/hive.server2.thrift.port}}'
HIVE_SERVER_THRIFT_HTTP_PORT_KEY = '{{hive-site/hive.server2.thrift.http.port}}'
HIVE_SERVER_TRANSPORT_MODE_KEY = '{{hive-site/hive.server2.transport.mode}}'
THRIFT_PORT_DEFAULT = 1
HIVE_SERVER_TRANSPORT_MODE_DEFAULT = 'binary'

2. Change SMOKEUSER_DEFAULT = 'ambari-qa' to:

SMOKEUSER_DEFAULT = 'hive'

3. Replace   
return (SECURITY_ENABLED_KEY,SMOKEUSER_KEYTAB_KEY,SMOKEUSER_PRINCIPAL_KEY, 
HIVE_METASTORE_URIS_KEY, SMOKEUSER_KEY, KERBEROS_EXECUTABLE_SEARCH_PATHS_KEY, 
STACK_ROOT)

with this:

  return (SECURITY_ENABLED_KEY,SMOKEUSER_KEYTAB_KEY,SMOKEUSER_PRINCIPAL_KEY, 
HIVE_METASTORE_URIS_KEY, SMOKEUSER_KEY, KERBEROS_EXECUTABLE_SEARCH_PATHS_KEY, 
STACK_ROOT, HIVE_SERVER_THRIFT_PORT_KEY, HIVE_SERVER_THRIFT_HTTP_PORT_KEY, 
HIVE_SERVER_TRANSPORT_MODE_KEY)

4. Replace this

return (HIVE_METASTORE_URIS_KEY, HADOOPUSER_KEY)

with this:

  return (HIVE_SERVER_THRIFT_PORT_KEY, HIVE_SERVER_THRIFT_HTTP_PORT_KEY, 
HIVE_SERVER_TRANSPORT_MODE_KEY, HIVE_METASTORE_URIS_KEY, HADOOPUSER_KEY)


5. Comment out these lines, because they keep injecting the ambari-qa user back:

  #if SMOKEUSER_KEY in configurations:
  #  smokeuser = configurations[SMOKEUSER_KEY]

6. Replace this code block:


cmd = format("export HIVE_CONF_DIR='{conf_dir}' ; "
 "hive --hiveconf hive.metastore.uris={metastore_uri}\

 --hiveconf hive.metastore.client.connect.retry.delay=1\
 --hiveconf hive.metastore.failure.retries=1\
 --hiveconf hive.metastore.connect.retries=1\
 --hiveconf hive.metastore.client.socket.timeout=14\
 --hiveconf hive.execution.engine=mr -e 'show databases;'")

with this block:

transport_mode = HIVE_SERVER_TRANSPORT_MODE_DEFAULT
if HIVE_SERVER_TRANSPORT_MODE_KEY in configurations:
  transport_mode = configurations[HIVE_SERVER_TRANSPORT_MODE_KEY]

port = THRIFT_PORT_DEFAULT
if transport_mode.lower() == 'binary' and HIVE_SERVER_THRIFT_PORT_KEY in 
configurations:
  port = int(configurations[HIVE_SERVER_THRIFT_PORT_KEY])
elif transport_mode.lower() == 'http' and HIVE_SERVER_THRIFT_HTTP_PORT_KEY 
in configurations:
  port = int(configurations[HIVE_SERVER_THRIFT_HTTP_PORT_KEY])

cmd = format("export HIVE_CONF_DIR='{conf_dir}' ; "

 "beeline -u jdbc:hive2://{host_name}:{port}/\
 --hiveconf hive.metastore.client.connect.retry.delay=1\
 --hiveconf hive.metastore.failure.retries=1\
 --hiveconf hive.metastore.connect.retries=1\
 --hiveconf hive.metastore.client.socket.timeout=14\
 --hiveconf hive.execution.engine=mr -e 'show databases;'")

  was:
alert_hive_metastore.py will cause orphan processes running over time. Below is 
one example:

1001 593317 593316  0 Dec24 ?00:00:00 -bash -c export  
PATH='/usr/sbin:/sbin:/usr/
lib/ambari-server/*:/sbin:/usr/sbin:/bin:/usr/bin:/var/lib/ambari-agent:/bin/:/usr/bin/:/usr/s
bin/:/usr/hdp/current/hive-metastore/bin' ; export 
HIVE_CONF_DIR="/usr/hdp/current/hive-metastore/conf/conf.server" ; 

[jira] [Created] (AMBARI-22701) hive CLI process leak on metastore alert

2017-12-27 Thread Hoc Phan (JIRA)
Hoc Phan created AMBARI-22701:
-

 Summary: hive CLI process leak on metastore alert
 Key: AMBARI-22701
 URL: https://issues.apache.org/jira/browse/AMBARI-22701
 Project: Ambari
  Issue Type: Bug
  Components: alerts
Affects Versions: 2.4.0
 Environment: CentOS 6.9
Ambari 2.4.0.1
Hortonworks Hadoop 2.5.0.0-1245
Hive installed
Tez installed
Reporter: Hoc Phan


alert_hive_metastore.py will cause orphan processes running over time. Below is 
one example:

1001 593317 593316  0 Dec24 ?00:00:00 -bash -c export  
PATH='/usr/sbin:/sbin:/usr/
lib/ambari-server/*:/sbin:/usr/sbin:/bin:/usr/bin:/var/lib/ambari-agent:/bin/:/usr/bin/:/usr/s
bin/:/usr/hdp/current/hive-metastore/bin' ; export 
HIVE_CONF_DIR="/usr/hdp/current/hive-metastore/conf/conf.server" ; hive 
--hiveconf hive.metastore.uris=thrift://demo.local:9083 
--hiveconf hive.metastore.client.connect.retry.delay=1 
--hiveconf hive.metastore.failure.retries=1 --hiveconf 
hive.metastore.connect.retries=1 --hiveconf 
hive.metastore.client.socket.timeout=14 --hiveconf 
hive.execution.engine=mr -e "show databases;"

There could be thousands of these accumulated over many months on the host running Hive 
Metastore. To check, run the following two commands:

ps -ef | grep "[s]how databases" | wc -l
ps h -Led -o user | sort | uniq -c | sort -n

This will hit the nproc limit and crash other services on the same host.

The fixes are:
1. Swap to "hive" user instead of "ambari-qa" user: 
https://issues.apache.org/jira/browse/AMBARI-22142

2. Change hive CLI to beeline:
https://issues.apache.org/jira/browse/AMBARI-17006

For some reason, the Hive CLI processes don't get killed and keep "lingering" 
around.

Proposed fix in 
/var/lib/ambari-server/resources/common-services/HIVE/0.12.0.2.0/package/alerts

Instructions:

1. Add the following lines below "HIVE_METASTORE_URIS_KEY = 
'{{hive-site/hive.metastore.uris}}'"

HIVE_SERVER_THRIFT_PORT_KEY = '{{hive-site/hive.server2.thrift.port}}'
HIVE_SERVER_THRIFT_HTTP_PORT_KEY = '{{hive-site/hive.server2.thrift.http.port}}'
HIVE_SERVER_TRANSPORT_MODE_KEY = '{{hive-site/hive.server2.transport.mode}}'
THRIFT_PORT_DEFAULT = 1
HIVE_SERVER_TRANSPORT_MODE_DEFAULT = 'binary'

2. Change SMOKEUSER_DEFAULT = 'ambari-qa' to:

SMOKEUSER_DEFAULT = 'hive'

3. Replace   
return (SECURITY_ENABLED_KEY,SMOKEUSER_KEYTAB_KEY,SMOKEUSER_PRINCIPAL_KEY, 
HIVE_METASTORE_URIS_KEY, SMOKEUSER_KEY, KERBEROS_EXECUTABLE_SEARCH_PATHS_KEY, 
STACK_ROOT)

with this:

  return (SECURITY_ENABLED_KEY,SMOKEUSER_KEYTAB_KEY,SMOKEUSER_PRINCIPAL_KEY, 
HIVE_METASTORE_URIS_KEY, SMOKEUSER_KEY, KERBEROS_EXECUTABLE_SEARCH_PATHS_KEY, 
STACK_ROOT, HIVE_SERVER_THRIFT_PORT_KEY, HIVE_SERVER_THRIFT_HTTP_PORT_KEY, 
HIVE_SERVER_TRANSPORT_MODE_KEY)

4. Replace this

return (HIVE_METASTORE_URIS_KEY, HADOOPUSER_KEY)

with this:

  return (HIVE_SERVER_THRIFT_PORT_KEY, HIVE_SERVER_THRIFT_HTTP_PORT_KEY, 
HIVE_SERVER_TRANSPORT_MODE_KEY, HIVE_METASTORE_URIS_KEY, HADOOPUSER_KEY)


5. Comment out these lines, because they keep injecting the ambari-qa user back:

  #if SMOKEUSER_KEY in configurations:
  #  smokeuser = configurations[SMOKEUSER_KEY]

6. Replace this code block:


cmd = format("export HIVE_CONF_DIR='{conf_dir}' ; "
 "hive --hiveconf hive.metastore.uris={metastore_uri}\

 --hiveconf hive.metastore.client.connect.retry.delay=1\
 --hiveconf hive.metastore.failure.retries=1\
 --hiveconf hive.metastore.connect.retries=1\
 --hiveconf hive.metastore.client.socket.timeout=14\
 --hiveconf hive.execution.engine=mr -e 'show databases;'")

with this block:

transport_mode = HIVE_SERVER_TRANSPORT_MODE_DEFAULT
if HIVE_SERVER_TRANSPORT_MODE_KEY in configurations:
  transport_mode = configurations[HIVE_SERVER_TRANSPORT_MODE_KEY]

port = THRIFT_PORT_DEFAULT
if transport_mode.lower() == 'binary' and HIVE_SERVER_THRIFT_PORT_KEY in 
configurations:
  port = int(configurations[HIVE_SERVER_THRIFT_PORT_KEY])
elif transport_mode.lower() == 'http' and HIVE_SERVER_THRIFT_HTTP_PORT_KEY 
in configurations:
  port = int(configurations[HIVE_SERVER_THRIFT_HTTP_PORT_KEY])

cmd = format("export HIVE_CONF_DIR='{conf_dir}' ; "

 "beeline -u jdbc:hive2://{host_name}:{port}/\
 --hiveconf hive.metastore.client.connect.retry.delay=1\
 --hiveconf hive.metastore.failure.retries=1\
 --hiveconf hive.metastore.connect.retries=1\
 --hiveconf hive.metastore.client.socket.timeout=14\
 --hiveconf hive.execution.engine=mr -e 'show databases;'")



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-22700) Post-install: UI style fixes

2017-12-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-22700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16304673#comment-16304673
 ] 

Hudson commented on AMBARI-22700:
-

SUCCESS: Integrated in Jenkins build Ambari-trunk-Commit #8561 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/8561/])
AMBARI-22700 Post-install: UI style fixes. (atkach) (atkach: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git=commit=02887284a1d19666e52b61884ad9daf8db040e68])
* (edit) ambari-web/app/styles/config_versions_control.less
* (edit) ambari-web/app/styles/enhanced_service_dashboard.less
* (edit) ambari-web/app/styles/modal_popups.less
* (edit) 
ambari-web/app/views/common/configs/service_config_layout_tab_compare_view.js
* (edit) ambari-web/app/styles/application.less
* (edit) ambari-web/app/templates/common/configs/config_versions_dropdown.hbs
* (edit) ambari-web/app/templates/application.hbs
* (edit) ambari-web/app/templates/main/service/menu_item.hbs
* (edit) ambari-web/app/templates/common/host_progress_popup.hbs
* (edit) 
ambari-web/app/templates/common/configs/service_config_layout_tab_compare.hbs
* (edit) ambari-web/app/views/common/host_progress_popup_body_view.js


> Post-install: UI style fixes
> 
>
> Key: AMBARI-22700
> URL: https://issues.apache.org/jira/browse/AMBARI-22700
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 3.0.0
>Reporter: Andrii Tkach
>Assignee: Andrii Tkach
> Fix For: 3.0.0
>
> Attachments: AMBARI-22700.patch
>
>
> Issues:
> # Unknown color of service in navigation should be yellow
> # Remove left-right pad from Background Operations content
> # Status dropdown in Background Operations should be styled
> # Clicking on cluster name shouldn't open Background Operations
> # Align dropdowns on Configs page
> # Accordions in Compare mode missing arrow icon
> # Restart required icons thrown on the next line in firefox
> # Equalize padding of widget title
> # Add padding for wizard modals



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-22625) Non DFS Used from HDFS Namenode UI and HDFS summary in Ambari is different

2017-12-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-22625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16304668#comment-16304668
 ] 

Hadoop QA commented on AMBARI-22625:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12903811/AMBARI-22625-trunk.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
ambari-web.

Console output: 
https://builds.apache.org/job/Ambari-trunk-test-patch/12900//console

This message is automatically generated.

> Non DFS Used from HDFS Namenode UI and HDFS summary in Ambari  is different
> ---
>
> Key: AMBARI-22625
> URL: https://issues.apache.org/jira/browse/AMBARI-22625
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.5.2
>Reporter: Akhil S Naik
>Priority: Minor
> Fix For: 2.6.0
>
> Attachments: AMBARI-22625-trunk.patch, ambari-2.png, namenode.png
>
>
> The 'Non DFS Used' value in Services -> HDFS -> Summary is different from the 
> value shown in the NameNode UI (see pictures).
> In the NameNode UI -->
> !https://issues.apache.org/jira/secure/attachment/12901501/namenode.png!
> In Services -> HDFS -> Summary
> !https://issues.apache.org/jira/secure/attachment/12901499/ambari-2.png!
> In the NameNode UI, Non DFS Used is taken from the 'NonDfsUsedSpace' variable of 
> the REST API call 
> http://host1:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo 
> where host1 is the NameNode server FQDN.
> In Ambari, however, the same value is calculated as (Total Allocated - DFS 
> Used - DFS Remaining), which is why the discrepancy occurs.
> As a fix, we need to use the 'capacityNonDfsUsed' value that comes from the server 
> in the NameNode metrics
> ('FSNamesystem > CapacityNonDFSUsed').
> The patch is attached to the JIRA.
> Please fix this bug.
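The discrepancy can be reproduced directly against the JMX endpoint mentioned above. The 
sketch below compares the value Ambari currently derives (Total Allocated - DFS Used - DFS 
Remaining) with the 'NonDfsUsedSpace' value reported by the NameNode; the attribute names 
follow the description above and the host is an assumption:

{code:python}
import requests

# Assumed NameNode host; qry selects the NameNodeInfo bean as in the description.
JMX_URL = "http://host1:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo"

bean = requests.get(JMX_URL, timeout=30).json()["beans"][0]

total, used, free = bean["Total"], bean["Used"], bean["Free"]
derived_non_dfs = total - used - free          # what Ambari currently computes
reported_non_dfs = bean["NonDfsUsedSpace"]     # what the NameNode UI shows

print("Derived  (Total - DFS Used - DFS Remaining):", derived_non_dfs)
print("Reported (NonDfsUsedSpace):                 ", reported_non_dfs)
{code}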



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-22693) UpgradeUserKerberosDescriptor is not executed during stack upgrade due to missing target stack data

2017-12-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-22693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16304648#comment-16304648
 ] 

Hadoop QA commented on AMBARI-22693:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12903816/AMBARI-22693_branch-2.6.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/Ambari-trunk-test-patch/12899//console

This message is automatically generated.

> UpgradeUserKerberosDescriptor is not executed during stack upgrade due to 
> missing target stack data
> ---
>
> Key: AMBARI-22693
> URL: https://issues.apache.org/jira/browse/AMBARI-22693
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.6.0, 2.6.1
>Reporter: Robert Levas
>Assignee: Robert Levas
>Priority: Critical
> Fix For: 2.6.2
>
> Attachments: AMBARI-22693_branch-2.6.patch
>
>
> UpgradeUserKerberosDescriptor is not executed during stack upgrade due to 
> missing target stack data.
> *Steps to reproduce*
> # Deploy cluster with Ambari version 2.6.0 and HDP version 2.4
> ** Storm should be installed to guarantee an error
> # Do Express upgrade to HDP version 2.6
> # Regenerate Keytabs.
> Upon restarting Storm the following error is encountered
> {code:java}
> Exception in thread "main" java.lang.ExceptionInInitializerError
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:348)
>   at clojure.lang.RT.classForName(RT.java:2154)
>   at clojure.lang.RT.classForName(RT.java:2163)
>   at clojure.lang.RT.loadClassForName(RT.java:2182)
>   at clojure.lang.RT.load(RT.java:436)
>   at clojure.lang.RT.load(RT.java:412)
>   at clojure.core$load$fn__5448.invoke(core.clj:5866)
>   at clojure.core$load.doInvoke(core.clj:5865)
>   at clojure.lang.RestFn.invoke(RestFn.java:408)
>   at clojure.core$load_one.invoke(core.clj:5671)
>   at clojure.core$load_lib$fn__5397.invoke(core.clj:5711)
>   at clojure.core$load_lib.doInvoke(core.clj:5710)
>   at clojure.lang.RestFn.applyTo(RestFn.java:142)
>   at clojure.core$apply.invoke(core.clj:632)
>   at clojure.core$load_libs.doInvoke(core.clj:5749)
>   at clojure.lang.RestFn.applyTo(RestFn.java:137)
>   at clojure.core$apply.invoke(core.clj:632)
>   at clojure.core$require.doInvoke(core.clj:5832)
>   at clojure.lang.RestFn.invoke(RestFn.java:408)
>   at 
> org.apache.storm.daemon.nimbus$loading__5340__auto982.invoke(nimbus.clj:16)
>   at org.apache.storm.daemon.nimbus__init.load(Unknown Source)
>   at org.apache.storm.daemon.nimbus__init.(Unknown Source)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:348)
>   at clojure.lang.RT.classForName(RT.java:2154)
>   at clojure.lang.RT.classForName(RT.java:2163)
>   at clojure.lang.RT.loadClassForName(RT.java:2182)
>   at clojure.lang.RT.load(RT.java:436)
>   at clojure.lang.RT.load(RT.java:412)
>   at clojure.core$load$fn__5448.invoke(core.clj:5866)
>   at clojure.core$load.doInvoke(core.clj:5865)
>   at clojure.lang.RestFn.invoke(RestFn.java:408)
>   at clojure.lang.Var.invoke(Var.java:379)
>   at org.apache.storm.daemon.nimbus.(Unknown Source)
> Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException: 
> backtype.storm.security.auth.KerberosPrincipalToLocal
>   at 
> org.apache.storm.security.auth.AuthUtils.GetPrincipalToLocalPlugin(AuthUtils.java:125)
>   at 
> org.apache.storm.security.auth.authorizer.ImpersonationAuthorizer.prepare(ImpersonationAuthorizer.java:54)
>   at 
> org.apache.storm.daemon.common$mk_authorization_handler.invoke(common.clj:417)
>   at org.apache.storm.ui.core__init.load(Unknown Source)
>   at org.apache.storm.ui.core__init.(Unknown Source)
>   ... 35 more
> Caused by: java.lang.ClassNotFoundException: 
> backtype.storm.security.auth.KerberosPrincipalToLocal
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:264)
>   at 
> org.apache.storm.security.auth.AuthUtils.GetPrincipalToLocalPlugin(AuthUtils.java:121)
>   ... 39 more
> {code}
> *Cause*
> In the following code snippet, {{targetStackID}} is {{null}}:
> 

[jira] [Updated] (AMBARI-22700) Post-install: UI style fixes

2017-12-27 Thread Andrii Tkach (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrii Tkach updated AMBARI-22700:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Post-install: UI style fixes
> 
>
> Key: AMBARI-22700
> URL: https://issues.apache.org/jira/browse/AMBARI-22700
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 3.0.0
>Reporter: Andrii Tkach
>Assignee: Andrii Tkach
> Fix For: 3.0.0
>
> Attachments: AMBARI-22700.patch
>
>
> Issues:
> # Unknown color of service in navigation should be yellow
> # Remove left-right pad from Background Operations content
> # Status dropdown in Background Operations should be styled
> # Clicking on cluster name shouldn't open Background Operations
> # Align dropdowns on Configs page
> # Accordions in Compare mode missing arrow icon
> # Restart required icons thrown on the next line in firefox
> # Equalize padding of widget title
> # Add padding for wizard modals



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-22700) Post-install: UI style fixes

2017-12-27 Thread Andrii Tkach (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-22700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16304644#comment-16304644
 ] 

Andrii Tkach commented on AMBARI-22700:
---

committed to trunk

> Post-install: UI style fixes
> 
>
> Key: AMBARI-22700
> URL: https://issues.apache.org/jira/browse/AMBARI-22700
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 3.0.0
>Reporter: Andrii Tkach
>Assignee: Andrii Tkach
> Fix For: 3.0.0
>
> Attachments: AMBARI-22700.patch
>
>
> Issues:
> # Unknown color of service in navigation should be yellow
> # Remove left-right pad from Background Operations content
> # Status dropdown in Background Operations should be styled
> # Clicking on cluster name shouldn't open Background Operations
> # Align dropdowns on Configs page
> # Accordions in Compare mode missing arrow icon
> # Restart required icons thrown on the next line in firefox
> # Equalize padding of widget title
> # Add padding for wizard modals



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-22700) Post-install: UI style fixes

2017-12-27 Thread Aleksandr Kovalenko (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-22700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16304613#comment-16304613
 ] 

Aleksandr Kovalenko commented on AMBARI-22700:
--

+1 for the patch

> Post-install: UI style fixes
> 
>
> Key: AMBARI-22700
> URL: https://issues.apache.org/jira/browse/AMBARI-22700
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 3.0.0
>Reporter: Andrii Tkach
>Assignee: Andrii Tkach
> Fix For: 3.0.0
>
> Attachments: AMBARI-22700.patch
>
>
> Issues:
> # Unknown color of service in navigation should be yellow
> # Remove left-right pad from Background Operations content
> # Status dropdown in Background Operations should be styled
> # Clicking on cluster name shouldn't open Background Operations
> # Align dropdowns on Configs page
> # Accordions in Compare mode missing arrow icon
> # Restart required icons thrown on the next line in firefox
> # Equalize padding of widget title
> # Add padding for wizard modals



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22693) UpgradeUserKerberosDescriptor is not executed during stack upgrade due to missing target stack data

2017-12-27 Thread Robert Levas (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Levas updated AMBARI-22693:
--
Status: Patch Available  (was: In Progress)

> UpgradeUserKerberosDescriptor is not executed during stack upgrade due to 
> missing target stack data
> ---
>
> Key: AMBARI-22693
> URL: https://issues.apache.org/jira/browse/AMBARI-22693
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.6.0, 2.6.1
>Reporter: Robert Levas
>Assignee: Robert Levas
>Priority: Critical
> Fix For: 2.6.2
>
> Attachments: AMBARI-22693_branch-2.6.patch
>
>
> UpgradeUserKerberosDescriptor is not executed during stack upgrade due to 
> missing target stack data.
> *Steps to reproduce*
> # Deploy cluster with Ambari version 2.6.0 and HDP version 2.4
> ** Storm should be installed to guarantee an error
> # Do Express upgrade to HDP version 2.6
> # Regenerate Keytabs.
> Upon restarting Storm the following error is encountered
> {code:java}
> Exception in thread "main" java.lang.ExceptionInInitializerError
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:348)
>   at clojure.lang.RT.classForName(RT.java:2154)
>   at clojure.lang.RT.classForName(RT.java:2163)
>   at clojure.lang.RT.loadClassForName(RT.java:2182)
>   at clojure.lang.RT.load(RT.java:436)
>   at clojure.lang.RT.load(RT.java:412)
>   at clojure.core$load$fn__5448.invoke(core.clj:5866)
>   at clojure.core$load.doInvoke(core.clj:5865)
>   at clojure.lang.RestFn.invoke(RestFn.java:408)
>   at clojure.core$load_one.invoke(core.clj:5671)
>   at clojure.core$load_lib$fn__5397.invoke(core.clj:5711)
>   at clojure.core$load_lib.doInvoke(core.clj:5710)
>   at clojure.lang.RestFn.applyTo(RestFn.java:142)
>   at clojure.core$apply.invoke(core.clj:632)
>   at clojure.core$load_libs.doInvoke(core.clj:5749)
>   at clojure.lang.RestFn.applyTo(RestFn.java:137)
>   at clojure.core$apply.invoke(core.clj:632)
>   at clojure.core$require.doInvoke(core.clj:5832)
>   at clojure.lang.RestFn.invoke(RestFn.java:408)
>   at 
> org.apache.storm.daemon.nimbus$loading__5340__auto982.invoke(nimbus.clj:16)
>   at org.apache.storm.daemon.nimbus__init.load(Unknown Source)
>   at org.apache.storm.daemon.nimbus__init.(Unknown Source)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:348)
>   at clojure.lang.RT.classForName(RT.java:2154)
>   at clojure.lang.RT.classForName(RT.java:2163)
>   at clojure.lang.RT.loadClassForName(RT.java:2182)
>   at clojure.lang.RT.load(RT.java:436)
>   at clojure.lang.RT.load(RT.java:412)
>   at clojure.core$load$fn__5448.invoke(core.clj:5866)
>   at clojure.core$load.doInvoke(core.clj:5865)
>   at clojure.lang.RestFn.invoke(RestFn.java:408)
>   at clojure.lang.Var.invoke(Var.java:379)
>   at org.apache.storm.daemon.nimbus.(Unknown Source)
> Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException: 
> backtype.storm.security.auth.KerberosPrincipalToLocal
>   at 
> org.apache.storm.security.auth.AuthUtils.GetPrincipalToLocalPlugin(AuthUtils.java:125)
>   at 
> org.apache.storm.security.auth.authorizer.ImpersonationAuthorizer.prepare(ImpersonationAuthorizer.java:54)
>   at 
> org.apache.storm.daemon.common$mk_authorization_handler.invoke(common.clj:417)
>   at org.apache.storm.ui.core__init.load(Unknown Source)
>   at org.apache.storm.ui.core__init.(Unknown Source)
>   ... 35 more
> Caused by: java.lang.ClassNotFoundException: 
> backtype.storm.security.auth.KerberosPrincipalToLocal
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:264)
>   at 
> org.apache.storm.security.auth.AuthUtils.GetPrincipalToLocalPlugin(AuthUtils.java:121)
>   ... 39 more
> {code}
> *Cause*
> In the following code snippet, {{targetStackID}} is {{null}}:
> {code:title=org/apache/ambari/server/serveraction/upgrades/UpgradeUserKerberosDescriptor.java:103}
> StackId targetStackId = 
> getStackIdFromCommandParams(KeyNames.TARGET_STACK);
> {code}
> This causes the logic in  {{UpgradeUserKerberosDescriptor}} to be skipped.  
> *Solution*
> Change the code snippet above to 
> {code:title=org/apache/ambari/server/serveraction/upgrades/UpgradeUserKerberosDescriptor.java:103}
> 

[jira] [Updated] (AMBARI-22693) UpgradeUserKerberosDescriptor is not executed during stack upgrade due to missing target stack data

2017-12-27 Thread Robert Levas (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Levas updated AMBARI-22693:
--
Attachment: AMBARI-22693_branch-2.6.patch

> UpgradeUserKerberosDescriptor is not executed during stack upgrade due to 
> missing target stack data
> ---
>
> Key: AMBARI-22693
> URL: https://issues.apache.org/jira/browse/AMBARI-22693
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.6.0, 2.6.1
>Reporter: Robert Levas
>Assignee: Robert Levas
>Priority: Critical
> Fix For: 2.6.2
>
> Attachments: AMBARI-22693_branch-2.6.patch
>
>
> UpgradeUserKerberosDescriptor is not executed during stack upgrade due to 
> missing target stack data.
> *Steps to reproduce*
> # Deploy cluster with Ambari version 2.6.0 and HDP version 2.4
> ** Storm should be installed to guarantee an error
> # Do Express upgrade to HDP version 2.6
> # Regenerate Keytabs.
> Upon restarting Storm the following error is encountered
> {code:java}
> Exception in thread "main" java.lang.ExceptionInInitializerError
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:348)
>   at clojure.lang.RT.classForName(RT.java:2154)
>   at clojure.lang.RT.classForName(RT.java:2163)
>   at clojure.lang.RT.loadClassForName(RT.java:2182)
>   at clojure.lang.RT.load(RT.java:436)
>   at clojure.lang.RT.load(RT.java:412)
>   at clojure.core$load$fn__5448.invoke(core.clj:5866)
>   at clojure.core$load.doInvoke(core.clj:5865)
>   at clojure.lang.RestFn.invoke(RestFn.java:408)
>   at clojure.core$load_one.invoke(core.clj:5671)
>   at clojure.core$load_lib$fn__5397.invoke(core.clj:5711)
>   at clojure.core$load_lib.doInvoke(core.clj:5710)
>   at clojure.lang.RestFn.applyTo(RestFn.java:142)
>   at clojure.core$apply.invoke(core.clj:632)
>   at clojure.core$load_libs.doInvoke(core.clj:5749)
>   at clojure.lang.RestFn.applyTo(RestFn.java:137)
>   at clojure.core$apply.invoke(core.clj:632)
>   at clojure.core$require.doInvoke(core.clj:5832)
>   at clojure.lang.RestFn.invoke(RestFn.java:408)
>   at 
> org.apache.storm.daemon.nimbus$loading__5340__auto982.invoke(nimbus.clj:16)
>   at org.apache.storm.daemon.nimbus__init.load(Unknown Source)
>   at org.apache.storm.daemon.nimbus__init.(Unknown Source)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:348)
>   at clojure.lang.RT.classForName(RT.java:2154)
>   at clojure.lang.RT.classForName(RT.java:2163)
>   at clojure.lang.RT.loadClassForName(RT.java:2182)
>   at clojure.lang.RT.load(RT.java:436)
>   at clojure.lang.RT.load(RT.java:412)
>   at clojure.core$load$fn__5448.invoke(core.clj:5866)
>   at clojure.core$load.doInvoke(core.clj:5865)
>   at clojure.lang.RestFn.invoke(RestFn.java:408)
>   at clojure.lang.Var.invoke(Var.java:379)
>   at org.apache.storm.daemon.nimbus.(Unknown Source)
> Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException: 
> backtype.storm.security.auth.KerberosPrincipalToLocal
>   at 
> org.apache.storm.security.auth.AuthUtils.GetPrincipalToLocalPlugin(AuthUtils.java:125)
>   at 
> org.apache.storm.security.auth.authorizer.ImpersonationAuthorizer.prepare(ImpersonationAuthorizer.java:54)
>   at 
> org.apache.storm.daemon.common$mk_authorization_handler.invoke(common.clj:417)
>   at org.apache.storm.ui.core__init.load(Unknown Source)
>   at org.apache.storm.ui.core__init.(Unknown Source)
>   ... 35 more
> Caused by: java.lang.ClassNotFoundException: 
> backtype.storm.security.auth.KerberosPrincipalToLocal
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:264)
>   at 
> org.apache.storm.security.auth.AuthUtils.GetPrincipalToLocalPlugin(AuthUtils.java:121)
>   ... 39 more
> {code}
> *Cause*
> In the following code snippet, {{targetStackId}} is {{null}}:
> {code:title=org/apache/ambari/server/serveraction/upgrades/UpgradeUserKerberosDescriptor.java:103}
> StackId targetStackId = 
> getStackIdFromCommandParams(KeyNames.TARGET_STACK);
> {code}
> This causes the logic in {{UpgradeUserKerberosDescriptor}} to be skipped.
> *Solution*
> Change the code snippet from above to
> {code:title=org/apache/ambari/server/serveraction/upgrades/UpgradeUserKerberosDescriptor.java:103}
> StackId targetStackId = cluster.getDesiredStackVersion();
> {code}
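> For illustration, here is a minimal sketch of a defensive variant of that fix (it assumes the surrounding fields and helpers of {{UpgradeUserKerberosDescriptor}} shown above; it is not necessarily the exact patch):
> {code:java}
> // Hypothetical sketch: prefer the stack id passed in the command params, and
> // fall back to the cluster's desired stack when the upgrade context lacks it.
> StackId targetStackId = getStackIdFromCommandParams(KeyNames.TARGET_STACK);
> if (targetStackId == null) {
>   targetStackId = cluster.getDesiredStackVersion();
> }
> {code}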

[jira] [Updated] (AMBARI-22693) UpgradeUserKerberosDescriptor is not executed during stack upgrade due to missing target stack data

2017-12-27 Thread Robert Levas (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Levas updated AMBARI-22693:
--
Attachment: AMBARI-22693_branch-2.6

> UpgradeUserKerberosDescriptor is not executed during stack upgrade due to 
> missing target stack data
> ---
>
> Key: AMBARI-22693
> URL: https://issues.apache.org/jira/browse/AMBARI-22693
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.6.0, 2.6.1
>Reporter: Robert Levas
>Assignee: Robert Levas
>Priority: Critical
> Fix For: 2.6.2
>
>
> UpgradeUserKerberosDescriptor is not executed during stack upgrade due to 
> missing target stack data.
> *Steps to reproduce*
> # Deploy cluster with Ambari version 2.6.0 and HDP version 2.4
> ** Storm should be installed to guarantee an error
> # Do Express upgrade to HDP version 2.6
> # Regenerate Keytabs.
> Upon restarting Storm the following error is encountered
> {code:java}
> Exception in thread "main" java.lang.ExceptionInInitializerError
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:348)
>   at clojure.lang.RT.classForName(RT.java:2154)
>   at clojure.lang.RT.classForName(RT.java:2163)
>   at clojure.lang.RT.loadClassForName(RT.java:2182)
>   at clojure.lang.RT.load(RT.java:436)
>   at clojure.lang.RT.load(RT.java:412)
>   at clojure.core$load$fn__5448.invoke(core.clj:5866)
>   at clojure.core$load.doInvoke(core.clj:5865)
>   at clojure.lang.RestFn.invoke(RestFn.java:408)
>   at clojure.core$load_one.invoke(core.clj:5671)
>   at clojure.core$load_lib$fn__5397.invoke(core.clj:5711)
>   at clojure.core$load_lib.doInvoke(core.clj:5710)
>   at clojure.lang.RestFn.applyTo(RestFn.java:142)
>   at clojure.core$apply.invoke(core.clj:632)
>   at clojure.core$load_libs.doInvoke(core.clj:5749)
>   at clojure.lang.RestFn.applyTo(RestFn.java:137)
>   at clojure.core$apply.invoke(core.clj:632)
>   at clojure.core$require.doInvoke(core.clj:5832)
>   at clojure.lang.RestFn.invoke(RestFn.java:408)
>   at 
> org.apache.storm.daemon.nimbus$loading__5340__auto982.invoke(nimbus.clj:16)
>   at org.apache.storm.daemon.nimbus__init.load(Unknown Source)
>   at org.apache.storm.daemon.nimbus__init.(Unknown Source)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:348)
>   at clojure.lang.RT.classForName(RT.java:2154)
>   at clojure.lang.RT.classForName(RT.java:2163)
>   at clojure.lang.RT.loadClassForName(RT.java:2182)
>   at clojure.lang.RT.load(RT.java:436)
>   at clojure.lang.RT.load(RT.java:412)
>   at clojure.core$load$fn__5448.invoke(core.clj:5866)
>   at clojure.core$load.doInvoke(core.clj:5865)
>   at clojure.lang.RestFn.invoke(RestFn.java:408)
>   at clojure.lang.Var.invoke(Var.java:379)
>   at org.apache.storm.daemon.nimbus.(Unknown Source)
> Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException: 
> backtype.storm.security.auth.KerberosPrincipalToLocal
>   at 
> org.apache.storm.security.auth.AuthUtils.GetPrincipalToLocalPlugin(AuthUtils.java:125)
>   at 
> org.apache.storm.security.auth.authorizer.ImpersonationAuthorizer.prepare(ImpersonationAuthorizer.java:54)
>   at 
> org.apache.storm.daemon.common$mk_authorization_handler.invoke(common.clj:417)
>   at org.apache.storm.ui.core__init.load(Unknown Source)
>   at org.apache.storm.ui.core__init.(Unknown Source)
>   ... 35 more
> Caused by: java.lang.ClassNotFoundException: 
> backtype.storm.security.auth.KerberosPrincipalToLocal
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:264)
>   at 
> org.apache.storm.security.auth.AuthUtils.GetPrincipalToLocalPlugin(AuthUtils.java:121)
>   ... 39 more
> {code}
> *Cause*
> In the following code snippet, {{targetStackId}} is {{null}}:
> {code:title=org/apache/ambari/server/serveraction/upgrades/UpgradeUserKerberosDescriptor.java:103}
> StackId targetStackId = 
> getStackIdFromCommandParams(KeyNames.TARGET_STACK);
> {code}
> This causes the logic in {{UpgradeUserKerberosDescriptor}} to be skipped.
> *Solution*
> Change the code snippet from above to
> {code:title=org/apache/ambari/server/serveraction/upgrades/UpgradeUserKerberosDescriptor.java:103}
> StackId targetStackId = cluster.getDesiredStackVersion();
> {code}

[jira] [Updated] (AMBARI-22693) UpgradeUserKerberosDescriptor is not executed during stack upgrade due to missing target stack data

2017-12-27 Thread Robert Levas (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Levas updated AMBARI-22693:
--
Attachment: (was: AMBARI-22693_branch-2.6)

> UpgradeUserKerberosDescriptor is not executed during stack upgrade due to 
> missing target stack data
> ---
>
> Key: AMBARI-22693
> URL: https://issues.apache.org/jira/browse/AMBARI-22693
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.6.0, 2.6.1
>Reporter: Robert Levas
>Assignee: Robert Levas
>Priority: Critical
> Fix For: 2.6.2
>
>
> UpgradeUserKerberosDescriptor is not executed during stack upgrade due to 
> missing target stack data.
> *Steps to reproduce*
> # Deploy cluster with Ambari version 2.6.0 and HDP version 2.4
> ** Storm should be installed to guarantee an error
> # Do Express upgrade to HDP version 2.6
> # Regenerate Keytabs.
> Upon restarting Storm the following error is encountered
> {code:java}
> Exception in thread "main" java.lang.ExceptionInInitializerError
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:348)
>   at clojure.lang.RT.classForName(RT.java:2154)
>   at clojure.lang.RT.classForName(RT.java:2163)
>   at clojure.lang.RT.loadClassForName(RT.java:2182)
>   at clojure.lang.RT.load(RT.java:436)
>   at clojure.lang.RT.load(RT.java:412)
>   at clojure.core$load$fn__5448.invoke(core.clj:5866)
>   at clojure.core$load.doInvoke(core.clj:5865)
>   at clojure.lang.RestFn.invoke(RestFn.java:408)
>   at clojure.core$load_one.invoke(core.clj:5671)
>   at clojure.core$load_lib$fn__5397.invoke(core.clj:5711)
>   at clojure.core$load_lib.doInvoke(core.clj:5710)
>   at clojure.lang.RestFn.applyTo(RestFn.java:142)
>   at clojure.core$apply.invoke(core.clj:632)
>   at clojure.core$load_libs.doInvoke(core.clj:5749)
>   at clojure.lang.RestFn.applyTo(RestFn.java:137)
>   at clojure.core$apply.invoke(core.clj:632)
>   at clojure.core$require.doInvoke(core.clj:5832)
>   at clojure.lang.RestFn.invoke(RestFn.java:408)
>   at 
> org.apache.storm.daemon.nimbus$loading__5340__auto982.invoke(nimbus.clj:16)
>   at org.apache.storm.daemon.nimbus__init.load(Unknown Source)
>   at org.apache.storm.daemon.nimbus__init.(Unknown Source)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:348)
>   at clojure.lang.RT.classForName(RT.java:2154)
>   at clojure.lang.RT.classForName(RT.java:2163)
>   at clojure.lang.RT.loadClassForName(RT.java:2182)
>   at clojure.lang.RT.load(RT.java:436)
>   at clojure.lang.RT.load(RT.java:412)
>   at clojure.core$load$fn__5448.invoke(core.clj:5866)
>   at clojure.core$load.doInvoke(core.clj:5865)
>   at clojure.lang.RestFn.invoke(RestFn.java:408)
>   at clojure.lang.Var.invoke(Var.java:379)
>   at org.apache.storm.daemon.nimbus.(Unknown Source)
> Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException: 
> backtype.storm.security.auth.KerberosPrincipalToLocal
>   at 
> org.apache.storm.security.auth.AuthUtils.GetPrincipalToLocalPlugin(AuthUtils.java:125)
>   at 
> org.apache.storm.security.auth.authorizer.ImpersonationAuthorizer.prepare(ImpersonationAuthorizer.java:54)
>   at 
> org.apache.storm.daemon.common$mk_authorization_handler.invoke(common.clj:417)
>   at org.apache.storm.ui.core__init.load(Unknown Source)
>   at org.apache.storm.ui.core__init.(Unknown Source)
>   ... 35 more
> Caused by: java.lang.ClassNotFoundException: 
> backtype.storm.security.auth.KerberosPrincipalToLocal
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:264)
>   at 
> org.apache.storm.security.auth.AuthUtils.GetPrincipalToLocalPlugin(AuthUtils.java:121)
>   ... 39 more
> {code}
> *Cause*
> In the following code snippet, {{targetStackId}} is {{null}}:
> {code:title=org/apache/ambari/server/serveraction/upgrades/UpgradeUserKerberosDescriptor.java:103}
> StackId targetStackId = 
> getStackIdFromCommandParams(KeyNames.TARGET_STACK);
> {code}
> This causes the logic in {{UpgradeUserKerberosDescriptor}} to be skipped.
> *Solution*
> Change the code snippet from above to
> {code:title=org/apache/ambari/server/serveraction/upgrades/UpgradeUserKerberosDescriptor.java:103}
> StackId targetStackId = cluster.getDesiredStackVersion();
> {code}

[jira] [Commented] (AMBARI-22700) Post-install: UI style fixes

2017-12-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-22700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16304599#comment-16304599
 ] 

Hadoop QA commented on AMBARI-22700:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12903814/AMBARI-22700.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
ambari-web.

Console output: 
https://builds.apache.org/job/Ambari-trunk-test-patch/12897//console

This message is automatically generated.

> Post-install: UI style fixes
> 
>
> Key: AMBARI-22700
> URL: https://issues.apache.org/jira/browse/AMBARI-22700
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 3.0.0
>Reporter: Andrii Tkach
>Assignee: Andrii Tkach
> Fix For: 3.0.0
>
> Attachments: AMBARI-22700.patch
>
>
> Issues:
> # Unknown color of service in navigation should be yellow
> # Remove left-right pad from Background Operations content
> # Status dropdown in Background Operations should be styled
> # Clicking on cluster name shouldn't open Background Operations
> # Align dropdowns on Configs page
> # Accordions in Compare mode missing arrow icon
> # Restart required icons are pushed to the next line in Firefox
> # Equalize padding of widget title
> # Add padding for wizard modals



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22700) Post-install: UI style fixes

2017-12-27 Thread Andrii Tkach (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrii Tkach updated AMBARI-22700:
--
Status: Patch Available  (was: Open)

> Post-install: UI style fixes
> 
>
> Key: AMBARI-22700
> URL: https://issues.apache.org/jira/browse/AMBARI-22700
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 3.0.0
>Reporter: Andrii Tkach
>Assignee: Andrii Tkach
> Fix For: 3.0.0
>
> Attachments: AMBARI-22700.patch
>
>
> Issues:
> # Unknown color of service in navigation should be yellow
> # Remove left-right pad from Background Operations content
> # Status dropdown in Background Operations should be styled
> # Clicking on cluster name shouldn't open Background Operations
> # Align dropdowns on Configs page
> # Accordions in Compare mode missing arrow icon
> # Restart required icons are pushed to the next line in Firefox
> # Equalize padding of widget title
> # Add padding for wizard modals



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22700) Post-install: UI style fixes

2017-12-27 Thread Andrii Tkach (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrii Tkach updated AMBARI-22700:
--
Status: Open  (was: Patch Available)

> Post-install: UI style fixes
> 
>
> Key: AMBARI-22700
> URL: https://issues.apache.org/jira/browse/AMBARI-22700
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 3.0.0
>Reporter: Andrii Tkach
>Assignee: Andrii Tkach
> Fix For: 3.0.0
>
> Attachments: AMBARI-22700.patch
>
>
> Issues:
> # Unknown color of service in navigation should be yellow
> # Remove left-right pad from Background Operations content
> # Status dropdown in Background Operations should be styled
> # Clicking on cluster name shouldn't open Background Operations
> # Align dropdowns on Configs page
> # Accordions in Compare mode missing arrow icon
> # Restart required icons are pushed to the next line in Firefox
> # Equalize padding of widget title
> # Add padding for wizard modals



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-22625) Non DFS Used from HDFS Namenode UI and HDFS summary in Ambari is different

2017-12-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-22625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16304574#comment-16304574
 ] 

Hadoop QA commented on AMBARI-22625:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12903811/AMBARI-22625-trunk.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
ambari-web.

Console output: 
https://builds.apache.org/job/Ambari-trunk-test-patch/12895//console

This message is automatically generated.

> Non DFS Used from HDFS Namenode UI and HDFS summary in Ambari  is different
> ---
>
> Key: AMBARI-22625
> URL: https://issues.apache.org/jira/browse/AMBARI-22625
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.5.2
>Reporter: Akhil S Naik
>Priority: Minor
> Fix For: 2.6.0
>
> Attachments: AMBARI-22625-trunk.patch, ambari-2.png, namenode.png
>
>
> The 'Non DFS Used' value in Services -> HDFS -> Summary is different from the 
> value shown in the NameNode UI (see pictures).
> In the NameNode UI -->
> !https://issues.apache.org/jira/secure/attachment/12901501/namenode.png!
> In Services -> HDFS -> Summary
> !https://issues.apache.org/jira/secure/attachment/12901499/ambari-2.png!
> In the NameNode UI, Non DFS Used is taken from the 'NonDfsUsedSpace' variable of 
> the REST API call: 
> http://host1:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo 
> where host1 is the NameNode server FQDN.
> In Ambari, however, the same value is calculated as (Total Allocated - DFS 
> Used - DFS Remaining), which is why the two values differ.
> As a fix, we need to use 'capacityNonDfsUsed', which the server reports in the 
> NameNode metrics as 'FSNamesystem > CapacityNonDFSUsed'.
> The patch is attached to the JIRA.
> Please fix this bug.
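> For illustration, a minimal, self-contained sketch that contrasts the two calculations (the class name and all numbers are made up; the real fix lives in ambari-web JavaScript, this only shows how the values can diverge):
> {code:java}
> // Illustrative only: contrasts the value Ambari derived with the value the
> // NameNode reports. All numbers are invented.
> public class NonDfsUsedExample {
>   public static void main(String[] args) {
>     long gb = 1024L * 1024 * 1024;
>     long capacityTotal = 100 * gb;       // Total Allocated
>     long dfsUsed = 30 * gb;              // DFS Used
>     long dfsRemaining = 50 * gb;         // DFS Remaining
>     long capacityNonDfsUsed = 15 * gb;   // FSNamesystem > CapacityNonDFSUsed from JMX
>
>     // Old Ambari calculation: derives the value, which can fold reserved
>     // DataNode space into the result and so disagree with the NameNode UI.
>     long derived = capacityTotal - dfsUsed - dfsRemaining;
>
>     // Fixed behaviour: show the value the NameNode itself reports.
>     System.out.println("derived  = " + derived / gb + " GB");
>     System.out.println("reported = " + capacityNonDfsUsed / gb + " GB");
>   }
> }
> {code}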



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22700) Post-install: UI style fixes

2017-12-27 Thread Andrii Tkach (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrii Tkach updated AMBARI-22700:
--
Attachment: AMBARI-22700.patch

> Post-install: UI style fixes
> 
>
> Key: AMBARI-22700
> URL: https://issues.apache.org/jira/browse/AMBARI-22700
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 3.0.0
>Reporter: Andrii Tkach
>Assignee: Andrii Tkach
> Fix For: 3.0.0
>
> Attachments: AMBARI-22700.patch
>
>
> Issues:
> # Unknown color of service in navigation should be yellow
> # Remove left-right pad from Background Operations content
> # Status dropdown in Background Operations should be styled
> # Clicking on cluster name shouldn't open Background Operations
> # Align dropdowns on Configs page
> # Accordions in Compare mode missing arrow icon
> # Restart required icons are pushed to the next line in Firefox
> # Equalize padding of widget title
> # Add padding for wizard modals



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22700) Post-install: UI style fixes

2017-12-27 Thread Andrii Tkach (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrii Tkach updated AMBARI-22700:
--
Attachment: (was: AMBARI-22700.patch)

> Post-install: UI style fixes
> 
>
> Key: AMBARI-22700
> URL: https://issues.apache.org/jira/browse/AMBARI-22700
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 3.0.0
>Reporter: Andrii Tkach
>Assignee: Andrii Tkach
> Fix For: 3.0.0
>
> Attachments: AMBARI-22700.patch
>
>
> Issues:
> # Unknown color of service in navigation should be yellow
> # Remove left-right pad from Background Operations content
> # Status dropdown in Background Operations should be styled
> # Clicking on cluster name shouldn't open Background Operations
> # Align dropdowns on Configs page
> # Accordions in Compare mode missing arrow icon
> # Restart required icons are pushed to the next line in Firefox
> # Equalize padding of widget title
> # Add padding for wizard modals



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22700) Post-install: UI style fixes

2017-12-27 Thread Andrii Tkach (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrii Tkach updated AMBARI-22700:
--
Status: Patch Available  (was: Open)

> Post-install: UI style fixes
> 
>
> Key: AMBARI-22700
> URL: https://issues.apache.org/jira/browse/AMBARI-22700
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 3.0.0
>Reporter: Andrii Tkach
>Assignee: Andrii Tkach
> Fix For: 3.0.0
>
> Attachments: AMBARI-22700.patch
>
>
> Issues:
> # Unknown color of service in navigation should be yellow
> # Remove left-right pad from Background Operations content
> # Status dropdown in Background Operations should be styled
> # Clicking on cluster name shouldn't open Background Operations
> # Align dropdowns on Configs page
> # Accordions in Compare mode missing arrow icon
> # Restart required icons are pushed to the next line in Firefox
> # Equalize padding of widget title
> # Add padding for wizard modals



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22700) Post-install: UI style fixes

2017-12-27 Thread Andrii Tkach (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrii Tkach updated AMBARI-22700:
--
Attachment: AMBARI-22700.patch

> Post-install: UI style fixes
> 
>
> Key: AMBARI-22700
> URL: https://issues.apache.org/jira/browse/AMBARI-22700
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 3.0.0
>Reporter: Andrii Tkach
>Assignee: Andrii Tkach
> Fix For: 3.0.0
>
> Attachments: AMBARI-22700.patch
>
>
> Issues:
> # Unknown color of service in navigation should be yellow
> # Remove left-right pad from Background Operations content
> # Status dropdown in Background Operations should be styled
> # Clicking on cluster name shouldn't open Background Operations
> # Align dropdowns on Configs page
> # Accordions in Compare mode missing arrow icon
> # Restart required icons are pushed to the next line in Firefox
> # Equalize padding of widget title
> # Add padding for wizard modals



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (AMBARI-22700) Post-install: UI style fixes

2017-12-27 Thread Andrii Tkach (JIRA)
Andrii Tkach created AMBARI-22700:
-

 Summary: Post-install: UI style fixes
 Key: AMBARI-22700
 URL: https://issues.apache.org/jira/browse/AMBARI-22700
 Project: Ambari
  Issue Type: Bug
  Components: ambari-web
Affects Versions: 3.0.0
Reporter: Andrii Tkach
Assignee: Andrii Tkach
 Fix For: 3.0.0


Issues:
# Unknown color of service in navigation should be yellow
# Remove left-right pad from Background Operations content
# Status dropdown in Background Operations should be styled
# Clicking on cluster name shouldn't open Background Operations
# Align dropdowns on Configs page
# Accordions in Compare mode missing arrow icon
# Restart required icons are pushed to the next line in Firefox
# Equalize padding of widget title
# Add padding for wizard modals



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22625) Non DFS Used from HDFS Namenode UI and HDFS summary in Ambari is different

2017-12-27 Thread Akhil S Naik (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil S Naik updated AMBARI-22625:
--
Attachment: AMBARI-22625-trunk.patch

> Non DFS Used from HDFS Namenode UI and HDFS summary in Ambari  is different
> ---
>
> Key: AMBARI-22625
> URL: https://issues.apache.org/jira/browse/AMBARI-22625
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.5.2
>Reporter: Akhil S Naik
>Priority: Minor
> Fix For: 2.6.0
>
> Attachments: AMBARI-22625-trunk.patch, ambari-2.png, namenode.png
>
>
> The 'Non DFS Used' value in Services -> HDFS -> Summary is different from the 
> value shown in the NameNode UI (see pictures).
> In the NameNode UI -->
> !https://issues.apache.org/jira/secure/attachment/12901501/namenode.png!
> In Services -> HDFS -> Summary
> !https://issues.apache.org/jira/secure/attachment/12901499/ambari-2.png!
> In the NameNode UI, Non DFS Used is taken from the 'NonDfsUsedSpace' variable of 
> the REST API call: 
> http://host1:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo 
> where host1 is the NameNode server FQDN.
> In Ambari, however, the same value is calculated as (Total Allocated - DFS 
> Used - DFS Remaining), which is why the two values differ.
> As a fix, we need to use 'capacityNonDfsUsed', which the server reports in the 
> NameNode metrics as 'FSNamesystem > CapacityNonDFSUsed'.
> The patch is attached to the JIRA.
> Please fix this bug.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22625) Non DFS Used from HDFS Namenode UI and HDFS summary in Ambari is different

2017-12-27 Thread Akhil S Naik (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil S Naik updated AMBARI-22625:
--
Status: Patch Available  (was: Open)

> Non DFS Used from HDFS Namenode UI and HDFS summary in Ambari  is different
> ---
>
> Key: AMBARI-22625
> URL: https://issues.apache.org/jira/browse/AMBARI-22625
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.5.2
>Reporter: Akhil S Naik
>Priority: Minor
> Fix For: 2.6.0
>
> Attachments: AMBARI-22625-trunk.patch, ambari-2.png, namenode.png
>
>
> The 'Non DFS Used' value in Services -> HDFS -> Summary is different from the 
> value shown in the NameNode UI (see pictures).
> In the NameNode UI -->
> !https://issues.apache.org/jira/secure/attachment/12901501/namenode.png!
> In Services -> HDFS -> Summary
> !https://issues.apache.org/jira/secure/attachment/12901499/ambari-2.png!
> In the NameNode UI, Non DFS Used is taken from the 'NonDfsUsedSpace' variable of 
> the REST API call: 
> http://host1:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo 
> where host1 is the NameNode server FQDN.
> In Ambari, however, the same value is calculated as (Total Allocated - DFS 
> Used - DFS Remaining), which is why the two values differ.
> As a fix, we need to use 'capacityNonDfsUsed', which the server reports in the 
> NameNode metrics as 'FSNamesystem > CapacityNonDFSUsed'.
> The patch is attached to the JIRA.
> Please fix this bug.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22625) Non DFS Used from HDFS Namenode UI and HDFS summary in Ambari is different

2017-12-27 Thread Akhil S Naik (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil S Naik updated AMBARI-22625:
--
Status: Open  (was: Patch Available)

> Non DFS Used from HDFS Namenode UI and HDFS summary in Ambari  is different
> ---
>
> Key: AMBARI-22625
> URL: https://issues.apache.org/jira/browse/AMBARI-22625
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.5.2
>Reporter: Akhil S Naik
>Priority: Minor
> Fix For: 2.6.0
>
> Attachments: AMBARI-22625-trunk.patch, ambari-2.png, namenode.png
>
>
> The 'Non DFS Used' value in Services -> HDFS -> Summary is different from the 
> value shown in the NameNode UI (see pictures).
> In the NameNode UI -->
> !https://issues.apache.org/jira/secure/attachment/12901501/namenode.png!
> In Services -> HDFS -> Summary
> !https://issues.apache.org/jira/secure/attachment/12901499/ambari-2.png!
> In the NameNode UI, Non DFS Used is taken from the 'NonDfsUsedSpace' variable of 
> the REST API call: 
> http://host1:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo 
> where host1 is the NameNode server FQDN.
> In Ambari, however, the same value is calculated as (Total Allocated - DFS 
> Used - DFS Remaining), which is why the two values differ.
> As a fix, we need to use 'capacityNonDfsUsed', which the server reports in the 
> NameNode metrics as 'FSNamesystem > CapacityNonDFSUsed'.
> The patch is attached to the JIRA.
> Please fix this bug.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22625) Non DFS Used from HDFS Namenode UI and HDFS summary in Ambari is different

2017-12-27 Thread Akhil S Naik (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil S Naik updated AMBARI-22625:
--
Attachment: (was: trunk.patch)

> Non DFS Used from HDFS Namenode UI and HDFS summary in Ambari  is different
> ---
>
> Key: AMBARI-22625
> URL: https://issues.apache.org/jira/browse/AMBARI-22625
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.5.2
>Reporter: Akhil S Naik
>Priority: Minor
> Fix For: 2.6.0
>
> Attachments: ambari-2.png, namenode.png
>
>
> The 'Non DFS Used' value in Services -> HDFS -> Summary is different from the 
> value shown in the NameNode UI (see pictures).
> In the NameNode UI -->
> !https://issues.apache.org/jira/secure/attachment/12901501/namenode.png!
> In Services -> HDFS -> Summary
> !https://issues.apache.org/jira/secure/attachment/12901499/ambari-2.png!
> In the NameNode UI, Non DFS Used is taken from the 'NonDfsUsedSpace' variable of 
> the REST API call: 
> http://host1:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo 
> where host1 is the NameNode server FQDN.
> In Ambari, however, the same value is calculated as (Total Allocated - DFS 
> Used - DFS Remaining), which is why the two values differ.
> As a fix, we need to use 'capacityNonDfsUsed', which the server reports in the 
> NameNode metrics as 'FSNamesystem > CapacityNonDFSUsed'.
> The patch is attached to the JIRA.
> Please fix this bug.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22625) Non DFS Used from HDFS Namenode UI and HDFS summary in Ambari is different

2017-12-27 Thread Akhil S Naik (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil S Naik updated AMBARI-22625:
--
Attachment: (was: release2.5.2.patch)

> Non DFS Used from HDFS Namenode UI and HDFS summary in Ambari  is different
> ---
>
> Key: AMBARI-22625
> URL: https://issues.apache.org/jira/browse/AMBARI-22625
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.5.2
>Reporter: Akhil S Naik
>Priority: Minor
> Fix For: 2.6.0
>
> Attachments: ambari-2.png, namenode.png
>
>
> The 'Non DFS Used' value in Services -> HDFS -> Summary is different from the 
> value shown in the NameNode UI (see pictures).
> In the NameNode UI -->
> !https://issues.apache.org/jira/secure/attachment/12901501/namenode.png!
> In Services -> HDFS -> Summary
> !https://issues.apache.org/jira/secure/attachment/12901499/ambari-2.png!
> In the NameNode UI, Non DFS Used is taken from the 'NonDfsUsedSpace' variable of 
> the REST API call: 
> http://host1:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo 
> where host1 is the NameNode server FQDN.
> In Ambari, however, the same value is calculated as (Total Allocated - DFS 
> Used - DFS Remaining), which is why the two values differ.
> As a fix, we need to use 'capacityNonDfsUsed', which the server reports in the 
> NameNode metrics as 'FSNamesystem > CapacityNonDFSUsed'.
> The patch is attached to the JIRA.
> Please fix this bug.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AMBARI-22699) Update FE to initiate regenerate keytab file operations for a service and a host

2017-12-27 Thread Antonenko Alexander (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Antonenko Alexander updated AMBARI-22699:
-
Affects Version/s: 3.0.0
Fix Version/s: 3.0.0
  Component/s: ambari-web

> Update FE to initiate regenerate keytab file operations for a service and a 
> host
> 
>
> Key: AMBARI-22699
> URL: https://issues.apache.org/jira/browse/AMBARI-22699
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 3.0.0
>Reporter: Antonenko Alexander
>Assignee: Antonenko Alexander
> Fix For: 3.0.0
>
>
> Update FE to initiate regenerate keytab file operations for a service and a 
> host.
> An option needs to be added to the Actions menu of a service to regenerate 
> all keytab files for that service. The Ambari REST API call to initiate this 
> is:
> {code}
> PUT 
> /api/v1/clusters/CLUSTERNAME?regenerate_keytabs=all_components=SERVICENAME:*
> {
>   "Clusters": {
> "security_type" : "KERBEROS"
>   }
> }
> {code}
> NOTE:  CLUSTERNAME and SERVICENAME need to be replaced with the appropriate 
> values.
> An option needs to be added to the Host Actions menu of a host to regenerate 
> all keytab files for that host. The Ambari REST API call to initiate this is:
> {code}
> PUT 
> /api/v1/clusters/CLUSTERNAME?regenerate_keytabs=all_hosts=HOSTNAME:*
> {
>   "Clusters": {
> "security_type" : "KERBEROS"
>   }
> }
> {code}
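> For illustration, a minimal sketch of issuing such a PUT from Java with {{java.net.HttpURLConnection}} (the host, credentials and query string are placeholders; substitute the exact URL from the calls above):
> {code:java}
> import java.io.OutputStream;
> import java.net.HttpURLConnection;
> import java.net.URL;
> import java.nio.charset.StandardCharsets;
> import java.util.Base64;
>
> public class RegenerateKeytabsExample {
>   public static void main(String[] args) throws Exception {
>     // Placeholder URL: fill in the cluster name and the query string shown above.
>     URL url = new URL("http://ambari-host:8080/api/v1/clusters/CLUSTERNAME?regenerate_keytabs=all");
>     String body = "{ \"Clusters\": { \"security_type\" : \"KERBEROS\" } }";
>
>     HttpURLConnection conn = (HttpURLConnection) url.openConnection();
>     conn.setRequestMethod("PUT");
>     conn.setDoOutput(true);
>     conn.setRequestProperty("X-Requested-By", "ambari"); // required by the Ambari REST API
>     conn.setRequestProperty("Authorization",
>         "Basic " + Base64.getEncoder().encodeToString("admin:admin".getBytes(StandardCharsets.UTF_8)));
>     try (OutputStream out = conn.getOutputStream()) {
>       out.write(body.getBytes(StandardCharsets.UTF_8));
>     }
>     System.out.println("HTTP " + conn.getResponseCode());
>   }
> }
> {code}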



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (AMBARI-22699) Update FE to initiate regenerate keytab file operations for a service and a host

2017-12-27 Thread Antonenko Alexander (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-22699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Antonenko Alexander reassigned AMBARI-22699:


Assignee: Antonenko Alexander

> Update FE to initiate regenerate keytab file operations for a service and a 
> host
> 
>
> Key: AMBARI-22699
> URL: https://issues.apache.org/jira/browse/AMBARI-22699
> Project: Ambari
>  Issue Type: Bug
>Reporter: Antonenko Alexander
>Assignee: Antonenko Alexander
>
> Update FE to initiate regenerate keytab file operations for a service and a 
> host.
> An option needs to be added to the Actions menu of a service to regenerate 
> all keytab files for that service. The Ambari REST API call to initiate this 
> is:
> {code}
> PUT 
> /api/v1/clusters/CLUSTERNAME?regenerate_keytabs=all_components=SERVICENAME:*
> {
>   "Clusters": {
> "security_type" : "KERBEROS"
>   }
> }
> {code}
> NOTE:  CLUSTERNAME and SERVICENAME need to be replaced with the appropriate 
> values.
> An option needs to be added to the Host Actions menu of a host to regenerate 
> all keytab files for that host. The Ambari REST API call to initiate this is:
> {code}
> PUT 
> /api/v1/clusters/CLUSTERNAME?regenerate_keytabs=all_hosts=HOSTNAME:*
> {
>   "Clusters": {
> "security_type" : "KERBEROS"
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (AMBARI-22699) Update FE to initiate regenerate keytab file operations for a service and a host

2017-12-27 Thread Antonenko Alexander (JIRA)
Antonenko Alexander created AMBARI-22699:


 Summary: Update FE to initiate regenerate keytab file operations 
for a service and a host
 Key: AMBARI-22699
 URL: https://issues.apache.org/jira/browse/AMBARI-22699
 Project: Ambari
  Issue Type: Bug
Reporter: Antonenko Alexander


Update FE to initiate regenerate keytab file operations for a service and a 
host.
An option needs to be added to the Actions menu of a service to regenerate all 
keytab files for that service. The Ambari REST API call to initiate this is:
{code}
PUT 
/api/v1/clusters/CLUSTERNAME?regenerate_keytabs=all_components=SERVICENAME:*
{
  "Clusters": {
"security_type" : "KERBEROS"
  }
}
{code}
NOTE:  CLUSTERNAME and SERVICENAME need to be replaced with the appropriate 
values.

An option needs to be added to the Host Actions menu of a host to regenerate 
all keytab files for that host. The Ambari REST API call to initiate this is:
{code}
PUT 
/api/v1/clusters/CLUSTERNAME?regenerate_keytabs=all_hosts=HOSTNAME:*
{
  "Clusters": {
"security_type" : "KERBEROS"
  }
}
{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (AMBARI-22653) Infra Manager: s3 upload support for archiving Infra Solr

2017-12-27 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/AMBARI-22653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Olivér Szabó resolved AMBARI-22653.
---
Resolution: Fixed

> Infra Manager: s3 upload support for archiving Infra Solr
> -
>
> Key: AMBARI-22653
> URL: https://issues.apache.org/jira/browse/AMBARI-22653
> Project: Ambari
>  Issue Type: Improvement
>  Components: ambari-infra
>Affects Versions: 3.0.0
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
> Fix For: 3.0.0
>
>
> Upload exported documents from Infra Solr to a specified s3 server.
> The s3 configuration should be defined in a property file, and a property should 
> be added to the ambari-infra-manager.properties file for each job referencing 
> the s3 property file.
> The s3 upload should be optional: if an s3 configuration reference is present, 
> upload the files to the specified s3 server and delete them from the temporary 
> folder; if not, leave the files in the filesystem.
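> For illustration, a minimal sketch of the optional-upload behaviour described above (the {{S3Uploader}} interface and class names are hypothetical, not the actual ambari-infra-manager API):
> {code:java}
> import java.io.File;
> import java.util.Optional;
>
> public class ArchiveStep {
>   // Hypothetical abstraction over the s3 client configured from the referenced property file.
>   interface S3Uploader {
>     void upload(File file);
>   }
>
>   private final Optional<S3Uploader> uploader; // empty when no s3 property file is referenced
>
>   public ArchiveStep(Optional<S3Uploader> uploader) {
>     this.uploader = uploader;
>   }
>
>   public void handleExportedFile(File exportedFile) {
>     if (uploader.isPresent()) {
>       // s3 reference present: upload, then remove the file from the temporary folder.
>       uploader.get().upload(exportedFile);
>       if (!exportedFile.delete()) {
>         throw new IllegalStateException("Could not delete " + exportedFile);
>       }
>     }
>     // No s3 reference: leave the exported file in the filesystem.
>   }
> }
> {code}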



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-22653) Infra Manager: s3 upload support for archiving Infra Solr

2017-12-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-22653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16304484#comment-16304484
 ] 

Hudson commented on AMBARI-22653:
-

SUCCESS: Integrated in Jenkins build Ambari-trunk-Commit #8560 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/8560/])
AMBARI-22653. ADDENDUM Infra Manager: s3 upload support for archiving 
(oleewere: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git=commit=935ea92aba62ec8f69be6c568b397608ef08b91f])
* (add) 
ambari-infra/ambari-infra-manager/src/main/java/org/apache/ambari/infra/job/archive/ItemWriterListener.java
* (add) 
ambari-infra/ambari-infra-manager/src/main/java/org/apache/ambari/infra/job/CloseableIterator.java
* (delete) 
ambari-infra/ambari-infra-manager/src/main/java/org/apache/ambari/infra/job/archive/DocumentSource.java
* (delete) 
ambari-infra/ambari-infra-manager/src/main/java/org/apache/ambari/infra/job/archive/DocumentIterator.java
* (edit) 
ambari-infra/ambari-infra-manager-it/src/test/java/org/apache/ambari/infra/InfraManagerStories.java


> Infra Manager: s3 upload support for archiving Infra Solr
> -
>
> Key: AMBARI-22653
> URL: https://issues.apache.org/jira/browse/AMBARI-22653
> Project: Ambari
>  Issue Type: Improvement
>  Components: ambari-infra
>Affects Versions: 3.0.0
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
> Fix For: 3.0.0
>
>
> Upload exported documents from Infra Solr to a specified s3 server.
> The s3 configuration should be defined in a property file, and a property should 
> be added to the ambari-infra-manager.properties file for each job referencing 
> the s3 property file.
> The s3 upload should be optional: if an s3 configuration reference is present, 
> upload the files to the specified s3 server and delete them from the temporary 
> folder; if not, leave the files in the filesystem.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AMBARI-22653) Infra Manager: s3 upload support for archiving Infra Solr

2017-12-27 Thread JIRA

[ 
https://issues.apache.org/jira/browse/AMBARI-22653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16304441#comment-16304441
 ] 

Olivér Szabó commented on AMBARI-22653:
---

committed to trunk:
{code:java}
commit 935ea92aba62ec8f69be6c568b397608ef08b91f
Author: Oliver Szabo 
Date:   Wed Dec 27 11:23:11 2017 +0100

AMBARI-22653. ADDENDUM Infra Manager: s3 upload support for archiving Infra 
Solr (Krisztian Kasa via oleewere)
{code}


> Infra Manager: s3 upload support for archiving Infra Solr
> -
>
> Key: AMBARI-22653
> URL: https://issues.apache.org/jira/browse/AMBARI-22653
> Project: Ambari
>  Issue Type: Improvement
>  Components: ambari-infra
>Affects Versions: 3.0.0
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
> Fix For: 3.0.0
>
>
> Upload exported documents from Infra Solr to a specified s3 server.
> The s3 configuration should be defined in a property file, and a property should 
> be added to the ambari-infra-manager.properties file for each job referencing 
> the s3 property file.
> The s3 upload should be optional: if an s3 configuration reference is present, 
> upload the files to the specified s3 server and delete them from the temporary 
> folder; if not, leave the files in the filesystem.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)