[jira] [Commented] (AMBARI-18013) HiveHook fails to post messages to kafka due to missing keytab config in /etc/hive/conf/atlas-application.properties in kerberized cluster

2016-08-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15407100#comment-15407100
 ] 

Hudson commented on AMBARI-18013:
-

FAILURE: Integrated in Ambari-trunk-Commit #5450 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/5450/])
AMBARI-18013. HiveHook fails to post messages to kafka due to missing 
(afernandez: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=b477a192cca5b2748ab4a0e619844c46f9851042])
* ambari-server/src/main/resources/stacks/HDP/2.5/services/HIVE/kerberos.json
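
For readers unfamiliar with how Ambari injects Kerberos-only properties: the fix 
lands in the stack's kerberos.json descriptor (the file listed above), whose 
"configurations" entries are applied to the named config type when the cluster is 
kerberized. A minimal sketch of the kind of stanza involved; the property values 
use Ambari's ${config-type/property} variable syntax and are illustrative 
assumptions, not the exact committed content:
{code}
{
  "services": [
    {
      "name": "HIVE",
      "configurations": [
        {
          "hive-atlas-application.properties": {
            "atlas.jaas.KafkaClient.option.keyTab": "${hive-site/hive.metastore.kerberos.keytab.file}",
            "atlas.jaas.KafkaClient.option.principal": "${hive-site/hive.metastore.kerberos.principal}"
          }
        }
      ]
    }
  ]
}
{code}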


> HiveHook fails to post messages to kafka due to missing keytab config in 
> /etc/hive/conf/atlas-application.properties in kerberized cluster
> --
>
> Key: AMBARI-18013
> URL: https://issues.apache.org/jira/browse/AMBARI-18013
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: 2.4.0
>Reporter: Alejandro Fernandez
>Assignee: Alejandro Fernandez
> Fix For: 2.4.0
>
> Attachments: AMBARI-18013.patch
>
>
> STR:
> * Install Ambari 2.4
> * HDP 2.5 with Hive and Atlas
> * Kerberize the cluster
> The hive hook fails because two configs are missing from 
> hive-atlas-application.properties:
> {noformat}
> atlas.jaas.KafkaClient.option.keyTab=/etc/security/keytabs/hive.service.keytab
> atlas.jaas.KafkaClient.option.principal=hive/_h...@example.com
> {noformat}
> *Impact: HiveHook related tests are failing.*
> {noformat}
> 2016-07-29 10:25:50,087 INFO  [Atlas Logger 1]: producer.ProducerConfig 
> (AbstractConfig.java:logAll(178)) - ProducerConfig values:
>   metric.reporters = []
>   metadata.max.age.ms = 30
>   reconnect.backoff.ms = 50
>   sasl.kerberos.ticket.renew.window.factor = 0.8
>   bootstrap.servers = [atlas-r6-bug-62789-1023re-2.openstacklocal:6667, 
> atlas-r6-bug-62789-1023re-1.openstacklocal:6667]
>   ssl.keystore.type = JKS
>   sasl.mechanism = GSSAPI
>   max.block.ms = 6
>   interceptor.classes = null
>   ssl.truststore.password = null
>   client.id =
>   ssl.endpoint.identification.algorithm = null
>   request.timeout.ms = 3
>   acks = 1
>   receive.buffer.bytes = 32768
>   ssl.truststore.type = JKS
>   retries = 0
>   ssl.truststore.location = null
>   ssl.keystore.password = null
>   send.buffer.bytes = 131072
>   compression.type = none
>   metadata.fetch.timeout.ms = 6
>   retry.backoff.ms = 100
>   sasl.kerberos.kinit.cmd = /usr/bin/kinit
>   buffer.memory = 33554432
>   timeout.ms = 3
>   key.serializer = class 
> org.apache.kafka.common.serialization.StringSerializer
>   sasl.kerberos.service.name = kafka
>   sasl.kerberos.ticket.renew.jitter = 0.05
>   ssl.trustmanager.algorithm = PKIX
>   block.on.buffer.full = false
>   ssl.key.password = null
>   sasl.kerberos.min.time.before.relogin = 6
>   connections.max.idle.ms = 54
>   max.in.flight.requests.per.connection = 5
>   metrics.num.samples = 2
>   ssl.protocol = TLS
>   ssl.provider = null
>   ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
>   batch.size = 16384
>   ssl.keystore.location = null
>   ssl.cipher.suites = null
>   security.protocol = PLAINTEXTSASL
>   max.request.size = 1048576
>   value.serializer = class 
> org.apache.kafka.common.serialization.StringSerializer
>   ssl.keymanager.algorithm = SunX509
>   metrics.sample.window.ms = 3
>   partitioner.class = class 
> org.apache.kafka.clients.producer.internals.DefaultPartitioner
>   linger.ms = 0
> 2016-07-29 10:25:50,091 INFO  [Atlas Logger 1]: producer.KafkaProducer 
> (KafkaProducer.java:close(658)) - Closing the Kafka producer with 
> timeoutMillis = 0 ms.
> 2016-07-29 10:25:50,091 INFO  [Atlas Logger 1]: hook.AtlasHook 
> (AtlasHook.java:notifyEntitiesInternal(131)) - Failed to notify atlas for 
> entity [[{Id='(type: hive_db, id: )', traits=[], 
> values={owner=public, ownerType=2, qualifiedName=default@cl1, 
> clusterName=cl1, name=default, description=Default Hive database, 
> location=hdfs://atlas-r6-bug-62789-1023re-1.openstacklocal:8020/apps/hive/warehouse,
>  parameters={}}}, {Id='(type: hive_table, id: )', traits=[], 
> values={owner=hrt_qa, temporary=false, lastAccessTime=Fri Jul 29 10:25:49 UTC 
> 2016, qualifiedName=default.t2@cl1, columns=[{Id='(type: hive_column, id: 
> )', traits=[], values={owner=hrt_qa, 
> qualifiedName=default.t2.abc@cl1, name=abc, comment=null, type=string, 
> table=(type: hive_table, id: )}}], sd={Id='(type: 
> hive_storagedesc, id: )', traits=[], 
> values={qualifiedName=default.t2@cl1_storage, storedAsSubDirectories=false, 
> 

[jira] [Commented] (AMBARI-18012) Metrics Sink unable to connect to zookeeper

2016-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15407028#comment-15407028
 ] 

Hadoop QA commented on AMBARI-18012:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12821954/AMBARI-18012.patch
  against trunk revision .

{color:red}-1 patch{color}.  Top-level trunk compilation may be broken.

Console output: 
https://builds.apache.org/job/Ambari-trunk-test-patch/8281//console

This message is automatically generated.

> Metrics Sink unable to connect to zookeeper
> ---
>
> Key: AMBARI-18012
> URL: https://issues.apache.org/jira/browse/AMBARI-18012
> Project: Ambari
>  Issue Type: Bug
>Affects Versions: 2.5.0
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Critical
> Fix For: 2.5.0
>
> Attachments: AMBARI-18012.patch
>
>
> Test and validate the sink's fallback connection to ZK for finding the collector
> {code}
> 2016-07-14 20:37:01,212 INFO  timeline.HadoopTimelineMetricsSink 
> (AbstractTimelineMetricsSink.java:findPreferredCollectHost(353)) - Collector 
> ambari-sid-5.c.pramod-thangali.internal is not longer live. Removing it from 
> list of know live collector hosts : []
> 2016-07-14 20:37:03,213 WARN  availability.MetricCollectorHAHelper 
> (MetricCollectorHAHelper.java:findLiveCollectorHostsFromZNode(83)) - Unable 
> to connect to zookeeper.
> java.lang.IllegalStateException: Client is not started
> at 
> org.apache.hadoop.metrics2.sink.relocated.google.common.base.Preconditions.checkState(Preconditions.java:149)
> at 
> org.apache.hadoop.metrics2.sink.relocated.curator.CuratorZookeeperClient.getZooKeeper(CuratorZookeeperClient.java:113)
> at 
> org.apache.hadoop.metrics2.sink.timeline.availability.MetricCollectorHAHelper$1.call(MetricCollectorHAHelper.java:77)
> at 
> org.apache.hadoop.metrics2.sink.timeline.availability.MetricCollectorHAHelper$1.call(MetricCollectorHAHelper.java:74)
> at 
> org.apache.hadoop.metrics2.sink.relocated.curator.RetryLoop.callWithRetry(RetryLoop.java:107)
> at 
> org.apache.hadoop.metrics2.sink.timeline.availability.MetricCollectorHAHelper.findLiveCollectorHostsFromZNode(MetricCollectorHAHelper.java:74)
> at 
> org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSink.findPreferredCollectHost(AbstractTimelineMetricsSink.java:363)
> at 
> org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSink.emitMetrics(AbstractTimelineMetricsSink.java:209)
> at 
> org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink.putMetrics(HadoopTimelineMetricsSink.java:315)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:186)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:43)
> at 
> org.apache.hadoop.metrics2.impl.SinkQueue.consumeAll(SinkQueue.java:87)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.publishMetricsFromQueue(MetricsSinkAdapter.java:134)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter$1.run(MetricsSinkAdapter.java:88)
> 2016-07-14 20:37:03,245 WARN  timeline.HadoopTimelineMetricsSink 
> (AbstractTimelineMetricsSink.java:findLiveCollectorHostsFromKnownCollector(433))
>  - Unable to conne
> {code}
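
The "Client is not started" IllegalStateException above is Curator's guard 
against using a CuratorFramework before start() has been called (the sink ships 
a relocated copy of Curator under org.apache.hadoop.metrics2.sink.relocated). A 
minimal sketch of the correct call order against plain Curator, not the sink's 
code; the connect string and znode path are illustrative placeholders:
{code}
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class ZkLookupSketch {
    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "zkhost:2181", new ExponentialBackoffRetry(1000, 3));
        // Without start(), any use of the client throws
        // java.lang.IllegalStateException: Client is not started
        client.start();
        try {
            // e.g. list children of the znode tracking live collectors
            // ("/collectors" is a placeholder path)
            for (String child : client.getChildren().forPath("/collectors")) {
                System.out.println(child);
            }
        } finally {
            client.close();
        }
    }
}
{code}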



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-17996) HAWQ service advisor shows wrong recommendations on edge cases

2016-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-17996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15407027#comment-15407027
 ] 

Hadoop QA commented on AMBARI-17996:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12821972/AMBARI-17996-trunk-v2.patch
  against trunk revision .

{color:red}-1 patch{color}.  Top-level trunk compilation may be broken.

Console output: 
https://builds.apache.org/job/Ambari-trunk-test-patch/8280//console

This message is automatically generated.

> HAWQ service advisor shows wrong recommendations on edge cases
> --
>
> Key: AMBARI-17996
> URL: https://issues.apache.org/jira/browse/AMBARI-17996
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: trunk, 2.4.0
>Reporter: Matt
>Assignee: Matt
> Fix For: trunk, 2.4.0
>
> Attachments: AMBARI-17966-trunk-orig.patch, 
> AMBARI-17966-trunk-v1.patch, AMBARI-17996-trunk-v2.patch
>
>
> HAWQ service advisor shows wrong recommendations on edge cases



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18014) PXF service definition is missing pxf-json profile

2016-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15407023#comment-15407023
 ] 

Hadoop QA commented on AMBARI-18014:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12821975/AMBARI-18014.branch24.patch
  against trunk revision .

{color:red}-1 patch{color}.  Top-level trunk compilation may be broken.

Console output: 
https://builds.apache.org/job/Ambari-trunk-test-patch/8279//console

This message is automatically generated.

> PXF service definition is missing pxf-json profile
> --
>
> Key: AMBARI-18014
> URL: https://issues.apache.org/jira/browse/AMBARI-18014
> Project: Ambari
>  Issue Type: Bug
>Reporter: Alexander Denissov
>Assignee: Alexander Denissov
> Fix For: 2.4.0
>
> Attachments: AMBARI-18014.branch24.patch
>
>
> PXF service profile should have pxf-json listed and pxf-json rpm needs to be 
> installed.
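
For context, Ambari service definitions declare installable packages in the 
service's metainfo.xml. A sketch of the kind of entry that would pull in the 
pxf-json rpm; the structure follows Ambari's metainfo schema, but treating this 
as the exact shape and location of the committed change is an assumption:
{code}
<osSpecifics>
  <osSpecific>
    <osFamily>any</osFamily>
    <packages>
      <package>
        <!-- placement is illustrative, not the committed patch -->
        <name>pxf-json</name>
      </package>
    </packages>
  </osSpecific>
</osSpecifics>
{code}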



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18013) HiveHook fails to post messages to kafka due to missing keytab config in /etc/hive/conf/atlas-application.properties in kerberized cluster

2016-08-03 Thread Alejandro Fernandez (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Fernandez updated AMBARI-18013:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Pushed to trunk, commit b477a192cca5b2748ab4a0e619844c46f9851042
branch-2.4, 59e422ec4eebe53f1b8c43f3301878f613565064

> HiveHook fails to post messages to kafka due to missing keytab config in 
> /etc/hive/conf/atlas-application.properties in kerberized cluster
> --
>
> Key: AMBARI-18013
> URL: https://issues.apache.org/jira/browse/AMBARI-18013
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: 2.4.0
>Reporter: Alejandro Fernandez
>Assignee: Alejandro Fernandez
> Fix For: 2.4.0
>
> Attachments: AMBARI-18013.patch
>
>
> STR:
> * Install Ambari 2.4
> * HDP 2.5 with Hive and Atlas
> * Kerberize the cluster
> The hive hook fails because two configs are missing from 
> hive-atlas-application.properties:
> {noformat}
> atlas.jaas.KafkaClient.option.keyTab=/etc/security/keytabs/hive.service.keytab
> atlas.jaas.KafkaClient.option.principal=hive/_h...@example.com
> {noformat}
> *Impact: HiveHook related tests are failing.*
> {noformat}
> 2016-07-29 10:25:50,087 INFO  [Atlas Logger 1]: producer.ProducerConfig 
> (AbstractConfig.java:logAll(178)) - ProducerConfig values:
>   metric.reporters = []
>   metadata.max.age.ms = 30
>   reconnect.backoff.ms = 50
>   sasl.kerberos.ticket.renew.window.factor = 0.8
>   bootstrap.servers = [atlas-r6-bug-62789-1023re-2.openstacklocal:6667, 
> atlas-r6-bug-62789-1023re-1.openstacklocal:6667]
>   ssl.keystore.type = JKS
>   sasl.mechanism = GSSAPI
>   max.block.ms = 6
>   interceptor.classes = null
>   ssl.truststore.password = null
>   client.id =
>   ssl.endpoint.identification.algorithm = null
>   request.timeout.ms = 3
>   acks = 1
>   receive.buffer.bytes = 32768
>   ssl.truststore.type = JKS
>   retries = 0
>   ssl.truststore.location = null
>   ssl.keystore.password = null
>   send.buffer.bytes = 131072
>   compression.type = none
>   metadata.fetch.timeout.ms = 6
>   retry.backoff.ms = 100
>   sasl.kerberos.kinit.cmd = /usr/bin/kinit
>   buffer.memory = 33554432
>   timeout.ms = 3
>   key.serializer = class 
> org.apache.kafka.common.serialization.StringSerializer
>   sasl.kerberos.service.name = kafka
>   sasl.kerberos.ticket.renew.jitter = 0.05
>   ssl.trustmanager.algorithm = PKIX
>   block.on.buffer.full = false
>   ssl.key.password = null
>   sasl.kerberos.min.time.before.relogin = 6
>   connections.max.idle.ms = 54
>   max.in.flight.requests.per.connection = 5
>   metrics.num.samples = 2
>   ssl.protocol = TLS
>   ssl.provider = null
>   ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
>   batch.size = 16384
>   ssl.keystore.location = null
>   ssl.cipher.suites = null
>   security.protocol = PLAINTEXTSASL
>   max.request.size = 1048576
>   value.serializer = class 
> org.apache.kafka.common.serialization.StringSerializer
>   ssl.keymanager.algorithm = SunX509
>   metrics.sample.window.ms = 3
>   partitioner.class = class 
> org.apache.kafka.clients.producer.internals.DefaultPartitioner
>   linger.ms = 0
> 2016-07-29 10:25:50,091 INFO  [Atlas Logger 1]: producer.KafkaProducer 
> (KafkaProducer.java:close(658)) - Closing the Kafka producer with 
> timeoutMillis = 0 ms.
> 2016-07-29 10:25:50,091 INFO  [Atlas Logger 1]: hook.AtlasHook 
> (AtlasHook.java:notifyEntitiesInternal(131)) - Failed to notify atlas for 
> entity [[{Id='(type: hive_db, id: )', traits=[], 
> values={owner=public, ownerType=2, qualifiedName=default@cl1, 
> clusterName=cl1, name=default, description=Default Hive database, 
> location=hdfs://atlas-r6-bug-62789-1023re-1.openstacklocal:8020/apps/hive/warehouse,
>  parameters={}}}, {Id='(type: hive_table, id: )', traits=[], 
> values={owner=hrt_qa, temporary=false, lastAccessTime=Fri Jul 29 10:25:49 UTC 
> 2016, qualifiedName=default.t2@cl1, columns=[{Id='(type: hive_column, id: 
> )', traits=[], values={owner=hrt_qa, 
> qualifiedName=default.t2.abc@cl1, name=abc, comment=null, type=string, 
> table=(type: hive_table, id: )}}], sd={Id='(type: 
> hive_storagedesc, id: )', traits=[], 
> values={qualifiedName=default.t2@cl1_storage, storedAsSubDirectories=false, 
> location=hdfs://atlas-r6-bug-62789-1023re-1.openstacklocal:8020/apps/hive/warehouse/t2,
>  compressed=false, inputFormat=org.apache.hadoop.mapred.TextInputFormat, 
> 

[jira] [Updated] (AMBARI-18014) PXF service definition is missing pxf-json profile

2016-08-03 Thread Alexander Denissov (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Denissov updated AMBARI-18014:

Description: PXF service profile should have pxf-json listed and pxf-json 
rpm needs to be installed.

> PXF service definition is missing pxf-json profile
> --
>
> Key: AMBARI-18014
> URL: https://issues.apache.org/jira/browse/AMBARI-18014
> Project: Ambari
>  Issue Type: Bug
>Reporter: Alexander Denissov
>Assignee: Alexander Denissov
> Fix For: 2.4.0
>
> Attachments: AMBARI-18014.branch24.patch
>
>
> PXF service profile should have pxf-json listed and pxf-json rpm needs to be 
> installed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18014) PXF service definition is missing pxf-json profile

2016-08-03 Thread Alexander Denissov (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Denissov updated AMBARI-18014:

Fix Version/s: 2.4.0
   Status: Patch Available  (was: Open)

> PXF service definition is missing pxf-json profile
> --
>
> Key: AMBARI-18014
> URL: https://issues.apache.org/jira/browse/AMBARI-18014
> Project: Ambari
>  Issue Type: Bug
>Reporter: Alexander Denissov
>Assignee: Alexander Denissov
> Fix For: 2.4.0
>
> Attachments: AMBARI-18014.branch24.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18014) PXF service definition is missing pxf-json profile

2016-08-03 Thread Alexander Denissov (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Denissov updated AMBARI-18014:

Attachment: AMBARI-18014.branch24.patch

> PXF service definition is missing pxf-json profile
> --
>
> Key: AMBARI-18014
> URL: https://issues.apache.org/jira/browse/AMBARI-18014
> Project: Ambari
>  Issue Type: Bug
>Reporter: Alexander Denissov
>Assignee: Alexander Denissov
> Fix For: 2.4.0
>
> Attachments: AMBARI-18014.branch24.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (AMBARI-18014) PXF service definition is missing pxf-json profile

2016-08-03 Thread Alexander Denissov (JIRA)
Alexander Denissov created AMBARI-18014:
---

 Summary: PXF service definition is missing pxf-json profile
 Key: AMBARI-18014
 URL: https://issues.apache.org/jira/browse/AMBARI-18014
 Project: Ambari
  Issue Type: Bug
Reporter: Alexander Denissov
Assignee: Alexander Denissov






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-16278) Give more time for HBase system tables to be assigned

2016-08-03 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-16278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated AMBARI-16278:

Description: 
We have observed extended cluster downtime due to HBase system tables not being 
assigned at cluster start up.

The default values for the following two parameters are too low:

hbase.regionserver.executor.openregion.threads (default: 3)
hbase.master.namespace.init.timeout (default: 30)

We set hbase.regionserver.executor.openregion.threads=200 and 
hbase.master.namespace.init.timeout=240 in some cases to work around 
HBASE-14190.


Ambari can use 20 for hbase.regionserver.executor.openregion.threads and 
240 for hbase.master.namespace.init.timeout as the default values.

  was:
We have observed extended cluster downtime due to HBase system tables not being 
assigned at cluster start up.

The default values for the following two parameters are too low:

hbase.regionserver.executor.openregion.threads (default: 3)
hbase.master.namespace.init.timeout (default: 30)

We set hbase.regionserver.executor.openregion.threads=200 and 
hbase.master.namespace.init.timeout=240 in some cases to work around 
HBASE-14190.

Ambari can use 20 for hbase.regionserver.executor.openregion.threads and 
240 for hbase.master.namespace.init.timeout as the default values.


> Give more time for HBase system tables to be assigned
> -
>
> Key: AMBARI-16278
> URL: https://issues.apache.org/jira/browse/AMBARI-16278
> Project: Ambari
>  Issue Type: Improvement
>Reporter: Ted Yu
>
> We have observed extended cluster downtime due to HBase system tables not 
> being assigned at cluster start up.
> The default values for the following two parameters are too low:
> hbase.regionserver.executor.openregion.threads (default: 3)
> hbase.master.namespace.init.timeout (default: 30)
> We set hbase.regionserver.executor.openregion.threads=200 and 
> hbase.master.namespace.init.timeout=240 in some cases to work around 
> HBASE-14190.
> Ambari can use 20 for hbase.regionserver.executor.openregion.threads and 
> 240 for hbase.master.namespace.init.timeout as the default values.
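
Note that the numeric values in this description appear truncated by the list 
archive; both parameters are plain numbers in hbase-site (and 
hbase.master.namespace.init.timeout is a millisecond value whose stock HBase 
default is 300000). A sketch of what the overrides would look like; the values 
below are placeholders, not the exact numbers from this issue:
{code}
<!-- hbase-site.xml overrides; values are illustrative placeholders -->
<property>
  <name>hbase.regionserver.executor.openregion.threads</name>
  <value>20</value>
</property>
<property>
  <name>hbase.master.namespace.init.timeout</name>
  <value>2400000</value>
</property>
{code}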



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-17346) Dependent components should be shutdown before stopping hdfs

2016-08-03 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-17346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated AMBARI-17346:

Description: 
Sometimes an admin shuts down hdfs first, then hbase.

By the time hbase is shut down, no data can be persisted (including metadata). 
This results in a large number of inconsistencies when the hbase cluster is 
brought back up.

Before hdfs is shut down, the components dependent on hdfs should be shut down 
first.

  was:
Sometimes an admin shuts down hdfs first, then hbase.

By the time hbase is shut down, no data can be persisted (including metadata). 
This results in a large number of inconsistencies when the hbase cluster is 
brought back up.


Before hdfs is shut down, the components dependent on hdfs should be shut down 
first.


> Dependent components should be shutdown before stopping hdfs
> 
>
> Key: AMBARI-17346
> URL: https://issues.apache.org/jira/browse/AMBARI-17346
> Project: Ambari
>  Issue Type: Bug
>Reporter: Ted Yu
>
> Sometimes an admin shuts down hdfs first, then hbase.
> By the time hbase is shut down, no data can be persisted (including metadata). 
> This results in a large number of inconsistencies when the hbase cluster is 
> brought back up.
> Before hdfs is shut down, the components dependent on hdfs should be shut down 
> first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-17996) HAWQ service advisor shows wrong recommendations on edge cases

2016-08-03 Thread Matt (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-17996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt updated AMBARI-17996:
--
Attachment: AMBARI-17996-trunk-v2.patch

> HAWQ service advisor shows wrong recommendations on edge cases
> --
>
> Key: AMBARI-17996
> URL: https://issues.apache.org/jira/browse/AMBARI-17996
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: trunk, 2.4.0
>Reporter: Matt
>Assignee: Matt
> Fix For: trunk, 2.4.0
>
> Attachments: AMBARI-17966-trunk-orig.patch, 
> AMBARI-17966-trunk-v1.patch, AMBARI-17996-trunk-v2.patch
>
>
> HAWQ service advisor shows wrong recommendations on edge cases



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-17937) Ambari install/init should create a new gpadmin database

2016-08-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-17937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15406896#comment-15406896
 ] 

Hudson commented on AMBARI-17937:
-

FAILURE: Integrated in Ambari-trunk-Commit #5449 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/5449/])
AMBARI-17937:Ambari install/init should create a new gpadmin database (ljain: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=c2e9b465df3b3c175baf9048784f38dd486a14a6])
* 
ambari-server/src/main/resources/common-services/HAWQ/2.0.0/package/scripts/common.py
* 
ambari-server/src/main/resources/common-services/HAWQ/2.0.0/package/scripts/utils.py


> Ambari install/init should create a new gpadmin database
> 
>
> Key: AMBARI-17937
> URL: https://issues.apache.org/jira/browse/AMBARI-17937
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk, 2.4.0
>Reporter: Lav Jain
>Assignee: Lav Jain
>Priority: Minor
> Fix For: trunk, 2.4.0
>
> Attachments: AMBARI-17937.patch
>
>
> If you are logged in as gpadmin on the master and type "psql" to connect to 
> the database, it will fail: psql assumes you want to connect to a database 
> named "gpadmin", matching your username.
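
This is standard PostgreSQL-lineage client behavior: with no database argument, 
psql targets a database named after the connecting role. A sketch of the failure 
and the workaround the fix automates (the FATAL message is psql's real wording; 
the prompt line is illustrative):
{noformat}
$ psql
psql: FATAL:  database "gpadmin" does not exist
$ createdb gpadmin
$ psql
gpadmin=#
{noformat}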



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-17308) Ambari Logfeeder outputs a lot of errors due to parse date

2016-08-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-17308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15406898#comment-15406898
 ] 

Hudson commented on AMBARI-17308:
-

FAILURE: Integrated in Ambari-trunk-Commit #5449 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/5449/])
AMBARI-17308. Ambari Logfeeder outputs a lot of errors due to parse date 
(oleewere: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=bfa24d139aa9d10925850165702024e7a094e473])
* 
ambari-server/src/main/java/org/apache/ambari/server/audit/AuditLoggerDefaultImpl.java
* 
ambari-server/src/main/resources/common-services/LOGSEARCH/0.5.0/package/templates/input.config-ambari.json.j2


> Ambari Logfeeder outputs a lot of errors due to parse date
> --
>
> Key: AMBARI-17308
> URL: https://issues.apache.org/jira/browse/AMBARI-17308
> Project: Ambari
>  Issue Type: Bug
>Affects Versions: trunk, 2.4.0
> Environment: CentOS7.2, JST
>Reporter: Masahiro Tanaka
>Assignee: Masahiro Tanaka
> Fix For: 2.4.0
>
> Attachments: AMBARI-17308.1.patch, AMBARI-17308.2.patch, 
> AMBARI-17308.3.patch, AMBARI-17308.patch
>
>
> In logsearch_feeder service log, we got errors like below
> {code}
> 2016-06-20 15:28:09,368 ERROR file=ambari-audit.log 
> org.apache.ambari.logfeeder.mapper.MapperDate LogFeederUtil.java:356 - Error 
> applying date transformation. isEpoch=false, 
> dateFormat=yyyy-MM-dd'T'HH:mm:ss.SSSZ, value=2016-06-20T15:28:08.000. 
> mapClass=map_date, input=input:source=file, 
> path=/var/log/ambari-server/ambari-audit.log, fieldName=logtime. Messages 
> suppressed before: 2
> java.text.ParseException: Unparseable date: "2016-06-20T15:28:08.000"
>   at java.text.DateFormat.parse(DateFormat.java:366)
>   at 
> org.apache.ambari.logfeeder.mapper.MapperDate.apply(MapperDate.java:83)
>   at org.apache.ambari.logfeeder.filter.Filter.apply(Filter.java:154)
>   at 
> org.apache.ambari.logfeeder.filter.FilterGrok.applyMessage(FilterGrok.java:291)
>   at 
> org.apache.ambari.logfeeder.filter.FilterGrok.flush(FilterGrok.java:320)
>   at org.apache.ambari.logfeeder.input.Input.flush(Input.java:125)
>   at 
> org.apache.ambari.logfeeder.input.InputFile.processFile(InputFile.java:430)
>   at org.apache.ambari.logfeeder.input.InputFile.start(InputFile.java:260)
>   at org.apache.ambari.logfeeder.input.Input.run(Input.java:100)
>   at java.lang.Thread.run(Thread.java:745) 
> {code}
> ambari-audit.log is like below
> {code}
> 2016-07-21T01:52:49.875+09, User(admin), RemoteIp(192.168.72.1), 
> Operation(Repository update), RequestType(PUT), 
> url(http://192.168.72.101:8080/api/v1/stacks/HDP/versions/2.5/operating_systems/ubuntu14/repositories/HDP-2.5),
>  ResultStatus(200 OK), Stack(HDP), Stack version(2.5), OS(ubuntu14), Repo 
> id(HDP-2.5), Base 
> URL(http://s3.amazonaws.com/dev.hortonworks.com/HDP/ubuntu14/2.x/BUILDS/2.5.0.0-1025)
> 2016-07-21T01:52:49.905+09, User(admin), RemoteIp(192.168.72.1), 
> Operation(Repository update), RequestType(PUT), 
> url(http://192.168.72.101:8080/api/v1/stacks/HDP/versions/2.5/operating_systems/ubuntu16/repositories/HDP-2.5),
>  ResultStatus(200 OK), Stack(HDP), Stack version(2.5), OS(ubuntu16), Repo 
> id(HDP-2.5), Base 
> URL(http://s3.amazonaws.com/dev.hortonworks.com/HDP/ubuntu16/2.x/BUILDS/2.5.0.0-1025)
> 2016-07-21T01:52:50.015+09, User(admin), RemoteIp(192.168.72.1), 
> Operation(Repository update), RequestType(PUT), 
> url(http://192.168.72.101:8080/api/v1/stacks/HDP/versions/2.5/operating_systems/ubuntu14/repositories/HDP-UTILS-1.1.0.21),
>  ResultStatus(200 OK), Stack(HDP), Stack version(2.5), OS(ubuntu14), Repo 
> id(HDP-UTILS-1.1.0.21), Base 
> URL(http://s3.amazonaws.com/dev.hortonworks.com/HDP-UTILS-1.1.0.21/repos/ubuntu14)
> {code}
> I think the date format of ambari-audit.log ({{2016-07-21T01:52:49.875+09}}) 
> should be {{2016-07-21T01:52:49.875+0900}}, since the grok pattern can't 
> handle the {{2016-07-21T01:52:49.875+09}} format.
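
The mismatch is easy to reproduce with the mapper's pattern: SimpleDateFormat's 
RFC 822 {{Z}} token requires a four-digit zone offset, so the value from the 
Logfeeder error above (which reaches the mapper with no usable offset) fails 
while the {{+0900}} form parses. A minimal standalone sketch:
{code}
import java.text.ParseException;
import java.text.SimpleDateFormat;

public class AuditDateSketch {
    public static void main(String[] args) {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSSZ");
        try {
            // four-digit RFC 822 offset: parses fine
            System.out.println(fmt.parse("2016-07-21T01:52:49.875+0900"));
            // the value from the error above: no offset, so Z fails
            System.out.println(fmt.parse("2016-06-20T15:28:08.000"));
        } catch (ParseException e) {
            System.out.println("Unparseable date: " + e.getMessage());
        }
    }
}
{code}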



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-17938) Ambari should not recursively chown for HAWQ hdfs upon every start

2016-08-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-17938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15406897#comment-15406897
 ] 

Hudson commented on AMBARI-17938:
-

FAILURE: Integrated in Ambari-trunk-Commit #5449 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/5449/])
AMBARI-17938: Ambari should not recursively chown for HAWQ hdfs upon (ljain: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=a90b7f7ccd9131463dbc49691f1e26f187d0aa09])
* 
ambari-server/src/main/resources/common-services/HAWQ/2.0.0/package/scripts/common.py


> Ambari should not recursively chown for HAWQ hdfs upon every start
> --
>
> Key: AMBARI-17938
> URL: https://issues.apache.org/jira/browse/AMBARI-17938
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk, 2.4.0
>Reporter: Lav Jain
>Assignee: Lav Jain
>  Labels: performance
> Fix For: 2.4.0
>
> Attachments: AMBARI-17938.patch, AMBARI-17938.v2.patch, 
> AMBARI-17938.v3.patch
>
>
> This results in changing the owner even when the owner value is already the 
> same. The operation is very costly if there are a lot of subdirectories.
> The owner value only changes when you switch from regular mode to secure mode 
> and vice versa.
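
The optimization amounts to reading the current owner first and only walking the 
tree when it differs. The actual fix lives in the HAWQ Python scripts and 
operates on HDFS; below is a local-filesystem Java sketch of the same 
check-before-chown idea, with the path and user as illustrative placeholders:
{code}
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.UserPrincipal;
import java.util.stream.Stream;

public class ChownIfNeeded {
    static void chownIfNeeded(Path root, String user) throws IOException {
        UserPrincipal target = root.getFileSystem()
                .getUserPrincipalLookupService().lookupPrincipalByName(user);
        // Cheap check first: skip the expensive recursive walk when the
        // root already belongs to the target user.
        if (Files.getOwner(root).equals(target)) {
            return;
        }
        try (Stream<Path> paths = Files.walk(root)) {
            paths.forEach(p -> {
                try {
                    Files.setOwner(p, target);
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
        }
    }

    public static void main(String[] args) throws IOException {
        chownIfNeeded(Paths.get("/hawq_data"), "gpadmin"); // illustrative values
    }
}
{code}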



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-17996) HAWQ service advisor shows wrong recommendations on edge cases

2016-08-03 Thread Matt (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-17996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt updated AMBARI-17996:
--
Attachment: AMBARI-17966-trunk-v1.patch

> HAWQ service advisor shows wrong recommendations on edge cases
> --
>
> Key: AMBARI-17996
> URL: https://issues.apache.org/jira/browse/AMBARI-17996
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: trunk, 2.4.0
>Reporter: Matt
>Assignee: Matt
> Fix For: trunk, 2.4.0
>
> Attachments: AMBARI-17966-trunk-orig.patch, 
> AMBARI-17966-trunk-v1.patch
>
>
> HAWQ service advisor shows wrong recommendations on edge cases



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-17996) HAWQ service advisor shows wrong recommendations on edge cases

2016-08-03 Thread Matt (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-17996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt updated AMBARI-17996:
--
Attachment: (was: AMBARI-17966-trunk-v1.patch)

> HAWQ service advisor shows wrong recommendations on edge cases
> --
>
> Key: AMBARI-17996
> URL: https://issues.apache.org/jira/browse/AMBARI-17996
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: trunk, 2.4.0
>Reporter: Matt
>Assignee: Matt
> Fix For: trunk, 2.4.0
>
> Attachments: AMBARI-17966-trunk-orig.patch, 
> AMBARI-17966-trunk-v1.patch
>
>
> HAWQ service advisor shows wrong recommendations on edge cases



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-17996) HAWQ service advisor shows wrong recommendations on edge cases

2016-08-03 Thread Matt (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-17996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt updated AMBARI-17996:
--
Attachment: AMBARI-17966-trunk-v1.patch

> HAWQ service advisor shows wrong recommendations on edge cases
> --
>
> Key: AMBARI-17996
> URL: https://issues.apache.org/jira/browse/AMBARI-17996
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: trunk, 2.4.0
>Reporter: Matt
>Assignee: Matt
> Fix For: trunk, 2.4.0
>
> Attachments: AMBARI-17966-trunk-orig.patch, 
> AMBARI-17966-trunk-v1.patch
>
>
> HAWQ service advisor shows wrong recommendations on edge cases



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-17983) (1). Revert AMBARI-16031 (Create "/hadoop/llap/local" on each host and disk in Kerberized cluster for LLAP), and (2). Remove the config 'hive.llap.daemon.work.dirs' a

2016-08-03 Thread Swapan Shridhar (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-17983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15406869#comment-15406869
 ] 

Swapan Shridhar commented on AMBARI-17983:
--

Tested with {quote}Ambari 2.4.0.0-1070 and HDP 2.5.0.0-1128 {quote}. It works.


Commits:

trunk:

{code}
commit da1701de99e55997e39e0f644c14e7ff2f927fb7
Author: Swapan Shridhar 
Date:   Mon Aug 1 16:04:40 2016 -0700

AMBARI-17983. (1). Revert AMBARI-16031 (Create /hadoop/llap/local on each 
host and disk in Kerberized cluster for LLAP), and (2). Remove the config 
'hive.llap.daemon.work.dirs' as HIVE will manage the work directories itself.
{code}

branch-2.4:

{code}
commit 389c8829219b01f2567f9f8320e61c3d1ccae8e4
Author: Swapan Shridhar 
Date:   Wed Aug 3 17:03:38 2016 -0700

 AMBARI-17983. (1). Revert AMBARI-16031 (Create /hadoop/llap/local on each 
host and disk in Kerberized cluster for LLAP), and (2). Remove the config 
'hive.llap.daemon.work.dirs' as HIVE will manage the work directories itself.
{code}


> (1). Revert AMBARI-16031 (Create "/hadoop/llap/local" on each host and disk 
> in Kerberized cluster for LLAP), and (2). Remove the config 
> 'hive.llap.daemon.work.dirs' as HIVE will manage the work directories itself.
> -
>
> Key: AMBARI-17983
> URL: https://issues.apache.org/jira/browse/AMBARI-17983
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Swapan Shridhar
>Assignee: Swapan Shridhar
> Fix For: 2.4.0
>
> Attachments: AMBARI-17983.patch
>
>
> (1). Revert AMBARI-16031 (Create "/hadoop/llap/local" on each host and disk 
> in Kerberized cluster for LLAP)
>
>- Earlier, we had logic for LLAP to use the YARN work dirs by giving the HIVE 
> config 'hive.llap.daemon.work.dirs' the same value as the YARN config 
> 'yarn.nodemanager.local-dirs'.
>- In a kerberized setup, we were creating a separate directory for LLAP, where 
> the Hive user had permissions.
> 
> 
>- That is not required now, as HIVE itself takes care of the work dirs in 
> kerberized and un-kerberized environments.
>- Thus, the revert.
> (2). Remove the config 'hive.llap.daemon.work.dirs' as HIVE will manage the 
> work directories itself.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18012) Metrics Sink unable to connect to zookeeper

2016-08-03 Thread Aravindan Vijayan (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated AMBARI-18012:
---
Attachment: AMBARI-18012.patch

> Metrics Sink unable to connect to zookeeper
> ---
>
> Key: AMBARI-18012
> URL: https://issues.apache.org/jira/browse/AMBARI-18012
> Project: Ambari
>  Issue Type: Bug
>Affects Versions: 2.5.0
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Critical
> Fix For: 2.5.0
>
> Attachments: AMBARI-18012.patch
>
>
> Test and validate the sink's fallback connection to ZK for finding the collector
> {code}
> 2016-07-14 20:37:01,212 INFO  timeline.HadoopTimelineMetricsSink 
> (AbstractTimelineMetricsSink.java:findPreferredCollectHost(353)) - Collector 
> ambari-sid-5.c.pramod-thangali.internal is not longer live. Removing it from 
> list of know live collector hosts : []
> 2016-07-14 20:37:03,213 WARN  availability.MetricCollectorHAHelper 
> (MetricCollectorHAHelper.java:findLiveCollectorHostsFromZNode(83)) - Unable 
> to connect to zookeeper.
> java.lang.IllegalStateException: Client is not started
> at 
> org.apache.hadoop.metrics2.sink.relocated.google.common.base.Preconditions.checkState(Preconditions.java:149)
> at 
> org.apache.hadoop.metrics2.sink.relocated.curator.CuratorZookeeperClient.getZooKeeper(CuratorZookeeperClient.java:113)
> at 
> org.apache.hadoop.metrics2.sink.timeline.availability.MetricCollectorHAHelper$1.call(MetricCollectorHAHelper.java:77)
> at 
> org.apache.hadoop.metrics2.sink.timeline.availability.MetricCollectorHAHelper$1.call(MetricCollectorHAHelper.java:74)
> at 
> org.apache.hadoop.metrics2.sink.relocated.curator.RetryLoop.callWithRetry(RetryLoop.java:107)
> at 
> org.apache.hadoop.metrics2.sink.timeline.availability.MetricCollectorHAHelper.findLiveCollectorHostsFromZNode(MetricCollectorHAHelper.java:74)
> at 
> org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSink.findPreferredCollectHost(AbstractTimelineMetricsSink.java:363)
> at 
> org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSink.emitMetrics(AbstractTimelineMetricsSink.java:209)
> at 
> org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink.putMetrics(HadoopTimelineMetricsSink.java:315)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:186)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:43)
> at 
> org.apache.hadoop.metrics2.impl.SinkQueue.consumeAll(SinkQueue.java:87)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.publishMetricsFromQueue(MetricsSinkAdapter.java:134)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter$1.run(MetricsSinkAdapter.java:88)
> 2016-07-14 20:37:03,245 WARN  timeline.HadoopTimelineMetricsSink 
> (AbstractTimelineMetricsSink.java:findLiveCollectorHostsFromKnownCollector(433))
>  - Unable to conne
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18012) Metrics Sink unable to connect to zookeeper

2016-08-03 Thread Aravindan Vijayan (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated AMBARI-18012:
---
Status: Patch Available  (was: Open)

> Metrics Sink unable to connect to zookeeper
> ---
>
> Key: AMBARI-18012
> URL: https://issues.apache.org/jira/browse/AMBARI-18012
> Project: Ambari
>  Issue Type: Bug
>Affects Versions: 2.5.0
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Critical
> Fix For: 2.5.0
>
> Attachments: AMBARI-18012.patch
>
>
> Test and validate the sink's fallback connection to ZK for finding the collector
> {code}
> 2016-07-14 20:37:01,212 INFO  timeline.HadoopTimelineMetricsSink 
> (AbstractTimelineMetricsSink.java:findPreferredCollectHost(353)) - Collector 
> ambari-sid-5.c.pramod-thangali.internal is not longer live. Removing it from 
> list of know live collector hosts : []
> 2016-07-14 20:37:03,213 WARN  availability.MetricCollectorHAHelper 
> (MetricCollectorHAHelper.java:findLiveCollectorHostsFromZNode(83)) - Unable 
> to connect to zookeeper.
> java.lang.IllegalStateException: Client is not started
> at 
> org.apache.hadoop.metrics2.sink.relocated.google.common.base.Preconditions.checkState(Preconditions.java:149)
> at 
> org.apache.hadoop.metrics2.sink.relocated.curator.CuratorZookeeperClient.getZooKeeper(CuratorZookeeperClient.java:113)
> at 
> org.apache.hadoop.metrics2.sink.timeline.availability.MetricCollectorHAHelper$1.call(MetricCollectorHAHelper.java:77)
> at 
> org.apache.hadoop.metrics2.sink.timeline.availability.MetricCollectorHAHelper$1.call(MetricCollectorHAHelper.java:74)
> at 
> org.apache.hadoop.metrics2.sink.relocated.curator.RetryLoop.callWithRetry(RetryLoop.java:107)
> at 
> org.apache.hadoop.metrics2.sink.timeline.availability.MetricCollectorHAHelper.findLiveCollectorHostsFromZNode(MetricCollectorHAHelper.java:74)
> at 
> org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSink.findPreferredCollectHost(AbstractTimelineMetricsSink.java:363)
> at 
> org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSink.emitMetrics(AbstractTimelineMetricsSink.java:209)
> at 
> org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink.putMetrics(HadoopTimelineMetricsSink.java:315)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:186)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:43)
> at 
> org.apache.hadoop.metrics2.impl.SinkQueue.consumeAll(SinkQueue.java:87)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.publishMetricsFromQueue(MetricsSinkAdapter.java:134)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter$1.run(MetricsSinkAdapter.java:88)
> 2016-07-14 20:37:03,245 WARN  timeline.HadoopTimelineMetricsSink 
> (AbstractTimelineMetricsSink.java:findLiveCollectorHostsFromKnownCollector(433))
>  - Unable to conne
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-17694) Kafka listeners property does not show SASL_PLAINTEXT protocol when Kerberos is enabled

2016-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-17694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15406808#comment-15406808
 ] 

Hadoop QA commented on AMBARI-17694:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12821921/AMBARI-17694-Aug3.patch
  against trunk revision .

{color:red}-1 patch{color}.  Top-level trunk compilation may be broken.

Console output: 
https://builds.apache.org/job/Ambari-trunk-test-patch/8278//console

This message is automatically generated.

> Kafka listeners property does not show SASL_PLAINTEXT protocol when Kerberos 
> is enabled
> ---
>
> Key: AMBARI-17694
> URL: https://issues.apache.org/jira/browse/AMBARI-17694
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk
>Reporter: Anita Gnanamalar Jebaraj
>Assignee: Anita Gnanamalar Jebaraj
>Priority: Critical
> Fix For: 2.4.0
>
> Attachments: AMBARI-17694-1.patch, AMBARI-17694-Aug3.patch, 
> AMBARI-17694-Jul26.patch, AMBARI-17694.patch
>
>
> When Kerberos is enabled, the protocol for listeners in 
> /etc/kafka/conf/server.properties is updated from PLAINTEXT to PLAINTEXTSASL, 
> even though the Ambari UI shows otherwise.
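
For reference, the effect described is on the listeners line of 
/etc/kafka/conf/server.properties. A sketch of the before/after; the hostname is 
an illustrative placeholder, and the port follows the broker endpoints seen 
elsewhere in this digest:
{noformat}
# before kerberizing
listeners=PLAINTEXT://broker-host.example.com:6667
# after kerberizing (what Ambari writes, and what the UI should also show)
listeners=PLAINTEXTSASL://broker-host.example.com:6667
{noformat}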



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18013) HiveHook fails to post messages to kafka due to missing keytab config in /etc/hive/conf/atlas-application.properties in kerberized cluster

2016-08-03 Thread Alejandro Fernandez (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Fernandez updated AMBARI-18013:
-
Attachment: AMBARI-18013.patch

> HiveHook fails to post messages to kafka due to missing keytab config in 
> /etc/hive/conf/atlas-application.properties in kerberized cluster
> --
>
> Key: AMBARI-18013
> URL: https://issues.apache.org/jira/browse/AMBARI-18013
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: 2.4.0
>Reporter: Alejandro Fernandez
>Assignee: Alejandro Fernandez
> Fix For: 2.4.0
>
> Attachments: AMBARI-18013.patch
>
>
> STR:
> * Install Ambari 2.4
> * HDP 2.5 with Hive and Atlas
> * Kerberize the cluster
> The hive hook fails because two configs are missing from 
> hive-atlas-application.properties:
> {noformat}
> atlas.jaas.KafkaClient.option.keyTab=/etc/security/keytabs/hive.service.keytab
> atlas.jaas.KafkaClient.option.principal=hive/_h...@example.com
> {noformat}
> *Impact: HiveHook related tests are failing.*
> {noformat}
> 2016-07-29 10:25:50,087 INFO  [Atlas Logger 1]: producer.ProducerConfig 
> (AbstractConfig.java:logAll(178)) - ProducerConfig values:
>   metric.reporters = []
>   metadata.max.age.ms = 30
>   reconnect.backoff.ms = 50
>   sasl.kerberos.ticket.renew.window.factor = 0.8
>   bootstrap.servers = [atlas-r6-bug-62789-1023re-2.openstacklocal:6667, 
> atlas-r6-bug-62789-1023re-1.openstacklocal:6667]
>   ssl.keystore.type = JKS
>   sasl.mechanism = GSSAPI
>   max.block.ms = 6
>   interceptor.classes = null
>   ssl.truststore.password = null
>   client.id =
>   ssl.endpoint.identification.algorithm = null
>   request.timeout.ms = 3
>   acks = 1
>   receive.buffer.bytes = 32768
>   ssl.truststore.type = JKS
>   retries = 0
>   ssl.truststore.location = null
>   ssl.keystore.password = null
>   send.buffer.bytes = 131072
>   compression.type = none
>   metadata.fetch.timeout.ms = 6
>   retry.backoff.ms = 100
>   sasl.kerberos.kinit.cmd = /usr/bin/kinit
>   buffer.memory = 33554432
>   timeout.ms = 3
>   key.serializer = class 
> org.apache.kafka.common.serialization.StringSerializer
>   sasl.kerberos.service.name = kafka
>   sasl.kerberos.ticket.renew.jitter = 0.05
>   ssl.trustmanager.algorithm = PKIX
>   block.on.buffer.full = false
>   ssl.key.password = null
>   sasl.kerberos.min.time.before.relogin = 6
>   connections.max.idle.ms = 54
>   max.in.flight.requests.per.connection = 5
>   metrics.num.samples = 2
>   ssl.protocol = TLS
>   ssl.provider = null
>   ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
>   batch.size = 16384
>   ssl.keystore.location = null
>   ssl.cipher.suites = null
>   security.protocol = PLAINTEXTSASL
>   max.request.size = 1048576
>   value.serializer = class 
> org.apache.kafka.common.serialization.StringSerializer
>   ssl.keymanager.algorithm = SunX509
>   metrics.sample.window.ms = 3
>   partitioner.class = class 
> org.apache.kafka.clients.producer.internals.DefaultPartitioner
>   linger.ms = 0
> 2016-07-29 10:25:50,091 INFO  [Atlas Logger 1]: producer.KafkaProducer 
> (KafkaProducer.java:close(658)) - Closing the Kafka producer with 
> timeoutMillis = 0 ms.
> 2016-07-29 10:25:50,091 INFO  [Atlas Logger 1]: hook.AtlasHook 
> (AtlasHook.java:notifyEntitiesInternal(131)) - Failed to notify atlas for 
> entity [[{Id='(type: hive_db, id: )', traits=[], 
> values={owner=public, ownerType=2, qualifiedName=default@cl1, 
> clusterName=cl1, name=default, description=Default Hive database, 
> location=hdfs://atlas-r6-bug-62789-1023re-1.openstacklocal:8020/apps/hive/warehouse,
>  parameters={}}}, {Id='(type: hive_table, id: )', traits=[], 
> values={owner=hrt_qa, temporary=false, lastAccessTime=Fri Jul 29 10:25:49 UTC 
> 2016, qualifiedName=default.t2@cl1, columns=[{Id='(type: hive_column, id: 
> )', traits=[], values={owner=hrt_qa, 
> qualifiedName=default.t2.abc@cl1, name=abc, comment=null, type=string, 
> table=(type: hive_table, id: )}}], sd={Id='(type: 
> hive_storagedesc, id: )', traits=[], 
> values={qualifiedName=default.t2@cl1_storage, storedAsSubDirectories=false, 
> location=hdfs://atlas-r6-bug-62789-1023re-1.openstacklocal:8020/apps/hive/warehouse/t2,
>  compressed=false, inputFormat=org.apache.hadoop.mapred.TextInputFormat, 
> outputFormat=org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, 
> parameters={}, serdeInfo=org.apache.atlas.typesystem.Struct@7648946d, 
> table=(type: hive_table, id: ), numBuckets=-1}}, 
> 

[jira] [Updated] (AMBARI-18013) HiveHook fails to post messages to kafka due to missing keytab config in /etc/hive/conf/atlas-application.properties in kerberized cluster

2016-08-03 Thread Alejandro Fernandez (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Fernandez updated AMBARI-18013:
-
Status: Patch Available  (was: Open)

> HiveHook fails to post messages to kafka due to missing keytab config in 
> /etc/hive/conf/atlas-application.properties in kerberized cluster
> --
>
> Key: AMBARI-18013
> URL: https://issues.apache.org/jira/browse/AMBARI-18013
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: 2.4.0
>Reporter: Alejandro Fernandez
>Assignee: Alejandro Fernandez
> Fix For: 2.4.0
>
> Attachments: AMBARI-18013.patch
>
>
> STR:
> * Install Ambari 2.4
> * HDP 2.5 with Hive and Atlas
> * Kerberize the cluster
> The hive hook fails because two configs are missing from 
> hive-atlas-application.properties:
> {noformat}
> atlas.jaas.KafkaClient.option.keyTab=/etc/security/keytabs/hive.service.keytab
> atlas.jaas.KafkaClient.option.principal=hive/_h...@example.com
> {noformat}
> *Impact: HiveHook related tests are failing.*
> {noformat}
> 2016-07-29 10:25:50,087 INFO  [Atlas Logger 1]: producer.ProducerConfig 
> (AbstractConfig.java:logAll(178)) - ProducerConfig values:
>   metric.reporters = []
>   metadata.max.age.ms = 30
>   reconnect.backoff.ms = 50
>   sasl.kerberos.ticket.renew.window.factor = 0.8
>   bootstrap.servers = [atlas-r6-bug-62789-1023re-2.openstacklocal:6667, 
> atlas-r6-bug-62789-1023re-1.openstacklocal:6667]
>   ssl.keystore.type = JKS
>   sasl.mechanism = GSSAPI
>   max.block.ms = 6
>   interceptor.classes = null
>   ssl.truststore.password = null
>   client.id =
>   ssl.endpoint.identification.algorithm = null
>   request.timeout.ms = 3
>   acks = 1
>   receive.buffer.bytes = 32768
>   ssl.truststore.type = JKS
>   retries = 0
>   ssl.truststore.location = null
>   ssl.keystore.password = null
>   send.buffer.bytes = 131072
>   compression.type = none
>   metadata.fetch.timeout.ms = 6
>   retry.backoff.ms = 100
>   sasl.kerberos.kinit.cmd = /usr/bin/kinit
>   buffer.memory = 33554432
>   timeout.ms = 3
>   key.serializer = class 
> org.apache.kafka.common.serialization.StringSerializer
>   sasl.kerberos.service.name = kafka
>   sasl.kerberos.ticket.renew.jitter = 0.05
>   ssl.trustmanager.algorithm = PKIX
>   block.on.buffer.full = false
>   ssl.key.password = null
>   sasl.kerberos.min.time.before.relogin = 6
>   connections.max.idle.ms = 54
>   max.in.flight.requests.per.connection = 5
>   metrics.num.samples = 2
>   ssl.protocol = TLS
>   ssl.provider = null
>   ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
>   batch.size = 16384
>   ssl.keystore.location = null
>   ssl.cipher.suites = null
>   security.protocol = PLAINTEXTSASL
>   max.request.size = 1048576
>   value.serializer = class 
> org.apache.kafka.common.serialization.StringSerializer
>   ssl.keymanager.algorithm = SunX509
>   metrics.sample.window.ms = 3
>   partitioner.class = class 
> org.apache.kafka.clients.producer.internals.DefaultPartitioner
>   linger.ms = 0
> 2016-07-29 10:25:50,091 INFO  [Atlas Logger 1]: producer.KafkaProducer 
> (KafkaProducer.java:close(658)) - Closing the Kafka producer with 
> timeoutMillis = 0 ms.
> 2016-07-29 10:25:50,091 INFO  [Atlas Logger 1]: hook.AtlasHook 
> (AtlasHook.java:notifyEntitiesInternal(131)) - Failed to notify atlas for 
> entity [[{Id='(type: hive_db, id: )', traits=[], 
> values={owner=public, ownerType=2, qualifiedName=default@cl1, 
> clusterName=cl1, name=default, description=Default Hive database, 
> location=hdfs://atlas-r6-bug-62789-1023re-1.openstacklocal:8020/apps/hive/warehouse,
>  parameters={}}}, {Id='(type: hive_table, id: )', traits=[], 
> values={owner=hrt_qa, temporary=false, lastAccessTime=Fri Jul 29 10:25:49 UTC 
> 2016, qualifiedName=default.t2@cl1, columns=[{Id='(type: hive_column, id: 
> )', traits=[], values={owner=hrt_qa, 
> qualifiedName=default.t2.abc@cl1, name=abc, comment=null, type=string, 
> table=(type: hive_table, id: )}}], sd={Id='(type: 
> hive_storagedesc, id: )', traits=[], 
> values={qualifiedName=default.t2@cl1_storage, storedAsSubDirectories=false, 
> location=hdfs://atlas-r6-bug-62789-1023re-1.openstacklocal:8020/apps/hive/warehouse/t2,
>  compressed=false, inputFormat=org.apache.hadoop.mapred.TextInputFormat, 
> outputFormat=org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, 
> parameters={}, serdeInfo=org.apache.atlas.typesystem.Struct@7648946d, 
> table=(type: hive_table, id: ), numBuckets=-1}}, 
> 

[jira] [Updated] (AMBARI-18013) HiveHook fails to post messages to kafka due to missing keytab config in /etc/hive/conf/atlas-application.properties in kerberized cluster

2016-08-03 Thread Alejandro Fernandez (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Fernandez updated AMBARI-18013:
-
Description: 
STR:
* Install Ambari 2.4
* HDP 2.5 with Hive and Atlas
* Kerberize the cluster

The hive hook fails because two configs are missing from 
hive-atlas-application.properties:
{noformat}
atlas.jaas.KafkaClient.option.keyTab=/etc/security/keytabs/hive.service.keytab
atlas.jaas.KafkaClient.option.principal=hive/_h...@example.com
{noformat}

*Impact: HiveHook related tests are failing.*
{noformat}
2016-07-29 10:25:50,087 INFO  [Atlas Logger 1]: producer.ProducerConfig 
(AbstractConfig.java:logAll(178)) - ProducerConfig values:
metric.reporters = []
metadata.max.age.ms = 30
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
bootstrap.servers = [atlas-r6-bug-62789-1023re-2.openstacklocal:6667, 
atlas-r6-bug-62789-1023re-1.openstacklocal:6667]
ssl.keystore.type = JKS
sasl.mechanism = GSSAPI
max.block.ms = 6
interceptor.classes = null
ssl.truststore.password = null
client.id =
ssl.endpoint.identification.algorithm = null
request.timeout.ms = 3
acks = 1
receive.buffer.bytes = 32768
ssl.truststore.type = JKS
retries = 0
ssl.truststore.location = null
ssl.keystore.password = null
send.buffer.bytes = 131072
compression.type = none
metadata.fetch.timeout.ms = 6
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
buffer.memory = 33554432
timeout.ms = 3
key.serializer = class 
org.apache.kafka.common.serialization.StringSerializer
sasl.kerberos.service.name = kafka
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.trustmanager.algorithm = PKIX
block.on.buffer.full = false
ssl.key.password = null
sasl.kerberos.min.time.before.relogin = 6
connections.max.idle.ms = 54
max.in.flight.requests.per.connection = 5
metrics.num.samples = 2
ssl.protocol = TLS
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
batch.size = 16384
ssl.keystore.location = null
ssl.cipher.suites = null
security.protocol = PLAINTEXTSASL
max.request.size = 1048576
value.serializer = class 
org.apache.kafka.common.serialization.StringSerializer
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 3
partitioner.class = class 
org.apache.kafka.clients.producer.internals.DefaultPartitioner
linger.ms = 0

2016-07-29 10:25:50,091 INFO  [Atlas Logger 1]: producer.KafkaProducer 
(KafkaProducer.java:close(658)) - Closing the Kafka producer with timeoutMillis 
= 0 ms.
2016-07-29 10:25:50,091 INFO  [Atlas Logger 1]: hook.AtlasHook 
(AtlasHook.java:notifyEntitiesInternal(131)) - Failed to notify atlas for 
entity [[{Id='(type: hive_db, id: )', traits=[], 
values={owner=public, ownerType=2, qualifiedName=default@cl1, clusterName=cl1, 
name=default, description=Default Hive database, 
location=hdfs://atlas-r6-bug-62789-1023re-1.openstacklocal:8020/apps/hive/warehouse,
 parameters={}}}, {Id='(type: hive_table, id: )', traits=[], 
values={owner=hrt_qa, temporary=false, lastAccessTime=Fri Jul 29 10:25:49 UTC 
2016, qualifiedName=default.t2@cl1, columns=[{Id='(type: hive_column, id: 
)', traits=[], values={owner=hrt_qa, 
qualifiedName=default.t2.abc@cl1, name=abc, comment=null, type=string, 
table=(type: hive_table, id: )}}], sd={Id='(type: hive_storagedesc, 
id: )', traits=[], values={qualifiedName=default.t2@cl1_storage, 
storedAsSubDirectories=false, 
location=hdfs://atlas-r6-bug-62789-1023re-1.openstacklocal:8020/apps/hive/warehouse/t2,
 compressed=false, inputFormat=org.apache.hadoop.mapred.TextInputFormat, 
outputFormat=org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, 
parameters={}, serdeInfo=org.apache.atlas.typesystem.Struct@7648946d, 
table=(type: hive_table, id: ), numBuckets=-1}}, 
tableType=MANAGED_TABLE, createTime=Fri Jul 29 10:25:49 UTC 2016, name=t2, 
comment=null, partitionKeys=[], parameters={totalSize=0, numRows=0, 
rawDataSize=0, COLUMN_STATS_ACCURATE={"BASIC_STATS":"true"}, numFiles=0, 
transient_lastDdlTime=1469787949}, retention=0, db={Id='(type: hive_db, id: 
)', traits=[], values={owner=public, ownerType=2, 
qualifiedName=default@cl1, clusterName=cl1, name=default, description=Default 
Hive database, 
location=hdfs://atlas-r6-bug-62789-1023re-1.openstacklocal:8020/apps/hive/warehouse,
 parameters={}]]. Retrying
org.apache.kafka.common.KafkaException: Failed to construct kafka producer
at 
org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:335)
at 

[jira] [Created] (AMBARI-18013) HiveHook fails to post messages to kafka due to missing keytab config in /etc/hive/conf/atlas-application.properties in kerberized cluster

2016-08-03 Thread Alejandro Fernandez (JIRA)
Alejandro Fernandez created AMBARI-18013:


 Summary: HiveHook fails to post messages to kafka due to missing 
keytab config in /etc/hive/conf/atlas-application.properties in kerberized 
cluster
 Key: AMBARI-18013
 URL: https://issues.apache.org/jira/browse/AMBARI-18013
 Project: Ambari
  Issue Type: Bug
  Components: stacks
Affects Versions: 2.4.0
Reporter: Alejandro Fernandez
Assignee: Alejandro Fernandez
 Fix For: 2.4.0


STR:
* Install Ambari 2.4
* HDP 2.5 with Hive and Atlas
* Kerberize the cluster

The Hive hook fails because two configs are missing from 
hive-atlas-application.properties:
{noformat}
atlas.jaas.KafkaClient.option.keyTab=/etc/security/keytabs/hive.service.keytab
atlas.jaas.KafkaClient.option.principal=hive/_HOST@EXAMPLE.COM
{noformat}
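
For quick diagnosis on an affected node, here is a minimal Java sketch (illustrative only, not part of the patch; the file path comes from the issue title) that reports whether the two properties are present:
{code:java}
// Illustrative check for the two JAAS properties named in this bug.
// Assumes the hook's config lives at /etc/hive/conf/atlas-application.properties.
import java.io.FileInputStream;
import java.util.Properties;

public class CheckAtlasJaasProps {
    public static void main(String[] args) throws Exception {
        Properties p = new Properties();
        try (FileInputStream in =
                 new FileInputStream("/etc/hive/conf/atlas-application.properties")) {
            p.load(in);  // plain java.util.Properties format
        }
        for (String key : new String[] {
                "atlas.jaas.KafkaClient.option.keyTab",
                "atlas.jaas.KafkaClient.option.principal"}) {
            // Prints <MISSING> on clusters that exhibit this bug.
            System.out.println(key + " = " + p.getProperty(key, "<MISSING>"));
        }
    }
}
{code}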

*Impact: HiveHook related tests are failing.*
{noformat}
2016-07-29 10:25:50,087 INFO  [Atlas Logger 1]: producer.ProducerConfig 
(AbstractConfig.java:logAll(178)) - ProducerConfig values:
metric.reporters = []
metadata.max.age.ms = 30
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
bootstrap.servers = [atlas-r6-bug-62789-1023re-2.openstacklocal:6667, 
atlas-r6-bug-62789-1023re-1.openstacklocal:6667]
ssl.keystore.type = JKS
sasl.mechanism = GSSAPI
max.block.ms = 6
interceptor.classes = null
ssl.truststore.password = null
client.id =
ssl.endpoint.identification.algorithm = null
request.timeout.ms = 3
acks = 1
receive.buffer.bytes = 32768
ssl.truststore.type = JKS
retries = 0
ssl.truststore.location = null
ssl.keystore.password = null
send.buffer.bytes = 131072
compression.type = none
metadata.fetch.timeout.ms = 6
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
buffer.memory = 33554432
timeout.ms = 3
key.serializer = class 
org.apache.kafka.common.serialization.StringSerializer
sasl.kerberos.service.name = kafka
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.trustmanager.algorithm = PKIX
block.on.buffer.full = false
ssl.key.password = null
sasl.kerberos.min.time.before.relogin = 6
connections.max.idle.ms = 54
max.in.flight.requests.per.connection = 5
metrics.num.samples = 2
ssl.protocol = TLS
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
batch.size = 16384
ssl.keystore.location = null
ssl.cipher.suites = null
security.protocol = PLAINTEXTSASL
max.request.size = 1048576
value.serializer = class 
org.apache.kafka.common.serialization.StringSerializer
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 3
partitioner.class = class 
org.apache.kafka.clients.producer.internals.DefaultPartitioner
linger.ms = 0

2016-07-29 10:25:50,091 INFO  [Atlas Logger 1]: producer.KafkaProducer 
(KafkaProducer.java:close(658)) - Closing the Kafka producer with timeoutMillis 
= 0 ms.
2016-07-29 10:25:50,091 INFO  [Atlas Logger 1]: hook.AtlasHook 
(AtlasHook.java:notifyEntitiesInternal(131)) - Failed to notify atlas for 
entity [[{Id='(type: hive_db, id: )', traits=[], 
values={owner=public, ownerType=2, qualifiedName=default@cl1, clusterName=cl1, 
name=default, description=Default Hive database, 
location=hdfs://atlas-r6-bug-62789-1023re-1.openstacklocal:8020/apps/hive/warehouse,
 parameters={}}}, {Id='(type: hive_table, id: )', traits=[], 
values={owner=hrt_qa, temporary=false, lastAccessTime=Fri Jul 29 10:25:49 UTC 
2016, qualifiedName=default.t2@cl1, columns=[{Id='(type: hive_column, id: 
)', traits=[], values={owner=hrt_qa, 
qualifiedName=default.t2.abc@cl1, name=abc, comment=null, type=string, 
table=(type: hive_table, id: )}}], sd={Id='(type: hive_storagedesc, 
id: )', traits=[], values={qualifiedName=default.t2@cl1_storage, 
storedAsSubDirectories=false, 
location=hdfs://atlas-r6-bug-62789-1023re-1.openstacklocal:8020/apps/hive/warehouse/t2,
 compressed=false, inputFormat=org.apache.hadoop.mapred.TextInputFormat, 
outputFormat=org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, 
parameters={}, serdeInfo=org.apache.atlas.typesystem.Struct@7648946d, 
table=(type: hive_table, id: ), numBuckets=-1}}, 
tableType=MANAGED_TABLE, createTime=Fri Jul 29 10:25:49 UTC 2016, name=t2, 
comment=null, partitionKeys=[], parameters={totalSize=0, numRows=0, 
rawDataSize=0, COLUMN_STATS_ACCURATE={"BASIC_STATS":"true"}, numFiles=0, 
transient_lastDdlTime=1469787949}, retention=0, db={Id='(type: hive_db, id: 
)', traits=[], values={owner=public, ownerType=2, 
qualifiedName=default@cl1, clusterName=cl1, name=default, description=Default 
Hive database, 

[jira] [Commented] (AMBARI-18009) Ranger usersync pid location not configurable

2016-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15406784#comment-15406784
 ] 

Hadoop QA commented on AMBARI-18009:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12821928/0001-AMBARI-18009-Ranger-usersync-pid-location-not-config.patch
  against trunk revision .

{color:red}-1 patch{color}.  Top-level trunk compilation may be broken.

Console output: 
https://builds.apache.org/job/Ambari-trunk-test-patch/8277//console

This message is automatically generated.

> Ranger usersync pid location not configurable
> -
>
> Key: AMBARI-18009
> URL: https://issues.apache.org/jira/browse/AMBARI-18009
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk, 2.4.0
>Reporter: Yujie Li
> Attachments: 
> 0001-AMBARI-18009-Ranger-usersync-pid-location-not-config.patch
>
>
> During the installation of Ranger via Ambari UI, users should be able to 
> change the directory for the Ranger usersync pid file. But changing it has no 
> effect, since setup_ranger_xml.py in ambari-server doesn't have the code to 
> support the customization.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-17938) Ambari should not recursively chown for HAWQ hdfs upon every start

2016-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-17938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15406782#comment-15406782
 ] 

Hadoop QA commented on AMBARI-17938:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12821927/AMBARI-17938.v3.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/Ambari-trunk-test-patch/8276//console

This message is automatically generated.

> Ambari should not recursively chown for HAWQ hdfs upon every start
> --
>
> Key: AMBARI-17938
> URL: https://issues.apache.org/jira/browse/AMBARI-17938
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk, 2.4.0
>Reporter: Lav Jain
>Assignee: Lav Jain
>  Labels: performance
> Fix For: 2.4.0
>
> Attachments: AMBARI-17938.patch, AMBARI-17938.v2.patch, 
> AMBARI-17938.v3.patch
>
>
> This results in changing the owner even if the owner value is the same. The 
> operation is very costly if there are a lot of subdirectories.
> The owner value only changes when you switch from regular mode to secure mode 
> and vice versa.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18011) Add api for bulk delete host component

2016-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15406756#comment-15406756
 ] 

Hadoop QA commented on AMBARI-18011:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12821930/rb50450.patch
  against trunk revision .

{color:red}-1 patch{color}.  Top-level trunk compilation may be broken.

Console output: 
https://builds.apache.org/job/Ambari-trunk-test-patch/8275//console

This message is automatically generated.

> Add api for bulk delete host component
> --
>
> Key: AMBARI-18011
> URL: https://issues.apache.org/jira/browse/AMBARI-18011
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Affects Versions: 2.5.0
>Reporter: Ajit Kumar
>Assignee: Ajit Kumar
> Fix For: 2.5.0
>
> Attachments: rb50450.patch
>
>
> This API takes in a query and, instead of failing fast on the first error, 
> makes a best effort to delete all requested host components. The response 
> should be a JSON object listing the deleted keys and the keys that failed 
> to delete, with the exception.
> Sample API calls:
> Delete all host components on a set of hosts:
> {code}
> Request:
> curl -i -uadmin:admin -H 'X-Requested-By: ambari' -X DELETE 
> http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/host_components -d 
> '{"RequestInfo":{"query":"HostRoles/host_name.in(c6401.ambari.apache.org,c6402.ambari.apache.org)"}}'
> Response
> {
>   "deleteResult" : [
> {
>   "deleted" : {
> "key" : "c6401.ambari.apache.org/HIVE_METASTORE"
>   }
> },
> {
>   "deleted" : {
> "key" : "c6402.ambari.apache.org/MYSQL_SERVER"
>   }
> },
> {
>   "error" : {
> "key" : "c6402.ambari.apache.org/RESOURCEMANAGER",
> "code" : 500,
> "message" : "org.apache.ambari.server.AmbariException: Host Component 
> cannot be removed, clusterName=c1, serviceName=YARN, 
> componentName=RESOURCEMANAGER, hostname=c6402.ambari.apache.org, request={ 
> clusterName=c1, serviceName=YARN, componentName=RESOURCEMANAGER, 
> hostname=c6402.ambari.apache.org, desiredState=null, state=null, 
> desiredStackId=null, staleConfig=null, adminState=null}"
>   }
> }
>   ]
> }
> {code}
> Delete selected host components on a set of hosts
> {code} 
> Request:
> curl -i -uadmin:admin -H 'X-Requested-By: ambari' -X DELETE 
> http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/host_components -d 
> '{"RequestInfo":{"query":"HostRoles/host_name.in(c6401.ambari.apache.org,c6402.ambari.apache.org)/component_name.in(NODEMANAGER)"}}'
> Response:
> {
>   "deleteResult" : [
> {
>   "deleted" : {
> "key" : "c6401.ambari.apache.org/NODEMANAGER"
>   }
> },
> {
>   "error" : {
> "key" : "c6402.ambari.apache.org/NODEMANAGER",
> "code" : 500,
> "message" : "org.apache.ambari.server.AmbariException: Host Component 
> cannot be removed, clusterName=c1, serviceName=YARN, 
> componentName=NODEMANAGER, hostname=c6402.ambari.apache.org, request={ 
> clusterName=c1, serviceName=YARN, componentName=NODEMANAGER, 
> hostname=c6402.ambari.apache.org, desiredState=null, state=null, 
> desiredStackId=null, staleConfig=null, adminState=null}"
>   }
> }
>   ]
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (AMBARI-17983) (1). Revert AMBARI-16031 (Create "/hadoop/llap/local" on each host and disk in Kerberized cluster for LLAP), and (2). Remove the config 'hive.llap.daemon.work.dirs' as

2016-08-03 Thread Swapan Shridhar (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-17983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapan Shridhar resolved AMBARI-17983.
--
Resolution: Fixed

> (1). Revert AMBARI-16031 (Create "/hadoop/llap/local" on each host and disk 
> in Kerberized cluster for LLAP), and (2). Remove the config 
> 'hive.llap.daemon.work.dirs' as HIVE will manage the work directories itself.
> -
>
> Key: AMBARI-17983
> URL: https://issues.apache.org/jira/browse/AMBARI-17983
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Swapan Shridhar
>Assignee: Swapan Shridhar
> Fix For: 2.4.0
>
> Attachments: AMBARI-17983.patch
>
>
> (1). Revert AMBARI-16031 (Create "/hadoop/llap/local" on each host and disk 
> in Kerberized cluster for LLAP)
>
>- Earlier, we had logic to reuse the YARN work dirs for LLAP by giving the 
> HIVE config 'hive.llap.daemon.work.dirs' the same value as the YARN config 
> 'yarn.nodemanager.local-dirs'.
>- In a kerberized setup, we created a separate directory for LLAP where the 
> Hive user had permissions.
>
>- That is no longer required, as HIVE itself takes care of the work dirs in 
> kerberized and un-kerberized environments. 
>- Hence the revert.
> (2). Remove the config 'hive.llap.daemon.work.dirs' as HIVE will manage the 
> work directories itself.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (AMBARI-18012) Metrics Sink unable to connect to zookeeper

2016-08-03 Thread Aravindan Vijayan (JIRA)
Aravindan Vijayan created AMBARI-18012:
--

 Summary: Metrics Sink unable to connect to zookeeper
 Key: AMBARI-18012
 URL: https://issues.apache.org/jira/browse/AMBARI-18012
 Project: Ambari
  Issue Type: Bug
Affects Versions: 2.5.0
Reporter: Aravindan Vijayan
Assignee: Aravindan Vijayan
Priority: Critical
 Fix For: 2.5.0


Test and validate the sink's fallback connection to ZooKeeper for finding the collector.

{code}
2016-07-14 20:37:01,212 INFO  timeline.HadoopTimelineMetricsSink 
(AbstractTimelineMetricsSink.java:findPreferredCollectHost(353)) - Collector 
ambari-sid-5.c.pramod-thangali.internal is not longer live. Removing it from 
list of know live collector hosts : []
2016-07-14 20:37:03,213 WARN  availability.MetricCollectorHAHelper 
(MetricCollectorHAHelper.java:findLiveCollectorHostsFromZNode(83)) - Unable to 
connect to zookeeper.
java.lang.IllegalStateException: Client is not started
at 
org.apache.hadoop.metrics2.sink.relocated.google.common.base.Preconditions.checkState(Preconditions.java:149)
at 
org.apache.hadoop.metrics2.sink.relocated.curator.CuratorZookeeperClient.getZooKeeper(CuratorZookeeperClient.java:113)
at 
org.apache.hadoop.metrics2.sink.timeline.availability.MetricCollectorHAHelper$1.call(MetricCollectorHAHelper.java:77)
at 
org.apache.hadoop.metrics2.sink.timeline.availability.MetricCollectorHAHelper$1.call(MetricCollectorHAHelper.java:74)
at 
org.apache.hadoop.metrics2.sink.relocated.curator.RetryLoop.callWithRetry(RetryLoop.java:107)
at 
org.apache.hadoop.metrics2.sink.timeline.availability.MetricCollectorHAHelper.findLiveCollectorHostsFromZNode(MetricCollectorHAHelper.java:74)
at 
org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSink.findPreferredCollectHost(AbstractTimelineMetricsSink.java:363)
at 
org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSink.emitMetrics(AbstractTimelineMetricsSink.java:209)
at 
org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink.putMetrics(HadoopTimelineMetricsSink.java:315)
at 
org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:186)
at 
org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:43)
at 
org.apache.hadoop.metrics2.impl.SinkQueue.consumeAll(SinkQueue.java:87)
at 
org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.publishMetricsFromQueue(MetricsSinkAdapter.java:134)
at 
org.apache.hadoop.metrics2.impl.MetricsSinkAdapter$1.run(MetricsSinkAdapter.java:88)
2016-07-14 20:37:03,245 WARN  timeline.HadoopTimelineMetricsSink 
(AbstractTimelineMetricsSink.java:findLiveCollectorHostsFromKnownCollector(433))
 - Unable to conne
{code}
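
For context, the exception above comes from using a Curator client before start() is called. A minimal sketch of the expected usage (illustrative only: the connect string and znode path are assumptions, and the sink actually uses a relocated copy of Curator):
{code:java}
import java.util.List;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class ZkCollectorLookup {
    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "localhost:2181", new ExponentialBackoffRetry(1000, 3));
        // Without this call, any use of the client fails with
        // IllegalStateException: "Client is not started".
        client.start();
        try {
            // Hypothetical znode path; the real sink reads its configured path.
            List<String> collectors =
                    client.getChildren().forPath("/ambari-metrics-cluster");
            System.out.println("live collectors: " + collectors);
        } finally {
            client.close();
        }
    }
}
{code}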



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-17983) (1). Revert AMBARI-16031 (Create "/hadoop/llap/local" on each host and disk in Kerberized cluster for LLAP), and (2). Remove the config 'hive.llap.daemon.work.dirs' a

2016-08-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-17983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15406728#comment-15406728
 ] 

Hudson commented on AMBARI-17983:
-

SUCCESS: Integrated in Ambari-trunk-Commit #5448 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/5448/])
AMBARI-17983. (1). Revert AMBARI-16031 (Create /hadoop/llap/local on 
(sshridhar: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git=commit=da1701de99e55997e39e0f644c14e7ff2f927fb7])
* ambari-web/app/views/main/service/item.js
* ambari-web/app/messages.js
* 
ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/package/scripts/yarn.py
* ambari-web/app/models/host_component.js
* 
ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/package/scripts/params_linux.py
* ambari-server/src/main/resources/stacks/HDP/2.5/services/HIVE/kerberos.json
* ambari-web/test/views/main/service/item_test.js
* ambari-web/app/controllers/main/service/item.js
* 
ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/package/scripts/nodemanager.py
* ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/metainfo.xml
* 
ambari-server/src/main/resources/stacks/HDP/2.5/services/HIVE/configuration/hive-interactive-site.xml


> (1). Revert AMBARI-16031 (Create "/hadoop/llap/local" on each host and disk 
> in Kerberized cluster for LLAP), and (2). Remove the config 
> 'hive.llap.daemon.work.dirs' as HIVE will manage the work directories itself.
> -
>
> Key: AMBARI-17983
> URL: https://issues.apache.org/jira/browse/AMBARI-17983
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Swapan Shridhar
>Assignee: Swapan Shridhar
> Fix For: 2.4.0
>
> Attachments: AMBARI-17983.patch
>
>
> (1). Revert AMBARI-16031 (Create "/hadoop/llap/local" on each host and disk 
> in Kerberized cluster for LLAP)
>
>- Earlier, we had logic to reuse the YARN work dirs for LLAP by giving the 
> HIVE config 'hive.llap.daemon.work.dirs' the same value as the YARN config 
> 'yarn.nodemanager.local-dirs'.
>- In a kerberized setup, we created a separate directory for LLAP where the 
> Hive user had permissions.
>
>- That is no longer required, as HIVE itself takes care of the work dirs in 
> kerberized and un-kerberized environments. 
>- Hence the revert.
> (2). Remove the config 'hive.llap.daemon.work.dirs' as HIVE will manage the 
> work directories itself.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-16031) Create "/hadoop/llap/local" on each host and disk in Kerberized cluster for LLAP

2016-08-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-16031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15406727#comment-15406727
 ] 

Hudson commented on AMBARI-16031:
-

SUCCESS: Integrated in Ambari-trunk-Commit #5448 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/5448/])
AMBARI-17983. (1). Revert AMBARI-16031 (Create /hadoop/llap/local on 
(sshridhar: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git=commit=da1701de99e55997e39e0f644c14e7ff2f927fb7])
* ambari-web/app/views/main/service/item.js
* ambari-web/app/messages.js
* 
ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/package/scripts/yarn.py
* ambari-web/app/models/host_component.js
* 
ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/package/scripts/params_linux.py
* ambari-server/src/main/resources/stacks/HDP/2.5/services/HIVE/kerberos.json
* ambari-web/test/views/main/service/item_test.js
* ambari-web/app/controllers/main/service/item.js
* 
ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/package/scripts/nodemanager.py
* ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/metainfo.xml
* 
ambari-server/src/main/resources/stacks/HDP/2.5/services/HIVE/configuration/hive-interactive-site.xml


> Create "/hadoop/llap/local" on each host and disk in Kerberized cluster for 
> LLAP
> 
>
> Key: AMBARI-16031
> URL: https://issues.apache.org/jira/browse/AMBARI-16031
> Project: Ambari
>  Issue Type: Story
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Alejandro Fernandez
>Assignee: Alejandro Fernandez
> Fix For: 2.4.0
>
> Attachments: AMBARI-16031.trunk.patch
>
>
> - In non-kerberized cluster, hive.llap.daemon.work.dir will point to : 
> "$\{yarn.nodemanager.local-dirs\}”
> - In kerberized cluster, we need to create "/hadoop/llap/local" on each node 
> and disk and have "hive.llap.daemon.work.dirs" point to it.
> - It's similar to the way yarn.nodemanager.local-dirs is created.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-17308) Ambari Logfeeder outputs a lot of errors due to parse date

2016-08-03 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/AMBARI-17308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Olivér Szabó updated AMBARI-17308:
--
   Resolution: Fixed
Fix Version/s: 2.4.0
   Status: Resolved  (was: Patch Available)

committed to trunk:
{code:java}
commit bfa24d139aa9d10925850165702024e7a094e473
Author: oleewere 
Date:   Thu Aug 4 00:05:20 2016 +0200

AMBARI-17308. Ambari Logfeeder outputs a lot of errors due to parse date 
(ambari-audit.log) (Masahiro Tanaka via oleewere)
{code}
committed to branch-2.4:
{code:java}
commit c41565852d93f1af6cf0278d752b839689948f15
Author: oleewere 
Date:   Thu Aug 4 00:05:20 2016 +0200

AMBARI-17308. Ambari Logfeeder outputs a lot of errors due to parse date 
(ambari-audit.log) (Masahiro Tanaka via oleewere)
{code}

> Ambari Logfeeder outputs a lot of errors due to parse date
> --
>
> Key: AMBARI-17308
> URL: https://issues.apache.org/jira/browse/AMBARI-17308
> Project: Ambari
>  Issue Type: Bug
>Affects Versions: trunk, 2.4.0
> Environment: CentOS7.2, JST
>Reporter: Masahiro Tanaka
>Assignee: Masahiro Tanaka
> Fix For: 2.4.0
>
> Attachments: AMBARI-17308.1.patch, AMBARI-17308.2.patch, 
> AMBARI-17308.3.patch, AMBARI-17308.patch
>
>
> In logsearch_feeder service log, we got errors like below
> {code}
> 2016-06-20 15:28:09,368 ERROR file=ambari-audit.log 
> org.apache.ambari.logfeeder.mapper.MapperDate LogFeederUtil.java:356 - Error 
> applying date transformation. isEpoch=false, 
> dateFormat=yyyy-MM-dd'T'HH:mm:ss.SSSZ, value=2016-06-20T15:28:08.000. 
> mapClass=map_date, input=input:source=file, 
> path=/var/log/ambari-server/ambari-audit.log, fieldName=logtime. Messages 
> suppressed before: 2
> java.text.ParseException: Unparseable date: "2016-06-20T15:28:08.000"
>   at java.text.DateFormat.parse(DateFormat.java:366)
>   at 
> org.apache.ambari.logfeeder.mapper.MapperDate.apply(MapperDate.java:83)
>   at org.apache.ambari.logfeeder.filter.Filter.apply(Filter.java:154)
>   at 
> org.apache.ambari.logfeeder.filter.FilterGrok.applyMessage(FilterGrok.java:291)
>   at 
> org.apache.ambari.logfeeder.filter.FilterGrok.flush(FilterGrok.java:320)
>   at org.apache.ambari.logfeeder.input.Input.flush(Input.java:125)
>   at 
> org.apache.ambari.logfeeder.input.InputFile.processFile(InputFile.java:430)
>   at org.apache.ambari.logfeeder.input.InputFile.start(InputFile.java:260)
>   at org.apache.ambari.logfeeder.input.Input.run(Input.java:100)
>   at java.lang.Thread.run(Thread.java:745) 
> {code}
> ambari-audit.log is like below
> {code}
> 2016-07-21T01:52:49.875+09, User(admin), RemoteIp(192.168.72.1), 
> Operation(Repository update), RequestType(PUT), 
> url(http://192.168.72.101:8080/api/v1/stacks/HDP/versions/2.5/operating_systems/ubuntu14/repositories/HDP-2.5),
>  ResultStatus(200 OK), Stack(HDP), Stack version(2.5), OS(ubuntu14), Repo 
> id(HDP-2.5), Base 
> URL(http://s3.amazonaws.com/dev.hortonworks.com/HDP/ubuntu14/2.x/BUILDS/2.5.0.0-1025)
> 2016-07-21T01:52:49.905+09, User(admin), RemoteIp(192.168.72.1), 
> Operation(Repository update), RequestType(PUT), 
> url(http://192.168.72.101:8080/api/v1/stacks/HDP/versions/2.5/operating_systems/ubuntu16/repositories/HDP-2.5),
>  ResultStatus(200 OK), Stack(HDP), Stack version(2.5), OS(ubuntu16), Repo 
> id(HDP-2.5), Base 
> URL(http://s3.amazonaws.com/dev.hortonworks.com/HDP/ubuntu16/2.x/BUILDS/2.5.0.0-1025)
> 2016-07-21T01:52:50.015+09, User(admin), RemoteIp(192.168.72.1), 
> Operation(Repository update), RequestType(PUT), 
> url(http://192.168.72.101:8080/api/v1/stacks/HDP/versions/2.5/operating_systems/ubuntu14/repositories/HDP-UTILS-1.1.0.21),
>  ResultStatus(200 OK), Stack(HDP), Stack version(2.5), OS(ubuntu14), Repo 
> id(HDP-UTILS-1.1.0.21), Base 
> URL(http://s3.amazonaws.com/dev.hortonworks.com/HDP-UTILS-1.1.0.21/repos/ubuntu14)
> {code}
> I think the date format of ambari-audit.log ({{2016-07-21T01:52:49.875+09}}) 
> should be like {{2016-07-21T01:52:49.875+0900}}, since the grok pattern can't 
> handle the {{2016-07-21T01:52:49.875+09}} format. A short demo follows.
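
A small self-contained Java demo of the mismatch (the first pattern is taken from the error above; the X variant is only an illustration of a zone letter that tolerates the short offset, not necessarily the fix that was committed):
{code:java}
import java.text.ParseException;
import java.text.SimpleDateFormat;

public class AuditDateParseDemo {
    public static void main(String[] args) {
        // Z expects an RFC 822 zone such as +0900; the short +09 fails.
        tryParse("yyyy-MM-dd'T'HH:mm:ss.SSSZ", "2016-07-21T01:52:49.875+0900"); // parses
        tryParse("yyyy-MM-dd'T'HH:mm:ss.SSSZ", "2016-07-21T01:52:49.875+09");   // fails
        // X (ISO 8601, Java 7+) accepts the short zone-hour form.
        tryParse("yyyy-MM-dd'T'HH:mm:ss.SSSX", "2016-07-21T01:52:49.875+09");   // parses
    }

    static void tryParse(String pattern, String value) {
        try {
            System.out.println(value + " -> " + new SimpleDateFormat(pattern).parse(value));
        } catch (ParseException e) {
            System.out.println(value + " -> unparseable with " + pattern);
        }
    }
}
{code}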



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18011) Add api for bulk delete host component

2016-08-03 Thread Ajit Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajit Kumar updated AMBARI-18011:

Description: 
This API takes in a query and, instead of failing fast on the first error, 
makes a best effort to delete all requested host components. The response 
should be a JSON object listing the deleted keys and the keys that failed 
to delete, with the exception.
Sample API calls:
Delete all host components on a set of hosts:
{code}

Request:
curl -i -uadmin:admin -H 'X-Requested-By: ambari' -X DELETE 
http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/host_components -d 
'{"RequestInfo":{"query":"HostRoles/host_name.in(c6401.ambari.apache.org,c6402.ambari.apache.org)"}}'

Response
{
  "deleteResult" : [
{
  "deleted" : {
"key" : "c6401.ambari.apache.org/HIVE_METASTORE"
  }
},
{
  "deleted" : {
"key" : "c6402.ambari.apache.org/MYSQL_SERVER"
  }
},
{
  "error" : {
"key" : "c6402.ambari.apache.org/RESOURCEMANAGER",
"code" : 500,
"message" : "org.apache.ambari.server.AmbariException: Host Component 
cannot be removed, clusterName=c1, serviceName=YARN, 
componentName=RESOURCEMANAGER, hostname=c6402.ambari.apache.org, request={ 
clusterName=c1, serviceName=YARN, componentName=RESOURCEMANAGER, 
hostname=c6402.ambari.apache.org, desiredState=null, state=null, 
desiredStackId=null, staleConfig=null, adminState=null}"
  }
}
  ]
}
{code}
Delete selected host components on a set of hosts
{code} 
Request:
curl -i -uadmin:admin -H 'X-Requested-By: ambari' -X DELETE 
http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/host_components -d 
'{"RequestInfo":{"query":"HostRoles/host_name.in(c6401.ambari.apache.org,c6402.ambari.apache.org)/component_name.in(NODEMANAGER)"}}'

Response:
{
  "deleteResult" : [
{
  "deleted" : {
"key" : "c6401.ambari.apache.org/NODEMANAGER"
  }
},
{
  "error" : {
"key" : "c6402.ambari.apache.org/NODEMANAGER",
"code" : 500,
"message" : "org.apache.ambari.server.AmbariException: Host Component 
cannot be removed, clusterName=c1, serviceName=YARN, componentName=NODEMANAGER, 
hostname=c6402.ambari.apache.org, request={ clusterName=c1, serviceName=YARN, 
componentName=NODEMANAGER, hostname=c6402.ambari.apache.org, desiredState=null, 
state=null, desiredStackId=null, staleConfig=null, adminState=null}"
  }
}
  ]
}
{code}

  was:
This API takes in a query and, instead of failing fast on the first error, 
makes a best effort to delete all requested host components. The response 
should be a JSON object listing the deleted keys and the keys that failed 
to delete, with the exception.
Sample API calls:
Delete all host components on a set of hosts:
{code}

delete http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/host_components 
-d 
'{"RequestInfo":{"query":"HostRoles/host_name.in(c6401.ambari.apache.org,c6402.ambari.apache.org)"}}'
{
  "deleteResult" : [
{
  "deleted" : {
"key" : "c6402.ambari.apache.org/HIVE_METASTORE"
  }
},
{
  "deleted" : {
"key" : "c6402.ambari.apache.org/MYSQL_SERVER"
  }
},
{
  "error" : {
"key" : "c6402.ambari.apache.org/RESOURCEMANAGER",
"code" : 500,
"message" : "org.apache.ambari.server.AmbariException: Host Component 
cannot be removed, clusterName=c1, serviceName=YARN, 
componentName=RESOURCEMANAGER, hostname=c6402.ambari.apache.org, request={ 
clusterName=c1, serviceName=YARN, componentName=RESOURCEMANAGER, 
hostname=c6402.ambari.apache.org, desiredState=null, state=null, 
desiredStackId=null, staleConfig=null, adminState=null}"
  }
}
  ]
}
{code}
Delete selected host components on a set of hosts
{code} 
delete http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/host_components 
-d 
'{"RequestInfo":{"query":"HostRoles/host_name.in(c6401.ambari.apache.org,c6402.ambari.apache.org)/component_name.in(NODEMANAGER)"}}'
{
  "deleteResult" : [
{
  "deleted" : {
"key" : "c6401.ambari.apache.org/NODEMANAGER"
  }
},
{
  "error" : {
"key" : "c6402.ambari.apache.org/NODEMANAGER",
"code" : 500,
"message" : "org.apache.ambari.server.AmbariException: Host Component 
cannot be removed, clusterName=c1, serviceName=YARN, componentName=NODEMANAGER, 
hostname=c6402.ambari.apache.org, request={ clusterName=c1, serviceName=YARN, 
componentName=NODEMANAGER, hostname=c6402.ambari.apache.org, desiredState=null, 
state=null, desiredStackId=null, staleConfig=null, adminState=null}"
  }
}
  ]
}
{code}


> Add api for bulk delete host component
> --
>
> Key: AMBARI-18011
> URL: https://issues.apache.org/jira/browse/AMBARI-18011
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Affects Versions: 2.5.0
>Reporter: Ajit 

[jira] [Updated] (AMBARI-18011) Add api for bulk delete host component

2016-08-03 Thread Ajit Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajit Kumar updated AMBARI-18011:

Description: 
This API takes in a query and, instead of failing fast on the first error, 
makes a best effort to delete all requested host components. The response 
should be a JSON object listing the deleted keys and the keys that failed 
to delete, with the exception.
Sample API calls:
Delete all host components on a set of hosts:
{code}

delete http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/host_components 
-d 
'{"RequestInfo":{"query":"HostRoles/host_name.in(c6401.ambari.apache.org,c6402.ambari.apache.org)"}}'
{
  "deleteResult" : [
{
  "deleted" : {
"key" : "c6402.ambari.apache.org/HIVE_METASTORE"
  }
},
{
  "deleted" : {
"key" : "c6402.ambari.apache.org/MYSQL_SERVER"
  }
},
{
  "error" : {
"key" : "c6402.ambari.apache.org/RESOURCEMANAGER",
"code" : 500,
"message" : "org.apache.ambari.server.AmbariException: Host Component 
cannot be removed, clusterName=c1, serviceName=YARN, 
componentName=RESOURCEMANAGER, hostname=c6402.ambari.apache.org, request={ 
clusterName=c1, serviceName=YARN, componentName=RESOURCEMANAGER, 
hostname=c6402.ambari.apache.org, desiredState=null, state=null, 
desiredStackId=null, staleConfig=null, adminState=null}"
  }
}
  ]
}
{code}
Delete selected host components on a set of hosts
{code} 
delete http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/host_components 
-d 
'{"RequestInfo":{"query":"HostRoles/host_name.in(c6401.ambari.apache.org,c6402.ambari.apache.org)/component_name.in(NODEMANAGER)"}}'
{
  "deleteResult" : [
{
  "deleted" : {
"key" : "c6401.ambari.apache.org/NODEMANAGER"
  }
},
{
  "error" : {
"key" : "c6402.ambari.apache.org/NODEMANAGER",
"code" : 500,
"message" : "org.apache.ambari.server.AmbariException: Host Component 
cannot be removed, clusterName=c1, serviceName=YARN, componentName=NODEMANAGER, 
hostname=c6402.ambari.apache.org, request={ clusterName=c1, serviceName=YARN, 
componentName=NODEMANAGER, hostname=c6402.ambari.apache.org, desiredState=null, 
state=null, desiredStackId=null, staleConfig=null, adminState=null}"
  }
}
  ]
}
{code}

  was:
This API takes in a query and, instead of failing fast on the first error, 
makes a best effort to delete all requested host components. The response 
should be a JSON object listing the deleted keys and the keys that failed 
to delete, with the exception.
Sample API call:
{code}

delete http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/host_components 
-d 
'{"RequestInfo":{"query":"HostRoles/host_name.in(c6401.ambari.apache.org,c6402.ambari.apache.org)"}}'
{
  "deleteResult" : [
{
  "deleted" : {
"key" : "c6402.ambari.apache.org/HIVE_METASTORE"
  }
},
{
  "deleted" : {
"key" : "c6402.ambari.apache.org/MYSQL_SERVER"
  }
},
{
  "error" : {
"key" : "c6402.ambari.apache.org/RESOURCEMANAGER",
"code" : 500,
"message" : "org.apache.ambari.server.AmbariException: Host Component 
cannot be removed, clusterName=c1, serviceName=YARN, 
componentName=RESOURCEMANAGER, hostname=c6402.ambari.apache.org, request={ 
clusterName=c1, serviceName=YARN, componentName=RESOURCEMANAGER, 
hostname=c6402.ambari.apache.org, desiredState=null, state=null, 
desiredStackId=null, staleConfig=null, adminState=null}"
  }
}
  ]
}
{code}


> Add api for bulk delete host component
> --
>
> Key: AMBARI-18011
> URL: https://issues.apache.org/jira/browse/AMBARI-18011
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Affects Versions: 2.5.0
>Reporter: Ajit Kumar
>Assignee: Ajit Kumar
> Fix For: 2.5.0
>
> Attachments: rb50450.patch
>
>
> This API takes in a query and, instead of failing fast on the first error, 
> makes a best effort to delete all requested host components. The response 
> should be a JSON object listing the deleted keys and the keys that failed 
> to delete, with the exception.
> Sample API calls:
> Delete all host components on a set of hosts:
> {code}
> delete http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/host_components 
> -d 
> '{"RequestInfo":{"query":"HostRoles/host_name.in(c6401.ambari.apache.org,c6402.ambari.apache.org)"}}'
> {
>   "deleteResult" : [
> {
>   "deleted" : {
> "key" : "c6402.ambari.apache.org/HIVE_METASTORE"
>   }
> },
> {
>   "deleted" : {
> "key" : "c6402.ambari.apache.org/MYSQL_SERVER"
>   }
> },
> {
>   "error" : {
> "key" : "c6402.ambari.apache.org/RESOURCEMANAGER",
> "code" : 500,
> "message" : "org.apache.ambari.server.AmbariException: Host Component 
> cannot be removed, clusterName=c1, serviceName=YARN, 

[jira] [Commented] (AMBARI-17938) Ambari should not recursively chown for HAWQ hdfs upon every start

2016-08-03 Thread Lav Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-17938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15406659#comment-15406659
 ] 

Lav Jain commented on AMBARI-17938:
---

BRANCH 2.4

author  Lav Jain  
Wed, 3 Aug 2016 14:31:12 -0700 (14:31 -0700)
commit  516ec8663136a57e274f2fc85e7ae31a34c1ba63

TRUNK

author  Lav Jain  
Wed, 3 Aug 2016 14:33:13 -0700 (14:33 -0700)
commit  a90b7f7ccd9131463dbc49691f1e26f187d0aa09

> Ambari should not recursively chown for HAWQ hdfs upon every start
> --
>
> Key: AMBARI-17938
> URL: https://issues.apache.org/jira/browse/AMBARI-17938
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk, 2.4.0
>Reporter: Lav Jain
>Assignee: Lav Jain
>  Labels: performance
> Fix For: 2.4.0
>
> Attachments: AMBARI-17938.patch, AMBARI-17938.v2.patch, 
> AMBARI-17938.v3.patch
>
>
> This results in changing the owner even if the owner value is the same. The 
> operation is very costly if there are a lot of subdirectories.
> The owner value only changes when you switch from regular mode to secure mode 
> and vice versa.
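
The guard implied by the fix can be sketched in Java against the Hadoop FileSystem API (illustrative only; the actual change lives in Ambari's Python service scripts, and the path and owner below are assumptions):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ChownIfNeeded {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path root = new Path("/hawq_data");  // illustrative HAWQ data dir
        FileStatus st = fs.getFileStatus(root);
        if (!"gpadmin".equals(st.getOwner())) {
            // Owner differs (e.g. after toggling secure mode): chown is needed.
            // A recursive walk over subdirectories would go here; omitted for brevity.
            fs.setOwner(root, "gpadmin", st.getGroup());
        } else {
            // Owner already matches: skip the expensive recursive chown entirely.
            System.out.println("Owner already gpadmin; skipping recursive chown");
        }
    }
}
{code}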



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18011) Add api for bulk delete host component

2016-08-03 Thread Ajit Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajit Kumar updated AMBARI-18011:

Status: Patch Available  (was: Open)

> Add api for bulk delete host component
> --
>
> Key: AMBARI-18011
> URL: https://issues.apache.org/jira/browse/AMBARI-18011
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Affects Versions: 2.5.0
>Reporter: Ajit Kumar
>Assignee: Ajit Kumar
> Fix For: 2.5.0
>
> Attachments: rb50450.patch
>
>
> This API takes in a query and, instead of failing fast on the first error, 
> makes a best effort to delete all requested host components. The response 
> should be a JSON object listing the deleted keys and the keys that failed 
> to delete, with the exception.
> Sample API call:
> {code}
> delete http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/host_components 
> -d 
> '{"RequestInfo":{"query":"HostRoles/host_name.in(c6401.ambari.apache.org,c6402.ambari.apache.org)"}}'
> {
>   "deleteResult" : [
> {
>   "deleted" : {
> "key" : "c6402.ambari.apache.org/HIVE_METASTORE"
>   }
> },
> {
>   "deleted" : {
> "key" : "c6402.ambari.apache.org/MYSQL_SERVER"
>   }
> },
> {
>   "error" : {
> "key" : "c6402.ambari.apache.org/RESOURCEMANAGER",
> "code" : 500,
> "message" : "org.apache.ambari.server.AmbariException: Host Component 
> cannot be removed, clusterName=c1, serviceName=YARN, 
> componentName=RESOURCEMANAGER, hostname=c6402.ambari.apache.org, request={ 
> clusterName=c1, serviceName=YARN, componentName=RESOURCEMANAGER, 
> hostname=c6402.ambari.apache.org, desiredState=null, state=null, 
> desiredStackId=null, staleConfig=null, adminState=null}"
>   }
> }
>   ]
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18011) Add api for bulk delete host component

2016-08-03 Thread Ajit Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajit Kumar updated AMBARI-18011:

Attachment: rb50450.patch

> Add api for bulk delete host component
> --
>
> Key: AMBARI-18011
> URL: https://issues.apache.org/jira/browse/AMBARI-18011
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Affects Versions: 2.5.0
>Reporter: Ajit Kumar
>Assignee: Ajit Kumar
> Fix For: 2.5.0
>
> Attachments: rb50450.patch
>
>
> This API takes in a query and, instead of failing fast on the first error, 
> makes a best effort to delete all requested host components. The response 
> should be a JSON object listing the deleted keys and the keys that failed 
> to delete, with the exception.
> Sample API call:
> {code}
> delete http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/host_components 
> -d 
> '{"RequestInfo":{"query":"HostRoles/host_name.in(c6401.ambari.apache.org,c6402.ambari.apache.org)"}}'
> {
>   "deleteResult" : [
> {
>   "deleted" : {
> "key" : "c6402.ambari.apache.org/HIVE_METASTORE"
>   }
> },
> {
>   "deleted" : {
> "key" : "c6402.ambari.apache.org/MYSQL_SERVER"
>   }
> },
> {
>   "error" : {
> "key" : "c6402.ambari.apache.org/RESOURCEMANAGER",
> "code" : 500,
> "message" : "org.apache.ambari.server.AmbariException: Host Component 
> cannot be removed, clusterName=c1, serviceName=YARN, 
> componentName=RESOURCEMANAGER, hostname=c6402.ambari.apache.org, request={ 
> clusterName=c1, serviceName=YARN, componentName=RESOURCEMANAGER, 
> hostname=c6402.ambari.apache.org, desiredState=null, state=null, 
> desiredStackId=null, staleConfig=null, adminState=null}"
>   }
> }
>   ]
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18009) Ranger usersync pid location not configurable

2016-08-03 Thread Yujie Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yujie Li updated AMBARI-18009:
--
Status: Patch Available  (was: Open)

> Ranger usersync pid location not configurable
> -
>
> Key: AMBARI-18009
> URL: https://issues.apache.org/jira/browse/AMBARI-18009
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk, 2.4.0
>Reporter: Yujie Li
> Attachments: 
> 0001-AMBARI-18009-Ranger-usersync-pid-location-not-config.patch
>
>
> During the installation of Ranger via Ambari UI, users should be able to 
> change the directory for the Ranger usersync pid file. But changing it has no 
> effect, since setup_ranger_xml.py in ambari-server doesn't have the code to 
> support the customization.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18009) Ranger usersync pid location not configurable

2016-08-03 Thread Yujie Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yujie Li updated AMBARI-18009:
--
Attachment: 0001-AMBARI-18009-Ranger-usersync-pid-location-not-config.patch

> Ranger usersync pid location not configurable
> -
>
> Key: AMBARI-18009
> URL: https://issues.apache.org/jira/browse/AMBARI-18009
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk, 2.4.0
>Reporter: Yujie Li
> Attachments: 
> 0001-AMBARI-18009-Ranger-usersync-pid-location-not-config.patch
>
>
> During the installation of Ranger via Ambari UI, users should be able to 
> change the directory for the Ranger usersync pid file. But changing it has no 
> effect, since setup_ranger_xml.py in ambari-server doesn't have the code to 
> support the customization.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18009) Ranger usersync pid location not configurable

2016-08-03 Thread Yujie Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yujie Li updated AMBARI-18009:
--
Affects Version/s: trunk
   Status: Patch Available  (was: Open)

> Ranger usersync pid location not configurable
> -
>
> Key: AMBARI-18009
> URL: https://issues.apache.org/jira/browse/AMBARI-18009
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk, 2.4.0
>Reporter: Yujie Li
> Attachments: 
> 0001-AMBARI-18009-Ranger-usersync-pid-location-not-config.patch
>
>
> During the installation of Ranger via Ambari UI, users should be able to 
> change the directory for the Ranger usersync pid file. But changing it has no 
> effect, since setup_ranger_xml.py in ambari-server doesn't have the code to 
> support the customization.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18009) Ranger usersync pid location not configurable

2016-08-03 Thread Yujie Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yujie Li updated AMBARI-18009:
--
Status: Open  (was: Patch Available)

> Ranger usersync pid location not configurable
> -
>
> Key: AMBARI-18009
> URL: https://issues.apache.org/jira/browse/AMBARI-18009
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk, 2.4.0
>Reporter: Yujie Li
> Attachments: 
> 0001-AMBARI-18009-Ranger-usersync-pid-location-not-config.patch
>
>
> During the installation of Ranger via Ambari UI, users should be able to 
> change the directory for the Ranger usersync pid file. But changing it has no 
> effect, since setup_ranger_xml.py in ambari-server doesn't have the code to 
> support the customization.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-17938) Ambari should not recursively chown for HAWQ hdfs upon every start

2016-08-03 Thread Lav Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-17938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lav Jain updated AMBARI-17938:
--
Attachment: AMBARI-17938.v3.patch

> Ambari should not recursively chown for HAWQ hdfs upon every start
> --
>
> Key: AMBARI-17938
> URL: https://issues.apache.org/jira/browse/AMBARI-17938
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk, 2.4.0
>Reporter: Lav Jain
>Assignee: Lav Jain
>  Labels: performance
> Fix For: 2.4.0
>
> Attachments: AMBARI-17938.patch, AMBARI-17938.v2.patch, 
> AMBARI-17938.v3.patch
>
>
> This results in changing the owner even if the owner value is the same. The 
> operation is very costly if there are a lot of subdirectories.
> The owner value only changes when you switch from regular mode to secure mode 
> and vice versa.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (AMBARI-18011) Add api for bulk delete host component

2016-08-03 Thread Ajit Kumar (JIRA)
Ajit Kumar created AMBARI-18011:
---

 Summary: Add api for bulk delete host component
 Key: AMBARI-18011
 URL: https://issues.apache.org/jira/browse/AMBARI-18011
 Project: Ambari
  Issue Type: Task
  Components: ambari-server
Affects Versions: 2.5.0
Reporter: Ajit Kumar
Assignee: Ajit Kumar
 Fix For: 2.5.0


This API takes in a query and, instead of failing fast on the first error, 
makes a best effort to delete all requested host components. The response 
should be a JSON object listing the deleted keys and the keys that failed 
to delete, with the exception.
Sample API call:
{code}

delete http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/host_components 
-d 
'{"RequestInfo":{"query":"HostRoles/host_name.in(c6401.ambari.apache.org,c6402.ambari.apache.org)"}}'
{
  "deleteResult" : [
{
  "deleted" : {
"key" : "c6402.ambari.apache.org/HIVE_METASTORE"
  }
},
{
  "deleted" : {
"key" : "c6402.ambari.apache.org/MYSQL_SERVER"
  }
},
{
  "error" : {
"key" : "c6402.ambari.apache.org/RESOURCEMANAGER",
"code" : 500,
"message" : "org.apache.ambari.server.AmbariException: Host Component 
cannot be removed, clusterName=c1, serviceName=YARN, 
componentName=RESOURCEMANAGER, hostname=c6402.ambari.apache.org, request={ 
clusterName=c1, serviceName=YARN, componentName=RESOURCEMANAGER, 
hostname=c6402.ambari.apache.org, desiredState=null, state=null, 
desiredStackId=null, staleConfig=null, adminState=null}"
  }
}
  ]
}
{code}
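
A hedged sketch of consuming such a response with Jackson (hypothetical client-side code, not part of the patch):
{code:java}
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class DeleteResultDemo {
    public static void main(String[] args) throws Exception {
        // Abbreviated response in the shape shown above.
        String json = "{ \"deleteResult\" : ["
            + "{ \"deleted\" : { \"key\" : \"c6402.ambari.apache.org/HIVE_METASTORE\" } },"
            + "{ \"error\" : { \"key\" : \"c6402.ambari.apache.org/RESOURCEMANAGER\","
            + "  \"code\" : 500, \"message\" : \"...\" } } ] }";
        JsonNode results = new ObjectMapper().readTree(json).path("deleteResult");
        for (JsonNode item : results) {
            if (item.has("deleted")) {
                System.out.println("deleted: "
                    + item.path("deleted").path("key").asText());
            } else if (item.has("error")) {
                JsonNode err = item.path("error");
                System.out.println("failed (" + err.path("code").asInt() + "): "
                    + err.path("key").asText());
            }
        }
    }
}
{code}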



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (AMBARI-18010) container-executor.cfg gets overwritten upon restart

2016-08-03 Thread John Vines (JIRA)
John Vines created AMBARI-18010:
---

 Summary: container-executor.cfg gets overwritten upon restart
 Key: AMBARI-18010
 URL: https://issues.apache.org/jira/browse/AMBARI-18010
 Project: Ambari
  Issue Type: Bug
  Components: ambari-agent
Affects Versions: 2.2.1
Reporter: John Vines


Ambari provides no means to manage allowed.system.users in 
container-executor.cfg (AMBARI-9151). That's not a deal breaker. After manually 
updating the cfg files to add the allowed users I needed, everything worked fine 
until I did a restart of my cluster. Afterward I found that the 
container-executor.cfg had been reverted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-17938) Ambari should not recursively chown for HAWQ hdfs upon every start

2016-08-03 Thread Lav Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-17938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lav Jain updated AMBARI-17938:
--
Attachment: AMBARI-17938.v2.patch

> Ambari should not recursively chown for HAWQ hdfs upon every start
> --
>
> Key: AMBARI-17938
> URL: https://issues.apache.org/jira/browse/AMBARI-17938
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk, 2.4.0
>Reporter: Lav Jain
>Assignee: Lav Jain
>  Labels: performance
> Fix For: 2.4.0
>
> Attachments: AMBARI-17938.patch, AMBARI-17938.v2.patch
>
>
> This results in changing the owner even if the owner value is the same. The 
> operation is very costly if there are a lot of subdirectories.
> The owner value only changes when you switch from regular mode to secure mode 
> and vice versa.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-17694) Kafka listeners property does not show SASL_PLAINTEXT protocol when Kerberos is enabled

2016-08-03 Thread Anita Gnanamalar Jebaraj (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-17694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anita Gnanamalar Jebaraj updated AMBARI-17694:
--
Status: Patch Available  (was: In Progress)

> Kafka listeners property does not show SASL_PLAINTEXT protocol when Kerberos 
> is enabled
> ---
>
> Key: AMBARI-17694
> URL: https://issues.apache.org/jira/browse/AMBARI-17694
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk
>Reporter: Anita Gnanamalar Jebaraj
>Assignee: Anita Gnanamalar Jebaraj
>Priority: Critical
> Fix For: 2.4.0
>
> Attachments: AMBARI-17694-1.patch, AMBARI-17694-Aug3.patch, 
> AMBARI-17694-Jul26.patch, AMBARI-17694.patch
>
>
> When Kerberos is enabled, the protocol for listeners in 
> /etc/kafka/conf/server.properties is updated from PLAINTEXT to PLAINTEXTSASL, 
> even though the Ambari UI shows otherwise.
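
A minimal diagnostic sketch for comparing the file with what the UI shows (illustrative Java; the file path comes from the description above):
{code:java}
import java.io.FileInputStream;
import java.util.Properties;

public class ShowKafkaListeners {
    public static void main(String[] args) throws Exception {
        Properties p = new Properties();
        try (FileInputStream in =
                 new FileInputStream("/etc/kafka/conf/server.properties")) {
            p.load(in);
        }
        // On a kerberized cluster this line is expected to start with
        // PLAINTEXTSASL://, regardless of what the Ambari UI displays.
        System.out.println("listeners = " + p.getProperty("listeners"));
    }
}
{code}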



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-17694) Kafka listeners property does not show SASL_PLAINTEXT protocol when Kerberos is enabled

2016-08-03 Thread Anita Gnanamalar Jebaraj (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-17694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anita Gnanamalar Jebaraj updated AMBARI-17694:
--
Attachment: AMBARI-17694-Aug3.patch

> Kafka listeners property does not show SASL_PLAINTEXT protocol when Kerberos 
> is enabled
> ---
>
> Key: AMBARI-17694
> URL: https://issues.apache.org/jira/browse/AMBARI-17694
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk
>Reporter: Anita Gnanamalar Jebaraj
>Assignee: Anita Gnanamalar Jebaraj
>Priority: Critical
> Fix For: 2.4.0
>
> Attachments: AMBARI-17694-1.patch, AMBARI-17694-Aug3.patch, 
> AMBARI-17694-Jul26.patch, AMBARI-17694.patch
>
>
> When Kerberos is enabled, the protocol for listeners in 
> /etc/kafka/conf/server.properties is updated from PLAINTEXT to PLAINTEXTSASL, 
> even though the Ambari UI shows otherwise.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-17937) Ambari install/init should create a new gpadmin database

2016-08-03 Thread Lav Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-17937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lav Jain updated AMBARI-17937:
--
   Resolution: Fixed
Fix Version/s: trunk
   Status: Resolved  (was: Patch Available)

> Ambari install/init should create a new gpadmin database
> 
>
> Key: AMBARI-17937
> URL: https://issues.apache.org/jira/browse/AMBARI-17937
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk, 2.4.0
>Reporter: Lav Jain
>Assignee: Lav Jain
>Priority: Minor
> Fix For: trunk, 2.4.0
>
> Attachments: AMBARI-17937.patch
>
>
> If you are logged in as gpadmin on the master and type in "psql" to connect 
> to the database, it will fail.  psql assumes you want to connect to the 
> database named "gpadmin" which matches your username. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-17937) Ambari install/init should create a new gpadmin database

2016-08-03 Thread Lav Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-17937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15406599#comment-15406599
 ] 

Lav Jain commented on AMBARI-17937:
---

BRANCH-2.4

commit 8138a9dbee872e1a5309fa929a678fb7ff806b98
Author: Lav Jain 
Wed, 3 Aug 2016 13:48:07 -0700 (13:48 -0700)

TRUNK

commit c2e9b465df3b3c175baf9048784f38dd486a14a6
Author: Lav Jain 
Wed, 3 Aug 2016 13:54:39 -0700 (13:54 -0700)

> Ambari install/init should create a new gpadmin database
> 
>
> Key: AMBARI-17937
> URL: https://issues.apache.org/jira/browse/AMBARI-17937
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk, 2.4.0
>Reporter: Lav Jain
>Assignee: Lav Jain
>Priority: Minor
> Fix For: 2.4.0
>
> Attachments: AMBARI-17937.patch
>
>
> If you are logged in as gpadmin on the master and type in "psql" to connect 
> to the database, it will fail.  psql assumes you want to connect to the 
> database named "gpadmin" which matches your username. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-17954) Fix Spark issues in upgrading and fresh install

2016-08-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-17954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15406576#comment-15406576
 ] 

Hudson commented on AMBARI-17954:
-

SUCCESS: Integrated in Ambari-trunk-Commit #5447 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/5447/])
AMBARI-17954: Fix Spark hdp.version issues in upgrading and fresh (jluniya: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git=commit=9bb77239292f99b935634eafe5e807d61e1d8cb1])
* 
ambari-server/src/main/resources/stacks/HDP/2.4/services/SPARK/configuration/spark-thrift-sparkconf.xml
* 
ambari-server/src/main/resources/stacks/HDP/2.0.6/properties/stack_features.json
* ambari-server/src/main/resources/stacks/HDP/2.3/upgrades/upgrade-2.5.xml
* 
ambari-server/src/main/resources/common-services/SPARK/1.2.1/package/scripts/setup_spark.py
* 
ambari-server/src/main/resources/stacks/HDP/2.3/upgrades/nonrolling-upgrade-2.5.xml
* ambari-server/src/test/python/stacks/2.2/SPARK/test_spark_client.py
* 
ambari-server/src/main/resources/stacks/HDP/2.3/upgrades/nonrolling-upgrade-2.4.xml
* 
ambari-server/src/main/resources/stacks/HDP/2.4/services/SPARK/configuration/spark-defaults.xml
* ambari-server/src/test/python/stacks/2.3/SPARK/test_spark_thrift_server.py
* 
ambari-common/src/main/python/resource_management/libraries/functions/constants.py
* ambari-server/src/main/resources/stacks/HDP/2.3/upgrades/upgrade-2.4.xml
* 
ambari-server/src/main/resources/common-services/SPARK/1.2.1/configuration/spark-defaults.xml
* 
ambari-server/src/main/resources/common-services/SPARK/1.2.1/package/scripts/params.py
* 
ambari-server/src/main/resources/common-services/SPARK/1.5.2/configuration/spark-thrift-sparkconf.xml
* 
ambari-server/src/main/resources/stacks/HDP/2.5/services/SPARK/configuration/spark-defaults.xml
* ambari-server/src/test/python/stacks/2.2/SPARK/test_job_history_server.py
* ambari-server/src/main/resources/stacks/HDP/2.3/upgrades/config-upgrade.xml


> Fix Spark issues in upgrading and fresh install
> ---
>
> Key: AMBARI-17954
> URL: https://issues.apache.org/jira/browse/AMBARI-17954
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: 2.4.0
>Reporter: Saisai Shao
>Assignee: Saisai Shao
>Priority: Critical
> Fix For: 2.4.0
>
>
> Ambari Spark definitions have several issues related to hdp.version; this 
> patch fixes them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-17993) Kerberos identity definitions in Kerberos descriptors should explicitly declare a reference

2016-08-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-17993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15406575#comment-15406575
 ] 

Hudson commented on AMBARI-17993:
-

SUCCESS: Integrated in Ambari-trunk-Commit #5447 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/5447/])
AMBARI-17993. Kerberos identity definitions in Kerberos descriptors (rlevas: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git=commit=5ddeb09897fbf015f2a63fe33c937a99e027bdbe])
* 
ambari-server/src/main/java/org/apache/ambari/server/state/kerberos/KerberosIdentityDescriptor.java
* 
ambari-server/src/main/java/org/apache/ambari/server/controller/KerberosHelperImpl.java
* 
ambari-server/src/test/java/org/apache/ambari/server/controller/KerberosHelperTest.java
* 
ambari-server/src/test/java/org/apache/ambari/server/state/kerberos/KerberosIdentityDescriptorTest.java
* 
ambari-server/src/test/java/org/apache/ambari/server/state/kerberos/KerberosDescriptorTest.java
* 
ambari-server/src/main/java/org/apache/ambari/server/state/kerberos/AbstractKerberosDescriptorContainer.java
* 
ambari-server/src/test/resources/kerberos/test_get_referenced_identity_descriptor.json
* 
ambari-server/src/main/java/org/apache/ambari/server/upgrade/UpgradeCatalog240.java


> Kerberos identity definitions in Kerberos descriptors should explicitly 
> declare a reference
> ---
>
> Key: AMBARI-17993
> URL: https://issues.apache.org/jira/browse/AMBARI-17993
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Robert Levas
>Assignee: Robert Levas
>Priority: Blocker
>  Labels: kerberos_descriptor
> Fix For: 2.4.0
>
> Attachments: AMBARI-17993_branch-2.4_01.patch, 
> AMBARI-17993_trunk_01.patch
>
>
> Kerberos identity definitions in Kerberos descriptors should explicitly 
> declare a reference rather than rely on the identity's _name_ attribute. 
> Currently, the set of Kerberos identities declared at a service-level or a 
> component-level may only contain identities with unique names. For example, using:
> {code}
>   "identities": [
> {
>   "name": "identity",
>   "principal": {
> "value": "service/_HOST@${realm}",
> "configuration": "service-site/property1.principal",
> ...
>   },
>   "keytab": {
> "file": "${keytab_dir}/service.service.keytab",
> "configuration": "service-site/property1.keytab",
> ...
>   }
> },
> {
>   "name": "identity",
>   "principal": {
> "value": "service/_HOST@${realm}",
> "configuration": "service-site/property2.principal",
> ...
>   },
>   "keytab": {
> "file": "${keytab_dir}/service.service.keytab",
> "configuration": "service-site/property2.keytab",
> ...
>   }
> }
>   ]
> {code}
> Only the first "identity" principal is realized and the additional one is 
> ignored, leaving the configurations {{service-site/property2.principal}} and 
> {{service-site/property2.keytab}} untouched when Kerberos is enabled for the 
> service. 
> To address this, the 2nd instance can be converted to a reference, overriding 
> only the attributes that need to be changed, like the configurations. 
> {code}
>   "identities": [
> {
>   "name": "identity",
>   "principal": {
> "value": "service/_HOST@${realm}",
> "configuration": "service-site/property1.principal",
> ...
>   },
>   "keytab": {
> "file": "${keytab_dir}/service.service.keytab",
> "configuration": "service-site/property1.keytab",
> ...
>   }
> },
> {
>   "name": "/SERVICE/identity",
>   "principal": {
> "configuration": "service-site/property2.principal"
>   },
>   "keytab": {
> "configuration": "service-site/property2.keytab"
>   }
> }
>   ]
> {code}
> This allows both identity declarations to be realized; however, this is 
> limited to only the 2 instances. If a 3rd instance is needed (to set an 
> additional configuration), it must look like:
> {code}
> {
>   "name": "/SERVICE/identity",
>   "principal": {
> "configuration": "service-site/property3.principal"
>   },
>   "keytab": {
> "configuration": "service-site/property3.keytab"
>   }
> }
> {code}
> However, since its name is the same as the 2nd instance's, it will be ignored. 
> If explicit references are specified, then multiple uniquely-named identity 
> blocks will be allowed to reference the same base identity, effectively 
> making it possible to declare unlimited configurations for the same 
> identity definition:
> {code}
>   "identities": [
> {
>   "name": "identity",
>   "principal": {
> 

[jira] [Commented] (AMBARI-18007) JournalNodes filter doesn't work

2016-08-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15406574#comment-15406574
 ] 

Hudson commented on AMBARI-18007:
-

SUCCESS: Integrated in Ambari-trunk-Commit #5447 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/5447/])
AMBARI-18007. JournalNodes filter doesn't work (akovalenko) (akovalenko: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git=commit=15df9731741f64dd9a6857ec2edaf9aa197f7aaf])
* ambari-web/app/views/main/service/services/hdfs.js


> JournalNodes filter doesn't work
> 
>
> Key: AMBARI-18007
> URL: https://issues.apache.org/jira/browse/AMBARI-18007
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.4.0
>Reporter: Aleksandr Kovalenko
>Assignee: Aleksandr Kovalenko
> Fix For: trunk
>
> Attachments: AMBARI-18007.patch
>
>
> After clicking the JournalNodes link on the HDFS Summary page, nothing happens.
> Redirecting to the hosts page with the filter applied is expected.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-17991) Ambari agent unable to register with server when server response is too big

2016-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-17991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15406515#comment-15406515
 ] 

Hadoop QA commented on AMBARI-17991:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12821863/AMBARI-17991_2.patch
  against trunk revision .

{color:red}-1 patch{color}.  Top-level trunk compilation may be broken.

Console output: 
https://builds.apache.org/job/Ambari-trunk-test-patch/8274//console

This message is automatically generated.

> Ambari agent unable to register with server when server response is too big
> ---
>
> Key: AMBARI-17991
> URL: https://issues.apache.org/jira/browse/AMBARI-17991
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Dmytro Sen
>Assignee: Dmytro Sen
>Priority: Blocker
> Fix For: 2.4.0
>
> Attachments: AMBARI-17991.patch, AMBARI-17991_2.patch
>
>
> The Ambari agent is unable to register with the Ambari server, failing with:
> {code}
> INFO 2016-06-09 11:22:00,964 security.py:147 - Encountered communication 
> error. Details: SSLError('The read operation timed out',)
> ERROR 2016-06-09 11:22:00,965 Controller.py:196 - Unable to connect to: 
> https://localhost:8441/agent/v1/register/dvtcbdqd02.corp.cox.com
> Traceback (most recent call last):
>   File "/usr/lib/python2.6/site-packages/ambari_agent/Controller.py", line 
> 150, in registerWithServer
> ret = self.sendRequest(self.registerUrl, data)
>   File "/usr/lib/python2.6/site-packages/ambari_agent/Controller.py", line 
> 423, in sendRequest
> raise IOError('Request to {0} failed due to {1}'.format(url, 
> str(exception)))
> IOError: Request to https://server1:8441/agent/v1/register/host1 failed due 
> to Error occured during connecting to the server: The read operation timed out
> ERROR 2016-06-09 11:22:00,965 Controller.py:197 - Error:Request to 
> https://server1:8441/agent/v1/register/host1 failed due to Error occured 
> during connecting to the server: The read operation timed out
> {code}
> The problem was fixed by modifying the timeout in  
> /usr/lib/python2.6/site-packages/ambari_agent/security.py:
> {code}
> def create_connection(self):
> if self.sock:
>   self.sock.close()
> logger.info("SSL Connect being called.. connecting to the server")
> sock = socket.create_connection((self.host, self.port), 120)
> {code}
> Use Jetty 8 instead of 9 in Ambari 2.4.0



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18008) RU Downgrade failure

2016-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15406510#comment-15406510
 ] 

Hadoop QA commented on AMBARI-18008:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12821861/AMBARI-18008.patch
  against trunk revision .

{color:red}-1 patch{color}.  Top-level trunk compilation may be broken.

Console output: 
https://builds.apache.org/job/Ambari-trunk-test-patch/8273//console

This message is automatically generated.

> RU Downgrade failure
> 
>
> Key: AMBARI-18008
> URL: https://issues.apache.org/jira/browse/AMBARI-18008
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
> Fix For: 2.4.0
>
> Attachments: AMBARI-18008.patch
>
>
> RU Downgrade failure while downgrading from 2.5 to 2.3. Downgrade failed at 
> the step
> ' Client Components' > ' Updating configuration 
> sqoop-atlas-application.properties'.
> 1) Install HDP-2.3.2.0-2950 with ambari-server-2.1.2-377
> 2) upgrade ambari to 2.4.0.0-1054
> 3) Start stack upgrade to erie.
> 4) Before finalize step, start the downgrade.
> The downgrade is failing at the above-mentioned step.
> {code}
> 02 Aug 2016 05:27:58,527  WARN [Server Action Executor Worker 4525] 
> ServerActionExecutor:497 - Task #4525 failed to complete execution due to 
> thrown exception: java.lang.NullPointerException:null
> java.lang.NullPointerException
> at 
> org.apache.ambari.server.serveraction.upgrades.ConfigureAction.execute(ConfigureAction.java:239)
> at 
> org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.execute(ServerActionExecutor.java:555)
> at 
> org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.run(ServerActionExecutor.java:492)
> at java.lang.Thread.run(Thread.java:745){code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18008) RU Downgrade failure

2016-08-03 Thread Yusaku Sako (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yusaku Sako updated AMBARI-18008:
-
Summary: RU Downgrade failure  (was: RU Downgrade failure while downgrading 
from erie to DAL-m10)

> RU Downgrade failure
> 
>
> Key: AMBARI-18008
> URL: https://issues.apache.org/jira/browse/AMBARI-18008
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
> Fix For: 2.4.0
>
> Attachments: AMBARI-18008.patch
>
>
> RU Downgrade failure while downgrading from 2.5 to 2.3. Downgrade failed at 
> the step
> ' Client Components' > ' Updating configuration 
> sqoop-atlas-application.properties'.
> 1) Install HDP-2.3.2.0-2950 with ambari-server-2.1.2-377
> 2) upgrade ambari to 2.4.0.0-1054
> 3) Start stack upgrade to erie.
> 4) Before finalize step, start the downgrade.
> The downgrade is failing at the above-mentioned step.
> {code}
> 02 Aug 2016 05:27:58,527  WARN [Server Action Executor Worker 4525] 
> ServerActionExecutor:497 - Task #4525 failed to complete execution due to 
> thrown exception: java.lang.NullPointerException:null
> java.lang.NullPointerException
> at 
> org.apache.ambari.server.serveraction.upgrades.ConfigureAction.execute(ConfigureAction.java:239)
> at 
> org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.execute(ServerActionExecutor.java:555)
> at 
> org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.run(ServerActionExecutor.java:492)
> at java.lang.Thread.run(Thread.java:745){code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18006) Tune HDFS opts parameters to trigger GC more predictably

2016-08-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15406404#comment-15406404
 ] 

Hudson commented on AMBARI-18006:
-

ABORTED: Integrated in Ambari-trunk-Commit #5446 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/5446/])
AMBARI-18006. Tune HDFS opts parameters to trigger GC more predictably 
(aonishuk: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git=commit=f5b9c9fcfc973db22b4624b87aa850b5ef0407ad])
* 
ambari-server/src/main/resources/stacks/HDP/2.3/services/HDFS/configuration/hadoop-env.xml


> Tune HDFS opts parameters to trigger GC more predictably
> -
>
> Key: AMBARI-18006
> URL: https://issues.apache.org/jira/browse/AMBARI-18006
> Project: Ambari
>  Issue Type: Bug
>Reporter: Andrew Onischuk
>Assignee: Andrew Onischuk
> Fix For: 3.0.0
>
> Attachments: AMBARI-18006.patch
>
>
> The DN heap usage alert is set to 80% warning and 90% critical. This alert is
> firing a lot and needs to be tuned.
> One thing we need to make sure of is that DN GC is happening. I am not sure at
> what % of heap usage the GC fires, but our alert thresholds need to be higher
> than that, since when I manually ran a GC the heap usage went below the
> threshold.
> Based on discussions with HDFS devs, it was determined we should add
> 
> -XX:+UseCMSInitiatingOccupancyOnly and
> -XX:CMSInitiatingOccupancyFraction=
> 
> to the NameNode and DataNode opts. Based on this setting, hopefully we can also
> determine what % to set the alerts to.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (AMBARI-18009) Ranger usersync pid location not configurable

2016-08-03 Thread Yujie Li (JIRA)
Yujie Li created AMBARI-18009:
-

 Summary: Ranger usersync pid location not configurable
 Key: AMBARI-18009
 URL: https://issues.apache.org/jira/browse/AMBARI-18009
 Project: Ambari
  Issue Type: Bug
  Components: ambari-server
Affects Versions: 2.4.0
Reporter: Yujie Li


During the installation of Ranger via the Ambari UI, users should be able to change 
the directory for the Ranger usersync pid file. But changing that has no effect, 
since setup_ranger_xml.py in ambari-server doesn't have the code to support 
the customization.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-17954) Fix Spark issues in upgrading and fresh install

2016-08-03 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-17954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-17954:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Fix Spark issues in upgrading and fresh install
> ---
>
> Key: AMBARI-17954
> URL: https://issues.apache.org/jira/browse/AMBARI-17954
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: 2.4.0
>Reporter: Saisai Shao
>Assignee: Saisai Shao
>Priority: Critical
> Fix For: 2.4.0
>
>
> Ambari Spark definitions have several issues related to hdp.version, which 
> are fixed here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-17954) Fix Spark issues in upgrading and fresh install

2016-08-03 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-17954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15406290#comment-15406290
 ] 

Jayush Luniya commented on AMBARI-17954:


Branch-2.4
commit ea99e7ae6c5f418de89ff5c2997e3b4a1059aa77
Author: Jayush Luniya 
Date:   Wed Aug 3 10:41:27 2016 -0700

AMBARI-17954: Fix Spark hdp.version issues in upgrading and fresh install 
(Saisai Shao via jluniya)

> Fix Spark issues in upgrading and fresh install
> ---
>
> Key: AMBARI-17954
> URL: https://issues.apache.org/jira/browse/AMBARI-17954
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: 2.4.0
>Reporter: Saisai Shao
>Assignee: Saisai Shao
>Priority: Critical
> Fix For: 2.4.0
>
>
> Ambari Spark definitions have several issues related to hdp.version, which 
> are fixed here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-17954) Fix Spark issues in upgrading and fresh install

2016-08-03 Thread Jayush Luniya (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-17954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15406288#comment-15406288
 ] 

Jayush Luniya commented on AMBARI-17954:


Trunk
commit 9bb77239292f99b935634eafe5e807d61e1d8cb1
Author: Jayush Luniya 
Date:   Wed Aug 3 10:41:27 2016 -0700

AMBARI-17954: Fix Spark hdp.version issues in upgrading and fresh install 
(Saisai Shao via jluniya)

> Fix Spark issues in upgrading and fresh install
> ---
>
> Key: AMBARI-17954
> URL: https://issues.apache.org/jira/browse/AMBARI-17954
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: 2.4.0
>Reporter: Saisai Shao
>Assignee: Saisai Shao
>Priority: Critical
> Fix For: 2.4.0
>
>
> Ambari Spark definitions have several issues related to hdp.version, which 
> are fixed here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-17694) Kafka listeners property does not show SASL_PLAINTEXT protocol when Kerberos is enabled

2016-08-03 Thread Robert Levas (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-17694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15406283#comment-15406283
 ] 

Robert Levas commented on AMBARI-17694:
---

[~harsha_ch], [~anitajebaraj]...

The problem with the original patch is not related to the stack advisor.  
Essentially there are two kerberos.json files for KAFKA.  
* {{…/stacks/HDP/2.5/services/KAFKA/kerberos.json}}
* {{…/resources/common-services/KAFKA/0.9.0/kerberos.json}}

The patch updates the kerberos.json file in common-services 
({{…/resources/common-services/KAFKA/0.9.0/kerberos.json}}); however, the 
kerberos.json file that comes into play when installing HDP 2.5 is the one at 
{{…/stacks/HDP/2.5/services/KAFKA/kerberos.json}}. 

So the same change made to 
{{…/resources/common-services/KAFKA/0.9.0/kerberos.json}} should also be made 
to {{…/stacks/HDP/2.5/services/KAFKA/kerberos.json}}, unless the Kafka 
version installed for pre-HDP 2.5 stacks does not need it. 
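
As illustration only, here is a minimal sketch of the kind of rewrite rule such 
a descriptor can carry (the property and the {{replace}} expression below are 
assumptions about the shape of the change, not the actual patch contents):

{code}
{
  "services": [
    {
      "name": "KAFKA",
      "configurations": [
        {
          "kafka-broker": {
            "listeners": "${kafka-broker/listeners|replace(\\bPLAINTEXT\\b, PLAINTEXTSASL)}"
          }
        }
      ]
    }
  ]
}
{code}

Whichever descriptor Ambari actually resolves for the installed stack must carry 
the rule, since the stack-level file takes precedence over the common-services one.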






> Kafka listeners property does not show SASL_PLAINTEXT protocol when Kerberos 
> is enabled
> ---
>
> Key: AMBARI-17694
> URL: https://issues.apache.org/jira/browse/AMBARI-17694
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: trunk
>Reporter: Anita Gnanamalar Jebaraj
>Assignee: Anita Gnanamalar Jebaraj
>Priority: Critical
> Fix For: 2.4.0
>
> Attachments: AMBARI-17694-1.patch, AMBARI-17694-Jul26.patch, 
> AMBARI-17694.patch
>
>
> When Kerberos is enabled, the protocol for listeners in 
> /etc/kafka/conf/server.properties is updated from PLAINTEXT to PLAINTEXTSASL, 
> even though the Ambari UI shows otherwise.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-17954) Fix Spark issues in upgrading and fresh install

2016-08-03 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-17954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-17954:
---
Priority: Critical  (was: Major)

> Fix Spark issues in upgrading and fresh install
> ---
>
> Key: AMBARI-17954
> URL: https://issues.apache.org/jira/browse/AMBARI-17954
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: 2.4.0
>Reporter: Saisai Shao
>Assignee: Saisai Shao
>Priority: Critical
> Fix For: 2.4.0
>
>
> Ambari Spark definitions have several issues related to hdp.version, which 
> are fixed here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-17954) Fix Spark issues in upgrading and fresh install

2016-08-03 Thread Jayush Luniya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-17954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayush Luniya updated AMBARI-17954:
---
Fix Version/s: 2.4.0

> Fix Spark issues in upgrading and fresh install
> ---
>
> Key: AMBARI-17954
> URL: https://issues.apache.org/jira/browse/AMBARI-17954
> Project: Ambari
>  Issue Type: Bug
>  Components: stacks
>Affects Versions: 2.4.0
>Reporter: Saisai Shao
>Assignee: Saisai Shao
>Priority: Critical
> Fix For: 2.4.0
>
>
> Ambari Spark definitions have several issues related to hdp.version, which 
> are fixed here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-17991) Ambari agent unable to register with server when server response is too big

2016-08-03 Thread Dmytro Sen (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-17991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmytro Sen updated AMBARI-17991:

Attachment: AMBARI-17991_2.patch

> Ambari agent unable to register with server when server response is too big
> ---
>
> Key: AMBARI-17991
> URL: https://issues.apache.org/jira/browse/AMBARI-17991
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Dmytro Sen
>Assignee: Dmytro Sen
>Priority: Blocker
> Fix For: 2.4.0
>
> Attachments: AMBARI-17991.patch, AMBARI-17991_2.patch
>
>
> The Ambari agent is unable to register with the Ambari server, failing with:
> {code}
> INFO 2016-06-09 11:22:00,964 security.py:147 - Encountered communication 
> error. Details: SSLError('The read operation timed out',)
> ERROR 2016-06-09 11:22:00,965 Controller.py:196 - Unable to connect to: 
> https://localhost:8441/agent/v1/register/dvtcbdqd02.corp.cox.com
> Traceback (most recent call last):
>   File "/usr/lib/python2.6/site-packages/ambari_agent/Controller.py", line 
> 150, in registerWithServer
> ret = self.sendRequest(self.registerUrl, data)
>   File "/usr/lib/python2.6/site-packages/ambari_agent/Controller.py", line 
> 423, in sendRequest
> raise IOError('Request to {0} failed due to {1}'.format(url, 
> str(exception)))
> IOError: Request to https://server1:8441/agent/v1/register/host1 failed due 
> to Error occured during connecting to the server: The read operation timed out
> ERROR 2016-06-09 11:22:00,965 Controller.py:197 - Error:Request to 
> https://server1:8441/agent/v1/register/host1 failed due to Error occured 
> during connecting to the server: The read operation timed out
> {code}
> The problem was fixed by modifying the timeout in  
> /usr/lib/python2.6/site-packages/ambari_agent/security.py:
> {code}
> def create_connection(self):
> if self.sock:
>   self.sock.close()
> logger.info("SSL Connect being called.. connecting to the server")
> sock = socket.create_connection((self.host, self.port), 120)
> {code}
> Use Jetty 8 instead of 9 in Ambari 2.4.0



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18008) RU Downgrade failure while downgrading from erie to DAL-m10

2016-08-03 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-18008:

Fix Version/s: 2.4.0

> RU Downgrade failure while downgrading from erie to DAL-m10
> ---
>
> Key: AMBARI-18008
> URL: https://issues.apache.org/jira/browse/AMBARI-18008
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
> Fix For: 2.4.0
>
> Attachments: AMBARI-18008.patch
>
>
> RU Downgrade failure while downgrading from 2.5 to 2.3. Downgrade failed at 
> the step
> ' Client Components' > ' Updating configuration 
> sqoop-atlas-application.properties'.
> 1) Install HDP-2.3.2.0-2950 with ambari-server-2.1.2-377
> 2) upgrade ambari to 2.4.0.0-1054
> 3) Start stack upgrade to erie.
> 4) Before finalize step, start the downgrade.
> The downgrade is failing at the above-mentioned step.
> {code}
> 02 Aug 2016 05:27:58,527  WARN [Server Action Executor Worker 4525] 
> ServerActionExecutor:497 - Task #4525 failed to complete execution due to 
> thrown exception: java.lang.NullPointerException:null
> java.lang.NullPointerException
> at 
> org.apache.ambari.server.serveraction.upgrades.ConfigureAction.execute(ConfigureAction.java:239)
> at 
> org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.execute(ServerActionExecutor.java:555)
> at 
> org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.run(ServerActionExecutor.java:492)
> at java.lang.Thread.run(Thread.java:745){code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18008) RU Downgrade failure while downgrading from erie to DAL-m10

2016-08-03 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-18008:

Affects Version/s: 2.4.0

> RU Downgrade failure while downgrading from erie to DAL-m10
> ---
>
> Key: AMBARI-18008
> URL: https://issues.apache.org/jira/browse/AMBARI-18008
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
> Fix For: 2.4.0
>
> Attachments: AMBARI-18008.patch
>
>
> RU Downgrade failure while downgrading from 2.5 to 2.3. Downgrade failed at 
> the step
> ' Client Components' > ' Updating configuration 
> sqoop-atlas-application.properties'.
> 1) Install HDP-2.3.2.0-2950 with ambari-server-2.1.2-377
> 2) upgrade ambari to 2.4.0.0-1054
> 3) Start stack upgrade to erie.
> 4) Before finalize step, start the downgrade.
> The downgrade is failing at the above-mentioned step.
> {code}
> 02 Aug 2016 05:27:58,527  WARN [Server Action Executor Worker 4525] 
> ServerActionExecutor:497 - Task #4525 failed to complete execution due to 
> thrown exception: java.lang.NullPointerException:null
> java.lang.NullPointerException
> at 
> org.apache.ambari.server.serveraction.upgrades.ConfigureAction.execute(ConfigureAction.java:239)
> at 
> org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.execute(ServerActionExecutor.java:555)
> at 
> org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.run(ServerActionExecutor.java:492)
> at java.lang.Thread.run(Thread.java:745){code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18008) RU Downgrade failure while downgrading from erie to DAL-m10

2016-08-03 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-18008:

Component/s: ambari-server

> RU Downgrade failure while downgrading from erie to DAL-m10
> ---
>
> Key: AMBARI-18008
> URL: https://issues.apache.org/jira/browse/AMBARI-18008
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
> Attachments: AMBARI-18008.patch
>
>
> RU Downgrade failure while downgrading from 2.5 to 2.3. Downgrade failed at 
> the step
> ' Client Components' > ' Updating configuration 
> sqoop-atlas-application.properties'.
> 1) Install HDP-2.3.2.0-2950 with ambari-server-2.1.2-377
> 2) upgrade ambari to 2.4.0.0-1054
> 3) Start stack upgrade to erie.
> 4) Before finalize step, start the downgrade.
> The downgrade is failing at the above-mentioned step.
> {code}
> 02 Aug 2016 05:27:58,527  WARN [Server Action Executor Worker 4525] 
> ServerActionExecutor:497 - Task #4525 failed to complete execution due to 
> thrown exception: java.lang.NullPointerException:null
> java.lang.NullPointerException
> at 
> org.apache.ambari.server.serveraction.upgrades.ConfigureAction.execute(ConfigureAction.java:239)
> at 
> org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.execute(ServerActionExecutor.java:555)
> at 
> org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.run(ServerActionExecutor.java:492)
> at java.lang.Thread.run(Thread.java:745){code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18008) RU Downgrade failure while downgrading from erie to DAL-m10

2016-08-03 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-18008:

Attachment: AMBARI-18008.patch

> RU Downgrade failure while downgrading from erie to DAL-m10
> ---
>
> Key: AMBARI-18008
> URL: https://issues.apache.org/jira/browse/AMBARI-18008
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
> Attachments: AMBARI-18008.patch
>
>
> RU Downgrade failure while downgrading from 2.5 to 2.3. Downgrade failed at 
> the step
> ' Client Components' > ' Updating configuration 
> sqoop-atlas-application.properties'.
> 1) Install HDP-2.3.2.0-2950 with ambari-server-2.1.2-377
> 2) upgrade ambari to 2.4.0.0-1054
> 3) Start stack upgrade to erie.
> 4) Before finalize step, start the downgrade.
> The downgrade is failing at the above-mentioned step.
> {code}
> 02 Aug 2016 05:27:58,527  WARN [Server Action Executor Worker 4525] 
> ServerActionExecutor:497 - Task #4525 failed to complete execution due to 
> thrown exception: java.lang.NullPointerException:null
> java.lang.NullPointerException
> at 
> org.apache.ambari.server.serveraction.upgrades.ConfigureAction.execute(ConfigureAction.java:239)
> at 
> org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.execute(ServerActionExecutor.java:555)
> at 
> org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.run(ServerActionExecutor.java:492)
> at java.lang.Thread.run(Thread.java:745){code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (AMBARI-18008) RU Downgrade failure while downgrading from erie to DAL-m10

2016-08-03 Thread Dmitry Lysnichenko (JIRA)
Dmitry Lysnichenko created AMBARI-18008:
---

 Summary: RU Downgrade failure while downgrading from erie to 
DAL-m10
 Key: AMBARI-18008
 URL: https://issues.apache.org/jira/browse/AMBARI-18008
 Project: Ambari
  Issue Type: Bug
Reporter: Dmitry Lysnichenko
Assignee: Dmitry Lysnichenko
 Attachments: AMBARI-18008.patch


RU Downgrade failure while downgrading from 2.5 to 2.3. Downgrade failed at the 
step
' Client Components' > ' Updating configuration 
sqoop-atlas-application.properties'.


1) Install HDP-2.3.2.0-2950 with ambari-server-2.1.2-377
2) upgrade ambari to 2.4.0.0-1054
3) Start stack upgrade to erie.
4) Before finalize step, start the downgrade.
The downgrade is failing at the above-mentioned step.

{code}
02 Aug 2016 05:27:58,527  WARN [Server Action Executor Worker 4525] 
ServerActionExecutor:497 - Task #4525 failed to complete execution due to 
thrown exception: java.lang.NullPointerException:null
java.lang.NullPointerException
at 
org.apache.ambari.server.serveraction.upgrades.ConfigureAction.execute(ConfigureAction.java:239)
at 
org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.execute(ServerActionExecutor.java:555)
at 
org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.run(ServerActionExecutor.java:492)
at java.lang.Thread.run(Thread.java:745){code}




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18008) RU Downgrade failure while downgrading from erie to DAL-m10

2016-08-03 Thread Dmitry Lysnichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Lysnichenko updated AMBARI-18008:

Status: Patch Available  (was: Open)

> RU Downgrade failure while downgrading from erie to DAL-m10
> ---
>
> Key: AMBARI-18008
> URL: https://issues.apache.org/jira/browse/AMBARI-18008
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Reporter: Dmitry Lysnichenko
>Assignee: Dmitry Lysnichenko
> Attachments: AMBARI-18008.patch
>
>
> RU Downgrade failure while downgrading from 2.5 to 2.3. Downgrade failed at 
> the step
> ' Client Components' > ' Updating configuration 
> sqoop-atlas-application.properties'.
> 1) Install HDP-2.3.2.0-2950 with ambari-server-2.1.2-377
> 2) upgrade ambari to 2.4.0.0-1054
> 3) Start stack upgrade to erie.
> 4) Before finalize step, start the downgrade.
> The downgrade is failing at the above-mentioned step.
> {code}
> 02 Aug 2016 05:27:58,527  WARN [Server Action Executor Worker 4525] 
> ServerActionExecutor:497 - Task #4525 failed to complete execution due to 
> thrown exception: java.lang.NullPointerException:null
> java.lang.NullPointerException
> at 
> org.apache.ambari.server.serveraction.upgrades.ConfigureAction.execute(ConfigureAction.java:239)
> at 
> org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.execute(ServerActionExecutor.java:555)
> at 
> org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.run(ServerActionExecutor.java:492)
> at java.lang.Thread.run(Thread.java:745){code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-17308) Ambari Logfeeder outputs a lot of errors due to parse date

2016-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-17308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15406091#comment-15406091
 ] 

Hadoop QA commented on AMBARI-17308:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12821831/AMBARI-17308.3.patch
  against trunk revision .

{color:red}-1 patch{color}.  Top-level trunk compilation may be broken.

Console output: 
https://builds.apache.org/job/Ambari-trunk-test-patch/8272//console

This message is automatically generated.

> Ambari Logfeeder outputs a lot of errors due to parse date
> --
>
> Key: AMBARI-17308
> URL: https://issues.apache.org/jira/browse/AMBARI-17308
> Project: Ambari
>  Issue Type: Bug
>Affects Versions: trunk, 2.4.0
> Environment: CentOS7.2, JST
>Reporter: Masahiro Tanaka
>Assignee: Masahiro Tanaka
> Attachments: AMBARI-17308.1.patch, AMBARI-17308.2.patch, 
> AMBARI-17308.3.patch, AMBARI-17308.patch
>
>
> In the logsearch_feeder service log, we got errors like the one below:
> {code}
> 2016-06-20 15:28:09,368 ERROR file=ambari-audit.log 
> org.apache.ambari.logfeeder.mapper.MapperDate LogFeederUtil.java:356 - Error 
> applying date transformation. isEpoch=false, 
> dateFormat=yyyy-MM-dd'T'HH:mm:ss.SSSZ, value=2016-06-20T15:28:08.000. 
> mapClass=map_date, input=input:source=file, 
> path=/var/log/ambari-server/ambari-audit.log, fieldName=logtime. Messages 
> suppressed before: 2
> java.text.ParseException: Unparseable date: "2016-06-20T15:28:08.000"
>   at java.text.DateFormat.parse(DateFormat.java:366)
>   at 
> org.apache.ambari.logfeeder.mapper.MapperDate.apply(MapperDate.java:83)
>   at org.apache.ambari.logfeeder.filter.Filter.apply(Filter.java:154)
>   at 
> org.apache.ambari.logfeeder.filter.FilterGrok.applyMessage(FilterGrok.java:291)
>   at 
> org.apache.ambari.logfeeder.filter.FilterGrok.flush(FilterGrok.java:320)
>   at org.apache.ambari.logfeeder.input.Input.flush(Input.java:125)
>   at 
> org.apache.ambari.logfeeder.input.InputFile.processFile(InputFile.java:430)
>   at org.apache.ambari.logfeeder.input.InputFile.start(InputFile.java:260)
>   at org.apache.ambari.logfeeder.input.Input.run(Input.java:100)
>   at java.lang.Thread.run(Thread.java:745) 
> {code}
> ambari-audit.log looks like the following:
> {code}
> 2016-07-21T01:52:49.875+09, User(admin), RemoteIp(192.168.72.1), 
> Operation(Repository update), RequestType(PUT), 
> url(http://192.168.72.101:8080/api/v1/stacks/HDP/versions/2.5/operating_systems/ubuntu14/repositories/HDP-2.5),
>  ResultStatus(200 OK), Stack(HDP), Stack version(2.5), OS(ubuntu14), Repo 
> id(HDP-2.5), Base 
> URL(http://s3.amazonaws.com/dev.hortonworks.com/HDP/ubuntu14/2.x/BUILDS/2.5.0.0-1025)
> 2016-07-21T01:52:49.905+09, User(admin), RemoteIp(192.168.72.1), 
> Operation(Repository update), RequestType(PUT), 
> url(http://192.168.72.101:8080/api/v1/stacks/HDP/versions/2.5/operating_systems/ubuntu16/repositories/HDP-2.5),
>  ResultStatus(200 OK), Stack(HDP), Stack version(2.5), OS(ubuntu16), Repo 
> id(HDP-2.5), Base 
> URL(http://s3.amazonaws.com/dev.hortonworks.com/HDP/ubuntu16/2.x/BUILDS/2.5.0.0-1025)
> 2016-07-21T01:52:50.015+09, User(admin), RemoteIp(192.168.72.1), 
> Operation(Repository update), RequestType(PUT), 
> url(http://192.168.72.101:8080/api/v1/stacks/HDP/versions/2.5/operating_systems/ubuntu14/repositories/HDP-UTILS-1.1.0.21),
>  ResultStatus(200 OK), Stack(HDP), Stack version(2.5), OS(ubuntu14), Repo 
> id(HDP-UTILS-1.1.0.21), Base 
> URL(http://s3.amazonaws.com/dev.hortonworks.com/HDP-UTILS-1.1.0.21/repos/ubuntu14)
> {code}
> I think the date format of ambari-audit.log ({{2016-07-21T01:52:49.875+09}}) 
> should be like {{2016-07-21T01:52:49.875+0900}}, since the grok pattern can't 
> handle the {{2016-07-21T01:52:49.875+09}} format.
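> As a minimal sketch of the mismatch, using the mapper's 
> {{yyyy-MM-dd'T'HH:mm:ss.SSSZ}} pattern from the log above (the class below is 
> illustrative only, not part of the report):
> {code}
> import java.text.ParseException;
> import java.text.SimpleDateFormat;
> 
> public class AuditDateCheck {
>   public static void main(String[] args) throws ParseException {
>     // The pattern the logfeeder mapper applies to the logtime field.
>     SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSSZ");
>     // A four-digit RFC 822 offset parses fine.
>     System.out.println(fmt.parse("2016-07-21T01:52:49.875+0900"));
>     // The two-digit offset written by ambari-audit.log does not.
>     try {
>       fmt.parse("2016-07-21T01:52:49.875+09");
>     } catch (ParseException e) {
>       System.out.println(e.getMessage());
>     }
>   }
> }
> {code}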



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18005) Clean cached resources on host removal

2016-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15406090#comment-15406090
 ] 

Hadoop QA commented on AMBARI-18005:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12821847/AMBARI-18005.v2.patch
  against trunk revision .

{color:red}-1 patch{color}.  Top-level trunk compilation may be broken.

Console output: 
https://builds.apache.org/job/Ambari-trunk-test-patch/8271//console

This message is automatically generated.

> Clean cached resources on host removal
> --
>
> Key: AMBARI-18005
> URL: https://issues.apache.org/jira/browse/AMBARI-18005
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Laszlo Puskas
>Assignee: Laszlo Puskas
>  Labels: ambari-server
> Fix For: 2.4.0
>
> Attachments: AMBARI-18005.v2.patch
>
>
> When a host is removed from the cluster and later from Ambari, there's a 
> chance the agent registers back to the Ambari server before the agent is 
> stopped.
> Stopping the machine running the agent without deleting the host again 
> leads to an inconsistent state in the ambari-server due to cached state.
> Resolution:
> The cached resources get cleared on the host delete event.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18007) JournalNodes filter doesn't work

2016-08-03 Thread Aleksandr Kovalenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Kovalenko updated AMBARI-18007:
-
Fix Version/s: (was: 2.5.0)
   trunk

> JournalNodes filter doesn't work
> 
>
> Key: AMBARI-18007
> URL: https://issues.apache.org/jira/browse/AMBARI-18007
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.4.0
>Reporter: Aleksandr Kovalenko
>Assignee: Aleksandr Kovalenko
> Fix For: trunk
>
> Attachments: AMBARI-18007.patch
>
>
> After clicking the JournalNodes link on the HDFS Summary page, nothing happens.
> Redirecting to the hosts page with the filter applied is expected.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18007) JournalNodes filter doesn't work

2016-08-03 Thread Aleksandr Kovalenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Kovalenko updated AMBARI-18007:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

committed to trunk

> JournalNodes filter doesn't work
> 
>
> Key: AMBARI-18007
> URL: https://issues.apache.org/jira/browse/AMBARI-18007
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.4.0
>Reporter: Aleksandr Kovalenko
>Assignee: Aleksandr Kovalenko
> Fix For: 2.5.0
>
> Attachments: AMBARI-18007.patch
>
>
> After clicking the JournalNodes link on the HDFS Summary page, nothing happens.
> Redirecting to the hosts page with the filter applied is expected.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18007) JournalNodes filter doesn't work

2016-08-03 Thread Aleksandr Kovalenko (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15406083#comment-15406083
 ] 

Aleksandr Kovalenko commented on AMBARI-18007:
--

Tested manually.

Results of running unit tests:
  29247 tests complete (32 seconds)
  154 tests pending

> JournalNodes filter doesn't work
> 
>
> Key: AMBARI-18007
> URL: https://issues.apache.org/jira/browse/AMBARI-18007
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.4.0
>Reporter: Aleksandr Kovalenko
>Assignee: Aleksandr Kovalenko
> Fix For: 2.5.0
>
> Attachments: AMBARI-18007.patch
>
>
> After clicking the JournalNodes link on the HDFS Summary page, nothing happens.
> Redirecting to the hosts page with the filter applied is expected.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18007) JournalNodes filter doesn't work

2016-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15406078#comment-15406078
 ] 

Hadoop QA commented on AMBARI-18007:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12821852/AMBARI-18007.patch
  against trunk revision .

{color:red}-1 patch{color}.  Top-level trunk compilation may be broken.

Console output: 
https://builds.apache.org/job/Ambari-trunk-test-patch/8270//console

This message is automatically generated.

> JournalNodes filter doesn't work
> 
>
> Key: AMBARI-18007
> URL: https://issues.apache.org/jira/browse/AMBARI-18007
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.4.0
>Reporter: Aleksandr Kovalenko
>Assignee: Aleksandr Kovalenko
> Fix For: 2.5.0
>
> Attachments: AMBARI-18007.patch
>
>
> After clicking the JournalNodes link on the HDFS Summary page, nothing happens.
> Redirecting to the hosts page with the filter applied is expected.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18007) JournalNodes filter doesn't work

2016-08-03 Thread Andrii Babiichuk (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15406075#comment-15406075
 ] 

Andrii Babiichuk commented on AMBARI-18007:
---

+1 for the patch

> JournalNodes filter doesn't work
> 
>
> Key: AMBARI-18007
> URL: https://issues.apache.org/jira/browse/AMBARI-18007
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.4.0
>Reporter: Aleksandr Kovalenko
>Assignee: Aleksandr Kovalenko
> Fix For: 2.5.0
>
> Attachments: AMBARI-18007.patch
>
>
> After clicking the JournalNodes link on the HDFS Summary page, nothing happens.
> Redirecting to the hosts page with the filter applied is expected.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18007) JournalNodes filter doesn't work

2016-08-03 Thread Aleksandr Kovalenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Kovalenko updated AMBARI-18007:
-
Status: Patch Available  (was: Open)

> JournalNodes filter doesn't work
> 
>
> Key: AMBARI-18007
> URL: https://issues.apache.org/jira/browse/AMBARI-18007
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.4.0
>Reporter: Aleksandr Kovalenko
>Assignee: Aleksandr Kovalenko
> Fix For: 2.5.0
>
> Attachments: AMBARI-18007.patch
>
>
> After clicking the JournalNodes link on the HDFS Summary page, nothing happens.
> Redirecting to the hosts page with the filter applied is expected.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18007) JournalNodes filter doesn't work

2016-08-03 Thread Aleksandr Kovalenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Kovalenko updated AMBARI-18007:
-
Attachment: AMBARI-18007.patch

> JournalNodes filter doesn't work
> 
>
> Key: AMBARI-18007
> URL: https://issues.apache.org/jira/browse/AMBARI-18007
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.4.0
>Reporter: Aleksandr Kovalenko
>Assignee: Aleksandr Kovalenko
> Fix For: 2.5.0
>
> Attachments: AMBARI-18007.patch
>
>
> After clicking the JournalNodes link on the HDFS Summary page, nothing happens.
> Redirecting to the hosts page with the filter applied is expected.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (AMBARI-18007) JournalNodes filter doesn't work

2016-08-03 Thread Aleksandr Kovalenko (JIRA)
Aleksandr Kovalenko created AMBARI-18007:


 Summary: JournalNodes filter doesn't work
 Key: AMBARI-18007
 URL: https://issues.apache.org/jira/browse/AMBARI-18007
 Project: Ambari
  Issue Type: Bug
  Components: ambari-web
Affects Versions: 2.4.0
Reporter: Aleksandr Kovalenko
Assignee: Aleksandr Kovalenko
 Fix For: 2.5.0


After clicking the JournalNodes link on the HDFS Summary page, nothing happens.
Redirecting to the hosts page with the filter applied is expected.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18005) Clean cached resources on host removal

2016-08-03 Thread Laszlo Puskas (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laszlo Puskas updated AMBARI-18005:
---
Status: Patch Available  (was: In Progress)

> Clean cached resources on host removal
> --
>
> Key: AMBARI-18005
> URL: https://issues.apache.org/jira/browse/AMBARI-18005
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Laszlo Puskas
>Assignee: Laszlo Puskas
>  Labels: ambari-server
> Fix For: 2.4.0
>
> Attachments: AMBARI-18005.v2.patch
>
>
> When a host is removed from the cluster and later from Ambari, there's a 
> chance the agent registers back to the Ambari server before the agent is 
> stopped.
> Stopping the machine running the agent without deleting the host again 
> leads to an inconsistent state in the ambari-server due to cached state.
> Resolution:
> The cached resources get cleared on the host delete event.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18005) Clean cached resources on host removal

2016-08-03 Thread Laszlo Puskas (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laszlo Puskas updated AMBARI-18005:
---
Attachment: AMBARI-18005.v2.patch

> Clean cached resources on host removal
> --
>
> Key: AMBARI-18005
> URL: https://issues.apache.org/jira/browse/AMBARI-18005
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Laszlo Puskas
>Assignee: Laszlo Puskas
>  Labels: ambari-server
> Fix For: 2.4.0
>
> Attachments: AMBARI-18005.v2.patch
>
>
> When a host is removed from the cluster and later from Ambari, there's a 
> chance the agent registers back to the Ambari server before the agent is 
> stopped.
> Stopping the machine running the agent without deleting the host again 
> leads to an inconsistent state in the ambari-server due to cached state.
> Resolution:
> The cached resources get cleared on the host delete event.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-17993) Kerberos identity definitions in Kerberos descriptors should explicitly declare a reference

2016-08-03 Thread Robert Levas (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-17993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Levas updated AMBARI-17993:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to trunk
{noformat}
commit 5ddeb09897fbf015f2a63fe33c937a99e027bdbe
Author: Robert Levas 
Date:   Wed Aug 3 10:53:02 2016 -0400
{noformat}

Committed to branch-2.4
{noformat}
commit b689646d870e488f79637c92f23c681437f24426
Author: Robert Levas 
Date:   Wed Aug 3 10:54:16 2016 -0400
{noformat}


> Kerberos identity definitions in Kerberos descriptors should explicitly 
> declare a reference
> ---
>
> Key: AMBARI-17993
> URL: https://issues.apache.org/jira/browse/AMBARI-17993
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-server
>Affects Versions: 2.4.0
>Reporter: Robert Levas
>Assignee: Robert Levas
>Priority: Blocker
>  Labels: kerberos_descriptor
> Fix For: 2.4.0
>
> Attachments: AMBARI-17993_branch-2.4_01.patch, 
> AMBARI-17993_trunk_01.patch
>
>
> Kerberos identity definitions in Kerberos descriptors should explicitly 
> declare a reference rather than rely on the identity's _name_ attribute. 
> Currently, the set of Kerberos identities declared at a service-level or a 
> component-level can only contain identities with unique names.  For example, using:
> {code}
>   "identities": [
> {
>   "name": "identity",
>   "principal": {
> "value": "service/_HOST@${realm}",
> "configuration": "service-site/property1.principal",
> ...
>   },
>   "keytab": {
> "file": "${keytab_dir}/service.service.keytab",
> "configuration": "service-site/property1.keytab",
> ...
>   }
> },
> {
>   "name": "identity",
>   "principal": {
> "value": "service/_HOST@${realm}",
> "configuration": "service-site/property2.principal",
> ...
>   },
>   "keytab": {
> "file": "${keytab_dir}/service.service.keytab",
> "configuration": "service-site/property2.keytab",
> ...
>   }
> }
>   ]
> {code}
> Only the first "identity" principal is realized and the additional one is 
> ignored, leaving the configurations {{service-site/property2.principal}} and 
> {{service-site/property2.keytab}} untouched when Kerberos is enabled for the 
> service. 
> To address this, the 2nd instance can be converted to a reference, overriding 
> only the attributes that need to be changed - like the configurations. 
> {code}
>   "identities": [
> {
>   "name": "identity",
>   "principal": {
> "value": "service/_HOST@${realm}",
> "configuration": "service-site/property1.principal",
> ...
>   },
>   "keytab": {
> "file": "${keytab_dir}/service.service.keytab",
> "configuration": "service-site/property1.keytab",
> ...
>   }
> },
> {
>   "name": "/SERVICE/identity",
>   "principal": {
> "configuration": "service-site/property2.principal"
>   },
>   "keytab": {
> "configuration": "service-site/property2.keytab"
>   }
> }
>   ]
> {code}
> This allows both identity declarations to be realized; however, this is 
> limited to only the 2 instances. If a 3rd instance is needed (to set an 
> additional configuration), it must look like:
> {code}
> {
>   "name": "/SERVICE/identity",
>   "principal": {
> "configuration": "service-site/property3.principal"
>   },
>   "keytab": {
> "configuration": "service-site/property3.keytab"
>   }
> }
> {code}
> However, since its name is the same as the 2nd instance's, it will be ignored. 
> If explicit references are specified, then multiple uniquely-named identity 
> blocks will be allowed to reference the same base identity, effectively 
> making it possible to declare unlimited configurations for the same 
> identity definition:
> {code}
>   "identities": [
> {
>   "name": "identity",
>   "principal": {
> "value": "service/_HOST@${realm}",
> "configuration": "service-site/property1.principal",
> ...
>   },
>   "keytab": {
> "file": "${keytab_dir}/service.service.keytab",
> "configuration": "service-site/property1.keytab",
> ...
>   }
> },
> {
>   "name": "identitiy_reference1",
>   "reference": "/SERVICE/identity",
>   "principal": {
> "configuration": "service-site/property2.principal"
>   },
>   "keytab": {
> "configuration": "service-site/property2.keytab"
>   }
> },
> {
>   "name": "identitiy_reference2",
>   "reference": "/SERVICE/identity",
>   

[jira] [Updated] (AMBARI-18006) Tune HDFS opts parameters to trigger GC more predictably

2016-08-03 Thread Andrew Onischuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Onischuk updated AMBARI-18006:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to trunk

> Tune HDFS opts parameters to trigger GC more predictably
> -
>
> Key: AMBARI-18006
> URL: https://issues.apache.org/jira/browse/AMBARI-18006
> Project: Ambari
>  Issue Type: Bug
>Reporter: Andrew Onischuk
>Assignee: Andrew Onischuk
> Fix For: 3.0.0
>
> Attachments: AMBARI-18006.patch
>
>
> The DN heap usage alert is set to 80% warning and 90% critical. This alert is
> firing a lot and needs to be tuned.
> One thing we need to make sure of is that DN GC is happening. I am not sure at
> what % of heap usage the GC fires, but our alert thresholds need to be higher
> than that, since when I manually ran a GC the heap usage went below the
> threshold.
> Based on discussions with HDFS devs, it was determined we should add
> 
> -XX:+UseCMSInitiatingOccupancyOnly and
> -XX:CMSInitiatingOccupancyFraction=
> 
> to the NameNode and DataNode opts. Based on this setting, hopefully we can also
> determine what % to set the alerts to.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18006) Tune HDFS opts parameters to trigger GC more predictably

2016-08-03 Thread Andrew Onischuk (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15406003#comment-15406003
 ] 

Andrew Onischuk commented on AMBARI-18006:
--

The above failure is caused by a Jenkins bug. I ran the tests manually:
{noformat}
[INFO] Rat check: Summary of files. Unapproved: 0 unknown: 0 generated: 0 
approved: 148 licence.
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Ambari Views .. SUCCESS [2.150s]
[INFO] Ambari Metrics Common . SUCCESS [1.209s]
[INFO] Ambari Server . SUCCESS [1:02.528s]
[INFO] Ambari Agent .. SUCCESS [14.786s]
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 1:21.439s
[INFO] Finished at: Wed Aug 03 17:36:37 EEST 2016
[INFO] Final Memory: 86M/1115M
[INFO] 
{noformat}

> Tune HDFS opts parameters to trigger GC more predictably
> -
>
> Key: AMBARI-18006
> URL: https://issues.apache.org/jira/browse/AMBARI-18006
> Project: Ambari
>  Issue Type: Bug
>Reporter: Andrew Onischuk
>Assignee: Andrew Onischuk
> Fix For: 3.0.0
>
> Attachments: AMBARI-18006.patch
>
>
> The DN heap usage alert is set to 80% warning and 90% critical. This alert is
> firing a lot and needs to be tuned.
> One thing we need to make sure of is that DN GC is happening. I am not sure at
> what % of heap usage the GC fires, but our alert thresholds need to be higher
> than that, since when I manually ran a GC the heap usage went below the
> threshold.
> Based on discussions with HDFS devs, it was determined we should add
> 
> -XX:+UseCMSInitiatingOccupancyOnly and
> -XX:CMSInitiatingOccupancyFraction=
> 
> to the NameNode and DataNode opts. Based on this setting, hopefully we can also
> determine what % to set the alerts to.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18006) Tune HDFS opts parameters to trigger GC more predictably

2016-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15405993#comment-15405993
 ] 

Hadoop QA commented on AMBARI-18006:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12821842/AMBARI-18006.patch
  against trunk revision .

{color:red}-1 patch{color}.  Top-level trunk compilation may be broken.

Console output: 
https://builds.apache.org/job/Ambari-trunk-test-patch/8269//console

This message is automatically generated.

> Tune HDFS opts parameters to trigger GC more predictably
> -
>
> Key: AMBARI-18006
> URL: https://issues.apache.org/jira/browse/AMBARI-18006
> Project: Ambari
>  Issue Type: Bug
>Reporter: Andrew Onischuk
>Assignee: Andrew Onischuk
> Fix For: 3.0.0
>
> Attachments: AMBARI-18006.patch
>
>
> The DN heap usage alert is set to 80% warning and 90% critical. This alert is
> firing a lot and needs to be tuned.
> One thing we need to make sure of is that DN GC is happening. I am not sure at
> what % of heap usage the GC fires, but our alert thresholds need to be higher
> than that, since when I manually ran a GC the heap usage went below the
> threshold.
> Based on discussions with HDFS devs, it was determined we should add
> 
> -XX:+UseCMSInitiatingOccupancyOnly and
> -XX:CMSInitiatingOccupancyFraction=
> 
> to the NameNode and DataNode opts. Based on this setting, hopefully we can also
> determine what % to set the alerts to.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18006) Tune HDFS opts parameters to trigger GC more predictably

2016-08-03 Thread Andrew Onischuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Onischuk updated AMBARI-18006:
-
Attachment: AMBARI-18006.patch

> Tune HDFS opts parameters to trigger GC more predictably
> -
>
> Key: AMBARI-18006
> URL: https://issues.apache.org/jira/browse/AMBARI-18006
> Project: Ambari
>  Issue Type: Bug
>Reporter: Andrew Onischuk
>Assignee: Andrew Onischuk
> Fix For: 3.0.0
>
> Attachments: AMBARI-18006.patch
>
>
> The DN heap usage alert is set to 80% warning and 90% critical. This alert is
> firing a lot and needs to be tuned.
> One thing we need to make sure of is that DN GC is happening. I am not sure at
> what % of heap usage the GC fires, but our alert thresholds need to be higher
> than that, since when I manually ran a GC the heap usage went below the
> threshold.
> Based on discussions with HDFS devs, it was determined we should add
> 
> -XX:+UseCMSInitiatingOccupancyOnly and
> -XX:CMSInitiatingOccupancyFraction=
> 
> to the NameNode and DataNode opts. Based on this setting, hopefully we can also
> determine what % to set the alerts to.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (AMBARI-18006) Tune HDFS opts parameters to trigger GC more predictably

2016-08-03 Thread Andrew Onischuk (JIRA)
Andrew Onischuk created AMBARI-18006:


 Summary: Tune HDFS opts parameters to trigger GC more predictably
 Key: AMBARI-18006
 URL: https://issues.apache.org/jira/browse/AMBARI-18006
 Project: Ambari
  Issue Type: Bug
Reporter: Andrew Onischuk
Assignee: Andrew Onischuk
 Fix For: 3.0.0
 Attachments: AMBARI-18006.patch

DN Heap usage alert is set to 80% warning and 90% critical. This alert is
firing too often and needs to be tuned.

One thing we need to make sure of is that DN GC is actually happening. I am not
sure at what % of heap usage the GC fires, but we need to make sure our alerts
are set to a % higher than that: when I manually ran a GC, the heap usage went
below the threshold.

Based on discussions with HDFS devs, it was determined we should add the
following:

-XX:+UseCMSInitiatingOccupancyOnly
-XX:CMSInitiatingOccupancyFraction=

to the NameNode and DataNode opts. Based on this setting we can hopefully also
determine what % to set the alerts to.
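
For illustration, a minimal hadoop-env.sh sketch of how these flags could be
wired in; this is a sketch under assumptions, not the attached patch. It
assumes the CMS collector is already enabled for the NameNode and DataNode
JVMs, and the occupancy fraction of 70 is a placeholder, since the actual
value is elided above.

{noformat}
# hadoop-env.sh -- illustrative sketch only. Assumes -XX:+UseConcMarkSweepGC is
# already part of the opts; the fraction 70 is a placeholder, not a value taken
# from this report.
export HADOOP_NAMENODE_OPTS="${HADOOP_NAMENODE_OPTS} -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70"
export HADOOP_DATANODE_OPTS="${HADOOP_DATANODE_OPTS} -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70"
{noformat}

With -XX:+UseCMSInitiatingOccupancyOnly, CMS starts a collection cycle exactly
when old-generation occupancy crosses the configured fraction instead of
relying on its adaptive heuristic, which is what makes the GC point
predictable enough to place the alert thresholds above it.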





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18006) Tune HDFS opts parameters to trigger GC more predictably

2016-08-03 Thread Andrew Onischuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Onischuk updated AMBARI-18006:
-
Status: Patch Available  (was: Open)

> Tune HDFS opts parameters to trigger GC more predictably
> -
>
> Key: AMBARI-18006
> URL: https://issues.apache.org/jira/browse/AMBARI-18006
> Project: Ambari
>  Issue Type: Bug
>Reporter: Andrew Onischuk
>Assignee: Andrew Onischuk
> Fix For: 3.0.0
>
> Attachments: AMBARI-18006.patch
>
>
> DN Heap usage alert is set to 80% warning and 90% critical. This alert is
> firing too often and needs to be tuned.
> One thing we need to make sure of is that DN GC is actually happening. I am not
> sure at what % of heap usage the GC fires, but we need to make sure our alerts
> are set to a % higher than that: when I manually ran a GC, the heap usage went
> below the threshold.
> Based on discussions with HDFS devs, it was determined we should add the
> following:
> -XX:+UseCMSInitiatingOccupancyOnly
> -XX:CMSInitiatingOccupancyFraction=
> to the NameNode and DataNode opts. Based on this setting we can hopefully also
> determine what % to set the alerts to.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18004) Wrong messages on NameNode HA Wizard step9

2016-08-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15405976#comment-15405976
 ] 

Hudson commented on AMBARI-18004:
-

SUCCESS: Integrated in Ambari-trunk-Commit #5445 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/5445/])
AMBARI-18004. Wrong messages on NameNode HA Wizard step9 (akovalenko) 
(akovalenko: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=e3fd2f0bd1586e697d2634efdbfda5be1ab1745d])
* ambari-web/app/messages.js


> Wrong messages on NameNode HA Wizard step9
> --
>
> Key: AMBARI-18004
> URL: https://issues.apache.org/jira/browse/AMBARI-18004
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.4.0
>Reporter: Aleksandr Kovalenko
>Assignee: Aleksandr Kovalenko
>Priority: Blocker
> Fix For: 2.4.0
>
> Attachments: AMBARI-18004.patch
>
>
> Stop NameNodes and Delete Secondary NameNode have swapped label messages.
> If you click on the Delete Secondary NameNode label to see logs, you will see 
> logs for the Stop NameNodes action, and you will see no logs after clicking on 
> Stop NameNodes, as component deletion doesn't produce a request id with logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (AMBARI-18005) Clean cached resources on host removal

2016-08-03 Thread Laszlo Puskas (JIRA)
Laszlo Puskas created AMBARI-18005:
--

 Summary: Clean cached resources on host removal
 Key: AMBARI-18005
 URL: https://issues.apache.org/jira/browse/AMBARI-18005
 Project: Ambari
  Issue Type: Bug
  Components: ambari-server
Affects Versions: 2.4.0
Reporter: Laszlo Puskas
Assignee: Laszlo Puskas
 Fix For: 2.4.0


When a host is removed from the cluster and later from Ambari, there's a chance 
the agent registers back with the Ambari server before the agent is stopped.

Stopping the machine running the agent without the host being deleted again 
then leaves ambari-server in an inconsistent state due to cached data.

Resolution:
The cached resources are now cleared on the host delete event.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (AMBARI-16212) Reduce server startup time

2016-08-03 Thread Laszlo Puskas (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-16212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laszlo Puskas resolved AMBARI-16212.

Resolution: Fixed

> Reduce server startup time
> --
>
> Key: AMBARI-16212
> URL: https://issues.apache.org/jira/browse/AMBARI-16212
> Project: Ambari
>  Issue Type: Task
>  Components: ambari-server
>Reporter: Sebastian Toader
>Assignee: Laszlo Puskas
> Fix For: 2.4.1
>
>
> During startup the Ambari server does some work, such as expanding view jars, 
> sequentially; this may take a couple of minutes if there are many views to be 
> extracted.
> Perhaps the server can be started as part of sys-prep, so that view extraction 
> happens ahead of time and the second start is faster.
> In addition, other optimizations should be explored to expedite server start.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-17308) Ambari Logfeeder outputs a lot of errors due to parse date

2016-08-03 Thread Masahiro Tanaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-17308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masahiro Tanaka updated AMBARI-17308:
-
Attachment: AMBARI-17308.3.patch

> Ambari Logfeeder outputs a lot of errors due to parse date
> --
>
> Key: AMBARI-17308
> URL: https://issues.apache.org/jira/browse/AMBARI-17308
> Project: Ambari
>  Issue Type: Bug
>Affects Versions: trunk, 2.4.0
> Environment: CentOS7.2, JST
>Reporter: Masahiro Tanaka
>Assignee: Masahiro Tanaka
> Attachments: AMBARI-17308.1.patch, AMBARI-17308.2.patch, 
> AMBARI-17308.3.patch, AMBARI-17308.patch
>
>
> In logsearch_feeder service log, we got errors like below
> {code}
> 2016-06-20 15:28:09,368 ERROR file=ambari-audit.log 
> org.apache.ambari.logfeeder.mapper.MapperDate LogFeederUtil.java:356 - Error 
> applying date transformation. isEpoch=false, 
> dateFormat=yyyy-MM-dd'T'HH:mm:ss.SSSZ, value=2016-06-20T15:28:08.000. 
> mapClass=map_date, input=input:source=file, 
> path=/var/log/ambari-server/ambari-audit.log, fieldName=logtime. Messages 
> suppressed before: 2
> java.text.ParseException: Unparseable date: "2016-06-20T15:28:08.000"
>   at java.text.DateFormat.parse(DateFormat.java:366)
>   at 
> org.apache.ambari.logfeeder.mapper.MapperDate.apply(MapperDate.java:83)
>   at org.apache.ambari.logfeeder.filter.Filter.apply(Filter.java:154)
>   at 
> org.apache.ambari.logfeeder.filter.FilterGrok.applyMessage(FilterGrok.java:291)
>   at 
> org.apache.ambari.logfeeder.filter.FilterGrok.flush(FilterGrok.java:320)
>   at org.apache.ambari.logfeeder.input.Input.flush(Input.java:125)
>   at 
> org.apache.ambari.logfeeder.input.InputFile.processFile(InputFile.java:430)
>   at org.apache.ambari.logfeeder.input.InputFile.start(InputFile.java:260)
>   at org.apache.ambari.logfeeder.input.Input.run(Input.java:100)
>   at java.lang.Thread.run(Thread.java:745) 
> {code}
> ambari-audit.log is like below
> {code}
> 2016-07-21T01:52:49.875+09, User(admin), RemoteIp(192.168.72.1), 
> Operation(Repository update), RequestType(PUT), 
> url(http://192.168.72.101:8080/api/v1/stacks/HDP/versions/2.5/operating_systems/ubuntu14/repositories/HDP-2.5),
>  ResultStatus(200 OK), Stack(HDP), Stack version(2.5), OS(ubuntu14), Repo 
> id(HDP-2.5), Base 
> URL(http://s3.amazonaws.com/dev.hortonworks.com/HDP/ubuntu14/2.x/BUILDS/2.5.0.0-1025)
> 2016-07-21T01:52:49.905+09, User(admin), RemoteIp(192.168.72.1), 
> Operation(Repository update), RequestType(PUT), 
> url(http://192.168.72.101:8080/api/v1/stacks/HDP/versions/2.5/operating_systems/ubuntu16/repositories/HDP-2.5),
>  ResultStatus(200 OK), Stack(HDP), Stack version(2.5), OS(ubuntu16), Repo 
> id(HDP-2.5), Base 
> URL(http://s3.amazonaws.com/dev.hortonworks.com/HDP/ubuntu16/2.x/BUILDS/2.5.0.0-1025)
> 2016-07-21T01:52:50.015+09, User(admin), RemoteIp(192.168.72.1), 
> Operation(Repository update), RequestType(PUT), 
> url(http://192.168.72.101:8080/api/v1/stacks/HDP/versions/2.5/operating_systems/ubuntu14/repositories/HDP-UTILS-1.1.0.21),
>  ResultStatus(200 OK), Stack(HDP), Stack version(2.5), OS(ubuntu14), Repo 
> id(HDP-UTILS-1.1.0.21), Base 
> URL(http://s3.amazonaws.com/dev.hortonworks.com/HDP-UTILS-1.1.0.21/repos/ubuntu14)
> {code}
> I think the date format of ambari-audit.log ({{2016-07-21T01:52:49.875+09}}) 
> should be like {{2016-07-21T01:52:49.875+0900}}, since the grok pattern can't 
> handle the {{2016-07-21T01:52:49.875+09}} format.
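> For illustration, a minimal standalone sketch (the class below is 
> hypothetical, not Logfeeder code) of why a {{Z}} pattern in SimpleDateFormat 
> accepts the four-digit offset but rejects the two-digit one:
> {code}
> import java.text.ParseException;
> import java.text.SimpleDateFormat;
>
> public class TzParseDemo {
>     public static void main(String[] args) throws ParseException {
>         SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSSZ");
>         // RFC 822 four-digit offset parses fine:
>         System.out.println(fmt.parse("2016-07-21T01:52:49.875+0900"));
>         // A two-digit offset fails, like the Logfeeder error above:
>         try {
>             fmt.parse("2016-07-21T01:52:49.875+09");
>         } catch (ParseException e) {
>             System.out.println(e.getMessage()); // Unparseable date: ...
>         }
>     }
> }
> {code}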



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18003) Hive view 1.5.0 shows error for previous invalid queries in the logs of any subsequent query

2016-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15405847#comment-15405847
 ] 

Hadoop QA commented on AMBARI-18003:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12821815/AMBARI-18003.branch-2.4.patch
  against trunk revision .

{color:red}-1 patch{color}.  Top-level trunk compilation may be broken.

Console output: 
https://builds.apache.org/job/Ambari-trunk-test-patch/8267//console

This message is automatically generated.

> Hive view 1.5.0 shows error for previous invalid queries in the logs of any 
> subsequent query
> 
>
> Key: AMBARI-18003
> URL: https://issues.apache.org/jira/browse/AMBARI-18003
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-views
>Affects Versions: 2.4.0
>Reporter: Anusha Bilgi
>Assignee: DIPAYAN BHOWMICK
>Priority: Critical
> Fix For: 2.4.0
>
> Attachments: AMBARI-18003.branch-2.4.patch
>
>
> Steps to reproduce:
> 1. Go to the Hive view and run an invalid query: "drop db1"
> 2. Run a valid query: "create database db2"
> The query execution status is shown as succeeded, but the log shows the error 
> from the first, invalid query.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18001) operationStatus and taskStatus audit log should contain remoteIp message

2016-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15405850#comment-15405850
 ] 

Hadoop QA commented on AMBARI-18001:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12821793/AMBARI-18001_01.patch
  against trunk revision .

{color:red}-1 patch{color}.  Top-level trunk compilation may be broken.

Console output: 
https://builds.apache.org/job/Ambari-trunk-test-patch/8268//console

This message is automatically generated.

> operationStatus and taskStatus audit log should contain remoteIp message
> -
>
> Key: AMBARI-18001
> URL: https://issues.apache.org/jira/browse/AMBARI-18001
> Project: Ambari
>  Issue Type: Improvement
>  Components: ambari-server
>Affects Versions: trunk
>Reporter: wangyaoxin
> Fix For: trunk
>
> Attachments: AMBARI-18001.patch, AMBARI-18001_01.patch
>
>
> The OperationStatusAuditEvent and TaskStatusAuditEvent audit log messages 
> should contain the remoteIp info, stored in the pg table host_role_command.
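> For context, the request audit lines already carry this field as 
> {{RemoteIp(...)}} (see the ambari-audit.log excerpt quoted under AMBARI-17308 
> above). A minimal sketch of the requested outcome (the class and message 
> layout below are hypothetical, not the Ambari audit API):
> {code}
> // Hypothetical sketch: append the remote IP, read from host_role_command,
> // to a status audit line in the same RemoteIp(...) form that the request
> // audit events already use.
> public class StatusAuditLine {
>     public static void main(String[] args) {
>         String remoteIp = "192.168.72.1"; // would come from host_role_command
>         String line = "Operation(Task status change), Status(COMPLETED)"
>                 + ", RemoteIp(" + remoteIp + ")";
>         System.out.println(line);
>     }
> }
> {code}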



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18000) Data can not be migrated from Hive 1.0 to Hive2

2016-08-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15405844#comment-15405844
 ] 

Hudson commented on AMBARI-18000:
-

SUCCESS: Integrated in Ambari-trunk-Commit #5444 (See 
[https://builds.apache.org/job/Ambari-trunk-Commit/5444/])
AMBARI-18000. Data can not be migrated from Hive 1.0 to Hive2. (gnagar: 
[http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=0272ace43cddc0c1d5126d9535f278aafea32f4d])
* contrib/views/hive-next/src/main/resources/view.xml
* 
ambari-server/src/main/java/org/apache/ambari/server/view/ViewDataMigrationUtility.java
* 
contrib/views/hive-next/src/main/java/org/apache/ambari/view/hive2/DataMigrator.java


> Data can not be migrated from Hive 1.0 to Hive2
> ---
>
> Key: AMBARI-18000
> URL: https://issues.apache.org/jira/browse/AMBARI-18000
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-views
>Affects Versions: 2.4.0
>Reporter: Gaurav Nagar
>Assignee: Gaurav Nagar
> Fix For: 2.4.0
>
> Attachments: AMBARI-18000_branch-2.4.patch
>
>
> The persistence entities of the Hive view packages were changed, so the data 
> migration mechanism cannot find the relationship between entities in the two 
> versions. For example, in hive-next:
> org.apache.ambari.view.hive2.resources.udfs.UDF
> and in the original view:
> org.apache.ambari.view.hive.resources.udfs.UDF
> We need to write a simple data migrator that will map the entities.
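> A minimal sketch of that mapping idea (the class below is hypothetical, not 
> the DataMigrator from the commit): rewrite the old package prefix to the 
> hive-next one and keep the rest of the entity name.
> {code}
> // Hypothetical sketch: map original Hive view entity class names to their
> // hive-next counterparts by swapping the package prefix.
> public class EntityNameMapper {
>     private static final String OLD_PREFIX = "org.apache.ambari.view.hive.";
>     private static final String NEW_PREFIX = "org.apache.ambari.view.hive2.";
>
>     static String toHiveNext(String oldClassName) {
>         if (oldClassName.startsWith(OLD_PREFIX)) {
>             return NEW_PREFIX + oldClassName.substring(OLD_PREFIX.length());
>         }
>         return oldClassName; // already a hive-next entity
>     }
>
>     public static void main(String[] args) {
>         // The UDF entity from the description:
>         System.out.println(toHiveNext("org.apache.ambari.view.hive.resources.udfs.UDF"));
>         // -> org.apache.ambari.view.hive2.resources.udfs.UDF
>     }
> }
> {code}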



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMBARI-18004) Wrong messages on NameNode HA Wizard step9

2016-08-03 Thread Aleksandr Kovalenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMBARI-18004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Kovalenko updated AMBARI-18004:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Wrong messages on NameNode HA Wizard step9
> --
>
> Key: AMBARI-18004
> URL: https://issues.apache.org/jira/browse/AMBARI-18004
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.4.0
>Reporter: Aleksandr Kovalenko
>Assignee: Aleksandr Kovalenko
>Priority: Blocker
> Fix For: 2.4.0
>
> Attachments: AMBARI-18004.patch
>
>
> Stop NameNodes and Delete Secondary NameNode have swapped label messages.
> If you click on the Delete Secondary NameNode label to see logs, you will see 
> logs for the Stop NameNodes action, and you will see no logs after clicking on 
> Stop NameNodes, as component deletion doesn't produce a request id with logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMBARI-18004) Wrong messages on NameNode HA Wizard step9

2016-08-03 Thread Aleksandr Kovalenko (JIRA)

[ 
https://issues.apache.org/jira/browse/AMBARI-18004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15405805#comment-15405805
 ] 

Aleksandr Kovalenko commented on AMBARI-18004:
--

Committed to trunk and branch-2.4.0.

> Wrong messages on NameNode HA Wizard step9
> --
>
> Key: AMBARI-18004
> URL: https://issues.apache.org/jira/browse/AMBARI-18004
> Project: Ambari
>  Issue Type: Bug
>  Components: ambari-web
>Affects Versions: 2.4.0
>Reporter: Aleksandr Kovalenko
>Assignee: Aleksandr Kovalenko
>Priority: Blocker
> Fix For: 2.4.0
>
> Attachments: AMBARI-18004.patch
>
>
> Stop NameNodes and Delete Secondary NameNode have swapped label messages.
> If you click on the Delete Secondary NameNode label to see logs, you will see 
> logs for the Stop NameNodes action, and you will see no logs after clicking on 
> Stop NameNodes, as component deletion doesn't produce a request id with logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

