[jira] [Assigned] (AMBARI-25650) Components running on a Metrics Collector host should use the local Collector
[ https://issues.apache.org/jira/browse/AMBARI-25650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tamas Payer reassigned AMBARI-25650:

Assignee: (was: Tamas Payer)

Key: AMBARI-25650
URL: https://issues.apache.org/jira/browse/AMBARI-25650
Project: Ambari
Issue Type: Bug
Components: ambari-metrics
Affects Versions: 2.6.2, 2.7.0, 2.7.1, 2.7.2, 2.7.3, 2.7.4, 2.7.5
Reporter: Tamas Payer
Priority: Minor
Labels: HA, metric-collector, metrics, shard

On a cluster where Ambari Metrics HA is set up, i.e. the cluster has multiple Metrics Collectors, the components running on a collector's host should always use the local collector when it is available.

{code:java}
2021-03-26 11:24:17,180 INFO [HBase-Metrics2-1] availability.MetricSinkWriteShardHostnameHashingStrategy: Calculated collector shard c7402.ambari.apache.org based on hostname: c7403.ambari.apache.org
{code}

In the log above, the RegionServer running on c7403.ambari.apache.org is using the collector on c7402.ambari.apache.org despite a local collector being available on c7403.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@ambari.apache.org
For additional commands, e-mail: issues-h...@ambari.apache.org
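The hashing strategy named in the log picks a shard purely from a hash of the sink's hostname, so a sink can be routed to a remote collector even when one runs locally. A minimal sketch of that behavior and of the local-preference fix the ticket asks for (class and method names here are illustrative, not the actual Ambari implementation):

```java
import java.util.List;

// Illustrative sketch of collector shard selection for a metric sink.
// Not the real MetricSinkWriteShardHostnameHashingStrategy; the hashing
// details and method names are assumptions.
public class ShardStrategySketch {

    // Hash-based selection, as the log message suggests: the chosen shard
    // depends only on the sink's hostname, so a sink on a collector host
    // may still be routed to a remote collector.
    public static String hashedShard(List<String> collectors, String sinkHost) {
        int idx = Math.abs(sinkHost.hashCode() % collectors.size());
        return collectors.get(idx);
    }

    // Proposed behavior: prefer the collector running on the sink's own
    // host, and only fall back to hashing when there is no local collector.
    public static String preferLocalShard(List<String> collectors, String sinkHost) {
        if (collectors.contains(sinkHost)) {
            return sinkHost;
        }
        return hashedShard(collectors, sinkHost);
    }

    public static void main(String[] args) {
        List<String> collectors =
                List.of("c7402.ambari.apache.org", "c7403.ambari.apache.org");
        // A RegionServer on c7403 would keep its local collector.
        System.out.println(preferLocalShard(collectors, "c7403.ambari.apache.org"));
    }
}
```

With this ordering, the hash only matters for hosts that do not run a collector themselves, which is what the ticket describes as the expected behavior.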
[jira] [Updated] (AMBARI-25668) Integrate the Apache released HBase as back store of Ambari Metrics
[ https://issues.apache.org/jira/browse/AMBARI-25668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tamas Payer updated AMBARI-25668:

Resolution: Fixed
Status: Resolved (was: Patch Available)

Key: AMBARI-25668
URL: https://issues.apache.org/jira/browse/AMBARI-25668
Project: Ambari
Issue Type: Task
Components: ambari-metrics
Affects Versions: 2.7.5
Reporter: Tamas Payer
Assignee: Tamas Payer
Priority: Major
Labels: apache, hbase, metrics
Fix For: 2.7.6
Time Spent: 1h 20m
Remaining Estimate: 0h

Since the non-managed HDP tarballs are no longer publicly accessible, AMBARI-25599 is an attempt to replace those dependencies of Ambari Metrics with the open source versions. However, the Apache version of HBase ([https://archive.apache.org/dist/hbase/2.0.2/hbase-2.0.2-bin.tar.gz]) does not start up and fails with:

{code:java}
[root@c7401 ambari-metrics-collector]# less hbase-ams-master-c7401.ambari.apache.org.log{code}

{code:java}
2021-02-10 08:19:01,953 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.NoClassDefFoundError: org/apache/commons/configuration/Configuration
	at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.(DefaultMetricsSystem.java:38)
	at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.(DefaultMetricsSystem.java:36)
	at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:159)
	at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:140)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
	at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:149)
	at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2964)
Caused by: java.lang.ClassNotFoundException: org.apache.commons.configuration.Configuration{code}

The missing class '_org.apache.commons.configuration.Configuration_' is located in _'commons-configuration2'_, but strangely that is not present in '_/usr/lib/ams-hbase/lib/_' if the Apache version of HBase is used for the build.

* *Investigate why HBase fails to start up.*
* *Why is the commons-configuration2 lib missing?*
* *Integrate the Apache released HBase as back store of Ambari Metrics.*
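A quick way to confirm which side of the classpath the problem is on (the jar is missing vs. the wrong jar is shipped) is to probe for the class directly. A minimal sketch, independent of Ambari (on the failing build the probe for the commons-configuration class would return false):

```java
// Minimal classpath probe mirroring the ClassNotFoundException above:
// returns true only if the named class can be loaded from the current
// classpath. Illustrative helper, not part of Ambari.
public class ClasspathProbe {

    public static boolean isPresent(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // The class HMaster's metrics subsystem fails to load at startup.
        System.out.println(isPresent("org.apache.commons.configuration.Configuration"));
    }
}
```

Running this with the same classpath as the AMS HBase master isolates whether the failure is a packaging gap in /usr/lib/ams-hbase/lib/ or a mismatch between the configuration-library versions bundled by Hadoop and HBase.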
[jira] [Updated] (AMBARI-25668) Integrate the Apache released HBase as back store of Ambari Metrics
[ https://issues.apache.org/jira/browse/AMBARI-25668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tamas Payer updated AMBARI-25668:

Status: Patch Available (was: In Progress)

Key: AMBARI-25668
URL: https://issues.apache.org/jira/browse/AMBARI-25668
Project: Ambari
Issue Type: Task
Components: ambari-metrics
Affects Versions: 2.7.5
Reporter: Tamas Payer
Assignee: Tamas Payer
Priority: Major
Labels: apache, hbase, metrics
Fix For: 2.7.6
Time Spent: 0.5h
Remaining Estimate: 0h
[jira] [Assigned] (AMBARI-25668) Integrate the Apache released HBase as back store of Ambari Metrics
[ https://issues.apache.org/jira/browse/AMBARI-25668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tamas Payer reassigned AMBARI-25668:

Assignee: Tamas Payer

Key: AMBARI-25668
URL: https://issues.apache.org/jira/browse/AMBARI-25668
Project: Ambari
Issue Type: Task
Components: ambari-metrics
Affects Versions: 2.7.5
Reporter: Tamas Payer
Assignee: Tamas Payer
Priority: Major
Labels: apache, hbase, metrics
Fix For: 2.7.6
Time Spent: 0.5h
Remaining Estimate: 0h
[jira] [Commented] (AMBARI-25670) Apache released HBase does not work in distributed mode in Ambari Metrics
[ https://issues.apache.org/jira/browse/AMBARI-25670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17327177#comment-17327177 ]

Tamas Payer commented on AMBARI-25670:

cc: [~mariann.vakula]

Key: AMBARI-25670
URL: https://issues.apache.org/jira/browse/AMBARI-25670
Project: Ambari
Issue Type: Bug
Components: ambari-metrics
Affects Versions: 2.7.6
Reporter: Tamas Payer
Priority: Major
Labels: Hbase, Phoenix, metric-collector
Fix For: 2.7.6

AMBARI-25668 integrates the HBase packaged by Apache instead of the Cloudera-packaged one. The Metrics Collector works properly in embedded mode, but HBase fails to start up in distributed mode. The RegionServer reports the following message:

{noformat}
regionserver.HRegionServer: STOPPED: Unhandled: Found interface org.apache.hadoop.hdfs.protocol.HdfsFileStatus, but class was expected
{noformat}

The root cause is likely an incompatibility between Hadoop 3.1.1 and HBase 2.0.2. See: [https://hbase.apache.org/2.0/book.html#hadoop]

*Consider uplifting the HBase and Phoenix versions.*
[jira] [Created] (AMBARI-25670) Apache released HBase does not work in distributed mode in Ambari Metrics
Tamas Payer created AMBARI-25670:

Summary: Apache released HBase does not work in distributed mode in Ambari Metrics
Key: AMBARI-25670
URL: https://issues.apache.org/jira/browse/AMBARI-25670
Project: Ambari
Issue Type: Bug
Components: ambari-metrics
Affects Versions: 2.7.6
Reporter: Tamas Payer
Fix For: 2.7.6
[jira] [Resolved] (AMBARI-25638) FindBugs: Class defines equals() and uses Object.hashCode()
[ https://issues.apache.org/jira/browse/AMBARI-25638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tamas Payer resolved AMBARI-25638.

Resolution: Fixed

Key: AMBARI-25638
URL: https://issues.apache.org/jira/browse/AMBARI-25638
Project: Ambari
Issue Type: Bug
Components: ambari-server
Affects Versions: 2.7.3, 2.7.4, 2.7.5
Reporter: Tamas Payer
Assignee: Tamas Payer
Priority: Major
Labels: cleanup, findbugs, server
Fix For: 2.7.6
Time Spent: 20m
Remaining Estimate: 0h

FindBugs finding: org.apache.ambari.server.state.alert.MetricSource$JmxInfo defines equals and uses Object.hashCode()

This class overrides {{equals(Object)}}, but does not override {{hashCode()}}, and inherits the implementation of {{hashCode()}} from {{java.lang.Object}} (which returns the identity hash code, an arbitrary value assigned to the object by the VM). Therefore, the class is very likely to violate the invariant that equal objects must have equal hash codes.
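The invariant the FindBugs warning refers to can be shown with a minimal class pair (unrelated to the actual MetricSource$JmxInfo code): a class that overrides equals() but keeps the identity hashCode() silently breaks hash-based collections.

```java
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

// Minimal illustration of the FindBugs finding "defines equals() and
// uses Object.hashCode()". Classes here are toy examples, not Ambari code.
public class EqualsHashCodeDemo {

    // Buggy: equals() is overridden but hashCode() is inherited from
    // Object, so two equal instances usually have different hash codes.
    static final class Broken {
        final String name;
        Broken(String name) { this.name = name; }
        @Override public boolean equals(Object o) {
            return o instanceof Broken && ((Broken) o).name.equals(name);
        }
        // hashCode() intentionally NOT overridden.
    }

    // Fixed: hashCode() is derived from the same field as equals().
    static final class Fixed {
        final String name;
        Fixed(String name) { this.name = name; }
        @Override public boolean equals(Object o) {
            return o instanceof Fixed && ((Fixed) o).name.equals(name);
        }
        @Override public int hashCode() { return Objects.hash(name); }
    }

    public static void main(String[] args) {
        Set<Broken> broken = new HashSet<>();
        broken.add(new Broken("jmx"));
        // Usually false: the "equal" key probes the wrong hash bucket.
        System.out.println(broken.contains(new Broken("jmx")));

        Set<Fixed> fixed = new HashSet<>();
        fixed.add(new Fixed("jmx"));
        System.out.println(fixed.contains(new Fixed("jmx"))); // prints true
    }
}
```

This is why the fix for such findings is mechanical: override hashCode() using exactly the fields that equals() compares.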
[jira] [Resolved] (AMBARI-25632) Verify custom queries with "IN" clause for ORA-01795 issue
[ https://issues.apache.org/jira/browse/AMBARI-25632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tamas Payer resolved AMBARI-25632.

Resolution: Fixed

Key: AMBARI-25632
URL: https://issues.apache.org/jira/browse/AMBARI-25632
Project: Ambari
Issue Type: Task
Components: ambari-server
Affects Versions: 2.6.2, 2.7.3, 2.7.4, 2.7.5
Reporter: Tamas Payer
Assignee: Tamas Payer
Priority: Major
Labels: oracle, server, sql
Fix For: 2.7.6
Time Spent: 0.5h
Remaining Estimate: 0h

Review all custom queries with an "IN" clause to see whether they are vulnerable to the ORA-01795 issue. Approximate the possible maximum size of the list passed into the query and, where needed, apply a wrapper that processes the query in batches.
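For context, ORA-01795 is Oracle's "maximum number of expressions in a list is 1000" error, raised when an IN clause receives more than 1000 literals. The batching wrapper the ticket describes could look like this sketch (method and class names are illustrative, not Ambari's actual helper):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Sketch of a batching wrapper for large IN lists. Oracle rejects IN
// lists with more than 1000 expressions (ORA-01795), so the id list is
// split into chunks and the query is executed once per chunk.
// Illustrative, not actual Ambari code.
public class InClauseBatcher {

    static final int ORACLE_IN_LIMIT = 1000;

    // Runs `query` on sublists no larger than the Oracle limit and
    // concatenates the per-batch results in order.
    public static <T, R> List<R> queryInBatches(List<T> ids,
                                                Function<List<T>, List<R>> query) {
        List<R> results = new ArrayList<>();
        for (int from = 0; from < ids.size(); from += ORACLE_IN_LIMIT) {
            int to = Math.min(from + ORACLE_IN_LIMIT, ids.size());
            results.addAll(query.apply(ids.subList(from, to)));
        }
        return results;
    }

    public static void main(String[] args) {
        List<Integer> ids = new ArrayList<>();
        for (int i = 0; i < 2500; i++) ids.add(i);
        // Stand-in "query" that echoes its batch; 2500 ids -> 3 batches.
        List<Integer> out = queryInBatches(ids, batch -> new ArrayList<>(batch));
        System.out.println(out.size()); // prints 2500
    }
}
```

In real code the function argument would bind the sublist into a prepared statement's IN clause; the wrapper keeps each statement under the 1000-expression ceiling regardless of the caller's list size.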
[jira] [Assigned] (AMBARI-25668) Integrate the Apache released HBase as back store of Ambari Metrics
[ https://issues.apache.org/jira/browse/AMBARI-25668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tamas Payer reassigned AMBARI-25668:

Assignee: (was: Tamas Payer)

Key: AMBARI-25668
URL: https://issues.apache.org/jira/browse/AMBARI-25668
Project: Ambari
Issue Type: Task
Components: ambari-metrics
Affects Versions: 2.7.5
Reporter: Tamas Payer
Priority: Major
Labels: apache, hbase, metrics
Fix For: 2.7.6
[jira] [Assigned] (AMBARI-25668) Integrate the Apache released HBase as back store of Ambari Metrics
[ https://issues.apache.org/jira/browse/AMBARI-25668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tamas Payer reassigned AMBARI-25668:

Assignee: Tamas Payer

Key: AMBARI-25668
URL: https://issues.apache.org/jira/browse/AMBARI-25668
Project: Ambari
Issue Type: Task
Components: ambari-metrics
Affects Versions: 2.7.5
Reporter: Tamas Payer
Assignee: Tamas Payer
Priority: Major
Labels: apache, hbase, metrics
Fix For: 2.7.6
[jira] [Created] (AMBARI-25668) Integrate the Apache released HBase as back store of Ambari Metrics
Tamas Payer created AMBARI-25668:

Summary: Integrate the Apache released HBase as back store of Ambari Metrics
Key: AMBARI-25668
URL: https://issues.apache.org/jira/browse/AMBARI-25668
Project: Ambari
Issue Type: Task
Components: ambari-metrics
Affects Versions: 2.7.5
Reporter: Tamas Payer
Fix For: 2.7.6
[jira] [Created] (AMBARI-25650) Components running on a Metrics Collector host should use the local Collector
Tamas Payer created AMBARI-25650:

Summary: Components running on a Metrics Collector host should use the local Collector
Key: AMBARI-25650
URL: https://issues.apache.org/jira/browse/AMBARI-25650
Project: Ambari
Issue Type: Bug
Components: ambari-metrics
Affects Versions: 2.6.2, 2.7.0, 2.7.1, 2.7.2, 2.7.3, 2.7.4, 2.7.5
Reporter: Tamas Payer
Assignee: Tamas Payer
Fix For: 2.7.6
[jira] [Updated] (AMBARI-25635) Clear Cluster and METRIC_AGGREGATORS MBeans upon shutdown
[ https://issues.apache.org/jira/browse/AMBARI-25635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tamas Payer updated AMBARI-25635:

Labels: HA helix metric-collector (was: metric-collector)

Key: AMBARI-25635
URL: https://issues.apache.org/jira/browse/AMBARI-25635
Project: Ambari
Issue Type: Bug
Components: ambari-metrics
Affects Versions: 2.7.4, 2.7.5
Reporter: Tamas Payer
Assignee: Tamas Payer
Priority: Major
Labels: HA, helix, metric-collector
Fix For: 2.7.6

The following warnings appear in metrics-collector.log upon startup:

{code:java}
javax.management.InstanceAlreadyExistsException: ClusterStatus: cluster=ambari-metrics-cluster,instanceName=ctr-e153-1613480641811-87570-01-57.hwx.site_12001
	at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
	at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
	at org.apache.helix.monitoring.mbeans.ClusterStatusMonitor.register(ClusterStatusMonitor.java:172)
	at org.apache.helix.monitoring.mbeans.ClusterStatusMonitor.registerInstances(ClusterStatusMonitor.java:498)
	at org.apache.helix.monitoring.mbeans.ClusterStatusMonitor.setClusterInstanceStatus(ClusterStatusMonitor.java:220)
	at org.apache.helix.controller.stages.ReadClusterDataStage.process(ReadClusterDataStage.java:91)
	at org.apache.helix.controller.pipeline.Pipeline.handle(Pipeline.java:48)
	at org.apache.helix.controller.GenericHelixController.handleEvent(GenericHelixController.java:295)
	at org.apache.helix.controller.GenericHelixController$ClusterEventProcessor.run(GenericHelixController.java:595)
{code}

and

{code:java}
2021-03-09 23:15:53,450 INFO org.apache.helix.monitoring.mbeans.ClusterStatusMonitor: Register MBean: ClusterStatus: cluster=ambari-metrics-cluster,instanceName=ctr-e153-1613480641811-87570-01-57.hwx.site_12001,resourceName=METRIC_AGGREGATORS
2021-03-09 23:15:53,450 INFO org.apache.helix.monitoring.mbeans.ClusterStatusMonitor: Register MBean: ClusterStatus: cluster=ambari-metrics-cluster,instanceName=ctr-e153-1613480641811-87570-01-57.hwx.site_12001,resourceName=METRIC_AGGREGATORS
2021-03-09 23:15:53,450 WARN org.apache.helix.monitoring.mbeans.ClusterStatusMonitor: Could not register MBean: ClusterStatus: cluster=ambari-metrics-cluster,instanceName=ctr-e153-1613480641811-87570-01-57.hwx.site_12001,resourceName=METRIC_AGGREGATORS
javax.management.InstanceAlreadyExistsException: ClusterStatus: cluster=ambari-metrics-cluster,instanceName=ctr-e153-1613480641811-87570-01-57.hwx.site_12001,resourceName=METRIC_AGGREGATORS
	at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
	at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
	at org.apache.helix.monitoring.mbeans.ClusterStatusMonitor.register(ClusterStatusMonitor.java:172)
	at org.apache.helix.monitoring.mbeans.ClusterStatusMonitor.registerPerInstanceResources(ClusterStatusMonitor.java:537)
	at org.apache.helix.monitoring.mbeans.ClusterStatusMonitor.setPerInstanceResourceStatus(ClusterStatusMonitor.java:307)
	at org.apache.helix.controller.stages.BestPossibleStateCalcStage.process(BestPossibleStateCalcStage.java:74)
	at org.apache.helix.controller.pipeline.Pipeline.handle(Pipeline.java:48)
	at org.apache.helix.controller.GenericHelixController.handleEvent(GenericHelixController.java:295)
	at org.apache.helix.controller.GenericHelixController$ClusterEventProcessor.run(GenericHelixController.java:595)
{code}
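The failure mode behind both warnings is generic JMX behavior: registering an MBean under an ObjectName that is already taken raises InstanceAlreadyExistsException, which is why the ticket asks for the MBeans to be cleared on shutdown. A self-contained sketch (the ObjectName below is illustrative, not the actual Helix one):

```java
import java.lang.management.ManagementFactory;
import javax.management.InstanceAlreadyExistsException;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Sketch of the failure mode and the fix: a second registerMBean() call
// under the same ObjectName fails unless the first registration was
// cleared (unregistered) on shutdown. Illustrative, not Helix/Ambari code.
public class MBeanLifecycleDemo {

    // Standard MBean naming convention: interface XxxMBean, class Xxx.
    public interface DemoMBean { int getValue(); }
    public static class Demo implements DemoMBean {
        public int getValue() { return 42; }
    }

    // Registers twice under one name; returns true if the second attempt
    // failed with InstanceAlreadyExistsException. Always unregisters at
    // the end -- the cleanup step the ticket asks for on shutdown.
    public static boolean registerTwice() {
        try {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            ObjectName name =
                new ObjectName("demo:type=ClusterStatus,cluster=ambari-metrics-cluster");
            server.registerMBean(new Demo(), name);
            boolean collided;
            try {
                server.registerMBean(new Demo(), name); // simulates a restart without cleanup
                collided = false;
            } catch (InstanceAlreadyExistsException expected) {
                collided = true;
            }
            server.unregisterMBean(name); // clear on shutdown -> next startup is clean
            return collided;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(registerTwice()); // prints true
    }
}
```

With the unregister step in place, a restarted controller can register the same ClusterStatus names again without tripping the warning.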
[jira] [Assigned] (AMBARI-25635) Clear Cluster and METRIC_AGGREGATORS MBeans upon shutdown
[ https://issues.apache.org/jira/browse/AMBARI-25635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tamas Payer reassigned AMBARI-25635:

Assignee: Tamas Payer

Key: AMBARI-25635
URL: https://issues.apache.org/jira/browse/AMBARI-25635
Project: Ambari
Issue Type: Bug
Components: ambari-metrics
Affects Versions: 2.7.4, 2.7.5
Reporter: Tamas Payer
Assignee: Tamas Payer
Priority: Major
Labels: metric-collector
Fix For: 2.7.6
[jira] [Created] (AMBARI-25638) FindBugs: Class defines equals() and uses Object.hashCode()
Tamas Payer created AMBARI-25638:

Summary: FindBugs: Class defines equals() and uses Object.hashCode()
Key: AMBARI-25638
URL: https://issues.apache.org/jira/browse/AMBARI-25638
Project: Ambari
Issue Type: Bug
Components: ambari-server
Affects Versions: 2.7.3, 2.7.4, 2.7.5
Reporter: Tamas Payer
Assignee: Tamas Payer
Fix For: 2.7.6
[jira] [Updated] (AMBARI-25636) FindBugs: Comparison of String parameter using == or !=
[ https://issues.apache.org/jira/browse/AMBARI-25636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer updated AMBARI-25636: - Labels: cleanup findbugs server (was: findbugs server) > FindBugs: Comparison of String parameter using == or != > > > Key: AMBARI-25636 > URL: https://issues.apache.org/jira/browse/AMBARI-25636 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Affects Versions: 2.7.3, 2.7.4, 2.7.5 >Reporter: Tamas Payer >Assignee: Tamas Payer >Priority: Minor > Labels: cleanup, findbugs, server > Fix For: 2.7.6 > > Time Spent: 10m > Remaining Estimate: 0h > > Fix FindBugs issue: Comparison of String parameter using == or != in > org.apache.ambari.server.state.host.HostImpl.setStatus(String) -- This message was sent by Atlassian Jira (v8.3.4#803005)
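The String comparison problem above (FindBugs pattern ES_COMPARING_PARAMETER_STRING_WITH_EQ) boils down to `==` checking reference identity rather than content. The sketch below is illustrative only; the method and field names are invented and do not reproduce the actual HostImpl.setStatus(String) code, they just show the `Objects.equals` fix.

```java
// Illustrative sketch: comparing a String parameter with == or != compares
// references, so two equal strings from different sources (e.g. one built at
// runtime) can wrongly be treated as different. Names here are invented.
final class StatusSketch {
    private String status = "UNKNOWN";

    // Fixed form: content comparison via Objects.equals, which is also
    // null-safe. The buggy form would be "if (this.status != newStatus)".
    boolean setStatus(String newStatus) {
        boolean changed = !java.util.Objects.equals(status, newStatus);
        if (changed) {
            status = newStatus;
        }
        return changed;
    }

    String getStatus() {
        return status;
    }
}
```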
[jira] [Resolved] (AMBARI-25326) AMS - no HBase and Hive metrics post-upgrade when using 2 collectors
[ https://issues.apache.org/jira/browse/AMBARI-25326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer resolved AMBARI-25326. -- Resolution: Duplicate > AMS - no HBase and Hive metrics post-upgrade when using 2 collectors > > > Key: AMBARI-25326 > URL: https://issues.apache.org/jira/browse/AMBARI-25326 > Project: Ambari > Issue Type: Bug > Components: ambari-metrics >Affects Versions: 2.7.3 >Reporter: Gabor Boros >Assignee: Gabor Boros >Priority: Major > > Seems like a bug when 2 metric collectors are deployed. Hive and hbase > services are not able to send metrics > {code} > Error : 2019-06-10 02:42:59,215 INFO timeline > timeline.HadoopTimelineMetricsSink: No live collector to send metrics to. > Metrics to be sent will be discarded. This message will be skipped for the > next 20 > Debug Error shows this : > 2019-06-14 20:35:29,538 DEBUG main timeline.HadoopTimelineMetricsSink: Trying > to find live collector host from : > bolhdppname5.micron.com,bolhdppname4.micron.com > 2019-06-14 20:35:29,538 DEBUG main timeline.HadoopTimelineMetricsSink: > Requesting live collector nodes : > http://bolhdppname5.micron.com,bolhdppname4.micron.com:6188/ws/v1/timeline/metrics/livenodes > 2019-06-14 20:35:29,557 DEBUG main timeline.HadoopTimelineMetricsSink: Unable > to connect to collector, > http://bolhdppname5.micron.com,bolhdppname4.micron.com:6188/ws/v1/timeline/metrics/livenodes > 2019-06-14 20:35:29,557 DEBUG main timeline.HadoopTimelineMetricsSink: > java.net.UnknownHostException: bolhdppname5.micron.com,bolhdppname4.micron.com > 2019-06-14 20:35:29,558 DEBUG main timeline.HadoopTimelineMetricsSink: > Collector bolhdppname5.micron.com,bolhdppname4.micron.com is not longer live. > Removing it from list of know live collector hosts : [] > 2019-06-14 20:35:29,558 DEBUG main timeline.HadoopTimelineMetricsSink: No > live collectors from configuration. > {code} > Its incorrectly parsing hostnames when there are 2 collectors. 
> Hive service and Hbase service have ability to determine the live collectors > either through curl or zookeeper but the configs doesn't support fetching > live collector node from zookeeper. > To work around this, we added > for hbase > {code} > *.sink.timeline.zookeeper.quorum=bolhdppname5.micron.com:2181,bolhdppname1.micron.com:2181,bolhdppname4.micron.com:2181,bolhdppname2.micron.com:2181,bolhdppname3.micron.com:2181 > {code} > in > /var/lib/ambari-server/resources/stacks/HDP/3.0/services/HBASE/package/templates/hadoop-metrics2-hbase.properties-GANGLIA-MASTER.j2 > and for hive > Add > {code} > *.sink.timeline.zookeeper.quorum=bolhdppname5.micron.com:2181,bolhdppname1.micron.com:2181,bolhdppname4.micron.com:2181,bolhdppname2.micron.com:2181,bolhdppname3.micron.com:2181 > {code} > in all 4 files under > /var/lib/ambari-server/resources/stacks/HDP/3.0/services/HIVE/package/templates/ > ( on ambari server ) > {code} > root@c1207-node1 templates# ll | grep metr > -rwxr-xr-x 1 root root 3032 Sep 18 2018 > hadoop-metrics2-hivemetastore.properties.j2 > -rwxr-xr-x 1 root root 3016 Sep 18 2018 > hadoop-metrics2-hiveserver2.properties.j2 > -rwxr-xr-x 1 root root 2959 Sep 18 2018 hadoop-metrics2-llapdaemon.j2 > -rwxr-xr-x 1 root root 3015 Sep 18 2018 hadoop-metrics2-llaptaskscheduler.j2 > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
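The debug log above shows the sink building a single URL out of the whole comma-separated collector list ("http://host1,host2:6188/..."), which can never resolve. A minimal sketch of the intended parsing, splitting the list first and producing one candidate livenodes URL per collector host, could look like this. The class and method names are invented for illustration; the real HadoopTimelineMetricsSink logic differs.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: split a comma-separated collector host list before
// building URLs, so each collector gets its own livenodes endpoint instead
// of the whole list being treated as one hostname.
final class CollectorUriSketch {
    static List<String> liveNodeUris(String collectorHosts, int port) {
        List<String> uris = new ArrayList<>();
        for (String host : collectorHosts.split(",")) {
            host = host.trim();
            if (!host.isEmpty()) {
                uris.add("http://" + host + ":" + port + "/ws/v1/timeline/metrics/livenodes");
            }
        }
        return uris;
    }
}
```

Each URI can then be tried in turn until one collector answers, which is what the "find live collector host" step in the log is attempting.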
[jira] [Updated] (AMBARI-25549) NegativeArraySizeException thrown when invoking CurrentCollectorHost
[ https://issues.apache.org/jira/browse/AMBARI-25549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer updated AMBARI-25549: - Description: SMM is using *AbstractTimelineMetricsSink* class to fetch and push metrics to Ambari Metric Collector in multi-threaded manner. When all the AMS live nodes are down([http://localhost:6188/ws/v1/timeline/metrics/livenodes]), the method [getCurrentCollectorHost|https://github.com/apache/ambari-metrics/blob/c7dcf2b25241e2cfe6931d6261a43be97e0deaba/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/AbstractTimelineMetricsSink.java#L273] throws _NegativeArraySizeException_. {code:java} java.lang.NegativeArraySizeException: null at java.util.AbstractCollection.toArray(AbstractCollection.java:136) at java.util.ArrayList.(ArrayList.java:178) at org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSink$1.get(AbstractTimelineMetricsSink.java:460) at org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSink$1.get(AbstractTimelineMetricsSink.java:450) at org.apache.hadoop.metrics2.sink.relocated.google.common.base.Suppliers$ExpiringMemoizingSupplier.get(Suppliers.java:192) at org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSink.getCurrentCollectorHost(AbstractTimelineMetricsSink.java:264) at com.hortonworks.smm.kafka.services.metric.ams.AMSMetricsFetcher.getCollectorAPIUri(AMSMetricsFetcher.java:231) {code} was: SMM is using *AbstractTimelineMetricsSink* class to fetch and push metrics to Ambari Metric Collector in multi-threaded manner. When all the AMS live nodes are down([http://localhost:6188/ws/v1/timeline/metrics/livenodes]), the method [getCurrentCollectorHost|https://github.com/apache/ambari-metrics/blob/c7dcf2b25241e2cfe6931d6261a43be97e0deaba/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/AbstractTimelineMetricsSink.java#L273] throws _NegativeArraySizeException_. 
{code:java} ava.lang.NegativeArraySizeException: null at java.util.AbstractCollection.toArray(AbstractCollection.java:136) at java.util.ArrayList.(ArrayList.java:178) at org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSink$1.get(AbstractTimelineMetricsSink.java:460) at org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSink$1.get(AbstractTimelineMetricsSink.java:450) at org.apache.hadoop.metrics2.sink.relocated.google.common.base.Suppliers$ExpiringMemoizingSupplier.get(Suppliers.java:192) at org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSink.getCurrentCollectorHost(AbstractTimelineMetricsSink.java:264) at com.hortonworks.smm.kafka.services.metric.ams.AMSMetricsFetcher.getCollectorAPIUri(AMSMetricsFetcher.java:231) {code} > NegativeArraySizeException thrown when invoking CurrentCollectorHost > > > Key: AMBARI-25549 > URL: https://issues.apache.org/jira/browse/AMBARI-25549 > Project: Ambari > Issue Type: Bug > Components: ambari-metrics >Affects Versions: 2.6.1, 2.7.3, 2.7.4, 2.7.5 >Reporter: Tamas Payer >Assignee: Tamas Payer >Priority: Major > Fix For: 2.7.6 > > Time Spent: 0.5h > Remaining Estimate: 0h > > SMM is using *AbstractTimelineMetricsSink* class to fetch and push metrics to > Ambari Metric Collector in multi-threaded manner. > When all the AMS live nodes are > down([http://localhost:6188/ws/v1/timeline/metrics/livenodes]), the method > [getCurrentCollectorHost|https://github.com/apache/ambari-metrics/blob/c7dcf2b25241e2cfe6931d6261a43be97e0deaba/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/AbstractTimelineMetricsSink.java#L273] > throws _NegativeArraySizeException_. 
> {code:java} > java.lang.NegativeArraySizeException: null at > java.util.AbstractCollection.toArray(AbstractCollection.java:136) > at java.util.ArrayList.(ArrayList.java:178) > at > org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSink$1.get(AbstractTimelineMetricsSink.java:460) > at > org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSink$1.get(AbstractTimelineMetricsSink.java:450) > at > org.apache.hadoop.metrics2.sink.relocated.google.common.base.Suppliers$ExpiringMemoizingSupplier.get(Suppliers.java:192) > at > org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSink.getCurrentCollectorHost(AbstractTimelineMetricsSink.java:264) > at > com.hortonworks.smm.kafka.services.metric.ams.AMSMetricsFetcher.getCollectorAPIUri(AMSMetricsFetcher.java:231) > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
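The stack trace above runs through AbstractCollection.toArray into the ArrayList copy constructor: copying a collection that another thread shrinks mid-copy can surface as NegativeArraySizeException. A minimal sketch of one common remedy, holding the live-host list in a CopyOnWriteArrayList so readers always see a consistent snapshot, is below. The names are invented and this is not the actual sink fix, just the general pattern.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Illustrative sketch: new ArrayList<>(collection) reads size() and then
// toArray() non-atomically, so a concurrently shrinking collection can break
// the copy. A copy-on-write list hands out stable snapshots instead.
final class LiveCollectorsSketch {
    private final List<String> liveHosts = new CopyOnWriteArrayList<>();

    void setHosts(List<String> hosts) {
        liveHosts.clear();
        liveHosts.addAll(hosts);
    }

    // Safe to call from many threads while setHosts() runs elsewhere.
    List<String> snapshot() {
        return new ArrayList<>(liveHosts);
    }
}
```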
[jira] [Created] (AMBARI-25636) FindBugs: Comparison of String parameter using == or !=
Tamas Payer created AMBARI-25636: Summary: FindBugs: Comparison of String parameter using == or != Key: AMBARI-25636 URL: https://issues.apache.org/jira/browse/AMBARI-25636 Project: Ambari Issue Type: Bug Components: ambari-server Affects Versions: 2.7.3, 2.7.4, 2.7.5 Reporter: Tamas Payer Assignee: Tamas Payer Fix For: 2.7.6 Fix FindBugs issue: Comparison of String parameter using == or != in org.apache.ambari.server.state.host.HostImpl.setStatus(String) -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (AMBARI-25635) Clear Cluster and METRIC_AGGREGATORS MBeans upon shutdown
[ https://issues.apache.org/jira/browse/AMBARI-25635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17302460#comment-17302460 ] Tamas Payer commented on AMBARI-25635: -- {code:java} 2021-03-09 22:30:55,712 INFO org.apache.helix.monitoring.mbeans.ClusterStatusMonitor: Unregistering ClusterStatus: cluster=ambari-metrics-cluster,instanceName=ctr-e153-1613480641811-87570-01-57.hwx.site_12001,resourceName=METRIC_AGGREGATORS 2021-03-09 22:30:55,712 INFO org.apache.helix.monitoring.mbeans.ClusterStatusMonitor: Unregistering ClusterStatus: cluster=ambari-metrics-cluster,instanceName=ctr-e153-1613480641811-87570-01-57.hwx.site_12001,resourceName=METRIC_AGGREGATORS 2021-03-09 22:30:55,712 WARN org.apache.helix.monitoring.mbeans.ClusterStatusMonitor: Could not unregister MBean: ClusterStatus: cluster=ambari-metrics-cluster,instanceName=ctr-e153-1613480641811-87570-01-57.hwx.site_12001,resourceName=METRIC_AGGREGATORS javax.management.InstanceNotFoundException: ClusterStatus: cluster=ambari-metrics-cluster,instanceName=ctr-e153-1613480641811-87570-01-57.hwx.site_12001,resourceName=METRIC_AGGREGATORS at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:427) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:415) at com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:546) at org.apache.helix.monitoring.mbeans.ClusterStatusMonitor.unregister(ClusterStatusMonitor.java:182) at org.apache.helix.monitoring.mbeans.ClusterStatusMonitor.unregisterPerInstanceResources(ClusterStatusMonitor.java:547) at org.apache.helix.monitoring.mbeans.ClusterStatusMonitor.setPerInstanceResourceStatus(ClusterStatusMonitor.java:288) at 
org.apache.helix.controller.stages.BestPossibleStateCalcStage.process(BestPossibleStateCalcStage.java:74) at org.apache.helix.controller.pipeline.Pipeline.handle(Pipeline.java:48) at org.apache.helix.controller.GenericHelixController.handleEvent(GenericHelixController.java:295) at org.apache.helix.controller.GenericHelixController$ClusterEventProcessor.run(GenericHelixController.java:595) {code} > Clear Cluster and METRIC_AGGREGATORS MBeans upon shutdown > - > > Key: AMBARI-25635 > URL: https://issues.apache.org/jira/browse/AMBARI-25635 > Project: Ambari > Issue Type: Bug > Components: ambari-metrics >Affects Versions: 2.7.4, 2.7.5 >Reporter: Tamas Payer >Priority: Major > Labels: metric-collector > Fix For: 2.7.6 > > > Following warnings appear in metrics-collector.log upon startup. > {code:java} > javax.management.InstanceAlreadyExistsException: ClusterStatus: > cluster=ambari-metrics-cluster,instanceName=ctr-e153-1613480641811-87570-01-57.hwx.site_12001 > at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437) > at > com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898) > at > com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966) > at > com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900) > at > com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324) > at > com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) > at > org.apache.helix.monitoring.mbeans.ClusterStatusMonitor.register(ClusterStatusMonitor.java:172) > at > org.apache.helix.monitoring.mbeans.ClusterStatusMonitor.registerInstances(ClusterStatusMonitor.java:498) > at > org.apache.helix.monitoring.mbeans.ClusterStatusMonitor.setClusterInstanceStatus(ClusterStatusMonitor.java:220) > at > 
org.apache.helix.controller.stages.ReadClusterDataStage.process(ReadClusterDataStage.java:91) > at org.apache.helix.controller.pipeline.Pipeline.handle(Pipeline.java:48) > at > org.apache.helix.controller.GenericHelixController.handleEvent(GenericHelixController.java:295) > at > org.apache.helix.controller.GenericHelixController$ClusterEventProcessor.run(GenericHelixController.java:595) > {code} > and > {code:java} > 2021-03-09 23:15:53,450 INFO > org.apache.helix.monitoring.mbeans.ClusterStatusMonitor: Register MBean: > ClusterStatus: > cluster=ambari-metrics-cluster,instanceName=ctr-e153-1613480641811-87570-01-57.hwx.site_12001,resourceName=METRIC_AGGREGATORS > 2021-03-09 23:15:53,450 INFO > org.apache.helix.monitoring.mbeans.ClusterStatusMonitor: Register MBean: > ClusterStatus: > cluster=ambari-metrics-cluster,instan
[jira] [Updated] (AMBARI-25635) Clear Cluster and METRIC_AGGREGATORS MBeans upon shutdown
[ https://issues.apache.org/jira/browse/AMBARI-25635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer updated AMBARI-25635: - Description: Following warnings appear in metrics-collector.log upon startup. {code:java} javax.management.InstanceAlreadyExistsException: ClusterStatus: cluster=ambari-metrics-cluster,instanceName=ctr-e153-1613480641811-87570-01-57.hwx.site_12001 at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324) at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) at org.apache.helix.monitoring.mbeans.ClusterStatusMonitor.register(ClusterStatusMonitor.java:172) at org.apache.helix.monitoring.mbeans.ClusterStatusMonitor.registerInstances(ClusterStatusMonitor.java:498) at org.apache.helix.monitoring.mbeans.ClusterStatusMonitor.setClusterInstanceStatus(ClusterStatusMonitor.java:220) at org.apache.helix.controller.stages.ReadClusterDataStage.process(ReadClusterDataStage.java:91) at org.apache.helix.controller.pipeline.Pipeline.handle(Pipeline.java:48) at org.apache.helix.controller.GenericHelixController.handleEvent(GenericHelixController.java:295) at org.apache.helix.controller.GenericHelixController$ClusterEventProcessor.run(GenericHelixController.java:595) {code} and {code:java} 2021-03-09 23:15:53,450 INFO org.apache.helix.monitoring.mbeans.ClusterStatusMonitor: Register MBean: ClusterStatus: cluster=ambari-metrics-cluster,instanceName=ctr-e153-1613480641811-87570-01-57.hwx.site_12001,resourceName=METRIC_AGGREGATORS 2021-03-09 
23:15:53,450 INFO org.apache.helix.monitoring.mbeans.ClusterStatusMonitor: Register MBean: ClusterStatus: cluster=ambari-metrics-cluster,instanceName=ctr-e153-1613480641811-87570-01-57.hwx.site_12001,resourceName=METRIC_AGGREGATORS 2021-03-09 23:15:53,450 WARN org.apache.helix.monitoring.mbeans.ClusterStatusMonitor: Could not register MBean: ClusterStatus: cluster=ambari-metrics-cluster,instanceName=ctr-e153-1613480641811-87570-01-57.hwx.site_12001,resourceName=METRIC_AGGREGATORS javax.management.InstanceAlreadyExistsException: ClusterStatus: cluster=ambari-metrics-cluster,instanceName=ctr-e153-1613480641811-87570-01-57.hwx.site_12001,resourceName=METRIC_AGGREGATORS at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324) at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) at org.apache.helix.monitoring.mbeans.ClusterStatusMonitor.register(ClusterStatusMonitor.java:172) at org.apache.helix.monitoring.mbeans.ClusterStatusMonitor.registerPerInstanceResources(ClusterStatusMonitor.java:537) at org.apache.helix.monitoring.mbeans.ClusterStatusMonitor.setPerInstanceResourceStatus(ClusterStatusMonitor.java:307) at org.apache.helix.controller.stages.BestPossibleStateCalcStage.process(BestPossibleStateCalcStage.java:74) at org.apache.helix.controller.pipeline.Pipeline.handle(Pipeline.java:48) at org.apache.helix.controller.GenericHelixController.handleEvent(GenericHelixController.java:295) at 
org.apache.helix.controller.GenericHelixController$ClusterEventProcessor.run(GenericHelixController.java:595) {code} was: j {code:java} avax.management.InstanceAlreadyExistsException: ClusterStatus: cluster=ambari-metrics-cluster,instanceName=ctr-e153-1613480641811-87570-01-57.hwx.site_12001 at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324) at com.sun.jmx.mbeanserver.JmxM
[jira] [Created] (AMBARI-25635) Clear Cluster and METRIC_AGGREGATORS MBeans upon shutdown
Tamas Payer created AMBARI-25635: Summary: Clear Cluster and METRIC_AGGREGATORS MBeans upon shutdown Key: AMBARI-25635 URL: https://issues.apache.org/jira/browse/AMBARI-25635 Project: Ambari Issue Type: Bug Components: ambari-metrics Affects Versions: 2.7.4, 2.7.5 Reporter: Tamas Payer Fix For: 2.7.6 {code:java} javax.management.InstanceAlreadyExistsException: ClusterStatus: cluster=ambari-metrics-cluster,instanceName=ctr-e153-1613480641811-87570-01-57.hwx.site_12001 at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324) at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) at org.apache.helix.monitoring.mbeans.ClusterStatusMonitor.register(ClusterStatusMonitor.java:172) at org.apache.helix.monitoring.mbeans.ClusterStatusMonitor.registerInstances(ClusterStatusMonitor.java:498) at org.apache.helix.monitoring.mbeans.ClusterStatusMonitor.setClusterInstanceStatus(ClusterStatusMonitor.java:220) at org.apache.helix.controller.stages.ReadClusterDataStage.process(ReadClusterDataStage.java:91) at org.apache.helix.controller.pipeline.Pipeline.handle(Pipeline.java:48) at org.apache.helix.controller.GenericHelixController.handleEvent(GenericHelixController.java:295) at org.apache.helix.controller.GenericHelixController$ClusterEventProcessor.run(GenericHelixController.java:595) {code} and {code:java} 2021-03-09 23:15:53,450 INFO org.apache.helix.monitoring.mbeans.ClusterStatusMonitor: Register MBean: ClusterStatus: 
cluster=ambari-metrics-cluster,instanceName=ctr-e153-1613480641811-87570-01-57.hwx.site_12001,resourceName=METRIC_AGGREGATORS 2021-03-09 23:15:53,450 INFO org.apache.helix.monitoring.mbeans.ClusterStatusMonitor: Register MBean: ClusterStatus: cluster=ambari-metrics-cluster,instanceName=ctr-e153-1613480641811-87570-01-57.hwx.site_12001,resourceName=METRIC_AGGREGATORS 2021-03-09 23:15:53,450 WARN org.apache.helix.monitoring.mbeans.ClusterStatusMonitor: Could not register MBean: ClusterStatus: cluster=ambari-metrics-cluster,instanceName=ctr-e153-1613480641811-87570-01-57.hwx.site_12001,resourceName=METRIC_AGGREGATORS javax.management.InstanceAlreadyExistsException: ClusterStatus: cluster=ambari-metrics-cluster,instanceName=ctr-e153-1613480641811-87570-01-57.hwx.site_12001,resourceName=METRIC_AGGREGATORS at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324) at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) at org.apache.helix.monitoring.mbeans.ClusterStatusMonitor.register(ClusterStatusMonitor.java:172) at org.apache.helix.monitoring.mbeans.ClusterStatusMonitor.registerPerInstanceResources(ClusterStatusMonitor.java:537) at org.apache.helix.monitoring.mbeans.ClusterStatusMonitor.setPerInstanceResourceStatus(ClusterStatusMonitor.java:307) at org.apache.helix.controller.stages.BestPossibleStateCalcStage.process(BestPossibleStateCalcStage.java:74) at org.apache.helix.controller.pipeline.Pipeline.handle(Pipeline.java:48) at 
org.apache.helix.controller.GenericHelixController.handleEvent(GenericHelixController.java:295) at org.apache.helix.controller.GenericHelixController$ClusterEventProcessor.run(GenericHelixController.java:595) {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
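The InstanceAlreadyExistsException warnings above follow the generic JMX lifecycle problem this issue asks to fix: an MBean left registered by a previous run collides with the registration attempt of the new run. A minimal sketch of the usual hygiene, clear a stale MBean before (re)registering and unregister on shutdown, is below. The class and method names are invented; the real Helix ClusterStatusMonitor builds its own names and registration calls.

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Illustrative sketch of the generic JMX lifecycle pattern behind this issue:
// drop any leftover MBean under the same name before registering, and
// unregister on shutdown, so a restart cannot hit
// InstanceAlreadyExistsException. Names are invented for the example.
final class MBeanLifecycleSketch {
    static void registerFresh(MBeanServer server, Object mbean, ObjectName name) throws Exception {
        if (server.isRegistered(name)) {
            server.unregisterMBean(name); // clear the stale instance first
        }
        server.registerMBean(mbean, name);
    }

    static void unregisterQuietly(MBeanServer server, ObjectName name) throws Exception {
        if (server.isRegistered(name)) {
            server.unregisterMBean(name); // idempotent shutdown hook body
        }
    }
}
```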
[jira] [Updated] (AMBARI-25632) Verify custom queries with "IN" clause for ORA-01795 issue
[ https://issues.apache.org/jira/browse/AMBARI-25632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer updated AMBARI-25632: - Description: Review all custom queries with an "IN" clause to check whether they are vulnerable to the ORA-01795 issue. Estimate the possible maximum size of the list passed into the query and, where needed, apply a wrapper to process the query in batches. (was: Review all custom queries with "IN clause" if they are vulnerable by ORA-01795 issue. Approximate the possible max size of passed list in-to the query and in case of need apply wrapper to batch process the query in style/) > Verify custom queries with "IN" clause for ORA-01795 issue > -- > > Key: AMBARI-25632 > URL: https://issues.apache.org/jira/browse/AMBARI-25632 > Project: Ambari > Issue Type: Task > Components: ambari-server >Affects Versions: 2.6.2, 2.7.3, 2.7.4, 2.7.5 >Reporter: Tamas Payer >Assignee: Tamas Payer >Priority: Major > Labels: oracle, server, sql > Fix For: 2.7.6 > > Time Spent: 20m > Remaining Estimate: 0h > > Review all custom queries with an "IN" clause to check whether they are > vulnerable to the ORA-01795 issue. Estimate the possible maximum size of the > list passed into the query and, where needed, apply a wrapper to process the > query in batches. -- This message was sent by Atlassian Jira (v8.3.4#803005)
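The batching wrapper the description asks for can be sketched generically: Oracle rejects IN lists longer than 1000 entries with ORA-01795, so the parameter list is split into chunks and the query runs once per chunk, merging the results. The functional interface below stands in for the real DAO call; all names here are invented for illustration, not the actual Ambari fix.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of an IN-clause batching wrapper: run the query once
// per chunk of at most 1000 parameters (Oracle's ORA-01795 limit) and merge
// the per-chunk results. ChunkQuery stands in for the real DAO call.
final class InClauseBatchSketch {
    static final int ORACLE_IN_LIMIT = 1000;

    interface ChunkQuery<P, R> {
        List<R> run(List<P> chunk);
    }

    static <P, R> List<R> runBatched(List<P> params, ChunkQuery<P, R> query) {
        List<R> results = new ArrayList<>();
        for (int i = 0; i < params.size(); i += ORACLE_IN_LIMIT) {
            int end = Math.min(i + ORACLE_IN_LIMIT, params.size());
            results.addAll(query.run(params.subList(i, end)));
        }
        return results;
    }
}
```

For example, the 1083 bound parameters shown in the AMBARI-25629 stack trace below would be split into one chunk of 1000 and one of 83.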
[jira] [Created] (AMBARI-25632) Verify custom queries with "IN" clause for ORA-01795 issue
Tamas Payer created AMBARI-25632: Summary: Verify custom queries with "IN" clause for ORA-01795 issue Key: AMBARI-25632 URL: https://issues.apache.org/jira/browse/AMBARI-25632 Project: Ambari Issue Type: Task Components: ambari-server Affects Versions: 2.6.2, 2.7.3, 2.7.4, 2.7.5 Reporter: Tamas Payer Assignee: Tamas Payer Fix For: 2.7.6 Review all custom queries with an "IN" clause to check whether they are vulnerable to the ORA-01795 issue. Estimate the possible maximum size of the list passed into the query and, where needed, apply a wrapper to process the query in batches. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (AMBARI-25599) Consider to eliminate HDP public binary references
[ https://issues.apache.org/jira/browse/AMBARI-25599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer updated AMBARI-25599: - Resolution: Fixed Status: Resolved (was: Patch Available) > Consider to eliminate HDP public binary references > -- > > Key: AMBARI-25599 > URL: https://issues.apache.org/jira/browse/AMBARI-25599 > Project: Ambari > Issue Type: Task > Components: ambari-server >Reporter: Éberhardt Péter >Assignee: Szilard Antal >Priority: Major > Fix For: 2.7.6 > > Time Spent: 2.5h > Remaining Estimate: 0h > > Public HDP binaries will no longer available: > [https://my.cloudera.com/knowledge/Cloudera-Customer-Advisory-Paywall-Update-External?id=306085] > Please analyse and eliminate(replace) the necessary references from the > codebase where rpms and tarballs of HDP are accessible. > Additionally non managed HDP tarballs should be replaced with open source > ones. For example: > [https://github.com/apache/ambari/blob/branch-2.7/ambari-metrics/pom.xml#L43] > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (AMBARI-25628) Hive cli not working after upgrading jdk version from jdk-8u112 to jdk-8u281
[ https://issues.apache.org/jira/browse/AMBARI-25628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer resolved AMBARI-25628. -- Resolution: Abandoned This is a Hive issue rather than Ambari. > Hive cli not working after upgrading jdk version from jdk-8u112 to jdk-8u281 > > > Key: AMBARI-25628 > URL: https://issues.apache.org/jira/browse/AMBARI-25628 > Project: Ambari > Issue Type: Bug > Components: HiveServer2, Metastore, and Client Heap Sizes to Smart > Configs, ambari-server >Affects Versions: 2.0.0 > Environment: For all environments >Reporter: Korayya Suresh Kumar >Priority: Blocker > Labels: ambari, hive > > Hi Team, > After upgrading jdk version from jdk-8u112 to jdk-8u281 hive cli is not > working, and gives below error while login. > > {code:java} > WARNING: Use "yarn jar" to launch YARN applications. > 21/03/09 11:00:04 WARN conf.HiveConf: HiveConf of name > hive.server2.enable.impersonation does not existLogging initialized using > configuration in file:/etc/hive/2.4.3.0-227/0/hive-log4j.properties > Exception in thread "main" java.lang.RuntimeException: java.io.IOException: > Previous writer likely failed to write > hdfs://ppcontent-nn1.pp-content.dataplatform.com:8020/tmp/hive/hive/_tez_session_dir/96b21825-63f4-4316-9c43-20ebe641d9c9/hive-hcatalog-core.jar. > Failing because I am unlikely to write too. 
> at > org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:544) > at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:680) > at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:624) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at org.apache.hadoop.util.RunJar.run(RunJar.java:221) > at org.apache.hadoop.util.RunJar.main(RunJar.java:136) > Caused by: java.io.IOException: Previous writer likely failed to write > hdfs://ppcontent-nn1.pp-content.dataplatform.com:8020/tmp/hive/hive/_tez_session_dir/96b21825-63f4-4316-9c43-20ebe641d9c9/hive-hcatalog-core.jar. > Failing because I am unlikely to write too. > at > org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeResource(DagUtils.java:982) > at > org.apache.hadoop.hive.ql.exec.tez.DagUtils.addTempResources(DagUtils.java:862) > at > org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeTempFilesFromConf(DagUtils.java:805) > at > org.apache.hadoop.hive.ql.exec.tez.TezSessionState.refreshLocalResourcesFromConf(TezSessionState.java:233) > at > org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:158) > at > org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:117) > at > org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:541) > ... 8 more > {code} > please provide the JDK versions which supports Ambari 2.0 > Please suggest how to fix the issue. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (AMBARI-25496) Upgrade pre-check fails when attempting to upgrade from HDP 2.6 to 7.1 on ubuntu18
[ https://issues.apache.org/jira/browse/AMBARI-25496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer resolved AMBARI-25496. -- Fix Version/s: (was: 2.7.6) Resolution: Done > Upgrade pre-check fails when attempting to upgrade from HDP 2.6 to 7.1 on > ubuntu18 > --- > > Key: AMBARI-25496 > URL: https://issues.apache.org/jira/browse/AMBARI-25496 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Affects Versions: 2.6.2 > Environment: ubuntu18 >Reporter: Tamas Payer >Assignee: Tamas Payer >Priority: Major > > When trying to Express Upgrade from HDP 2.6.5 to the latest version on > ubuntu18 the upgrade pre-checks failed. > When we hit the API below: > [http://example.com:8080/api/v1/clusters/cl1/rolling_upgrades_check?fields=*&UpgradeChecks/repository_version_id=51&UpgradeChecks/upgrade_type=NON_ROLLING|http://172.27.30.73:8080/api/v1/clusters/cl1/rolling_upgrades_check?fields=*&UpgradeChecks/repository_version_id=51&UpgradeChecks/upgrade_type=NON_ROLLING] > {code:java} > { > "href" : > "http://172.27.30.73:8080/api/v1/clusters/cl1/rolling_upgrades_check/MISSING_OS_IN_REPO_VERSION", > "UpgradeChecks" : { > "check" : "Missing OS in repository version.", > "check_type" : "CLUSTER", > "cluster_name" : "cl1", > "failed_detail" : [ ], > "failed_on" : [ > "ubuntu18" > ], > "id" : "MISSING_OS_IN_REPO_VERSION", > "reason" : "The source version must have an entry for each OS type in > the cluster", > "repository_version_id" : 51, > "status" : "FAIL", > "upgrade_type" : "NON_ROLLING" > } > } > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (AMBARI-25629) For kerberos service check IN clause must be split into batches
[ https://issues.apache.org/jira/browse/AMBARI-25629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer resolved AMBARI-25629. -- Resolution: Fixed > For kerberos service check IN clause must be split into batches > --- > > Key: AMBARI-25629 > URL: https://issues.apache.org/jira/browse/AMBARI-25629 > Project: Ambari > Issue Type: Task > Components: ambari-server >Affects Versions: 2.7.4, 2.7.5, 2.7.6 >Reporter: Tamas Payer >Assignee: Tamas Payer >Priority: Major > Labels: oracle, server, sql > Fix For: 2.7.6 > > Time Spent: 20m > Remaining Estimate: 0h > > javax.persistence.PersistenceException: Exception [EclipseLink-4002] (Eclipse > Persistence Services - 2.6.2.v20151217-774c696): > org.eclipse.persistence.exceptions.DatabaseException > Internal Exception: java.sql.SQLSyntaxErrorException: ORA-01795: maximum > number of expressions in a list is 1000. Error Code: 1795 > Call: SELECT kkp_id, host_id, is_distributed, keytab_path, principal_name > FROM kerberos_keytab_principal WHERE (principal_name IN (?, ?, ?, ... > [run of 1083 placeholder parameters elided] ..., ?, ?)) > bind => [1083 parameters bound] > Query: ReadAllQuery(referenceClass=KerberosKeytabPrincipalEntity sql="SELECT > kkp_id, host_id, is_distributed, keytab_path, principal_name FROM > kerberos_keytab_principal WHERE (principal_name IN (?, ?, ?, ... [log truncated]
[jira] [Created] (AMBARI-25629) For kerberos service check IN clause must be split into batches
Tamas Payer created AMBARI-25629: Summary: For kerberos service check IN clause must be split into batches Key: AMBARI-25629 URL: https://issues.apache.org/jira/browse/AMBARI-25629 Project: Ambari Issue Type: Task Components: ambari-server Affects Versions: 2.7.4, 2.7.5, 2.7.6 Reporter: Tamas Payer Assignee: Tamas Payer Fix For: 2.7.6 javax.persistence.PersistenceException: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.6.2.v20151217-774c696): org.eclipse.persistence.exceptions.DatabaseException Internal Exception: java.sql.SQLSyntaxErrorException: ORA-01795: maximum number of expressions in a list is 1000. Error Code: 1795 Call: SELECT kkp_id, host_id, is_distributed, keytab_path, principal_name FROM kerberos_keytab_principal WHERE (principal_name IN (?, ?, ?, ... [run of 1083 placeholder parameters elided] ..., ?, ?)) bind => [1083 parameters bound] Query: ReadAllQuery(referenceClass=KerberosKeytabPrincipalEntity sql="SELECT kkp_id, host_id, is_distributed, keytab_path, principal_name FROM kerberos_keytab_principal WHERE (principal_name IN (?, ?, ?, ... [log truncated]
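The fix the ticket title describes is splitting the IN list into batches that stay under Oracle's 1000-expression limit. The actual change lives in Ambari's Java DAO layer; the sketch below only illustrates the idea in Python, and the `batched`/`find_by_principals`/`run_query` names are hypothetical:

```python
ORACLE_IN_LIST_LIMIT = 1000  # ORA-01795: maximum number of expressions in a list

def batched(values, batch_size=999):
    """Split a parameter list into chunks below the Oracle IN-list limit."""
    for start in range(0, len(values), batch_size):
        yield values[start:start + batch_size]

def find_by_principals(run_query, principal_names):
    """Run one IN query per batch and merge the results (illustrative only)."""
    rows = []
    for batch in batched(principal_names):
        placeholders = ", ".join("?" for _ in batch)
        sql = ("SELECT kkp_id, host_id, is_distributed, keytab_path, principal_name "
               "FROM kerberos_keytab_principal "
               f"WHERE principal_name IN ({placeholders})")
        rows.extend(run_query(sql, batch))
    return rows

# The 1083 principals from the log above split into two batches: 999 and 84.
sizes = [len(b) for b in batched(list(range(1083)))]
print(sizes)  # [999, 84]
```

Batching a membership query this way is safe because `IN` results are independent per value, so the union of the per-batch result sets equals the single-query result set.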
[jira] [Updated] (AMBARI-25627) ORA-01795 error when querying hostcomponentdesiredstate table on large cluster
[ https://issues.apache.org/jira/browse/AMBARI-25627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer updated AMBARI-25627: - Status: Patch Available (was: In Progress) > ORA-01795 error when querying hostcomponentdesiredstate table on large cluster > -- > > Key: AMBARI-25627 > URL: https://issues.apache.org/jira/browse/AMBARI-25627 > Project: Ambari > Issue Type: Task > Components: ambari-server >Affects Versions: 2.7.4, 2.7.5, 2.7.6 >Reporter: Tamas Payer >Assignee: Tamas Payer >Priority: Critical > Labels: ambari-server, oracle > Fix For: 2.7.6 > > Time Spent: 10m > Remaining Estimate: 0h > > Ambari server is not able to login because the server is querying Oracle DB > which has more than 1000 entries. > {noformat} > bind => [1173 parameters bound] > Query: > ReadAllQuery(name="HostComponentDesiredStateEntity.findByHostsAndCluster" > referenceClass=HostComponentDesiredStateEntity sql="SELECT id, admin_state, > blueprint_provisioning_state, cluster_id, component_name, desired_state, host_id, > maintenance_state, restart_required, service_name FROM > hostcomponentdesiredstate WHERE ((host_id IN ?) AND (cluster_id = ?))") > at > org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:340) > at > ... 27 more > Caused by: java.sql.SQLSyntaxErrorException: ORA-01795: maximum number of > expressions in a list is 1000{noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (AMBARI-22827) db-purge-history operation fails with large DB size in postgres.
[ https://issues.apache.org/jira/browse/AMBARI-22827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17298035#comment-17298035 ] Tamas Payer commented on AMBARI-22827: -- An addendum to this fix, "AMBARI-22827. DB Cleanup scripts are using IN clauses", was merged in PR#3291. > db-purge-history operation fails with large DB size in postgres. > > > Key: AMBARI-22827 > URL: https://issues.apache.org/jira/browse/AMBARI-22827 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Affects Versions: trunk, 2.5.2 >Reporter: Jay SenSharma >Assignee: Sandor Molnar >Priority: Major > Labels: pull-request-available > Fix For: 2.7.0 > > Time Spent: 1h 40m > Remaining Estimate: 0h > > When the ambari DB size is too large (around 1+ GB) then the db-purge-history > may fail with the postgres error "Tried to send an out-of-range integer as a > 2-byte value". > > The following error trace is from Ambari 2.5.2; however, the same might occur in > higher versions as well. > {code} > Internal Exception: org.postgresql.util.PSQLException: An I/O error occurred > while sending to the backend. 
> Error Code: 0 > Call: SELECT DISTINCT host_task_id FROM topology_logical_task WHERE > (physical_task_id IN > (?,?,?, ... [long run of placeholder parameters elided] ...,
> ?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)) > Query: > ReportQuery(name="TopologyLogicalTaskEntity.findHostTaskIdsByPhysicalTaskIds" > referenceClass=TopologyLogicalTaskEntity sql="SELECT DISTINCT host_task_id > FROM topology_logical_task WHERE (physical_task_id IN ?)") > at > org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:340) > at > org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.processExceptionForCommError(DatabaseAccessor.java:1620) > at > org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.basicExecuteCall(DatabaseAccessor.java:676) > at > org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeCall(DatabaseAccessor.java:560) > at > org.eclipse.persistence.internal.sessions.AbstractSession.basicExecuteCall(AbstractSession.java:2055) > at > org.eclipse.persistence.sessions.server.ServerSession.executeCall(ServerSession.java:570) > at > org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:242) > at > org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:228) > at > org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeSelectCall(DatasourceCallQueryMechanism.java:299) > at > org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.selectAllRows(DatasourceCallQueryMechanism.java:694) > at > org.eclipse.persistence.internal.queries.ExpressionQueryMechanism.selectAllRowsFromTable(ExpressionQueryMechanism.java:2740) > at > org.eclipse.persistence.internal.queries.ExpressionQueryMechanism.selectAllReportQueryRows(ExpressionQueryMechanism.java:2677) > at > org.eclipse.persistence.queries.ReportQuery.executeDatabaseQuery(ReportQuery.java:852) > at > org.eclipse.persistence.queries.DatabaseQuery.execute(DatabaseQuery.java:904) > at > 
org.eclipse.persistence.queries.ObjectLevelReadQuery.execute(ObjectLevelReadQuery.java:1134) > at > org.eclipse.persistence.queries.ReadAllQuery.execute(ReadAllQuery.java:460) > at > org.eclipse.persistence.queries.ObjectLevelReadQuery.executeInUnitOfWork(ObjectLevelReadQuery.java:1222) > at > org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.internalExecuteQuery(UnitOfWorkImpl.java:2896) > at > org.eclipse.persistence.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:1857) > at > org.eclipse.persistence.internal.sessions.AbstractSession.retryQuery(AbstractSession.java:1927
[jira] [Updated] (AMBARI-25627) ORA-01795 error when querying hostcomponentdesiredstate table on large cluster
[ https://issues.apache.org/jira/browse/AMBARI-25627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer updated AMBARI-25627: - Description: Ambari server is not able to login because the server is querying Oracle DB which has more than 1000 entries. {noformat} bind => [1173 parameters bound] Query: ReadAllQuery(name="HostComponentDesiredStateEntity.findByHostsAndCluster" referenceClass=HostComponentDesiredStateEntity sql="SELECT id, admin_state, blueprint_provisioning_state, cluster_id, component_name, desired_state, host_id, maintenance_state, restart_required, service_name FROM hostcomponentdesiredstate WHERE ((host_id IN ?) AND (cluster_id = ?))") at org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:340) at ... 27 more Caused by: java.sql.SQLSyntaxErrorException: ORA-01795: maximum number of expressions in a list is 1000{noformat} was: Ambari server we are not able to login because the server is querying Oracle DB which has more than 1000 entries. {noformat} bind => [1173 parameters bound] Query: ReadAllQuery(name="HostComponentDesiredStateEntity.findByHostsAndCluster" referenceClass=HostComponentDesiredStateEntity sql="SELECT id, admin_state, blueprint_provisioning_state, cluster_id, component_name, desired_state, host_id, maintenance_state, restart_required, service_name FROM hostcomponentdesiredstate WHERE ((host_id IN ?) AND (cluster_id = ?))") at org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:340) at ... 27 more Caused by: java.sql.SQLSyntaxErrorException: ORA-01795: maximum number of expressions in a list is 1000{noformat} > ORA-01795 error when querying hostcomponentdesiredstate table on large cluster > -- > > Key: AMBARI-25627 > URL: https://issues.apache.org/jira/browse/AMBARI-25627 > Project: Ambari > Issue Type: Task > Components: ambari-server >Affects Versions: 2.7.4, 2.7.5, 2.7.6 >Reporter: Tamas Payer >Assignee: Tamas Payer >Priority: Critical > Labels: ambari-server, oracle > Fix For: 2.7.6 > > > Ambari server is not able to login because the server is querying Oracle DB > which has more than 1000 entries. > {noformat} > bind => [1173 parameters bound] > Query: > ReadAllQuery(name="HostComponentDesiredStateEntity.findByHostsAndCluster" > referenceClass=HostComponentDesiredStateEntity sql="SELECT id, admin_state, > blueprint_provisioning_state, cluster_id, component_name, desired_state, host_id, > maintenance_state, restart_required, service_name FROM > hostcomponentdesiredstate WHERE ((host_id IN ?) AND (cluster_id = ?))") > at > org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:340) > at > ... 27 more > Caused by: java.sql.SQLSyntaxErrorException: ORA-01795: maximum number of > expressions in a list is 1000{noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (AMBARI-25627) ORA-01795 error when querying hostcomponentdesiredstate table on large cluster
[ https://issues.apache.org/jira/browse/AMBARI-25627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer updated AMBARI-25627: - Description: Ambari server we are not able to login because the server is querying Oracle DB which has more than 1000 entries. {noformat} bind => [1173 parameters bound] Query: ReadAllQuery(name="HostComponentDesiredStateEntity.findByHostsAndCluster" referenceClass=HostComponentDesiredStateEntity sql="SELECT id, admin_state, blueprint_provisioning_state, cluster_id, component_name, desired_state, host_id, maintenance_state, restart_required, service_name FROM hostcomponentdesiredstate WHERE ((host_id IN ?) AND (cluster_id = ?))") at org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:340) at ... 27 more Caused by: java.sql.SQLSyntaxErrorException: ORA-01795: maximum number of expressions in a list is 1000{noformat} was: Ambari server we are not able to login because the server is querying Oracle DB which has more than 1000 entries. bind => [1173 parameters bound] Query: ReadAllQuery(name="HostComponentDesiredStateEntity.findByHostsAndCluster" referenceClass=HostComponentDesiredStateEntity sql="SELECT id, admin_state, blueprint_provisioning_state, cluster_id, component_name, desired_state, host_id, maintenance_state, restart_required, service_name FROM hostcomponentdesiredstate WHERE ((host_id IN ?) AND (cluster_id = ?))") at org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:340) at ... 27 more Caused by: java.sql.SQLSyntaxErrorException: ORA-01795: maximum number of expressions in a list is 1000 > ORA-01795 error when querying hostcomponentdesiredstate table on large cluster > -- > > Key: AMBARI-25627 > URL: https://issues.apache.org/jira/browse/AMBARI-25627 > Project: Ambari > Issue Type: Task > Components: ambari-server >Affects Versions: 2.7.4, 2.7.5, 2.7.6 >Reporter: Tamas Payer >Assignee: Tamas Payer >Priority: Critical > Labels: ambari-server, oracle > Fix For: 2.7.6 > > > Ambari server we are not able to login because the server is querying Oracle > DB which has more than 1000 entries. > {noformat} > bind => [1173 parameters bound] > Query: > ReadAllQuery(name="HostComponentDesiredStateEntity.findByHostsAndCluster" > referenceClass=HostComponentDesiredStateEntity sql="SELECT id, admin_state, > blueprint_provisioning_state, cluster_id, component_name, desired_state, host_id, > maintenance_state, restart_required, service_name FROM > hostcomponentdesiredstate WHERE ((host_id IN ?) AND (cluster_id = ?))") > at > org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:340) > at > ... 27 more > Caused by: java.sql.SQLSyntaxErrorException: ORA-01795: maximum number of > expressions in a list is 1000{noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (AMBARI-25627) ORA-01795 error when querying hostcomponentdesiredstate table on large cluster
Tamas Payer created AMBARI-25627: Summary: ORA-01795 error when querying hostcomponentdesiredstate table on large cluster Key: AMBARI-25627 URL: https://issues.apache.org/jira/browse/AMBARI-25627 Project: Ambari Issue Type: Task Components: ambari-server Affects Versions: 2.7.4, 2.7.5, 2.7.6 Reporter: Tamas Payer Assignee: Tamas Payer Fix For: 2.7.6 Ambari Server is not able to log in because the server queries the Oracle DB with an IN list of more than 1000 entries. bind => [1173 parameters bound] Query: ReadAllQuery(name="HostComponentDesiredStateEntity.findByHostsAndCluster" referenceClass=HostComponentDesiredStateEntity sql="SELECT id, admin_state, blueprint_provisioning_state, cluster_id, component_name, desired_state, host_id, maintenance_state, restart_required, service_name FROM hostcomponentdesiredstate WHERE ((host_id IN ?) AND (cluster_id = ?))") at org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:340) at ... 27 more Caused by: java.sql.SQLSyntaxErrorException: ORA-01795: maximum number of expressions in a list is 1000 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (AMBARI-25587) Metrics cannot be stored and the exception message is null when metric value is NaN
[ https://issues.apache.org/jira/browse/AMBARI-25587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer updated AMBARI-25587: - Affects Version/s: 2.7.5 2.7.4 > Metrics cannot be stored and the exception message is null when metric value > is NaN > > > Key: AMBARI-25587 > URL: https://issues.apache.org/jira/browse/AMBARI-25587 > Project: Ambari > Issue Type: Bug > Components: ambari-metrics >Affects Versions: 2.7.4, 2.7.5 > Environment: Ambari:2.7.4 >Reporter: akiyamaneko >Priority: Major > Fix For: 2.7.6 > > Time Spent: 40m > Remaining Estimate: 0h > > The exception information frequently appeared in ambari-metrics-collector.log > as follows: > {code:java} > 2020-11-12 12:28:11,200 WARN > org.apache.ambari.metrics.core.timeline.PhoenixHBaseAccessor: Failed on > insert records to store : null > 2020-11-12 12:28:11,200 WARN > org.apache.ambari.metrics.core.timeline.PhoenixHBaseAccessor: Metric that > cannot be stored : > [default.General.hs2_avg_active_session_time,hiveserver2]{1605155168235=NaN, > 1605155198236=NaN, 1605155228235=NaN, 1605155258235=NaN} > {code} > The exception message of metrics written to HBase is directly displayed as > null. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (AMBARI-25587) Metrics cannot be stored and the exception message is null when metric value is NaN
[ https://issues.apache.org/jira/browse/AMBARI-25587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer updated AMBARI-25587: - Fix Version/s: 2.7.6 > Metrics cannot be stored and the exception message is null when metric value > is NaN > > > Key: AMBARI-25587 > URL: https://issues.apache.org/jira/browse/AMBARI-25587 > Project: Ambari > Issue Type: Bug > Components: ambari-metrics > Environment: Ambari:2.7.4 >Reporter: akiyamaneko >Priority: Major > Fix For: 2.7.6 > > Time Spent: 40m > Remaining Estimate: 0h > > The exception information frequently appeared in ambari-metrics-collector.log > as follows: > {code:java} > 2020-11-12 12:28:11,200 WARN > org.apache.ambari.metrics.core.timeline.PhoenixHBaseAccessor: Failed on > insert records to store : null > 2020-11-12 12:28:11,200 WARN > org.apache.ambari.metrics.core.timeline.PhoenixHBaseAccessor: Metric that > cannot be stored : > [default.General.hs2_avg_active_session_time,hiveserver2]{1605155168235=NaN, > 1605155198236=NaN, 1605155228235=NaN, 1605155258235=NaN} > {code} > The exception message of metrics written to HBase is directly displayed as > null. -- This message was sent by Atlassian Jira (v8.3.4#803005)
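A plausible guard for the NaN-valued metric above is to drop non-finite datapoints before building the insert. The real fix belongs in the Java `PhoenixHBaseAccessor`; this is only a Python sketch of the filtering step, with a hypothetical `finite_datapoints` name:

```python
import math

def finite_datapoints(metric_values):
    """Drop NaN/Infinity datapoints that the DOUBLE column cannot store."""
    return {ts: v for ts, v in metric_values.items() if math.isfinite(v)}

# The metric from the log above: every datapoint is NaN, so nothing remains
# and the record can be skipped instead of failing the whole insert batch.
values = {1605155168235: float("nan"), 1605155198236: float("nan"),
          1605155228235: float("nan"), 1605155258235: float("nan")}
print(finite_datapoints(values))  # {}
```

Logging the metric name together with the count of dropped datapoints would also replace the unhelpful "Failed on insert records to store : null" message.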
[jira] [Resolved] (AMBARI-25415) AMS - metadata table has incorrect primary key
[ https://issues.apache.org/jira/browse/AMBARI-25415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer resolved AMBARI-25415. -- Resolution: Fixed > AMS - metadata table has incorrect primary key > -- > > Key: AMBARI-25415 > URL: https://issues.apache.org/jira/browse/AMBARI-25415 > Project: Ambari > Issue Type: Task > Components: ambari-metrics >Affects Versions: 2.7.5 >Reporter: Ihor Lukianov >Assignee: Ihor Lukianov >Priority: Critical > Labels: pull-request-available > Fix For: 2.7.5 > > Time Spent: 2h 20m > Remaining Estimate: 0h > > The metrics metadata table has incorrect PK, which causes some metadata > information to be overridden by false metrics that appear due to some other > issue -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (AMBARI-25625) druid.emitter.ambari-metrics.hostname set improperly
[ https://issues.apache.org/jira/browse/AMBARI-25625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer updated AMBARI-25625: - Resolution: Fixed Status: Resolved (was: Patch Available) > druid.emitter.ambari-metrics.hostname set improperly > > > Key: AMBARI-25625 > URL: https://issues.apache.org/jira/browse/AMBARI-25625 > Project: Ambari > Issue Type: Task > Components: stacks >Affects Versions: 2.6.2, 2.7.1, 2.7.2, 2.7.3, 2.7.4, 2.7.5 >Reporter: Tamas Payer >Assignee: Tamas Payer >Priority: Major > Labels: ambari-metrics, druid, stack > Fix For: 2.7.6 > > Time Spent: 20m > Remaining Estimate: 0h > > When installing Druid the `druid.emitter.ambari-metrics.hostname` property is > set to the first character of the hostname of the AMS collector host even > though the property in Ambari is set to `{{metric_collector_host}}`. > /etc/druid/conf/_common/common.runtime.properties file contains: > {code:java} > druid.emitter=ambari-metrics > druid.emitter.ambari-metrics.eventConverter={"type":"whiteList"} > druid.emitter.ambari-metrics.hostname=c > druid.emitter.ambari-metrics.port=6188 > druid.emitter.ambari-metrics.protocol=https > druid.emitter.ambari-metrics.trustStorePassword=clientTrustStorePassword > druid.emitter.ambari-metrics.trustStorePath=/etc/security/clientKeys/all.jks > druid.emitter.ambari-metrics.trustStoreType=jks{code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (AMBARI-25625) druid.emitter.ambari-metrics.hostname set improperly
[ https://issues.apache.org/jira/browse/AMBARI-25625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer updated AMBARI-25625: - Status: Patch Available (was: In Progress) > druid.emitter.ambari-metrics.hostname set improperly > > > Key: AMBARI-25625 > URL: https://issues.apache.org/jira/browse/AMBARI-25625 > Project: Ambari > Issue Type: Task > Components: stacks >Affects Versions: 2.6.2, 2.7.1, 2.7.2, 2.7.3, 2.7.4, 2.7.5 >Reporter: Tamas Payer >Assignee: Tamas Payer >Priority: Major > Labels: ambari-metrics, druid, stack > Fix For: 2.7.6 > > Time Spent: 10m > Remaining Estimate: 0h > > When installing Druid the `druid.emitter.ambari-metrics.hostname` property is > set to the first character of the hostname of the AMS collector host even > though the property in Ambari is set to `{{metric_collector_host}}`. > /etc/druid/conf/_common/common.runtime.properties file contains: > {code:java} > druid.emitter=ambari-metrics > druid.emitter.ambari-metrics.eventConverter={"type":"whiteList"} > druid.emitter.ambari-metrics.hostname=c > druid.emitter.ambari-metrics.port=6188 > druid.emitter.ambari-metrics.protocol=https > druid.emitter.ambari-metrics.trustStorePassword=clientTrustStorePassword > druid.emitter.ambari-metrics.trustStorePath=/etc/security/clientKeys/all.jks > druid.emitter.ambari-metrics.trustStoreType=jks{code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (AMBARI-25625) druid.emitter.ambari-metrics.hostname set improperly
Tamas Payer created AMBARI-25625: Summary: druid.emitter.ambari-metrics.hostname set improperly Key: AMBARI-25625 URL: https://issues.apache.org/jira/browse/AMBARI-25625 Project: Ambari Issue Type: Task Components: stacks Affects Versions: 2.6.2, 2.7.1, 2.7.2, 2.7.3, 2.7.4, 2.7.5 Reporter: Tamas Payer Assignee: Tamas Payer Fix For: 2.7.6 When installing Druid the `druid.emitter.ambari-metrics.hostname` property is set to the first character of the hostname of the AMS collector host even though the property in Ambari is set to `{{metric_collector_host}}`. /etc/druid/conf/_common/common.runtime.properties file contains: {code:java} druid.emitter=ambari-metrics druid.emitter.ambari-metrics.eventConverter={"type":"whiteList"} druid.emitter.ambari-metrics.hostname=c druid.emitter.ambari-metrics.port=6188 druid.emitter.ambari-metrics.protocol=https druid.emitter.ambari-metrics.trustStorePassword=clientTrustStorePassword druid.emitter.ambari-metrics.trustStorePath=/etc/security/clientKeys/all.jks druid.emitter.ambari-metrics.trustStoreType=jks{code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
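A first-character value like `c` is the classic symptom of indexing a string where a list of hosts was expected. A minimal Python sketch of the presumed bug class (the variable names are hypothetical, not the actual Ambari stack-script code):

```python
# When the collector hosts arrive as a comma-separated string instead of a
# list, indexing with [0] silently yields the first character, not a host.
metric_collector_hosts = "c7402.ambari.apache.org"  # a string, not a list

broken = metric_collector_hosts[0]            # 'c' -> the bad property value
fixed = metric_collector_hosts.split(",")[0]  # the full first hostname

print(broken, fixed)
```

Python raises no error for either expression, which is why the truncated value only shows up later, in the generated common.runtime.properties.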
[jira] [Resolved] (AMBARI-25611) After purging AMS database "TimelineMetricMetadataKey is null" error is thrown
[ https://issues.apache.org/jira/browse/AMBARI-25611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer resolved AMBARI-25611. -- Assignee: Tamas Payer Resolution: Workaround > After purging AMS database "TimelineMetricMetadataKey is null" error is thrown > -- > > Key: AMBARI-25611 > URL: https://issues.apache.org/jira/browse/AMBARI-25611 > Project: Ambari > Issue Type: Task > Components: ambari-metrics >Affects Versions: 2.7.5 >Reporter: Tamas Payer >Assignee: Tamas Payer >Priority: Minor > > After purging the Metrics Collector's database the following error message > appears in the logs: > {code:java} > 2021-01-15 11:30:48,921 ERROR > org.apache.ambari.metrics.core.timeline.discovery.TimelineMetricMetadataManager: > TimelineMetricMetadataKey is null for : [-1, -1, -64, -112, -85, -17, 39, > -84, -121, 19, -118, -36, -104, -21, -7, 110, 61, -97, -56, 10]{code} > > *The following steps were used to purge the database:* > * +Get some configuration values:+ > * Get _hbase.rootdir_. *hbase.rootdir = /user/ams/hbase* > * Get _hbase.tmp.dir_. *hbase.tmp.dir = > /var/lib/ambari-metrics-collector/hbase-tmp* > * Get _hbase.zookeeper.property.datadir_. *hbase.zookeeper.property.datadir > = ${hbase.tmp.dir}/zookeeper* > * Get _phoenix.spool.directory_. *phoenix.spool.directory = > ${hbase.tmp.dir}/phoenix-spool* > * Get ‘ZooKeeper Znode Parent’. 
*ZooKeeper Znode Parent = > /ams-hbase-unsecure* > * *Stop AMS and set it to Maintenance Mode* > * Remove the HBase database > * _su hdfs_ > * _hdfs dfs -ls /user/ams/hbase_ > * _hdfs dfs -rm -r /user/ams/hbase_ > * _exit (back to root from hdfs user)_ > * Remove the AMS Zookeeper data *on both Collector Nodes* > * _su ams_ > * _ls /var/lib/ambari-metrics-collector/hbase-tmp/zookeeper/_ > * _rm -rf /var/lib/ambari-metrics-collector/hbase-tmp/zookeeper/*_ > * Remove any Phoenix spool files *on both Collector Nodes* > * _ls /var/lib/ambari-metrics-collector/hbase-tmp/phoenix-spool/_ > * _rm -rf /var/lib/ambari-metrics-collector/hbase-tmp/phoenix-spool/*_ > * _exit_ (back to root from ams user) > * Connect to the cluster zookeeper instance and delete the ‘_ZooKeeper Znode > Parent’/meta-region-server_ node > * _/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server localhost:2181_ > * _[zk: localhost:2181(CONNECTED) 0]_ > * _rmr /ams-hbase-unsecure/meta-region-server_ > * _quit_ > * Restart AMS > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (AMBARI-25611) After purging AMS database "TimelineMetricMetadataKey is null" error is thrown
[ https://issues.apache.org/jira/browse/AMBARI-25611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17292815#comment-17292815 ] Tamas Payer commented on AMBARI-25611: -- This issue appears only if there are multiple Metrics Collectors on the cluster where the data purge was done. It seems that the metadata caches are not in sync on the collectors. The situation can be worked around with the following purge procedure: h2. Cleaning up Ambari Metrics System Data * Get some configuration values: * Get _hbase.rootdir_. *hbase.rootdir = /user/ams/hbase* * Get _hbase.tmp.dir_. *hbase.tmp.dir = /var/lib/ambari-metrics-collector/hbase-tmp* * Get _hbase.zookeeper.property.datadir_. *hbase.zookeeper.property.datadir = ${hbase.tmp.dir}/zookeeper* * Get _phoenix.spool.directory_. *phoenix.spool.directory = ${hbase.tmp.dir}/phoenix-spool* * Get ‘ZooKeeper Znode Parent’. *ZooKeeper Znode Parent = /ams-hbase-unsecure* * Stop AMS and set it to Maintenance Mode * Remove the HBase database ** _su hdfs_ ** _hdfs dfs -ls /user/ams/hbase_ ** _hdfs dfs -rm -r /user/ams/hbase_ ** _exit_ (back to root from hdfs user) * Remove the AMS Zookeeper data *on both Collector Nodes* ** _su ams_ ** _ls /var/lib/ambari-metrics-collector/hbase-tmp/zookeeper/_ ** _rm -rf /var/lib/ambari-metrics-collector/hbase-tmp/zookeeper_ * Remove any Phoenix spool files *on both Collector Nodes* ** _ls /var/lib/ambari-metrics-collector/hbase-tmp/phoenix-spool/_ ** _rm -rf /var/lib/ambari-metrics-collector/hbase-tmp/phoenix-spool_ * _exit_ (back to root from ams user) * Connect to the cluster zookeeper instance and delete the ‘_ZooKeeper Znode Parent’/meta-region-server_ node ** _zkCli.sh -server localhost:2181_ ** _[zk: localhost:2181(CONNECTED) 0]_ _rmr /ams-hbase-unsecure/meta-region-server_ ** _quit_ * *If there are two Metrics Collectors on the cluster* * *Start only one of the Collectors - the primary one. 
On the Hosts page of Ambari start the individual Metrics Collector component.* * *Watch the ambari-metrics-collector.log (tail -f ambari-metrics-collector.log) and wait for a few aggregation cycles.* * *Finally start the Ambari Metrics normally from Ambari. That will start up the second Collector along with the other components.* * Disable the Maintenance Mode > After purging AMS database "TimelineMetricMetadataKey is null" error is thrown > -- > > Key: AMBARI-25611 > URL: https://issues.apache.org/jira/browse/AMBARI-25611 > Project: Ambari > Issue Type: Task > Components: ambari-metrics >Affects Versions: 2.7.5 >Reporter: Tamas Payer >Priority: Minor > > After purging the Metrics Collector's database the following error message > appears in the logs: > {code:java} > 2021-01-15 11:30:48,921 ERROR > org.apache.ambari.metrics.core.timeline.discovery.TimelineMetricMetadataManager: > TimelineMetricMetadataKey is null for : [-1, -1, -64, -112, -85, -17, 39, > -84, -121, 19, -118, -36, -104, -21, -7, 110, 61, -97, -56, 10]{code} > > *The following steps were used to purge the database:* > * +Get some configuration values:+ > * Get _hbase.rootdir_. *hbase.rootdir = /user/ams/hbase* > * Get _hbase.tmp.dir_. *hbase.tmp.dir = > /var/lib/ambari-metrics-collector/hbase-tmp* > * Get _hbase.zookeeper.property.datadir_. *hbase.zookeeper.property.datadir > = ${hbase.tmp.dir}/zookeeper* > * Get _phoenix.spool.directory_. *phoenix.spool.directory = > ${hbase.tmp.dir}/phoenix-spool* > * Get ‘ZooKeeper Znode Parent’. 
*ZooKeeper Znode Parent = > /ams-hbase-unsecure* > * *Stop AMS and set it to Maintenance Mode* > * Remove the HBase database > * _su hdfs_ > * _hdfs dfs -ls /user/ams/hbase_ > * _hdfs dfs -rm -r /user/ams/hbase_ > * _exit (back to root from hdfs user)_ > * Remove the AMS Zookeeper data *on both Collector Nodes* > * _su ams_ > * _ls /var/lib/ambari-metrics-collector/hbase-tmp/zookeeper/_ > * _rm -rf /var/lib/ambari-metrics-collector/hbase-tmp/zookeeper/*_ > * Remove any Phoenix spool files *on both Collector Nodes* > * _ls /var/lib/ambari-metrics-collector/hbase-tmp/phoenix-spool/_ > * _rm -rf /var/lib/ambari-metrics-collector/hbase-tmp/phoenix-spool/*_ > * _exit_ (back to root from ams user) > * Connect to the cluster zookeeper instance and delete the ‘_ZooKeeper Znode > Parent’/meta-region-server_ node > * _/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server localhost:2181_ > * _[zk: localhost:2181(CONNECTED) 0]_ > * _rmr /ams-hbase-unsecure/meta-region-server_ > * _quit_ > * Restart AMS > -- This message was sent by Atlassian Jira (v8.3.4#803005)
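The manual purge steps in the comment above can be condensed into a small shell helper. This is a hedged sketch, not an official Ambari script: the paths are the default configuration values quoted above and may differ per cluster, and DRY_RUN=1 (the default here) only prints each destructive command instead of executing it.

```shell
# Sketch of the AMS purge procedure described above. Paths are the default
# values quoted in the comment; verify them against your own configuration.
# With DRY_RUN=1 (default) the destructive commands are only printed.
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "WOULD RUN: $*"
  else
    "$@"
  fi
}

# Remove the HBase database from HDFS (hbase.rootdir), as the hdfs user
run sudo -u hdfs hdfs dfs -rm -r /user/ams/hbase

# On every Collector node: remove AMS ZooKeeper data and Phoenix spool files
run rm -rf /var/lib/ambari-metrics-collector/hbase-tmp/zookeeper
run rm -rf /var/lib/ambari-metrics-collector/hbase-tmp/phoenix-spool

# Delete the meta-region-server znode under the configured znode parent
run zkCli.sh -server localhost:2181 rmr /ams-hbase-unsecure/meta-region-server
```

Only after reviewing the printed commands would one re-run with DRY_RUN=0; stopping AMS first and restarting the primary Collector afterwards still has to follow the sequence in the comment.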
[jira] [Resolved] (AMBARI-25569) Reassess Ambari Metrics data migration
[ https://issues.apache.org/jira/browse/AMBARI-25569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer resolved AMBARI-25569. -- Resolution: Fixed > Reassess Ambari Metrics data migration > -- > > Key: AMBARI-25569 > URL: https://issues.apache.org/jira/browse/AMBARI-25569 > Project: Ambari > Issue Type: Task > Components: ambari-metrics >Affects Versions: 2.7.3, 2.7.4, 2.7.5 >Reporter: Tamas Payer >Assignee: Tamas Payer >Priority: Major > Labels: metric-collector, migration, pull-request-available > Fix For: 2.7.6 > > Time Spent: 1h 50m > Remaining Estimate: 0h > > The data migration process of Ambari Metrics as described at > [https://docs.cloudera.com/HDPDocuments/Ambari-2.7.5.0/bk_ambari-upgrade-major/content/upgrading_HDP_post_upgrade_tasks.html] > is causing issues, like not migrating data that would be expected by the > user. (e.g. Yarn Queue metrics other than the root queue's.) > The data migration is usually called by the > > {code:java} > /usr/sbin/ambari-metrics-collector --config > /etc/ambari-metrics-collector/conf/ upgrade_start > /etc/ambari-metrics-collector/conf/metrics_whitelist "31556952000" > {code} > command where the whitelist is specified. > The migration code only looks for the metrics that are present in the > whitelist file. This is true even in the case when the AMS Whitelisting is > not enabled. The user will only have those metrics migrated that are present > in the whitelist file, which is usually not all that are required. 
> > I suggest the following change: > - If whitelist file parameter *is provided* then > ** migrate only the metrics that are in the whitelist file > - if *--allmetrics* value is provided in place of whitelist file parameter > then > ** migrate all metrics regardless of other configuration settings > - if whitelist file parameter is *not provided* (and the time period for > data migration is also not provided) then > ** if whitelisting is *enabled* then > *** discover the whitelist file configured in AMS and migrate only the > metrics that are in the whitelist file > ** if whitelisting is *disabled* then > *** migrate *all the metrics* present in the database > *Examples:* > * {{*Migrate the metrics present in the whitelist file that are not older > than one year (365 days)*}} > /usr/sbin/ambari-metrics-collector --config > /etc/ambari-metrics-collector/conf/ upgrade_start > /etc/ambari-metrics-collector/conf/metrics_whitelist "365" > * {{*Migrate the metrics present in the whitelist file that are not older > than the default one month (30 days)*}} > {{/usr/sbin/ambari-metrics-collector --config > /etc/ambari-metrics-collector/conf/ upgrade_start > /etc/ambari-metrics-collector/conf/metrics_whitelist}} > * {{*Migrate all metrics that are not older than one year (365 days)*}} > {{/usr/sbin/ambari-metrics-collector --config > /etc/ambari-metrics-collector/conf/ upgrade_start --allmetrics "365"}} > * {{*Migrate all metrics*}} *that are not older than the default one month > (30 days)* > {{/usr/sbin/ambari-metrics-collector --config > /etc/ambari-metrics-collector/conf/ upgrade_start --allmetrics}} > * *If whitelisting is enabled then migrate the metrics present in the > whitelist file configured in Ambari that are not older than the default one > month (30 days). 
If whitelisting is disabled migrate all metrics that are > not older than the default one month.* > /usr/sbin/ambari-metrics-collector --config > /etc/ambari-metrics-collector/conf/ upgrade_start > > *1. Introduce an '--allmetrics' to enforce migration of all metrics > regardless of other settings.* > Due to the suboptimal argument handling, if one wants to define an argument > that comes after the 'whitelist file' > argument - like the 'starttime' - the 'whitelist file' argument must be > defined. > But when we don't want to use the whitelist data because we need to migrate > all the metrics the '--allmetrics' argument can be provided instead of > 'whitelist file'. > Example: migrate all the metrics from the last year > {{/usr/sbin/ambari-metrics-collector --config > /etc/ambari-metrics-collector/conf/ upgrade_start --allmetrics "365"}} > *2. The start time handling should be fixed and changed* > * The code is intended to migrate data from the "last x milliseconds" as the > handling of the default data shows where the startTime is subtracted from the > current timestamp. > {{public static final long DEFAULT_START_TIME = System.currentTimeMillis() - > ONE_MONTH_MILLIS; //Last month}} > But when the user externally provided the {{startTime}} value it was not > subtracted from the current timestamp, but was used as it is, which is indeed > erroneous. > * Also, I suggest using days instea
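The millisecond time-window argument quoted in this thread can be sanity-checked with plain shell arithmetic. A hedged sketch of where the "31556952000" in the original command comes from: it is one Gregorian year (365.2425 days) expressed in milliseconds; a plain 365-day window is slightly smaller.

```shell
# The legacy start-time argument to upgrade_start is a window in milliseconds.
# "31556952000" = 365.2425 days (one Gregorian year). A plain 365-day window:
days=365
window_ms=$((days * 24 * 60 * 60 * 1000))
echo "$window_ms"   # 31536000000
```

Under the ticket's proposal the same argument would simply be given in days ("365"), avoiding this error-prone conversion entirely.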
[jira] [Resolved] (AMBARI-25547) Update Grafana version to 6.7.4 to avoid CVE-2020-13379
[ https://issues.apache.org/jira/browse/AMBARI-25547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer resolved AMBARI-25547. -- Resolution: Fixed > Update Grafana version to 6.7.4 to avoid CVE-2020-13379 > --- > > Key: AMBARI-25547 > URL: https://issues.apache.org/jira/browse/AMBARI-25547 > Project: Ambari > Issue Type: Bug > Components: ambari-metrics >Affects Versions: 2.7.5 >Reporter: Éberhardt Péter >Assignee: Tamas Payer >Priority: Critical > Labels: grafana > Fix For: 2.7.6 > > Time Spent: 2h 20m > Remaining Estimate: 0h > > Uplift Grafana version to 6.7.4 > Grafana Vulnerability CVE-2020-13379 > [https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13379] > [https://grafana.com/blog/2020/06/03/grafana-6.7.4-and-7.0.2-released-with-important-security-fix/] > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (AMBARI-25547) Update Grafana version to 6.7.4 to avoid CVE-2020-13379
[ https://issues.apache.org/jira/browse/AMBARI-25547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer updated AMBARI-25547: - Labels: grafana (was: ) > Update Grafana version to 6.7.4 to avoid CVE-2020-13379 > --- > > Key: AMBARI-25547 > URL: https://issues.apache.org/jira/browse/AMBARI-25547 > Project: Ambari > Issue Type: Bug > Components: ambari-metrics >Affects Versions: 2.7.5 >Reporter: Éberhardt Péter >Assignee: Tamas Payer >Priority: Critical > Labels: grafana > Fix For: 2.7.6 > > Time Spent: 10m > Remaining Estimate: 0h > > Uplift Grafana version to 6.7.4 > Grafana Vulnerability CVE-2020-13379 > [https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13379] > [https://grafana.com/blog/2020/06/03/grafana-6.7.4-and-7.0.2-released-with-important-security-fix/] > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (AMBARI-25611) After purging AMS database "TimelineMetricMetadataKey is null" error is thrown
[ https://issues.apache.org/jira/browse/AMBARI-25611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17267133#comment-17267133 ] Tamas Payer commented on AMBARI-25611: -- After my preliminary assessment it seems that this phenomenon occurs when the database is purged and the Metrics Collector starts receiving metrics that are not yet in the metadata store. After restarting AMS the in-memory metadata store is rebuilt and the messages should go away. They may keep appearing as long as new metrics arrive that are not yet in the metadata store, so after a few restarts AMS should stabilise and the messages should disappear. > After purging AMS database "TimelineMetricMetadataKey is null" error is thrown > -- > > Key: AMBARI-25611 > URL: https://issues.apache.org/jira/browse/AMBARI-25611 > Project: Ambari > Issue Type: Task > Components: ambari-metrics >Affects Versions: 2.7.5 >Reporter: Tamas Payer >Priority: Minor > > After purging the Metrics Collector's database the following error message > appears in the logs: > {code:java} > 2021-01-15 11:30:48,921 ERROR > org.apache.ambari.metrics.core.timeline.discovery.TimelineMetricMetadataManager: > TimelineMetricMetadataKey is null for : [-1, -1, -64, -112, -85, -17, 39, > -84, -121, 19, -118, -36, -104, -21, -7, 110, 61, -97, -56, 10]{code} > > *The following steps were used to purge the database:* > * +Get some configuration values:+ > * Get _hbase.rootdir_. *hbase.rootdir = /user/ams/hbase* > * Get _hbase.tmp.dir_. *hbase.tmp.dir = > /var/lib/ambari-metrics-collector/hbase-tmp* > * Get _hbase.zookeeper.property.datadir_. *hbase.zookeeper.property.datadir > = ${hbase.tmp.dir}/zookeeper* > * Get _phoenix.spool.directory_. *phoenix.spool.directory = > ${hbase.tmp.dir}/phoenix-spool* > * Get ‘ZooKeeper Znode Parent’. 
*ZooKeeper Znode Parent = > /ams-hbase-unsecure* > * *Stop AMS and set it to Maintenance Mode* > * Remove the HBase database > * _su hdfs_ > * _hdfs dfs -ls /user/ams/hbase_ > * _hdfs dfs -rm -r /user/ams/hbase_ > * _exit (back to root from hdfs user)_ > * Remove the AMS Zookeeper data *on both Collector Nodes* > * _su ams_ > * _ls /var/lib/ambari-metrics-collector/hbase-tmp/zookeeper/_ > * _rm -rf /var/lib/ambari-metrics-collector/hbase-tmp/zookeeper/*_ > * Remove any Phoenix spool files *on both Collector Nodes* > * _ls /var/lib/ambari-metrics-collector/hbase-tmp/phoenix-spool/_ > * _rm -rf /var/lib/ambari-metrics-collector/hbase-tmp/phoenix-spool/*_ > * _exit_ (back to root from ams user) > * Connect to the cluster zookeeper instance and delete the ‘_ZooKeeper Znode > Parent’/meta-region-server_ node > * _/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server localhost:2181_ > * _[zk: localhost:2181(CONNECTED) 0]_ > * _rmr /ams-hbase-unsecure/meta-region-server_ > * _quit_ > * Restart AMS > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (AMBARI-25611) After purging AMS database "TimelineMetricMetadataKey is null" error is thrown
[ https://issues.apache.org/jira/browse/AMBARI-25611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer updated AMBARI-25611: - Priority: Minor (was: Major) > After purging AMS database "TimelineMetricMetadataKey is null" error is thrown > -- > > Key: AMBARI-25611 > URL: https://issues.apache.org/jira/browse/AMBARI-25611 > Project: Ambari > Issue Type: Task > Components: ambari-metrics >Affects Versions: 2.7.5 >Reporter: Tamas Payer >Priority: Minor > > After purging the Metrics Collector's database the following error message > appears in the logs: > {code:java} > 2021-01-15 11:30:48,921 ERROR > org.apache.ambari.metrics.core.timeline.discovery.TimelineMetricMetadataManager: > TimelineMetricMetadataKey is null for : [-1, -1, -64, -112, -85, -17, 39, > -84, -121, 19, -118, -36, -104, -21, -7, 110, 61, -97, -56, 10]{code} > > *The following steps were used to purge the database:* > * +Get some configuration values:+ > * Get _hbase.rootdir_. *hbase.rootdir = /user/ams/hbase* > * Get _hbase.tmp.dir_. *hbase.tmp.dir = > /var/lib/ambari-metrics-collector/hbase-tmp* > * Get _hbase.zookeeper.property.datadir_. *hbase.zookeeper.property.datadir > = ${hbase.tmp.dir}/zookeeper* > * Get _phoenix.spool.directory_. *phoenix.spool.directory = > ${hbase.tmp.dir}/phoenix-spool* > * Get ‘ZooKeeper Znode Parent’. 
*ZooKeeper Znode Parent = > /ams-hbase-unsecure* > * *Stop AMS and set it to Maintenance Mode* > * Remove the HBase database > * _su hdfs_ > * _hdfs dfs -ls /user/ams/hbase_ > * _hdfs dfs -rm -r /user/ams/hbase_ > * _exit (back to root from hdfs user)_ > * Remove the AMS Zookeeper data *on both Collector Nodes* > * _su ams_ > * _ls /var/lib/ambari-metrics-collector/hbase-tmp/zookeeper/_ > * _rm -rf /var/lib/ambari-metrics-collector/hbase-tmp/zookeeper/*_ > * Remove any Phoenix spool files *on both Collector Nodes* > * _ls /var/lib/ambari-metrics-collector/hbase-tmp/phoenix-spool/_ > * _rm -rf /var/lib/ambari-metrics-collector/hbase-tmp/phoenix-spool/*_ > * _exit_ (back to root from ams user) > * Connect to the cluster zookeeper instance and delete the ‘_ZooKeeper Znode > Parent’/meta-region-server_ node > * _/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server localhost:2181_ > * _[zk: localhost:2181(CONNECTED) 0]_ > * _rmr /ams-hbase-unsecure/meta-region-server_ > * _quit_ > * Restart AMS > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (AMBARI-25611) After purging AMS database "TimelineMetricMetadataKey is null" error is thrown
Tamas Payer created AMBARI-25611: Summary: After purging AMS database "TimelineMetricMetadataKey is null" error is thrown Key: AMBARI-25611 URL: https://issues.apache.org/jira/browse/AMBARI-25611 Project: Ambari Issue Type: Task Components: ambari-metrics Affects Versions: 2.7.5 Reporter: Tamas Payer After purging the Metrics Collector's database the following error message appears in the logs: {code:java} 2021-01-15 11:30:48,921 ERROR org.apache.ambari.metrics.core.timeline.discovery.TimelineMetricMetadataManager: TimelineMetricMetadataKey is null for : [-1, -1, -64, -112, -85, -17, 39, -84, -121, 19, -118, -36, -104, -21, -7, 110, 61, -97, -56, 10]{code} *The following steps were used to purge the database:* * +Get some configuration values:+ * Get _hbase.rootdir_. *hbase.rootdir = /user/ams/hbase* * Get _hbase.tmp.dir_. *hbase.tmp.dir = /var/lib/ambari-metrics-collector/hbase-tmp* * Get _hbase.zookeeper.property.datadir_. *hbase.zookeeper.property.datadir = ${hbase.tmp.dir}/zookeeper* * Get _phoenix.spool.directory_. *phoenix.spool.directory = ${hbase.tmp.dir}/phoenix-spool* * Get ‘ZooKeeper Znode Parent’. 
*ZooKeeper Znode Parent = /ams-hbase-unsecure* * *Stop AMS and set it to Maintenance Mode* * Remove the HBase database * _su hdfs_ * _hdfs dfs -ls /user/ams/hbase_ * _hdfs dfs -rm -r /user/ams/hbase_ * _exit (back to root from hdfs user)_ * Remove the AMS Zookeeper data *on both Collector Nodes* * _su ams_ * _ls /var/lib/ambari-metrics-collector/hbase-tmp/zookeeper/_ * _rm -rf /var/lib/ambari-metrics-collector/hbase-tmp/zookeeper/*_ * Remove any Phoenix spool files *on both Collector Nodes* * _ls /var/lib/ambari-metrics-collector/hbase-tmp/phoenix-spool/_ * _rm -rf /var/lib/ambari-metrics-collector/hbase-tmp/phoenix-spool/*_ * _exit_ (back to root from ams user) * Connect to the cluster zookeeper instance and delete the ‘_ZooKeeper Znode Parent’/meta-region-server_ node * _/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server localhost:2181_ * _[zk: localhost:2181(CONNECTED) 0]_ * _rmr /ams-hbase-unsecure/meta-region-server_ * _quit_ * Restart AMS -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (AMBARI-25569) Reassess Ambari Metrics data migration
[ https://issues.apache.org/jira/browse/AMBARI-25569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer updated AMBARI-25569: - Description: The data migration process of Ambari Metrics as described at [https://docs.cloudera.com/HDPDocuments/Ambari-2.7.5.0/bk_ambari-upgrade-major/content/upgrading_HDP_post_upgrade_tasks.html] is causing issues, like not migrating data that would be expected by the user. (e.g. Yarn Queue metrics other than the root queue's.) The data migration is usually called by the {code:java} /usr/sbin/ambari-metrics-collector --config /etc/ambari-metrics-collector/conf/ upgrade_start /etc/ambari-metrics-collector/conf/metrics_whitelist "31556952000" {code} command where the whitelist is specified. The migration code only looks for the metrics that are present in the whitelist file. This is true even in the case when the AMS Whitelisting is not enabled. The user will only have those metrics migrated that are present in the whitelist file, which is usually not all that are required. 
I suggest the following change: - If whitelist file parameter *is provided* then ** migrate only the metrics that are in the whitelist file - if *--allmetrics* value is provided in place of whitelist file parameter then ** migrate all metrics regardless of other configuration settings - if whitelist file parameter is *not provided* (and the time period for data migration is also not provided) then ** if whitelisting is *enabled* then *** discover the whitelist file configured in AMS and migrate only the metrics that are in the whitelist file ** if whitelisting is *disabled* then *** migrate *all the metrics* present in the database *Examples:* * {{*Migrate the metrics present in the whitelist file that are not older than one year (365 days)*}} /usr/sbin/ambari-metrics-collector --config /etc/ambari-metrics-collector/conf/ upgrade_start /etc/ambari-metrics-collector/conf/metrics_whitelist "365" * {{*Migrate the metrics present in the whitelist file that are not older than the default one month (30 days)*}} {{/usr/sbin/ambari-metrics-collector --config /etc/ambari-metrics-collector/conf/ upgrade_start /etc/ambari-metrics-collector/conf/metrics_whitelist}} * {{*Migrate all metrics that are not older than one year (365 days)*}} {{/usr/sbin/ambari-metrics-collector --config /etc/ambari-metrics-collector/conf/ upgrade_start --allmetrics "365"}} * {{*Migrate all metrics*}} *that are not older than the default one month (30 days)* {{/usr/sbin/ambari-metrics-collector --config /etc/ambari-metrics-collector/conf/ upgrade_start --allmetrics}} * *If whitelisting is enabled then migrate the metrics present in the whitelist file configured in Ambari that are not older than the default one month (30 days). If whitelisting is disabled migrate all metrics that are not older than the default one month.* /usr/sbin/ambari-metrics-collector --config /etc/ambari-metrics-collector/conf/ upgrade_start *1. 
Introduce an '--allmetrics' to enforce migration of all metrics regardless of other settings.* Due to the suboptimal argument handling, if one wants to define an argument that comes after the 'whitelist file' argument - like the 'starttime' - the 'whitelist file' argument must be defined. But when we don't want to use the whitelist data because we need to migrate all the metrics the '--allmetrics' argument can be provided instead of 'whitelist file'. Example: migrate all the metrics from the last year {{/usr/sbin/ambari-metrics-collector --config /etc/ambari-metrics-collector/conf/ upgrade_start --allmetrics "365"}} *2. The start time handling should be fixed and changed* * The code is intended to migrate data from the "last x milliseconds" as the handling of the default data shows where the startTime is subtracted from the current timestamp. {{public static final long DEFAULT_START_TIME = System.currentTimeMillis() - ONE_MONTH_MILLIS; //Last month}} But when the user externally provided the {{startTime}} value it was not subtracted from the current timestamp, but was used as it is, which is indeed erroneous. * Also, I suggest using days instead of milliseconds to define the required migration time window, because it is a more realistic and convenient granularity. Like in the above example the command will migrate data from the last 365 days. *3. Furthermore, the migration process frequently dies silently while saving the metadata.* The log message "Saving metadata to store..." is present in the logs but the "Metadata was saved." is mostly never there, but there are no other error messages. I suggest revising the current solution where the saving of the metadata is triggered in a Shutdown hook. was: The data migration process of Ambari Metrics as described at [https://docs.cloudera.com/HDPDocuments/Ambari-2.7.5.0/bk_ambari-upgrade-major/content/upgrading_HDP_post_up
[jira] [Updated] (AMBARI-25569) Reassess Ambari Metrics data migration
[ https://issues.apache.org/jira/browse/AMBARI-25569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer updated AMBARI-25569: - Description: The data migration process of Ambari Metrics as described at [https://docs.cloudera.com/HDPDocuments/Ambari-2.7.5.0/bk_ambari-upgrade-major/content/upgrading_HDP_post_upgrade_tasks.html] is causing issues, like not migrating data that would be expected by the user. (e.g. Yarn Queue metrics other than the root queue's.) The data migration is usually called by the {code:java} /usr/sbin/ambari-metrics-collector --config /etc/ambari-metrics-collector/conf/ upgrade_start /etc/ambari-metrics-collector/conf/metrics_whitelist "31556952000" {code} command where the whitelist is specified. The migration code only looks for the metrics that are present in the whitelist file. This is true even in the case when the AMS Whitelisting is not enabled. The user will only have those metrics migrated that are present in the whitelist file, which is usually not all that are required. 
I suggest the following change: - If whitelist file parameter *is provided* then ** migrate only the metrics that are in the whitelist file - if *--allmetrics* value is provided in place of whitelist file parameter then ** migrate all metrics regardless of other configuration settings - if whitelist file parameter is *not provided* (and the time period for data migration is also not provided) then ** if whitelisting is *enabled* then *** discover the whitelist file configured in AMS and migrate only the metrics that are in the whitelist file ** if whitelisting is *disabled* then *** migrate *all the metrics* present in the database *Examples:* * {{*Migrate the metrics present in the whitelist file that are not older than one year (365 days)*}} {{/usr/sbin/ambari-metrics-collector --config /etc/ambari-metrics-collector/conf/ upgrade_start /etc/ambari-metrics-collector/conf/metrics_whitelist "365"}} * *Migrate the metrics present in the whitelist file that are not older than the default one month (30 days)* {{/usr/sbin/ambari-metrics-collector --config /etc/ambari-metrics-collector/conf/ upgrade_start /etc/ambari-metrics-collector/conf/metrics_whitelist}} * {{*Migrate all metrics that are not older than one year (365 days)*}} {{/usr/sbin/ambari-metrics-collector --config /etc/ambari-metrics-collector/conf/ upgrade_start --allmetrics "365"}} * *Migrate all metrics that are not older than the default one month (30 days)* {{/usr/sbin/ambari-metrics-collector --config /etc/ambari-metrics-collector/conf/ upgrade_start --allmetrics}} * *If whitelisting is enabled then migrate the metrics present in the whitelist file configured in Ambari that are not older than the default one month (30 days). If whitelisting is disabled migrate all metrics that are not older than the default one month.* {{/usr/sbin/ambari-metrics-collector --config /etc/ambari-metrics-collector/conf/ upgrade_start}} *1. 
Introduce an '--allmetrics' to enforce migration of all metrics regardless of other settings.* Due to the suboptimal argument handling, if one wants to define an argument that comes after the 'whitelist file' argument - like the 'starttime' - the 'whitelist file' argument must be defined. But when we don't want to use the whitelist data because we need to migrate all the metrics the '--allmetrics' argument can be provided instead of 'whitelist file'. Example: migrate all the metrics from the last year {{/usr/sbin/ambari-metrics-collector --config /etc/ambari-metrics-collector/conf/ upgrade_start --allmetrics "365"}} *2. The start time handling should be fixed and changed* * The code is intended to migrate data from the "last x milliseconds" as the handling of the default data shows where the startTime is subtracted from the current timestamp. {{public static final long DEFAULT_START_TIME = System.currentTimeMillis() - ONE_MONTH_MILLIS; //Last month}} But when the user externally provided the {{startTime}} value it was not subtracted from the current timestamp, but was used as it is, which is indeed erroneous. * Also, I suggest using days instead of milliseconds to define the required migration time window, because it is a more realistic and convenient granularity. Like in the above example the command will migrate data from the last 365 days. *3. Furthermore, the migration process frequently dies silently while saving the metadata.* The log message "Saving metadata to store..." is present in the logs but the "Metadata was saved." is mostly never there, but there are no other error messages. I suggest revising the current solution where the saving of the metadata is triggered in a Shutdown hook. was: The data migration process of Ambari Metrics as described at [https://docs.clouder
[jira] [Commented] (AMBARI-25572) Metrics cannot be stored if mutation size is not set properly
[ https://issues.apache.org/jira/browse/AMBARI-25572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17235342#comment-17235342 ] Tamas Payer commented on AMBARI-25572: -- [~echohlne], feel free to work on this. Thanks! > Metrics cannot be stored if mutation size is not set properly > - > > Key: AMBARI-25572 > URL: https://issues.apache.org/jira/browse/AMBARI-25572 > Project: Ambari > Issue Type: Task > Components: ambari-metrics >Affects Versions: 2.7.3, 2.7.4, 2.7.5 >Reporter: Tamas Payer >Priority: Major > Labels: metric-collector, reliability > > Ambari Metrics Collector sometimes fails to store metrics because the batch > size settings do not fit the current row size and the assembled batch > probably does not fit into the maximum batch memory space. > {code:java} > 2020-10-20 10:29:04,752 WARN > org.apache.ambari.metrics.core.timeline.PhoenixHBaseAccessor: Failed on > insert records to store : ERROR 730 (LIM02): MutationState size is bigger > than maximum allowed number of bytes > 2020-10-20 10:29:04,752 WARN > org.apache.ambari.metrics.core.timeline.PhoenixHBaseAccessor: Metric that > cannot be stored : > [kafka.server.ConsumerGroupMetrics.CommittedOffset.clientId.-.group.console-consumer-46399.partition.0.topic.PREPROD_KAFKA_CBS_TRANSACTION._sum,kafka_broker]{1603186125049=2.65274693E8}{code} > This error can be eliminated by tweaking the *phoenix.mutate.batchSize* > and *phoenix.mutate.batchSizeBytes* configuration settings. > *Assess the feasibility of changing the AMS implementation to retry upon > failure after adjusting the auto-commit or batch size, or whatever is > needed for the successful commit, and to notify the user about suboptimal > batch size settings.* -- This message was sent by Atlassian Jira (v8.3.4#803005)
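The two Phoenix settings named in the ticket above are plain configuration properties; a hedged sketch of how they might be raised is below. The property names come from the ticket; the values are purely illustrative (not recommended defaults), and the exact config file they belong in (likely the AMS-embedded HBase site configuration) should be verified against the cluster's AMS setup.

```xml
<!-- Illustrative sketch only: property names from AMBARI-25572,
     values are examples, not recommended defaults. -->
<property>
  <name>phoenix.mutate.batchSize</name>
  <value>10000</value>
</property>
<property>
  <name>phoenix.mutate.batchSizeBytes</name>
  <!-- 1 GiB, illustrative; size this against your largest metric rows -->
  <value>1073741824</value>
</property>
```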
[jira] [Updated] (AMBARI-25582) Change the way AMS Grafana datasource discovers the Kafka topics
[ https://issues.apache.org/jira/browse/AMBARI-25582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer updated AMBARI-25582: - Description: Currently the available Kafka topics are discovered by extracting them from the kafka.log.Log.* metrics in the AMS Datasource to show them on the Kafka-Topics dashboard. This means that if whitelisting is used the kafka.log.Log.* metrics must be enabled and must not be excluded by 'external.kafka.metrics.exclude.prefix' Kafka property either. However, in some cases the large amount of kafka.log.Log.* metrics can be a burden to AMS, so excluding them would be welcomed. The possibility to discover the Kafka topics should be assessed! was: Currently the available Kafka topics are discovered by extracting them from the kafka.log.Log.* metrics in the AMS Datasource to show them on the Kafka-Topics dashboard. This means that if whitelisting is used the kafka.log.Log.* metrics must be enabled and must not be excluded by 'external.kafka.metrics.exclude.prefix' Kafka propery. However, in some cases the large amount of kafka.log.Log.* metrics can be a burden to AMS, so excluding them would be welcomed. The possibility to discover the Kafka topics should be assessed! > Change the way AMS Grafana datasource discovers the Kafka topics > - > > Key: AMBARI-25582 > URL: https://issues.apache.org/jira/browse/AMBARI-25582 > Project: Ambari > Issue Type: Task >Reporter: Tamas Payer >Priority: Major > Labels: ambari-metrics, grafana > > Currently the available Kafka topics are discovered by extracting them from > the kafka.log.Log.* > metrics in the AMS Datasource to show them on the Kafka-Topics dashboard. > This means that if whitelisting is used the kafka.log.Log.* metrics must be > enabled and must not be excluded by 'external.kafka.metrics.exclude.prefix' > Kafka property either. > However, in some cases the large amount of kafka.log.Log.* metrics can be a > burden to AMS, so excluding them would be welcomed. 
> The possibility to discover the Kafka topics should be assessed! -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (AMBARI-25582) Change the way AMS Grafana datasource discovers the Kafka topics
Tamas Payer created AMBARI-25582: Summary: Change the way AMS Grafana datasource discovers the Kafka topics Key: AMBARI-25582 URL: https://issues.apache.org/jira/browse/AMBARI-25582 Project: Ambari Issue Type: Task Reporter: Tamas Payer Currently the available Kafka topics are discovered by extracting them from the kafka.log.Log.* metrics in the AMS Datasource to show them on the Kafka-Topics dashboard. This means that if whitelisting is used the kafka.log.Log.* metrics must be enabled and must not be excluded by 'external.kafka.metrics.exclude.prefix' Kafka property. However, in some cases the large amount of kafka.log.Log.* metrics can be a burden to AMS, so excluding them would be welcomed. The possibility to discover the Kafka topics should be assessed! -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (AMBARI-25563) Storm dashboards are not showing metrics
[ https://issues.apache.org/jira/browse/AMBARI-25563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer resolved AMBARI-25563. -- Resolution: Fixed > Storm dashboards are not showing metrics > > > Key: AMBARI-25563 > URL: https://issues.apache.org/jira/browse/AMBARI-25563 > Project: Ambari > Issue Type: Bug > Components: ambari-metrics >Affects Versions: 2.7.5 >Reporter: Tamas Payer >Assignee: Tamas Payer >Priority: Major > Labels: grafana, storm > Fix For: 2.7.6 > > Time Spent: 40m > Remaining Estimate: 0h > > All the Storm dashboards are failing to show metric data. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Reopened] (AMBARI-25569) Reassess Ambari Metrics data migration
[ https://issues.apache.org/jira/browse/AMBARI-25569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer reopened AMBARI-25569: -- Reopening due to new issues. > Reassess Ambari Metrics data migration > -- > > Key: AMBARI-25569 > URL: https://issues.apache.org/jira/browse/AMBARI-25569 > Project: Ambari > Issue Type: Task > Components: ambari-metrics >Affects Versions: 2.7.3, 2.7.4, 2.7.5 >Reporter: Tamas Payer >Assignee: Tamas Payer >Priority: Major > Labels: metric-collector, migration, pull-request-available > Fix For: 2.7.6 > > Time Spent: 0.5h > Remaining Estimate: 0h > > The data migration process of Ambari Metrics as described at > [https://docs.cloudera.com/HDPDocuments/Ambari-2.7.5.0/bk_ambari-upgrade-major/content/upgrading_HDP_post_upgrade_tasks.html] > is causing issues, like not migrating data that would be expected by the > user. (e.g. Yarn Queue metrics other than the root queue's.) > The data migration is usually called by the > > {code:java} > /usr/sbin/ambari-metrics-collector --config > /etc/ambari-metrics-collector/conf/ upgrade_start > /etc/ambari-metrics-collector/conf/metrics_whitelist > {code} > command where the whitelist is specified. > The migration code only looks for the metrics that are present in the > whitelist file. This is true even in the case when the AMS Whitelisting is > not enabled. The user will only have those metrics migrated that are present > in the whitelist file, which is usually not all that are required. 
> > I suggest the following change: > - If the whitelist file parameter is *provided* then > - migrate only the metrics that are in the whitelist file > - if the whitelist file parameter is *not provided* then > - if whitelisting is *enabled* then > - discover the whitelist file configured in AMS and > migrate only the metrics that are in the whitelist file > - if whitelisting is *disabled* then > - migrate *all the metrics* present in the database > > Furthermore, the migration process frequently dies silently while saving the > metadata. The log message "Saving metadata to store..." is present in the > logs, but the "Metadata was saved." message is mostly never there, and there are no > other error messages. > I suggest revising the current solution, where the saving of the metadata is > triggered in a Shutdown hook. -- This message was sent by Atlassian Jira (v8.3.4#803005)
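The decision tree suggested above for AMBARI-25569 can be sketched as follows. This is a hedged illustration only: the method and parameter names (resolveWhitelist, amsWhitelistingEnabled, configuredWhitelistPath) are hypothetical and not taken from the Ambari code base; an empty result stands for "no filter, migrate all the metrics present in the database".

```java
import java.util.Optional;

public class WhitelistResolution {

    // Hypothetical sketch of the proposed whitelist-resolution order.
    static Optional<String> resolveWhitelist(String cliWhitelistPath,
                                             boolean amsWhitelistingEnabled,
                                             String configuredWhitelistPath) {
        if (cliWhitelistPath != null) {
            // Whitelist file passed on the command line: migrate only those metrics.
            return Optional.of(cliWhitelistPath);
        }
        if (amsWhitelistingEnabled) {
            // No CLI parameter, but AMS whitelisting is enabled:
            // fall back to the whitelist file configured in AMS.
            return Optional.of(configuredWhitelistPath);
        }
        // No parameter and whitelisting disabled: no filter, migrate everything.
        return Optional.empty();
    }

    public static void main(String[] args) {
        System.out.println(resolveWhitelist("/tmp/wl", false, null));    // Optional[/tmp/wl]
        System.out.println(resolveWhitelist(null, true, "/etc/ams/wl")); // Optional[/etc/ams/wl]
        System.out.println(resolveWhitelist(null, false, null));         // Optional.empty
    }
}
```

The point of the ordering is that an explicit command-line file always wins, and the "migrate all" behaviour is reached only when both sources are absent.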
[jira] [Commented] (AMBARI-25573) Ambari Metrics save as JSON/CSV use metricName instead of default name.
[ https://issues.apache.org/jira/browse/AMBARI-25573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17222049#comment-17222049 ] Tamas Payer commented on AMBARI-25573: -- [~echohlne], thank you for the contribution! > Ambari Metrics save as JSON/CSV use metricName instead of default name. > > > Key: AMBARI-25573 > URL: https://issues.apache.org/jira/browse/AMBARI-25573 > Project: Ambari > Issue Type: Improvement >Affects Versions: 2.7.5 >Reporter: akiyamaneko >Priority: Major > Labels: pull-request-available, widget > Fix For: 2.7.6 > > Time Spent: 1h 10m > Remaining Estimate: 0h > > The name of the metrics data exported in Ambari-web is always data.json/csv, > and the exported file must be renamed to distinguish which metric the data > belongs to. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (AMBARI-25573) Ambari Metrics save as JSON/CSV use metricName instead of default name.
[ https://issues.apache.org/jira/browse/AMBARI-25573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer updated AMBARI-25573: - Labels: pull-request-available widget (was: ) > Ambari Metrics save as JSON/CSV use metricName instead of default name. > > > Key: AMBARI-25573 > URL: https://issues.apache.org/jira/browse/AMBARI-25573 > Project: Ambari > Issue Type: Improvement >Affects Versions: 2.7.5 >Reporter: akiyamaneko >Priority: Major > Labels: pull-request-available, widget > Time Spent: 1h 10m > Remaining Estimate: 0h > > The name of the metrics data exported in Ambari-web is always data.json/csv, > and the exported file must be renamed to distinguish which metric the data > belongs to. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (AMBARI-25573) Ambari Metrics save as JSON/CSV use metricName instead of default name.
[ https://issues.apache.org/jira/browse/AMBARI-25573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer resolved AMBARI-25573. -- Resolution: Fixed > Ambari Metrics save as JSON/CSV use metricName instead of default name. > > > Key: AMBARI-25573 > URL: https://issues.apache.org/jira/browse/AMBARI-25573 > Project: Ambari > Issue Type: Improvement >Affects Versions: 2.7.5 >Reporter: akiyamaneko >Priority: Major > Labels: pull-request-available, widget > Fix For: 2.7.6 > > Time Spent: 1h 10m > Remaining Estimate: 0h > > The name of the metrics data exported in Ambari-web is always data.json/csv, > and the exported file must be renamed to distinguish which metric the data > belongs to. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (AMBARI-25573) Ambari Metrics save as JSON/CSV use metricName instead of default name.
[ https://issues.apache.org/jira/browse/AMBARI-25573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer updated AMBARI-25573: - Fix Version/s: 2.7.6 > Ambari Metrics save as JSON/CSV use metricName instead of default name. > > > Key: AMBARI-25573 > URL: https://issues.apache.org/jira/browse/AMBARI-25573 > Project: Ambari > Issue Type: Improvement >Affects Versions: 2.7.5 >Reporter: akiyamaneko >Priority: Major > Labels: pull-request-available, widget > Fix For: 2.7.6 > > Time Spent: 1h 10m > Remaining Estimate: 0h > > The name of the metrics data exported in Ambari-web is always data.json/csv, > and the exported file must be renamed to distinguish which metric the data > belongs to. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (AMBARI-25570) Move DataNode RPC metrics graphs to HDFS DataNode dashboard and fix the metric name change
[ https://issues.apache.org/jira/browse/AMBARI-25570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer updated AMBARI-25570: - Labels: grafana pull-request-available (was: ) > Move DataNode RPC metrics graphs to HDFS DataNode dashboard and fix the > metric name change > -- > > Key: AMBARI-25570 > URL: https://issues.apache.org/jira/browse/AMBARI-25570 > Project: Ambari > Issue Type: Bug > Components: ambari-metrics >Affects Versions: 2.7.4 >Reporter: akiyamaneko >Priority: Major > Labels: grafana, pull-request-available > Fix For: 2.7.6 > > Attachments: RPC Service Port metrics shows No datapoints.png > > > Some RPC Service Port metrics are missing from the Grafana NameNode page and show > 'No Data Points', as the attachment shows. > Ambari Version: 2.7.4 > Grafana URL: http://${Grafana_HOST}:3000/dashboard/db/hdfs-namenodes > The abnormal metrics are shown as follows: > ||metric||widget name|| > |rpc.rpc.datanode.RpcQueueTimeAvgTime|RPC Service Port Queue Time| > |rpc.rpc.datanode.RpcQueueTimeNumOps|RPC Service Port Queue Num Ops| > |rpc.rpc.datanode.RpcProcessingTimeAvgTime|RPC Service Port Processing Time| > |rpc.rpc.datanode.RpcProcessingTimeNumOps|RPC Service Port Processing Num Ops| > |rpc.rpc.datanode.CallQueueLength|RPC Service Port Call Queue Length| > |rpc.rpc.datanode.RpcSlowCalls|RPC Service Port Slow Calls| -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (AMBARI-25570) Move DataNode RPC metrics graphs to HDFS DataNode dashboard and fix the metric name change
[ https://issues.apache.org/jira/browse/AMBARI-25570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer resolved AMBARI-25570. -- Resolution: Fixed Thanks to [~echohlne] for the fix! > Move DataNode RPC metrics graphs to HDFS DataNode dashboard and fix the > metric name change > -- > > Key: AMBARI-25570 > URL: https://issues.apache.org/jira/browse/AMBARI-25570 > Project: Ambari > Issue Type: Bug > Components: ambari-metrics >Affects Versions: 2.7.4 >Reporter: akiyamaneko >Priority: Major > Labels: grafana, pull-request-available > Fix For: 2.7.6 > > Attachments: RPC Service Port metrics shows No datapoints.png > > > Some RPC Service Port metrics are missing from the Grafana NameNode page and show > 'No Data Points', as the attachment shows. > Ambari Version: 2.7.4 > Grafana URL: http://${Grafana_HOST}:3000/dashboard/db/hdfs-namenodes > The abnormal metrics are shown as follows: > ||metric||widget name|| > |rpc.rpc.datanode.RpcQueueTimeAvgTime|RPC Service Port Queue Time| > |rpc.rpc.datanode.RpcQueueTimeNumOps|RPC Service Port Queue Num Ops| > |rpc.rpc.datanode.RpcProcessingTimeAvgTime|RPC Service Port Processing Time| > |rpc.rpc.datanode.RpcProcessingTimeNumOps|RPC Service Port Processing Num Ops| > |rpc.rpc.datanode.CallQueueLength|RPC Service Port Call Queue Length| > |rpc.rpc.datanode.RpcSlowCalls|RPC Service Port Slow Calls| -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (AMBARI-25570) Move DataNode RPC metrics graphs to HDFS DataNode dashboard and fix the metric name change
[ https://issues.apache.org/jira/browse/AMBARI-25570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer updated AMBARI-25570: - Summary: Move DataNode RPC metrics graphs to HDFS DataNode dashboard and fix the metric name change (was: Some RPC metrics are lost in the Grafana NameNode page, shows: No Data Points) > Move DataNode RPC metrics graphs to HDFS DataNode dashboard and fix the > metric name change > -- > > Key: AMBARI-25570 > URL: https://issues.apache.org/jira/browse/AMBARI-25570 > Project: Ambari > Issue Type: Bug > Components: ambari-metrics >Affects Versions: 2.7.4 >Reporter: akiyamaneko >Priority: Major > Fix For: 2.7.6 > > Attachments: RPC Service Port metrics shows No datapoints.png > > > Some RPC Service Port metrics are missing from the Grafana NameNode page and show > 'No Data Points', as the attachment shows. > Ambari Version: 2.7.4 > Grafana URL: http://${Grafana_HOST}:3000/dashboard/db/hdfs-namenodes > The abnormal metrics are shown as follows: > ||metric||widget name|| > |rpc.rpc.datanode.RpcQueueTimeAvgTime|RPC Service Port Queue Time| > |rpc.rpc.datanode.RpcQueueTimeNumOps|RPC Service Port Queue Num Ops| > |rpc.rpc.datanode.RpcProcessingTimeAvgTime|RPC Service Port Processing Time| > |rpc.rpc.datanode.RpcProcessingTimeNumOps|RPC Service Port Processing Num Ops| > |rpc.rpc.datanode.CallQueueLength|RPC Service Port Call Queue Length| > |rpc.rpc.datanode.RpcSlowCalls|RPC Service Port Slow Calls| -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (AMBARI-25572) Metrics cannot be stored if mutation size is not set properly
Tamas Payer created AMBARI-25572: Summary: Metrics cannot be stored if mutation size is not set properly Key: AMBARI-25572 URL: https://issues.apache.org/jira/browse/AMBARI-25572 Project: Ambari Issue Type: Task Components: ambari-metrics Affects Versions: 2.7.3, 2.7.4, 2.7.5 Reporter: Tamas Payer Ambari Metrics Collector sometimes fails to store metrics because the batch size settings do not fit the current row size, so the assembled batch probably does not fit into the maximum batch memory space. {code:java} 2020-10-20 10:29:04,752 WARN org.apache.ambari.metrics.core.timeline.PhoenixHBaseAccessor: Failed on insert records to store : ERROR 730 (LIM02): MutationState size is bigger than maximum allowed number of bytes 2020-10-20 10:29:04,752 WARN org.apache.ambari.metrics.core.timeline.PhoenixHBaseAccessor: Metric that cannot be stored : [kafka.server.ConsumerGroupMetrics.CommittedOffset.clientId.-.group.console-consumer-46399.partition.0.topic.PREPROD_KAFKA_CBS_TRANSACTION._sum,kafka_broker]{1603186125049=2.65274693E8}{code} This error can be eliminated by tweaking the *phoenix.mutate.batchSize* and *phoenix.mutate.batchSizeBytes* configuration settings. *Assess the feasibility of changing the AMS implementation so that it retries upon failure after adjusting the auto-commit or batch size (or whatever is needed for a successful commit) and notifies the user about suboptimal batch size settings.* -- This message was sent by Atlassian Jira (v8.3.4#803005)
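The retry behaviour proposed above could look roughly like the sketch below. This is a hypothetical illustration, not code from PhoenixHBaseAccessor: on a rejected batch (e.g. "MutationState size is bigger than maximum allowed number of bytes") the writer halves the batch size and retries the same slice, only surfacing the error once a single-row batch still fails. All names here (BatchWriter, writeWithRetry) are invented for the sketch.

```java
import java.util.ArrayList;
import java.util.List;

public class BatchRetrySketch {

    interface BatchWriter {
        void write(List<String> batch) throws Exception;
    }

    // Write all rows, shrinking the batch whenever the store rejects one.
    static void writeWithRetry(BatchWriter writer, List<String> rows, int initialBatchSize)
            throws Exception {
        int size = Math.max(1, initialBatchSize);
        int from = 0;
        while (from < rows.size()) {
            int to = Math.min(from + size, rows.size());
            try {
                writer.write(rows.subList(from, to));
                from = to;                    // batch committed, advance
            } catch (Exception e) {
                if (size == 1) {
                    throw e;                  // a single row still fails: give up
                }
                size = Math.max(1, size / 2); // halve the batch, retry this slice,
                                              // and warn about the suboptimal setting
                System.out.println("Batch rejected, retrying with batch size " + size);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        List<String> rows = new ArrayList<>();
        for (int i = 0; i < 10; i++) rows.add("metric-" + i);
        List<String> stored = new ArrayList<>();
        // Toy store that rejects any batch larger than 3 rows.
        writeWithRetry(batch -> {
            if (batch.size() > 3) throw new Exception("MutationState size too big");
            stored.addAll(batch);
        }, rows, 8);
        System.out.println(stored.size()); // all 10 rows end up stored
    }
}
```

The sketch keeps the retry local to the failing slice, so already-committed batches are never resent.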
[jira] [Resolved] (AMBARI-25568) The 'NodeManager RAM Utilized' metric in the heatmap of YARN does not show the unit
[ https://issues.apache.org/jira/browse/AMBARI-25568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer resolved AMBARI-25568. -- Resolution: Fixed [~echohlne] Thanks for the contribution! Closing the issue. > The 'NodeManager RAM Utilized' metric in the heatmap of YARN does not show > the unit > --- > > Key: AMBARI-25568 > URL: https://issues.apache.org/jira/browse/AMBARI-25568 > Project: Ambari > Issue Type: Improvement >Affects Versions: 2.7.4 >Reporter: akiyamaneko >Priority: Trivial > Fix For: 2.7.6 > > Attachments: NodeManager RAM Utilized no unit.png > > > The 'NodeManager RAM Utilized' metric in the heatmap of YARN does not show > the unit, as the attachment shows. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (AMBARI-25569) Reassess Ambari Metrics data migration
[ https://issues.apache.org/jira/browse/AMBARI-25569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer resolved AMBARI-25569. -- Resolution: Fixed > Reassess Ambari Metrics data migration > -- > > Key: AMBARI-25569 > URL: https://issues.apache.org/jira/browse/AMBARI-25569 > Project: Ambari > Issue Type: Task > Components: ambari-metrics >Affects Versions: 2.7.3, 2.7.4, 2.7.5 >Reporter: Tamas Payer >Assignee: Tamas Payer >Priority: Major > Labels: metric-collector, migration, pull-request-available > Fix For: 2.7.6 > > Time Spent: 0.5h > Remaining Estimate: 0h > > The data migration process of Ambari Metrics as described at > [https://docs.cloudera.com/HDPDocuments/Ambari-2.7.5.0/bk_ambari-upgrade-major/content/upgrading_HDP_post_upgrade_tasks.html] > is causing issues, like not migrating data that would be expected by the > user. (e.g. Yarn Queue metrics other than the root queue's.) > The data migration is usually called by the > > {code:java} > /usr/sbin/ambari-metrics-collector --config > /etc/ambari-metrics-collector/conf/ upgrade_start > /etc/ambari-metrics-collector/conf/metrics_whitelist > {code} > command where the whitelist is specified. > The migration code only looks for the metrics that are present in the > whitelist file. This is true even in the case when the AMS Whitelisting is > not enabled. The user will only have those metrics migrated that are present > in the whitelist file, which is usually not all that are required. 
> > I suggest the following change: > - If the whitelist file parameter is *provided* then > - migrate only the metrics that are in the whitelist file > - if the whitelist file parameter is *not provided* then > - if whitelisting is *enabled* then > - discover the whitelist file configured in AMS and > migrate only the metrics that are in the whitelist file > - if whitelisting is *disabled* then > - migrate *all the metrics* present in the database > > Furthermore, the migration process frequently dies silently while saving the > metadata. The log message "Saving metadata to store..." is present in the > logs, but the "Metadata was saved." message is mostly never there, and there are no > other error messages. > I suggest revising the current solution, where the saving of the metadata is > triggered in a Shutdown hook. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (AMBARI-25569) Reassess Ambari Metrics data migration
[ https://issues.apache.org/jira/browse/AMBARI-25569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer updated AMBARI-25569: - Description: The data migration process of Ambari Metrics as described at [https://docs.cloudera.com/HDPDocuments/Ambari-2.7.5.0/bk_ambari-upgrade-major/content/upgrading_HDP_post_upgrade_tasks.html] is causing issues, like not migrating data that would be expected by the user. (e.g. Yarn Queue metrics other than the root queue's.) The data migration is usually called by the {code:java} /usr/sbin/ambari-metrics-collector --config /etc/ambari-metrics-collector/conf/ upgrade_start /etc/ambari-metrics-collector/conf/metrics_whitelist {code} command where the whitelist is specified. The migration code only looks for the metrics that are present in the whitelist file. This is true even in the case when the AMS Whitelisting is not enabled. The user will only have those metrics migrated that are present in the whitelist file, which is usually not all that are required. I suggest the following change: - If whitelist file parameter i*s provided* then - migrate only the metrics that are in the whitelist file - if whitelist file parameter is *not provided* then - if whitelisting is *enabled* then - discover the whitelist file configured in AMS and migrate only the metrics that are in the whitelist file - if whitelisting is *disabled* then - migrate *all the metrics* present in the database Furthermore, the migration process frequently dies silently while saving the metadata. The log message "Saving metadata to store..." is present in the logs but the "Metadata was saved." is mostly never there, but there are no other error messages. I suggest revising the current solution where the saving of the metadata is triggered in a Shutdown hook. 
was: The data migration process of Ambari Metrics as described at https://docs.cloudera.com/cdp-private-cloud/latest/upgrade-hdp/topics/amb-migrate-amb-metrics-data.html is causing issues, like not migrating data that would be expected by the user. (e.g. Yarn Queue metrics other than the root queue's.) The data migration is usually called by the {code:java} /usr/sbin/ambari-metrics-collector --config /etc/ambari-metrics-collector/conf/ upgrade_start /etc/ambari-metrics-collector/conf/metrics_whitelist {code} command where the whitelist is specified. The migration code only looks for the metrics that are present in the whitelist file. This is true even in the case when the AMS Whitelisting is not enabled. The user will only have those metrics migrated that are present in the whitelist file, which is usually not all that are required. I suggest the following change: - If whitelist file parameter i*s provided* then - migrate only the metrics that are in the whitelist file - if whitelist file parameter is *not provided* then - if whitelisting is *enabled* then - discover the whitelist file configured in AMS and migrate only the metrics that are in the whitelist file - if whitelisting is *disabled* then - migrate *all the metrics* present in the database Furthermore, the migration process frequently dies silently while saving the metadata. The log message "Saving metadata to store..." is present in the logs but the "Metadata was saved." is mostly never there, but there are no other error messages. I suggest revising the current solution where the saving of the metadata is triggered in a Shutdown hook. 
> Reassess Ambari Metrics data migration > -- > > Key: AMBARI-25569 > URL: https://issues.apache.org/jira/browse/AMBARI-25569 > Project: Ambari > Issue Type: Task > Components: ambari-metrics >Affects Versions: 2.7.3, 2.7.4, 2.7.5 >Reporter: Tamas Payer >Assignee: Tamas Payer >Priority: Major > Labels: metric-collector, migration, pull-request-available > Fix For: 2.7.6 > > Time Spent: 20m > Remaining Estimate: 0h > > The data migration process of Ambari Metrics as described at > [https://docs.cloudera.com/HDPDocuments/Ambari-2.7.5.0/bk_ambari-upgrade-major/content/upgrading_HDP_post_upgrade_tasks.html] > is causing issues, like not migrating data that would be expected by the > user. (e.g. Yarn Queue metrics other than the root queue's.) > The data migration is usually called by the > > {code:java} > /usr/sbin/ambari-metrics-collector --config > /etc/ambari-metrics-collector/conf/ upgrade_start > /etc/ambari-metrics-collector/conf/metrics_whitelist > {code} > command where the whitelist is specified. > The migration code only looks for the metrics that are present in the > whitelist file. This is true even in the case when the AMS Whitelisting is > not enabled. The user wil
[jira] [Updated] (AMBARI-25569) Reassess Ambari Metrics data migration
[ https://issues.apache.org/jira/browse/AMBARI-25569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer updated AMBARI-25569: - Description: The data migration process of Ambari Metrics as described at https://docs.cloudera.com/cdp-private-cloud/latest/upgrade-hdp/topics/amb-migrate-amb-metrics-data.html is causing issues, like not migrating data that would be expected by the user. (e.g. Yarn Queue metrics other than the root queue's.) The data migration is usually called by the {code:java} /usr/sbin/ambari-metrics-collector --config /etc/ambari-metrics-collector/conf/ upgrade_start /etc/ambari-metrics-collector/conf/metrics_whitelist {code} command where the whitelist is specified. The migration code only looks for the metrics that are present in the whitelist file. This is true even in the case when the AMS Whitelisting is not enabled. The user will only have those metrics migrated that are present in the whitelist file, which is usually not all that are required. I suggest the following change: - If whitelist file parameter i*s provided* then - migrate only the metrics that are in the whitelist file - if whitelist file parameter is *not provided* then - if whitelisting is *enabled* then - discover the whitelist file configured in AMS and migrate only the metrics that are in the whitelist file - if whitelisting is *disabled* then - migrate *all the metrics* present in the database Furthermore, the migration process frequently dies silently while saving the metadata. The log message "Saving metadata to store..." is present in the logs but the "Metadata was saved." is mostly never there, but there are no other error messages. I suggest revising the current solution where the saving of the metadata is triggered in a Shutdown hook. 
was: The data migration process of Ambari Metrics as described at [https://docs.cloudera.com/HDPDocuments/Ambari-2.7.5.0/bk_ambari-upgrade-major/content/upgrading_HDP_post_upgrade_tasks.html] is causing issues, like not migrating data that would be expected by the user. (e.g. Yarn Queue metrics other than the root queue's.) The data migration is usually called by the {code:java} /usr/sbin/ambari-metrics-collector --config /etc/ambari-metrics-collector/conf/ upgrade_start /etc/ambari-metrics-collector/conf/metrics_whitelist {code} command where the whitelist is specified. The migration code only looks for the metrics that are present in the whitelist file. This is true even in the case when the AMS Whitelisting is not enabled. The user will only have those metrics migrated that are present in the whitelist file, which is usually not all that are required. I suggest the following change: - If whitelist file parameter i*s provided* then - migrate only the metrics that are in the whitelist file - if whitelist file parameter is *not provided* then - if whitelisting is *enabled* then - discover the whitelist file configured in AMS and migrate only the metrics that are in the whitelist file - if whitelisting is *disabled* then - migrate *all the metrics* present in the database Furthermore, the migration process frequently dies silently while saving the metadata. The log message "Saving metadata to store..." is present in the logs but the "Metadata was saved." is mostly never there, but there are no other error messages. I suggest revising the current solution where the saving of the metadata is triggered in a Shutdown hook. 
> Reassess Ambari Metrics data migration > -- > > Key: AMBARI-25569 > URL: https://issues.apache.org/jira/browse/AMBARI-25569 > Project: Ambari > Issue Type: Task > Components: ambari-metrics >Affects Versions: 2.7.3, 2.7.4, 2.7.5 >Reporter: Tamas Payer >Assignee: Tamas Payer >Priority: Major > Labels: metric-collector, migration, pull-request-available > Fix For: 2.7.6 > > Time Spent: 20m > Remaining Estimate: 0h > > The data migration process of Ambari Metrics as described at > https://docs.cloudera.com/cdp-private-cloud/latest/upgrade-hdp/topics/amb-migrate-amb-metrics-data.html > is causing issues, like not migrating data that would be expected by the > user. (e.g. Yarn Queue metrics other than the root queue's.) > The data migration is usually called by the > > {code:java} > /usr/sbin/ambari-metrics-collector --config > /etc/ambari-metrics-collector/conf/ upgrade_start > /etc/ambari-metrics-collector/conf/metrics_whitelist > {code} > command where the whitelist is specified. > The migration code only looks for the metrics that are present in the > whitelist file. This is true even in the case when the AMS Whitelisting is > not enabled. The user will only have those metrics mig
[jira] [Updated] (AMBARI-25569) Reassess Ambari Metrics data migration
[ https://issues.apache.org/jira/browse/AMBARI-25569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer updated AMBARI-25569: - Labels: metric-collector migration pull-request-available (was: metric-collector migration) > Reassess Ambari Metrics data migration > -- > > Key: AMBARI-25569 > URL: https://issues.apache.org/jira/browse/AMBARI-25569 > Project: Ambari > Issue Type: Task > Components: ambari-metrics >Affects Versions: 2.7.3, 2.7.4, 2.7.5 >Reporter: Tamas Payer >Assignee: Tamas Payer >Priority: Major > Labels: metric-collector, migration, pull-request-available > Fix For: 2.7.6 > > Time Spent: 10m > Remaining Estimate: 0h > > The data migration process of Ambari Metrics as described at > [https://docs.cloudera.com/HDPDocuments/Ambari-2.7.5.0/bk_ambari-upgrade-major/content/upgrading_HDP_post_upgrade_tasks.html] > is causing issues, like not migrating data that would be expected by the > user. (e.g. Yarn Queue metrics other than the root queue's.) > The data migration is usually called by the > > {code:java} > /usr/sbin/ambari-metrics-collector --config > /etc/ambari-metrics-collector/conf/ upgrade_start > /etc/ambari-metrics-collector/conf/metrics_whitelist > {code} > command where the whitelist is specified. > The migration code only looks for the metrics that are present in the > whitelist file. This is true even in the case when the AMS Whitelisting is > not enabled. The user will only have those metrics migrated that are present > in the whitelist file, which is usually not all that are required. 
> > I suggest the following change: > - If the whitelist file parameter is *provided* then > - migrate only the metrics that are in the whitelist file > - if the whitelist file parameter is *not provided* then > - if whitelisting is *enabled* then > - discover the whitelist file configured in AMS and > migrate only the metrics that are in the whitelist file > - if whitelisting is *disabled* then > - migrate *all the metrics* present in the database > > Furthermore, the migration process frequently dies silently while saving the > metadata. The log message "Saving metadata to store..." is present in the > logs, but the "Metadata was saved." message is mostly never there, and there are no > other error messages. > I suggest revising the current solution, where the saving of the metadata is > triggered in a Shutdown hook. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (AMBARI-25569) Reassess Ambari Metrics data migration
Tamas Payer created AMBARI-25569: Summary: Reassess Ambari Metrics data migration Key: AMBARI-25569 URL: https://issues.apache.org/jira/browse/AMBARI-25569 Project: Ambari Issue Type: Task Components: ambari-metrics Affects Versions: 2.7.3, 2.7.4, 2.7.5 Reporter: Tamas Payer Assignee: Tamas Payer Fix For: 2.7.6 The data migration process of Ambari Metrics as described at [https://docs.cloudera.com/HDPDocuments/Ambari-2.7.5.0/bk_ambari-upgrade-major/content/upgrading_HDP_post_upgrade_tasks.html] is causing issues, like not migrating data that would be expected by the user. (e.g. Yarn Queue metrics other than the root queue's.) The data migration is usually called by the {code:java} /usr/sbin/ambari-metrics-collector --config /etc/ambari-metrics-collector/conf/ upgrade_start /etc/ambari-metrics-collector/conf/metrics_whitelist {code} command where the whitelist is specified. The migration code only looks for the metrics that are present in the whitelist file. This is true even in the case when the AMS Whitelisting is not enabled. The user will only have those metrics migrated that are present in the whitelist file, which is usually not all that are required. I suggest the following change: - If the whitelist file parameter is *provided* then - migrate only the metrics that are in the whitelist file - if the whitelist file parameter is *not provided* then - if whitelisting is *enabled* then - discover the whitelist file configured in AMS and migrate only the metrics that are in the whitelist file - if whitelisting is *disabled* then - migrate *all the metrics* present in the database Furthermore, the migration process frequently dies silently while saving the metadata. The log message "Saving metadata to store..." is present in the logs, but the "Metadata was saved." message is mostly never there, and there are no other error messages. I suggest revising the current solution, where the saving of the metadata is triggered in a Shutdown hook. 
-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (AMBARI-25563) Storm dashboards are not showing metrics
Tamas Payer created AMBARI-25563: Summary: Storm dashboards are not showing metrics Key: AMBARI-25563 URL: https://issues.apache.org/jira/browse/AMBARI-25563 Project: Ambari Issue Type: Bug Components: ambari-metrics Affects Versions: 2.7.5 Reporter: Tamas Payer Assignee: Tamas Payer Fix For: 2.7.6 All the Storm dashboards are failing to show metric data. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (AMBARI-25549) NegativeArraySizeException thrown when invoking CurrentCollectorHost
[ https://issues.apache.org/jira/browse/AMBARI-25549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer resolved AMBARI-25549. -- Resolution: Fixed > NegativeArraySizeException thrown when invoking CurrentCollectorHost > > > Key: AMBARI-25549 > URL: https://issues.apache.org/jira/browse/AMBARI-25549 > Project: Ambari > Issue Type: Bug > Components: ambari-metrics >Affects Versions: 2.6.1, 2.7.3, 2.7.4, 2.7.5 >Reporter: Tamas Payer >Assignee: Tamas Payer >Priority: Major > Fix For: 2.7.6 > > Time Spent: 0.5h > Remaining Estimate: 0h > > SMM is using the *AbstractTimelineMetricsSink* class to fetch and push metrics to > the Ambari Metrics Collector in a multi-threaded manner. > When all the AMS live nodes are > down ([http://localhost:6188/ws/v1/timeline/metrics/livenodes]), the method > [getCurrentCollectorHost|https://github.com/apache/ambari-metrics/blob/c7dcf2b25241e2cfe6931d6261a43be97e0deaba/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/AbstractTimelineMetricsSink.java#L273] > throws _NegativeArraySizeException_. > {code:java} > java.lang.NegativeArraySizeException: null at > java.util.AbstractCollection.toArray(AbstractCollection.java:136) at > java.util.ArrayList.<init>(ArrayList.java:178) at > org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSink$1.get(AbstractTimelineMetricsSink.java:460) > at > org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSink$1.get(AbstractTimelineMetricsSink.java:450) > at > org.apache.hadoop.metrics2.sink.relocated.google.common.base.Suppliers$ExpiringMemoizingSupplier.get(Suppliers.java:192) > at > org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSink.getCurrentCollectorHost(AbstractTimelineMetricsSink.java:264) > at > com.hortonworks.smm.kafka.services.metric.ams.AMSMetricsFetcher.getCollectorAPIUri(AMSMetricsFetcher.java:231) > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
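For context on the stack trace above: AbstractCollection.toArray allocates its result array from size(), so if another thread mutates a non-thread-safe collection while new ArrayList<>(collection) is copying it, the observed size can be inconsistent and the allocation can fail. The sketch below shows the generic defensive pattern of snapshotting from a concurrent collection; it assumes nothing about the actual Ambari fix, and all names (liveHosts, snapshotHosts) are invented for the illustration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class SnapshotSketch {

    // Shared list of live collector hosts, potentially mutated by refresh threads.
    // CopyOnWriteArrayList.toArray() returns a consistent snapshot of the backing
    // array, so copying it can never observe a torn size() the way copying an
    // unsynchronized ArrayList can.
    private static final List<String> liveHosts = new CopyOnWriteArrayList<>();

    // Safe defensive copy for callers that want to iterate or shuffle the hosts.
    static List<String> snapshotHosts() {
        return new ArrayList<>(liveHosts);
    }

    public static void main(String[] args) {
        liveHosts.add("c7401.ambari.apache.org");
        liveHosts.add("c7402.ambari.apache.org");
        System.out.println(snapshotHosts());
    }
}
```

The same effect can be had by synchronizing every reader and writer on the shared list, but a copy-on-write collection keeps the read path lock-free, which suits a mostly-read host list.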
[jira] [Updated] (AMBARI-25549) NegativeArraySizeException thrown when invoking CurrentCollectorHost
[ https://issues.apache.org/jira/browse/AMBARI-25549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer updated AMBARI-25549: - Description: SMM uses the *AbstractTimelineMetricsSink* class to fetch and push metrics to the Ambari Metrics Collector in a multi-threaded manner. When all the AMS live nodes are down ([http://localhost:6188/ws/v1/timeline/metrics/livenodes]), the method [getCurrentCollectorHost|https://github.com/apache/ambari-metrics/blob/c7dcf2b25241e2cfe6931d6261a43be97e0deaba/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/AbstractTimelineMetricsSink.java#L273] throws _NegativeArraySizeException_. {code:java} java.lang.NegativeArraySizeException: null at java.util.AbstractCollection.toArray(AbstractCollection.java:136) at java.util.ArrayList.<init>(ArrayList.java:178) at org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSink$1.get(AbstractTimelineMetricsSink.java:460) at org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSink$1.get(AbstractTimelineMetricsSink.java:450) at org.apache.hadoop.metrics2.sink.relocated.google.common.base.Suppliers$ExpiringMemoizingSupplier.get(Suppliers.java:192) at org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSink.getCurrentCollectorHost(AbstractTimelineMetricsSink.java:264) at com.hortonworks.smm.kafka.services.metric.ams.AMSMetricsFetcher.getCollectorAPIUri(AMSMetricsFetcher.java:231) {code} was: We use the *AbstractTimelineMetricsSink* class to fetch and push metrics to the Ambari Metrics Collector. When all the AMS live nodes are down ([http://localhost:6188/ws/v1/timeline/metrics/livenodes]), the method [getCurrentCollectorHost|https://github.com/apache/ambari-metrics/blob/c7dcf2b25241e2cfe6931d6261a43be97e0deaba/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/AbstractTimelineMetricsSink.java#L273] throws _NegativeArraySizeException_. 
{code:java} java.lang.NegativeArraySizeException: null at java.util.AbstractCollection.toArray(AbstractCollection.java:136) at java.util.ArrayList.<init>(ArrayList.java:178) at org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSink$1.get(AbstractTimelineMetricsSink.java:460) at org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSink$1.get(AbstractTimelineMetricsSink.java:450) at org.apache.hadoop.metrics2.sink.relocated.google.common.base.Suppliers$ExpiringMemoizingSupplier.get(Suppliers.java:192) at org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSink.getCurrentCollectorHost(AbstractTimelineMetricsSink.java:264) at com.hortonworks.smm.kafka.services.metric.ams.AMSMetricsFetcher.getCollectorAPIUri(AMSMetricsFetcher.java:231) {code} > NegativeArraySizeException thrown when invoking CurrentCollectorHost > > > Key: AMBARI-25549 > URL: https://issues.apache.org/jira/browse/AMBARI-25549 > Project: Ambari > Issue Type: Bug > Components: ambari-metrics >Affects Versions: 2.6.1, 2.7.3, 2.7.4, 2.7.5 >Reporter: Tamas Payer >Assignee: Tamas Payer >Priority: Major > Fix For: 2.7.6 > > > SMM uses the *AbstractTimelineMetricsSink* class to fetch and push metrics to > the Ambari Metrics Collector in a multi-threaded manner. > When all the AMS live nodes are > down ([http://localhost:6188/ws/v1/timeline/metrics/livenodes]), the method > [getCurrentCollectorHost|https://github.com/apache/ambari-metrics/blob/c7dcf2b25241e2cfe6931d6261a43be97e0deaba/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/AbstractTimelineMetricsSink.java#L273] > throws _NegativeArraySizeException_. 
> {code:java} > java.lang.NegativeArraySizeException: null at > java.util.AbstractCollection.toArray(AbstractCollection.java:136) at > java.util.ArrayList.<init>(ArrayList.java:178) at > org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSink$1.get(AbstractTimelineMetricsSink.java:460) > at > org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSink$1.get(AbstractTimelineMetricsSink.java:450) > at > org.apache.hadoop.metrics2.sink.relocated.google.common.base.Suppliers$ExpiringMemoizingSupplier.get(Suppliers.java:192) > at > org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSink.getCurrentCollectorHost(AbstractTimelineMetricsSink.java:264) > at > com.hortonworks.smm.kafka.services.metric.ams.AMSMetricsFetcher.getCollectorAPIUri(AMSMetricsFetcher.java:231) > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (AMBARI-25549) NegativeArraySizeException thrown when invoking CurrentCollectorHost
Tamas Payer created AMBARI-25549: Summary: NegativeArraySizeException thrown when invoking CurrentCollectorHost Key: AMBARI-25549 URL: https://issues.apache.org/jira/browse/AMBARI-25549 Project: Ambari Issue Type: Bug Components: ambari-metrics Affects Versions: 2.6.1, 2.7.3, 2.7.4, 2.7.5 Reporter: Tamas Payer Assignee: Tamas Payer Fix For: 2.7.6 We use the *AbstractTimelineMetricsSink* class to fetch and push metrics to the Ambari Metrics Collector. When all the AMS live nodes are down ([http://localhost:6188/ws/v1/timeline/metrics/livenodes]), the method [getCurrentCollectorHost|https://github.com/apache/ambari-metrics/blob/c7dcf2b25241e2cfe6931d6261a43be97e0deaba/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/AbstractTimelineMetricsSink.java#L273] throws _NegativeArraySizeException_. {code:java} java.lang.NegativeArraySizeException: null at java.util.AbstractCollection.toArray(AbstractCollection.java:136) at java.util.ArrayList.<init>(ArrayList.java:178) at org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSink$1.get(AbstractTimelineMetricsSink.java:460) at org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSink$1.get(AbstractTimelineMetricsSink.java:450) at org.apache.hadoop.metrics2.sink.relocated.google.common.base.Suppliers$ExpiringMemoizingSupplier.get(Suppliers.java:192) at org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSink.getCurrentCollectorHost(AbstractTimelineMetricsSink.java:264) at com.hortonworks.smm.kafka.services.metric.ams.AMSMetricsFetcher.getCollectorAPIUri(AMSMetricsFetcher.java:231) {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
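The crash pattern above arises from copying a live-node collection into an `ArrayList` while another thread empties it, so the snapshot's reported size can go negative. One defensive reading of the fix is to take a single snapshot and fall back to the statically configured collectors when it comes up empty. The sketch below illustrates that idea only; the function and parameter names are made up and this is not the actual Ambari patch:

```python
import random

def pick_collector_host(live_hosts, configured_hosts):
    """Pick a collector host without assuming live_hosts stays stable.

    live_hosts may be mutated concurrently by a refresher thread, so we
    take one defensive snapshot instead of indexing into it repeatedly.
    """
    snapshot = list(live_hosts or [])  # single copy; later mutation is harmless
    if not snapshot:
        # All collectors are down: degrade to the configured host list
        # (or None) rather than raising from an empty/negative-sized copy.
        return configured_hosts[0] if configured_hosts else None
    return random.choice(snapshot)
```

Under this sketch, a caller such as a metrics fetcher gets a usable host or `None` when every collector is down, instead of an unhandled `NegativeArraySizeException`.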
[jira] [Created] (AMBARI-25496) Upgrade pre-check fails when attempting to upgrade from HDP 2.6 to 7.1 on ubuntu18
Tamas Payer created AMBARI-25496: Summary: Upgrade pre-check fails when attempting to upgrade from HDP 2.6 to 7.1 on ubuntu18 Key: AMBARI-25496 URL: https://issues.apache.org/jira/browse/AMBARI-25496 Project: Ambari Issue Type: Bug Components: ambari-server Affects Versions: 2.6.2 Environment: ubuntu18 Reporter: Tamas Payer Assignee: Tamas Payer Fix For: 2.7.6 When trying to run an Express Upgrade from HDP 2.6.5 to the latest version on ubuntu18, the upgrade pre-checks failed. When we hit the API below: [http://example.com:8080/api/v1/clusters/cl1/rolling_upgrades_check?fields=*&UpgradeChecks/repository_version_id=51&UpgradeChecks/upgrade_type=NON_ROLLING|http://172.27.30.73:8080/api/v1/clusters/cl1/rolling_upgrades_check?fields=*&UpgradeChecks/repository_version_id=51&UpgradeChecks/upgrade_type=NON_ROLLING] {code:java} { "href" : "http://172.27.30.73:8080/api/v1/clusters/cl1/rolling_upgrades_check/MISSING_OS_IN_REPO_VERSION", "UpgradeChecks" : { "check" : "Missing OS in repository version.", "check_type" : "CLUSTER", "cluster_name" : "cl1", "failed_detail" : [ ], "failed_on" : [ "ubuntu18" ], "id" : "MISSING_OS_IN_REPO_VERSION", "reason" : "The source version must have an entry for each OS type in the cluster", "repository_version_id" : 51, "status" : "FAIL", "upgrade_type" : "NON_ROLLING" } } {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
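The pre-check response shown above is plain JSON, so a failing check can be inspected programmatically. A minimal sketch follows; the field names come from the response quoted in this report, while the helper name is made up for illustration:

```python
import json

def failed_upgrade_checks(response_text):
    """Return (check_id, failed_on) when a pre-check response has status FAIL,
    or None when the check passed."""
    body = json.loads(response_text)
    checks = body["UpgradeChecks"]
    if checks["status"] == "FAIL":
        return checks["id"], checks["failed_on"]
    return None

# Trimmed version of the response quoted in the report.
response = """{
  "UpgradeChecks": {
    "id": "MISSING_OS_IN_REPO_VERSION",
    "status": "FAIL",
    "failed_on": ["ubuntu18"],
    "reason": "The source version must have an entry for each OS type in the cluster"
  }
}"""
```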
[jira] [Resolved] (AMBARI-25457) Hive 3 Grafana dashboards showing outdated metrics
[ https://issues.apache.org/jira/browse/AMBARI-25457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer resolved AMBARI-25457. -- Resolution: Fixed > Hive 3 Grafana dashboards showing outdated metrics > --- > > Key: AMBARI-25457 > URL: https://issues.apache.org/jira/browse/AMBARI-25457 > Project: Ambari > Issue Type: Bug > Components: ambari-metrics >Affects Versions: 2.7.3, 2.7.4, 2.7.5 >Reporter: Tamas Payer >Assignee: Tamas Payer >Priority: Major > Labels: grafana, hive, metrics, pull-request-available > Time Spent: 20m > Remaining Estimate: 0h > > Some of the metric names have been changed in Hive 3. Due to this change many > graphs show no data on Hive Home, HiveServer2 and HiveMetastore dashboards. > Suggested changes: > +HiveServer2+ > The _default.General.api_get_all_databases_ and > _default.General.api_get_partitions_by_names_ metrics are not provided by > HiveServer2 anymore - only by HiveMetastore - so the "API Times" row with the > two charts has been removed. 
> +Hive Metastore+ > |*Original Name*| *New Name*| > |default.General.api_get_all_databases_75thpercentile|default.General.api_get_databases_75thpercentile| > |default.General.api_get_all_databases_999thpercentile|default.General.api_get_databases_999thpercentile| > |default.General.memory.heap.max|default.General.heap.max| > |default.General.memory.heap.used|default.General.heap.used| > |default.General.memory.heap.committed|default.General.heap.committed| > |default.General.memory.non-heap.max|default.General.non-heap.max| > |default.General.memory.non-heap.used|default.General.non-heap.used| > |default.General.memory.non-heap.committed|default.General.non-heap.committed| > > +Hive Home+ > > | *Original Name*| *New Name*| > |default.General.init_total_count_db|default.General.total_count_dbs| > |default.General.init_total_count_tables|default.General.total_count_tables| > |default.General.init_total_count_partitions|default.General.total_count_partitions| > |default.General.api_create_table_count|api_create_table_req_count| > |default.General.memory.heap.max|default.General.heap.max| > |default.General.memory.heap.used|default.General.heap.used| > |default.General.memory.heap.committed|default.General.heap.committed| > |default.General.memory.non-heap.max|default.General.non-heap.max| -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (AMBARI-25457) Hive 3 Grafana dashboards showing outdated metrics
[ https://issues.apache.org/jira/browse/AMBARI-25457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer updated AMBARI-25457: - Description: Some of the metric names have been changed in Hive 3. Due to this change many graphs show no data on Hive Home, HiveServer2 and HiveMetastore dashboards. Suggested changes: +HiveServer2+ The _default.General.api_get_all_databases_ and _default.General.api_get_partitions_by_names_ metrics are not provided by HiveServer2 anymore - only by HiveMetastore - so the "API Times" row with the two charts has been removed. +Hive Metastore+ |*Original Name*| *New Name*| |default.General.api_get_all_databases_75thpercentile|default.General.api_get_databases_75thpercentile| |default.General.api_get_all_databases_999thpercentile|default.General.api_get_databases_999thpercentile| |default.General.memory.heap.max|default.General.heap.max| |default.General.memory.heap.used|default.General.heap.used| |default.General.memory.heap.committed|default.General.heap.committed| |default.General.memory.non-heap.max|default.General.non-heap.max| |default.General.memory.non-heap.used|default.General.non-heap.used| |default.General.memory.non-heap.committed|default.General.non-heap.committed| +Hive Home+ | *Original Name*| *New Name*| |default.General.init_total_count_db|default.General.total_count_dbs| |default.General.init_total_count_tables|default.General.total_count_tables| |default.General.init_total_count_partitions|default.General.total_count_partitions| |default.General.api_create_table_count|api_create_table_req_count| |default.General.memory.heap.max|default.General.heap.max| |default.General.memory.heap.used|default.General.heap.used| |default.General.memory.heap.committed|default.General.heap.committed| |default.General.memory.non-heap.max|default.General.non-heap.max| was: Some of the metric names have been changed in Hive 3. 
Due to this change many graphs show no data on Hive Home, HiveServer2 and HiveMetastore dashboards. Suggested changes: +HiveServer2+ The _default.General.api_get_all_databases_ and _default.General.api_get_partitions_by_names_ metrics are not provided by HiveServer2 anymore - only by HiveMetastore - so the "API Times" row with the two charts has been removed. +Hive Metastore+ |*Original Name*| *New Name*| |default.General.api_get_all_databases_75thpercentile|default.General.api_get_databases_75thpercentile| |default.General.api_get_all_databases_999thpercentile|default.General.api_get_databases_999thpercentile| |default.General.memory.heap.max|default.General.heap.max| |default.General.memory.heap.used|default.General.heap.used| |default.General.memory.heap.committed|default.General.heap.committed| |default.General.memory.non-heap.max|default.General.non-heap.max| |default.General.memory.non-heap.used|default.General.non-heap.used| |default.General.memory.non-heap.committed|default.General.non-heap.committed| +Hive Home+ | *Original Name*| *New Name*| |default.General.init_total_count_db|default.General.total_count_dbs| |default.General.init_total_count_tables|default.General.total_count_tables| |default.General.init_total_count_partitions|default.General.total_count_partitions| |default.General.api_create_table_count|api_create_table_req_count| |default.General.memory.heap.max|default.General.heap.max| |default.General.memory.heap.used|default.General.heap.used| |default.General.memory.heap.committed|default.General.heap.committed| |default.General.memory.non-heap.max|default.General.non-heap.max| > Hive 3 Grafana dashboards showing outdated metrics > --- > > Key: AMBARI-25457 > URL: https://issues.apache.org/jira/browse/AMBARI-25457 > Project: Ambari > Issue Type: Bug > Components: ambari-metrics >Affects Versions: 2.7.3, 2.7.4, 2.7.5 >Reporter: Tamas Payer >Assignee: Tamas Payer >Priority: Major > Labels: grafana, hive, metrics > > Some of the metric names have
been changed in Hive 3. Due to this change many > graphs show no data on Hive Home, HiveServer2 and HiveMetastore dashboards. > Suggested changes: > +HiveServer2+ > The _default.General.api_get_all_databases_ and > _default.General.api_get_partitions_by_names_ metrics are not provided by > HiveServer2 anymore - only by HiveMetastore - so the "API Times" row with the > two charts has been removed. > +Hive Metastore+ > |*Original Name*| *New Name*| > |default.General.api_get_all_databases_75thpercentile|default.General.api_get_databases_75thpercentile| > |default.General.api_get_all_databases_999thpercentile|default.General.api_get_databases_999thpercentile| > |default.General.memory.heap.max|default.General.heap.max| > |default.General.memory.heap.used|default.General.heap.used| > |default.General.memory.heap.committed|default.General.heap.committed|
[jira] [Created] (AMBARI-25457) Hive 3 Grafana dashboards showing outdated metrics
Tamas Payer created AMBARI-25457: Summary: Hive 3 Grafana dashboards showing outdated metrics Key: AMBARI-25457 URL: https://issues.apache.org/jira/browse/AMBARI-25457 Project: Ambari Issue Type: Bug Components: ambari-metrics Affects Versions: 2.7.3, 2.7.4, 2.7.5 Reporter: Tamas Payer Assignee: Tamas Payer Some of the metric names have been changed in Hive 3. Due to this change many graphs show no data on Hive Home, HiveServer2 and HiveMetastore dashboards. Suggested changes: +HiveServer2+ The _default.General.api_get_all_databases_ and _default.General.api_get_partitions_by_names_ metrics are not provided by HiveServer2 anymore - only by HiveMetastore - so the "API Times" row with the two charts has been removed. +Hive Metastore+ |*Original Name*| *New Name*| |default.General.api_get_all_databases_75thpercentile|default.General.api_get_databases_75thpercentile| |default.General.api_get_all_databases_999thpercentile|default.General.api_get_databases_999thpercentile| |default.General.memory.heap.max|default.General.heap.max| |default.General.memory.heap.used|default.General.heap.used| |default.General.memory.heap.committed|default.General.heap.committed| |default.General.memory.non-heap.max|default.General.non-heap.max| |default.General.memory.non-heap.used|default.General.non-heap.used| |default.General.memory.non-heap.committed|default.General.non-heap.committed| +Hive Home+ | *Original Name*| *New Name*| |default.General.init_total_count_db|default.General.total_count_dbs| |default.General.init_total_count_tables|default.General.total_count_tables| |default.General.init_total_count_partitions|default.General.total_count_partitions| |default.General.api_create_table_count|api_create_table_req_count| |default.General.memory.heap.max|default.General.heap.max| |default.General.memory.heap.used|default.General.heap.used| |default.General.memory.heap.committed|default.General.heap.committed| |default.General.memory.non-heap.max|default.General.non-heap.max| -- This 
message was sent by Atlassian Jira (v8.3.4#803005)
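The rename tables above are a straight one-to-one mapping, so updating a dashboard query amounts to a dictionary lookup. A sketch using a few of the pairs listed in the report (the function name and the idea of rewriting queries this way are illustrative, not the actual dashboard patch):

```python
# Old Hive metric name -> new Hive 3 name; pairs taken from the tables above.
HIVE3_RENAMES = {
    "default.General.api_get_all_databases_75thpercentile": "default.General.api_get_databases_75thpercentile",
    "default.General.memory.heap.max": "default.General.heap.max",
    "default.General.memory.heap.used": "default.General.heap.used",
    "default.General.init_total_count_tables": "default.General.total_count_tables",
}

def rewrite_metric(name):
    """Map a pre-Hive-3 metric name to its Hive 3 equivalent, if renamed."""
    return HIVE3_RENAMES.get(name, name)
```

Metrics that were not renamed pass through unchanged, so the same rewrite can be applied to every query in a dashboard definition.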
[jira] [Resolved] (AMBARI-25379) Upgrade AMS Grafana version to 6.4.2
[ https://issues.apache.org/jira/browse/AMBARI-25379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer resolved AMBARI-25379. -- Resolution: Fixed > Upgrade AMS Grafana version to 6.4.2 > > > Key: AMBARI-25379 > URL: https://issues.apache.org/jira/browse/AMBARI-25379 > Project: Ambari > Issue Type: Task > Components: ambari-metrics >Affects Versions: 2.7.4 >Reporter: Tamas Payer >Assignee: Tamas Payer >Priority: Major > Labels: metrics, pull-request-available > Fix For: 2.7.5 > > Time Spent: 3.5h > Remaining Estimate: 0h > > Upgrade AMS Grafana version to 6.4.2 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Reopened] (AMBARI-25379) Upgrade AMS Grafana version to 6.4.2
[ https://issues.apache.org/jira/browse/AMBARI-25379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer reopened AMBARI-25379: -- Reopened due to minor Grafana dashboard related issues. > Upgrade AMS Grafana version to 6.4.2 > > > Key: AMBARI-25379 > URL: https://issues.apache.org/jira/browse/AMBARI-25379 > Project: Ambari > Issue Type: Task > Components: ambari-metrics >Affects Versions: 2.7.4 >Reporter: Tamas Payer >Assignee: Tamas Payer >Priority: Major > Labels: metrics, pull-request-available > Fix For: 2.7.5 > > Time Spent: 2h > Remaining Estimate: 0h > > Upgrade AMS Grafana version to 6.4.2 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (AMBARI-25379) Upgrade AMS Grafana version to 6.4.2
[ https://issues.apache.org/jira/browse/AMBARI-25379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer updated AMBARI-25379: - Resolution: Fixed Status: Resolved (was: Patch Available) > Upgrade AMS Grafana version to 6.4.2 > > > Key: AMBARI-25379 > URL: https://issues.apache.org/jira/browse/AMBARI-25379 > Project: Ambari > Issue Type: Task > Components: ambari-metrics >Affects Versions: 2.7.4 >Reporter: Tamas Payer >Assignee: Tamas Payer >Priority: Major > Labels: metrics, pull-request-available > Fix For: 2.7.5 > > Time Spent: 2h > Remaining Estimate: 0h > > Upgrade AMS Grafana version to 6.4.2 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (AMBARI-25379) Upgrade AMS Grafana version to 6.4.2
[ https://issues.apache.org/jira/browse/AMBARI-25379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer updated AMBARI-25379: - Status: Patch Available (was: Reopened) > Upgrade AMS Grafana version to 6.4.2 > > > Key: AMBARI-25379 > URL: https://issues.apache.org/jira/browse/AMBARI-25379 > Project: Ambari > Issue Type: Task > Components: ambari-metrics >Affects Versions: 2.7.4 >Reporter: Tamas Payer >Assignee: Tamas Payer >Priority: Major > Labels: metrics, pull-request-available > Fix For: 2.7.5 > > Time Spent: 2h > Remaining Estimate: 0h > > Upgrade AMS Grafana version to 6.4.2 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Reopened] (AMBARI-25379) Upgrade AMS Grafana version to 6.4.2
[ https://issues.apache.org/jira/browse/AMBARI-25379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer reopened AMBARI-25379: -- Reopened, because on Debian and Ubuntu the plugins/ambari-metrics directory and its content are missing. > Upgrade AMS Grafana version to 6.4.2 > > > Key: AMBARI-25379 > URL: https://issues.apache.org/jira/browse/AMBARI-25379 > Project: Ambari > Issue Type: Task > Components: ambari-metrics >Affects Versions: 2.7.4 >Reporter: Tamas Payer >Assignee: Tamas Payer >Priority: Major > Labels: metrics, pull-request-available > Fix For: 2.7.5 > > Time Spent: 1h 20m > Remaining Estimate: 0h > > Upgrade AMS Grafana version to 6.4.2 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (AMBARI-25379) Upgrade AMS Grafana version to 6.4.2
[ https://issues.apache.org/jira/browse/AMBARI-25379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer resolved AMBARI-25379. -- Fix Version/s: 2.7.5 Resolution: Fixed > Upgrade AMS Grafana version to 6.4.2 > > > Key: AMBARI-25379 > URL: https://issues.apache.org/jira/browse/AMBARI-25379 > Project: Ambari > Issue Type: Task > Components: ambari-metrics >Affects Versions: 2.7.4 >Reporter: Tamas Payer >Assignee: Tamas Payer >Priority: Major > Labels: metrics, pull-request-available > Fix For: 2.7.5 > > Time Spent: 1h 20m > Remaining Estimate: 0h > > Upgrade AMS Grafana version to 6.4.2 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (AMBARI-25379) Upgrade AMS Grafana version to 6.4.2
[ https://issues.apache.org/jira/browse/AMBARI-25379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer updated AMBARI-25379: - Summary: Upgrade AMS Grafana version to 6.4.2 (was: Upgrade AMS Grafana version to 6.3.5) > Upgrade AMS Grafana version to 6.4.2 > > > Key: AMBARI-25379 > URL: https://issues.apache.org/jira/browse/AMBARI-25379 > Project: Ambari > Issue Type: Task > Components: ambari-metrics >Affects Versions: 2.7.4 >Reporter: Tamas Payer >Assignee: Tamas Payer >Priority: Major > Labels: metrics > > Upgrade AMS Grafana version to 6.3.5 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (AMBARI-25379) Upgrade AMS Grafana version to 6.4.2
[ https://issues.apache.org/jira/browse/AMBARI-25379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer updated AMBARI-25379: - Description: Upgrade AMS Grafana version to 6.4.2 (was: Upgrade AMS Grafana version to 6.3.5) > Upgrade AMS Grafana version to 6.4.2 > > > Key: AMBARI-25379 > URL: https://issues.apache.org/jira/browse/AMBARI-25379 > Project: Ambari > Issue Type: Task > Components: ambari-metrics >Affects Versions: 2.7.4 >Reporter: Tamas Payer >Assignee: Tamas Payer >Priority: Major > Labels: metrics > > Upgrade AMS Grafana version to 6.4.2 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (AMBARI-25383) Ambari Metrics whitelisting is failing on * wildcard for Kafka Topics
[ https://issues.apache.org/jira/browse/AMBARI-25383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer updated AMBARI-25383: - Description: On Ambari 2.6.x and 2.7.x if the Ambari Metrics Collector whitelisting is enabled, the Kafka Topics are not discovered on the Kafka topics dashboard. (The Topics dropdown is empty.) It can be remediated by adding '._p_kafka.log.Log.*' to the whitelist file and restarting the Metrics Collector. Adding the '._p_kafka.log.Log.*' to the whitelist file by default should be considered. Also it should be investigated why we need the "._p_" prefix. Furthermore, it seems that the metrics enabled in the whitelist file as: {code:java} kafka.server.BrokerTopicMetrics.BytesInPerSec.topic.*.count kafka.server.BrokerTopicMetrics.BytesOutPerSec.topic.*.count kafka.server.BrokerTopicMetrics.MessagesInPerSec.topic.*.count kafka.server.BrokerTopicMetrics.TotalProduceRequestsPerSec.topic.*.count {code} are filtered out and not shown on Grafana dashboard. The issue can be worked around by adding the '._p_' prefix to the corresponding metrics in the whitelist file, e.g. ._p_kafka.server.BrokerTopicMetrics.BytesInPerSec.topic.*.count . was: On Ambari 2.6.x and 2.7.x if the Ambari Metrics Collector whitelisting is enabled, the Kafka Topics are not discovered on the Kafka topics dashboard. (The Topics dropdown is empty.) It can be remediated by adding '._p_kafka.log.Log.*' to the whitelist file and restarting the Metrics Collector. Adding the '._p_kafka.log.Log.*' to the whitelist file by default should be considered. Also it should be investigated why we need the "._p_" prefix. 
Furthermore, it seems that the metrics enabled in the whitelist file as: {code:java} kafka.server.BrokerTopicMetrics.BytesInPerSec.topic.*.count kafka.server.BrokerTopicMetrics.BytesOutPerSec.topic.*.count kafka.server.BrokerTopicMetrics.MessagesInPerSec.topic.*.count kafka.server.BrokerTopicMetrics.TotalProduceRequestsPerSec.topic.*.count {code} are filtered out and not shown on Grafana dashboard. > Ambari Metrics whitelisting is failing on * wildcard for Kafka Topics > - > > Key: AMBARI-25383 > URL: https://issues.apache.org/jira/browse/AMBARI-25383 > Project: Ambari > Issue Type: Task > Components: ambari-metrics >Affects Versions: 2.6.0, 2.6.1, 2.6.2, 2.7.3, 2.7.4 >Reporter: Tamas Payer >Assignee: Tamas Payer >Priority: Major > Labels: grafana, kafka, metrics > > On Ambari 2.6.x and 2.7.x if the Ambari Metrics Collector whitelisting is > enabled, the Kafka Topics are not discovered on the Kafka topics dashboard. > (The Topics dropdown is empty.) > It can be remediated by adding '._p_kafka.log.Log.*' to the whitelist file > and restarting the Metrics Collector. > Adding the '._p_kafka.log.Log.*' to the whitelist file by default should be > considered. Also it should be investigated why we need the "._p_" prefix. > Furthermore, it seems that the metrics enabled in the whitelist file as: > {code:java} > kafka.server.BrokerTopicMetrics.BytesInPerSec.topic.*.count > kafka.server.BrokerTopicMetrics.BytesOutPerSec.topic.*.count > kafka.server.BrokerTopicMetrics.MessagesInPerSec.topic.*.count > kafka.server.BrokerTopicMetrics.TotalProduceRequestsPerSec.topic.*.count > {code} > are filtered out and not shown on Grafana dashboard. > The issue can be worked around by adding the '._p_' prefix to the > corresponding metrics in the whitelist file, e.g. > ._p_kafka.server.BrokerTopicMetrics.BytesInPerSec.topic.*.count . -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (AMBARI-25383) Ambari Metrics whitelisting is failing on * wildcard for Kafka Topics
[ https://issues.apache.org/jira/browse/AMBARI-25383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer updated AMBARI-25383: - Description: On Ambari 2.6.x and 2.7.x if the Ambari Metrics Collector whitelisting is enabled, the Kafka Topics are not discovered on the Kafka topics dashboard. (The Topics dropdown is empty.) It can be remediated by adding '._p_kafka.log.Log.*' to the whitelist file and restarting the Metrics Collector. Adding the '._p_kafka.log.Log.*' to the whitelist file by default should be considered. Also it should be investigated why we need the "._p_" prefix. Furthermore, it seems that the metrics enabled in the whitelist file as: {code:java} kafka.server.BrokerTopicMetrics.BytesInPerSec.topic.*.count kafka.server.BrokerTopicMetrics.BytesOutPerSec.topic.*.count kafka.server.BrokerTopicMetrics.MessagesInPerSec.topic.*.count kafka.server.BrokerTopicMetrics.TotalProduceRequestsPerSec.topic.*.count {code} are filtered out and not shown on Grafana dashboard. was: On Ambari 2.6.2 if the Ambari Metrics Collector whitelisting is enabled, the Kafka Topics are not discovered on the Kafka topics dashboard. (The Topics dropdown is empty.) It can be remediated by adding '._p_kafka.log.Log.*' to the whitelist file and restarting the Metrics Collector. Adding the '._p_kafka.log.Log.*' to the whitelist file by default should be considered. Also it should be investigated why we need the "._p_" prefix. Furthermore, it seems that the metrics enabled in the whitelist file as: {code:java} kafka.server.BrokerTopicMetrics.BytesInPerSec.topic.*.count kafka.server.BrokerTopicMetrics.BytesOutPerSec.topic.*.count kafka.server.BrokerTopicMetrics.MessagesInPerSec.topic.*.count kafka.server.BrokerTopicMetrics.TotalProduceRequestsPerSec.topic.*.count {code} are filtered out and not shown on Grafana dashboard. 
> Ambari Metrics whitelisting is failing on * wildcard for Kafka Topics > - > > Key: AMBARI-25383 > URL: https://issues.apache.org/jira/browse/AMBARI-25383 > Project: Ambari > Issue Type: Task > Components: ambari-metrics >Affects Versions: 2.6.0, 2.6.1, 2.6.2, 2.7.3, 2.7.4 >Reporter: Tamas Payer >Assignee: Tamas Payer >Priority: Major > Labels: grafana, kafka, metrics > > On Ambari 2.6.x and 2.7.x if the Ambari Metrics Collector whitelisting is > enabled, the Kafka Topics are not discovered on the Kafka topics dashboard. > (The Topics dropdown is empty.) > It can be remediated by adding '._p_kafka.log.Log.*' to the whitelist file > and restarting the Metrics Collector. > Adding the '._p_kafka.log.Log.*' to the whitelist file by default should be > considered. Also it should be investigated why we need the "._p_" prefix. > Furthermore, it seems that the metrics enabled in the whitelist file as: > {code:java} > kafka.server.BrokerTopicMetrics.BytesInPerSec.topic.*.count > kafka.server.BrokerTopicMetrics.BytesOutPerSec.topic.*.count > kafka.server.BrokerTopicMetrics.MessagesInPerSec.topic.*.count > kafka.server.BrokerTopicMetrics.TotalProduceRequestsPerSec.topic.*.count > {code} > are filtered out and not shown on Grafana dashboard. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (AMBARI-25383) Ambari Metrics whitelisting is failing on * wildcard for Kafka Topics
[ https://issues.apache.org/jira/browse/AMBARI-25383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer updated AMBARI-25383: - Labels: grafana kafka metrics (was: metrics) > Ambari Metrics whitelisting is failing on * wildcard for Kafka Topics > - > > Key: AMBARI-25383 > URL: https://issues.apache.org/jira/browse/AMBARI-25383 > Project: Ambari > Issue Type: Task > Components: ambari-metrics >Affects Versions: 2.6.0, 2.6.1, 2.6.2, 2.7.3, 2.7.4 >Reporter: Tamas Payer >Assignee: Tamas Payer >Priority: Major > Labels: grafana, kafka, metrics > > On Ambari 2.6.2 if the Ambari Metrics Collector whitelisting is enabled, the > Kafka Topics are not discovered on the Kafka topics dashboard. (The Topics > dropdown is empty.) > It can be remediated by adding '._p_kafka.log.Log.*' to the whitelist file > and restarting the Metrics Collector. > Adding the '._p_kafka.log.Log.*' to the whitelist file by default should be > considered. Also it should be investigated why we need the "._p_" prefix. > Furthermore, it seems that the metrics enabled in the whitelist file as: > {code:java} > kafka.server.BrokerTopicMetrics.BytesInPerSec.topic.*.count > kafka.server.BrokerTopicMetrics.BytesOutPerSec.topic.*.count > kafka.server.BrokerTopicMetrics.MessagesInPerSec.topic.*.count > kafka.server.BrokerTopicMetrics.TotalProduceRequestsPerSec.topic.*.count > {code} > are filtered out and not shown on Grafana dashboard. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (AMBARI-25383) Ambari Metrics whitelisting is failing on * wildcard for Kafka Topics
Tamas Payer created AMBARI-25383: Summary: Ambari Metrics whitelisting is failing on * wildcard for Kafka Topics Key: AMBARI-25383 URL: https://issues.apache.org/jira/browse/AMBARI-25383 Project: Ambari Issue Type: Task Components: ambari-metrics Affects Versions: 2.6.0, 2.6.1, 2.6.2, 2.7.3, 2.7.4 Reporter: Tamas Payer Assignee: Tamas Payer On Ambari 2.6.2 if the Ambari Metrics Collector whitelisting is enabled, the Kafka Topics are not discovered on the Kafka topics dashboard. (The Topics dropdown is empty.) It can be remediated by adding '._p_kafka.log.Log.*' to the whitelist file and restarting the Metrics Collector. Adding the '._p_kafka.log.Log.*' to the whitelist file by default should be considered. Also it should be investigated why we need the "._p_" prefix. Furthermore, it seems that the metrics enabled in the whitelist file as: {code:java} kafka.server.BrokerTopicMetrics.BytesInPerSec.topic.*.count kafka.server.BrokerTopicMetrics.BytesOutPerSec.topic.*.count kafka.server.BrokerTopicMetrics.MessagesInPerSec.topic.*.count kafka.server.BrokerTopicMetrics.TotalProduceRequestsPerSec.topic.*.count {code} are filtered out and not shown on Grafana dashboard. -- This message was sent by Atlassian Jira (v8.3.4#803005)
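The behaviour described above, where `*` entries only take effect once a `._p_` prefix is added, is consistent with the whitelist treating plain entries as literal metric names and `._p_`-prefixed entries as regex patterns. The sketch below illustrates that reading; it is an assumption based on this report, not the actual AMS whitelist implementation:

```python
import re

PATTERN_PREFIX = "._p_"  # assumed marker for "treat the rest as a pattern"

def load_whitelist(entries):
    """Split whitelist entries into literal names and compiled patterns."""
    literals, patterns = set(), []
    for entry in entries:
        if entry.startswith(PATTERN_PREFIX):
            # Everything after '._p_' is interpreted as a regex,
            # so '*' behaves as a wildcard in this branch.
            patterns.append(re.compile(entry[len(PATTERN_PREFIX):]))
        else:
            literals.add(entry)  # matched verbatim; '*' has no special meaning
    return literals, patterns

def is_whitelisted(metric, literals, patterns):
    return metric in literals or any(p.fullmatch(metric) for p in patterns)
```

Under this reading, `kafka.server.BrokerTopicMetrics.BytesInPerSec.topic.*.count` without the prefix only matches a metric whose name literally contains `*`, which would explain why the per-topic metrics are filtered out until the `._p_` prefix is added.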
[jira] [Updated] (AMBARI-25379) Upgrade AMS Grafana version to 6.3.5
[ https://issues.apache.org/jira/browse/AMBARI-25379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer updated AMBARI-25379: - Labels: metrics (was: ) > Upgrade AMS Grafana version to 6.3.5 > > > Key: AMBARI-25379 > URL: https://issues.apache.org/jira/browse/AMBARI-25379 > Project: Ambari > Issue Type: Task > Components: ambari-metrics >Affects Versions: 2.7.4 >Reporter: Tamas Payer >Assignee: Tamas Payer >Priority: Major > Labels: metrics > > Upgrade AMS Grafana version to 6.3.5 -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Created] (AMBARI-25379) Upgrade AMS Grafana version to 6.3.5
Tamas Payer created AMBARI-25379: Summary: Upgrade AMS Grafana version to 6.3.5 Key: AMBARI-25379 URL: https://issues.apache.org/jira/browse/AMBARI-25379 Project: Ambari Issue Type: Task Components: ambari-metrics Affects Versions: 2.7.4 Reporter: Tamas Payer Assignee: Tamas Payer Upgrade AMS Grafana version to 6.3.5 -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Updated] (AMBARI-10047) 500 error on installing kerberos clients
[ https://issues.apache.org/jira/browse/AMBARI-10047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer updated AMBARI-10047: - Description: During the Kerberos client installation stage, a 500 error appears with the text: {code:java} An internal system exception occurred; Unexpected error condition executing kadmin command {code} This is due to a bad search path used to find kadmin. The path should be user-configurable. was: On stage of installing kerberos clients appears error 500 with text: {code} An internal system exception occurred; Unexpected error condition executing kadmin command {code} This is due to a bad search path used to find kadmin. It should be able to be set by a user. > 500 error on installing kerberos clients > > > Key: AMBARI-10047 > URL: https://issues.apache.org/jira/browse/AMBARI-10047 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Affects Versions: 2.0.0 >Reporter: Robert Levas >Assignee: Robert Levas >Priority: Blocker > Labels: kerberos > Fix For: 2.0.0 > > > During the Kerberos client installation stage, a 500 error appears with the text: > {code:java} > An internal system exception occurred; Unexpected error condition executing > kadmin command > {code} > This is due to a bad search path used to find kadmin. The path should be > user-configurable. -- This message was sent by Atlassian Jira (v8.3.2#803003)
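The fix direction described above — a user-settable kadmin search path — can be sketched as follows. The function name, parameter, and fallback behavior are hypothetical illustrations, not the actual Ambari patch:

```python
import shutil

def find_kadmin(configured_path=None):
    # Honor an explicitly configured executable path if the user set one;
    # otherwise fall back to searching PATH (returns None when not found).
    if configured_path:
        return configured_path
    return shutil.which("kadmin")
```

The point of the design is that a hard-coded search path can never cover every distribution's layout, so an explicit override must win over PATH discovery.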
[jira] [Updated] (AMBARI-25370) Producer and Consumer Request /s graphs are failing on Kafka Grafana dashboards
[ https://issues.apache.org/jira/browse/AMBARI-25370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer updated AMBARI-25370: - Resolution: Fixed Status: Resolved (was: Patch Available) > Producer and Consumer Request /s graphs are failing on Kafka Grafana dashboards > -- > > Key: AMBARI-25370 > URL: https://issues.apache.org/jira/browse/AMBARI-25370 > Project: Ambari > Issue Type: Bug > Components: ambari-metrics >Affects Versions: 2.7.3, 2.7.4 >Reporter: Tamas Payer >Assignee: Tamas Payer >Priority: Major > Labels: pull-request-available > Time Spent: 1h > Remaining Estimate: 0h > > Since Kafka 2.0.0 a *version* tag has been added to the > kafka.network.RequestMetrics.RequestsPerSec.request.* metrics. > This breaks the default Grafana dashboards provided by Ambari. On > the *Kafka - Home* and *Kafka - Hosts* dashboards the *Producer requests /s* > and *Consumer requests /s* graphs fail to show any data. > To get the total count for a specific request type, the tool needs to be > updated to aggregate across the different versions. > *Previous metric*: > kafka.network:type=RequestMetrics,name=RequestsPerSec,request= > {Produce|FetchConsumer|FetchFollower|...} > *New metric*: > kafka.network:type=RequestMetrics,name=RequestsPerSec,request=\{Produce|FetchConsumer|FetchFollower|...},version=INTEGER > > Documentation of the Kafka change: > [https://cwiki.apache.org/confluence/display/KAFKA/KIP-272%3A+Add+API+version+tag+to+broker%27s+RequestsPerSec+metric] -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Updated] (AMBARI-25370) Producer and Consumer Request /s graphs are failing on Kafka Grafana dashboards
[ https://issues.apache.org/jira/browse/AMBARI-25370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer updated AMBARI-25370: - Status: Patch Available (was: Open) > Producer and Consumer Request /s graphs are failing on Kafka Grafana dashboards > -- > > Key: AMBARI-25370 > URL: https://issues.apache.org/jira/browse/AMBARI-25370 > Project: Ambari > Issue Type: Bug > Components: ambari-metrics >Affects Versions: 2.7.3, 2.7.4 >Reporter: Tamas Payer >Assignee: Tamas Payer >Priority: Major > Labels: pull-request-available > Time Spent: 50m > Remaining Estimate: 0h > > Since Kafka 2.0.0 a *version* tag has been added to the > kafka.network.RequestMetrics.RequestsPerSec.request.* metrics. > This breaks the default Grafana dashboards provided by Ambari. On > the *Kafka - Home* and *Kafka - Hosts* dashboards the *Producer requests /s* > and *Consumer requests /s* graphs fail to show any data. > To get the total count for a specific request type, the tool needs to be > updated to aggregate across the different versions. > *Previous metric*: > kafka.network:type=RequestMetrics,name=RequestsPerSec,request= > {Produce|FetchConsumer|FetchFollower|...} > *New metric*: > kafka.network:type=RequestMetrics,name=RequestsPerSec,request=\{Produce|FetchConsumer|FetchFollower|...},version=INTEGER > > Documentation of the Kafka change: > [https://cwiki.apache.org/confluence/display/KAFKA/KIP-272%3A+Add+API+version+tag+to+broker%27s+RequestsPerSec+metric] -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Assigned] (AMBARI-25370) Producer and Consumer Request /s graphs are failing on Kafka Grafana dashboards
[ https://issues.apache.org/jira/browse/AMBARI-25370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer reassigned AMBARI-25370: Assignee: Tamas Payer > Producer and Consumer Request /s graphs are failing on Kafka Grafana dashboards > -- > > Key: AMBARI-25370 > URL: https://issues.apache.org/jira/browse/AMBARI-25370 > Project: Ambari > Issue Type: Bug > Components: ambari-metrics >Affects Versions: 2.7.3, 2.7.4 >Reporter: Tamas Payer >Assignee: Tamas Payer >Priority: Major > > Since Kafka 2.0.0 a *version* tag has been added to the > kafka.network.RequestMetrics.RequestsPerSec.request.* metrics. > This breaks the default Grafana dashboards provided by Ambari. On > the *Kafka - Home* and *Kafka - Hosts* dashboards the *Producer requests /s* > and *Consumer requests /s* graphs fail to show any data. > To get the total count for a specific request type, the tool needs to be > updated to aggregate across the different versions. > *Previous metric*: > kafka.network:type=RequestMetrics,name=RequestsPerSec,request= > {Produce|FetchConsumer|FetchFollower|...} > *New metric*: > kafka.network:type=RequestMetrics,name=RequestsPerSec,request=\{Produce|FetchConsumer|FetchFollower|...},version=INTEGER > > Documentation of the Kafka change: > [https://cwiki.apache.org/confluence/display/KAFKA/KIP-272%3A+Add+API+version+tag+to+broker%27s+RequestsPerSec+metric] -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Updated] (AMBARI-25370) Producer and Consumer Request /s graphs are failing on Kafka Grafana dashboards
[ https://issues.apache.org/jira/browse/AMBARI-25370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer updated AMBARI-25370: - Description: Since Kafka 2.0.0 a *version* tag has been added to the kafka.network.RequestMetrics.RequestsPerSec.request.* metrics. This breaks the default Grafana dashboards provided by Ambari. On the *Kafka - Home* and *Kafka - Hosts* dashboards the *Producer requests /s* and *Consumer requests /s* graphs fail to show any data. To get the total count for a specific request type, the tool needs to be updated to aggregate across the different versions. *Previous metric*: kafka.network:type=RequestMetrics,name=RequestsPerSec,request= {Produce|FetchConsumer|FetchFollower|...} *New metric*: kafka.network:type=RequestMetrics,name=RequestsPerSec,request=\{Produce|FetchConsumer|FetchFollower|...},version=INTEGER Documentation of the Kafka change: [https://cwiki.apache.org/confluence/display/KAFKA/KIP-272%3A+Add+API+version+tag+to+broker%27s+RequestsPerSec+metric] was: Since Kafka 2.0.0 a *version* tag has been added to the kafka.network.RequestMetrics.RequestsPerSec.request.* metrics. This breaks the default Grafana dashboard provided by Ambari. To get the total count for a specific request type, the tool needs to be updated to aggregate across the different versions. 
*Previous metric*: kafka.network:type=RequestMetrics,name=RequestsPerSec,request= {Produce|FetchConsumer|FetchFollower|...} *New metric*: kafka.network:type=RequestMetrics,name=RequestsPerSec,request=\{Produce|FetchConsumer|FetchFollower|...},version=INTEGER https://cwiki.apache.org/confluence/display/KAFKA/KIP-272%3A+Add+API+version+tag+to+broker%27s+RequestsPerSec+metric > Producer and Consumer Request /s graphs are failing on Kafka Grafana dashboards > -- > > Key: AMBARI-25370 > URL: https://issues.apache.org/jira/browse/AMBARI-25370 > Project: Ambari > Issue Type: Bug > Components: ambari-metrics >Affects Versions: 2.7.3, 2.7.4 >Reporter: Tamas Payer >Priority: Major > > Since Kafka 2.0.0 a *version* tag has been added to the > kafka.network.RequestMetrics.RequestsPerSec.request.* metrics. > This breaks the default Grafana dashboards provided by Ambari. On > the *Kafka - Home* and *Kafka - Hosts* dashboards the *Producer requests /s* > and *Consumer requests /s* graphs fail to show any data. > To get the total count for a specific request type, the tool needs to be > updated to aggregate across the different versions. > *Previous metric*: > kafka.network:type=RequestMetrics,name=RequestsPerSec,request= > {Produce|FetchConsumer|FetchFollower|...} > *New metric*: > kafka.network:type=RequestMetrics,name=RequestsPerSec,request=\{Produce|FetchConsumer|FetchFollower|...},version=INTEGER > > Documentation of the Kafka change: > [https://cwiki.apache.org/confluence/display/KAFKA/KIP-272%3A+Add+API+version+tag+to+broker%27s+RequestsPerSec+metric] -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Created] (AMBARI-25370) Producer and Consumer Request /s graphs are failing on Kafka Grafana dashboards
Tamas Payer created AMBARI-25370: Summary: Producer and Consumer Request /s graphs are failing on Kafka Grafana dashboards Key: AMBARI-25370 URL: https://issues.apache.org/jira/browse/AMBARI-25370 Project: Ambari Issue Type: Bug Components: ambari-metrics Affects Versions: 2.7.3, 2.7.4 Reporter: Tamas Payer Since Kafka 2.0.0 a *version* tag has been added to the kafka.network.RequestMetrics.RequestsPerSec.request.* metrics. This breaks the default Grafana dashboard provided by Ambari. To get the total count for a specific request type, the tool needs to be updated to aggregate across the different versions. *Previous metric*: kafka.network:type=RequestMetrics,name=RequestsPerSec,request= {Produce|FetchConsumer|FetchFollower|...} *New metric*: kafka.network:type=RequestMetrics,name=RequestsPerSec,request=\{Produce|FetchConsumer|FetchFollower|...},version=INTEGER https://cwiki.apache.org/confluence/display/KAFKA/KIP-272%3A+Add+API+version+tag+to+broker%27s+RequestsPerSec+metric -- This message was sent by Atlassian Jira (v8.3.2#803003)
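The aggregation the dashboards now need — summing RequestsPerSec over the per-version series introduced by KIP-272 — can be sketched as follows. This is illustrative only; the real fix lives in the Grafana dashboard metric queries, and the tuple layout here is an assumption:

```python
from collections import defaultdict

def total_requests_per_sec(samples):
    # samples: (request_type, version, count) tuples, one series per API
    # version as emitted by post-KIP-272 brokers. Summing over the version
    # dimension recovers the pre-2.0.0 per-request-type totals.
    totals = defaultdict(float)
    for request, _version, count in samples:
        totals[request] += count
    return dict(totals)
```

Without this collapse over the `version` tag, a dashboard query pinned to the old untagged metric name matches no series, which is why the Producer/Consumer graphs show no data.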
[jira] [Assigned] (AMBARI-23469) HostCleanup.py script is failing with AttributeError: 'NoneType' object has no attribute 'get'
[ https://issues.apache.org/jira/browse/AMBARI-23469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer reassigned AMBARI-23469: Assignee: Tamas Payer > HostCleanup.py script is failing with AttributeError: 'NoneType' object has > no attribute 'get' > -- > > Key: AMBARI-23469 > URL: https://issues.apache.org/jira/browse/AMBARI-23469 > Project: Ambari > Issue Type: Bug > Components: ambari-agent >Affects Versions: trunk, 2.7.0 >Reporter: JaySenSharma >Assignee: Tamas Payer >Priority: Major > Labels: pull-request-available > Time Spent: 0.5h > Remaining Estimate: 0h > > - While running HostCleanup.py on Ambari 2.7.0, it fails with the > following error: > {code} > # /usr/lib/ambari-agent/lib/ambari_agent/HostCleanup.py --silent --verbose > Traceback (most recent call last): > File "/usr/lib/ambari-agent/lib/ambari_agent/HostCleanup.py", line 710, in > <module> > main() > File "/usr/lib/ambari-agent/lib/ambari_agent/HostCleanup.py", line 701, in > main > h.do_cleanup(propMap) > File "/usr/lib/ambari-agent/lib/ambari_agent/HostCleanup.py", line 144, in > do_cleanup > procList = proc_map.get(PROCESS_KEY) > AttributeError: 'NoneType' object has no attribute 'get' > {code} > - The Python version used is: > {code} > # python --version > Python 2.7.5 > {code} -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Assigned] (AMBARI-25341) SmartSense API call fails with Unsupported Media Type
[ https://issues.apache.org/jira/browse/AMBARI-25341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Payer reassigned AMBARI-25341: Assignee: Tamas Payer > SmartSense API call fails with Unsupported Media Type > - > > Key: AMBARI-25341 > URL: https://issues.apache.org/jira/browse/AMBARI-25341 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Affects Versions: 2.7.4 >Reporter: Tamas Payer >Assignee: Tamas Payer >Priority: Blocker > Labels: pull-request-available > Fix For: 2.7.4 > > Time Spent: 0.5h > Remaining Estimate: 0h > > AMBARI-9016 is causing a regression in SmartSense by inappropriately changing > the Content-Type to "text/plain" for every request with an "application/json" > Content-Type. This is a proper workaround for the Ambari API but not for > SmartSense. > Calling the SmartSense API is failing with: > {code:java} > HTTP/1.1 415 Unsupported Media Type{code} > > {code:java} > curl > 'https://172.27.122.133:8443/api/v1/views/SMARTSENSE/versions/1.5.1.2.7.4.0-92/instances/SMARTSENSE_AUTO_INSTANCE/resources/hst/bundles' > -H 'Cookie: AMBARISESSIONID=node0mmlb052gnyw31rznh33qguoem22.node0; > SUPPORTSESSIONID=1nlk0dmxktunz1bvjr9259adjf' -H 'Origin: > https://172.27.122.133:8443' -H 'Accept-Encoding: gzip, deflate, br' -H > 'Accept-Language: en-US,en;q=0.9' -H 'X-Requested-By: ambari' -H 'User-Agent: > Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_4) AppleWebKit/537.36 (KHTML, > like Gecko) Chrome/75.0.3770.100 Safari/537.36' -H 'Content-Type: > application/json; charset=UTF-8' -H 'Accept: application/json, > text/javascript, */*; q=0.01' -H 'Referer: > https://172.27.122.133:8443/views/SMARTSENSE/1.5.1.2.7.4.0-92/SMARTSENSE_AUTO_INSTANCE/' > -H 'X-Requested-With: XMLHttpRequest' -H 'Connection: keep-alive' -H > 'withCredentials: true' --data-binary '{"caseNumber":0}' --compressed > --insecure -v -u * > * Trying 172.27.122.133... 
> * TCP_NODELAY set > * Connected to 172.27.122.133 (172.27.122.133) port 8443 (#0) > * ALPN, offering h2 > * ALPN, offering http/1.1 > * Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH > * TLSv1.2 (OUT), TLS handshake, Client hello (1): > * TLSv1.2 (IN), TLS handshake, Server hello (2): > * TLSv1.2 (IN), TLS handshake, Certificate (11): > * TLSv1.2 (IN), TLS handshake, Server key exchange (12): > * TLSv1.2 (IN), TLS handshake, Server finished (14): > * TLSv1.2 (OUT), TLS handshake, Client key exchange (16): > * TLSv1.2 (OUT), TLS change cipher, Client hello (1): > * TLSv1.2 (OUT), TLS handshake, Finished (20): > * TLSv1.2 (IN), TLS change cipher, Client hello (1): > * TLSv1.2 (IN), TLS handshake, Finished (20): > * SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384 > * ALPN, server did not agree to a protocol > * Server certificate: > * subject: C=UA; ST=Some-State; O=Internet Widgits Pty Ltd; > CN=ctr-e139-1542663976389-172620-01-02.hwx.site > * start date: Jul 16 08:42:47 2019 GMT > * expire date: Jul 15 08:42:47 2020 GMT > * issuer: C=UA; ST=Some-State; O=Internet Widgits Pty Ltd; > CN=ctr-e139-1542663976389-172620-01-02.hwx.site > * SSL certificate verify result: self signed certificate (18), continuing > anyway. 
> * Server auth using Basic with user 'admin' > > POST > > /api/v1/views/SMARTSENSE/versions/1.5.1.2.7.4.0-92/instances/SMARTSENSE_AUTO_INSTANCE/resources/hst/bundles > > HTTP/1.1 > > Host: 172.27.122.133:8443 > > Authorization: Basic YWRtaW46YWRtaW4= > > Cookie: AMBARISESSIONID=node0mmlb052gnyw31rznh33qguoem22.node0; > > SUPPORTSESSIONID=1nlk0dmxktunz1bvjr9259adjf > > Origin: https://172.27.122.133:8443 > > Accept-Encoding: gzip, deflate, br > > Accept-Language: en-US,en;q=0.9 > > X-Requested-By: ambari > > User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_4) > > AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36 > > Content-Type: application/json; charset=UTF-8 > > Accept: application/json, text/javascript, */*; q=0.01 > > Referer: > > https://172.27.122.133:8443/views/SMARTSENSE/1.5.1.2.7.4.0-92/SMARTSENSE_AUTO_INSTANCE/ > > X-Requested-With: XMLHttpRequest > > Connection: keep-alive > > withCredentials: true > > Content-Length: 16 > > > * upload completely sent off: 16 out of 16 bytes > < HTTP/1.1 415 Unsupported Media Type > < Date: Tue, 16 Jul 2019 18:36:36 GMT > < Strict-Transport-Security: max-age=31536000 > < X-Frame-Options: SAMEORIGIN > < X-XSS-Protection: 1; mode=block > < X-Content-Type-Options: nosniff > < Cache-Control: no-store > < Pragma: no-cache > < Content-Type: application/json;charset=utf-8 > < User: admin > < X-Content-Type-Options: nosniff > < Content-Length: 0 > < > * Connection #0 to host 172.27.122.133 left intact > {code} > -- This message was sent by Atl
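One way to express the scoping problem described above: the AMBARI-9016 workaround should rewrite the Content-Type only for core Ambari API requests, not for view resources such as SmartSense. A minimal sketch, in which the function, the path-prefix test, and the rewrite rule are all hypothetical stand-ins for the actual patch:

```python
def effective_content_type(path, content_type):
    # View resources (e.g. the SmartSense view) must keep the client's
    # original Content-Type, or their endpoints answer
    # "HTTP/1.1 415 Unsupported Media Type" as shown in the curl trace.
    if path.startswith("/api/v1/views/"):
        return content_type
    # Elsewhere, apply the AMBARI-9016 workaround of treating JSON
    # request bodies as text/plain.
    if content_type.startswith("application/json"):
        return "text/plain"
    return content_type
```

The design point is that a global header rewrite is too blunt: the workaround was correct for one consumer (the Ambari API) and must be scoped so other consumers see the request unmodified.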
[jira] [Commented] (AMBARI-23469) HostCleanup.py script is failing with AttributeError: 'NoneType' object has no attribute 'get'
[ https://issues.apache.org/jira/browse/AMBARI-23469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16898097#comment-16898097 ] Tamas Payer commented on AMBARI-23469: -- Opened PR: [https://github.com/apache/ambari/pull/3061] > HostCleanup.py script is failing with AttributeError: 'NoneType' object has > no attribute 'get' > -- > > Key: AMBARI-23469 > URL: https://issues.apache.org/jira/browse/AMBARI-23469 > Project: Ambari > Issue Type: Bug > Components: ambari-agent >Affects Versions: trunk, 2.7.0 >Reporter: JaySenSharma >Priority: Major > Labels: pull-request-available > Time Spent: 20m > Remaining Estimate: 0h > > - While running HostCleanup.py on Ambari 2.7.0, it fails with the > following error: > {code} > # /usr/lib/ambari-agent/lib/ambari_agent/HostCleanup.py --silent --verbose > Traceback (most recent call last): > File "/usr/lib/ambari-agent/lib/ambari_agent/HostCleanup.py", line 710, in > <module> > main() > File "/usr/lib/ambari-agent/lib/ambari_agent/HostCleanup.py", line 701, in > main > h.do_cleanup(propMap) > File "/usr/lib/ambari-agent/lib/ambari_agent/HostCleanup.py", line 144, in > do_cleanup > procList = proc_map.get(PROCESS_KEY) > AttributeError: 'NoneType' object has no attribute 'get' > {code} > - The Python version used is: > {code} > # python --version > Python 2.7.5 > {code} -- This message was sent by Atlassian JIRA (v7.6.14#76016)
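The traceback above comes from calling `.get()` on a property map that was never populated (`proc_map` is `None`). A defensive lookup in the spirit of such a fix can be sketched as follows; the `"processes"` key name is a hypothetical stand-in for the real `PROCESS_KEY` constant, and this is not the code from the opened PR:

```python
def get_proc_list(prop_map, process_key="processes"):
    # HostCleanup.do_cleanup raised AttributeError when no property map
    # was parsed (prop_map is None). Guard by substituting an empty dict
    # before .get(), and an empty list when the key is absent, so the
    # cleanup loop simply iterates zero times instead of crashing.
    return (prop_map or {}).get(process_key) or []
```

With this guard, running the script with nothing to clean becomes a no-op rather than a hard failure.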