[jira] [Updated] (AMBARI-17867) Fix bad xml HDP/2.3.GlusterFS/services/YARN/configuration/capacity-scheduler.xml
[ https://issues.apache.org/jira/browse/AMBARI-17867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajit Kumar updated AMBARI-17867: Description: Fix HDP/2.3.GlusterFS/services/YARN/configuration/capacity-scheduler.xml which has bad xml element "" at line number 28. This is breaking build since https://builds.apache.org/job/Ambari-trunk-Commit/5334 Unit test failing: org.apache.ambari.server.state.ServicePropertiesTest.validatePropertySchemaOfServiceXMLs was: Fix HDP/2.3.GlusterFS/services/YARN/configuration/capacity-scheduler.xml which has bad xml element "" at line number 28. This is breaking build since https://builds.apache.org/job/Ambari-trunk-Commit/5334 Unit test failing org.apache.ambari.server.state.ServicePropertiesTest.validatePropertySchemaOfServiceXMLs > Fix bad xml > HDP/2.3.GlusterFS/services/YARN/configuration/capacity-scheduler.xml > > > Key: AMBARI-17867 > URL: https://issues.apache.org/jira/browse/AMBARI-17867 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Affects Versions: 2.4.0 >Reporter: Ajit Kumar >Assignee: Ajit Kumar > Fix For: 2.4.0 > > > Fix HDP/2.3.GlusterFS/services/YARN/configuration/capacity-scheduler.xml > which has bad xml element "" at line number 28. This is > breaking build since https://builds.apache.org/job/Ambari-trunk-Commit/5334 > Unit test failing: > org.apache.ambari.server.state.ServicePropertiesTest.validatePropertySchemaOfServiceXMLs -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-17867) Fix bad xml HDP/2.3.GlusterFS/services/YARN/configuration/capacity-scheduler.xml
[ https://issues.apache.org/jira/browse/AMBARI-17867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajit Kumar updated AMBARI-17867: Description: Fix HDP/2.3.GlusterFS/services/YARN/configuration/capacity-scheduler.xml which has bad xml element "" at line number 28. This is breaking build since https://builds.apache.org/job/Ambari-trunk-Commit/5334 Unit test failing org.apache.ambari.server.state.ServicePropertiesTest.validatePropertySchemaOfServiceXMLs was:Fix HDP/2.3.GlusterFS/services/YARN/configuration/capacity-scheduler.xml which has bad xml element "" at line number 28. This is breaking build since https://builds.apache.org/job/Ambari-trunk-Commit/5334 > Fix bad xml > HDP/2.3.GlusterFS/services/YARN/configuration/capacity-scheduler.xml > > > Key: AMBARI-17867 > URL: https://issues.apache.org/jira/browse/AMBARI-17867 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Affects Versions: 2.4.0 >Reporter: Ajit Kumar >Assignee: Ajit Kumar > Fix For: 2.4.0 > > > Fix HDP/2.3.GlusterFS/services/YARN/configuration/capacity-scheduler.xml > which has bad xml element "" at line number 28. This is > breaking build since https://builds.apache.org/job/Ambari-trunk-Commit/5334 > Unit test failing > org.apache.ambari.server.state.ServicePropertiesTest.validatePropertySchemaOfServiceXMLs -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (AMBARI-17867) Fix bad xml HDP/2.3.GlusterFS/services/YARN/configuration/capacity-scheduler.xml
Ajit Kumar created AMBARI-17867: --- Summary: Fix bad xml HDP/2.3.GlusterFS/services/YARN/configuration/capacity-scheduler.xml Key: AMBARI-17867 URL: https://issues.apache.org/jira/browse/AMBARI-17867 Project: Ambari Issue Type: Bug Components: ambari-server Affects Versions: 2.4.0 Reporter: Ajit Kumar Assignee: Ajit Kumar Fix For: 2.4.0 Fix HDP/2.3.GlusterFS/services/YARN/configuration/capacity-scheduler.xml which has bad xml element "" at line number 28. This is breaking build since https://builds.apache.org/job/Ambari-trunk-Commit/5334 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
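The failing unit test named above, ServicePropertiesTest.validatePropertySchemaOfServiceXMLs, rejects service configuration files that are not valid XML. A minimal Python sketch of that kind of check (the real test is Java and also validates against a property schema; this only illustrates the well-formedness part that the broken element at line 28 would trip):

```python
import xml.etree.ElementTree as ET

def is_well_formed(xml_text):
    # A config file with a malformed element (like the one at line 28 of
    # capacity-scheduler.xml) fails to parse and would break the build.
    try:
        ET.fromstring(xml_text)
        return True
    except ET.ParseError:
        return False
```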
[jira] [Commented] (AMBARI-17641) Add storm impersonation authorized along with default ACL
[ https://issues.apache.org/jira/browse/AMBARI-17641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15390512#comment-15390512 ] Hudson commented on AMBARI-17641: - FAILURE: Integrated in Ambari-trunk-Commit #5372 (See [https://builds.apache.org/job/Ambari-trunk-Commit/5372/]) AMBARI-17641. Add storm impersonation authorized along with default ACL. (smohanty: [http://git-wip-us.apache.org/repos/asf?p=ambari.git&a=commit&h=185ffb53d0d11597bcfdf2b6a4315bf6fae3ede8]) * ambari-server/src/main/resources/common-services/STORM/1.0.1/configuration/storm-site.xml > Add storm impersonation authorized along with default ACL > - > > Key: AMBARI-17641 > URL: https://issues.apache.org/jira/browse/AMBARI-17641 > Project: Ambari > Issue Type: Bug >Reporter: Sriharsha Chintalapani >Assignee: Sriharsha Chintalapani >Priority: Blocker > Fix For: 2.4.0 > > Attachments: AMBARI-17641-V2.patch, AMBARI-17641-V4.patch, > AMBARI-17641-addemdum-ut-fix.patch, AMBARI-17641.addendum.patch, > AMBARI-17641.patch, Ambari-17641-patch-appied-installed-Storm-step-1.png, > after-kerborizing-step-2.png > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-17861) Ambari principal should be part of nimbus.admins for Storm View
[ https://issues.apache.org/jira/browse/AMBARI-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15390510#comment-15390510 ] Hudson commented on AMBARI-17861: - FAILURE: Integrated in Ambari-trunk-Commit #5372 (See [https://builds.apache.org/job/Ambari-trunk-Commit/5372/]) AMBARI-17861. Ambari principal should be part of nimbus.admins for Storm (smohanty: [http://git-wip-us.apache.org/repos/asf?p=ambari.git&a=commit&h=7a1871865019d607a6a1219edbdd831500fe7c01]) * ambari-server/src/test/python/stacks/2.3/configs/storm_default_secure.json * ambari-server/src/main/resources/common-services/STORM/1.0.1/kerberos.json * ambari-server/src/test/python/stacks/2.1/configs/secured-storm-start.json * ambari-server/src/test/python/stacks/2.1/configs/secured.json * ambari-server/src/main/resources/common-services/STORM/0.9.1/package/scripts/params_linux.py > Ambari principal should be part of nimbus.admins for Storm View > --- > > Key: AMBARI-17861 > URL: https://issues.apache.org/jira/browse/AMBARI-17861 > Project: Ambari > Issue Type: Bug >Reporter: Sriharsha Chintalapani >Assignee: Sriharsha Chintalapani >Priority: Blocker > Fix For: 2.4.0 > > Attachments: AMBARI-17861-II.patch, AMBARI-17861.patch, > AMBARI-17861.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-17865) incorrect text for disabled "Change Password" button
[ https://issues.apache.org/jira/browse/AMBARI-17865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15390511#comment-15390511 ] Hudson commented on AMBARI-17865: - FAILURE: Integrated in Ambari-trunk-Commit #5372 (See [https://builds.apache.org/job/Ambari-trunk-Commit/5372/]) AMBARI-17865 incorrect text for disabled "Change Password" button (zhewang: [http://git-wip-us.apache.org/repos/asf?p=ambari.git&a=commit&h=661ee4cd72c083814a727a8850f4ac151e07067b]) * ambari-admin/src/main/resources/ui/admin-web/app/views/users/show.html > incorrect text for disabled "Change Password" button > -- > > Key: AMBARI-17865 > URL: https://issues.apache.org/jira/browse/AMBARI-17865 > Project: Ambari > Issue Type: Bug > Components: ambari-admin >Affects Versions: 2.4.0 >Reporter: Zhe (Joe) Wang >Assignee: Zhe (Joe) Wang > Fix For: 2.4.0 > > Attachments: AMBARI-17865.v0.patch > > > Incorrect "users.changePassword" -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-17863) AMS - topN does not work when metric name has a wildcard specified
[ https://issues.apache.org/jira/browse/AMBARI-17863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15390466#comment-15390466 ] Hudson commented on AMBARI-17863: - FAILURE: Integrated in Ambari-trunk-Commit #5371 (See [https://builds.apache.org/job/Ambari-trunk-Commit/5371/]) AMBARI-17863 : AMS - topN does not work when metric name has a wildcard (avijayan: [http://git-wip-us.apache.org/repos/asf?p=ambari.git&a=commit&h=7155e8e7283eb837a6b6be648eb513cbcf7b1f26]) * ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TopNConditionTest.java * ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/DefaultCondition.java * ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/TestPhoenixTransactSQL.java * ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/query/TopNCondition.java > AMS - topN does not work when metric name has a wildcard specified > -- > > Key: AMBARI-17863 > URL: https://issues.apache.org/jira/browse/AMBARI-17863 > Project: Ambari > Issue Type: Bug > Components: ambari-metrics >Affects Versions: 2.4.0 >Reporter: Aravindan Vijayan >Assignee: Aravindan Vijayan >Priority: Critical > Fix For: 2.4.0 > > Attachments: AMBARI-17863.patch > > > Queries to AMS with topN specified do not work if metric name has wildcard > (%) in them. > Example: > collectorhost:port/ws/v1/timeline/metrics?metricNames=cpu_%&appId=HOST&startTime=1469203016&endtime=1469204816&topN=2&isBottomN=false -- This message was sent by Atlassian JIRA (v6.3.4#6332)
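The example query in the message above can be assembled programmatically. A small sketch (the collector host and port 6188 are placeholder assumptions; note that the literal % wildcard must be percent-encoded as %25 in a real HTTP request, which urlencode does automatically):

```python
from urllib.parse import urlencode

def build_topn_query(host, port, metric_pattern, app_id, start, end, top_n, bottom_n=False):
    # '%' is the wildcard in AMS metric names; urlencode escapes it to '%25'.
    params = {
        "metricNames": metric_pattern,
        "appId": app_id,
        "startTime": start,
        "endtime": end,
        "topN": top_n,
        "isBottomN": str(bottom_n).lower(),
    }
    return "http://%s:%d/ws/v1/timeline/metrics?%s" % (host, port, urlencode(params))
```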
[jira] [Commented] (AMBARI-17845) Storm cluster metrics do not show up because of AMS aggregation issue.
[ https://issues.apache.org/jira/browse/AMBARI-17845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15390467#comment-15390467 ] Hudson commented on AMBARI-17845: - FAILURE: Integrated in Ambari-trunk-Commit #5371 (See [https://builds.apache.org/job/Ambari-trunk-Commit/5371/]) AMBARI-17845 : Storm cluster metrics do not show up because of AMS (avijayan: [http://git-wip-us.apache.org/repos/asf?p=ambari.git&a=commit&h=1a415a66d569d0aa6e698564d54b446c921fa449]) * ambari-metrics/ambari-metrics-timelineservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregatorSecondTest.java * ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricClusterAggregatorSecond.java > Storm cluster metrics do not show up because of AMS aggregation issue. > -- > > Key: AMBARI-17845 > URL: https://issues.apache.org/jira/browse/AMBARI-17845 > Project: Ambari > Issue Type: Bug > Components: ambari-metrics >Affects Versions: 2.2.2 >Reporter: Aravindan Vijayan >Assignee: Aravindan Vijayan >Priority: Critical > Fix For: 2.4.0 > > Attachments: AMBARI-17845.patch > > > PROBLEM > Storm cluster metrics (aggregated across hosts) don't show up in Ambari and > Grafana > BUG > Storm metrics collect and send data in 1-minute intervals. Since their data > is present at the right end of the spectrum for the 2-minute aggregator > (start_time ~ server_time), a bug in the second aggregator is causing these > values to slip between 2 aggregator cycles. > FIX > Within the 2-minute interval, look for the data in the time-shifted interval > ( ams-site:timeline.metrics.service.cluster.aggregator.timeshift.adjustment). > In case no data is present, look for the data outside the right boundary of > the interval. Use that to interpolate the data in the 30-second slices.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
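The interpolation step described in the FIX section of the message above can be sketched roughly as follows (a simplified stand-in for what TimelineMetricClusterAggregatorSecond does, assuming linear interpolation between a known point and one lying just past the interval's right boundary):

```python
def interpolate_slices(t0, t1, slice_ms, left_point, right_point):
    # Estimate metric values at each slice boundary in (t0, t1] by linear
    # interpolation; right_point may lie outside the interval's right edge,
    # covering the case where the only data is beyond the boundary.
    (lt, lv), (rt, rv) = left_point, right_point
    values = {}
    t = t0 + slice_ms
    while t <= t1:
        frac = (t - lt) / float(rt - lt)
        values[t] = lv + frac * (rv - lv)
        t += slice_ms
    return values
```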
[jira] [Commented] (AMBARI-17843) App data aggregated for hosted apps is being calculated for all apps, not just configured ones
[ https://issues.apache.org/jira/browse/AMBARI-17843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15390465#comment-15390465 ] Hudson commented on AMBARI-17843: - FAILURE: Integrated in Ambari-trunk-Commit #5371 (See [https://builds.apache.org/job/Ambari-trunk-Commit/5371/]) AMBARI-17843 : App data aggregated for hosted apps is being calculated (avijayan: [http://git-wip-us.apache.org/repos/asf?p=ambari.git&a=commit&h=a484b2352b5c0d00b007bcdca241943909a5836a]) * ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricMetadataManager.java * ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/aggregators/TimelineMetricAppAggregator.java * ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/metrics/timeline/discovery/TimelineMetricMetadataKey.java > App data aggregated for hosted apps is being calculated for all apps, not > just configured ones > -- > > Key: AMBARI-17843 > URL: https://issues.apache.org/jira/browse/AMBARI-17843 > Project: Ambari > Issue Type: Bug > Components: ambari-metrics >Affects Versions: 2.2.2 >Reporter: Aravindan Vijayan >Assignee: Aravindan Vijayan >Priority: Critical > Fix For: 2.4.0 > > Attachments: AMBARI-17843.patch > > > timeline.metrics.service.cluster.aggregator.appIds defines what apps to > produce host metrics for: > {quote}List of application ids to use for aggregating host level metrics for > an application. Example: bytes_read across Yarn Nodemanagers. {quote} > Right now we are aggregating for all. 
> Additionally, metadata does not expose these additional metrics > {code} > 0: jdbc:phoenix:localhost:61181:/ams-hbase-un> select distinct(APP_ID) from > METRIC_AGGREGATE_MINUTE WHERE METRIC_NAME = 'cpu_user'; > +--------------------------+ > | APP_ID | > +--------------------------+ > | HOST | > | ams-hbase | > | amssmoketestfake | > | applicationhistoryserver | > | datanode | > | hivemetastore | > | hiveserver2 | > | jobhistoryserver | > | namenode | > | nimbus | > | nodemanager | > | resourcemanager | > +--------------------------+ > 12 rows selected (0.117 seconds) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
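The intended behavior described above, aggregating host-level app metrics only for the apps listed in timeline.metrics.service.cluster.aggregator.appIds, amounts to a filter like the following (an illustrative sketch, not the actual TimelineMetricAppAggregator code):

```python
def filter_to_configured_apps(metrics, app_ids_setting):
    # app_ids_setting mirrors the comma-separated appIds property, e.g.
    # "datanode,nodemanager"; metrics for any other app id are skipped
    # instead of being aggregated for all apps.
    allowed = {a.strip().lower() for a in app_ids_setting.split(",")}
    return [m for m in metrics if m["app_id"].lower() in allowed]
```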
[jira] [Updated] (AMBARI-17866) HSI host needs to be added to hadoop.proxyuser..hosts in core-site.xml
[ https://issues.apache.org/jira/browse/AMBARI-17866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swapan Shridhar updated AMBARI-17866: - Status: Patch Available (was: Open) > HSI host needs to be added to hadoop.proxyuser..hosts in > core-site.xml > > > Key: AMBARI-17866 > URL: https://issues.apache.org/jira/browse/AMBARI-17866 > Project: Ambari > Issue Type: Bug > Components: ambari-server, ambari-web >Affects Versions: 2.4.0 >Reporter: Swapan Shridhar >Assignee: Swapan Shridhar > Fix For: 2.4.0 > > Attachments: AMBARI-17866.patch > > > hadoop.proxyuser..hosts should be updated with the Hive Server host > also. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-17865) incorrect text for disabled "Change Password" button
[ https://issues.apache.org/jira/browse/AMBARI-17865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhe (Joe) Wang updated AMBARI-17865: Resolution: Fixed Status: Resolved (was: Patch Available) Committed to trunk and branch-2.4 661ee4cd72c083814a727a8850f4ac151e07067b > incorrect text for disabled "Change Password" button > -- > > Key: AMBARI-17865 > URL: https://issues.apache.org/jira/browse/AMBARI-17865 > Project: Ambari > Issue Type: Bug > Components: ambari-admin >Affects Versions: 2.4.0 >Reporter: Zhe (Joe) Wang >Assignee: Zhe (Joe) Wang > Fix For: 2.4.0 > > Attachments: AMBARI-17865.v0.patch > > > Incorrect "users.changePassword" -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-17824) Show existing views under relevant service page as tabs
[ https://issues.apache.org/jira/browse/AMBARI-17824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jaimin D Jetly updated AMBARI-17824: Resolution: Fixed Status: Resolved (was: Patch Available) Patch committed to branch-embedded-views > Show existing views under relevant service page as tabs > --- > > Key: AMBARI-17824 > URL: https://issues.apache.org/jira/browse/AMBARI-17824 > Project: Ambari > Issue Type: Task > Components: ambari-web >Affects Versions: 2.5.0 >Reporter: Jaimin D Jetly >Assignee: Manasi Maheshwari > Fix For: branch-embedded-views > > Attachments: AMBARI-17824.2.addendum.patch, AMBARI-17824.2.patch, > AMBARI-17824.patch > > > Following needs to be done as part of this work > # view instances needs to be shown under relevant service as tabs > # content of the view resources needs to be shown in iframe > # Quick links location needs to be adjusted so it does not collapse with view > tabs > # Left service menu on the service page needs to be removed to give more > space to view content > # Increase the span for the pages under service page for larger width -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-17824) Show existing views under relevant service page as tabs
[ https://issues.apache.org/jira/browse/AMBARI-17824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15390437#comment-15390437 ] Jaimin D Jetly commented on AMBARI-17824: - Verified that the unit test failures in ServicePropertiesTest and ConfigUpgradeValidityTest are not related to this patch. +1 for the submitted patch and addendum patch. > Show existing views under relevant service page as tabs > --- > > Key: AMBARI-17824 > URL: https://issues.apache.org/jira/browse/AMBARI-17824 > Project: Ambari > Issue Type: Task > Components: ambari-web >Affects Versions: 2.5.0 >Reporter: Jaimin D Jetly >Assignee: Manasi Maheshwari > Fix For: branch-embedded-views > > Attachments: AMBARI-17824.2.addendum.patch, AMBARI-17824.2.patch, > AMBARI-17824.patch > > > Following needs to be done as part of this work > # view instances needs to be shown under relevant service as tabs > # content of the view resources needs to be shown in iframe > # Quick links location needs to be adjusted so it does not collapse with view > tabs > # Left service menu on the service page needs to be removed to give more > space to view content > # Increase the span for the pages under service page for larger width -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-17641) Add storm impersonation authorized along with default ACL
[ https://issues.apache.org/jira/browse/AMBARI-17641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15390431#comment-15390431 ] Jaimin D Jetly commented on AMBARI-17641: - +1 for the addendum patch that fixes UT failure > Add storm impersonation authorized along with default ACL > - > > Key: AMBARI-17641 > URL: https://issues.apache.org/jira/browse/AMBARI-17641 > Project: Ambari > Issue Type: Bug >Reporter: Sriharsha Chintalapani >Assignee: Sriharsha Chintalapani >Priority: Blocker > Fix For: 2.4.0 > > Attachments: AMBARI-17641-V2.patch, AMBARI-17641-V4.patch, > AMBARI-17641-addemdum-ut-fix.patch, AMBARI-17641.addendum.patch, > AMBARI-17641.patch, Ambari-17641-patch-appied-installed-Storm-step-1.png, > after-kerborizing-step-2.png > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-17641) Add storm impersonation authorized along with default ACL
[ https://issues.apache.org/jira/browse/AMBARI-17641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sumit Mohanty updated AMBARI-17641: --- Attachment: (was: AMBARI-17641-addemdum-ut-fix.patch) > Add storm impersonation authorized along with default ACL > - > > Key: AMBARI-17641 > URL: https://issues.apache.org/jira/browse/AMBARI-17641 > Project: Ambari > Issue Type: Bug >Reporter: Sriharsha Chintalapani >Assignee: Sriharsha Chintalapani >Priority: Blocker > Fix For: 2.4.0 > > Attachments: AMBARI-17641-V2.patch, AMBARI-17641-V4.patch, > AMBARI-17641-addemdum-ut-fix.patch, AMBARI-17641.addendum.patch, > AMBARI-17641.patch, Ambari-17641-patch-appied-installed-Storm-step-1.png, > after-kerborizing-step-2.png > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-17641) Add storm impersonation authorized along with default ACL
[ https://issues.apache.org/jira/browse/AMBARI-17641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sumit Mohanty updated AMBARI-17641: --- Attachment: AMBARI-17641-addemdum-ut-fix.patch > Add storm impersonation authorized along with default ACL > - > > Key: AMBARI-17641 > URL: https://issues.apache.org/jira/browse/AMBARI-17641 > Project: Ambari > Issue Type: Bug >Reporter: Sriharsha Chintalapani >Assignee: Sriharsha Chintalapani >Priority: Blocker > Fix For: 2.4.0 > > Attachments: AMBARI-17641-V2.patch, AMBARI-17641-V4.patch, > AMBARI-17641-addemdum-ut-fix.patch, AMBARI-17641.addendum.patch, > AMBARI-17641.patch, Ambari-17641-patch-appied-installed-Storm-step-1.png, > after-kerborizing-step-2.png > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-17861) Ambari principal should be part of nimbus.admins for Storm View
[ https://issues.apache.org/jira/browse/AMBARI-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15390417#comment-15390417 ] Sriharsha Chintalapani commented on AMBARI-17861: - Thanks for the help [~sumitmohanty] . +1 on the follow-up patch. > Ambari principal should be part of nimbus.admins for Storm View > --- > > Key: AMBARI-17861 > URL: https://issues.apache.org/jira/browse/AMBARI-17861 > Project: Ambari > Issue Type: Bug >Reporter: Sriharsha Chintalapani >Assignee: Sriharsha Chintalapani >Priority: Blocker > Fix For: 2.4.0 > > Attachments: AMBARI-17861-II.patch, AMBARI-17861.patch, > AMBARI-17861.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-17861) Ambari principal should be part of nimbus.admins for Storm View
[ https://issues.apache.org/jira/browse/AMBARI-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sumit Mohanty updated AMBARI-17861: --- Attachment: AMBARI-17861-II.patch > Ambari principal should be part of nimbus.admins for Storm View > --- > > Key: AMBARI-17861 > URL: https://issues.apache.org/jira/browse/AMBARI-17861 > Project: Ambari > Issue Type: Bug >Reporter: Sriharsha Chintalapani >Assignee: Sriharsha Chintalapani >Priority: Blocker > Fix For: 2.4.0 > > Attachments: AMBARI-17861-II.patch, AMBARI-17861.patch, > AMBARI-17861.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-17861) Ambari principal should be part of nimbus.admins for Storm View
[ https://issues.apache.org/jira/browse/AMBARI-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15390413#comment-15390413 ] Hadoop QA commented on AMBARI-17861: {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12819710/AMBARI-17861.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:red}-1 core tests{color}. The test build failed in ambari-server Test results: https://builds.apache.org/job/Ambari-trunk-test-patch/7984//testReport/ Console output: https://builds.apache.org/job/Ambari-trunk-test-patch/7984//console This message is automatically generated. > Ambari principal should be part of nimbus.admins for Storm View > --- > > Key: AMBARI-17861 > URL: https://issues.apache.org/jira/browse/AMBARI-17861 > Project: Ambari > Issue Type: Bug >Reporter: Sriharsha Chintalapani >Assignee: Sriharsha Chintalapani >Priority: Blocker > Fix For: 2.4.0 > > Attachments: AMBARI-17861.patch, AMBARI-17861.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-17383) User names should be case insensitive
[ https://issues.apache.org/jira/browse/AMBARI-17383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15390408#comment-15390408 ] Tuong Truong commented on AMBARI-17383: --- Hi [~smnaha], [~sumitmohanty] [~rlevas], we just tripped over this JIRA while trying to test our PAM integration implementation (https://issues.apache.org/jira/browse/AMBARI-12263). A previous JIRA to make the Ambari UI case-sensitive (https://issues.apache.org/jira/browse/AMBARI-13997) was not fully complete, which caused the error reported in AMBARI-17359. I feel we should have addressed AMBARI-17359 properly by completing the support for case-sensitive user IDs instead. This JIRA has reverted the case-sensitivity support, and while this change may be OK for Ambari private users, OS users and other directory services (LDAP/AD) are typically case sensitive. The case-insensitive support has created some inconsistency problems when integrating with PAM and even LDAP when granting permissions to users in Ambari (since they are all lower-case). This is because admin and Admin will both be mapped to admin, which opens a potential for identity hijack in terms of authority granting in Ambari. I think we should revisit the decision to support case insensitivity. What do you think? We do have some customers requesting case-sensitivity support in Ambari. > User names should be case insensitive > - > > Key: AMBARI-17383 > URL: https://issues.apache.org/jira/browse/AMBARI-17383 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Affects Versions: 2.4.0 >Reporter: Nahappan Somasundaram >Assignee: Nahappan Somasundaram >Priority: Critical > Fix For: 2.4.0 > > Attachments: rb49119 (1).patch > > > User names should be case insensitive. The following usernames are the same: > VIEWUSER > viewUser > viewuser > Before adding a new user, a case sensitive search is made. Change this to > case insensitive. Additionally, store user names in the DB in lower case. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
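The collision the comment above describes is easy to demonstrate: once user names are canonicalized to lower case on storage, distinct external identities become indistinguishable. A minimal illustration (not Ambari's actual user lookup code):

```python
def canonical_username(name):
    # AMBARI-17383 stores user names lower-cased, so lookups become
    # case-insensitive; distinct OS/LDAP users such as 'Admin' and
    # 'admin' collapse to the same stored identity.
    return name.strip().lower()
```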
[jira] [Updated] (AMBARI-17866) HSI host needs to be added to hadoop.proxyuser..hosts in core-site.xml
[ https://issues.apache.org/jira/browse/AMBARI-17866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swapan Shridhar updated AMBARI-17866: - Attachment: AMBARI-17866.patch > HSI host needs to be added to hadoop.proxyuser..hosts in > core-site.xml > > > Key: AMBARI-17866 > URL: https://issues.apache.org/jira/browse/AMBARI-17866 > Project: Ambari > Issue Type: Bug > Components: ambari-server, ambari-web >Affects Versions: 2.4.0 >Reporter: Swapan Shridhar >Assignee: Swapan Shridhar > Fix For: 2.4.0 > > Attachments: AMBARI-17866.patch > > > hadoop.proxyuser..hosts should be updated with the Hive Server host > also. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-17824) Show existing views under relevant service page as tabs
[ https://issues.apache.org/jira/browse/AMBARI-17824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15390397#comment-15390397 ] Hadoop QA commented on AMBARI-17824: {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12819711/AMBARI-17824.2.addendum.patch against trunk revision . {color:red}-1 patch{color}. The patch command could not apply the patch. Console output: https://builds.apache.org/job/Ambari-trunk-test-patch/7983//console This message is automatically generated. > Show existing views under relevant service page as tabs > --- > > Key: AMBARI-17824 > URL: https://issues.apache.org/jira/browse/AMBARI-17824 > Project: Ambari > Issue Type: Task > Components: ambari-web >Affects Versions: 2.5.0 >Reporter: Jaimin D Jetly >Assignee: Manasi Maheshwari > Fix For: branch-embedded-views > > Attachments: AMBARI-17824.2.addendum.patch, AMBARI-17824.2.patch, > AMBARI-17824.patch > > > Following needs to be done as part of this work > # view instances needs to be shown under relevant service as tabs > # content of the view resources needs to be shown in iframe > # Quick links location needs to be adjusted so it does not collapse with view > tabs > # Left service menu on the service page needs to be removed to give more > space to view content > # Increase the span for the pages under service page for larger width -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-17041) Support password type for custom properties
[ https://issues.apache.org/jira/browse/AMBARI-17041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15390396#comment-15390396 ] Hadoop QA commented on AMBARI-17041: {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12819716/AMBARI-17041-July22.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 1 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:red}-1 core tests{color}. The patch failed these unit tests in ambari-server ambari-web: org.apache.ambari.server.state.ServicePropertiesTest org.apache.ambari.server.state.stack.ConfigUpgradeValidityTest Test results: https://builds.apache.org/job/Ambari-trunk-test-patch/7982//testReport/ Console output: https://builds.apache.org/job/Ambari-trunk-test-patch/7982//console This message is automatically generated. 
> Support password type for custom properties > --- > > Key: AMBARI-17041 > URL: https://issues.apache.org/jira/browse/AMBARI-17041 > Project: Ambari > Issue Type: New Feature > Components: ambari-server >Affects Versions: 2.2.2 >Reporter: Tuong Truong >Assignee: Keta Patel > Attachments: AMBARI-17041-July14.patch, AMBARI-17041-July15.patch, > AMBARI-17041-July20.patch, AMBARI-17041-July21-ES6.patch, > AMBARI-17041-July21-updated.patch, AMBARI-17041-July22.patch, > AMBARI-17041-trunk-July08.patch, AMBARI-17041-trunk-Jun29.patch, > AMBARI-17041-trunk.patch, add_property_pop_up.tiff, > ambari_web_failed_to_execute_test.png, > cluster_config_with_password_type_in_config_attributes_column.tiff, > custom_properties_after_save.tiff, custom_property_password_type.tiff, > custom_property_regular_type.tiff, property_type_schema.tiff, > schema_of_clusterconfig_table.tiff > > > Currently, services can define properties in the XML configuration files that > are flagged as type password: > > my.special.password > > PASSWORD > Password to be masked > > and they will be masked properly in the UI as well as in blueprints. > Custom properties should also support this option so that passwords can be > added as custom properties and treated accordingly. > == > Proposed Design for the fix: > == > At present only the key-value information of the service properties is stored > in the DB ("clusterconfig" table in the "config_data" column). > The "config_attributes" column stores only certain attributes like "final" > indicating the list of properties set with the Final flag = true. > The information about the property-type (i.e. PASSWORD, USER, GROUP, > ADDITIONAL_USER_PROPERTY, VALUE_FROM_PROPERTY_FILE, NOT_MANAGED_HDFS_PATH, > etc.) is extracted from the corresponding service's property file (e.g. > hive-site.xml, core-site.xml, webhcat-env.xml, etc.). These files contain > information of the existing properties only. 
Custom Properties added by > ambari user have no provision to store their additional attributes. > Since, for this Jira we are concerned with only attribute for > Custom Properties, we could add an additional field called "Property Type" in > the "Add Property" pop-up which shows up on clicking "Add Property ..." in > the Custom property section for a service. For now, only 2 options are shown > in the drop-down list: NONE and PASSWORD . > A few sample test properties are created using the new "Add Property" pop-up > as can be seen in the following attachments. > Attachments: > "add_property_pop_up.tiff" > "custom_property_password_type.tiff" > "custom_property_regular_type.tiff" > "custom_properties_after_save.tiff" > The information for these Custom properties is stored in the > DB in "clusterconfig" table, "config_attributes" column. > The schema for "clusterconfig" table can be seen in the attachment: > "schema_of_clusterconfig_table.tiff" > The content of the "config_attributes" column with the > information from the new Custom properties can be seen in the attachment: > "cluster_config_with_password_type_in_config_attributes_column.tiff" > Note: The fix so far is performed only for new Custom properties. The > information for existing properties is extracted from the > corresponding property xml files for the service. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
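The property definition quoted in the description above lost its XML tags in the mail rendering ("my.special.password ... PASSWORD ... Password to be masked"). As a rough sketch, following Ambari's stack-configuration convention for `property-type` (the empty `<value>` element is assumed), it would read:

```xml
<property>
  <name>my.special.password</name>
  <value></value>
  <property-type>PASSWORD</property-type>
  <description>Password to be masked</description>
</property>
```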
[jira] [Commented] (AMBARI-17862) Unexpected warning modal window is appearing while config modification
[ https://issues.apache.org/jira/browse/AMBARI-17862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15390392#comment-15390392 ] Hudson commented on AMBARI-17862: - FAILURE: Integrated in Ambari-trunk-Commit #5370 (See [https://builds.apache.org/job/Ambari-trunk-Commit/5370/]) AMBARI-17862. Unexpected warning modal window is appearing while config (akovalenko: [http://git-wip-us.apache.org/repos/asf?p=ambari.git&a=commit&h=b267646365d108303e55c8988ccff2ea1cd045e3]) * ambari-web/app/controllers/main/service/info/configs.js * ambari-web/test/controllers/main/service/info/config_test.js * ambari-web/app/mixins/common/configs/enhanced_configs.js > Unexpected warning modal window is appearing while config modification > -- > > Key: AMBARI-17862 > URL: https://issues.apache.org/jira/browse/AMBARI-17862 > Project: Ambari > Issue Type: Bug > Components: ambari-web >Affects Versions: 2.4.0 >Reporter: Aleksandr Kovalenko >Assignee: Aleksandr Kovalenko >Priority: Critical > Fix For: 2.4.0 > > Attachments: AMBARI-17862.patch > > > Unexpected modal window is appearing while modifying configs. This issue is > seen only on an upgraded cluster. > Steps to reproduce: > 1) Upgrade HDP from 2.4 (or older) to 2.5. > 2) Go to HIVE configs. > 3) Turn on property 'ACID Transactions' > 4) Turn off property 'ACID Transactions' (Do this without saving the changes > done in step 3.) So no configs are changed. > 5) Try to navigate to another service. > Warning modal window 'You have unsaved changes. Save changes or discard?' is > shown though no configs are being changed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
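The root cause described above is a change-tracking check that fires even when edits cancel each other out. A minimal, hypothetical sketch of the intended behavior (the `has_unsaved_changes` helper and the property name are illustrative only, not Ambari's actual ambari-web code, which is JavaScript):

```python
def has_unsaved_changes(saved, current):
    """True only if some property value actually differs from the saved config."""
    keys = set(saved) | set(current)
    return any(saved.get(k) != current.get(k) for k in keys)

saved = {"hive.support.concurrency": "false"}   # illustrative property name
edited = dict(saved)
edited["hive.support.concurrency"] = "true"     # step 3: turn the feature on
edited["hive.support.concurrency"] = "false"    # step 4: turn it back off, unsaved
print(has_unsaved_changes(saved, edited))       # no net change, so no warning: False
```

The warning dialog should be driven by such a value comparison rather than by whether any edit event occurred.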
[jira] [Created] (AMBARI-17866) HSI host needs to be added to hadoop.proxyuser..hosts in core-site.xml
Swapan Shridhar created AMBARI-17866: Summary: HSI host needs to be added to hadoop.proxyuser..hosts in core-site.xml Key: AMBARI-17866 URL: https://issues.apache.org/jira/browse/AMBARI-17866 Project: Ambari Issue Type: Bug Components: ambari-server, ambari-web Affects Versions: 2.4.0 Reporter: Swapan Shridhar Assignee: Swapan Shridhar Fix For: 2.4.0 hadoop.proxyuser..hosts should be updated with the Hive Server host as well. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
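JIRA has eaten the placeholder segment of the property name above (the family is `hadoop.proxyuser.<user>.hosts`); the elided user name is not recoverable from this report. As an illustration only, with `hive` as a hypothetical proxy user and hypothetical host names, the core-site.xml entry would look like:

```xml
<!-- core-site.xml; "hive" and the host names are illustrative assumptions -->
<property>
  <name>hadoop.proxyuser.hive.hosts</name>
  <!-- existing hosts plus the Hive Server Interactive host -->
  <value>hiveserver-host.example.com,hsi-host.example.com</value>
</property>
```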
[jira] [Commented] (AMBARI-17641) Add storm impersonation authorized along with default ACL
[ https://issues.apache.org/jira/browse/AMBARI-17641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15390331#comment-15390331 ] Sumit Mohanty commented on AMBARI-17641: [~harsha_ch]/[~jaimin] can you review AMBARI-17641-addemdum-ut-fix.patch that fixes a UT failure. > Add storm impersonation authorized along with default ACL > - > > Key: AMBARI-17641 > URL: https://issues.apache.org/jira/browse/AMBARI-17641 > Project: Ambari > Issue Type: Bug >Reporter: Sriharsha Chintalapani >Assignee: Sriharsha Chintalapani >Priority: Blocker > Fix For: 2.4.0 > > Attachments: AMBARI-17641-V2.patch, AMBARI-17641-V4.patch, > AMBARI-17641-addemdum-ut-fix.patch, AMBARI-17641.addendum.patch, > AMBARI-17641.patch, Ambari-17641-patch-appied-installed-Storm-step-1.png, > after-kerborizing-step-2.png > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Reopened] (AMBARI-17641) Add storm impersonation authorized along with default ACL
[ https://issues.apache.org/jira/browse/AMBARI-17641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sumit Mohanty reopened AMBARI-17641: Assignee: Sriharsha Chintalapani (was: Jaimin D Jetly) > Add storm impersonation authorized along with default ACL > - > > Key: AMBARI-17641 > URL: https://issues.apache.org/jira/browse/AMBARI-17641 > Project: Ambari > Issue Type: Bug >Reporter: Sriharsha Chintalapani >Assignee: Sriharsha Chintalapani >Priority: Blocker > Fix For: 2.4.0 > > Attachments: AMBARI-17641-V2.patch, AMBARI-17641-V4.patch, > AMBARI-17641-addemdum-ut-fix.patch, AMBARI-17641.addendum.patch, > AMBARI-17641.patch, Ambari-17641-patch-appied-installed-Storm-step-1.png, > after-kerborizing-step-2.png > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-17641) Add storm impersonation authorized along with default ACL
[ https://issues.apache.org/jira/browse/AMBARI-17641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sumit Mohanty updated AMBARI-17641: --- Attachment: AMBARI-17641-addemdum-ut-fix.patch > Add storm impersonation authorized along with default ACL > - > > Key: AMBARI-17641 > URL: https://issues.apache.org/jira/browse/AMBARI-17641 > Project: Ambari > Issue Type: Bug >Reporter: Sriharsha Chintalapani >Assignee: Sriharsha Chintalapani >Priority: Blocker > Fix For: 2.4.0 > > Attachments: AMBARI-17641-V2.patch, AMBARI-17641-V4.patch, > AMBARI-17641-addemdum-ut-fix.patch, AMBARI-17641.addendum.patch, > AMBARI-17641.patch, Ambari-17641-patch-appied-installed-Storm-step-1.png, > after-kerborizing-step-2.png > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-17865) incorrect text for disabled "Change Password" button
[ https://issues.apache.org/jira/browse/AMBARI-17865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15390315#comment-15390315 ] Hadoop QA commented on AMBARI-17865: {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12819721/AMBARI-17865.v0.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in ambari-admin. Test results: https://builds.apache.org/job/Ambari-trunk-test-patch/7981//testReport/ Console output: https://builds.apache.org/job/Ambari-trunk-test-patch/7981//console This message is automatically generated. > incorrect text for disabled "Change Password" button > -- > > Key: AMBARI-17865 > URL: https://issues.apache.org/jira/browse/AMBARI-17865 > Project: Ambari > Issue Type: Bug > Components: ambari-admin >Affects Versions: 2.4.0 >Reporter: Zhe (Joe) Wang >Assignee: Zhe (Joe) Wang > Fix For: 2.4.0 > > Attachments: AMBARI-17865.v0.patch > > > Incorrect "users.changePassword" -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-16027) Kafka upgrade from HDP 2.2 to HDP 2.3 is breaking
[ https://issues.apache.org/jira/browse/AMBARI-16027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15390308#comment-15390308 ] Sriharsha Chintalapani commented on AMBARI-16027: - [~sumitmohanty] earlier we removed the port during the upgrade. With this patch we keep the config: the user might have changed the port, so we need to preserve that port and reapply it as part of the new config. That is what is causing this unit test failure. > Kafka upgrade from HDP 2.2 to HDP 2.3 is breaking > - > > Key: AMBARI-16027 > URL: https://issues.apache.org/jira/browse/AMBARI-16027 > Project: Ambari > Issue Type: Bug >Reporter: Sriharsha Chintalapani >Assignee: Sriharsha Chintalapani >Priority: Blocker > Fix For: 2.4.0 > > Attachments: AMBARI-16027-V1.patch, AMBARI-16027.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-16027) Kafka upgrade from HDP 2.2 to HDP 2.3 is breaking
[ https://issues.apache.org/jira/browse/AMBARI-16027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15390305#comment-15390305 ] Sumit Mohanty commented on AMBARI-16027: [~harsha_ch]/[~afernandez] it appears that the patch may have resulted in the following UT failure: {code} Error Message Missing hdp_2_3_0_0_kafka_broker_deprecate_port in upgrade from HDP-2.2 to HDP-2.3 (ROLLING) Stacktrace junit.framework.AssertionFailedError: Missing hdp_2_3_0_0_kafka_broker_deprecate_port in upgrade from HDP-2.2 to HDP-2.3 (ROLLING) at org.apache.ambari.server.state.stack.ConfigUpgradeValidityTest.assertIdDefinitionExists(ConfigUpgradeValidityTest.java:189) at org.apache.ambari.server.state.stack.ConfigUpgradeValidityTest.testConfigurationDefinitionsExist(ConfigUpgradeValidityTest.java:152) {code} https://builds.apache.org/job/Ambari-trunk-Commit/5354/testReport/junit/org.apache.ambari.server.state.stack/ConfigUpgradeValidityTest/testConfigurationDefinitionsExist/ Not sure why it did not get flagged on the associated UT run. Can you check the failure? > Kafka upgrade from HDP 2.2 to HDP 2.3 is breaking > - > > Key: AMBARI-16027 > URL: https://issues.apache.org/jira/browse/AMBARI-16027 > Project: Ambari > Issue Type: Bug >Reporter: Sriharsha Chintalapani >Assignee: Sriharsha Chintalapani >Priority: Blocker > Fix For: 2.4.0 > > Attachments: AMBARI-16027-V1.patch, AMBARI-16027.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
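The failing test checks that every `<definition>` id referenced by an upgrade pack exists in the stack's config-upgrade.xml. A rough sketch of what the definition named in the error would look like there (the element layout is assumed from Ambari's config-upgrade.xml format, not copied from the actual file):

```xml
<!-- HDP/2.3/upgrades/config-upgrade.xml (layout assumed) -->
<service name="KAFKA">
  <component name="KAFKA_BROKER">
    <changes>
      <definition xsi:type="configure" id="hdp_2_3_0_0_kafka_broker_deprecate_port">
        <type>kafka-broker</type>
        <transfer operation="delete" delete-key="port"/>
      </definition>
    </changes>
  </component>
</service>
```

Removing such a definition while a ROLLING upgrade pack still references its id is what triggers the `ConfigUpgradeValidityTest` assertion quoted above.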
[jira] [Updated] (AMBARI-17843) App data aggregated for hosted apps is being calculated for all apps, not just configured ones
[ https://issues.apache.org/jira/browse/AMBARI-17843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aravindan Vijayan updated AMBARI-17843: --- Resolution: Fixed Status: Resolved (was: Patch Available) Pushed to trunk and branch-2.4 > App data aggregated for hosted apps is being calcualted for all apps, not > just configured ones > -- > > Key: AMBARI-17843 > URL: https://issues.apache.org/jira/browse/AMBARI-17843 > Project: Ambari > Issue Type: Bug > Components: ambari-metrics >Affects Versions: 2.2.2 >Reporter: Aravindan Vijayan >Assignee: Aravindan Vijayan >Priority: Critical > Fix For: 2.4.0 > > Attachments: AMBARI-17843.patch > > > timeline.metrics.service.cluster.aggregator.appIds defines what apps to > produce host metrics for: > {quote}List of application ids to use for aggregating host level metrics for > an application. Example: bytes_read across Yarn Nodemanagers. {quote} > Right now we are aggregating for all. > Additionally, metadata does not expose these additional metrics > {code} > 0: jdbc:phoenix:localhost:61181:/ams-hbase-un> select distinct(APP_ID) from > METRIC_AGGREGATE_MINUTE WHERE METRIC_NAME = 'cpu_user'; > ++ > | APP_ID | > ++ > | HOST | > | ams-hbase | > | amssmoketestfake | > | applicationhistoryserver | > | datanode | > | hivemetastore | > | hiveserver2 | > | jobhistoryserver | > | namenode | > | nimbus | > | nodemanager | > | resourcemanager | > ++ > 12 rows selected (0.117 seconds) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-17863) AMS - topN does not work when metric name has a wildcard specified
[ https://issues.apache.org/jira/browse/AMBARI-17863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aravindan Vijayan updated AMBARI-17863: --- Resolution: Fixed Status: Resolved (was: Patch Available) Pushed to trunk and branch-2.4 > AMS - topN does not work when metric name has a wildcard specified > -- > > Key: AMBARI-17863 > URL: https://issues.apache.org/jira/browse/AMBARI-17863 > Project: Ambari > Issue Type: Bug > Components: ambari-metrics >Affects Versions: 2.4.0 >Reporter: Aravindan Vijayan >Assignee: Aravindan Vijayan >Priority: Critical > Fix For: 2.4.0 > > Attachments: AMBARI-17863.patch > > > Queries to AMS with topN specified do not work if metric name has wildcard > (%) in them. > Example: > collectorhost:port/ws/v1/timeline/metrics?metricNames=cpu_%&appId=HOST&startTime=1469203016&endtime=1469204816&topN=2&isBottomN=false -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-17845) Storm cluster metrics do not show up because of AMS aggregation issue.
[ https://issues.apache.org/jira/browse/AMBARI-17845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aravindan Vijayan updated AMBARI-17845: --- Resolution: Fixed Status: Resolved (was: Patch Available) Pushed to trunk and branch-2.4 > Storm cluster metrics do not show up because of AMS aggregation issue. > -- > > Key: AMBARI-17845 > URL: https://issues.apache.org/jira/browse/AMBARI-17845 > Project: Ambari > Issue Type: Bug > Components: ambari-metrics >Affects Versions: 2.2.2 >Reporter: Aravindan Vijayan >Assignee: Aravindan Vijayan >Priority: Critical > Fix For: 2.4.0 > > Attachments: AMBARI-17845.patch > > > PROBLEM > Storm cluster metrics (aggregated across hosts) don't show up in Ambari and > Grafana > BUG > Storm metrics collect and send data in 1 minute intervals. Since their data > is present in the right end of the spectrum for the 2 minute aggregator > (start_time ~ server_time), a bug in the second aggregator is causing these > values to slip between 2 aggregator cycles. > FIX > Within the 2 minute interval, look for the data in the time shifted interval > ( ams-site:timeline.metrics.service.cluster.aggregator.timeshift.adjustment). > In case no data is present, look for the data outside the right boundary of > the interval. Use that to interpolate the data in the 30second slices. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
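The fix described above, anchoring the 30-second slices on a datapoint just past the window's right boundary, can be sketched as follows. This is an illustrative model of the interpolation step only (function names and numbers are hypothetical, not the actual AMS aggregator code, which is Java):

```python
def interpolate(points, t):
    """Linear interpolation at time t over sorted (timestamp, value) pairs."""
    for (t0, v0), (t1, v1) in zip(points, points[1:]):
        if t0 <= t <= t1:
            if t1 == t0:
                return v0
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return None  # t is outside the covered range

def slice_values(points, start, slice_seconds=30, n_slices=4):
    """Value for each 30-second slice of a 2-minute aggregation window.

    Because `points` may include a datapoint beyond the window's right
    boundary, slices near the end of the window still get a value instead
    of slipping between two aggregator cycles.
    """
    return [interpolate(points, start + i * slice_seconds) for i in range(n_slices)]

# One-minute Storm datapoints: the last one lies past the window's right edge.
points = [(0, 10.0), (60, 20.0), (120, 30.0)]
print(slice_values(points, start=0))  # [10.0, 15.0, 20.0, 25.0]
```

Without the out-of-window point at t=120, the slices at 90 seconds and beyond would have no right-hand anchor and would be dropped, which is the aggregation gap this patch closes.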
[jira] [Commented] (AMBARI-17840) Oozie service check failed after EU downgrade
[ https://issues.apache.org/jira/browse/AMBARI-17840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15390250#comment-15390250 ] Hudson commented on AMBARI-17840: - FAILURE: Integrated in Ambari-trunk-Commit #5369 (See [https://builds.apache.org/job/Ambari-trunk-Commit/5369/]) AMBARI-17840. Oozie service check failed after EU downgrade (ncole) (ncole: [http://git-wip-us.apache.org/repos/asf?p=ambari.git&a=commit&h=6b5387363ec9b2c7403d244fd65cf58a1e9b38ce]) * ambari-server/src/main/resources/stacks/HDP/2.2/upgrades/nonrolling-upgrade-2.4.xml * ambari-server/src/main/resources/stacks/HDP/2.2/upgrades/nonrolling-upgrade-2.2.xml * ambari-server/src/main/resources/stacks/HDP/2.2/upgrades/nonrolling-upgrade-2.3.xml * ambari-server/src/main/resources/stacks/HDP/2.4/upgrades/nonrolling-upgrade-2.5.xml * ambari-server/src/main/resources/stacks/HDP/2.3/upgrades/nonrolling-upgrade-2.3.xml * ambari-server/src/main/resources/stacks/HDP/2.5/upgrades/nonrolling-upgrade-2.5.xml * ambari-server/src/main/resources/stacks/HDP/2.3/upgrades/nonrolling-upgrade-2.4.xml * ambari-server/src/main/resources/stacks/HDP/2.4/upgrades/nonrolling-upgrade-2.4.xml * ambari-server/src/main/resources/stacks/HDP/2.3/upgrades/nonrolling-upgrade-2.5.xml > Oozie service check failed after EU downgrade > - > > Key: AMBARI-17840 > URL: https://issues.apache.org/jira/browse/AMBARI-17840 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Reporter: Nate Cole >Assignee: Nate Cole >Priority: Critical > Fix For: 2.4.0 > > Attachments: AMBARI-17840.patch > > > The EU Upgrade Packs are calling hdp-select set all at the end of an upgrade. > When downgrading, the Oozie Server and Oozie Client are being used out of a > mixed version, and the newer client is not compatible with the older server. > This occurs on downgrade. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-17865) incorrect text for disabled "Change Password" button
[ https://issues.apache.org/jira/browse/AMBARI-17865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhe (Joe) Wang updated AMBARI-17865: Status: Patch Available (was: Open) > incorrect text for disabled "Change Password" button > -- > > Key: AMBARI-17865 > URL: https://issues.apache.org/jira/browse/AMBARI-17865 > Project: Ambari > Issue Type: Bug > Components: ambari-admin >Affects Versions: 2.4.0 >Reporter: Zhe (Joe) Wang >Assignee: Zhe (Joe) Wang > Fix For: 2.4.0 > > Attachments: AMBARI-17865.v0.patch > > > Incorrect "users.changePassword" -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-17865) incorrect text for disabled "Change Password" button
[ https://issues.apache.org/jira/browse/AMBARI-17865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhe (Joe) Wang updated AMBARI-17865: Attachment: AMBARI-17865.v0.patch ambari-admin: Executed 76 of 76 SUCCESS (0.166 secs / 0.381 secs) Manual testing done > incorrect text for disabled "Change Password" button > -- > > Key: AMBARI-17865 > URL: https://issues.apache.org/jira/browse/AMBARI-17865 > Project: Ambari > Issue Type: Bug > Components: ambari-admin >Affects Versions: 2.4.0 >Reporter: Zhe (Joe) Wang >Assignee: Zhe (Joe) Wang > Fix For: 2.4.0 > > Attachments: AMBARI-17865.v0.patch > > > Incorrect "users.changePassword" -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (AMBARI-17865) incorrect text for disabled "Change Password" button
Zhe (Joe) Wang created AMBARI-17865: --- Summary: incorrect text for disabled "Change Password" button Key: AMBARI-17865 URL: https://issues.apache.org/jira/browse/AMBARI-17865 Project: Ambari Issue Type: Bug Components: ambari-admin Affects Versions: 2.4.0 Reporter: Zhe (Joe) Wang Assignee: Zhe (Joe) Wang Fix For: 2.4.0 Incorrect "users.changePassword" -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-17041) Support password type for custom properties
[ https://issues.apache.org/jira/browse/AMBARI-17041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Keta Patel updated AMBARI-17041: Status: Patch Available (was: Open) > Support password type for custom properties > --- > > Key: AMBARI-17041 > URL: https://issues.apache.org/jira/browse/AMBARI-17041 > Project: Ambari > Issue Type: New Feature > Components: ambari-server >Affects Versions: 2.2.2 >Reporter: Tuong Truong >Assignee: Keta Patel > Attachments: AMBARI-17041-July14.patch, AMBARI-17041-July15.patch, > AMBARI-17041-July20.patch, AMBARI-17041-July21-ES6.patch, > AMBARI-17041-July21-updated.patch, AMBARI-17041-July22.patch, > AMBARI-17041-trunk-July08.patch, AMBARI-17041-trunk-Jun29.patch, > AMBARI-17041-trunk.patch, add_property_pop_up.tiff, > ambari_web_failed_to_execute_test.png, > cluster_config_with_password_type_in_config_attributes_column.tiff, > custom_properties_after_save.tiff, custom_property_password_type.tiff, > custom_property_regular_type.tiff, property_type_schema.tiff, > schema_of_clusterconfig_table.tiff > > > Currently, services can define properties in the XML configuration files that > is flagged as type password: > > my.special.password > > PASSWORD > Password to be masked > > and it will be masked properly in the UI as well as blueprint. > Custom property should also support this option so that password can be added > as custom property and treat accordingly. > == > Proposed Design for the fix: > == > At present only the key-value information of the service properties is stored > in the DB ("clusterconfig" table in the "config_data" column). > The "config_attributes" column stores only certain attributes like "final" > indicating the list of properties set with the Final flag = true. > The information about the property-type (i.e PASSWORD, USER, GROUP, > ADDITIONAL_USER_PROPERTY, VALUE_FROM_PROPERTY_FILE, NOT_MANAGED_HDFS_PATH, > etc) is extracted from the corresponding service's property file (e.g. 
> hive-site.xml, core-site.xml, webhcat-env.xml, etc). These files contain > information of the existing properties only. Custom Properties added by > ambari user have no provision to store their additional attributes. > Since, for this Jira we are concerned with only attribute for > Custom Properties, we could add an additional field called "Property Type" in > the "Add Property" pop-up which shows up on clicking "Add Property ..." in > the Custom property section for a service. For now, only 2 options are shown > in the drop-down list: NONE and PASSWORD . > A few sample test properties are created using the new "Add Property" pop-up > as can be seen in the following attachments. > Attachments: > "add_property_pop_up.tiff" > "custom_property_password_type.tiff" > "custom_property_regular_type.tiff" > "custom_properties_after_save.tiff" > The information for these Custom properties is stored in the > DB in "clusterconfig" table, "config_attributes" column. > The schema for "clusterconfig" table can be seen in the attachment: > "schema_of_clusterconfig_table.tiff" > The content of the "config_attributes" column with the > information from the new Custom properties can be seen in the attachment: > "cluster_config_with_password_type_in_config_attributes_column.tiff" > Note: The fix so far is performed only for new Custom properties. The > information for existing properties is extracted from the > corresponding property xml files for the service. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-17041) Support password type for custom properties
[ https://issues.apache.org/jira/browse/AMBARI-17041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15390185#comment-15390185 ] Keta Patel commented on AMBARI-17041: - Updated *AMBARI-17041-July22.patch* contains the corrections regarding variable 'FINAL' as per Review Board suggestions. The ambari-web test results with the patch are as follows: 29343 tests complete (58 seconds) 154 tests pending > Support password type for custom properties > --- > > Key: AMBARI-17041 > URL: https://issues.apache.org/jira/browse/AMBARI-17041 > Project: Ambari > Issue Type: New Feature > Components: ambari-server >Affects Versions: 2.2.2 >Reporter: Tuong Truong >Assignee: Keta Patel > Attachments: AMBARI-17041-July14.patch, AMBARI-17041-July15.patch, > AMBARI-17041-July20.patch, AMBARI-17041-July21-ES6.patch, > AMBARI-17041-July21-updated.patch, AMBARI-17041-July22.patch, > AMBARI-17041-trunk-July08.patch, AMBARI-17041-trunk-Jun29.patch, > AMBARI-17041-trunk.patch, add_property_pop_up.tiff, > ambari_web_failed_to_execute_test.png, > cluster_config_with_password_type_in_config_attributes_column.tiff, > custom_properties_after_save.tiff, custom_property_password_type.tiff, > custom_property_regular_type.tiff, property_type_schema.tiff, > schema_of_clusterconfig_table.tiff > > > Currently, services can define properties in the XML configuration files that > is flagged as type password: > > my.special.password > > PASSWORD > Password to be masked > > and it will be masked properly in the UI as well as blueprint. > Custom property should also support this option so that password can be added > as custom property and treat accordingly. > == > Proposed Design for the fix: > == > At present only the key-value information of the service properties is stored > in the DB ("clusterconfig" table in the "config_data" column). 
> The "config_attributes" column stores only certain attributes like "final" > indicating the list of properties set with the Final flag = true. > The information about the property-type (i.e PASSWORD, USER, GROUP, > ADDITIONAL_USER_PROPERTY, VALUE_FROM_PROPERTY_FILE, NOT_MANAGED_HDFS_PATH, > etc) is extracted from the corresponding service's property file (e.g. > hive-site.xml, core-site.xml, webhcat-env.xml, etc). These files contain > information of the existing properties only. Custom Properties added by > ambari user have no provision to store their additional attributes. > Since, for this Jira we are concerned with only attribute for > Custom Properties, we could add an additional field called "Property Type" in > the "Add Property" pop-up which shows up on clicking "Add Property ..." in > the Custom property section for a service. For now, only 2 options are shown > in the drop-down list: NONE and PASSWORD . > A few sample test properties are created using the new "Add Property" pop-up > as can be seen in the following attachments. > Attachments: > "add_property_pop_up.tiff" > "custom_property_password_type.tiff" > "custom_property_regular_type.tiff" > "custom_properties_after_save.tiff" > The information for these Custom properties is stored in the > DB in "clusterconfig" table, "config_attributes" column. > The schema for "clusterconfig" table can be seen in the attachment: > "schema_of_clusterconfig_table.tiff" > The content of the "config_attributes" column with the > information from the new Custom properties can be seen in the attachment: > "cluster_config_with_password_type_in_config_attributes_column.tiff" > Note: The fix so far is performed only for new Custom properties. The > information for existing properties is extracted from the > corresponding property xml files for the service. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-17041) Support password type for custom properties
[ https://issues.apache.org/jira/browse/AMBARI-17041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Keta Patel updated AMBARI-17041: Status: Open (was: Patch Available) > Support password type for custom properties > --- > > Key: AMBARI-17041 > URL: https://issues.apache.org/jira/browse/AMBARI-17041 > Project: Ambari > Issue Type: New Feature > Components: ambari-server >Affects Versions: 2.2.2 >Reporter: Tuong Truong >Assignee: Keta Patel > Attachments: AMBARI-17041-July14.patch, AMBARI-17041-July15.patch, > AMBARI-17041-July20.patch, AMBARI-17041-July21-ES6.patch, > AMBARI-17041-July21-updated.patch, AMBARI-17041-July22.patch, > AMBARI-17041-trunk-July08.patch, AMBARI-17041-trunk-Jun29.patch, > AMBARI-17041-trunk.patch, add_property_pop_up.tiff, > ambari_web_failed_to_execute_test.png, > cluster_config_with_password_type_in_config_attributes_column.tiff, > custom_properties_after_save.tiff, custom_property_password_type.tiff, > custom_property_regular_type.tiff, property_type_schema.tiff, > schema_of_clusterconfig_table.tiff > > > Currently, services can define properties in the XML configuration files that > is flagged as type password: > > my.special.password > > PASSWORD > Password to be masked > > and it will be masked properly in the UI as well as blueprint. > Custom property should also support this option so that password can be added > as custom property and treat accordingly. > == > Proposed Design for the fix: > == > At present only the key-value information of the service properties is stored > in the DB ("clusterconfig" table in the "config_data" column). > The "config_attributes" column stores only certain attributes like "final" > indicating the list of properties set with the Final flag = true. > The information about the property-type (i.e PASSWORD, USER, GROUP, > ADDITIONAL_USER_PROPERTY, VALUE_FROM_PROPERTY_FILE, NOT_MANAGED_HDFS_PATH, > etc) is extracted from the corresponding service's property file (e.g. 
> hive-site.xml, core-site.xml, webhcat-env.xml, etc). These files contain > information of the existing properties only. Custom Properties added by > ambari user have no provision to store their additional attributes. > Since, for this Jira we are concerned with only attribute for > Custom Properties, we could add an additional field called "Property Type" in > the "Add Property" pop-up which shows up on clicking "Add Property ..." in > the Custom property section for a service. For now, only 2 options are shown > in the drop-down list: NONE and PASSWORD . > A few sample test properties are created using the new "Add Property" pop-up > as can be seen in the following attachments. > Attachments: > "add_property_pop_up.tiff" > "custom_property_password_type.tiff" > "custom_property_regular_type.tiff" > "custom_properties_after_save.tiff" > The information for these Custom properties is stored in the > DB in "clusterconfig" table, "config_attributes" column. > The schema for "clusterconfig" table can be seen in the attachment: > "schema_of_clusterconfig_table.tiff" > The content of the "config_attributes" column with the > information from the new Custom properties can be seen in the attachment: > "cluster_config_with_password_type_in_config_attributes_column.tiff" > Note: The fix so far is performed only for new Custom properties. The > information for existing properties is extracted from the > corresponding property xml files for the service. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-17862) Unexpected warning modal window is appearing while config modification
[ https://issues.apache.org/jira/browse/AMBARI-17862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15390184#comment-15390184 ] Aleksandr Kovalenko commented on AMBARI-17862: -- committed to trunk and branch-2.4 > Unexpected warning modal window is appearing while config modification > -- > > Key: AMBARI-17862 > URL: https://issues.apache.org/jira/browse/AMBARI-17862 > Project: Ambari > Issue Type: Bug > Components: ambari-web >Affects Versions: 2.4.0 >Reporter: Aleksandr Kovalenko >Assignee: Aleksandr Kovalenko >Priority: Critical > Fix For: 2.4.0 > > Attachments: AMBARI-17862.patch > > > Un expected modal window is appearing while modifying configs. This issue is > seen only on upgraded cluster. > Steps to reproduce : > 1) Upgrade HDP from 2.4(or older) to 2.5. > 2) Go to HIVE configs. > 3) Turn on property 'ACID Transactions' > 4) Turn off property 'ACID Transactions' (Do this without saving the changes > done in step 3.)So no configs are changed. > 5) Try to navigate to another service. > Warning modal window 'You have unsaved changes. Save changes or discard?' is > shown though no configs are being changed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-17862) Unexpected warning modal window is appearing while config modification
[ https://issues.apache.org/jira/browse/AMBARI-17862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksandr Kovalenko updated AMBARI-17862: - Resolution: Fixed Status: Resolved (was: Patch Available) > Unexpected warning modal window is appearing while config modification > -- > > Key: AMBARI-17862 > URL: https://issues.apache.org/jira/browse/AMBARI-17862 > Project: Ambari > Issue Type: Bug > Components: ambari-web >Affects Versions: 2.4.0 >Reporter: Aleksandr Kovalenko >Assignee: Aleksandr Kovalenko >Priority: Critical > Fix For: 2.4.0 > > Attachments: AMBARI-17862.patch > > > Un expected modal window is appearing while modifying configs. This issue is > seen only on upgraded cluster. > Steps to reproduce : > 1) Upgrade HDP from 2.4(or older) to 2.5. > 2) Go to HIVE configs. > 3) Turn on property 'ACID Transactions' > 4) Turn off property 'ACID Transactions' (Do this without saving the changes > done in step 3.)So no configs are changed. > 5) Try to navigate to another service. > Warning modal window 'You have unsaved changes. Save changes or discard?' is > shown though no configs are being changed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-17041) Support password type for custom properties
[ https://issues.apache.org/jira/browse/AMBARI-17041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Keta Patel updated AMBARI-17041: Attachment: AMBARI-17041-July22.patch > Support password type for custom properties > --- > > Key: AMBARI-17041 > URL: https://issues.apache.org/jira/browse/AMBARI-17041 > Project: Ambari > Issue Type: New Feature > Components: ambari-server >Affects Versions: 2.2.2 >Reporter: Tuong Truong >Assignee: Keta Patel > Attachments: AMBARI-17041-July14.patch, AMBARI-17041-July15.patch, > AMBARI-17041-July20.patch, AMBARI-17041-July21-ES6.patch, > AMBARI-17041-July21-updated.patch, AMBARI-17041-July22.patch, > AMBARI-17041-trunk-July08.patch, AMBARI-17041-trunk-Jun29.patch, > AMBARI-17041-trunk.patch, add_property_pop_up.tiff, > ambari_web_failed_to_execute_test.png, > cluster_config_with_password_type_in_config_attributes_column.tiff, > custom_properties_after_save.tiff, custom_property_password_type.tiff, > custom_property_regular_type.tiff, property_type_schema.tiff, > schema_of_clusterconfig_table.tiff > > > Currently, services can define properties in the XML configuration files that > are flagged as type password: > <property> > <name>my.special.password</name> > <property-type>PASSWORD</property-type> > <description>Password to be masked</description> > </property> > and it will be masked properly in the UI as well as in blueprints. > Custom properties should also support this option so that a password can be added > as a custom property and treated accordingly. > == > Proposed Design for the fix: > == > At present only the key-value information of the service properties is stored > in the DB ("clusterconfig" table in the "config_data" column). > The "config_attributes" column stores only certain attributes like "final", > indicating the list of properties set with the Final flag = true. > The information about the property-type (i.e. PASSWORD, USER, GROUP, > ADDITIONAL_USER_PROPERTY, VALUE_FROM_PROPERTY_FILE, NOT_MANAGED_HDFS_PATH, > etc.) is extracted from the corresponding service's property file (e.g. 
> hive-site.xml, core-site.xml, webhcat-env.xml, etc.). These files contain > information about the existing properties only. Custom Properties added by > the Ambari user have no provision to store their additional attributes. > Since, for this JIRA, we are concerned with only the PASSWORD attribute for > Custom Properties, we could add an additional field called "Property Type" in > the "Add Property" pop-up which shows up on clicking "Add Property ..." in > the Custom property section for a service. For now, only 2 options are shown > in the drop-down list: NONE and PASSWORD. > A few sample test properties are created using the new "Add Property" pop-up, > as can be seen in the following attachments. > Attachments: > "add_property_pop_up.tiff" > "custom_property_password_type.tiff" > "custom_property_regular_type.tiff" > "custom_properties_after_save.tiff" > The information for these Custom properties is stored in the > DB in the "clusterconfig" table, "config_attributes" column. > The schema for the "clusterconfig" table can be seen in the attachment: > "schema_of_clusterconfig_table.tiff" > The content of the "config_attributes" column with the > information from the new Custom properties can be seen in the attachment: > "cluster_config_with_password_type_in_config_attributes_column.tiff" > Note: The fix so far is performed only for new Custom properties. The > information for existing properties is extracted from the > corresponding property xml files for the service. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (AMBARI-17824) Show existing views under relevant service page as tabs
[ https://issues.apache.org/jira/browse/AMBARI-17824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15390146#comment-15390146 ] Manasi Maheshwari edited comment on AMBARI-17824 at 7/22/16 8:37 PM: - revert unnecessary changes in [^AMBARI-17824.2.addendum.patch] was (Author: manasim): revert unnecessary changes in [^AMBARI-17824.2.patch] > Show existing views under relevant service page as tabs > --- > > Key: AMBARI-17824 > URL: https://issues.apache.org/jira/browse/AMBARI-17824 > Project: Ambari > Issue Type: Task > Components: ambari-web >Affects Versions: 2.5.0 >Reporter: Jaimin D Jetly >Assignee: Manasi Maheshwari > Fix For: branch-embedded-views > > Attachments: AMBARI-17824.2.addendum.patch, AMBARI-17824.2.patch, > AMBARI-17824.patch > > > The following needs to be done as part of this work: > # view instances need to be shown under the relevant service as tabs > # content of the view resources needs to be shown in an iframe > # Quick links location needs to be adjusted so it does not collapse with view > tabs > # Left service menu on the service page needs to be removed to give more > space to view content > # Increase the span for the pages under the service page for larger width -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-17824) Show existing views under relevant service page as tabs
[ https://issues.apache.org/jira/browse/AMBARI-17824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15390146#comment-15390146 ] Manasi Maheshwari commented on AMBARI-17824: revert unnecessary changes in [^AMBARI-17824.2.patch] > Show existing views under relevant service page as tabs > --- > > Key: AMBARI-17824 > URL: https://issues.apache.org/jira/browse/AMBARI-17824 > Project: Ambari > Issue Type: Task > Components: ambari-web >Affects Versions: 2.5.0 >Reporter: Jaimin D Jetly >Assignee: Manasi Maheshwari > Fix For: branch-embedded-views > > Attachments: AMBARI-17824.2.addendum.patch, AMBARI-17824.2.patch, > AMBARI-17824.patch > > > The following needs to be done as part of this work: > # view instances need to be shown under the relevant service as tabs > # content of the view resources needs to be shown in an iframe > # Quick links location needs to be adjusted so it does not collapse with view > tabs > # Left service menu on the service page needs to be removed to give more > space to view content > # Increase the span for the pages under the service page for larger width -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (AMBARI-17824) Show existing views under relevant service page as tabs
[ https://issues.apache.org/jira/browse/AMBARI-17824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15390144#comment-15390144 ] Manasi Maheshwari edited comment on AMBARI-17824 at 7/22/16 8:30 PM: - revert unnecessary changes in [AMBARI-17824.2.patch] was (Author: manasim): revert unnecessary changes > Show existing views under relevant service page as tabs > --- > > Key: AMBARI-17824 > URL: https://issues.apache.org/jira/browse/AMBARI-17824 > Project: Ambari > Issue Type: Task > Components: ambari-web >Affects Versions: 2.5.0 >Reporter: Jaimin D Jetly >Assignee: Manasi Maheshwari > Fix For: branch-embedded-views > > Attachments: AMBARI-17824.2.addendum.patch, AMBARI-17824.2.patch, > AMBARI-17824.patch > > > The following needs to be done as part of this work: > # view instances need to be shown under the relevant service as tabs > # content of the view resources needs to be shown in an iframe > # Quick links location needs to be adjusted so it does not collapse with view > tabs > # Left service menu on the service page needs to be removed to give more > space to view content > # Increase the span for the pages under the service page for larger width -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Issue Comment Deleted] (AMBARI-17824) Show existing views under relevant service page as tabs
[ https://issues.apache.org/jira/browse/AMBARI-17824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Manasi Maheshwari updated AMBARI-17824: --- Comment: was deleted (was: revert unnecessary changes in [AMBARI-17824.2.patch]) > Show existing views under relevant service page as tabs > --- > > Key: AMBARI-17824 > URL: https://issues.apache.org/jira/browse/AMBARI-17824 > Project: Ambari > Issue Type: Task > Components: ambari-web >Affects Versions: 2.5.0 >Reporter: Jaimin D Jetly >Assignee: Manasi Maheshwari > Fix For: branch-embedded-views > > Attachments: AMBARI-17824.2.addendum.patch, AMBARI-17824.2.patch, > AMBARI-17824.patch > > > The following needs to be done as part of this work: > # view instances need to be shown under the relevant service as tabs > # content of the view resources needs to be shown in an iframe > # Quick links location needs to be adjusted so it does not collapse with view > tabs > # Left service menu on the service page needs to be removed to give more > space to view content > # Increase the span for the pages under the service page for larger width -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-17861) Ambari principal should be part of nimbus.admins for Storm View
[ https://issues.apache.org/jira/browse/AMBARI-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sriharsha Chintalapani updated AMBARI-17861: Attachment: AMBARI-17861.patch > Ambari principal should be part of nimbus.admins for Storm View > --- > > Key: AMBARI-17861 > URL: https://issues.apache.org/jira/browse/AMBARI-17861 > Project: Ambari > Issue Type: Bug >Reporter: Sriharsha Chintalapani >Assignee: Sriharsha Chintalapani >Priority: Blocker > Fix For: 2.4.0 > > Attachments: AMBARI-17861.patch, AMBARI-17861.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-17824) Show existing views under relevant service page as tabs
[ https://issues.apache.org/jira/browse/AMBARI-17824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Manasi Maheshwari updated AMBARI-17824: --- Attachment: AMBARI-17824.2.addendum.patch revert unnecessary changes > Show existing views under relevant service page as tabs > --- > > Key: AMBARI-17824 > URL: https://issues.apache.org/jira/browse/AMBARI-17824 > Project: Ambari > Issue Type: Task > Components: ambari-web >Affects Versions: 2.5.0 >Reporter: Jaimin D Jetly >Assignee: Manasi Maheshwari > Fix For: branch-embedded-views > > Attachments: AMBARI-17824.2.addendum.patch, AMBARI-17824.2.patch, > AMBARI-17824.patch > > > The following needs to be done as part of this work: > # view instances need to be shown under the relevant service as tabs > # content of the view resources needs to be shown in an iframe > # Quick links location needs to be adjusted so it does not collapse with view > tabs > # Left service menu on the service page needs to be removed to give more > space to view content > # Increase the span for the pages under the service page for larger width -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-17862) Unexpected warning modal window is appearing while config modification
[ https://issues.apache.org/jira/browse/AMBARI-17862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15390126#comment-15390126 ] Hadoop QA commented on AMBARI-17862: {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12819704/AMBARI-17862.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 1 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in ambari-web. Test results: https://builds.apache.org/job/Ambari-trunk-test-patch/7980//testReport/ Console output: https://builds.apache.org/job/Ambari-trunk-test-patch/7980//console This message is automatically generated. > Unexpected warning modal window is appearing while config modification > -- > > Key: AMBARI-17862 > URL: https://issues.apache.org/jira/browse/AMBARI-17862 > Project: Ambari > Issue Type: Bug > Components: ambari-web >Affects Versions: 2.4.0 >Reporter: Aleksandr Kovalenko >Assignee: Aleksandr Kovalenko >Priority: Critical > Fix For: 2.4.0 > > Attachments: AMBARI-17862.patch > > > Unexpected modal window is appearing while modifying configs. This issue is > seen only on an upgraded cluster. > Steps to reproduce: > 1) Upgrade HDP from 2.4 (or older) to 2.5. > 2) Go to HIVE configs. > 3) Turn on property 'ACID Transactions' > 4) Turn off property 'ACID Transactions' (Do this without saving the changes > done in step 3.) So no configs are changed. > 5) Try to navigate to another service. > Warning modal window 'You have unsaved changes. Save changes or discard?' is > shown though no configs are being changed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-17861) Ambari principal should be part of nimbus.admins for Storm View
[ https://issues.apache.org/jira/browse/AMBARI-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sumit Mohanty updated AMBARI-17861: --- Status: Patch Available (was: Open) > Ambari principal should be part of nimbus.admins for Storm View > --- > > Key: AMBARI-17861 > URL: https://issues.apache.org/jira/browse/AMBARI-17861 > Project: Ambari > Issue Type: Bug >Reporter: Sriharsha Chintalapani >Assignee: Sriharsha Chintalapani >Priority: Blocker > Fix For: 2.4.0 > > Attachments: AMBARI-17861.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (AMBARI-17864) [Grafana] Add Storm Dashboards
Prajwal Rao created AMBARI-17864: Summary: [Grafana] Add Storm Dashboards Key: AMBARI-17864 URL: https://issues.apache.org/jira/browse/AMBARI-17864 Project: Ambari Issue Type: Bug Components: ambari-metrics Affects Versions: 2.4.0 Reporter: Prajwal Rao Assignee: Prajwal Rao Priority: Critical Fix For: 2.4.0 Add the following Storm Dashboards - Storm Home - Storm Topology (templatized) - Storm Components (per topology) (templatized) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-17845) Storm cluster metrics do not show up because of AMS aggregation issue.
[ https://issues.apache.org/jira/browse/AMBARI-17845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aravindan Vijayan updated AMBARI-17845: --- Attachment: AMBARI-17845.patch > Storm cluster metrics do not show up because of AMS aggregation issue. > -- > > Key: AMBARI-17845 > URL: https://issues.apache.org/jira/browse/AMBARI-17845 > Project: Ambari > Issue Type: Bug > Components: ambari-metrics >Affects Versions: 2.2.2 >Reporter: Aravindan Vijayan >Assignee: Aravindan Vijayan >Priority: Critical > Fix For: 2.4.0 > > Attachments: AMBARI-17845.patch > > > PROBLEM > Storm cluster metrics (aggregated across hosts) don't show up in Ambari and > Grafana. > BUG > Storm metrics are collected and sent in 1-minute intervals. Since their data > is present at the right end of the spectrum for the 2-minute aggregator > (start_time ~ server_time), a bug in the second aggregator is causing these > values to slip between 2 aggregator cycles. > FIX > Within the 2-minute interval, look for the data in the time-shifted interval > (ams-site:timeline.metrics.service.cluster.aggregator.timeshift.adjustment). > In case no data is present, look for data outside the right boundary of > the interval. Use that to interpolate the data in the 30-second slices. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-17845) Storm cluster metrics do not show up because of AMS aggregation issue.
[ https://issues.apache.org/jira/browse/AMBARI-17845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aravindan Vijayan updated AMBARI-17845: --- Status: Patch Available (was: In Progress) > Storm cluster metrics do not show up because of AMS aggregation issue. > -- > > Key: AMBARI-17845 > URL: https://issues.apache.org/jira/browse/AMBARI-17845 > Project: Ambari > Issue Type: Bug > Components: ambari-metrics >Affects Versions: 2.2.2 >Reporter: Aravindan Vijayan >Assignee: Aravindan Vijayan >Priority: Critical > Fix For: 2.4.0 > > Attachments: AMBARI-17845.patch > > > PROBLEM > Storm cluster metrics (aggregated across hosts) don't show up in Ambari and > Grafana. > BUG > Storm metrics are collected and sent in 1-minute intervals. Since their data > is present at the right end of the spectrum for the 2-minute aggregator > (start_time ~ server_time), a bug in the second aggregator is causing these > values to slip between 2 aggregator cycles. > FIX > Within the 2-minute interval, look for the data in the time-shifted interval > (ams-site:timeline.metrics.service.cluster.aggregator.timeshift.adjustment). > In case no data is present, look for data outside the right boundary of > the interval. Use that to interpolate the data in the 30-second slices. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
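The FIX described in the AMBARI-17845 report above can be sketched as follows. This is an illustrative Python model of the idea only, not the actual AMS aggregator code: the function name `slice_values`, the data layout, and the 30-second slice constant are assumptions made for the sketch.

```python
# Illustrative sketch (NOT the AMS implementation): fill empty 30-second
# slices of an aggregation window using a datapoint found just outside the
# window's right boundary, as the FIX section above describes.

SLICE_MS = 30_000  # assumed 30-second slice width

def slice_values(points, window_start, window_end):
    """points: {timestamp_ms: value}. Returns one value per 30-second slice.

    A slice with datapoints is averaged; an empty slice borrows the first
    datapoint found at or after the window's right boundary (if any)."""
    boundary_ts = min((t for t in points if t >= window_end), default=None)
    out = []
    for s in range(window_start, window_end, SLICE_MS):
        in_slice = [v for t, v in points.items() if s <= t < s + SLICE_MS]
        if in_slice:
            out.append(sum(in_slice) / len(in_slice))
        elif boundary_ts is not None:
            # The reported bug: late-arriving values slipped between cycles;
            # the fix borrows the right-boundary value to interpolate.
            out.append(points[boundary_ts])
        else:
            out.append(None)  # nothing to interpolate from
    return out
```

With one datapoint just past the window's right boundary, every otherwise-empty slice is filled from it instead of being dropped.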
[jira] [Updated] (AMBARI-17843) App data aggregated for hosted apps is being calculated for all apps, not just configured ones
[ https://issues.apache.org/jira/browse/AMBARI-17843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aravindan Vijayan updated AMBARI-17843: --- Status: Patch Available (was: In Progress) > App data aggregated for hosted apps is being calculated for all apps, not > just configured ones > -- > > Key: AMBARI-17843 > URL: https://issues.apache.org/jira/browse/AMBARI-17843 > Project: Ambari > Issue Type: Bug > Components: ambari-metrics >Affects Versions: 2.2.2 >Reporter: Aravindan Vijayan >Assignee: Aravindan Vijayan >Priority: Critical > Fix For: 2.4.0 > > Attachments: AMBARI-17843.patch > > > timeline.metrics.service.cluster.aggregator.appIds defines what apps to > produce host metrics for: > {quote}List of application ids to use for aggregating host level metrics for > an application. Example: bytes_read across Yarn Nodemanagers. {quote} > Right now we are aggregating for all. > Additionally, metadata does not expose these additional metrics > {code} > 0: jdbc:phoenix:localhost:61181:/ams-hbase-un> select distinct(APP_ID) from > METRIC_AGGREGATE_MINUTE WHERE METRIC_NAME = 'cpu_user'; > ++ > | APP_ID | > ++ > | HOST | > | ams-hbase | > | amssmoketestfake | > | applicationhistoryserver | > | datanode | > | hivemetastore | > | hiveserver2 | > | jobhistoryserver | > | namenode | > | nimbus | > | nodemanager | > | resourcemanager | > ++ > 12 rows selected (0.117 seconds) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
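The appIds restriction quoted in AMBARI-17843 above amounts to filtering host-level aggregation by a configured allow-list. A toy sketch of that idea in Python; the function name, row shape, and sample app ids are made up for illustration and are not the AMS implementation:

```python
# Illustrative sketch (NOT the AMS code): aggregate a host-level metric per
# application, but only for app ids named in the configured property
# timeline.metrics.service.cluster.aggregator.appIds quoted above.

def aggregate_by_app(rows, configured_app_ids):
    """rows: iterable of (app_id, value). Returns {app_id: sum_of_values}
    for configured apps only; everything else is skipped."""
    allowed = set(configured_app_ids)
    totals = {}
    for app_id, value in rows:
        if app_id in allowed:  # the reported bug: this filter was missing
            totals[app_id] = totals.get(app_id, 0) + value
    return totals
```

Without the `allowed` check, every app id seen in the data produces an aggregate, which matches the 12-row Phoenix result in the report.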
[jira] [Updated] (AMBARI-17863) AMS - topN does not work when metric name has a wildcard specified
[ https://issues.apache.org/jira/browse/AMBARI-17863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aravindan Vijayan updated AMBARI-17863: --- Attachment: AMBARI-17863.patch > AMS - topN does not work when metric name has a wildcard specified > -- > > Key: AMBARI-17863 > URL: https://issues.apache.org/jira/browse/AMBARI-17863 > Project: Ambari > Issue Type: Bug > Components: ambari-metrics >Affects Versions: 2.4.0 >Reporter: Aravindan Vijayan >Assignee: Aravindan Vijayan >Priority: Critical > Fix For: 2.4.0 > > Attachments: AMBARI-17863.patch > > > Queries to AMS with topN specified do not work if metric name has wildcard > (%) in them. > Example: > collectorhost:port/ws/v1/timeline/metrics?metricNames=cpu_%&appId=HOST&startTime=1469203016&endtime=1469204816&topN=2&isBottomN=false -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-17863) AMS - topN does not work when metric name has a wildcard specified
[ https://issues.apache.org/jira/browse/AMBARI-17863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aravindan Vijayan updated AMBARI-17863: --- Status: Patch Available (was: Open) > AMS - topN does not work when metric name has a wildcard specified > -- > > Key: AMBARI-17863 > URL: https://issues.apache.org/jira/browse/AMBARI-17863 > Project: Ambari > Issue Type: Bug > Components: ambari-metrics >Affects Versions: 2.4.0 >Reporter: Aravindan Vijayan >Assignee: Aravindan Vijayan >Priority: Critical > Fix For: 2.4.0 > > Attachments: AMBARI-17863.patch > > > Queries to AMS with topN specified do not work if metric name has wildcard > (%) in them. > Example: > collectorhost:port/ws/v1/timeline/metrics?metricNames=cpu_%&appId=HOST&startTime=1469203016&endtime=1469204816&topN=2&isBottomN=false -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-17845) Storm cluster metrics do not show up because of AMS aggregation issue.
[ https://issues.apache.org/jira/browse/AMBARI-17845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aravindan Vijayan updated AMBARI-17845: --- Attachment: (was: AMBARI-17845.patch) > Storm cluster metrics do not show up because of AMS aggregation issue. > -- > > Key: AMBARI-17845 > URL: https://issues.apache.org/jira/browse/AMBARI-17845 > Project: Ambari > Issue Type: Bug > Components: ambari-metrics >Affects Versions: 2.2.2 >Reporter: Aravindan Vijayan >Assignee: Aravindan Vijayan >Priority: Critical > Fix For: 2.4.0 > > > PROBLEM > Storm cluster metrics (aggregated across hosts) don't show up in Ambari and > Grafana. > BUG > Storm metrics are collected and sent in 1-minute intervals. Since their data > is present at the right end of the spectrum for the 2-minute aggregator > (start_time ~ server_time), a bug in the second aggregator is causing these > values to slip between 2 aggregator cycles. > FIX > Within the 2-minute interval, look for the data in the time-shifted interval > (ams-site:timeline.metrics.service.cluster.aggregator.timeshift.adjustment). > In case no data is present, look for data outside the right boundary of > the interval. Use that to interpolate the data in the 30-second slices. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-17853) Upgrade from HDP 2.4 to HDP 2.5 is missing namenode HA configuration adjustments
[ https://issues.apache.org/jira/browse/AMBARI-17853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15390111#comment-15390111 ] Hudson commented on AMBARI-17853: - FAILURE: Integrated in Ambari-trunk-Commit #5368 (See [https://builds.apache.org/job/Ambari-trunk-Commit/5368/]) AMBARI-17853. Added namenode HA configuration adjustments to the upgrade (afernandez: [http://git-wip-us.apache.org/repos/asf?p=ambari.git&a=commit&h=28d24abb161f40150c7a64d5adf3c18d1e444a26]) * ambari-server/src/main/resources/stacks/HDP/2.4/upgrades/upgrade-2.5.xml * ambari-server/src/main/resources/stacks/HDP/2.4/upgrades/nonrolling-upgrade-2.5.xml * ambari-server/src/main/resources/stacks/HDP/2.4/upgrades/config-upgrade.xml > Upgrade from HDP 2.4 to HDP 2.5 is missing namenode HA configuration > adjustments > - > > Key: AMBARI-17853 > URL: https://issues.apache.org/jira/browse/AMBARI-17853 > Project: Ambari > Issue Type: Bug > Components: ambari-server, ambari-upgrade >Affects Versions: 2.4.0 >Reporter: Laszlo Puskas >Assignee: Laszlo Puskas > Fix For: 2.4.0 > > Attachments: AMBARI-17853.v1.patch > > Original Estimate: 24h > Remaining Estimate: 24h > > Problem: > When upgrading an NN HA cluster from HDP 2.4 to HDP 2.5, unwanted properties > from stack definitions are added to the configuration. > Solution: > The upgrade XMLs were altered to take NN HA configurations into account. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (AMBARI-17863) AMS - topN does not work when metric name has a wildcard specified
Aravindan Vijayan created AMBARI-17863: -- Summary: AMS - topN does not work when metric name has a wildcard specified Key: AMBARI-17863 URL: https://issues.apache.org/jira/browse/AMBARI-17863 Project: Ambari Issue Type: Bug Components: ambari-metrics Affects Versions: 2.4.0 Reporter: Aravindan Vijayan Assignee: Aravindan Vijayan Priority: Critical Fix For: 2.4.0 Queries to AMS with topN specified do not work if metric name has wildcard (%) in them. Example: collectorhost:port/ws/v1/timeline/metrics?metricNames=cpu_%&appId=HOST&startTime=1469203016&endtime=1469204816&topN=2&isBottomN=false -- This message was sent by Atlassian JIRA (v6.3.4#6332)
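For reference, a query like the AMBARI-17863 example above can be assembled with standard URL encoding, which escapes the `%` wildcard as `%25` on the wire. The collector host is omitted and the parameter names are copied from the report; this is only an illustrative sketch, not part of the fix:

```python
# Illustrative: build the failing topN query from the report. urlencode
# percent-escapes the '%' wildcard; parameter names are as given above.
from urllib.parse import urlencode

params = {
    "metricNames": "cpu_%",   # '%' is the wildcard from the report
    "appId": "HOST",
    "startTime": 1469203016,
    "endtime": 1469204816,    # spelled as in the reported query
    "topN": 2,
    "isBottomN": "false",
}
query = "/ws/v1/timeline/metrics?" + urlencode(params)
```

The resulting string carries `metricNames=cpu_%25`, i.e. the wildcard survives encoding, so the topN-plus-wildcard combination in the report is exercised.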
[jira] [Commented] (AMBARI-17862) Unexpected warning modal window is appearing while config modification
[ https://issues.apache.org/jira/browse/AMBARI-17862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15390093#comment-15390093 ] Antonenko Alexander commented on AMBARI-17862: -- +1 for the patch > Unexpected warning modal window is appearing while config modification > -- > > Key: AMBARI-17862 > URL: https://issues.apache.org/jira/browse/AMBARI-17862 > Project: Ambari > Issue Type: Bug > Components: ambari-web >Affects Versions: 2.4.0 >Reporter: Aleksandr Kovalenko >Assignee: Aleksandr Kovalenko >Priority: Critical > Fix For: 2.4.0 > > Attachments: AMBARI-17862.patch > > > Unexpected modal window is appearing while modifying configs. This issue is > seen only on an upgraded cluster. > Steps to reproduce: > 1) Upgrade HDP from 2.4 (or older) to 2.5. > 2) Go to HIVE configs. > 3) Turn on property 'ACID Transactions' > 4) Turn off property 'ACID Transactions' (Do this without saving the changes > done in step 3.) So no configs are changed. > 5) Try to navigate to another service. > Warning modal window 'You have unsaved changes. Save changes or discard?' is > shown though no configs are being changed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-17862) Unexpected warning modal window is appearing while config modification
[ https://issues.apache.org/jira/browse/AMBARI-17862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksandr Kovalenko updated AMBARI-17862: - Status: Patch Available (was: Open) > Unexpected warning modal window is appearing while config modification > -- > > Key: AMBARI-17862 > URL: https://issues.apache.org/jira/browse/AMBARI-17862 > Project: Ambari > Issue Type: Bug > Components: ambari-web >Affects Versions: 2.4.0 >Reporter: Aleksandr Kovalenko >Assignee: Aleksandr Kovalenko >Priority: Critical > Fix For: 2.4.0 > > Attachments: AMBARI-17862.patch > > > Unexpected modal window is appearing while modifying configs. This issue is > seen only on an upgraded cluster. > Steps to reproduce: > 1) Upgrade HDP from 2.4 (or older) to 2.5. > 2) Go to HIVE configs. > 3) Turn on property 'ACID Transactions' > 4) Turn off property 'ACID Transactions' (Do this without saving the changes > done in step 3.) So no configs are changed. > 5) Try to navigate to another service. > Warning modal window 'You have unsaved changes. Save changes or discard?' is > shown though no configs are being changed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-17862) Unexpected warning modal window is appearing while config modification
[ https://issues.apache.org/jira/browse/AMBARI-17862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksandr Kovalenko updated AMBARI-17862: - Attachment: AMBARI-17862.patch > Unexpected warning modal window is appearing while config modification > -- > > Key: AMBARI-17862 > URL: https://issues.apache.org/jira/browse/AMBARI-17862 > Project: Ambari > Issue Type: Bug > Components: ambari-web >Affects Versions: 2.4.0 >Reporter: Aleksandr Kovalenko >Assignee: Aleksandr Kovalenko >Priority: Critical > Fix For: 2.4.0 > > Attachments: AMBARI-17862.patch > > > Unexpected modal window is appearing while modifying configs. This issue is > seen only on an upgraded cluster. > Steps to reproduce: > 1) Upgrade HDP from 2.4 (or older) to 2.5. > 2) Go to HIVE configs. > 3) Turn on property 'ACID Transactions' > 4) Turn off property 'ACID Transactions' (Do this without saving the changes > done in step 3.) So no configs are changed. > 5) Try to navigate to another service. > Warning modal window 'You have unsaved changes. Save changes or discard?' is > shown though no configs are being changed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (AMBARI-17862) Unexpected warning modal window is appearing while config modification
Aleksandr Kovalenko created AMBARI-17862: Summary: Unexpected warning modal window is appearing while config modification Key: AMBARI-17862 URL: https://issues.apache.org/jira/browse/AMBARI-17862 Project: Ambari Issue Type: Bug Components: ambari-web Affects Versions: 2.4.0 Reporter: Aleksandr Kovalenko Assignee: Aleksandr Kovalenko Priority: Critical Fix For: 2.4.0 Unexpected modal window is appearing while modifying configs. This issue is seen only on an upgraded cluster. Steps to reproduce: 1) Upgrade HDP from 2.4 (or older) to 2.5. 2) Go to HIVE configs. 3) Turn on property 'ACID Transactions' 4) Turn off property 'ACID Transactions' (Do this without saving the changes done in step 3.) So no configs are changed. 5) Try to navigate to another service. Warning modal window 'You have unsaved changes. Save changes or discard?' is shown though no configs are being changed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-17840) Oozie service check failed after EU downgrade
[ https://issues.apache.org/jira/browse/AMBARI-17840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nate Cole updated AMBARI-17840: --- Resolution: Fixed Status: Resolved (was: Patch Available) > Oozie service check failed after EU downgrade > - > > Key: AMBARI-17840 > URL: https://issues.apache.org/jira/browse/AMBARI-17840 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Reporter: Nate Cole >Assignee: Nate Cole >Priority: Critical > Fix For: 2.4.0 > > Attachments: AMBARI-17840.patch > > > The EU Upgrade Packs are calling hdp-select set all at the end of an upgrade. > When downgrading, the Oozie Server and Oozie Client are being used out of a > mixed version, and the newer client is not compatible with the older server. > This occurs on downgrade. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-17840) Oozie service check failed after EU downgrade
[ https://issues.apache.org/jira/browse/AMBARI-17840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15390075#comment-15390075 ] Hadoop QA commented on AMBARI-17840: {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12819659/AMBARI-17840.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in ambari-server. Test results: https://builds.apache.org/job/Ambari-trunk-test-patch/7979//testReport/ Console output: https://builds.apache.org/job/Ambari-trunk-test-patch/7979//console This message is automatically generated. > Oozie service check failed after EU downgrade > - > > Key: AMBARI-17840 > URL: https://issues.apache.org/jira/browse/AMBARI-17840 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Reporter: Nate Cole >Assignee: Nate Cole >Priority: Critical > Fix For: 2.4.0 > > Attachments: AMBARI-17840.patch > > > The EU Upgrade Packs are calling hdp-select set all at the end of an upgrade. > When downgrading, the Oozie Server and Oozie Client are being used out of a > mixed version, and the newer client is not compatible with the older server. > This occurs on downgrade. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-17782) Config changes for Atlas in HDP 2.5 related to atlas.rest.address, atlas.cluster.name, etc
[ https://issues.apache.org/jira/browse/AMBARI-17782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15390048#comment-15390048 ] Hadoop QA commented on AMBARI-17782: {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12819689/AMBARI-17782.v1.patch against trunk revision . {color:red}-1 patch{color}. The patch command could not apply the patch. Console output: https://builds.apache.org/job/Ambari-trunk-test-patch/7978//console This message is automatically generated. > Config changes for Atlas in HDP 2.5 related to atlas.rest.address, > atlas.cluster.name, etc > -- > > Key: AMBARI-17782 > URL: https://issues.apache.org/jira/browse/AMBARI-17782 > Project: Ambari > Issue Type: Bug > Components: stacks >Affects Versions: 2.4.0 >Reporter: Alejandro Fernandez >Assignee: Alejandro Fernandez >Priority: Critical > Fix For: 2.4.0 > > Attachments: AMBARI-17782.v1.patch > > > To support Atlas in HDP 2.5, make several config changes: > /etc/hive/conf/atlas-application.properties: > * atlas.rest.address (add this property) > > /etc/hive/conf/hive-site.xml: > * atlas.cluster.name (remove this property) > * atlas.hook.hive.maxThreads (remove this property) > * atlas.hook.hive.minThreads (remove this property) > * atlas.rest.address (remove this property) > /etc/storm/conf/atlas-application.properties: > * atlas.rest.address (add this property) > > /etc/storm/conf/storm.yaml: > * atlas.cluster.name (remove this property) > > /etc/falcon/conf/atlas-application.properties: > * atlas.rest.address (add this property) > * atlas.cluster.name (this value is empty; need to set to correct value) > > /etc/sqoop/conf/atlas-application.properties: > * atlas.jaas.KafkaClient.option.keyTab (remove this property) > * atlas.jaas.KafkaClient.option.principal (remove this property) > * atlas.jaas.KafkaClient.option.storeKey (remove this property) > * 
atlas.jaas.KafkaClient.option.useKeyTab (remove this property) > * atlas.jaas.KafkaClient.option.useTicketCache=true (add this property) > * atlas.jaas.KafkaClient.option.renewTicket=true (add this property) > * atlas.rest.address (add this property) > > /etc/sqoop/conf/sqoop-site.xml: > * atlas.cluster.name (remove this property) > Also, there is no 'Custom sqoop-atlas-application.properties' section in Sqoop > Test Plan: > 1. Install Ambari 2.4, HDP 2.5 along with Hive, Storm, Kafka, Sqoop and all > of their dependencies. > Verify that none of the atlas.* configs exist nor the > $HOOK-atlas-application.properties file > Ensure that hive-site and storm.yaml don't have any Atlas properties. > 2. After #1, add Atlas and verify that all of the hooks have > atlas.rest.address and atlas.cluster.name > 3. After #2, kerberize the cluster and ensure Sqoop has the right configs in > its sqoop-atlas-application.properties file. > 4. After #1, kerberize the cluster. This should add the application configs > to sqoop-atlas-application.properties in the DB, but the file will not be > saved to the local file system until Atlas is added. > After this, add Atlas service (and its dependencies), and ensure that all of > the configs in /etc/sqoop/conf/sqoop-atlas-application.properties are correct. > 5. Install Ambari 2.2.2 with HDP 2.4 along with Hive, Storm, Kafka, Sqoop, > and Atlas. Upgrade Ambari to 2.4.0 and ensure Atlas still works. > Install bits for HDP 2.5, remove Atlas, and perform either an EU/RU to HDP > 2.5. > This should remove atlas.cluster.name and atlas.rest.address from hive-site, > plus several other security-related properties from sqoop-site if the cluster > was kerberized. > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
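[Editor's note] The add/remove property lists in AMBARI-17782 above amount to a dictionary transformation over each service's config. A minimal Python sketch of that intent (illustrative only, not the actual Ambari upgrade code; the Atlas URL value is a placeholder):

```python
# Illustrative sketch of the AMBARI-17782 config changes, NOT Ambari's real
# upgrade code. Configs are modeled as plain dicts.

# Properties the ticket removes from /etc/hive/conf/hive-site.xml.
HIVE_SITE_REMOVALS = [
    "atlas.cluster.name",
    "atlas.hook.hive.maxThreads",
    "atlas.hook.hive.minThreads",
    "atlas.rest.address",
]

# Property the ticket adds to each hook's atlas-application.properties
# (the URL here is a hypothetical placeholder, not a value from the ticket).
HOOK_ADDITIONS = {"atlas.rest.address": "http://atlas-host.example.com:21000"}

def migrate_hive_site(hive_site):
    """Return a copy of hive-site with the deprecated Atlas keys removed."""
    return {k: v for k, v in hive_site.items() if k not in HIVE_SITE_REMOVALS}

def migrate_hook_properties(props):
    """Return a copy of a hook's atlas-application.properties with the
    new keys merged in."""
    merged = dict(props)
    merged.update(HOOK_ADDITIONS)
    return merged
```

The same remove-then-add pattern applies to storm.yaml, sqoop-site.xml, and the Falcon/Sqoop hook property files listed above.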
[jira] [Commented] (AMBARI-17781) Implement the animation effect on sliding the assemblies
[ https://issues.apache.org/jira/browse/AMBARI-17781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15390036#comment-15390036 ] Xi Wang commented on AMBARI-17781: -- Committed to 2.4.0-multi > Implement the animation effect on sliding the assemblies > > > Key: AMBARI-17781 > URL: https://issues.apache.org/jira/browse/AMBARI-17781 > Project: Ambari > Issue Type: Task > Components: ambari-web >Affects Versions: 2.4.0 >Reporter: Xi Wang >Assignee: Xi Wang > Fix For: 3.0.0 > > Attachments: AMBARI-17781.patch > > > Currently we don't have any effect on sliding, so the assemblies will change > suddenly on clicking "Move Left" icon. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-17846) Assemblies page: Improve the browser window width change UX
[ https://issues.apache.org/jira/browse/AMBARI-17846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15390034#comment-15390034 ] Xi Wang commented on AMBARI-17846: -- Committed to 2.4.0-multi > Assemblies page: Improve the browser window width change UX > --- > > Key: AMBARI-17846 > URL: https://issues.apache.org/jira/browse/AMBARI-17846 > Project: Ambari > Issue Type: Task > Components: ambari-web >Affects Versions: 2.4.0 >Reporter: Xi Wang >Assignee: Xi Wang > Fix For: 3.0.0 > > Attachments: AMBARI-17846.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-17717) Ambari should have a script to add new repository and service to existing stack
[ https://issues.apache.org/jira/browse/AMBARI-17717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15390031#comment-15390031 ] Jayush Luniya commented on AMBARI-17717: [~lavjain] Management Packs can handle the case of adding Apache HAWQ to HDP or any other custom stack. So I don't see why we need this. The only thing that's probably remaining is https://issues.apache.org/jira/browse/AMBARI-15538. I would prefer that you look at using management packs and, if there are any gaps, address them rather than diverging. > Ambari should have a script to add new repository and service to existing > stack > --- > > Key: AMBARI-17717 > URL: https://issues.apache.org/jira/browse/AMBARI-17717 > Project: Ambari > Issue Type: Improvement > Components: ambari-server >Affects Versions: trunk, 2.4.0 >Reporter: Matt >Assignee: Lav Jain > Fix For: trunk, 2.4.0 > > Attachments: AMBARI-17717.patch > > > Ambari should have a script that users can run to add a custom service and > repository to the stack or an existing cluster. > {code} > Lavs-MacBook-Pro:scripts ljain$ ./add_common_service.py -h > Usage: add_common_service.py [options] > Options: > -h, --help show this help message and exit > -u USER, --user=USER Ambari login username (Required) > -p PASSWORD, --password=PASSWORD > Ambari login password. Providing password through > command line is not recommended. The script prompts > for the password. > -t STACK, --stack=STACK > Stack Name and Version to be added (Required).(Eg: > HDP-2.4 or HDP-2.5) > -s SERVICE, --service=SERVICE > Service Name and Version to be added.(Eg: HAWQ/2.0.0 > or PXF/3.0.0) > -r REPOURL, --repourl=REPOURL > Repository URL which points to the rpm packages > -i REPOID, --repoid=REPOID > Repository ID of the new repository > -o OSTYPE, --ostype=OSTYPE > OS for the new repository (Eg: redhat6) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
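[Editor's note] The usage text quoted in AMBARI-17717 maps directly onto a standard Python option parser. A hedged sketch of an equivalent CLI definition (illustrative; the real add_common_service.py may be implemented differently):

```python
# Sketch of a CLI matching the usage text above; illustrative only,
# not the actual add_common_service.py implementation.
import argparse

def build_parser():
    p = argparse.ArgumentParser(prog="add_common_service.py")
    p.add_argument("-u", "--user", required=True,
                   help="Ambari login username (Required)")
    p.add_argument("-p", "--password",
                   help="Ambari login password; prompted for if omitted")
    p.add_argument("-t", "--stack", required=True,
                   help="Stack Name and Version to be added, e.g. HDP-2.5")
    p.add_argument("-s", "--service",
                   help="Service Name and Version, e.g. HAWQ/2.0.0")
    p.add_argument("-r", "--repourl",
                   help="Repository URL which points to the rpm packages")
    p.add_argument("-i", "--repoid",
                   help="Repository ID of the new repository")
    p.add_argument("-o", "--ostype",
                   help="OS for the new repository, e.g. redhat6")
    return p
```

For example, `build_parser().parse_args(["-u", "admin", "-t", "HDP-2.5", "-s", "HAWQ/2.0.0"])` yields a namespace with `stack="HDP-2.5"`.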
[jira] [Updated] (AMBARI-17782) Config changes for Atlas in HDP 2.5 related to atlas.rest.address, atlas.cluster.name, etc
[ https://issues.apache.org/jira/browse/AMBARI-17782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alejandro Fernandez updated AMBARI-17782: - Description: To support Atlas in HDP 2.5, make several config changes: /etc/hive/conf/atlas-application.properties: * atlas.rest.address (add this property) /etc/hive/conf/hive-site.xml: * atlas.cluster.name (remove this property) * atlas.hook.hive.maxThreads (remove this property) * atlas.hook.hive.minThreads (remove this property) * atlas.rest.address (remove this property) /etc/storm/conf/atlas-application.properties: * atlas.rest.address (add this property) /etc/storm/conf/storm.yaml: * atlas.cluster.name (remove this property) /etc/falcon/conf/atlas-application.properties: * atlas.rest.address (add this property) * atlas.cluster.name (this value is empty; need to set to correct value) /etc/sqoop/conf/atlas-application.properties: * atlas.jaas.KafkaClient.option.keyTab (remove this property) * atlas.jaas.KafkaClient.option.principal (remove this property) * atlas.jaas.KafkaClient.option.storeKey (remove this property) * atlas.jaas.KafkaClient.option.useKeyTab (remove this property) * atlas.jaas.KafkaClient.option.useTicketCache=true (add this property) * atlas.jaas.KafkaClient.option.renewTicket=true (add this property) * atlas.rest.address (add this property) /etc/sqoop/conf/sqoop-site.xml: * atlas.cluster.name (remove this property) Also, there is no 'Custom sqoop-atlas-application.properties' section in Sqoop Test Plan: 1. Install Ambari 2.4, HDP 2.5 along with Hive, Storm, Kafka, Sqoop and all of their dependencies. Verify that none of the atlas.* configs exist nor the $HOOK-atlas-application.properties file Ensure that hive-site and storm.yaml don't have any Atlas properties. 2. After #1, add Atlas and verify that all of the hooks have atlas.rest.address and atlas.cluster.name 3. 
After #2, kerberize the cluster and ensure Sqoop has the right configs in its sqoop-atlas-application.properties file. 4. After #1, kerberize the cluster. This should add the application configs to sqoop-atlas-application.properties in the DB, but the file will not be saved to the local file system until Atlas is added. After this, add Atlas service (and its dependencies), and ensure that all of the configs in /etc/sqoop/conf/sqoop-atlas-application.properties are correct. 5. Install Ambari 2.2.2 with HDP 2.4 along with Hive, Storm, Kafka, Sqoop, and Atlas. Upgrade Ambari to 2.4.0 and ensure Atlas still works. Install bits for HDP 2.5, remove Atlas, and perform either an EU/RU to HDP 2.5. This should remove atlas.cluster.name and atlas.rest.address from hive-site, plus several other security-related properties from sqoop-site if the cluster was kerberized. was: To support Atlas in HDP 2.5, make several config changes: /etc/hive/conf/atlas-application.properties: * atlas.rest.address (add this property) /etc/hive/conf/hive-site.xml: * atlas.cluster.name (remove this property) * atlas.hook.hive.maxThreads (remove this property) * atlas.hook.hive.minThreads (remove this property) * atlas.rest.address (remove this property) /etc/storm/conf/atlas-application.properties: * atlas.rest.address (add this property) /etc/storm/conf/storm.yaml: * atlas.cluster.name (remove this property) /etc/falcon/conf/atlas-application.properties: * atlas.rest.address (add this property) * atlas.cluster.name (this value is empty; need to set to correct value) /etc/sqoop/conf/atlas-application.properties: * atlas.jaas.KafkaClient.option.keyTab (remove this property) * atlas.jaas.KafkaClient.option.principal (remove this property) * atlas.jaas.KafkaClient.option.storeKey (remove this property) * atlas.jaas.KafkaClient.option.useKeyTab (remove this property) * atlas.jaas.KafkaClient.option.useTicketCache=true (add this property) * atlas.jaas.KafkaClient.option.renewTicket=true (add this 
property) * atlas.rest.address (add this property) /etc/sqoop/conf/sqoop-site.xml: * atlas.cluster.name (remove this property) Also, there is no 'Custom sqoop-atlas-application.properties' section in Sqoop > Config changes for Atlas in HDP 2.5 related to atlas.rest.address, > atlas.cluster.name, etc > -- > > Key: AMBARI-17782 > URL: https://issues.apache.org/jira/browse/AMBARI-17782 > Project: Ambari > Issue Type: Bug > Components: stacks >Affects Versions: 2.4.0 >Reporter: Alejandro Fernandez >Assignee: Alejandro Fernandez >Priority: Critical > Fix For: 2.4.0 > > Attachments: AMBARI-17782.v1.patch > > > To support Atlas in HDP 2.5, make several config changes: > /etc/hive/conf/atlas-application.properties: > * atlas.rest.address (add this property) > > /etc/hive/conf/hive-site.xml: > * atlas.clu
[jira] [Updated] (AMBARI-17782) Config changes for Atlas in HDP 2.5 related to atlas.rest.address, atlas.cluster.name, etc
[ https://issues.apache.org/jira/browse/AMBARI-17782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alejandro Fernandez updated AMBARI-17782: - Description: To support Atlas in HDP 2.5, make several config changes: /etc/hive/conf/atlas-application.properties: * atlas.rest.address (add this property) /etc/hive/conf/hive-site.xml: * atlas.cluster.name (remove this property) * atlas.hook.hive.maxThreads (remove this property) * atlas.hook.hive.minThreads (remove this property) * atlas.rest.address (remove this property) /etc/storm/conf/atlas-application.properties: * atlas.rest.address (add this property) /etc/storm/conf/storm.yaml: * atlas.cluster.name (remove this property) /etc/falcon/conf/atlas-application.properties: * atlas.rest.address (add this property) * atlas.cluster.name (this value is empty; need to set to correct value) /etc/sqoop/conf/atlas-application.properties: * atlas.jaas.KafkaClient.option.keyTab (remove this property) * atlas.jaas.KafkaClient.option.principal (remove this property) * atlas.jaas.KafkaClient.option.storeKey (remove this property) * atlas.jaas.KafkaClient.option.useKeyTab (remove this property) * atlas.jaas.KafkaClient.option.useTicketCache=true (add this property) * atlas.jaas.KafkaClient.option.renewTicket=true (add this property) * atlas.rest.address (add this property) /etc/sqoop/conf/sqoop-site.xml: * atlas.cluster.name (remove this property) Also, there is no 'Custom sqoop-atlas-application.properties' section in Sqoop was: /etc/hive/conf/atlas-application.properties: * atlas.rest.address (add this property) /etc/hive/conf/hive-site.xml: * atlas.cluster.name (remove this property) * atlas.hook.hive.maxThreads (remove this property) * atlas.hook.hive.minThreads (remove this property) * atlas.rest.address (remove this property) /etc/storm/conf/atlas-application.properties: * atlas.rest.address (add this property) /etc/storm/conf/storm.yaml: * atlas.cluster.name (remove this property) 
/etc/falcon/conf/atlas-application.properties: * atlas.rest.address (add this property) * atlas.cluster.name (this value is empty; need to set to correct value) /etc/sqoop/conf/atlas-application.properties: * atlas.jaas.KafkaClient.option.keyTab (remove this property) * atlas.jaas.KafkaClient.option.principal (remove this property) * atlas.jaas.KafkaClient.option.storeKey (remove this property) * atlas.jaas.KafkaClient.option.useKeyTab (remove this property) * atlas.jaas.KafkaClient.option.useTicketCache=true (add this property) * atlas.jaas.KafkaClient.option.renewTicket=true (add this property) * atlas.rest.address (add this property) /etc/sqoop/conf/sqoop-site.xml: * atlas.cluster.name (remove this property) Also, there is no 'Custom sqoop-atlas-application.properties' section in Sqoop > Config changes for Atlas in HDP 2.5 related to atlas.rest.address, > atlas.cluster.name, etc > -- > > Key: AMBARI-17782 > URL: https://issues.apache.org/jira/browse/AMBARI-17782 > Project: Ambari > Issue Type: Bug > Components: stacks >Affects Versions: 2.4.0 >Reporter: Alejandro Fernandez >Assignee: Alejandro Fernandez >Priority: Critical > Fix For: 2.4.0 > > Attachments: AMBARI-17782.v1.patch > > > To support Atlas in HDP 2.5, make several config changes: > /etc/hive/conf/atlas-application.properties: > * atlas.rest.address (add this property) > > /etc/hive/conf/hive-site.xml: > * atlas.cluster.name (remove this property) > * atlas.hook.hive.maxThreads (remove this property) > * atlas.hook.hive.minThreads (remove this property) > * atlas.rest.address (remove this property) > /etc/storm/conf/atlas-application.properties: > * atlas.rest.address (add this property) > > /etc/storm/conf/storm.yaml: > * atlas.cluster.name (remove this property) > > /etc/falcon/conf/atlas-application.properties: > * atlas.rest.address (add this property) > * atlas.cluster.name (this value is empty; need to set to correct value) > > /etc/sqoop/conf/atlas-application.properties: > * 
atlas.jaas.KafkaClient.option.keyTab (remove this property) > * atlas.jaas.KafkaClient.option.principal (remove this property) > * atlas.jaas.KafkaClient.option.storeKey (remove this property) > * atlas.jaas.KafkaClient.option.useKeyTab (remove this property) > * atlas.jaas.KafkaClient.option.useTicketCache=true (add this property) > * atlas.jaas.KafkaClient.option.renewTicket=true (add this property) > * atlas.rest.address (add this property) > > /etc/sqoop/conf/sqoop-site.xml: > * atlas.cluster.name (remove this property) > Also, there is no 'Custom sqoop-atlas-application.properties' section in Sqoop -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-17861) Ambari principal should be part of nimbus.admins for Storm View
[ https://issues.apache.org/jira/browse/AMBARI-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sriharsha Chintalapani updated AMBARI-17861: Attachment: AMBARI-17861.patch > Ambari principal should be part of nimbus.admins for Storm View > --- > > Key: AMBARI-17861 > URL: https://issues.apache.org/jira/browse/AMBARI-17861 > Project: Ambari > Issue Type: Bug >Reporter: Sriharsha Chintalapani >Assignee: Sriharsha Chintalapani >Priority: Blocker > Fix For: 2.4.0 > > Attachments: AMBARI-17861.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (AMBARI-17861) Ambari principal should be part of nimbus.admins for Storm View
Sriharsha Chintalapani created AMBARI-17861: --- Summary: Ambari principal should be part of nimbus.admins for Storm View Key: AMBARI-17861 URL: https://issues.apache.org/jira/browse/AMBARI-17861 Project: Ambari Issue Type: Bug Reporter: Sriharsha Chintalapani Assignee: Sriharsha Chintalapani Priority: Blocker Fix For: 2.4.0 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-17782) Config changes for Atlas in HDP 2.5 related to atlas.rest.address, atlas.cluster.name, etc
[ https://issues.apache.org/jira/browse/AMBARI-17782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alejandro Fernandez updated AMBARI-17782: - Status: Patch Available (was: Open) > Config changes for Atlas in HDP 2.5 related to atlas.rest.address, > atlas.cluster.name, etc > -- > > Key: AMBARI-17782 > URL: https://issues.apache.org/jira/browse/AMBARI-17782 > Project: Ambari > Issue Type: Bug > Components: stacks >Affects Versions: 2.4.0 >Reporter: Alejandro Fernandez >Assignee: Alejandro Fernandez >Priority: Critical > Fix For: 2.4.0 > > Attachments: AMBARI-17782.v1.patch > > > /etc/hive/conf/atlas-application.properties: > * atlas.rest.address (add this property) > > /etc/hive/conf/hive-site.xml: > * atlas.cluster.name (remove this property) > * atlas.hook.hive.maxThreads (remove this property) > * atlas.hook.hive.minThreads (remove this property) > * atlas.rest.address (remove this property) > /etc/storm/conf/atlas-application.properties: > * atlas.rest.address (add this property) > > /etc/storm/conf/storm.yaml: > * atlas.cluster.name (remove this property) > > /etc/falcon/conf/atlas-application.properties: > * atlas.rest.address (add this property) > * atlas.cluster.name (this value is empty; need to set to correct value) > > /etc/sqoop/conf/atlas-application.properties: > * atlas.jaas.KafkaClient.option.keyTab (remove this property) > * atlas.jaas.KafkaClient.option.principal (remove this property) > * atlas.jaas.KafkaClient.option.storeKey (remove this property) > * atlas.jaas.KafkaClient.option.useKeyTab (remove this property) > * atlas.jaas.KafkaClient.option.useTicketCache=true (add this property) > * atlas.jaas.KafkaClient.option.renewTicket=true (add this property) > * atlas.rest.address (add this property) > > /etc/sqoop/conf/sqoop-site.xml: > * atlas.cluster.name (remove this property) > Also, there is no 'Custom sqoop-atlas-application.properties' section in Sqoop -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-17782) Config changes for Atlas in HDP 2.5 related to atlas.rest.address, atlas.cluster.name, etc
[ https://issues.apache.org/jira/browse/AMBARI-17782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alejandro Fernandez updated AMBARI-17782: - Attachment: AMBARI-17782.v1.patch > Config changes for Atlas in HDP 2.5 related to atlas.rest.address, > atlas.cluster.name, etc > -- > > Key: AMBARI-17782 > URL: https://issues.apache.org/jira/browse/AMBARI-17782 > Project: Ambari > Issue Type: Bug > Components: stacks >Affects Versions: 2.4.0 >Reporter: Alejandro Fernandez >Assignee: Alejandro Fernandez >Priority: Critical > Fix For: 2.4.0 > > Attachments: AMBARI-17782.v1.patch > > > /etc/hive/conf/atlas-application.properties: > * atlas.rest.address (add this property) > > /etc/hive/conf/hive-site.xml: > * atlas.cluster.name (remove this property) > * atlas.hook.hive.maxThreads (remove this property) > * atlas.hook.hive.minThreads (remove this property) > * atlas.rest.address (remove this property) > /etc/storm/conf/atlas-application.properties: > * atlas.rest.address (add this property) > > /etc/storm/conf/storm.yaml: > * atlas.cluster.name (remove this property) > > /etc/falcon/conf/atlas-application.properties: > * atlas.rest.address (add this property) > * atlas.cluster.name (this value is empty; need to set to correct value) > > /etc/sqoop/conf/atlas-application.properties: > * atlas.jaas.KafkaClient.option.keyTab (remove this property) > * atlas.jaas.KafkaClient.option.principal (remove this property) > * atlas.jaas.KafkaClient.option.storeKey (remove this property) > * atlas.jaas.KafkaClient.option.useKeyTab (remove this property) > * atlas.jaas.KafkaClient.option.useTicketCache=true (add this property) > * atlas.jaas.KafkaClient.option.renewTicket=true (add this property) > * atlas.rest.address (add this property) > > /etc/sqoop/conf/sqoop-site.xml: > * atlas.cluster.name (remove this property) > Also, there is no 'Custom sqoop-atlas-application.properties' section in Sqoop -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-17853) Upgrade from HDP 2.4 to HDP 2.5 is missing namenode HA configuration adjustments
[ https://issues.apache.org/jira/browse/AMBARI-17853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alejandro Fernandez updated AMBARI-17853: - Resolution: Fixed Status: Resolved (was: Patch Available) Pushed to trunk, commit 28d24abb161f40150c7a64d5adf3c18d1e444a26 branch-2.4, commit 0ec9466ba988e3e1554ceb8f493856ec707d53d6 > Upgrade from HDP 2.4 to HDP 2.5 is missing namenode HA configuration > adjustments > - > > Key: AMBARI-17853 > URL: https://issues.apache.org/jira/browse/AMBARI-17853 > Project: Ambari > Issue Type: Bug > Components: ambari-server, ambari-upgrade >Affects Versions: 2.4.0 >Reporter: Laszlo Puskas >Assignee: Laszlo Puskas > Fix For: 2.4.0 > > Attachments: AMBARI-17853.v1.patch > > Original Estimate: 24h > Remaining Estimate: 24h > > Problem: > When upgrading a NN HA cluster from HDP 2.4 to HDP 2.5 unwanted properties > from stack definitions are added to the configuration. > Solution: > Upgrade xml-s altered to take into account NN HA configurations. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-17852) TDE encryption didn't happen as key wasn't created
[ https://issues.apache.org/jira/browse/AMBARI-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15389941#comment-15389941 ] Hudson commented on AMBARI-17852: - FAILURE: Integrated in Ambari-trunk-Commit #5367 (See [https://builds.apache.org/job/Ambari-trunk-Commit/5367/]) AMBARI-17852 TDE encryption didnt happen as key wasnt created. (ababiichuk: [http://git-wip-us.apache.org/repos/asf?p=ambari.git&a=commit&h=8595774b59ba514213f2ccab20c6cb334ab3cdd6]) * ambari-web/app/mixins/common/configs/enhanced_configs.js * ambari-web/app/controllers/wizard/step7_controller.js * ambari-web/test/controllers/wizard/step7_test.js > TDE encryption didnt happen as key wasnt created > > > Key: AMBARI-17852 > URL: https://issues.apache.org/jira/browse/AMBARI-17852 > Project: Ambari > Issue Type: Bug > Components: ambari-web >Affects Versions: 2.4.0 >Reporter: Andrii Babiichuk >Assignee: Andrii Babiichuk >Priority: Blocker > Fix For: 2.4.0 > > Attachments: AMBARI-17852.patch > > > Seems like Ambari UI didn't apply recommendations for HDFS while adding > RANGER_KMS service -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-17626) Blueprint registration step uses wrong format for property-attributes in Configuration
[ https://issues.apache.org/jira/browse/AMBARI-17626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15389942#comment-15389942 ] Hudson commented on AMBARI-17626: - FAILURE: Integrated in Ambari-trunk-Commit #5367 (See [https://builds.apache.org/job/Ambari-trunk-Commit/5367/]) AMBARI-17626: Blueprint registration step uses wrong format for (dili: [http://git-wip-us.apache.org/repos/asf?p=ambari.git&a=commit&h=c879e937341c427714a0dff5c4bc73f272d4d30e]) * ambari-server/src/test/java/org/apache/ambari/server/controller/internal/ProvisionClusterRequestTest.java * ambari-server/src/main/java/org/apache/ambari/server/topology/ConfigurationFactory.java * ambari-server/src/test/java/org/apache/ambari/server/topology/ConfigurationFactoryTest.java > Blueprint registration step uses wrong format for property-attributes in > Configuration > -- > > Key: AMBARI-17626 > URL: https://issues.apache.org/jira/browse/AMBARI-17626 > Project: Ambari > Issue Type: Bug > Components: blueprints >Affects Versions: trunk >Reporter: Keta Patel >Assignee: Keta Patel > Fix For: trunk > > Attachments: AMBARI-17626-July08.patch, AMBARI-17626.patch, > cluster_with_defect_installed_from_blueprint.tiff, > cluster_with_fix_insatlled_from_blueprint.tiff, > original_cluster_used_to_create_blueprint.tiff > > > Blueprints make use of a population strategy in the registration step to > create JSON objects for properties and property-attributes. These properties > and property-attributes get stored in the database (DB) in the > "clusterconfig" table under "config_data" and "config_attributes" > respectively. Format of these JSON objects is critical as the UI parses these > objects assuming them to be in a certain format. > At present property-attributes like "final" get stored in "clusterconfig" > table in the format shown in attachment > "original_cluster_used_to_create_blueprint.tiff". > i.e. 
"final": {"attr1":"val1", "attr2":"val2"} > When a new cluster is to be installed using a blueprint, after the blueprint > is registered and the new cluster is installed using the registered > blueprint, the format of "property-attributes" is as shown in attachment > "cluster_with_defect_installed_from_blueprint.tiff". > i.e. "attr1":{"final":"val1"}, "attr2":{"final":"val2"}, > "final":{"attr1":"val1", "attr2":"val2"} > The population strategy is responsible for the "attr1":{"final":"val1"} and > "attr2":{"final":"val2"}. > The reason for "final":{"attr1":"val1", "attr2":"val2"} is because of a merge > with the Parent Configuration during blueprint registration. This Parent > Configuration comes from Stack using the XML files of the service properties. > Because of this "final" attribute, the UI still shows the properties > correctly and hence it went undetected. > Proposed fix involves correcting the population strategy so that the > attributes are placed in the correct format. After the fix, the new cluster > installed via blueprint shows "property-attributes" as shown in attachment > "cluster_with_fix_insatlled_from_blueprint.tiff" > i.e. "final":{"attr1":"val1", "attr2":"val2"} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-17835) Maximum validation failure for 'yarn.scheduler.maximum-allocation-mb' after dependency change
[ https://issues.apache.org/jira/browse/AMBARI-17835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15389944#comment-15389944 ] Hudson commented on AMBARI-17835: - FAILURE: Integrated in Ambari-trunk-Commit #5367 (See [https://builds.apache.org/job/Ambari-trunk-Commit/5367/]) Revert "AMBARI-17835 Maximum validation failure for (dsen: [http://git-wip-us.apache.org/repos/asf?p=ambari.git&a=commit&h=40bdcb6480abfe6393bac9be961ae53654c1cb2f]) * ambari-server/src/main/java/org/apache/ambari/server/api/services/stackadvisor/commands/StackAdvisorCommand.java * ambari-server/src/main/resources/stacks/stack_advisor.py * ambari-server/src/test/python/stacks/2.2/common/test_stack_advisor.py * ambari-server/src/main/resources/stacks/HDP/2.2/services/stack_advisor.py * ambari-server/src/test/java/org/apache/ambari/server/api/services/stackadvisor/commands/StackAdvisorCommandTest.java > Maximum validation failure for 'yarn.scheduler.maximum-allocation-mb' after > dependency change > - > > Key: AMBARI-17835 > URL: https://issues.apache.org/jira/browse/AMBARI-17835 > Project: Ambari > Issue Type: Bug >Affects Versions: 2.4.0 >Reporter: Dmytro Sen >Assignee: Dmytro Sen >Priority: Critical > Fix For: 2.4.0 > > Attachments: AMBARI-17835-trunk.patch > > > STR > 1) Set yarn.nodemanager.resource.memory-mb to be more than recommended default > 2) click next -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-17858) Hive service check failed
[ https://issues.apache.org/jira/browse/AMBARI-17858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15389943#comment-15389943 ] Hudson commented on AMBARI-17858: - FAILURE: Integrated in Ambari-trunk-Commit #5367 (See [https://builds.apache.org/job/Ambari-trunk-Commit/5367/]) AMBARI-17858 Hive service check failed (dsen) (dsen: [http://git-wip-us.apache.org/repos/asf?p=ambari.git&a=commit&h=bdcd39bfe5e1835ed19001e79e4cf2bcb3297fa4]) * ambari-server/src/test/python/stacks/2.2/common/test_stack_advisor.py * ambari-server/src/main/resources/stacks/HDP/2.2/services/stack_advisor.py > Hive service check failed > - > > Key: AMBARI-17858 > URL: https://issues.apache.org/jira/browse/AMBARI-17858 > Project: Ambari > Issue Type: Bug > Components: stacks >Affects Versions: 2.4.0 >Reporter: Dmytro Sen >Assignee: Dmytro Sen >Priority: Critical > Fix For: 2.4.0 > > Attachments: AMBARI-17858.patch > > > STR: > Install cluster with HDFS and Zk > Add other services (Yarn, MR, HIVE, Tez, Hbase, Sqoop, Oozie, Falcon, Storm, > Flume, Spark, Smartsense, Logsearch, Slider) > Hive install goes through without error > Enable Security (Hive service check fails here) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-17860) UI does not respond when trying to compare AMS configs a few times after upgrade from 2.0.2 to 2.4.0.0
[ https://issues.apache.org/jira/browse/AMBARI-17860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15389894#comment-15389894 ] Hadoop QA commented on AMBARI-17860: {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12819669/AMBARI-17860.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in ambari-web. Test results: https://builds.apache.org/job/Ambari-trunk-test-patch/7976//testReport/ Console output: https://builds.apache.org/job/Ambari-trunk-test-patch/7976//console This message is automatically generated. > UI does not response when try to compare AMS configs few times after Upgrade > from 2.0.2 to 2.4.0.0 > -- > > Key: AMBARI-17860 > URL: https://issues.apache.org/jira/browse/AMBARI-17860 > Project: Ambari > Issue Type: Bug > Components: ambari-web >Affects Versions: 2.4.0 >Reporter: Antonenko Alexander >Assignee: Antonenko Alexander >Priority: Critical > Fix For: 2.4.0 > > Attachments: AMBARI-17860.patch > > > STR: > 1) Install old version (Add ranger) > 2) Enable Rangers Pluggins (Hbase ranger plugin was enabled) > 2) Make ambari only upgrade > 3)Enable/Disable security > 4)try to compare AMS configs -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-17840) Oozie service check failed after EU downgrade
[ https://issues.apache.org/jira/browse/AMBARI-17840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nate Cole updated AMBARI-17840: --- Status: Patch Available (was: Open) > Oozie service check failed after EU downgrade > - > > Key: AMBARI-17840 > URL: https://issues.apache.org/jira/browse/AMBARI-17840 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Reporter: Nate Cole >Assignee: Nate Cole >Priority: Critical > Fix For: 2.4.0 > > Attachments: AMBARI-17840.patch > > > The EU upgrade packs call hdp-select set all at the end of an upgrade. > On downgrade, the Oozie Server and Oozie Client end up running from a > mixed-version install, and the newer client is not compatible with the older > server. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
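The mixed-version failure described above can be sketched as a simple guard: after hdp-select set all has moved the client onto the new stack, a downgraded server is older than its client and the check should be expected to fail. This is an illustrative sketch only; the version-string parsing and the compatibility rule are assumptions, not Ambari's actual logic.

```python
def versions_compatible(client_version, server_version):
    """Hypothetical compatibility rule: a client newer than its server
    is treated as incompatible (the failure mode seen on downgrade)."""
    def parse(v):
        # "2.5.0.0-934" -> (2, 5, 0, 0); the build-number suffix is ignored
        return tuple(int(p) for p in v.split("-")[0].split("."))
    return parse(client_version) <= parse(server_version)

# Client left on the new stack while the server was downgraded:
versions_compatible("2.5.0.0-934", "2.4.0.0")  # -> False
```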
[jira] [Updated] (AMBARI-17843) App data aggregated for hosted apps is being calculated for all apps, not just configured ones
[ https://issues.apache.org/jira/browse/AMBARI-17843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aravindan Vijayan updated AMBARI-17843: --- Priority: Critical (was: Major) > App data aggregated for hosted apps is being calculated for all apps, not > just configured ones > -- > > Key: AMBARI-17843 > URL: https://issues.apache.org/jira/browse/AMBARI-17843 > Project: Ambari > Issue Type: Bug > Components: ambari-metrics >Affects Versions: 2.2.2 >Reporter: Aravindan Vijayan >Assignee: Aravindan Vijayan >Priority: Critical > Fix For: 2.4.0 > > Attachments: AMBARI-17843.patch > > > timeline.metrics.service.cluster.aggregator.appIds defines what apps to > produce host metrics for: > {quote}List of application ids to use for aggregating host level metrics for > an application. Example: bytes_read across Yarn Nodemanagers. {quote} > Right now we are aggregating for all apps. > Additionally, metadata does not expose these additional metrics: > {code} > 0: jdbc:phoenix:localhost:61181:/ams-hbase-un> select distinct(APP_ID) from > METRIC_AGGREGATE_MINUTE WHERE METRIC_NAME = 'cpu_user'; > ++ > | APP_ID | > ++ > | HOST | > | ams-hbase | > | amssmoketestfake | > | applicationhistoryserver | > | datanode | > | hivemetastore | > | hiveserver2 | > | jobhistoryserver | > | namenode | > | nimbus | > | nodemanager | > | resourcemanager | > ++ > 12 rows selected (0.117 seconds) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
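The intent of timeline.metrics.service.cluster.aggregator.appIds, as quoted above, is to aggregate host-level metrics only for the configured apps; the bug is that all apps are aggregated. The intended behavior can be sketched as a filter step. The record shape and helper name below are illustrative assumptions, not Ambari Metrics internals.

```python
def filter_aggregatable(records, configured_app_ids):
    """Keep only metric records whose app id is in the configured
    comma-separated list, plus the implicit HOST app (hypothetical sketch)."""
    allowed = {a.strip().lower() for a in configured_app_ids.split(",")} | {"host"}
    return [r for r in records if r["app_id"].lower() in allowed]

records = [
    {"app_id": "HOST", "metric": "cpu_user"},
    {"app_id": "nimbus", "metric": "cpu_user"},       # not configured -> dropped
    {"app_id": "datanode", "metric": "bytes_read"},   # configured -> kept
]
# The property value is a comma-separated list of app ids:
filtered = filter_aggregatable(records, "datanode,nodemanager,hbase")
```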
[jira] [Commented] (AMBARI-17860) UI does not respond when trying to compare AMS configs a few times after Upgrade from 2.0.2 to 2.4.0.0
[ https://issues.apache.org/jira/browse/AMBARI-17860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15389845#comment-15389845 ] Aleksandr Kovalenko commented on AMBARI-17860: -- +1 for the patch > UI does not respond when trying to compare AMS configs a few times after Upgrade > from 2.0.2 to 2.4.0.0 > -- > > Key: AMBARI-17860 > URL: https://issues.apache.org/jira/browse/AMBARI-17860 > Project: Ambari > Issue Type: Bug > Components: ambari-web >Affects Versions: 2.4.0 >Reporter: Antonenko Alexander >Assignee: Antonenko Alexander >Priority: Critical > Fix For: 2.4.0 > > Attachments: AMBARI-17860.patch > > > STR: > 1) Install old version (add Ranger) > 2) Enable Ranger plugins (HBase Ranger plugin was enabled) > 3) Perform an Ambari-only upgrade > 4) Enable/Disable security > 5) Try to compare AMS configs -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-17860) UI does not respond when trying to compare AMS configs a few times after Upgrade from 2.0.2 to 2.4.0.0
[ https://issues.apache.org/jira/browse/AMBARI-17860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Antonenko Alexander updated AMBARI-17860: - Status: Patch Available (was: Open) > UI does not respond when trying to compare AMS configs a few times after Upgrade > from 2.0.2 to 2.4.0.0 > -- > > Key: AMBARI-17860 > URL: https://issues.apache.org/jira/browse/AMBARI-17860 > Project: Ambari > Issue Type: Bug > Components: ambari-web >Affects Versions: 2.4.0 >Reporter: Antonenko Alexander >Assignee: Antonenko Alexander >Priority: Critical > Fix For: 2.4.0 > > Attachments: AMBARI-17860.patch > > > STR: > 1) Install old version (add Ranger) > 2) Enable Ranger plugins (HBase Ranger plugin was enabled) > 3) Perform an Ambari-only upgrade > 4) Enable/Disable security > 5) Try to compare AMS configs -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-17860) UI does not respond when trying to compare AMS configs a few times after Upgrade from 2.0.2 to 2.4.0.0
[ https://issues.apache.org/jira/browse/AMBARI-17860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Antonenko Alexander updated AMBARI-17860: - Attachment: AMBARI-17860.patch > UI does not respond when trying to compare AMS configs a few times after Upgrade > from 2.0.2 to 2.4.0.0 > -- > > Key: AMBARI-17860 > URL: https://issues.apache.org/jira/browse/AMBARI-17860 > Project: Ambari > Issue Type: Bug > Components: ambari-web >Affects Versions: 2.4.0 >Reporter: Antonenko Alexander >Assignee: Antonenko Alexander >Priority: Critical > Fix For: 2.4.0 > > Attachments: AMBARI-17860.patch > > > STR: > 1) Install old version (add Ranger) > 2) Enable Ranger plugins (HBase Ranger plugin was enabled) > 3) Perform an Ambari-only upgrade > 4) Enable/Disable security > 5) Try to compare AMS configs -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (AMBARI-17860) UI does not respond when trying to compare AMS configs a few times after Upgrade from 2.0.2 to 2.4.0.0
Antonenko Alexander created AMBARI-17860: Summary: UI does not respond when trying to compare AMS configs a few times after Upgrade from 2.0.2 to 2.4.0.0 Key: AMBARI-17860 URL: https://issues.apache.org/jira/browse/AMBARI-17860 Project: Ambari Issue Type: Bug Components: ambari-web Affects Versions: 2.4.0 Reporter: Antonenko Alexander Assignee: Antonenko Alexander Priority: Critical Fix For: 2.4.0 STR: 1) Install old version (add Ranger) 2) Enable Ranger plugins (HBase Ranger plugin was enabled) 3) Perform an Ambari-only upgrade 4) Enable/Disable security 5) Try to compare AMS configs -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-17859) YARN service check failed during EU from HDP-2.4.0.0 to Erie
[ https://issues.apache.org/jira/browse/AMBARI-17859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmitry Lysnichenko updated AMBARI-17859: Fix Version/s: 2.4.0 > YARN service check failed during EU from HDP-2.4.0.0 to Erie > > > Key: AMBARI-17859 > URL: https://issues.apache.org/jira/browse/AMBARI-17859 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Affects Versions: 2.4.0 >Reporter: Dmitry Lysnichenko >Assignee: Dmitry Lysnichenko > Fix For: 2.4.0 > > Attachments: AMBARI-17859.patch > > > *Steps* > # Deploy HDP-2.4.0.0 cluster with Ambari 2.2.1.1 (secure, non-HA cluster, > customized service users) > # Upgrade Ambari to 2.4.0.0 > # Perform EU to 2.5.0.0-934 > *Result* > During EU, observed YARN service check reported below errors: > {code} > Traceback (most recent call last):\n File > \"/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py\", > line 159, in \nServiceCheck().execute()\n File > \"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py\", > line 280, in execute\nmethod(env)\n File > \"/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py\", > line 117, in service_check\nuser=params.smokeuser,\n File > \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line > 71, in inner\nresult = function(command, **kwargs)\n File > \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line > 93, in checked_call\ntries=tries, try_sleep=try_sleep)\n File > \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line > 141, in _call_wrapper\nresult = _call(command, **kwargs_copy)\n File > \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line > 294, in _call\nraise > Fail(err_msg)\nresource_management.core.exceptions.Fail: Execution of > '/usr/bin/kinit -kt /etc/security/keytabs/smokeuser.headless.keytab > 
smk_rndalxit7oqf2pq3erlqg6r...@hwqe.hortonworks.com; yarn > org.apache.hadoop.yarn.applications.distributedshell.Client -shell_command ls > -num_containers 1 -jar > /usr/hdp/current/hadoop-yarn-client/hadoop-yarn-applications-distributedshell.jar > -timeout 30 --queue default' returned 2. Hortonworks > #\nThis is MOTD message, added for testing in qe infra\n16/07/09 > 11:15:01 INFO impl.TimelineClientImpl: Timeline service address: > http://host:8188/ws/v1/timeline/\n16/07/09 11:15:01 INFO > distributedshell.Client: Initializing Client\n16/07/09 11:15:01 INFO > distributedshell.Client: Running Client\n16/07/09 11:15:01 INFO > client.RMProxy: Connecting to ResourceManager at > host-5.domainlocal/10.0.113.157:8050\n16/07/09 11:15:03 INFO > distributedshell.Client: Got Cluster metric info from ASM, > numNodeManagers=3\n16/07/09 11:15:03 INFO distributedshell.Client: Got > Cluster node info from ASM\n16/07/09 11:15:03 INFO distributedshell.Client: > Got node report from ASM for, nodeId=host:25454, nodeAddresshost:8042, > nodeRackName/default-rack, nodeNumContainers0\n16/07/09 11:15:03 INFO > distributedshell.Client: Got node report from ASM for, > nodeId=host-5.domainlocal:25454, nodeAddresshost-5.domainlocal:8042, > nodeRackName/default-rack, nodeNumContainers0\n16/07/09 11:15:03 INFO > distributedshell.Client: Got node report from ASM for, > nodeId=host-1.domainlocal:25454, nodeAddresshost-1.domainlocal:8042, > nodeRackName/default-rack, nodeNumContainers0\n16/07/09 11:15:03 INFO > distributedshell.Client: Queue info, queueName=default, > queueCurrentCapacity=0.0, queueMaxCapacity=1.0, queueApplicationCount=0, > queueChildQueueCount=0\n16/07/09 11:15:04 INFO distributedshell.Client: User > ACL Info for Queue, queueName=root, userAcl=SUBMIT_APPLICATIONS\n16/07/09 > 11:15:04 INFO distributedshell.Client: User ACL Info for Queue, > queueName=default, userAcl=SUBMIT_APPLICATIONS\n16/07/09 11:15:04 INFO > distributedshell.Client: Max mem capability of resources in this 
cluster > 10240\n16/07/09 11:15:04 INFO distributedshell.Client: Max virtual cores > capabililty of resources in this cluster 1\n16/07/09 11:15:04 INFO > distributedshell.Client: Copy App Master jar from local filesystem and add to > local environment\n16/07/09 11:15:04 INFO distributedshell.Client: Set the > environment for the application master\n16/07/09 11:15:04 INFO > distributedshell.Client: Setting up app master command\n16/07/09 11:15:04 > INFO distributedshell.Client: Completed setting up app master command > {{JAVA_HOME}}/bin/java -Xmx10m > org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster > --container_memory 10 --container_vcores 1 --num_containers 1 --priority 0 >
[jira] [Updated] (AMBARI-17859) YARN service check failed during EU from HDP-2.4.0.0 to Erie
[ https://issues.apache.org/jira/browse/AMBARI-17859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmitry Lysnichenko updated AMBARI-17859: Affects Version/s: 2.4.0 > YARN service check failed during EU from HDP-2.4.0.0 to Erie > > > Key: AMBARI-17859 > URL: https://issues.apache.org/jira/browse/AMBARI-17859 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Affects Versions: 2.4.0 >Reporter: Dmitry Lysnichenko >Assignee: Dmitry Lysnichenko > Fix For: 2.4.0 > > Attachments: AMBARI-17859.patch > > > *Steps* > # Deploy HDP-2.4.0.0 cluster with Ambari 2.2.1.1 (secure, non-HA cluster, > customized service users) > # Upgrade Ambari to 2.4.0.0 > # Perform EU to 2.5.0.0-934 > *Result* > During EU, observed YARN service check reported below errors: > {code} > Traceback (most recent call last):\n File > \"/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py\", > line 159, in \nServiceCheck().execute()\n File > \"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py\", > line 280, in execute\nmethod(env)\n File > \"/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py\", > line 117, in service_check\nuser=params.smokeuser,\n File > \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line > 71, in inner\nresult = function(command, **kwargs)\n File > \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line > 93, in checked_call\ntries=tries, try_sleep=try_sleep)\n File > \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line > 141, in _call_wrapper\nresult = _call(command, **kwargs_copy)\n File > \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line > 294, in _call\nraise > Fail(err_msg)\nresource_management.core.exceptions.Fail: Execution of > '/usr/bin/kinit -kt /etc/security/keytabs/smokeuser.headless.keytab > 
smk_rndalxit7oqf2pq3erlqg6r...@hwqe.hortonworks.com; yarn > org.apache.hadoop.yarn.applications.distributedshell.Client -shell_command ls > -num_containers 1 -jar > /usr/hdp/current/hadoop-yarn-client/hadoop-yarn-applications-distributedshell.jar > -timeout 30 --queue default' returned 2. Hortonworks > #\nThis is MOTD message, added for testing in qe infra\n16/07/09 > 11:15:01 INFO impl.TimelineClientImpl: Timeline service address: > http://host:8188/ws/v1/timeline/\n16/07/09 11:15:01 INFO > distributedshell.Client: Initializing Client\n16/07/09 11:15:01 INFO > distributedshell.Client: Running Client\n16/07/09 11:15:01 INFO > client.RMProxy: Connecting to ResourceManager at > host-5.domainlocal/10.0.113.157:8050\n16/07/09 11:15:03 INFO > distributedshell.Client: Got Cluster metric info from ASM, > numNodeManagers=3\n16/07/09 11:15:03 INFO distributedshell.Client: Got > Cluster node info from ASM\n16/07/09 11:15:03 INFO distributedshell.Client: > Got node report from ASM for, nodeId=host:25454, nodeAddresshost:8042, > nodeRackName/default-rack, nodeNumContainers0\n16/07/09 11:15:03 INFO > distributedshell.Client: Got node report from ASM for, > nodeId=host-5.domainlocal:25454, nodeAddresshost-5.domainlocal:8042, > nodeRackName/default-rack, nodeNumContainers0\n16/07/09 11:15:03 INFO > distributedshell.Client: Got node report from ASM for, > nodeId=host-1.domainlocal:25454, nodeAddresshost-1.domainlocal:8042, > nodeRackName/default-rack, nodeNumContainers0\n16/07/09 11:15:03 INFO > distributedshell.Client: Queue info, queueName=default, > queueCurrentCapacity=0.0, queueMaxCapacity=1.0, queueApplicationCount=0, > queueChildQueueCount=0\n16/07/09 11:15:04 INFO distributedshell.Client: User > ACL Info for Queue, queueName=root, userAcl=SUBMIT_APPLICATIONS\n16/07/09 > 11:15:04 INFO distributedshell.Client: User ACL Info for Queue, > queueName=default, userAcl=SUBMIT_APPLICATIONS\n16/07/09 11:15:04 INFO > distributedshell.Client: Max mem capability of resources in this 
cluster > 10240\n16/07/09 11:15:04 INFO distributedshell.Client: Max virtual cores > capabililty of resources in this cluster 1\n16/07/09 11:15:04 INFO > distributedshell.Client: Copy App Master jar from local filesystem and add to > local environment\n16/07/09 11:15:04 INFO distributedshell.Client: Set the > environment for the application master\n16/07/09 11:15:04 INFO > distributedshell.Client: Setting up app master command\n16/07/09 11:15:04 > INFO distributedshell.Client: Completed setting up app master command > {{JAVA_HOME}}/bin/java -Xmx10m > org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster > --container_memory 10 --container_vcores 1 --num_containers 1 --priority 0
[jira] [Updated] (AMBARI-17859) YARN service check failed during EU from HDP-2.4.0.0 to Erie
[ https://issues.apache.org/jira/browse/AMBARI-17859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmitry Lysnichenko updated AMBARI-17859: Status: Patch Available (was: Open) > YARN service check failed during EU from HDP-2.4.0.0 to Erie > > > Key: AMBARI-17859 > URL: https://issues.apache.org/jira/browse/AMBARI-17859 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Reporter: Dmitry Lysnichenko >Assignee: Dmitry Lysnichenko > Attachments: AMBARI-17859.patch > > > *Steps* > # Deploy HDP-2.4.0.0 cluster with Ambari 2.2.1.1 (secure, non-HA cluster, > customized service users) > # Upgrade Ambari to 2.4.0.0 > # Perform EU to 2.5.0.0-934 > *Result* > During EU, observed YARN service check reported below errors: > {code} > Traceback (most recent call last):\n File > \"/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py\", > line 159, in \nServiceCheck().execute()\n File > \"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py\", > line 280, in execute\nmethod(env)\n File > \"/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py\", > line 117, in service_check\nuser=params.smokeuser,\n File > \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line > 71, in inner\nresult = function(command, **kwargs)\n File > \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line > 93, in checked_call\ntries=tries, try_sleep=try_sleep)\n File > \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line > 141, in _call_wrapper\nresult = _call(command, **kwargs_copy)\n File > \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line > 294, in _call\nraise > Fail(err_msg)\nresource_management.core.exceptions.Fail: Execution of > '/usr/bin/kinit -kt /etc/security/keytabs/smokeuser.headless.keytab > smk_rndalxit7oqf2pq3erlqg6r...@hwqe.hortonworks.com; yarn > 
org.apache.hadoop.yarn.applications.distributedshell.Client -shell_command ls > -num_containers 1 -jar > /usr/hdp/current/hadoop-yarn-client/hadoop-yarn-applications-distributedshell.jar > -timeout 30 --queue default' returned 2. Hortonworks > #\nThis is MOTD message, added for testing in qe infra\n16/07/09 > 11:15:01 INFO impl.TimelineClientImpl: Timeline service address: > http://host:8188/ws/v1/timeline/\n16/07/09 11:15:01 INFO > distributedshell.Client: Initializing Client\n16/07/09 11:15:01 INFO > distributedshell.Client: Running Client\n16/07/09 11:15:01 INFO > client.RMProxy: Connecting to ResourceManager at > host-5.domainlocal/10.0.113.157:8050\n16/07/09 11:15:03 INFO > distributedshell.Client: Got Cluster metric info from ASM, > numNodeManagers=3\n16/07/09 11:15:03 INFO distributedshell.Client: Got > Cluster node info from ASM\n16/07/09 11:15:03 INFO distributedshell.Client: > Got node report from ASM for, nodeId=host:25454, nodeAddresshost:8042, > nodeRackName/default-rack, nodeNumContainers0\n16/07/09 11:15:03 INFO > distributedshell.Client: Got node report from ASM for, > nodeId=host-5.domainlocal:25454, nodeAddresshost-5.domainlocal:8042, > nodeRackName/default-rack, nodeNumContainers0\n16/07/09 11:15:03 INFO > distributedshell.Client: Got node report from ASM for, > nodeId=host-1.domainlocal:25454, nodeAddresshost-1.domainlocal:8042, > nodeRackName/default-rack, nodeNumContainers0\n16/07/09 11:15:03 INFO > distributedshell.Client: Queue info, queueName=default, > queueCurrentCapacity=0.0, queueMaxCapacity=1.0, queueApplicationCount=0, > queueChildQueueCount=0\n16/07/09 11:15:04 INFO distributedshell.Client: User > ACL Info for Queue, queueName=root, userAcl=SUBMIT_APPLICATIONS\n16/07/09 > 11:15:04 INFO distributedshell.Client: User ACL Info for Queue, > queueName=default, userAcl=SUBMIT_APPLICATIONS\n16/07/09 11:15:04 INFO > distributedshell.Client: Max mem capability of resources in this cluster > 10240\n16/07/09 11:15:04 INFO 
distributedshell.Client: Max virtual cores > capabililty of resources in this cluster 1\n16/07/09 11:15:04 INFO > distributedshell.Client: Copy App Master jar from local filesystem and add to > local environment\n16/07/09 11:15:04 INFO distributedshell.Client: Set the > environment for the application master\n16/07/09 11:15:04 INFO > distributedshell.Client: Setting up app master command\n16/07/09 11:15:04 > INFO distributedshell.Client: Completed setting up app master command > {{JAVA_HOME}}/bin/java -Xmx10m > org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster > --container_memory 10 --container_vcores 1 --num_containers 1 --priority 0 > 1>/AppMaster.stdout 2>/AppMaster.stderr \n16
[jira] [Created] (AMBARI-17859) YARN service check failed during EU from HDP-2.4.0.0 to Erie
Dmitry Lysnichenko created AMBARI-17859: --- Summary: YARN service check failed during EU from HDP-2.4.0.0 to Erie Key: AMBARI-17859 URL: https://issues.apache.org/jira/browse/AMBARI-17859 Project: Ambari Issue Type: Bug Reporter: Dmitry Lysnichenko Assignee: Dmitry Lysnichenko Attachments: AMBARI-17859.patch *Steps* # Deploy HDP-2.4.0.0 cluster with Ambari 2.2.1.1 (secure, non-HA cluster, customized service users) # Upgrade Ambari to 2.4.0.0 # Perform EU to 2.5.0.0-934 *Result* During EU, observed YARN service check reported below errors: {code} Traceback (most recent call last):\n File \"/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py\", line 159, in \nServiceCheck().execute()\n File \"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py\", line 280, in execute\nmethod(env)\n File \"/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py\", line 117, in service_check\nuser=params.smokeuser,\n File \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 71, in inner\nresult = function(command, **kwargs)\n File \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 93, in checked_call\ntries=tries, try_sleep=try_sleep)\n File \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 141, in _call_wrapper\nresult = _call(command, **kwargs_copy)\n File \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 294, in _call\nraise Fail(err_msg)\nresource_management.core.exceptions.Fail: Execution of '/usr/bin/kinit -kt /etc/security/keytabs/smokeuser.headless.keytab smk_rndalxit7oqf2pq3erlqg6r...@hwqe.hortonworks.com; yarn org.apache.hadoop.yarn.applications.distributedshell.Client -shell_command ls -num_containers 1 -jar /usr/hdp/current/hadoop-yarn-client/hadoop-yarn-applications-distributedshell.jar -timeout 30 --queue default' returned 2. 
Hortonworks #\nThis is MOTD message, added for testing in qe infra\n16/07/09 11:15:01 INFO impl.TimelineClientImpl: Timeline service address: http://host:8188/ws/v1/timeline/\n16/07/09 11:15:01 INFO distributedshell.Client: Initializing Client\n16/07/09 11:15:01 INFO distributedshell.Client: Running Client\n16/07/09 11:15:01 INFO client.RMProxy: Connecting to ResourceManager at host-5.domainlocal/10.0.113.157:8050\n16/07/09 11:15:03 INFO distributedshell.Client: Got Cluster metric info from ASM, numNodeManagers=3\n16/07/09 11:15:03 INFO distributedshell.Client: Got Cluster node info from ASM\n16/07/09 11:15:03 INFO distributedshell.Client: Got node report from ASM for, nodeId=host:25454, nodeAddresshost:8042, nodeRackName/default-rack, nodeNumContainers0\n16/07/09 11:15:03 INFO distributedshell.Client: Got node report from ASM for, nodeId=host-5.domainlocal:25454, nodeAddresshost-5.domainlocal:8042, nodeRackName/default-rack, nodeNumContainers0\n16/07/09 11:15:03 INFO distributedshell.Client: Got node report from ASM for, nodeId=host-1.domainlocal:25454, nodeAddresshost-1.domainlocal:8042, nodeRackName/default-rack, nodeNumContainers0\n16/07/09 11:15:03 INFO distributedshell.Client: Queue info, queueName=default, queueCurrentCapacity=0.0, queueMaxCapacity=1.0, queueApplicationCount=0, queueChildQueueCount=0\n16/07/09 11:15:04 INFO distributedshell.Client: User ACL Info for Queue, queueName=root, userAcl=SUBMIT_APPLICATIONS\n16/07/09 11:15:04 INFO distributedshell.Client: User ACL Info for Queue, queueName=default, userAcl=SUBMIT_APPLICATIONS\n16/07/09 11:15:04 INFO distributedshell.Client: Max mem capability of resources in this cluster 10240\n16/07/09 11:15:04 INFO distributedshell.Client: Max virtual cores capabililty of resources in this cluster 1\n16/07/09 11:15:04 INFO distributedshell.Client: Copy App Master jar from local filesystem and add to local environment\n16/07/09 11:15:04 INFO distributedshell.Client: Set the environment for the application 
master\n16/07/09 11:15:04 INFO distributedshell.Client: Setting up app master command\n16/07/09 11:15:04 INFO distributedshell.Client: Completed setting up app master command {{JAVA_HOME}}/bin/java -Xmx10m org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster --container_memory 10 --container_vcores 1 --num_containers 1 --priority 0 1>/AppMaster.stdout 2>/AppMaster.stderr \n16/07/09 11:15:04 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 279 for ambari-qa on 10.0.113.145:8020\n16/07/09 11:15:04 INFO distributedshell.Client: Got dt for hdfs://host-1.domainlocal:8020; Kind: HDFS_DELEGATION_TOKEN, Service: 10.0.113.145:8020, Ident: (HDFS_DELEGATION_TOKEN token 279 for ambari-qa)\n16/07/09 11:15:04 INFO distributedshell.Client: Submitting applica
[jira] [Updated] (AMBARI-17859) YARN service check failed during EU from HDP-2.4.0.0 to Erie
[ https://issues.apache.org/jira/browse/AMBARI-17859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmitry Lysnichenko updated AMBARI-17859: Attachment: AMBARI-17859.patch > YARN service check failed during EU from HDP-2.4.0.0 to Erie > > > Key: AMBARI-17859 > URL: https://issues.apache.org/jira/browse/AMBARI-17859 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Reporter: Dmitry Lysnichenko >Assignee: Dmitry Lysnichenko > Attachments: AMBARI-17859.patch > > > *Steps* > # Deploy HDP-2.4.0.0 cluster with Ambari 2.2.1.1 (secure, non-HA cluster, > customized service users) > # Upgrade Ambari to 2.4.0.0 > # Perform EU to 2.5.0.0-934 > *Result* > During EU, observed YARN service check reported below errors: > {code} > Traceback (most recent call last):\n File > \"/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py\", > line 159, in \nServiceCheck().execute()\n File > \"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py\", > line 280, in execute\nmethod(env)\n File > \"/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py\", > line 117, in service_check\nuser=params.smokeuser,\n File > \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line > 71, in inner\nresult = function(command, **kwargs)\n File > \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line > 93, in checked_call\ntries=tries, try_sleep=try_sleep)\n File > \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line > 141, in _call_wrapper\nresult = _call(command, **kwargs_copy)\n File > \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line > 294, in _call\nraise > Fail(err_msg)\nresource_management.core.exceptions.Fail: Execution of > '/usr/bin/kinit -kt /etc/security/keytabs/smokeuser.headless.keytab > smk_rndalxit7oqf2pq3erlqg6r...@hwqe.hortonworks.com; yarn > 
org.apache.hadoop.yarn.applications.distributedshell.Client -shell_command ls > -num_containers 1 -jar > /usr/hdp/current/hadoop-yarn-client/hadoop-yarn-applications-distributedshell.jar > -timeout 30 --queue default' returned 2. Hortonworks > #\nThis is MOTD message, added for testing in qe infra\n16/07/09 > 11:15:01 INFO impl.TimelineClientImpl: Timeline service address: > http://host:8188/ws/v1/timeline/\n16/07/09 11:15:01 INFO > distributedshell.Client: Initializing Client\n16/07/09 11:15:01 INFO > distributedshell.Client: Running Client\n16/07/09 11:15:01 INFO > client.RMProxy: Connecting to ResourceManager at > host-5.domainlocal/10.0.113.157:8050\n16/07/09 11:15:03 INFO > distributedshell.Client: Got Cluster metric info from ASM, > numNodeManagers=3\n16/07/09 11:15:03 INFO distributedshell.Client: Got > Cluster node info from ASM\n16/07/09 11:15:03 INFO distributedshell.Client: > Got node report from ASM for, nodeId=host:25454, nodeAddresshost:8042, > nodeRackName/default-rack, nodeNumContainers0\n16/07/09 11:15:03 INFO > distributedshell.Client: Got node report from ASM for, > nodeId=host-5.domainlocal:25454, nodeAddresshost-5.domainlocal:8042, > nodeRackName/default-rack, nodeNumContainers0\n16/07/09 11:15:03 INFO > distributedshell.Client: Got node report from ASM for, > nodeId=host-1.domainlocal:25454, nodeAddresshost-1.domainlocal:8042, > nodeRackName/default-rack, nodeNumContainers0\n16/07/09 11:15:03 INFO > distributedshell.Client: Queue info, queueName=default, > queueCurrentCapacity=0.0, queueMaxCapacity=1.0, queueApplicationCount=0, > queueChildQueueCount=0\n16/07/09 11:15:04 INFO distributedshell.Client: User > ACL Info for Queue, queueName=root, userAcl=SUBMIT_APPLICATIONS\n16/07/09 > 11:15:04 INFO distributedshell.Client: User ACL Info for Queue, > queueName=default, userAcl=SUBMIT_APPLICATIONS\n16/07/09 11:15:04 INFO > distributedshell.Client: Max mem capability of resources in this cluster > 10240\n16/07/09 11:15:04 INFO 
distributedshell.Client: Max virtual cores > capabililty of resources in this cluster 1\n16/07/09 11:15:04 INFO > distributedshell.Client: Copy App Master jar from local filesystem and add to > local environment\n16/07/09 11:15:04 INFO distributedshell.Client: Set the > environment for the application master\n16/07/09 11:15:04 INFO > distributedshell.Client: Setting up app master command\n16/07/09 11:15:04 > INFO distributedshell.Client: Completed setting up app master command > {{JAVA_HOME}}/bin/java -Xmx10m > org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster > --container_memory 10 --container_vcores 1 --num_containers 1 --priority 0 > 1>/AppMaster.stdout 2>/AppMaster.stderr \n16/07/09
[jira] [Updated] (AMBARI-17859) YARN service check failed during EU from HDP-2.4.0.0 to Erie
[ https://issues.apache.org/jira/browse/AMBARI-17859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmitry Lysnichenko updated AMBARI-17859:
----------------------------------------
Component/s: ambari-server

> YARN service check failed during EU from HDP-2.4.0.0 to Erie
> ------------------------------------------------------------
>
>                 Key: AMBARI-17859
>                 URL: https://issues.apache.org/jira/browse/AMBARI-17859
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-server
>            Reporter: Dmitry Lysnichenko
>            Assignee: Dmitry Lysnichenko
>         Attachments: AMBARI-17859.patch
>
> *Steps*
> # Deploy HDP-2.4.0.0 cluster with Ambari 2.2.1.1 (secure, non-HA cluster, customized service users)
> # Upgrade Ambari to 2.4.0.0
> # Perform EU to 2.5.0.0-934
> *Result*
> During EU, the YARN service check reported the errors below:
> {code}
> Traceback (most recent call last):
>   File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py", line 159, in <module>
>     ServiceCheck().execute()
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
>     method(env)
>   File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py", line 117, in service_check
>     user=params.smokeuser,
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 71, in inner
>     result = function(command, **kwargs)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 93, in checked_call
>     tries=tries, try_sleep=try_sleep)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 141, in _call_wrapper
>     result = _call(command, **kwargs_copy)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 294, in _call
>     raise Fail(err_msg)
> resource_management.core.exceptions.Fail: Execution of '/usr/bin/kinit -kt /etc/security/keytabs/smokeuser.headless.keytab smk_rndalxit7oqf2pq3erlqg6r...@hwqe.hortonworks.com; yarn org.apache.hadoop.yarn.applications.distributedshell.Client -shell_command ls -num_containers 1 -jar /usr/hdp/current/hadoop-yarn-client/hadoop-yarn-applications-distributedshell.jar -timeout 30 --queue default' returned 2. Hortonworks #
> This is MOTD message, added for testing in qe infra
> 16/07/09 11:15:01 INFO impl.TimelineClientImpl: Timeline service address: http://host:8188/ws/v1/timeline/
> 16/07/09 11:15:01 INFO distributedshell.Client: Initializing Client
> 16/07/09 11:15:01 INFO distributedshell.Client: Running Client
> 16/07/09 11:15:01 INFO client.RMProxy: Connecting to ResourceManager at host-5.domainlocal/10.0.113.157:8050
> 16/07/09 11:15:03 INFO distributedshell.Client: Got Cluster metric info from ASM, numNodeManagers=3
> 16/07/09 11:15:03 INFO distributedshell.Client: Got Cluster node info from ASM
> 16/07/09 11:15:03 INFO distributedshell.Client: Got node report from ASM for, nodeId=host:25454, nodeAddresshost:8042, nodeRackName/default-rack, nodeNumContainers0
> 16/07/09 11:15:03 INFO distributedshell.Client: Got node report from ASM for, nodeId=host-5.domainlocal:25454, nodeAddresshost-5.domainlocal:8042, nodeRackName/default-rack, nodeNumContainers0
> 16/07/09 11:15:03 INFO distributedshell.Client: Got node report from ASM for, nodeId=host-1.domainlocal:25454, nodeAddresshost-1.domainlocal:8042, nodeRackName/default-rack, nodeNumContainers0
> 16/07/09 11:15:03 INFO distributedshell.Client: Queue info, queueName=default, queueCurrentCapacity=0.0, queueMaxCapacity=1.0, queueApplicationCount=0, queueChildQueueCount=0
> 16/07/09 11:15:04 INFO distributedshell.Client: User ACL Info for Queue, queueName=root, userAcl=SUBMIT_APPLICATIONS
> 16/07/09 11:15:04 INFO distributedshell.Client: User ACL Info for Queue, queueName=default, userAcl=SUBMIT_APPLICATIONS
> 16/07/09 11:15:04 INFO distributedshell.Client: Max mem capability of resources in this cluster 10240
> 16/07/09 11:15:04 INFO distributedshell.Client: Max virtual cores capabililty of resources in this cluster 1
> 16/07/09 11:15:04 INFO distributedshell.Client: Copy App Master jar from local filesystem and add to local environment
> 16/07/09 11:15:04 INFO distributedshell.Client: Set the environment for the application master
> 16/07/09 11:15:04 INFO distributedshell.Client: Setting up app master command
> 16/07/09 11:15:04 INFO distributedshell.Client: Completed setting up app master command {{JAVA_HOME}}/bin/java -Xmx10m org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster --container_memory 10 --container_vcores 1 --num_containers 1 --priority 0 1>/AppMaster.stdout 2>/AppMaster.stderr
> 16/07/09 11:
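The Fail exception above is raised by Ambari's resource_management shell wrapper when the kinit-plus-distributedshell command exits non-zero. A minimal, self-contained sketch of that behavior, inferred from the traceback alone (not the real implementation in resource_management/core/shell.py, which adds retries and logging; written in modern Python rather than the Python 2.6 the agent uses):

```python
import subprocess

class Fail(Exception):
    """Stand-in for resource_management.core.exceptions.Fail."""

def checked_call(command):
    # Run a shell command; on a non-zero exit code raise Fail with a message
    # shaped like the one in the traceback ("Execution of '...' returned N.").
    proc = subprocess.run(command, shell=True, capture_output=True, text=True)
    if proc.returncode != 0:
        raise Fail("Execution of '%s' returned %d." % (command, proc.returncode))
    return proc.returncode, proc.stdout

code, out = checked_call("echo service check ok")
```

Here the distributed-shell client returned 2, so the service check surfaced as a Fail even though the command's stdout (the INFO log above) looks healthy up to the point of truncation.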
[jira] [Updated] (AMBARI-17858) Hive service check failed
[ https://issues.apache.org/jira/browse/AMBARI-17858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmytro Sen updated AMBARI-17858:
--------------------------------
Resolution: Fixed
    Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2.4

> Hive service check failed
> -------------------------
>
>                 Key: AMBARI-17858
>                 URL: https://issues.apache.org/jira/browse/AMBARI-17858
>             Project: Ambari
>          Issue Type: Bug
>          Components: stacks
>    Affects Versions: 2.4.0
>            Reporter: Dmytro Sen
>            Assignee: Dmytro Sen
>            Priority: Critical
>             Fix For: 2.4.0
>
>         Attachments: AMBARI-17858.patch
>
> STR:
> Install cluster with HDFS and Zk
> Add other services (Yarn, MR, HIVE, Tez, Hbase, Sqoop, Oozie, Falcon, Storm, Flume, Spark, Smartsense, Logsearch, Slider)
> Hive install goes through without error
> Enable Security (Hive service check fails here)

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Commented] (AMBARI-17858) Hive service check failed
[ https://issues.apache.org/jira/browse/AMBARI-17858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15389799#comment-15389799 ] Dmytro Sen commented on AMBARI-17858:
-------------------------------------

{code}
[INFO] Reactor Summary:
[INFO]
[INFO] Ambari Main ................................ SUCCESS [4.066s]
[INFO] Apache Ambari Project POM .................. SUCCESS [0.080s]
[INFO] Ambari Views ............................... SUCCESS [4.310s]
[INFO] ambari-metrics ............................. SUCCESS [0.552s]
[INFO] Ambari Metrics Common ...................... SUCCESS [0.659s]
[INFO] Ambari Server .............................. SUCCESS [1:13.082s]
[INFO]
[INFO] BUILD SUCCESS
{code}

> Hive service check failed
> -------------------------
>
>                 Key: AMBARI-17858
>                 URL: https://issues.apache.org/jira/browse/AMBARI-17858
>             Project: Ambari
>          Issue Type: Bug
>          Components: stacks
>    Affects Versions: 2.4.0
>            Reporter: Dmytro Sen
>            Assignee: Dmytro Sen
>            Priority: Critical
>             Fix For: 2.4.0
>
>         Attachments: AMBARI-17858.patch
>
> STR:
> Install cluster with HDFS and Zk
> Add other services (Yarn, MR, HIVE, Tez, Hbase, Sqoop, Oozie, Falcon, Storm, Flume, Spark, Smartsense, Logsearch, Slider)
> Hive install goes through without error
> Enable Security (Hive service check fails here)

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (AMBARI-17840) Oozie service check failed after EU downgrade
[ https://issues.apache.org/jira/browse/AMBARI-17840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nate Cole updated AMBARI-17840:
-------------------------------
Attachment: AMBARI-17840.patch

> Oozie service check failed after EU downgrade
> ---------------------------------------------
>
>                 Key: AMBARI-17840
>                 URL: https://issues.apache.org/jira/browse/AMBARI-17840
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-server
>            Reporter: Nate Cole
>            Assignee: Nate Cole
>            Priority: Critical
>             Fix For: 2.4.0
>
>         Attachments: AMBARI-17840.patch
>
> The EU Upgrade Packs are calling hdp-select set all at the end of an upgrade.
> When downgrading, the Oozie Server and Oozie Client are being used out of a
> mixed version, and the newer client is not compatible with the older server.
> This occurs on downgrade.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
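For readers unfamiliar with hdp-select: it maintains per-component symlinks under /usr/hdp/current, and `hdp-select set all <version>` repoints every one of them at a single version directory. The mixed-version state described in the report can be illustrated with a scratch-directory sketch (illustrative only, not real hdp-select; the 2.4.0.0-169 build number is made up for the example):

```python
import os
import tempfile

base = tempfile.mkdtemp()
# Two installed stack versions, each shipping an Oozie server and client.
for version in ("2.4.0.0-169", "2.5.0.0-934"):
    for comp in ("oozie-server", "oozie-client"):
        os.makedirs(os.path.join(base, version, comp))
os.makedirs(os.path.join(base, "current"))

def set_component(comp, version):
    # Repoint the "current" symlink for one component, as hdp-select does.
    link = os.path.join(base, "current", comp)
    if os.path.islink(link):
        os.remove(link)
    os.symlink(os.path.join(base, version, comp), link)

# "hdp-select set all 2.5.0.0-934" at the end of the upgrade:
for comp in ("oozie-server", "oozie-client"):
    set_component(comp, "2.5.0.0-934")

# A downgrade that repoints the server but not the client leaves the
# newer client talking to the older server, as described above:
set_component("oozie-server", "2.4.0.0-169")

server = os.readlink(os.path.join(base, "current", "oozie-server"))
client = os.readlink(os.path.join(base, "current", "oozie-client"))
```

After the sketch runs, `server` points into the 2.4.0.0-169 tree while `client` still points into 2.5.0.0-934, which is exactly the incompatible pairing the service check tripped over.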
[jira] [Commented] (AMBARI-17840) Oozie service check failed after EU downgrade
[ https://issues.apache.org/jira/browse/AMBARI-17840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15389774#comment-15389774 ] Nate Cole commented on AMBARI-17840:
------------------------------------

No changed tests, as these are Upgrade Pack changes only.

> Oozie service check failed after EU downgrade
> ---------------------------------------------
>
>                 Key: AMBARI-17840
>                 URL: https://issues.apache.org/jira/browse/AMBARI-17840
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-server
>            Reporter: Nate Cole
>            Assignee: Nate Cole
>            Priority: Critical
>             Fix For: 2.4.0
>
>         Attachments: AMBARI-17840.patch
>
> The EU Upgrade Packs are calling hdp-select set all at the end of an upgrade.
> When downgrading, the Oozie Server and Oozie Client are being used out of a
> mixed version, and the newer client is not compatible with the older server.
> This occurs on downgrade.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (AMBARI-17840) Oozie service check failed after downgrade
[ https://issues.apache.org/jira/browse/AMBARI-17840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nate Cole updated AMBARI-17840:
-------------------------------
Summary: Oozie service check failed after downgrade  (was: Oozie service check failed after downgrade due to authentication failure)

> Oozie service check failed after downgrade
> ------------------------------------------
>
>                 Key: AMBARI-17840
>                 URL: https://issues.apache.org/jira/browse/AMBARI-17840
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-server
>            Reporter: Nate Cole
>            Assignee: Nate Cole
>            Priority: Critical
>             Fix For: 2.4.0
>
>
> The EU Upgrade Packs are calling hdp-select set all at the end of an upgrade.
> When downgrading, the Oozie Server and Oozie Client are being used out of a
> mixed version, and the newer client is not compatible with the older server.
> This occurs on downgrade.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)