[jira] [Updated] (AMBARI-16017) Update Moment.js to latest stable version 2.13.0
[ https://issues.apache.org/jira/browse/AMBARI-16017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sangeeta Ravindran updated AMBARI-16017: Status: Patch Available (was: Open) > Update Moment.js to latest stable version 2.13.0 > > > Key: AMBARI-16017 > URL: https://issues.apache.org/jira/browse/AMBARI-16017 > Project: Ambari > Issue Type: Task > Components: ambari-views, ambari-web >Affects Versions: trunk >Reporter: Sangeeta Ravindran >Assignee: Sangeeta Ravindran > Labels: patch > Fix For: trunk > > Attachments: AMBARI-16017.patch > > > The latest stable version of Moment.js is 2.13.0. > This task is for updating the version of Moment.js used in > contrib/views/slider/src/main/resources/ui/vendor/scripts/common/ > ambari-web/vendor/scripts -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-16017) Update Moment.js to latest stable version 2.13.0
[ https://issues.apache.org/jira/browse/AMBARI-16017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sangeeta Ravindran updated AMBARI-16017: Attachment: (was: AMBARI-16017.patch) > Update Moment.js to latest stable version 2.13.0 > > > Key: AMBARI-16017 > URL: https://issues.apache.org/jira/browse/AMBARI-16017 > Project: Ambari > Issue Type: Task > Components: ambari-views, ambari-web >Affects Versions: trunk >Reporter: Sangeeta Ravindran >Assignee: Sangeeta Ravindran > Labels: patch > Fix For: trunk > > Attachments: AMBARI-16017.patch > > > The latest stable version of Moment.js is 2.13.0. > This task is for updating the version of Moment.js used in > contrib/views/slider/src/main/resources/ui/vendor/scripts/common/ > ambari-web/vendor/scripts -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-16017) Update Moment.js to latest stable version 2.13.0
[ https://issues.apache.org/jira/browse/AMBARI-16017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sangeeta Ravindran updated AMBARI-16017: Attachment: AMBARI-16017.patch > Update Moment.js to latest stable version 2.13.0 > > > Key: AMBARI-16017 > URL: https://issues.apache.org/jira/browse/AMBARI-16017 > Project: Ambari > Issue Type: Task > Components: ambari-views, ambari-web >Affects Versions: trunk >Reporter: Sangeeta Ravindran >Assignee: Sangeeta Ravindran > Labels: patch > Fix For: trunk > > Attachments: AMBARI-16017.patch > > > The latest stable version of Moment.js is 2.13.0. > This task is for updating the version of Moment.js used in > contrib/views/slider/src/main/resources/ui/vendor/scripts/common/ > ambari-web/vendor/scripts -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-16017) Update Moment.js to latest stable version 2.13.0
[ https://issues.apache.org/jira/browse/AMBARI-16017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sangeeta Ravindran updated AMBARI-16017: Attachment: (was: AMBARI-16017.patch) > Update Moment.js to latest stable version 2.13.0 > > > Key: AMBARI-16017 > URL: https://issues.apache.org/jira/browse/AMBARI-16017 > Project: Ambari > Issue Type: Task > Components: ambari-views, ambari-web >Affects Versions: trunk >Reporter: Sangeeta Ravindran >Assignee: Sangeeta Ravindran > Labels: patch > Fix For: trunk > > Attachments: AMBARI-16017.patch > > > The latest stable version of Moment.js is 2.13.0. > This task is for updating the version of Moment.js used in > contrib/views/slider/src/main/resources/ui/vendor/scripts/common/ > ambari-web/vendor/scripts -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-16017) Update Moment.js to latest stable version 2.13.0
[ https://issues.apache.org/jira/browse/AMBARI-16017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sangeeta Ravindran updated AMBARI-16017: Attachment: AMBARI-16017.patch > Update Moment.js to latest stable version 2.13.0 > > > Key: AMBARI-16017 > URL: https://issues.apache.org/jira/browse/AMBARI-16017 > Project: Ambari > Issue Type: Task > Components: ambari-views, ambari-web >Affects Versions: trunk >Reporter: Sangeeta Ravindran >Assignee: Sangeeta Ravindran > Labels: patch > Fix For: trunk > > Attachments: AMBARI-16017.patch > > > The latest stable version of Moment.js is 2.13.0. > This task is for updating the version of Moment.js used in > contrib/views/slider/src/main/resources/ui/vendor/scripts/common/ > ambari-web/vendor/scripts -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-16017) Update Moment.js to latest stable version 2.13.0
[ https://issues.apache.org/jira/browse/AMBARI-16017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sangeeta Ravindran updated AMBARI-16017: Status: Open (was: Patch Available) > Update Moment.js to latest stable version 2.13.0 > > > Key: AMBARI-16017 > URL: https://issues.apache.org/jira/browse/AMBARI-16017 > Project: Ambari > Issue Type: Task > Components: ambari-views, ambari-web >Affects Versions: trunk >Reporter: Sangeeta Ravindran >Assignee: Sangeeta Ravindran > Labels: patch > Fix For: trunk > > Attachments: AMBARI-16017.patch > > > The latest stable version of Moment.js is 2.13.0. > This task is for updating the version of Moment.js used in > contrib/views/slider/src/main/resources/ui/vendor/scripts/common/ > ambari-web/vendor/scripts -- This message was sent by Atlassian JIRA (v6.3.4#6332)
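Since every vendored copy of Moment.js carries a version banner in its header comment, the bump described above can be confirmed mechanically. This is an illustrative sketch, not part of the attached patch; the banner format ("//! version : 2.13.0") is an assumption about the vendored file's header.

```python
import re

# Illustrative check (not part of the attached patch): Moment.js source files
# begin with a banner comment such as "//! version : 2.13.0". The exact
# banner format is an assumption about the vendored file's header.
VERSION_RE = re.compile(r"//!\s*version\s*:\s*(\d+\.\d+\.\d+)")

def vendored_moment_version(source_text):
    """Return the version string from a moment.js banner, or None."""
    match = VERSION_RE.search(source_text)
    return match.group(1) if match else None

banner = "//! moment.js\n//! version : 2.13.0\n//! license : MIT\n"
print(vendored_moment_version(banner))  # prints 2.13.0
```

Running this against both vendored locations listed in the issue would verify that each copy was actually updated to 2.13.0.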
[jira] [Commented] (AMBARI-16813) Ranger Usersync config to support Group Based Search for LDAP Sync Source
[ https://issues.apache.org/jira/browse/AMBARI-16813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15301588#comment-15301588 ] Jayush Luniya commented on AMBARI-16813: Trunk commit 055288a3bda6fc49e01187fb40551543233e2f78 Author: Jayush Luniya Date: Wed May 25 23:09:50 2016 -0700 AMBARI-16813 Ranger Usersync config to support Group Based Search for LDAP Sync Source (Mugdha Varadkar via jluniya > Ranger Usersync config to support Group Based Search for LDAP Sync Source > - > > Key: AMBARI-16813 > URL: https://issues.apache.org/jira/browse/AMBARI-16813 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Reporter: Mugdha Varadkar >Assignee: Mugdha Varadkar > Fix For: 2.4.0 > > Attachments: AMBARI-16813.patch > > > From Stack 2.5 onwards, Ranger Usersync supports group-based search. > Need to add the below two properties: > ranger.usersync.group.search.first.enabled=false > ranger.usersync.user.searchenabled=false > Proposed flow for Erie: > 1. ranger.usersync.group.search.first.enabled=false (by default) under Group > configs tab -- Can always be available or can be available & configurable > when ranger.usersync.group.searchenabled=true > 2. ranger.usersync.user.searchenabled=false (by default) under User configs > tab >-- Should be greyed out or hidden when > ranger.usersync.group.search.first.enabled=false >-- Should be configurable when > ranger.usersync.group.search.first.enabled=true -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-16813) Ranger Usersync config to support Group Based Search for LDAP Sync Source
[ https://issues.apache.org/jira/browse/AMBARI-16813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jayush Luniya updated AMBARI-16813: --- Resolution: Fixed Status: Resolved (was: Patch Available) > Ranger Usersync config to support Group Based Search for LDAP Sync Source > - > > Key: AMBARI-16813 > URL: https://issues.apache.org/jira/browse/AMBARI-16813 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Reporter: Mugdha Varadkar >Assignee: Mugdha Varadkar > Fix For: 2.4.0 > > Attachments: AMBARI-16813.patch > > > For Stack 2.5 onwards Ranger Usersync is supporting Group based search. > Need to add below two properties: > ranger.usersync.group.search.first.enabled=false > ranger.usersync.user.searchenabled=false > Proposed flow for Erie: > 1. ranger.usersync.group.search.first.enabled=false (by default) under Group > configs tab -- Can be always available or can be available & configurable > when ranger.usersync.group.searchenabled=true > 2. ranger.usersync.user.searchenabled=false (by default) under User configs > tab >-- Should be greyed out or hidden when > ranger.usersync.group.search.first.enabled=false >-- Should be configurable when > ranger.usersync.group.search.first.enabled=true -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-16813) Ranger Usersync config to support Group Based Search for LDAP Sync Source
[ https://issues.apache.org/jira/browse/AMBARI-16813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15301590#comment-15301590 ] Jayush Luniya commented on AMBARI-16813: Branch-2.4 commit c22547d009415239eab27c4318d073b70a1bd3d5 Author: Jayush Luniya Date: Wed May 25 23:09:50 2016 -0700 AMBARI-16813 Ranger Usersync config to support Group Based Search for LDAP Sync Source (Mugdha Varadkar via jluniya > Ranger Usersync config to support Group Based Search for LDAP Sync Source > - > > Key: AMBARI-16813 > URL: https://issues.apache.org/jira/browse/AMBARI-16813 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Reporter: Mugdha Varadkar >Assignee: Mugdha Varadkar > Fix For: 2.4.0 > > Attachments: AMBARI-16813.patch > > > For Stack 2.5 onwards Ranger Usersync is supporting Group based search. > Need to add below two properties: > ranger.usersync.group.search.first.enabled=false > ranger.usersync.user.searchenabled=false > Proposed flow for Erie: > 1. ranger.usersync.group.search.first.enabled=false (by default) under Group > configs tab -- Can be always available or can be available & configurable > when ranger.usersync.group.searchenabled=true > 2. ranger.usersync.user.searchenabled=false (by default) under User configs > tab >-- Should be greyed out or hidden when > ranger.usersync.group.search.first.enabled=false >-- Should be configurable when > ranger.usersync.group.search.first.enabled=true -- This message was sent by Atlassian JIRA (v6.3.4#6332)
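The proposed flow above boils down to one visibility predicate. The following is a sketch only: the helper name and return shape are ours, not Ambari's actual stack-advisor API; only the property names and visibility rules come from the issue description.

```python
# Sketch of the proposed flow (helper name and return shape are hypothetical,
# not Ambari's stack-advisor API): ranger.usersync.user.searchenabled should
# only be editable when ranger.usersync.group.search.first.enabled is true.
def usersync_search_visibility(configs):
    group_first = configs.get(
        "ranger.usersync.group.search.first.enabled", "false") == "true"
    return {
        # always shown under the Group configs tab (per the proposal)
        "ranger.usersync.group.search.first.enabled": True,
        # greyed out or hidden unless group-search-first is enabled
        "ranger.usersync.user.searchenabled": group_first,
    }

vis = usersync_search_visibility(
    {"ranger.usersync.group.search.first.enabled": "false"})
```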
[jira] [Updated] (AMBARI-16874) Add capability to derive required core-site.xml properties in case if not already available
[ https://issues.apache.org/jira/browse/AMBARI-16874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jayush Luniya updated AMBARI-16874: --- Resolution: Fixed Status: Resolved (was: Patch Available) > Add capability to derive required core-site.xml properties in case if not > already available > --- > > Key: AMBARI-16874 > URL: https://issues.apache.org/jira/browse/AMBARI-16874 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Reporter: Mugdha Varadkar >Assignee: Mugdha Varadkar >Priority: Critical > Fix For: 2.4.0 > > Attachments: AMBARI-16874.1.patch, AMBARI-16874.patch > > > Handle cases in which the core-site.xml won't be available, in a generic and > stack-independent way. > Ranger requires two properties, hadoop.security.authentication and > hadoop.security.auth_to_local, from core-site.xml in Kerberos-enabled > environments. This JIRA is to make sure these properties are available. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-16874) Add capability to derive required core-site.xml properties in case if not already available
[ https://issues.apache.org/jira/browse/AMBARI-16874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15301579#comment-15301579 ] Jayush Luniya commented on AMBARI-16874: Branch-2.4 commit 14d8e72d3156ecfcf8467141f8c2aae177421178 Author: Jayush Luniya Date: Wed May 25 22:54:37 2016 -0700 AMBARI-16874 Add capability to derive required core-site.xml properties in case if not already available (Mugdha Va > Add capability to derive required core-site.xml properties in case if not > already available > --- > > Key: AMBARI-16874 > URL: https://issues.apache.org/jira/browse/AMBARI-16874 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Reporter: Mugdha Varadkar >Assignee: Mugdha Varadkar >Priority: Critical > Fix For: 2.4.0 > > Attachments: AMBARI-16874.1.patch, AMBARI-16874.patch > > > Handle cases in which the core-site.xml wont be available in a generic and > stack independent way. > Ranger requires two properties hadoop.security.authentication and > hadoop.security.auth_to_local from core-site.xml in case of kerberos enabled > environments. This JIRA is to make sure these properties are available. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-16874) Add capability to derive required core-site.xml properties in case if not already available
[ https://issues.apache.org/jira/browse/AMBARI-16874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15301578#comment-15301578 ] Jayush Luniya commented on AMBARI-16874: Trunk commit 9b406d6e23bdc81f8f524461d98de1d679db5bce Author: Jayush Luniya Date: Wed May 25 22:54:37 2016 -0700 AMBARI-16874 Add capability to derive required core-site.xml properties in case if not already available (Mugdha Va > Add capability to derive required core-site.xml properties in case if not > already available > --- > > Key: AMBARI-16874 > URL: https://issues.apache.org/jira/browse/AMBARI-16874 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Reporter: Mugdha Varadkar >Assignee: Mugdha Varadkar >Priority: Critical > Fix For: 2.4.0 > > Attachments: AMBARI-16874.1.patch, AMBARI-16874.patch > > > Handle cases in which the core-site.xml wont be available in a generic and > stack independent way. > Ranger requires two properties hadoop.security.authentication and > hadoop.security.auth_to_local from core-site.xml in case of kerberos enabled > environments. This JIRA is to make sure these properties are available. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
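A minimal sketch of the fallback described above, assuming Hadoop's shipped defaults ("simple" for hadoop.security.authentication, "DEFAULT" for hadoop.security.auth_to_local); the helper name and configurations-dict shape are ours for illustration, not the patch's actual code.

```python
# Sketch (hypothetical helper name and config-dict shape, not the patch's
# code): fall back to Hadoop's shipped defaults for the two core-site.xml
# properties Ranger needs when core-site is not available.
CORE_SITE_DEFAULTS = {
    "hadoop.security.authentication": "simple",   # "kerberos" when secured
    "hadoop.security.auth_to_local": "DEFAULT",
}

def derive_core_site(configurations):
    """Return the two required properties, preferring real core-site values."""
    core_site = configurations.get("core-site", {}).get("properties", {})
    derived = dict(CORE_SITE_DEFAULTS)
    # real core-site values, when present, win over the defaults
    derived.update({k: v for k, v in core_site.items()
                    if k in CORE_SITE_DEFAULTS})
    return derived
```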
[jira] [Updated] (AMBARI-16868) Insert Statement in Hive View is giving error in Kerberised cluster.
[ https://issues.apache.org/jira/browse/AMBARI-16868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gaurav Nagar updated AMBARI-16868: -- Attachment: AMBARI-16868_branch-2.4.patch > Insert Statement in Hive View is giving error in Kerberised cluster. > > > Key: AMBARI-16868 > URL: https://issues.apache.org/jira/browse/AMBARI-16868 > Project: Ambari > Issue Type: Bug > Components: ambari-views >Affects Versions: ambari-2.4.0 >Reporter: Gaurav Nagar >Assignee: Gaurav Nagar > Fix For: ambari-2.4.0 > > Attachments: AMBARI-16868_branch-2.4.patch > > > Statement : INSERT INTO TABLE testtable61 > values('name','name0','secondname0'); > {code} > Error Generated : > INFO : Tez session hasn't been created yet. Opening session > ERROR : Failed to execute tez graph. > org.apache.hadoop.security.AccessControlException: Permission denied: > user=${username}, access=WRITE, inode="/user/${username}":hdfs:hdfs:drwxr-xr-x > at > org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319) > at > org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292) > at > org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213) > at > org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190) > at > org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1827) > at > org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1811) > at > org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1794) > at > org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4004) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1102) > at > 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:630) > at > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-16868) Insert Statement in Hive View is giving error in Kerberised cluster.
[ https://issues.apache.org/jira/browse/AMBARI-16868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gaurav Nagar updated AMBARI-16868: -- Status: Patch Available (was: Open) > Insert Statement in Hive View is giving error in Kerberised cluster. > > > Key: AMBARI-16868 > URL: https://issues.apache.org/jira/browse/AMBARI-16868 > Project: Ambari > Issue Type: Bug > Components: ambari-views >Affects Versions: ambari-2.4.0 >Reporter: Gaurav Nagar >Assignee: Gaurav Nagar > Fix For: ambari-2.4.0 > > Attachments: AMBARI-16868_branch-2.4.patch > > > Statement : INSERT INTO TABLE testtable61 > values('name','name0','secondname0'); > {code} > Error Generated : > INFO : Tez session hasn't been created yet. Opening session > ERROR : Failed to execute tez graph. > org.apache.hadoop.security.AccessControlException: Permission denied: > user=${username}, access=WRITE, inode="/user/${username}":hdfs:hdfs:drwxr-xr-x > at > org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319) > at > org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292) > at > org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213) > at > org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190) > at > org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1827) > at > org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1811) > at > org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1794) > at > org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4004) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1102) > at > 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:630) > at > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
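The AccessControlException above encodes the failing user, the access type, and the inode's owner and permissions, so the condition can be detected programmatically. A small illustrative parser for that message pattern (not part of the attached patch):

```python
import re

# Illustrative parser (not part of the patch) for the HDFS permission error
# shown above, e.g.:
#   Permission denied: user=admin, access=WRITE,
#   inode="/user/admin":hdfs:hdfs:drwxr-xr-x
ACL_RE = re.compile(
    r'Permission denied: user=(?P<user>[^,]+), access=(?P<access>\w+), '
    r'inode="(?P<inode>[^"]+)":(?P<owner>[^:]+):(?P<group>[^:]+):')

def parse_access_error(message):
    """Extract user/access/inode/owner/group from the error, or None."""
    match = ACL_RE.search(message)
    return match.groupdict() if match else None

err = ('Permission denied: user=admin, access=WRITE, '
       'inode="/user/admin":hdfs:hdfs:drwxr-xr-x')
info = parse_access_error(err)  # info["inode"] == "/user/admin"
```

The parsed inode is the directory (here the user's HDFS home) whose ownership or existence needs fixing before the Tez session can stage its files.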
[jira] [Updated] (AMBARI-16874) Add capability to derive required core-site.xml properties in case if not already available
[ https://issues.apache.org/jira/browse/AMBARI-16874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mugdha Varadkar updated AMBARI-16874: - Attachment: AMBARI-16874.1.patch > Add capability to derive required core-site.xml properties in case if not > already available > --- > > Key: AMBARI-16874 > URL: https://issues.apache.org/jira/browse/AMBARI-16874 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Reporter: Mugdha Varadkar >Assignee: Mugdha Varadkar >Priority: Critical > Fix For: 2.4.0 > > Attachments: AMBARI-16874.1.patch, AMBARI-16874.patch > > > Handle cases in which the core-site.xml wont be available in a generic and > stack independent way. > Ranger requires two properties hadoop.security.authentication and > hadoop.security.auth_to_local from core-site.xml in case of kerberos enabled > environments. This JIRA is to make sure these properties are available. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-16888) Handle the scenario when 'capacity-scheduler' configs is passed in as dictionary to Stack Advisor (generally on 1st invocation) in order to create 'llap' queue for Hi
[ https://issues.apache.org/jira/browse/AMBARI-16888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15301529#comment-15301529 ] Hudson commented on AMBARI-16888: - FAILURE: Integrated in Ambari-trunk-Commit #4927 (See [https://builds.apache.org/job/Ambari-trunk-Commit/4927/]) AMBARI-16888. Handle the scenario when 'capacity-scheduler' configs is (sshridhar: [http://git-wip-us.apache.org/repos/asf?p=ambari.git&a=commit&h=77213ab7dba15f9ec92b40cb2c5d77d52c636587]) * ambari-server/src/main/resources/stacks/HDP/2.5/services/stack_advisor.py * ambari-server/src/test/python/stacks/2.5/common/test_stack_advisor.py > Handle the scenario when 'capacity-scheduler' configs is passed in as > dictionary to Stack Advisor (generally on 1st invocation) in order to create > 'llap' queue for Hive Server Interactive. > > > Key: AMBARI-16888 > URL: https://issues.apache.org/jira/browse/AMBARI-16888 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Affects Versions: 2.4.0 >Reporter: Swapan Shridhar >Assignee: Swapan Shridhar >Priority: Blocker > Fix For: 2.4.0 > > Attachments: AMBARI-16888.patch > > > - The 1st call to SA gets capacity-scheduler configs as a dictionary compared > to a "\n"-separated single string of all the configs in subsequent calls.
> * 1st invocation, passed-in capacity-scheduler looks like : > {code} > "capacity-scheduler" : { > "properties" : { > "capacity-scheduler" : "null", > "yarn.scheduler.capacity.root.accessible-node-labels" : "*", > "yarn.scheduler.capacity.maximum-am-resource-percent" : "1", > "yarn.scheduler.capacity.root.acl_administer_queue" : "*", > "yarn.scheduler.capacity.queue-mappings-override.enable" : > 'false', > "yarn.scheduler.capacity.root.default.capacity" : "100", > "yarn.scheduler.capacity.root.default.user-limit-factor" : "1", > "yarn.scheduler.capacity.root.queues" : "default", > "yarn.scheduler.capacity.root.capacity" : "100", > "yarn.scheduler.capacity.root.default.acl_submit_applications" : > "*", > "yarn.scheduler.capacity.root.default.maximum-capacity" : "100", > "yarn.scheduler.capacity.node-locality-delay" : "40", > "yarn.scheduler.capacity.maximum-applications" : "1", > "yarn.scheduler.capacity.root.default.state" : "RUNNING" > } > }, > {code} > * subsequent invocations get capacity-scheduler as: > {code} > "capacity-scheduler": { > "properties": { > "capacity-scheduler": > "yarn.scheduler.capacity.root.queues=default\n" > > "yarn.scheduler.capacity.root.default.user-limit-factor=1\n" > > "yarn.scheduler.capacity.root.default.state=RUNNING\n" > > "yarn.scheduler.capacity.root.default.maximum-capacity=100\n" > > "yarn.scheduler.capacity.root.default.capacity=100\n" > > "yarn.scheduler.capacity.root.default.acl_submit_applications=*\n" > > "yarn.scheduler.capacity.root.capacity=100\n" > > "yarn.scheduler.capacity.root.acl_administer_queue=*\n" > > "yarn.scheduler.capacity.root.accessible-node-labels=*\n" > > "yarn.scheduler.capacity.node-locality-delay=40\n" > > "yarn.scheduler.capacity.maximum-applications=1\n" > > "yarn.scheduler.capacity.maximum-am-resource-percent=1\n" > > "yarn.scheduler.capacity.queue-mappings-override.enable=false\n" > } > }, > {code} > - Therefore, SA fails to create the 'llap' queue for Hive Server Interactive, > as SA knows to
handle only the 2nd case. > - This specific issue was seen while deploying a cluster with Blueprints. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
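The mismatch described above can be smoothed over by normalizing both shapes into one dict of properties. A sketch under our own helper name, not the actual stack_advisor.py change:

```python
# Sketch of normalizing the two capacity-scheduler shapes described above
# (helper name is hypothetical, not the actual stack_advisor.py change):
# 1st invocation passes a dict of individual properties; subsequent
# invocations pass one "\n"-joined "key=value" string.
def capacity_scheduler_properties(capacity_scheduler_config):
    props = capacity_scheduler_config.get("properties", {})
    as_string = props.get("capacity-scheduler")
    if as_string and as_string != "null":
        # 2nd shape: one "\n"-joined string of key=value pairs
        return dict(line.split("=", 1)
                    for line in as_string.strip().split("\n") if "=" in line)
    # 1st shape: already a dict of individual properties
    return {k: v for k, v in props.items() if k != "capacity-scheduler"}
```

With both shapes reduced to the same dict, the 'llap' queue creation logic only has to be written once.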
[jira] [Commented] (AMBARI-16757) Spark History Server heap size is not exposed (History Server crashed with OOM)
[ https://issues.apache.org/jira/browse/AMBARI-16757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15301397#comment-15301397 ] Hudson commented on AMBARI-16757: - FAILURE: Integrated in Ambari-trunk-Commit #4926 (See [https://builds.apache.org/job/Ambari-trunk-Commit/4926/]) AMBARI-16757. Spark History Server heap size is not exposed (History (sgunturi: [http://git-wip-us.apache.org/repos/asf?p=ambari.git&a=commit&h=8059a6392747292fd2b180577cd5cd93a9392b57]) * ambari-server/src/main/resources/common-services/SPARK/1.2.1/configuration/spark-env.xml > Spark History Server heap size is not exposed (History Server crashed with > OOM) > --- > > Key: AMBARI-16757 > URL: https://issues.apache.org/jira/browse/AMBARI-16757 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Affects Versions: 2.0.0 >Reporter: Weiqing Yang >Priority: Minor > Fix For: 2.4.0 > > Attachments: AMBARI-16757-1.patch, AMBARI-16757-2.patch, > AMBARI-16757-3.patch > > > Ambari is not exposing the heap size parameter for Spark History Server. > The workaround is to modify spark-env and add "SPARK_DAEMON_MEMORY=2g" for > example. > The newer versions of Spark defaults this to 1g, but on the older versions, > it was defaulting to 512m it seems, and it was causing OOM. > So in the patch, "SPARK_DAEMON_MEMORY=1G" is added in the spark-env template > (default: 1G). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
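The intent of the patch above (add SPARK_DAEMON_MEMORY=1G to the spark-env template when no heap size is set) can be sketched as a small idempotent check; the helper name is ours, not the patch's code.

```python
# Sketch of the fix's intent (hypothetical helper name, not the patch's
# code): make sure the spark-env template exports a heap size for the
# History Server; the patch adds SPARK_DAEMON_MEMORY=1G as the default.
def ensure_daemon_memory(spark_env_template, default="1G"):
    if "SPARK_DAEMON_MEMORY" in spark_env_template:
        return spark_env_template  # respect an operator's existing override
    return spark_env_template + "\nexport SPARK_DAEMON_MEMORY=" + default
```

Making the check idempotent matters here: the workaround in the description (manually adding SPARK_DAEMON_MEMORY=2g to spark-env) must not be clobbered by the template default.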
[jira] [Updated] (AMBARI-16648) Support Storm 1.0 in EU/RU to HDP 2.5
[ https://issues.apache.org/jira/browse/AMBARI-16648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alejandro Fernandez updated AMBARI-16648: - Description: HDP 2.5 is introducing breaking changes for Storm, so upgrades must apply config changes and delete all local Storm data plus the data on ZooKeeper. EU and RU from HDP 2.3 -> 2.5 and 2.4 -> 2.5 must do the following: apply config changes, delete Storm data on ZK only once, and delete Storm local data on all Storm hosts. > Support Storm 1.0 in EU/RU to HDP 2.5 > - > > Key: AMBARI-16648 > URL: https://issues.apache.org/jira/browse/AMBARI-16648 > Project: Ambari > Issue Type: Bug >Reporter: Sriharsha Chintalapani >Assignee: Alejandro Fernandez > Attachments: AMBARI-16648.branch-2.4.patch, AMBARI-16648.patch, > AMBARI-16648.trunk.patch > > > HDP 2.5 is introducing breaking changes for Storm, so upgrades must apply config > changes and delete all local Storm data plus the data on ZooKeeper. > EU and RU from HDP 2.3 -> 2.5 and 2.4 -> 2.5 must do the following: > Apply config changes > Delete Storm data on ZK only once > Delete Storm local data on all Storm hosts -- This message was sent by Atlassian JIRA (v6.3.4#6332)
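The upgrade steps above carry an ordering constraint: the shared ZooKeeper state is cleaned exactly once, while local Storm data is removed on every Storm host. An illustrative plan builder (step names are ours, not Ambari upgrade-pack syntax):

```python
# Illustrative plan builder for the upgrade steps above (step names are
# hypothetical, not Ambari upgrade-pack syntax): ZooKeeper data is deleted
# exactly once, local Storm data on every Storm host.
def storm_upgrade_plan(storm_hosts):
    plan = [("apply_config_changes", None)]
    if storm_hosts:
        # ZK state is shared across the cluster, so clean it from one host only
        plan.append(("delete_storm_zk_data", storm_hosts[0]))
    # local state must be removed on every Storm host
    plan.extend(("delete_storm_local_data", host) for host in storm_hosts)
    return plan

plan = storm_upgrade_plan(["host1", "host2"])
```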
[jira] [Commented] (AMBARI-16888) Handle the scenario when 'capacity-scheduler' configs is passed in as dictionary to Stack Advisor (generally on 1st invocation) in order to create 'llap' queue for Hi
[ https://issues.apache.org/jira/browse/AMBARI-16888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15301289#comment-15301289 ] Swapan Shridhar commented on AMBARI-16888: -- trunk commit : {code} commit 77213ab7dba15f9ec92b40cb2c5d77d52c636587 Author: Swapan Shridhar Date: Wed May 25 16:06:10 2016 -0700 AMBARI-16888. Handle the scenario when 'capacity-scheduler' configs is passed in as dictionary to Stack Advisor (generally on 1st invocation) in order to create 'llap' queue for Hive Server Interactive. {code} branch-2.4: {code} commit 43aad1bdeeece68847a319777f25429cc5bf6056 Author: Swapan Shridhar Date: Wed May 25 18:41:21 2016 -0700 AMBARI-16888. Handle the scenario when 'capacity-scheduler' configs is passed in as dictionary to Stack Advisor (generally on 1st invocation) in order to create 'llap' queue for Hive Server Interactive. {code} > Handle the scenario when 'capacity-scheduler' configs is passed in as > dictionary to Stack Advisor (generally on 1st invocation) in order to create > 'llap' queue for Hive Server Interactive. > > > Key: AMBARI-16888 > URL: https://issues.apache.org/jira/browse/AMBARI-16888 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Affects Versions: 2.4.0 >Reporter: Swapan Shridhar >Assignee: Swapan Shridhar >Priority: Blocker > Fix For: 2.4.0 > > Attachments: AMBARI-16888.patch > > > - The 1st call to SA gets capacity-scheduler configs as a dictionary compared > to a "\n"-separated single string of all the configs in subsequent calls.
> * 1st invocation, passed-in capacity-scheduler looks like : > {code} > "capacity-scheduler" : { > "properties" : { > "capacity-scheduler" : "null", > "yarn.scheduler.capacity.root.accessible-node-labels" : "*", > "yarn.scheduler.capacity.maximum-am-resource-percent" : "1", > "yarn.scheduler.capacity.root.acl_administer_queue" : "*", > "yarn.scheduler.capacity.queue-mappings-override.enable" : > 'false', > "yarn.scheduler.capacity.root.default.capacity" : "100", > "yarn.scheduler.capacity.root.default.user-limit-factor" : "1", > "yarn.scheduler.capacity.root.queues" : "default", > "yarn.scheduler.capacity.root.capacity" : "100", > "yarn.scheduler.capacity.root.default.acl_submit_applications" : > "*", > "yarn.scheduler.capacity.root.default.maximum-capacity" : "100", > "yarn.scheduler.capacity.node-locality-delay" : "40", > "yarn.scheduler.capacity.maximum-applications" : "1", > "yarn.scheduler.capacity.root.default.state" : "RUNNING" > } > }, > {code} > * subsequent invocations get capacity-scheduler as: > {code} > "capacity-scheduler": { > "properties": { > "capacity-scheduler": > "yarn.scheduler.capacity.root.queues=default\n" > > "yarn.scheduler.capacity.root.default.user-limit-factor=1\n" > > "yarn.scheduler.capacity.root.default.state=RUNNING\n" > > "yarn.scheduler.capacity.root.default.maximum-capacity=100\n" > > "yarn.scheduler.capacity.root.default.capacity=100\n" > > "yarn.scheduler.capacity.root.default.acl_submit_applications=*\n" > > "yarn.scheduler.capacity.root.capacity=100\n" > > "yarn.scheduler.capacity.root.acl_administer_queue=*\n" > > "yarn.scheduler.capacity.root.accessible-node-labels=*\n" > > "yarn.scheduler.capacity.node-locality-delay=40\n" > > "yarn.scheduler.capacity.maximum-applications=1\n" > > "yarn.scheduler.capacity.maximum-am-resource-percent=1\n" > > "yarn.scheduler.capacity.queue-mappings-override.enable=false\n" > } > }, > {code} > - Therefore, SA fails to create the 'llap' queue for Hive Server Interactive, > as SA knows to
handle only the 2nd case. > - This specific issue was seen while deploying a cluster with Blueprints. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-16888) Handle the scenario when 'capacity-scheduler' configs is passed in as dictionary to Stack Advisor (generally on 1st invocation) in order to create 'llap' queue for Hive
[ https://issues.apache.org/jira/browse/AMBARI-16888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swapan Shridhar updated AMBARI-16888: - Resolution: Fixed Status: Resolved (was: Patch Available) > Handle the scenario when 'capacity-scheduler' configs is passed in as > dictionary to Stack Advisor (generally on 1st invocation) in order to create > 'llap' queue for Hive Server Interactive. > > > Key: AMBARI-16888 > URL: https://issues.apache.org/jira/browse/AMBARI-16888 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Affects Versions: 2.4.0 >Reporter: Swapan Shridhar >Assignee: Swapan Shridhar >Priority: Blocker > Fix For: 2.4.0 > > Attachments: AMBARI-16888.patch > > > - The 1st call to SA gets capacity-scheduler configs as a dictionary compared > to a "\n"-separated single string of all the configs in subsequent calls. > * 1st invocation, passed-in capacity-scheduler looks like : > {code} > "capacity-scheduler" : { > "properties" : { > "capacity-scheduler" : "null", > "yarn.scheduler.capacity.root.accessible-node-labels" : "*", > "yarn.scheduler.capacity.maximum-am-resource-percent" : "1", > "yarn.scheduler.capacity.root.acl_administer_queue" : "*", > "yarn.scheduler.capacity.queue-mappings-override.enable" : > 'false', > "yarn.scheduler.capacity.root.default.capacity" : "100", > "yarn.scheduler.capacity.root.default.user-limit-factor" : "1", > "yarn.scheduler.capacity.root.queues" : "default", > "yarn.scheduler.capacity.root.capacity" : "100", > "yarn.scheduler.capacity.root.default.acl_submit_applications" : > "*", > "yarn.scheduler.capacity.root.default.maximum-capacity" : "100", > "yarn.scheduler.capacity.node-locality-delay" : "40", > "yarn.scheduler.capacity.maximum-applications" : "1", > "yarn.scheduler.capacity.root.default.state" : "RUNNING" > } > }, > {code} > * subsequent invocations get capacity-scheduler as: > {code} > "capacity-scheduler": { > "properties": { > "capacity-scheduler": >
"yarn.scheduler.capacity.root.queues=default\n" > > "yarn.scheduler.capacity.root.default.user-limit-factor=1\n" > > "yarn.scheduler.capacity.root.default.state=RUNNING\n" > > "yarn.scheduler.capacity.root.default.maximum-capacity=100\n" > > "yarn.scheduler.capacity.root.default.capacity=100\n" > > "yarn.scheduler.capacity.root.default.acl_submit_applications=*\n" > > "yarn.scheduler.capacity.root.capacity=100\n" > > "yarn.scheduler.capacity.root.acl_administer_queue=*\n" > > "yarn.scheduler.capacity.root.accessible-node-labels=*\n" > > "yarn.scheduler.capacity.node-locality-delay=40\n" > > "yarn.scheduler.capacity.maximum-applications=1\n" > > "yarn.scheduler.capacity.maximum-am-resource-percent=1\n" > > "yarn.scheduler.capacity.queue-mappings-override.enable=false\n" > } > }, > {code} > - Therefore, SA fails to create the 'llap' queue for Hive Server Interactive, > as SA only knows how to handle the 2nd case. > - This specific issue was seen while deploying a cluster with Blueprints. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
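The fix described above amounts to normalizing the two input shapes before the 'llap' queue logic runs. A minimal sketch of that normalization (the helper name and structure are illustrative assumptions, not Ambari's actual Stack Advisor code):

```python
def get_capacity_scheduler_properties(configurations):
    """Return capacity-scheduler properties as a flat dict, whether the
    input carries them as individual dictionary entries (1st invocation)
    or as one newline-separated key=value string (subsequent calls)."""
    props = configurations["capacity-scheduler"]["properties"]
    raw = props.get("capacity-scheduler")
    if raw and raw != "null":
        # 2nd case: single "\n"-separated string of all configs
        pairs = (line.split("=", 1)
                 for line in raw.strip().split("\n") if "=" in line)
        return {k: v for k, v in pairs}
    # 1st case: configs already arrive as a dictionary
    return {k: v for k, v in props.items() if k != "capacity-scheduler"}

# Both shapes from the description yield the same view:
as_dict = {"capacity-scheduler": {"properties": {
    "capacity-scheduler": "null",
    "yarn.scheduler.capacity.root.queues": "default"}}}
as_string = {"capacity-scheduler": {"properties": {
    "capacity-scheduler": "yarn.scheduler.capacity.root.queues=default\n"}}}
assert (get_capacity_scheduler_properties(as_dict)
        == get_capacity_scheduler_properties(as_string))
```

With a helper like this in place, the queue-creation code only ever sees one representation regardless of which invocation produced the input.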
[jira] [Updated] (AMBARI-16648) Support Storm 1.0 in EU/RU to HDP 2.5
[ https://issues.apache.org/jira/browse/AMBARI-16648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alejandro Fernandez updated AMBARI-16648: - Summary: Support Storm 1.0 in EU/RU to HDP 2.5 (was: Upgrade pack changes to work with Storm 1.0) > Support Storm 1.0 in EU/RU to HDP 2.5 > - > > Key: AMBARI-16648 > URL: https://issues.apache.org/jira/browse/AMBARI-16648 > Project: Ambari > Issue Type: Bug >Reporter: Sriharsha Chintalapani >Assignee: Alejandro Fernandez > Attachments: AMBARI-16648.branch-2.4.patch, AMBARI-16648.patch, > AMBARI-16648.trunk.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (AMBARI-16648) Upgrade pack changes to work with Storm 1.0
[ https://issues.apache.org/jira/browse/AMBARI-16648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alejandro Fernandez reassigned AMBARI-16648: Assignee: Alejandro Fernandez (was: Sriharsha Chintalapani) > Upgrade pack changes to work with Storm 1.0 > --- > > Key: AMBARI-16648 > URL: https://issues.apache.org/jira/browse/AMBARI-16648 > Project: Ambari > Issue Type: Bug >Reporter: Sriharsha Chintalapani >Assignee: Alejandro Fernandez > Attachments: AMBARI-16648.branch-2.4.patch, AMBARI-16648.patch, > AMBARI-16648.trunk.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-16648) Upgrade pack changes to work with Storm 1.0
[ https://issues.apache.org/jira/browse/AMBARI-16648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alejandro Fernandez updated AMBARI-16648: - Status: Open (was: Patch Available) > Upgrade pack changes to work with Storm 1.0 > --- > > Key: AMBARI-16648 > URL: https://issues.apache.org/jira/browse/AMBARI-16648 > Project: Ambari > Issue Type: Bug >Reporter: Sriharsha Chintalapani >Assignee: Sriharsha Chintalapani > Attachments: AMBARI-16648.branch-2.4.patch, AMBARI-16648.patch, > AMBARI-16648.trunk.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-16648) Upgrade pack changes to work with Storm 1.0
[ https://issues.apache.org/jira/browse/AMBARI-16648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alejandro Fernandez updated AMBARI-16648: - Attachment: AMBARI-16648.trunk.patch AMBARI-16648.branch-2.4.patch > Upgrade pack changes to work with Storm 1.0 > --- > > Key: AMBARI-16648 > URL: https://issues.apache.org/jira/browse/AMBARI-16648 > Project: Ambari > Issue Type: Bug >Reporter: Sriharsha Chintalapani >Assignee: Alejandro Fernandez > Attachments: AMBARI-16648.branch-2.4.patch, AMBARI-16648.patch, > AMBARI-16648.trunk.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-16889) RU: install version should be blocked while upgrade in progress
[ https://issues.apache.org/jira/browse/AMBARI-16889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nate Cole updated AMBARI-16889: --- Description: Throw an error when POSTing to /api/v1/clusters//stack_versions when an upgrade is already in progress. (was: Throw an error when POSTing to /api/v1/clusters//stack_versions) > RU: install version should be blocked while upgrade in progress > --- > > Key: AMBARI-16889 > URL: https://issues.apache.org/jira/browse/AMBARI-16889 > Project: Ambari > Issue Type: Task > Components: ambari-server >Reporter: Nate Cole >Assignee: Nate Cole >Priority: Critical > Fix For: 2.4.0 > > Attachments: AMBARI-16889.patch > > > Throw an error when POSTing to /api/v1/clusters//stack_versions when > an upgrade is already in progress. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
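The guard described in AMBARI-16889 can be sketched as a precondition check before the install request is processed; the class and attribute names below are illustrative assumptions, not Ambari's server-side API:

```python
class UpgradeInProgressError(Exception):
    """Raised when a version install is requested mid-upgrade."""


def check_install_allowed(cluster):
    """Reject a POST to .../stack_versions while an upgrade is running.

    `cluster` is any object exposing an `upgrade_in_progress` flag;
    in a real server this would be derived from upgrade request state.
    """
    if cluster.upgrade_in_progress:
        raise UpgradeInProgressError(
            "Upgrade is in progress; installing a repository version "
            "is not allowed until it completes.")
```

The point of failing fast here is that installing bits for another version while an upgrade is orchestrating package changes can leave hosts in an inconsistent state.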
[jira] [Commented] (AMBARI-16861) User/Group with no Cluster Role assigned but having View Permissions of "VIEW.USER" are shown as not editable in the List View
[ https://issues.apache.org/jira/browse/AMBARI-16861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15301247#comment-15301247 ] Hudson commented on AMBARI-16861: - SUCCESS: Integrated in Ambari-trunk-Commit #4925 (See [https://builds.apache.org/job/Ambari-trunk-Commit/4925/]) AMBARI-16861 - User/Group with no Cluster Role assigned but having View (rzang: [http://git-wip-us.apache.org/repos/asf?p=ambari.git&a=commit&h=b86a136692ceaca7efa609a1904553cbaaf739e2]) * ambari-admin/src/main/resources/ui/admin-web/test/unit/controllers/clusters/UserAccessListCtrl_test.js * ambari-admin/src/main/resources/ui/admin-web/app/scripts/controllers/clusters/UserAccessListCtrl.js > User/Group with no Cluster Role assigned but having View Permissions of > "VIEW.USER" are shown as not editable in the List View > -- > > Key: AMBARI-16861 > URL: https://issues.apache.org/jira/browse/AMBARI-16861 > Project: Ambari > Issue Type: Improvement > Components: ambari-admin >Affects Versions: trunk >Reporter: Keta Patel >Assignee: Keta Patel >Priority: Minor > Attachments: AMBARI-16861.patch, > ambari_admin_test_cases_after_fix.tiff, > ambari_admin_test_cases_before_fix.tiff, view_user_for_group.tiff, > view_user_for_user.tiff > > > Steps to reproduce this issue: > 1. Create a few local users. I have created users "aaa", "bbb", "ccc" in my > case. > 2. Create a view instance of your choice and assign one of the local users to > this view instance. I have assigned user "aaa" to the a Jobs view instance. > 3. Go to Manage Ambari -> Roles -> List View > 4. You see the user "aaa" showing Cluster Role of "View User" and is not > editable from this page. (attachment "view_user_for_user.tiff") > I had recreated this issue using group also. Please find attachment > "view_user_for_group.tiff" showing the List View for the case of group > assigned with View User permission alone. 
> Since View Permissions must not be mixed with the Cluster Permissions, the > proposed fix should show Cluster Permission of "None" for users who have only > "View User" permissions and no Cluster Permissions. This would allow the > ambari user to edit the Cluster Permissions of the local users from the List > View page. > Note: A workaround found to edit such users' Cluster Permissions is to go to > the Block View and assign some Cluster Permission. Then the List View shows > the Cluster Permission and is editable. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-16889) RU: install version should be blocked while upgrade in progress
[ https://issues.apache.org/jira/browse/AMBARI-16889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nate Cole updated AMBARI-16889: --- Attachment: AMBARI-16889.patch > RU: install version should be blocked while upgrade in progress > --- > > Key: AMBARI-16889 > URL: https://issues.apache.org/jira/browse/AMBARI-16889 > Project: Ambari > Issue Type: Task > Components: ambari-server >Reporter: Nate Cole >Assignee: Nate Cole >Priority: Critical > Fix For: 2.4.0 > > Attachments: AMBARI-16889.patch > > > Throw an error when POSTing to /api/v1/clusters//stack_versions -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-16889) RU: install version should be blocked while upgrade in progress
[ https://issues.apache.org/jira/browse/AMBARI-16889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nate Cole updated AMBARI-16889: --- Status: Patch Available (was: Open) > RU: install version should be blocked while upgrade in progress > --- > > Key: AMBARI-16889 > URL: https://issues.apache.org/jira/browse/AMBARI-16889 > Project: Ambari > Issue Type: Task > Components: ambari-server >Reporter: Nate Cole >Assignee: Nate Cole >Priority: Critical > Fix For: 2.4.0 > > Attachments: AMBARI-16889.patch > > > Throw an error when POSTing to /api/v1/clusters//stack_versions -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-16889) RU: install version should be blocked while upgrade in progress
[ https://issues.apache.org/jira/browse/AMBARI-16889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nate Cole updated AMBARI-16889: --- Description: Throw an error when POSTing to /api/v1/clusters//stack_versions > RU: install version should be blocked while upgrade in progress > --- > > Key: AMBARI-16889 > URL: https://issues.apache.org/jira/browse/AMBARI-16889 > Project: Ambari > Issue Type: Task > Components: ambari-server >Reporter: Nate Cole >Assignee: Nate Cole >Priority: Critical > Fix For: 2.4.0 > > > Throw an error when POSTing to /api/v1/clusters//stack_versions -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (AMBARI-16889) RU: install version should be blocked while upgrade in progress
Nate Cole created AMBARI-16889: -- Summary: RU: install version should be blocked while upgrade in progress Key: AMBARI-16889 URL: https://issues.apache.org/jira/browse/AMBARI-16889 Project: Ambari Issue Type: Task Components: ambari-server Reporter: Nate Cole Assignee: Nate Cole Priority: Critical Fix For: 2.4.0 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-13671) Ambari should check for duplicate config values
[ https://issues.apache.org/jira/browse/AMBARI-13671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated AMBARI-13671: Description: Using /#/main/services/HBASE/configs, I was able to save duplicate values for hbase.coprocessor.region.classes : org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint,org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint Ambari should check for duplicate config values. was: Using /#/main/services/HBASE/configs, I was able to save duplicate values for hbase.coprocessor.region.classes : org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint,org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint Ambari should check for duplicate config values. > Ambari should check for duplicate config values > --- > > Key: AMBARI-13671 > URL: https://issues.apache.org/jira/browse/AMBARI-13671 > Project: Ambari > Issue Type: Bug >Reporter: Ted Yu > > Using /#/main/services/HBASE/configs, I was able to save duplicate values for > hbase.coprocessor.region.classes : > org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint,org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint > Ambari should check for duplicate config values. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
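A duplicate check like the one AMBARI-13671 asks for is straightforward for comma-separated config values; this is a hypothetical validation sketch, not code from an Ambari patch:

```python
def find_duplicate_entries(value, sep=","):
    """Return entries that occur more than once in a separated config
    value, preserving first-seen order, so the UI can warn the user."""
    seen, dups = set(), []
    for entry in (e.strip() for e in value.split(sep)):
        if entry in seen and entry not in dups:
            dups.append(entry)
        seen.add(entry)
    return dups

# The hbase.coprocessor.region.classes value from the report:
coproc = ("org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint,"
          "org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint")
assert find_duplicate_entries(coproc) == [
    "org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint"]
```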
[jira] [Updated] (AMBARI-14163) zookeeper session timeout for hbase should take zookeeper tickTime into account
[ https://issues.apache.org/jira/browse/AMBARI-14163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated AMBARI-14163: Description: With tickTime=2000 in zoo.cfg, I tried to set zookeeper.session.timeout value of 1 min 40 seconds. The change was accepted. However, such timeout is not reachable (it is > 20 times tickTime) Ambari should detect such scenario and warn user. was: With tickTime=2000 in zoo.cfg, I tried to set zookeeper.session.timeout value of 1 min 40 seconds. The change was accepted. However, such timeout is not reachable (it is > 20 times tickTime) Ambari should detect such scenario and warn user. > zookeeper session timeout for hbase should take zookeeper tickTime into > account > --- > > Key: AMBARI-14163 > URL: https://issues.apache.org/jira/browse/AMBARI-14163 > Project: Ambari > Issue Type: Bug >Reporter: Ted Yu > > With tickTime=2000 in zoo.cfg, I tried to set zookeeper.session.timeout value > of 1 min 40 seconds. > The change was accepted. > However, such timeout is not reachable (it is > 20 times tickTime) > Ambari should detect such scenario and warn user. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
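The warning AMBARI-14163 asks for follows from ZooKeeper's default behavior of clamping negotiated session timeouts to the range [2 x tickTime, 20 x tickTime]. A hedged sketch of such a validation (function name and message format are assumptions):

```python
def validate_zk_session_timeout(session_timeout_ms, tick_time_ms=2000):
    """Return a warning string when an hbase zookeeper.session.timeout
    cannot take effect, or None when the value is reachable.

    ZooKeeper clamps session timeouts to [2 * tickTime, 20 * tickTime]
    by default, so values outside that range are silently unreachable.
    """
    lo, hi = 2 * tick_time_ms, 20 * tick_time_ms
    if not lo <= session_timeout_ms <= hi:
        return ("zookeeper.session.timeout=%d ms is outside [%d, %d] ms "
                "and will be clamped by ZooKeeper"
                % (session_timeout_ms, lo, hi))
    return None

# The reported case: 1 min 40 s = 100000 ms > 20 * 2000 ms, so warn:
assert validate_zk_session_timeout(100000, tick_time_ms=2000) is not None
```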
[jira] [Updated] (AMBARI-16888) Handle the scenario when 'capacity-scheduler' configs is passed in as dictionary to Stack Advisor (generally on 1st invocation) in order to create 'llap' queue for Hive
[ https://issues.apache.org/jira/browse/AMBARI-16888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swapan Shridhar updated AMBARI-16888: - Attachment: AMBARI-16888.patch > Handle the scenario when 'capacity-scheduler' configs is passed in as > dictionary to Stack Advisor (generally on 1st invocation) in order to create > 'llap' queue for Hive Server Interactive. > > > Key: AMBARI-16888 > URL: https://issues.apache.org/jira/browse/AMBARI-16888 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Affects Versions: 2.4.0 >Reporter: Swapan Shridhar >Assignee: Swapan Shridhar >Priority: Blocker > Fix For: 2.4.0 > > Attachments: AMBARI-16888.patch > > > - The 1st call to SA gets capacity-scheduler configs as dictionary compared > to "/n" separated single string of all the configs in subsequent calls. > * 1st invocation, passed-in capacity-scheduler looks like : > {code} > "capacity-scheduler" : { > "properties" : { > "capacity-scheduler" : "null", > "yarn.scheduler.capacity.root.accessible-node-labels" : "*", > "yarn.scheduler.capacity.maximum-am-resource-percent" : "1", > "yarn.scheduler.capacity.root.acl_administer_queue" : "*", > "yarn.scheduler.capacity.queue-mappings-override.enable" : > 'false', > "yarn.scheduler.capacity.root.default.capacity" : "100", > "yarn.scheduler.capacity.root.default.user-limit-factor" : "1", > "yarn.scheduler.capacity.root.queues" : "default", > "yarn.scheduler.capacity.root.capacity" : "100", > "yarn.scheduler.capacity.root.default.acl_submit_applications" : > "*", > "yarn.scheduler.capacity.root.default.maximum-capacity" : "100", > "yarn.scheduler.capacity.node-locality-delay" : "40", > "yarn.scheduler.capacity.maximum-applications" : "1", > "yarn.scheduler.capacity.root.default.state" : "RUNNING" > } > }, > {code} > * subsequent invocations gets capacity-schdeuler as: > {code} > "capacity-scheduler": { > "properties": { > "capacity-scheduler": > "yarn.scheduler.capacity.root.queues=default\n" > > 
"yarn.scheduler.capacity.root.default.user-limit-factor=1\n" > > "yarn.scheduler.capacity.root.default.state=RUNNING\n" > > "yarn.scheduler.capacity.root.default.maximum-capacity=100\n" > > "yarn.scheduler.capacity.root.default.capacity=100\n" > > "yarn.scheduler.capacity.root.default.acl_submit_applications=*\n" > > "yarn.scheduler.capacity.root.capacity=100\n" > > "yarn.scheduler.capacity.root.acl_administer_queue=*\n" > > "yarn.scheduler.capacity.root.accessible-node-labels=*\n" > > "yarn.scheduler.capacity.node-locality-delay=40\n" > > "yarn.scheduler.capacity.maximum-applications=1\n" > > "yarn.scheduler.capacity.maximum-am-resource-percent=1\n" > > "yarn.scheduler.capacity.queue-mappings-override.enable=false\n" > } > }, > {code} > - Therefore, SA fails to create the 'llap' queue for Hive Server Interactive, > as SA only knows how to handle the 2nd case. > - This specific issue was seen while deploying a cluster with Blueprints. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-16888) Handle the scenario when 'capacity-scheduler' configs is passed in as dictionary to Stack Advisor (generally on 1st invocation) in order to create 'llap' queue for Hive
[ https://issues.apache.org/jira/browse/AMBARI-16888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swapan Shridhar updated AMBARI-16888: - Status: Patch Available (was: In Progress) > Handle the scenario when 'capacity-scheduler' configs is passed in as > dictionary to Stack Advisor (generally on 1st invocation) in order to create > 'llap' queue for Hive Server Interactive. > > > Key: AMBARI-16888 > URL: https://issues.apache.org/jira/browse/AMBARI-16888 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Affects Versions: 2.4.0 >Reporter: Swapan Shridhar >Assignee: Swapan Shridhar >Priority: Blocker > Fix For: 2.4.0 > > Attachments: AMBARI-16888.patch > > > - The 1st call to SA gets capacity-scheduler configs as dictionary compared > to "/n" separated single string of all the configs in subsequent calls. > * 1st invocation, passed-in capacity-scheduler looks like : > {code} > "capacity-scheduler" : { > "properties" : { > "capacity-scheduler" : "null", > "yarn.scheduler.capacity.root.accessible-node-labels" : "*", > "yarn.scheduler.capacity.maximum-am-resource-percent" : "1", > "yarn.scheduler.capacity.root.acl_administer_queue" : "*", > "yarn.scheduler.capacity.queue-mappings-override.enable" : > 'false', > "yarn.scheduler.capacity.root.default.capacity" : "100", > "yarn.scheduler.capacity.root.default.user-limit-factor" : "1", > "yarn.scheduler.capacity.root.queues" : "default", > "yarn.scheduler.capacity.root.capacity" : "100", > "yarn.scheduler.capacity.root.default.acl_submit_applications" : > "*", > "yarn.scheduler.capacity.root.default.maximum-capacity" : "100", > "yarn.scheduler.capacity.node-locality-delay" : "40", > "yarn.scheduler.capacity.maximum-applications" : "1", > "yarn.scheduler.capacity.root.default.state" : "RUNNING" > } > }, > {code} > * subsequent invocations gets capacity-schdeuler as: > {code} > "capacity-scheduler": { > "properties": { > "capacity-scheduler": > 
"yarn.scheduler.capacity.root.queues=default\n" > > "yarn.scheduler.capacity.root.default.user-limit-factor=1\n" > > "yarn.scheduler.capacity.root.default.state=RUNNING\n" > > "yarn.scheduler.capacity.root.default.maximum-capacity=100\n" > > "yarn.scheduler.capacity.root.default.capacity=100\n" > > "yarn.scheduler.capacity.root.default.acl_submit_applications=*\n" > > "yarn.scheduler.capacity.root.capacity=100\n" > > "yarn.scheduler.capacity.root.acl_administer_queue=*\n" > > "yarn.scheduler.capacity.root.accessible-node-labels=*\n" > > "yarn.scheduler.capacity.node-locality-delay=40\n" > > "yarn.scheduler.capacity.maximum-applications=1\n" > > "yarn.scheduler.capacity.maximum-am-resource-percent=1\n" > > "yarn.scheduler.capacity.queue-mappings-override.enable=false\n" > } > }, > {code} > - Therefore, SA fails to create the 'llap' queue for Hive Server Interactive, > as SA only knows how to handle the 2nd case. > - This specific issue was seen while deploying a cluster with Blueprints. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-16861) User/Group with no Cluster Role assigned but having View Permissions of "VIEW.USER" are shown as not editable in the List View
[ https://issues.apache.org/jira/browse/AMBARI-16861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Richard Zang updated AMBARI-16861: -- Resolution: Fixed Status: Resolved (was: Patch Available) committed to trunk and 2.4 b86a136692ceaca7efa609a1904553cbaaf739e2 > User/Group with no Cluster Role assigned but having View Permissions of > "VIEW.USER" are shown as not editable in the List View > -- > > Key: AMBARI-16861 > URL: https://issues.apache.org/jira/browse/AMBARI-16861 > Project: Ambari > Issue Type: Improvement > Components: ambari-admin >Affects Versions: trunk >Reporter: Keta Patel >Assignee: Keta Patel >Priority: Minor > Attachments: AMBARI-16861.patch, > ambari_admin_test_cases_after_fix.tiff, > ambari_admin_test_cases_before_fix.tiff, view_user_for_group.tiff, > view_user_for_user.tiff > > > Steps to reproduce this issue: > 1. Create a few local users. I have created users "aaa", "bbb", "ccc" in my > case. > 2. Create a view instance of your choice and assign one of the local users to > this view instance. I have assigned user "aaa" to the a Jobs view instance. > 3. Go to Manage Ambari -> Roles -> List View > 4. You see the user "aaa" showing Cluster Role of "View User" and is not > editable from this page. (attachment "view_user_for_user.tiff") > I had recreated this issue using group also. Please find attachment > "view_user_for_group.tiff" showing the List View for the case of group > assigned with View User permission alone. > Since View Permissions must not be mixed with the Cluster Permissions, the > proposed fix should show Cluster Permission of "None" for users who have only > "View User" permissions and no Cluster Permissions. This would allow the > ambari user to edit the Cluster Permissions of the local users from the List > View page. > Note: A workaround found to edit such users' Cluster Permissions is to go to > the Block View and assign some Cluster Permission. 
Then the List View shows > the Cluster Permission and is editable. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-16757) Spark History Server heap size is not exposed (History Server crashed with OOM)
[ https://issues.apache.org/jira/browse/AMBARI-16757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Srimanth Gunturi updated AMBARI-16757: -- Resolution: Fixed Status: Resolved (was: Patch Available) Committed to branch-2.4 and trunk > Spark History Server heap size is not exposed (History Server crashed with > OOM) > --- > > Key: AMBARI-16757 > URL: https://issues.apache.org/jira/browse/AMBARI-16757 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Affects Versions: 2.0.0 >Reporter: Weiqing Yang >Priority: Minor > Fix For: 2.4.0 > > Attachments: AMBARI-16757-1.patch, AMBARI-16757-2.patch, > AMBARI-16757-3.patch > > > Ambari is not exposing the heap size parameter for Spark History Server. > The workaround is to modify spark-env and add "SPARK_DAEMON_MEMORY=2g" for > example. > The newer versions of Spark defaults this to 1g, but on the older versions, > it was defaulting to 512m it seems, and it was causing OOM. > So in the patch, "SPARK_DAEMON_MEMORY=1G" is added in the spark-env template > (default: 1G). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
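The shape of the fix above is a templated spark-env with a configurable daemon heap size. A hypothetical rendering sketch (the template engine, property name, and default mirror the description but are not Ambari's actual code):

```python
# Assumed spark-env template fragment; Ambari renders such templates
# from stack configuration, but the mechanics here are illustrative.
SPARK_ENV_TEMPLATE = """\
# Memory for the Spark History Server and other daemons (default: 1G)
export SPARK_DAEMON_MEMORY={spark_daemon_memory}
"""

def render_spark_env(configurations):
    """Render spark-env, falling back to the 1G default from the patch."""
    mem = configurations.get("spark_daemon_memory", "1G")
    return SPARK_ENV_TEMPLATE.format(spark_daemon_memory=mem)

# The OOM workaround from the description becomes a config change:
assert "SPARK_DAEMON_MEMORY=2g" in render_spark_env(
    {"spark_daemon_memory": "2g"})
```

Exposing the value through the template means the 512m default of older Spark versions no longer applies silently.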
[jira] [Commented] (AMBARI-12885) Dynamic stack extensions - install and upgrade support for custom services
[ https://issues.apache.org/jira/browse/AMBARI-12885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15301060#comment-15301060 ] Matt commented on AMBARI-12885: --- [~Tim Thorpe] Can you please add some steps on how to install a custom service as an extension? I would like to try this out from a build of your patch and see if I can install a custom service on it seamlessly. > Dynamic stack extensions - install and upgrade support for custom services > -- > > Key: AMBARI-12885 > URL: https://issues.apache.org/jira/browse/AMBARI-12885 > Project: Ambari > Issue Type: New Feature > Components: ambari-agent, ambari-server, ambari-web >Reporter: Tim Thorpe >Assignee: Tim Thorpe > Attachments: AMBARI-12885.patch, Dynamic Stack Extensions - High > Level Design v4.pdf > > > The purpose of this proposal is to facilitate adding custom services to an > existing stack. Ideally this would support adding and upgrading custom > services separately from the core services defined in the stack. In > particular we are looking at custom services that need to support several > different stacks (different distributions of Ambari). The release cycle of > the custom services may be different from that of the core stack; that is, a > custom service may be upgraded at a different rate than the core distribution > itself and may be upgraded multiple times within the lifespan of a single > release of the core distribution. > One possible approach to handling this would be dynamically extending a stack > (after install time). It would be best to extend the stack in packages where > a stack extension package can have one or more custom services. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-16885) Change location of HAWQ tmp directories
[ https://issues.apache.org/jira/browse/AMBARI-16885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15301062#comment-15301062 ] Hudson commented on AMBARI-16885: - FAILURE: Integrated in Ambari-trunk-Commit #4924 (See [https://builds.apache.org/job/Ambari-trunk-Commit/4924/]) AMBARI-16885: Change location of HAWQ tmp directories (mithmatt) (matt: [http://git-wip-us.apache.org/repos/asf?p=ambari.git&a=commit&h=24c6d1f21e299a67a65b3390d8e659b60baa93e5]) * ambari-server/src/test/python/stacks/2.3/configs/hawq_default.json * ambari-server/src/test/python/stacks/2.3/HAWQ/test_hawqsegment.py * ambari-server/src/test/python/stacks/2.3/HAWQ/test_hawqmaster.py * ambari-server/src/main/resources/common-services/HAWQ/2.0.0/configuration/hawq-site.xml * ambari-server/src/test/python/stacks/2.3/HAWQ/test_hawqstandby.py * ambari-server/src/main/resources/common-services/HAWQ/2.0.0/package/scripts/hawq_constants.py > Change location of HAWQ tmp directories > --- > > Key: AMBARI-16885 > URL: https://issues.apache.org/jira/browse/AMBARI-16885 > Project: Ambari > Issue Type: Bug > Components: stacks >Reporter: Matt >Assignee: Matt >Priority: Trivial > Fix For: trunk, 2.4.0 > > Attachments: AMBARI-16885-trunk-orig.patch, > AMBARI-16885-trunk-v1.patch > > > Update HAWQ temp directories to /data/hawq/tmp/master and > /data/hawq/tmp/segment respectively -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-16857) Support yarn aux service classpath isolation feature in spark
[ https://issues.apache.org/jira/browse/AMBARI-16857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bikas Saha updated AMBARI-16857: Attachment: AMBARI-16857.3.patch > Support yarn aux service classpath isolation feature in spark > - > > Key: AMBARI-16857 > URL: https://issues.apache.org/jira/browse/AMBARI-16857 > Project: Ambari > Issue Type: Bug >Reporter: Bikas Saha >Assignee: Bikas Saha > Attachments: AMBARI-16857.1.patch, AMBARI-16857.2.patch, > AMBARI-16857.3.patch > > > Support yarn aux service classpath isolation feature in spark -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-16888) Handle the scenario when 'capacity-scheduler' configs is passed in as dictionary to Stack Advisor (generally on 1st invocation) in order to create 'llap' queue for Hive
[ https://issues.apache.org/jira/browse/AMBARI-16888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15300979#comment-15300979 ] Swapan Shridhar commented on AMBARI-16888: -- Resolution : Added support for dealing with the capacity-schdeuler scenario when it's configs have been passed-in as dictionary. > Handle the scenario when 'capacity-scheduler' configs is passed in as > dictionary to Stack Advisor (generally on 1st invocation) in order to create > 'llap' queue for Hive Server Interactive. > > > Key: AMBARI-16888 > URL: https://issues.apache.org/jira/browse/AMBARI-16888 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Affects Versions: 2.4.0 >Reporter: Swapan Shridhar >Assignee: Swapan Shridhar > Fix For: 2.4.0 > > > - The 1st call to SA gets capacity-scheduler configs as dictionary compared > to "/n" separated single string of all the configs in subsequent calls. > * 1st invocation, passed-in capacity-scheduler looks like : > {code} > "capacity-scheduler" : { > "properties" : { > "capacity-scheduler" : "null", > "yarn.scheduler.capacity.root.accessible-node-labels" : "*", > "yarn.scheduler.capacity.maximum-am-resource-percent" : "1", > "yarn.scheduler.capacity.root.acl_administer_queue" : "*", > "yarn.scheduler.capacity.queue-mappings-override.enable" : > 'false', > "yarn.scheduler.capacity.root.default.capacity" : "100", > "yarn.scheduler.capacity.root.default.user-limit-factor" : "1", > "yarn.scheduler.capacity.root.queues" : "default", > "yarn.scheduler.capacity.root.capacity" : "100", > "yarn.scheduler.capacity.root.default.acl_submit_applications" : > "*", > "yarn.scheduler.capacity.root.default.maximum-capacity" : "100", > "yarn.scheduler.capacity.node-locality-delay" : "40", > "yarn.scheduler.capacity.maximum-applications" : "1", > "yarn.scheduler.capacity.root.default.state" : "RUNNING" > } > }, > {code} > * subsequent invocations gets capacity-schdeuler as: > {code} > "capacity-scheduler": { > 
"properties": { > "capacity-scheduler": > "yarn.scheduler.capacity.root.queues=default\n" > > "yarn.scheduler.capacity.root.default.user-limit-factor=1\n" > > "yarn.scheduler.capacity.root.default.state=RUNNING\n" > > "yarn.scheduler.capacity.root.default.maximum-capacity=100\n" > > "yarn.scheduler.capacity.root.default.capacity=100\n" > > "yarn.scheduler.capacity.root.default.acl_submit_applications=*\n" > > "yarn.scheduler.capacity.root.capacity=100\n" > > "yarn.scheduler.capacity.root.acl_administer_queue=*\n" > > "yarn.scheduler.capacity.root.accessible-node-labels=*\n" > > "yarn.scheduler.capacity.node-locality-delay=40\n" > > "yarn.scheduler.capacity.maximum-applications=1\n" > > "yarn.scheduler.capacity.maximum-am-resource-percent=1\n" > > "yarn.scheduler.capacity.queue-mappings-override.enable=false\n" > } > }, > {code} > - Therefore, SA fails to create the 'llap' queue for Hive Server Interactive, > as SA knows to handle only the 2nd case. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-16888) Handle the scenario when 'capacity-scheduler' configs is passed in as dictionary to Stack Advisor (generally on 1st invocation) in order to create 'llap' queue for Hive
[ https://issues.apache.org/jira/browse/AMBARI-16888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swapan Shridhar updated AMBARI-16888: - Description: - The 1st call to SA gets the capacity-scheduler configs as a dictionary, compared to the "\n"-separated single string of all the configs in subsequent calls. * 1st invocation, passed-in capacity-scheduler looks like: {code} "capacity-scheduler" : { "properties" : { "capacity-scheduler" : "null", "yarn.scheduler.capacity.root.accessible-node-labels" : "*", "yarn.scheduler.capacity.maximum-am-resource-percent" : "1", "yarn.scheduler.capacity.root.acl_administer_queue" : "*", "yarn.scheduler.capacity.queue-mappings-override.enable" : 'false', "yarn.scheduler.capacity.root.default.capacity" : "100", "yarn.scheduler.capacity.root.default.user-limit-factor" : "1", "yarn.scheduler.capacity.root.queues" : "default", "yarn.scheduler.capacity.root.capacity" : "100", "yarn.scheduler.capacity.root.default.acl_submit_applications" : "*", "yarn.scheduler.capacity.root.default.maximum-capacity" : "100", "yarn.scheduler.capacity.node-locality-delay" : "40", "yarn.scheduler.capacity.maximum-applications" : "1", "yarn.scheduler.capacity.root.default.state" : "RUNNING" } }, {code} * subsequent invocations get capacity-scheduler as: {code} "capacity-scheduler": { "properties": { "capacity-scheduler": "yarn.scheduler.capacity.root.queues=default\n" "yarn.scheduler.capacity.root.default.user-limit-factor=1\n" "yarn.scheduler.capacity.root.default.state=RUNNING\n" "yarn.scheduler.capacity.root.default.maximum-capacity=100\n" "yarn.scheduler.capacity.root.default.capacity=100\n" "yarn.scheduler.capacity.root.default.acl_submit_applications=*\n" "yarn.scheduler.capacity.root.capacity=100\n" "yarn.scheduler.capacity.root.acl_administer_queue=*\n" "yarn.scheduler.capacity.root.accessible-node-labels=*\n" "yarn.scheduler.capacity.node-locality-delay=40\n" "yarn.scheduler.capacity.maximum-applications=1\n" 
"yarn.scheduler.capacity.maximum-am-resource-percent=1\n" "yarn.scheduler.capacity.queue-mappings-override.enable=false\n" } }, {code} - Therefore, SA fails to create the 'llap' queue for Hive Server Interactive, as SA only knows how to handle the 2nd case. was: - The 1st call to SA gets the capacity-scheduler configs as a dictionary, compared to the "\n"-separated single string of all the configs in subsequent calls. * 1st invocation, passed-in capacity-scheduler looks like: {code} "capacity-scheduler" : { "properties" : { "capacity-scheduler" : "null", "yarn.scheduler.capacity.root.accessible-node-labels" : "*", "yarn.scheduler.capacity.maximum-am-resource-percent" : "1", "yarn.scheduler.capacity.root.acl_administer_queue" : "*", "yarn.scheduler.capacity.queue-mappings-override.enable" : 'false', "yarn.scheduler.capacity.root.default.capacity" : "100", "yarn.scheduler.capacity.root.default.user-limit-factor" : "1", "yarn.scheduler.capacity.root.queues" : "default", "yarn.scheduler.capacity.root.capacity" : "100", "yarn.scheduler.capacity.root.default.acl_submit_applications" : "*", "yarn.scheduler.capacity.root.default.maximum-capacity" : "100", "yarn.scheduler.capacity.node-locality-delay" : "40", "yarn.scheduler.capacity.maximum-applications" : "1", "yarn.scheduler.capacity.root.default.state" : "RUNNING" } }, {code} * subsequent invocations get capacity-scheduler as: {code} "capacity-scheduler": { "properties": { "capacity-scheduler": "yarn.scheduler.capacity.root.queues=default\n" "yarn.scheduler.capacity.root.default.user-limit-factor=1\n" "yarn.scheduler.capacity.root.default.state=RUNNING\n" "yarn.scheduler.capacity.root.default.maximum-capacity=100\n" "yarn.scheduler.capacity.root.default.capacity=100\n" "yarn.scheduler.capacity.root.default.acl_submit_applications=*\n" "yarn.scheduler.capacity.root.capacity=100\n"
[jira] [Updated] (AMBARI-16888) Handle the scenario when 'capacity-scheduler' configs is passed in as dictionary to Stack Advisor (generally on 1st invocation) in order to create 'llap' queue for Hive
[ https://issues.apache.org/jira/browse/AMBARI-16888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swapan Shridhar updated AMBARI-16888: - Summary: Handle the scenario when 'capacity-scheduler' configs is passed in as dictionary to Stack Advisor (generally on 1st invocation) in order to create 'llap' queue for Hive Server Interactive. (was: Handle the scenario when 'capacity-scheduler' configs is passed in as dictionary to Stack Advisor (generally on 1st invocation).) > Handle the scenario when 'capacity-scheduler' configs is passed in as > dictionary to Stack Advisor (generally on 1st invocation) in order to create > 'llap' queue for Hive Server Interactive. > > > Key: AMBARI-16888 > URL: https://issues.apache.org/jira/browse/AMBARI-16888 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Affects Versions: 2.4.0 >Reporter: Swapan Shridhar >Assignee: Swapan Shridhar > Fix For: 2.4.0 > > > - The 1st call to SA gets the capacity-scheduler configs as a dictionary, compared > to the "\n"-separated single string of all the configs in subsequent calls. 
> * 1st invocation, passed-in capacity-scheduler looks like: > {code} > "capacity-scheduler" : { > "properties" : { > "capacity-scheduler" : "null", > "yarn.scheduler.capacity.root.accessible-node-labels" : "*", > "yarn.scheduler.capacity.maximum-am-resource-percent" : "1", > "yarn.scheduler.capacity.root.acl_administer_queue" : "*", > "yarn.scheduler.capacity.queue-mappings-override.enable" : > 'false', > "yarn.scheduler.capacity.root.default.capacity" : "100", > "yarn.scheduler.capacity.root.default.user-limit-factor" : "1", > "yarn.scheduler.capacity.root.queues" : "default", > "yarn.scheduler.capacity.root.capacity" : "100", > "yarn.scheduler.capacity.root.default.acl_submit_applications" : > "*", > "yarn.scheduler.capacity.root.default.maximum-capacity" : "100", > "yarn.scheduler.capacity.node-locality-delay" : "40", > "yarn.scheduler.capacity.maximum-applications" : "1", > "yarn.scheduler.capacity.root.default.state" : "RUNNING" > } > }, > {code} > * subsequent invocations get capacity-scheduler as: > {code} > "capacity-scheduler": { > "properties": { > "capacity-scheduler": > "yarn.scheduler.capacity.root.queues=default\n" > > "yarn.scheduler.capacity.root.default.user-limit-factor=1\n" > > "yarn.scheduler.capacity.root.default.state=RUNNING\n" > > "yarn.scheduler.capacity.root.default.maximum-capacity=100\n" > > "yarn.scheduler.capacity.root.default.capacity=100\n" > > "yarn.scheduler.capacity.root.default.acl_submit_applications=*\n" > > "yarn.scheduler.capacity.root.capacity=100\n" > > "yarn.scheduler.capacity.root.acl_administer_queue=*\n" > > "yarn.scheduler.capacity.root.accessible-node-labels=*\n" > > "yarn.scheduler.capacity.node-locality-delay=40\n" > > "yarn.scheduler.capacity.maximum-applications=1\n" > > "yarn.scheduler.capacity.maximum-am-resource-percent=1\n" > > "yarn.scheduler.capacity.queue-mappings-override.enable=false\n" > } > }, > {code} > - Therefore, SA fails to create the 'llap' queue for Hive Server Interactive. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
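The two payload shapes described above can be reconciled before any queue logic runs. The following is a minimal, illustrative Python sketch (not the actual Ambari patch; the helper name and the shape of the `configurations` argument are assumptions based on the examples in the report) of normalizing 'capacity-scheduler' to one dictionary regardless of which form Stack Advisor received:

```python
def get_capacity_scheduler_properties(configurations):
    """Normalize 'capacity-scheduler' configs to a plain dict of properties.

    On the 1st Stack Advisor invocation the properties may arrive as
    individual dictionary entries; on later invocations they arrive as one
    "\n"-separated "key=value" string under the 'capacity-scheduler' key.
    """
    properties = configurations.get("capacity-scheduler", {}).get("properties", {})
    packed = properties.get("capacity-scheduler")
    if packed and packed != "null":
        # 2nd form: one string of "key=value" pairs joined by "\n".
        result = {}
        for line in packed.split("\n"):
            if "=" in line:
                key, _, value = line.partition("=")
                result[key.strip()] = value.strip()
        return result
    # 1st form: properties already given as individual dictionary entries.
    return {k: v for k, v in properties.items() if k != "capacity-scheduler"}
```

With a normalizer like this in place, the 'llap' queue creation logic only ever sees one representation, which is the essence of the fix the ticket asks for.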
[jira] [Updated] (AMBARI-16888) Handle the scenario when 'capacity-scheduler' configs is passed in as dictionary to Stack Advisor (generally on 1st invocation) in order to create 'llap' queue for Hive
[ https://issues.apache.org/jira/browse/AMBARI-16888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swapan Shridhar updated AMBARI-16888: - Priority: Blocker (was: Major) > Handle the scenario when 'capacity-scheduler' configs is passed in as > dictionary to Stack Advisor (generally on 1st invocation) in order to create > 'llap' queue for Hive Server Interactive. > > > Key: AMBARI-16888 > URL: https://issues.apache.org/jira/browse/AMBARI-16888 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Affects Versions: 2.4.0 >Reporter: Swapan Shridhar >Assignee: Swapan Shridhar >Priority: Blocker > Fix For: 2.4.0 > > > - The 1st call to SA gets the capacity-scheduler configs as a dictionary, compared > to the "\n"-separated single string of all the configs in subsequent calls. > * 1st invocation, passed-in capacity-scheduler looks like: > {code} > "capacity-scheduler" : { > "properties" : { > "capacity-scheduler" : "null", > "yarn.scheduler.capacity.root.accessible-node-labels" : "*", > "yarn.scheduler.capacity.maximum-am-resource-percent" : "1", > "yarn.scheduler.capacity.root.acl_administer_queue" : "*", > "yarn.scheduler.capacity.queue-mappings-override.enable" : > 'false', > "yarn.scheduler.capacity.root.default.capacity" : "100", > "yarn.scheduler.capacity.root.default.user-limit-factor" : "1", > "yarn.scheduler.capacity.root.queues" : "default", > "yarn.scheduler.capacity.root.capacity" : "100", > "yarn.scheduler.capacity.root.default.acl_submit_applications" : > "*", > "yarn.scheduler.capacity.root.default.maximum-capacity" : "100", > "yarn.scheduler.capacity.node-locality-delay" : "40", > "yarn.scheduler.capacity.maximum-applications" : "1", > "yarn.scheduler.capacity.root.default.state" : "RUNNING" > } > }, > {code} > * subsequent invocations get capacity-scheduler as: > {code} > "capacity-scheduler": { > "properties": { > "capacity-scheduler": > "yarn.scheduler.capacity.root.queues=default\n" > > 
"yarn.scheduler.capacity.root.default.user-limit-factor=1\n" > > "yarn.scheduler.capacity.root.default.state=RUNNING\n" > > "yarn.scheduler.capacity.root.default.maximum-capacity=100\n" > > "yarn.scheduler.capacity.root.default.capacity=100\n" > > "yarn.scheduler.capacity.root.default.acl_submit_applications=*\n" > > "yarn.scheduler.capacity.root.capacity=100\n" > > "yarn.scheduler.capacity.root.acl_administer_queue=*\n" > > "yarn.scheduler.capacity.root.accessible-node-labels=*\n" > > "yarn.scheduler.capacity.node-locality-delay=40\n" > > "yarn.scheduler.capacity.maximum-applications=1\n" > > "yarn.scheduler.capacity.maximum-am-resource-percent=1\n" > > "yarn.scheduler.capacity.queue-mappings-override.enable=false\n" > } > }, > {code} > - Therefore, SA fails to create the 'llap' queue for Hive Server Interactive, > as SA only knows how to handle the 2nd case. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-16888) Handle the scenario when 'capacity-scheduler' configs is passed in as dictionary to Stack Advisor (generally on 1st invocation) in order to create 'llap' queue for Hive
[ https://issues.apache.org/jira/browse/AMBARI-16888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swapan Shridhar updated AMBARI-16888: - Description: - The 1st call to SA gets the capacity-scheduler configs as a dictionary, compared to the "\n"-separated single string of all the configs in subsequent calls. * 1st invocation, passed-in capacity-scheduler looks like: {code} "capacity-scheduler" : { "properties" : { "capacity-scheduler" : "null", "yarn.scheduler.capacity.root.accessible-node-labels" : "*", "yarn.scheduler.capacity.maximum-am-resource-percent" : "1", "yarn.scheduler.capacity.root.acl_administer_queue" : "*", "yarn.scheduler.capacity.queue-mappings-override.enable" : 'false', "yarn.scheduler.capacity.root.default.capacity" : "100", "yarn.scheduler.capacity.root.default.user-limit-factor" : "1", "yarn.scheduler.capacity.root.queues" : "default", "yarn.scheduler.capacity.root.capacity" : "100", "yarn.scheduler.capacity.root.default.acl_submit_applications" : "*", "yarn.scheduler.capacity.root.default.maximum-capacity" : "100", "yarn.scheduler.capacity.node-locality-delay" : "40", "yarn.scheduler.capacity.maximum-applications" : "1", "yarn.scheduler.capacity.root.default.state" : "RUNNING" } }, {code} * subsequent invocations get capacity-scheduler as: {code} "capacity-scheduler": { "properties": { "capacity-scheduler": "yarn.scheduler.capacity.root.queues=default\n" "yarn.scheduler.capacity.root.default.user-limit-factor=1\n" "yarn.scheduler.capacity.root.default.state=RUNNING\n" "yarn.scheduler.capacity.root.default.maximum-capacity=100\n" "yarn.scheduler.capacity.root.default.capacity=100\n" "yarn.scheduler.capacity.root.default.acl_submit_applications=*\n" "yarn.scheduler.capacity.root.capacity=100\n" "yarn.scheduler.capacity.root.acl_administer_queue=*\n" "yarn.scheduler.capacity.root.accessible-node-labels=*\n" "yarn.scheduler.capacity.node-locality-delay=40\n" "yarn.scheduler.capacity.maximum-applications=1\n" 
"yarn.scheduler.capacity.maximum-am-resource-percent=1\n" "yarn.scheduler.capacity.queue-mappings-override.enable=false\n" } }, {code} - Therefore, SA fails to create the 'llap' queue for Hive Server Interactive, as SA only knows how to handle the 2nd case. - This specific issue was seen while deploying a cluster with Blueprints. was: - The 1st call to SA gets the capacity-scheduler configs as a dictionary, compared to the "\n"-separated single string of all the configs in subsequent calls. * 1st invocation, passed-in capacity-scheduler looks like: {code} "capacity-scheduler" : { "properties" : { "capacity-scheduler" : "null", "yarn.scheduler.capacity.root.accessible-node-labels" : "*", "yarn.scheduler.capacity.maximum-am-resource-percent" : "1", "yarn.scheduler.capacity.root.acl_administer_queue" : "*", "yarn.scheduler.capacity.queue-mappings-override.enable" : 'false', "yarn.scheduler.capacity.root.default.capacity" : "100", "yarn.scheduler.capacity.root.default.user-limit-factor" : "1", "yarn.scheduler.capacity.root.queues" : "default", "yarn.scheduler.capacity.root.capacity" : "100", "yarn.scheduler.capacity.root.default.acl_submit_applications" : "*", "yarn.scheduler.capacity.root.default.maximum-capacity" : "100", "yarn.scheduler.capacity.node-locality-delay" : "40", "yarn.scheduler.capacity.maximum-applications" : "1", "yarn.scheduler.capacity.root.default.state" : "RUNNING" } }, {code} * subsequent invocations get capacity-scheduler as: {code} "capacity-scheduler": { "properties": { "capacity-scheduler": "yarn.scheduler.capacity.root.queues=default\n" "yarn.scheduler.capacity.root.default.user-limit-factor=1\n" "yarn.scheduler.capacity.root.default.state=RUNNING\n" "yarn.scheduler.capacity.root.default.maximum-capacity=100\n" "yarn.scheduler.capacity.root.default.capacity=100\n" "yarn.scheduler.capacity.root.default.acl_submit_applications=*\n"
[jira] [Updated] (AMBARI-16888) Handle the scenario when 'capacity-scheduler' configs is passed in as dictionary to Stack Advisor (generally on 1st invocation).
[ https://issues.apache.org/jira/browse/AMBARI-16888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swapan Shridhar updated AMBARI-16888: - Description: - The 1st call to SA gets the capacity-scheduler configs as a dictionary, compared to the "\n"-separated single string of all the configs in subsequent calls. * 1st invocation, passed-in capacity-scheduler looks like: {code} "capacity-scheduler" : { "properties" : { "capacity-scheduler" : "null", "yarn.scheduler.capacity.root.accessible-node-labels" : "*", "yarn.scheduler.capacity.maximum-am-resource-percent" : "1", "yarn.scheduler.capacity.root.acl_administer_queue" : "*", "yarn.scheduler.capacity.queue-mappings-override.enable" : 'false', "yarn.scheduler.capacity.root.default.capacity" : "100", "yarn.scheduler.capacity.root.default.user-limit-factor" : "1", "yarn.scheduler.capacity.root.queues" : "default", "yarn.scheduler.capacity.root.capacity" : "100", "yarn.scheduler.capacity.root.default.acl_submit_applications" : "*", "yarn.scheduler.capacity.root.default.maximum-capacity" : "100", "yarn.scheduler.capacity.node-locality-delay" : "40", "yarn.scheduler.capacity.maximum-applications" : "1", "yarn.scheduler.capacity.root.default.state" : "RUNNING" } }, {code} * subsequent invocations get capacity-scheduler as: {code} "capacity-scheduler": { "properties": { "capacity-scheduler": "yarn.scheduler.capacity.root.queues=default\n" "yarn.scheduler.capacity.root.default.user-limit-factor=1\n" "yarn.scheduler.capacity.root.default.state=RUNNING\n" "yarn.scheduler.capacity.root.default.maximum-capacity=100\n" "yarn.scheduler.capacity.root.default.capacity=100\n" "yarn.scheduler.capacity.root.default.acl_submit_applications=*\n" "yarn.scheduler.capacity.root.capacity=100\n" "yarn.scheduler.capacity.root.acl_administer_queue=*\n" "yarn.scheduler.capacity.root.accessible-node-labels=*\n" "yarn.scheduler.capacity.node-locality-delay=40\n" "yarn.scheduler.capacity.maximum-applications=1\n" 
"yarn.scheduler.capacity.maximum-am-resource-percent=1\n" "yarn.scheduler.capacity.queue-mappings-override.enable=false\n" } }, {code} - Therefore, SA fails to create the 'llap' queue for Hive Server Interactive. was: - The 1st call to SA gets the capacity-scheduler configs as a dictionary, compared to the "\n"-separated single string of all the configs in subsequent calls. * 1st invocation, passed-in capacity-scheduler looks like: {code} "capacity-scheduler" : { "properties" : { "capacity-scheduler" : "null", "yarn.scheduler.capacity.root.accessible-node-labels" : "*", "yarn.scheduler.capacity.maximum-am-resource-percent" : "1", "yarn.scheduler.capacity.root.acl_administer_queue" : "*", 'yarn.scheduler.capacity.queue-mappings-override.enable' : 'false', "yarn.scheduler.capacity.root.default.capacity" : "100", "yarn.scheduler.capacity.root.default.user-limit-factor" : "1", "yarn.scheduler.capacity.root.queues" : "default", "yarn.scheduler.capacity.root.capacity" : "100", "yarn.scheduler.capacity.root.default.acl_submit_applications" : "*", "yarn.scheduler.capacity.root.default.maximum-capacity" : "100", "yarn.scheduler.capacity.node-locality-delay" : "40", "yarn.scheduler.capacity.maximum-applications" : "1", "yarn.scheduler.capacity.root.default.state" : "RUNNING" } }, {code} * subsequent invocations get capacity-scheduler as: {code} "capacity-scheduler": { "properties": { "capacity-scheduler": "yarn.scheduler.capacity.root.queues=default\n" "yarn.scheduler.capacity.root.default.user-limit-factor=1\n" "yarn.scheduler.capacity.root.default.state=RUNNING\n" "yarn.scheduler.capacity.root.default.maximum-capacity=100\n" "yarn.scheduler.capacity.root.default.capacity=100\n" "yarn.scheduler.capacity.root.default.acl_submit_applications=*\n" "yarn.scheduler.capacity.root.capacity=100\n" "yarn.schedu
[jira] [Updated] (AMBARI-16888) Handle the scenario when 'capacity-scheduler' configs is passed in as dictionary to Stack Advisor (generally on 1st invocation).
[ https://issues.apache.org/jira/browse/AMBARI-16888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swapan Shridhar updated AMBARI-16888: - Description: - The 1st call to SA gets the capacity-scheduler configs as a dictionary, compared to the "\n"-separated single string of all the configs in subsequent calls. * 1st invocation, passed-in capacity-scheduler looks like: {code} "capacity-scheduler" : { "properties" : { "capacity-scheduler" : "null", "yarn.scheduler.capacity.root.accessible-node-labels" : "*", "yarn.scheduler.capacity.maximum-am-resource-percent" : "1", "yarn.scheduler.capacity.root.acl_administer_queue" : "*", 'yarn.scheduler.capacity.queue-mappings-override.enable' : 'false', "yarn.scheduler.capacity.root.default.capacity" : "100", "yarn.scheduler.capacity.root.default.user-limit-factor" : "1", "yarn.scheduler.capacity.root.queues" : "default", "yarn.scheduler.capacity.root.capacity" : "100", "yarn.scheduler.capacity.root.default.acl_submit_applications" : "*", "yarn.scheduler.capacity.root.default.maximum-capacity" : "100", "yarn.scheduler.capacity.node-locality-delay" : "40", "yarn.scheduler.capacity.maximum-applications" : "1", "yarn.scheduler.capacity.root.default.state" : "RUNNING" } }, {code} * subsequent invocations get capacity-scheduler as: {code} "capacity-scheduler": { "properties": { "capacity-scheduler": "yarn.scheduler.capacity.root.queues=default\n" "yarn.scheduler.capacity.root.default.user-limit-factor=1\n" "yarn.scheduler.capacity.root.default.state=RUNNING\n" "yarn.scheduler.capacity.root.default.maximum-capacity=100\n" "yarn.scheduler.capacity.root.default.capacity=100\n" "yarn.scheduler.capacity.root.default.acl_submit_applications=*\n" "yarn.scheduler.capacity.root.capacity=100\n" "yarn.scheduler.capacity.root.acl_administer_queue=*\n" "yarn.scheduler.capacity.root.accessible-node-labels=*\n" "yarn.scheduler.capacity.node-locality-delay=40\n" "yarn.scheduler.capacity.maximum-applications=1\n" 
"yarn.scheduler.capacity.maximum-am-resource-percent=1\n" "yarn.scheduler.capacity.queue-mappings-override.enable=false\n" } }, {code} > Handle the scenario when 'capacity-scheduler' configs is passed in as > dictionary to Stack Advisor (generally on 1st invocation). > > > Key: AMBARI-16888 > URL: https://issues.apache.org/jira/browse/AMBARI-16888 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Affects Versions: 2.4.0 >Reporter: Swapan Shridhar >Assignee: Swapan Shridhar > Fix For: 2.4.0 > > > - The 1st call to SA gets the capacity-scheduler configs as a dictionary, compared > to the "\n"-separated single string of all the configs in subsequent calls. > * 1st invocation, passed-in capacity-scheduler looks like: > {code} > "capacity-scheduler" : { > "properties" : { > "capacity-scheduler" : "null", > "yarn.scheduler.capacity.root.accessible-node-labels" : "*", > "yarn.scheduler.capacity.maximum-am-resource-percent" : "1", > "yarn.scheduler.capacity.root.acl_administer_queue" : "*", > 'yarn.scheduler.capacity.queue-mappings-override.enable' : > 'false', > "yarn.scheduler.capacity.root.default.capacity" : "100", > "yarn.scheduler.capacity.root.default.user-limit-factor" : "1", > "yarn.scheduler.capacity.root.queues" : "default", > "yarn.scheduler.capacity.root.capacity" : "100", > "yarn.scheduler.capacity.root.default.acl_submit_applications" : > "*", > "yarn.scheduler.capacity.root.default.maximum-capacity" : "100", > "yarn.scheduler.capacity.node-locality-delay" : "40", > "yarn.scheduler.capacity.maximum-applications" : "1", > "yarn.scheduler.capacity.root.default.state" : "RUNNING" > } > }, > {code} > * subsequent invocations get capacity-scheduler as: > {code} > "capacity-scheduler": { > "properties": { > "capacity-scheduler": > "yarn.scheduler.capacity.root.queues=default\n" >
[jira] [Created] (AMBARI-16888) Handle the scenario when 'capacity-scheduler' configs is passed in as dictionary to Stack Advisor (generally on 1st invocation).
Swapan Shridhar created AMBARI-16888: Summary: Handle the scenario when 'capacity-scheduler' configs is passed in as dictionary to Stack Advisor (generally on 1st invocation). Key: AMBARI-16888 URL: https://issues.apache.org/jira/browse/AMBARI-16888 Project: Ambari Issue Type: Bug Components: ambari-server Affects Versions: 2.4.0 Reporter: Swapan Shridhar Assignee: Swapan Shridhar Fix For: 2.4.0 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-16887) [AMS / Grafana] Metrics are staying flat for 1 minute, causing rate calculations to be 0
[ https://issues.apache.org/jira/browse/AMBARI-16887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aravindan Vijayan updated AMBARI-16887: --- Status: Patch Available (was: Open) > [AMS / Grafana] Metrics are staying flat for 1 minute, causing rate > calculations to be 0 > - > > Key: AMBARI-16887 > URL: https://issues.apache.org/jira/browse/AMBARI-16887 > Project: Ambari > Issue Type: Bug > Components: ambari-metrics >Affects Versions: 2.2.2 >Reporter: Aravindan Vijayan >Assignee: Aravindan Vijayan >Priority: Critical > Fix For: 2.4.0 > > Attachments: AMBARI-16887.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-16887) [AMS / Grafana] Metrics are staying flat for 1 minute, causing rate calculations to be 0
[ https://issues.apache.org/jira/browse/AMBARI-16887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aravindan Vijayan updated AMBARI-16887: --- Attachment: AMBARI-16887.patch > [AMS / Grafana] Metrics are staying flat for 1 minute, causing rate > calculations to be 0 > - > > Key: AMBARI-16887 > URL: https://issues.apache.org/jira/browse/AMBARI-16887 > Project: Ambari > Issue Type: Bug > Components: ambari-metrics >Affects Versions: 2.2.2 >Reporter: Aravindan Vijayan >Assignee: Aravindan Vijayan >Priority: Critical > Fix For: 2.4.0 > > Attachments: AMBARI-16887.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (AMBARI-16887) [AMS / Grafana] Metrics are staying flat for 1 minute, causing rate calculations to be 0
Aravindan Vijayan created AMBARI-16887: -- Summary: [AMS / Grafana] Metrics are staying flat for 1 minute, causing rate calculations to be 0 Key: AMBARI-16887 URL: https://issues.apache.org/jira/browse/AMBARI-16887 Project: Ambari Issue Type: Bug Components: ambari-metrics Affects Versions: 2.2.2 Reporter: Aravindan Vijayan Assignee: Aravindan Vijayan Priority: Critical Fix For: 2.4.0 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-16873) Optimize UI error saving
[ https://issues.apache.org/jira/browse/AMBARI-16873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15300856#comment-15300856 ] Hudson commented on AMBARI-16873: - FAILURE: Integrated in Ambari-trunk-Commit #4923 (See [https://builds.apache.org/job/Ambari-trunk-Commit/4923/]) AMBARI-16873 Optimize UI error saving. (atkach) (atkach: [http://git-wip-us.apache.org/repos/asf?p=ambari.git&a=commit&h=a198dced6316fbc79751c21cb9fee4f1147c45ed]) * ambari-web/app/controllers/global/cluster_controller.js * ambari-web/test/controllers/global/errors_handler_controller_test.js * ambari-web/app/assets/test/tests.js * ambari-web/app/controllers/global/errors_handler_controller.js > Optimize UI error saving > > > Key: AMBARI-16873 > URL: https://issues.apache.org/jira/browse/AMBARI-16873 > Project: Ambari > Issue Type: Bug > Components: ambari-web >Affects Versions: 2.4.0 >Reporter: Andrii Tkach >Assignee: Andrii Tkach >Priority: Critical > Fix For: 2.4.0 > > Attachments: AMBARI-16873.patch > > > # Erase host "http://:8080/javascripts" > # Truncate stackTrace up to 1000 chars > # If errors container contains more than 500 000 chars then overwrite old > errors with new one -- This message was sent by Atlassian JIRA (v6.3.4#6332)
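The three AMBARI-16873 rules (drop the host prefix, cap each stack trace at 1000 chars, overwrite old errors once the container would exceed 500,000 chars) can be sketched compactly. The actual change lives in ambari-web JavaScript (errors_handler_controller.js); the Python below is only an illustrative model of the rules, and all names (`save_error`, `saved_errors`, `host_prefix`) are hypothetical:

```python
STACK_TRACE_LIMIT = 1000     # max chars kept per stack trace
CONTAINER_LIMIT = 500000     # max chars kept across all saved errors

def save_error(saved_errors, message, stack_trace, host_prefix):
    """Append an error record, applying the three truncation rules."""
    # Rule 1: erase the noisy host prefix (e.g. "http://<host>:8080/javascripts")
    # so saved traces are host-independent and shorter.
    stack_trace = stack_trace.replace(host_prefix, "")
    # Rule 2: truncate the stack trace to at most 1000 chars.
    stack_trace = stack_trace[:STACK_TRACE_LIMIT]
    record = message + "\n" + stack_trace
    # Rule 3: if the container would exceed 500,000 chars,
    # overwrite the old errors with the new one.
    total = sum(len(r) for r in saved_errors)
    if total + len(record) > CONTAINER_LIMIT:
        saved_errors = []
    saved_errors.append(record)
    return saved_errors
```

The design choice here is the same as in the ticket: rather than evicting errors one by one, a full container is simply reset, which keeps the bookkeeping trivial at the cost of occasionally discarding older errors in bulk.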
[jira] [Commented] (AMBARI-16881) ZKFC restart failed during EU with 'upgrade_type' not defined error
[ https://issues.apache.org/jira/browse/AMBARI-16881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15300858#comment-15300858 ] Hudson commented on AMBARI-16881: - FAILURE: Integrated in Ambari-trunk-Commit #4923 (See [https://builds.apache.org/job/Ambari-trunk-Commit/4923/]) AMBARI-16881. ZKFC restart failed during EU with upgrade_type not (vbrodetskyi: [http://git-wip-us.apache.org/repos/asf?p=ambari.git&a=commit&h=e7ae14918fe6f81288828855a584da5315743c71]) * ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/zkfc_slave.py > ZKFC restart failed during EU with 'upgrade_type' not defined error > --- > > Key: AMBARI-16881 > URL: https://issues.apache.org/jira/browse/AMBARI-16881 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Affects Versions: 2.4.0 >Reporter: Vitaly Brodetskyi >Assignee: Vitaly Brodetskyi >Priority: Blocker > Fix For: 2.4.0 > > Attachments: AMBARI-16881.patch > > > Steps > Deploy HDP-2.4.2 cluster with Ambari 2.2.2 (Secure, HA cluster) > Upgrade Ambari to 2.4.0.0 > Perform EU to 2.5.0.0-555 > Result > ZKFC restart failed with below error: > Traceback (most recent call last): > File > "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/zkfc_slave.py", > line 200, in > ZkfcSlave().execute() > File > "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", > line 257, in execute > method(env) > File > "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", > line 668, in restart > self.status(env) > File > "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/zkfc_slave.py", > line 102, in status > ZkfcSlaveDefault.status_static(env, upgrade_type) > NameError: global name 'upgrade_type' is not defined -- This message was sent by Atlassian JIRA (v6.3.4#6332)
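The traceback above is a plain Python scoping bug: `status()` passes `upgrade_type` to `ZkfcSlaveDefault.status_static(env, upgrade_type)` without the name being defined in that scope. A minimal sketch of the failure pattern and the shape of a fix follows; the class and method names mirror the traceback, but the bodies are illustrative, not Ambari's actual zkfc_slave.py code:

```python
class ZkfcSlaveDefault(object):
    @staticmethod
    def status_static(env, upgrade_type):
        # Stand-in for the real status check.
        return "RUNNING"

class ZkfcSlave(object):
    def status_broken(self, env):
        # Fails exactly as in the report: 'upgrade_type' is neither a local
        # nor a global here, so the call raises NameError at runtime.
        return ZkfcSlaveDefault.status_static(env, upgrade_type)  # NameError

    def status_fixed(self, env, upgrade_type=None):
        # Fix: accept the value (defaulting to None outside an upgrade)
        # so the name always resolves before it is forwarded.
        return ZkfcSlaveDefault.status_static(env, upgrade_type)
```

The bug only surfaces when the restart path calls `status()`, which is why it appeared during an Express Upgrade restart rather than in normal operation.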
[jira] [Commented] (AMBARI-16860) Disable HBase per user and per table metrics by default
[ https://issues.apache.org/jira/browse/AMBARI-16860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15300859#comment-15300859 ] Hudson commented on AMBARI-16860: - FAILURE: Integrated in Ambari-trunk-Commit #4923 (See [https://builds.apache.org/job/Ambari-trunk-Commit/4923/]) AMBARI-16860 : Disable HBase per user and per table metrics by default (avijayan: [http://git-wip-us.apache.org/repos/asf?p=ambari.git&a=commit&h=4fce7d1de6dde408853f2c778a6899016f77e73c]) * ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/package/templates/hadoop-metrics2-hbase.properties-GANGLIA-RS.j2 * ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/package/templates/hadoop-metrics2-hbase.properties-GANGLIA-MASTER.j2 * ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/templates/hadoop-metrics2-hbase.properties.j2 > Disable HBase per user and per table metrics by default > --- > > Key: AMBARI-16860 > URL: https://issues.apache.org/jira/browse/AMBARI-16860 > Project: Ambari > Issue Type: Bug > Components: ambari-metrics >Affects Versions: 2.4.0 >Reporter: Aravindan Vijayan >Assignee: Aravindan Vijayan >Priority: Critical > Fix For: 2.4.0 > > Attachments: AMBARI-16860.patch > > > The workaround for 2.4.0 is to disable Per User and Per Table metrics for > HBase, for 2.4.0 and then tune AMS to work with these metrics for 2.4.1 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-16862) [Grafana] Rename "HBase - Performance" dashboard to "HBase - RegionServers"
[ https://issues.apache.org/jira/browse/AMBARI-16862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15300857#comment-15300857 ] Hudson commented on AMBARI-16862: - FAILURE: Integrated in Ambari-trunk-Commit #4923 (See [https://builds.apache.org/job/Ambari-trunk-Commit/4923/]) AMBARI-16862 : [Grafana] Rename HBase - Performance dashboard to HBase - (avijayan: [http://git-wip-us.apache.org/repos/asf?p=ambari.git&a=commit&h=102a71cd29ffa1a4efa5e7169742d6f9fe2b148a]) * ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/files/grafana-dashboards/HDP/grafana-hbase-performance.json * ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_grafana_util.py * ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/files/grafana-dashboards/HDP/grafana-hbase-regionservers.json Revert "AMBARI-16862 : [Grafana] Rename HBase - Performance dashboard to (avijayan: [http://git-wip-us.apache.org/repos/asf?p=ambari.git&a=commit&h=b5220e30db3195d619dd21ab9610bd7bda0b5220]) * ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_grafana_util.py * ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/files/grafana-dashboards/HDP/grafana-hbase-regionservers.json * ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/files/grafana-dashboards/HDP/grafana-hbase-performance.json AMBARI-16862 : [Grafana] Rename HBase - Performance dashboard to HBase - (avijayan: [http://git-wip-us.apache.org/repos/asf?p=ambari.git&a=commit&h=f4d9fdecbceff83d3d80f625f73ea444e3c69317]) * ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/files/grafana-dashboards/HDP/grafana-hbase-regionservers.json * ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/files/grafana-dashboards/HDP/grafana-hbase-performance.json * 
ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_grafana_util.py > [Grafana] Rename "HBase - Performance" dashboard to "HBase - RegionServers" > --- > > Key: AMBARI-16862 > URL: https://issues.apache.org/jira/browse/AMBARI-16862 > Project: Ambari > Issue Type: Bug >Affects Versions: 2.4.0 >Reporter: Aravindan Vijayan >Assignee: Aravindan Vijayan > Fix For: 2.4.0 > > Attachments: AMBARI-16862.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-16885) Change location of HAWQ tmp directories
[ https://issues.apache.org/jira/browse/AMBARI-16885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15300844#comment-15300844 ] Matt commented on AMBARI-16885: --- Committed to trunk: {code} commit 24c6d1f21e299a67a65b3390d8e659b60baa93e5 Author: Matt Date: Wed May 25 14:00:20 2016 -0700 {code} Committed to branch-2.4: {code} commit 665cac47aba92ecd07689f4926416e9d2540dabf Author: Matt Date: Wed May 25 14:01:12 2016 -0700 {code} Marking as resolved > Change location of HAWQ tmp directories > --- > > Key: AMBARI-16885 > URL: https://issues.apache.org/jira/browse/AMBARI-16885 > Project: Ambari > Issue Type: Bug > Components: stacks >Reporter: Matt >Assignee: Matt >Priority: Trivial > Fix For: trunk, 2.4.0 > > Attachments: AMBARI-16885-trunk-orig.patch, > AMBARI-16885-trunk-v1.patch > > > Update HAWQ temp directories to /data/hawq/tmp/master and > /data/hawq/tmp/segment respectively -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-16885) Change location of HAWQ tmp directories
[ https://issues.apache.org/jira/browse/AMBARI-16885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt updated AMBARI-16885: -- Resolution: Fixed Status: Resolved (was: Patch Available) > Change location of HAWQ tmp directories > --- > > Key: AMBARI-16885 > URL: https://issues.apache.org/jira/browse/AMBARI-16885 > Project: Ambari > Issue Type: Bug > Components: stacks >Reporter: Matt >Assignee: Matt >Priority: Trivial > Fix For: trunk, 2.4.0 > > Attachments: AMBARI-16885-trunk-orig.patch, > AMBARI-16885-trunk-v1.patch > > > Update HAWQ temp directories to /data/hawq/tmp/master and > /data/hawq/tmp/segment respectively -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-16857) Support yarn aux service classpath isolation feature in spark
[ https://issues.apache.org/jira/browse/AMBARI-16857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bikas Saha updated AMBARI-16857: Attachment: AMBARI-16857.2.patch Updating the paths and handling upgrades per offline advice from [~afernandez] > Support yarn aux service classpath isolation feature in spark > - > > Key: AMBARI-16857 > URL: https://issues.apache.org/jira/browse/AMBARI-16857 > Project: Ambari > Issue Type: Bug >Reporter: Bikas Saha >Assignee: Bikas Saha > Attachments: AMBARI-16857.1.patch, AMBARI-16857.2.patch > > > Support yarn aux service classpath isolation feature in spark -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-16857) Support yarn aux service classpath isolation feature in spark
[ https://issues.apache.org/jira/browse/AMBARI-16857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15300827#comment-15300827 ] Bikas Saha commented on AMBARI-16857: - Since these configs are extremely unlikely to have been modified by anyone, going with the upgrade option to override the existing value instead of appending. Manual workaround is possible for such rare cases. > Support yarn aux service classpath isolation feature in spark > - > > Key: AMBARI-16857 > URL: https://issues.apache.org/jira/browse/AMBARI-16857 > Project: Ambari > Issue Type: Bug >Reporter: Bikas Saha >Assignee: Bikas Saha > Attachments: AMBARI-16857.1.patch, AMBARI-16857.2.patch > > > Support yarn aux service classpath isolation feature in spark -- This message was sent by Atlassian JIRA (v6.3.4#6332)
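The override-versus-append tradeoff described in the comment can be sketched as follows; `merge_on_upgrade` and its arguments are hypothetical names for illustration, not Ambari's actual upgrade API:

```python
def merge_on_upgrade(existing, required, override=True):
    """Compute the upgraded value of a classpath-style config property.

    With override=True the stack's required value simply replaces the
    existing one, on the assumption (stated in the comment above) that
    these configs are extremely unlikely to have been modified; with
    override=False the required entries would be appended instead.
    """
    if override or not existing:
        return required
    return existing + ":" + required
```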
[jira] [Commented] (AMBARI-16885) Change location of HAWQ tmp directories
[ https://issues.apache.org/jira/browse/AMBARI-16885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15300769#comment-15300769 ] Hadoop QA commented on AMBARI-16885: {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12806193/AMBARI-16885-trunk-v1.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 4 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in ambari-server. Test results: https://builds.apache.org/job/Ambari-trunk-test-patch/6958//testReport/ Console output: https://builds.apache.org/job/Ambari-trunk-test-patch/6958//console This message is automatically generated. > Change location of HAWQ tmp directories > --- > > Key: AMBARI-16885 > URL: https://issues.apache.org/jira/browse/AMBARI-16885 > Project: Ambari > Issue Type: Bug > Components: stacks >Reporter: Matt >Assignee: Matt >Priority: Trivial > Fix For: trunk, 2.4.0 > > Attachments: AMBARI-16885-trunk-orig.patch, > AMBARI-16885-trunk-v1.patch > > > Update HAWQ temp directories to /data/hawq/tmp/master and > /data/hawq/tmp/segment respectively -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-16855) Grafana Dashboard Fix
[ https://issues.apache.org/jira/browse/AMBARI-16855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yusaku Sako updated AMBARI-16855: - Resolution: Fixed Status: Resolved (was: Patch Available) > Grafana Dashboard Fix > - > > Key: AMBARI-16855 > URL: https://issues.apache.org/jira/browse/AMBARI-16855 > Project: Ambari > Issue Type: Bug > Components: ambari-metrics >Affects Versions: 2.4.0 >Reporter: Prajwal Rao >Assignee: Prajwal Rao > Fix For: 2.4.0 > > Attachments: AMBARI-16855.patch > > > Fix HBase per-user dashboard. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-16757) Spark History Server heap size is not exposed (History Server crashed with OOM)
[ https://issues.apache.org/jira/browse/AMBARI-16757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiqing Yang updated AMBARI-16757: -- Attachment: AMBARI-16757-3.patch > Spark History Server heap size is not exposed (History Server crashed with > OOM) > --- > > Key: AMBARI-16757 > URL: https://issues.apache.org/jira/browse/AMBARI-16757 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Affects Versions: 2.0.0 >Reporter: Weiqing Yang >Priority: Minor > Fix For: 2.4.0 > > Attachments: AMBARI-16757-1.patch, AMBARI-16757-2.patch, > AMBARI-16757-3.patch > > > Ambari is not exposing the heap size parameter for Spark History Server. > The workaround is to modify spark-env and add "SPARK_DAEMON_MEMORY=2g" for > example. > The newer versions of Spark default this to 1g, but older versions were > defaulting to 512m, which caused OOM. > So in the patch, "SPARK_DAEMON_MEMORY=1G" is added in the spark-env template > (default: 1G). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
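The patch described above boils down to giving the daemon memory an explicit default in the spark-env template. A minimal sketch of that lookup, with `spark_env` standing in for the parsed config (an illustrative name, not Ambari's API):

```python
def spark_daemon_memory(spark_env):
    # Fall back to 1G when the operator has not set a value, so older
    # Spark versions no longer inherit the 512m default that caused OOM.
    return spark_env.get("SPARK_DAEMON_MEMORY", "1G")
```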
[jira] [Updated] (AMBARI-16859) Kafka dashboards for Grafana
[ https://issues.apache.org/jira/browse/AMBARI-16859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prajwal Rao updated AMBARI-16859: - Attachment: AMBARI-16859.patch > Kafka dashboards for Grafana > > > Key: AMBARI-16859 > URL: https://issues.apache.org/jira/browse/AMBARI-16859 > Project: Ambari > Issue Type: Task > Components: ambari-metrics >Affects Versions: 2.4.0 >Reporter: Prajwal Rao >Assignee: Prajwal Rao > Fix For: 2.4.0 > > Attachments: AMBARI-16859.patch > > > Add 3 dashboards to Grafana > - Kafka Home > - Kafka Hosts (Templatized) > - Kafka Topics (Templatized per Topic) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-16859) Kafka dashboards for Grafana
[ https://issues.apache.org/jira/browse/AMBARI-16859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prajwal Rao updated AMBARI-16859: - Status: Patch Available (was: Open) > Kafka dashboards for Grafana > > > Key: AMBARI-16859 > URL: https://issues.apache.org/jira/browse/AMBARI-16859 > Project: Ambari > Issue Type: Task > Components: ambari-metrics >Affects Versions: 2.4.0 >Reporter: Prajwal Rao >Assignee: Prajwal Rao > Fix For: 2.4.0 > > Attachments: AMBARI-16859.patch > > > Add 3 dashboards to Grafana > - Kafka Home > - Kafka Hosts (Templatized) > - Kafka Topics (Templatized per Topic) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-16857) Support yarn aux service classpath isolation feature in spark
[ https://issues.apache.org/jira/browse/AMBARI-16857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15300741#comment-15300741 ] Alejandro Fernandez commented on AMBARI-16857: -- +1 > Support yarn aux service classpath isolation feature in spark > - > > Key: AMBARI-16857 > URL: https://issues.apache.org/jira/browse/AMBARI-16857 > Project: Ambari > Issue Type: Bug >Reporter: Bikas Saha >Assignee: Bikas Saha > Attachments: AMBARI-16857.1.patch > > > Support yarn aux service classpath isolation feature in spark -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-16807) After enabling HTTPS for HDFS, Data Node JVM Metrics on HDFS Heatmaps show NA
[ https://issues.apache.org/jira/browse/AMBARI-16807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15300740#comment-15300740 ] Hadoop QA commented on AMBARI-16807: {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12805909/AMBARI-16807.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 1 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:red}-1 core tests{color}. The patch failed these unit tests in ambari-server: org.apache.ambari.server.state.ConfigHelperTest org.apache.ambari.server.controller.internal.HostResourceProviderTest Test results: https://builds.apache.org/job/Ambari-trunk-test-patch/6957//testReport/ Console output: https://builds.apache.org/job/Ambari-trunk-test-patch/6957//console This message is automatically generated. > After enabling HTTPS for HDFS, Data Node JVM Metrics on HDFS Heatmaps show NA > - > > Key: AMBARI-16807 > URL: https://issues.apache.org/jira/browse/AMBARI-16807 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Affects Versions: 2.2.0 >Reporter: Qin Liu >Assignee: Qin Liu > Fix For: trunk > > Attachments: AMBARI-16807.patch > > > After enabling HTTPS for HDFS, "DataNode Garbage Collection Time", "DataNode > JVM Heap Memory Used", and "DataNode JVM Heap Memory Committed" widgets on > HDFS Heatmaps show NA. > Steps to reproduce: > 1. Install a cluster with defaults from Ambari Web UI. > 2. Configure SSL for HDFS, YARN, and MapReduce. > 3. Enable HTTPS for HDFS > Set the following properties in Advanced hdfs-site from Ambari Web UI: > dfs.http.policy=HTTPS_ONLY > dfs.datanode.http.address=0.0.0.0:50075 > dfs.datanode.https.address=0.0.0.0:50475 > 4. "DataNode JVM Heap Memory Used" and "DataNode JVM Heap Memory Committed" > widgets on HDFS Heatmaps will show NA. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
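A heatmap collector that honors `dfs.http.policy` would choose the scheme and address along these lines; `datanode_jmx_url` is an illustrative helper built from the properties quoted above, not the code from the attached patch:

```python
def datanode_jmx_url(hdfs_site):
    # Under HTTPS_ONLY the plain HTTP endpoint is unavailable, so JVM
    # metrics must be scraped from the HTTPS address; otherwise NA shows.
    if hdfs_site.get("dfs.http.policy") == "HTTPS_ONLY":
        return "https://%s/jmx" % hdfs_site["dfs.datanode.https.address"]
    return "http://%s/jmx" % hdfs_site["dfs.datanode.http.address"]
```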
[jira] [Updated] (AMBARI-16278) Give more time for HBase system tables to be assigned
[ https://issues.apache.org/jira/browse/AMBARI-16278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated AMBARI-16278: Description: We have observed extended cluster downtime due to HBase system tables not being assigned at cluster start up. The default values for the following two parameters are too low: hbase.regionserver.executor.openregion.threads (default: 3) hbase.master.namespace.init.timeout (default: 30) We set hbase.regionserver.executor.openregion.threads=200 and hbase.master.namespace.init.timeout=240 in some cases to work around HBASE-14190. Ambari can use 20 for hbase.regionserver.executor.openregion.threads and 240 for hbase.master.namespace.init.timeout as default values. was: We have observed extended cluster downtime due to HBase system tables not being assigned at cluster start up. The default values for the following two parameters are too low: hbase.regionserver.executor.openregion.threads (default: 3) hbase.master.namespace.init.timeout (default: 30) We set hbase.regionserver.executor.openregion.threads=200 and hbase.master.namespace.init.timeout=240 in some cases to work around HBASE-14190. Ambari can use 20 for hbase.regionserver.executor.openregion.threads and 240 for hbase.master.namespace.init.timeout as default values. > Give more time for HBase system tables to be assigned > - > > Key: AMBARI-16278 > URL: https://issues.apache.org/jira/browse/AMBARI-16278 > Project: Ambari > Issue Type: Improvement >Reporter: Ted Yu > > We have observed extended cluster downtime due to HBase system tables not > being assigned at cluster start up. > The default values for the following two parameters are too low: > hbase.regionserver.executor.openregion.threads (default: 3) > hbase.master.namespace.init.timeout (default: 30) > We set hbase.regionserver.executor.openregion.threads=200 and > hbase.master.namespace.init.timeout=240 in some cases to work around > HBASE-14190. > Ambari can use 20 for hbase.regionserver.executor.openregion.threads and > 240 for hbase.master.namespace.init.timeout as default values. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
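Treated as stack defaults, the proposed values would only fill in keys the operator has not already set; a sketch under that assumption (names are illustrative, not Ambari's stack-advisor API):

```python
# Defaults proposed in the issue above: 20 open-region threads and a
# 240 namespace-init timeout, versus the too-low 3 and 30.
RECOMMENDED_DEFAULTS = {
    "hbase.regionserver.executor.openregion.threads": 20,
    "hbase.master.namespace.init.timeout": 240,
}

def with_recommended_defaults(hbase_site):
    # User-supplied values win; only missing keys receive the defaults.
    merged = dict(RECOMMENDED_DEFAULTS)
    merged.update(hbase_site)
    return merged
```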
[jira] [Commented] (AMBARI-16872) Cluster deploy fails if property admin_sever_host not set in blueprint
[ https://issues.apache.org/jira/browse/AMBARI-16872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15300684#comment-15300684 ] Hudson commented on AMBARI-16872: - ABORTED: Integrated in Ambari-trunk-Commit #4922 (See [https://builds.apache.org/job/Ambari-trunk-Commit/4922/]) AMBARI-16872. Cluster deploy fails if property admin_sever_host not set (aonishuk: [http://git-wip-us.apache.org/repos/asf?p=ambari.git&a=commit&h=3f19456edd388c29417c8d73bfe08ed0e64446df]) * ambari-server/src/main/resources/common-services/KERBEROS/1.10.3-10/configuration/kerberos-env.xml > Cluster deploy fails if property admin_sever_host not set in blueprint > -- > > Key: AMBARI-16872 > URL: https://issues.apache.org/jira/browse/AMBARI-16872 > Project: Ambari > Issue Type: Bug >Reporter: Andrew Onischuk >Assignee: Andrew Onischuk > Fix For: 2.4.0 > > Attachments: AMBARI-16872.patch > > > Blueprint validation doesn't show a warning if the "admin_server_host" property is not > set in kerberos-env config, and cluster deploy fails with: > > > > Command: /usr/bin/kadmin -s -p admin/ad...@example.com -w -r > EXAMPLE.COM -q "get_principal ambari-qa...@example.com" > ExitCode: 1 > STDOUT: Authenticating as principal admin/ad...@example.com with > password. > > STDERR: kadmin: Cannot resolve network address for admin server > in requested realm while initializing kadmin interface > > The "-s" option is empty. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
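The root cause is that kadmin is invoked with an empty `-s` argument, so one fix is to validate kerberos-env before building the command. `build_kadmin_command` is a hypothetical helper for illustration, not the code from the patch:

```python
def build_kadmin_command(kerberos_env, principal, query):
    # Fail at validation time instead of letting kadmin die later with
    # "Cannot resolve network address for admin server".
    admin_host = kerberos_env.get("admin_server_host")
    if not admin_host:
        raise ValueError("kerberos-env/admin_server_host is not set")
    return ["/usr/bin/kadmin", "-s", admin_host, "-p", principal, "-q", query]
```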
[jira] [Updated] (AMBARI-16885) Change location of HAWQ tmp directories
[ https://issues.apache.org/jira/browse/AMBARI-16885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt updated AMBARI-16885: -- Attachment: AMBARI-16885-trunk-v1.patch > Change location of HAWQ tmp directories > --- > > Key: AMBARI-16885 > URL: https://issues.apache.org/jira/browse/AMBARI-16885 > Project: Ambari > Issue Type: Bug > Components: stacks >Reporter: Matt >Assignee: Matt >Priority: Trivial > Fix For: trunk, 2.4.0 > > Attachments: AMBARI-16885-trunk-orig.patch, > AMBARI-16885-trunk-v1.patch > > > Update HAWQ temp directories to /data/hawq/tmp/master and > /data/hawq/tmp/segment respectively -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-16854) Failed to install packages for HDP 2.4 and 2.5
[ https://issues.apache.org/jira/browse/AMBARI-16854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15300683#comment-15300683 ] Hudson commented on AMBARI-16854: - ABORTED: Integrated in Ambari-trunk-Commit #4922 (See [https://builds.apache.org/job/Ambari-trunk-Commit/4922/]) AMBARI-16854. Failed to install packages for HDP 2.4 and 2.5 (ncole) (ncole: [http://git-wip-us.apache.org/repos/asf?p=ambari.git&a=commit&h=6d15dc6cf1271977c7076d6c8244603819e082c0]) * ambari-server/src/main/java/org/apache/ambari/server/controller/internal/ClusterStackVersionResourceProvider.java * ambari-server/src/test/java/org/apache/ambari/server/controller/internal/ClusterStackVersionResourceProviderTest.java > Failed to install packages for HDP 2.4 and 2.5 > -- > > Key: AMBARI-16854 > URL: https://issues.apache.org/jira/browse/AMBARI-16854 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Reporter: Nate Cole >Assignee: Nate Cole >Priority: Critical > Fix For: 2.4.0 > > Attachments: AMBARI-16854.patch > > > If there is a repository that is already installed and the version > is GREATER than the one trying to install, we must fail (until we can > support that via Patch Upgrades) > For example: > 1. Install 2.3.0.0 > 2. Register and Install 2.5.0.0 (with or without package-version; it gets > computed correctly) > 3. Register 2.4 (without package-version) > Installation of 2.4 will fail because the way agents invoke installation > is to > install by name. if the package-version is not known, then the 'newest' > is ALWAYS installed. > In this case, that would be 2.5.0.0. 2.4 is never picked up as the > correct repository. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
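The failure rule stated in the description — refuse to install when a strictly newer repo version is already present — can be sketched as follows (illustrative names, not the actual `ClusterStackVersionResourceProvider` code):

```python
def version_key(version):
    # "2.5.0.0" -> (2, 5, 0, 0) so versions compare numerically.
    return tuple(int(part) for part in version.split("."))

def check_installable(installed_versions, target):
    # Agents install packages by bare name, and the package manager then
    # resolves to the NEWEST candidate. So when a strictly newer repo
    # version is already installed, installing an older target must fail
    # up front instead of silently picking up the wrong repository.
    newer = [v for v in installed_versions if version_key(v) > version_key(target)]
    if newer:
        raise RuntimeError(
            "Cannot install %s: newer version(s) already installed: %s"
            % (target, ", ".join(newer)))
```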
[jira] [Commented] (AMBARI-16657) Log count text on Log level select boxes in Service Logs Tab not getting updated while checking and unchecking
[ https://issues.apache.org/jira/browse/AMBARI-16657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15300685#comment-15300685 ] Hudson commented on AMBARI-16657: - ABORTED: Integrated in Ambari-trunk-Commit #4922 (See [https://builds.apache.org/job/Ambari-trunk-Commit/4922/]) AMBARI-16657. Log count text on Log level select boxes in Service Logs (oleewere: [http://git-wip-us.apache.org/repos/asf?p=ambari.git&a=commit&h=4d638d08bef7179f2a2b9c4b496516ea988deb60]) * ambari-logsearch/ambari-logsearch-portal/src/main/webapp/scripts/views/dashboard/LogLevelBoxView.js > Log count text on Log level select boxes in Service Logs Tab not getting > updated while checking and unchecking > -- > > Key: AMBARI-16657 > URL: https://issues.apache.org/jira/browse/AMBARI-16657 > Project: Ambari > Issue Type: Bug > Components: ambari-logsearch >Affects Versions: 2.4.0 >Reporter: Dharmesh Makwana >Assignee: Dharmesh Makwana > Fix For: 2.4.0 > > Attachments: AMBARI-16657.patch > > > Log count text on Log level select boxes in Service Logs tab not getting > updated while checking and unchecking. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (AMBARI-16886) "Zeppelin service check" was scheduled before "Zeppelin Notebook Start"
Renjith Kamath created AMBARI-16886: --- Summary: "Zeppelin service check" was scheduled before "Zeppelin Notebook Start" Key: AMBARI-16886 URL: https://issues.apache.org/jira/browse/AMBARI-16886 Project: Ambari Issue Type: Bug Affects Versions: 2.4.0 Reporter: Renjith Kamath Fix For: 2.4.0 During the deploy of HDP-2.5 with all services Zeppelin service check was scheduled before Zeppelin Notebook Start. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-16886) "Zeppelin service check" was scheduled before "Zeppelin Notebook Start"
[ https://issues.apache.org/jira/browse/AMBARI-16886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Renjith Kamath updated AMBARI-16886: Status: Patch Available (was: Open) > "Zeppelin service check" was scheduled before "Zeppelin Notebook Start" > --- > > Key: AMBARI-16886 > URL: https://issues.apache.org/jira/browse/AMBARI-16886 > Project: Ambari > Issue Type: Bug >Affects Versions: 2.4.0 >Reporter: Renjith Kamath > Fix For: 2.4.0 > > Attachments: AMBARI-16886-trunk+2.4-v1.patch > > > During the deploy of HDP-2.5 with all services Zeppelin service check was > scheduled before Zeppelin Notebook Start. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-16886) "Zeppelin service check" was scheduled before "Zeppelin Notebook Start"
[ https://issues.apache.org/jira/browse/AMBARI-16886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Renjith Kamath updated AMBARI-16886: Attachment: AMBARI-16886-trunk+2.4-v1.patch > "Zeppelin service check" was scheduled before "Zeppelin Notebook Start" > --- > > Key: AMBARI-16886 > URL: https://issues.apache.org/jira/browse/AMBARI-16886 > Project: Ambari > Issue Type: Bug >Affects Versions: 2.4.0 >Reporter: Renjith Kamath > Fix For: 2.4.0 > > Attachments: AMBARI-16886-trunk+2.4-v1.patch > > > During the deploy of HDP-2.5 with all services Zeppelin service check was > scheduled before Zeppelin Notebook Start. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-16881) ZKFC restart failed during EU with 'upgrade_type' not defined error
[ https://issues.apache.org/jira/browse/AMBARI-16881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitaly Brodetskyi updated AMBARI-16881: --- Resolution: Fixed Status: Resolved (was: Patch Available) Committed to trunk and branch-2.4 > ZKFC restart failed during EU with 'upgrade_type' not defined error > --- > > Key: AMBARI-16881 > URL: https://issues.apache.org/jira/browse/AMBARI-16881 > Project: Ambari > Issue Type: Bug > Components: ambari-server >Affects Versions: 2.4.0 >Reporter: Vitaly Brodetskyi >Assignee: Vitaly Brodetskyi >Priority: Blocker > Fix For: 2.4.0 > > Attachments: AMBARI-16881.patch > > > Steps > Deploy HDP-2.4.2 cluster with Ambari 2.2.2 (Secure, HA cluster) > Upgrade Ambari to 2.4.0.0 > Perform EU to 2.5.0.0-555 > Result > ZKFC restart failed with below error: > Traceback (most recent call last): > File > "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/zkfc_slave.py", > line 200, in > ZkfcSlave().execute() > File > "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", > line 257, in execute > method(env) > File > "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", > line 668, in restart > self.status(env) > File > "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/zkfc_slave.py", > line 102, in status > ZkfcSlaveDefault.status_static(env, upgrade_type) > NameError: global name 'upgrade_type' is not defined -- This message was sent by Atlassian JIRA (v6.3.4#6332)
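The traceback shows `status()` referencing `upgrade_type` without it being in scope; the class of fix is to thread the value through as a parameter. An illustrative reduction of the bug and its repair, not the actual patch:

```python
class ZkfcSlaveDefault(object):
    @staticmethod
    def status_static(env, upgrade_type):
        # Stand-in for the real status check in zkfc_slave.py.
        return ("ok", upgrade_type)

class ZkfcSlave(object):
    def status(self, env, upgrade_type=None):
        # Accepting upgrade_type as a parameter (instead of assuming a
        # global of that name exists) is what avoids the NameError seen
        # during the ZKFC restart in the traceback above.
        return ZkfcSlaveDefault.status_static(env, upgrade_type)
```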
[jira] [Assigned] (AMBARI-16755) Add spark.driver.extraLibraryPath & spark.executor.extraLibraryPath to spark-defaults.conf
[ https://issues.apache.org/jira/browse/AMBARI-16755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiqing Yang reassigned AMBARI-16755: - Assignee: Weiqing Yang > Add spark.driver.extraLibraryPath & spark.executor.extraLibraryPath to > spark-defaults.conf > -- > > Key: AMBARI-16755 > URL: https://issues.apache.org/jira/browse/AMBARI-16755 > Project: Ambari > Issue Type: Improvement > Components: stacks >Reporter: Jeff Zhang >Assignee: Weiqing Yang > Attachments: AMBARI-16755-1.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-16755) Add spark.driver.extraLibraryPath & spark.executor.extraLibraryPath to spark-defaults.conf
[ https://issues.apache.org/jira/browse/AMBARI-16755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiqing Yang updated AMBARI-16755: -- Attachment: AMBARI-16755-1.patch > Add spark.driver.extraLibraryPath & spark.executor.extraLibraryPath to > spark-defaults.conf > -- > > Key: AMBARI-16755 > URL: https://issues.apache.org/jira/browse/AMBARI-16755 > Project: Ambari > Issue Type: Improvement > Components: stacks >Reporter: Jeff Zhang >Assignee: Weiqing Yang > Attachments: AMBARI-16755-1.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-16883) [Zeppelin] Restart service button not available after the configuration update & intermittent restart failure
[ https://issues.apache.org/jira/browse/AMBARI-16883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Renjith Kamath updated AMBARI-16883: Attachment: AMBARI-16883-trunk-v1.patch > [Zeppelin] Restart service button not available after the configuration > update & intermittent restart failure > - > > Key: AMBARI-16883 > URL: https://issues.apache.org/jira/browse/AMBARI-16883 > Project: Ambari > Issue Type: Bug >Affects Versions: 2.4.0 >Reporter: Renjith Kamath > Fix For: 2.4.0 > > Attachments: AMBARI-16883-trunk-v1.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-16883) [Zeppelin] Restart service button not available after the configuration update & intermittent restart failure
[ https://issues.apache.org/jira/browse/AMBARI-16883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Renjith Kamath updated AMBARI-16883: Status: Patch Available (was: Open) > [Zeppelin] Restart service button not available after the configuration > update & intermittent restart failure > - > > Key: AMBARI-16883 > URL: https://issues.apache.org/jira/browse/AMBARI-16883 > Project: Ambari > Issue Type: Bug >Affects Versions: 2.4.0 >Reporter: Renjith Kamath > Fix For: 2.4.0 > > Attachments: AMBARI-16883-trunk-v1.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-16848) clean up import * for STORM, TEZ and ZEPPELIN services
[ https://issues.apache.org/jira/browse/AMBARI-16848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Juanjo Marron updated AMBARI-16848: --- Summary: clean up import * for STORM, TEZ and ZEPPELIN services (was: clean up import * for SQOOP, STORM, TEZ and ZEPPELIN services) > clean up import * for STORM, TEZ and ZEPPELIN services > -- > > Key: AMBARI-16848 > URL: https://issues.apache.org/jira/browse/AMBARI-16848 > Project: Ambari > Issue Type: Technical task > Components: ambari-agent, ambari-server >Affects Versions: 2.1.0, 2.2.0, 2.4.0 >Reporter: Juanjo Marron >Assignee: Juanjo Marron > Fix For: 3.0.0 > > > Python code at the common-services level used generic imports from > resource_management (from resource_management import *) > Ideally, for easier code tracking and performance, these imports should be > more specific, such as: > from resource_management.libraries.script.script import Script > from resource_management.core.resources.system import Directory > This subtask cleans up import * from resource_management and replaces it with > specific imports for: > Sqoop, Storm, Tez and Zeppelin services -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-16848) clean up import * for STORM, TEZ and ZEPPELIN services
[ https://issues.apache.org/jira/browse/AMBARI-16848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Juanjo Marron updated AMBARI-16848: --- Description: Python code at the common-services level used generic imports from resource_management (from resource_management import *) Ideally, for easier code tracking and performance, these imports should be more specific, such as: from resource_management.libraries.script.script import Script from resource_management.core.resources.system import Directory This subtask cleans up import * from resource_management and replaces it with specific imports for: Storm, Tez and Zeppelin services was: Python code at the common-services level used generic imports from resource_management (from resource_management import *) Ideally, for easier code tracking and performance, these imports should be more specific, such as: from resource_management.libraries.script.script import Script from resource_management.core.resources.system import Directory This subtask cleans up import * from resource_management and replaces it with specific imports for: Sqoop, Storm, Tez and Zeppelin services > clean up import * for STORM, TEZ and ZEPPELIN services > -- > > Key: AMBARI-16848 > URL: https://issues.apache.org/jira/browse/AMBARI-16848 > Project: Ambari > Issue Type: Technical task > Components: ambari-agent, ambari-server >Affects Versions: 2.1.0, 2.2.0, 2.4.0 >Reporter: Juanjo Marron >Assignee: Juanjo Marron > Fix For: 3.0.0 > > > Python code at the common-services level used generic imports from > resource_management (from resource_management import *) > Ideally, for easier code tracking and performance, these imports should be > more specific, such as: > from resource_management.libraries.script.script import Script > from resource_management.core.resources.system import Directory > This subtask cleans up import * from resource_management and replaces it with > specific imports for: > Storm, Tez and Zeppelin services -- This message was sent by Atlassian JIRA (v6.3.4#6332)
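The cleanup replaces wildcard imports with the explicit form quoted in the description (`from resource_management.libraries.script.script import Script`, etc.). Since `resource_management` itself is only importable inside an Ambari agent, the same before/after pattern is shown here with a standard-library module:

```python
# Before: a wildcard import hides where names come from and pollutes
# the module namespace.
# from os.path import *

# After: explicit imports make code tracking easy and keep the
# namespace clean.
from os.path import basename, join

def script_log_path(log_dir, script_path):
    # With explicit imports it is obvious that join and basename
    # come from os.path, not from some other wildcard-imported module.
    return join(log_dir, basename(script_path) + ".log")
```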
[jira] [Updated] (AMBARI-16885) Change location of HAWQ tmp directories
[ https://issues.apache.org/jira/browse/AMBARI-16885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt updated AMBARI-16885: -- Status: Patch Available (was: Open) > Change location of HAWQ tmp directories > --- > > Key: AMBARI-16885 > URL: https://issues.apache.org/jira/browse/AMBARI-16885 > Project: Ambari > Issue Type: Bug > Components: stacks >Reporter: Matt >Assignee: Matt >Priority: Trivial > Fix For: trunk, 2.4.0 > > Attachments: AMBARI-16885-trunk-orig.patch > > > Update HAWQ temp directories to /data/hawq/tmp/master and > /data/hawq/tmp/segment respectively -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-16883) [Zeppelin] Restart service button not available after the configuration update & intermittent restart failure
[ https://issues.apache.org/jira/browse/AMBARI-16883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Renjith Kamath updated AMBARI-16883: Summary: [Zeppelin] Restart service button not available after the configuration update & intermittent restart failure (was: [Zeppelin] Restart service button not available after the configuration update) > [Zeppelin] Restart service button not available after the configuration > update & intermittent restart failure > - > > Key: AMBARI-16883 > URL: https://issues.apache.org/jira/browse/AMBARI-16883 > Project: Ambari > Issue Type: Bug >Affects Versions: 2.4.0 >Reporter: Renjith Kamath > Fix For: 2.4.0 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-16885) Change location of HAWQ tmp directories
[ https://issues.apache.org/jira/browse/AMBARI-16885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt updated AMBARI-16885: -- Attachment: AMBARI-16885-trunk-orig.patch > Change location of HAWQ tmp directories > --- > > Key: AMBARI-16885 > URL: https://issues.apache.org/jira/browse/AMBARI-16885 > Project: Ambari > Issue Type: Bug > Components: stacks >Reporter: Matt >Assignee: Matt >Priority: Trivial > Fix For: trunk, 2.4.0 > > Attachments: AMBARI-16885-trunk-orig.patch > > > Update HAWQ temp directories to /data/hawq/tmp/master and > /data/hawq/tmp/segment respectively -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (AMBARI-16885) Change location of HAWQ tmp directories
Matt created AMBARI-16885: - Summary: Change location of HAWQ tmp directories Key: AMBARI-16885 URL: https://issues.apache.org/jira/browse/AMBARI-16885 Project: Ambari Issue Type: Bug Components: stacks Reporter: Matt Assignee: Matt Priority: Trivial Fix For: trunk, 2.4.0 Update HAWQ temp directories to /data/hawq/tmp/master and /data/hawq/tmp/segment respectively -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-16884) Quickly show configuration changes when stale configuration is detected
[ https://issues.apache.org/jira/browse/AMBARI-16884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Paul Codding updated AMBARI-16884: -- Attachment: AMBARI-16884.bmml AMBARI-16884.png > Quickly show configuration changes when stale configuration is detected > --- > > Key: AMBARI-16884 > URL: https://issues.apache.org/jira/browse/AMBARI-16884 > Project: Ambari > Issue Type: New Feature >Reporter: Paul Codding > Fix For: 3.0.0 > > Attachments: AMBARI-16884.bmml, AMBARI-16884.png > > > When configurations are changed and components require restarting, having a > button to easily view the changes that will be applied to the configuration > during the restart operation is required. Simply displaying the > configuration diff screen that we have for manually comparing versions would > be helpful so the operator knows exactly what changes will be applied. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (AMBARI-16884) Quickly show configuration changes when stale configuration is detected
Paul Codding created AMBARI-16884: - Summary: Quickly show configuration changes when stale configuration is detected Key: AMBARI-16884 URL: https://issues.apache.org/jira/browse/AMBARI-16884 Project: Ambari Issue Type: New Feature Reporter: Paul Codding Fix For: 3.0.0 When configurations are changed and components require restarting, having a button to easily view the changes that will be applied to the configuration during the restart operation is required. Simply displaying the configuration diff screen that we have for manually comparing versions would be helpful so the operator knows exactly what changes will be applied. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (AMBARI-16883) [Zeppelin] Restart service button not available after the configuration update
Renjith Kamath created AMBARI-16883:
------------------------------------

             Summary: [Zeppelin] Restart service button not available after the configuration update
                 Key: AMBARI-16883
                 URL: https://issues.apache.org/jira/browse/AMBARI-16883
             Project: Ambari
          Issue Type: Bug
    Affects Versions: 2.4.0
            Reporter: Renjith Kamath
             Fix For: 2.4.0
[jira] [Updated] (AMBARI-16882) Manually remove previous stack versions after upgrade is completed
[ https://issues.apache.org/jira/browse/AMBARI-16882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Paul Codding updated AMBARI-16882:
----------------------------------
    Attachment: AMBARI-16882.bmml
                AMBARI-16882.png

> Manually remove previous stack versions after upgrade is completed
> ------------------------------------------------------------------
>
>                 Key: AMBARI-16882
>                 URL: https://issues.apache.org/jira/browse/AMBARI-16882
>             Project: Ambari
>          Issue Type: New Feature
>            Reporter: Paul Codding
>             Fix For: 3.0.0
>
>         Attachments: AMBARI-16882.bmml, AMBARI-16882.png
>
> Each stack version takes up ~2GB of space on a host, and after a version is
> successfully upgraded to and fully checked out, it would be nice to be able
> to manually remove old stack versions that are no longer in use. Adding a
> (- Remove) button from the Cluster -> Versions table of installed versions
> to erase the packages and /usr/hdp entries is desired.
[jira] [Commented] (AMBARI-16873) Optimize UI error saving
[ https://issues.apache.org/jira/browse/AMBARI-16873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15300410#comment-15300410 ]

Andrii Tkach commented on AMBARI-16873:
---------------------------------------

committed to trunk and branch-2.4

> Optimize UI error saving
> ------------------------
>
>                 Key: AMBARI-16873
>                 URL: https://issues.apache.org/jira/browse/AMBARI-16873
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-web
>    Affects Versions: 2.4.0
>            Reporter: Andrii Tkach
>            Assignee: Andrii Tkach
>            Priority: Critical
>             Fix For: 2.4.0
>
>         Attachments: AMBARI-16873.patch
>
> # Erase host "http://:8080/javascripts"
> # Truncate stackTrace up to 1000 chars
> # If errors container contains more than 500 000 chars then overwrite old errors with new one
[jira] [Updated] (AMBARI-16873) Optimize UI error saving
[ https://issues.apache.org/jira/browse/AMBARI-16873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrii Tkach updated AMBARI-16873:
----------------------------------
    Resolution: Fixed
        Status: Resolved  (was: Patch Available)

> Optimize UI error saving
> ------------------------
>
>                 Key: AMBARI-16873
>                 URL: https://issues.apache.org/jira/browse/AMBARI-16873
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-web
>    Affects Versions: 2.4.0
>            Reporter: Andrii Tkach
>            Assignee: Andrii Tkach
>            Priority: Critical
>             Fix For: 2.4.0
>
>         Attachments: AMBARI-16873.patch
>
> # Erase host "http://:8080/javascripts"
> # Truncate stackTrace up to 1000 chars
> # If errors container contains more than 500 000 chars then overwrite old errors with new one
[jira] [Commented] (AMBARI-16873) Optimize UI error saving
[ https://issues.apache.org/jira/browse/AMBARI-16873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15300402#comment-15300402 ]

Antonenko Alexander commented on AMBARI-16873:
----------------------------------------------

+1 for the patch

> Optimize UI error saving
> ------------------------
>
>                 Key: AMBARI-16873
>                 URL: https://issues.apache.org/jira/browse/AMBARI-16873
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-web
>    Affects Versions: 2.4.0
>            Reporter: Andrii Tkach
>            Assignee: Andrii Tkach
>            Priority: Critical
>             Fix For: 2.4.0
>
>         Attachments: AMBARI-16873.patch
>
> # Erase host "http://:8080/javascripts"
> # Truncate stackTrace up to 1000 chars
> # If errors container contains more than 500 000 chars then overwrite old errors with new one
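The three trimming rules listed in the AMBARI-16873 description can be sketched as follows. This is an illustrative Python rendering only: the actual ambari-web implementation is JavaScript, and the function and constant names here are assumptions, not Ambari's API.

```python
# Illustrative sketch of the AMBARI-16873 trimming rules; not the actual ambari-web code.
MAX_STACK_TRACE_CHARS = 1000      # rule 2: truncate stackTrace to 1000 chars
MAX_CONTAINER_CHARS = 500000      # rule 3: cap on the whole errors container

def trim_error(host_prefix, stack_trace):
    """Rule 1 + 2: erase the host prefix (e.g. "http://<host>:8080/javascripts")
    from the trace, then cap its length."""
    cleaned = stack_trace.replace(host_prefix, "")
    return cleaned[:MAX_STACK_TRACE_CHARS]

def append_error(container, error):
    """Rule 3: if appending would push the container past the cap,
    overwrite the old errors with the new one instead."""
    if len(container) + len(error) > MAX_CONTAINER_CHARS:
        return error
    return container + "\n" + error
```

The net effect is a bounded error log: individual traces lose their redundant host prefix and long tails, and the container as a whole never grows past roughly 500,000 characters.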
[jira] [Updated] (AMBARI-16881) ZKFC restart failed during EU with 'upgrade_type' not defined error
[ https://issues.apache.org/jira/browse/AMBARI-16881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vitaly Brodetskyi updated AMBARI-16881:
---------------------------------------
    Status: Patch Available  (was: Open)

> ZKFC restart failed during EU with 'upgrade_type' not defined error
> -------------------------------------------------------------------
>
>                 Key: AMBARI-16881
>                 URL: https://issues.apache.org/jira/browse/AMBARI-16881
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-server
>    Affects Versions: 2.4.0
>            Reporter: Vitaly Brodetskyi
>            Assignee: Vitaly Brodetskyi
>            Priority: Blocker
>             Fix For: 2.4.0
>
>         Attachments: AMBARI-16881.patch
>
> Steps
> Deploy HDP-2.4.2 cluster with Ambari 2.2.2 (Secure, HA cluster)
> Upgrade Ambari to 2.4.0.0
> Perform EU to 2.5.0.0-555
> Result
> ZKFC restart failed with below error:
> Traceback (most recent call last):
>   File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/zkfc_slave.py", line 200, in
>     ZkfcSlave().execute()
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 257, in execute
>     method(env)
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 668, in restart
>     self.status(env)
>   File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/zkfc_slave.py", line 102, in status
>     ZkfcSlaveDefault.status_static(env, upgrade_type)
> NameError: global name 'upgrade_type' is not defined
[jira] [Created] (AMBARI-16881) ZKFC restart failed during EU with 'upgrade_type' not defined error
Vitaly Brodetskyi created AMBARI-16881:
---------------------------------------

             Summary: ZKFC restart failed during EU with 'upgrade_type' not defined error
                 Key: AMBARI-16881
                 URL: https://issues.apache.org/jira/browse/AMBARI-16881
             Project: Ambari
          Issue Type: Bug
          Components: ambari-server
    Affects Versions: 2.4.0
            Reporter: Vitaly Brodetskyi
            Assignee: Vitaly Brodetskyi
            Priority: Blocker
             Fix For: 2.4.0


Steps
Deploy HDP-2.4.2 cluster with Ambari 2.2.2 (Secure, HA cluster)
Upgrade Ambari to 2.4.0.0
Perform EU to 2.5.0.0-555

Result
ZKFC restart failed with below error:
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/zkfc_slave.py", line 200, in
    ZkfcSlave().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 257, in execute
    method(env)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 668, in restart
    self.status(env)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/zkfc_slave.py", line 102, in status
    ZkfcSlaveDefault.status_static(env, upgrade_type)
NameError: global name 'upgrade_type' is not defined
[jira] [Created] (AMBARI-16882) Manually remove previous stack versions after upgrade is completed
Paul Codding created AMBARI-16882:
----------------------------------

             Summary: Manually remove previous stack versions after upgrade is completed
                 Key: AMBARI-16882
                 URL: https://issues.apache.org/jira/browse/AMBARI-16882
             Project: Ambari
          Issue Type: New Feature
            Reporter: Paul Codding
             Fix For: 3.0.0


Each stack version takes up ~2GB of space on a host, and after a version is
successfully upgraded to and fully checked out, it would be nice to be able
to manually remove old stack versions that are no longer in use. Adding a
(- Remove) button from the Cluster -> Versions table of installed versions
to erase the packages and /usr/hdp entries is desired.
[jira] [Updated] (AMBARI-16881) ZKFC restart failed during EU with 'upgrade_type' not defined error
[ https://issues.apache.org/jira/browse/AMBARI-16881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vitaly Brodetskyi updated AMBARI-16881:
---------------------------------------
    Attachment: AMBARI-16881.patch

> ZKFC restart failed during EU with 'upgrade_type' not defined error
> -------------------------------------------------------------------
>
>                 Key: AMBARI-16881
>                 URL: https://issues.apache.org/jira/browse/AMBARI-16881
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-server
>    Affects Versions: 2.4.0
>            Reporter: Vitaly Brodetskyi
>            Assignee: Vitaly Brodetskyi
>            Priority: Blocker
>             Fix For: 2.4.0
>
>         Attachments: AMBARI-16881.patch
>
> Steps
> Deploy HDP-2.4.2 cluster with Ambari 2.2.2 (Secure, HA cluster)
> Upgrade Ambari to 2.4.0.0
> Perform EU to 2.5.0.0-555
> Result
> ZKFC restart failed with below error:
> Traceback (most recent call last):
>   File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/zkfc_slave.py", line 200, in
>     ZkfcSlave().execute()
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 257, in execute
>     method(env)
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 668, in restart
>     self.status(env)
>   File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/zkfc_slave.py", line 102, in status
>     ZkfcSlaveDefault.status_static(env, upgrade_type)
> NameError: global name 'upgrade_type' is not defined
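The AMBARI-16881 traceback boils down to a method referencing a name that is bound nowhere in its scope: `status()` passes `upgrade_type` to `status_static()` without `upgrade_type` being a parameter, local, or module-level global. A minimal reproduction and one possible fix (accepting `upgrade_type` explicitly with a safe default) can be sketched as below; the class and method names are simplified stand-ins, not the actual zkfc_slave.py code.

```python
# Simplified stand-in for the failure mode in zkfc_slave.py (not the real Ambari code).

class ZkfcSlaveBroken(object):
    def status(self, env):
        # Raises NameError: 'upgrade_type' is neither a parameter nor a
        # global in this scope, mirroring the traceback in the report.
        return self.status_static(env, upgrade_type)

    @staticmethod
    def status_static(env, upgrade_type):
        return (env, upgrade_type)

class ZkfcSlaveFixed(object):
    # One possible fix: make upgrade_type an explicit parameter with a
    # safe default, so status() works both inside and outside an upgrade.
    def status(self, env, upgrade_type=None):
        return self.status_static(env, upgrade_type)

    @staticmethod
    def status_static(env, upgrade_type):
        return (env, upgrade_type)
```

Whether the attached patch threads the value through this way or binds it elsewhere is not shown in the report; the sketch only demonstrates why the `NameError` fires and one way a caller-supplied default removes it.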
[jira] [Updated] (AMBARI-16875) LDAP sync cannot handle if the member attribute value is not DN or id
[ https://issues.apache.org/jira/browse/AMBARI-16875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Olivér Szabó updated AMBARI-16875:
----------------------------------
    Attachment: (was: AMBARI-16875.patch)

> LDAP sync cannot handle if the member attribute value is not DN or id
> ---------------------------------------------------------------------
>
>                 Key: AMBARI-16875
>                 URL: https://issues.apache.org/jira/browse/AMBARI-16875
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-server
>    Affects Versions: 2.4.0
>            Reporter: Olivér Szabó
>            Assignee: Olivér Szabó
>            Priority: Critical
>             Fix For: 2.4.0
>
>         Attachments: AMBARI-16875.patch
>
> In case the member attribute value looks like this:
> ";;cn=myCn,dc=apache,dc=org", sync stops working.
> Adding 2 new properties (to find the DN or the id of the member):
> {{"authentication.ldap.sync.userMemberReplacePattern"}}
> {{"authentication.ldap.sync.groupMemberReplacePattern"}}
> These values are empty by default.
> Example usage:
> If we get this as the ldapsearch response for a group member:
> member=";;cn=myCn,dc=apache,dc=org"
> we need to define a regex which contains a "member" group to specify the
> location of the DN or id, e.g. {{(?<member>.*)}}
> authentication.ldap.sync.userMemberReplacePattern={{(.*);(.*);(?<member>.*)}}
> Then the result will be: "cn=myCn,dc=apache,dc=org"
[jira] [Commented] (AMBARI-16873) Optimize UI error saving
[ https://issues.apache.org/jira/browse/AMBARI-16873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15300365#comment-15300365 ]

Hadoop QA commented on AMBARI-16873:
------------------------------------

{color:green}+1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12806146/AMBARI-16873.patch
against trunk revision .

    {color:green}+1 @author{color}. The patch does not contain any @author tags.

    {color:green}+1 tests included{color}. The patch appears to include 2 new or modified test files.

    {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.

    {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.

    {color:green}+1 core tests{color}. The patch passed unit tests in ambari-web.

Test results: https://builds.apache.org/job/Ambari-trunk-test-patch/6956//testReport/
Console output: https://builds.apache.org/job/Ambari-trunk-test-patch/6956//console

This message is automatically generated.

> Optimize UI error saving
> ------------------------
>
>                 Key: AMBARI-16873
>                 URL: https://issues.apache.org/jira/browse/AMBARI-16873
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-web
>    Affects Versions: 2.4.0
>            Reporter: Andrii Tkach
>            Assignee: Andrii Tkach
>            Priority: Critical
>             Fix For: 2.4.0
>
>         Attachments: AMBARI-16873.patch
>
> # Erase host "http://:8080/javascripts"
> # Truncate stackTrace up to 1000 chars
> # If errors container contains more than 500 000 chars then overwrite old errors with new one
[jira] [Updated] (AMBARI-16875) LDAP sync cannot handle if the member attribute value is not DN or id
[ https://issues.apache.org/jira/browse/AMBARI-16875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Olivér Szabó updated AMBARI-16875:
----------------------------------
    Attachment: AMBARI-16875.patch

> LDAP sync cannot handle if the member attribute value is not DN or id
> ---------------------------------------------------------------------
>
>                 Key: AMBARI-16875
>                 URL: https://issues.apache.org/jira/browse/AMBARI-16875
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-server
>    Affects Versions: 2.4.0
>            Reporter: Olivér Szabó
>            Assignee: Olivér Szabó
>            Priority: Critical
>             Fix For: 2.4.0
>
>         Attachments: AMBARI-16875.patch
>
> In case the member attribute value looks like this:
> ";;cn=myCn,dc=apache,dc=org", sync stops working.
> Adding 2 new properties (to find the DN or the id of the member):
> {{"authentication.ldap.sync.userMemberReplacePattern"}}
> {{"authentication.ldap.sync.groupMemberReplacePattern"}}
> These values are empty by default.
> Example usage:
> If we get this as the ldapsearch response for a group member:
> member=";;cn=myCn,dc=apache,dc=org"
> we need to define a regex which contains a "member" group to specify the
> location of the DN or id, e.g. {{(?<member>.*)}}
> authentication.ldap.sync.userMemberReplacePattern={{(.*);(.*);(?<member>.*)}}
> Then the result will be: "cn=myCn,dc=apache,dc=org"
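The replace-pattern idea from the AMBARI-16875 description can be tried out with an ordinary regex engine. A sketch in Python follows; note that Python spells named groups `(?P<name>...)` while Java (which the Ambari server uses) spells them `(?<name>...)`, and both the exact pattern and the fallback behavior here are assumptions based on the description, not the attached patch.

```python
import re

# Hypothetical rendering of a userMemberReplacePattern: two throwaway
# fields separated by semicolons, then the DN captured as group "member".
pattern = r"(?P<field1>.*);(?P<field2>.*);(?P<member>.*)"

def extract_member_dn(raw, pattern):
    """Return the "member" group if the pattern matches, else the raw value
    unchanged (mirroring the empty-by-default, no-op behavior described)."""
    m = re.match(pattern, raw)
    return m.group("member") if m else raw

member_dn = extract_member_dn(";;cn=myCn,dc=apache,dc=org", pattern)
```

Applied to the example value `";;cn=myCn,dc=apache,dc=org"`, the pattern strips the two empty leading fields and leaves the DN `cn=myCn,dc=apache,dc=org` for the sync code to resolve.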
[jira] [Commented] (AMBARI-16869) Stack advisor recommendation not working for HBASE
[ https://issues.apache.org/jira/browse/AMBARI-16869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15300311#comment-15300311 ]

Robert Levas commented on AMBARI-16869:
---------------------------------------

[~ababiichuk], Looking at this patch, I am confused as to why the conditional at
{{ambari-server/src/main/resources/stacks/HDP/2.5/services/stack_advisor.py:143}} was changed in that way. By relying
on the existence of the {{KERBEROS}} service, we run the risk of blindly setting {{hbase.master.ui.readonly}} because
it is possible to have the {{KERBEROS}} service installed in a non-Kerberized cluster. Instead, maybe we should
investigate what is wrong with
{code}
if "cluster-env" in services["configurations"] and "security_enabled" in services["configurations"]["cluster-env"]["properties"] \
  and services["configurations"]["cluster-env"]["properties"]["security_enabled"].lower() == "true":
{code}

> Stack advisor recommendation not working for HBASE
> --------------------------------------------------
>
>                 Key: AMBARI-16869
>                 URL: https://issues.apache.org/jira/browse/AMBARI-16869
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-server
>    Affects Versions: 2.4.0
>            Reporter: Andrii Babiichuk
>            Assignee: Andrii Babiichuk
>             Fix For: 2.4.0
>
>         Attachments: AMBARI-16869.patch
>
> The stack advisor should set 'hbase.master.ui.readonly' to 'true' for a secure
> cluster and 'false' for an unsecure one.
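The check Robert Levas suggests relying on — reading `security_enabled` from `cluster-env` rather than inferring security from the KERBEROS service being installed — can be written defensively so that missing keys fall back to "not secure" instead of raising. This is a sketch under the same `services` dict shape as the snippet quoted above; the helper names are illustrative, not Ambari's actual stack_advisor API.

```python
# Illustrative helpers; the services dict shape mirrors the quoted snippet.

def is_security_enabled(services):
    """True only when cluster-env/properties/security_enabled == "true"
    (case-insensitive); absent keys mean an unsecure cluster."""
    configurations = services.get("configurations", {})
    properties = configurations.get("cluster-env", {}).get("properties", {})
    return properties.get("security_enabled", "false").lower() == "true"

def recommend_hbase_master_ui_readonly(services):
    # Per AMBARI-16869: 'true' for a secure cluster, 'false' for an unsecure one.
    return "true" if is_security_enabled(services) else "false"
```

Unlike a test for the presence of the KERBEROS service, this keeps `hbase.master.ui.readonly` at 'false' on a cluster where KERBEROS happens to be installed but security is not actually enabled.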