[jira] [Updated] (AMBARI-9949) Add Service: Choose Services page, selected service issue during navigation to wizard from Stack Versions page.
[ https://issues.apache.org/jira/browse/AMBARI-9949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Antonenko Alexander updated AMBARI-9949:
----------------------------------------
Attachment: AMBARI-9949.patch

Add Service: Choose Services page, selected service issue during navigation to wizard from Stack Versions page.
---
Key: AMBARI-9949
URL: https://issues.apache.org/jira/browse/AMBARI-9949
Project: Ambari
Issue Type: Bug
Components: ambari-web
Affects Versions: 2.0.0
Reporter: Antonenko Alexander
Assignee: Antonenko Alexander
Priority: Critical
Fix For: 2.0.0
Attachments: AMBARI-9949.patch

STR:
- install a cluster with HDFS and ZooKeeper
- go to the Admin - Stack Versions page
- choose any service to add, for example Storm
- deselect the Storm service and select another service on the Choose Services page
- proceed to the next step
  *AR:* Storm master components are present on the Assign Masters page
  *ER:* no Storm components on the Assign Masters page
- go to the previous step (Choose Services)
  *AR:* Storm is selected
  *ER:* Storm is deselected

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
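The bug above reads like stale wizard state: a deselected service's master-component assignments survive into the next step. The sketch below is a minimal Python model of that failure mode and its fix; the actual ambari-web code is JavaScript, and every class and method name here is hypothetical, not Ambari's API.

```python
# Hypothetical model (not ambari-web code): deselecting a service must also
# drop its cached master-component assignments, or they reappear on the
# Assign Masters step as described in the AR above.
class AddServiceWizard:
    def __init__(self):
        self.selected = set()
        self.master_components = {}  # service name -> list of master components

    def select(self, service, components):
        self.selected.add(service)
        self.master_components[service] = components

    def deselect(self, service):
        self.selected.discard(service)
        # the fix: clear the stale component assignments too
        self.master_components.pop(service, None)

    def assign_masters_page(self):
        # components shown on the Assign Masters step
        return sorted(c for comps in self.master_components.values() for c in comps)

w = AddServiceWizard()
w.select("STORM", ["NIMBUS", "STORM_UI_SERVER"])
w.deselect("STORM")
w.select("KAFKA", ["KAFKA_BROKER"])
print(w.assign_masters_page())  # no Storm components remain
```

Without the `pop` in `deselect`, the Storm components would still be listed, which is exactly the AR/ER mismatch in the steps to reproduce.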
[jira] [Updated] (AMBARI-9951) hadooplzo_2_2_2_0_2538-native can not be upgraded during RU
[ https://issues.apache.org/jira/browse/AMBARI-9951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dmitry Lysnichenko updated AMBARI-9951:
---------------------------------------
Attachment: AMBARI-9951.patch

hadooplzo_2_2_2_0_2538-native can not be upgraded during RU
---
Key: AMBARI-9951
URL: https://issues.apache.org/jira/browse/AMBARI-9951
Project: Ambari
Issue Type: Bug
Components: ambari-server
Affects Versions: 2.0.0
Reporter: Dmitry Lysnichenko
Assignee: Dmitry Lysnichenko
Fix For: 2.0.0
Attachments: AMBARI-9951.patch

{code}
2015-03-05 13:43:38,414 - Installing package lzo ('/usr/bin/zypper --quiet install --auto-agree-with-licenses --no-confirm --repo HDP-UTILS-2.2.2.0-2538 --repo HDP-2.2.2.0-2538 lzo')
2015-03-05 13:43:44,829 - Package['hadooplzo_2_2_2_0_2538*'] {'use_repos': ['base', 'HDP-UTILS-2.2.2.0-2538', 'HDP-2.2.2.0-2538']}
2015-03-05 13:43:45,787 - Installing package hadooplzo_2_2_2_0_2538* ('/usr/bin/zypper --quiet install --auto-agree-with-licenses --no-confirm --repo HDP-UTILS-2.2.2.0-2538 --repo HDP-2.2.2.0-2538 'hadooplzo_2_2_2_0_2538*'')
Can not install packages.
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/custom_actions/scripts/install_packages.py", line 96, in actionexecute
    Package(name, use_repos=list(current_repo_files) if OSCheck.is_ubuntu_family() else current_repositories)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 43, in action_install
    self.install_package(package_name, self.resource.use_repos)
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/zypper.py", line 72, in install_package
    shell.checked_call(cmd, sudo=True, logoutput=self.get_logoutput())
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
    return function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 82, in checked_call
    return _call(command, logoutput, True, cwd, env, preexec_fn, user, wait_for_finish, timeout, path, sudo, on_new_line)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 199, in _call
    raise Fail(err_msg)
Fail: Execution of '/usr/bin/zypper --quiet install --auto-agree-with-licenses --no-confirm --repo HDP-UTILS-2.2.2.0-2538 --repo HDP-2.2.2.0-2538 'hadooplzo_2_2_2_0_2538*'' returned 4.
Problem: nothing provides libjvm.so()(64bit) needed by hadooplzo_2_2_2_0_2538-native-0.6.0.2.2.2.0-2538.x86_64
 Solution 1: do not install hadooplzo_2_2_2_0_2538-native-0.6.0.2.2.2.0-2538.x86_64
 Solution 2: break hadooplzo_2_2_2_0_2538-native-0.6.0.2.2.2.0-2538.x86_64 by ignoring some of its dependencies
Choose from above solutions by number or cancel [1/2/c] (c): c
{code}

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
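The traceback above bottoms out in resource_management's shell helper: zypper exits with code 4 when a dependency (here libjvm.so) cannot be resolved, and `checked_call` turns any nonzero exit into a raised `Fail`. The following is a simplified stand-in, not Ambari's actual `resource_management.core.shell` module, showing just that behavior:

```python
# Simplified sketch of a checked_call-style helper (illustrative, not the
# real resource_management implementation): run a shell command and raise
# Fail on any nonzero exit code, mirroring the "returned 4" error above.
import subprocess

class Fail(Exception):
    pass

def checked_call(command):
    proc = subprocess.Popen(command, shell=True,
                            stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    out, _ = proc.communicate()
    if proc.returncode != 0:
        # surface the command and exit code, as the Ambari log does
        raise Fail("Execution of %r returned %d. %s"
                   % (command, proc.returncode, out.decode().strip()))
    return proc.returncode, out.decode()

# usage: "exit 4" stands in for the failing zypper install
try:
    checked_call("exit 4")
except Fail as e:
    print(e)
```

This is why the agent log shows a Python traceback ending in `Fail` rather than just the zypper prompt: the interactive zypper resolution (`[1/2/c]`) is auto-cancelled under `--no-confirm`, the process exits nonzero, and the wrapper raises.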
Review Request 31768: Oozie Service Check fails after upgrade secured cluster 1.6.1-2.0.0
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31768/
-----------------------------------------------------------

Review request for Ambari, Dmytro Sen and Robert Levas.

Bugs: AMBARI-9948
    https://issues.apache.org/jira/browse/AMBARI-9948

Repository: ambari

Description
-------
STR:
1. Install Ambari 1.6.1, HDP 2.1
2. Enable security
3. Upgrade to Ambari 2.0.0
4. Execute Kerberos Wizard.

Diffs
-----
  ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/configuration/oozie-site.xml e72a8be
  ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/files/oozieSmoke2.sh 84f913c
  ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/service_check.py 471cd35
  ambari-server/src/test/python/stacks/2.0.6/OOZIE/test_service_check.py a478b57

Diff: https://reviews.apache.org/r/31768/diff/

Testing
-------
mvn clean test

Thanks,
Vitalyi Brodetskyi
[jira] [Commented] (AMBARI-9950) YARN RM HA Mode Configurations Are Incorrect
[ https://issues.apache.org/jira/browse/AMBARI-9950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14349371#comment-14349371 ]

Andrii Babiichuk commented on AMBARI-9950:
------------------------------------------
+1 for the patch

YARN RM HA Mode Configurations Are Incorrect
---
Key: AMBARI-9950
URL: https://issues.apache.org/jira/browse/AMBARI-9950
Project: Ambari
Issue Type: Bug
Components: ambari-web
Affects Versions: 2.0.0
Reporter: Antonenko Alexander
Assignee: Antonenko Alexander
Priority: Critical
Fix For: 2.0.0
Attachments: AMBARI-9950.patch

When enabling YARN RM HA mode, it doesn't appear as though we are creating the additional {{yarn-site}} properties correctly. Consider that we do create the following:

{noformat}
yarn.resourcemanager.ha.automatic-failover.zk-base-path : /yarn-leader-election,
yarn.resourcemanager.ha.enabled : true,
yarn.resourcemanager.ha.rm-ids : rm1,rm2,
yarn.resourcemanager.hostname : c6402.ambari.apache.org,
yarn.resourcemanager.hostname.rm1 : c6402.ambari.apache.org,
yarn.resourcemanager.hostname.rm2 : c6403.ambari.apache.org,
yarn.resourcemanager.webapp.address : c6402.ambari.apache.org:8088,
yarn.resourcemanager.webapp.https.address : c6402.ambari.apache.org:8090,
{noformat}

You can see that we have created aliases (rm1 and rm2) and some dynamic keys for these hosts, such as {{yarn.resourcemanager.hostname.rm1}}. However, the {{yarn-site}} documentation states that other properties, such as the web address, also need to be specified in a similar manner. Otherwise, how does YARN know which port to spin up the RM on for each host?

{noformat:title=Missing Properties}
yarn.resourcemanager.webapp.address.rm1
yarn.resourcemanager.webapp.address.rm2
yarn.resourcemanager.webapp.https.address.rm1
yarn.resourcemanager.webapp.https.address.rm2
{noformat}

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
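The per-RM properties the issue says are missing can be derived mechanically from what Ambari already writes: take the port from the base (non-suffixed) webapp address and pair it with each `yarn.resourcemanager.hostname.<rm-id>`. The sketch below illustrates that derivation in Python; ambari-web itself is JavaScript, and `derive_rm_ha_properties` is a hypothetical helper name, not Ambari's code.

```python
# Hedged sketch: derive the missing per-RM webapp address properties from
# the rm-ids list, the per-alias hostnames, and the port of the base
# (non-suffixed) property. Input data is copied from the issue description.
def derive_rm_ha_properties(props):
    rm_ids = [r.strip() for r in props["yarn.resourcemanager.ha.rm-ids"].split(",")]
    derived = {}
    for base in ("yarn.resourcemanager.webapp.address",
                 "yarn.resourcemanager.webapp.https.address"):
        port = props[base].split(":")[1]          # port stays the same on every RM
        for rm_id in rm_ids:
            host = props["yarn.resourcemanager.hostname.%s" % rm_id]
            derived["%s.%s" % (base, rm_id)] = "%s:%s" % (host, port)
    return derived

props = {
    "yarn.resourcemanager.ha.rm-ids": "rm1,rm2",
    "yarn.resourcemanager.hostname.rm1": "c6402.ambari.apache.org",
    "yarn.resourcemanager.hostname.rm2": "c6403.ambari.apache.org",
    "yarn.resourcemanager.webapp.address": "c6402.ambari.apache.org:8088",
    "yarn.resourcemanager.webapp.https.address": "c6402.ambari.apache.org:8090",
}
for key, value in sorted(derive_rm_ha_properties(props).items()):
    print(key, ":", value)
```

Running this against the values from the issue yields the four "Missing Properties" keys, with rm2's addresses pointing at c6403 rather than c6402.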
[jira] [Commented] (AMBARI-9944) RU: web should not display older versions by default
[ https://issues.apache.org/jira/browse/AMBARI-9944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14349420#comment-14349420 ]

Hudson commented on AMBARI-9944:
--------------------------------
ABORTED: Integrated in Ambari-branch-2.0.0 #13 (See [https://builds.apache.org/job/Ambari-branch-2.0.0/13/])
AMBARI-9944 RU: web should not display older versions by default. (ababiichuk)
(ababiichuk: http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=2cfc618ecbcfd4b055fd82d68d02efdf94b69a83)
* ambari-web/app/views/main/admin/stack_upgrade/versions_view.js
* ambari-web/app/templates/main/host.hbs
* ambari-web/app/config.js
* ambari-web/test/views/main/host/stack_versions_view_test.js
* ambari-web/test/views/main/admin/stack_upgrade/version_view_test.js
* ambari-web/app/views/main/host.js
* ambari-web/app/views/main/host/stack_versions_view.js
* ambari-web/app/mappers/hosts_mapper.js
* ambari-web/app/models/host_stack_version.js

RU: web should not display older versions by default
---
Key: AMBARI-9944
URL: https://issues.apache.org/jira/browse/AMBARI-9944
Project: Ambari
Issue Type: Bug
Components: ambari-web
Affects Versions: 2.0.0
Reporter: Andrii Babiichuk
Assignee: Andrii Babiichuk
Priority: Critical
Fix For: 2.0.0
Attachments: AMBARI-9944.patch

Ambari Web should not display older versions by default.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: Review Request 31755: RU - Service Check group to include all services with a service_check script
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31755/#review75388
-----------------------------------------------------------

Ship it!

Ship It!

- Sid Wagle


On March 5, 2015, 8:32 p.m., Alejandro Fernandez wrote:

-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31755/
-----------------------------------------------------------

(Updated March 5, 2015, 8:32 p.m.)

Review request for Ambari, Jonathan Hurley and Nate Cole.

Bugs: AMBARI-9939
    https://issues.apache.org/jira/browse/AMBARI-9939

Repository: ambari

Description
-------
Performing an RU will run Service Check groups after several major phases. The Service Check group needs to include all services that can run a service check. However, certain services like Pig still have service check scripts even though they only have client components.

Diffs
-----
  ambari-server/src/main/java/org/apache/ambari/server/state/UpgradeContext.java f8752a2
  ambari-server/src/main/java/org/apache/ambari/server/state/stack/upgrade/ServiceCheckGrouping.java e40706c
  ambari-server/src/test/java/org/apache/ambari/server/stack/StackManagerTest.java f9e81af
  ambari-server/src/test/java/org/apache/ambari/server/state/UpgradeHelperTest.java 4a733b3
  ambari-server/src/test/resources/stacks/HDP/2.1.1/services/PIG/metainfo.xml f310b70
  ambari-server/src/test/resources/stacks/HDP/2.1.1/services/TEZ/metainfo.xml PRE-CREATION

Diff: https://reviews.apache.org/r/31755/diff/

Testing
-------
Performed an RU request and verified that the Service Check group included one more service, namely Pig.

Waiting for unit test results.

Local unit tests passed, OK
Total run: 609  Total errors: 0  Total failures: 0
OK
[INFO] BUILD SUCCESS
[INFO] Total time: 32:36.712s
[INFO] Finished at: Thu Mar 05 12:21:27 PST 2015
[INFO] Final Memory: 67M/834M

AMBARI-9939. RU - Service Check group to include all services with a service_check script (alejandro)

Thanks,
Alejandro Fernandez
[jira] [Updated] (AMBARI-9948) Oozie Service Check fails after upgrade secured cluster 1.6.1-2.0.0
[ https://issues.apache.org/jira/browse/AMBARI-9948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vitaly Brodetskyi updated AMBARI-9948:
--------------------------------------
Attachment: (was: AMBARI-9948.patch)

Oozie Service Check fails after upgrade secured cluster 1.6.1-2.0.0
---
Key: AMBARI-9948
URL: https://issues.apache.org/jira/browse/AMBARI-9948
Project: Ambari
Issue Type: Bug
Components: ambari-server
Affects Versions: 2.0.0
Reporter: Vitaly Brodetskyi
Assignee: Vitaly Brodetskyi
Priority: Critical
Fix For: 2.0.0
Attachments: AMBARI-9948.patch

STR:
1. Install Ambari 1.6.1, HDP 2.1
2. Enable security
3. Upgrade to Ambari 2.0.0
4. Execute Kerberos Wizard.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-9950) YARN RM HA Mode Configurations Are Incorrect
[ https://issues.apache.org/jira/browse/AMBARI-9950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Antonenko Alexander updated AMBARI-9950:
----------------------------------------
Attachment: AMBARI-9950.patch

YARN RM HA Mode Configurations Are Incorrect
---
Key: AMBARI-9950
URL: https://issues.apache.org/jira/browse/AMBARI-9950
Project: Ambari
Issue Type: Bug
Components: ambari-web
Affects Versions: 2.0.0
Reporter: Antonenko Alexander
Assignee: Antonenko Alexander
Priority: Critical
Fix For: 2.0.0
Attachments: AMBARI-9950.patch

When enabling YARN RM HA mode, it doesn't appear as though we are creating the additional {{yarn-site}} properties correctly. Consider that we do create the following:

{noformat}
yarn.resourcemanager.ha.automatic-failover.zk-base-path : /yarn-leader-election,
yarn.resourcemanager.ha.enabled : true,
yarn.resourcemanager.ha.rm-ids : rm1,rm2,
yarn.resourcemanager.hostname : c6402.ambari.apache.org,
yarn.resourcemanager.hostname.rm1 : c6402.ambari.apache.org,
yarn.resourcemanager.hostname.rm2 : c6403.ambari.apache.org,
yarn.resourcemanager.webapp.address : c6402.ambari.apache.org:8088,
yarn.resourcemanager.webapp.https.address : c6402.ambari.apache.org:8090,
{noformat}

You can see that we have created aliases (rm1 and rm2) and some dynamic keys for these hosts, such as {{yarn.resourcemanager.hostname.rm1}}. However, the {{yarn-site}} documentation states that other properties, such as the web address, also need to be specified in a similar manner. Otherwise, how does YARN know which port to spin up the RM on for each host?

{noformat:title=Missing Properties}
yarn.resourcemanager.webapp.address.rm1
yarn.resourcemanager.webapp.address.rm2
yarn.resourcemanager.webapp.https.address.rm1
yarn.resourcemanager.webapp.https.address.rm2
{noformat}

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-9939) RU - Service Check group to include all services with a service_check script
[ https://issues.apache.org/jira/browse/AMBARI-9939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alejandro Fernandez updated AMBARI-9939:
----------------------------------------
Attachment: AMBARI-9939.patch

RU - Service Check group to include all services with a service_check script
---
Key: AMBARI-9939
URL: https://issues.apache.org/jira/browse/AMBARI-9939
Project: Ambari
Issue Type: Bug
Components: ambari-server
Affects Versions: 2.0.0
Reporter: Alejandro Fernandez
Assignee: Alejandro Fernandez
Labels: rolling_upgrade
Fix For: 2.0.0
Attachments: AMBARI-9939.patch

Installed a minimal 3-node cluster with HDFS, MR, YARN, Pig, Tez. Performed an RU. The expected result is for the last service check to be run on all components. However, it skipped the Pig Service Check.

http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8
{code}
{
  "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8",
  "UpgradeGroup" : {
    "completed_task_count" : 4,
    "group_id" : 8,
    "in_progress_task_count" : 0,
    "name" : "SERVICE_CHECK",
    "progress_percent" : 100.0,
    "request_id" : 32,
    "status" : "COMPLETED",
    "title" : "All Service Checks",
    "total_task_count" : 4
  },
  "upgrade_items" : [
    {
      "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items/47",
      "UpgradeItem" : { "cluster_name" : "c1", "group_id" : 8, "request_id" : 32, "stage_id" : 47 }
    },
    {
      "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items/48",
      "UpgradeItem" : { "cluster_name" : "c1", "group_id" : 8, "request_id" : 32, "stage_id" : 48 }
    },
    {
      "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items/49",
      "UpgradeItem" : { "cluster_name" : "c1", "group_id" : 8, "request_id" : 32, "stage_id" : 49 }
    },
    {
      "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items/50",
      "UpgradeItem" : { "cluster_name" : "c1", "group_id" : 8, "request_id" : 32, "stage_id" : 50 }
    }
  ]
}
{code}

http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items?fields=UpgradeItem/text
{code}
{
  "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items?fields=UpgradeItem/text",
  "items" : [
    {
      "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items/47",
      "UpgradeItem" : { "cluster_name" : "c1", "group_id" : 8, "request_id" : 32, "stage_id" : 47, "text" : "Service Check HDFS" }
    },
    {
      "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items/48",
      "UpgradeItem" : { "cluster_name" : "c1", "group_id" : 8, "request_id" : 32, "stage_id" : 48, "text" : "Service Check YARN" }
    },
    {
      "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items/49",
      "UpgradeItem" : { "cluster_name" : "c1", "group_id" : 8, "request_id" : 32, "stage_id" : 49, "text" : "Service Check ZooKeeper" }
    },
    {
      "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items/50",
      "UpgradeItem" : { "cluster_name" : "c1", "group_id" : 8, "request_id" : 32, "stage_id" : 50, "text" : "Service Check MapReduce2" }
    }
  ]
}
{code}

The Upgrade Pack contains:
{code}
<group name="SERVICE_CHECK" title="All Service Checks" xsi:type="service-check">
  <skippable>true</skippable>
  <direction>UPGRADE</direction>
  <priority>
    <service>HDFS</service>
    <service>YARN</service>
    <service>HBASE</service>
  </priority>
</group>
{code}

Because the Pig service check was not run, the new Tez tarball was not copied to HDFS. The underlying issue is that a service is not added to the Service Check group if it is a clientOnly service. However, Pig is clientOnly but still has a service check Python script.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
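The corrected grouping rule described above (include any service that ships a service_check script, even if all its components are clients) can be sketched as follows. The real fix lives in Java (`ServiceCheckGrouping.java`); this Python version with hypothetical field names only models the before/after logic, and the sample data besides Pig is invented for illustration.

```python
# Hedged model of the fix (illustrative names, not Ambari's API): membership
# in the SERVICE_CHECK group should depend on whether the service has a
# service_check script, not on whether it is client-only.
def in_service_check_group(service):
    # old (buggy) rule: return not service["client_only"]
    # new rule: any service with a check script qualifies
    return service["has_service_check_script"]

services = [
    {"name": "HDFS", "client_only": False, "has_service_check_script": True},
    # Pig is client-only but still ships a service check script (the bug)
    {"name": "PIG", "client_only": True, "has_service_check_script": True},
    # hypothetical client-only service with no check script: still excluded
    {"name": "SOME_CLIENT_LIB", "client_only": True, "has_service_check_script": False},
]

checked = [s["name"] for s in services if in_service_check_group(s)]
print(checked)  # PIG is now included despite being client-only
```

Under the old rule, Pig would be filtered out and its dependent step (copying the new Tez tarball to HDFS) would silently never run, which matches the observed behavior.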
[jira] [Updated] (AMBARI-9939) RU - Service Check group to include all services with a service_check script
[ https://issues.apache.org/jira/browse/AMBARI-9939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alejandro Fernandez updated AMBARI-9939:
----------------------------------------
Attachment: (was: AMBARI-9939.patch)

RU - Service Check group to include all services with a service_check script
---
Key: AMBARI-9939
URL: https://issues.apache.org/jira/browse/AMBARI-9939
Project: Ambari
Issue Type: Bug
Components: ambari-server
Affects Versions: 2.0.0
Reporter: Alejandro Fernandez
Assignee: Alejandro Fernandez
Labels: rolling_upgrade
Fix For: 2.0.0
Attachments: AMBARI-9939.patch

Installed a minimal 3-node cluster with HDFS, MR, YARN, Pig, Tez. Performed an RU. The expected result is for the last service check to be run on all components. However, it skipped the Pig Service Check.

http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8
{code}
{
  "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8",
  "UpgradeGroup" : {
    "completed_task_count" : 4,
    "group_id" : 8,
    "in_progress_task_count" : 0,
    "name" : "SERVICE_CHECK",
    "progress_percent" : 100.0,
    "request_id" : 32,
    "status" : "COMPLETED",
    "title" : "All Service Checks",
    "total_task_count" : 4
  },
  "upgrade_items" : [
    {
      "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items/47",
      "UpgradeItem" : { "cluster_name" : "c1", "group_id" : 8, "request_id" : 32, "stage_id" : 47 }
    },
    {
      "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items/48",
      "UpgradeItem" : { "cluster_name" : "c1", "group_id" : 8, "request_id" : 32, "stage_id" : 48 }
    },
    {
      "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items/49",
      "UpgradeItem" : { "cluster_name" : "c1", "group_id" : 8, "request_id" : 32, "stage_id" : 49 }
    },
    {
      "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items/50",
      "UpgradeItem" : { "cluster_name" : "c1", "group_id" : 8, "request_id" : 32, "stage_id" : 50 }
    }
  ]
}
{code}

http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items?fields=UpgradeItem/text
{code}
{
  "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items?fields=UpgradeItem/text",
  "items" : [
    {
      "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items/47",
      "UpgradeItem" : { "cluster_name" : "c1", "group_id" : 8, "request_id" : 32, "stage_id" : 47, "text" : "Service Check HDFS" }
    },
    {
      "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items/48",
      "UpgradeItem" : { "cluster_name" : "c1", "group_id" : 8, "request_id" : 32, "stage_id" : 48, "text" : "Service Check YARN" }
    },
    {
      "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items/49",
      "UpgradeItem" : { "cluster_name" : "c1", "group_id" : 8, "request_id" : 32, "stage_id" : 49, "text" : "Service Check ZooKeeper" }
    },
    {
      "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items/50",
      "UpgradeItem" : { "cluster_name" : "c1", "group_id" : 8, "request_id" : 32, "stage_id" : 50, "text" : "Service Check MapReduce2" }
    }
  ]
}
{code}

The Upgrade Pack contains:
{code}
<group name="SERVICE_CHECK" title="All Service Checks" xsi:type="service-check">
  <skippable>true</skippable>
  <direction>UPGRADE</direction>
  <priority>
    <service>HDFS</service>
    <service>YARN</service>
    <service>HBASE</service>
  </priority>
</group>
{code}

Because the Pig service check was not run, the new Tez tarball was not copied to HDFS. The underlying issue is that a service is not added to the Service Check group if it is a clientOnly service. However, Pig is clientOnly but still has a service check Python script.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: Review Request 31755: RU - Service Check group to include all services with a service_check script
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31755/
-----------------------------------------------------------

(Updated March 5, 2015, 8:32 p.m.)

Review request for Ambari, Jonathan Hurley and Nate Cole.

Bugs: AMBARI-9939
    https://issues.apache.org/jira/browse/AMBARI-9939

Repository: ambari

Description
-------
Performing an RU will run Service Check groups after several major phases. The Service Check group needs to include all services that can run a service check. However, certain services like Pig still have service check scripts even though they only have client components.

Diffs (updated)
-----
  ambari-server/src/main/java/org/apache/ambari/server/state/UpgradeContext.java f8752a2
  ambari-server/src/main/java/org/apache/ambari/server/state/stack/upgrade/ServiceCheckGrouping.java e40706c
  ambari-server/src/test/java/org/apache/ambari/server/stack/StackManagerTest.java f9e81af
  ambari-server/src/test/java/org/apache/ambari/server/state/UpgradeHelperTest.java 4a733b3
  ambari-server/src/test/resources/stacks/HDP/2.1.1/services/PIG/metainfo.xml f310b70
  ambari-server/src/test/resources/stacks/HDP/2.1.1/services/TEZ/metainfo.xml PRE-CREATION

Diff: https://reviews.apache.org/r/31755/diff/

Testing (updated)
-------
Performed an RU request and verified that the Service Check group included one more service, namely Pig.

Waiting for unit test results.

Local unit tests passed, OK
Total run: 609  Total errors: 0  Total failures: 0
OK
[INFO] BUILD SUCCESS
[INFO] Total time: 32:36.712s
[INFO] Finished at: Thu Mar 05 12:21:27 PST 2015
[INFO] Final Memory: 67M/834M

AMBARI-9939. RU - Service Check group to include all services with a service_check script (alejandro)

Thanks,
Alejandro Fernandez
Review Request 31777: hadooplzo_2_2_2_0_2538-native can not be upgraded during RU
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31777/
-----------------------------------------------------------

Review request for Ambari and Vitalyi Brodetskyi.

Bugs: AMBARI-9951
    https://issues.apache.org/jira/browse/AMBARI-9951

Repository: ambari

Description
-------
{code}
2015-03-05 13:43:38,414 - Installing package lzo ('/usr/bin/zypper --quiet install --auto-agree-with-licenses --no-confirm --repo HDP-UTILS-2.2.2.0-2538 --repo HDP-2.2.2.0-2538 lzo')
2015-03-05 13:43:44,829 - Package['hadooplzo_2_2_2_0_2538*'] {'use_repos': ['base', 'HDP-UTILS-2.2.2.0-2538', 'HDP-2.2.2.0-2538']}
2015-03-05 13:43:45,787 - Installing package hadooplzo_2_2_2_0_2538* ('/usr/bin/zypper --quiet install --auto-agree-with-licenses --no-confirm --repo HDP-UTILS-2.2.2.0-2538 --repo HDP-2.2.2.0-2538 'hadooplzo_2_2_2_0_2538*'')
Can not install packages.
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/custom_actions/scripts/install_packages.py", line 96, in actionexecute
    Package(name, use_repos=list(current_repo_files) if OSCheck.is_ubuntu_family() else current_repositories)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 43, in action_install
    self.install_package(package_name, self.resource.use_repos)
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/zypper.py", line 72, in install_package
    shell.checked_call(cmd, sudo=True, logoutput=self.get_logoutput())
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
    return function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 82, in checked_call
    return _call(command, logoutput, True, cwd, env, preexec_fn, user, wait_for_finish, timeout, path, sudo, on_new_line)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 199, in _call
    raise Fail(err_msg)
Fail: Execution of '/usr/bin/zypper --quiet install --auto-agree-with-licenses --no-confirm --repo HDP-UTILS-2.2.2.0-2538 --repo HDP-2.2.2.0-2538 'hadooplzo_2_2_2_0_2538*'' returned 4.
Problem: nothing provides libjvm.so()(64bit) needed by hadooplzo_2_2_2_0_2538-native-0.6.0.2.2.2.0-2538.x86_64
 Solution 1: do not install hadooplzo_2_2_2_0_2538-native-0.6.0.2.2.2.0-2538.x86_64
 Solution 2: break hadooplzo_2_2_2_0_2538-native-0.6.0.2.2.2.0-2538.x86_64 by ignoring some of its dependencies
Choose from above solutions by number or cancel [1/2/c] (c): c
{code}

Diffs
-----
  ambari-server/src/main/resources/custom_actions/scripts/install_packages.py 5819390
  ambari-server/src/test/python/custom_actions/TestInstallPackages.py 4975757

Diff: https://reviews.apache.org/r/31777/diff/

Testing
-------
Ran 234 tests in 6.845s
OK
Total run: 609  Total errors: 0  Total failures: 0
OK

Thanks,
Dmitro Lisnichenko
Re: Review Request 31777: hadooplzo_2_2_2_0_2538-native can not be upgraded during RU
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31777/#review75396
-----------------------------------------------------------

Ship it!

Ship It!

- Dmytro Sen


On March 5, 2015, 9:04 p.m., Dmitro Lisnichenko wrote:

-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31777/
-----------------------------------------------------------

(Updated March 5, 2015, 9:04 p.m.)

Review request for Ambari and Dmytro Sen.

Bugs: AMBARI-9951
    https://issues.apache.org/jira/browse/AMBARI-9951

Repository: ambari

Description
-------
{code}
2015-03-05 13:43:38,414 - Installing package lzo ('/usr/bin/zypper --quiet install --auto-agree-with-licenses --no-confirm --repo HDP-UTILS-2.2.2.0-2538 --repo HDP-2.2.2.0-2538 lzo')
2015-03-05 13:43:44,829 - Package['hadooplzo_2_2_2_0_2538*'] {'use_repos': ['base', 'HDP-UTILS-2.2.2.0-2538', 'HDP-2.2.2.0-2538']}
2015-03-05 13:43:45,787 - Installing package hadooplzo_2_2_2_0_2538* ('/usr/bin/zypper --quiet install --auto-agree-with-licenses --no-confirm --repo HDP-UTILS-2.2.2.0-2538 --repo HDP-2.2.2.0-2538 'hadooplzo_2_2_2_0_2538*'')
Can not install packages.
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/custom_actions/scripts/install_packages.py", line 96, in actionexecute
    Package(name, use_repos=list(current_repo_files) if OSCheck.is_ubuntu_family() else current_repositories)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 43, in action_install
    self.install_package(package_name, self.resource.use_repos)
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/zypper.py", line 72, in install_package
    shell.checked_call(cmd, sudo=True, logoutput=self.get_logoutput())
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
    return function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 82, in checked_call
    return _call(command, logoutput, True, cwd, env, preexec_fn, user, wait_for_finish, timeout, path, sudo, on_new_line)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 199, in _call
    raise Fail(err_msg)
Fail: Execution of '/usr/bin/zypper --quiet install --auto-agree-with-licenses --no-confirm --repo HDP-UTILS-2.2.2.0-2538 --repo HDP-2.2.2.0-2538 'hadooplzo_2_2_2_0_2538*'' returned 4.
Problem: nothing provides libjvm.so()(64bit) needed by hadooplzo_2_2_2_0_2538-native-0.6.0.2.2.2.0-2538.x86_64
 Solution 1: do not install hadooplzo_2_2_2_0_2538-native-0.6.0.2.2.2.0-2538.x86_64
 Solution 2: break hadooplzo_2_2_2_0_2538-native-0.6.0.2.2.2.0-2538.x86_64 by ignoring some of its dependencies
Choose from above solutions by number or cancel [1/2/c] (c): c
{code}

Diffs
-----
  ambari-server/src/main/resources/custom_actions/scripts/install_packages.py 5819390
  ambari-server/src/test/python/custom_actions/TestInstallPackages.py 4975757
  ambari-server/src/test/python/custom_actions/configs/install_packages_config.json 4f262ea

Diff: https://reviews.apache.org/r/31777/diff/

Testing
-------
Ran 234 tests in 6.845s
OK
Total run: 609  Total errors: 0  Total failures: 0
OK

Thanks,
Dmitro Lisnichenko
Re: Review Request 31768: Oozie Service Check fails after upgrade secured cluster 1.6.1-2.0.0
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31768/#review75377
-----------------------------------------------------------

Ship it!

Ship It!

- Dmytro Sen


On March 5, 2015, 7:29 p.m., Vitalyi Brodetskyi wrote:

-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31768/
-----------------------------------------------------------

(Updated March 5, 2015, 7:29 p.m.)

Review request for Ambari, Dmytro Sen and Robert Levas.

Bugs: AMBARI-9948
    https://issues.apache.org/jira/browse/AMBARI-9948

Repository: ambari

Description
-------
STR:
1. Install Ambari 1.6.1, HDP 2.1
2. Enable security
3. Upgrade to Ambari 2.0.0
4. Execute Kerberos Wizard.

Diffs
-----
  ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/configuration/oozie-site.xml e72a8be
  ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/files/oozieSmoke2.sh 84f913c
  ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/service_check.py 471cd35
  ambari-server/src/test/python/stacks/2.0.6/OOZIE/test_service_check.py a478b57

Diff: https://reviews.apache.org/r/31768/diff/

Testing
-------
mvn clean test

Thanks,
Vitalyi Brodetskyi
Re: Review Request 31768: Oozie Service Check fails after upgrade secured cluster 1.6.1-2.0.0
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31768/#review75378
-----------------------------------------------------------

Ship it!

Ship It!

- Dmitro Lisnichenko


On March 5, 2015, 7:45 p.m., Vitalyi Brodetskyi wrote:

-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31768/
-----------------------------------------------------------

(Updated March 5, 2015, 7:45 p.m.)

Review request for Ambari, Dmitro Lisnichenko, Dmytro Sen, and Robert Levas.

Bugs: AMBARI-9948
    https://issues.apache.org/jira/browse/AMBARI-9948

Repository: ambari

Description
-------
STR:
1. Install Ambari 1.6.1, HDP 2.1
2. Enable security
3. Upgrade to Ambari 2.0.0
4. Execute Kerberos Wizard.

Diffs
-----
  ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/configuration/oozie-site.xml e72a8be
  ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/files/oozieSmoke2.sh 84f913c
  ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/service_check.py 471cd35
  ambari-server/src/test/python/stacks/2.0.6/OOZIE/test_service_check.py a478b57

Diff: https://reviews.apache.org/r/31768/diff/

Testing
-------
mvn clean test

Thanks,
Vitalyi Brodetskyi
Re: Review Request 31777: hadooplzo_2_2_2_0_2538-native can not be upgraded during RU
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/31777/#review75393 --- Ship it! Ship It! - Dmytro Sen On March 5, 2015, 8:54 p.m., Dmitro Lisnichenko wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/31777/ --- (Updated March 5, 2015, 8:54 p.m.) Review request for Ambari and Dmytro Sen. Bugs: AMBARI-9951 https://issues.apache.org/jira/browse/AMBARI-9951 Repository: ambari Description --- {code}
2015-03-05 13:43:38,414 - Installing package lzo ('/usr/bin/zypper --quiet install --auto-agree-with-licenses --no-confirm --repo HDP-UTILS-2.2.2.0-2538 --repo HDP-2.2.2.0-2538 lzo')
2015-03-05 13:43:44,829 - uPackage['hadooplzo_2_2_2_0_2538*'] {'use_repos': ['base', 'HDP-UTILS-2.2.2.0-2538', 'HDP-2.2.2.0-2538']}
2015-03-05 13:43:45,787 - Installing package hadooplzo_2_2_2_0_2538* ('/usr/bin/zypper --quiet install --auto-agree-with-licenses --no-confirm --repo HDP-UTILS-2.2.2.0-2538 --repo HDP-2.2.2.0-2538 'hadooplzo_2_2_2_0_2538*'')
Can not install packages.
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/custom_actions/scripts/install_packages.py", line 96, in actionexecute
    Package(name, use_repos=list(current_repo_files) if OSCheck.is_ubuntu_family() else current_repositories)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 43, in action_install
    self.install_package(package_name, self.resource.use_repos)
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/zypper.py", line 72, in install_package
    shell.checked_call(cmd, sudo=True, logoutput=self.get_logoutput())
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
    return function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 82, in checked_call
    return _call(command, logoutput, True, cwd, env, preexec_fn, user, wait_for_finish, timeout, path, sudo, on_new_line)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 199, in _call
    raise Fail(err_msg)
Fail: Execution of '/usr/bin/zypper --quiet install --auto-agree-with-licenses --no-confirm --repo HDP-UTILS-2.2.2.0-2538 --repo HDP-2.2.2.0-2538 'hadooplzo_2_2_2_0_2538*'' returned 4.
Problem: nothing provides libjvm.so()(64bit) needed by hadooplzo_2_2_2_0_2538-native-0.6.0.2.2.2.0-2538.x86_64
 Solution 1: do not install hadooplzo_2_2_2_0_2538-native-0.6.0.2.2.2.0-2538.x86_64
 Solution 2: break hadooplzo_2_2_2_0_2538-native-0.6.0.2.2.2.0-2538.x86_64 by ignoring some of its dependencies
Choose from above solutions by number or cancel [1/2/c] (c): c
{code} Diffs - ambari-server/src/main/resources/custom_actions/scripts/install_packages.py 5819390 ambari-server/src/test/python/custom_actions/TestInstallPackages.py 4975757 Diff: https://reviews.apache.org/r/31777/diff/ Testing --- -- Ran 234 tests in 6.845s OK -- Total run:609 Total errors:0 Total failures:0 OK Thanks, Dmitro Lisnichenko
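The solver failure quoted above (zypper returning 4 after "nothing provides libjvm.so()(64bit)") can be detected programmatically instead of leaving zypper at its interactive prompt. A minimal sketch with hypothetical helper names (illustration only, not the actual install_packages.py patch):

```python
import re

# Hypothetical helpers for illustration; not the actual Ambari fix.

def build_zypper_install_cmd(name, repos):
    """Rebuild the non-interactive zypper command line shown in the log above."""
    cmd = ["/usr/bin/zypper", "--quiet", "install",
           "--auto-agree-with-licenses", "--no-confirm"]
    for repo in repos:
        cmd += ["--repo", repo]
    cmd.append(name)
    return cmd

def unresolved_requirement(zypper_output):
    """Pull the missing capability and the package that needs it out of
    zypper's solver output, or return None when no such problem is reported."""
    m = re.search(r"nothing provides (\S+) needed by (\S+)", zypper_output)
    return (m.group(1), m.group(2)) if m else None
```

On output like the failure above, `unresolved_requirement` returns `('libjvm.so()(64bit)', 'hadooplzo_2_2_2_0_2538-native-0.6.0.2.2.2.0-2538.x86_64')`, which an install action could log as a clear error instead of echoing zypper's solution prompt.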
Re: Review Request 31777: hadooplzo_2_2_2_0_2538-native can not be upgraded during RU
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/31777/ --- (Updated March 5, 2015, 9:04 p.m.) Review request for Ambari and Dmytro Sen. Bugs: AMBARI-9951 https://issues.apache.org/jira/browse/AMBARI-9951 Repository: ambari Description --- {code}
2015-03-05 13:43:38,414 - Installing package lzo ('/usr/bin/zypper --quiet install --auto-agree-with-licenses --no-confirm --repo HDP-UTILS-2.2.2.0-2538 --repo HDP-2.2.2.0-2538 lzo')
2015-03-05 13:43:44,829 - uPackage['hadooplzo_2_2_2_0_2538*'] {'use_repos': ['base', 'HDP-UTILS-2.2.2.0-2538', 'HDP-2.2.2.0-2538']}
2015-03-05 13:43:45,787 - Installing package hadooplzo_2_2_2_0_2538* ('/usr/bin/zypper --quiet install --auto-agree-with-licenses --no-confirm --repo HDP-UTILS-2.2.2.0-2538 --repo HDP-2.2.2.0-2538 'hadooplzo_2_2_2_0_2538*'')
Can not install packages.
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/custom_actions/scripts/install_packages.py", line 96, in actionexecute
    Package(name, use_repos=list(current_repo_files) if OSCheck.is_ubuntu_family() else current_repositories)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 43, in action_install
    self.install_package(package_name, self.resource.use_repos)
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/zypper.py", line 72, in install_package
    shell.checked_call(cmd, sudo=True, logoutput=self.get_logoutput())
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
    return function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 82, in checked_call
    return _call(command, logoutput, True, cwd, env, preexec_fn, user, wait_for_finish, timeout, path, sudo, on_new_line)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 199, in _call
    raise Fail(err_msg)
Fail: Execution of '/usr/bin/zypper --quiet install --auto-agree-with-licenses --no-confirm --repo HDP-UTILS-2.2.2.0-2538 --repo HDP-2.2.2.0-2538 'hadooplzo_2_2_2_0_2538*'' returned 4.
Problem: nothing provides libjvm.so()(64bit) needed by hadooplzo_2_2_2_0_2538-native-0.6.0.2.2.2.0-2538.x86_64
 Solution 1: do not install hadooplzo_2_2_2_0_2538-native-0.6.0.2.2.2.0-2538.x86_64
 Solution 2: break hadooplzo_2_2_2_0_2538-native-0.6.0.2.2.2.0-2538.x86_64 by ignoring some of its dependencies
Choose from above solutions by number or cancel [1/2/c] (c): c
{code} Diffs (updated) - ambari-server/src/main/resources/custom_actions/scripts/install_packages.py 5819390 ambari-server/src/test/python/custom_actions/TestInstallPackages.py 4975757 ambari-server/src/test/python/custom_actions/configs/install_packages_config.json 4f262ea Diff: https://reviews.apache.org/r/31777/diff/ Testing --- -- Ran 234 tests in 6.845s OK -- Total run:609 Total errors:0 Total failures:0 OK Thanks, Dmitro Lisnichenko
[jira] [Created] (AMBARI-9949) Add Service: Choose Services page, selected service issue during navigation to wizard from Stack Versions page.
Antonenko Alexander created AMBARI-9949: --- Summary: Add Service: Choose Services page, selected service issue during navigation to wizard from Stack Versions page. Key: AMBARI-9949 URL: https://issues.apache.org/jira/browse/AMBARI-9949 Project: Ambari Issue Type: Bug Components: ambari-web Affects Versions: 2.0.0 Reporter: Antonenko Alexander Assignee: Antonenko Alexander Priority: Critical Fix For: 2.0.0 STR: - install cluster with HDFS and ZooKeeper - go to Admin - Stack Versions page - choose any service to add, for example Storm - deselect Storm service and select another service on Choose Services page - proceed to next step - *AR:* Storm master components present on Assign Masters page *ER:* No Storm components on Assign Masters page - go to the previous step (Choose Services) - *AR:* Storm selected *ER:* Storm deselected -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (AMBARI-9950) YARN RM HA Mode Configurations Are Incorrect
Antonenko Alexander created AMBARI-9950: --- Summary: YARN RM HA Mode Configurations Are Incorrect Key: AMBARI-9950 URL: https://issues.apache.org/jira/browse/AMBARI-9950 Project: Ambari Issue Type: Bug Components: ambari-web Affects Versions: 2.0.0 Reporter: Antonenko Alexander Assignee: Antonenko Alexander Priority: Critical Fix For: 2.0.0 When enabling YARN RM HA mode, it doesn't appear as though we are creating the additional {{yarn-site}} properties correctly. Consider that we do create the following: {noformat} yarn.resourcemanager.ha.automatic-failover.zk-base-path : /yarn-leader-election, yarn.resourcemanager.ha.enabled : true, yarn.resourcemanager.ha.rm-ids : rm1,rm2, yarn.resourcemanager.hostname : c6402.ambari.apache.org, yarn.resourcemanager.hostname.rm1 : c6402.ambari.apache.org, yarn.resourcemanager.hostname.rm2 : c6403.ambari.apache.org, yarn.resourcemanager.webapp.address : c6402.ambari.apache.org:8088, yarn.resourcemanager.webapp.https.address : c6402.ambari.apache.org:8090, {noformat} You can see that we have created aliases (rm1 and rm2) and created some dynamic keys for these hosts, such as {{yarn.resourcemanager.hostname.rm1}}. However, the {{yarn-site}} documentation states that other properties, such as the web address, also need to be specified in a similar manner. Otherwise, how does YARN know which port to spin up the RM on each host? {noformat:title=Missing Properties} yarn.resourcemanager.webapp.address.rm1 yarn.resourcemanager.webapp.address.rm2 yarn.resourcemanager.webapp.https.address.rm1 yarn.resourcemanager.webapp.https.address.rm2 {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
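The missing per-alias keys listed in the description can be derived mechanically: for each alias in yarn.resourcemanager.ha.rm-ids, combine the alias hostname with the port of the base webapp property. A hedged sketch (illustrative Python with a made-up helper name, not Ambari's actual ambari-web logic), using the values quoted above:

```python
def expand_rm_ha_properties(yarn_site):
    """Derive per-alias webapp address keys (e.g.
    yarn.resourcemanager.webapp.address.rm1) from the alias hostnames,
    reusing the ports of the base webapp properties. Illustrative only."""
    rm_ids = yarn_site["yarn.resourcemanager.ha.rm-ids"].split(",")
    derived = {}
    for base in ("yarn.resourcemanager.webapp.address",
                 "yarn.resourcemanager.webapp.https.address"):
        port = yarn_site[base].rsplit(":", 1)[1]
        for rm_id in rm_ids:
            host = yarn_site["yarn.resourcemanager.hostname.%s" % rm_id]
            derived["%s.%s" % (base, rm_id)] = "%s:%s" % (host, port)
    return derived

# The yarn-site values from the bug description above.
yarn_site = {
    "yarn.resourcemanager.ha.rm-ids": "rm1,rm2",
    "yarn.resourcemanager.hostname.rm1": "c6402.ambari.apache.org",
    "yarn.resourcemanager.hostname.rm2": "c6403.ambari.apache.org",
    "yarn.resourcemanager.webapp.address": "c6402.ambari.apache.org:8088",
    "yarn.resourcemanager.webapp.https.address": "c6402.ambari.apache.org:8090",
}
derived = expand_rm_ha_properties(yarn_site)
```

This yields exactly the four "Missing Properties" keys, e.g. yarn.resourcemanager.webapp.address.rm2 = c6403.ambari.apache.org:8088, so each RM knows which ports to bind on its own host.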
Re: Review Request 31768: Oozie Service Check fails after upgrade secured cluster 1.6.1-2.0.0
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/31768/#review75381 --- Ship it! Ship It! - Robert Levas On March 5, 2015, 2:45 p.m., Vitalyi Brodetskyi wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/31768/ --- (Updated March 5, 2015, 2:45 p.m.) Review request for Ambari, Dmitro Lisnichenko, Dmytro Sen, and Robert Levas. Bugs: AMBARI-9948 https://issues.apache.org/jira/browse/AMBARI-9948 Repository: ambari Description --- STR: 1. Install Ambari 1.6.1, HDP 2.1 2. Enable security 3. Upgrade to Ambari 2.0.0 4. Execute Kerberos Wizard. Diffs - ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/configuration/oozie-site.xml e72a8be ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/files/oozieSmoke2.sh 84f913c ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/service_check.py 471cd35 ambari-server/src/test/python/stacks/2.0.6/OOZIE/test_service_check.py a478b57 Diff: https://reviews.apache.org/r/31768/diff/ Testing --- mvn clean test Thanks, Vitalyi Brodetskyi
[jira] [Commented] (AMBARI-9947) Oozie alert after ambari upgrade 1.7.0-2.0.0 secured cluster
[ https://issues.apache.org/jira/browse/AMBARI-9947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14349315#comment-14349315 ] Hudson commented on AMBARI-9947: ABORTED: Integrated in Ambari-branch-2.0.0 #12 (See [https://builds.apache.org/job/Ambari-branch-2.0.0/12/]) AMBARI-9947 Oozie alert after ambari upgrade 1.7.0-2.0.0 secured cluster (dsen) (dsen: http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=e9623812ea36a4e3b7df95358e34636852a412f3) * ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie.py * ambari-server/src/test/python/stacks/2.0.6/OOZIE/test_oozie_server.py Oozie alert after ambari upgrade 1.7.0-2.0.0 secured cluster - Key: AMBARI-9947 URL: https://issues.apache.org/jira/browse/AMBARI-9947 Project: Ambari Issue Type: Bug Components: ambari-server Affects Versions: 2.0.0 Reporter: Dmytro Sen Assignee: Dmytro Sen Priority: Blocker Fix For: 2.0.0 Attachments: AMBARI-9947.patch After upgrading secured cluster and restarting all services oozie has warning alert: Oozie Server Web UI HTTP 500 response in 0.000 seconds STR: 1. Installed ambari 1.7.0 on 3-node cluster, all services 2. Enable security 3. Upgrade ambari to 2.0.0 4. Stop all services 5. Start all services -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-9948) Oozie Service Check fails after upgrade secured cluster 1.6.1-2.0.0
[ https://issues.apache.org/jira/browse/AMBARI-9948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitaly Brodetskyi updated AMBARI-9948: -- Attachment: AMBARI-9948.patch Oozie Service Check fails after upgrade secured cluster 1.6.1-2.0.0 Key: AMBARI-9948 URL: https://issues.apache.org/jira/browse/AMBARI-9948 Project: Ambari Issue Type: Bug Components: ambari-server Affects Versions: 2.0.0 Reporter: Vitaly Brodetskyi Assignee: Vitaly Brodetskyi Priority: Critical Fix For: 2.0.0 Attachments: AMBARI-9948.patch STR: 1. Install Ambari 1.6.1, HDP 2.1 2. Enable security 3. Upgrade to Ambari 2.0.0 4. Execute Kerberos Wizard. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-9949) Add Service: Choose Services page, selected service issue during navigation to wizard from Stack Versions page.
[ https://issues.apache.org/jira/browse/AMBARI-9949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14349373#comment-14349373 ] Andrii Babiichuk commented on AMBARI-9949: -- +1 for the patch Add Service: Choose Services page, selected service issue during navigation to wizard from Stack Versions page. --- Key: AMBARI-9949 URL: https://issues.apache.org/jira/browse/AMBARI-9949 Project: Ambari Issue Type: Bug Components: ambari-web Affects Versions: 2.0.0 Reporter: Antonenko Alexander Assignee: Antonenko Alexander Priority: Critical Fix For: 2.0.0 Attachments: AMBARI-9949.patch STR: - install cluster with HDFS and ZooKeeper - go to Admin - Stack Versions page - choose any service to add, for example Storm - deselect Storm service and select another one service on Choose Services page - proceed to next step - *AR:* Storm master components present on Assign Masters page *ER:* No Storm components on Assign Master page - go to the previous step (Choose Services) - *AR:* Storm selected *ER:* Storm deselected -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-9948) Oozie Service Check fails after upgrade secured cluster 1.6.1-2.0.0
[ https://issues.apache.org/jira/browse/AMBARI-9948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14349479#comment-14349479 ] Hudson commented on AMBARI-9948: SUCCESS: Integrated in Ambari-trunk-Commit #1961 (See [https://builds.apache.org/job/Ambari-trunk-Commit/1961/]) AMBARI-9948. Oozie Service Check fails after upgrade secured cluster 1.6.1-2.0.0.(vbrodetskyi) (vbrodetskyi: http://git-wip-us.apache.org/repos/asf?p=ambari.gita=commith=bdfba90bb6185ca90b2fc1fcca6d66695aa5f71d) * ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/service_check.py * ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/configuration/oozie-site.xml * ambari-server/src/test/python/stacks/2.0.6/OOZIE/test_service_check.py * ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/files/oozieSmoke2.sh Oozie Service Check fails after upgrade secured cluster 1.6.1-2.0.0 Key: AMBARI-9948 URL: https://issues.apache.org/jira/browse/AMBARI-9948 Project: Ambari Issue Type: Bug Components: ambari-server Affects Versions: 2.0.0 Reporter: Vitaly Brodetskyi Assignee: Vitaly Brodetskyi Priority: Critical Fix For: 2.0.0 Attachments: AMBARI-9948.patch STR: 1. Install Ambari 1.6.1, HDP 2.1 2. Enable security 3. Upgrade to Ambari 2.0.0 4. Execute Kerberos Wizard. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-9948) Oozie Service Check fails after upgrade secured cluster 1.6.1-2.0.0
[ https://issues.apache.org/jira/browse/AMBARI-9948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14349322#comment-14349322 ] Hadoop QA commented on AMBARI-9948: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12702862/AMBARI-9948.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:red}-1 core tests{color}. The test build failed in ambari-server Test results: https://builds.apache.org/job/Ambari-trunk-test-patch/1937//testReport/ Console output: https://builds.apache.org/job/Ambari-trunk-test-patch/1937//console This message is automatically generated. Oozie Service Check fails after upgrade secured cluster 1.6.1-2.0.0 Key: AMBARI-9948 URL: https://issues.apache.org/jira/browse/AMBARI-9948 Project: Ambari Issue Type: Bug Components: ambari-server Affects Versions: 2.0.0 Reporter: Vitaly Brodetskyi Assignee: Vitaly Brodetskyi Priority: Critical Fix For: 2.0.0 Attachments: AMBARI-9948.patch STR: 1. Install Ambari 1.6.1, HDP 2.1 2. Enable security 3. Upgrade to Ambari 2.0.0 4. Execute Kerberos Wizard. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-9944) RU: web should not display older versions by default
[ https://issues.apache.org/jira/browse/AMBARI-9944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14349401#comment-14349401 ] Hudson commented on AMBARI-9944: SUCCESS: Integrated in Ambari-trunk-Commit #1960 (See [https://builds.apache.org/job/Ambari-trunk-Commit/1960/]) AMBARI-9944 RU: web should not display older versions by default. (ababiichuk) (ababiichuk: http://git-wip-us.apache.org/repos/asf?p=ambari.gita=commith=d39ceac00289c0372467db506abc856fdf3b4a1e) * ambari-web/app/views/main/host/stack_versions_view.js * ambari-web/app/views/main/host.js * ambari-web/app/models/host_stack_version.js * ambari-web/test/views/main/host/stack_versions_view_test.js * ambari-web/app/views/main/admin/stack_upgrade/versions_view.js * ambari-web/app/templates/main/host.hbs * ambari-web/test/views/main/admin/stack_upgrade/version_view_test.js * ambari-web/app/config.js * ambari-web/app/mappers/hosts_mapper.js RU: web should not display older versions by default Key: AMBARI-9944 URL: https://issues.apache.org/jira/browse/AMBARI-9944 Project: Ambari Issue Type: Bug Components: ambari-web Affects Versions: 2.0.0 Reporter: Andrii Babiichuk Assignee: Andrii Babiichuk Priority: Critical Fix For: 2.0.0 Attachments: AMBARI-9944.patch Ambari Web should not display older versions by default. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-9368) Deadlock Between Dependent Cluster/Service/Component/Host Implementations
[ https://issues.apache.org/jira/browse/AMBARI-9368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14349435#comment-14349435 ] Hudson commented on AMBARI-9368: SUCCESS: Integrated in Ambari-branch-1.7.0-docker #181 (See [https://builds.apache.org/job/Ambari-branch-1.7.0-docker/181/]) AMBARI-9368 - Deadlock Between Dependent Cluster/Service/Component/Host Implementations (jonathanhurley) (jhurley: http://git-wip-us.apache.org/repos/asf?p=ambari.gita=commith=dd572d3544eb6a36c78155ae88ac423a16922d00) * ambari-server/src/main/java/org/apache/ambari/server/state/svccomphost/ServiceComponentHostImpl.java * ambari-web/package.json * ambari-server/src/test/java/org/apache/ambari/server/state/cluster/ClusterDeadlockTest.java * ambari-server/src/main/java/org/apache/ambari/server/state/ServiceComponentImpl.java * ambari-server/src/main/java/org/apache/ambari/server/state/ServiceImpl.java * ambari-server/src/main/java/org/apache/ambari/server/state/cluster/ClusterImpl.java Deadlock Between Dependent Cluster/Service/Component/Host Implementations - Key: AMBARI-9368 URL: https://issues.apache.org/jira/browse/AMBARI-9368 Project: Ambari Issue Type: Bug Components: ambari-server Affects Versions: 1.6.1 Reporter: Jonathan Hurley Assignee: Jonathan Hurley Priority: Critical Fix For: 2.0.0 Attachments: AMBARI-9368.patch, jstack.29096, monitor_lock-1-pid10099.txt, monitor_lock-2-pid10099.txt, monitor_lock-3-pid10099.txt Looks like a textbook deadlock. Why jstack doesn't report it, I don't know. 
Call Hierarchy {code}
qtp572501352-104
  ServiceComponentImpl.convertToResponse
    readWriteLock.readLock().lock() ACQUIRED
    ServiceComponentHostImpl.getState()
      readLock.lock() BLOCKED

qtp572501352-34
  ServiceComponentHostImpl.persist()
    writeLock.lock() ACQUIRED
    ServiceComponentImpl.refresh()
      readWriteLock.writeLock() BLOCKED
{code} Deadlock Order {code}
qtp572501352-104 ServiceComponentImpl.convertToResponse readWriteLock.readLock().lock() ACQUIRED
qtp572501352-34 ServiceComponentHostImpl.persist() writeLock.lock() ACQUIRED
  ServiceComponentImpl.refresh() readWriteLock.writeLock() BLOCKED
qtp572501352-104 ServiceComponentHostImpl.getState() readLock.lock() BLOCKED
{code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
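The hierarchy above is a classic lock-ordering cycle: thread 104 holds the component lock and blocks on the host lock, while thread 34 holds the host lock and blocks on the component lock. One standard remedy, sketched here in Python rather than the Java of the Cluster/Service/Component implementations (a general illustration of the technique, not the AMBARI-9368 patch), is to acquire the locks in one fixed global order so no two threads can ever hold them in opposite orders:

```python
import threading

# Plain mutexes standing in for the component and host ReadWriteLocks
# in the hierarchy above (the ordering argument is the same).
component_lock = threading.Lock()
host_lock = threading.Lock()

# Global rule: component_lock is always taken before host_lock.
LOCK_ORDER = [component_lock, host_lock]

def with_both_locks(action):
    """Acquire every needed lock in the fixed global order, run the
    action, then release in reverse order. A cycle in the waits-for
    graph is impossible under a total lock order."""
    for lock in LOCK_ORDER:
        lock.acquire()
    try:
        return action()
    finally:
        for lock in reversed(LOCK_ORDER):
            lock.release()

# Both "request threads" now finish instead of deadlocking.
results = []
t1 = threading.Thread(
    target=lambda: results.append(with_both_locks(lambda: "convertToResponse")))
t2 = threading.Thread(
    target=lambda: results.append(with_both_locks(lambda: "persist")))
t1.start(); t2.start()
t1.join(); t2.join()
```

The trade-off is coarser locking: a thread that only needs the host lock still waits behind holders of the component lock, which is why such fixes are often paired with shrinking the critical sections themselves.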
[jira] [Commented] (AMBARI-9948) Oozie Service Check fails after upgrade secured cluster 1.6.1-2.0.0
[ https://issues.apache.org/jira/browse/AMBARI-9948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14349480#comment-14349480 ] Hudson commented on AMBARI-9948: ABORTED: Integrated in Ambari-branch-2.0.0 #14 (See [https://builds.apache.org/job/Ambari-branch-2.0.0/14/]) AMBARI-9948. Oozie Service Check fails after upgrade secured cluster 1.6.1-2.0.0.(vbrodetskyi) (vbrodetskyi: http://git-wip-us.apache.org/repos/asf?p=ambari.gita=commith=165944598ceac8008274d354a7cd4ba21dbfffe5) * ambari-server/src/test/python/stacks/2.0.6/OOZIE/test_service_check.py * ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/service_check.py * ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/files/oozieSmoke2.sh * ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/configuration/oozie-site.xml Oozie Service Check fails after upgrade secured cluster 1.6.1-2.0.0 Key: AMBARI-9948 URL: https://issues.apache.org/jira/browse/AMBARI-9948 Project: Ambari Issue Type: Bug Components: ambari-server Affects Versions: 2.0.0 Reporter: Vitaly Brodetskyi Assignee: Vitaly Brodetskyi Priority: Critical Fix For: 2.0.0 Attachments: AMBARI-9948.patch STR: 1. Install Ambari 1.6.1, HDP 2.1 2. Enable security 3. Upgrade to Ambari 2.0.0 4. Execute Kerberos Wizard. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-9951) hadooplzo_2_2_2_0_2538-native can not be upgraded during RU
[ https://issues.apache.org/jira/browse/AMBARI-9951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14349567#comment-14349567 ] Hadoop QA commented on AMBARI-9951: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12702904/AMBARI-9951.2.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 2 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:red}-1 core tests{color}. The test build failed in ambari-server Test results: https://builds.apache.org/job/Ambari-trunk-test-patch/1938//testReport/ Console output: https://builds.apache.org/job/Ambari-trunk-test-patch/1938//console This message is automatically generated. hadooplzo_2_2_2_0_2538-native can not be upgraded during RU --- Key: AMBARI-9951 URL: https://issues.apache.org/jira/browse/AMBARI-9951 Project: Ambari Issue Type: Bug Components: ambari-server Affects Versions: 2.0.0 Reporter: Dmitry Lysnichenko Assignee: Dmitry Lysnichenko Fix For: 2.0.0 Attachments: AMBARI-9951.2.patch, AMBARI-9951.patch {code} 2015-03-05 13:43:38,414 - Installing package lzo ('/usr/bin/zypper --quiet install --auto-agree-with-licenses --no-confirm --repo HDP-UTILS-2.2.2.0-2538 --repo HDP-2.2.2.0-2538 lzo') 2015-03-05 13:43:44,829 - uPackage['hadooplzo_2_2_2_0_2538*'] {'use_repos': ['base', 'HDP-UTILS-2.2.2.0-2538', 'HDP-2.2.2.0-2538']} 2015-03-05 13:43:45,787 - Installing package hadooplzo_2_2_2_0_2538* ('/usr/bin/zypper --quiet install --auto-agree-with-licenses --no-confirm --repo HDP-UTILS-2.2.2.0-2538 --repo HDP-2.2.2.0-2538 'hadooplzo_2_2_2_0_2538*'') Can not install packages. 
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/custom_actions/scripts/install_packages.py", line 96, in actionexecute
    Package(name, use_repos=list(current_repo_files) if OSCheck.is_ubuntu_family() else current_repositories)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 43, in action_install
    self.install_package(package_name, self.resource.use_repos)
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/zypper.py", line 72, in install_package
    shell.checked_call(cmd, sudo=True, logoutput=self.get_logoutput())
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
    return function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 82, in checked_call
    return _call(command, logoutput, True, cwd, env, preexec_fn, user, wait_for_finish, timeout, path, sudo, on_new_line)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 199, in _call
    raise Fail(err_msg)
Fail: Execution of '/usr/bin/zypper --quiet install --auto-agree-with-licenses --no-confirm --repo HDP-UTILS-2.2.2.0-2538 --repo HDP-2.2.2.0-2538 'hadooplzo_2_2_2_0_2538*'' returned 4.
Problem: nothing provides libjvm.so()(64bit) needed by hadooplzo_2_2_2_0_2538-native-0.6.0.2.2.2.0-2538.x86_64
 Solution 1: do not install hadooplzo_2_2_2_0_2538-native-0.6.0.2.2.2.0-2538.x86_64
 Solution 2: break hadooplzo_2_2_2_0_2538-native-0.6.0.2.2.2.0-2538.x86_64 by ignoring some of its dependencies
Choose from above solutions by number or cancel [1/2/c] (c): c
{code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-9949) Add Service: Choose Services page, selected service issue during navigation to wizard from Stack Versions page.
[ https://issues.apache.org/jira/browse/AMBARI-9949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14349582#comment-14349582 ] Hadoop QA commented on AMBARI-9949: --- {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12702887/AMBARI-9949.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 1 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in . Test results: https://builds.apache.org/job/Ambari-trunk-test-patch/1940//testReport/ Console output: https://builds.apache.org/job/Ambari-trunk-test-patch/1940//console This message is automatically generated. Add Service: Choose Services page, selected service issue during navigation to wizard from Stack Versions page. --- Key: AMBARI-9949 URL: https://issues.apache.org/jira/browse/AMBARI-9949 Project: Ambari Issue Type: Bug Components: ambari-web Affects Versions: 2.0.0 Reporter: Antonenko Alexander Assignee: Antonenko Alexander Priority: Critical Fix For: 2.0.0 Attachments: AMBARI-9949.patch STR: - install cluster with HDFS and ZooKeeper - go to Admin - Stack Versions page - choose any service to add, for example Storm - deselect Storm service and select another one service on Choose Services page - proceed to next step - *AR:* Storm master components present on Assign Masters page *ER:* No Storm components on Assign Master page - go to the previous step (Choose Services) - *AR:* Storm selected *ER:* Storm deselected -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-9950) YARN RM HA Mode Configurations Are Incorrect
[ https://issues.apache.org/jira/browse/AMBARI-9950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14349597#comment-14349597 ] Hadoop QA commented on AMBARI-9950: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12702888/AMBARI-9950.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in . Test results: https://builds.apache.org/job/Ambari-trunk-test-patch/1941//testReport/ Console output: https://builds.apache.org/job/Ambari-trunk-test-patch/1941//console This message is automatically generated. YARN RM HA Mode Configurations Are Incorrect Key: AMBARI-9950 URL: https://issues.apache.org/jira/browse/AMBARI-9950 Project: Ambari Issue Type: Bug Components: ambari-web Affects Versions: 2.0.0 Reporter: Antonenko Alexander Assignee: Antonenko Alexander Priority: Critical Fix For: 2.0.0 Attachments: AMBARI-9950.patch When enabling YARN RM HA mode, it doesn't appear as though we are creating the additional {{yarn-site}} properties correctly. 
Consider that we do create the following: {noformat} yarn.resourcemanager.ha.automatic-failover.zk-base-path : /yarn-leader-election, yarn.resourcemanager.ha.enabled : true, yarn.resourcemanager.ha.rm-ids : rm1,rm2, yarn.resourcemanager.hostname : c6402.ambari.apache.org, yarn.resourcemanager.hostname.rm1 : c6402.ambari.apache.org, yarn.resourcemanager.hostname.rm2 : c6403.ambari.apache.org, yarn.resourcemanager.webapp.address : c6402.ambari.apache.org:8088, yarn.resourcemanager.webapp.https.address : c6402.ambari.apache.org:8090”, {noformat} You can see that we have created aliases (rm1 and rm2) and created some dynamic keys for these hosts, such as {{yarn.resourcemanager.hostname.rm1}} However, the {{yarn-site}} documentation states that other properties, such as the web address, need to also be specified in a similar manner. Otherwise, how does YARN know which port to spin up the RM on each host? {noformat:title=Missing Properties} yarn.resourcemanager.webapp.address.rm1 yarn.resourcemanager.webapp.address.rm2 yarn.resourcemanager.webapp.https.address.rm1 yarn.resourcemanager.webapp.https.address.rm2 {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-9939) RU - Service Check group to include all services with a service_check script
[ https://issues.apache.org/jira/browse/AMBARI-9939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14349659#comment-14349659 ] Alejandro Fernandez commented on AMBARI-9939: - Pushed to branch-2.0.0 in commit b1e41005b198e467fc0668c3ad2d7c4123315851 RU - Service Check group to include all services with a service_check script Key: AMBARI-9939 URL: https://issues.apache.org/jira/browse/AMBARI-9939 Project: Ambari Issue Type: Bug Components: ambari-server Affects Versions: 2.0.0 Reporter: Alejandro Fernandez Assignee: Alejandro Fernandez Labels: rolling_upgrade Fix For: 2.0.0 Attachments: AMBARI-9939.patch Installed a minimal 3-node cluster with HDFS, MR, YARN, Pig, Tez. Performed an RU. The expected result is for the last service check to be run on all components. However, it skipped the Pig Service Check. http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8
{code}
{
  "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8",
  "UpgradeGroup" : {
    "completed_task_count" : 4,
    "group_id" : 8,
    "in_progress_task_count" : 0,
    "name" : "SERVICE_CHECK",
    "progress_percent" : 100.0,
    "request_id" : 32,
    "status" : "COMPLETED",
    "title" : "All Service Checks",
    "total_task_count" : 4
  },
  "upgrade_items" : [
    { "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items/47",
      "UpgradeItem" : { "cluster_name" : "c1", "group_id" : 8, "request_id" : 32, "stage_id" : 47 } },
    { "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items/48",
      "UpgradeItem" : { "cluster_name" : "c1", "group_id" : 8, "request_id" : 32, "stage_id" : 48 } },
    { "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items/49",
      "UpgradeItem" : { "cluster_name" : "c1", "group_id" : 8, "request_id" : 32, "stage_id" : 49 } },
    { "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items/50",
      "UpgradeItem" : { "cluster_name" : "c1", "group_id" : 8, "request_id" : 32, "stage_id" : 50 } }
  ]
}
{code}
http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items?fields=UpgradeItem/text
{code}
{
  "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items?fields=UpgradeItem/text",
  "items" : [
    { "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items/47",
      "UpgradeItem" : { "cluster_name" : "c1", "group_id" : 8, "request_id" : 32, "stage_id" : 47, "text" : "Service Check HDFS" } },
    { "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items/48",
      "UpgradeItem" : { "cluster_name" : "c1", "group_id" : 8, "request_id" : 32, "stage_id" : 48, "text" : "Service Check YARN" } },
    { "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items/49",
      "UpgradeItem" : { "cluster_name" : "c1", "group_id" : 8, "request_id" : 32, "stage_id" : 49, "text" : "Service Check ZooKeeper" } },
    { "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items/50",
      "UpgradeItem" : { "cluster_name" : "c1", "group_id" : 8, "request_id" : 32, "stage_id" : 50, "text" : "Service Check MapReduce2" } }
  ]
}
{code}
The Upgrade Pack contains:
{code}
<group name="SERVICE_CHECK" title="All Service Checks" xsi:type="service-check">
  <skippable>true</skippable>
  <direction>UPGRADE</direction>
  <priority>
    <service>HDFS</service>
    <service>YARN</service>
    <service>HBASE</service>
  </priority>
</group>
{code}
Because the Pig service check was not run, the new Tez tarball was not copied to HDFS. The underlying issue is that a service is not added to the Service Check group if it is a clientOnly service. However, Pig is clientOnly but still has a service check Python script. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
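The corrected grouping rule described above can be sketched in a few lines: membership in the SERVICE_CHECK group should be decided by the presence of a service-check script, not by whether the service is client-only. This is an illustrative Python sketch of the rule (the actual fix is in ServiceCheckGrouping.java; the dict shape here is assumed):

```python
# Sketch of the corrected SERVICE_CHECK grouping rule. The old, buggy rule
# excluded every clientOnly service outright; the new rule keys off whether
# the service declares a service-check script, so client-only Pig is included.

def belongs_in_service_check_group(service):
    # Membership is decided solely by the presence of a check script.
    return service.get("service_check_script") is not None

services = [
    {"name": "HDFS", "client_only": False, "service_check_script": "service_check.py"},
    {"name": "PIG",  "client_only": True,  "service_check_script": "service_check.py"},
    {"name": "TEZ",  "client_only": True,  "service_check_script": None},
]
group = [s["name"] for s in services if belongs_in_service_check_group(s)]
# Pig is included despite being client-only; Tez, with no check script, is not.
```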
[jira] [Commented] (AMBARI-9939) RU - Service Check group to include all services with a service_check script
[ https://issues.apache.org/jira/browse/AMBARI-9939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14349654#comment-14349654 ] Hudson commented on AMBARI-9939: SUCCESS: Integrated in Ambari-trunk-Commit #1962 (See [https://builds.apache.org/job/Ambari-trunk-Commit/1962/]) AMBARI-9939. RU - Service Check group to include all services with a service_check script (alejandro) (afernandez: http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=4dc9a3087b7a42f45c6cf0019c628a4fc3c69f20) * ambari-server/src/test/java/org/apache/ambari/server/state/UpgradeHelperTest.java * ambari-server/src/test/java/org/apache/ambari/server/stack/StackManagerTest.java * ambari-server/src/test/resources/stacks/HDP/2.1.1/services/PIG/metainfo.xml * ambari-server/src/main/java/org/apache/ambari/server/state/UpgradeContext.java * ambari-server/src/test/resources/stacks/HDP/2.1.1/services/TEZ/metainfo.xml * ambari-server/src/main/java/org/apache/ambari/server/state/stack/upgrade/ServiceCheckGrouping.java -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Review Request 31786: Ambari Metrics monitor restart failed with error JAVA_HOME is not set
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/31786/ ---

Review request for Ambari, Dmytro Sen, Erik Bergenholtz, Mahadev Konar, Myroslav Papirkovskyy, and Sid Wagle.

Bugs: AMBARI-9953
    https://issues.apache.org/jira/browse/AMBARI-9953

Repository: ambari

Description
---
Ensure config() gets called on restart() when restart() is invoked before start().

Diffs
---
  ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_collector.py a5cb74cde8e2df76f1d46cd613aa7b9ec5f8575e

Diff: https://reviews.apache.org/r/31786/diff/

Testing
---
mvn clean test

[INFO] Reactor Summary:
[INFO] Ambari Main ........................... SUCCESS [9.078s]
[INFO] Apache Ambari Project POM ............. SUCCESS [0.223s]
[INFO] Ambari Web ............................ SUCCESS [3:11.044s]
[INFO] Ambari Views .......................... SUCCESS [3.759s]
[INFO] Ambari Admin View ..................... SUCCESS [16.977s]
[INFO] Ambari Metrics Common ................. SUCCESS [7.225s]
[INFO] Ambari Server ......................... SUCCESS [1:26.069s]
[INFO] Ambari Agent .......................... SUCCESS [9.595s]
[INFO] Ambari Client ......................... SUCCESS [0.047s]
[INFO] Ambari Python Client .................. SUCCESS [0.541s]
[INFO] Ambari Groovy Client .................. SUCCESS [18.041s]
[INFO] Ambari Shell .......................... SUCCESS [0.069s]
[INFO] Ambari Python Shell ................... SUCCESS [0.093s]
[INFO] Ambari Groovy Shell ................... SUCCESS [9.764s]
[INFO] BUILD SUCCESS
[INFO] Total time: 6:00.313s
[INFO] Finished at: Fri Mar 06 02:25:27 UTC 2015
[INFO] Final Memory: 57M/272M

Thanks,
Florian Barca
[jira] [Commented] (AMBARI-9939) RU - Service Check group to include all services with a service_check script
[ https://issues.apache.org/jira/browse/AMBARI-9939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14349728#comment-14349728 ] Hudson commented on AMBARI-9939: SUCCESS: Integrated in Ambari-branch-2.0.0 #15 (See [https://builds.apache.org/job/Ambari-branch-2.0.0/15/]) AMBARI-9939. RU - Service Check group to include all services with a service_check script (alejandro) (afernandez: http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=b1e41005b198e467fc0668c3ad2d7c4123315851) * ambari-server/src/test/resources/stacks/HDP/2.1.1/services/TEZ/metainfo.xml * ambari-server/src/test/resources/stacks/HDP/2.1.1/services/PIG/metainfo.xml * ambari-server/src/test/java/org/apache/ambari/server/stack/StackManagerTest.java * ambari-server/src/test/java/org/apache/ambari/server/state/UpgradeHelperTest.java * ambari-server/src/main/java/org/apache/ambari/server/state/stack/upgrade/ServiceCheckGrouping.java * ambari-server/src/main/java/org/apache/ambari/server/state/UpgradeContext.java -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-9950) YARN RM HA Mode Configurations Are Incorrect
[ https://issues.apache.org/jira/browse/AMBARI-9950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14349866#comment-14349866 ] Hudson commented on AMBARI-9950: FAILURE: Integrated in Ambari-branch-2.0.0 #16 (See [https://builds.apache.org/job/Ambari-branch-2.0.0/16/]) AMBARI-9950. YARN RM HA Mode Configurations Are Incorrect (alexantonenko) (hiveww: http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=1956e0cd6f5f70f830cf7d6c01ea493c945e2a4b) * ambari-web/app/controllers/main/admin/highAvailability/resourceManager/step3_controller.js * ambari-web/test/controllers/main/admin/highAvailability/resourceManager/step3_controller_test.js * ambari-web/app/data/HDP2/rm_ha_properties.js -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-9949) Add Service: Choose Services page, selected service issue during navigation to wizard from Stack Versions page.
[ https://issues.apache.org/jira/browse/AMBARI-9949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14349865#comment-14349865 ] Hudson commented on AMBARI-9949: FAILURE: Integrated in Ambari-branch-2.0.0 #16 (See [https://builds.apache.org/job/Ambari-branch-2.0.0/16/]) AMBARI-9949. Add Service: Choose Services page, selected service issue during navigation to wizard from Stack Versions page. (alexantonenko) (hiveww: http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=21c67f93ba87750f418ac1e08da4ce4d9430a35c) * ambari-web/test/controllers/main/service/add_controller_test.js * ambari-web/app/controllers/main/service/add_controller.js -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-9950) YARN RM HA Mode Configurations Are Incorrect
[ https://issues.apache.org/jira/browse/AMBARI-9950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Antonenko Alexander updated AMBARI-9950: Attachment: AMBARI-9950.patch -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-9950) YARN RM HA Mode Configurations Are Incorrect
[ https://issues.apache.org/jira/browse/AMBARI-9950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Antonenko Alexander updated AMBARI-9950: Attachment: (was: AMBARI-9950.patch) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (AMBARI-9953) Ambari Metrics monitor restart failed with error JAVA_HOME is not set
Florian Barca created AMBARI-9953: - Summary: Ambari Metrics monitor restart failed with error JAVA_HOME is not set Key: AMBARI-9953 URL: https://issues.apache.org/jira/browse/AMBARI-9953 Project: Ambari Issue Type: Bug Components: ambari-agent, ambari-metrics, ambari-server Affects Versions: 2.0.0 Reporter: Florian Barca Assignee: Florian Barca Fix For: 2.0.0 AMS Collector fails to restart if the installation of a certain upstream component failed. In that case start() is skipped, so install() is followed immediately by restart(). Since restart() is implemented as a stop()+start() sequence, stop() must make sure the service is configured before invoking any low-level scripts. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
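The shape of the fix can be illustrated with a toy lifecycle class: stop() and start() lazily run configure(), so restart() is safe even when it is the first lifecycle command after install(). This is a sketch only, not the actual metrics_collector.py code; the class and attribute names are assumptions:

```python
# Sketch: make restart() safe when invoked before start() by having
# stop()/start() ensure configuration has happened. Illustrative only;
# the real change is in AMBARI_METRICS' metrics_collector.py.

class CollectorScript:
    def __init__(self):
        self.configured = False
        self.running = False

    def configure(self):
        # In Ambari this writes out config files, env scripts, etc.
        self.configured = True

    def stop(self):
        if not self.configured:
            # install() may be followed directly by restart(); nothing has
            # configured the service yet, so do it before touching scripts.
            self.configure()
        self.running = False

    def start(self):
        if not self.configured:
            self.configure()
        self.running = True

    def restart(self):
        # restart() is just stop() + start(); without the lazy configure()
        # above, the low-level scripts would run with JAVA_HOME unset.
        self.stop()
        self.start()
```

Without the `if not self.configured` guard in stop(), the install()-then-restart() path would invoke the service scripts before any configuration existed, which is exactly the reported JAVA_HOME failure mode.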
[jira] [Created] (AMBARI-9954) Spark on tez apps fails needs tez.tar.gz copied to HDFS
Alejandro Fernandez created AMBARI-9954: --- Summary: Spark on tez apps fails needs tez.tar.gz copied to HDFS Key: AMBARI-9954 URL: https://issues.apache.org/jira/browse/AMBARI-9954 Project: Ambari Issue Type: Bug Components: ambari-server Affects Versions: 2.0.0 Reporter: Alejandro Fernandez Assignee: Alejandro Fernandez Fix For: 2.0.0 The spark on tez apps fails because tez.tar.gz needs to be copied to HDFS. Currently, only Pig Service Check and Hive START copy it to HDFS. {noformat} $ /usr/hdp/current/spark-client/bin/spark-submit --class org.apache.spark.examples.SparkPi --master execution-context:org.apache.spark.tez.TezJobExecutionContext /usr/hdp/current/spark-client/lib/spark-examples-1.2.1.2.2.2.0-2538-hadoop2.6.0.2.2.2.0-2538.jar 3 tput: No value for $TERM and no -T specified Spark assembly has been built with Hive, including Datanucleus jars on classpath SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/grid/0/hdp/2.2.2.0-2538/spark/lib/spark-examples-1.2.1.2.2.2.0-2538-hadoop2.6.0.2.2.2.0-2538.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/grid/0/hdp/2.2.2.0-2538/spark/external/spark-native-yarn/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. 
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory] 15/03/04 09:27:53 INFO spark.SecurityManager: Changing view acls to: hrt_qa 15/03/04 09:27:53 INFO spark.SecurityManager: Changing modify acls to: hrt_qa 15/03/04 09:27:53 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hrt_qa); users with modify permissions: Set(hrt_qa) 15/03/04 09:27:54 INFO slf4j.Slf4jLogger: Slf4jLogger started 15/03/04 09:27:54 INFO Remoting: Starting remoting 15/03/04 09:27:54 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@ip-172-31-47-166.ec2.internal:34628] 15/03/04 09:27:54 INFO util.Utils: Successfully started service 'sparkDriver' on port 34628. 15/03/04 09:27:54 INFO spark.SparkEnv: Registering MapOutputTracker 15/03/04 09:27:54 INFO spark.SparkEnv: Registering BlockManagerMaster 15/03/04 09:27:54 INFO storage.DiskBlockManager: Created local directory at /tmp/spark-c3fe89f7-4117-41fc-8a62-01f0451c9060/spark-a209539b-07ae-42ad-a83d-9ad53b1c6adc 15/03/04 09:27:54 INFO storage.MemoryStore: MemoryStore started with capacity 265.4 MB 15/03/04 09:27:55 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 15/03/04 09:27:55 INFO spark.HttpFileServer: HTTP File server directory is /tmp/spark-995afa1c-3d9a-453b-a84c-b02b73aab8d7/spark-57d211b1-32a5-453c-b2d0-5f010b4cf74a 15/03/04 09:27:55 INFO spark.HttpServer: Starting HTTP Server 15/03/04 09:27:55 INFO server.Server: jetty-8.y.z-SNAPSHOT 15/03/04 09:27:55 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:52966 15/03/04 09:27:55 INFO util.Utils: Successfully started service 'HTTP file server' on port 52966. 
15/03/04 09:27:55 INFO server.Server: jetty-8.y.z-SNAPSHOT 15/03/04 09:27:55 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040 15/03/04 09:27:55 INFO util.Utils: Successfully started service 'SparkUI' on port 4040. 15/03/04 09:27:55 INFO ui.SparkUI: Started SparkUI at http://ip-172-31-47-166.ec2.internal:4040 15/03/04 09:27:55 INFO spark.SparkContext: Will use custom job execution context org.apache.spark.tez.TezJobExecutionContext 15/03/04 09:27:56 INFO tez.TezJobExecutionContext: Config dir: /etc/hadoop/conf 15/03/04 09:27:56 INFO tez.TezJobExecutionContext: FileSystem: hdfs://ip-172-31-47-165.ec2.internal:8020 15/03/04 09:27:56 INFO tez.TezJobExecutionContext: Error while accessing configuration. Possible cause - 'version missmatch' org.apache.hadoop.conf.Configuration is loaded from file:/grid/0/hdp/2.2.2.0-2538/spark/external/spark-native-yarn/lib/hadoop-common-2.6.0.2.2.2.0-2538.jar org.apache.tez.dag.api.TezConfiguration is loaded from file:/grid/0/hdp/2.2.2.0-2538/spark/external/spark-native-yarn/lib/tez-api-0.5.2.2.2.2.0-2538.jar 15/03/04 09:27:57 WARN shortcircuit.DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded. 15/03/04 09:27:57 INFO netty.NettyBlockTransferService: Server created on 58099 15/03/04 09:27:57 INFO storage.BlockManagerMaster: Trying to register BlockManager 15/03/04 09:27:57 INFO storage.BlockManagerMasterActor: Registering block manager ip-172-31-47-166.ec2.internal:58099 with 265.4 MB RAM, BlockManagerId(driver, ip-172-31-47-166.ec2.internal, 58099) 15/03/04 09:27:57 INFO storage.BlockManagerMaster: Registered BlockManager 15/03/04 09:27:58 INFO
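Since only Pig Service Check and Hive START currently copy the Tez tarball, the remedy is to perform that copy from other code paths too. A heavily hedged sketch of such a copy step follows; the HDFS destination follows common HDP conventions but both paths and the helper name are assumptions, not taken from the patch:

```python
# Sketch only: push tez.tar.gz to HDFS so Spark-on-Tez jobs do not depend on
# a prior Pig Service Check or Hive START. Paths are assumed HDP conventions.
import subprocess

def copy_tez_tarball(hdp_version, run=subprocess.check_call, user="hdfs"):
    """Copy the local Tez tarball into HDFS, creating the target dir first.
    `run` is injectable so the command sequence can be tested without HDFS."""
    src = "/usr/hdp/%s/tez/lib/tez.tar.gz" % hdp_version
    dest = "/hdp/apps/%s/tez/tez.tar.gz" % hdp_version
    for cmd in (
        ["hdfs", "dfs", "-mkdir", "-p", "/hdp/apps/%s/tez" % hdp_version],
        ["hdfs", "dfs", "-put", "-f", src, dest],
    ):
        # Run each HDFS command as the HDFS superuser.
        run(["sudo", "-u", user] + cmd)
```

Injecting `run` keeps the sketch testable; in a real agent script the copy would go through Ambari's resource abstractions rather than raw subprocess calls.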
[jira] [Commented] (AMBARI-9954) Spark on tez apps fails needs tez.tar.gz copied to HDFS
[ https://issues.apache.org/jira/browse/AMBARI-9954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14349884#comment-14349884 ] Alejandro Fernandez commented on AMBARI-9954: - Still need to write unit tests and confirm during a Rolling Upgrade. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-9954) Spark on tez apps fails needs tez.tar.gz copied to HDFS
[ https://issues.apache.org/jira/browse/AMBARI-9954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alejandro Fernandez updated AMBARI-9954: Attachment: AMBARI-9954.patch Spark on tez apps fails needs tez.tar.gz copied to HDFS --- Key: AMBARI-9954 URL: https://issues.apache.org/jira/browse/AMBARI-9954 Project: Ambari Issue Type: Bug Components: ambari-server Affects Versions: 2.0.0 Reporter: Alejandro Fernandez Assignee: Alejandro Fernandez Fix For: 2.0.0 Attachments: AMBARI-9954.patch The spark on tez apps fails because tez.tar.gz needs to be copied to HDFS. Currently, only Pig Service Check and Hive START copy it to HDFS. {noformat} $ /usr/hdp/current/spark-client/bin/spark-submit --class org.apache.spark.examples.SparkPi --master execution-context:org.apache.spark.tez.TezJobExecutionContext /usr/hdp/current/spark-client/lib/spark-examples-1.2.1.2.2.2.0-2538-hadoop2.6.0.2.2.2.0-2538.jar 3 tput: No value for $TERM and no -T specified Spark assembly has been built with Hive, including Datanucleus jars on classpath SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/grid/0/hdp/2.2.2.0-2538/spark/lib/spark-examples-1.2.1.2.2.2.0-2538-hadoop2.6.0.2.2.2.0-2538.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/grid/0/hdp/2.2.2.0-2538/spark/external/spark-native-yarn/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. 
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory] 15/03/04 09:27:53 INFO spark.SecurityManager: Changing view acls to: hrt_qa 15/03/04 09:27:53 INFO spark.SecurityManager: Changing modify acls to: hrt_qa 15/03/04 09:27:53 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hrt_qa); users with modify permissions: Set(hrt_qa) 15/03/04 09:27:54 INFO slf4j.Slf4jLogger: Slf4jLogger started 15/03/04 09:27:54 INFO Remoting: Starting remoting 15/03/04 09:27:54 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@ip-172-31-47-166.ec2.internal:34628] 15/03/04 09:27:54 INFO util.Utils: Successfully started service 'sparkDriver' on port 34628. 15/03/04 09:27:54 INFO spark.SparkEnv: Registering MapOutputTracker 15/03/04 09:27:54 INFO spark.SparkEnv: Registering BlockManagerMaster 15/03/04 09:27:54 INFO storage.DiskBlockManager: Created local directory at /tmp/spark-c3fe89f7-4117-41fc-8a62-01f0451c9060/spark-a209539b-07ae-42ad-a83d-9ad53b1c6adc 15/03/04 09:27:54 INFO storage.MemoryStore: MemoryStore started with capacity 265.4 MB 15/03/04 09:27:55 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 15/03/04 09:27:55 INFO spark.HttpFileServer: HTTP File server directory is /tmp/spark-995afa1c-3d9a-453b-a84c-b02b73aab8d7/spark-57d211b1-32a5-453c-b2d0-5f010b4cf74a 15/03/04 09:27:55 INFO spark.HttpServer: Starting HTTP Server 15/03/04 09:27:55 INFO server.Server: jetty-8.y.z-SNAPSHOT 15/03/04 09:27:55 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:52966 15/03/04 09:27:55 INFO util.Utils: Successfully started service 'HTTP file server' on port 52966. 
15/03/04 09:27:55 INFO server.Server: jetty-8.y.z-SNAPSHOT 15/03/04 09:27:55 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040 15/03/04 09:27:55 INFO util.Utils: Successfully started service 'SparkUI' on port 4040. 15/03/04 09:27:55 INFO ui.SparkUI: Started SparkUI at http://ip-172-31-47-166.ec2.internal:4040 15/03/04 09:27:55 INFO spark.SparkContext: Will use custom job execution context org.apache.spark.tez.TezJobExecutionContext 15/03/04 09:27:56 INFO tez.TezJobExecutionContext: Config dir: /etc/hadoop/conf 15/03/04 09:27:56 INFO tez.TezJobExecutionContext: FileSystem: hdfs://ip-172-31-47-165.ec2.internal:8020 15/03/04 09:27:56 INFO tez.TezJobExecutionContext: Error while accessing configuration. Possible cause - 'version missmatch' org.apache.hadoop.conf.Configuration is loaded from file:/grid/0/hdp/2.2.2.0-2538/spark/external/spark-native-yarn/lib/hadoop-common-2.6.0.2.2.2.0-2538.jar org.apache.tez.dag.api.TezConfiguration is loaded from file:/grid/0/hdp/2.2.2.0-2538/spark/external/spark-native-yarn/lib/tez-api-0.5.2.2.2.2.0-2538.jar 15/03/04 09:27:57 WARN shortcircuit.DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded. 15/03/04 09:27:57 INFO netty.NettyBlockTransferService: Server created on 58099 15/03/04 09:27:57 INFO storage.BlockManagerMaster: Trying to register
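Until the patch automates it, the missing step can be done by hand. A rough sketch of building the staging locations and command; the HDFS layout here follows HDP 2.2 conventions and is an assumption for illustration, not code from the patch:

```python
def tez_tarball_paths(hdp_version):
    """Source tarball on the local host and its HDFS staging target."""
    src = f"/usr/hdp/{hdp_version}/tez/lib/tez.tar.gz"
    dst = f"/hdp/apps/{hdp_version}/tez/tez.tar.gz"
    return src, dst

def staging_command(hdp_version, user="hdfs"):
    src, dst = tez_tarball_paths(hdp_version)
    dst_dir = dst.rsplit("/", 1)[0]
    # Run as the HDFS superuser so the destination directory can be created.
    return (f"sudo -u {user} hdfs dfs -mkdir -p {dst_dir} && "
            f"sudo -u {user} hdfs dfs -put -f {src} {dst_dir}")
```

This mirrors what Pig Service Check and Hive START already do for their own copies of tez.tar.gz, per the description above.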
[jira] [Commented] (AMBARI-9383) Configs: Ambari support for HBase bucketcache
[ https://issues.apache.org/jira/browse/AMBARI-9383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14349060#comment-14349060 ] Hudson commented on AMBARI-9383: FAILURE: Integrated in Ambari-trunk-Commit #1957 (See [https://builds.apache.org/job/Ambari-trunk-Commit/1957/]) AMBARI-9383 Configs: Ambari support for HBase bucketcache (dsen) (dsen: http://git-wip-us.apache.org/repos/asf?p=ambari.gita=commith=64648d034cc1cb69f83ed60912a339290a86be40) * ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/package/scripts/params.py * ambari-server/src/main/resources/stacks/HDP/2.2/services/HBASE/configuration/hbase-site.xml * ambari-server/src/main/resources/stacks/HDP/2.2/services/HBASE/configuration/hbase-env.xml * ambari-web/app/data/HDP2.2/site_properties.js * ambari-server/src/main/resources/stacks/HDP/2.2/services/stack_advisor.py Configs: Ambari support for HBase bucketcache - Key: AMBARI-9383 URL: https://issues.apache.org/jira/browse/AMBARI-9383 Project: Ambari Issue Type: Improvement Components: ambari-server Affects Versions: 2.0.0 Reporter: Dmytro Sen Assignee: Dmytro Sen Fix For: 2.1.0 Off-heap caches in HBase have been implemented. Ambari should expose the necessary configurations for a user to enable this feature. We're interested only in the BucketCache off-heap mode. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
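For context, enabling the off-heap BucketCache in HBase generally comes down to settings along these lines. The property names are standard HBase ones; the values are illustrative, and the exact set Ambari exposes is defined by the patch's hbase-site.xml, hbase-env.xml, and stack_advisor changes:

```xml
<!-- hbase-site.xml: illustrative values, not the patch's defaults -->
<property>
  <name>hbase.bucketcache.ioengine</name>
  <value>offheap</value>
</property>
<property>
  <!-- Cache capacity: megabytes when > 1.0, else a fraction of heap -->
  <name>hbase.bucketcache.size</name>
  <value>4096</value>
</property>
```

Off-heap mode also requires the RegionServer JVM to allow enough direct memory (e.g. -XX:MaxDirectMemorySize in hbase-env), which is why the patch touches hbase-env.xml as well.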
Re: Review Request 31767: Oozie alert after ambari upgrade 1.7.0-2.0.0 secured cluster
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/31767/#review75344 --- Ship it! Ship It! - Vitalyi Brodetskyi On March 5, 2015, 4:06 p.m., Dmytro Sen wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/31767/ --- (Updated March 5, 2015, 4:06 p.m.) Review request for Ambari, Dmitro Lisnichenko, Myroslav Papirkovskyy, and Vitalyi Brodetskyi. Bugs: AMBARI-9947 https://issues.apache.org/jira/browse/AMBARI-9947 Repository: ambari Description --- After upgrading a secured cluster and restarting all services, oozie has a warning alert: Oozie Server Web UI HTTP 500 response in 0.000 seconds STR: 1. Installed ambari 1.7.0 on a 3-node cluster, all services 2. Enable security 3. Upgrade ambari to 2.0.0 4. Stop all services 5. Start all services Diffs - ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie.py 8bd697a ambari-server/src/test/python/stacks/2.0.6/OOZIE/test_oozie_server.py 8d388ab Diff: https://reviews.apache.org/r/31767/diff/ Testing --- all tests passed Thanks, Dmytro Sen
[jira] [Commented] (AMBARI-9947) Oozie alert after ambari upgrade 1.7.0-2.0.0 secured cluster
[ https://issues.apache.org/jira/browse/AMBARI-9947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14349080#comment-14349080 ] Hadoop QA commented on AMBARI-9947: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12702840/AMBARI-9947.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 1 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:red}-1 core tests{color}. The test build failed in ambari-server Test results: https://builds.apache.org/job/Ambari-trunk-test-patch/1934//testReport/ Console output: https://builds.apache.org/job/Ambari-trunk-test-patch/1934//console This message is automatically generated. Oozie alert after ambari upgrade 1.7.0-2.0.0 secured cluster - Key: AMBARI-9947 URL: https://issues.apache.org/jira/browse/AMBARI-9947 Project: Ambari Issue Type: Bug Components: ambari-server Affects Versions: 2.0.0 Reporter: Dmytro Sen Assignee: Dmytro Sen Priority: Blocker Fix For: 2.0.0 Attachments: AMBARI-9947.patch After upgrading a secured cluster and restarting all services, oozie has a warning alert: Oozie Server Web UI HTTP 500 response in 0.000 seconds STR: 1. Installed ambari 1.7.0 on a 3-node cluster, all services 2. Enable security 3. Upgrade ambari to 2.0.0 4. Stop all services 5. Start all services -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-9943) Umask 027 + non-root result in failures during deploy
[ https://issues.apache.org/jira/browse/AMBARI-9943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14349151#comment-14349151 ] Hudson commented on AMBARI-9943: SUCCESS: Integrated in Ambari-trunk-Commit #1958 (See [https://builds.apache.org/job/Ambari-trunk-Commit/1958/]) AMBARI-9943. Umask 027 + non-root result in failures during deploy (aonishuk) (aonishuk: http://git-wip-us.apache.org/repos/asf?p=ambari.gita=commith=9a0b211a877fac2e030c27ca8eef0c40d78b6fe3) * ambari-agent/src/test/python/resource_management/TestFileResource.py * ambari-agent/src/test/python/resource_management/TestPropertiesFileResource.py * ambari-agent/src/test/python/resource_management/TestXmlConfigResource.py * ambari-server/src/test/python/stacks/2.1/HIVE/test_hive_metastore.py * ambari-server/src/test/python/stacks/2.0.6/hooks/before-INSTALL/test_before_install.py * ambari-agent/src/test/python/resource_management/TestLinkResource.py * ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/before-INSTALL/scripts/shared_initialization.py * ambari-agent/conf/unix/ambari-env.sh * ambari-server/src/test/python/stacks/2.0.6/HIVE/test_hive_metastore.py * ambari-agent/src/test/python/resource_management/TestExecuteResource.py * ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/hive.py * ambari-server/src/test/python/stacks/2.0.6/HIVE/test_hive_server.py * ambari-common/src/main/python/resource_management/core/sudo.py * ambari-common/src/main/python/resource_management/core/providers/system.py * ambari-agent/src/test/python/resource_management/TestDirectoryResource.py Umask 027 + non-root result in failures during deploy - Key: AMBARI-9943 URL: https://issues.apache.org/jira/browse/AMBARI-9943 Project: Ambari Issue Type: Bug Reporter: Andrew Onischuk Assignee: Andrew Onischuk Fix For: 2.0.0 This is double permission restriction which we seems like haven't yet tested. 
The most common problem I see across the code is that the restrictive umask removes some x bits. We didn't really need them while we were running under the root account, but now it becomes a problem in multiple places. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
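The permission arithmetic behind these failures is easy to check. This is just an illustration of what umask 027 does to created files and directories, not Ambari code:

```python
def effective_mode(requested, umask):
    # The umask clears bits from the mode requested at creation time.
    return requested & ~umask

# Under umask 027 the group loses write and "other" loses everything,
# including the x bit needed to traverse directories, which is exactly
# what trips up non-root service accounts during deploy.
assert effective_mode(0o777, 0o027) == 0o750
assert effective_mode(0o755, 0o027) == 0o750
assert effective_mode(0o644, 0o027) == 0o640
```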
[jira] [Commented] (AMBARI-8559) Deploy Config without service restart
[ https://issues.apache.org/jira/browse/AMBARI-8559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14348943#comment-14348943 ] Hari Sekhon commented on AMBARI-8559: - This is needed for rolling restarts too. I've done a manual rolling restart (need AMBARI-6706) and, having restarted each Resource Manager individually and then done a rolling restart of NodeManagers, I'm left in a state where the configs aren't deployed with the new configuration, and I'm unable to deploy them without this feature to clear the stale configuration. Deploy Config without service restart - Key: AMBARI-8559 URL: https://issues.apache.org/jira/browse/AMBARI-8559 Project: Ambari Issue Type: Improvement Affects Versions: 1.6.1, 1.7.0 Environment: HDP 2.1 HDP 2.2 Reporter: Hari Sekhon Feature request for a Deploy Config button - without doing a service restart. Not all config changes require a service restart (e.g. for a client-side option change, or when changing the NFS gateway setup, there is no point in bouncing all the NameNodes and DataNodes). It's more work to figure out which settings affect which components (and probably impractical), but it's sufficient to provide Deploy Config and leave the service marked with stale config for restart at a later time. This is analogous to Cloudera Manager's Deploy Client Configuration button. Regards, Hari Sekhon (ex-Cloudera) http://www.linkedin.com/in/harisekhon -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-8559) Deploy Config without service restart
[ https://issues.apache.org/jira/browse/AMBARI-8559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sekhon updated AMBARI-8559: Affects Version/s: 1.7.0 Deploy Config without service restart - Key: AMBARI-8559 URL: https://issues.apache.org/jira/browse/AMBARI-8559 Project: Ambari Issue Type: Improvement Affects Versions: 1.6.1, 1.7.0 Environment: HDP 2.1 Reporter: Hari Sekhon Feature request for a Deploy Config button - without doing a service restart. Not all config changes require a service restart (e.g. for a client-side option change, or when changing the NFS gateway setup, there is no point in bouncing all the NameNodes and DataNodes). It's more work to figure out which settings affect which components (and probably impractical), but it's sufficient to provide Deploy Config and leave the service marked with stale config for restart at a later time. This is analogous to Cloudera Manager's Deploy Client Configuration button. Regards, Hari Sekhon (ex-Cloudera) http://www.linkedin.com/in/harisekhon -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (AMBARI-9947) Oozie alert after ambari upgrade 1.7.0-2.0.0 secured cluster
Dmytro Sen created AMBARI-9947: -- Summary: Oozie alert after ambari upgrade 1.7.0-2.0.0 secured cluster Key: AMBARI-9947 URL: https://issues.apache.org/jira/browse/AMBARI-9947 Project: Ambari Issue Type: Bug Components: ambari-server Affects Versions: 2.0.0 Reporter: Dmytro Sen Assignee: Dmytro Sen Priority: Blocker Fix For: 2.0.0 After upgrading a secured cluster and restarting all services, oozie has a warning alert: Oozie Server Web UI HTTP 500 response in 0.000 seconds STR: 1. Installed ambari 1.7.0 on a 3-node cluster, all services 2. Enable security 3. Upgrade ambari to 2.0.0 4. Stop all services 5. Start all services -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-9943) Umask 027 + non-root result in failures during deploy
[ https://issues.apache.org/jira/browse/AMBARI-9943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14349094#comment-14349094 ] Hudson commented on AMBARI-9943: ABORTED: Integrated in Ambari-branch-2.0.0 #10 (See [https://builds.apache.org/job/Ambari-branch-2.0.0/10/]) AMBARI-9943. Umask 027 + non-root result in failures during deploy (aonishuk) (aonishuk: http://git-wip-us.apache.org/repos/asf?p=ambari.gita=commith=72dfdd499f6102ce4efa1ed45ed7a48e13aabe0d) * ambari-common/src/main/python/resource_management/core/providers/system.py * ambari-server/src/test/python/stacks/2.1/HIVE/test_hive_metastore.py * ambari-common/src/main/python/resource_management/core/sudo.py * ambari-server/src/test/python/stacks/2.0.6/HIVE/test_hive_server.py * ambari-agent/src/test/python/resource_management/TestPropertiesFileResource.py * ambari-server/src/test/python/stacks/2.0.6/hooks/before-INSTALL/test_before_install.py * ambari-agent/src/test/python/resource_management/TestFileResource.py * ambari-agent/conf/unix/ambari-env.sh * ambari-agent/src/test/python/resource_management/TestXmlConfigResource.py * ambari-agent/src/test/python/resource_management/TestDirectoryResource.py * ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/hive.py * ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/before-INSTALL/scripts/shared_initialization.py * ambari-agent/src/test/python/resource_management/TestLinkResource.py * ambari-server/src/test/python/stacks/2.0.6/HIVE/test_hive_metastore.py * ambari-agent/src/test/python/resource_management/TestExecuteResource.py Umask 027 + non-root result in failures during deploy - Key: AMBARI-9943 URL: https://issues.apache.org/jira/browse/AMBARI-9943 Project: Ambari Issue Type: Bug Reporter: Andrew Onischuk Assignee: Andrew Onischuk Fix For: 2.0.0 This is double permission restriction which we seems like haven't yet tested. 
The most common problem I see across the code is that the restrictive umask removes some x bits. We didn't really need them while we were running under the root account, but now it becomes a problem in multiple places. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: Review Request 31737: Spark Thriftserver fails to initialize correctly in secure Ambari cluster
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/31737/#review75350 --- Ship it! Ship It! - Sumit Mohanty On March 4, 2015, 6:52 p.m., Gautam Borad wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/31737/ --- (Updated March 4, 2015, 6:52 p.m.) Review request for Ambari, Alejandro Fernandez, Giridharan Kesavan, Sumit Mohanty, and Yusaku Sako. Bugs: AMBARI-9921 https://issues.apache.org/jira/browse/AMBARI-9921 Repository: ambari Description --- Spark Thriftserver fails to initialize correctly on a secure cluster. To fix it, the following property needs to be added to $SPARK_CONF/hive-site.xml: <property> <name>hive.security.authorization.enabled</name> <value>false</value> </property> Diffs - ambari-server/src/main/resources/common-services/SPARK/1.2.0.2.2/package/scripts/params.py 5de997e ambari-server/src/main/resources/common-services/SPARK/1.2.0.2.2/package/scripts/setup_spark.py 2c7d287 Diff: https://reviews.apache.org/r/31737/diff/ Testing --- N/A, since can't get a secure env. Tested the code by writing the property on a non-secure node (just for testing). Thanks, Gautam Borad
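Hadoop-style *-site.xml entries such as the one above can also be generated programmatically. A minimal illustrative sketch, not Ambari's actual templating code:

```python
import xml.etree.ElementTree as ET

def make_property(name, value):
    # One <property> entry in Hadoop's *-site.xml configuration format.
    prop = ET.Element("property")
    ET.SubElement(prop, "name").text = name
    ET.SubElement(prop, "value").text = value
    return prop

entry = make_property("hive.security.authorization.enabled", "false")
xml_text = ET.tostring(entry, encoding="unicode")
```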
[jira] [Commented] (AMBARI-9944) RU: web should not display older versions by default
[ https://issues.apache.org/jira/browse/AMBARI-9944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14349109#comment-14349109 ] Hadoop QA commented on AMBARI-9944: --- {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12702835/AMBARI-9944.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 2 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in . Test results: https://builds.apache.org/job/Ambari-trunk-test-patch/1935//testReport/ Console output: https://builds.apache.org/job/Ambari-trunk-test-patch/1935//console This message is automatically generated. RU: web should not display older versions by default Key: AMBARI-9944 URL: https://issues.apache.org/jira/browse/AMBARI-9944 Project: Ambari Issue Type: Bug Components: ambari-web Affects Versions: 2.0.0 Reporter: Andrii Babiichuk Assignee: Andrii Babiichuk Priority: Critical Fix For: 2.0.0 Attachments: AMBARI-9944.patch Ambari Web should not display older versions by default. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-9941) Skip group mod text changes after failure
[ https://issues.apache.org/jira/browse/AMBARI-9941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14348864#comment-14348864 ] Hudson commented on AMBARI-9941: SUCCESS: Integrated in Ambari-branch-2.0.0 #9 (See [https://builds.apache.org/job/Ambari-branch-2.0.0/9/]) AMBARI-9941. Skip group mod text changes after failure (onechiporenko) (onechiporenko: http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=34d004c8622496295c3240cbb2b8c85860855514) * ambari-web/app/utils/config.js * ambari-web/test/utils/config_test.js Skip group mod text changes after failure - Key: AMBARI-9941 URL: https://issues.apache.org/jira/browse/AMBARI-9941 Project: Ambari Issue Type: Bug Components: ambari-web Affects Versions: 2.0.0 Reporter: Oleg Nechiporenko Assignee: Oleg Nechiporenko Priority: Critical Fix For: 2.0.0 Attachments: AMBARI-9941.patch, right-text.tiff, wrong-text.tiff 1. install wizard 2. on customize services misc tab, the text is correct (see right-text screenshot) 3. go forward and fail the install 4. click back to customize services misc section, the text is now incorrect (see wrong-text screenshot). And the checkbox is moved from its original position. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-9941) Skip group mod text changes after failure
[ https://issues.apache.org/jira/browse/AMBARI-9941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14348894#comment-14348894 ] Hudson commented on AMBARI-9941: SUCCESS: Integrated in Ambari-trunk-Commit #1956 (See [https://builds.apache.org/job/Ambari-trunk-Commit/1956/]) AMBARI-9941. Skip group mod text changes after failure (onechiporenko) (onechiporenko: http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=4a54f03048e827bc7affb2262e44ae6d14cfd74c) * ambari-web/app/utils/config.js * ambari-web/test/utils/config_test.js Skip group mod text changes after failure - Key: AMBARI-9941 URL: https://issues.apache.org/jira/browse/AMBARI-9941 Project: Ambari Issue Type: Bug Components: ambari-web Affects Versions: 2.0.0 Reporter: Oleg Nechiporenko Assignee: Oleg Nechiporenko Priority: Critical Fix For: 2.0.0 Attachments: AMBARI-9941.patch, right-text.tiff, wrong-text.tiff 1. install wizard 2. on customize services misc tab, the text is correct (see right-text screenshot) 3. go forward and fail the install 4. click back to customize services misc section, the text is now incorrect (see wrong-text screenshot). And the checkbox is moved from its original position. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-9947) Oozie alert after ambari upgrade 1.7.0-2.0.0 secured cluster
[ https://issues.apache.org/jira/browse/AMBARI-9947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmytro Sen updated AMBARI-9947: --- Attachment: AMBARI-9947.patch Oozie alert after ambari upgrade 1.7.0-2.0.0 secured cluster - Key: AMBARI-9947 URL: https://issues.apache.org/jira/browse/AMBARI-9947 Project: Ambari Issue Type: Bug Components: ambari-server Affects Versions: 2.0.0 Reporter: Dmytro Sen Assignee: Dmytro Sen Priority: Blocker Fix For: 2.0.0 Attachments: AMBARI-9947.patch After upgrading a secured cluster and restarting all services, oozie has a warning alert: Oozie Server Web UI HTTP 500 response in 0.000 seconds STR: 1. Installed ambari 1.7.0 on a 3-node cluster, all services 2. Enable security 3. Upgrade ambari to 2.0.0 4. Stop all services 5. Start all services -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: Review Request 31752: ambari-sudo.sh needs full path, install fails if JDK is not installed
On March 5, 2015, 1:15 p.m., Nate Cole wrote: Seems like we should be abstracting somehow - maybe with an ExecuteSudo or something that takes the same exact arguments as Execute, but does all this sudo voodoo. Jonathan Hurley wrote: Agreed; why is the separate script necessary? I even thought that the existing Execute resource took a `sudo=true` parameter. Guys, we do have Execute with a sudo=True argument; this thing is at a lower level, below that Execute. - Andrew --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/31752/#review75325 --- On March 5, 2015, 12:59 a.m., Alejandro Fernandez wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/31752/ --- (Updated March 5, 2015, 12:59 a.m.) Review request for Ambari, Andrew Onischuk, Jonathan Hurley, Nate Cole, and Sid Wagle. Bugs: AMBARI-9938 https://issues.apache.org/jira/browse/AMBARI-9938 Repository: ambari Description --- When HDP is installed on a host without JDK, the before-install hook will attempt to install JDK if it is not present. However, this fails because ambari-sudo.sh needs the fully qualified path to the script. ``` Execution of 'mkdir -p /var/lib/ambari-agent/data/tmp/jdk cd /var/lib/ambari-agent/data/tmp/jdk tar -xf /var/lib/ambari-agent/data/tmp/AMBARI-artifacts//jdk-7u67-linux-x64.tar.gz ambari-sudo.sh cp -r /var/lib/ambari-agent/data/tmp/jdk/* /usr/jdk64' returned 2. 
[3/4/15, 12:58:13 PM] Alejandro Fernandez: tar: Unexpected EOF in archive ``` Diffs - ambari-common/src/main/python/ambari_commons/constants.py b823b31 ambari-server/src/test/python/stacks/2.0.6/FLUME/test_flume.py b6f4821 ambari-server/src/test/python/stacks/2.0.6/GANGLIA/test_ganglia_monitor.py 396b9d2 ambari-server/src/test/python/stacks/2.0.6/GANGLIA/test_ganglia_server.py 7d0afc7 ambari-server/src/test/python/stacks/2.0.6/HBASE/test_hbase_master.py 36c942e ambari-server/src/test/python/stacks/2.0.6/HBASE/test_hbase_regionserver.py 8a79701 ambari-server/src/test/python/stacks/2.0.6/HDFS/test_datanode.py 54ca083 ambari-server/src/test/python/stacks/2.0.6/HDFS/test_journalnode.py 21cefae ambari-server/src/test/python/stacks/2.0.6/HDFS/test_namenode.py 1e4142f ambari-server/src/test/python/stacks/2.0.6/HDFS/test_service_check.py e24ff8d ambari-server/src/test/python/stacks/2.0.6/HDFS/test_snamenode.py 5bedf5b ambari-server/src/test/python/stacks/2.0.6/HDFS/test_zkfc.py 8aa4871 ambari-server/src/test/python/stacks/2.0.6/HIVE/test_hive_metastore.py 9153a84 ambari-server/src/test/python/stacks/2.0.6/HIVE/test_hive_server.py 5230196 ambari-server/src/test/python/stacks/2.0.6/OOZIE/test_oozie_server.py 8d388ab ambari-server/src/test/python/stacks/2.0.6/hooks/before-INSTALL/test_before_install.py e038ddf ambari-server/src/test/python/stacks/2.1/HIVE/test_hive_metastore.py 990eac8 ambari-server/src/test/python/stacks/2.1/STORM/test_storm_drpc_server.py d5afb42 ambari-server/src/test/python/stacks/2.1/STORM/test_storm_nimbus.py 3ef45ad ambari-server/src/test/python/stacks/2.1/STORM/test_storm_rest_api_service.py 64a4662 ambari-server/src/test/python/stacks/2.1/STORM/test_storm_supervisor.py 26089fb ambari-server/src/test/python/stacks/2.1/STORM/test_storm_supervisor_prod.py 549c5fc ambari-server/src/test/python/stacks/2.1/STORM/test_storm_ui_server.py d23114a ambari-server/src/test/python/stacks/2.2/KNOX/test_knox_gateway.py b1d9888 Diff: 
https://reviews.apache.org/r/31752/diff/ Testing --- Waiting for unit test results. Local tests passed, -- Total run:609 Total errors:0 Total failures:0 OK Thanks, Alejandro Fernandez
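The change under review amounts to invoking the wrapper by an absolute path rather than relying on $PATH. A rough sketch of that idea; the wrapper location and helper names here are assumptions for illustration, not taken from the patch:

```python
import os
import shutil

# Assumed wrapper location on agent hosts; illustrative, not from the patch.
AMBARI_SUDO = "/var/lib/ambari-agent/ambari-sudo.sh"

def sudo_prefix(which=shutil.which):
    # Prefer the fully qualified path; fall back to a $PATH lookup so the
    # sketch still resolves on hosts laid out differently.
    if os.path.isfile(AMBARI_SUDO):
        return AMBARI_SUDO
    return which("ambari-sudo.sh") or "ambari-sudo.sh"

# The failing step from the description, rebuilt with the resolved prefix:
cmd = sudo_prefix() + " cp -r /var/lib/ambari-agent/data/tmp/jdk/* /usr/jdk64"
```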
Re: Review Request 31752: ambari-sudo.sh needs full path, install fails if JDK is not installed
On March 5, 2015, 1:15 p.m., Nate Cole wrote: Seems like we should be abstracting somehow - maybe with an ExecuteSudo or something that takes the same exact arguments as Execute, but does all this sudo voodoo. Jonathan Hurley wrote: Agreed; why is the separate script necessary? I even thought that the existing Execute resource took a `sudo=true` parameter. Andrew Onischuk wrote: Guys, we do have Execute with a sudo=True argument; this thing is at a lower level, below that Execute. What I'm talking about is that the command in the bug report fails due to a tar error; it doesn't even get to execute ambari-sudo.sh. In tar -xf ... ambari-sudo.sh ... tar: Unexpected EOF in archive the first command fails, so I don't understand how this fix can fix the problem. Also, I don't see much purpose in doing this: ambari-sudo.sh is always in $PATH, which is set in ambari-env.sh and inherited by the child processes. Even if this doesn't fix anything, I think the short path looks nicer in task logs. - Andrew --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/31752/#review75325 --- On March 5, 2015, 12:59 a.m., Alejandro Fernandez wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/31752/ --- (Updated March 5, 2015, 12:59 a.m.) Review request for Ambari, Andrew Onischuk, Jonathan Hurley, Nate Cole, and Sid Wagle. Bugs: AMBARI-9938 https://issues.apache.org/jira/browse/AMBARI-9938 Repository: ambari Description --- When HDP is installed on a host without JDK, the before-install hook will attempt to install JDK if it is not present. However, this fails because ambari-sudo.sh needs the fully qualified path to the script. ``` Execution of 'mkdir -p /var/lib/ambari-agent/data/tmp/jdk cd /var/lib/ambari-agent/data/tmp/jdk tar -xf /var/lib/ambari-agent/data/tmp/AMBARI-artifacts//jdk-7u67-linux-x64.tar.gz ambari-sudo.sh cp -r /var/lib/ambari-agent/data/tmp/jdk/* /usr/jdk64' returned 2. 
[3/4/15, 12:58:13 PM] Alejandro Fernandez: tar: Unexpected EOF in archive ``` Diffs - ambari-common/src/main/python/ambari_commons/constants.py b823b31 ambari-server/src/test/python/stacks/2.0.6/FLUME/test_flume.py b6f4821 ambari-server/src/test/python/stacks/2.0.6/GANGLIA/test_ganglia_monitor.py 396b9d2 ambari-server/src/test/python/stacks/2.0.6/GANGLIA/test_ganglia_server.py 7d0afc7 ambari-server/src/test/python/stacks/2.0.6/HBASE/test_hbase_master.py 36c942e ambari-server/src/test/python/stacks/2.0.6/HBASE/test_hbase_regionserver.py 8a79701 ambari-server/src/test/python/stacks/2.0.6/HDFS/test_datanode.py 54ca083 ambari-server/src/test/python/stacks/2.0.6/HDFS/test_journalnode.py 21cefae ambari-server/src/test/python/stacks/2.0.6/HDFS/test_namenode.py 1e4142f ambari-server/src/test/python/stacks/2.0.6/HDFS/test_service_check.py e24ff8d ambari-server/src/test/python/stacks/2.0.6/HDFS/test_snamenode.py 5bedf5b ambari-server/src/test/python/stacks/2.0.6/HDFS/test_zkfc.py 8aa4871 ambari-server/src/test/python/stacks/2.0.6/HIVE/test_hive_metastore.py 9153a84 ambari-server/src/test/python/stacks/2.0.6/HIVE/test_hive_server.py 5230196 ambari-server/src/test/python/stacks/2.0.6/OOZIE/test_oozie_server.py 8d388ab ambari-server/src/test/python/stacks/2.0.6/hooks/before-INSTALL/test_before_install.py e038ddf ambari-server/src/test/python/stacks/2.1/HIVE/test_hive_metastore.py 990eac8 ambari-server/src/test/python/stacks/2.1/STORM/test_storm_drpc_server.py d5afb42 ambari-server/src/test/python/stacks/2.1/STORM/test_storm_nimbus.py 3ef45ad ambari-server/src/test/python/stacks/2.1/STORM/test_storm_rest_api_service.py 64a4662 ambari-server/src/test/python/stacks/2.1/STORM/test_storm_supervisor.py 26089fb ambari-server/src/test/python/stacks/2.1/STORM/test_storm_supervisor_prod.py 549c5fc ambari-server/src/test/python/stacks/2.1/STORM/test_storm_ui_server.py d23114a ambari-server/src/test/python/stacks/2.2/KNOX/test_knox_gateway.py b1d9888 Diff: 
https://reviews.apache.org/r/31752/diff/ Testing --- Waiting for unit test results. Local tests passed, -- Total run:609 Total errors:0 Total failures:0 OK Thanks, Alejandro Fernandez
[jira] [Commented] (AMBARI-9931) RU: upgrade dialog does not refresh with current tasks w/o browser reload
[ https://issues.apache.org/jira/browse/AMBARI-9931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14348758#comment-14348758 ] Hudson commented on AMBARI-9931: SUCCESS: Integrated in Ambari-trunk-Commit #1955 (See [https://builds.apache.org/job/Ambari-trunk-Commit/1955/]) AMBARI-9931 RU: upgrade dialog does not refresh with current tasks w/o browser reload. (atkach) (atkach: http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=f149fd15cf83c64a28e7ce1e20d3b7b08f159cb5) * ambari-web/app/views/main/admin/stack_upgrade/upgrade_wizard_view.js * ambari-web/app/views/main/admin/stack_upgrade/upgrade_group_view.js * ambari-web/app/templates/main/admin/stack_upgrade/stack_upgrade_wizard.hbs * ambari-web/app/controllers/main/admin/stack_and_upgrade_controller.js * ambari-web/test/views/main/admin/stack_upgrade/upgrade_wizard_view_test.js * ambari-web/app/templates/main/admin/stack_upgrade/upgrade_group.hbs * ambari-web/app/templates/main/admin/stack_upgrade/upgrade_task.hbs * ambari-web/test/controllers/main/admin/stack_and_upgrade_controller_test.js RU: upgrade dialog does not refresh with current tasks w/o browser reload - Key: AMBARI-9931 URL: https://issues.apache.org/jira/browse/AMBARI-9931 Project: Ambari Issue Type: Bug Components: ambari-web Affects Versions: 2.0.0 Reporter: Andrii Tkach Assignee: Andrii Tkach Priority: Critical Fix For: 2.0.0 Attachments: AMBARI-9931.patch Perform upgrade; the dialog goes through the prepare tasks, and I click to proceed. At that point, the % numbers move and I see progress in ambari-server.log, but the UI list of tasks is not updating. It only shows the Prepare task. Also, it looks like things are coming back from the REST API that show progress; the UI is just not updating. I reload my browser, and then it catches up and starts showing tasks. What I notice: top-level tasks don't show; I have to refresh the browser, and then the top-level tasks show and the subs show. 
1) So Prepare Backups shows 2) Once PB is done, the next task does not show until I refresh the browser 3) Then PB + ZooKeeper shows. Once ZK is done, the next task does not show until I refresh the browser 4) Then PB + ZK + Core Masters shows. Once CM is done, the next task does not show until I refresh the browser. I see this for all top-level tasks. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (AMBARI-9942) Improve view URL interaction with ambari-web's location bar
Andrew Onischuk created AMBARI-9942: --- Summary: Improve view URL interaction with ambari-web's location bar Key: AMBARI-9942 URL: https://issues.apache.org/jira/browse/AMBARI-9942 Project: Ambari Issue Type: Bug Reporter: Andrew Onischuk Assignee: Andrew Onischuk Fix For: 2.1.0 Currently views are embedded inside ambari-web via an IFrame. Though this provides a clean separation of ambari-web and view-ui, it raises the issue that the view-ui URLs are completely disconnected from ambari-web's URLs. So if a user wants to bookmark a particular page inside the view, it is not possible, as the location bar is completely owned by ambari-web. Some of the interactions that have to work are: 1. As the user interacts within a view, the ambari-web location bar should be updated (bottom-up) so that the user can copy the location from the location bar and send it to someone. 2. When another user gets the URL and pastes it into the location bar, it should propagate down to the view and show the correct UI (top-down). 3. When a user right-clicks on any link inside the view and copies its location, that location should be accessible to any other user. Another user should get the login screen if need be, following which they should be redirected to the correct UI. Ambari-web uses EmberJS, which relies heavily on the URL's hash. The views, however, are free to choose whatever relative paths they want. Trying to merge the EmberJS hash and view relative paths is problematic in some cases: 1. '=' onwards is chopped off in Firefox 2. Having two '#' hashes in the URL is not standard. Ambari-web needs to provide some help/guidance in better managing view URLs. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
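The two constraints above (no second '#' in the URL, Firefox chopping from '=' onwards) suggest folding the view's relative path into the single Ember hash route. The helper below is purely illustrative and not from any Ambari patch; the `viewPath` parameter name is an assumption:

```python
from urllib.parse import quote, unquote

# Hypothetical sketch: ambari-web owns the location bar, so a view's relative
# path must be folded into the single Ember hash instead of appearing as a
# second '#' fragment (non-standard). Percent-encoding the path also hides
# any '=' characters from the browser's fragment handling.

def embed_view_path(ambari_hash, view_path):
    # e.g. '#/main/views/FILES/1.0/files' + '/tmp/dir?sort=name'
    return ambari_hash + "?viewPath=" + quote(view_path, safe="")

def extract_view_path(location_hash):
    marker = "?viewPath="
    if marker in location_hash:
        base, encoded = location_hash.split(marker, 1)
        return base, unquote(encoded)
    return location_hash, None
```

Round-tripping a bookmarked URL then simply means splitting the hash back into the Ember route (top half of the navigation) and the view path to hand down to the IFrame.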
Re: Review Request 31766: Umask 027 + non-root result in failures during deploy
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/31766/ --- (Updated March 5, 2015, 3:34 p.m.) Review request for Ambari, Dmitro Lisnichenko and Vitalyi Brodetskyi. Bugs: AMBARI-9943 https://issues.apache.org/jira/browse/AMBARI-9943 Repository: ambari Description --- This is a double permission restriction which it seems we haven't tested yet. The most common problem I see across the code is that the hardcoded umask removes some x bits. We didn't really need them while we were running as the root account, but now this becomes a problem in multiple places. Diffs (updated) - ambari-agent/conf/unix/ambari-env.sh bf7ca55 ambari-agent/src/test/python/resource_management/TestDirectoryResource.py fe64400 ambari-agent/src/test/python/resource_management/TestExecuteResource.py 216fd1a ambari-agent/src/test/python/resource_management/TestFileResource.py 28fa610 ambari-agent/src/test/python/resource_management/TestLinkResource.py cdb6061 ambari-agent/src/test/python/resource_management/TestPropertiesFileResource.py bdb64de ambari-agent/src/test/python/resource_management/TestXmlConfigResource.py 4affd31 ambari-common/src/main/python/resource_management/core/providers/system.py ac63b21 ambari-common/src/main/python/resource_management/core/sudo.py ae21f84 ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/hive.py 64bcebc ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/before-INSTALL/scripts/shared_initialization.py 745402a ambari-server/src/test/python/stacks/2.0.6/HIVE/test_hive_metastore.py 9153a84 ambari-server/src/test/python/stacks/2.0.6/HIVE/test_hive_server.py 5230196 ambari-server/src/test/python/stacks/2.0.6/hooks/before-INSTALL/test_before_install.py e038ddf ambari-server/src/test/python/stacks/2.1/HIVE/test_hive_metastore.py 990eac8 Diff: https://reviews.apache.org/r/31766/diff/ Testing --- mvn clean test Thanks, Andrew Onischuk
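To see why the restrictive umask bites: the umask is masked out of every requested creation mode, so under 027 the group-write bit and all 'other' bits silently vanish from newly created files and directories. The sketch below is illustrative only (assumed, not taken from the patch); the fix presumably works by setting explicit modes in the resource providers rather than relying on creation defaults:

```python
import os
import stat
import tempfile

# Under umask 027, permission bits requested at creation time are silently
# stripped: result = requested & ~umask.

def effective_mode(requested, umask):
    return requested & ~umask

assert effective_mode(0o777, 0o027) == 0o750  # dir: 'other' loses rx
assert effective_mode(0o666, 0o027) == 0o640  # file: group loses w, other loses rw

# An explicit chmod after creation restores the intended bits regardless
# of the process umask (illustrative of the explicit-mode approach):
old = os.umask(0o027)
d = tempfile.mkdtemp()
try:
    os.chmod(d, 0o755)
    assert stat.S_IMODE(os.stat(d).st_mode) == 0o755
finally:
    os.umask(old)
    os.rmdir(d)
```

This is why the problem only surfaced with a non-root agent: root ignores most permission checks, so the missing x bits went unnoticed until then.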
Re: Review Request 31766: Umask 027 + non-root result in failures during deploy
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/31766/#review75338 --- Ship it! Ship It! - Dmitro Lisnichenko On March 5, 2015, 3:12 p.m., Andrew Onischuk wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/31766/ --- (Updated March 5, 2015, 3:12 p.m.) Review request for Ambari, Dmitro Lisnichenko and Vitalyi Brodetskyi. Bugs: AMBARI-9943 https://issues.apache.org/jira/browse/AMBARI-9943 Repository: ambari Description --- This is a double permission restriction which it seems we haven't tested yet. The most common problem I see across the code is that the hardcoded umask removes some x bits. We didn't really need them while we were running as the root account, but now this becomes a problem in multiple places. Diffs - ambari-agent/conf/unix/ambari-env.sh bf7ca55 ambari-agent/src/test/python/resource_management/TestDirectoryResource.py fe64400 ambari-agent/src/test/python/resource_management/TestExecuteResource.py 216fd1a ambari-agent/src/test/python/resource_management/TestFileResource.py 28fa610 ambari-agent/src/test/python/resource_management/TestLinkResource.py cdb6061 ambari-agent/src/test/python/resource_management/TestPropertiesFileResource.py bdb64de ambari-agent/src/test/python/resource_management/TestXmlConfigResource.py 4affd31 ambari-common/src/main/python/resource_management/core/providers/system.py ac63b21 ambari-common/src/main/python/resource_management/core/sudo.py ae21f84 ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/hive.py 64bcebc ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/before-INSTALL/scripts/shared_initialization.py 745402a ambari-server/src/test/python/stacks/2.0.6/hooks/before-INSTALL/test_before_install.py e038ddf ambari-server/src/test/python/stacks/2.1/HIVE/test_hive_metastore.py 990eac8 Diff: https://reviews.apache.org/r/31766/diff/ Testing --- mvn clean test Thanks, Andrew Onischuk
[jira] [Created] (AMBARI-9944) RU: web should not display older versions by default
Andrii Babiichuk created AMBARI-9944: Summary: RU: web should not display older versions by default Key: AMBARI-9944 URL: https://issues.apache.org/jira/browse/AMBARI-9944 Project: Ambari Issue Type: Bug Components: ambari-web Affects Versions: 2.0.0 Reporter: Andrii Babiichuk Assignee: Andrii Babiichuk Priority: Critical Fix For: 2.0.0 Ambari Web should not display older versions by default. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-9944) RU: web should not display older versions by default
[ https://issues.apache.org/jira/browse/AMBARI-9944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrii Babiichuk updated AMBARI-9944: - Attachment: AMBARI-9944.patch RU: web should not display older versions by default Key: AMBARI-9944 URL: https://issues.apache.org/jira/browse/AMBARI-9944 Project: Ambari Issue Type: Bug Components: ambari-web Affects Versions: 2.0.0 Reporter: Andrii Babiichuk Assignee: Andrii Babiichuk Priority: Critical Fix For: 2.0.0 Attachments: AMBARI-9944.patch Ambari Web should not display older versions by default. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (AMBARI-9383) Configs: Ambari support for HBase bucketcache
[ https://issues.apache.org/jira/browse/AMBARI-9383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmytro Sen resolved AMBARI-9383. Resolution: Fixed Committed to trunk Configs: Ambari support for HBase bucketcache - Key: AMBARI-9383 URL: https://issues.apache.org/jira/browse/AMBARI-9383 Project: Ambari Issue Type: Improvement Components: ambari-server Affects Versions: 2.0.0 Reporter: Dmytro Sen Assignee: Dmytro Sen Fix For: 2.0.0 Off-heap caches in HBase have been implemented. Ambari should expose the necessary configurations for a user to enable this feature. We're interested only in the BucketCache off-heap mode. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-8559) Deploy Config without service restart
[ https://issues.apache.org/jira/browse/AMBARI-8559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sekhon updated AMBARI-8559: Component/s: ambari-web Deploy Config without service restart - Key: AMBARI-8559 URL: https://issues.apache.org/jira/browse/AMBARI-8559 Project: Ambari Issue Type: Improvement Components: ambari-web Affects Versions: 1.6.1, 1.7.0 Environment: HDP 2.1 HDP 2.2 Reporter: Hari Sekhon Feature request for a Deploy Config button - without doing a service restart. Not all config changes require a service restart (eg. for a client side option change or when changing NFS gateway setup there is no point in bouncing all the NameNode and DataNodes). It's more work to figure out what settings affect which components (and probably impractical), but it's sufficient to provide Deploy Config and leave the service marked with stale config for restart at a later time. This is analogous to Cloudera Manager's Deploy Client Configuration button. Regards, Hari Sekhon (ex-Cloudera) http://www.linkedin.com/in/harisekhon -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-6706) Rolling restarts for HA Masters eg HDFS NameNodes
[ https://issues.apache.org/jira/browse/AMBARI-6706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sekhon updated AMBARI-6706: Component/s: ambari-web Rolling restarts for HA Masters eg HDFS NameNodes - Key: AMBARI-6706 URL: https://issues.apache.org/jira/browse/AMBARI-6706 Project: Ambari Issue Type: New Feature Components: ambari-web Affects Versions: 1.6.1, 1.7.0 Environment: HDP 2.1 HDP 2.2 Reporter: Hari Sekhon Rolling restarts currently only work for slaves such as DataNodes in the Web UI when doing Service Actions - Restart DataNodes. When doing a Restart All it doesn't give any option to do a rolling restart across all components including masters. I usually configure HDFS NameNode HA, but this means that in Ambari there is downtime during reconfiguration even with NN HA enabled, since it doesn't do rolling restarts of the NameNodes in order to keep the cluster up. This is needed to actually make the service really HA, including HA during routine maintenance and reconfiguration. Maybe I could work around this by shutting down master components one by one manually, but it would be nice if Ambari could support this directly. Regards, Hari Sekhon http://www.linkedin.com/in/harisekhon -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-9383) Configs: Ambari support for HBase bucketcache
[ https://issues.apache.org/jira/browse/AMBARI-9383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmytro Sen updated AMBARI-9383: --- Fix Version/s: (was: 2.0.0) 2.1.0 Configs: Ambari support for HBase bucketcache - Key: AMBARI-9383 URL: https://issues.apache.org/jira/browse/AMBARI-9383 Project: Ambari Issue Type: Improvement Components: ambari-server Affects Versions: 2.0.0 Reporter: Dmytro Sen Assignee: Dmytro Sen Fix For: 2.1.0 Off-heap caches in HBase have been implemented. Ambari should expose the necessary configurations for a user to enable this feature. We're interested only in the BucketCache off-heap mode. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (AMBARI-9946) Host Actions should allow service management of other components than just DataNode/NodeManager
Hari Sekhon created AMBARI-9946: --- Summary: Host Actions should allow service management of other components than just DataNode/NodeManager Key: AMBARI-9946 URL: https://issues.apache.org/jira/browse/AMBARI-9946 Project: Ambari Issue Type: New Feature Components: ambari-web Affects Versions: 1.7.0 Environment: HDP 2.2 Reporter: Hari Sekhon Priority: Minor Feature request for Node Actions for all services. Currently there is no way to restart all of one type of component across all nodes in the web UI via Node Actions. It only gives the option to restart DataNodes/NodeManagers and no other components. Regards, Hari Sekhon http://www.linkedin.com/in/harisekhon -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (AMBARI-9943) Umask 027 + non-root result in failures during deploy
[ https://issues.apache.org/jira/browse/AMBARI-9943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Onischuk resolved AMBARI-9943. - Resolution: Fixed Committed to trunk and branch-2.0.0 Umask 027 + non-root result in failures during deploy - Key: AMBARI-9943 URL: https://issues.apache.org/jira/browse/AMBARI-9943 Project: Ambari Issue Type: Bug Reporter: Andrew Onischuk Assignee: Andrew Onischuk Fix For: 2.0.0 This is a double permission restriction which it seems we haven't tested yet. The most common problem I see across the code is that the hardcoded umask removes some x bits. We didn't really need them while we were running as the root account, but now this becomes a problem in multiple places. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Review Request 31767: Oozie alert after ambari upgrade 1.7.0-2.0.0 secured cluster
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/31767/ --- Review request for Ambari. Bugs: AMBARI-9947 https://issues.apache.org/jira/browse/AMBARI-9947 Repository: ambari Description --- After upgrading a secured cluster and restarting all services, Oozie has a warning alert: Oozie Server Web UI HTTP 500 response in 0.000 seconds STR: 1. Installed ambari 1.7.0 on 3-node cluster, all services 2. Enable security 3. Upgrade ambari to 2.0.0 4. Stop all services 5. Start all services Diffs - ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie.py 8bd697a ambari-server/src/test/python/stacks/2.0.6/OOZIE/test_oozie_server.py 8d388ab Diff: https://reviews.apache.org/r/31767/diff/ Testing --- all tests passed Thanks, Dmytro Sen
Re: Review Request 31767: Oozie alert after ambari upgrade 1.7.0-2.0.0 secured cluster
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/31767/#review75340 --- Ship it! Ship It! - Myroslav Papirkovskyy On March 5, 2015, 6:06 p.m., Dmytro Sen wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/31767/ --- (Updated March 5, 2015, 6:06 p.m.) Review request for Ambari, Dmitro Lisnichenko, Myroslav Papirkovskyy, and Vitalyi Brodetskyi. Bugs: AMBARI-9947 https://issues.apache.org/jira/browse/AMBARI-9947 Repository: ambari Description --- After upgrading a secured cluster and restarting all services, Oozie has a warning alert: Oozie Server Web UI HTTP 500 response in 0.000 seconds STR: 1. Installed ambari 1.7.0 on 3-node cluster, all services 2. Enable security 3. Upgrade ambari to 2.0.0 4. Stop all services 5. Start all services Diffs - ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie.py 8bd697a ambari-server/src/test/python/stacks/2.0.6/OOZIE/test_oozie_server.py 8d388ab Diff: https://reviews.apache.org/r/31767/diff/ Testing --- all tests passed Thanks, Dmytro Sen
[jira] [Commented] (AMBARI-9944) RU: web should not display older versions by default
[ https://issues.apache.org/jira/browse/AMBARI-9944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14348872#comment-14348872 ] Oleg Nechiporenko commented on AMBARI-9944: --- +1 for patch. RU: web should not display older versions by default Key: AMBARI-9944 URL: https://issues.apache.org/jira/browse/AMBARI-9944 Project: Ambari Issue Type: Bug Components: ambari-web Affects Versions: 2.0.0 Reporter: Andrii Babiichuk Assignee: Andrii Babiichuk Priority: Critical Fix For: 2.0.0 Attachments: AMBARI-9944.patch Ambari Web should not display older versions by default. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (AMBARI-9945) Rolling restart including masters
Hari Sekhon created AMBARI-9945: --- Summary: Rolling restart including masters Key: AMBARI-9945 URL: https://issues.apache.org/jira/browse/AMBARI-9945 Project: Ambari Issue Type: New Feature Affects Versions: 1.7.0 Environment: HDP 2.2 Reporter: Hari Sekhon Rolling restarts are still only applied to slave roles; this needs to work at the service level and cluster level and include HA masters such as Resource Managers. Regards, Hari Sekhon http://www.linkedin.com/in/harisekhon -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-6706) Rolling restarts for HA Masters eg HDFS NameNodes
[ https://issues.apache.org/jira/browse/AMBARI-6706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sekhon updated AMBARI-6706: Environment: HDP 2.1 HDP 2.2 (was: HDP 2.1) Rolling restarts for HA Masters eg HDFS NameNodes - Key: AMBARI-6706 URL: https://issues.apache.org/jira/browse/AMBARI-6706 Project: Ambari Issue Type: Improvement Affects Versions: 1.6.1, 1.7.0 Environment: HDP 2.1 HDP 2.2 Reporter: Hari Sekhon Rolling restarts currently only work for slaves such as DataNodes in the Web UI when doing Service Actions - Restart DataNodes. When doing a Restart All it doesn't give any option to do a rolling restart across all components including masters. I usually configure HDFS NameNode HA, but this means that in Ambari there is downtime during reconfiguration even with NN HA enabled, since it doesn't do rolling restarts of the NameNodes in order to keep the cluster up. This is needed to actually make the service really HA, including HA during routine maintenance and reconfiguration. Maybe I could work around this by shutting down master components one by one manually, but it would be nice if Ambari could support this directly. Regards, Hari Sekhon http://www.linkedin.com/in/harisekhon -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-6706) Rolling restarts for HA Masters eg HDFS NameNodes
[ https://issues.apache.org/jira/browse/AMBARI-6706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sekhon updated AMBARI-6706: Issue Type: New Feature (was: Improvement) Rolling restarts for HA Masters eg HDFS NameNodes - Key: AMBARI-6706 URL: https://issues.apache.org/jira/browse/AMBARI-6706 Project: Ambari Issue Type: New Feature Affects Versions: 1.6.1, 1.7.0 Environment: HDP 2.1 HDP 2.2 Reporter: Hari Sekhon Rolling restarts currently only work for slaves such as DataNodes in the Web UI when doing Service Actions - Restart DataNodes. When doing a Restart All it doesn't give any option to do a rolling restart across all components including masters. I usually configure HDFS NameNode HA, but this means that in Ambari there is downtime during reconfiguration even with NN HA enabled, since it doesn't do rolling restarts of the NameNodes in order to keep the cluster up. This is needed to actually make the service really HA, including HA during routine maintenance and reconfiguration. Maybe I could work around this by shutting down master components one by one manually, but it would be nice if Ambari could support this directly. Regards, Hari Sekhon http://www.linkedin.com/in/harisekhon -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (AMBARI-9945) Rolling restart including masters
[ https://issues.apache.org/jira/browse/AMBARI-9945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sekhon resolved AMBARI-9945. - Resolution: Duplicate AMBARI-6706, also raised by me, actually covers this. Rolling restart including masters - Key: AMBARI-9945 URL: https://issues.apache.org/jira/browse/AMBARI-9945 Project: Ambari Issue Type: New Feature Affects Versions: 1.7.0 Environment: HDP 2.2 Reporter: Hari Sekhon Rolling restarts are still only applied to slave roles; this needs to work at the service level and cluster level and include HA masters such as Resource Managers. Regards, Hari Sekhon http://www.linkedin.com/in/harisekhon -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-8559) Deploy Config without service restart
[ https://issues.apache.org/jira/browse/AMBARI-8559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sekhon updated AMBARI-8559: Priority: Major (was: Minor) Deploy Config without service restart - Key: AMBARI-8559 URL: https://issues.apache.org/jira/browse/AMBARI-8559 Project: Ambari Issue Type: Improvement Affects Versions: 1.6.1 Environment: HDP 2.1 Reporter: Hari Sekhon Feature request for a Deploy Config button - without doing a service restart. Not all config changes require a service restart (eg. for a client side option change or when changing NFS gateway setup there is no point in bouncing all the NameNode and DataNodes). It's more work to figure out what settings affect which components (and probably impractical), but it's sufficient to provide Deploy Config and leave the service marked with stale config for restart at a later time. This is analogous to Cloudera Manager's Deploy Client Configuration button. Regards, Hari Sekhon (ex-Cloudera) http://www.linkedin.com/in/harisekhon -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-8559) Deploy Config without service restart
[ https://issues.apache.org/jira/browse/AMBARI-8559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sekhon updated AMBARI-8559: Environment: HDP 2.1 HDP 2.2 (was: HDP 2.1) Deploy Config without service restart - Key: AMBARI-8559 URL: https://issues.apache.org/jira/browse/AMBARI-8559 Project: Ambari Issue Type: Improvement Affects Versions: 1.6.1, 1.7.0 Environment: HDP 2.1 HDP 2.2 Reporter: Hari Sekhon Feature request for a Deploy Config button - without doing a service restart. Not all config changes require a service restart (eg. for a client side option change or when changing NFS gateway setup there is no point in bouncing all the NameNode and DataNodes). It's more work to figure out what settings affect which components (and probably impractical), but it's sufficient to provide Deploy Config and leave the service marked with stale config for restart at a later time. This is analogous to Cloudera Manager's Deploy Client Configuration button. Regards, Hari Sekhon (ex-Cloudera) http://www.linkedin.com/in/harisekhon -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-6706) Rolling restarts for HA Masters eg HDFS NameNodes
[ https://issues.apache.org/jira/browse/AMBARI-6706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sekhon updated AMBARI-6706: Affects Version/s: 1.7.0 Rolling restarts for HA Masters eg HDFS NameNodes - Key: AMBARI-6706 URL: https://issues.apache.org/jira/browse/AMBARI-6706 Project: Ambari Issue Type: Improvement Affects Versions: 1.6.1, 1.7.0 Environment: HDP 2.1 HDP 2.2 Reporter: Hari Sekhon Rolling restarts currently only work for slaves such as DataNodes in the Web UI when doing Service Actions - Restart DataNodes. When doing a Restart All it doesn't give any option to do a rolling restart across all components including masters. I usually configure HDFS NameNode HA, but this means that in Ambari there is downtime during reconfiguration even with NN HA enabled, since it doesn't do rolling restarts of the NameNodes in order to keep the cluster up. This is needed to actually make the service really HA, including HA during routine maintenance and reconfiguration. Maybe I could work around this by shutting down master components one by one manually, but it would be nice if Ambari could support this directly. Regards, Hari Sekhon http://www.linkedin.com/in/harisekhon -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: Review Request 31767: Oozie alert after ambari upgrade 1.7.0-2.0.0 secured cluster
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/31767/#review75341 --- Ship it! Ship It! - Dmitro Lisnichenko On March 5, 2015, 4:06 p.m., Dmytro Sen wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/31767/ --- (Updated March 5, 2015, 4:06 p.m.) Review request for Ambari, Dmitro Lisnichenko, Myroslav Papirkovskyy, and Vitalyi Brodetskyi. Bugs: AMBARI-9947 https://issues.apache.org/jira/browse/AMBARI-9947 Repository: ambari Description --- After upgrading a secured cluster and restarting all services, Oozie has a warning alert: Oozie Server Web UI HTTP 500 response in 0.000 seconds STR: 1. Installed ambari 1.7.0 on 3-node cluster, all services 2. Enable security 3. Upgrade ambari to 2.0.0 4. Stop all services 5. Start all services Diffs - ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie.py 8bd697a ambari-server/src/test/python/stacks/2.0.6/OOZIE/test_oozie_server.py 8d388ab Diff: https://reviews.apache.org/r/31767/diff/ Testing --- all tests passed Thanks, Dmytro Sen
[jira] [Created] (AMBARI-9948) Oozie Service Check fails after upgrade secured cluster 1.6.1-2.0.0
Vitaly Brodetskyi created AMBARI-9948: - Summary: Oozie Service Check fails after upgrade secured cluster 1.6.1-2.0.0 Key: AMBARI-9948 URL: https://issues.apache.org/jira/browse/AMBARI-9948 Project: Ambari Issue Type: Bug Components: ambari-server Affects Versions: 2.0.0 Reporter: Vitaly Brodetskyi Assignee: Vitaly Brodetskyi Priority: Critical Fix For: 2.0.0 STR: 1. Install Ambari 1.6.1, HDP 2.1 2. Enable security 3. Upgrade to Ambari 2.0.0 4. Execute Kerberos Wizard. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (AMBARI-9939) RU - Service Check group to include all services with a service_check script
[ https://issues.apache.org/jira/browse/AMBARI-9939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alejandro Fernandez updated AMBARI-9939: Attachment: AMBARI-9939.patch RU - Service Check group to include all services with a service_check script Key: AMBARI-9939 URL: https://issues.apache.org/jira/browse/AMBARI-9939 Project: Ambari Issue Type: Bug Components: ambari-server Affects Versions: 2.0.0 Reporter: Alejandro Fernandez Assignee: Alejandro Fernandez Labels: rolling_upgrade Fix For: 2.0.0 Attachments: AMBARI-9939.patch Installed a minimal 3-node cluster with HDFS, MR, YARN, Pig, Tez. Performed an RU. The expected result is for the last service check to be run on all components. However, it skipped the Pig Service Check. http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8
{code}
{
  "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8",
  "UpgradeGroup" : {
    "completed_task_count" : 4,
    "group_id" : 8,
    "in_progress_task_count" : 0,
    "name" : "SERVICE_CHECK",
    "progress_percent" : 100.0,
    "request_id" : 32,
    "status" : "COMPLETED",
    "title" : "All Service Checks",
    "total_task_count" : 4
  },
  "upgrade_items" : [
    { "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items/47",
      "UpgradeItem" : { "cluster_name" : "c1", "group_id" : 8, "request_id" : 32, "stage_id" : 47 } },
    { "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items/48",
      "UpgradeItem" : { "cluster_name" : "c1", "group_id" : 8, "request_id" : 32, "stage_id" : 48 } },
    { "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items/49",
      "UpgradeItem" : { "cluster_name" : "c1", "group_id" : 8, "request_id" : 32, "stage_id" : 49 } },
    { "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items/50",
      "UpgradeItem" : { "cluster_name" : "c1", "group_id" : 8, "request_id" : 32, "stage_id" : 50 } }
  ]
}
{code}
http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items?fields=UpgradeItem/text
{code}
{
  "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items?fields=UpgradeItem/text",
  "items" : [
    { "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items/47",
      "UpgradeItem" : { "cluster_name" : "c1", "group_id" : 8, "request_id" : 32, "stage_id" : 47, "text" : "Service Check HDFS" } },
    { "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items/48",
      "UpgradeItem" : { "cluster_name" : "c1", "group_id" : 8, "request_id" : 32, "stage_id" : 48, "text" : "Service Check YARN" } },
    { "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items/49",
      "UpgradeItem" : { "cluster_name" : "c1", "group_id" : 8, "request_id" : 32, "stage_id" : 49, "text" : "Service Check ZooKeeper" } },
    { "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items/50",
      "UpgradeItem" : { "cluster_name" : "c1", "group_id" : 8, "request_id" : 32, "stage_id" : 50, "text" : "Service Check MapReduce2" } }
  ]
}
{code}
The Upgrade Pack contains:
{code}
<group name="SERVICE_CHECK" title="All Service Checks" xsi:type="service-check">
  <skippable>true</skippable>
  <direction>UPGRADE</direction>
  <priority>
    <service>HDFS</service>
    <service>YARN</service>
    <service>HBASE</service>
  </priority>
</group>
{code}
Because the Pig service check was not run, the new Tez tarball was not copied to HDFS. The underlying issue is that a service is not added to the Service Check group if it is a clientOnly service. However, Pig is clientOnly but still has a service check python script. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
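The fix described above boils down to changing one predicate: membership in the SERVICE_CHECK group should key off the presence of a service_check script, not off whether the service is clientOnly. A minimal sketch (the field names here are illustrative, not Ambari's actual data model):

```python
# Illustrative sketch of the grouping rule. A service belongs in the
# SERVICE_CHECK group whenever it ships a service_check script, even if it
# is client-only, as Pig is.

def services_for_service_check(services):
    return [s["name"] for s in services if s["has_service_check"]]

cluster = [
    {"name": "HDFS", "client_only": False, "has_service_check": True},
    {"name": "PIG",  "client_only": True,  "has_service_check": True},
    {"name": "TEZ",  "client_only": True,  "has_service_check": False},
]

# The old rule excluded every client-only service and therefore skipped Pig:
old_rule = [s["name"] for s in cluster
            if not s["client_only"] and s["has_service_check"]]
assert old_rule == ["HDFS"]

# The fixed rule keys off the service_check script alone, so Pig is included:
assert services_for_service_check(cluster) == ["HDFS", "PIG"]
```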
[jira] [Commented] (AMBARI-9921) Spark Thriftserver fails to initialize correctly in secure Ambari cluster
[ https://issues.apache.org/jira/browse/AMBARI-9921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14349193#comment-14349193 ] Hudson commented on AMBARI-9921: ABORTED: Integrated in Ambari-branch-2.0.0 #11 (See [https://builds.apache.org/job/Ambari-branch-2.0.0/11/]) AMBARI-9921. Spark Thriftserver fails to initialize correctly in secure Ambari cluster (Gautam Borad via smohanty) (smohanty: http://git-wip-us.apache.org/repos/asf?p=ambari.gita=commith=3662bd41fe6f197a4ac083ddbf9bd9bc159c1bc2) * ambari-server/src/main/resources/common-services/SPARK/1.2.0.2.2/package/scripts/setup_spark.py * ambari-server/src/main/resources/common-services/SPARK/1.2.0.2.2/package/scripts/params.py Spark Thriftserver fails to initialize correctly in secure Ambari cluster - Key: AMBARI-9921 URL: https://issues.apache.org/jira/browse/AMBARI-9921 Project: Ambari Issue Type: Bug Components: ambari-server Affects Versions: 2.0.0 Reporter: Gautam Borad Assignee: Gautam Borad Fix For: 2.0.0 Attachments: AMBARI-9921-Spark-Thriftserver-fails-to-initialize-c.patch Spark Thriftserver fails to initialize correctly on a secure cluster. To fix it, the following change needs to be made to $SPARK_CONF/hive-site.xml:
{noformat}
<property>
  <name>hive.security.authorization.enabled</name>
  <value>false</value>
</property>
{noformat}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
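For illustration, a property block like the one above can be generated programmatically. This sketch uses Python's standard xml.etree rather than Ambari's own config-generation code in setup_spark.py/params.py, so treat it as a stand-in, not the actual fix:

```python
import xml.etree.ElementTree as ET

# Build a Hadoop-style <property><name/><value/></property> element, the
# shape every *-site.xml entry takes.

def make_property(name, value):
    prop = ET.Element("property")
    ET.SubElement(prop, "name").text = name
    ET.SubElement(prop, "value").text = value
    return prop

xml = ET.tostring(
    make_property("hive.security.authorization.enabled", "false"),
    encoding="unicode",
)
assert "<name>hive.security.authorization.enabled</name>" in xml
assert "<value>false</value>" in xml
```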
[jira] [Updated] (AMBARI-9948) Oozie Service Check fails after upgrade secured cluster 1.6.1-2.0.0
[ https://issues.apache.org/jira/browse/AMBARI-9948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitaly Brodetskyi updated AMBARI-9948: -- Attachment: AMBARI-9948.patch Oozie Service Check fails after upgrade secured cluster 1.6.1-2.0.0 Key: AMBARI-9948 URL: https://issues.apache.org/jira/browse/AMBARI-9948 Project: Ambari Issue Type: Bug Components: ambari-server Affects Versions: 2.0.0 Reporter: Vitaly Brodetskyi Assignee: Vitaly Brodetskyi Priority: Critical Fix For: 2.0.0 Attachments: AMBARI-9948.patch STR: 1. Install Ambari 1.6.1, HDP 2.1 2. Enable security 3. Upgrade to Ambari 2.0.0 4. Execute Kerberos Wizard. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: Review Request 31752: ambari-sudo.sh needs full path, install fails if JDK is not installed
On March 5, 2015, 1:15 p.m., Nate Cole wrote: Seems like we should be abstracting somehow - maybe with an ExecuteSudo or something that takes the same exact arguments as Execute, but does all this sudo voodoo. Jonathan Hurley wrote: Agreed; why is the separate script necessary? I even thought that the existing Execute resource took a `sudo=true` parameter. Andrew Onischuk wrote: Guys, we have Execute with a sudo=True argument. This thing is at a lower level, below that Execute. Andrew Onischuk wrote: What I'm talking about is that the command in the bug message fails due to a tar error; it doesn't even get to execute ambari-sudo.sh: tar -xf ... ambari-sudo.sh ... tar: Unexpected EOF in archive The first one fails, so I don't understand how this fix can fix the problem. Also, I don't see much purpose in doing this. Ambari-sudo is always in $PATH, which is set in ambari-env.sh and is inherited by the child processes. Even if this doesn't fix anything, I think it looks nicer with the short path in task logs. Jonathan Hurley wrote: In my dev environment, simply copying the new ambari-sudo.sh to the right spot worked, so the path being set is correct. I initially commented that it might be an RPM issue if ambari-sudo.sh can't be found. But I think that Andrew is right that this doesn't seem like a missing script issue. Andrew Onischuk wrote: After some googling, it seems the error happens due to a corrupt archive file. I cannot reproduce that failure myself. Maybe just a one-time issue with a corrupt download of the JDK? I'll abandon this patch. Yet it passed on 375 hosts but failed on the remaining 25. - Alejandro --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/31752/#review75325 --- On March 5, 2015, 12:59 a.m., Alejandro Fernandez wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/31752/ --- (Updated March 5, 2015, 12:59 a.m.) Review request for Ambari, Andrew Onischuk, Jonathan Hurley, Nate Cole, and Sid Wagle.
Bugs: AMBARI-9938 https://issues.apache.org/jira/browse/AMBARI-9938 Repository: ambari Description --- When HDP is installed on a host without JDK, the before-install hook will attempt to install JDK if it is not present. However, this fails because ambari-sudo.sh needs the fully qualified path to the script. ``` Execution of 'mkdir -p /var/lib/ambari-agent/data/tmp/jdk cd /var/lib/ambari-agent/data/tmp/jdk tar -xf /var/lib/ambari-agent/data/tmp/AMBARI-artifacts//jdk-7u67-linux-x64.tar.gz ambari-sudo.sh cp -r /var/lib/ambari-agent/data/tmp/jdk/* /usr/jdk64' returned 2. [3/4/15, 12:58:13 PM] Alejandro Fernandez: tar: Unexpected EOF in archive ``` Diffs - ambari-common/src/main/python/ambari_commons/constants.py b823b31 ambari-server/src/test/python/stacks/2.0.6/FLUME/test_flume.py b6f4821 ambari-server/src/test/python/stacks/2.0.6/GANGLIA/test_ganglia_monitor.py 396b9d2 ambari-server/src/test/python/stacks/2.0.6/GANGLIA/test_ganglia_server.py 7d0afc7 ambari-server/src/test/python/stacks/2.0.6/HBASE/test_hbase_master.py 36c942e ambari-server/src/test/python/stacks/2.0.6/HBASE/test_hbase_regionserver.py 8a79701 ambari-server/src/test/python/stacks/2.0.6/HDFS/test_datanode.py 54ca083 ambari-server/src/test/python/stacks/2.0.6/HDFS/test_journalnode.py 21cefae ambari-server/src/test/python/stacks/2.0.6/HDFS/test_namenode.py 1e4142f ambari-server/src/test/python/stacks/2.0.6/HDFS/test_service_check.py e24ff8d ambari-server/src/test/python/stacks/2.0.6/HDFS/test_snamenode.py 5bedf5b ambari-server/src/test/python/stacks/2.0.6/HDFS/test_zkfc.py 8aa4871 ambari-server/src/test/python/stacks/2.0.6/HIVE/test_hive_metastore.py 9153a84 ambari-server/src/test/python/stacks/2.0.6/HIVE/test_hive_server.py 5230196 ambari-server/src/test/python/stacks/2.0.6/OOZIE/test_oozie_server.py 8d388ab ambari-server/src/test/python/stacks/2.0.6/hooks/before-INSTALL/test_before_install.py e038ddf ambari-server/src/test/python/stacks/2.1/HIVE/test_hive_metastore.py 990eac8 
ambari-server/src/test/python/stacks/2.1/STORM/test_storm_drpc_server.py d5afb42 ambari-server/src/test/python/stacks/2.1/STORM/test_storm_nimbus.py 3ef45ad ambari-server/src/test/python/stacks/2.1/STORM/test_storm_rest_api_service.py 64a4662 ambari-server/src/test/python/stacks/2.1/STORM/test_storm_supervisor.py 26089fb ambari-server/src/test/python/stacks/2.1/STORM/test_storm_supervisor_prod.py 549c5fc
[jira] [Updated] (AMBARI-9939) RU - Service Check group to include all services with a service_check script
[ https://issues.apache.org/jira/browse/AMBARI-9939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alejandro Fernandez updated AMBARI-9939: Attachment: (was: AMBARI-9939.patch) RU - Service Check group to include all services with a service_check script Key: AMBARI-9939 URL: https://issues.apache.org/jira/browse/AMBARI-9939 Project: Ambari Issue Type: Bug Components: ambari-server Affects Versions: 2.0.0 Reporter: Alejandro Fernandez Assignee: Alejandro Fernandez Labels: rolling_upgrade Fix For: 2.0.0 Installed a minimal 3-node cluster with HDFS, MR, YARN, Pig, Tez. Performed an RU. Expected result is for the last service check to be run on all components. However, it skipped the Pig Service Check. http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8
{code}
{
  "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8",
  "UpgradeGroup" : {
    "completed_task_count" : 4,
    "group_id" : 8,
    "in_progress_task_count" : 0,
    "name" : "SERVICE_CHECK",
    "progress_percent" : 100.0,
    "request_id" : 32,
    "status" : "COMPLETED",
    "title" : "All Service Checks",
    "total_task_count" : 4
  },
  "upgrade_items" : [
    { "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items/47",
      "UpgradeItem" : { "cluster_name" : "c1", "group_id" : 8, "request_id" : 32, "stage_id" : 47 } },
    { "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items/48",
      "UpgradeItem" : { "cluster_name" : "c1", "group_id" : 8, "request_id" : 32, "stage_id" : 48 } },
    { "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items/49",
      "UpgradeItem" : { "cluster_name" : "c1", "group_id" : 8, "request_id" : 32, "stage_id" : 49 } },
    { "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items/50",
      "UpgradeItem" : { "cluster_name" : "c1", "group_id" : 8, "request_id" : 32, "stage_id" : 50 } }
  ]
}
{code}
http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items?fields=UpgradeItem/text
{code}
{
  "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items?fields=UpgradeItem/text",
  "items" : [
    { "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items/47",
      "UpgradeItem" : { "cluster_name" : "c1", "group_id" : 8, "request_id" : 32, "stage_id" : 47, "text" : "Service Check HDFS" } },
    { "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items/48",
      "UpgradeItem" : { "cluster_name" : "c1", "group_id" : 8, "request_id" : 32, "stage_id" : 48, "text" : "Service Check YARN" } },
    { "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items/49",
      "UpgradeItem" : { "cluster_name" : "c1", "group_id" : 8, "request_id" : 32, "stage_id" : 49, "text" : "Service Check ZooKeeper" } },
    { "href" : "http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/upgrades/32/upgrade_groups/8/upgrade_items/50",
      "UpgradeItem" : { "cluster_name" : "c1", "group_id" : 8, "request_id" : 32, "stage_id" : 50, "text" : "Service Check MapReduce2" } }
  ]
}
{code}
The Upgrade Pack contains:
{code}
<group name="SERVICE_CHECK" title="All Service Checks" xsi:type="service-check">
  <skippable>true</skippable>
  <direction>UPGRADE</direction>
  <priority>
    <service>HDFS</service>
    <service>YARN</service>
    <service>HBASE</service>
  </priority>
</group>
{code}
Because the Pig service check was not run, the new Tez tarball was not copied to HDFS. The underlying issue is that a service is not added to the Service Check group if it is a client-only service; however, Pig is client-only but still has a service check Python script. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
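The selection rule the issue calls for can be sketched in a few lines. This is a hypothetical Python model only (Ambari's real implementation is server-side Java, and the field names here are invented for illustration): a service belongs in the SERVICE_CHECK group whenever it ships a service_check script, regardless of whether it is client-only.

```python
def services_for_service_check(services):
    """Pick services for the post-upgrade SERVICE_CHECK group.

    The deciding factor is whether a service_check script exists,
    not whether the service is client-only (the old, buggy filter).
    """
    return [svc["name"] for svc in services if svc["has_service_check_script"]]

services = [
    # PIG is client-only but ships a service check script, so it must be kept.
    {"name": "HDFS", "client_only": False, "has_service_check_script": True},
    {"name": "PIG",  "client_only": True,  "has_service_check_script": True},
    {"name": "TEZ",  "client_only": True,  "has_service_check_script": False},
]
print(services_for_service_check(services))  # ['HDFS', 'PIG']
```

Under the old client-only filter, PIG would have been dropped here and its check (and the Tez tarball copy it triggers) skipped.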
[jira] [Resolved] (AMBARI-9938) ambari-sudo.sh needs full path, install fails if JDK is not installed
[ https://issues.apache.org/jira/browse/AMBARI-9938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alejandro Fernandez resolved AMBARI-9938. - Resolution: Invalid Turns out this is an environment issue: the true cause is that the untar command is failing. No code changes are required. ambari-sudo.sh needs full path, install fails if JDK is not installed - Key: AMBARI-9938 URL: https://issues.apache.org/jira/browse/AMBARI-9938 Project: Ambari Issue Type: Bug Components: ambari-server Affects Versions: 2.0.0 Reporter: Alejandro Fernandez Assignee: Alejandro Fernandez Fix For: 2.0.0 Attachments: AMBARI-9938.patch When HDP is installed on a host without JDK, the before-install hook will attempt to install JDK if it is not present. However, this fails because ambari-sudo.sh needs the fully qualified path to the script. {code} Execution of 'mkdir -p /var/lib/ambari-agent/data/tmp/jdk cd /var/lib/ambari-agent/data/tmp/jdk tar -xf /var/lib/ambari-agent/data/tmp/AMBARI-artifacts//jdk-7u67-linux-x64.tar.gz ambari-sudo.sh cp -r /var/lib/ambari-agent/data/tmp/jdk/* /usr/jdk64' returned 2. [3/4/15, 12:58:13 PM] Alejandro Fernandez: tar: Unexpected EOF in archive {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
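Since the root cause was a truncated or corrupt JDK tarball, a defensive pre-check would surface this kind of failure earlier with a clearer message than "tar: Unexpected EOF in archive" mid-install. This is a sketch, not Ambari code; it merely verifies that a downloaded .tar.gz can be read end to end before extraction:

```python
import tarfile

def is_valid_tarball(path):
    """Return True if path is a readable, non-truncated gzip tarball."""
    try:
        with tarfile.open(path, "r:gz") as tar:
            # Walking the member list forces the whole archive to be read,
            # which catches truncation (unexpected EOF) before any extraction.
            for _ in tar:
                pass
        return True
    except (tarfile.TarError, EOFError, OSError):
        return False
```

A hook could call this after the download and retry the fetch on False, instead of failing partway through the combined mkdir/cd/tar/cp command.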
[jira] [Commented] (AMBARI-9921) Spark Thriftserver fails to initialize correctly in secure Ambari cluster
[ https://issues.apache.org/jira/browse/AMBARI-9921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14349263#comment-14349263 ] Hudson commented on AMBARI-9921: SUCCESS: Integrated in Ambari-trunk-Commit #1959 (See [https://builds.apache.org/job/Ambari-trunk-Commit/1959/]) AMBARI-9921. Spark Thriftserver fails to initialize correctly in secure Ambari cluster (Gautam Borad via smohanty) (smohanty: http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=fb3f33539eb9f49a30bd99238e607603d4d6d9c7) * ambari-server/src/main/resources/common-services/SPARK/1.2.0.2.2/package/scripts/setup_spark.py * ambari-server/src/main/resources/common-services/SPARK/1.2.0.2.2/package/scripts/params.py Spark Thriftserver fails to initialize correctly in secure Ambari cluster - Key: AMBARI-9921 URL: https://issues.apache.org/jira/browse/AMBARI-9921 Project: Ambari Issue Type: Bug Components: ambari-server Affects Versions: 2.0.0 Reporter: Gautam Borad Assignee: Gautam Borad Fix For: 2.0.0 Attachments: AMBARI-9921-Spark-Thriftserver-fails-to-initialize-c.patch Spark Thriftserver fails to initialize correctly on a secure cluster. To fix it, the following change needs to be made to $SPARK_CONF/hive-site.xml:
{noformat}
<property>
  <name>hive.security.authorization.enabled</name>
  <value>false</value>
</property>
{noformat}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
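For illustration, here is one generic way to ensure such a property exists in a Hadoop-style configuration XML, using only the Python standard library. The real patch goes through setup_spark.py and Ambari's configuration machinery, so this only models the end result, not the actual fix:

```python
import xml.etree.ElementTree as ET

def set_property(root, name, value):
    """Upsert a <property><name>/<value> pair under a <configuration> root."""
    for prop in root.findall("property"):
        if prop.findtext("name") == name:
            val = prop.find("value")
            if val is None:
                val = ET.SubElement(prop, "value")
            val.text = value  # overwrite an existing property in place
            return
    prop = ET.SubElement(root, "property")  # property absent: append it
    ET.SubElement(prop, "name").text = name
    ET.SubElement(prop, "value").text = value

root = ET.fromstring("<configuration></configuration>")
set_property(root, "hive.security.authorization.enabled", "false")
print(ET.tostring(root).decode())  # prints the updated configuration XML
```

Calling it twice with the same name updates the value rather than duplicating the property.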
[jira] [Commented] (AMBARI-9947) Oozie alert after ambari upgrade 1.7.0-2.0.0 secured cluster
[ https://issues.apache.org/jira/browse/AMBARI-9947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14349262#comment-14349262 ] Hudson commented on AMBARI-9947: SUCCESS: Integrated in Ambari-trunk-Commit #1959 (See [https://builds.apache.org/job/Ambari-trunk-Commit/1959/]) AMBARI-9947 Oozie alert after ambari upgrade 1.7.0-2.0.0 secured cluster (dsen: http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=1fcffa7319fb485cf722444b01b466c0db4ac97d) * ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie.py * ambari-server/src/test/python/stacks/2.0.6/OOZIE/test_oozie_server.py Oozie alert after ambari upgrade 1.7.0-2.0.0 secured cluster - Key: AMBARI-9947 URL: https://issues.apache.org/jira/browse/AMBARI-9947 Project: Ambari Issue Type: Bug Components: ambari-server Affects Versions: 2.0.0 Reporter: Dmytro Sen Assignee: Dmytro Sen Priority: Blocker Fix For: 2.0.0 Attachments: AMBARI-9947.patch After upgrading a secured cluster and restarting all services, Oozie has a warning alert: Oozie Server Web UI HTTP 500 response in 0.000 seconds STR: 1. Install ambari 1.7.0 on a 3-node cluster, all services 2. Enable security 3. Upgrade ambari to 2.0.0 4. Stop all services 5. Start all services -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (AMBARI-9939) RU - Service Check group to include all services with a service_check script
[ https://issues.apache.org/jira/browse/AMBARI-9939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14349258#comment-14349258 ] Hadoop QA commented on AMBARI-9939: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12702866/AMBARI-9939.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 3 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:red}-1 core tests{color}. The test build failed in ambari-server Test results: https://builds.apache.org/job/Ambari-trunk-test-patch/1936//testReport/ Console output: https://builds.apache.org/job/Ambari-trunk-test-patch/1936//console This message is automatically generated. RU - Service Check group to include all services with a service_check script Key: AMBARI-9939 URL: https://issues.apache.org/jira/browse/AMBARI-9939 Project: Ambari Issue Type: Bug Components: ambari-server Affects Versions: 2.0.0 Reporter: Alejandro Fernandez Assignee: Alejandro Fernandez Labels: rolling_upgrade Fix For: 2.0.0 Attachments: AMBARI-9939.patch Installed a minimal 3-node cluster with HDFS, MR, YARN, Pig, Tez. Performed an RU. Expected result is for the last service check to be run on all components. However, it skipped the Pig Service Check. 
[jira] [Commented] (AMBARI-9368) Deadlock Between Dependent Cluster/Service/Component/Host Implementations
[ https://issues.apache.org/jira/browse/AMBARI-9368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14349294#comment-14349294 ] Hudson commented on AMBARI-9368: FAILURE: Integrated in Ambari-branch-1.7.0 #417 (See [https://builds.apache.org/job/Ambari-branch-1.7.0/417/]) AMBARI-9368 - Deadlock Between Dependent Cluster/Service/Component/Host Implementations (jonathanhurley) (jhurley: http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=dd572d3544eb6a36c78155ae88ac423a16922d00) * ambari-server/src/main/java/org/apache/ambari/server/state/ServiceComponentImpl.java * ambari-server/src/main/java/org/apache/ambari/server/state/ServiceImpl.java * ambari-web/package.json * ambari-server/src/test/java/org/apache/ambari/server/state/cluster/ClusterDeadlockTest.java * ambari-server/src/main/java/org/apache/ambari/server/state/cluster/ClusterImpl.java * ambari-server/src/main/java/org/apache/ambari/server/state/svccomphost/ServiceComponentHostImpl.java Deadlock Between Dependent Cluster/Service/Component/Host Implementations - Key: AMBARI-9368 URL: https://issues.apache.org/jira/browse/AMBARI-9368 Project: Ambari Issue Type: Bug Components: ambari-server Affects Versions: 1.6.1 Reporter: Jonathan Hurley Assignee: Jonathan Hurley Priority: Critical Fix For: 2.0.0 Attachments: AMBARI-9368.patch, jstack.29096, monitor_lock-1-pid10099.txt, monitor_lock-2-pid10099.txt, monitor_lock-3-pid10099.txt Looks like a textbook deadlock. Why jstack doesn't report it, I don't know. 
Call Hierarchy
{code}
qtp572501352-104
  ServiceComponentImpl.convertToResponse
    readWriteLock.readLock().lock()   ACQUIRED
  ServiceComponentHostImpl.getState()
    readLock.lock()                   BLOCKED

qtp572501352-34
  ServiceComponentHostImpl.persist()
    writeLock.lock()                  ACQUIRED
  ServiceComponentImpl.refresh()
    readWriteLock.writeLock()         BLOCKED
{code}
Deadlock Order
{code}
1. qtp572501352-104: ServiceComponentImpl.convertToResponse  readWriteLock.readLock().lock()  ACQUIRED
2. qtp572501352-34:  ServiceComponentHostImpl.persist()      writeLock.lock()                 ACQUIRED
3. qtp572501352-34:  ServiceComponentImpl.refresh()          readWriteLock.writeLock()        BLOCKED
4. qtp572501352-104: ServiceComponentHostImpl.getState()     readLock.lock()                  BLOCKED
{code}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
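The classic remedy for this cross-acquisition pattern is a single global lock order. As a minimal sketch (in Python, purely to model the pattern; it does not mirror the actual Java patch), ranking the locks and always acquiring in ascending rank means the component-then-host and host-then-component paths can no longer cross:

```python
import threading

# Illustrative ranks only: think ServiceComponentImpl's lock as rank 0 and
# ServiceComponentHostImpl's lock as rank 1. Every code path that needs both
# must take them in this one order, never the reverse.
COMPONENT_LOCK = threading.RLock()  # rank 0
HOST_LOCK = threading.RLock()       # rank 1
ORDERED_LOCKS = [COMPONENT_LOCK, HOST_LOCK]

def with_locks(fn):
    """Run fn while holding both locks, acquired in the global rank order."""
    for lock in ORDERED_LOCKS:
        lock.acquire()
    try:
        return fn()
    finally:
        # Release in reverse order of acquisition.
        for lock in reversed(ORDERED_LOCKS):
            lock.release()
```

With both convertToResponse-style and persist-style paths funneled through the same ordered acquisition, the circular wait in the hierarchy above cannot form.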
[jira] [Updated] (AMBARI-9931) RU: upgrade dialog does not refresh with current tasks w/o browser reload
[ https://issues.apache.org/jira/browse/AMBARI-9931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrii Tkach updated AMBARI-9931: - Attachment: (was: AMBARI-9931.patch) RU: upgrade dialog does not refresh with current tasks w/o browser reload - Key: AMBARI-9931 URL: https://issues.apache.org/jira/browse/AMBARI-9931 Project: Ambari Issue Type: Bug Components: ambari-web Affects Versions: 2.0.0 Reporter: Andrii Tkach Assignee: Andrii Tkach Priority: Critical Fix For: 2.0.0 Perform an upgrade; the dialog goes through the prepare tasks and I click to proceed. At that point the % numbers move and I see progress in ambari-server.log, but the UI list of tasks is not updating -- it only shows the Prepare task. Things are also coming back from the REST API that show progress; it's just the UI that is not updating. When I reload my browser, it catches up and starts showing tasks. What I notice: top-level tasks don't show until I refresh the browser, then the top-level task and its sub-tasks show. 1) Prepare Backups shows. 2) Once PB is done, the next task does not show until I refresh the browser. 3) Then PB + ZooKeeper shows; once ZK is done, the next task does not show until I refresh the browser. 4) Then PB + ZK + Core Masters shows; once CM is done, the next task does not show until I refresh the browser. I see this for all top-level tasks. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: Review Request 31752: ambari-sudo.sh needs full path, install fails if JDK is not installed
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/31752/#review75315 --- Why would that make any change? The binary is in PATH and works in a lot of places like that. - Andrew Onischuk On March 5, 2015, 12:59 a.m., Alejandro Fernandez wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/31752/ --- (Updated March 5, 2015, 12:59 a.m.) Review request for Ambari, Andrew Onischuk, Jonathan Hurley, Nate Cole, and Sid Wagle. Bugs: AMBARI-9938 https://issues.apache.org/jira/browse/AMBARI-9938 Repository: ambari Description --- When HDP is installed on a host without JDK, the before-install hook will attempt to install JDK if it is not present. However, this fails because ambari-sudo.sh needs the fully qualified path to the script. ``` Execution of 'mkdir -p /var/lib/ambari-agent/data/tmp/jdk cd /var/lib/ambari-agent/data/tmp/jdk tar -xf /var/lib/ambari-agent/data/tmp/AMBARI-artifacts//jdk-7u67-linux-x64.tar.gz ambari-sudo.sh cp -r /var/lib/ambari-agent/data/tmp/jdk/* /usr/jdk64' returned 2. 
[3/4/15, 12:58:13 PM] Alejandro Fernandez: tar: Unexpected EOF in archive ``` Diffs - ambari-common/src/main/python/ambari_commons/constants.py b823b31 ambari-server/src/test/python/stacks/2.0.6/FLUME/test_flume.py b6f4821 ambari-server/src/test/python/stacks/2.0.6/GANGLIA/test_ganglia_monitor.py 396b9d2 ambari-server/src/test/python/stacks/2.0.6/GANGLIA/test_ganglia_server.py 7d0afc7 ambari-server/src/test/python/stacks/2.0.6/HBASE/test_hbase_master.py 36c942e ambari-server/src/test/python/stacks/2.0.6/HBASE/test_hbase_regionserver.py 8a79701 ambari-server/src/test/python/stacks/2.0.6/HDFS/test_datanode.py 54ca083 ambari-server/src/test/python/stacks/2.0.6/HDFS/test_journalnode.py 21cefae ambari-server/src/test/python/stacks/2.0.6/HDFS/test_namenode.py 1e4142f ambari-server/src/test/python/stacks/2.0.6/HDFS/test_service_check.py e24ff8d ambari-server/src/test/python/stacks/2.0.6/HDFS/test_snamenode.py 5bedf5b ambari-server/src/test/python/stacks/2.0.6/HDFS/test_zkfc.py 8aa4871 ambari-server/src/test/python/stacks/2.0.6/HIVE/test_hive_metastore.py 9153a84 ambari-server/src/test/python/stacks/2.0.6/HIVE/test_hive_server.py 5230196 ambari-server/src/test/python/stacks/2.0.6/OOZIE/test_oozie_server.py 8d388ab ambari-server/src/test/python/stacks/2.0.6/hooks/before-INSTALL/test_before_install.py e038ddf ambari-server/src/test/python/stacks/2.1/HIVE/test_hive_metastore.py 990eac8 ambari-server/src/test/python/stacks/2.1/STORM/test_storm_drpc_server.py d5afb42 ambari-server/src/test/python/stacks/2.1/STORM/test_storm_nimbus.py 3ef45ad ambari-server/src/test/python/stacks/2.1/STORM/test_storm_rest_api_service.py 64a4662 ambari-server/src/test/python/stacks/2.1/STORM/test_storm_supervisor.py 26089fb ambari-server/src/test/python/stacks/2.1/STORM/test_storm_supervisor_prod.py 549c5fc ambari-server/src/test/python/stacks/2.1/STORM/test_storm_ui_server.py d23114a ambari-server/src/test/python/stacks/2.2/KNOX/test_knox_gateway.py b1d9888 Diff: 
https://reviews.apache.org/r/31752/diff/ Testing --- Waiting for unit test results. Local tests passed, -- Total run:609 Total errors:0 Total failures:0 OK Thanks, Alejandro Fernandez
[jira] [Commented] (AMBARI-9933) UI stucks on manual step during RU (can be skipped)
[ https://issues.apache.org/jira/browse/AMBARI-9933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14348601#comment-14348601 ] Hudson commented on AMBARI-9933: ABORTED: Integrated in Ambari-branch-2.0.0 #7 (See [https://builds.apache.org/job/Ambari-branch-2.0.0/7/]) AMBARI-9933 UI stucks on manual step during RU (can be skipped). (atkach) (atkach: http://git-wip-us.apache.org/repos/asf?p=ambari.git;a=commit;h=4c035f916d74267dc27840310645908671f89a32) * ambari-web/test/controllers/main/admin/stack_and_upgrade_controller_test.js * ambari-web/app/controllers/main/admin/stack_and_upgrade_controller.js UI stucks on manual step during RU (can be skipped) --- Key: AMBARI-9933 URL: https://issues.apache.org/jira/browse/AMBARI-9933 Project: Ambari Issue Type: Bug Components: ambari-web Affects Versions: 2.0.0 Reporter: Andrii Tkach Assignee: Andrii Tkach Priority: Critical Fix For: 2.0.0 Attachments: AMBARI-9933.patch, scr2.png The RU dialog stopped as depicted in the attached screenshot; the upgrade is pending. This is caused by a JS error due to an invalid server reply. If I reload the page, the manual step displays correctly. 
{code}
Json http://198.vm:8080/api/v1/clusters/cc/upgrades/18?upgrade_groups/UpgradeGroup/status!=PENDING&fields=Upgrade/progress_percent,Upgrade/request_context,Upgrade/request_status,Upgrade/direction,upgrade_groups/UpgradeGroup,upgrade_groups/upgrade_items/UpgradeItem/status,upgrade_groups/upgrade_items/UpgradeItem/context,upgrade_groups/upgrade_items/UpgradeItem/group_id,upgrade_groups/upgrade_items/UpgradeItem/progress_percent,upgrade_groups/upgrade_items/UpgradeItem/request_id,upgrade_groups/upgrade_items/UpgradeItem/skippable,upgrade_groups/upgrade_items/UpgradeItem/stage_id,upgrade_groups/upgrade_items/UpgradeItem/status,upgrade_groups/upgrade_items/UpgradeItem/text&minimal_response=true&_=1425322480944
---
{
  "Upgrade" : { "cluster_name" : "cc", "direction" : "UPGRADE", "progress_percent" : 48.92857142857142, "request_context" : "Upgrading to 2.2.2.0-2513", "request_id" : 18, "request_status" : "HOLDING" },
  "upgrade_groups" : [
    { "UpgradeGroup" : { "completed_task_count" : 1, "group_id" : 1, "in_progress_task_count" : 0, "name" : "PRE_CLUSTER", "progress_percent" : 100, "request_id" : 18, "status" : "COMPLETED", "title" : "Prepare Backups", "total_task_count" : 1 },
      "upgrade_items" : [
        { "UpgradeItem" : { "context" : "Pre Upgrade HDFS", "group_id" : 1, "progress_percent" : 100, "request_id" : 18, "skippable" : false, "stage_id" : 1, "status" : "COMPLETED", "text" : "Pre Upgrade HDFS" } }
      ] },
    { "UpgradeGroup" : { "completed_task_count" : 4, "group_id" : 2, "in_progress_task_count" : 0, "name" : "ZOOKEEPER", "progress_percent" : 100, "request_id" : 18, "status" : "COMPLETED", "title" : "ZooKeeper", "total_task_count" : 4 },
      "upgrade_items" : [
        { "UpgradeItem" : { "context" : "Restarting ZooKeeper Server on 19a.vm", "group_id" : 2, "progress_percent" : 100, "request_id" : 18, "skippable" : false, "stage_id" : 2, "status" : "COMPLETED", "text" : "Restarting ZooKeeper Server on 19a.vm" } },
        { "UpgradeItem" : { "context" : "Restarting ZooKeeper Server on 199.vm", "group_id" : 2, "progress_percent" : 100, "request_id" : 18, "skippable" : false, "stage_id" : 3, "status" : "COMPLETED", "text" : "Restarting ZooKeeper Server on 199.vm" } },
        { "UpgradeItem" : { "context" : "Restarting ZooKeeper Server on 198.vm", "group_id" : 2, "progress_percent" : 100, "request_id" : 18, "skippable" : false, "stage_id" : 4, "status" : "COMPLETED", "text" : "Restarting ZooKeeper Server on 198.vm" } },
        { "UpgradeItem" : { "context" : "Service Check ZooKeeper", "group_id" : 2, "progress_percent" : 100, "request_id" : 18, "skippable" : false, "stage_id" : 5, "status" : "COMPLETED", "text" : "Service Check ZooKeeper" } }
      ] },
    { "UpgradeGroup" : { "completed_task_count" : 5, "group_id" : 3, "in_progress_task_count" : 0, "name" : "CORE_MASTER", "progress_percent" : 100, "request_id" : 18, "status" : "COMPLETED", "title" : "Core Masters", "total_task_count" : 5 },
      "upgrade_items" : [
        { "UpgradeItem" : { "context" : "Restarting JournalNode on 19a.vm", "group_id" : 3, "progress_percent" : 100, "request_id" : 18, "skippable" : false, "stage_id" : 6, "status" : "COMPLETED", "text" : "Restarting JournalNode on 19a.vm" } },
        { "UpgradeItem" : { "context" : "Restarting JournalNode on 199.vm", "group_id" : 3, "progress_percent" : 100, "request_id" : 18, "skippable" : false, "stage_id" : 7, "status" : "COMPLETED", "text" : "Restarting JournalNode on 199.vm" } },
        { "UpgradeItem" : { "context" : "Restarting JournalNode on 198.vm", "group_id" : 3, "progress_percent" : 100, "request_id" : 18, "skippable" : false, "stage_id" : 8, "status" : "COMPLETED", "text" : "Restarting JournalNode on 198.vm" } },
        { "UpgradeItem" : { "context" : "Restarting NameNode on 198.vm", "group_id" : 3, "progress_percent" : 100, "request_id" : 18, "skippable" : false, "stage_id" : 9, "status" : "COMPLETED", "text" : "Restarting NameNode on 198.vm" } },
        { "UpgradeItem" : { "context" :
[jira] [Commented] (AMBARI-9933) UI stucks on manual step during RU (can be skipped)
[ https://issues.apache.org/jira/browse/AMBARI-9933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14348514#comment-14348514 ] Andrii Tkach commented on AMBARI-9933: -- committed to trunk and branch-2.0.0 UI stucks on manual step during RU (can be skipped) --- Key: AMBARI-9933 URL: https://issues.apache.org/jira/browse/AMBARI-9933 Project: Ambari Issue Type: Bug Components: ambari-web Affects Versions: 2.0.0 Reporter: Andrii Tkach Assignee: Andrii Tkach Priority: Critical Fix For: 2.0.0 Attachments: AMBARI-9933.patch, scr2.png The RU dialog stopped as depicted in the attached screenshot; the upgrade is pending. This is caused by a JS error due to an invalid server reply. If I reload the page, the manual step displays correctly.
[jira] [Commented] (AMBARI-9931) RU: upgrade dialog does not refresh with current tasks w/o browser reload
[ https://issues.apache.org/jira/browse/AMBARI-9931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14348765#comment-14348765 ] Hudson commented on AMBARI-9931: ABORTED: Integrated in Ambari-branch-2.0.0 #8 (See [https://builds.apache.org/job/Ambari-branch-2.0.0/8/]) AMBARI-9931 RU: upgrade dialog does not refresh with current tasks w/o browser reload. (atkach) (atkach: http://git-wip-us.apache.org/repos/asf?p=ambari.gita=commith=45b8a093cf43db57cbfd984540ee72700f329e24) * ambari-web/test/controllers/main/admin/stack_and_upgrade_controller_test.js * ambari-web/app/views/main/admin/stack_upgrade/upgrade_group_view.js * ambari-web/test/views/main/admin/stack_upgrade/upgrade_wizard_view_test.js * ambari-web/app/templates/main/admin/stack_upgrade/upgrade_task.hbs * ambari-web/app/controllers/main/admin/stack_and_upgrade_controller.js * ambari-web/app/templates/main/admin/stack_upgrade/upgrade_group.hbs * ambari-web/app/templates/main/admin/stack_upgrade/stack_upgrade_wizard.hbs * ambari-web/app/views/main/admin/stack_upgrade/upgrade_wizard_view.js RU: upgrade dialog does not refresh with current tasks w/o browser reload - Key: AMBARI-9931 URL: https://issues.apache.org/jira/browse/AMBARI-9931 Project: Ambari Issue Type: Bug Components: ambari-web Affects Versions: 2.0.0 Reporter: Andrii Tkach Assignee: Andrii Tkach Priority: Critical Fix For: 2.0.0 Attachments: AMBARI-9931.patch Perform upgrade, dialog goes thru prepare tasks, I click to proceed. At that point, the % numbers move and I see progress in ambari-server.log but the UI list of tasks is not updating. It only shows Prepare task. Also, looks like things are coming back from REST API that show progress. It's the UI is just not updating. I reload my browser and then it catches up and starts showing tasks. What I notice: top level tasks don't show, I have to refresh browser, then the top-level will shows and the subs show. 
1) So Prepare Backups (PB) shows. 2) Once PB is done, the next task does not show until I refresh the browser. 3) Then PB + ZooKeeper (ZK) shows. Once ZK is done, the next task does not show until I refresh the browser. 4) Then PB + ZK + Core Masters (CM) shows. Once CM is done, the next task does not show until I refresh the browser. I see this for all top-level tasks. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
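The bug above boils down to the client not re-requesting the upgrade status once the first task completes, so newly started top-level tasks never appear until a full page reload. As a minimal, hypothetical sketch (the endpoint URL, interval, and function names are illustrative, not Ambari's actual web-client code), a poller that re-fetches the status on every tick instead of caching the first reply would look like:

```python
import json
import time
import urllib.request

# Illustrative endpoint and interval; ambari-web polls a similar
# /api/v1/clusters/<name>/upgrades/<id> resource.
AMBARI_URL = "http://ambari-host:8080/api/v1/clusters/cc/upgrades/18"
POLL_SECONDS = 5

def fetch_status(url):
    """Fetch the current upgrade status; returns the parsed JSON dict."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def poll_until_done(url, fetch=fetch_status, sleep=time.sleep):
    """Yield the request status on every poll tick until a terminal state.

    The key point mirrored from the bug report: the status is re-fetched on
    each iteration, so tasks that start after the first reply still show up
    without a browser reload.
    """
    while True:
        status = fetch(url)
        state = status["Upgrade"]["request_status"]
        yield state
        if state in ("COMPLETED", "ABORTED", "FAILED"):
            return
        sleep(POLL_SECONDS)
```

The `fetch` and `sleep` parameters exist so the loop can be driven by stubs in tests; in real use only the URL is passed.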
Re: Review Request 31752: ambari-sudo.sh needs full path, install fails if JDK is not installed
On March 5, 2015, 1:15 p.m., Nate Cole wrote: Seems like we should be abstracting somehow - maybe with an ExecuteSudo or something that takes the same exact arguments as Execute, but does all this sudo voodoo. Jonathan Hurley wrote: Agreed; why is the separate script necessary? I even thought that the existing Execute resource took a `sudo=true` parameter. Andrew Onischuk wrote: Guys, we have Execute with a sudo=True argument. This change is at a lower level, below that Execute. Andrew Onischuk wrote: What I'm saying is that the command in the bug report fails due to a tar error; it doesn't even get to executing ambari-sudo.sh: tar -xf ... ambari-sudo.sh ... tar: Unexpected EOF in archive The first command fails, so I don't understand how this change can fix the problem. Also, I don't see much purpose in doing this: ambari-sudo.sh is always in $PATH, which is set in ambari-env.sh and is inherited by the child processes. Even if this doesn't fix anything, I think the short path looks nicer in task logs. Jonathan Hurley wrote: In my dev environment, simply copying the new ambari-sudo.sh to the right spot worked, so the path being set is correct. I initially commented that it might be an RPM issue if ambari-sudo.sh can't be found. But I think Andrew is right that this doesn't seem like a missing-script issue. After some googling, it seems the error happens due to a corrupt archive file. As for me, I cannot reproduce the failure. Maybe it was just a one-time issue with a corrupt JDK download? - Andrew --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/31752/#review75325 --- On March 5, 2015, 12:59 a.m., Alejandro Fernandez wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/31752/ --- (Updated March 5, 2015, 12:59 a.m.) Review request for Ambari, Andrew Onischuk, Jonathan Hurley, Nate Cole, and Sid Wagle.
Bugs: AMBARI-9938 https://issues.apache.org/jira/browse/AMBARI-9938 Repository: ambari Description --- When HDP is installed on a host without a JDK, the before-install hook will attempt to install one. However, this fails because ambari-sudo.sh must be invoked by its fully qualified path. ``` Execution of 'mkdir -p /var/lib/ambari-agent/data/tmp/jdk cd /var/lib/ambari-agent/data/tmp/jdk tar -xf /var/lib/ambari-agent/data/tmp/AMBARI-artifacts//jdk-7u67-linux-x64.tar.gz ambari-sudo.sh cp -r /var/lib/ambari-agent/data/tmp/jdk/* /usr/jdk64' returned 2. [3/4/15, 12:58:13 PM] Alejandro Fernandez: tar: Unexpected EOF in archive ```
Diffs
-----
ambari-common/src/main/python/ambari_commons/constants.py b823b31
ambari-server/src/test/python/stacks/2.0.6/FLUME/test_flume.py b6f4821
ambari-server/src/test/python/stacks/2.0.6/GANGLIA/test_ganglia_monitor.py 396b9d2
ambari-server/src/test/python/stacks/2.0.6/GANGLIA/test_ganglia_server.py 7d0afc7
ambari-server/src/test/python/stacks/2.0.6/HBASE/test_hbase_master.py 36c942e
ambari-server/src/test/python/stacks/2.0.6/HBASE/test_hbase_regionserver.py 8a79701
ambari-server/src/test/python/stacks/2.0.6/HDFS/test_datanode.py 54ca083
ambari-server/src/test/python/stacks/2.0.6/HDFS/test_journalnode.py 21cefae
ambari-server/src/test/python/stacks/2.0.6/HDFS/test_namenode.py 1e4142f
ambari-server/src/test/python/stacks/2.0.6/HDFS/test_service_check.py e24ff8d
ambari-server/src/test/python/stacks/2.0.6/HDFS/test_snamenode.py 5bedf5b
ambari-server/src/test/python/stacks/2.0.6/HDFS/test_zkfc.py 8aa4871
ambari-server/src/test/python/stacks/2.0.6/HIVE/test_hive_metastore.py 9153a84
ambari-server/src/test/python/stacks/2.0.6/HIVE/test_hive_server.py 5230196
ambari-server/src/test/python/stacks/2.0.6/OOZIE/test_oozie_server.py 8d388ab
ambari-server/src/test/python/stacks/2.0.6/hooks/before-INSTALL/test_before_install.py e038ddf
ambari-server/src/test/python/stacks/2.1/HIVE/test_hive_metastore.py 990eac8
ambari-server/src/test/python/stacks/2.1/STORM/test_storm_drpc_server.py d5afb42
ambari-server/src/test/python/stacks/2.1/STORM/test_storm_nimbus.py 3ef45ad
ambari-server/src/test/python/stacks/2.1/STORM/test_storm_rest_api_service.py 64a4662
ambari-server/src/test/python/stacks/2.1/STORM/test_storm_supervisor.py 26089fb
ambari-server/src/test/python/stacks/2.1/STORM/test_storm_supervisor_prod.py 549c5fc
ambari-server/src/test/python/stacks/2.1/STORM/test_storm_ui_server.py d23114a
ambari-server/src/test/python/stacks/2.2/KNOX/test_knox_gateway.py b1d9888
Diff:
Re: Review Request 31752: ambari-sudo.sh needs full path, install fails if JDK is not installed
On March 5, 2015, 8:15 a.m., Nate Cole wrote: Seems like we should be abstracting somehow - maybe with an ExecuteSudo or something that takes the same exact arguments as Execute, but does all this sudo voodoo. Jonathan Hurley wrote: Agreed; why is the separate script necessary? I even thought that the existing Execute resource took a `sudo=true` parameter. Andrew Onischuk wrote: Guys, we have Execute with a sudo=True argument. This change is at a lower level, below that Execute. Andrew Onischuk wrote: What I'm saying is that the command in the bug report fails due to a tar error; it doesn't even get to executing ambari-sudo.sh: tar -xf ... ambari-sudo.sh ... tar: Unexpected EOF in archive The first command fails, so I don't understand how this change can fix the problem. Also, I don't see much purpose in doing this: ambari-sudo.sh is always in $PATH, which is set in ambari-env.sh and is inherited by the child processes. Even if this doesn't fix anything, I think the short path looks nicer in task logs. In my dev environment, simply copying the new ambari-sudo.sh to the right spot worked, so the path being set is correct. I initially commented that it might be an RPM issue if ambari-sudo.sh can't be found. But I think Andrew is right that this doesn't seem like a missing-script issue. - Jonathan --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/31752/#review75325 --- On March 4, 2015, 7:59 p.m., Alejandro Fernandez wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/31752/ --- (Updated March 4, 2015, 7:59 p.m.) Review request for Ambari, Andrew Onischuk, Jonathan Hurley, Nate Cole, and Sid Wagle. Bugs: AMBARI-9938 https://issues.apache.org/jira/browse/AMBARI-9938 Repository: ambari Description --- When HDP is installed on a host without a JDK, the before-install hook will attempt to install one. However, this fails because ambari-sudo.sh must be invoked by its fully qualified path.
Diff: https://reviews.apache.org/r/31752/diff/ Testing --- Waiting for unit test results. Local tests passed. -- Total run: 609 Total errors: 0 Total
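The thread above converges on two ideas: invoke ambari-sudo.sh by a fully qualified path rather than relying on an inherited $PATH, and centralize the sudo prefixing in one place (Nate's proposed ExecuteSudo). A minimal sketch in Python with hypothetical helper names; the install location in AMBARI_SUDO_CANDIDATES is an assumption, not taken from the patch:

```python
import os
import shutil

# Assumed install location for the wrapper script; adjust for the real layout.
AMBARI_SUDO_CANDIDATES = ["/var/lib/ambari-agent/ambari-sudo.sh"]

def ambari_sudo_path():
    """Return a fully qualified path to ambari-sudo.sh.

    Prefers a known absolute location, then falls back to a $PATH lookup,
    and finally to the bare name (the pre-patch behavior).
    """
    for candidate in AMBARI_SUDO_CANDIDATES:
        if os.path.isfile(candidate):
            return candidate
    found = shutil.which("ambari-sudo.sh")
    return found if found else "ambari-sudo.sh"

def sudo_command(cmd):
    """Prefix a shell command with the resolved wrapper.

    Mirrors the ExecuteSudo idea: callers pass the same command string they
    would pass to Execute, and the sudo plumbing lives in one helper.
    """
    return "{0} {1}".format(ambari_sudo_path(), cmd)
```

For example, `sudo_command("cp -r /var/lib/ambari-agent/data/tmp/jdk/* /usr/jdk64")` yields the same copy command with the wrapper prepended, without each call site hard-coding the path.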
[jira] [Commented] (AMBARI-9933) UI stucks on manual step during RU (can be skipped)
[ https://issues.apache.org/jira/browse/AMBARI-9933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14348618#comment-14348618 ] Hudson commented on AMBARI-9933: SUCCESS: Integrated in Ambari-trunk-Commit #1954 (See [https://builds.apache.org/job/Ambari-trunk-Commit/1954/]) AMBARI-9933 UI stucks on manual step during RU (can be skipped). (atkach) (atkach: http://git-wip-us.apache.org/repos/asf?p=ambari.git&a=commit&h=d385af880df04690bcd56a562391914541dfe145) * ambari-web/app/controllers/main/admin/stack_and_upgrade_controller.js * ambari-web/test/controllers/main/admin/stack_and_upgrade_controller_test.js UI stucks on manual step during RU (can be skipped) --- Key: AMBARI-9933 URL: https://issues.apache.org/jira/browse/AMBARI-9933 Project: Ambari Issue Type: Bug Components: ambari-web Affects Versions: 2.0.0 Reporter: Andrii Tkach Assignee: Andrii Tkach Priority: Critical Fix For: 2.0.0 Attachments: AMBARI-9933.patch, scr2.png The RU dialog stopped as depicted in the attached screenshot; the upgrade is pending. This is caused by a JS error due to an invalid server reply. If I reload the page, the manual step displays correctly.
{code}
Json http://198.vm:8080/api/v1/clusters/cc/upgrades/18?upgrade_groups/UpgradeGroup/status!=PENDING&fields=Upgrade/progress_percent,Upgrade/request_context,Upgrade/request_status,Upgrade/direction,upgrade_groups/UpgradeGroup,upgrade_groups/upgrade_items/UpgradeItem/status,upgrade_groups/upgrade_items/UpgradeItem/context,upgrade_groups/upgrade_items/UpgradeItem/group_id,upgrade_groups/upgrade_items/UpgradeItem/progress_percent,upgrade_groups/upgrade_items/UpgradeItem/request_id,upgrade_groups/upgrade_items/UpgradeItem/skippable,upgrade_groups/upgrade_items/UpgradeItem/stage_id,upgrade_groups/upgrade_items/UpgradeItem/status,upgrade_groups/upgrade_items/UpgradeItem/text&minimal_response=true&_=1425322480944
---
{
  "Upgrade": { "cluster_name": "cc", "direction": "UPGRADE", "progress_percent": 48.92857142857142, "request_context": "Upgrading to 2.2.2.0-2513", "request_id": 18, "request_status": "HOLDING" },
  "upgrade_groups": [
    { "UpgradeGroup": { "completed_task_count": 1, "group_id": 1, "in_progress_task_count": 0, "name": "PRE_CLUSTER", "progress_percent": 100, "request_id": 18, "status": "COMPLETED", "title": "Prepare Backups", "total_task_count": 1 },
      "upgrade_items": [
        { "UpgradeItem": { "context": "Pre Upgrade HDFS", "group_id": 1, "progress_percent": 100, "request_id": 18, "skippable": false, "stage_id": 1, "status": "COMPLETED", "text": "Pre Upgrade HDFS" } } ] },
    { "UpgradeGroup": { "completed_task_count": 4, "group_id": 2, "in_progress_task_count": 0, "name": "ZOOKEEPER", "progress_percent": 100, "request_id": 18, "status": "COMPLETED", "title": "ZooKeeper", "total_task_count": 4 },
      "upgrade_items": [
        { "UpgradeItem": { "context": "Restarting ZooKeeper Server on 19a.vm", "group_id": 2, "progress_percent": 100, "request_id": 18, "skippable": false, "stage_id": 2, "status": "COMPLETED", "text": "Restarting ZooKeeper Server on 19a.vm" } },
        { "UpgradeItem": { "context": "Restarting ZooKeeper Server on 199.vm", "group_id": 2, "progress_percent": 100, "request_id": 18, "skippable": false, "stage_id": 3, "status": "COMPLETED", "text": "Restarting ZooKeeper Server on 199.vm" } },
        { "UpgradeItem": { "context": "Restarting ZooKeeper Server on 198.vm", "group_id": 2, "progress_percent": 100, "request_id": 18, "skippable": false, "stage_id": 4, "status": "COMPLETED", "text": "Restarting ZooKeeper Server on 198.vm" } },
        { "UpgradeItem": { "context": "Service Check ZooKeeper", "group_id": 2, "progress_percent": 100, "request_id": 18, "skippable": false, "stage_id": 5, "status": "COMPLETED", "text": "Service Check ZooKeeper" } } ] },
    { "UpgradeGroup": { "completed_task_count": 5, "group_id": 3, "in_progress_task_count": 0, "name": "CORE_MASTER", "progress_percent": 100, "request_id": 18, "status": "COMPLETED", "title": "Core Masters", "total_task_count": 5 },
      "upgrade_items": [
        { "UpgradeItem": { "context": "Restarting JournalNode on 19a.vm", "group_id": 3, "progress_percent": 100, "request_id": 18, "skippable": false, "stage_id": 6, "status": "COMPLETED", "text": "Restarting JournalNode on 19a.vm" } },
        { "UpgradeItem": { "context": "Restarting JournalNode on 199.vm", "group_id": 3, "progress_percent": 100, "request_id": 18, "skippable": false, "stage_id": 7, "status": "COMPLETED", "text": "Restarting JournalNode on 199.vm" } },
        { "UpgradeItem": { "context": "Restarting JournalNode on 198.vm", "group_id": 3, "progress_percent": 100, "request_id": 18, "skippable": false, "stage_id": 8, "status": "COMPLETED", "text": "Restarting JournalNode on 198.vm" } },
        { "UpgradeItem": { "context": "Restarting NameNode on 198.vm", "group_id": 3, "progress_percent": 100, "request_id": 18, "skippable": false, "stage_id": 9, "status": "COMPLETED", "text": "Restarting NameNode on 198.vm" } },
        { "UpgradeItem": { "context":
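A client consuming a payload like the one above has to locate the first unfinished group without assuming every field is present, since the reported JS error came from an invalid server reply. A minimal defensive sketch (the helper name and tolerance policy are illustrative, not the actual ambari-web code):

```python
def first_incomplete_group(payload):
    """Return the UpgradeGroup dict of the first group that is not COMPLETED.

    Missing keys are tolerated via .get() rather than raising, so one
    malformed group in the reply cannot wedge the whole upgrade dialog.
    Returns None when every group is complete (or none are present).
    """
    for group in payload.get("upgrade_groups", []):
        info = group.get("UpgradeGroup", {})
        if info.get("status") != "COMPLETED":
            return info
    return None

# A trimmed version of the reply shown above, with ZOOKEEPER holding.
payload = {
    "Upgrade": {"request_status": "HOLDING"},
    "upgrade_groups": [
        {"UpgradeGroup": {"name": "PRE_CLUSTER", "status": "COMPLETED"}},
        {"UpgradeGroup": {"name": "ZOOKEEPER", "status": "HOLDING"}},
    ],
}
```

Calling `first_incomplete_group(payload)` on the trimmed reply returns the ZOOKEEPER group, which is where a manual (HOLDING) step would be rendered.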