[jira] [Commented] (YARN-4629) Distributed shell breaks under strong security
[ https://issues.apache.org/jira/browse/YARN-4629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123533#comment-15123533 ] Steve Loughran commented on YARN-4629: -- presumably this problem only surfaces when the RM is listed as {{rm/_HOST@REALM}}, which is inherently more likely in an HA env, where you have > 1 RM. Updating the environment field appropriately. > Distributed shell breaks under strong security > -- > > Key: YARN-4629 > URL: https://issues.apache.org/jira/browse/YARN-4629 > Project: Hadoop YARN > Issue Type: Bug > Components: applications/distributed-shell >Affects Versions: 2.7.1 >Reporter: Daniel Templeton >Assignee: Daniel Templeton > Attachments: YARN-4629.001.patch, YARN-4629.002.patch > > > If the auth_to_local is set to map requests from unknown hosts to nobody, the > dist shell app fails. The reason is that the client doesn't translate the > _HOST placeholder to the local hostname. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
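For reference, the substitution the dist-shell client is missing is what Hadoop's {{SecurityUtil.getServerPrincipal()}} performs for a principal such as {{rm/_HOST@REALM}}. A minimal self-contained sketch of that expansion (illustrative class and method names, not the actual Hadoop implementation):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.Locale;

public class HostPlaceholder {
    static final String HOSTNAME_PATTERN = "_HOST";

    // Replace the _HOST placeholder in a principal such as "rm/_HOST@REALM"
    // with the given fully-qualified hostname (lower-cased, as Kerberos expects).
    static String expandPrincipal(String principal, String fqdn) {
        if (principal == null || !principal.contains(HOSTNAME_PATTERN)) {
            return principal; // nothing to expand
        }
        return principal.replace(HOSTNAME_PATTERN, fqdn.toLowerCase(Locale.US));
    }

    public static void main(String[] args) throws UnknownHostException {
        // Expand against the local host, as the client should do before
        // requesting an RM delegation token.
        String local = InetAddress.getLocalHost().getCanonicalHostName();
        System.out.println(expandPrincipal("rm/_HOST@EXAMPLE.COM", local));
    }
}
```

The point of the patch is that this expansion must happen on the client side before the RM principal is used, otherwise auth_to_local maps the unexpanded name to nobody.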
[jira] [Commented] (YARN-4411) RMAppAttemptImpl#createApplicationAttemptReport throws IllegalArgumentException
[ https://issues.apache.org/jira/browse/YARN-4411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123203#comment-15123203 ] Bibin A Chundatt commented on YARN-4411: [~devaraj.k] Thank you for the review and commit. > RMAppAttemptImpl#createApplicationAttemptReport throws > IllegalArgumentException > --- > > Key: YARN-4411 > URL: https://issues.apache.org/jira/browse/YARN-4411 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Affects Versions: 2.7.1 >Reporter: yarntime >Assignee: Bibin A Chundatt > Fix For: 2.8.0 > > Attachments: 0002-YARN-4411.patch, 0003-YARN-4411.patch, > YARN-4411.001.patch > > > in version 2.7.1, line 1914 may cause IllegalArgumentException in > RMAppAttemptImpl: > YarnApplicationAttemptState.valueOf(this.getState().toString()) > caused by this.getState() returning type RMAppAttemptState, which may not be > convertible to YarnApplicationAttemptState. > {noformat} > java.lang.IllegalArgumentException: No enum constant > org.apache.hadoop.yarn.api.records.YarnApplicationAttemptState.LAUNCHED_UNMANAGED_SAVING > at java.lang.Enum.valueOf(Enum.java:236) > at > org.apache.hadoop.yarn.api.records.YarnApplicationAttemptState.valueOf(YarnApplicationAttemptState.java:27) > at > org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.createApplicationAttemptReport(RMAppAttemptImpl.java:1870) > at > org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplicationAttemptReport(ClientRMService.java:355) > at > org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getApplicationAttemptReport(ApplicationClientProtocolPBServiceImpl.java:355) > at > org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:425) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962) > at 
org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
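The root cause above is a blind {{valueOf(toString())}} from a larger internal enum into a smaller public one. The usual fix is to map internal states onto public states explicitly, so internal-only constants like {{LAUNCHED_UNMANAGED_SAVING}} have a defined translation. A self-contained sketch of that pattern, with heavily abbreviated hypothetical enum contents rather than the real Hadoop definitions:

```java
public class AttemptStateMapping {
    // Abbreviated stand-ins for the real enums; the real RMAppAttemptState has
    // many more internal-only constants than YarnApplicationAttemptState.
    enum RMAppAttemptState { NEW, LAUNCHED, LAUNCHED_UNMANAGED_SAVING, RUNNING }
    enum YarnApplicationAttemptState { NEW, LAUNCHED, RUNNING }

    // Explicit mapping instead of valueOf(toString()), which throws
    // IllegalArgumentException for internal-only constants.
    static YarnApplicationAttemptState toPublicState(RMAppAttemptState state) {
        switch (state) {
            case NEW:
                return YarnApplicationAttemptState.NEW;
            case LAUNCHED_UNMANAGED_SAVING: // internal-only: surface as LAUNCHED
            case LAUNCHED:
                return YarnApplicationAttemptState.LAUNCHED;
            case RUNNING:
                return YarnApplicationAttemptState.RUNNING;
            default:
                throw new IllegalStateException("unmapped attempt state: " + state);
        }
    }

    public static void main(String[] args) {
        // The case that used to throw, now reported as a public state:
        System.out.println(toPublicState(RMAppAttemptState.LAUNCHED_UNMANAGED_SAVING)); // LAUNCHED
    }
}
```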
[jira] [Commented] (YARN-4629) Distributed shell breaks under strong security
[ https://issues.apache.org/jira/browse/YARN-4629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15123530#comment-15123530 ] Steve Loughran commented on YARN-4629: -- I like the patch, just some tweaks # you remove the date from the header; this is not the header used on the rest of the Hadoop codebase, and we want to keep maintenance costs down. # have you got a test for this? Even a little one? > Distributed shell breaks under strong security > -- > > Key: YARN-4629 > URL: https://issues.apache.org/jira/browse/YARN-4629 > Project: Hadoop YARN > Issue Type: Bug > Components: applications/distributed-shell >Affects Versions: 2.7.1 >Reporter: Daniel Templeton >Assignee: Daniel Templeton > Attachments: YARN-4629.001.patch, YARN-4629.002.patch > > > If the auth_to_local is set to map requests from unknown hosts to nobody, the > dist shell app fails. The reason is that the client doesn't translate the > _HOST placeholder to the local hostname. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (YARN-4615) TestAbstractYarnScheduler#testResourceRequestRecoveryToTheRightAppAttempt fails occasionally
[ https://issues.apache.org/jira/browse/YARN-4615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sunil G updated YARN-4615: -- Attachment: 0005-YARN-4615.patch Thanks [~rohithsharma] . Updating patch. > TestAbstractYarnScheduler#testResourceRequestRecoveryToTheRightAppAttempt > fails occasionally > > > Key: YARN-4615 > URL: https://issues.apache.org/jira/browse/YARN-4615 > Project: Hadoop YARN > Issue Type: Sub-task > Components: test >Reporter: Jason Lowe >Assignee: Sunil G > Attachments: 0001-YARN-4615.patch, 0002-YARN-4615.patch, > 0003-YARN-4615.patch, 0004-YARN-4615.patch, 0005-YARN-4615.patch > > > Sometimes > TestAbstractYarnScheduler#testResourceRequestRecoveryToTheRightAppAttempt > will fail like this: > {noformat} > org.apache.hadoop.yarn.server.resourcemanager.scheduler.TestAbstractYarnScheduler > testResourceRequestRecoveryToTheRightAppAttempt[1](org.apache.hadoop.yarn.server.resourcemanager.scheduler.TestAbstractYarnScheduler) > Time elapsed: 77.427 sec <<< FAILURE! > java.lang.AssertionError: Attempt state is not correct (timedout): expected: > SCHEDULED actual: ALLOCATED for the application attempt > appattempt_1453254869107_0001_02 > at org.junit.Assert.fail(Assert.java:88) > at > org.apache.hadoop.yarn.server.resourcemanager.MockRM.waitForState(MockRM.java:197) > at > org.apache.hadoop.yarn.server.resourcemanager.MockRM.waitForState(MockRM.java:172) > at > org.apache.hadoop.yarn.server.resourcemanager.MockRM.waitForAttemptScheduled(MockRM.java:831) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.TestAbstractYarnScheduler.testResourceRequestRecoveryToTheRightAppAttempt(TestAbstractYarnScheduler.java:572) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (YARN-4465) SchedulerUtils#validateRequest for Label check should happen only when nodelabel enabled
[ https://issues.apache.org/jira/browse/YARN-4465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bibin A Chundatt updated YARN-4465: --- Attachment: 0004-YARN-4465.patch Attaching patch for review. > SchedulerUtils#validateRequest for Label check should happen only when > nodelabel enabled > > > Key: YARN-4465 > URL: https://issues.apache.org/jira/browse/YARN-4465 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Bibin A Chundatt >Assignee: Bibin A Chundatt >Priority: Minor > Attachments: 0001-YARN-4465.patch, 0002-YARN-4465.patch, > 0003-YARN-4465.patch, 0004-YARN-4465.patch > > > Disable label from rm side yarn.nodelabel.enable=false > Capacity scheduler label configuration for queue is available as below > default label for queue = b1 as 3 and accessible labels as 1,3 > Submit application to queue A . > {noformat} > Caused by: > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException): > Invalid resource request, queue=b1 doesn't have permission to access all > labels in resource request. labelExpression of resource request=3. 
Queue > labels=1,3 > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:304) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:234) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:216) > at > org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.validateAndCreateResourceRequest(RMAppManager.java:401) > at > org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:340) > at > org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:283) > at > org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:602) > at > org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:247) > {noformat} > # Ignore default label expression when label is disabled *or* > # NormalizeResourceRequest we can set label expression to > when node label is not enabled *or* > # Improve message -- This message was sent by Atlassian JIRA (v6.3.4#6332)
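The first option listed (ignore the default label expression when node labels are disabled) can be sketched as a normalization step that runs before queue-ACL validation. {{normalizeLabelExpression}} and its flag are illustrative names for this sketch, not the actual SchedulerUtils API:

```java
public class LabelNormalization {
    // When node labels are disabled RM-side, drop any label expression from the
    // request so validateResourceRequest cannot reject it against queue labels.
    // Illustrative helper, not the real SchedulerUtils method.
    static String normalizeLabelExpression(boolean nodeLabelsEnabled, String labelExpression) {
        if (!nodeLabelsEnabled) {
            return null; // labels disabled: expression is ignored entirely
        }
        return labelExpression; // labels enabled: validate as before
    }

    public static void main(String[] args) {
        System.out.println(normalizeLabelExpression(false, "3")); // null
        System.out.println(normalizeLabelExpression(true, "3"));  // 3
    }
}
```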
[jira] [Updated] (YARN-4629) Distributed shell breaks under strong security
[ https://issues.apache.org/jira/browse/YARN-4629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated YARN-4629: - Environment: Secure cluster with the RM principal listed with a /_HOST entry to be expanded, most common with YARN HA enabled. Component/s: security > Distributed shell breaks under strong security > -- > > Key: YARN-4629 > URL: https://issues.apache.org/jira/browse/YARN-4629 > Project: Hadoop YARN > Issue Type: Bug > Components: applications/distributed-shell, security >Affects Versions: 2.7.1 > Environment: Secure cluster with the RM principal listed with a > /_HOST entry to be expanded, most common with YARN HA enabled. >Reporter: Daniel Templeton >Assignee: Daniel Templeton > Attachments: YARN-4629.001.patch, YARN-4629.002.patch > > > If the auth_to_local is set to map requests from unknown hosts to nobody, the > dist shell app fails. The reason is that the client doesn't translate the > _HOST placeholder to the local hostname. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (YARN-4654) Yarn node label CLI should parse "=" correctly when trying to remove all labels on a node
[ https://issues.apache.org/jira/browse/YARN-4654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123353#comment-15123353 ] Hadoop QA commented on YARN-4654: -

(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 0s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
| +1 | mvninstall | 6m 53s | trunk passed |
| +1 | compile | 0m 16s | trunk passed with JDK v1.8.0_66 |
| +1 | compile | 0m 18s | trunk passed with JDK v1.7.0_91 |
| +1 | checkstyle | 0m 15s | trunk passed |
| +1 | mvnsite | 0m 23s | trunk passed |
| +1 | mvneclipse | 0m 13s | trunk passed |
| +1 | findbugs | 0m 33s | trunk passed |
| +1 | javadoc | 0m 14s | trunk passed with JDK v1.8.0_66 |
| +1 | javadoc | 0m 17s | trunk passed with JDK v1.7.0_91 |
| +1 | mvninstall | 0m 19s | the patch passed |
| +1 | compile | 0m 12s | the patch passed with JDK v1.8.0_66 |
| +1 | javac | 0m 12s | the patch passed |
| +1 | compile | 0m 15s | the patch passed with JDK v1.7.0_91 |
| +1 | javac | 0m 15s | the patch passed |
| +1 | checkstyle | 0m 12s | the patch passed |
| +1 | mvnsite | 0m 21s | the patch passed |
| +1 | mvneclipse | 0m 10s | the patch passed |
| +1 | whitespace | 0m 0s | Patch has no whitespace issues. |
| +1 | findbugs | 0m 43s | the patch passed |
| +1 | javadoc | 0m 12s | the patch passed with JDK v1.8.0_66 |
| +1 | javadoc | 0m 14s | the patch passed with JDK v1.7.0_91 |
| -1 | unit | 64m 29s | hadoop-yarn-client in the patch failed with JDK v1.8.0_66. |
| -1 | unit | 64m 41s | hadoop-yarn-client in the patch failed with JDK v1.7.0_91. |
| +1 | asflicense | 0m 18s | Patch does not generate ASF License warnings. |
| | | 142m 32s | |

|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | hadoop.yarn.client.TestGetGroups |
| JDK v1.8.0_66 Timed out junit tests | org.apache.hadoop.yarn.client.cli.TestYarnCLI |
| | org.apache.hadoop.yarn.client.api.impl.TestAMRMClient |
| | org.apache.hadoop.yarn.client.api.impl.TestYarnClient |
| | org.apache.hadoop.yarn.client.api.impl.TestNMClient |
| JDK v1.7.0_91 Failed junit tests | hadoop.yarn.client.TestGetGroups |
| JDK v1.7.0_91 Timed out junit tests | org.apache.hadoop.yarn.client.cli.TestYarnCLI |
| | org.apache.hadoop.yarn.client.api.impl.TestAMRMClient |
| | org.apache.hadoop.yarn.client.api.impl.TestYarnClient |
| | |
[jira] [Commented] (YARN-4649) Add additional logging to some NM state store operations
[ https://issues.apache.org/jira/browse/YARN-4649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123332#comment-15123332 ] Varun Vasudev commented on YARN-4649: - Thanks for the patch [~sidharta-s]. Some feedback - 1) Instead of {code} +if (LOG.isDebugEnabled()) { + LOG.debug("Recovering container with state: "); + LOG.debug("Diagnostics: " + rcs.getDiagnostics()); + LOG.debug("Exit Code: " + rcs.getExitCode()); + LOG.debug("Start Request: " + rcs.getStartRequest()); + LOG.debug("Status: " + rcs.getStatus()); +} {code} can you add a toString() method to RecoveredContainerState and use that when logging? That way if someone adds a new field, it'll get reflected in the logs automatically. 2) For the following statements, can you merge them into one log line? {code} +if (LOG.isDebugEnabled()) { + LOG.debug("storeContainer.containerId: " + containerId ); + LOG.debug("storeContainer.startRequest: " + startRequest); +} {code} and {code} +if (LOG.isDebugEnabled()) { + LOG.debug("storeContainerDiagnostics.containerId: " + containerId); + LOG.debug("storeContainerDiagnostics.diagnostics: " + diagnostics); +} {code} and {code} +if (LOG.isDebugEnabled()) { + LOG.debug("storeContainerResourceChanged.containerId: " + containerId); + LOG.debug("storeContainerResourceChanged.capability: " + capability); +} {code} and {code} +if (LOG.isDebugEnabled()) { + LOG.debug("storeApplication.appId: " + appId); + LOG.debug("storeApplication.proto: " + p); +} {code} > Add additional logging to some NM state store operations > > > Key: YARN-4649 > URL: https://issues.apache.org/jira/browse/YARN-4649 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Sidharta Seethana >Assignee: Sidharta Seethana >Priority: Minor > Attachments: YARN-4649.001.patch > > > Adding additional logging to NM container recovery code (specifically > application/container status operations) makes it easier to debug container > recovery related issues. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
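Both review suggestions above can be combined as in the following sketch: a {{toString()}} so that newly added fields show up in the logs automatically, plus one guarded debug line instead of one line per field. The class below is a simplified stand-in for the real NM state-store type, not the actual Hadoop code:

```java
public class RecoveredContainerState {
    // Simplified stand-in fields for the real recovered-container state.
    private final String diagnostics;
    private final int exitCode;
    private final String status;

    RecoveredContainerState(String diagnostics, int exitCode, String status) {
        this.diagnostics = diagnostics;
        this.exitCode = exitCode;
        this.status = status;
    }

    // Suggestion 1: toString() keeps log output in sync with the fields.
    @Override
    public String toString() {
        return "RecoveredContainerState{diagnostics=" + diagnostics
            + ", exitCode=" + exitCode + ", status=" + status + "}";
    }

    public static void main(String[] args) {
        RecoveredContainerState rcs = new RecoveredContainerState("OK", 0, "COMPLETED");
        // Suggestion 2: one merged, guarded debug statement instead of several.
        boolean debugEnabled = true; // stands in for LOG.isDebugEnabled()
        if (debugEnabled) {
            System.out.println("Recovering container with state: " + rcs);
        }
    }
}
```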
[jira] [Updated] (YARN-4411) RMAppAttemptImpl#createApplicationAttemptReport throws IllegalArgumentException
[ https://issues.apache.org/jira/browse/YARN-4411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Devaraj K updated YARN-4411: Hadoop Flags: Reviewed Summary: RMAppAttemptImpl#createApplicationAttemptReport throws IllegalArgumentException (was: ResourceManager IllegalArgumentException error) +1, lgtm, will commit it shortly. > RMAppAttemptImpl#createApplicationAttemptReport throws > IllegalArgumentException > --- > > Key: YARN-4411 > URL: https://issues.apache.org/jira/browse/YARN-4411 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Affects Versions: 2.7.1 >Reporter: yarntime >Assignee: Bibin A Chundatt > Attachments: 0002-YARN-4411.patch, 0003-YARN-4411.patch, > YARN-4411.001.patch > > > in version 2.7.1, line 1914 may cause IllegalArgumentException in > RMAppAttemptImpl: > YarnApplicationAttemptState.valueOf(this.getState().toString()) > cause by this.getState() returns type RMAppAttemptState which may not be > converted to YarnApplicationAttemptState. 
> {noformat} > java.lang.IllegalArgumentException: No enum constant > org.apache.hadoop.yarn.api.records.YarnApplicationAttemptState.LAUNCHED_UNMANAGED_SAVING > at java.lang.Enum.valueOf(Enum.java:236) > at > org.apache.hadoop.yarn.api.records.YarnApplicationAttemptState.valueOf(YarnApplicationAttemptState.java:27) > at > org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.createApplicationAttemptReport(RMAppAttemptImpl.java:1870) > at > org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplicationAttemptReport(ClientRMService.java:355) > at > org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getApplicationAttemptReport(ApplicationClientProtocolPBServiceImpl.java:355) > at > org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:425) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (YARN-4340) Add "list" API to reservation system
[ https://issues.apache.org/jira/browse/YARN-4340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Po updated YARN-4340: -- Attachment: YARN-4340.v12.patch I double checked the ASFLicense failures by running test-patch on my local machine and there were no failures found. I also ran this patch end to end with a single node cluster on my local machine, with no failures. With this patch applied, a reservation was made, and an application was submitted against the reservation, and again - there were no failures. > Add "list" API to reservation system > > > Key: YARN-4340 > URL: https://issues.apache.org/jira/browse/YARN-4340 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacityscheduler, fairscheduler, resourcemanager >Reporter: Carlo Curino >Assignee: Sean Po > Attachments: YARN-4340.v1.patch, YARN-4340.v10.patch, > YARN-4340.v11.patch, YARN-4340.v12.patch, YARN-4340.v2.patch, > YARN-4340.v3.patch, YARN-4340.v4.patch, YARN-4340.v5.patch, > YARN-4340.v6.patch, YARN-4340.v7.patch, YARN-4340.v8.patch, YARN-4340.v9.patch > > > This JIRA tracks changes to the APIs of the reservation system, and enables > querying the reservation system on which reservation exists by "time-range, > reservation-id". > YARN-4420 has a dependency on this. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (YARN-4411) RMAppAttemptImpl#createApplicationAttemptReport throws IllegalArgumentException
[ https://issues.apache.org/jira/browse/YARN-4411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15123194#comment-15123194 ] Hudson commented on YARN-4411: -- FAILURE: Integrated in Hadoop-trunk-Commit #9208 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/9208/]) YARN-4411. RMAppAttemptImpl#createApplicationAttemptReport throws (devaraj: rev a277bdc9edc66bef419fcd063b832073e512f234) * hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java * hadoop-yarn-project/CHANGES.txt * hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/TestRMAppAttemptTransitions.java > RMAppAttemptImpl#createApplicationAttemptReport throws > IllegalArgumentException > --- > > Key: YARN-4411 > URL: https://issues.apache.org/jira/browse/YARN-4411 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Affects Versions: 2.7.1 >Reporter: yarntime >Assignee: Bibin A Chundatt > Fix For: 2.8.0 > > Attachments: 0002-YARN-4411.patch, 0003-YARN-4411.patch, > YARN-4411.001.patch > > > in version 2.7.1, line 1914 may cause IllegalArgumentException in > RMAppAttemptImpl: > YarnApplicationAttemptState.valueOf(this.getState().toString()) > cause by this.getState() returns type RMAppAttemptState which may not be > converted to YarnApplicationAttemptState. 
> {noformat} > java.lang.IllegalArgumentException: No enum constant > org.apache.hadoop.yarn.api.records.YarnApplicationAttemptState.LAUNCHED_UNMANAGED_SAVING > at java.lang.Enum.valueOf(Enum.java:236) > at > org.apache.hadoop.yarn.api.records.YarnApplicationAttemptState.valueOf(YarnApplicationAttemptState.java:27) > at > org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.createApplicationAttemptReport(RMAppAttemptImpl.java:1870) > at > org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplicationAttemptReport(ClientRMService.java:355) > at > org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getApplicationAttemptReport(ApplicationClientProtocolPBServiceImpl.java:355) > at > org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:425) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (YARN-4654) Yarn node label CLI should parse "=" correctly when trying to remove all labels on a node
[ https://issues.apache.org/jira/browse/YARN-4654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naganarasimha G R updated YARN-4654: Attachment: YARN-4654.v1.002.patch Thanks [~bibinchundatt] for the comments, Corrected the same in this patch ! > Yarn node label CLI should parse "=" correctly when trying to remove all > labels on a node > - > > Key: YARN-4654 > URL: https://issues.apache.org/jira/browse/YARN-4654 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Wangda Tan >Assignee: Naganarasimha G R > Attachments: YARN-4654.v1.001.patch, YARN-4654.v1.002.patch > > > Currently, when adding labels to nodes, user can run: > {{yarn rmadmin -replaceLabelsOnNode "host1=x host2=y"}} > However, when removing labels from a node, user has to run: > {{yarn rmadmin -replaceLabelsOnNode "host1 host2"}} > Instead of: > {{yarn rmadmin -replaceLabelsOnNode "host1= host2="}} > We should handle both of "=" exists/not-exists case when removing labels on a > node. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
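A self-contained sketch of the parsing behavior being requested, where "host1=x", "host1=" and bare "host1" all parse, and an empty label list means "remove all labels on that node". This is illustrative only, not the actual RMAdminCLI parsing code:

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.Arrays;
import java.util.List;
import java.util.Map;

public class NodeLabelArg {
    // Parse one token of a replaceLabelsOnNode argument into (host, labels).
    // An empty label list means: remove all labels from the node.
    static Map.Entry<String, List<String>> parse(String token) {
        int eq = token.indexOf('=');
        String host = eq < 0 ? token : token.substring(0, eq);
        String labels = eq < 0 ? "" : token.substring(eq + 1);
        List<String> list = labels.isEmpty()
            ? List.of()                       // "host1" or "host1=" -> remove all
            : Arrays.asList(labels.split(","));
        return new SimpleEntry<>(host, list);
    }

    public static void main(String[] args) {
        System.out.println(parse("host1=x")); // host1=[x]
        System.out.println(parse("host1="));  // host1=[]  (remove all labels)
        System.out.println(parse("host1"));   // host1=[]  (remove all labels)
    }
}
```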
[jira] [Updated] (YARN-4653) Document YARN security model from the perspective of Application Developers
[ https://issues.apache.org/jira/browse/YARN-4653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated YARN-4653: - Summary: Document YARN security model from the perspective of Application Developers (was: Document YARN security model) > Document YARN security model from the perspective of Application Developers > --- > > Key: YARN-4653 > URL: https://issues.apache.org/jira/browse/YARN-4653 > Project: Hadoop YARN > Issue Type: Task > Components: site >Affects Versions: 2.7.2 >Reporter: Steve Loughran >Assignee: Steve Loughran > Original Estimate: 2h > Remaining Estimate: 2h > > What YARN apps need to do for security today is generally copied direct from > distributed shell, with a bit of [ill-informed > superstition|https://steveloughran.gitbooks.io/kerberos_and_hadoop/content/sections/yarn.html] > being the sole prose. > We need a normative document in the YARN site covering > # the needs for YARN security > # token creation for AM launch > # how the RM gets involved > # token propagation on container launch > # token renewal strategies > # How to get tokens for other apps like HBase and Hive. > # how to work under OOzie > Perhaps the WritingYarnApplications.md doc is updated, otherwise why not just > link to the relevant bit of the distributed shell client on github for a > guarantee of staying up to date? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (YARN-4653) Document YARN security model from the perspective of Application Developers
[ https://issues.apache.org/jira/browse/YARN-4653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15123315#comment-15123315 ] Steve Loughran commented on YARN-4653: -- thanks for the link ... hadn't seen that. nice. That's a document which should be linked to, ideally even pulled into the hadoop site I'm doing something less ambitious but equally important: explain to application developers what they need. I'll change the title accordingly > Document YARN security model from the perspective of Application Developers > --- > > Key: YARN-4653 > URL: https://issues.apache.org/jira/browse/YARN-4653 > Project: Hadoop YARN > Issue Type: Task > Components: site >Affects Versions: 2.7.2 >Reporter: Steve Loughran >Assignee: Steve Loughran > Original Estimate: 2h > Remaining Estimate: 2h > > What YARN apps need to do for security today is generally copied direct from > distributed shell, with a bit of [ill-informed > superstition|https://steveloughran.gitbooks.io/kerberos_and_hadoop/content/sections/yarn.html] > being the sole prose. > We need a normative document in the YARN site covering > # the needs for YARN security > # token creation for AM launch > # how the RM gets involved > # token propagation on container launch > # token renewal strategies > # How to get tokens for other apps like HBase and Hive. > # how to work under OOzie > Perhaps the WritingYarnApplications.md doc is updated, otherwise why not just > link to the relevant bit of the distributed shell client on github for a > guarantee of staying up to date? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (YARN-4629) Distributed shell breaks under strong security
[ https://issues.apache.org/jira/browse/YARN-4629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15123667#comment-15123667 ] Steve Loughran commented on YARN-4629: -- + add a check that the RM principal config option is non-null/non-empty > Distributed shell breaks under strong security > -- > > Key: YARN-4629 > URL: https://issues.apache.org/jira/browse/YARN-4629 > Project: Hadoop YARN > Issue Type: Bug > Components: applications/distributed-shell, security >Affects Versions: 2.7.1 > Environment: Secure cluster with the RM principal listed with a > /_HOST entry to be expanded, most common with YARN HA enabled. >Reporter: Daniel Templeton >Assignee: Daniel Templeton > Attachments: YARN-4629.001.patch, YARN-4629.002.patch > > > If the auth_to_local is set to map requests from unknown hosts to nobody, the > dist shell app fails. The reason is that the client doesn't translate the > _HOST placeholder to the local hostname. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (YARN-4647) Make RegisterNodeManagerRequestPBImpl thread-safe
[ https://issues.apache.org/jira/browse/YARN-4647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15123685#comment-15123685 ] Hudson commented on YARN-4647: -- FAILURE: Integrated in Hadoop-trunk-Commit #9209 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/9209/]) YARN-4647. Make RegisterNodeManagerRequestPBImpl thread-safe. (kasha) (kasha: rev c9a09d6926b258e205a4ff7998ce5a86bf5dbe3b) * hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/RegisterNodeManagerRequestPBImpl.java * hadoop-yarn-project/CHANGES.txt > Make RegisterNodeManagerRequestPBImpl thread-safe > - > > Key: YARN-4647 > URL: https://issues.apache.org/jira/browse/YARN-4647 > Project: Hadoop YARN > Issue Type: Improvement > Components: nodemanager >Affects Versions: 2.8.0 >Reporter: Karthik Kambatla >Assignee: Karthik Kambatla > Attachments: yarn-4647.001.patch, yarn-4647.002.patch > > > While working on YARN-4512, I noticed there are potential race conditions in > RegisterNodeManagerRequestPBImpl. We need to add more locking. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
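The usual pattern for making a protobuf-backed record like RegisterNodeManagerRequestPBImpl thread-safe is to synchronize every method that reads or mutates the shared builder/proto state, so concurrent setters and a merge-to-proto cannot interleave. A simplified stand-in (hypothetical fields and method names, not the actual Hadoop PBImpl code):

```java
public class ThreadSafeRecord {
    private String nodeId; // stands in for the mutable builder/proto state

    // Every accessor of shared state is synchronized on the instance.
    public synchronized void setNodeId(String nodeId) {
        this.nodeId = nodeId;
    }

    public synchronized String getNodeId() {
        return nodeId;
    }

    // Stands in for mergeLocalToProto(): must observe a consistent snapshot
    // of all fields, which the shared monitor guarantees.
    public synchronized String build() {
        return "proto{nodeId=" + nodeId + "}";
    }

    public static void main(String[] args) {
        ThreadSafeRecord r = new ThreadSafeRecord();
        r.setNodeId("nm1:8041");
        System.out.println(r.build()); // proto{nodeId=nm1:8041}
    }
}
```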
[jira] [Commented] (YARN-4615) TestAbstractYarnScheduler#testResourceRequestRecoveryToTheRightAppAttempt fails occasionally
[ https://issues.apache.org/jira/browse/YARN-4615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123760#comment-15123760 ] Hadoop QA commented on YARN-4615: -

(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 0s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
| +1 | mvninstall | 6m 49s | trunk passed |
| +1 | compile | 0m 28s | trunk passed with JDK v1.8.0_66 |
| +1 | compile | 0m 31s | trunk passed with JDK v1.7.0_91 |
| +1 | checkstyle | 0m 16s | trunk passed |
| +1 | mvnsite | 0m 36s | trunk passed |
| +1 | mvneclipse | 0m 14s | trunk passed |
| +1 | findbugs | 1m 14s | trunk passed |
| +1 | javadoc | 0m 22s | trunk passed with JDK v1.8.0_66 |
| +1 | javadoc | 0m 27s | trunk passed with JDK v1.7.0_91 |
| +1 | mvninstall | 0m 32s | the patch passed |
| +1 | compile | 0m 25s | the patch passed with JDK v1.8.0_66 |
| +1 | javac | 0m 25s | the patch passed |
| +1 | compile | 0m 30s | the patch passed with JDK v1.7.0_91 |
| +1 | javac | 0m 30s | the patch passed |
| +1 | checkstyle | 0m 15s | the patch passed |
| +1 | mvnsite | 0m 34s | the patch passed |
| +1 | mvneclipse | 0m 12s | the patch passed |
| +1 | whitespace | 0m 0s | Patch has no whitespace issues. |
| +1 | findbugs | 1m 21s | the patch passed |
| +1 | javadoc | 0m 19s | the patch passed with JDK v1.8.0_66 |
| +1 | javadoc | 0m 26s | the patch passed with JDK v1.7.0_91 |
| -1 | unit | 66m 41s | hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_66. |
| -1 | unit | 66m 45s | hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_91. |
| +1 | asflicense | 0m 18s | Patch does not generate ASF License warnings. |
| | | 150m 19s | |

|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
| | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
| JDK v1.7.0_91 Failed junit tests | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
| | hadoop.yarn.server.resourcemanager.TestRMRestart |
| | hadoop.yarn.server.resourcemanager.TestAMAuthorization |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12785167/0005-YARN-4615.patch |
| JIRA Issue | YARN-4615 |
| Optional Tests | asflicense compile javac
[jira] [Updated] (YARN-4647) Make RegisterNodeManagerRequestPBImpl thread-safe
[ https://issues.apache.org/jira/browse/YARN-4647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Karthik Kambatla updated YARN-4647: --- Summary: Make RegisterNodeManagerRequestPBImpl thread-safe (was: RegisterNodeManagerRequestPBImpl needs better locking) > Make RegisterNodeManagerRequestPBImpl thread-safe > - > > Key: YARN-4647 > URL: https://issues.apache.org/jira/browse/YARN-4647 > Project: Hadoop YARN > Issue Type: Bug > Components: nodemanager >Affects Versions: 2.8.0 >Reporter: Karthik Kambatla >Assignee: Karthik Kambatla > Attachments: yarn-4647.001.patch, yarn-4647.002.patch > > > While working on YARN-4512, I noticed there are potential race conditions in > RegisterNodeManagerRequestPBImpl. We need to add more locking. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (YARN-4647) RegisterNodeManagerRequestPBImpl needs better locking
[ https://issues.apache.org/jira/browse/YARN-4647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15123649#comment-15123649 ] Karthik Kambatla commented on YARN-4647: The checkstyle errors are benign. No unit test since the patch only adds synchronization. Checking this in, based on Wangda's +1 earlier. > RegisterNodeManagerRequestPBImpl needs better locking > - > > Key: YARN-4647 > URL: https://issues.apache.org/jira/browse/YARN-4647 > Project: Hadoop YARN > Issue Type: Bug > Components: nodemanager >Affects Versions: 2.8.0 >Reporter: Karthik Kambatla >Assignee: Karthik Kambatla > Attachments: yarn-4647.001.patch, yarn-4647.002.patch > > > While working on YARN-4512, I noticed there are potential race conditions in > RegisterNodeManagerRequestPBImpl. We need to add more locking. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
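The kind of locking being added can be sketched as follows. This is a minimal illustration with a hypothetical field, not the actual RegisterNodeManagerRequestPBImpl code; the real class guards lazily built protobuf state in the same spirit:

```java
// Sketch of the synchronization pattern: every accessor that touches the
// lazily initialized state holds the same lock, so concurrent callers can
// never observe a half-initialized object. Field names are hypothetical.
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class SafeRequest {
  private List<String> nodeLabels = null;   // lazily initialized

  // Lazy init must happen under the same lock as reads and writes.
  private synchronized void initNodeLabels() {
    if (nodeLabels == null) {
      nodeLabels = new ArrayList<>();
    }
  }

  public synchronized List<String> getNodeLabels() {
    initNodeLabels();
    return Collections.unmodifiableList(nodeLabels);
  }

  public synchronized void addNodeLabel(String label) {
    initNodeLabels();
    nodeLabels.add(label);
  }
}
```

Since every method is synchronized on the instance, no new unit test is needed beyond existing coverage, which matches the reasoning in the comment above.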
[jira] [Commented] (YARN-4629) Distributed shell breaks under strong security
[ https://issues.apache.org/jira/browse/YARN-4629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15123678#comment-15123678 ] Steve Loughran commented on YARN-4629: -- My variant
{code}
...
import static org.apache.hadoop.yarn.conf.YarnConfiguration.*;

public static String getRMPrincipal(Configuration conf) throws IOException {
  String principal = conf.get(RM_PRINCIPAL, "");
  String hostname;
  Preconditions.checkState(!principal.isEmpty(), "Not set: " + RM_PRINCIPAL);
  if (HAUtil.isHAEnabled(conf)) {
    YarnConfiguration yarnConf = new YarnConfiguration(conf);
    if (yarnConf.get(RM_HA_ID) == null) {
      // If RM_HA_ID is not configured, use the first of RM_HA_IDS.
      // Any valid RM HA ID should work.
      String[] rmIds = yarnConf.getStrings(RM_HA_IDS);
      Preconditions.checkState((rmIds != null) && (rmIds.length > 0),
          "Not set " + RM_HA_IDS);
      yarnConf.set(RM_HA_ID, rmIds[0]);
    }
    hostname = yarnConf.getSocketAddr(
        RM_ADDRESS, DEFAULT_RM_ADDRESS, DEFAULT_RM_PORT).getHostName();
  } else {
    hostname = conf.getSocketAddr(
        RM_ADDRESS, DEFAULT_RM_ADDRESS, DEFAULT_RM_PORT).getHostName();
  }
  return SecurityUtil.getServerPrincipal(principal, hostname);
}
{code}
> Distributed shell breaks under strong security > -- > > Key: YARN-4629 > URL: https://issues.apache.org/jira/browse/YARN-4629 > Project: Hadoop YARN > Issue Type: Bug > Components: applications/distributed-shell, security >Affects Versions: 2.7.1 > Environment: Secure cluster with the RM principal listed with a > /_HOST entry to be expanded, most common with YARN HA enabled. >Reporter: Daniel Templeton >Assignee: Daniel Templeton > Attachments: YARN-4629.001.patch, YARN-4629.002.patch > > > If the auth_to_local is set to map requests from unknown hosts to nobody, the > dist shell app fails. The reason is that the client doesn't translate the > _HOST placeholder to the local hostname. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
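For context, the _HOST expansion that SecurityUtil.getServerPrincipal performs at the end of the snippet above can be illustrated with a simplified stand-in. This is only a sketch of the placeholder substitution; the real method also handles 0.0.0.0 addresses and full Kerberos principal parsing:

```java
// Simplified illustration of _HOST substitution in a Kerberos principal.
// Not the real SecurityUtil implementation; hostnames are lowercased
// because Kerberos host components are conventionally lowercase.
class HostSubstitution {
  static String expand(String principal, String hostname) {
    // e.g. "rm/_HOST@EXAMPLE.COM" + "rm1.example.com"
    //   -> "rm/rm1.example.com@EXAMPLE.COM"
    return principal.replace("_HOST", hostname.toLowerCase());
  }
}
```

This is why the distributed shell client fails without the expansion: the unexpanded "_HOST" string never matches the RM's actual principal.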
[jira] [Updated] (YARN-4647) Make RegisterNodeManagerRequestPBImpl thread-safe
[ https://issues.apache.org/jira/browse/YARN-4647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Karthik Kambatla updated YARN-4647: --- Issue Type: Improvement (was: Bug) > Make RegisterNodeManagerRequestPBImpl thread-safe > - > > Key: YARN-4647 > URL: https://issues.apache.org/jira/browse/YARN-4647 > Project: Hadoop YARN > Issue Type: Improvement > Components: nodemanager >Affects Versions: 2.8.0 >Reporter: Karthik Kambatla >Assignee: Karthik Kambatla > Attachments: yarn-4647.001.patch, yarn-4647.002.patch > > > While working on YARN-4512, I noticed there are potential race conditions in > RegisterNodeManagerRequestPBImpl. We need to add more locking. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (YARN-4512) Provide a knob to turn on over-allocation
[ https://issues.apache.org/jira/browse/YARN-4512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Karthik Kambatla updated YARN-4512: --- Attachment: yarn-4512-yarn-1011.005.patch > Provide a knob to turn on over-allocation > - > > Key: YARN-4512 > URL: https://issues.apache.org/jira/browse/YARN-4512 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager >Reporter: Karthik Kambatla >Assignee: Karthik Kambatla > Attachments: YARN-4512-YARN-1011.001.patch, > yarn-4512-yarn-1011.002.patch, yarn-4512-yarn-1011.003.patch, > yarn-4512-yarn-1011.004.patch, yarn-4512-yarn-1011.005.patch > > > We need two configs for overallocation - one to specify the threshold up to > which it is okay to over-allocate, another to specify the threshold after > which OPPORTUNISTIC containers should be preempted. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (YARN-4512) Provide a knob to turn on over-allocation
[ https://issues.apache.org/jira/browse/YARN-4512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15123774#comment-15123774 ] Karthik Kambatla commented on YARN-4512: Committed YARN-4647 to trunk. Rebased YARN-1011 on trunk. The v5 patch is v4 rebased. > Provide a knob to turn on over-allocation > - > > Key: YARN-4512 > URL: https://issues.apache.org/jira/browse/YARN-4512 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager >Reporter: Karthik Kambatla >Assignee: Karthik Kambatla > Attachments: YARN-4512-YARN-1011.001.patch, > yarn-4512-yarn-1011.002.patch, yarn-4512-yarn-1011.003.patch, > yarn-4512-yarn-1011.004.patch, yarn-4512-yarn-1011.005.patch > > > We need two configs for overallocation - one to specify the threshold up to > which it is okay to over-allocate, another to specify the threshold after > which OPPORTUNISTIC containers should be preempted. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
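The two thresholds the issue describes could look roughly like this in yarn-site.xml. The property names below are hypothetical, invented purely for illustration; the actual keys are whatever the patch under review defines:

```xml
<!-- Hypothetical key names for illustration only; see the patch for the real ones. -->
<property>
  <!-- Over-allocate as long as node utilization stays below this fraction. -->
  <name>yarn.nodemanager.overallocation.threshold</name>
  <value>0.75</value>
</property>
<property>
  <!-- Above this fraction of utilization, preempt OPPORTUNISTIC containers. -->
  <name>yarn.nodemanager.overallocation.preemption-threshold</name>
  <value>0.95</value>
</property>
```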
[jira] [Commented] (YARN-4512) Provide a knob to turn on over-allocation
[ https://issues.apache.org/jira/browse/YARN-4512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15123905#comment-15123905 ] Hadoop QA commented on YARN-4512: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 33s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 32s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 48s {color} | {color:green} trunk passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 7s {color} | {color:green} trunk passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 50s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 51s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 11s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 37s {color} | {color:green} trunk passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | 
{color:green} javadoc {color} | {color:green} 3m 56s {color} | {color:green} trunk passed with JDK v1.7.0_91 {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 32s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 30s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 44s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 44s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 44s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 4s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 2m 4s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 4s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 33s {color} | {color:red} hadoop-yarn-project/hadoop-yarn: patch generated 7 new + 308 unchanged - 0 fixed = 315 total (was 308) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 43s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 42s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s {color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 54s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 31s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 48s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 19s {color} | {color:green} hadoop-yarn-api in the patch passed with JDK v1.8.0_66. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 51s {color} | {color:green} hadoop-yarn-common in the patch passed with JDK v1.8.0_66. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 20s {color} | {color:green} hadoop-yarn-server-common in the patch passed with JDK v1.8.0_66. {color} | |
[jira] [Commented] (YARN-4615) TestAbstractYarnScheduler#testResourceRequestRecoveryToTheRightAppAttempt fails occasionally
[ https://issues.apache.org/jira/browse/YARN-4615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15123769#comment-15123769 ] Sunil G commented on YARN-4615: --- Ran the failed cases locally and they pass fine. Also verified that this test does not fail with the induced reproduction steps given by Rohith. > TestAbstractYarnScheduler#testResourceRequestRecoveryToTheRightAppAttempt > fails occasionally > > > Key: YARN-4615 > URL: https://issues.apache.org/jira/browse/YARN-4615 > Project: Hadoop YARN > Issue Type: Sub-task > Components: test >Reporter: Jason Lowe >Assignee: Sunil G > Attachments: 0001-YARN-4615.patch, 0002-YARN-4615.patch, > 0003-YARN-4615.patch, 0004-YARN-4615.patch, 0005-YARN-4615.patch > > > Sometimes > TestAbstractYarnScheduler#testResourceRequestRecoveryToTheRightAppAttempt > will fail like this: > {noformat} > org.apache.hadoop.yarn.server.resourcemanager.scheduler.TestAbstractYarnScheduler > testResourceRequestRecoveryToTheRightAppAttempt[1](org.apache.hadoop.yarn.server.resourcemanager.scheduler.TestAbstractYarnScheduler) > Time elapsed: 77.427 sec <<< FAILURE! > java.lang.AssertionError: Attempt state is not correct (timedout): expected: > SCHEDULED actual: ALLOCATED for the application attempt > appattempt_1453254869107_0001_02 > at org.junit.Assert.fail(Assert.java:88) > at > org.apache.hadoop.yarn.server.resourcemanager.MockRM.waitForState(MockRM.java:197) > at > org.apache.hadoop.yarn.server.resourcemanager.MockRM.waitForState(MockRM.java:172) > at > org.apache.hadoop.yarn.server.resourcemanager.MockRM.waitForAttemptScheduled(MockRM.java:831) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.TestAbstractYarnScheduler.testResourceRequestRecoveryToTheRightAppAttempt(TestAbstractYarnScheduler.java:572) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (YARN-4594) container-executor fails to remove directory tree when chmod required
[ https://issues.apache.org/jira/browse/YARN-4594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Lowe updated YARN-4594: - Summary: container-executor fails to remove directory tree when chmod required (was: Fix test-container-executor.c to pass) Thanks for the patch, Colin! I noticed we're using openat, fchmodat, and unlinkat for the first time. I suspect most other POSIX-like distributions support these, but I think they were only recently added to Mac OS X (in 10.10 Yosemite). I'm not sure if anyone uses container-executor on Mac OS X (or if container-executor even compiles/works on Mac OS X today), but adding these could break the native build for those using older Mac OS X versions. One alternative would be a double pass with ftw, where we first walk just the directories changing permissions, followed by the walk it does today. The directory trees involved are going to be very shallow, so that is probably not a problem in practice if we decided to go that route. If we stick with the custom walker, here are some comments on the patch:
* I think this needs to use strerror(-fd):
{code}
if (fd < 0) {
  fprintf(LOGFILE, "error opening %s: %s\n", name, strerror(ret));
  goto done;
}
{code}
* There's no check for an error encountered by readdir, and therefore no logging if one occurs.
* Sometimes recursive_unlink_helper returns errno and sometimes it returns -errno. For example, the "failed to stat" path sets ret=errno and returns -ret (i.e. -errno), but the "failed to unlink" path sets ret=-errno and thus returns errno.
> container-executor fails to remove directory tree when chmod required > - > > Key: YARN-4594 > URL: https://issues.apache.org/jira/browse/YARN-4594 > Project: Hadoop YARN > Issue Type: Bug > Components: nodemanager >Reporter: Colin Patrick McCabe >Assignee: Colin Patrick McCabe > Attachments: YARN-4594.001.patch > > > test-container-executor.c doesn't work: > * It assumes that realpath(/bin/ls) will be /bin/ls, whereas it is actually > /usr/bin/ls on many systems. > * The recursive delete logic in container-executor.c fails -- nftw does the > wrong thing when confronted with directories with the wrong mode (permission > bits), leading to an attempt to run rmdir on a non-empty directory. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (YARN-4654) Yarn node label CLI should parse "=" correctly when trying to remove all labels on a node
[ https://issues.apache.org/jira/browse/YARN-4654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15123829#comment-15123829 ] Naganarasimha G R commented on YARN-4654: - The test case failures are either unrelated to the modifications in the patch, or are timeouts that pass locally. [~rohithsharma] can you take a look at this? > Yarn node label CLI should parse "=" correctly when trying to remove all > labels on a node > - > > Key: YARN-4654 > URL: https://issues.apache.org/jira/browse/YARN-4654 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Wangda Tan >Assignee: Naganarasimha G R > Attachments: YARN-4654.v1.001.patch, YARN-4654.v1.002.patch > > > Currently, when adding labels to nodes, user can run: > {{yarn rmadmin -replaceLabelsOnNode "host1=x host2=y"}} > However, when removing labels from a node, user has to run: > {{yarn rmadmin -replaceLabelsOnNode "host1 host2"}} > Instead of: > {{yarn rmadmin -replaceLabelsOnNode "host1= host2="}} > We should handle both the "=" exists and not-exists cases when removing labels on a > node. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
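The tolerant parsing the issue asks for can be sketched like this: treat "host" and "host=" identically, as a request to replace that node's labels with the empty set. The helper below is a hypothetical illustration, not the actual RMAdminCLI code:

```java
// Parse "host[=label[,label...]]" tokens from a replaceLabelsOnNode
// argument string. An empty label list means "remove all labels".
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class NodeLabelArgs {
  static Map<String, List<String>> parse(String args) {
    Map<String, List<String>> result = new HashMap<>();
    for (String token : args.trim().split("\\s+")) {
      int eq = token.indexOf('=');
      String host = (eq < 0) ? token : token.substring(0, eq);
      List<String> labels = new ArrayList<>();
      // Only collect labels when something follows the '='; a bare
      // "host" or a trailing "host=" both yield an empty list.
      if (eq >= 0 && eq < token.length() - 1) {
        for (String l : token.substring(eq + 1).split(",")) {
          labels.add(l);
        }
      }
      result.put(host, labels);
    }
    return result;
  }
}
```

With this shape, "host1=x host2= host3" maps host1 to [x] and both host2 and host3 to empty lists, which is exactly the symmetry the issue requests.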
[jira] [Updated] (YARN-4446) Refactor reader API for better extensibility
[ https://issues.apache.org/jira/browse/YARN-4446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Saxena updated YARN-4446: --- Attachment: YARN-4446-YARN-2928.02.patch > Refactor reader API for better extensibility > > > Key: YARN-4446 > URL: https://issues.apache.org/jira/browse/YARN-4446 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Varun Saxena >Assignee: Varun Saxena > Labels: yarn-2928-1st-milestone > Attachments: YARN-4446-YARN-2928.01.patch, > YARN-4446-YARN-2928.02.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (YARN-1011) [Umbrella] Schedule containers based on utilization of currently allocated containers
[ https://issues.apache.org/jira/browse/YARN-1011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15124489#comment-15124489 ] Inigo Goiri commented on YARN-1011: --- The second scheduling loop makes sense. I'd like the design doc to be updated with this new approach and a couple of examples of how containers would be started. I think the next step would be to start YARN-4511 and maybe create a new JIRA for the overallocation scheduling in the NM. After that, we could try to implement the scheduling approach in YARN-1013 or YARN-1015. We would still be missing the interface to mark containers/applications as supporting opportunistic execution; is this being tracked in any other JIRA? > [Umbrella] Schedule containers based on utilization of currently allocated > containers > - > > Key: YARN-1011 > URL: https://issues.apache.org/jira/browse/YARN-1011 > Project: Hadoop YARN > Issue Type: New Feature >Reporter: Arun C Murthy > Attachments: yarn-1011-design-v0.pdf, yarn-1011-design-v1.pdf, > yarn-1011-design-v2.pdf > > > Currently RM allocates containers and assumes resources allocated are > utilized. > RM can, and should, get to a point where it measures utilization of allocated > containers and, if appropriate, allocate more (speculative?) containers. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (YARN-4512) Provide a knob to turn on over-allocation
[ https://issues.apache.org/jira/browse/YARN-4512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15124490#comment-15124490 ] Inigo Goiri commented on YARN-4512: --- Thanks [~kasha] for working on this! > Provide a knob to turn on over-allocation > - > > Key: YARN-4512 > URL: https://issues.apache.org/jira/browse/YARN-4512 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager >Reporter: Karthik Kambatla >Assignee: Karthik Kambatla > Attachments: YARN-4512-YARN-1011.001.patch, > yarn-4512-yarn-1011.002.patch, yarn-4512-yarn-1011.003.patch, > yarn-4512-yarn-1011.004.patch, yarn-4512-yarn-1011.005.patch > > > We need two configs for overallocation - one to specify the threshold up to > which it is okay to over-allocate, another to specify the threshold after > which OPPORTUNISTIC containers should be preempted. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (YARN-4625) Make ApplicationSubmissionContext and ApplicationSubmissionContextInfo more consistent
[ https://issues.apache.org/jira/browse/YARN-4625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xuan Gong updated YARN-4625: Attachment: YARN-4625.2.patch > Make ApplicationSubmissionContext and ApplicationSubmissionContextInfo more > consistent > -- > > Key: YARN-4625 > URL: https://issues.apache.org/jira/browse/YARN-4625 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Xuan Gong >Assignee: Xuan Gong > Attachments: YARN-4625.2.patch, YARN-4625.20160121.1.patch > > > There are some differences between ApplicationSubmissionContext and > ApplicationSubmissionContextInfo; for example, we cannot submit an application > with logAggregationContext specified through the RM web service. We could make them > more consistent. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (YARN-4428) Redirect RM page to AHS page when AHS turned on and RM page is not available
[ https://issues.apache.org/jira/browse/YARN-4428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15124305#comment-15124305 ] Hudson commented on YARN-4428: -- FAILURE: Integrated in Hadoop-trunk-Commit #9211 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/9211/]) YARN-4428. Redirect RM page to AHS page when AHS turned on and RM page (jlowe: rev 772ea7b41b06beaa1f4ac4fa86eac8d6e6c8cd36) * hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebAppFilter.java * hadoop-yarn-project/CHANGES.txt * hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebApp.java * hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java * hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/TestRMAppAttemptTransitions.java > Redirect RM page to AHS page when AHS turned on and RM page is not available > > > Key: YARN-4428 > URL: https://issues.apache.org/jira/browse/YARN-4428 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Chang Li >Assignee: Chang Li > Fix For: 2.7.3 > > Attachments: YARN-4428.1.2.patch, YARN-4428.1.patch, > YARN-4428.10.patch, YARN-4428.2.2.patch, YARN-4428.2.patch, > YARN-4428.3.patch, YARN-4428.3.patch, YARN-4428.4.patch, YARN-4428.5.patch, > YARN-4428.6.patch, YARN-4428.7.patch, YARN-4428.8.patch, > YARN-4428.9.test.patch, YARN-4428.branch-2.7.patch > > > When AHS is turned on, if we can't view application in RM page, RM page > should redirect us to AHS page. 
For example, when you go to > cluster/app/application_1, if the RM no longer remembers the application, we will > simply get "Failed to read the application application_1", but it would be > good for the RM UI to try to redirect to the AHS UI > /applicationhistory/app/application_1 to see if it's there. This redirect > pattern already exists for logs in the NodeManager UI. > Also, when AHS is enabled, WebAppProxyServlet should redirect to the AHS page as a > fallback when the RM does not remember the app. YARN-3975 tried to do this only when the > original tracking URL is not set. But in many cases, such as when the app > fails at launch, the original tracking URL will be set to point to the RM page, so the > redirect to the AHS page won't work. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
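The fallback described above can be sketched as a small decision helper. The names and the exact path shapes here are illustrative (based on the /applicationhistory/app/application_1 URL mentioned in the issue), not the actual RMWebAppFilter code:

```java
// Decide whether an RM web request should be redirected to the AHS UI.
// Returns the redirect target, or null when the RM should render its
// own page. Hypothetical helper for illustration.
class AhsRedirect {
  static String redirectPath(String path, boolean rmKnowsApp,
      boolean ahsEnabled, String ahsBase) {
    // e.g. path = "/cluster/app/application_1"
    if (!rmKnowsApp && ahsEnabled && path.contains("/app/application_")) {
      String appId = path.substring(path.lastIndexOf('/') + 1);
      return ahsBase + "/applicationhistory/app/" + appId;
    }
    return null; // no redirect; RM still remembers the app (or AHS is off)
  }
}
```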
[jira] [Commented] (YARN-4649) Add additional logging to some NM state store operations
[ https://issues.apache.org/jira/browse/YARN-4649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15124438#comment-15124438 ] Hadoop QA commented on YARN-4649: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 1s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s {color} | {color:green} trunk passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s {color} | {color:green} trunk passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 55s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s {color} | {color:green} trunk passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javadoc 
{color} | {color:green} 0m 21s {color} | {color:green} trunk passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 23s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 20s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 23s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 15s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager: patch generated 4 new + 130 unchanged - 0 fixed = 134 total (was 130) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 5s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 38s {color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed with JDK v1.8.0_66. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 9m 17s {color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed with JDK v1.7.0_91. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 33m 32s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0ca8df7 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12785273/YARN-4649.002.patch | | JIRA Issue | YARN-4649 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux fed7bac621a6 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC
[jira] [Updated] (YARN-4649) Add additional logging to some NM state store operations
[ https://issues.apache.org/jira/browse/YARN-4649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sidharta Seethana updated YARN-4649: Attachment: YARN-4649.002.patch Uploaded a new patch based on code review feedback. Added a {{toString()}} implementation to {{RecoveredContainerState}} and combined logging statements. > Add additional logging to some NM state store operations > > > Key: YARN-4649 > URL: https://issues.apache.org/jira/browse/YARN-4649 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Sidharta Seethana >Assignee: Sidharta Seethana >Priority: Minor > Attachments: YARN-4649.001.patch, YARN-4649.002.patch > > > Adding additional logging to NM container recovery code (specifically > application/container status operations) makes it easier to debug container > recovery related issues. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (YARN-4625) Make ApplicationSubmissionContext and ApplicationSubmissionContextInfo more consistent
[ https://issues.apache.org/jira/browse/YARN-4625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15124544#comment-15124544 ] Xuan Gong commented on YARN-4625: - [~vvasudev] Attached a new patch that adds the test case and updates the doc. > Make ApplicationSubmissionContext and ApplicationSubmissionContextInfo more > consistent > -- > > Key: YARN-4625 > URL: https://issues.apache.org/jira/browse/YARN-4625 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Xuan Gong >Assignee: Xuan Gong > Attachments: YARN-4625.2.patch, YARN-4625.20160121.1.patch > > > There are some differences between ApplicationSubmissionContext and > ApplicationSubmissionContextInfo; for example, we cannot submit an application > with logAggregationContext specified through the RM web service. We could make them > more consistent.
[jira] [Commented] (YARN-4594) container-executor fails to remove directory tree when chmod required
[ https://issues.apache.org/jira/browse/YARN-4594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15124347#comment-15124347 ] Hadoop QA commented on YARN-4594: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 25s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s {color} | {color:green} trunk passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s {color} | {color:green} trunk passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 23s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 19s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 19s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 19s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} 
compile {color} | {color:green} 0m 23s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 23s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 23s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 1s {color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 27s {color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed with JDK v1.8.0_66. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 9m 0s {color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed with JDK v1.7.0_91. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 27m 45s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0ca8df7 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12785255/YARN-4594.002.patch | | JIRA Issue | YARN-4594 | | Optional Tests | asflicense compile cc mvnsite javac unit | | uname | Linux 9c29722bdde3 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 772ea7b | | Default Java | 1.7.0_91 | | Multi-JDK versions | /usr/lib/jvm/java-8-oracle:1.8.0_66 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_91 | | whitespace | https://builds.apache.org/job/PreCommit-YARN-Build/10442/artifact/patchprocess/whitespace-eol.txt | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/10442/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-jdk1.8.0_66.txt | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/10442/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-jdk1.7.0_91.txt | | JDK v1.7.0_91 Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/10442/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager | |
[jira] [Commented] (YARN-4446) Refactor reader API for better extensibility
[ https://issues.apache.org/jira/browse/YARN-4446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15124442#comment-15124442 ] Varun Saxena commented on YARN-4446: Fixed the one checkstyle issue that could be addressed. > Refactor reader API for better extensibility > > > Key: YARN-4446 > URL: https://issues.apache.org/jira/browse/YARN-4446 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Varun Saxena >Assignee: Varun Saxena > Labels: yarn-2928-1st-milestone > Attachments: YARN-4446-YARN-2928.01.patch, > YARN-4446-YARN-2928.02.patch > >
[jira] [Commented] (YARN-4446) Refactor reader API for better extensibility
[ https://issues.apache.org/jira/browse/YARN-4446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15124465#comment-15124465 ] Hadoop QA commented on YARN-4446: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 4 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 55s {color} | {color:green} YARN-2928 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 19s {color} | {color:green} YARN-2928 passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s {color} | {color:green} YARN-2928 passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s {color} | {color:green} YARN-2928 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 30s {color} | {color:green} YARN-2928 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 18s {color} | {color:green} YARN-2928 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 43s {color} | {color:green} YARN-2928 passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 18s {color} | {color:red} hadoop-yarn-server-timelineservice in YARN-2928 failed with JDK v1.8.0_66. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s {color} | {color:green} YARN-2928 passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 21s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 15s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 18s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 14s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice: patch generated 3 new + 53 unchanged - 46 fixed = 56 total (was 99) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 45s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 15s {color} | {color:red} hadoop-yarn-server-timelineservice in the patch failed with JDK v1.8.0_66. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 47s {color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-jdk1.7.0_91 with JDK v1.7.0_91 generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s {color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed with JDK v1.7.0_91. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 1s {color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed with JDK v1.8.0_66. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 4s {color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed with JDK v1.7.0_91. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 22m 41s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0ca8df7 | |
[jira] [Commented] (YARN-3102) Decommissioned Nodes not listed in Web UI
[ https://issues.apache.org/jira/browse/YARN-3102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15124309#comment-15124309 ] Hadoop QA commented on YARN-3102: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 3 new or modified test files. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 2m 18s {color} | {color:red} root in branch-2.7 failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 15s {color} | {color:red} hadoop-yarn-server-resourcemanager in branch-2.7 failed with JDK v1.8.0_72. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 7s {color} | {color:red} hadoop-yarn-server-resourcemanager in branch-2.7 failed with JDK v1.7.0_91. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 10s {color} | {color:green} branch-2.7 passed {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 7s {color} | {color:red} hadoop-yarn-server-resourcemanager in branch-2.7 failed. {color} | | {color:red}-1{color} | {color:red} mvneclipse {color} | {color:red} 0m 12s {color} | {color:red} hadoop-yarn-server-resourcemanager in branch-2.7 failed. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 7s {color} | {color:red} hadoop-yarn-server-resourcemanager in branch-2.7 failed. 
{color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 6s {color} | {color:red} hadoop-yarn-server-resourcemanager in branch-2.7 failed with JDK v1.8.0_72. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 7s {color} | {color:red} hadoop-yarn-server-resourcemanager in branch-2.7 failed with JDK v1.7.0_91. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 7s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 5s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_72. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 5s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_72. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 7s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_91. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 7s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_91. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 7s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 7s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvneclipse {color} | {color:red} 0m 8s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 1s {color} | {color:red} The patch has 3566 line(s) that end in whitespace. Use git apply --whitespace=fix. 
{color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 1m 29s {color} | {color:red} The patch has 41 line(s) with tabs. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 7s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 6s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_72. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 6s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_91. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 5s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_72. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 7s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_91. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 42m 26s {color} | {color:red} Patch generated 56 ASF License
[jira] [Commented] (YARN-4340) Add "list" API to reservation system
[ https://issues.apache.org/jira/browse/YARN-4340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15124704#comment-15124704 ] Hadoop QA commented on YARN-4340: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 10 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 59s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 53s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 48s {color} | {color:green} trunk passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 58s {color} | {color:green} trunk passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 17s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 3s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 21s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 44s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 21s {color} | {color:green} trunk passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | 
{color:green} javadoc {color} | {color:green} 4m 27s {color} | {color:green} trunk passed with JDK v1.7.0_91 {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 53s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 22s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 28s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 6m 28s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 28s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 16s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 7m 16s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 16s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 8s {color} | {color:green} root: patch generated 0 new + 347 unchanged - 2 fixed = 347 total (was 349) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 2s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 22s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 7m 2s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 13s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 50s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 22s {color} | {color:green} hadoop-yarn-api in the patch passed with JDK v1.8.0_66. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 54s {color} | {color:green} hadoop-yarn-common in the patch passed with JDK v1.8.0_66. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 38s {color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed with JDK v1.8.0_66. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 11s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_66.
[jira] [Commented] (YARN-4625) Make ApplicationSubmissionContext and ApplicationSubmissionContextInfo more consistent
[ https://issues.apache.org/jira/browse/YARN-4625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15124775#comment-15124775 ] Hadoop QA commented on YARN-4625: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 28s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 47s {color} | {color:green} trunk passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 7s {color} | {color:green} trunk passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 32s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 24s {color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s {color} | {color:blue} Skipped branch modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 9s {color} | {color:green} 
trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s {color} | {color:green} trunk passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s {color} | {color:green} trunk passed with JDK v1.7.0_91 {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 37s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 43s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 43s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 4s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 4s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 31s {color} | {color:red} hadoop-yarn-project/hadoop-yarn: patch generated 22 new + 65 unchanged - 1 fixed = 87 total (was 66) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 47s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 21s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s {color} | {color:blue} Skipped patch modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 19s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 39s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_66. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 6s {color} | {color:green} hadoop-yarn-site in the patch passed with JDK v1.8.0_66. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 57s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_91.
[jira] [Commented] (YARN-4625) Make ApplicationSubmissionContext and ApplicationSubmissionContextInfo more consistent
[ https://issues.apache.org/jira/browse/YARN-4625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15124689#comment-15124689 ] Hadoop QA commented on YARN-4625: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 32s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 47s {color} | {color:green} trunk passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 7s {color} | {color:green} trunk passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 32s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 24s {color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s {color} | {color:blue} Skipped branch modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 9s {color} | {color:green} 
trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s {color} | {color:green} trunk passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s {color} | {color:green} trunk passed with JDK v1.7.0_91 {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 39s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 44s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 44s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 5s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 5s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 31s {color} | {color:red} hadoop-yarn-project/hadoop-yarn: patch generated 22 new + 65 unchanged - 1 fixed = 87 total (was 66) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 22s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s {color} | {color:blue} Skipped patch modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 21s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 49s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_66. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 7s {color} | {color:green} hadoop-yarn-site in the patch passed with JDK v1.8.0_66. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 4s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_91.
[jira] [Updated] (YARN-4625) Make ApplicationSubmissionContext and ApplicationSubmissionContextInfo more consistent
[ https://issues.apache.org/jira/browse/YARN-4625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xuan Gong updated YARN-4625: Attachment: YARN-4625.3.patch > Make ApplicationSubmissionContext and ApplicationSubmissionContextInfo more > consistent > -- > > Key: YARN-4625 > URL: https://issues.apache.org/jira/browse/YARN-4625 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Xuan Gong >Assignee: Xuan Gong > Attachments: YARN-4625.2.patch, YARN-4625.20160121.1.patch, > YARN-4625.3.patch > > > There are some differences between ApplicationSubmissionContext and > ApplicationSubmissionContextInfo; for example, we cannot submit an application > with logAggregationContext specified through the RM web service. We could make them > more consistent. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (YARN-4594) container-executor fails to remove directory tree when chmod required
[ https://issues.apache.org/jira/browse/YARN-4594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15124248#comment-15124248 ] Colin Patrick McCabe commented on YARN-4594: Thanks for the review, [~jlowe]. bq. I noticed we're using openat, fchmodat, and unlinkat for the first time. I suspect most other POSIX-like distributions support this, but I think these were only recently added to MacOS X (in 10.9 Yosemite). I'm not sure if anyone uses container-executor for MacOS X (or if container-executor even compiles/works for MacOS X today), but adding these could break the native build for those using older MacOS X versions. Hmm. Correct me if I'm wrong, but I don't think {{container-executor}} is supported at all on MacOS. I can see places in the code that rely on cgroups, which is a kernel feature that MacOS just doesn't have (and may never have). You can see a bunch of code in {{container-executor.c}} dealing with very Linux-specific files in {{/proc}}, and so forth. My thinking is that if we ever do support MacOS in {{container-executor}}, we will support a new enough version that using the newer POSIX functions is not a problem. bq. I think this needs to use strerror(-fd): Fixed bq. There's no check for an error being encountered by readdir and therefore no logging if it does occur Fixed bq. Sometimes recursive_unlink_helper is returning errno and sometimes it is returning -errno. For example, the "failed to stat" path will set ret=errno and return -ret as -errno, but the "failed to unlink" path will set ret=-errno and thus return errno. Good catch. Let's return positive error codes everywhere, except in the very specific case of {{open_helper}} where non-negative returns mean "file descriptor". 
> container-executor fails to remove directory tree when chmod required > - > > Key: YARN-4594 > URL: https://issues.apache.org/jira/browse/YARN-4594 > Project: Hadoop YARN > Issue Type: Bug > Components: nodemanager >Reporter: Colin Patrick McCabe >Assignee: Colin Patrick McCabe > Attachments: YARN-4594.001.patch > > > test-container-executor.c doesn't work: > * It assumes that realpath(/bin/ls) will be /bin/ls, whereas it is actually > /usr/bin/ls on many systems. > * The recursive delete logic in container-executor.c fails -- nftw does the > wrong thing when confronted with directories with the wrong mode (permission > bits), leading to an attempt to run rmdir on a non-empty directory. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
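The return-code convention settled on in the comment above (positive errno values for failures everywhere, with {{open_helper}} alone using non-negative returns for file descriptors and -errno for failures) can be sketched as follows. This is a hypothetical Java translation for illustration only; the real code is C in {{container-executor.c}}, and the helper names and errno values here are stand-ins.

```java
// Hypothetical sketch of the error-code convention discussed above,
// translated to Java for illustration; not the actual container-executor code.
class ErrnoConvention {
  static final int ENOENT = 2; // sample errno value

  // Ordinary helpers return 0 on success or a positive errno value on failure.
  static int unlinkHelper(boolean exists) {
    return exists ? 0 : ENOENT;
  }

  // open_helper is the one exception: a non-negative return is a file
  // descriptor, so failures are signalled as -errno to keep the two
  // ranges disjoint.
  static int openHelper(boolean exists) {
    return exists ? 42 : -ENOENT; // 42: hypothetical fd
  }
}
```

With this convention a caller never has to guess the sign of an error, which is exactly the ambiguity the review comment flagged in {{recursive_unlink_helper}}.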
[jira] [Updated] (YARN-4594) container-executor fails to remove directory tree when chmod required
[ https://issues.apache.org/jira/browse/YARN-4594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Colin Patrick McCabe updated YARN-4594: --- Attachment: YARN-4594.002.patch > container-executor fails to remove directory tree when chmod required > - > > Key: YARN-4594 > URL: https://issues.apache.org/jira/browse/YARN-4594 > Project: Hadoop YARN > Issue Type: Bug > Components: nodemanager >Reporter: Colin Patrick McCabe >Assignee: Colin Patrick McCabe > Attachments: YARN-4594.001.patch, YARN-4594.002.patch > > > test-container-executor.c doesn't work: > * It assumes that realpath(/bin/ls) will be /bin/ls, whereas it is actually > /usr/bin/ls on many systems. > * The recursive delete logic in container-executor.c fails -- nftw does the > wrong thing when confronted with directories with the wrong mode (permission > bits), leading to an attempt to run rmdir on a non-empty directory. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (YARN-4428) Redirect RM page to AHS page when AHS turned on and RM page is not available
[ https://issues.apache.org/jira/browse/YARN-4428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Lowe updated YARN-4428: - Hadoop Flags: Reviewed Summary: Redirect RM page to AHS page when AHS turned on and RM page is not available (was: Redirect RM page to AHS page when AHS turned on and RM page is not avaialable) +1 to both recent patches. The only difference between the last two trunk patches is the log level, and I manually verified the branch-2.7 patch applies cleanly and is only trivially different from the trunk patch. Committing this. > Redirect RM page to AHS page when AHS turned on and RM page is not available > > > Key: YARN-4428 > URL: https://issues.apache.org/jira/browse/YARN-4428 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Chang Li >Assignee: Chang Li > Attachments: YARN-4428.1.2.patch, YARN-4428.1.patch, > YARN-4428.10.patch, YARN-4428.2.2.patch, YARN-4428.2.patch, > YARN-4428.3.patch, YARN-4428.3.patch, YARN-4428.4.patch, YARN-4428.5.patch, > YARN-4428.6.patch, YARN-4428.7.patch, YARN-4428.8.patch, > YARN-4428.9.test.patch, YARN-4428.branch-2.7.patch > > > When AHS is turned on, if we can't view application in RM page, RM page > should redirect us to AHS page. For example, when you go to > cluster/app/application_1, if RM no longer remember the application, we will > simply get "Failed to read the application application_1", but it will be > good for RM ui to smartly try to redirect to AHS ui > /applicationhistory/app/application_1 to see if it's there. The redirect > usage already exist for logs in nodemanager UI. > Also, when AHS is enabled, WebAppProxyServlet should redirect to AHS page on > fall back of RM not remembering the app. YARN-3975 tried to do this only when > original tracking url is not set. But there are many cases, such as when app > failed at launch, original tracking url will be set to point to RM page, so > redirect to AHS page won't work. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
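The fallback behavior described in YARN-4428 (serve the RM page if the RM still remembers the app, otherwise redirect to the AHS page when AHS is enabled) can be sketched roughly as below. This is purely illustrative: the class, field names, and URL layout are assumptions, not the actual RM webapp or WebAppProxyServlet code.

```java
// Hypothetical sketch of the RM-to-AHS fallback; names and paths are
// illustrative stand-ins for the real webapp code.
class AhsRedirectSketch {
  // Apps the RM still remembers; in reality this would come from the RM's
  // application map.
  java.util.Set<String> rmApps = new java.util.HashSet<>();
  boolean ahsEnabled;

  AhsRedirectSketch(boolean ahsEnabled) { this.ahsEnabled = ahsEnabled; }

  /** Resolve the page to serve (or redirect to) for an application id. */
  String resolve(String appId) {
    if (rmApps.contains(appId)) {
      return "/cluster/app/" + appId;            // RM still knows the app
    }
    if (ahsEnabled) {
      return "/applicationhistory/app/" + appId; // fall back to AHS
    }
    return "Failed to read the application " + appId;
  }
}
```

The key point from the discussion is that the fallback must trigger on "RM no longer remembers the app", not merely on "tracking URL unset", since apps that failed at launch still have a tracking URL pointing at the RM.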
[jira] [Created] (YARN-4658) Typo in o.a.h.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler comment
Daniel Templeton created YARN-4658: -- Summary: Typo in o.a.h.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler comment Key: YARN-4658 URL: https://issues.apache.org/jira/browse/YARN-4658 Project: Hadoop YARN Issue Type: Improvement Reporter: Daniel Templeton Assignee: Nicole Pazmany Comment in {{testContinuousSchedulingInterruptedException()}} is {code} // Add one nodes {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (YARN-4100) Add Documentation for Distributed and Delegated-Centralized Node Labels feature
[ https://issues.apache.org/jira/browse/YARN-4100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naganarasimha G R updated YARN-4100: Attachment: (was: NodeLabel.html) > Add Documentation for Distributed and Delegated-Centralized Node Labels > feature > --- > > Key: YARN-4100 > URL: https://issues.apache.org/jira/browse/YARN-4100 > Project: Hadoop YARN > Issue Type: Sub-task > Components: api, client, resourcemanager >Reporter: Naganarasimha G R >Assignee: Naganarasimha G R > Attachments: YARN-4100.v1.001.patch, YARN-4100.v1.002.patch, > YARN-4100.v1.003.patch, YARN-4100.v1.004.patch > > > Add Documentation for Distributed Node Labels feature -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (YARN-4465) SchedulerUtils#validateRequest for Label check should happen only when nodelabel enabled
[ https://issues.apache.org/jira/browse/YARN-4465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15123944#comment-15123944 ] Hadoop QA commented on YARN-4465: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 50s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s {color} | {color:green} trunk passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s {color} | {color:green} trunk passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 17s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 13s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s {color} | {color:green} trunk passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s {color} | {color:green} trunk passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | 
{color:green} mvninstall {color} | {color:green} 0m 33s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 27s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 29s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s {color} | {color:green} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: patch generated 0 new + 20 unchanged - 1 fixed = 20 total (was 21) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 23s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 17s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_66. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 48s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_91. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 148m 9s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | JDK v1.8.0_66 Failed junit tests | hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA | | | hadoop.yarn.server.resourcemanager.TestResourceManager | | | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing | | | hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA | | | hadoop.yarn.server.resourcemanager.TestClientRMTokens | | | hadoop.yarn.server.resourcemanager.TestClientRMService | | |
[jira] [Updated] (YARN-4100) Add Documentation for Distributed and Delegated-Centralized Node Labels feature
[ https://issues.apache.org/jira/browse/YARN-4100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naganarasimha G R updated YARN-4100: Attachment: YARN-4100.v1.005.patch NodeLabel.html Thanks [~devaraj.k] for the review and sorry for the delay in responding. Have uploaded a patch with the review comments and also have added *Contents* section in the beginning and thus reorganising the headers. Please review the same. > Add Documentation for Distributed and Delegated-Centralized Node Labels > feature > --- > > Key: YARN-4100 > URL: https://issues.apache.org/jira/browse/YARN-4100 > Project: Hadoop YARN > Issue Type: Sub-task > Components: api, client, resourcemanager >Reporter: Naganarasimha G R >Assignee: Naganarasimha G R > Attachments: NodeLabel.html, YARN-4100.v1.001.patch, > YARN-4100.v1.002.patch, YARN-4100.v1.003.patch, YARN-4100.v1.004.patch, > YARN-4100.v1.005.patch > > > Add Documentation for Distributed Node Labels feature -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (YARN-4660) o.a.h.yarn.event.TestAsyncDispatcher.testDispatcherOnCloseIfQueueEmpty() swallows YarnExceptions
Daniel Templeton created YARN-4660: -- Summary: o.a.h.yarn.event.TestAsyncDispatcher.testDispatcherOnCloseIfQueueEmpty() swallows YarnExceptions Key: YARN-4660 URL: https://issues.apache.org/jira/browse/YARN-4660 Project: Hadoop YARN Issue Type: Improvement Components: test Reporter: Daniel Templeton Assignee: Daniel Templeton Priority: Minor Either we expect the exception, or we don't. Quietly swallowing it is the wrong thing to do in any case. Introduced in YARN-3878. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
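The anti-pattern YARN-4660 describes, and the straightforward fix, can be sketched as follows. These are hypothetical stand-ins, not the actual TestAsyncDispatcher code: a swallowed exception makes the test pass regardless of whether the operation failed, whereas letting it propagate (or calling JUnit's fail()) lets the framework report the failure.

```java
// Hypothetical sketch; YarnException and the close paths stand in for the
// real TestAsyncDispatcher code.
class YarnException extends Exception {
  YarnException(String msg) { super(msg); }
}

class DispatcherSketch {
  // Anti-pattern: the catch block swallows the exception, so the test can
  // never observe the failure.
  static String closeQuietly(boolean shouldFail) {
    try {
      if (shouldFail) throw new YarnException("close failed");
      return "ok";
    } catch (YarnException e) {
      return "ok"; // failure silently hidden
    }
  }

  // Fix: let the exception propagate so the test framework marks the
  // test as failed.
  static String closeOrReport(boolean shouldFail) throws YarnException {
    if (shouldFail) throw new YarnException("close failed");
    return "ok";
  }
}
```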
[jira] [Updated] (YARN-4340) Add "list" API to reservation system
[ https://issues.apache.org/jira/browse/YARN-4340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Po updated YARN-4340: -- Attachment: (was: YARN-4340.v12.patch) > Add "list" API to reservation system > > > Key: YARN-4340 > URL: https://issues.apache.org/jira/browse/YARN-4340 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacityscheduler, fairscheduler, resourcemanager >Reporter: Carlo Curino >Assignee: Sean Po > Attachments: YARN-4340.v1.patch, YARN-4340.v10.patch, > YARN-4340.v11.patch, YARN-4340.v2.patch, YARN-4340.v3.patch, > YARN-4340.v4.patch, YARN-4340.v5.patch, YARN-4340.v6.patch, > YARN-4340.v7.patch, YARN-4340.v8.patch, YARN-4340.v9.patch > > > This JIRA tracks changes to the APIs of the reservation system, and enables > querying the reservation system on which reservation exists by "time-range, > reservation-id". > YARN-4420 has a dependency on this. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (YARN-4659) o.a.h.yarn.event.DrainDispatcher BlockingQueue constructor and waitForEventThreadToWait() should be annotated as @VisibleForTesting
Daniel Templeton created YARN-4659: -- Summary: o.a.h.yarn.event.DrainDispatcher BlockingQueue constructor and waitForEventThreadToWait() should be annotated as @VisibleForTesting Key: YARN-4659 URL: https://issues.apache.org/jira/browse/YARN-4659 Project: Hadoop YARN Issue Type: Improvement Reporter: Daniel Templeton Assignee: Daniel Templeton Priority: Trivial Added/exposed in YARN-3878 to support unit tests -- This message was sent by Atlassian JIRA (v6.3.4#6332)
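The intent behind YARN-4659 is that members widened solely for tests should carry Guava's {{@VisibleForTesting}} annotation as documentation. A minimal sketch, with a stand-in annotation and a hypothetical class shape (the real DrainDispatcher lives in o.a.h.yarn.event and uses com.google.common.annotations.VisibleForTesting):

```java
// Stand-in for Guava's com.google.common.annotations.VisibleForTesting.
@interface VisibleForTesting {}

// Hypothetical sketch of the DrainDispatcher shape; not the actual class.
class DrainDispatcherSketch {
  private final java.util.concurrent.BlockingQueue<Runnable> queue;

  // This constructor exists only so unit tests can inject a queue; the
  // annotation documents that production code should not call it.
  @VisibleForTesting
  DrainDispatcherSketch(java.util.concurrent.BlockingQueue<Runnable> queue) {
    this.queue = queue;
  }

  int pending() { return queue.size(); }
}
```

The annotation has no runtime effect; its value is that static-analysis tools and reviewers can flag production calls to test-only members.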
[jira] [Commented] (YARN-4100) Add Documentation for Distributed and Delegated-Centralized Node Labels feature
[ https://issues.apache.org/jira/browse/YARN-4100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15124021#comment-15124021 ] Hadoop QA commented on YARN-4100: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 4s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 6s {color} | {color:green} trunk passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 10s {color} | {color:green} trunk passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 47s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 20s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s {color} | {color:green} trunk passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s {color} | {color:green} trunk passed with JDK v1.7.0_91 {color} | | 
{color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 35s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 5s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 5s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 21s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 21s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 19s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch has 255 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 5s {color} | {color:red} The patch has 384 line(s) with tabs. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s {color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 57s {color} | {color:green} hadoop-yarn-common in the patch passed with JDK v1.8.0_66. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 7s {color} | {color:green} hadoop-yarn-site in the patch passed with JDK v1.8.0_66. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 15s {color} | {color:green} hadoop-yarn-common in the patch passed with JDK v1.7.0_91. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 6s {color} | {color:green} hadoop-yarn-site in the patch passed with JDK v1.7.0_91. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 19s {color} | {color:red} Patch generated 1 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 27m 38s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0ca8df7 | | JIRA Patch URL |
[jira] [Updated] (YARN-4340) Add "list" API to reservation system
[ https://issues.apache.org/jira/browse/YARN-4340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Po updated YARN-4340: -- Attachment: YARN-4340.v12.patch > Add "list" API to reservation system > > > Key: YARN-4340 > URL: https://issues.apache.org/jira/browse/YARN-4340 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacityscheduler, fairscheduler, resourcemanager >Reporter: Carlo Curino >Assignee: Sean Po > Attachments: YARN-4340.v1.patch, YARN-4340.v10.patch, > YARN-4340.v11.patch, YARN-4340.v12.patch, YARN-4340.v2.patch, > YARN-4340.v3.patch, YARN-4340.v4.patch, YARN-4340.v5.patch, > YARN-4340.v6.patch, YARN-4340.v7.patch, YARN-4340.v8.patch, YARN-4340.v9.patch > > > This JIRA tracks changes to the APIs of the reservation system, and enables > querying the reservation system on which reservation exists by "time-range, > reservation-id". > YARN-4420 has a dependency on this. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (YARN-4649) Add additional logging to some NM state store operations
[ https://issues.apache.org/jira/browse/YARN-4649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15124057#comment-15124057 ] Sidharta Seethana commented on YARN-4649: - Thanks for the review, [~vvasudev]. I'll add a {{toString()}} function to {{RecoveredContainerState}} . Regarding 2), could you please clarify why the log statements need to be combined? They are structured this way for better readability. In addition, the second log line in each case could result in multiple log lines anyway. > Add additional logging to some NM state store operations > > > Key: YARN-4649 > URL: https://issues.apache.org/jira/browse/YARN-4649 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Sidharta Seethana >Assignee: Sidharta Seethana >Priority: Minor > Attachments: YARN-4649.001.patch > > > Adding additional logging to NM container recovery code (specifically > application/container status operations) makes it easier to debug container > recovery related issues. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
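A {{toString()}} of the kind discussed above might look like the sketch below. The field names are purely illustrative, not those of the actual {{RecoveredContainerState}} in the NM state store; the point is only that logging the object then yields a readable one-line summary instead of a default Object hash.

```java
// Hypothetical sketch; field names are illustrative, not the real
// o.a.h.yarn.server.nodemanager recovery class.
class RecoveredContainerState {
  String status;
  int exitCode;
  boolean killed;

  RecoveredContainerState(String status, int exitCode, boolean killed) {
    this.status = status;
    this.exitCode = exitCode;
    this.killed = killed;
  }

  @Override
  public String toString() {
    return "RecoveredContainerState{status=" + status
        + ", exitCode=" + exitCode + ", killed=" + killed + "}";
  }
}
```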
[jira] [Created] (YARN-4657) Javadoc comment is broken for o.a.h.yarn.util.resource.Resources.multiplyByAndAddTo()
Daniel Templeton created YARN-4657: -- Summary: Javadoc comment is broken for o.a.h.yarn.util.resource.Resources.multiplyByAndAddTo() Key: YARN-4657 URL: https://issues.apache.org/jira/browse/YARN-4657 Project: Hadoop YARN Issue Type: Bug Reporter: Daniel Templeton Assignee: Daniel Templeton Priority: Trivial The comment is {code} /** * Multiply @param rhs by @param by, and add the result to @param lhs * without creating any new {@link Resource} object */ {code} The {{@param}} tag can't be used that way. {{\{@code rhs\}}} is the correct thing to do. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
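The corrected Javadoc would inline the parameter names with {{{@code}}} and declare them in proper {{@param}} tags, roughly as below. The method body and signature here are a hypothetical sketch (a minimal stand-in Resource with only a memory field), not the actual o.a.h.yarn.util.resource implementation.

```java
// Hypothetical minimal stand-ins; not the actual YARN Resource/Resources classes.
class Resource {
  long memory;
  Resource(long memory) { this.memory = memory; }
}

class Resources {
  /**
   * Multiply {@code rhs} by {@code by}, and add the result to {@code lhs}
   * without creating any new {@link Resource} object.
   *
   * @param lhs the resource that accumulates the result
   * @param rhs the resource to multiply
   * @param by the multiplier
   */
  static void multiplyByAndAddTo(Resource lhs, Resource rhs, double by) {
    lhs.memory += (long) (rhs.memory * by);
  }
}
```

Unlike the broken original, this renders correctly: {{@param}} only works as a block tag at the end of the comment, while {{{@code ...}}} is the inline tag for referring to identifiers mid-sentence.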
[jira] [Commented] (YARN-4340) Add "list" API to reservation system
[ https://issues.apache.org/jira/browse/YARN-4340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15123970#comment-15123970 ] Hadoop QA commented on YARN-4340: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 10 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 6s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 57s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 15s {color} | {color:green} trunk passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 10s {color} | {color:green} trunk passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 10s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 25s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 39s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 30s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 24s {color} | {color:green} trunk passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | 
{color:green} javadoc {color} | {color:green} 5m 0s {color} | {color:green} trunk passed with JDK v1.7.0_91 {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 5s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 45s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 5s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 6m 5s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 5s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 59s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 6m 59s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 59s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 7s {color} | {color:red} root: patch generated 2 new + 347 unchanged - 2 fixed = 349 total (was 349) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 23s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 36s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 7m 47s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 25s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 8s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 23s {color} | {color:green} hadoop-yarn-api in the patch passed with JDK v1.8.0_66. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 57s {color} | {color:green} hadoop-yarn-common in the patch passed with JDK v1.8.0_66. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 42s {color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed with JDK v1.8.0_66. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 4s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch
[jira] [Comment Edited] (YARN-4617) LeafQueue#pendingOrderingPolicy should always use fixed ordering policy instead of using same as active applications ordering policy
[ https://issues.apache.org/jira/browse/YARN-4617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15124143#comment-15124143 ] Jian He edited comment on YARN-4617 at 1/29/16 8:25 PM: thanks [~sunilg] and [~Naganarasimha] for reviewing the patch too ! was (Author: jianhe): thanks [~Naganarasimha] for reviewing the patch too ! > LeafQueue#pendingOrderingPolicy should always use fixed ordering policy > instead of using same as active applications ordering policy > > > Key: YARN-4617 > URL: https://issues.apache.org/jira/browse/YARN-4617 > Project: Hadoop YARN > Issue Type: Bug > Components: capacity scheduler >Affects Versions: 2.8.0 >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Fix For: 2.8.0 > > Attachments: 0001-YARN-4617.patch, 0001-YARN-4617.patch, > 0002-YARN-4617.patch, 0003-YARN-4617.patch, 0004-YARN-4617.patch, > 0005-YARN-4617.patch, 0006-YARN-4617.patch > > > In discussion with [~leftnoteasy] in the JIRA > [comment|https://issues.apache.org/jira/browse/YARN-4479?focusedCommentId=15108236=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15108236] > pointed out that {{LeafQueue#pendingOrderingPolicy}} should NOT be assumed > to be the same as the active applications ordering policy. It causes an issue when > using the fair ordering policy. > Expectations of this JIRA should include > # Create FifoOrderingPolicyForPendingApps which extends FifoOrderingPolicy. > # Comparator of new ordering policy should use > RecoveryComparator, PriorityComparator and FifoComparator, in that order. > # Clean up {{LeafQueue#pendingOPForRecoveredApps}}, which is no longer required > once the new fixed ordering policy is created for pending applications. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (YARN-4617) LeafQueue#pendingOrderingPolicy should always use fixed ordering policy instead of using same as active applications ordering policy
[ https://issues.apache.org/jira/browse/YARN-4617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15124143#comment-15124143 ] Jian He commented on YARN-4617: --- thanks [~Naganarasimha] for reviewing the patch too ! > LeafQueue#pendingOrderingPolicy should always use fixed ordering policy > instead of using same as active applications ordering policy > > > Key: YARN-4617 > URL: https://issues.apache.org/jira/browse/YARN-4617 > Project: Hadoop YARN > Issue Type: Bug > Components: capacity scheduler >Affects Versions: 2.8.0 >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Fix For: 2.8.0 > > Attachments: 0001-YARN-4617.patch, 0001-YARN-4617.patch, > 0002-YARN-4617.patch, 0003-YARN-4617.patch, 0004-YARN-4617.patch, > 0005-YARN-4617.patch, 0006-YARN-4617.patch > > > In discussion with [~leftnoteasy] in the JIRA > [comment|https://issues.apache.org/jira/browse/YARN-4479?focusedCommentId=15108236=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15108236] > pointed out that {{LeafQueue#pendingOrderingPolicy}} should NOT be assumed > to be the same as the active applications ordering policy. It causes an issue when > using the fair ordering policy. > Expectations of this JIRA should include > # Create FifoOrderingPolicyForPendingApps which extends FifoOrderingPolicy. > # Comparator of new ordering policy should use > RecoveryComparator, PriorityComparator and FifoComparator, in that order. > # Clean up {{LeafQueue#pendingOPForRecoveredApps}}, which is no longer required > once the new fixed ordering policy is created for pending applications. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (YARN-3102) Decommissioned Nodes not listed in Web UI
[ https://issues.apache.org/jira/browse/YARN-3102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15124101#comment-15124101 ] Jason Lowe commented on YARN-3102: -- Thanks, Kuhu! Latest patch looks good to me. It doesn't apply cleanly to branch-2.7, could you provide a patch for that as well? > Decommisioned Nodes not listed in Web UI > > > Key: YARN-3102 > URL: https://issues.apache.org/jira/browse/YARN-3102 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Affects Versions: 2.6.0 > Environment: 2 Node Manager and 1 Resource Manager >Reporter: Bibin A Chundatt >Assignee: Kuhu Shukla >Priority: Minor > Attachments: YARN-3102-v1.patch, YARN-3102-v2.patch, > YARN-3102-v3.patch, YARN-3102-v4.patch, YARN-3102-v5.patch, > YARN-3102-v6.patch, YARN-3102-v7.patch, YARN-3102-v8.patch > > > Configure yarn.resourcemanager.nodes.exclude-path in yarn-site.xml to > yarn.exlude file In RM1 machine > Add Yarn.exclude with NM1 Host Name > Start the node as listed below NM1,NM2 Resource manager > Now check Nodes decommisioned in /cluster/nodes > Number of decommisioned node is listed as 1 but Table is empty in > /cluster/nodes/decommissioned (detail of Decommision node not shown) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (YARN-4428) Redirect RM page to AHS page when AHS turned on and RM page is not available
[ https://issues.apache.org/jira/browse/YARN-4428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chang Li updated YARN-4428: --- Attachment: YARN-4428.branch-2.7.patch [~jlowe], uploaded 2.7 patch. Also realized that my previous .9 patch wrote log.info instead of log.debug, so updated .10 patch to address that as well > Redirect RM page to AHS page when AHS turned on and RM page is not avaialable > - > > Key: YARN-4428 > URL: https://issues.apache.org/jira/browse/YARN-4428 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Chang Li >Assignee: Chang Li > Attachments: YARN-4428.1.2.patch, YARN-4428.1.patch, > YARN-4428.10.patch, YARN-4428.2.2.patch, YARN-4428.2.patch, > YARN-4428.3.patch, YARN-4428.3.patch, YARN-4428.4.patch, YARN-4428.5.patch, > YARN-4428.6.patch, YARN-4428.7.patch, YARN-4428.8.patch, > YARN-4428.9.test.patch, YARN-4428.branch-2.7.patch > > > When AHS is turned on, if we can't view application in RM page, RM page > should redirect us to AHS page. For example, when you go to > cluster/app/application_1, if RM no longer remember the application, we will > simply get "Failed to read the application application_1", but it will be > good for RM ui to smartly try to redirect to AHS ui > /applicationhistory/app/application_1 to see if it's there. The redirect > usage already exist for logs in nodemanager UI. > Also, when AHS is enabled, WebAppProxyServlet should redirect to AHS page on > fall back of RM not remembering the app. YARN-3975 tried to do this only when > original tracking url is not set. But there are many cases, such as when app > failed at launch, original tracking url will be set to point to RM page, so > redirect to AHS page won't work. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
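The fallback the issue describes amounts to a small routing decision: serve the RM page when the RM still remembers the application, otherwise redirect to the AHS page. Only the two URL paths come from the description above; the method and parameter names are hypothetical:

```java
// Hedged sketch of the fallback this issue asks for: when the RM no longer
// remembers an application, compute the AHS URL instead of failing. Only
// the two URL paths come from the issue description; the method and
// parameter names are invented for this example.
public class AhsFallbackSketch {
    static String trackingUrl(String appId, boolean rmRemembersApp, boolean ahsEnabled) {
        if (rmRemembersApp) {
            return "/cluster/app/" + appId;            // normal RM UI page
        }
        if (ahsEnabled) {
            return "/applicationhistory/app/" + appId; // redirect to AHS
        }
        return null; // no fallback; caller renders the "Failed to read" page
    }
}
```

Note the point made about YARN-3975: keying the fallback on "tracking URL not set" misses apps that failed at launch, so the decision here hinges on whether the RM remembers the app, not on the URL.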
[jira] [Commented] (YARN-4512) Provide a knob to turn on over-allocation
[ https://issues.apache.org/jira/browse/YARN-4512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15124155#comment-15124155 ] Karthik Kambatla commented on YARN-4512: [~elgoiri] - could you bless the latest patch here? > Provide a knob to turn on over-allocation > - > > Key: YARN-4512 > URL: https://issues.apache.org/jira/browse/YARN-4512 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager >Reporter: Karthik Kambatla >Assignee: Karthik Kambatla > Attachments: YARN-4512-YARN-1011.001.patch, > yarn-4512-yarn-1011.002.patch, yarn-4512-yarn-1011.003.patch, > yarn-4512-yarn-1011.004.patch, yarn-4512-yarn-1011.005.patch > > > We need two configs for overallocation - one to specify the threshold upto > which it is okay to over-allocate, another to specify the threshold after > which OPPORTUNISTIC containers should be preempted. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (YARN-4428) Redirect RM page to AHS page when AHS turned on and RM page is not available
[ https://issues.apache.org/jira/browse/YARN-4428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chang Li updated YARN-4428: --- Attachment: YARN-4428.10.patch > Redirect RM page to AHS page when AHS turned on and RM page is not avaialable > - > > Key: YARN-4428 > URL: https://issues.apache.org/jira/browse/YARN-4428 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Chang Li >Assignee: Chang Li > Attachments: YARN-4428.1.2.patch, YARN-4428.1.patch, > YARN-4428.10.patch, YARN-4428.2.2.patch, YARN-4428.2.patch, > YARN-4428.3.patch, YARN-4428.3.patch, YARN-4428.4.patch, YARN-4428.5.patch, > YARN-4428.6.patch, YARN-4428.7.patch, YARN-4428.8.patch, > YARN-4428.9.test.patch > > > When AHS is turned on, if we can't view application in RM page, RM page > should redirect us to AHS page. For example, when you go to > cluster/app/application_1, if RM no longer remember the application, we will > simply get "Failed to read the application application_1", but it will be > good for RM ui to smartly try to redirect to AHS ui > /applicationhistory/app/application_1 to see if it's there. The redirect > usage already exist for logs in nodemanager UI. > Also, when AHS is enabled, WebAppProxyServlet should redirect to AHS page on > fall back of RM not remembering the app. YARN-3975 tried to do this only when > original tracking url is not set. But there are many cases, such as when app > failed at launch, original tracking url will be set to point to RM page, so > redirect to AHS page won't work. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (YARN-4446) Refactor reader API for better extensibility
[ https://issues.apache.org/jira/browse/YARN-4446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Saxena updated YARN-4446: --- Attachment: YARN-4446-YARN-2928.01.patch > Refactor reader API for better extensibility > > > Key: YARN-4446 > URL: https://issues.apache.org/jira/browse/YARN-4446 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Varun Saxena >Assignee: Varun Saxena > Labels: yarn-2928-1st-milestone > Attachments: YARN-4446-YARN-2928.01.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (YARN-4446) Refactor reader API for better extensibility
[ https://issues.apache.org/jira/browse/YARN-4446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15124193#comment-15124193 ] Hadoop QA commented on YARN-4446: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 4 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 19s {color} | {color:green} YARN-2928 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s {color} | {color:green} YARN-2928 passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s {color} | {color:green} YARN-2928 passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s {color} | {color:green} YARN-2928 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s {color} | {color:green} YARN-2928 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 21s {color} | {color:green} YARN-2928 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 37s {color} | {color:green} YARN-2928 passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 16s {color} | {color:red} hadoop-yarn-server-timelineservice in YARN-2928 failed with JDK v1.8.0_66. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s {color} | {color:green} YARN-2928 passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 19s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 13s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 16s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 13s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice: patch generated 4 new + 53 unchanged - 46 fixed = 57 total (was 99) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 42s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 11s {color} | {color:red} hadoop-yarn-server-timelineservice in the patch failed with JDK v1.8.0_66. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 39s {color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-jdk1.7.0_91 with JDK v1.7.0_91 generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s {color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed with JDK v1.7.0_91. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 52s {color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed with JDK v1.8.0_66. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 0s {color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed with JDK v1.7.0_91. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 25m 23s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0ca8df7 | |
[jira] [Commented] (YARN-4428) Redirect RM page to AHS page when AHS turned on and RM page is not available
[ https://issues.apache.org/jira/browse/YARN-4428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15124239#comment-15124239 ] Hadoop QA commented on YARN-4428: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 2m 0s {color} | {color:red} root in branch-2.7 failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 15s {color} | {color:red} hadoop-yarn-server-resourcemanager in branch-2.7 failed with JDK v1.8.0_72. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 8s {color} | {color:red} hadoop-yarn-server-resourcemanager in branch-2.7 failed with JDK v1.7.0_91. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 9s {color} | {color:green} branch-2.7 passed {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 7s {color} | {color:red} hadoop-yarn-server-resourcemanager in branch-2.7 failed. {color} | | {color:red}-1{color} | {color:red} mvneclipse {color} | {color:red} 0m 12s {color} | {color:red} hadoop-yarn-server-resourcemanager in branch-2.7 failed. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 7s {color} | {color:red} hadoop-yarn-server-resourcemanager in branch-2.7 failed. 
{color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 6s {color} | {color:red} hadoop-yarn-server-resourcemanager in branch-2.7 failed with JDK v1.8.0_72. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 8s {color} | {color:red} hadoop-yarn-server-resourcemanager in branch-2.7 failed with JDK v1.7.0_91. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 8s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 6s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_72. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 6s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_72. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 7s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_91. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 7s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_91. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 7s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 7s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvneclipse {color} | {color:red} 0m 8s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 2s {color} | {color:red} The patch has 4548 line(s) that end in whitespace. Use git apply --whitespace=fix. 
{color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 1m 45s {color} | {color:red} The patch has 47 line(s) with tabs. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 8s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 6s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_72. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 7s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_91. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 6s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_72. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 8s {color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_91. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 42m 38s {color} | {color:red} Patch generated 56 ASF License
[jira] [Commented] (YARN-4617) LeafQueue#pendingOrderingPolicy should always use fixed ordering policy instead of using same as active applications ordering policy
[ https://issues.apache.org/jira/browse/YARN-4617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15124160#comment-15124160 ] Hudson commented on YARN-4617: -- FAILURE: Integrated in Hadoop-trunk-Commit #9210 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/9210/]) YARN-4617. LeafQueue#pendingOrderingPolicy should always use fixed (jianhe: rev f4a57d4a531e793373fe3118d644871a3b9ae0b1) * hadoop-yarn-project/CHANGES.txt * hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/policy/MockSchedulableEntity.java * hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java * hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/policy/RecoveryComparator.java * hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/policy/TestFifoOrderingPolicyForPendingApps.java * hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/policy/FifoOrderingPolicyForPendingApps.java * hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java * hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml * hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/policy/SchedulableEntity.java * 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplicationAttempt.java > LeafQueue#pendingOrderingPolicy should always use fixed ordering policy > instead of using same as active applications ordering policy > > > Key: YARN-4617 > URL: https://issues.apache.org/jira/browse/YARN-4617 > Project: Hadoop YARN > Issue Type: Bug > Components: capacity scheduler >Affects Versions: 2.8.0 >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Fix For: 2.8.0 > > Attachments: 0001-YARN-4617.patch, 0001-YARN-4617.patch, > 0002-YARN-4617.patch, 0003-YARN-4617.patch, 0004-YARN-4617.patch, > 0005-YARN-4617.patch, 0006-YARN-4617.patch > > > In discussion with [~leftnoteasy] in the JIRA > [comment|https://issues.apache.org/jira/browse/YARN-4479?focusedCommentId=15108236=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15108236] > pointed out that {{LeafQueue#pendingOrderingPolicy}} should NOT be assumed > to be same as active applications ordering policy. It causes an issue when > using fair ordering policy. > Expectations of this JIRA should include > # Create FifoOrderingPolicyForPendingApps which extends FifoOrderingPolicy. > # Comparator of new ordering policy should use > RecoveryComparator,PriorityComparator and Fifocomparator in order > respectively. > # Clean up {{LeafQueue#pendingOPForRecoveredApps}} which is no more required > once new fixed ordering policy is created pending applications. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (YARN-4340) Add "list" API to reservation system
[ https://issues.apache.org/jira/browse/YARN-4340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Po updated YARN-4340: -- Description: This JIRA tracks changes to the APIs of the reservation system, and enables querying the reservation system on which reservation exists by "time-range, reservation-id". YARN-4420 and YARN-2575 have a dependency on this. was: This JIRA tracks changes to the APIs of the reservation system, and enables querying the reservation system on which reservation exists by "time-range, reservation-id". YARN-4420 has a dependency on this. > Add "list" API to reservation system > > > Key: YARN-4340 > URL: https://issues.apache.org/jira/browse/YARN-4340 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacityscheduler, fairscheduler, resourcemanager >Reporter: Carlo Curino >Assignee: Sean Po > Attachments: YARN-4340.v1.patch, YARN-4340.v10.patch, > YARN-4340.v11.patch, YARN-4340.v12.patch, YARN-4340.v2.patch, > YARN-4340.v3.patch, YARN-4340.v4.patch, YARN-4340.v5.patch, > YARN-4340.v6.patch, YARN-4340.v7.patch, YARN-4340.v8.patch, YARN-4340.v9.patch > > > This JIRA tracks changes to the APIs of the reservation system, and enables > querying the reservation system on which reservation exists by "time-range, > reservation-id". > YARN-4420 and YARN-2575 have a dependency on this. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
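A "list" query by reservation-id and time-range, as described in this issue, amounts to a simple filter over existing reservations. The `Reservation` class and field names below are invented for illustration, not the actual ReservationSystem API:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the proposed "list" query: filter reservations by
// an optional reservation-id and by overlap with a time range. The
// Reservation class and field names are invented for this example.
public class ReservationListSketch {
    static class Reservation {
        final String id;
        final long start; // inclusive start time of the reservation
        final long end;   // inclusive end time of the reservation
        Reservation(String id, long start, long end) {
            this.id = id;
            this.start = start;
            this.end = end;
        }
    }

    static List<Reservation> list(List<Reservation> all, String idOrNull,
                                  long rangeStart, long rangeEnd) {
        List<Reservation> out = new ArrayList<>();
        for (Reservation r : all) {
            if (idOrNull != null && !idOrNull.equals(r.id)) continue; // id filter
            if (r.end < rangeStart || r.start > rangeEnd) continue;   // no overlap
            out.add(r);
        }
        return out;
    }

    // Small fixed sample: two reservations at [0,10] and [20,30].
    static List<Reservation> sample() {
        List<Reservation> all = new ArrayList<>();
        all.add(new Reservation("r1", 0, 10));
        all.add(new Reservation("r2", 20, 30));
        return all;
    }
}
```

Querying `sample()` for the range [5, 25] with no id filter returns both reservations, since each overlaps the range.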
[jira] [Updated] (YARN-3102) Decommissioned Nodes not listed in Web UI
[ https://issues.apache.org/jira/browse/YARN-3102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kuhu Shukla updated YARN-3102: -- Attachment: YARN-3102-branch-2.7.001.patch Uploading branch-2.7 version of the patch. The issue is mainly in RMNodeImpl, since InactiveRMNodes is a Map of <String, RMNode> rather than <NodeId, RMNode>, i.e. the mapping is between the hostname (not including the port) and the RMNode. When the AddTransition tries to remove the entry for the host, it additionally checks the port if the result is not null. > Decommisioned Nodes not listed in Web UI > > > Key: YARN-3102 > URL: https://issues.apache.org/jira/browse/YARN-3102 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Affects Versions: 2.6.0 > Environment: 2 Node Manager and 1 Resource Manager >Reporter: Bibin A Chundatt >Assignee: Kuhu Shukla >Priority: Minor > Attachments: YARN-3102-branch-2.7.001.patch, YARN-3102-v1.patch, > YARN-3102-v2.patch, YARN-3102-v3.patch, YARN-3102-v4.patch, > YARN-3102-v5.patch, YARN-3102-v6.patch, YARN-3102-v7.patch, YARN-3102-v8.patch > > > Configure yarn.resourcemanager.nodes.exclude-path in yarn-site.xml to > yarn.exlude file In RM1 machine > Add Yarn.exclude with NM1 Host Name > Start the node as listed below NM1,NM2 Resource manager > Now check Nodes decommisioned in /cluster/nodes > Number of decommisioned node is listed as 1 but Table is empty in > /cluster/nodes/decommissioned (detail of Decommision node not shown) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
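The hostname-keyed map and the extra port check described in the comment above can be sketched as follows; `NodeInfo` is an illustrative stand-in for RMNode, and the method name is hypothetical:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch of the branch-2.7 situation described above: inactive nodes are
// keyed by hostname only, so removing an entry on reactivation must also
// compare the stored port. NodeInfo is an illustrative stand-in for RMNode.
public class InactiveNodesSketch {
    static class NodeInfo {
        final String host;
        final int port;
        NodeInfo(String host, int port) {
            this.host = host;
            this.port = port;
        }
    }

    final ConcurrentMap<String, NodeInfo> inactiveNodes = new ConcurrentHashMap<>();

    // Remove the decommissioned entry only when the port also matches.
    boolean reactivate(String host, int port) {
        NodeInfo stored = inactiveNodes.get(host); // lookup ignores the port
        if (stored != null && stored.port == port) {
            inactiveNodes.remove(host);
            return true;
        }
        return false;
    }
}
```

Without the port comparison, a node restarting on a different port would silently evict the stale entry for the same hostname.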
[jira] [Commented] (YARN-4512) Provide a knob to turn on over-allocation
[ https://issues.apache.org/jira/browse/YARN-4512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15124238#comment-15124238 ] Inigo Goiri commented on YARN-4512: --- I think we can overlook the checkstyles issues. +1 on v5. > Provide a knob to turn on over-allocation > - > > Key: YARN-4512 > URL: https://issues.apache.org/jira/browse/YARN-4512 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager >Reporter: Karthik Kambatla >Assignee: Karthik Kambatla > Attachments: YARN-4512-YARN-1011.001.patch, > yarn-4512-yarn-1011.002.patch, yarn-4512-yarn-1011.003.patch, > yarn-4512-yarn-1011.004.patch, yarn-4512-yarn-1011.005.patch > > > We need two configs for overallocation - one to specify the threshold upto > which it is okay to over-allocate, another to specify the threshold after > which OPPORTUNISTIC containers should be preempted. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
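As a rough illustration of the two knobs this issue describes, one threshold up to which over-allocation is allowed and a higher one past which OPPORTUNISTIC containers are preempted, here is a minimal sketch; the names and threshold values are invented, not the real YARN-1011 configuration keys:

```java
// Minimal sketch of the two over-allocation knobs described in this issue.
// The field names and the 1.5/2.0 values are illustrative assumptions.
public class OverAllocationSketch {
    // Over-allocate a node only while utilization is below this fraction
    // of advertised capacity.
    static final double OVERALLOCATION_THRESHOLD = 1.5;
    // Past this fraction, OPPORTUNISTIC containers should be preempted.
    static final double PREEMPTION_THRESHOLD = 2.0;

    static boolean mayOverAllocate(double utilization) {
        return utilization < OVERALLOCATION_THRESHOLD;
    }

    static boolean shouldPreemptOpportunistic(double utilization) {
        return utilization > PREEMPTION_THRESHOLD;
    }
}
```

Keeping the two thresholds separate leaves a hysteresis band (here, 1.5 to 2.0) where no new over-allocation happens but running OPPORTUNISTIC containers are left alone.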