[jira] [Updated] (YARN-8354) SingleConstraintAppPlacementAllocator's allocate does not decPendingResource
[ https://issues.apache.org/jira/browse/YARN-8354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

LongGang Chen updated YARN-8354:
--------------------------------
    Description: 
SingleConstraintAppPlacementAllocator.allocate() does not call decPendingResource; it only reduces ResourceSizing.numAllocations by one.

Perhaps we should change decreasePendingNumAllocation() from:

{code:java}
private void decreasePendingNumAllocation() {
  // Deduct pending #allocations by 1
  ResourceSizing sizing = schedulingRequest.getResourceSizing();
  sizing.setNumAllocations(sizing.getNumAllocations() - 1);
}
{code}

to:

{code:java}
private void decreasePendingNumAllocation() {
  // Deduct pending #allocations by 1
  ResourceSizing sizing = schedulingRequest.getResourceSizing();
  sizing.setNumAllocations(sizing.getNumAllocations() - 1);
  // Deduct pending resource of app and queue
  appSchedulingInfo.decPendingResource(
      schedulingRequest.getNodeLabelExpression(),
      sizing.getResources());
}
{code}

  was:
SingleConstraintAppPlacementAllocator.allocate() does not call decPendingResource; it only reduces ResourceSizing.numAllocations by one.

Perhaps we should change decreasePendingNumAllocation() from:

{code:java}
private void decreasePendingNumAllocation() {
  // Deduct pending #allocations by 1
  ResourceSizing sizing = schedulingRequest.getResourceSizing();
  sizing.setNumAllocations(sizing.getNumAllocations() - 1);
}
{code}

to:

{code:java}
private void decreasePendingNumAllocation() {
  // Deduct pending #allocations by 1
  ResourceSizing sizing = schedulingRequest.getResourceSizing();
  sizing.setNumAllocations(sizing.getNumAllocations() - 1);
  // Deduct pending resource of app and queue
  if (getExecutionType() == ExecutionType.OPPORTUNISTIC) {
    appSchedulingInfo.decOpportunisticPendingResource(
        schedulingRequest.getNodeLabelExpression(),
        sizing.getResources());
  } else {
    appSchedulingInfo.decPendingResource(
        schedulingRequest.getNodeLabelExpression(),
        sizing.getResources());
  }
}
{code}

> SingleConstraintAppPlacementAllocator's allocate does not decPendingResource
> ----------------------------------------------------------------------------
>
>                 Key: YARN-8354
>                 URL: https://issues.apache.org/jira/browse/YARN-8354
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: RM
>    Affects Versions: 3.0.x
>            Reporter: LongGang Chen
>            Priority: Major
>
> SingleConstraintAppPlacementAllocator.allocate() does not call decPendingResource; it only reduces ResourceSizing.numAllocations by one.
> Perhaps we should change decreasePendingNumAllocation() from:
> {code:java}
> private void decreasePendingNumAllocation() {
>   // Deduct pending #allocations by 1
>   ResourceSizing sizing = schedulingRequest.getResourceSizing();
>   sizing.setNumAllocations(sizing.getNumAllocations() - 1);
> }
> {code}
> to:
> {code:java}
> private void decreasePendingNumAllocation() {
>   // Deduct pending #allocations by 1
>   ResourceSizing sizing = schedulingRequest.getResourceSizing();
>   sizing.setNumAllocations(sizing.getNumAllocations() - 1);
>   // Deduct pending resource of app and queue
>   appSchedulingInfo.decPendingResource(
>       schedulingRequest.getNodeLabelExpression(),
>       sizing.getResources());
> }
> {code}

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
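The effect of the missing deduction can be sketched with a small, self-contained toy model (the class names PendingDemo, Sizing, and SchedulingInfo are hypothetical stand-ins, not the real YARN types): when an allocation only decrements the allocation counter, the aggregate pending resource tracked for the app and queue stays inflated forever.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of pending-resource bookkeeping. All names are illustrative,
// not the actual YARN scheduler classes.
public class PendingDemo {
    static class Sizing {
        int numAllocations;
        long memoryPerAllocation;
        Sizing(int n, long mem) { numAllocations = n; memoryPerAllocation = mem; }
    }

    // Tracks total pending memory per node-label, loosely like AppSchedulingInfo.
    static class SchedulingInfo {
        final Map<String, Long> pendingMemory = new HashMap<>();
        void incPending(String label, long mem) { pendingMemory.merge(label, mem, Long::sum); }
        void decPending(String label, long mem) { pendingMemory.merge(label, -mem, Long::sum); }
    }

    // Mirrors the reported behavior: only the counter drops.
    static void allocateBuggy(Sizing s, SchedulingInfo info, String label) {
        s.numAllocations--;
    }

    // Mirrors the proposed fix: also deduct the pending resource.
    static void allocateFixed(Sizing s, SchedulingInfo info, String label) {
        s.numAllocations--;
        info.decPending(label, s.memoryPerAllocation);
    }

    public static void main(String[] args) {
        SchedulingInfo info = new SchedulingInfo();
        Sizing sizing = new Sizing(2, 1024);
        info.incPending("", 2 * 1024);   // two pending allocations of 1 GiB each

        allocateBuggy(sizing, info, "");
        System.out.println(info.pendingMemory.get(""));  // 2048: pending total leaked

        allocateFixed(sizing, info, "");
        System.out.println(info.pendingMemory.get(""));  // 1024: correctly deducted
    }
}
```

In the buggy path the pending total never reaches zero even after every allocation completes, which is exactly the symptom of skipping decPendingResource.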
[jira] [Updated] (YARN-8355) container update error because of competition
[ https://issues.apache.org/jira/browse/YARN-8355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

LongGang Chen updated YARN-8355:
--------------------------------
    Description: 
First, a quick walk through the update logic, using Increase as the example:
* 1: normal work in ApplicationMasterService, DefaultAMSProcessor.
* 2: CapacityScheduler.allocate will call AbstractYarnScheduler.handleContainerUpdates.
* 3: AbstractYarnScheduler.handleContainerUpdates will call handleIncreaseRequests, then call ContainerUpdateContext.checkAndAddToOutstandingIncreases.
* 4: cancel && add new: checkAndAddToOutstandingIncreases will check this inc update for this container. If there is an outstanding inc, it will cancel it by calling appSchedulingInfo.allocate(...) to allocate a dummy container; if the update is a fresh one, it will call appSchedulingInfo.updateResourceRequests to add a new request. The capacity of this new request is the gap between the existing container's capacity and the capacity of the updateRequest; for example, if the original capacity is , and the target capacity of the UpdateRequest is , the gap [the capacity of the new request which will be added to appSchedulingInfo] is .
* 5: swap temp container and existing container: CapacityScheduler.allocate calls FiCaSchedulerApp.getAllocation(...); getAllocation will call SchedulerApplicationAttempt.pullNewlyIncreasedContainers, then call ContainerUpdateContext.swapContainer. swapContainer will swap the newly allocated inc temp container with the existing container; for example: original capacity , temp inc container's capacity , so the updated existing container has capacity . Inc update done.

The problem is: if we send an inc update twice for a certain container, for example, send inc to , then send inc with new target , the final updated capacity is uncertain.

Scenario one:
* 1: send inc update from  to 
* 2: scheduler approves it and commits it, so app.liveContainers has this temp inc container with capacity  in it.
* 3: send inc with new target ; a new resourceRequest with capacity  will be added to appSchedulingInfo, and the first temp container is swapped; after that, the existing container has new capacity 
* 4: scheduler approves the second temp resourceRequest and allocates a second temp container with capacity 
* 5: swap the second inc temp container. So the updated capacity of this existing container is = , but the wanted capacity is 

Scenario two:
* 1: send inc update from  to 
* 2: scheduler approves it, but the temp container with capacity  is queued in commitService, waiting to commit.
* 3: send inc with new target ; a new resourceRequest is added to appSchedulingInfo, but with the same SchedulerRequestKey.
* 4: the first temp container commits; app.apply will call appSchedulingInfo.allocate to reduce the pending num, and in this situation it will cancel the second inc request.
* 5: swap the first inc temp container. The updated existing container's capacity is , but the wanted capacity is 

Two key points:
* 1: when ContainerUpdateContext.checkAndAddToOutstandingIncreases cancels the previous inc request and puts the current inc request, it uses the same SchedulerRequestKey. This action races with app.apply; as in scenario two, app.apply will cancel the second inc update's request.
* 2: ContainerUpdateContext.swapContainer does not check whether the update target has changed.

How to fix:
* 1: after ContainerUpdateContext.checkAndAddToOutstandingIncreases cancels the previous inc update request, use a new SchedulerRequestKey for the current inc update request. We can add a new field createTime to distinguish them; the default value of createTime is 0.
* 2: change ContainerUpdateContext.swapContainer to checkAndSwapContainer: check whether the update target has changed, and if it has, just ignore this temp container and release it. As in scenario one, when we swap the first temp inc container, we find that if we do this swap the updated capacity is , but the new target's capacity is , so we just ignore this swap and release the temp container.

  was:
First, a quick walk through the update logic, using Increase as the example:
step 1: normal work in ApplicationMasterService, DefaultAMSProcessor.
step 2: CapacityScheduler.allocate will call AbstractYarnScheduler.handleContainerUpdates.
step 3: AbstractYarnScheduler.handleContainerUpdates will call handleIncreaseRequests, then call ContainerUpdateContext.checkAndAddToOutstandingIncreases.
step 4: cancel && add new: checkAndAddToOutstandingIncreases will check this inc update for this container. If there is an outstanding inc, it will cancel it by calling appSchedulingInfo.allocate(...) to allocate a dummy container; if the update is a fresh one, it will call appSchedulingInfo.updateResourceRequests to add a new request. The capacity of this new request is the gap between the existing rmContainer's capacity and the capacity of the updateRequest; for example, if the original capacity is , the target capacity of the UpdateRequest is , the gap [the capacity of the new request which wi
[jira] [Created] (YARN-8355) container update error because of competition
LongGang Chen created YARN-8355: --- Summary: container update error because of competition Key: YARN-8355 URL: https://issues.apache.org/jira/browse/YARN-8355 Project: Hadoop YARN Issue Type: Bug Components: RM Affects Versions: 3.0.x Reporter: LongGang Chen first, Quickly go through the update logic, Increase as an example: step 1: normal work in ApplicationMasterService, DefaultAMSProcessor. step 2: CapacityScheduler.allocate will call AbstractYarnScheduler.handleContainerUpdates step 3: AbstractYarnScheduler.handleContainerUpdates will call handleIncreaseRequests, then call ContainerUpdateContext.checkAndAddToOutstandingIncreases step 4: cancle && and new: checkAndAddToOutstandingIncreases will check this inc update for this container, if there is an outstanding inc, it will cancle it by calling appSchedulingInfo.allocate(...) to allocate a dummy container; if the update is a fresh one, it will call appSchedulingInfo.updateResourceRequests to add a new request. the capacity of this new request is gap value between exiting rmContainer and capacity of updateRequest, for example, if original capacity is , the target capacity of UpdateRequest is , the gap[the capacity of the new request which will be added to appSchedulingInfo] is . step 5: swap temp container and existing container: CapacityScheduler.allocate call FiCaSchedulerApp.getAllocation(...), getAllocation will call SchedulerApplicationAttempt.pullNewlyIncreasedContainers, then call ContainerUpdateContext.swapContainer,swapContainer will swap the newly allocated inc temp container with existing container, for example: original capacity , temp inc container's capacity , so the updated existing container has capacity ,inc update done. the problem is: if we send inc update twice for a certain container, for example: send inc to , then send inc with new target , the final updated capacity is uncertain. 
Scenes one: 1: send inc update from to 2: scheduler aprove it, and commit it, so app.liveContainers has this temp inc container with capacity in it. 3: send inc with new target , a new resourceRequest with capacity will add to appSchedulingInfo, and swap first temp container, after that, the existing container has new capacity 4: scheduler aprove the send temp reqourceRequest, allocate second temp container with capacity 5: swap the second inc temp container. so the updated capacity of this existing container is = , but wanted is Scenes two: 1: send send inc update from to 2: scheduler aprove it, but the temp container with capacity is queued in commitService, wait to commit 3: send inc with new target , will add a new resourceRequest to appSchedulingInfo, but with same SchedulerRequestKey. 4: the first temp container commit, app.apply will call appSchedulingInfo.allocate to reduce pending num, at this situation, it will cancle the second inc request. 5: swap the first int temp container. the updated existing container's capacity is , but the wanted is two key points: 1: when ContainerUpdateContext.checkAndAddToOutstandingIncreases cancle previous inc and put current inc request, it use same SchedulerRequestKey as before, this action has competition with app.apply, like scenes two, app.apply will cancle second inc update's request. 2: ContainerUpdateContext.swapContainer do not check the update target change or not. how to fix: 1: after ContainerUpdateContext.checkAndAddToOutstandingIncreases cancle previous inc update, use a new SchedulerRequestKey for current inc update. we can add a new field createTime to distinguish them, default value of createTime is 0 2: change ContainerUpdateContext.swapContainer to checkAndSwapContainer, check update target change or not, if change, just ignore this temp container and release it. 
like Scenes one, when we swap first temp inc container, wo found that if we do this swap, the updated capacity is , but the newly target's capacity is , so we just ignore this swap, and release the temp container. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
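The SchedulerRequestKey collision behind scenario two can be sketched in isolation (all names below are hypothetical toy stand-ins, not the real YARN classes): when two outstanding increases share an identical key, a late commit of the first request removes the second one's entry, while adding a createTime component to the key, as the fix proposes, keeps them distinct.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Toy illustration of distinguishing otherwise-identical request keys
// with a createTime field. Names are hypothetical, not YARN's.
public class RequestKeyDemo {
    static class SchedulerKey {
        final int priority;
        final long createTime;  // always 0 in the old scheme; unique in the fix

        SchedulerKey(int priority, long createTime) {
            this.priority = priority;
            this.createTime = createTime;
        }

        @Override public boolean equals(Object o) {
            if (!(o instanceof SchedulerKey)) return false;
            SchedulerKey k = (SchedulerKey) o;
            return priority == k.priority && createTime == k.createTime;
        }

        @Override public int hashCode() { return Objects.hash(priority, createTime); }
    }

    public static void main(String[] args) {
        // Old scheme: both increase requests share one key (createTime = 0),
        // so the second put overwrites the first, and committing the first
        // increase removes the second one's entry as collateral damage.
        Map<SchedulerKey, String> outstanding = new HashMap<>();
        outstanding.put(new SchedulerKey(1, 0), "inc #1");
        outstanding.put(new SchedulerKey(1, 0), "inc #2");  // overwrites #1
        outstanding.remove(new SchedulerKey(1, 0));         // commit of #1 cancels #2
        System.out.println(outstanding.size());  // 0: second increase lost

        // Fixed scheme: distinct createTime keeps the two requests separate.
        outstanding.put(new SchedulerKey(1, 100L), "inc #1");
        outstanding.put(new SchedulerKey(1, 200L), "inc #2");
        outstanding.remove(new SchedulerKey(1, 100L));      // commit of #1 only
        System.out.println(outstanding.size());  // 1: second increase survives
    }
}
```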
[jira] [Updated] (YARN-8353) LightWeightResource's hashCode function is different from parent class
[ https://issues.apache.org/jira/browse/YARN-8353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

LongGang Chen updated YARN-8353:
--------------------------------
    Affects Version/s: 3.0.x

> LightWeightResource's hashCode function is different from parent class
> ----------------------------------------------------------------------
>
>                 Key: YARN-8353
>                 URL: https://issues.apache.org/jira/browse/YARN-8353
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: RM
>    Affects Versions: 3.0.x
>            Reporter: LongGang Chen
>            Priority: Major
>
> LightWeightResource's hashCode function is different from its parent class.
> One of the consequences is that ContainerUpdateContext.removeFromOutstandingUpdate will not work correctly, and ContainerUpdateContext.outstandingIncreases will retain stale entries.
> A simple test:
> {code:java}
> public void testHashCode() throws Exception {
>   Resource resource = Resources.createResource(10, 10);
>   Resource resource1 = new ResourcePBImpl();
>   resource1.setMemorySize(10L);
>   resource1.setVirtualCores(10);
>   int x = resource.hashCode();
>   int y = resource1.hashCode();
>   Assert.assertEquals(x, y);
> }
> {code}
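The failure mode behind this report can be reproduced without YARN at all (the classes below are toy stand-ins, not the real Resource/LightWeightResource): if a subclass hashes differently from its parent while equals() still matches across the two types, HashMap lookups by an equal key miss, so entries like those in outstandingIncreases can never be found or removed.

```java
import java.util.HashMap;
import java.util.Map;

// Toy reproduction of a parent/subclass hashCode mismatch. Illustrative
// stand-ins only, not the actual YARN Resource implementations.
public class HashCodeDemo {
    static class BaseResource {
        final long memory;
        final int vcores;
        BaseResource(long memory, int vcores) { this.memory = memory; this.vcores = vcores; }

        @Override public boolean equals(Object o) {
            if (!(o instanceof BaseResource)) return false;
            BaseResource r = (BaseResource) o;
            return memory == r.memory && vcores == r.vcores;
        }

        @Override public int hashCode() { return 31 * (int) memory + vcores; }
    }

    // Subclass with a different hash formula: equal objects, unequal hashes,
    // which breaks the equals/hashCode contract.
    static class LightResource extends BaseResource {
        LightResource(long memory, int vcores) { super(memory, vcores); }
        @Override public int hashCode() { return 47 * (int) memory + 13 * vcores; }
    }

    public static void main(String[] args) {
        Map<BaseResource, String> outstanding = new HashMap<>();
        outstanding.put(new LightResource(10, 10), "pending update");

        BaseResource same = new BaseResource(10, 10);
        System.out.println(same.equals(new LightResource(10, 10)));  // true
        // The lookup by an equal key misses, because the hash buckets differ:
        System.out.println(outstanding.get(same));    // null: stale entry remains
        System.out.println(outstanding.remove(same)); // null: cannot be removed
    }
}
```

This is exactly why removeFromOutstandingUpdate silently fails: the map entry was inserted under one hash and is looked up under another.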
[jira] [Updated] (YARN-8353) LightWeightResource's hashCode function is different from parent class
[ https://issues.apache.org/jira/browse/YARN-8353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

LongGang Chen updated YARN-8353:
--------------------------------
    Component/s: RM

> LightWeightResource's hashCode function is different from parent class
> ----------------------------------------------------------------------
>
>                 Key: YARN-8353
>                 URL: https://issues.apache.org/jira/browse/YARN-8353
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: RM
>    Affects Versions: 3.0.x
>            Reporter: LongGang Chen
>            Priority: Major
>
> LightWeightResource's hashCode function is different from its parent class.
> One of the consequences is that ContainerUpdateContext.removeFromOutstandingUpdate will not work correctly, and ContainerUpdateContext.outstandingIncreases will retain stale entries.
> A simple test:
> {code:java}
> public void testHashCode() throws Exception {
>   Resource resource = Resources.createResource(10, 10);
>   Resource resource1 = new ResourcePBImpl();
>   resource1.setMemorySize(10L);
>   resource1.setVirtualCores(10);
>   int x = resource.hashCode();
>   int y = resource1.hashCode();
>   Assert.assertEquals(x, y);
> }
> {code}
[jira] [Updated] (YARN-8354) SingleConstraintAppPlacementAllocator's allocate does not decPendingResource
[ https://issues.apache.org/jira/browse/YARN-8354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

LongGang Chen updated YARN-8354:
--------------------------------
    Affects Version/s: 3.0.x
          Component/s: RM

> SingleConstraintAppPlacementAllocator's allocate does not decPendingResource
> ----------------------------------------------------------------------------
>
>                 Key: YARN-8354
>                 URL: https://issues.apache.org/jira/browse/YARN-8354
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: RM
>    Affects Versions: 3.0.x
>            Reporter: LongGang Chen
>            Priority: Major
>
> SingleConstraintAppPlacementAllocator.allocate() does not call decPendingResource; it only reduces ResourceSizing.numAllocations by one.
> Perhaps we should change decreasePendingNumAllocation() from:
> {code:java}
> private void decreasePendingNumAllocation() {
>   // Deduct pending #allocations by 1
>   ResourceSizing sizing = schedulingRequest.getResourceSizing();
>   sizing.setNumAllocations(sizing.getNumAllocations() - 1);
> }
> {code}
> to:
> {code:java}
> private void decreasePendingNumAllocation() {
>   // Deduct pending #allocations by 1
>   ResourceSizing sizing = schedulingRequest.getResourceSizing();
>   sizing.setNumAllocations(sizing.getNumAllocations() - 1);
>   // Deduct pending resource of app and queue
>   if (getExecutionType() == ExecutionType.OPPORTUNISTIC) {
>     appSchedulingInfo.decOpportunisticPendingResource(
>         schedulingRequest.getNodeLabelExpression(),
>         sizing.getResources());
>   } else {
>     appSchedulingInfo.decPendingResource(
>         schedulingRequest.getNodeLabelExpression(),
>         sizing.getResources());
>   }
> }
> {code}
[jira] [Updated] (YARN-8354) SingleConstraintAppPlacementAllocator's allocate does not decPendingResource
[ https://issues.apache.org/jira/browse/YARN-8354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

LongGang Chen updated YARN-8354:
--------------------------------
    Description: 
SingleConstraintAppPlacementAllocator.allocate() does not call decPendingResource; it only reduces ResourceSizing.numAllocations by one.

Perhaps we should change decreasePendingNumAllocation() from:

{code:java}
private void decreasePendingNumAllocation() {
  // Deduct pending #allocations by 1
  ResourceSizing sizing = schedulingRequest.getResourceSizing();
  sizing.setNumAllocations(sizing.getNumAllocations() - 1);
}
{code}

to:

{code:java}
private void decreasePendingNumAllocation() {
  // Deduct pending #allocations by 1
  ResourceSizing sizing = schedulingRequest.getResourceSizing();
  sizing.setNumAllocations(sizing.getNumAllocations() - 1);
  // Deduct pending resource of app and queue
  if (getExecutionType() == ExecutionType.OPPORTUNISTIC) {
    appSchedulingInfo.decOpportunisticPendingResource(
        schedulingRequest.getNodeLabelExpression(),
        sizing.getResources());
  } else {
    appSchedulingInfo.decPendingResource(
        schedulingRequest.getNodeLabelExpression(),
        sizing.getResources());
  }
}
{code}

  was:
SingleConstraintAppPlacementAllocator.allocate() does not call decPendingResource; it only reduces ResourceSizing.numAllocations by one.

Perhaps we should change decreasePendingNumAllocation() from:

{code:java}
private void decreasePendingNumAllocation() {
  // Deduct pending #allocations by 1
  ResourceSizing sizing = schedulingRequest.getResourceSizing();
  sizing.setNumAllocations(sizing.getNumAllocations() - 1);
}
{code}

to:

{code:java}
private void decreasePendingNumAllocation() {
  // Deduct pending #allocations by 1
  ResourceSizing sizing = schedulingRequest.getResourceSizing();
  sizing.setNumAllocations(sizing.getNumAllocations() - 1);
  // Deduct pending resource of app and queue
  if (getExecutionType() == ExecutionType.OPPORTUNISTIC) {
    appSchedulingInfo.decOpportunisticPendingResource(
        schedulingRequest.getNodeLabelExpression(),
        sizing.getResources());
  } else {
    appSchedulingInfo.decPendingResource(
        schedulingRequest.getNodeLabelExpression(),
        sizing.getResources());
  }
}
{code}

> SingleConstraintAppPlacementAllocator's allocate does not decPendingResource
> ----------------------------------------------------------------------------
>
>                 Key: YARN-8354
>                 URL: https://issues.apache.org/jira/browse/YARN-8354
>             Project: Hadoop YARN
>          Issue Type: Bug
>            Reporter: LongGang Chen
>            Priority: Major
>
> SingleConstraintAppPlacementAllocator.allocate() does not call decPendingResource; it only reduces ResourceSizing.numAllocations by one.
> Perhaps we should change decreasePendingNumAllocation() from:
> {code:java}
> private void decreasePendingNumAllocation() {
>   // Deduct pending #allocations by 1
>   ResourceSizing sizing = schedulingRequest.getResourceSizing();
>   sizing.setNumAllocations(sizing.getNumAllocations() - 1);
> }
> {code}
> to:
> {code:java}
> private void decreasePendingNumAllocation() {
>   // Deduct pending #allocations by 1
>   ResourceSizing sizing = schedulingRequest.getResourceSizing();
>   sizing.setNumAllocations(sizing.getNumAllocations() - 1);
>   // Deduct pending resource of app and queue
>   if (getExecutionType() == ExecutionType.OPPORTUNISTIC) {
>     appSchedulingInfo.decOpportunisticPendingResource(
>         schedulingRequest.getNodeLabelExpression(),
>         sizing.getResources());
>   } else {
>     appSchedulingInfo.decPendingResource(
>         schedulingRequest.getNodeLabelExpression(),
>         sizing.getResources());
>   }
> }
> {code}
[jira] [Created] (YARN-8354) SingleConstraintAppPlacementAllocator's allocate does not decPendingResource
LongGang Chen created YARN-8354:
-----------------------------------
             Summary: SingleConstraintAppPlacementAllocator's allocate does not decPendingResource
                 Key: YARN-8354
                 URL: https://issues.apache.org/jira/browse/YARN-8354
             Project: Hadoop YARN
          Issue Type: Bug
            Reporter: LongGang Chen

SingleConstraintAppPlacementAllocator.allocate() does not call decPendingResource; it only reduces ResourceSizing.numAllocations by one.

Perhaps we should change decreasePendingNumAllocation() from:

{code:java}
private void decreasePendingNumAllocation() {
  // Deduct pending #allocations by 1
  ResourceSizing sizing = schedulingRequest.getResourceSizing();
  sizing.setNumAllocations(sizing.getNumAllocations() - 1);
}
{code}

to:

{code:java}
private void decreasePendingNumAllocation() {
  // Deduct pending #allocations by 1
  ResourceSizing sizing = schedulingRequest.getResourceSizing();
  sizing.setNumAllocations(sizing.getNumAllocations() - 1);
  // Deduct pending resource of app and queue
  if (getExecutionType() == ExecutionType.OPPORTUNISTIC) {
    appSchedulingInfo.decOpportunisticPendingResource(
        schedulingRequest.getNodeLabelExpression(),
        sizing.getResources());
  } else {
    appSchedulingInfo.decPendingResource(
        schedulingRequest.getNodeLabelExpression(),
        sizing.getResources());
  }
}
{code}
[jira] [Updated] (YARN-8353) LightWeightResource's hashCode function is different from parent class
[ https://issues.apache.org/jira/browse/YARN-8353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

LongGang Chen updated YARN-8353:
--------------------------------
    Description: 
LightWeightResource's hashCode function is different from its parent class.
One of the consequences is that ContainerUpdateContext.removeFromOutstandingUpdate will not work correctly, and ContainerUpdateContext.outstandingIncreases will retain stale entries.

A simple test:

{code:java}
public void testHashCode() throws Exception {
  Resource resource = Resources.createResource(10, 10);
  Resource resource1 = new ResourcePBImpl();
  resource1.setMemorySize(10L);
  resource1.setVirtualCores(10);
  int x = resource.hashCode();
  int y = resource1.hashCode();
  Assert.assertEquals(x, y);
}
{code}

  was:
LightWeightResource's hashCode function is different from its parent class.
One of the consequences is that ContainerUpdateContext.removeFromOutstandingUpdate will not work correctly, and ContainerUpdateContext.outstandingIncreases will retain stale entries.

A simple test:

public void testHashCode() throws Exception {
  Resource resource = Resources.createResource(10, 10);
  Resource resource1 = new ResourcePBImpl();
  resource1.setMemorySize(10L);
  resource1.setVirtualCores(10);
  int x = resource.hashCode();
  int y = resource1.hashCode();
  Assert.assertEquals(x, y);
}

> LightWeightResource's hashCode function is different from parent class
> ----------------------------------------------------------------------
>
>                 Key: YARN-8353
>                 URL: https://issues.apache.org/jira/browse/YARN-8353
>             Project: Hadoop YARN
>          Issue Type: Bug
>            Reporter: LongGang Chen
>            Priority: Major
>
> LightWeightResource's hashCode function is different from its parent class.
> One of the consequences is that ContainerUpdateContext.removeFromOutstandingUpdate will not work correctly, and ContainerUpdateContext.outstandingIncreases will retain stale entries.
> A simple test:
> {code:java}
> public void testHashCode() throws Exception {
>   Resource resource = Resources.createResource(10, 10);
>   Resource resource1 = new ResourcePBImpl();
>   resource1.setMemorySize(10L);
>   resource1.setVirtualCores(10);
>   int x = resource.hashCode();
>   int y = resource1.hashCode();
>   Assert.assertEquals(x, y);
> }
> {code}
[jira] [Updated] (YARN-8353) LightWeightResource's hashCode function is different from parent class
[ https://issues.apache.org/jira/browse/YARN-8353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

LongGang Chen updated YARN-8353:
--------------------------------
    Description: 
LightWeightResource's hashCode function is different from its parent class.
One of the consequences is that ContainerUpdateContext.removeFromOutstandingUpdate will not work correctly, and ContainerUpdateContext.outstandingIncreases will retain stale entries.

A simple test:

public void testHashCode() throws Exception {
  Resource resource = Resources.createResource(10, 10);
  Resource resource1 = new ResourcePBImpl();
  resource1.setMemorySize(10L);
  resource1.setVirtualCores(10);
  int x = resource.hashCode();
  int y = resource1.hashCode();
  Assert.assertEquals(x, y);
}

  was:
LightWeightResource's hashCode function is different from its parent class.
One of the consequences is that ContainerUpdateContext.removeFromOutstandingUpdate will not work correctly, and ContainerUpdateContext.outstandingIncreases will retain stale entries.

A simple test:

public void testHashCode() throws Exception {
  Resource resource = Resources.createResource(10, 10);
  Resource resource1 = new ResourcePBImpl();
  resource1.setMemorySize(10L);
  resource1.setVirtualCores(10);
  int x = resource.hashCode();
  int y = resource1.hashCode();
  Assert.assertEquals(x, y);
}

> LightWeightResource's hashCode function is different from parent class
> ----------------------------------------------------------------------
>
>                 Key: YARN-8353
>                 URL: https://issues.apache.org/jira/browse/YARN-8353
>             Project: Hadoop YARN
>          Issue Type: Bug
>            Reporter: LongGang Chen
>            Priority: Major
>
> LightWeightResource's hashCode function is different from its parent class.
> One of the consequences is that ContainerUpdateContext.removeFromOutstandingUpdate will not work correctly, and ContainerUpdateContext.outstandingIncreases will retain stale entries.
> A simple test:
> public void testHashCode() throws Exception {
>   Resource resource = Resources.createResource(10, 10);
>   Resource resource1 = new ResourcePBImpl();
>   resource1.setMemorySize(10L);
>   resource1.setVirtualCores(10);
>   int x = resource.hashCode();
>   int y = resource1.hashCode();
>   Assert.assertEquals(x, y);
> }
[jira] [Updated] (YARN-8353) LightWeightResource's hashCode function is different from parent class
[ https://issues.apache.org/jira/browse/YARN-8353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

LongGang Chen updated YARN-8353:
--------------------------------
    Description: 
LightWeightResource's hashCode function is different from its parent class.
One of the consequences is that ContainerUpdateContext.removeFromOutstandingUpdate will not work correctly, and ContainerUpdateContext.outstandingIncreases will retain stale entries.

A simple test:

public void testHashCode() throws Exception {
  Resource resource = Resources.createResource(10, 10);
  Resource resource1 = new ResourcePBImpl();
  resource1.setMemorySize(10L);
  resource1.setVirtualCores(10);
  int x = resource.hashCode();
  int y = resource1.hashCode();
  Assert.assertEquals(x, y);
}

  was:
LightWeightResource's hashCode function is different from its parent class.
One of the consequences is that ContainerUpdateContext.removeFromOutstandingUpdate will not work correctly, and ContainerUpdateContext.outstandingIncreases will retain stale entries.

A simple test:

public void testHashCode() throws Exception {
  Resource resource = Resources.createResource(10, 10);
  Resource resource1 = new ResourcePBImpl();
  resource1.setMemorySize(10L);
  resource1.setVirtualCores(10);
  int x = resource.hashCode();
  int y = resource1.hashCode();
  Assert.assertEquals(x, y);
}

> LightWeightResource's hashCode function is different from parent class
> ----------------------------------------------------------------------
>
>                 Key: YARN-8353
>                 URL: https://issues.apache.org/jira/browse/YARN-8353
>             Project: Hadoop YARN
>          Issue Type: Bug
>            Reporter: LongGang Chen
>            Priority: Major
>
> LightWeightResource's hashCode function is different from its parent class.
> One of the consequences is that ContainerUpdateContext.removeFromOutstandingUpdate will not work correctly, and ContainerUpdateContext.outstandingIncreases will retain stale entries.
> A simple test:
> public void testHashCode() throws Exception {
>   Resource resource = Resources.createResource(10, 10);
>   Resource resource1 = new ResourcePBImpl();
>   resource1.setMemorySize(10L);
>   resource1.setVirtualCores(10);
>   int x = resource.hashCode();
>   int y = resource1.hashCode();
>   Assert.assertEquals(x, y);
> }
[jira] [Updated] (YARN-8353) LightWeightResource's hashCode function is different from parent class
[ https://issues.apache.org/jira/browse/YARN-8353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

LongGang Chen updated YARN-8353:
--------------------------------
    Description: 
LightWeightResource's hashCode function is different from its parent class.
One of the consequences is that ContainerUpdateContext.removeFromOutstandingUpdate will not work correctly, and ContainerUpdateContext.outstandingIncreases will retain stale entries.

A simple test:

public void testHashCode() throws Exception {
  Resource resource = Resources.createResource(10, 10);
  Resource resource1 = new ResourcePBImpl();
  resource1.setMemorySize(10L);
  resource1.setVirtualCores(10);
  int x = resource.hashCode();
  int y = resource1.hashCode();
  Assert.assertEquals(x, y);
}

  was:
LightWeightResource's hashCode function is different from its parent class.
One of the consequences is that ContainerUpdateContext.removeFromOutstandingUpdate will not work correctly, and ContainerUpdateContext.outstandingIncreases will retain stale entries.

A simple test:

public void testHashCode() throws Exception {
  Resource resource = Resources.createResource(10, 10);
  Resource resource1 = new ResourcePBImpl();
  resource1.setMemorySize(10L);
  resource1.setVirtualCores(10);
  int x = resource.hashCode();
  int y = resource1.hashCode();
  Assert.assertEquals(x, y);
}

> LightWeightResource's hashCode function is different from parent class
> ----------------------------------------------------------------------
>
>                 Key: YARN-8353
>                 URL: https://issues.apache.org/jira/browse/YARN-8353
>             Project: Hadoop YARN
>          Issue Type: Bug
>            Reporter: LongGang Chen
>            Priority: Major
>
> LightWeightResource's hashCode function is different from its parent class.
> One of the consequences is that ContainerUpdateContext.removeFromOutstandingUpdate will not work correctly, and ContainerUpdateContext.outstandingIncreases will retain stale entries.
> A simple test:
> public void testHashCode() throws Exception {
>   Resource resource = Resources.createResource(10, 10);
>   Resource resource1 = new ResourcePBImpl();
>   resource1.setMemorySize(10L);
>   resource1.setVirtualCores(10);
>   int x = resource.hashCode();
>   int y = resource1.hashCode();
>   Assert.assertEquals(x, y);
> }
[jira] [Created] (YARN-8353) LightWeightResource's hashCode function is different from parent class
LongGang Chen created YARN-8353:
-----------------------------------
             Summary: LightWeightResource's hashCode function is different from parent class
                 Key: YARN-8353
                 URL: https://issues.apache.org/jira/browse/YARN-8353
             Project: Hadoop YARN
          Issue Type: Bug
            Reporter: LongGang Chen

LightWeightResource's hashCode function is different from its parent class.
One of the consequences is that ContainerUpdateContext.removeFromOutstandingUpdate will not work correctly, and ContainerUpdateContext.outstandingIncreases will retain stale entries.

A simple test:

public void testHashCode() throws Exception {
  Resource resource = Resources.createResource(10, 10);
  Resource resource1 = new ResourcePBImpl();
  resource1.setMemorySize(10L);
  resource1.setVirtualCores(10);
  int x = resource.hashCode();
  int y = resource1.hashCode();
  Assert.assertEquals(x, y);
}