[jira] [Commented] (YARN-6254) Provide a mechanism to whitelist the RM REST API clients
[ https://issues.apache.org/jira/browse/YARN-6254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889706#comment-15889706 ] yunjiong zhao commented on YARN-6254: - Reducing yarn.resourcemanager.max-completed-applications from its default value of 10000 to a small value like 500 should solve the problem. > Provide a mechanism to whitelist the RM REST API clients > > > Key: YARN-6254 > URL: https://issues.apache.org/jira/browse/YARN-6254 > Project: Hadoop YARN > Issue Type: New Feature > Components: resourcemanager >Affects Versions: 2.7.1 >Reporter: Aroop Maliakkal > > Currently RM REST APIs are open to everyone. Can we provide a whitelist > feature so that we can control what frequency and what hosts can hit the RM > REST APIs ? > Thanks, > /Aroop -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
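A minimal sketch of the tuning suggested above, assuming a plain YarnConfiguration object; 500 is just the illustrative value from the comment, and the same property can equally be set in yarn-site.xml:

{code}
// Sketch only: shrink the completed-application cache the RM keeps in memory,
// which also shrinks the /ws/v1/cluster/apps response. The value 500 is illustrative.
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class CompletedAppsTuning {
  public static void main(String[] args) {
    YarnConfiguration conf = new YarnConfiguration();
    // Same effect as setting yarn.resourcemanager.max-completed-applications in yarn-site.xml.
    conf.setInt(YarnConfiguration.RM_MAX_COMPLETED_APPLICATIONS, 500);
    System.out.println(conf.getInt(YarnConfiguration.RM_MAX_COMPLETED_APPLICATIONS, -1));
  }
}
{code}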
[jira] [Updated] (YARN-6257) CapacityScheduler REST API produces incorrect JSON - JSON object operationsInfo contains duplicate key
[ https://issues.apache.org/jira/browse/YARN-6257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tao Yang updated YARN-6257: --- Attachment: YARN-6257.001.patch Attach a patch for review. > CapacityScheduler REST API produces incorrect JSON - JSON object > operationsInfo contains deplicate key > -- > > Key: YARN-6257 > URL: https://issues.apache.org/jira/browse/YARN-6257 > Project: Hadoop YARN > Issue Type: Bug > Components: capacityscheduler >Affects Versions: 2.8.1 >Reporter: Tao Yang >Assignee: Tao Yang >Priority: Minor > Attachments: YARN-6257.001.patch > > > In response string of CapacityScheduler REST API, > scheduler/schedulerInfo/health/operationsInfo have duplicate key 'entry' as a > JSON object : > {code} > "operationsInfo":{ > > "entry":{"key":"last-preemption","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}}, > > "entry":{"key":"last-reservation","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}}, > > "entry":{"key":"last-allocation","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}}, > > "entry":{"key":"last-release","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}} > } > {code} > To solve this problem, I suppose the type of operationsInfo field in > CapacitySchedulerHealthInfo class should be converted from Map to List. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6257) CapacityScheduler REST API produces incorrect JSON - JSON object operationsInfo contains duplicate key
[ https://issues.apache.org/jira/browse/YARN-6257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tao Yang updated YARN-6257: --- Description: In response string of CapacityScheduler REST API, scheduler/schedulerInfo/health/operationsInfo have duplicate key 'entry' as a JSON object : {code} "operationsInfo":{ "entry":{"key":"last-preemption","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}}, "entry":{"key":"last-reservation","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}}, "entry":{"key":"last-allocation","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}}, "entry":{"key":"last-release","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}} } {code} To solve this problem, I suppose the type of operationsInfo field in CapacitySchedulerHealthInfo class should be converted from Map to List. was: In response string of CapacityScheduler REST API, scheduler/schedulerInfo/health/operationsInfo have duplicate key 'entry' as a JSON object : {code:json} "operationsInfo":{ "entry":{"key":"last-preemption","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}}, "entry":{"key":"last-reservation","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}}, "entry":{"key":"last-allocation","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}}, "entry":{"key":"last-release","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}} } {code} To solve this problem, I suppose the type of operationsInfo field in CapacitySchedulerHealthInfo class should be converted from Map to List. > CapacityScheduler REST API produces incorrect JSON - JSON object > operationsInfo contains deplicate key > -- > > Key: YARN-6257 > URL: https://issues.apache.org/jira/browse/YARN-6257 > Project: Hadoop YARN > Issue Type: Bug > Components: capacityscheduler >Affects Versions: 2.8.1 >Reporter: Tao Yang >Assignee: Tao Yang >Priority: Minor > > In response string of CapacityScheduler REST API, > scheduler/schedulerInfo/health/operationsInfo have duplicate key 'entry' as a > JSON object : > {code} > "operationsInfo":{ > > "entry":{"key":"last-preemption","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}}, > > "entry":{"key":"last-reservation","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}}, > > "entry":{"key":"last-allocation","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}}, > > "entry":{"key":"last-release","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}} > } > {code} > To solve this problem, I suppose the type of operationsInfo field in > CapacitySchedulerHealthInfo class should be converted from Map to List. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-6257) CapacityScheduler REST API produces incorrect JSON - JSON object operationsInfo contains duplicate key
Tao Yang created YARN-6257: -- Summary: CapacityScheduler REST API produces incorrect JSON - JSON object operationsInfo contains deplicate key Key: YARN-6257 URL: https://issues.apache.org/jira/browse/YARN-6257 Project: Hadoop YARN Issue Type: Bug Components: capacityscheduler Affects Versions: 2.8.1 Reporter: Tao Yang Assignee: Tao Yang Priority: Minor In response string of CapacityScheduler REST API, scheduler/schedulerInfo/health/operationsInfo have duplicate key 'entry' as a JSON object : {code:json} "operationsInfo":{ "entry":{"key":"last-preemption","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}}, "entry":{"key":"last-reservation","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}}, "entry":{"key":"last-allocation","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}}, "entry":{"key":"last-release","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}} } {code} To solve this problem, I suppose the type of operationsInfo field in CapacitySchedulerHealthInfo class should be converted from Map to List. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
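A minimal sketch of the Map-to-List idea described above (not the attached patch): a list of simple key/value beans marshals to a proper JSON array instead of repeated "entry" keys. All class and field names below are illustrative.

{code}
import java.util.ArrayList;
import java.util.List;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlRootElement;

// Hypothetical key/value bean; the real patch may shape this differently.
@XmlAccessorType(XmlAccessType.FIELD)
class OperationInfo {
  String operation;   // e.g. "last-preemption"
  String nodeId = "N/A";
  String containerId = "N/A";
  String queue = "N/A";

  OperationInfo() { }            // no-arg constructor required by JAXB

  OperationInfo(String operation) {
    this.operation = operation;
  }
}

@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
class HealthInfoSketch {
  // A List field marshals to "operationsInfo":[{...},{...}] rather than an object
  // with duplicate "entry" keys, which is what a Map-typed field produced.
  List<OperationInfo> operationsInfo = new ArrayList<>();
}
{code}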
[jira] [Commented] (YARN-6027) Support fromid(offset) filter for /flows API
[ https://issues.apache.org/jira/browse/YARN-6027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889646#comment-15889646 ] Varun Saxena commented on YARN-6027: Give me a couple of hours. Will review in detail... > Support fromid(offset) filter for /flows API > > > Key: YARN-6027 > URL: https://issues.apache.org/jira/browse/YARN-6027 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Labels: yarn-5355-merge-blocker > Attachments: YARN-6027-YARN-5355.0001.patch, > YARN-6027-YARN-5355.0002.patch, YARN-6027-YARN-5355.0003.patch, > YARN-6027-YARN-5355.0004.patch, YARN-6027-YARN-5355.0005.patch, > YARN-6027-YARN-5355.0006.patch, YARN-6027-YARN-5355.0007.patch > > > In YARN-5585 , fromId is supported for retrieving entities. We need similar > filter for flows/flowRun apps and flow run and flow as well. > Along with supporting fromId, this JIRA should also discuss following points > * Should we throw an exception for entities/entity retrieval if duplicates > found? > * TimelieEntity : > ** Should equals method also check for idPrefix? > ** Does idPrefix is part of identifiers? -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6027) Support fromid(offset) filter for /flows API
[ https://issues.apache.org/jira/browse/YARN-6027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889636#comment-15889636 ] Rohith Sharma K S commented on YARN-6027: - [~varun_saxena] do you have any additional comments on 0007 patch, I am planning to update new patch fixing newly added checkstyle comments. > Support fromid(offset) filter for /flows API > > > Key: YARN-6027 > URL: https://issues.apache.org/jira/browse/YARN-6027 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Labels: yarn-5355-merge-blocker > Attachments: YARN-6027-YARN-5355.0001.patch, > YARN-6027-YARN-5355.0002.patch, YARN-6027-YARN-5355.0003.patch, > YARN-6027-YARN-5355.0004.patch, YARN-6027-YARN-5355.0005.patch, > YARN-6027-YARN-5355.0006.patch, YARN-6027-YARN-5355.0007.patch > > > In YARN-5585 , fromId is supported for retrieving entities. We need similar > filter for flows/flowRun apps and flow run and flow as well. > Along with supporting fromId, this JIRA should also discuss following points > * Should we throw an exception for entities/entity retrieval if duplicates > found? > * TimelieEntity : > ** Should equals method also check for idPrefix? > ** Does idPrefix is part of identifiers? -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6027) Support fromid(offset) filter for /flows API
[ https://issues.apache.org/jira/browse/YARN-6027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889629#comment-15889629 ] Varun Saxena commented on YARN-6027: [~rohithsharma], can you fix the pending checkstyles? Seem fixable. > Support fromid(offset) filter for /flows API > > > Key: YARN-6027 > URL: https://issues.apache.org/jira/browse/YARN-6027 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Labels: yarn-5355-merge-blocker > Attachments: YARN-6027-YARN-5355.0001.patch, > YARN-6027-YARN-5355.0002.patch, YARN-6027-YARN-5355.0003.patch, > YARN-6027-YARN-5355.0004.patch, YARN-6027-YARN-5355.0005.patch, > YARN-6027-YARN-5355.0006.patch, YARN-6027-YARN-5355.0007.patch > > > In YARN-5585 , fromId is supported for retrieving entities. We need similar > filter for flows/flowRun apps and flow run and flow as well. > Along with supporting fromId, this JIRA should also discuss following points > * Should we throw an exception for entities/entity retrieval if duplicates > found? > * TimelieEntity : > ** Should equals method also check for idPrefix? > ** Does idPrefix is part of identifiers? -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
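For context on the fromId filter under review, a rough sketch of its intended semantics: skip results until the entity whose UID matches fromId is reached, then return it and the following entities up to the limit. This illustrates the behaviour only, not the reader implementation.

{code}
import java.util.ArrayList;
import java.util.List;

public class FromIdPagingSketch {
  // ordered: entities in reader order; uids: their UIDs in the same order.
  static <T> List<T> pageFrom(List<T> ordered, List<String> uids, String fromId, int limit) {
    List<T> page = new ArrayList<>();
    boolean started = (fromId == null);        // no fromId means start at the beginning
    for (int i = 0; i < ordered.size() && page.size() < limit; i++) {
      if (!started && uids.get(i).equals(fromId)) {
        started = true;                        // the fromId entity itself is included
      }
      if (started) {
        page.add(ordered.get(i));
      }
    }
    return page;
  }
}
{code}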
[jira] [Commented] (YARN-6027) Support fromid(offset) filter for /flows API
[ https://issues.apache.org/jira/browse/YARN-6027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889608#comment-15889608 ] Hadoop QA commented on YARN-6027: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 21s{color} | {color:green} YARN-5355 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 41s{color} | {color:green} YARN-5355 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 32s{color} | {color:green} YARN-5355 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s{color} | {color:green} YARN-5355 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 43s{color} | {color:green} YARN-5355 passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 28s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase in YARN-5355 has 1 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s{color} | {color:green} YARN-5355 passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 43s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 36s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 12 new + 28 unchanged - 1 fixed = 40 total (was 29) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 49s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 22s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 5m 17s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-tests in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 33m 8s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | YARN-6027 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12855296/YARN-6027-YARN-5355.0007.patch
[jira] [Created] (YARN-6256) Add FROM_ID info key for timeline entities in reader response.
Rohith Sharma K S created YARN-6256: --- Summary: Add FROM_ID info key for timeline entities in reader response. Key: YARN-6256 URL: https://issues.apache.org/jira/browse/YARN-6256 Project: Hadoop YARN Issue Type: Sub-task Reporter: Rohith Sharma K S Assignee: Rohith Sharma K S It is a continuation of YARN-6027 to add the FROM_ID key to all other timeline entity responses, which includes # Flow run entity response. # Application entity response. # Generic timeline entity response - here we need to revisit the idprefix filter, which is now provided separately. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
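A small sketch of what adding such an info key could look like, assuming the reader attaches the UID it paginated from as a "FROM_ID" entry; the constant name and helper method are hypothetical.

{code}
import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity;

public class FromIdInfoSketch {
  static final String FROM_ID_INFO_KEY = "FROM_ID";   // assumed key name

  static TimelineEntity withFromId(TimelineEntity entity, String fromIdValue) {
    // TimelineEntity#addInfo(String, Object) stores an entry in the entity's info map,
    // so the key shows up alongside the rest of the entity in the reader response.
    entity.addInfo(FROM_ID_INFO_KEY, fromIdValue);
    return entity;
  }
}
{code}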
[jira] [Commented] (YARN-6027) Support fromid(offset) filter for /flows API
[ https://issues.apache.org/jira/browse/YARN-6027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889593#comment-15889593 ] Hadoop QA commented on YARN-6027: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 25s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 50s{color} | {color:green} YARN-5355 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 33s{color} | {color:green} YARN-5355 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 31s{color} | {color:green} YARN-5355 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} YARN-5355 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 41s{color} | {color:green} YARN-5355 passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 25s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase in YARN-5355 has 1 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s{color} | {color:green} YARN-5355 passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 32s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 29s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 12 new + 28 unchanged - 1 fixed = 40 total (was 29) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 43s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 18s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 31s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-tests in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 30m 26s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | YARN-6027 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12855294/YARN-6027-YARN-5355.0007.patch
[jira] [Assigned] (YARN-4652) Skip label configuration and calculation of queue if nodelabel disabled
[ https://issues.apache.org/jira/browse/YARN-4652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ying Zhang reassigned YARN-4652: Assignee: Ying Zhang (was: Bibin A Chundatt) > Skip label configuration and calculation of queue if nodelabel disabled > --- > > Key: YARN-4652 > URL: https://issues.apache.org/jira/browse/YARN-4652 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Bibin A Chundatt >Assignee: Ying Zhang > > As per the discussion in YARN-4465 the queue level configuration need to be > skipped when node label is disabled in RM > {quote} > Queues with label configured will be rejected, including > default-node-label-expression, accessible-node-labels (accessible-node-labels > is * by default, but we shouldn't allow explicitly set accessible-node-labels > to non-empty and non *), and label-related capacities (check > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CSQueueUtils#loadCapacitiesByLabelsFromConf) > {quote} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-4652) Skip label configuration and calculation of queue if nodelabel disabled
[ https://issues.apache.org/jira/browse/YARN-4652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889572#comment-15889572 ] Ying Zhang commented on YARN-4652: -- Thanks [~bibinchundatt]. Assigned to me then. Will upload a patch soon and ask you for review. > Skip label configuration and calculation of queue if nodelabel disabled > --- > > Key: YARN-4652 > URL: https://issues.apache.org/jira/browse/YARN-4652 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Bibin A Chundatt >Assignee: Bibin A Chundatt > > As per the discussion in YARN-4465 the queue level configuration need to be > skipped when node label is disabled in RM > {quote} > Queues with label configured will be rejected, including > default-node-label-expression, accessible-node-labels (accessible-node-labels > is * by default, but we shouldn't allow explicitly set accessible-node-labels > to non-empty and non *), and label-related capacities (check > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CSQueueUtils#loadCapacitiesByLabelsFromConf) > {quote} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
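A hedged sketch of the kind of queue-level check the quoted discussion describes, assuming the standard yarn.node-labels.enabled switch; the method shape and error handling are illustrative, not the eventual patch.

{code}
import java.io.IOException;
import java.util.Set;
import org.apache.hadoop.conf.Configuration;

public class QueueLabelValidationSketch {
  static void validateQueueLabels(Configuration conf, String queuePath,
      Set<String> accessibleNodeLabels, String defaultNodeLabelExpression)
      throws IOException {
    if (conf.getBoolean("yarn.node-labels.enabled", false)) {
      return;   // node labels enabled, nothing to reject
    }
    // accessible-node-labels defaults to "*"; only an explicit non-empty, non-"*" value
    // counts as label configuration.
    boolean explicitAccessible = accessibleNodeLabels != null
        && !accessibleNodeLabels.isEmpty()
        && !(accessibleNodeLabels.size() == 1 && accessibleNodeLabels.contains("*"));
    boolean explicitDefaultExpr = defaultNodeLabelExpression != null
        && !defaultNodeLabelExpression.trim().isEmpty();
    if (explicitAccessible || explicitDefaultExpr) {
      throw new IOException("Node labels are disabled, but queue " + queuePath
          + " has label-related configuration");
    }
  }
}
{code}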
[jira] [Updated] (YARN-6027) Support fromid(offset) filter for /flows API
[ https://issues.apache.org/jira/browse/YARN-6027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S updated YARN-6027: Attachment: YARN-6027-YARN-5355.0007.patch > Support fromid(offset) filter for /flows API > > > Key: YARN-6027 > URL: https://issues.apache.org/jira/browse/YARN-6027 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Labels: yarn-5355-merge-blocker > Attachments: YARN-6027-YARN-5355.0001.patch, > YARN-6027-YARN-5355.0002.patch, YARN-6027-YARN-5355.0003.patch, > YARN-6027-YARN-5355.0004.patch, YARN-6027-YARN-5355.0005.patch, > YARN-6027-YARN-5355.0006.patch, YARN-6027-YARN-5355.0007.patch > > > In YARN-5585 , fromId is supported for retrieving entities. We need similar > filter for flows/flowRun apps and flow run and flow as well. > Along with supporting fromId, this JIRA should also discuss following points > * Should we throw an exception for entities/entity retrieval if duplicates > found? > * TimelieEntity : > ** Should equals method also check for idPrefix? > ** Does idPrefix is part of identifiers? -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6027) Support fromid(offset) filter for /flows API
[ https://issues.apache.org/jira/browse/YARN-6027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S updated YARN-6027: Attachment: (was: YARN-6027-YARN-5355.0007.patch) > Support fromid(offset) filter for /flows API > > > Key: YARN-6027 > URL: https://issues.apache.org/jira/browse/YARN-6027 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Labels: yarn-5355-merge-blocker > Attachments: YARN-6027-YARN-5355.0001.patch, > YARN-6027-YARN-5355.0002.patch, YARN-6027-YARN-5355.0003.patch, > YARN-6027-YARN-5355.0004.patch, YARN-6027-YARN-5355.0005.patch, > YARN-6027-YARN-5355.0006.patch > > > In YARN-5585 , fromId is supported for retrieving entities. We need similar > filter for flows/flowRun apps and flow run and flow as well. > Along with supporting fromId, this JIRA should also discuss following points > * Should we throw an exception for entities/entity retrieval if duplicates > found? > * TimelieEntity : > ** Should equals method also check for idPrefix? > ** Does idPrefix is part of identifiers? -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6027) Support fromid(offset) filter for /flows API
[ https://issues.apache.org/jira/browse/YARN-6027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S updated YARN-6027: Attachment: YARN-6027-YARN-5355.0007.patch Updated the patch as per Sangjin Lee's comment and fixed the checkstyle and javadoc errors. And do NOT be surprised that the patch has doubled in size!! That is because TestTimelineReaderWebServicesHBaseStorage was modified to fix the checkstyle comments. > Support fromid(offset) filter for /flows API > > > Key: YARN-6027 > URL: https://issues.apache.org/jira/browse/YARN-6027 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Labels: yarn-5355-merge-blocker > Attachments: YARN-6027-YARN-5355.0001.patch, > YARN-6027-YARN-5355.0002.patch, YARN-6027-YARN-5355.0003.patch, > YARN-6027-YARN-5355.0004.patch, YARN-6027-YARN-5355.0005.patch, > YARN-6027-YARN-5355.0006.patch, YARN-6027-YARN-5355.0007.patch > > > In YARN-5585 , fromId is supported for retrieving entities. We need similar > filter for flows/flowRun apps and flow run and flow as well. > Along with supporting fromId, this JIRA should also discuss following points > * Should we throw an exception for entities/entity retrieval if duplicates > found? > * TimelieEntity : > ** Should equals method also check for idPrefix? > ** Does idPrefix is part of identifiers? -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (YARN-6199) Support for listing flows with filter userid
[ https://issues.apache.org/jira/browse/YARN-6199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889522#comment-15889522 ] Rohith Sharma K S edited comment on YARN-6199 at 3/1/17 5:39 AM: - bq. Use-case wise, you plan to use this with daterange. Right? Initially Yes, but it is not going to be scalable solution because data range boundary is unlimited. Date Range do NOT solve all filter userid use case. Date range retrieves all the users data. Lets say, userX has run a flow one week back and userX wants to know about his flow activities. Can he able to retrieves userX flows given data range? The response contains all the users flow activities data. bq. do you think this should be a merge blocker? We can take consensus on this! It is important to support this filter. I do not think of any easier solution to support this filter unless flow activity table to be modified to keep userid in column. By doing this, at least column filters can be used to match user id. was (Author: rohithsharma): bq. Use-case wise, you plan to use this with daterange. Right? Date Range do NOT solve filter userid use case. Date range retrieves all the users data. Lets say, userX has run a flow one week back and userX wants to know about his flow activities. Can he able to retrieves userX flows given data range? The response contains all the users flow activities data. bq. do you think this should be a merge blocker? We can take consensus on this! It is important to support this filter. I do not think of any easier solution to support this filter unless flow activity table to be modified to keep userid in column. By doing this, at least column filters can be used to match user id. > Support for listing flows with filter userid > > > Key: YARN-6199 > URL: https://issues.apache.org/jira/browse/YARN-6199 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelinereader >Reporter: Rohith Sharma K S > Labels: yarn-5355-merge-blocker > > Currently */flows* API retrieves flow entities for all the users by default. > It is required to provide filter user i.e */flows?user=rohith* . This is > critical filter in secured environment. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6199) Support for listing flows with filter userid
[ https://issues.apache.org/jira/browse/YARN-6199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889522#comment-15889522 ] Rohith Sharma K S commented on YARN-6199: - bq. Use-case wise, you plan to use this with daterange. Right? A date range does NOT solve the userid filter use case. A date range retrieves all users' data. Let's say userX ran a flow one week back and now wants to know about his flow activities. Can he retrieve userX's flows given a date range? The response would contain every user's flow activity data. bq. do you think this should be a merge blocker? We can take consensus on this! It is important to support this filter. I do not see any easier solution to support this filter unless the flow activity table is modified to keep the userid in a column. By doing that, at least column filters could be used to match the user id. > Support for listing flows with filter userid > > > Key: YARN-6199 > URL: https://issues.apache.org/jira/browse/YARN-6199 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelinereader >Reporter: Rohith Sharma K S > Labels: yarn-5355-merge-blocker > > Currently */flows* API retrieves flow entities for all the users by default. > It is required to provide filter user i.e */flows?user=rohith* . This is > critical filter in secured environment. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
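To illustrate the column-based alternative mentioned above: if the flow activity table carried the user id in a column, a plain HBase value filter could narrow the scan to one user. The column family and qualifier below are assumptions, not the actual schema.

{code}
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class FlowUserFilterSketch {
  static Scan scanFlowsForUser(String userId) {
    SingleColumnValueFilter userFilter = new SingleColumnValueFilter(
        Bytes.toBytes("i"),       // assumed info column family
        Bytes.toBytes("user"),    // assumed qualifier holding the user id
        CompareOp.EQUAL,
        Bytes.toBytes(userId));
    userFilter.setFilterIfMissing(true);   // drop rows without the column
    Scan scan = new Scan();
    scan.setFilter(userFilter);
    return scan;
  }
}
{code}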
[jira] [Commented] (YARN-6254) Provide a mechanism to whitelist the RM REST API clients
[ https://issues.apache.org/jira/browse/YARN-6254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889500#comment-15889500 ] Aroop Maliakkal commented on YARN-6254: --- Here is the symptom of this issue what we are seeing :- We have some customers who crawls the resourcemanager REST APIs for monitoring and workflow purposes . Given that these resourcemanager responses are pretty heavy (in our case, /v1/cluster/apps is throwing around 300M of size_download data), we feel it is critical to have some kind of governance with platform team to control who all can access these REST APIs. {quote}hadoop-hadoop-resourcemanager.log.2017-02-24-18.gz:2017-02-24 18:08:01,484 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Size of event-queue is 312000 hadoop-hadoop-resourcemanager.log.2017-02-24-18.gz:2017-02-24 18:08:01,484 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Size of event-queue is 313000 hadoop-hadoop-resourcemanager.log.2017-02-24-18.gz:2017-02-24 18:08:01,485 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Size of event-queue is 314000 hadoop-hadoop-resourcemanager.log.2017-02-24-18.gz:2017-02-24 18:08:01,485 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Size of event-queue is 315000 hadoop-hadoop-resourcemanager.log.2017-02-24-18.gz:2017-02-24 18:08:01,485 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Size of event-queue is 316000 hadoop-hadoop-resourcemanager.log.2017-02-24-18.gz:2017-02-24 18:08:01,485 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Size of event-queue is 317000 hadoop-hadoop-resourcemanager.log.2017-02-24-18.gz:2017-02-24 18:08:01,485 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Size of event-queue is 318000 hadoop-hadoop-resourcemanager.log.2017-02-24-18.gz:2017-02-24 18:08:01,485 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Size of event-queue is 319000 hadoop-hadoop-resourcemanager.log.2017-02-24-18.gz:2017-02-24 18:08:01,485 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Size of event-queue is 32 hadoop-hadoop-resourcemanager.log.2017-02-24-18.gz:2017-02-24 18:08:01,485 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Size of event-queue is 32 hadoop-hadoop-resourcemanager.log.2017-02-24-18.gz:2017-02-24 18:08:01,485 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Size of event-queue is 321000 hadoop-hadoop-resourcemanager.log.2017-02-24-18.gz:2017-02-24 18:08:01,485 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Size of event-queue is 322000 hadoop-hadoop-resourcemanager.log.2017-02-24-18.gz:2017-02-24 18:08:01,486 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Size of event-queue is 323000 hadoop-hadoop-resourcemanager.log.2017-02-24-18.gz:2017-02-24 18:08:01,487 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Size of event-queue is 324000 hadoop-hadoop-resourcemanager.log.2017-02-24-18.gz:2017-02-24 18:08:01,487 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Size of event-queue is 324000 hadoop-hadoop-resourcemanager.log.2017-02-24-18.gz:2017-02-24 18:08:01,487 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Size of event-queue is 325000 hadoop-hadoop-resourcemanager.log.2017-02-24-18.gz:2017-02-24 18:08:01,487 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Size of event-queue is 326000 hadoop-hadoop-resourcemanager.log.2017-02-24-18.gz:2017-02-24 18:08:01,487 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Size of event-queue is 327000 hadoop-hadoop-resourcemanager.log.2017-02-24-18.gz:2017-02-24 18:08:01,487 INFO 
org.apache.hadoop.yarn.event.AsyncDispatcher: Size of event-queue is 328000 hadoop-hadoop-resourcemanager.log.2017-02-24-18.gz:2017-02-24 18:08:01,487 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Size of event-queue is 329000 hadoop-hadoop-resourcemanager.log.2017-02-24-18.gz:2017-02-24 18:08:01,487 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Size of event-queue is 33 hadoop-hadoop-resourcemanager.log.2017-02-24-18.gz:2017-02-24 18:08:01,487 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Size of event-queue is 331000 hadoop-hadoop-resourcemanager.log.2017-02-24-18.gz:2017-02-24 18:08:01,487 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Size of event-queue is 332000 {quote} > Provide a mechanism to whitelist the RM REST API clients > > > Key: YARN-6254 > URL: https://issues.apache.org/jira/browse/YARN-6254 > Project: Hadoop YARN > Issue Type: New Feature > Components: resourcemanager >Affects Versions: 2.7.1 >Reporter: Aroop Maliakkal > > Currently RM REST APIs are open to everyone. Can we provide a whitelist > feature so that we can control what frequency and what hosts can hit the RM > REST APIs ? > Thanks, > /Aroop -- This message was sent by Atlassian JIRA (v6.3.15#6346) ---
[jira] [Commented] (YARN-6190) Validation and synchronization fixes in LocalityMulticastAMRMProxyPolicy
[ https://issues.apache.org/jira/browse/YARN-6190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889499#comment-15889499 ] Carlo Curino commented on YARN-6190: Thanks [~subru]. > Validation and synchronization fixes in LocalityMulticastAMRMProxyPolicy > > > Key: YARN-6190 > URL: https://issues.apache.org/jira/browse/YARN-6190 > Project: Hadoop YARN > Issue Type: Sub-task > Components: federation >Reporter: Botong Huang >Assignee: Botong Huang >Priority: Minor > Fix For: YARN-2915 > > Attachments: YARN-6190-YARN-2915.v1.patch, > YARN-6190-YARN-2915.v2.patch > > > A bug fix in LocalityMulticastAMRMProxyPolicy on policy array condition > check, along with misc cleanups. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6027) Support fromid(offset) filter for /flows API
[ https://issues.apache.org/jira/browse/YARN-6027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889497#comment-15889497 ] Rohith Sharma K S commented on YARN-6027: - Thanks [~sjlee0] and [~varun_saxena] for the review. bq. Can we at least standardize on one set of definitions? Will update in next patch to make use of default delimiter/escape characters by calling just split methods. And also will remove UID_ESCAPE_CHAR and UID_DELIMETER_CHAR character constants from TimelineUIDConverter class. > Support fromid(offset) filter for /flows API > > > Key: YARN-6027 > URL: https://issues.apache.org/jira/browse/YARN-6027 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Labels: yarn-5355-merge-blocker > Attachments: YARN-6027-YARN-5355.0001.patch, > YARN-6027-YARN-5355.0002.patch, YARN-6027-YARN-5355.0003.patch, > YARN-6027-YARN-5355.0004.patch, YARN-6027-YARN-5355.0005.patch, > YARN-6027-YARN-5355.0006.patch > > > In YARN-5585 , fromId is supported for retrieving entities. We need similar > filter for flows/flowRun apps and flow run and flow as well. > Along with supporting fromId, this JIRA should also discuss following points > * Should we throw an exception for entities/entity retrieval if duplicates > found? > * TimelieEntity : > ** Should equals method also check for idPrefix? > ** Does idPrefix is part of identifiers? -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
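A rough sketch of the delimiter/escape handling being standardized here: UID parts are joined with a delimiter, and any delimiter or escape character occurring inside a part is escaped first so the UID can later be split back unambiguously. The '!' and '*' characters and the method names are illustrative, not the actual TimelineUIDConverter constants.

{code}
import java.util.ArrayList;
import java.util.List;

public class UidCodecSketch {
  private static final char DELIMITER = '!';
  private static final char ESCAPE = '*';

  static String join(String... parts) {
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < parts.length; i++) {
      if (i > 0) {
        sb.append(DELIMITER);
      }
      for (char c : parts[i].toCharArray()) {
        if (c == DELIMITER || c == ESCAPE) {
          sb.append(ESCAPE);            // escape special characters inside a part
        }
        sb.append(c);
      }
    }
    return sb.toString();
  }

  static List<String> split(String uid) {
    List<String> parts = new ArrayList<>();
    StringBuilder current = new StringBuilder();
    boolean escaped = false;
    for (char c : uid.toCharArray()) {
      if (escaped) {
        current.append(c);              // character after an escape is taken literally
        escaped = false;
      } else if (c == ESCAPE) {
        escaped = true;
      } else if (c == DELIMITER) {
        parts.add(current.toString());
        current.setLength(0);
      } else {
        current.append(c);
      }
    }
    parts.add(current.toString());
    return parts;
  }
}
{code}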
[jira] [Commented] (YARN-6248) Killing an app with pending container requests leaves the user in UsersManager
[ https://issues.apache.org/jira/browse/YARN-6248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889423#comment-15889423 ] Hadoop QA commented on YARN-6248: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 44m 59s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 69m 45s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.TestLeaderElectorService | | | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer | | | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | YARN-6248 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12855198/YARN-6248.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux ead8d7a6e42b 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 989bd56 | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/15115/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/15115/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/15115/console | | Powered by | Apache Yetus 0
[jira] [Commented] (YARN-6255) Refactor yarn-native-services framework
[ https://issues.apache.org/jira/browse/YARN-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889390#comment-15889390 ] Hadoop QA commented on YARN-6255: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 8s{color} | {color:red} YARN-6255 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | YARN-6255 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12855279/YARN-6255.01-yarn-native-servies.patch | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/15116/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Refactor yarn-native-services framework > > > Key: YARN-6255 > URL: https://issues.apache.org/jira/browse/YARN-6255 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Jian He >Assignee: Jian He > Attachments: YARN-6255.01-yarn-native-servies.patch > > > YARN-4692 provides a good abstraction of services on YARN. We could use this > as a building block in yarn-native-services framework code base as well. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6255) Refactor yarn-native-services framework
[ https://issues.apache.org/jira/browse/YARN-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jian He updated YARN-6255: -- Attachment: YARN-6255.01-yarn-native-servies.patch > Refactor yarn-native-services framework > > > Key: YARN-6255 > URL: https://issues.apache.org/jira/browse/YARN-6255 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Jian He >Assignee: Jian He > Attachments: YARN-6255.01-yarn-native-servies.patch > > > YARN-4692 provides a good abstraction of services on YARN. We could use this > as a building block in yarn-native-services framework code base as well. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-1728) Workaround guice3x-undecoded pathInfo in YARN WebApp
[ https://issues.apache.org/jira/browse/YARN-1728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889359#comment-15889359 ] Yuanbo Liu commented on YARN-1728: -- Thanks a lot! > Workaround guice3x-undecoded pathInfo in YARN WebApp > > > Key: YARN-1728 > URL: https://issues.apache.org/jira/browse/YARN-1728 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Abraham Elmahrek >Assignee: Yuanbo Liu > Fix For: 2.8.0, 2.7.4, 3.0.0-alpha3 > > Attachments: test-case-for-trunk.patch, YARN-1728-branch-2.001.patch, > YARN-1728-branch-2.002.patch, YARN-1728-branch-2.003.patch, > YARN-1728-branch-2.004.patch, YARN-1728-branch-2.005.patch > > > For example, going to the job history server page > http://localhost:19888/jobhistory/logs/localhost%3A8041/container_1391466602060_0011_01_01/job_1391466602060_0011/admin/stderr > results in the following error: > {code} > Cannot get container logs. Invalid nodeId: > test-cdh5-hue.ent.cloudera.com%3A8041 > {code} > Where the url decoded version works: > http://localhost:19888/jobhistory/logs/localhost:8041/container_1391466602060_0011_01_01/job_1391466602060_0011/admin/stderr > It seems like both should be supported as the former is simply percent > encoding. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
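The core of the workaround is percent-decoding the pathInfo that Guice hands over before the nodeId is parsed, so "host%3A8041" behaves like "host:8041". A minimal illustration of that idea, not the committed patch:

{code}
import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;

public class PathInfoDecodeSketch {
  static String decodePathInfo(String rawPathInfo) {
    if (rawPathInfo == null) {
      return null;
    }
    try {
      // "/logs/localhost%3A8041/..." -> "/logs/localhost:8041/..."
      return URLDecoder.decode(rawPathInfo, "UTF-8");
    } catch (UnsupportedEncodingException e) {
      throw new IllegalStateException("UTF-8 is always supported", e);
    }
  }
}
{code}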
[jira] [Commented] (YARN-6040) Introduce api independent PendingAsk to replace usage of ResourceRequest within Scheduler classes
[ https://issues.apache.org/jira/browse/YARN-6040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889343#comment-15889343 ] Hadoop QA commented on YARN-6040: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 44s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 7 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 42s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s{color} | {color:green} branch-2 passed with JDK v1.8.0_121 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 35s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 38s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 12s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s{color} | {color:green} branch-2 passed with JDK v1.8.0_121 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 32s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 11 new + 978 unchanged - 22 fixed = 989 total (was 1000) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. 
Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 43m 25s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_121. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}114m 19s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | JDK v1.8.0_121 Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart | | | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer | | | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerSurgicalPreemption | | JDK v1.7.0_121 Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart | | | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler | | | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCa
[jira] [Commented] (YARN-6248) Killing an app with pending container requests leaves the user in UsersManager
[ https://issues.apache.org/jira/browse/YARN-6248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889333#comment-15889333 ] Sunil G commented on YARN-6248: --- Yes. Thanks [~eepayne]. There was a UT failure, hence I used a similar fix done in another part of code. I submitted patch to see any UT issues. If in case of any failure, we can fix that UT as well. Change looks perfectly fine for me. > Killing an app with pending container requests leaves the user in UsersManager > -- > > Key: YARN-6248 > URL: https://issues.apache.org/jira/browse/YARN-6248 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 3.0.0-alpha3 >Reporter: Eric Payne >Assignee: Eric Payne > Attachments: User Left Over.jpg, YARN-6248.001.patch > > > If an app is still asking for resources when it is killed, the user is left > in the UsersManager structure and shows up on the GUI. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-4599) Set OOM control for memory cgroups
[ https://issues.apache.org/jira/browse/YARN-4599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889309#comment-15889309 ] Miklos Szegedi commented on YARN-4599: -- Thank you for the patch [~sandflee]. This would be useful for many users. I have a few questions. 1. CGroupsElasticMemoryResourceHandlerImpl copies lots of code from CGroupsMemoryResourceHandlerImpl. Have you considered inheritance to reduce redundancy? 2. Is it better to use JNI as in the current patch, or implement the event logic in Linux Container Executor? 3. Could you please rebase it? It does not build anymore. > Set OOM control for memory cgroups > -- > > Key: YARN-4599 > URL: https://issues.apache.org/jira/browse/YARN-4599 > Project: Hadoop YARN > Issue Type: Bug > Components: nodemanager >Affects Versions: 2.9.0 >Reporter: Karthik Kambatla >Assignee: sandflee > Labels: oct16-medium > Attachments: yarn-4599-not-so-useful.patch, YARN-4599.sandflee.patch > > > YARN-1856 adds memory cgroups enforcing support. We should also explicitly > set OOM control so that containers are not killed as soon as they go over > their usage. Today, one could set the swappiness to control this, but > clusters with swap turned off exist. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
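For reference, a hedged sketch of what setting OOM control means at the cgroup (v1) level: writing 1 to memory.oom_control disables the kernel OOM killer for that cgroup, so tasks pause at the memory limit instead of being killed. The path is illustrative, and this is not the NodeManager handler code under review.

{code}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class CgroupOomControlSketch {
  static void disableOomKiller(String containerCgroupDir) throws IOException {
    Path oomControl = Paths.get(containerCgroupDir, "memory.oom_control");
    // "1" disables the OOM killer for this cgroup; processes block on the memory limit
    // until user space (e.g. the NodeManager) decides how to resolve the situation.
    Files.write(oomControl, "1".getBytes(StandardCharsets.UTF_8));
  }

  public static void main(String[] args) throws IOException {
    disableOomKiller("/sys/fs/cgroup/memory/hadoop-yarn/container_example");  // hypothetical path
  }
}
{code}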
[jira] [Created] (YARN-6255) Refactor yarn-native-services framework
Jian He created YARN-6255: - Summary: Refactor yarn-native-services framework Key: YARN-6255 URL: https://issues.apache.org/jira/browse/YARN-6255 Project: Hadoop YARN Issue Type: Sub-task Reporter: Jian He Assignee: Jian He YARN-4692 provides a good abstraction of services on YARN. We could use this as a building block in yarn-native-services framework code base as well. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6190) Validation and synchronization fixes in LocalityMulticastAMRMProxyPolicy
[ https://issues.apache.org/jira/browse/YARN-6190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889261#comment-15889261 ] Botong Huang commented on YARN-6190: Great, thanks [~curino] for the feedback! > Validation and synchronization fixes in LocalityMulticastAMRMProxyPolicy > > > Key: YARN-6190 > URL: https://issues.apache.org/jira/browse/YARN-6190 > Project: Hadoop YARN > Issue Type: Sub-task > Components: federation >Reporter: Botong Huang >Assignee: Botong Huang >Priority: Minor > Attachments: YARN-6190-YARN-2915.v1.patch, > YARN-6190-YARN-2915.v2.patch > > > A bug fix in LocalityMulticastAMRMProxyPolicy on policy array condition > check, along with misc cleanups. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6190) Validation and synchronization fixes in LocalityMulticastAMRMProxyPolicy
[ https://issues.apache.org/jira/browse/YARN-6190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889253#comment-15889253 ] Carlo Curino commented on YARN-6190: Thanks [~botong], I just committed this to branch YARN-2915. > Validation and synchronization fixes in LocalityMulticastAMRMProxyPolicy > > > Key: YARN-6190 > URL: https://issues.apache.org/jira/browse/YARN-6190 > Project: Hadoop YARN > Issue Type: Sub-task > Components: federation >Reporter: Botong Huang >Assignee: Botong Huang >Priority: Minor > Attachments: YARN-6190-YARN-2915.v1.patch, > YARN-6190-YARN-2915.v2.patch > > > A bug fix in LocalityMulticastAMRMProxyPolicy on policy array condition > check, along with misc cleanups. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-6254) Provide a mechanism to whitelist the RM REST API clients
Aroop Maliakkal created YARN-6254: - Summary: Provide a mechanism to whitelist the RM REST API clients Key: YARN-6254 URL: https://issues.apache.org/jira/browse/YARN-6254 Project: Hadoop YARN Issue Type: New Feature Components: resourcemanager Affects Versions: 2.7.1 Reporter: Aroop Maliakkal Currently RM REST APIs are open to everyone. Can we provide a whitelist feature so that we can control what frequency and what hosts can hit the RM REST APIs ? Thanks, /Aroop -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
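As a purely illustrative sketch of the kind of mechanism being requested (this is not an existing YARN feature), a gate in front of the REST endpoints could combine a host whitelist with a crude per-host request budget. All names and thresholds below are hypothetical.

{code:java}
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of a host whitelist plus a simple per-host request
// budget, the kind of control the request above asks for. Not a YARN API.
public class RestApiGateSketch {
  private final Set<String> allowedHosts;
  private final int maxRequestsPerMinute;
  private final Map<String, int[]> counters = new ConcurrentHashMap<>();
  private volatile long windowStartMillis = System.currentTimeMillis();

  public RestApiGateSketch(Set<String> allowedHosts, int maxRequestsPerMinute) {
    this.allowedHosts = allowedHosts;
    this.maxRequestsPerMinute = maxRequestsPerMinute;
  }

  /** Returns true if the caller may proceed, false if it should be rejected. */
  public boolean permit(String remoteHost) {
    if (!allowedHosts.contains(remoteHost)) {
      return false;                       // not whitelisted
    }
    long now = System.currentTimeMillis();
    if (now - windowStartMillis > 60_000L) {
      counters.clear();                   // start a new one-minute window
      windowStartMillis = now;
    }
    int[] count = counters.computeIfAbsent(remoteHost, h -> new int[1]);
    synchronized (count) {
      return ++count[0] <= maxRequestsPerMinute;
    }
  }
}
{code}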
[jira] [Updated] (YARN-6190) Validation and synchronization fixes in LocalityMulticastAMRMProxyPolicy
[ https://issues.apache.org/jira/browse/YARN-6190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Carlo Curino updated YARN-6190: --- Summary: Validation and synchronization fixes in LocalityMulticastAMRMProxyPolicy (was: Bug in LocalityMulticastAMRMProxyPolicy argument validation) > Validation and synchronization fixes in LocalityMulticastAMRMProxyPolicy > > > Key: YARN-6190 > URL: https://issues.apache.org/jira/browse/YARN-6190 > Project: Hadoop YARN > Issue Type: Sub-task > Components: federation >Reporter: Botong Huang >Assignee: Botong Huang >Priority: Minor > Attachments: YARN-6190-YARN-2915.v1.patch, > YARN-6190-YARN-2915.v2.patch > > > A bug fix in LocalityMulticastAMRMProxyPolicy on policy array condition > check, along with misc cleanups. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6247) Add SubClusterResolver into FederationStateStoreFacade
[ https://issues.apache.org/jira/browse/YARN-6247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889198#comment-15889198 ] Hadoop QA commented on YARN-6247: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 20s{color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 19s{color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 49s{color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 34s{color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 56s{color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 11s{color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s{color} | {color:green} YARN-2915 passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 32s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 29s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 19s{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 29s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 60m 35s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | YARN-6247 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12855246/YARN-6247-YARN-2915.v4.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux c712ffefc5f9 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | YARN-2915 / 611a7fe | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/15113/testReport/
[jira] [Updated] (YARN-6040) Introduce api independent PendingAsk to replace usage of ResourceRequest within Scheduler classes
[ https://issues.apache.org/jira/browse/YARN-6040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wangda Tan updated YARN-6040: - Attachment: YARN-6040.branch-2.008.patch > Introduce api independent PendingAsk to replace usage of ResourceRequest > within Scheduler classes > - > > Key: YARN-6040 > URL: https://issues.apache.org/jira/browse/YARN-6040 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Wangda Tan > Fix For: 3.0.0-alpha2 > > Attachments: YARN-6040.001.patch, YARN-6040.002.patch, > YARN-6040.003.patch, YARN-6040.004.patch, YARN-6040.005.patch, > YARN-6040.006.patch, YARN-6040.007.patch, YARN-6040.branch-2.007.patch, > YARN-6040.branch-2.008.patch > > > As mentioned by YARN-5906, currently schedulers are using ResourceRequest > heavily so it will be very hard to adopt the new PowerfulResourceRequest > (YARN-4902). > This JIRA is the 2nd step of refactoring, which remove usage of > ResourceRequest from AppSchedulingInfo / SchedulerApplicationAttempt. Instead > of returning ResourceRequest, it returns a lightweight and API-independent > object - {{PendingAsk}}. > The only remained ResourceRequest API of AppSchedulingInfo will be used by > web service to get list of ResourceRequests. > So after this patch, usage of ResourceRequest will be isolated inside > AppSchedulingInfo, so it will be more flexible to update internal data > structure and upgrade old ResourceRequest API to new. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
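For readers unfamiliar with the refactoring described above, a hedged sketch of what a lightweight, API-independent pending-ask value object could look like; the real PendingAsk introduced by the patch may differ in fields and naming.

{code:java}
// Hypothetical sketch of a "pending ask": just the per-allocation resource
// and the outstanding count, with no dependency on the public
// ResourceRequest API. Illustrative only.
public final class PendingAskSketch {
  private final long memoryMb;
  private final int vcores;
  private final int count;

  public PendingAskSketch(long memoryMb, int vcores, int count) {
    this.memoryMb = memoryMb;
    this.vcores = vcores;
    this.count = count;
  }

  public long getMemoryMb() { return memoryMb; }
  public int getVcores() { return vcores; }
  public int getCount() { return count; }

  /** The scheduler can derive a new ask after satisfying one allocation. */
  public PendingAskSketch decrement() {
    return new PendingAskSketch(memoryMb, vcores, count - 1);
  }
}
{code}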
[jira] [Commented] (YARN-6253) FlowAcitivityColumnPrefix.store(byte[] rowKey, ...) drops timestamp
[ https://issues.apache.org/jira/browse/YARN-6253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889150#comment-15889150 ] Haibo Chen commented on YARN-6253: -- Sure. > FlowAcitivityColumnPrefix.store(byte[] rowKey, ...) drops timestamp > --- > > Key: YARN-6253 > URL: https://issues.apache.org/jira/browse/YARN-6253 > Project: Hadoop YARN > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha2 >Reporter: Haibo Chen >Assignee: Haibo Chen > Labels: yarn-5355-merge-blocker > Fix For: 3.0.0-alpha3 > > Attachments: YARN-6253.01.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6253) FlowAcitivityColumnPrefix.store(byte[] rowKey, ...) drops timestamp
[ https://issues.apache.org/jira/browse/YARN-6253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889125#comment-15889125 ] Sangjin Lee commented on YARN-6253: --- LGTM. Will commit shortly. I think it is fine to merge it to the YARN-5355 branch and have it go to trunk when we merge that feature branch. Are you fine with that? > FlowAcitivityColumnPrefix.store(byte[] rowKey, ...) drops timestamp > --- > > Key: YARN-6253 > URL: https://issues.apache.org/jira/browse/YARN-6253 > Project: Hadoop YARN > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha2 >Reporter: Haibo Chen >Assignee: Haibo Chen > Labels: yarn-5355-merge-blocker > Fix For: 3.0.0-alpha3 > > Attachments: YARN-6253.01.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6253) FlowAcitivityColumnPrefix.store(byte[] rowKey, ...) drops timestamp
[ https://issues.apache.org/jira/browse/YARN-6253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889104#comment-15889104 ] Hadoop QA commented on YARN-6253: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 17s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 19m 59s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | YARN-6253 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12855241/YARN-6253.01.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 1f153f0e17a0 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 235203d | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/15112/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/15112/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > FlowAcitivityColumnPrefix.store(byte[] rowKey, ...) drops timestamp > --- > > Key: YARN-6253 > URL: https://issues.apache.org/jira/browse/YARN-6253 > Project: Hadoop YARN > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha2 >
[jira] [Updated] (YARN-6247) Add SubClusterResolver into FederationStateStoreFacade
[ https://issues.apache.org/jira/browse/YARN-6247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Botong Huang updated YARN-6247: --- Attachment: YARN-6247-YARN-2915.v4.patch > Add SubClusterResolver into FederationStateStoreFacade > -- > > Key: YARN-6247 > URL: https://issues.apache.org/jira/browse/YARN-6247 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Botong Huang >Assignee: Botong Huang >Priority: Minor > Attachments: YARN-6247-YARN-2915.v1.patch, > YARN-6247-YARN-2915.v2.patch, YARN-6247-YARN-2915.v3.patch, > YARN-6247-YARN-2915.v4.patch > > > Add SubClusterResolver into FederationStateStoreFacade. Since the resolver > might involve some overhead (read file in the background, potentially > periodically), it is good to put it inside FederationStateStoreFacade > singleton, so that only one instance will be created. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
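A hedged sketch of the design point in the description above: keeping a potentially expensive resolver inside a lazily created singleton facade so only one instance (and one background refresh) ever exists. The names below are illustrative, not the federation API.

{code:java}
// Hypothetical sketch of holding a resolver inside a singleton facade so
// only one instance is created. Illustrative only.
public final class FacadeSketch {
  private static volatile FacadeSketch instance;

  private final ResolverSketch resolver;

  private FacadeSketch() {
    // Construct once; the resolver may read files or refresh periodically.
    this.resolver = new ResolverSketch();
  }

  public static FacadeSketch getInstance() {
    if (instance == null) {
      synchronized (FacadeSketch.class) {
        if (instance == null) {
          instance = new FacadeSketch();
        }
      }
    }
    return instance;
  }

  public ResolverSketch getSubClusterResolver() {
    return resolver;
  }

  // Stand-in for the real SubClusterResolver.
  public static final class ResolverSketch {
    public String resolve(String nodeName) {
      return "subcluster-of-" + nodeName;
    }
  }
}
{code}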
[jira] [Updated] (YARN-6253) FlowAcitivityColumnPrefix.store(byte[] rowKey, ...) drops timestamp
[ https://issues.apache.org/jira/browse/YARN-6253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibo Chen updated YARN-6253: - Attachment: YARN-6253.01.patch Attaching a trivial fix. > FlowAcitivityColumnPrefix.store(byte[] rowKey, ...) drops timestamp > --- > > Key: YARN-6253 > URL: https://issues.apache.org/jira/browse/YARN-6253 > Project: Hadoop YARN > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha2 >Reporter: Haibo Chen >Assignee: Haibo Chen > Labels: yarn-5355-merge-blocker > Fix For: 3.0.0-alpha3 > > Attachments: YARN-6253.01.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Assigned] (YARN-6252) Suspicious code fragments: comparing with itself
[ https://issues.apache.org/jira/browse/YARN-6252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibo Chen reassigned YARN-6252: Assignee: Haibo Chen > Suspicious code fragments: comparing with itself > > > Key: YARN-6252 > URL: https://issues.apache.org/jira/browse/YARN-6252 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 3.0.0-alpha2 >Reporter: AppChecker >Assignee: Haibo Chen > > Hi > 1) > https://github.com/apache/hadoop/blob/235203dffda1482fb38762fde544c4dd9c3e1fa8/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ReservationId.java#L106 > {code:java} > return getId() > getId() ? 1 : getId() < getId() ? -1 : 0; > {code} > strangely, getId() is compared with itself > Probably it should be something like this: > {code:java} > return this.getId() > other.getId() ? 1 : this.getId() < other.getId() > ? -1 : 0; > {code} > 2) > https://github.com/apache/hadoop/blob/235203dffda1482fb38762fde544c4dd9c3e1fa8/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestAppLogAggregatorImpl.java#L260 > {code:java} > if(filesUploaded.size() != filesUploaded.size()) { > fail(errMsgPrefix + ": actual size: " + filesUploaded.size() + " vs " + > "expected size: " + filesExpected.size()); > } > {code} > filesUploaded.size() is compared with itself > probably it should be: > {code:java} > if(filesUploaded.size() != filesExpected.size()) { > fail(errMsgPrefix + ": actual size: " + filesUploaded.size() + " vs " + > "expected size: " + filesExpected.size()); > } > {code} > These possible defects were found by the [static code analyzer > AppChecker|https://cnpo.ru/en/solutions/appchecker.php] -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6253) FlowAcitivityColumnPrefix.store(byte[] rowKey, ...) drops timestamp
[ https://issues.apache.org/jira/browse/YARN-6253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sangjin Lee updated YARN-6253: -- Labels: yarn-5355-merge-blocker (was: ) > FlowAcitivityColumnPrefix.store(byte[] rowKey, ...) drops timestamp > --- > > Key: YARN-6253 > URL: https://issues.apache.org/jira/browse/YARN-6253 > Project: Hadoop YARN > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha2 >Reporter: Haibo Chen >Assignee: Haibo Chen > Labels: yarn-5355-merge-blocker > Fix For: 3.0.0-alpha3 > > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-4061) [Fault tolerance] Fault tolerant writer for timeline v2
[ https://issues.apache.org/jira/browse/YARN-4061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889016#comment-15889016 ] Haibo Chen commented on YARN-4061: -- I found this during my attempt to do the coprocessor refactoring and did not see any other similar cases. I recall it was mentioned in one of our fault-tolerant writer discussions that ATS clients set the timestamp explicitly, but I was not sure if this is a real bug or not. That's why I posted it here. Filed YARN-6253. > [Fault tolerance] Fault tolerant writer for timeline v2 > --- > > Key: YARN-4061 > URL: https://issues.apache.org/jira/browse/YARN-4061 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Li Lu >Assignee: Joep Rottinghuis > Labels: YARN-5355, yarn-5355-merge-blocker > Attachments: FaulttolerantwriterforTimelinev2.pdf > > > We need to build a timeline writer that can be resistant to backend storage > down time and timeline collector failures. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6253) FlowAcitivityColumnPrefix.store(byte[] rowKey, ...) drops timestamp
[ https://issues.apache.org/jira/browse/YARN-6253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889014#comment-15889014 ] Haibo Chen commented on YARN-6253: -- It is expected that all timeline clients set the timestamp explicitly on timeline entities, but FlowAcitivityColumnPrefix.store(byte[] rowKey, ...) drops the timestamp. > FlowAcitivityColumnPrefix.store(byte[] rowKey, ...) drops timestamp > --- > > Key: YARN-6253 > URL: https://issues.apache.org/jira/browse/YARN-6253 > Project: Hadoop YARN > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha2 >Reporter: Haibo Chen >Assignee: Haibo Chen > Fix For: 3.0.0-alpha3 > > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
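A hedged illustration of the bug pattern described above (not the actual HBase column-prefix code): a store method accepts a caller-supplied timestamp but never forwards it, so the backend falls back to its own write time. All names are hypothetical.

{code:java}
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration of the bug pattern: the caller-supplied
// timestamp parameter is accepted but silently dropped on the write path.
public class TimestampDropSketch {
  static final class Cell {
    final Object value;
    final long timestamp;
    Cell(Object value, long timestamp) { this.value = value; this.timestamp = timestamp; }
  }

  private final Map<String, Cell> backend = new HashMap<>();

  // Buggy variant: 'timestamp' is ignored and the current time is used instead.
  public void storeBuggy(String rowKey, Object value, Long timestamp) {
    backend.put(rowKey, new Cell(value, System.currentTimeMillis()));
  }

  // Fixed variant: forward the supplied timestamp when the client set one.
  public void storeFixed(String rowKey, Object value, Long timestamp) {
    long ts = (timestamp != null) ? timestamp : System.currentTimeMillis();
    backend.put(rowKey, new Cell(value, ts));
  }
}
{code}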
[jira] [Updated] (YARN-6253) FlowAcitivityColumnPrefix.store(byte[] rowKey, ...) drops timestamp
[ https://issues.apache.org/jira/browse/YARN-6253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibo Chen updated YARN-6253: - Issue Type: Sub-task (was: Bug) Parent: YARN-5355 > FlowAcitivityColumnPrefix.store(byte[] rowKey, ...) drops timestamp > --- > > Key: YARN-6253 > URL: https://issues.apache.org/jira/browse/YARN-6253 > Project: Hadoop YARN > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha2 >Reporter: Haibo Chen >Assignee: Haibo Chen > Fix For: 3.0.0-alpha3 > > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-6253) FlowAcitivityColumnPrefix.store(byte[] rowKey, ...) drops timestamp
Haibo Chen created YARN-6253: Summary: FlowAcitivityColumnPrefix.store(byte[] rowKey, ...) drops timestamp Key: YARN-6253 URL: https://issues.apache.org/jira/browse/YARN-6253 Project: Hadoop YARN Issue Type: Bug Affects Versions: 3.0.0-alpha2 Reporter: Haibo Chen Assignee: Haibo Chen Fix For: 3.0.0-alpha3 -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6218) TestAMRMClient fails with fair scheduler
[ https://issues.apache.org/jira/browse/YARN-6218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15888997#comment-15888997 ] Yufei Gu commented on YARN-6218: Thanks for working on this, [~miklos.szeg...@cloudera.com]. The patches look good generally. Some nits: # Would triggerSchedulingWithNMHeartBeat() or triggerAllocationWithNMHeartBeat() be better names for waitForNMHeartbeat()? And we could get rid of the comments before each invocation of waitForNMHeartbeat() in that case. # Would it be better to use setup() and teardown() instead of startApp() and cancelApp()? These functions do more than just start an app and cancel an app. > TestAMRMClient fails with fair scheduler > > > Key: YARN-6218 > URL: https://issues.apache.org/jira/browse/YARN-6218 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Miklos Szegedi >Assignee: Miklos Szegedi >Priority: Minor > Attachments: YARN-6218.000.patch, YARN-6218.001.patch > > > We ran into this issue on v2. Allocation does not happen in the specified > amount of time. > Error Message > expected:<2> but was:<0> > Stacktrace > java.lang.AssertionError: expected:<2> but was:<0> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at org.junit.Assert.assertEquals(Assert.java:542) > at > org.apache.hadoop.yarn.client.api.impl.TestAMRMClient.testAMRMClientMatchStorage(TestAMRMClient.java:495) -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
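To illustrate review point 2 above, a hedged JUnit 4 sketch of moving per-test start/stop work into @Before/@After methods; the class and helper names are hypothetical and this is not the actual TestAMRMClient code.

{code:java}
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertTrue;

// Hypothetical JUnit 4 sketch of the setup()/teardown() suggestion: the
// per-test lifecycle work lives in @Before/@After instead of helper methods
// each test must remember to call.
public class LifecycleSketchTest {
  private StringBuilder app;   // stand-in for the app/client under test

  @Before
  public void setup() {
    app = new StringBuilder("app-started");   // start app, register client, ...
  }

  @After
  public void teardown() {
    app = null;                                // cancel app, stop client, ...
  }

  @Test
  public void testSomethingAgainstRunningApp() {
    assertTrue(app.toString().startsWith("app-"));
  }
}
{code}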
[jira] [Commented] (YARN-6153) keepContainer does not work when AM retry window is set
[ https://issues.apache.org/jira/browse/YARN-6153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15888981#comment-15888981 ] Jian He commented on YARN-6153: --- [~kyungwan nam], actually, the patch is failing TestAMRestart on branch-2.8, could you take a look and upload a patch for branch-2.8 ? Name the patch as YARN-6153-branch-2.8.patch which will trigger jenkins report on branch-2.8 > keepContainer does not work when AM retry window is set > --- > > Key: YARN-6153 > URL: https://issues.apache.org/jira/browse/YARN-6153 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Affects Versions: 2.7.1 >Reporter: kyungwan nam >Assignee: kyungwan nam > Fix For: 2.8.0, 3.0.0-alpha3 > > Attachments: YARN-6153.001.patch, YARN-6153.002.patch, > YARN-6153.003.patch, YARN-6153.004.patch, YARN-6153.005.patch, > YARN-6153.006.patch > > > yarn.resourcemanager.am.max-attempts has been configured to 2 in my cluster. > I submitted a YARN application (slider app) that keepContainers=true, > attemptFailuresValidityInterval=30. > it did work properly when AM was failed firstly. > all containers launched by previous AM were resynced with new AM (attempt2) > without killing containers. > after 10 minutes, I thought AM failure count was reset by > attemptFailuresValidityInterval (5 minutes). > but, all containers were killed when AM was failed secondly. (new AM attempt3 > was launched properly) -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Reopened] (YARN-6153) keepContainer does not work when AM retry window is set
[ https://issues.apache.org/jira/browse/YARN-6153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jian He reopened YARN-6153: --- > keepContainer does not work when AM retry window is set > --- > > Key: YARN-6153 > URL: https://issues.apache.org/jira/browse/YARN-6153 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Affects Versions: 2.7.1 >Reporter: kyungwan nam >Assignee: kyungwan nam > Fix For: 2.8.0, 3.0.0-alpha3 > > Attachments: YARN-6153.001.patch, YARN-6153.002.patch, > YARN-6153.003.patch, YARN-6153.004.patch, YARN-6153.005.patch, > YARN-6153.006.patch > > > yarn.resourcemanager.am.max-attempts has been configured to 2 in my cluster. > I submitted a YARN application (slider app) that keepContainers=true, > attemptFailuresValidityInterval=30. > it did work properly when AM was failed firstly. > all containers launched by previous AM were resynced with new AM (attempt2) > without killing containers. > after 10 minutes, I thought AM failure count was reset by > attemptFailuresValidityInterval (5 minutes). > but, all containers were killed when AM was failed secondly. (new AM attempt3 > was launched properly) -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
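For context, a hedged sketch of how an application would typically request the behaviour described in this issue via the submission context. It assumes the standard ApplicationSubmissionContext setters and Records factory from the YARN client API; the concrete values only mirror the scenario in the description (2 attempts, keep containers, a 5-minute failure-validity window) and are illustrative.

{code:java}
import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
import org.apache.hadoop.yarn.util.Records;

// Hedged sketch of the submission-side settings involved in this issue.
public class KeepContainersSubmissionSketch {
  public static ApplicationSubmissionContext buildContext() {
    ApplicationSubmissionContext ctx =
        Records.newRecord(ApplicationSubmissionContext.class);
    ctx.setMaxAppAttempts(2);
    ctx.setKeepContainersAcrossApplicationAttempts(true);
    // 5 minutes, in milliseconds; AM failures older than this window
    // should not count against max attempts.
    ctx.setAttemptFailuresValidityInterval(5 * 60 * 1000L);
    return ctx;
  }
}
{code}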
[jira] [Commented] (YARN-4061) [Fault tolerance] Fault tolerant writer for timeline v2
[ https://issues.apache.org/jira/browse/YARN-4061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15888979#comment-15888979 ] Sangjin Lee commented on YARN-4061: --- [~haibochen], is your comment a general comment on what that method is doing, independent of this JIRA? If so, I think that is correct. It must be a bug. Today there is no caller for that flavor of the {{store()}} method, but we should still fix it. Do you mind opening a new JIRA to fix that? Have you checked other {{store()}} methods to see if there is any other similar issue? > [Fault tolerance] Fault tolerant writer for timeline v2 > --- > > Key: YARN-4061 > URL: https://issues.apache.org/jira/browse/YARN-4061 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Li Lu >Assignee: Joep Rottinghuis > Labels: YARN-5355, yarn-5355-merge-blocker > Attachments: FaulttolerantwriterforTimelinev2.pdf > > > We need to build a timeline writer that can be resistant to backend storage > down time and timeline collector failures. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5385) Add a PriorityAgent in ReservationSystem
[ https://issues.apache.org/jira/browse/YARN-5385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15888967#comment-15888967 ] Carlo Curino commented on YARN-5385: [~seanpo03], thanks for addressing some of my comments; we are getting closer, though I still have some doubts. Ok on 1-4, and I am fine postponing 5 to a separate JIRA (I agree it will require some more work, and probably API discussions). Ok on 6 via switch. Regarding 7, I think in general we assume that multiple agents can operate on a single plan, and the "coordination" happens in the plan (see the write locks there, and the double-checking of whether a ReservationAllocation fits). I think we should follow the same style of coordination, thus finding a way to grab locks on the plan for an atomic set of changes. The fact that we have only one agent per plan is not strictly required; it is mostly there because the invocations are almost static. I suggest keeping the coordination in the plan. Also, even if we do what you propose, the lock should only kick in if you have to make room, and not for normal submissions. [~subru], I would like your opinion on this: what do you think about having an explicit grab and release of the plan write locks (from the agent side)? In a sense we would be providing begin-transaction/commit type methods. [~seanpo03], can you do a QPS test for this? These locks worry me a bit. Ok to do more complex solutions like the one suggested in 8 in YARN-6226. Good on 9. I like the default semantics for 10. Good on 11. > Add a PriorityAgent in ReservationSystem > - > > Key: YARN-5385 > URL: https://issues.apache.org/jira/browse/YARN-5385 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacity scheduler, fairscheduler, resourcemanager >Reporter: Sean Po >Assignee: Sean Po > Labels: oct16-hard > Attachments: YARN-5385.v002.patch, YARN-5385.v003.patch, > YARN-5385.v004.patch, YARN-5385.v005.patch, YARN-5385.v006.patch, > YARN-5385.v007.patch, YARN-5385.v1.patch > > > YARN-5211 proposes adding support for generalized priorities for reservations > in the YARN ReservationSystem. This JIRA is a sub-task to track the addition > of a priority agent to accomplish it. Please refer to the design doc in the > parent JIRA for details. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
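To make the begin-transaction/commit idea above concrete, a hedged sketch (not the actual Plan or ReservationSystem API) of an explicit write-lock handle an agent could hold across an atomic set of plan changes; all names are hypothetical.

{code:java}
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch of explicit begin/commit style locking on a plan, so
// an agent can make an atomic set of changes under one write lock.
public class PlanTransactionSketch {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

  /** Begin-transaction: grab the plan write lock and hand back a handle. */
  public AutoCloseable beginWriteTransaction() {
    lock.writeLock().lock();
    return () -> lock.writeLock().unlock();   // commit/release on close
  }

  public void addReservation(String reservationId) {
    // ... mutate plan state; caller is expected to hold the write lock ...
  }

  public static void main(String[] args) throws Exception {
    PlanTransactionSketch plan = new PlanTransactionSketch();
    // The agent wraps its "make room + submit" steps in one critical section.
    try (AutoCloseable tx = plan.beginWriteTransaction()) {
      plan.addReservation("reservation-1");
      plan.addReservation("reservation-2");
    }
  }
}
{code}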
[jira] [Created] (YARN-6252) Suspicious code fragments: comparing with itself
AppChecker created YARN-6252: Summary: Suspicious code fragments: comparing with itself Key: YARN-6252 URL: https://issues.apache.org/jira/browse/YARN-6252 Project: Hadoop YARN Issue Type: Bug Affects Versions: 3.0.0-alpha2 Reporter: AppChecker Hi 1) https://github.com/apache/hadoop/blob/235203dffda1482fb38762fde544c4dd9c3e1fa8/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ReservationId.java#L106 {code:java} return getId() > getId() ? 1 : getId() < getId() ? -1 : 0; {code} strangely, getId() is compared with itself. Probably it should be something like this: {code:java} return this.getId() > other.getId() ? 1 : this.getId() < other.getId() ? -1 : 0; {code} 2) https://github.com/apache/hadoop/blob/235203dffda1482fb38762fde544c4dd9c3e1fa8/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestAppLogAggregatorImpl.java#L260 {code:java} if(filesUploaded.size() != filesUploaded.size()) { fail(errMsgPrefix + ": actual size: " + filesUploaded.size() + " vs " + "expected size: " + filesExpected.size()); } {code} filesUploaded.size() is compared with itself. Probably it should be: {code:java} if(filesUploaded.size() != filesExpected.size()) { fail(errMsgPrefix + ": actual size: " + filesUploaded.size() + " vs " + "expected size: " + filesExpected.size()); } {code} These possible defects were found by the [static code analyzer AppChecker|https://cnpo.ru/en/solutions/appchecker.php] -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
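Beyond the literal fix shown above, a hedged note: the same comparison can be written with Long.compare, which makes both operands explicit and avoids the nested ternary entirely. The class below is a sketch only, not the actual ReservationId code.

{code:java}
// Sketch of an alternative to the nested ternary: Long.compare makes the
// two operands explicit, so an accidental self-comparison is easier to spot.
// Hypothetical stand-in for the real ReservationId.compareTo.
public class ReservationIdCompareSketch
    implements Comparable<ReservationIdCompareSketch> {
  private final long id;

  public ReservationIdCompareSketch(long id) {
    this.id = id;
  }

  public long getId() {
    return id;
  }

  @Override
  public int compareTo(ReservationIdCompareSketch other) {
    return Long.compare(this.getId(), other.getId());
  }
}
{code}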
[jira] [Commented] (YARN-6153) keepContainer does not work when AM retry window is set
[ https://issues.apache.org/jira/browse/YARN-6153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15888928#comment-15888928 ] Hudson commented on YARN-6153: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11323 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11323/]) YARN-6153. KeepContainer does not work when AM retry window is set. (jianhe: rev 235203dffda1482fb38762fde544c4dd9c3e1fa8) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestClientRMService.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/TestRMAppAttemptImplDiagnostics.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockRM.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/TestRMAppAttemptTransitions.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/applicationsmanager/TestAMRestart.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java > keepContainer does not work when AM retry window is set > --- > > Key: YARN-6153 > URL: https://issues.apache.org/jira/browse/YARN-6153 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Affects Versions: 2.7.1 >Reporter: kyungwan nam >Assignee: kyungwan nam > Fix For: 2.8.0, 3.0.0-alpha3 > > Attachments: YARN-6153.001.patch, YARN-6153.002.patch, > YARN-6153.003.patch, YARN-6153.004.patch, YARN-6153.005.patch, > YARN-6153.006.patch > > > yarn.resourcemanager.am.max-attempts has been configured to 2 in my cluster. > I submitted a YARN application (slider app) that keepContainers=true, > attemptFailuresValidityInterval=30. > it did work properly when AM was failed firstly. > all containers launched by previous AM were resynced with new AM (attempt2) > without killing containers. > after 10 minutes, I thought AM failure count was reset by > attemptFailuresValidityInterval (5 minutes). > but, all containers were killed when AM was failed secondly. (new AM attempt3 > was launched properly) -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Assigned] (YARN-6153) keepContainer does not work when AM retry window is set
[ https://issues.apache.org/jira/browse/YARN-6153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jian He reassigned YARN-6153: - Resolution: Fixed Assignee: kyungwan nam Fix Version/s: 3.0.0-alpha3 2.8.0 Target Version/s: 2.8.0, 3.0.0-alpha3 Committed to trunk, branch-2, branch-2.8. thanks [~kyungwan nam] ! > keepContainer does not work when AM retry window is set > --- > > Key: YARN-6153 > URL: https://issues.apache.org/jira/browse/YARN-6153 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Affects Versions: 2.7.1 >Reporter: kyungwan nam >Assignee: kyungwan nam > Fix For: 2.8.0, 3.0.0-alpha3 > > Attachments: YARN-6153.001.patch, YARN-6153.002.patch, > YARN-6153.003.patch, YARN-6153.004.patch, YARN-6153.005.patch, > YARN-6153.006.patch > > > yarn.resourcemanager.am.max-attempts has been configured to 2 in my cluster. > I submitted a YARN application (slider app) that keepContainers=true, > attemptFailuresValidityInterval=30. > it did work properly when AM was failed firstly. > all containers launched by previous AM were resynced with new AM (attempt2) > without killing containers. > after 10 minutes, I thought AM failure count was reset by > attemptFailuresValidityInterval (5 minutes). > but, all containers were killed when AM was failed secondly. (new AM attempt3 > was launched properly) -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6153) keepContainer does not work when AM retry window is set
[ https://issues.apache.org/jira/browse/YARN-6153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1590#comment-1590 ] Jian He commented on YARN-6153: --- test failure not related, committing this. > keepContainer does not work when AM retry window is set > --- > > Key: YARN-6153 > URL: https://issues.apache.org/jira/browse/YARN-6153 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Affects Versions: 2.7.1 >Reporter: kyungwan nam > Attachments: YARN-6153.001.patch, YARN-6153.002.patch, > YARN-6153.003.patch, YARN-6153.004.patch, YARN-6153.005.patch, > YARN-6153.006.patch > > > yarn.resourcemanager.am.max-attempts has been configured to 2 in my cluster. > I submitted a YARN application (slider app) that keepContainers=true, > attemptFailuresValidityInterval=30. > it did work properly when AM was failed firstly. > all containers launched by previous AM were resynced with new AM (attempt2) > without killing containers. > after 10 minutes, I thought AM failure count was reset by > attemptFailuresValidityInterval (5 minutes). > but, all containers were killed when AM was failed secondly. (new AM attempt3 > was launched properly) -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6247) Add SubClusterResolver into FederationStateStoreFacade
[ https://issues.apache.org/jira/browse/YARN-6247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1560#comment-1560 ] Hadoop QA commented on YARN-6247: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 33s{color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 19s{color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 50s{color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 35s{color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 54s{color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 1s{color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s{color} | {color:green} YARN-2915 passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 39s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 21s{color} | {color:red} hadoop-yarn-server-common in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 32s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 43s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 24s{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 32s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 63m 45s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | YARN-6247 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12855205/YARN-6247-YARN-2915.v3.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux 1838f9d1b89e 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | YARN-2915 / 611a7fe | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | javadoc | https://builds.apache.org/job/PreCommit-YARN-Build/1
[jira] [Commented] (YARN-6251) Fix Scheduler locking issue introduced by YARN-6216
[ https://issues.apache.org/jira/browse/YARN-6251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1553#comment-1553 ] Hadoop QA commented on YARN-6251: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 5m 25s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 5s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 40m 17s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 67m 0s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | | Switch statement found in org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(SchedulerEvent) where one case falls through to the next case At FairScheduler.java:where one case falls through to the next case At FairScheduler.java:[lines 1143-1145] | | Failed junit tests | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer | | | hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesDelegationTokens | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | YARN-6251 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12855204/YARN-6251.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 04c58609b6ba 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / e0bb867 | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | findbugs | https://builds.apache.org/job/PreCommit-YARN-Build/15110/artifact/patchprocess/new-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.html | | unit | https://builds.apache.org/job/PreCommit-YARN-Bu
[jira] [Commented] (YARN-6153) keepContainer does not work when AM retry window is set
[ https://issues.apache.org/jira/browse/YARN-6153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1518#comment-1518 ] Hadoop QA commented on YARN-6153: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 26s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 5 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 40m 49s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 65m 54s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing | | | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | YARN-6153 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12855077/YARN-6153.006.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 3bcd0aee8206 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / e0bb867 | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/15109/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/15109/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/15109/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > keepContainer does not work when AM retry window is set > ---
[jira] [Commented] (YARN-6042) Dump scheduler and queue state information into FairScheduler DEBUG log
[ https://issues.apache.org/jira/browse/YARN-6042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1510#comment-1510 ] Hadoop QA commented on YARN-6042: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 48s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 25s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 35s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 9m 3s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 42m 24s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 41s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}147m 29s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.security.TestRaceWhenRelogin | | | hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart | | | hadoop.yarn.server.resourcemanager.TestRMRestart | | | hadoop.yarn.server.resourcemanager.scheduler.fair.TestContinuousScheduling | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | YARN-6042 | | GITHUB PR | https://github.com/apache/hadoop/pull/193 | | Optional Tests | asflicense mvnsite unit compile javac javadoc mvninstall findbugs checkstyle | | uname | Linux 4353b8e1e9b7 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 480b4dd | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/15107/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/15107/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-ser
[jira] [Commented] (YARN-6189) Improve application status log message when RM restarted when app is in NEW state
[ https://issues.apache.org/jira/browse/YARN-6189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15888797#comment-15888797 ] Junping Du commented on YARN-6189: -- Thanks [~xgong] and [~templedf] for review! > Improve application status log message when RM restarted when app is in NEW > state > - > > Key: YARN-6189 > URL: https://issues.apache.org/jira/browse/YARN-6189 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Yesha Vora >Assignee: Junping Du > Fix For: 2.9.0, 3.0.0-alpha3 > > Attachments: YARN-6189.patch, YARN-6189-v2.patch > > > When RM restart/failover happens when application is in NEW state, > application status command for that application prints below stacktrace. > Improve exception message to less confusion to say something like: > "application is not unknown, may be previous submission is > not successful." > {code} > hrt_qa@:/root> yarn application -status application_1470379565464_0001 > 16/08/05 17:24:29 INFO impl.TimelineClientImpl: Timeline service address: > https://hostxxx:8190/ws/v1/timeline/ > 16/08/05 17:24:30 INFO client.AHSProxy: Connecting to Application History > server at hostxxx/xxx:10200 > 16/08/05 17:24:31 WARN retry.RetryInvocationHandler: Exception while invoking > ApplicationClientProtocolPBClientImpl.getApplicationReport over rm1. Not > retrying because try once and fail. > org.apache.hadoop.yarn.exceptions.ApplicationNotFoundException: Application > with id 'application_1470379565464_0001' doesn't exist in RM. > at > org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplicationReport(ClientRMService.java:331) > at > org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getApplicationReport(ApplicationClientProtocolPBServiceImpl.java:175) > at > org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:417) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307) > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:423) > at > org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53) > at > org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:101) > at > org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getApplicationReport(ApplicationClientProtocolPBClientImpl.java:194) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:278) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:194) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:176) > at com.sun.proxy.$Proxy18.getApplicationReport(Unknown Source) > at > org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getApplicationReport(YarnClientImpl.java:436) > at > org.apache.hadoop.yarn.client.cli.ApplicationCLI.printApplicationReport(ApplicationCLI.java:481) > at > org.apache.hadoop.yarn.client.cli.ApplicationCLI.run(ApplicationCLI.java:160) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90) > at > org.apache.hadoop.yarn.client.cli.ApplicationCLI.main(ApplicationCLI.java:83) > Caused by: > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.yarn.exceptions.ApplicationNotFoundException): > Application with id 'application_1470379565464_0001' doesn't exist in R
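For context on the improvement discussed in YARN-6189 above, the sketch below shows the kind of clearer ApplicationNotFoundException message being proposed when an application id is unknown to a restarted RM. It is a hypothetical illustration, not the committed patch; the exact wording and the surrounding ClientRMService code may differ.
{code}
// Hypothetical sketch, not the committed YARN-6189 change: build a less confusing
// ApplicationNotFoundException when the RM has no record of the application, e.g.
// because it restarted while the app was still in NEW state.
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.exceptions.ApplicationNotFoundException;

final class UnknownAppMessageSketch {
  static ApplicationNotFoundException unknownApp(ApplicationId appId) {
    return new ApplicationNotFoundException(
        "Application with id '" + appId + "' doesn't exist in RM. "
            + "Please check that the job submission was successful.");
  }
}
{code}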
[jira] [Updated] (YARN-6247) Add SubClusterResolver into FederationStateStoreFacade
[ https://issues.apache.org/jira/browse/YARN-6247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Botong Huang updated YARN-6247: --- Attachment: YARN-6247-YARN-2915.v3.patch Thread safe comment added in SubClusterResolver interface > Add SubClusterResolver into FederationStateStoreFacade > -- > > Key: YARN-6247 > URL: https://issues.apache.org/jira/browse/YARN-6247 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Botong Huang >Assignee: Botong Huang >Priority: Minor > Attachments: YARN-6247-YARN-2915.v1.patch, > YARN-6247-YARN-2915.v2.patch, YARN-6247-YARN-2915.v3.patch > > > Add SubClusterResolver into FederationStateStoreFacade. Since the resolver > might involve some overhead (read file in the background, potentially > periodically), it is good to put it inside FederationStateStoreFacade > singleton, so that only one instance will be created. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
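The description above argues for keeping a single resolver instance inside the facade singleton because building one is relatively expensive. A minimal sketch of that shape follows; all names other than the SubClusterResolver role are illustrative assumptions, and this is not the actual FederationStateStoreFacade code.
{code}
// Minimal sketch of a singleton-owned resolver, as described above; illustrative names only.
public final class FacadeSingletonSketch {

  /** Illustrative stand-in for the SubClusterResolver interface. */
  public interface ResolverSketch {
    String getSubClusterForNode(String nodeHostName);
  }

  private static final FacadeSingletonSketch INSTANCE = new FacadeSingletonSketch();

  private final ResolverSketch resolver;

  private FacadeSingletonSketch() {
    // Constructed exactly once for the whole process, so the cost of loading (and
    // periodically refreshing) the node-to-subcluster mapping is paid by one instance.
    this.resolver = hostName -> "subcluster-placeholder";
  }

  public static FacadeSingletonSketch getInstance() {
    return INSTANCE;
  }

  public ResolverSketch getSubClusterResolver() {
    return resolver;
  }
}
{code}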
[jira] [Commented] (YARN-4061) [Fault tolerance] Fault tolerant writer for timeline v2
[ https://issues.apache.org/jira/browse/YARN-4061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15888753#comment-15888753 ] Haibo Chen commented on YARN-4061: -- I notice that in FlowActivityColumnPrefix.store(byte[] rowKey, TypedBufferedMutator tableMutator, String qualifier, Long timestamp, Object inputValue, Attribute... attributes), the timestamp is dropped (column.store gets a null for the timestamp). This is not in line with one of our earlier discussions, in which I recall being told that ATS clients set the timestamp explicitly, so there is no out-of-order problem in our fault-tolerant writer. Am I missing something? > [Fault tolerance] Fault tolerant writer for timeline v2 > --- > > Key: YARN-4061 > URL: https://issues.apache.org/jira/browse/YARN-4061 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Li Lu >Assignee: Joep Rottinghuis > Labels: YARN-5355, yarn-5355-merge-blocker > Attachments: FaulttolerantwriterforTimelinev2.pdf > > > We need to build a timeline writer that can be resistant to backend storage > down time and timeline collector failures. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6251) Fix Scheduler locking issue introduced by YARN-6216
[ https://issues.apache.org/jira/browse/YARN-6251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arun Suresh updated YARN-6251: -- Attachment: YARN-6251.001.patch Uploading fix. The deadlock occurs because the {{completeContainer()}} method (used to flush resources of temporary containers created during the update) is called in the AM's allocate thread, which tries to grab the locks on the queue and the app; the Scheduler thread can contend for the same locks in the reverse order while handling a NODE_UPDATE at the same time. The proposed solution: instead of calling {{completeContainer()}} directly, we send it as an event for the Scheduler to handle. This ensures that the Scheduler is the only entity that takes the lock. > Fix Scheduler locking issue introduced by YARN-6216 > --- > > Key: YARN-6251 > URL: https://issues.apache.org/jira/browse/YARN-6251 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Arun Suresh >Assignee: Arun Suresh > Fix For: 3.0.0-alpha3 > > Attachments: YARN-6251.001.patch > > > Opening to track a locking issue that was uncovered when running a custom SLS > AMSimulator. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
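The comment above describes the fix pattern: the AM allocate path stops completing the container itself and instead hands the work to the scheduler's single event-processing thread. The sketch below illustrates that pattern with generic, illustrative names; it is not the YARN-6251 patch and deliberately omits the real SchedulerEvent and dispatcher classes.
{code}
// Hedged sketch of the approach described above: the allocate path only enqueues an
// event, and the single scheduler event thread performs the completion, so the queue/app
// locks are only ever taken by one thread. All names here are illustrative.
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

final class SchedulerEventLoopSketch {

  interface SchedulerEvent { void process(); }

  private final BlockingQueue<SchedulerEvent> events = new LinkedBlockingQueue<>();

  /** Called from the AM allocate thread: no queue/app locks are taken here. */
  void releaseTempContainer(String containerId) {
    events.offer(() -> completeContainer(containerId));
  }

  /** Single scheduler event thread: the only place queue/app locks are acquired. */
  void runEventLoop() throws InterruptedException {
    while (!Thread.currentThread().isInterrupted()) {
      events.take().process();
    }
  }

  private void completeContainer(String containerId) {
    // Real code would lock the queue and the application here; because only the
    // event-loop thread reaches this point, the reverse-order contention with
    // NODE_UPDATE handling cannot occur.
  }
}
{code}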
[jira] [Assigned] (YARN-5330) SharingPolicy enhancements required to support recurring reservations in the YARN ReservationSystem
[ https://issues.apache.org/jira/browse/YARN-5330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Subru Krishnan reassigned YARN-5330: Assignee: (was: Subru Krishnan) > SharingPolicy enhancements required to support recurring reservations in the > YARN ReservationSystem > --- > > Key: YARN-5330 > URL: https://issues.apache.org/jira/browse/YARN-5330 > Project: Hadoop YARN > Issue Type: Sub-task > Components: resourcemanager >Reporter: Subru Krishnan > > YARN-5326 proposes adding native support for recurring reservations in the > YARN ReservationSystem. This JIRA is a sub-task to track the changes required > in SharingPolicy to accomplish it. Please refer to the design doc in the > parent JIRA for details. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Assigned] (YARN-5330) SharingPolicy enhancements required to support recurring reservations in the YARN ReservationSystem
[ https://issues.apache.org/jira/browse/YARN-5330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Subru Krishnan reassigned YARN-5330: Assignee: Subru Krishnan (was: Sangeetha Abdu Jyothi) > SharingPolicy enhancements required to support recurring reservations in the > YARN ReservationSystem > --- > > Key: YARN-5330 > URL: https://issues.apache.org/jira/browse/YARN-5330 > Project: Hadoop YARN > Issue Type: Sub-task > Components: resourcemanager >Reporter: Subru Krishnan >Assignee: Subru Krishnan > > YARN-5326 proposes adding native support for recurring reservations in the > YARN ReservationSystem. This JIRA is a sub-task to track the changes required > in SharingPolicy to accomplish it. Please refer to the design doc in the > parent JIRA for details. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Assigned] (YARN-5329) ReservationAgent enhancements required to support recurring reservations in the YARN ReservationSystem
[ https://issues.apache.org/jira/browse/YARN-5329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Subru Krishnan reassigned YARN-5329: Assignee: (was: Sangeetha Abdu Jyothi) > ReservationAgent enhancements required to support recurring reservations in > the YARN ReservationSystem > -- > > Key: YARN-5329 > URL: https://issues.apache.org/jira/browse/YARN-5329 > Project: Hadoop YARN > Issue Type: Sub-task > Components: resourcemanager >Reporter: Subru Krishnan > > YARN-5326 proposes adding native support for recurring reservations in the > YARN ReservationSystem. This JIRA is a sub-task to track the changes required > in ReservationAgent to accomplish it. Please refer to the design doc in the > parent JIRA for details. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Assigned] (YARN-5328) InMemoryPlan enhancements required to support recurring reservations in the YARN ReservationSystem
[ https://issues.apache.org/jira/browse/YARN-5328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Subru Krishnan reassigned YARN-5328: Assignee: Subru Krishnan (was: Sangeetha Abdu Jyothi) > InMemoryPlan enhancements required to support recurring reservations in the > YARN ReservationSystem > -- > > Key: YARN-5328 > URL: https://issues.apache.org/jira/browse/YARN-5328 > Project: Hadoop YARN > Issue Type: Sub-task > Components: resourcemanager >Reporter: Subru Krishnan >Assignee: Subru Krishnan > > YARN-5326 proposes adding native support for recurring reservations in the > YARN ReservationSystem. This JIRA is a sub-task to track the changes required > in InMemoryPlan to accomplish it. Please refer to the design doc in the > parent JIRA for details. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Assigned] (YARN-5516) Add REST API for periodicity
[ https://issues.apache.org/jira/browse/YARN-5516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Subru Krishnan reassigned YARN-5516: Assignee: Subru Krishnan (was: Sangeetha Abdu Jyothi) > Add REST API for periodicity > > > Key: YARN-5516 > URL: https://issues.apache.org/jira/browse/YARN-5516 > Project: Hadoop YARN > Issue Type: Sub-task > Components: resourcemanager >Reporter: Sangeetha Abdu Jyothi >Assignee: Subru Krishnan > > YARN-5516 changing REST API of the reservation system to support periodicity. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (YARN-6247) Add SubClusterResolver into FederationStateStoreFacade
[ https://issues.apache.org/jira/browse/YARN-6247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15888728#comment-15888728 ] Subru Krishnan edited comment on YARN-6247 at 2/28/17 7:33 PM: --- Thanks [~botong] for the patch. It looks fairly straightforward. I have only one suggestion; can you clearly call out in {{SubClusterResolver}} interface (Javadoc) that implementing classes are expected to be thread-safe. was (Author: subru): Thanks [~botong] for the patch. It looks fairly straightforward. I have only one suggestion; can you clearly call out in {{SubClusterResolver}} interface that implementing classes are expected to be thread-safe. > Add SubClusterResolver into FederationStateStoreFacade > -- > > Key: YARN-6247 > URL: https://issues.apache.org/jira/browse/YARN-6247 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Botong Huang >Assignee: Botong Huang >Priority: Minor > Attachments: YARN-6247-YARN-2915.v1.patch, > YARN-6247-YARN-2915.v2.patch > > > Add SubClusterResolver into FederationStateStoreFacade. Since the resolver > might involve some overhead (read file in the background, potentially > periodically), it is good to put it inside FederationStateStoreFacade > singleton, so that only one instance will be created. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
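The review ask above is essentially a documentation contract. A sketch of the kind of Javadoc note being requested follows; the method shown is an illustrative assumption, since the real SubClusterResolver interface in the YARN-2915 branch is not reproduced here.
{code}
/**
 * Resolves a node to the federation sub-cluster that owns it.
 *
 * <p>Implementations are expected to be thread-safe: a single instance is shared via the
 * FederationStateStoreFacade singleton and may be queried concurrently while its mapping
 * is refreshed in the background.
 */
public interface SubClusterResolverSketch {
  // Illustrative method only; the actual interface defines its own resolution methods.
  String getSubClusterForNode(String nodeHostName);
}
{code}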
[jira] [Commented] (YARN-6247) Add SubClusterResolver into FederationStateStoreFacade
[ https://issues.apache.org/jira/browse/YARN-6247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15888728#comment-15888728 ] Subru Krishnan commented on YARN-6247: -- Thanks [~botong] for the patch. It looks fairly straightforward. I have only one suggestion; can you clearly call out in {{SubClusterResolver}} interface that implementing classes are expected to be thread-safe. > Add SubClusterResolver into FederationStateStoreFacade > -- > > Key: YARN-6247 > URL: https://issues.apache.org/jira/browse/YARN-6247 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Botong Huang >Assignee: Botong Huang >Priority: Minor > Attachments: YARN-6247-YARN-2915.v1.patch, > YARN-6247-YARN-2915.v2.patch > > > Add SubClusterResolver into FederationStateStoreFacade. Since the resolver > might involve some overhead (read file in the background, potentially > periodically), it is good to put it inside FederationStateStoreFacade > singleton, so that only one instance will be created. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6251) Fix Scheduler locking issue introduced by YARN-6216
[ https://issues.apache.org/jira/browse/YARN-6251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arun Suresh updated YARN-6251: -- Summary: Fix Scheduler locking issue introduced by YARN-6216 (was: Fix Scheduler locking issue introduced by YARN-6126) > Fix Scheduler locking issue introduced by YARN-6216 > --- > > Key: YARN-6251 > URL: https://issues.apache.org/jira/browse/YARN-6251 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Arun Suresh >Assignee: Arun Suresh > Fix For: 3.0.0-alpha3 > > > Opening to track a locking issue that was uncovered when running a custom SLS > AMSimulator. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6251) Fix Scheduler locking issue introduced by YARN-6126
[ https://issues.apache.org/jira/browse/YARN-6251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arun Suresh updated YARN-6251: -- Summary: Fix Scheduler locking issue introduced by YARN-6126 (was: Fix Scheduler locking issue introduced by YARN-6216) > Fix Scheduler locking issue introduced by YARN-6126 > --- > > Key: YARN-6251 > URL: https://issues.apache.org/jira/browse/YARN-6251 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Arun Suresh >Assignee: Arun Suresh > Fix For: 3.0.0-alpha3 > > > Opening to track a locking issue that was uncovered when running a custom SLS > AMSimulator. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6247) Add SubClusterResolver into FederationStateStoreFacade
[ https://issues.apache.org/jira/browse/YARN-6247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Subru Krishnan updated YARN-6247: - Issue Type: Sub-task (was: Task) Parent: YARN-2915 > Add SubClusterResolver into FederationStateStoreFacade > -- > > Key: YARN-6247 > URL: https://issues.apache.org/jira/browse/YARN-6247 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Botong Huang >Assignee: Botong Huang >Priority: Minor > Attachments: YARN-6247-YARN-2915.v1.patch, > YARN-6247-YARN-2915.v2.patch > > > Add SubClusterResolver into FederationStateStoreFacade. Since the resolver > might involve some overhead (read file in the background, potentially > periodically), it is good to put it inside FederationStateStoreFacade > singleton, so that only one instance will be created. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6251) Fix Scheduler locking issue introduced by YARN-6216
[ https://issues.apache.org/jira/browse/YARN-6251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15888722#comment-15888722 ] Arun Suresh commented on YARN-6251: --- Posting relevant section of the jstack dump {noformat} Found one Java-level deadlock: = "pool-7-thread-88": waiting for ownable synchronizer 0x83635b98, (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync), which is held by "SchedulerEventDispatcher:Event Processor" "SchedulerEventDispatcher:Event Processor": waiting for ownable synchronizer 0xf3f6b808, (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync), which is held by "pool-7-thread-88" Java stack information for the threads listed above: === "pool-7-thread-88": at sun.misc.Unsafe.park(Native Method) - parking to wait for <0x83635b98> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199) at java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.lock(ReentrantReadWriteLock.java:943) at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.completedContainer(LeafQueue.java:1520) at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.completedContainerInternal(CapacityScheduler.java:1600) at org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler.completedContainer(AbstractYarnScheduler.java:602) at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.pullNewlyUpdatedContainers(SchedulerApplicationAttempt.java:852) at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.pullNewlyIncreasedContainers(SchedulerApplicationAttempt.java:789) at org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.getAllocation(FiCaSchedulerApp.java:693) at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocate(CapacityScheduler.java:925) at org.apache.hadoop.yarn.sls.scheduler.SLSCapacityScheduler.allocate(SLSCapacityScheduler.java:191) at org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocateInternal(ApplicationMasterService.java:581) at org.apache.hadoop.yarn.server.resourcemanager.OpportunisticContainerAllocatorAMService.allocateInternal(OpportunisticContainerAllocatorAMService.java:254) at org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(ApplicationMasterService.java:446) - locked <0xf3fe59c0> (a org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService$AllocateResponseLock) at org.apache.hadoop.yarn.sls.appmaster.PromotingAMSimulator$1.run(PromotingAMSimulator.java:267) at org.apache.hadoop.yarn.sls.appmaster.PromotingAMSimulator$1.run(PromotingAMSimulator.java:264) at org.apache.hadoop.yarn.sls.appmaster.AMSimulator.middleStep(AMSimulator.java:179) at org.apache.hadoop.yarn.sls.scheduler.TaskRunner$Task.run(TaskRunner.java:96) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 
"SchedulerEventDispatcher:Event Processor": at sun.misc.Unsafe.park(Native Method) - parking to wait for <0xf3f6b808> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireShared(AbstractQueuedSynchronizer.java:967) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(AbstractQueuedSynchronizer.java:1283) at java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(ReentrantReadWriteLock.java:727) at org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.getHeadroom(FiCaSchedulerApp.java:755) at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.allocateResource(LeafQueue.java:1578) at org.apache.hadoop.yarn.server.resourcemanage
[jira] [Created] (YARN-6251) Fix Scheduler locking issue introduced by YARN-6216
Arun Suresh created YARN-6251: - Summary: Fix Scheduler locking issue introduced by YARN-6216 Key: YARN-6251 URL: https://issues.apache.org/jira/browse/YARN-6251 Project: Hadoop YARN Issue Type: Bug Reporter: Arun Suresh Opening to track a locking issue that was uncovered when running a custom SLS AMSimulator. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6251) Fix Scheduler locking issue introduced by YARN-6216
[ https://issues.apache.org/jira/browse/YARN-6251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arun Suresh updated YARN-6251: -- Fix Version/s: 3.0.0-alpha3 > Fix Scheduler locking issue introduced by YARN-6216 > --- > > Key: YARN-6251 > URL: https://issues.apache.org/jira/browse/YARN-6251 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Arun Suresh >Assignee: Arun Suresh > Fix For: 3.0.0-alpha3 > > > Opening to track a locking issue that was uncovered when running a custom SLS > AMSimulator. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6248) Killing an app with pending container requests leaves the user in UsersManager
[ https://issues.apache.org/jira/browse/YARN-6248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Payne updated YARN-6248: - Attachment: YARN-6248.001.patch Uploading patch. [~sunilg] and [~leftnoteasy], can you please have a look? {{UsersManager#updateNonActiveUsersResourceUsage}} was putting the user back into the {{users}} map after it had been removed by {{removeUser}}. > Killing an app with pending container requests leaves the user in UsersManager > -- > > Key: YARN-6248 > URL: https://issues.apache.org/jira/browse/YARN-6248 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 3.0.0-alpha3 >Reporter: Eric Payne >Assignee: Eric Payne > Attachments: User Left Over.jpg, YARN-6248.001.patch > > > If an app is still asking for resources when it is killed, the user is left > in the UsersManager structure and shows up on the GUI. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
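Eric's note above pins the bug on an update path that re-inserts a user after removal. The sketch below shows that failure mode, and the defensive shape of a fix, in miniature; names and fields are illustrative assumptions and this is not the actual UsersManager code.
{code}
// Hedged sketch of the failure mode described above: a get-or-create lookup silently
// re-inserts a user that removeUser() has already dropped.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

final class UsersManagerSketch {
  static final class User { long pendingMemory; }

  private final Map<String, User> users = new ConcurrentHashMap<>();

  void removeUser(String userName) {
    users.remove(userName);
  }

  // Buggy shape: computeIfAbsent re-creates the user even after removal.
  void updateNonActiveUsersResourceUsageBuggy(String userName, long delta) {
    users.computeIfAbsent(userName, k -> new User()).pendingMemory += delta;
  }

  // Fixed shape: only touch users that still exist.
  void updateNonActiveUsersResourceUsage(String userName, long delta) {
    User user = users.get(userName);
    if (user == null) {
      return; // app was killed and the user already removed; nothing to update
    }
    user.pendingMemory += delta;
  }
}
{code}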
[jira] [Assigned] (YARN-6251) Fix Scheduler locking issue introduced by YARN-6216
[ https://issues.apache.org/jira/browse/YARN-6251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arun Suresh reassigned YARN-6251: - Assignee: Arun Suresh > Fix Scheduler locking issue introduced by YARN-6216 > --- > > Key: YARN-6251 > URL: https://issues.apache.org/jira/browse/YARN-6251 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Arun Suresh >Assignee: Arun Suresh > Fix For: 3.0.0-alpha3 > > > Opening to track a locking issue that was uncovered when running a custom SLS > AMSimulator. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-3471) Fix timeline client retry
[ https://issues.apache.org/jira/browse/YARN-3471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15888706#comment-15888706 ] Haibo Chen commented on YARN-3471: -- [~varun_saxena] Are you working on this? If not, I'd like to take it. > Fix timeline client retry > - > > Key: YARN-3471 > URL: https://issues.apache.org/jira/browse/YARN-3471 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: 2.8.0 >Reporter: Zhijie Shen >Assignee: Varun Saxena > Labels: YARN-5355 > Attachments: YARN-3471.1.patch, YARN-3471.2.patch > > > I found that the client retry has some problems: > 1. The new put methods will retry on all exception, but they should only do > it upon ConnectException. > 2. We can reuse TimelineClientConnectionRetry to simplify the retry logic. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6189) Improve application status log message when RM restarted when app is in NEW state
[ https://issues.apache.org/jira/browse/YARN-6189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15888699#comment-15888699 ] Hudson commented on YARN-6189: -- FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #11322 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11322/]) YARN-6189: Improve application status log message when RM restarted when (xgong: rev e0bb867c3fa638c9f689ee0b044b400481cf02b5) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestClientRMService.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java > Improve application status log message when RM restarted when app is in NEW > state > - > > Key: YARN-6189 > URL: https://issues.apache.org/jira/browse/YARN-6189 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Yesha Vora >Assignee: Junping Du > Fix For: 2.9.0, 3.0.0-alpha3 > > Attachments: YARN-6189.patch, YARN-6189-v2.patch > > > When RM restart/failover happens when application is in NEW state, > application status command for that application prints below stacktrace. > Improve exception message to less confusion to say something like: > "application is not unknown, may be previous submission is > not successful." > {code} > hrt_qa@:/root> yarn application -status application_1470379565464_0001 > 16/08/05 17:24:29 INFO impl.TimelineClientImpl: Timeline service address: > https://hostxxx:8190/ws/v1/timeline/ > 16/08/05 17:24:30 INFO client.AHSProxy: Connecting to Application History > server at hostxxx/xxx:10200 > 16/08/05 17:24:31 WARN retry.RetryInvocationHandler: Exception while invoking > ApplicationClientProtocolPBClientImpl.getApplicationReport over rm1. Not > retrying because try once and fail. > org.apache.hadoop.yarn.exceptions.ApplicationNotFoundException: Application > with id 'application_1470379565464_0001' doesn't exist in RM. 
> at > org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplicationReport(ClientRMService.java:331) > at > org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getApplicationReport(ApplicationClientProtocolPBServiceImpl.java:175) > at > org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:417) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307) > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:423) > at > org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53) > at > org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:101) > at > org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getApplicationReport(ApplicationClientProtocolPBClientImpl.java:194) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:278) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:194) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:176) > at com.sun.proxy.$Proxy18.getApplicationReport(Unknown Source) > at > org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getApplicationReport(YarnClientImpl.java:436) > at > org.apache.hadoop.yarn.client.cli.ApplicationCL
[jira] [Commented] (YARN-6216) Unify Container Resizing code paths with Container Updates making it scheduler agnostic
[ https://issues.apache.org/jira/browse/YARN-6216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15888696#comment-15888696 ] Hudson commented on YARN-6216: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11321 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11321/]) YARN-6216. Unify Container Resizing code paths with Container Updates (wangda: rev eac6b4c35c50e555c2f1b5f913bb2c4d839f1ff4) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/FifoScheduler.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestContainerResizing.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestIncreaseAllocationExpirer.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestParentQueue.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueue.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/scheduler/SchedulerRequestKey.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/RMContainer.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/RMContainerImpl.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerNode.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/ContainerUpdateContext.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMServerUtils.java * (edit) 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/ResourceSchedulerWrapper.java * (delete) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/RMContainerChangeResourceEvent.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/ContainerAllocationProposal.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ParentQueue.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/allocator/ContainerAllocator.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestChildQueueOrder.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apach
[jira] [Commented] (YARN-6040) Introduce api independent PendingAsk to replace usage of ResourceRequest within Scheduler classes
[ https://issues.apache.org/jira/browse/YARN-6040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15888681#comment-15888681 ] Hadoop QA commented on YARN-6040: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} docker {color} | {color:red} 5m 6s{color} | {color:red} Docker failed to build yetus/hadoop:b59b8b7. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | YARN-6040 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12855194/YARN-6040.branch-2.007.patch | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/15108/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Introduce api independent PendingAsk to replace usage of ResourceRequest > within Scheduler classes > - > > Key: YARN-6040 > URL: https://issues.apache.org/jira/browse/YARN-6040 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Wangda Tan > Fix For: 3.0.0-alpha2 > > Attachments: YARN-6040.001.patch, YARN-6040.002.patch, > YARN-6040.003.patch, YARN-6040.004.patch, YARN-6040.005.patch, > YARN-6040.006.patch, YARN-6040.007.patch, YARN-6040.branch-2.007.patch > > > As mentioned by YARN-5906, currently schedulers are using ResourceRequest > heavily so it will be very hard to adopt the new PowerfulResourceRequest > (YARN-4902). > This JIRA is the 2nd step of refactoring, which remove usage of > ResourceRequest from AppSchedulingInfo / SchedulerApplicationAttempt. Instead > of returning ResourceRequest, it returns a lightweight and API-independent > object - {{PendingAsk}}. > The only remained ResourceRequest API of AppSchedulingInfo will be used by > web service to get list of ResourceRequests. > So after this patch, usage of ResourceRequest will be isolated inside > AppSchedulingInfo, so it will be more flexible to update internal data > structure and upgrade old ResourceRequest API to new. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6189) Improve application status log message when RM restarted when app is in NEW state
[ https://issues.apache.org/jira/browse/YARN-6189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15888679#comment-15888679 ] Xuan Gong commented on YARN-6189: - +1 LGTM. Checking this in > Improve application status log message when RM restarted when app is in NEW > state > - > > Key: YARN-6189 > URL: https://issues.apache.org/jira/browse/YARN-6189 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Yesha Vora >Assignee: Junping Du > Attachments: YARN-6189.patch, YARN-6189-v2.patch > > > When RM restart/failover happens when application is in NEW state, > application status command for that application prints below stacktrace. > Improve exception message to less confusion to say something like: > "application is not unknown, may be previous submission is > not successful." > {code} > hrt_qa@:/root> yarn application -status application_1470379565464_0001 > 16/08/05 17:24:29 INFO impl.TimelineClientImpl: Timeline service address: > https://hostxxx:8190/ws/v1/timeline/ > 16/08/05 17:24:30 INFO client.AHSProxy: Connecting to Application History > server at hostxxx/xxx:10200 > 16/08/05 17:24:31 WARN retry.RetryInvocationHandler: Exception while invoking > ApplicationClientProtocolPBClientImpl.getApplicationReport over rm1. Not > retrying because try once and fail. > org.apache.hadoop.yarn.exceptions.ApplicationNotFoundException: Application > with id 'application_1470379565464_0001' doesn't exist in RM. > at > org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplicationReport(ClientRMService.java:331) > at > org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getApplicationReport(ApplicationClientProtocolPBServiceImpl.java:175) > at > org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:417) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307) > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:423) > at > org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53) > at > org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:101) > at > org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getApplicationReport(ApplicationClientProtocolPBClientImpl.java:194) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:278) > at > 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:194) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:176) > at com.sun.proxy.$Proxy18.getApplicationReport(Unknown Source) > at > org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getApplicationReport(YarnClientImpl.java:436) > at > org.apache.hadoop.yarn.client.cli.ApplicationCLI.printApplicationReport(ApplicationCLI.java:481) > at > org.apache.hadoop.yarn.client.cli.ApplicationCLI.run(ApplicationCLI.java:160) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90) > at > org.apache.hadoop.yarn.client.cli.ApplicationCLI.main(ApplicationCLI.java:83) > Caused by: > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.yarn.exceptions.ApplicationNotFoundException): > Application with id 'application_1470379565464_0001' doesn't exist in RM. > at > org.apache.hadoop.yarn.server.resourcemanager.Cl
[jira] [Commented] (YARN-6216) Unify Container Resizing code paths with Container Updates making it scheduler agnostic
[ https://issues.apache.org/jira/browse/YARN-6216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15888667#comment-15888667 ] Wangda Tan commented on YARN-6216: -- Committed to trunk, thanks [~asuresh], there're some dependencies to backport to branch-2, including YARN-6040, will backport the dependencies first. > Unify Container Resizing code paths with Container Updates making it > scheduler agnostic > --- > > Key: YARN-6216 > URL: https://issues.apache.org/jira/browse/YARN-6216 > Project: Hadoop YARN > Issue Type: Bug > Components: capacity scheduler, fairscheduler, resourcemanager >Affects Versions: 3.0.0-alpha2 >Reporter: Arun Suresh >Assignee: Arun Suresh > Fix For: 3.0.0-alpha3 > > Attachments: YARN-6216.001.patch, YARN-6216.002.patch, > YARN-6216.003.patch > > > YARN-5959 introduced an {{ContainerUpdateContext}} which can be used to > update the ExecutionType of a container in a scheduler agnostic manner. As > mentioned in that JIRA, extending that to encompass Container resizing is > trivial. > This JIRA proposes to remove all the CapacityScheduler specific code paths. > (CapacityScheduler, CSQueue, FicaSchedulerApp etc.) and modify the code to > use the framework introduced in YARN-5959 without any loss of functionality. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6040) Introduce api independent PendingAsk to replace usage of ResourceRequest within Scheduler classes
[ https://issues.apache.org/jira/browse/YARN-6040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wangda Tan updated YARN-6040: - Attachment: YARN-6040.branch-2.007.patch > Introduce api independent PendingAsk to replace usage of ResourceRequest > within Scheduler classes > - > > Key: YARN-6040 > URL: https://issues.apache.org/jira/browse/YARN-6040 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Wangda Tan > Fix For: 3.0.0-alpha2 > > Attachments: YARN-6040.001.patch, YARN-6040.002.patch, > YARN-6040.003.patch, YARN-6040.004.patch, YARN-6040.005.patch, > YARN-6040.006.patch, YARN-6040.007.patch, YARN-6040.branch-2.007.patch > > > As mentioned by YARN-5906, currently schedulers are using ResourceRequest > heavily so it will be very hard to adopt the new PowerfulResourceRequest > (YARN-4902). > This JIRA is the 2nd step of refactoring, which remove usage of > ResourceRequest from AppSchedulingInfo / SchedulerApplicationAttempt. Instead > of returning ResourceRequest, it returns a lightweight and API-independent > object - {{PendingAsk}}. > The only remained ResourceRequest API of AppSchedulingInfo will be used by > web service to get list of ResourceRequests. > So after this patch, usage of ResourceRequest will be isolated inside > AppSchedulingInfo, so it will be more flexible to update internal data > structure and upgrade old ResourceRequest API to new. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Reopened] (YARN-6040) Introduce api independent PendingAsk to replace usage of ResourceRequest within Scheduler classes
[ https://issues.apache.org/jira/browse/YARN-6040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wangda Tan reopened YARN-6040: -- Reopen to run Jenkins against branch-2 > Introduce api independent PendingAsk to replace usage of ResourceRequest > within Scheduler classes > - > > Key: YARN-6040 > URL: https://issues.apache.org/jira/browse/YARN-6040 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Wangda Tan > Fix For: 3.0.0-alpha2 > > Attachments: YARN-6040.001.patch, YARN-6040.002.patch, > YARN-6040.003.patch, YARN-6040.004.patch, YARN-6040.005.patch, > YARN-6040.006.patch, YARN-6040.007.patch > > > As mentioned by YARN-5906, currently schedulers are using ResourceRequest > heavily so it will be very hard to adopt the new PowerfulResourceRequest > (YARN-4902). > This JIRA is the 2nd step of refactoring, which remove usage of > ResourceRequest from AppSchedulingInfo / SchedulerApplicationAttempt. Instead > of returning ResourceRequest, it returns a lightweight and API-independent > object - {{PendingAsk}}. > The only remained ResourceRequest API of AppSchedulingInfo will be used by > web service to get list of ResourceRequests. > So after this patch, usage of ResourceRequest will be isolated inside > AppSchedulingInfo, so it will be more flexible to update internal data > structure and upgrade old ResourceRequest API to new. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
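To make the refactoring goal described in YARN-6040 above concrete, here is a minimal sketch of a lightweight, API-independent pending-ask value object. The field and method names are assumptions for illustration and may not match the actual PendingAsk class; the point is that scheduler internals only need "how big is one allocation and how many are still outstanding", not the full ResourceRequest API.
{code}
// Hedged sketch of a PendingAsk-style value object; illustrative names only.
import org.apache.hadoop.yarn.api.records.Resource;

public final class PendingAskSketch {
  private final Resource perAllocationResource;
  private final int count;

  public PendingAskSketch(Resource perAllocationResource, int count) {
    this.perAllocationResource = perAllocationResource;
    this.count = count;
  }

  public Resource getPerAllocationResource() {
    return perAllocationResource;
  }

  public int getCount() {
    return count;
  }

  @Override
  public String toString() {
    return "<per-allocation-resource=" + perAllocationResource + ", count=" + count + ">";
  }
}
{code}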
[jira] [Commented] (YARN-6027) Support fromid(offset) filter for /flows API
[ https://issues.apache.org/jira/browse/YARN-6027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15888660#comment-15888660 ] Varun Saxena commented on YARN-6027: bq. Specifically my proposal is to change the existing calls by TimelineUIDConverter to use the new constants in TimelineReaderUtils (or simply call split(String)). Makes sense to me. +1 to your proposal. > Support fromid(offset) filter for /flows API > > > Key: YARN-6027 > URL: https://issues.apache.org/jira/browse/YARN-6027 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Labels: yarn-5355-merge-blocker > Attachments: YARN-6027-YARN-5355.0001.patch, > YARN-6027-YARN-5355.0002.patch, YARN-6027-YARN-5355.0003.patch, > YARN-6027-YARN-5355.0004.patch, YARN-6027-YARN-5355.0005.patch, > YARN-6027-YARN-5355.0006.patch > > > In YARN-5585 , fromId is supported for retrieving entities. We need similar > filter for flows/flowRun apps and flow run and flow as well. > Along with supporting fromId, this JIRA should also discuss following points > * Should we throw an exception for entities/entity retrieval if duplicates > found? > * TimelieEntity : > ** Should equals method also check for idPrefix? > ** Does idPrefix is part of identifiers? -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6027) Support fromid(offset) filter for /flows API
[ https://issues.apache.org/jira/browse/YARN-6027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15888630#comment-15888630 ] Sangjin Lee commented on YARN-6027: --- OK, how about this? The {{TimelineUIDConverter}} class also uses these characters (the same ones). Can we at least standardize on one set of definitions? With this patch we would have {{TimelineUIDConverter#UID_DELIMETER_CHAR}} and {{TimelineUIDConverter#UID_ESCAPE_CHAR}}, and {{TimelineReaderUtils#DEFAULT_DELIMETER_CHAR}} and {{TimelineReaderUtils#DEFAULT_ESCAPE_CHAR}}. Specifically my proposal is to change the existing calls by {{TimelineUIDConverter}} to use the new constants in {{TimelineReaderUtils}} (or simply call {{split(String)}}). > Support fromid(offset) filter for /flows API > > > Key: YARN-6027 > URL: https://issues.apache.org/jira/browse/YARN-6027 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Labels: yarn-5355-merge-blocker > Attachments: YARN-6027-YARN-5355.0001.patch, > YARN-6027-YARN-5355.0002.patch, YARN-6027-YARN-5355.0003.patch, > YARN-6027-YARN-5355.0004.patch, YARN-6027-YARN-5355.0005.patch, > YARN-6027-YARN-5355.0006.patch > > > In YARN-5585, fromId is supported for retrieving entities. We need a similar > filter for flow-run apps, flow runs, and flows as well. > Along with supporting fromId, this JIRA should also discuss the following points: > * Should we throw an exception for entities/entity retrieval if duplicates are > found? > * TimelineEntity: > ** Should the equals method also check for idPrefix? > ** Is idPrefix part of the identifiers? -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
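To make the proposed consolidation concrete, a rough sketch of a single shared definition plus an escape-aware split is shown below; the constant values and the class name are assumptions for illustration and may not match the actual TimelineReaderUtils code.

{code}
// Illustrative sketch: one definition of the UID delimiter/escape characters,
// and a split that honours the escape character.
import java.util.ArrayList;
import java.util.List;

public final class UidDelimitersSketch {
  public static final char DELIMITER_CHAR = '!'; // assumed value
  public static final char ESCAPE_CHAR = '*';    // assumed value

  private UidDelimitersSketch() {
  }

  // Splits 'uid' on DELIMITER_CHAR, treating ESCAPE_CHAR as an escape prefix.
  public static List<String> split(String uid) {
    List<String> parts = new ArrayList<>();
    StringBuilder current = new StringBuilder();
    boolean escaped = false;
    for (char c : uid.toCharArray()) {
      if (escaped) {
        current.append(c);
        escaped = false;
      } else if (c == ESCAPE_CHAR) {
        escaped = true;
      } else if (c == DELIMITER_CHAR) {
        parts.add(current.toString());
        current.setLength(0);
      } else {
        current.append(c);
      }
    }
    parts.add(current.toString());
    return parts;
  }
}
{code}

With a single source of truth like this, TimelineUIDConverter could drop its own copies of the characters and delegate to the shared split.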
[jira] [Commented] (YARN-6027) Support fromid(offset) filter for /flows API
[ https://issues.apache.org/jira/browse/YARN-6027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15888616#comment-15888616 ] Varun Saxena commented on YARN-6027: By the way, the findbugs warning is an extant warning, which means it already exists in the branch, i.e. it has not been introduced by this patch. > Support fromid(offset) filter for /flows API > > > Key: YARN-6027 > URL: https://issues.apache.org/jira/browse/YARN-6027 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Labels: yarn-5355-merge-blocker > Attachments: YARN-6027-YARN-5355.0001.patch, > YARN-6027-YARN-5355.0002.patch, YARN-6027-YARN-5355.0003.patch, > YARN-6027-YARN-5355.0004.patch, YARN-6027-YARN-5355.0005.patch, > YARN-6027-YARN-5355.0006.patch > > > In YARN-5585, fromId is supported for retrieving entities. We need a similar > filter for flow-run apps, flow runs, and flows as well. > Along with supporting fromId, this JIRA should also discuss the following points: > * Should we throw an exception for entities/entity retrieval if duplicates are > found? > * TimelineEntity: > ** Should the equals method also check for idPrefix? > ** Is idPrefix part of the identifiers? -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6027) Support fromid(offset) filter for /flows API
[ https://issues.apache.org/jira/browse/YARN-6027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15888606#comment-15888606 ] Varun Saxena commented on YARN-6027: Same here. I am clueless about the findbugs warning and had tried a few things earlier as well. The class is almost equivalent to ApplicationColumnPrefix other than an unused flag in the constructor. I tried removing that as well, but the findbugs warning does not go away. By the way, [~sjlee0], kindly share your opinion on using the Separator enum, based on the preceding discussion. > Support fromid(offset) filter for /flows API > > > Key: YARN-6027 > URL: https://issues.apache.org/jira/browse/YARN-6027 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Labels: yarn-5355-merge-blocker > Attachments: YARN-6027-YARN-5355.0001.patch, > YARN-6027-YARN-5355.0002.patch, YARN-6027-YARN-5355.0003.patch, > YARN-6027-YARN-5355.0004.patch, YARN-6027-YARN-5355.0005.patch, > YARN-6027-YARN-5355.0006.patch > > > In YARN-5585, fromId is supported for retrieving entities. We need a similar > filter for flow-run apps, flow runs, and flows as well. > Along with supporting fromId, this JIRA should also discuss the following points: > * Should we throw an exception for entities/entity retrieval if duplicates are > found? > * TimelineEntity: > ** Should the equals method also check for idPrefix? > ** Is idPrefix part of the identifiers? -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6140) start time key in NM leveldb store should be removed when container is removed
[ https://issues.apache.org/jira/browse/YARN-6140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sangjin Lee updated YARN-6140: -- Labels: (was: yarn-5355-merge-blocker) > start time key in NM leveldb store should be removed when container is removed > -- > > Key: YARN-6140 > URL: https://issues.apache.org/jira/browse/YARN-6140 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager >Affects Versions: YARN-5355 >Reporter: Sangjin Lee >Assignee: Ajith S > > It appears that the start time key is not removed when the container is > removed. The key was introduced in YARN-5792. > I found this while backporting the YARN-5355-branch-2 branch to our internal > branch loosely based on 2.6.0. The {{TestNMLeveldbStateStoreService}} test > was failing because of this. > I'm not sure why we didn't see this earlier. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6140) start time key in NM leveldb store should be removed when container is removed
[ https://issues.apache.org/jira/browse/YARN-6140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15888607#comment-15888607 ] Sangjin Lee commented on YARN-6140: --- I'm going to remove the blocker label. In reality this issue doesn't manifest itself on trunk or on branch-2. I'd love to get this in soon (as this is fairly trivial), but I don't think this is a blocker per se. > start time key in NM leveldb store should be removed when container is removed > -- > > Key: YARN-6140 > URL: https://issues.apache.org/jira/browse/YARN-6140 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager >Affects Versions: YARN-5355 >Reporter: Sangjin Lee >Assignee: Ajith S > > It appears that the start time key is not removed when the container is > removed. The key was introduced in YARN-5792. > I found this while backporting the YARN-5355-branch-2 branch to our internal > branch loosely based on 2.6.0. The {{TestNMLeveldbStateStoreService}} test > was failing because of this. > I'm not sure why we didn't see this earlier. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
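For context on the fix being discussed, the change amounts to deleting the start-time key together with the other per-container keys when the container is removed. A rough sketch under the iq80 leveldb API is below; the key prefix, suffix, and method shape are assumptions based on the description, not the actual NMLeveldbStateStoreService code.

{code}
// Illustrative sketch: delete the container start-time key in the same write
// batch that removes the rest of the container's state.
import static org.iq80.leveldb.impl.Iq80DBFactory.bytes;

import org.iq80.leveldb.DB;
import org.iq80.leveldb.WriteBatch;

public class ContainerStateCleanupSketch {
  private static final String CONTAINERS_KEY_PREFIX = "ContainerManager/containers/"; // assumed
  private static final String START_TIME_KEY_SUFFIX = "/starttime";                   // assumed

  public void removeContainer(DB db, String containerId) throws Exception {
    String keyPrefix = CONTAINERS_KEY_PREFIX + containerId;
    try (WriteBatch batch = db.createWriteBatch()) {
      // ... deletes for the other per-container keys would go here ...
      batch.delete(bytes(keyPrefix + START_TIME_KEY_SUFFIX));
      db.write(batch);
    }
  }
}
{code}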
[jira] [Commented] (YARN-6199) Support for listing flows with filter userid
[ https://issues.apache.org/jira/browse/YARN-6199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15888604#comment-15888604 ] Sangjin Lee commented on YARN-6199: --- Quick question: do you think this should be a merge blocker? > Support for listing flows with filter userid > > > Key: YARN-6199 > URL: https://issues.apache.org/jira/browse/YARN-6199 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelinereader >Reporter: Rohith Sharma K S > Labels: yarn-5355-merge-blocker > > Currently the */flows* API retrieves flow entities for all users by default. > It is required to provide a user filter, i.e. */flows?user=rohith*. This is a > critical filter in a secured environment. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
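To sketch the shape of the requested filter, the reader would accept a user query parameter (e.g. /flows?user=rohith) and drop flows belonging to other users. The types and accessors below are simplified stand-ins, not the real timeline reader classes.

{code}
// Illustrative sketch: filter flow entities by the requested user.
import java.util.LinkedHashSet;
import java.util.Set;

public class FlowUserFilterSketch {

  // Minimal stand-in for a flow activity entity.
  public static class FlowEntity {
    private final String user;
    public FlowEntity(String user) { this.user = user; }
    public String getUser() { return user; }
  }

  // Returns only the flows owned by 'user'; a null or empty user means "no filter".
  public static Set<FlowEntity> filterByUser(Set<FlowEntity> flows, String user) {
    if (user == null || user.isEmpty()) {
      return flows;
    }
    Set<FlowEntity> filtered = new LinkedHashSet<>();
    for (FlowEntity flow : flows) {
      if (user.equals(flow.getUser())) {
        filtered.add(flow);
      }
    }
    return filtered;
  }
}
{code}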
[jira] [Commented] (YARN-5269) Bubble exceptions and errors all the way up the calls, including to clients.
[ https://issues.apache.org/jira/browse/YARN-5269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15888595#comment-15888595 ] Haibo Chen commented on YARN-5269: -- [~varun_saxena] Are you currently working on this? If not, do you mind if I assign it to myself? > Bubble exceptions and errors all the way up the calls, including to clients. > > > Key: YARN-5269 > URL: https://issues.apache.org/jira/browse/YARN-5269 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Joep Rottinghuis >Assignee: Varun Saxena > Labels: YARN-5355 > > Currently we ignore (swallow) exceptions from the HBase side in many cases > (reads and writes). > Also, on the client side, neither TimelineClient#putEntities (the v2 flavor) > nor the #putEntitiesAsync method returns any value. > For the second drop we may want to consider how to properly bubble up > exceptions throughout the writer and reader call paths, and whether we want to > return a response from putEntities and some kind of future result from > putEntitiesAsync. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
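As one possible shape for the second point in the description, the async write path could hand back a future so callers can observe failures. The interfaces below are hypothetical; the actual TimelineClient v2 methods currently return nothing.

{code}
// Illustrative sketch: surface write failures to the caller instead of
// swallowing them inside the client/collector.
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class TimelineWriteResultSketch {

  // Hypothetical per-call result carrying any errors from the backend write.
  public static class PutResult {
    private final List<String> errors;
    public PutResult(List<String> errors) { this.errors = errors; }
    public boolean isSuccessful() { return errors.isEmpty(); }
    public List<String> getErrors() { return errors; }
  }

  // Hypothetical async variant: the returned future completes exceptionally
  // if the HBase-side write fails, so nothing is silently dropped.
  public CompletableFuture<PutResult> putEntitiesAsync(Object... entities) {
    return CompletableFuture.supplyAsync(
        () -> new PutResult(Collections.<String>emptyList()));
  }
}
{code}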
[jira] [Commented] (YARN-6027) Support fromid(offset) filter for /flows API
[ https://issues.apache.org/jira/browse/YARN-6027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15888594#comment-15888594 ] Sangjin Lee commented on YARN-6027: --- Thanks for the updated patch, [~rohithsharma]. I'm not sure what the deal is with the findbugs issue. {{EntityColumnPrefix}} is not declared to be serializable, nor is it touched in this patch... The checkstyle violations seem to be trivial to fix, and so are the javadoc errors. Could you please look into them? > Support fromid(offset) filter for /flows API > > > Key: YARN-6027 > URL: https://issues.apache.org/jira/browse/YARN-6027 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Labels: yarn-5355-merge-blocker > Attachments: YARN-6027-YARN-5355.0001.patch, > YARN-6027-YARN-5355.0002.patch, YARN-6027-YARN-5355.0003.patch, > YARN-6027-YARN-5355.0004.patch, YARN-6027-YARN-5355.0005.patch, > YARN-6027-YARN-5355.0006.patch > > > In YARN-5585, fromId is supported for retrieving entities. We need a similar > filter for flow-run apps, flow runs, and flows as well. > Along with supporting fromId, this JIRA should also discuss the following points: > * Should we throw an exception for entities/entity retrieval if duplicates are > found? > * TimelineEntity: > ** Should the equals method also check for idPrefix? > ** Is idPrefix part of the identifiers? -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-1728) Workaround guice3x-undecoded pathInfo in YARN WebApp
[ https://issues.apache.org/jira/browse/YARN-1728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gera Shegalov updated YARN-1728: Summary: Workaround guice3x-undecoded pathInfo in YARN WebApp (was: History server doesn't understand percent encoded paths) > Workaround guice3x-undecoded pathInfo in YARN WebApp > > > Key: YARN-1728 > URL: https://issues.apache.org/jira/browse/YARN-1728 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Abraham Elmahrek >Assignee: Yuanbo Liu > Fix For: 2.8.0, 2.7.4, 3.0.0-alpha3 > > Attachments: test-case-for-trunk.patch, YARN-1728-branch-2.001.patch, > YARN-1728-branch-2.002.patch, YARN-1728-branch-2.003.patch, > YARN-1728-branch-2.004.patch, YARN-1728-branch-2.005.patch > > > For example, going to the job history server page > http://localhost:19888/jobhistory/logs/localhost%3A8041/container_1391466602060_0011_01_01/job_1391466602060_0011/admin/stderr > results in the following error: > {code} > Cannot get container logs. Invalid nodeId: > test-cdh5-hue.ent.cloudera.com%3A8041 > {code} > Where the url decoded version works: > http://localhost:19888/jobhistory/logs/localhost:8041/container_1391466602060_0011_01_01/job_1391466602060_0011/admin/stderr > It seems like both should be supported as the former is simply percent > encoding. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
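For background, the workaround boils down to decoding the percent-encoded pathInfo (so "localhost%3A8041" is treated as "localhost:8041") before the webapp routes it. A minimal illustration, not the actual patch, is below.

{code}
// Illustrative sketch: decode percent-encoded pathInfo before routing.
// Note: URLDecoder also turns '+' into a space, which may need extra care
// for real path handling.
import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;

public final class PathInfoDecoderSketch {
  private PathInfoDecoderSketch() {
  }

  public static String decodePathInfo(String rawPathInfo) {
    if (rawPathInfo == null) {
      return null;
    }
    try {
      return URLDecoder.decode(rawPathInfo, "UTF-8");
    } catch (UnsupportedEncodingException e) {
      throw new IllegalStateException("UTF-8 should always be supported", e);
    }
  }
}
{code}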
[jira] [Commented] (YARN-6232) Update resource usage and preempted resource calculations to take into account all resource types
[ https://issues.apache.org/jira/browse/YARN-6232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15888591#comment-15888591 ] Wangda Tan commented on YARN-6232: -- Thanks [~vvasudev]. A few minor comments: 1) @Deprecated methods can be removed from the following unstable classes: - ApplicationAttemptStateData - ApplicationResourceUsageReport 2) Changes to ResourceInfo: are they compatible? I'm fine with these changes if they're compatible. 3) yarn_protos.proto: - ApplicationResourceUsageMapProto: Would it be better to rename it to StringLongMapProto? > Update resource usage and preempted resource calculations to take into > account all resource types > - > > Key: YARN-6232 > URL: https://issues.apache.org/jira/browse/YARN-6232 > Project: Hadoop YARN > Issue Type: Sub-task > Components: resourcemanager >Reporter: Varun Vasudev >Assignee: Varun Vasudev > Attachments: YARN-6232-YARN-3926.001.patch, > YARN-6232-YARN-3926.002.patch > > > The chargeback calculations that take place on the RM should be updated to > take all resource types into account. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
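To illustrate the data shape under discussion, per-resource-type usage is essentially a string-to-long map keyed by resource name; a small sketch (names and units are illustrative) follows.

{code}
// Illustrative sketch: accumulate usage per resource type instead of tracking
// only memory and vcores.
import java.util.HashMap;
import java.util.Map;

public class ResourceUsageByTypeSketch {
  // resource name -> accumulated usage (e.g. resource-units * seconds)
  private final Map<String, Long> usagePerType = new HashMap<>();

  public void add(String resourceName, long value) {
    usagePerType.merge(resourceName, value, Long::sum);
  }

  public Map<String, Long> snapshot() {
    return new HashMap<>(usagePerType);
  }
}
{code}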
[jira] [Comment Edited] (YARN-6249) TestFairSchedulerPreemption is inconsistently failing on trunk
[ https://issues.apache.org/jira/browse/YARN-6249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15888571#comment-15888571 ] Yufei Gu edited comment on YARN-6249 at 2/28/17 6:16 PM: - Yes. Still flaky in trunk. 2 out of 100 failed in my local test. Failed both on configuration MinSharePreemption and MinSharePreemptionWithDRF. The {{FSPreemptionThread}} didn't get the chance to run in both cases. was (Author: yufeigu): Yes. Still flaky in trunk. 2 out of 100 failed in my local test. > TestFairSchedulerPreemption is inconsistently failing on trunk > -- > > Key: YARN-6249 > URL: https://issues.apache.org/jira/browse/YARN-6249 > Project: Hadoop YARN > Issue Type: Bug > Components: fairscheduler, resourcemanager >Affects Versions: 2.9.0 >Reporter: Sean Po >Assignee: Yufei Gu > > Tests in TestFairSchedulerPreemption.java will inconsistently fail on trunk. > An example stack trace: > {noformat} > Tests run: 24, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 24.879 sec > <<< FAILURE! - in > org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption > testPreemptionSelectNonAMContainer[MinSharePreemptionWithDRF](org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption) > Time elapsed: 10.475 sec <<< FAILURE! > java.lang.AssertionError: Incorrect number of containers on the greedy app > expected:<4> but was:<8> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.verifyPreemption(TestFairSchedulerPreemption.java:288) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.testPreemptionSelectNonAMContainer(TestFairSchedulerPreemption.java:363) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
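Since the failures seem to happen when the FSPreemptionThread has not run yet, one common way to de-flake such a test is to poll for the expected state instead of asserting after a fixed delay. The helper below is a generic sketch, not the existing test code; the condition passed in would check the greedy app's container count.

{code}
// Illustrative sketch: wait for a condition with a timeout instead of
// asserting immediately after a fixed sleep.
import java.util.concurrent.TimeoutException;
import java.util.function.BooleanSupplier;

public final class TestWaitSketch {
  private TestWaitSketch() {
  }

  public static void waitFor(BooleanSupplier condition, long intervalMs,
      long timeoutMs) throws InterruptedException, TimeoutException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (!condition.getAsBoolean()) {
      if (System.currentTimeMillis() > deadline) {
        throw new TimeoutException("Condition not met within " + timeoutMs + " ms");
      }
      Thread.sleep(intervalMs);
    }
  }
}
{code}

For example, the test could call waitFor(() -> greedyApp.getLiveContainers().size() == 4, 50, 10000) before the assertion (the accessor name here is an assumption).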
[jira] [Commented] (YARN-1728) History server doesn't understand percent encoded paths
[ https://issues.apache.org/jira/browse/YARN-1728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15888582#comment-15888582 ] Hudson commented on YARN-1728: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11320 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11320/]) YARN-1728. Regression test for guice-undecoded pathInfo in YARN WebApp. (gera: rev 480b4dd574d0355bf6c976a38bb45cb86adb2714) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/TestWebApp.java > History server doesn't understand percent encoded paths > --- > > Key: YARN-1728 > URL: https://issues.apache.org/jira/browse/YARN-1728 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Abraham Elmahrek >Assignee: Yuanbo Liu > Attachments: test-case-for-trunk.patch, YARN-1728-branch-2.001.patch, > YARN-1728-branch-2.002.patch, YARN-1728-branch-2.003.patch, > YARN-1728-branch-2.004.patch, YARN-1728-branch-2.005.patch > > > For example, going to the job history server page > http://localhost:19888/jobhistory/logs/localhost%3A8041/container_1391466602060_0011_01_01/job_1391466602060_0011/admin/stderr > results in the following error: > {code} > Cannot get container logs. Invalid nodeId: > test-cdh5-hue.ent.cloudera.com%3A8041 > {code} > Where the url decoded version works: > http://localhost:19888/jobhistory/logs/localhost:8041/container_1391466602060_0011_01_01/job_1391466602060_0011/admin/stderr > It seems like both should be supported as the former is simply percent > encoding. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6247) Add SubClusterResolver into FederationStateStoreFacade
[ https://issues.apache.org/jira/browse/YARN-6247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15888575#comment-15888575 ] Botong Huang commented on YARN-6247: No test is added because this only adds a new member variable that is not yet used by anyone; the Federation Interceptor will use it later. > Add SubClusterResolver into FederationStateStoreFacade > -- > > Key: YARN-6247 > URL: https://issues.apache.org/jira/browse/YARN-6247 > Project: Hadoop YARN > Issue Type: Task >Reporter: Botong Huang >Assignee: Botong Huang >Priority: Minor > Attachments: YARN-6247-YARN-2915.v1.patch, > YARN-6247-YARN-2915.v2.patch > > > Add SubClusterResolver into FederationStateStoreFacade. Since the resolver > might involve some overhead (reading a file in the background, potentially > periodically), it is good to put it inside the FederationStateStoreFacade > singleton, so that only one instance will be created. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
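A rough sketch of the pattern described above is shown below; the class shape, interface, and method names are illustrative, not the actual FederationStateStoreFacade or SubClusterResolver code.

{code}
// Illustrative sketch: keep a single resolver instance inside a singleton
// facade so any background file loading happens only once per process.
public final class FederationFacadeSketch {
  private static final FederationFacadeSketch INSTANCE = new FederationFacadeSketch();

  // Stand-in for the real SubClusterResolver interface.
  public interface Resolver {
    String getSubClusterForNode(String nodeName);
  }

  private final Resolver resolver;

  private FederationFacadeSketch() {
    // The real facade would instantiate the configured resolver implementation
    // here, which may load its node-to-subcluster mapping in the background.
    this.resolver = nodeName -> "unknown-subcluster";
  }

  public static FederationFacadeSketch getInstance() {
    return INSTANCE;
  }

  public Resolver getSubClusterResolver() {
    return resolver;
  }
}
{code}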
[jira] [Commented] (YARN-6249) TestFairSchedulerPreemption is inconsistently failing on trunk
[ https://issues.apache.org/jira/browse/YARN-6249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15888571#comment-15888571 ] Yufei Gu commented on YARN-6249: Yes. Still flaky in trunk. 2 out of 100 failed in my local test. > TestFairSchedulerPreemption is inconsistently failing on trunk > -- > > Key: YARN-6249 > URL: https://issues.apache.org/jira/browse/YARN-6249 > Project: Hadoop YARN > Issue Type: Bug > Components: fairscheduler, resourcemanager >Affects Versions: 2.9.0 >Reporter: Sean Po >Assignee: Yufei Gu > > Tests in TestFairSchedulerPreemption.java will inconsistently fail on trunk. > An example stack trace: > {noformat} > Tests run: 24, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 24.879 sec > <<< FAILURE! - in > org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption > testPreemptionSelectNonAMContainer[MinSharePreemptionWithDRF](org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption) > Time elapsed: 10.475 sec <<< FAILURE! > java.lang.AssertionError: Incorrect number of containers on the greedy app > expected:<4> but was:<8> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.verifyPreemption(TestFairSchedulerPreemption.java:288) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.testPreemptionSelectNonAMContainer(TestFairSchedulerPreemption.java:363) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org