[jira] [Created] (YARN-7331) Change in few metrics in new YARN UI related to native-services
Sunil G created YARN-7331: - Summary: Change in few metrics in new YARN UI related to native-services Key: YARN-7331 URL: https://issues.apache.org/jira/browse/YARN-7331 Project: Hadoop YARN Issue Type: Sub-task Affects Versions: yarn-native-services Reporter: Sunil G Assignee: Sunil G A few metric changes are needed. 1. The metrics below need not show up in the UI and can be removed: containersRequested, pendingAAContainers, surplusContainers, Containers Failed Since Last Threshold, Pending Anti-Affinity Containers. 2. completedContainers is renamed to containersSucceeded. 3. containersReady is newly added. 4. “Created Time” in the component view should be removed, since it is not the real created time of the component. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7275) NM Statestore cleanup for Container updates
[ https://issues.apache.org/jira/browse/YARN-7275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16205444#comment-16205444 ] Hadoop QA commented on YARN-7275: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 59s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 21s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager: The patch generated 1 new + 383 unchanged - 0 fixed = 384 total (was 383) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 29s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 18m 9s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 61m 8s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.nodemanager.scheduler.TestDistributedScheduler | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0de40f0 | | JIRA Issue | YARN-7275 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12892326/YARN-7275.006.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux bc41985a877f 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 20575ec | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/17947/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/17947/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt | | Test Re
[jira] [Commented] (YARN-7244) ShuffleHandler is not aware of disks that are added
[ https://issues.apache.org/jira/browse/YARN-7244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16205423#comment-16205423 ] Hadoop QA commented on YARN-7244: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 10s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 26s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 19s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 12s{color} | {color:green} root: The patch generated 0 new + 317 unchanged - 2 fixed = 317 total (was 319) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 34s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 10s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 31s{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api generated 11 new + 123 unchanged - 0 fixed = 134 total (was 123) {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 42s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 7s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 35s{color} | {color:green} hadoop-mapreduce-client-shuffle in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 36s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}136m 49s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.nodemanager.scheduler.TestDistributedScheduler | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0de40f0 | | JIRA Issue | YARN-7244 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12892321/YARN-7244.007.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 8a924ec4e6b5 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86
[jira] [Updated] (YARN-7275) NM Statestore cleanup for Container updates
[ https://issues.apache.org/jira/browse/YARN-7275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] kartheek muthyala updated YARN-7275: Attachment: YARN-7275.006.patch Fixing the broken checkstyle issues and JUnit tests from YARN-7275.005.patch. > NM Statestore cleanup for Container updates > --- > > Key: YARN-7275 > URL: https://issues.apache.org/jira/browse/YARN-7275 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Arun Suresh >Assignee: kartheek muthyala >Priority: Blocker > Attachments: YARN-7275.001.patch, YARN-7275.002.patch, > YARN-7275.003.patch, YARN-7275.004.patch, YARN-7275.005.patch, > YARN-7275.006.patch > > > Currently, only resource updates are recorded in the NM state store; we need > to add ExecutionType updates as well. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
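For readers following along, a minimal sketch of the idea in the description: persist the container's ExecutionType next to its updated Resource so that both survive an NM restart. The class and field names below are illustrative only and are not taken from the YARN-7275 patch.
{code}
import org.apache.hadoop.yarn.api.records.ExecutionType;
import org.apache.hadoop.yarn.api.records.Resource;

/** Hypothetical container-update record that a state store could persist. */
public final class ContainerUpdateRecord {
  private final Resource capability;          // updated resource capability
  private final ExecutionType executionType;  // GUARANTEED or OPPORTUNISTIC

  public ContainerUpdateRecord(Resource capability, ExecutionType executionType) {
    this.capability = capability;
    this.executionType = executionType;
  }

  public Resource getCapability() { return capability; }
  public ExecutionType getExecutionType() { return executionType; }
}
{code}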
[jira] [Updated] (YARN-7244) ShuffleHandler is not aware of disks that are added
[ https://issues.apache.org/jira/browse/YARN-7244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kuhu Shukla updated YARN-7244: -- Attachment: YARN-7244.007.patch Updated patch that fixes the checkstyle issues (almost all; the remaining one asks for a getter in a test, which seems excessive to me) and the testMapFileAccess test failure. My setup did not allow that test to run and required overhauling. Verified that it passes now. > ShuffleHandler is not aware of disks that are added > --- > > Key: YARN-7244 > URL: https://issues.apache.org/jira/browse/YARN-7244 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Kuhu Shukla >Assignee: Kuhu Shukla > Attachments: YARN-7244.001.patch, YARN-7244.002.patch, > YARN-7244.003.patch, YARN-7244.004.patch, YARN-7244.005.patch, > YARN-7244.006.patch, YARN-7244.007.patch > > > The ShuffleHandler permanently remembers the list of "good" disks on NM > startup. If disks later are added to the node then map tasks will start using > them but the ShuffleHandler will not be aware of them. The end result is that > the data cannot be shuffled from the node leading to fetch failures and > re-runs of the map tasks. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
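As a rough illustration of the direction described in the issue (not the actual YARN-7244 patch), the shuffle handler can resolve map-output paths against the current set of local dirs on every request instead of a list captured once at NM startup. The resolver class and the supplier below are hypothetical names.
{code}
import java.io.File;
import java.io.FileNotFoundException;
import java.util.List;
import java.util.function.Supplier;

/** Hypothetical resolver that always consults the up-to-date local dirs. */
public class CurrentDirsPathResolver {
  private final Supplier<List<String>> currentLocalDirs;

  public CurrentDirsPathResolver(Supplier<List<String>> currentLocalDirs) {
    // e.g. backed by whatever component tracks the NM's good local dirs
    this.currentLocalDirs = currentLocalDirs;
  }

  /** Look for the relative shuffle path under every currently known dir. */
  public File resolve(String relativePath) throws FileNotFoundException {
    for (String dir : currentLocalDirs.get()) {
      File candidate = new File(dir, relativePath);
      if (candidate.exists()) {
        return candidate;
      }
    }
    throw new FileNotFoundException(relativePath);
  }
}
{code}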
[jira] [Commented] (YARN-7244) ShuffleHandler is not aware of disks that are added
[ https://issues.apache.org/jira/browse/YARN-7244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16205338#comment-16205338 ] Hadoop QA commented on YARN-7244: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 36s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 22s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 19s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 36s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 11s{color} | {color:orange} root: The patch generated 8 new + 317 unchanged - 2 fixed = 325 total (was 319) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 35s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 9s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 32s{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api generated 11 new + 123 unchanged - 0 fixed = 134 total (was 123) {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 42s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 55s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 37s{color} | {color:red} hadoop-mapreduce-client-shuffle in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 36s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}123m 5s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.nodemanager.scheduler.TestDistributedScheduler | | | hadoop.mapred.TestShuffleHandler | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0de40f0 | | JIRA Issue | YARN-7244 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12892309/YARN-7244.006.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 0b8bc629617a 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31
[jira] [Commented] (YARN-7224) Support GPU isolation for docker container
[ https://issues.apache.org/jira/browse/YARN-7224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16205333#comment-16205333 ] Hadoop QA commented on YARN-7224: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 12 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 18s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 57s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 6m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 17s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 2s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 32 new + 380 unchanged - 11 fixed = 412 total (was 391) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 15s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 0s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 85m 6s{color} | {color:red} hadoop-yarn in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 37s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 40s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 52s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 30s{color} | {color:green} The
[jira] [Updated] (YARN-7244) ShuffleHandler is not aware of disks that are added
[ https://issues.apache.org/jira/browse/YARN-7244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kuhu Shukla updated YARN-7244: -- Attachment: YARN-7244.006.patch Thank you for the comments/review [~jlowe]! Updated patch. Will wait for PreCommit before any review requests. > ShuffleHandler is not aware of disks that are added > --- > > Key: YARN-7244 > URL: https://issues.apache.org/jira/browse/YARN-7244 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Kuhu Shukla >Assignee: Kuhu Shukla > Attachments: YARN-7244.001.patch, YARN-7244.002.patch, > YARN-7244.003.patch, YARN-7244.004.patch, YARN-7244.005.patch, > YARN-7244.006.patch > > > The ShuffleHandler permanently remembers the list of "good" disks on NM > startup. If disks later are added to the node then map tasks will start using > them but the ShuffleHandler will not be aware of them. The end result is that > the data cannot be shuffled from the node leading to fetch failures and > re-runs of the map tasks. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-7330) Add support to show GPU on UI/metrics
Wangda Tan created YARN-7330: Summary: Add support to show GPU on UI/metrics Key: YARN-7330 URL: https://issues.apache.org/jira/browse/YARN-7330 Project: Hadoop YARN Issue Type: Sub-task Reporter: Wangda Tan Assignee: Wangda Tan Priority: Blocker -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7224) Support GPU isolation for docker container
[ https://issues.apache.org/jira/browse/YARN-7224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wangda Tan updated YARN-7224: - Attachment: YARN-7224.005.patch Attached ver.5 patch. > Support GPU isolation for docker container > -- > > Key: YARN-7224 > URL: https://issues.apache.org/jira/browse/YARN-7224 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Wangda Tan > Attachments: YARN-7224.001.patch, YARN-7224.002-wip.patch, > YARN-7224.003.patch, YARN-7224.004.patch, YARN-7224.005.patch > > > YARN-6620 added support for GPU isolation on the NM side, which only supports > non-docker containers. We need to add support so that docker containers > launched by YARN can utilize GPUs. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
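Purely as an illustration of the general technique (passing the GPU devices assigned to a container through to docker via --device flags), not the YARN-7224 implementation; the helper name and device paths below are assumptions.
{code}
import java.util.ArrayList;
import java.util.List;

/** Hypothetical helper building docker run --device arguments for granted GPUs. */
public class GpuDockerDeviceArgs {
  public static List<String> buildDeviceArgs(List<Integer> gpuMinorNumbers) {
    List<String> args = new ArrayList<>();
    // Control devices that CUDA containers typically need.
    args.add("--device=/dev/nvidiactl");
    args.add("--device=/dev/nvidia-uvm");
    // One flag per GPU assigned to this container, e.g. /dev/nvidia0.
    for (Integer minor : gpuMinorNumbers) {
      args.add("--device=/dev/nvidia" + minor);
    }
    return args;
  }
}
{code}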
[jira] [Commented] (YARN-7327) CapacityScheduler: Allocate containers asynchronously by default
[ https://issues.apache.org/jira/browse/YARN-7327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16205259#comment-16205259 ] Wangda Tan commented on YARN-7327: -- [~CraigI], If you wanna try the latest async scheduling in Capacity Scheduler, you don't need to change application code. It is a global scheduler config in capacity-scheduler.xml:
{code}
<property>
  <name>yarn.scheduler.capacity.schedule-asynchronously.enable</name>
  <value>true</value>
</property>
{code}
In addition, 2.9.0/3.0.0 YARN supports specifying multiple threads (default is 1) to allocate containers:
{code}
<property>
  <name>yarn.scheduler.capacity.schedule-asynchronously.maximum-threads</name>
  <value>4</value>
</property>
{code}
From the test report: https://issues.apache.org/jira/secure/attachment/12831662/YARN-5139-Concurrent-scheduling-performance-report.pdf, the multi-thread + async approach can improve scheduler throughput (and shorten allocation delays) significantly. Please let me know how it goes on your side; I can help answer any questions you have. > CapacityScheduler: Allocate containers asynchronously by default > > > Key: YARN-7327 > URL: https://issues.apache.org/jira/browse/YARN-7327 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Craig Ingram >Priority: Trivial > Attachments: yarn-async-scheduling.png > > > I was recently doing some research into Spark on YARN's startup time and > observed slow, synchronous allocation of containers/executors. I am testing > on a 4 node bare metal cluster w/48 cores and 128GB memory per node. YARN was > only allocating about 3 containers per second. Moreover when starting 3 Spark > applications at the same time with each requesting 44 containers, the first > application would get all 44 requested containers and then the next > application would start getting containers and so on. > > From looking at the code, it appears this is by design. There is an > undocumented configuration variable that will enable asynchronous allocation > of containers. I'm sure I'm missing something, but why is this not the > default? Is there a bug or race condition in this code path? I've done some > testing with it and it's been working and is significantly faster. > > Here's the config: > `yarn.scheduler.capacity.schedule-asynchronously.enable` > > Any help understanding this would be appreciated. > > Thanks, > Craig > > If you're curious about the performance difference with this setting, here > are the results: > > The following tool was used for the benchmarks: > https://github.com/SparkTC/spark-bench > h2. async scheduler research > The goal of this test is to determine if running Spark on YARN with async > scheduling of containers reduces the amount of time required for an > application to receive all of its requested resources. This setting should > also reduce the overall runtime of short-lived applications/stages or > notebook paragraphs. This setting could prove crucial to achieving optimal > performance when sharing resources on a cluster with dynalloc enabled. > h3. Test Setup > Must update /etc/hadoop/conf/capacity-scheduler.xml (or through Ambari) > between runs. > `yarn.scheduler.capacity.schedule-asynchronously.enable=true|false` > conf files request executors counts of: > * 2 > * 20 > * 50 > * 100 > The apps are being submitted to the default queue on each cluster which caps > at 48 cores on dynalloc and 72 cores on baremetal. The default queue was > expanded for the last two tests on baremetal so it could potentially take > advantage of all 144 cores. > h3. Test Environments > h4.
dynalloc > 4 VMs in Fyre (1 master, 3 workers) > 8 CPUs/16 GB per node > model name: QEMU Virtual CPU version 2.5+ > h4. baremetal > 4 baremetal instances in Fyre (1 master, 3 workers) > 48 CPUs/128GB per node > model name: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz > h3. Using spark-bench with timedsleep workload sync > h4. dynalloc > || requested containers | avg | stdev|| > |2 | 23.814900 | 1.110725| > |20 | 29.770250 | 0.830528| > |50 | 44.486600 | 0.593516| > |100 | 44.337700 | 0.490139| > h4. baremetal - 2 queues splitting cluster 72 cores each > || requested containers | avg | stdev|| > |2 | 14.827000 | 0.292290| > |20 | 19.613150 | 0.155421| > |50 | 30.768400 | 0.083400| > |100 | 40.931850 | 0.092160| > h4. baremetal - 1 queue to rule them all - 144 cores > || requested containers | avg | stdev|| > |2 | 14.833050 | 0.334061| > |20 | 19.575000 | 0.212836| > |50 | 30.765350 | 0.111035| > |100 | 41.763300 | 0.182700| > h3. Using spark-bench with timedsleep workload async > h4. dynalloc > || requested containers | avg | stdev|| > |2 | 22.575150 | 0.574296| > |20 | 26.904150 | 1.244602| > |50 | 44.721800 | 0.655388| > |100 | 44.57 | 0.514540| > h5. 2nd run > ||
[jira] [Commented] (YARN-7230) Document DockerContainerRuntime for branch-2.8 with proper scope and claim as an experimental feature
[ https://issues.apache.org/jira/browse/YARN-7230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16205256#comment-16205256 ] Hadoop QA commented on YARN-7230: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 37s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} branch-2.8 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 31s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 34s{color} | {color:green} branch-2.8 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s{color} | {color:green} branch-2.8 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 20m 25s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:c2d96dd | | JIRA Issue | YARN-7230 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12892302/YARN-7230.branch-2.8.001.patch | | Optional Tests | asflicense mvnsite xml | | uname | Linux 00c2ce33ad04 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | branch-2.8 / b41c5e4 | | modules | C: hadoop-project hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: . | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/17943/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Document DockerContainerRuntime for branch-2.8 with proper scope and claim as > an experimental feature > - > > Key: YARN-7230 > URL: https://issues.apache.org/jira/browse/YARN-7230 > Project: Hadoop YARN > Issue Type: Bug > Components: documentation >Affects Versions: 2.8.1 >Reporter: Junping Du >Assignee: Shane Kumpf >Priority: Blocker > Attachments: YARN-7230.branch-2.8.001.patch > > > YARN-5258 is to document new feature for docker container runtime which > already get checked in trunk/branch-2. We need a similar one for branch-2.8. 
> However, given we missed several patches, we need to define narrowed scope of > these feature/improvements which match with existing patches landed in 2.8. > Also, like YARN-6622, to document it as experimental. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (YARN-7127) Merge yarn-native-service branch into trunk
[ https://issues.apache.org/jira/browse/YARN-7127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16205252#comment-16205252 ] Allen Wittenauer edited comment on YARN-7127 at 10/15/17 7:10 PM: -- I thought some more about this topic this morning and had two more things to add: 1) I think an AM should have a way to tell the RM about any extra capabilities it might have. This feature isn't particularly useful for the RM, but it would be beneficial for any clients. For example, the MR AM might tag itself as "jobtracker" to note that it supports the extra features that the 'mapred' command uses. A Slider AM might tag itself as 'slider' or 'native' or whatever to signify that it supports those extensions. etc. etc. That would make extending the yarn application subcommand MUCH easier and potentially even open the door for extensions/plug-ins to that command from third parties. For example, turning the extra mapred subcommands into a hook off of yarn application would allow us to ultimately kill the mapred command once the timeline server is capable of doing everything that the history server can. 2) A large part of the discussion here is fueled by contradicting views on this project's place within Hadoop. If one takes the belief that it's "just another framework, like MapReduce," then creating separate sub-commands, documentation, daemons, etc. seems logical. If one takes the view that it's "part of YARN," then adding new sub-commands, a separate documentation section, and a ton of new daemons does not make sense. But it doesn't appear that either of those choices has been made. Portions of the code base are in the separate framework type of mold, but other changes are to core YARN functionality, even if we push aside "obviously part of YARN" bits like RegistryDNS. It seems as though the folks working on this branch need to make that decision and drive it to completion: is it part of YARN or is it not? If it's the former, then that means full integration: no more separate API daemon, no different subcommand structure, etc., etc. If it's the latter, then that means total separation: it needs to be a separate subproject, no shared code base, new top-level command, etc., etc. Having a foot in both is what is ultimately driving this disagreement and will eventually confuse users. was (Author: aw): I thought some more about this topic this morning and had two more thoughts: 1) I think an AM should have a way to tell the RM about any extra capabilities it might have. This feature isn't particularly useful for the RM, but it would be beneficial for any clients. For example, the MR AM might tag itself as "jobtracker" to note that it supports the extra features that the 'mapred' command uses. A Slider AM might tag itself as 'slider' or 'native' or whatever to signify that it supports those extensions. etc. etc. That would make extending the yarn application subcommand MUCH easier and potentially even open the door for extensions/plug-ins to that command from third parties. For example, turning the extra mapred subcommands into a hook off of yarn application would allow us to ultimately kill the mapred command once the timeline server is capable of doing everything that the history server can. 2) A large part of the discussion here is fueled by contradicting views on this project's place within Hadoop. If one takes the belief that it's "just another framework, like MapReduce," then creating separate sub-commands, documentation, daemons, etc. seems logical. 
If one takes the view that it's "part of YARN," then adding new sub-commands, a separate documentation section, and a ton of new daemons does not make sense. But it doesn't appear that either of those choices has been made. Portions of the code base are in the separate framework type of mold, but other changes are to core YARN functionality, even if we push aside "obviously part of YARN" bits like RegistryDNS. It seems as though the folks working on this branch need to make that decision and drive it to completion: is it part of YARN or is it not? If it's the former, then that means full integration: no more separate API daemon, no different subcommand structure, etc., etc. If it's the latter, then that means total separation: it needs to be a separate subproject, no shared code base, new top-level command, etc., etc. Having a foot in both is what is ultimately driving this disagreement and will eventually confuse users. > Merge yarn-native-service branch into trunk > --- > > Key: YARN-7127 > URL: https://issues.apache.org/jira/browse/YARN-7127 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Jian He >Assignee: Jian He > Attachments: YARN-712
[jira] [Commented] (YARN-6927) Add support for individual resource types requests in MapReduce
[ https://issues.apache.org/jira/browse/YARN-6927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16205253#comment-16205253 ] Wangda Tan commented on YARN-6927: -- [~sunilg]/[~templedf], bq. Could we consider other resource specifications like yarn.app.mapreduce.am.resource.mb/cpu-vcores as a standard way to define CPU and Memory if users want to use, rather than an overwriting model? In case when both are specified (old and new config for cpu/memory), i am more in favor of throwing an exception to end user asking to specify in one format. Agreed. > Add support for individual resource types requests in MapReduce > --- > > Key: YARN-6927 > URL: https://issues.apache.org/jira/browse/YARN-6927 > Project: Hadoop YARN > Issue Type: Sub-task > Components: resourcemanager >Reporter: Daniel Templeton >Assignee: Gergo Repas > Attachments: YARN-6927.000.patch, YARN-6927.001.patch > > > YARN-6504 adds support for resource profiles in MapReduce jobs, but resource > profiles don't give users much flexibility in their resource requests. To > satisfy users' needs, MapReduce should also allow users to specify arbitrary > resource requests. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
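A hedged sketch of the fail-fast behavior agreed on in the comment above (throwing instead of silently overriding when both the legacy and the new-style AM memory settings are present). This is not actual MapReduce code: the legacy key is taken from the quoted comment, while the new-style key and the helper name are assumptions.
{code}
import org.apache.hadoop.conf.Configuration;

public final class AmResourceConfigCheck {
  private static final String LEGACY_MB = "yarn.app.mapreduce.am.resource.mb";
  // Assumed new-style, resource-type based key; illustrative only.
  private static final String NEW_STYLE_MEMORY = "yarn.app.mapreduce.am.resource.memory";

  /** Fail fast when the AM memory is specified in both the old and the new format. */
  public static void check(Configuration conf) {
    if (conf.get(LEGACY_MB) != null && conf.get(NEW_STYLE_MEMORY) != null) {
      throw new IllegalArgumentException("Both " + LEGACY_MB + " and "
          + NEW_STYLE_MEMORY + " are set; please specify the AM memory in one format only.");
    }
  }
}
{code}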
[jira] [Commented] (YARN-7127) Merge yarn-native-service branch into trunk
[ https://issues.apache.org/jira/browse/YARN-7127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16205252#comment-16205252 ] Allen Wittenauer commented on YARN-7127: I thought some more about this topic this morning and had two more thoughts: 1) I think an AM should have a way to tell the RM about any extra capabilities it might have. This feature isn't particularly useful for the RM, but it would be beneficial for any clients. For example, the MR AM might tag itself as "jobtracker" to note that it supports the extra features that the 'mapred' command uses. A Slider AM might tag itself as 'slider' or 'native' or whatever to signify that it supports those extensions. etc. etc. That would make extending the yarn application subcommand MUCH easier and potentially even open the door for extensions/plug-ins to that command from third parties. For example, turning the extra mapred subcommands into a hook off of yarn application would allow us to ultimately kill the mapred command once the timeline server is capable of doing everything that the history server can. 2) A large part of the discussion here is fueled by contradicting views on this project's place within Hadoop. If one takes the belief that it's "just another framework, like MapReduce," then creating separate sub-commands, documentation, daemons, etc. seems logical. If one takes the view that it's "part of YARN," then adding new sub-commands, a separate documentation section, and a ton of new daemons does not make sense. But it doesn't appear that either of those choices has been made. Portions of the code base are in the separate framework type of mold, but other changes are to core YARN functionality, even if we push aside "obviously part of YARN" bits like RegistryDNS. It seems as though the folks working on this branch need to make that decision and drive it to completion: is it part of YARN or is it not? If it's the former, then that means full integration: no more separate API daemon, no different subcommand structure, etc., etc. If it's the latter, then that means total separation: it needs to be a separate subproject, no shared code base, new top-level command, etc., etc. Having a foot in both is what is ultimately driving this disagreement and will eventually confuse users. > Merge yarn-native-service branch into trunk > --- > > Key: YARN-7127 > URL: https://issues.apache.org/jira/browse/YARN-7127 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Jian He >Assignee: Jian He > Attachments: YARN-7127.01.patch, YARN-7127.02.patch, > YARN-7127.03.patch, YARN-7127.04.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7230) Document DockerContainerRuntime for branch-2.8 with proper scope and claim as an experimental feature
[ https://issues.apache.org/jira/browse/YARN-7230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shane Kumpf updated YARN-7230: -- Attachment: YARN-7230.branch-2.8.001.patch > Document DockerContainerRuntime for branch-2.8 with proper scope and claim as > an experimental feature > - > > Key: YARN-7230 > URL: https://issues.apache.org/jira/browse/YARN-7230 > Project: Hadoop YARN > Issue Type: Bug > Components: documentation >Affects Versions: 2.8.1 >Reporter: Junping Du >Assignee: Shane Kumpf >Priority: Blocker > Attachments: YARN-7230.branch-2.8.001.patch > > > YARN-5258 is to document new feature for docker container runtime which > already get checked in trunk/branch-2. We need a similar one for branch-2.8. > However, given we missed several patches, we need to define narrowed scope of > these feature/improvements which match with existing patches landed in 2.8. > Also, like YARN-6622, to document it as experimental. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5589) Update CapacitySchedulerConfiguration minimum and maximum calculations to consider all resource types
[ https://issues.apache.org/jira/browse/YARN-5589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16205236#comment-16205236 ] Varun Vasudev commented on YARN-5589: - [~lovekesh.bansal] - please feel free to take it over. > Update CapacitySchedulerConfiguration minimum and maximum calculations to > consider all resource types > - > > Key: YARN-5589 > URL: https://issues.apache.org/jira/browse/YARN-5589 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, resourcemanager >Reporter: Varun Vasudev >Assignee: Varun Vasudev > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Assigned] (YARN-5589) Update CapacitySchedulerConfiguration minimum and maximum calculations to consider all resource types
[ https://issues.apache.org/jira/browse/YARN-5589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Vasudev reassigned YARN-5589: --- Assignee: (was: Varun Vasudev) > Update CapacitySchedulerConfiguration minimum and maximum calculations to > consider all resource types > - > > Key: YARN-5589 > URL: https://issues.apache.org/jira/browse/YARN-5589 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, resourcemanager >Reporter: Varun Vasudev > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5589) Update CapacitySchedulerConfiguration minimum and maximum calculations to consider all resource types
[ https://issues.apache.org/jira/browse/YARN-5589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16205178#comment-16205178 ] lovekesh bansal commented on YARN-5589: --- [~vvasudev] I am new to the open source protocols. The JIRA is assigned to you. Wanted to know: are you actively working on it? If not, may I chip in, if you are OK with it? > Update CapacitySchedulerConfiguration minimum and maximum calculations to > consider all resource types > - > > Key: YARN-5589 > URL: https://issues.apache.org/jira/browse/YARN-5589 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, resourcemanager >Reporter: Varun Vasudev >Assignee: Varun Vasudev > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6927) Add support for individual resource types requests in MapReduce
[ https://issues.apache.org/jira/browse/YARN-6927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16205128#comment-16205128 ] Daniel Templeton commented on YARN-6927: bq. Could we consider other resource specifications like yarn.app.mapreduce.am.resource.mb/cpu-vcores as a standard way to define CPU and Memory if users want to use, rather than an overwriting model? In case when both are specified (old and new config for cpu/memory), i am more in favor of throwing an exception to end user asking to specify in one format. Agreed. > Add support for individual resource types requests in MapReduce > --- > > Key: YARN-6927 > URL: https://issues.apache.org/jira/browse/YARN-6927 > Project: Hadoop YARN > Issue Type: Sub-task > Components: resourcemanager >Reporter: Daniel Templeton >Assignee: Gergo Repas > Attachments: YARN-6927.000.patch, YARN-6927.001.patch > > > YARN-6504 adds support for resource profiles in MapReduce jobs, but resource > profiles don't give users much flexibility in their resource requests. To > satisfy users' needs, MapReduce should also allow users to specify arbitrary > resource requests. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6460) Accelerated time in SLS
[ https://issues.apache.org/jira/browse/YARN-6460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16205084#comment-16205084 ] Yufei Gu commented on YARN-6460: Thanks [~seneque] for working on this. The approach looks good to me generally. The patch doesn't apply and needs a rebase though. > Accelerated time in SLS > --- > > Key: YARN-6460 > URL: https://issues.apache.org/jira/browse/YARN-6460 > Project: Hadoop YARN > Issue Type: Improvement > Components: scheduler-load-simulator >Reporter: Julien Vaudour >Assignee: Julien Vaudour >Priority: Minor > Attachments: YARN-6460-branch-2.000.patch, > YARN-6460-branch-2.001.patch, YARN-6460-branch-2.002.patch, > YARN-6460.000.patch, YARN-6460.001.patch, YARN-6460.002.patch > > > Be able to accelerate time in SLS. To do that, a {{timescalefactor}} parameter > is introduced (default value = 1). If we use a time factor of X, time in the simulation > will be X times faster than real time. Time in the generated CSV will be modified as well, > to give the same result as if we don't use {{timescalefactor}}. > For example, this permits running a simulation of one week of jobs in just one > day if we use {{timescalefactor=7}}. > To do that, a ScaleClock object has been introduced, which implements > {{org.apache.hadoop.yarn.util.Clock}}. It also extends > {{com.codahale.metrics.Clock}} for the metrics reported in CSV. > All objects used for the simulation now use a shared reference to a > {{org.apache.hadoop.yarn.util.Clock}} instance to get the current time instead of > using {{System.currentTimeMillis()}}. > A new optional parameter {{--timescalefactor=}} has been introduced > on the {{slsrun.sh}} script. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
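Based only on the description above (not the attached patches), a minimal sketch of what such an accelerated clock could look like: simulated time advances timescalefactor times faster than wall-clock time.
{code}
import org.apache.hadoop.yarn.util.Clock;

/** Illustrative scaled clock; the actual ScaleClock in the patch may differ. */
public class ScaleClock implements Clock {
  private final long startRealMs = System.currentTimeMillis();
  private final double timescaleFactor;  // e.g. 7 => one simulated week per real day

  public ScaleClock(double timescaleFactor) {
    this.timescaleFactor = timescaleFactor;
  }

  @Override
  public long getTime() {
    long elapsedReal = System.currentTimeMillis() - startRealMs;
    // Replay the elapsed real time at the requested speed-up.
    return startRealMs + (long) (elapsedReal * timescaleFactor);
  }
}
{code}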