[ https://issues.apache.org/jira/browse/YARN-7732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16366683#comment-16366683 ]
Wangda Tan commented on YARN-7732:
----------------------------------

Thanks [~youchen], in general the patch looks good. I'd like to do some basic validations using old traces by next Monday. If I don't get back to you by next Monday, please feel free to commit the patch to trunk. Also, is there any compatibility issue for syn.json? I saw the description says:

{quote}See syn_generic.json for an equivalent of the previous syn.json in the new format.{quote}

cc: [~curino]

> Support Generic AM Simulator from SynthGenerator
> ------------------------------------------------
>
>                 Key: YARN-7732
>                 URL: https://issues.apache.org/jira/browse/YARN-7732
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: scheduler-load-simulator
>            Reporter: Young Chen
>            Assignee: Young Chen
>            Priority: Minor
>         Attachments: YARN-7732-YARN-7798.01.patch, YARN-7732-YARN-7798.02.patch, YARN-7732.01.patch, YARN-7732.02.patch, YARN-7732.03.patch, YARN-7732.04.patch, YARN-7732.05.patch
>
> Extract the MapReduce-specific set-up in the SLSRunner into the MRAMSimulator, and enable support for pluggable AMSimulators.
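Pluggable AM simulators of this kind are typically wired up through a registry that maps a job-type string to a simulator class and instantiates it reflectively, instead of hard-coding one type. A minimal sketch under that assumption (AmSim, AmRegistrySketch, and both simulator classes are hypothetical stand-ins, not the classes in the patch):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a pluggable AM-simulator registry. The names
// here (AmSim, MapReduceAmSim, StreamAmSim) are illustrative stand-ins
// for YARN's AMSimulator hierarchy, not the actual patch classes.
public class AmRegistrySketch {

  // Base type standing in for an AM simulator.
  public interface AmSim {
    String jobType();
  }

  public static class MapReduceAmSim implements AmSim {
    public String jobType() { return "mapreduce"; }
  }

  public static class StreamAmSim implements AmSim {
    public String jobType() { return "stream"; }
  }

  // Job-type string -> simulator class, so new AM types can be added
  // without touching the runner.
  private static final Map<String, Class<? extends AmSim>> REGISTRY =
      new HashMap<>();
  static {
    REGISTRY.put("mapreduce", MapReduceAmSim.class);
    REGISTRY.put("stream", StreamAmSim.class);
  }

  // Look the type up and instantiate reflectively, rather than
  // hard-coding a single default job type.
  public static AmSim create(String type) {
    Class<? extends AmSim> clazz = REGISTRY.get(type);
    if (clazz == null) {
      throw new IllegalArgumentException("unknown AM type: " + type);
    }
    try {
      return clazz.getDeclaredConstructor().newInstance();
    } catch (ReflectiveOperationException e) {
      throw new IllegalStateException(e);
    }
  }
}
```

With such a registry, the runner only needs the job-type string from the trace to pick the right simulator.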
> Previously, the AM set up in SLSRunner had the MRAMSimulator type hard-coded; for example, startAMFromSynthGenerator() calls this:
>
> {code:java}
> runNewAM(SLSUtils.DEFAULT_JOB_TYPE, user, jobQueue, oldJobId,
>     jobStartTimeMS, jobFinishTimeMS, containerList, reservationId,
>     job.getDeadline(), getAMContainerResource(null));
> {code}
> where SLSUtils.DEFAULT_JOB_TYPE = "mapreduce"
>
> The container set up (from https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/SLSRunner.java) was also only suitable for mapreduce:
>
> {code:java}
> // map tasks
> for (int i = 0; i < job.getNumberMaps(); i++) {
>   TaskAttemptInfo tai = job.getTaskAttemptInfo(TaskType.MAP, i, 0);
>   RMNode node =
>       nmMap.get(keyAsArray.get(rand.nextInt(keyAsArray.size())))
>           .getNode();
>   String hostname = "/" + node.getRackName() + "/" + node.getHostName();
>   long containerLifeTime = tai.getRuntime();
>   Resource containerResource =
>       Resource.newInstance((int) tai.getTaskInfo().getTaskMemory(),
>           (int) tai.getTaskInfo().getTaskVCores());
>   containerList.add(new ContainerSimulator(containerResource,
>       containerLifeTime, hostname, DEFAULT_MAPPER_PRIORITY, "map"));
> }
>
> // reduce tasks
> for (int i = 0; i < job.getNumberReduces(); i++) {
>   TaskAttemptInfo tai = job.getTaskAttemptInfo(TaskType.REDUCE, i, 0);
>   RMNode node =
>       nmMap.get(keyAsArray.get(rand.nextInt(keyAsArray.size())))
>           .getNode();
>   String hostname = "/" + node.getRackName() + "/" + node.getHostName();
>   long containerLifeTime = tai.getRuntime();
>   Resource containerResource =
>       Resource.newInstance((int) tai.getTaskInfo().getTaskMemory(),
>           (int) tai.getTaskInfo().getTaskVCores());
>   containerList.add(
>       new ContainerSimulator(containerResource, containerLifeTime,
>           hostname, DEFAULT_REDUCER_PRIORITY, "reduce"));
> }
> {code}
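The two loops above differ only in their task count, priority, and type string, so they can collapse into one generic loop over task groups. A minimal sketch under that assumption (TaskSpec and ContainerSpec are hypothetical stand-ins for TaskAttemptInfo and ContainerSimulator, not the actual patch classes):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Illustrative sketch: one generic loop replaces the duplicated
// map/reduce container-setup loops.
public class GenericContainerSetup {

  // One group of identical tasks: how many, how long, at what priority,
  // and under which type label ("map", "reduce", "stream", ...).
  public static class TaskSpec {
    final int count;
    final long runtimeMs;
    final int priority;
    final String type;

    public TaskSpec(int count, long runtimeMs, int priority, String type) {
      this.count = count;
      this.runtimeMs = runtimeMs;
      this.priority = priority;
      this.type = type;
    }
  }

  // Stand-in for a simulated container.
  public static class ContainerSpec {
    final long lifeTimeMs;
    final String hostname;
    final int priority;
    final String type;

    public ContainerSpec(long lifeTimeMs, String hostname, int priority,
        String type) {
      this.lifeTimeMs = lifeTimeMs;
      this.hostname = hostname;
      this.priority = priority;
      this.type = type;
    }
  }

  // Every task group, whatever its type, goes through the same loop:
  // pick a random host, then emit one container per task.
  public static List<ContainerSpec> buildContainers(List<TaskSpec> specs,
      List<String> hosts, Random rand) {
    List<ContainerSpec> containers = new ArrayList<>();
    for (TaskSpec spec : specs) {
      for (int i = 0; i < spec.count; i++) {
        String hostname = hosts.get(rand.nextInt(hosts.size()));
        containers.add(new ContainerSpec(spec.runtimeMs, hostname,
            spec.priority, spec.type));
      }
    }
    return containers;
  }
}
```

A map/reduce job then becomes two TaskSpec entries, while a new AM type just supplies its own list of task groups.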
> In addition, the syn.json format supported only mapreduce (the parameters were very specific: mtime, rtime, mtasks, rtasks, etc.).
>
> This patch aims to introduce a new syn.json format that can describe generic jobs, and the SLS set-up required to support the synth generation of generic jobs. See syn_generic.json for an equivalent of the previous syn.json in the new format.
>
> Using the new generic format, we also describe a StreamAMSimulator, which simulates a long-running streaming service that maintains N containers for the lifetime of the AM. See syn_stream.json.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org