[ https://issues.apache.org/jira/browse/MAPREDUCE-2198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12934739#action_12934739 ]

Scott Chen commented on MAPREDUCE-2198:
---------------------------------------

Hey M.C.

Yes, we can set a higher slot limit on the TT and let the scheduler manage the slots.

bq. Are we running two TT's on each node, that talk to different JT's, and "migrating" slots from one to the other?
Yes. The motivation here is that when deploying a new JT and TT, we need to 
restart the cluster and we lose all the running jobs.
This can be solved in the way you described.

Another use case is that people can experiment to find the best slot settings 
using the CLI without restarting the cluster.
Right now, if you want to change the number of slots, you have to change the 
conf on every TT and restart.
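
For reference, that static configuration is the per-TT slot maximums in 
mapred-site.xml (the standard 0.2x property names; the values below are just 
an example). They are only read when the TT starts, which is why any change 
today means a restart:

{code:xml}
<!-- mapred-site.xml on each TaskTracker: static slot counts,
     picked up only when the TT (re)starts -->
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>8</value>
</property>
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>4</value>
</property>
{code}

With this change those TT maximums would just be an upper bound, and the 
effective per-node slot counts would be managed by the scheduler.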

Scott

> Allow FairScheduler to control the number of slots on each TaskTracker
> ----------------------------------------------------------------------
>
>                 Key: MAPREDUCE-2198
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2198
>             Project: Hadoop Map/Reduce
>          Issue Type: New Feature
>          Components: contrib/fair-share
>    Affects Versions: 0.22.0
>            Reporter: Scott Chen
>            Assignee: Scott Chen
>             Fix For: 0.22.0
>
>
> We can set the number of slots on the TaskTracker to be high and let 
> FairScheduler handle the slots.
> This approach allows us to change the number of slots on each node 
> dynamically.
> The administrator can change the number of slots with a CLI tool.
> One use case of this is upgrading MapReduce.
> Instead of restarting the cluster, we can run the new MapReduce on the same 
> cluster and use the CLI tool to gradually migrate the slots.
> This way we don't lose the progress of the jobs that are already running.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
