[ https://issues.apache.org/jira/browse/YARN-371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13571787#comment-13571787 ]

Sandy Ryza commented on YARN-371:
---------------------------------

Bobby,

I believe that requests could be far more compact than 150 bytes per task.  
Asks could be broken up into a list of "locations" and a list of 
"combinations", where each location is a rack or node string and each 
combination is a list of indexes into the locations array, along with an 
integer giving the number of requests with that combination.  As the 
locations array would not exceed 65,000 entries, we could consider using 
shorts for the indexes.  This would lead to at most something like 15 extra 
bytes per task, and that is the worst case, in which no combinations are 
shared between tasks.  As Arun mentions above, if an application is making a 
request large enough to saturate the entire cluster, it would make more sense 
to make only rack-level requests, which would require barely any increase in 
request size.
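
To make the encoding concrete, here is a minimal sketch in Java.  The class 
and field names are hypothetical, purely for illustration; nothing below is 
a proposed patch:

{code:java}
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the compact encoding described above.
public class CompactAskList {

  // Each entry is a node or rack name, or "*".  Since this list should not
  // exceed ~65,000 entries, a short suffices to index into it.
  private final List<String> locations = new ArrayList<String>();

  // One entry per distinct set of acceptable locations; all tasks that want
  // exactly the same set share a single entry.
  private final List<Combination> combinations = new ArrayList<Combination>();

  public static class Combination {
    // Indexes into the locations list, e.g. {node, rack, *} for a
    // node-local task: three 2-byte shorts.
    short[] locationIndexes;
    // Number of outstanding requests with exactly this combination.
    int numRequests;
  }
}
{code}

In the worst case, where no two tasks share a combination, a node-local task 
adds one Combination: three 2-byte indexes plus a 4-byte count, around 10 
bytes before serialization overhead, which is roughly where the 15-byte 
figure above comes from.  The location strings themselves are amortized 
across every task that references them.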

Arun,

The per-application number we are talking about, whether 1.5 MB or 15 MB, is 
a maximum, reached only when an application requests almost the entire 
cluster.  I admit I don't have a ton of experience with the workloads that 
clusters see in practice, but is it typical to run thousands of jobs this 
large?
                
> Resource-centric compression in AM-RM protocol limits scheduling
> ----------------------------------------------------------------
>
>                 Key: YARN-371
>                 URL: https://issues.apache.org/jira/browse/YARN-371
>             Project: Hadoop YARN
>          Issue Type: Improvement
>          Components: api, resourcemanager, scheduler
>    Affects Versions: 2.0.2-alpha
>            Reporter: Sandy Ryza
>            Assignee: Sandy Ryza
>
> Each AMRM heartbeat consists of a list of resource requests. Currently, each 
> resource request consists of a container count, a resource vector, and a 
> location, which may be a node, a rack, or "*". When an application wishes to 
> request that a task run in any of multiple locations, it must issue a 
> separate request for each location.  This means that for a node-local task, 
> it must issue three requests: one at the node level, one at the rack level, 
> and one with * (any). 
> These requests are not linked with each other, so when a container is 
> allocated for one of them, the RM has no way of knowing which others to get 
> rid of. When a node-local container is allocated, this is handled by 
> decrementing the number of requests on that node's rack and in *. But when 
> the scheduler allocates a task with a node-local request on its rack, the 
> request on the node is left there.  This can cause delay-scheduling to try to 
> assign a container on a node that nobody cares about anymore.
> Additionally, unless I am missing something, the current model does not allow 
> requests for containers only on a specific node or specific rack. While 
> this is not a use case for MapReduce currently, it is conceivable that it 
> would be useful to support in the future, for example to schedule 
> long-running services that persist state in a particular location, or for 
> applications that care more about data locality than about latency.
> Lastly, the ability to understand which requests are for the same task may 
> allow future schedulers to make more intelligent scheduling decisions, as 
> well as permit a more exact understanding of request load.
> I would propose the tweak of allowing a single ResourceRequest to 
> encapsulate all the location information for a task.  So instead of just a 
> single location, a ResourceRequest would contain an array of locations, 
> including nodes that it would be happy with, racks that it would be happy 
> with, and possibly * (a sketch follows below).  Side effects of this change 
> would be a reduction in the amount of data that needs to be transferred in 
> a heartbeat, as well as in the RM's memory footprint, because what used to 
> be separate requests for the same task would now share common data.
> While this change breaks compatibility, if it is going to happen, it makes 
> sense to do it now, before YARN becomes beta.
