[ https://issues.apache.org/jira/browse/MAPREDUCE-5124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16211212#comment-16211212 ]

Peter Bacsko commented on MAPREDUCE-5124:
-----------------------------------------

[~jlowe] yesterday I had a discussion with [~miklos.szeg...@cloudera.com] about 
the possible implementations of this.

We came up with different solutions which I'll try to summarize here.

* Throttling: we try to determine whether the event for a particular task's 
previous status update has been processed yet. If it hasn't, we don't process 
the new status update. We can examine the event queue of the AsyncDispatcher 
and try to find an update event that belongs to the same attempt ID. I can 
think of two approaches here:
*# Server-side throttling: block inside MRAppMaster until the status update is 
fully processed. If I'm not mistaken, this completely blocks the current RPC 
server thread, so too many status updates in parallel might make it impossible 
to process other RPC calls.
*# Client-side throttling: we return without dispatching a 
{{TaskAttemptStatusUpdateEvent}} to the event queue, but set a field in 
{{AMFeedBack}} indicating that the AM is busy. The client checks this flag in 
the response; if it is set, the client doubles its status update interval, 
resulting in fewer status update calls to the AM.
* Use the deferred RPC response mechanism implemented in HADOOP-11552. This 
means we have to retrieve the callback object from the current RPC calling 
context and pass it along until the full update logic has executed. This is 
doable, although one event might create another event, and it's not entirely 
clear when the operation can be considered finished. Getting rid of some of 
the asynchronicity could help, although I'm not sure whether that kind of 
change is dangerous.
* Let the AM drive the whole status update mechanism as explained by Miklos. 
This looks too complicated and the change would be too big, at least for this 
JIRA.
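To make the client-side throttling idea more concrete, here is a minimal sketch of the backoff policy on the task side. All names and the concrete intervals ({{BASE_INTERVAL_MS}}, the doubling, the cap) are illustrative assumptions, not the actual MapReduce API; the real client would read the busy flag from the {{AMFeedBack}} response it already receives.

```java
// Hypothetical sketch of client-side throttling. The class and
// method names, the base interval, and the cap are all illustrative
// assumptions; only the doubling-on-busy policy comes from the
// proposal above.
public class StatusUpdateBackoff {
    static final long BASE_INTERVAL_MS = 3000;  // assumed normal heartbeat
    static final long MAX_INTERVAL_MS = 60000;  // cap so we never stop entirely

    private long intervalMs = BASE_INTERVAL_MS;

    /** Called after each status update with the AM's "busy" flag. */
    public long nextInterval(boolean amBusy) {
        if (amBusy) {
            // AM signalled back-pressure: double the wait, up to a cap.
            intervalMs = Math.min(intervalMs * 2, MAX_INTERVAL_MS);
        } else {
            // AM has caught up: return to the normal cadence.
            intervalMs = BASE_INTERVAL_MS;
        }
        return intervalMs;
    }
}
```

With this policy a task that keeps seeing the busy flag backs off geometrically, and a single non-busy response restores the default interval.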

I haven't deeply weighed the pros and cons of the proposed solutions yet. 
Personally, I like the client-side throttling and the deferred RPC callback.

If we go for throttling, we also have to think about how we decide when to 
push back on the client so that it sends updates less frequently. We can check 
the size of the current event queue, but Miklos had some convincing arguments 
against doing that. We can look for already existing 
{{TaskAttemptStatusUpdateEvent}}s (what I suggested above), but that means 
iterating over the queue, which is more expensive. I can't see a simple, 
silver-bullet solution right now.
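One way to avoid iterating the queue would be to keep a side set of attempt IDs that currently have an unprocessed update event, so the check becomes O(1). This is only a sketch of that idea, not existing Hadoop code; the class and method names are hypothetical, and wiring it into the RPC handler and the dispatcher is left open.

```java
// Illustrative sketch (not existing Hadoop code): track which
// attempts already have an unprocessed status-update event queued.
// The RPC thread tests-and-sets the flag in O(1) instead of
// scanning the AsyncDispatcher event queue; the dispatcher clears
// it once the event has been handled.
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class PendingUpdateTracker {
    private final Set<String> pending = ConcurrentHashMap.newKeySet();

    /** RPC side: returns true if a new event should be dispatched. */
    public boolean tryMarkPending(String attemptId) {
        // Set.add is atomic here; false means an update is already queued.
        return pending.add(attemptId);
    }

    /** Dispatcher side: call once the update has been processed. */
    public void clearPending(String attemptId) {
        pending.remove(attemptId);
    }
}
```

The trade-off is an extra structure that must be kept consistent with the queue, which may be exactly the kind of coupling Miklos argued against.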

> AM lacks flow control for task events
> -------------------------------------
>
>                 Key: MAPREDUCE-5124
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5124
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: mr-am
>    Affects Versions: 2.0.3-alpha, 0.23.5
>            Reporter: Jason Lowe
>            Assignee: Haibo Chen
>         Attachments: MAPREDUCE-5124-proto.2.txt, MAPREDUCE-5124-prototype.txt
>
>
> The AM does not have any flow control to limit the incoming rate of events 
> from tasks.  If the AM is unable to keep pace with the rate of incoming 
> events for a sufficient period of time then it will eventually exhaust the 
> heap and crash.  MAPREDUCE-5043 addressed a major bottleneck for event 
> processing, but the AM could still get behind if it's starved for CPU and/or 
> handling a very large job with tens of thousands of active tasks.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
