[ 
https://issues.apache.org/jira/browse/YARN-2884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14710140#comment-14710140
 ] 

Subru Krishnan commented on YARN-2884:
--------------------------------------

[~jlowe], let me try to answer your question: this approach will not affect 
applications that ship their own configs. To run MapReduce in our cluster where 
AMRMProxy is enabled, the only change we made was to update the 
_resourcemanager.scheduler.address_ value to point to the _amrmproxy.address_. 
We considered this acceptable since AMRMProxy (if enabled) is the scheduler 
proxy for the apps, and it was quite easy to accomplish because we only had to 
update the MapReduce config on our gateway machines, from where MapReduce jobs 
are submitted. Rolling upgrade reliability, as you rightly pointed out, is 
maintained because MapReduce configs continue to be independent of node configs. 
FYI, we also validated with Spark, which exhibits the same characteristics.
Ideally I agree that application configs should be decoupled from the server-side 
configs for multiple reasons (rolling upgrades, security, etc.), but 
unfortunately many applications (REEF, Distributed Shell, etc.) depend on the 
node configs today. So in summary, the HADOOP_CONF_DIR modification will address 
applications that pick up configs from nodes without breaking self-contained 
applications, as the modified HADOOP_CONF_DIR does not show up on the latter's 
classpath.
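For reference, the redirect described above amounts to a client-side config override. This is only a sketch: the fully qualified property name `yarn.resourcemanager.scheduler.address` follows standard YARN naming, and the `localhost:8049` value is an assumption based on the AMRMProxy listening locally on each NM at its default port.

```xml
<!-- yarn-site.xml on the gateway/client side only, not on cluster nodes. -->
<!-- Points the AM's scheduler client at the local AMRMProxy instead of the RM. -->
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <!-- Assumed AMRMProxy endpoint; the port should match the configured
       yarn.nodemanager.amrmproxy.address on the NMs. -->
  <value>localhost:8049</value>
</property>
```

With this override in place, an unmodified MapReduce (or Spark) AM transparently talks to the proxy, while applications shipping their own configs are unaffected.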
 

> Proxying all AM-RM communications
> ---------------------------------
>
>                 Key: YARN-2884
>                 URL: https://issues.apache.org/jira/browse/YARN-2884
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: nodemanager, resourcemanager
>            Reporter: Carlo Curino
>            Assignee: Kishore Chaliparambil
>         Attachments: YARN-2884-V1.patch, YARN-2884-V2.patch, 
> YARN-2884-V3.patch, YARN-2884-V4.patch, YARN-2884-V5.patch, 
> YARN-2884-V6.patch, YARN-2884-V7.patch, YARN-2884-V8.patch, YARN-2884-V9.patch
>
>
> We introduce the notion of an RMProxy, running on each node (or once per 
> rack). Upon start, the AM is forced (via tokens and configuration) to direct 
> all its requests to a new service running on the NM that provides a proxy to 
> the central RM. 
> This gives us a place to:
> 1) perform distributed scheduling decisions
> 2) throttle mis-behaving AMs
> 3) mask the access to a federation of RMs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)