[ https://issues.apache.org/jira/browse/YARN-7935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16418029#comment-16418029 ]

Mridul Muralidharan commented on YARN-7935:
-------------------------------------------

[~eyang] I think there is some confusion here.
Spark does not require user-defined networks - I don't think it was ever stated 
that this was required.

Taking a step back:

With "host" networking mode, we get it to work without any changes to the 
application code at all - giving us all the benefits of isolation without any 
loss in existing functionality (modulo specifying the env variables required 
ofcourse).
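
For concreteness, the kind of env wiring I mean is something like the sketch 
below - a client-side snippet that asks the Docker runtime for a container on 
the host network. The image name is just a placeholder, and the exact 
YARN_CONTAINER_RUNTIME_* variable names are per my reading of the Docker 
runtime docs, so treat them as assumptions to double-check.

{code:java}
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
import org.apache.hadoop.yarn.util.Records;

public class HostNetworkLaunchContext {
  public static ContainerLaunchContext build() {
    // Env variables consumed by the Docker container runtime; the names
    // below follow the runtime docs and should be verified against them.
    Map<String, String> env = new HashMap<>();
    env.put("YARN_CONTAINER_RUNTIME_TYPE", "docker");
    env.put("YARN_CONTAINER_RUNTIME_DOCKER_IMAGE", "example/spark:latest"); // placeholder image
    env.put("YARN_CONTAINER_RUNTIME_DOCKER_CONTAINER_NETWORK", "host");

    // Attach the env to the container launch context submitted to YARN.
    ContainerLaunchContext ctx = Records.newRecord(ContainerLaunchContext.class);
    ctx.setEnvironment(env);
    return ctx;
  }
}
{code}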

When bridge/overlay/user-defined networks etc. are used, the container hostname 
passed to the Spark AM during allocation is that of the NodeManager, and not 
the actual hostname used inside the Docker container.
This patch exposes the container hostname as an env variable - just as we 
already expose other container- and node-specific env variables to the 
container (CONTAINER_ID, NM_HOST, etc).
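
To illustrate what a consumer could do with it (a rough sketch only - 
CONTAINER_HOSTNAME is the variable added by this patch; the fallback order and 
the socket bind are purely illustrative):

{code:java}
import java.net.InetAddress;
import java.net.ServerSocket;

public class BindToContainerHostname {
  public static void main(String[] args) throws Exception {
    // Prefer the hostname injected for the Docker container; fall back to
    // NM_HOST (the NodeManager's hostname), e.g. under "host" networking,
    // and finally to whatever the JVM can resolve locally.
    String hostname = System.getenv("CONTAINER_HOSTNAME");
    if (hostname == null || hostname.isEmpty()) {
      hostname = System.getenv("NM_HOST");
    }
    if (hostname == null || hostname.isEmpty()) {
      hostname = InetAddress.getLocalHost().getHostName();
    }

    // Bind a listener to the resolved hostname so the address this process
    // advertises (e.g. back to an AM) is actually reachable by peers.
    try (ServerSocket server =
             new ServerSocket(0, 50, InetAddress.getByName(hostname))) {
      System.out.println("Listening on " + server.getLocalSocketAddress());
    }
  }
}
{code}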

Do you see any concern with exposing this variable? I want to make sure I am 
not missing something here.

What Spark (or any other application) does with this variable is its own 
implementation detail; I can go into the details of why this is needed for 
Spark specifically if required, but that might digress from this JIRA.


> Expose container's hostname to applications running within the docker 
> container
> -------------------------------------------------------------------------------
>
>                 Key: YARN-7935
>                 URL: https://issues.apache.org/jira/browse/YARN-7935
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: yarn
>            Reporter: Suma Shivaprasad
>            Assignee: Suma Shivaprasad
>            Priority: Major
>         Attachments: YARN-7935.1.patch, YARN-7935.2.patch, YARN-7935.3.patch
>
>
> Some applications (like Spark) need to bind to the container's hostname, 
> which differs from the NodeManager's hostname (NM_HOST, available as an env 
> variable during container launch) when the container is launched through the 
> Docker runtime. The container's hostname can be exposed to applications via 
> an env variable CONTAINER_HOSTNAME. Another potential candidate is the 
> container's IP, but this can be addressed in a separate JIRA.



