Hi Margus,
Appreciate your help!
It seems this parameter is related to the cgroups functionality.
I am using CDH without Kerberos, and I set the parameter:
yarn.nodemanager.container-executor.class=org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor

Then I ran the Spark job again and hit the problem below. Is there anything I missed?
Thanks again!
... ...
Diagnostics: Application application_1516614010938_0003 initialization
failed (exitCode=255) with output: main : command provided 0
main : run as user is nobody
main : requested yarn user is ses_test
Can't create directory
/data/yarn/nm/usercache/test_user/appcache/application_1516614010938_0003 -
Permission denied
Can't create directory
/data01/yarn/nm/usercache/test_user/appcache/application_1516614010938_0003
- Permission denied
Did not create any app directories
... ...
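The "run as user is nobody" line suggests the LinuxContainerExecutor is running in nonsecure (non-Kerberos) mode, where it launches containers as a fixed local user (default "nobody") instead of the submitting user, so it cannot write into usercache directories owned by the real user. A sketch of yarn-site.xml settings that may help; the property names are from the Hadoop docs, but the group value is an assumption for this cluster:

```xml
<!-- Sketch only: adjust values for your cluster. -->
<property>
  <name>yarn.nodemanager.container-executor.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
</property>
<property>
  <!-- Group shared by the NodeManager and the container-executor binary; "hadoop" is an assumption. -->
  <name>yarn.nodemanager.linux-container-executor.group</name>
  <value>hadoop</value>
</property>
<property>
  <!-- In nonsecure mode, run containers as the submitting user instead of the
       fixed local user (yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user,
       which defaults to "nobody"). -->
  <name>yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users</name>
  <value>false</value>
</property>
```

It may also be necessary to clear stale usercache directories on each NodeManager (e.g. under /data/yarn/nm/usercache/), since app directories created earlier under a different user can keep triggering the "Permission denied" errors after the executor class changes.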



2018-01-22 15:36 GMT+08:00 Margusja <mar...@roo.ee>:

> Hi
>
> One way to get this is to use the YARN configuration parameter
> yarn.nodemanager.container-executor.class.
> By default it is
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.
>
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor gives
> you the user who executes the script.
>
> Br
> Margus
>
>
>
> On 22 Jan 2018, at 09:28, sd wang <pingwang1...@gmail.com> wrote:
>
> Hi Advisers,
> When I submit a Spark job in YARN cluster mode, the job is executed by the
> "yarn" user. Are there any parameters that can change this user? I tried
> setting HADOOP_USER_NAME, but it did not work. I'm using Spark 2.2.
> Thanks for any help!
>
>
>
