[ https://issues.apache.org/jira/browse/HIVE-15767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16090171#comment-16090171 ]

Aihua Xu commented on HIVE-15767:
---------------------------------

[~gezapeti] Logically it seems correct to set the proper 
{{mapreduce.job.credentials.binary}} and pass it to Spark, and MR is doing the 
same thing. Can you find out why it makes a difference when Oozie calls 
HiveCLI with MR vs. Spark actions?
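
For reference, this is roughly the MR-side behavior being referred to: the 
Oozie launcher writes its delegation tokens to a container-local file and 
points {{mapreduce.job.credentials.binary}} at it, and the MR submission path 
merges that file into the job's credentials. The sketch below is only an 
illustration; the class and method names are made up, and only the calls to 
{{Credentials.readTokenStorageFile}} and {{UserGroupInformation.addCredentials}} 
are the real Hadoop APIs.

{code:java}
import java.io.File;
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.UserGroupInformation;

// Illustrative sketch only: mirrors what TokenCache.mergeBinaryTokens does on the MR side.
public class BinaryTokenMergeSketch {
  public static void mergeBinaryTokens(Configuration conf) throws IOException {
    String binaryTokenFile = conf.get("mapreduce.job.credentials.binary");
    if (binaryTokenFile == null) {
      return; // property not set; nothing to merge (normal case outside Oozie)
    }
    // Read the serialized tokens the Oozie launcher wrote into its container.
    Credentials launcherCreds =
        Credentials.readTokenStorageFile(new File(binaryTokenFile), conf);
    // Merge them into the current user's credentials so the submitted job can
    // authenticate against HDFS, the metastore, etc. on a secure cluster.
    UserGroupInformation.getCurrentUser().addCredentials(launcherCreds);
  }
}
{code}

The catch is that the path in {{mapreduce.job.credentials.binary}} is only 
valid inside the launcher's own container, which would explain why the same 
read fails once it happens in a different container or on a different node.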



> Hive On Spark is not working on secure clusters from Oozie
> ----------------------------------------------------------
>
>                 Key: HIVE-15767
>                 URL: https://issues.apache.org/jira/browse/HIVE-15767
>             Project: Hive
>          Issue Type: Bug
>          Components: Spark
>    Affects Versions: 1.2.1, 2.1.1
>            Reporter: Peter Cseh
>            Assignee: Peter Cseh
>         Attachments: HIVE-15767-001.patch, HIVE-15767-002.patch, 
> HIVE-15767.1.patch
>
>
> When a HiveAction is launched from Oozie with Hive On Spark enabled, we're 
> getting errors:
> {noformat}
> Caused by: java.io.IOException: Exception reading 
> file:/yarn/nm/usercache/yshi/appcache/application_1485271416004_0022/container_1485271416004_0022_01_000002/container_tokens
>         at 
> org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:188)
>         at 
> org.apache.hadoop.mapreduce.security.TokenCache.mergeBinaryTokens(TokenCache.java:155)
> {noformat}
> This is caused by passing the {{mapreduce.job.credentials.binary}} property 
> to the Spark configuration in RemoteHiveSparkClient.
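
To make the failure mode above concrete: the sketch below is a hypothetical 
illustration (the class, method, and {{spark.hadoop.*}} key used here are 
assumptions, not the actual RemoteHiveSparkClient code) of what forwarding the 
launcher's token-file path into the Spark configuration looks like, and why it 
breaks once the Spark driver or executors run in a different container.

{code:java}
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;

// Hypothetical sketch, not the actual Hive on Spark client code.
public class SparkConfPropagationSketch {
  static final String CREDENTIALS_BINARY = "mapreduce.job.credentials.binary";

  public static Map<String, String> buildSparkConf(Configuration hiveConf) {
    Map<String, String> sparkConf = new HashMap<>();
    String tokenFile = hiveConf.get(CREDENTIALS_BINARY);
    if (tokenFile != null) {
      // Forwarding the raw path reproduces the error on secure clusters: the
      // Spark side ends up calling Credentials.readTokenStorageFile on a path
      // like .../container_XXX/container_tokens, which only exists inside the
      // Oozie launcher's container.
      sparkConf.put("spark.hadoop." + CREDENTIALS_BINARY, tokenFile);
    }
    return sparkConf;
  }
}
{code}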



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
