[ https://issues.apache.org/jira/browse/SPARK-11182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16369897#comment-16369897 ]
Sumit Nigam commented on SPARK-11182:
-------------------------------------

Also, is adding --conf spark.hadoop.fs.hdfs.impl.disable.cache=true a workaround that can be used in the interim?

> HDFS Delegation Token will be expired when calling
> "UserGroupInformation.getCurrentUser.addCredentials" in HA mode
> ------------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-11182
>                 URL: https://issues.apache.org/jira/browse/SPARK-11182
>             Project: Spark
>          Issue Type: Bug
>          Components: YARN
>    Affects Versions: 1.5.1
>            Reporter: Liangliang Gu
>            Priority: Major
>
> In HA mode, DFSClient automatically generates an HDFS delegation token for each NameNode, and these tokens are not updated when Spark updates the credentials for the current user.
> Spark should update these tokens in order to avoid "Token Expired" errors.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
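[Editorial note] A minimal sketch of how the workaround proposed in the comment above would be passed to an application. Disabling the Hadoop FileSystem cache causes each FileSystem.get call to construct a fresh DFSClient, which picks up the current (renewed) delegation tokens instead of reusing a cached client holding expired ones. The master, JAR path, and class name below are placeholders, not taken from the issue:

```shell
# Hypothetical spark-submit invocation illustrating the suggested
# interim workaround from the comment. Only the --conf line comes
# from the source; everything else is a placeholder.
spark-submit \
  --master yarn \
  --class com.example.MyApp \
  --conf spark.hadoop.fs.hdfs.impl.disable.cache=true \
  my-app.jar
```

Note that disabling the FileSystem cache trades correctness for overhead: every FileSystem.get creates a new client connection, so this is an interim mitigation rather than a fix for the token-renewal bug itself.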