[ https://issues.apache.org/jira/browse/SPARK-21080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16059022#comment-16059022 ]
Lukasz Raszka commented on SPARK-21080:
---------------------------------------

Update: I think it might be a mistake on our side, and it was not an internal Spark HDFS access attempt that caused this, but ours. Sorry for all the confusion. Still, I agree it would be great to have the mentioned PR rebased. Meanwhile, I guess you can close this one.

> Workaround for HDFS delegation token expiry broken with some Hadoop versions
> ----------------------------------------------------------------------------
>
>                 Key: SPARK-21080
>                 URL: https://issues.apache.org/jira/browse/SPARK-21080
>             Project: Spark
>          Issue Type: Bug
>          Components: YARN
>    Affects Versions: 2.1.0
>        Environment: Spark 2.1.0 on YARN, Hadoop 2.7.3
>            Reporter: Lukasz Raszka
>            Priority: Minor
>
> We're being hit by SPARK-11182, whose core issue in HDFS has been fixed in more recent versions. It seems that the [workaround introduced by user SaintBacchus|https://github.com/apache/spark/commit/646366b5d2f12e42f8e7287672ba29a8c918a17d] doesn't work in newer versions of Hadoop. This seems to be caused by the property name moving from {{fs.hdfs.impl}} to {{fs.AbstractFileSystem.hdfs.impl}}, which happened around 2.7.0 or earlier. Taking this into account should make the workaround work again for more recent Hadoop versions, as sketched below.
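>
> For illustration, here is a minimal sketch of the lookup such a fix would need, assuming a plain Hadoop {{Configuration}}; the {{HdfsImplResolver}} object and {{resolveHdfsImpl}} method are made-up names for this example, not part of any actual patch:
> {code:scala}
> import org.apache.hadoop.conf.Configuration
>
> object HdfsImplResolver {
>   // Sketch only: prefer the property name used by Hadoop ~2.7+ and fall
>   // back to the pre-2.7 one, so the SPARK-11182 workaround can locate
>   // the HDFS implementation class on both lines of releases.
>   def resolveHdfsImpl(conf: Configuration): Option[String] = {
>     Option(conf.get("fs.AbstractFileSystem.hdfs.impl"))
>       .orElse(Option(conf.get("fs.hdfs.impl")))
>   }
> }
> {code}
> On a stock Hadoop 2.7.x configuration the first property typically resolves to {{org.apache.hadoop.fs.Hdfs}}, while pre-2.7 setups that only define {{fs.hdfs.impl}} are still covered by the fallback.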