[ https://issues.apache.org/jira/browse/HADOOP-12563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15134404#comment-15134404 ]

Steve Loughran commented on HADOOP-12563:
-----------------------------------------

SPARK-11265 and SPARK-12241 show how apps will benefit from a standard API
for loading tokens: today each service needs its own custom reflection code,
something which was inadvertently broken when the Hive version was upgraded.
To add support for new services (e.g. Accumulo), Spark would currently need
to patch its Client.scala class. That's not sustainable.
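To make the point concrete, here is a minimal sketch of what such a standard
loading path could look like. The class name TokenFileLoader is illustrative;
Credentials.readTokenStorageFile and UserGroupInformation.addCredentials are
existing Hadoop APIs. An application can pick up every token in a token file
without any service-specific reflection:

    import java.io.File;
    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.security.Credentials;
    import org.apache.hadoop.security.UserGroupInformation;
    import org.apache.hadoop.security.token.Token;

    public class TokenFileLoader {
        public static void main(String[] args) throws IOException {
            // Token file produced out of band, e.g. pointed to by
            // HADOOP_TOKEN_FILE_LOCATION or written by the proposed utility.
            File tokenFile = new File(args[0]);
            Configuration conf = new Configuration();

            // Every token in the file is loaded, whatever service issued it;
            // the application carries no per-service reflection code.
            Credentials creds = Credentials.readTokenStorageFile(tokenFile, conf);
            for (Token<?> token : creds.getAllTokens()) {
                System.out.println("kind=" + token.getKind()
                    + " service=" + token.getService());
            }

            // Make the tokens visible to subsequent Hadoop client calls.
            UserGroupInformation.getCurrentUser().addCredentials(creds);
        }
    }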

> Updated utility to create/modify token files
> --------------------------------------------
>
>                 Key: HADOOP-12563
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12563
>             Project: Hadoop Common
>          Issue Type: New Feature
>    Affects Versions: 3.0.0
>            Reporter: Allen Wittenauer
>            Assignee: Matthew Paduano
>         Attachments: HADOOP-12563.01.patch, HADOOP-12563.02.patch, 
> HADOOP-12563.03.patch, HADOOP-12563.04.patch, HADOOP-12563.05.patch, 
> HADOOP-12563.06.patch, example_dtutil_commands_and_output.txt, 
> generalized_token_case.pdf
>
>
> hdfs fetchdt is missing some critical features and is geared almost
> exclusively towards HDFS operations. Additionally, the token files that it
> creates use Java serialization, which is hard or impossible to work with
> from other languages. It should be replaced with a better utility in Hadoop
> Common that can read/write protobuf-based token files, is flexible enough
> to be used with other services, and offers key functionality such as append
> and rename. The old file format should still be supported for backward
> compatibility, but will be effectively deprecated.
> A follow-on JIRA will deprecate fetchdt.
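As a rough illustration of the append operation described above, the existing
Credentials API already supports merging token files at the library level.
TokenFileAppend and its append method below are hypothetical names; the
readTokenStorageFile, addAll and writeTokenStorageFile calls are existing
Hadoop APIs, and the on-disk format written is whatever the running Hadoop
version's Credentials class emits (the protobuf option is what this issue adds):

    import java.io.File;
    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.security.Credentials;

    public class TokenFileAppend {
        // Merge the tokens and secrets of two existing token files into one
        // output file; this is the kind of "append" the new utility would
        // expose on the command line.
        public static void append(File first, File second, Path out,
                Configuration conf) throws IOException {
            Credentials merged = Credentials.readTokenStorageFile(first, conf);
            Credentials extra = Credentials.readTokenStorageFile(second, conf);
            merged.addAll(extra);   // later entries overwrite duplicates
            merged.writeTokenStorageFile(out, conf);
        }
    }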


