[ https://issues.apache.org/jira/browse/HDFS-11956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16049350#comment-16049350 ]
Ewan Higgs commented on HDFS-11956:
-----------------------------------

I took a look and see that this fails when writing blocks, e.g.:

{code}
hadoop-2.6.5/bin/hdfs dfs -copyFromLocal hello.txt /
{code}

This comes from the fact that the {{BlockTokenIdentifier}} has the StorageID in there; but the StorageID is an optional field in the request which is new in 3.0, so a 2.x client doesn't pass it in. Defaulting to 'null' and allowing this would of course defeat the purpose of the {{BlockTokenIdentifier}}, so I think this should be fixed with a flag (e.g. {{dfs.block.access.token.storageid.enable}}) which defaults to false and makes the {{BlockTokenSecretManager}} only use the storage ID in the {{checkAccess}} call if it's enabled. This will allow old clients to work; but it won't allow the system to take advantage of new features enabled by using the storage ID in the write calls.

> Fix BlockToken compatibility with Hadoop 2.x clients
> ----------------------------------------------------
>
>                 Key: HDFS-11956
>                 URL: https://issues.apache.org/jira/browse/HDFS-11956
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 3.0.0-alpha4
>           Reporter: Andrew Wang
>           Assignee: Chris Douglas
>         Priority: Blocker
>
> Seems like HDFS-9807 broke backwards compatibility with Hadoop 2.x clients.
> When talking to a 3.0.0-alpha4 DN with security on:
> {noformat}
> 2017-06-06 23:27:22,568 WARN org.apache.hadoop.hdfs.server.datanode.DataNode:
> Block token verification failed: op=WRITE_BLOCK,
> remoteAddress=/172.28.208.200:53900, message=Block token with StorageIDs
> [DS-c0f24154-a39b-4941-93cd-5b8323067ba2] not valid for access with
> StorageIDs []
> {noformat}

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
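A minimal sketch of the gating proposed in the comment above, i.e. only enforcing StorageID checks in {{checkAccess}} when a config flag is on. The class shape, method signature, and containment check here are assumptions for illustration, not the actual HDFS-9807 code or the eventual patch:

{code}
import java.util.Arrays;

// Hypothetical sketch, not the real BlockTokenSecretManager: the flag
// (proposed as dfs.block.access.token.storageid.enable) defaults to false
// so that 2.x clients, which never send StorageIDs, still pass checkAccess.
public class StorageIdGatedChecker {

    private final boolean storageIdCheckEnabled;

    public StorageIdGatedChecker(boolean storageIdCheckEnabled) {
        this.storageIdCheckEnabled = storageIdCheckEnabled;
    }

    // tokenStorageIds: StorageIDs baked into the block token.
    // requestStorageIds: StorageIDs named in the incoming request (empty
    // for a 2.x client, which is exactly the failing case in the log).
    public void checkAccess(String[] tokenStorageIds, String[] requestStorageIds) {
        if (!storageIdCheckEnabled) {
            return; // compatibility mode: skip StorageID enforcement entirely
        }
        // Assumed check: every StorageID the request touches must be
        // covered by the token; an uncovered ID is rejected.
        if (!Arrays.asList(tokenStorageIds)
                .containsAll(Arrays.asList(requestStorageIds))) {
            throw new SecurityException("Block token with StorageIDs "
                + Arrays.toString(tokenStorageIds)
                + " not valid for access with StorageIDs "
                + Arrays.toString(requestStorageIds));
        }
    }
}
{code}

With the flag off, the empty-StorageID request from an old client is accepted; with it on, a request naming a StorageID absent from the token is rejected.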