[ https://issues.apache.org/jira/browse/HDFS-11026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15858097#comment-15858097 ]
Daryn Sharp commented on HDFS-11026:
------------------------------------

The first-byte trick is clever. I'd prefer an equality check against a definitive magic byte that either can't occur or represents something so large it can't/won't occur in practice. Perhaps something like -1; I haven't checked whether that's impossible for the varint. Out of curiosity, why do different JDKs throw an IOException or a RuntimeException during incorrect decoding? I'd expect a deterministic exception.

> Convert BlockTokenIdentifier to use Protobuf
> --------------------------------------------
>
>                 Key: HDFS-11026
>                 URL: https://issues.apache.org/jira/browse/HDFS-11026
>             Project: Hadoop HDFS
>          Issue Type: Task
>          Components: hdfs, hdfs-client
>    Affects Versions: 2.9.0, 3.0.0-alpha1
>            Reporter: Ewan Higgs
>            Assignee: Ewan Higgs
>             Fix For: 3.0.0-alpha3
>
>         Attachments: blocktokenidentifier-protobuf.patch, HDFS-11026.002.patch, HDFS-11026.003.patch, HDFS-11026.004.patch, HDFS-11026.005.patch
>
> {{BlockTokenIdentifier}} currently uses a {{DataInput}}/{{DataOutput}} (basically a {{byte[]}}) and manual serialization to get data into and out of the encrypted buffer (in {{BlockKeyProto}}). Other TokenIdentifiers (e.g. {{ContainerTokenIdentifier}}, {{AMRMTokenIdentifier}}) use Protobuf. {{BlockTokenIdentifier}} should use Protobuf as well so that it can be extended more easily and is consistent with the rest of the system.
> NB: Releasing this will require a version update, since 2.8.x won't be able to decipher {{BlockKeyProto.keyBytes}} from 2.8.y.
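To make the check discussed in the comment above concrete, here is a minimal Java sketch of the idea: peek at the first byte of the serialized identifier and compare it for equality against a fixed magic value that the legacy Writable varint encoding is assumed never to produce (-1 here, per the suggestion above, though that assumption is unverified). The class name, the {{PROTOBUF_MAGIC}} constant, and the method names are illustrative only and are not taken from the patch.

{code:java}
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

// Illustrative sketch only: selecting between the legacy Writable encoding
// and a protobuf encoding of a token identifier via a magic first byte.
public class TokenFormatSniffer {

  // Hypothetical sentinel. -1 is the value floated in the comment above,
  // assuming the legacy varint encoding can never start with this byte.
  private static final byte PROTOBUF_MAGIC = (byte) -1;

  // Equality check against a definitive magic byte, rather than a range
  // test on whatever first byte the legacy varint happens to emit.
  public static boolean looksLikeProtobuf(byte[] serialized) throws IOException {
    if (serialized.length == 0) {
      throw new IOException("empty token identifier");
    }
    return serialized[0] == PROTOBUF_MAGIC;
  }

  public static void readFields(byte[] serialized) throws IOException {
    DataInputStream in =
        new DataInputStream(new ByteArrayInputStream(serialized));
    if (looksLikeProtobuf(serialized)) {
      in.skipBytes(1);  // consume the magic byte
      // ... parse the remaining bytes as a protobuf message (omitted)
    } else {
      // ... fall back to the legacy Writable-based readFields path (omitted)
    }
  }
}
{code}

One appeal of the equality check in this sketch is that format selection never depends on catching whatever exception a particular JDK happens to throw for a malformed varint: a buffer that does not start with the magic byte always takes the legacy path.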