aajisaka commented on a change in pull request #3677: URL: https://github.com/apache/hadoop/pull/3677#discussion_r756842720
########## File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/DataTransferSaslUtil.java ##########

@@ -216,6 +217,8 @@ public static SaslPropertiesResolver getSaslPropertiesResolver(
       DataTransferEncryptorMessageProto.parseFrom(vintPrefixed(in));
   if (proto.getStatus() == DataTransferEncryptorStatus.ERROR_UNKNOWN_KEY) {
     throw new InvalidEncryptionKeyException(proto.getMessage());
+  } else if (proto.getStatus() == DataTransferEncryptorStatus.ERROR_ACCESS_TOKEN) {
+    throw new InvalidBlockTokenException(proto.getMessage());
   } else if (proto.getStatus() == DataTransferEncryptorStatus.ERROR) {
     throw new IOException(proto.getMessage());

Review comment:
   There are four occurrences of this same code. Would you create a method that handles the error, and reuse that method?

########## File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/datatransfer.proto ##########

@@ -38,6 +38,7 @@ message DataTransferEncryptorMessageProto {
     SUCCESS = 0;
     ERROR_UNKNOWN_KEY = 1;
     ERROR = 2;
+    ERROR_ACCESS_TOKEN = 3;

Review comment:
   If the server sends `ERROR_ACCESS_TOKEN` and the client is old (i.e. this patch is not applied), the client cannot understand `ERROR_ACCESS_TOKEN` and treats it as `SUCCESS (0)`, which is the default value in proto2. In my understanding, setting `SUCCESS` as the default was a mistake when the enum was created. Would you try adding an optional bool for the status?

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
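The first review comment asks for the four duplicated status checks to be folded into one reusable method. A minimal sketch of that refactoring is below; note that `checkSaslMessage`, the `Status` enum, and the nested exception classes are simplified stand-ins for Hadoop's `DataTransferEncryptorMessageProto` machinery, not the actual Hadoop API.

```java
import java.io.IOException;

public class SaslStatusCheck {
    // Stand-in for DataTransferEncryptorStatus in datatransfer.proto.
    enum Status { SUCCESS, ERROR_UNKNOWN_KEY, ERROR, ERROR_ACCESS_TOKEN }

    // Simplified stand-ins for Hadoop's exception classes of the same names.
    static class InvalidEncryptionKeyException extends IOException {
        InvalidEncryptionKeyException(String m) { super(m); }
    }
    static class InvalidBlockTokenException extends IOException {
        InvalidBlockTokenException(String m) { super(m); }
    }

    /**
     * Single place for the status check duplicated at four call sites:
     * throws the matching exception for an error status, no-op on SUCCESS.
     */
    static void checkSaslMessage(Status status, String message) throws IOException {
        switch (status) {
            case ERROR_UNKNOWN_KEY:
                throw new InvalidEncryptionKeyException(message);
            case ERROR_ACCESS_TOKEN:
                throw new InvalidBlockTokenException(message);
            case ERROR:
                throw new IOException(message);
            default: // SUCCESS: nothing to do
        }
    }
}
```

Each of the four call sites would then shrink to one line, e.g. `checkSaslMessage(proto.getStatus(), proto.getMessage());`.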
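The second review comment hinges on a proto2 behavior: a parser that does not recognize an enum number drops it, so reading the field yields the default value, which for this enum is `SUCCESS` (the first declared value). The hypothetical simulation below illustrates the failure mode in plain Java, without the protobuf runtime: an old client decoding the new wire value 3 silently sees `SUCCESS`.

```java
public class Proto2DefaultDemo {
    // An "old client" view of the enum: ERROR_ACCESS_TOKEN (= 3) is unknown.
    enum Status { SUCCESS, ERROR_UNKNOWN_KEY, ERROR }

    /**
     * Simulates proto2 enum decoding on an old client: a recognized wire
     * value maps to its constant, an unrecognized one falls back to the
     * field default, which here is the first declared value, SUCCESS.
     */
    static Status decode(int wireValue) {
        for (Status s : Status.values()) {
            if (s.ordinal() == wireValue) {
                return s;
            }
        }
        return Status.SUCCESS; // unknown value reads back as the default
    }
}
```

This is why the reviewer suggests an `optional bool` alongside the enum: a bool field is understood by old clients, so the server can signal "not a success" in a way that does not degrade to `SUCCESS` on the old code path.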