[ 
https://issues.apache.org/jira/browse/HDFS-16644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17772769#comment-17772769
 ] 

Zilong Zhu commented on HDFS-16644:
-----------------------------------

[~nishtha11shah] You are right. The change in 
[https://github.com/apache/hadoop/pull/5962/files] only helps the DataNode 
avoid crashes; it does not enable the 2.10 hadoop-client to read/write 
successfully. I believe we should change the client code on branch-2.10. As 
mentioned above, HDFS-13541 was merged into both branch-2.10 and branch-3.2 
and added the "handshakeMsg" field. But HDFS-6708 and HDFS-9807, which added 
the "storageTypes" and "storageIds" fields before HDFS-13541, were merged into 
branch-3.2 only. As a result, the 2.10 client mistakenly interprets 
"storageTypes" as "handshakeMsg", and in turn passes a wrong "handshakeMsg". 
This is where the real issue lies. 
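To illustrate the failure mode described above: in the protobuf wire format, only a field's number travels on the wire, so if two branches assign the same number to different fields, the receiver silently decodes the other field's bytes under the wrong name. A minimal sketch (field numbers and values are invented for illustration, not the actual HDFS .proto definitions):

```python
def encode_field(number: int, payload: bytes) -> bytes:
    """Encode one length-delimited protobuf field (wire type 2).
    Single-byte key/length is sufficient for this small example."""
    key = (number << 3) | 2  # key = field_number << 3 | wire_type
    return bytes([key, len(payload)]) + payload

def decode_field(data: bytes) -> tuple[int, bytes]:
    """Decode one length-delimited field into (field_number, payload)."""
    number = data[0] >> 3
    length = data[1]
    return number, data[2:2 + length]

# Hypothetical schemas: both branches ended up using number 5,
# but for different fields.
SCHEMA_3_2  = {5: "storageTypes"}   # branch-3.2 view of field 5
SCHEMA_2_10 = {5: "handshakeMsg"}   # branch-2.10 backport's view of field 5

wire = encode_field(5, b"DISK")     # a 3.2 peer sends a storage type
num, payload = decode_field(wire)
print(SCHEMA_3_2[num], payload)     # prints: storageTypes b'DISK'
print(SCHEMA_2_10[num], payload)    # prints: handshakeMsg b'DISK'
```

The 2.10 side has no way to detect the mix-up, because the bytes decode cleanly under either schema; the mismatch only surfaces later, when the misread value is used (here, as a bogus SASL handshake message).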

> java.io.IOException Invalid token in javax.security.sasl.qop
> ------------------------------------------------------------
>
>                 Key: HDFS-16644
>                 URL: https://issues.apache.org/jira/browse/HDFS-16644
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 3.2.1
>            Reporter: Walter Su
>            Priority: Major
>              Labels: pull-request-available
>
> deployment:
> server side: kerberos enabled cluster with jdk 1.8 and hdfs-server 3.2.1
> client side:
> I run hadoop fs -put to upload a test file, with a Kerberos ticket initialized 
> first, and use identical core-site.xml & hdfs-site.xml configurations.
>  using client ver 3.2.1, it succeeds.
>  using client ver 2.8.5, it succeeds.
>  using client ver 2.10.1, it fails. The client-side error info is:
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient: 
> SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = 
> false
> 2022-06-27 01:06:15,781 ERROR 
> org.apache.hadoop.hdfs.server.datanode.DataNode: 
> DataNode{data=FSDataset{dirpath='[/mnt/disk1/hdfs, /mnt/***/hdfs, 
> /mnt/***/hdfs, /mnt/***/hdfs]'}, localName='emr-worker-***.***:9866', 
> datanodeUuid='b1c7f64a-6389-4739-bddf-***', xmitsInProgress=0}:Exception 
> transfering block BP-1187699012-10.****-***:blk_1119803380_46080919 to mirror 
> 10.*****:9866
> java.io.IOException: Invalid token in javax.security.sasl.qop: D
>         at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.readSaslMessage(DataTransferSaslUtil.java:220)
> Once any ver 2.10.1 client connects to the HDFS server, the DataNode no longer 
> accepts any client connection; even a ver 3.2.1 client cannot connect. Within 
> a short time, all DataNodes reject client connections. 
> The problem persists even if I replace the DataNode with ver 3.3.0 or replace 
> Java with JDK 11.
> The problem disappears if I replace the DataNode with ver 3.2.0. I guess the 
> problem is related to HDFS-13541



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
