[ 
https://issues.apache.org/jira/browse/HDFS-16644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17769919#comment-17769919
 ] 

Nishtha Shah commented on HDFS-16644:
-------------------------------------

Updating the fix:

Though the above fix resolved reads and writes between the NameNode and 
DataNode themselves, the issue during the upgrade from 2.10 to 3.3.5 remains: 
reads/writes from HBase (still using the 2.10 hadoop-client) hit the same 
error until HBase itself is upgraded to 3.3.5. This makes a zero-downtime 
upgrade impossible.
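
For context on the quoted error, the javax.security.sasl.qop property only 
accepts the values "auth", "auth-int" and "auth-conf" (comma-separated, in 
preference order), so a raw byte like 'D' showing up there indicates the peer's 
handshake bytes were misread as a SASL negotiation message. A minimal sketch of 
the legal property values (the "hdfs"/"localhost" names and the no-op callback 
handler are placeholders, not the actual HDFS code path):

```java
import java.util.HashMap;
import java.util.Map;
import javax.security.sasl.Sasl;
import javax.security.sasl.SaslServer;

public class QopSketch {
    public static void main(String[] args) throws Exception {
        Map<String, String> props = new HashMap<>();
        // The only defined QoP values; anything else (e.g. a stray 'D')
        // is rejected as an invalid token during SASL negotiation.
        props.put(Sasl.QOP, "auth-conf,auth-int,auth");

        // Placeholder protocol/server names and a no-op callback handler,
        // for illustration only.
        SaslServer server = Sasl.createSaslServer(
                "DIGEST-MD5", "hdfs", "localhost", props,
                callbacks -> { });
        System.out.println(server.getMechanismName());
    }
}
```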

> java.io.IOException Invalid token in javax.security.sasl.qop
> ------------------------------------------------------------
>
>                 Key: HDFS-16644
>                 URL: https://issues.apache.org/jira/browse/HDFS-16644
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 3.2.1
>            Reporter: Walter Su
>            Priority: Major
>              Labels: pull-request-available
>
> deployment:
> server side: kerberos enabled cluster with jdk 1.8 and hdfs-server 3.2.1
> client side:
> I ran hadoop fs -put on a test file, with a Kerberos ticket initialized 
> first, using identical core-site.xml & hdfs-site.xml configurations.
>  with client ver 3.2.1, it succeeds.
>  with client ver 2.8.5, it succeeds.
>  with client ver 2.10.1, it fails. The client-side error info is:
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient: 
> SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = 
> false
> 2022-06-27 01:06:15,781 ERROR 
> org.apache.hadoop.hdfs.server.datanode.DataNode: 
> DataNode{data=FSDataset{dirpath='[/mnt/disk1/hdfs, /mnt/***/hdfs, 
> /mnt/***/hdfs, /mnt/***/hdfs]'}, localName='emr-worker-***.***:9866', 
> datanodeUuid='b1c7f64a-6389-4739-bddf-***', xmitsInProgress=0}:Exception 
> transfering block BP-1187699012-10.****-***:blk_1119803380_46080919 to mirror 
> 10.*****:9866
> java.io.IOException: Invalid token in javax.security.sasl.qop: D
>         at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.readSaslMessage(DataTransferSaslUtil.java:220)
> Once any ver 2.10.1 client connects to the HDFS server, the DataNode no 
> longer accepts any client connection; even a ver 3.2.1 client cannot 
> connect. Within a short time, all DataNodes reject client connections.
> The problem persists even if I replace the DataNode with ver 3.3.0 or 
> replace the JDK with JDK 11.
> The problem goes away if I replace the DataNode with ver 3.2.0. I guess the 
> problem is related to HDFS-13541



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
