[ https://issues.apache.org/jira/browse/HDFS-16644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17754437#comment-17754437 ]

Zilong Zhu edited comment on HDFS-16644 at 8/15/23 12:54 PM:
-------------------------------------------------------------

At the same time, we also identified another issue related to this. It looks 
like the META-INF services file for the BlockTokenIdentifier is in hadoop-hdfs.jar 
rather than hadoop-hdfs-client.jar. This prevents a client from decoding the 
identifier, because the service loader does not find the BlockTokenIdentifier 
class. 

This will result in HDFS-13541 not functioning properly on branch-2.10 as well. 
I created HDFS-17159 to track it.
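
For reference, a minimal sketch of how the client-side lookup fails (my own 
illustration, not code from any patch): Token#decodeIdentifier() resolves the 
identifier class through java.util.ServiceLoader, which only sees implementations 
listed in a META-INF/services/org.apache.hadoop.security.token.TokenIdentifier 
file on the classpath. The snippet below just checks whether BlockTokenIdentifier 
is discoverable; with only hadoop-hdfs-client.jar on the classpath it should 
print false, because the services entry ships in hadoop-hdfs.jar.

{code:java}
// Illustration only: checks whether the ServiceLoader used by
// Token#decodeIdentifier() can discover BlockTokenIdentifier from the
// current classpath. Expected to print "false" when only
// hadoop-hdfs-client.jar is present, since the
// META-INF/services/org.apache.hadoop.security.token.TokenIdentifier
// entry lives in hadoop-hdfs.jar.
import java.util.ServiceLoader;

import org.apache.hadoop.security.token.TokenIdentifier;

public class CheckBlockTokenIdentifier {
  public static void main(String[] args) {
    boolean found = false;
    for (TokenIdentifier id : ServiceLoader.load(TokenIdentifier.class)) {
      if ("org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier"
          .equals(id.getClass().getName())) {
        found = true;
        break;
      }
    }
    System.out.println("BlockTokenIdentifier visible to ServiceLoader: " + found);
  }
}
{code}

When that lookup fails, the client has no class to instantiate for the block 
token kind, which matches the behaviour we saw.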



> java.io.IOException Invalid token in javax.security.sasl.qop
> ------------------------------------------------------------
>
>                 Key: HDFS-16644
>                 URL: https://issues.apache.org/jira/browse/HDFS-16644
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 3.2.1
>            Reporter: Walter Su
>            Priority: Major
>
> deployment:
> server side: kerberos enabled cluster with jdk 1.8 and hdfs-server 3.2.1
> client side:
> I run the command hadoop fs -put with a test file, with a Kerberos ticket 
> initialized first, and with identical core-site.xml & hdfs-site.xml configurations.
>  using client ver 3.2.1, it succeeds.
>  using client ver 2.8.5, it succeeds.
>  using client ver 2.10.1, it fails. The client side error info is:
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient: 
> SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = 
> false
> 2022-06-27 01:06:15,781 ERROR 
> org.apache.hadoop.hdfs.server.datanode.DataNode: 
> DataNode{data=FSDataset{dirpath='[/mnt/disk1/hdfs, /mnt/***/hdfs, 
> /mnt/***/hdfs, /mnt/***/hdfs]'}, localName='emr-worker-***.***:9866', 
> datanodeUuid='b1c7f64a-6389-4739-bddf-***', xmitsInProgress=0}:Exception 
> transfering block BP-1187699012-10.****-***:blk_1119803380_46080919 to mirror 
> 10.*****:9866
> java.io.IOException: Invalid token in javax.security.sasl.qop: D
>         at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.readSaslMessage(DataTransferSaslUtil.java:220)
> Once any ver 2.10.1 client connects to the HDFS server, the DataNode no longer 
> accepts any client connection; even a ver 3.2.1 client cannot connect. Within a 
> short time, all DataNodes reject client connections. 
> The problem exists even if I replace the DataNode with ver 3.3.0 or replace Java 
> with JDK 11.
> The problem is fixed if I replace the DataNode with ver 3.2.0. I guess the 
> problem is related to HDFS-13541.


