[jira] [Commented] (HDFS-16644) java.io.IOException Invalid token in javax.security.sasl.qop

2023-10-03 Thread Nishtha Shah (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17771386#comment-17771386
 ] 

Nishtha Shah commented on HDFS-16644:
-

[~vagarychen], could you help with the above issue?
I tried reverting the changes related to HDFS-13541 and the upgrade worked. 
I suspect the cause may be https://issues.apache.org/jira/browse/HDFS-13699, 
but I am still unable to pinpoint the issue and find a fix.

> java.io.IOException Invalid token in javax.security.sasl.qop
> 
>
> Key: HDFS-16644
> URL: https://issues.apache.org/jira/browse/HDFS-16644
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.1
>Reporter: Walter Su
>Priority: Major
>  Labels: pull-request-available
>
> deployment:
> server side: kerberos enabled cluster with jdk 1.8 and hdfs-server 3.2.1
> client side:
> I run the command hadoop fs -put on a test file, with a Kerberos ticket 
> initialized first, and use identical core-site.xml & hdfs-site.xml configuration.
>  using client ver 3.2.1, it succeeds.
>  using client ver 2.8.5, it succeeds.
>  using client ver 2.10.1, it fails. The client side error info is:
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient: 
> SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = 
> false
> 2022-06-27 01:06:15,781 ERROR 
> org.apache.hadoop.hdfs.server.datanode.DataNode: 
> DataNode{data=FSDataset{dirpath='[/mnt/disk1/hdfs, /mnt/***/hdfs, 
> /mnt/***/hdfs, /mnt/***/hdfs]'}, localName='emr-worker-***.***:9866', 
> datanodeUuid='b1c7f64a-6389-4739-bddf-***', xmitsInProgress=0}:Exception 
> transfering block BP-1187699012-10.-***:blk_1119803380_46080919 to mirror 
> 10.*:9866
> java.io.IOException: Invalid token in javax.security.sasl.qop: D
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.readSaslMessage(DataTransferSaslUtil.java:220)
> Once any client of ver 2.10.1 connects to the HDFS server, the DataNode no 
> longer accepts any client connection; even a 3.2.1 client cannot connect. 
> For a short time, all DataNodes reject client connections. 
> The problem persists even if I replace the DataNode with ver 3.3.0 or replace 
> Java with JDK 11.
> The problem is fixed if I replace the DataNode with ver 3.2.0. I guess the 
> problem is related to HDFS-13541
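For context, the javax.security.sasl.qop property (the JDK's Sasl.QOP key) only accepts the tokens auth, auth-int, and auth-conf. A minimal, hypothetical sketch (not Hadoop's actual DataTransferSaslUtil parser) of why a stray byte such as "D" from a mismatched handshake is rejected:

```java
import javax.security.sasl.Sasl;
import java.util.Set;

public class QopCheck {
    // Valid tokens for the javax.security.sasl.qop property per the JDK docs.
    static final Set<String> VALID_QOP = Set.of("auth", "auth-int", "auth-conf");

    // Hypothetical validator mirroring the kind of check a SASL negotiation
    // must perform: any token outside the known set is invalid.
    static boolean isValidQop(String token) {
        return VALID_QOP.contains(token);
    }

    public static void main(String[] args) {
        System.out.println(Sasl.QOP);              // prints the property key itself
        System.out.println(isValidQop("auth-conf")); // a legal negotiated qop
        System.out.println(isValidQop("D"));         // garbage byte -> rejected
    }
}
```

A client and server that disagree on whether the stream is already encrypted would see exactly this kind of garbage where a qop token is expected.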



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16644) java.io.IOException Invalid token in javax.security.sasl.qop

2023-09-27 Thread Nishtha Shah (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17769919#comment-17769919
 ] 

Nishtha Shah commented on HDFS-16644:
-

Updating the fix:

Though the above fix helped resolve reads and writes between the NameNode and 
DataNode themselves, the issue during the upgrade from 2.10 to 3.3.5 remains: 
reads/writes from HBase (still using the 2.10 hadoop-client) face the same 
issue until it is upgraded to 3.3.5. This makes an upgrade without downtime 
impossible.







[jira] [Commented] (HDFS-16644) java.io.IOException Invalid token in javax.security.sasl.qop

2023-08-28 Thread Nishtha Shah (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17759581#comment-17759581
 ] 

Nishtha Shah commented on HDFS-16644:
-

We are also seeing this issue when upgrading a cluster from 2.10.1 to 3.3.5 
with Kerberos enabled, during the transition. Initially we saw the impact on 
HBase writes, but when we tried to manually copy some files from local to 
HDFS, that failed with the same exceptions.
The change in [https://github.com/apache/hadoop/pull/5962/files] helped us get 
past this issue.







[jira] [Assigned] (HDFS-15491) Journal transactions batched in sync metrics are missing from Journal Node JMX

2023-07-10 Thread Nishtha Shah (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nishtha Shah reassigned HDFS-15491:
---

Assignee: Nishtha Shah

> Journal transactions batched in sync metrics are missing from Journal Node 
> JMX  
> 
>
> Key: HDFS-15491
> URL: https://issues.apache.org/jira/browse/HDFS-15491
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: 3.1.1, hdfs, journal-node, qjm
>Affects Versions: 3.1.1
>Reporter: Mohit Saindane
>Assignee: Nishtha Shah
>Priority: Critical
>  Labels: newbie, patch, pull-request-available
> Fix For: 3.1.1
>
> Attachments: JournalNode metrics issue.png
>
>
> I have enabled HA in an HDP 3.1.5 multinode cluster and was trying to access 
> the REST response served by the Journal Node on HTTP port 8480, but per the [Apache 
> Hadoop 3.1.1 
> Documentation|https://hadoop.apache.org/docs/r3.1.1/hadoop-project-dist/hadoop-common/Metrics.html#JournalNode]
>  I found that some of the Journal Node specific metrics are missing (metrics 
> like NumTransactionsBatchedInSync_(60,300,3600)_sNumOps and 
> NumTransactionsBatchedInSync_(60,300,3600)_s_(50/75/90/95/99)_thPercentileLatencyMicros).
>  So I checked the _JournalMetrics.java_ code shipped in 
> _hadoop-hdfs-3.1.1.3.1.5.0-152-sources.jar_ as well as the [Hadoop source 
> code|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalMetrics.java]
>  and found that the following code is missing:
>  
> @Metric("Journal transactions batched in sync")
>  final MutableQuantiles[] numTransactionsBatchedInSync;
>  
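As a rough, hypothetical illustration (not the actual JournalMetrics code, which depends on Hadoop's metrics2 library), the quantile metric names cited above follow the MutableQuantiles pattern of a base name plus an interval, a NumOps counter, and per-percentile latency gauges:

```java
import java.util.ArrayList;
import java.util.List;

public class QuantileMetricNames {
    // Builds metric names matching the pattern of the missing JMX entries:
    // e.g. NumTransactionsBatchedInSync60sNumOps and
    // NumTransactionsBatchedInSync60s99thPercentileLatencyMicros.
    static List<String> metricNames(String base, int[] intervalsSec, int[] percentiles) {
        List<String> names = new ArrayList<>();
        for (int i : intervalsSec) {
            names.add(base + i + "sNumOps");
            for (int p : percentiles) {
                names.add(base + i + "s" + p + "thPercentileLatencyMicros");
            }
        }
        return names;
    }

    public static void main(String[] args) {
        // The intervals (60, 300, 3600 seconds) and percentiles (50/75/90/95/99)
        // are the ones listed in the issue description above.
        for (String n : metricNames("NumTransactionsBatchedInSync",
                new int[]{60, 300, 3600}, new int[]{50, 75, 90, 95, 99})) {
            System.out.println(n);
        }
    }
}
```

In Hadoop itself these names are generated by registering a MutableQuantiles per interval; this sketch only reproduces the naming scheme so the expected JMX keys are visible.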






[jira] [Commented] (HDFS-16996) Fix flaky testFsCloseAfterClusterShutdown in TestFileCreation

2023-05-31 Thread Nishtha Shah (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17728199#comment-17728199
 ] 

Nishtha Shah commented on HDFS-16996:
-

Thanks a lot [~elgoiri] and [~ayushtkn]!!

> Fix flaky testFsCloseAfterClusterShutdown in TestFileCreation
> -
>
> Key: HDFS-16996
> URL: https://issues.apache.org/jira/browse/HDFS-16996
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Uma Maheswara Rao G
>Assignee: Nishtha Shah
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> {code:java}
> [ERROR] 
> testFsCloseAfterClusterShutdown(org.apache.hadoop.hdfs.TestFileCreation) Time 
> elapsed: 1.725 s <<< FAILURE! java.lang.AssertionError: Test resulted in an 
> unexpected exit: 1: Block report processor encountered fatal exception: 
> java.lang.ClassCastException: org.apache.hadoop.fs.FsServerDefaults cannot be 
> cast to java.lang.Boolean at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:2166) at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:2152) at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:2145) at 
> org.apache.hadoop.hdfs.TestFileCreation.testFsCloseAfterClusterShutdown(TestFileCreation.java:1198)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>  at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413) at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126) 
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) 
> Caused by: 1: Block report processor encountered fatal exception: 
> java.lang.ClassCastException: org.apache.hadoop.fs.FsServerDefaults cannot be 
> cast to java.lang.Boolean at 
> org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:381) at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:5451){code}
> https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5532/10/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt






[jira] [Commented] (HDFS-15491) Journal transactions batched in sync metrics are missing from Journal Node JMX

2023-05-29 Thread Nishtha Shah (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17727335#comment-17727335
 ] 

Nishtha Shah commented on HDFS-15491:
-

[~m@hit1994] [~jianghuazhu] [~rahulb-2000] If this is still unresolved, is it 
fine if I pick it up?







[jira] [Assigned] (HDFS-16996) TestFileCreation failed with ClassCastException

2023-05-29 Thread Nishtha Shah (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nishtha Shah reassigned HDFS-16996:
---

Assignee: Nishtha Shah

> TestFileCreation failed with ClassCastException
> ---
>
> Key: HDFS-16996
> URL: https://issues.apache.org/jira/browse/HDFS-16996
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Uma Maheswara Rao G
>Assignee: Nishtha Shah
>Priority: Major
>  Labels: pull-request-available
>






[jira] [Assigned] (HDFS-16946) RBF: top real owners metrics can't been parsed json string

2023-05-29 Thread Nishtha Shah (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nishtha Shah reassigned HDFS-16946:
---

Assignee: Nishtha Shah

> RBF: top real owners metrics can't been parsed json string
> --
>
> Key: HDFS-16946
> URL: https://issues.apache.org/jira/browse/HDFS-16946
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: 3.4.0
>Reporter: Max  Xie
>Assignee: Nishtha Shah
>Priority: Minor
> Attachments: image-2023-03-09-22-24-39-833.png
>
>
> After HDFS-15447 added top real owners metrics for delegation tokens, the 
> metrics can't be parsed as a JSON string.
> The RBFMetrics$getTopTokenRealOwners method just returns 
> `org.apache.hadoop.metrics2.util.Metrics2Util$NameValuePair@1`
> !image-2023-03-09-22-24-39-833.png!
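As a self-contained sketch of the failure mode (a hypothetical stand-in for Hadoop's Metrics2Util$NameValuePair, not the real class): a value object with no explicit JSON mapping serializes as its default Object.toString(), which is exactly the ClassName@hash form shown above; converting each pair to explicit name/value entries first preserves the data:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TopOwnersJson {
    // Minimal stand-in for a metrics name/value pair that does not
    // override toString(), so default serialization yields "ClassName@hash".
    static class NameValuePair {
        final String name;
        final long value;
        NameValuePair(String name, long value) { this.name = name; this.value = value; }
    }

    // A hypothetical fix along the lines of the actual patch: map each pair
    // to explicit fields before emitting JSON, so owner and count survive.
    static Map<String, Object> toMap(NameValuePair p) {
        Map<String, Object> m = new LinkedHashMap<>();
        m.put("name", p.name);
        m.put("value", p.value);
        return m;
    }

    public static void main(String[] args) {
        NameValuePair p = new NameValuePair("hbase", 42L);
        System.out.println(p);         // default toString: ClassName@hash
        System.out.println(toMap(p));  // prints {name=hbase, value=42}
    }
}
```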


