[ https://issues.apache.org/jira/browse/HDFS-14084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16732268#comment-16732268 ]

Pranay Singh commented on HDFS-14084:
-------------------------------------

[~elgoiri] thanks for commenting on the issue. I looked at the details of the 
TestSSLFactory failure: it is caused by testServerWeakCiphers, a sporadic 
failure that has already been reported in HADOOP-16016.
 
-------------------------
[ERROR] testServerWeakCiphers(org.apache.hadoop.security.ssl.TestSSLFactory)  Time elapsed: 0.082 s  <<< FAILURE!
java.lang.AssertionError: Expected to find 'no cipher suites in common' but got unexpected exception:
javax.net.ssl.SSLHandshakeException: No appropriate protocol (protocol is disabled or cipher suites are inappropriate)
-------------------------
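The "No appropriate protocol" message typically shows up when the JDK itself has 
disabled the weak protocols/ciphers the test tries to negotiate (via the 
jdk.tls.disabledAlgorithms security property). A small illustrative snippet, not 
part of TestSSLFactory, to see what the local JDK disables:

-------------------------
// Illustrative only: prints the TLS algorithms the local JDK disables.
// Anything listed here is rejected during the handshake, which is what
// produces the "No appropriate protocol" failure above.
import java.security.Security;

public class DisabledTlsAlgorithms {
  public static void main(String[] args) {
    System.out.println("jdk.tls.disabledAlgorithms = "
        + Security.getProperty("jdk.tls.disabledAlgorithms"));
  }
}
-------------------------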

 

The other two failures are "Port in use" bind errors (a brief sketch of the usual mitigation follows the stack traces below):

*java.net.BindException: Port in use: localhost:36969*
    at org.apache.hadoop.http.HttpServer2.constructBindException(HttpServer2.java:1207)
    at org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1229)
    at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1288)
    at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1143)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:183)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:892)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:703)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:960)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:933)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1699)
    at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:2188)
    at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNodes(MiniDFSCluster.java:2143)
    at org.apache.hadoop.hdfs.server.namenode.TestFSEditLogLoader.testHasNonEcBlockUsingStripedIDForAddBlock(TestFSEditLogLoader.java:656)

 

*java.net.BindException: Port in use: localhost:10197*
    at org.apache.hadoop.http.HttpServer2.constructBindException(HttpServer2.java:1207)
    at org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1229)
    at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1288)
    at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1143)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:183)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:892)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:703)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:960)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:933)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1699)
    at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1316)
    at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1085)
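
These bind failures usually mean the test reuses a fixed HTTP port that another 
test (or a lingering process) still holds. A minimal sketch of the usual 
mitigation, not tied to MiniDFSCluster, is to bind to port 0 so the OS hands out 
a free ephemeral port:

-------------------------
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class EphemeralPortExample {
  public static void main(String[] args) throws IOException {
    try (ServerSocket socket = new ServerSocket()) {
      // A fixed port such as 36969 throws java.net.BindException when it is
      // already taken; port 0 asks the kernel for any free port instead.
      socket.bind(new InetSocketAddress("localhost", 0));
      System.out.println("Bound to ephemeral port " + socket.getLocalPort());
    }
  }
}
-------------------------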

> Need for more stats in DFSClient
> --------------------------------
>
>                 Key: HDFS-14084
>                 URL: https://issues.apache.org/jira/browse/HDFS-14084
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>    Affects Versions: 3.0.0
>            Reporter: Pranay Singh
>            Assignee: Pranay Singh
>            Priority: Minor
>         Attachments: HDFS-14084.001.patch, HDFS-14084.002.patch, 
> HDFS-14084.003.patch, HDFS-14084.004.patch, HDFS-14084.005.patch, 
> HDFS-14084.006.patch, HDFS-14084.007.patch, HDFS-14084.008.patch, 
> HDFS-14084.009.patch, HDFS-14084.010.patch, HDFS-14084.011.patch
>
>
> The usage of HDFS has changed: it is no longer just a MapReduce filesystem 
> but is becoming a general-purpose filesystem. In most cases the issues are 
> with the Namenode, so we already have metrics to know the workload and 
> stress on the Namenode.
> However, we also need more statistics collected for the different 
> operations/RPCs in DFSClient, to know which RPC operations take longer and 
> how frequently each operation is issued. These statistics can be exposed to 
> users of the DFS client, who can periodically log them or apply some form of 
> flow control when responses are slow. They will also help isolate HDFS 
> issues in a mixed environment where, say, Spark, HBase, and Impala run 
> together on one node: we can compare the throughput of different operations 
> across clients and tell whether the problem is caused by a noisy neighbor, 
> network congestion, or a shared JVM.
> We have dealt with several problems from the field for which there was no 
> conclusive evidence of the cause. With metrics or stats in DFSClient we 
> would be better equipped to solve such complex problems.
> List of JIRAs for reference:
> -------------------------
>  HADOOP-15538, HADOOP-15530 (client-side deadlock)
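
A minimal sketch of the idea described above; the class and method names are 
hypothetical and not taken from the attached patches. It records per-RPC call 
counts and cumulative latency on the client side so slow operations can be 
spotted:

-------------------------
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

public class ClientRpcStats {
  private static final class OpStats {
    final LongAdder calls = new LongAdder();
    final LongAdder totalNanos = new LongAdder();
  }

  private final Map<String, OpStats> stats = new ConcurrentHashMap<>();

  /** Record one completed RPC of the given name and its elapsed time. */
  public void record(String op, long elapsedNanos) {
    OpStats s = stats.computeIfAbsent(op, k -> new OpStats());
    s.calls.increment();
    s.totalNanos.add(elapsedNanos);
  }

  /** Average latency in milliseconds for an operation, or 0 if never seen. */
  public double avgLatencyMs(String op) {
    OpStats s = stats.get(op);
    if (s == null || s.calls.sum() == 0) {
      return 0.0;
    }
    return s.totalNanos.sum() / 1e6 / s.calls.sum();
  }

  public static void main(String[] args) {
    ClientRpcStats rpcStats = new ClientRpcStats();
    long start = System.nanoTime();
    // ... issue a client call such as getFileInfo() here ...
    rpcStats.record("getFileInfo", System.nanoTime() - start);
    System.out.printf("getFileInfo avg latency: %.3f ms%n",
        rpcStats.avgLatencyMs("getFileInfo"));
  }
}
-------------------------

ConcurrentHashMap plus LongAdder keeps the recording cheap on hot client paths; 
a real implementation would presumably also track percentiles and report the 
numbers periodically rather than only averages.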


