[ https://issues.apache.org/jira/browse/HDFS-16721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17600442#comment-17600442 ]
ASF GitHub Bot commented on HDFS-16721:
---------------------------------------

Likkey commented on PR #4847:
URL: https://github.com/apache/hadoop/pull/4847#issuecomment-1237070768

> Thank you very much for your contribution! From my personal point of view, it is unreasonable to configure a negative number for the timeout, but I don't think this change is necessary: the possibility of the timeout being configured as a negative number is very low. I think there should be a sound reason to support modifying the parameter validation.

I endorse your statement as well; thank you very much for the advice :)

> Improve the check code of the important configuration item
> “dfs.client.socket-timeout”.
> ---------------------------------------------------------------------------------------
>
>                 Key: HDFS-16721
>                 URL: https://issues.apache.org/jira/browse/HDFS-16721
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: dfsclient
>    Affects Versions: 3.1.3
>        Environment: Linux version 4.15.0-142-generic (buildd@lgw01-amd64-039) (gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.12))
>                     java version "1.8.0_162"
>                     Java(TM) SE Runtime Environment (build 1.8.0_162-b12)
>                     Java HotSpot(TM) 64-Bit Server VM (build 25.162-b12, mixed mode)
>           Reporter: Jingxuan Fu
>           Assignee: Jingxuan Fu
>           Priority: Major
>             Labels: pull-request-available
>
> {code:xml}
> <property>
>   <name>dfs.client.socket-timeout</name>
>   <value>60000</value>
>   <description>
>     Default timeout value in milliseconds for all sockets.
>   </description>
> </property>
> {code}
> "dfs.client.socket-timeout", the default timeout value for all sockets, is applied in many places, making it a configuration item with significant impact. However, its value is never checked in the source code: when it is set to an abnormal value, only an overgeneralized exception is thrown, so the misconfiguration cannot be identified and corrected in time, which affects the normal use of the program.
> {code:java}
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hdfsapi/test/testhdfs.txt could only be written to 0 of the 1 minReplication nodes. There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2205)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2731)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:892)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:568)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
>         at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1000)
>         at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:928)
>         at java.base/java.security.AccessController.doPrivileged(Native Method)
>         at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2916)
> {code}
> So I used Preconditions.checkArgument() to refine the code that checks this configuration item.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
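For illustration, a minimal sketch of the kind of fail-fast validation the issue describes. The class and method names here are hypothetical, and a plain argument check stands in for Guava's Preconditions.checkArgument so the snippet has no external dependency; the actual patch in PR #4847 may differ.

```java
public class SocketTimeoutCheck {
    // Key and default value from the hdfs-default.xml snippet quoted above.
    static final String SOCKET_TIMEOUT_KEY = "dfs.client.socket-timeout";
    static final int DEFAULT_TIMEOUT_MS = 60000;

    /**
     * Rejects an invalid timeout immediately with a message naming the
     * offending key, instead of letting a generic remote exception surface
     * much later. Equivalent to
     * Preconditions.checkArgument(timeoutMs > 0, ...).
     */
    static int checkSocketTimeout(int timeoutMs) {
        if (timeoutMs <= 0) {
            throw new IllegalArgumentException(String.format(
                "Invalid value %d for %s: must be a positive number of milliseconds",
                timeoutMs, SOCKET_TIMEOUT_KEY));
        }
        return timeoutMs;
    }

    public static void main(String[] args) {
        // A valid value passes through unchanged.
        System.out.println(checkSocketTimeout(DEFAULT_TIMEOUT_MS)); // prints 60000
        // A negative value now fails with a descriptive message.
        try {
            checkSocketTimeout(-1);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

The point of the check is that the error message names the misconfigured key directly, whereas the stack trace quoted above gives no hint that a timeout setting is the root cause.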