[ https://issues.apache.org/jira/browse/HDFS-5939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908836#comment-13908836 ]
Yongjun Zhang commented on HDFS-5939:
-------------------------------------

Thanks [~wheat9] and [~szetszwo]. I just uploaded a new version to address your comments.

About
{quote}
+ LOG.info("Running testNoDatanode ...");
+ LOG.info("Done with testNoDatanode.");
This is unnecessary.
{quote}
I added these on purpose, so that the log file for a test class contains a delimiter message for each test. (Actually, I even planned to advertise this pattern.) When looking at the output of a test run, we are sometimes confused about where a particular test begins and ends; with these messages we can easily identify the boundaries of a given test. So I would like to keep them for that reason.

About checking for "no datanode" in the exception message: my thinking is that if we don't check it and a different IOException is thrown, the test would claim success and hide that other IOException, so we would not catch it on the spot even though the test actually failed. I hope this explanation makes sense to you.

About your other comments on pruning the test, I agree it would be nice to prune it as you suggested. When I started writing this test, I did begin by creating a cluster with 0 datanodes, but not only was I unable to reproduce the original problem, I ran into a different problem instead. So I modified the test case to its current form and successfully reproduced the original problem. I think this test case better mimics the problem reported from the field: the cluster was healthy and the datanodes existed at the beginning, then they disappeared for some reason and the cluster became unhealthy. The different problem I ran into when creating a cluster with 0 datanodes can be tracked as a separate issue, but it should not prevent us from fixing this bug. I will take a look at that other problem when I have time. You might give it a try too if you are interested.

Would you please help review the new version 004? Thanks.
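To illustrate the point about exception-message checking, here is a minimal, self-contained sketch of the pattern (the exception message, class name, and method names below are hypothetical placeholders, not the actual code from the HDFS-5939 patch): the test only counts as passing when the IOException carries the expected "datanode" hint, so any unrelated IOException surfaces as a genuine failure instead of being silently swallowed.

```java
import java.io.IOException;

public class NoDatanodeCheck {

    // Stand-in for the WebHDFS create call that is expected to fail
    // when no datanodes are alive in the cluster.
    static void createFile() throws IOException {
        throw new IOException("Failed to find a datanode; check cluster health.");
    }

    // Returns true only when the failure is the expected "no datanode" case.
    // Any other IOException (or no exception at all) yields false, so the
    // test does not claim success for an unrelated failure.
    static boolean failedForExpectedReason() {
        try {
            createFile();
            return false; // no exception at all: the test should fail
        } catch (IOException e) {
            return e.getMessage().toLowerCase().contains("datanode");
        }
    }

    public static void main(String[] args) {
        System.out.println(failedForExpectedReason()); // prints "true"
    }
}
```

In a real JUnit test the same idea is usually expressed by catching the exception and asserting on its message, rather than letting a bare `try/catch` swallow whatever was thrown.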
> WebHdfs returns misleading error code and logs nothing if trying to create a
> file with no DNs in cluster
> --------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-5939
>                 URL: https://issues.apache.org/jira/browse/HDFS-5939
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs-client
>    Affects Versions: 2.3.0
>            Reporter: Yongjun Zhang
>            Assignee: Yongjun Zhang
>         Attachments: HDFS-5939.001.patch, HDFS-5939.002.patch, HDFS-5939.003.patch, HDFS-5939.004.patch
>
>
> When trying to access hdfs via webhdfs, and when the datanode is dead, the user will
> see the exception below without any clue that it's caused by a dead datanode:
>
> $ curl -i -X PUT ".../webhdfs/v1/t1?op=CREATE&user.name=<userName>&overwrite=false"
> ...
> {"RemoteException":{"exception":"IllegalArgumentException","javaClassName":"java.lang.IllegalArgumentException","message":"n must be positive"}}
>
> Need to fix the report to give the user a hint about the dead datanode.

-- This message was sent by Atlassian JIRA (v6.1.5#6160)