[ https://issues.apache.org/jira/browse/HDFS-5939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13906230#comment-13906230 ]
Tsz Wo (Nicholas), SZE commented on HDFS-5939:
----------------------------------------------

> ... So (1) it never excludes all nodes and (2) we must have numOfDatanodes >= 1.

Actually, the above statement is wrong, e.g.:
- if scope="/dc", excludedScope="/dc/rack0" and rack0 is the only rack, then all nodes are excluded;
- numOfDatanodes under the scope is 0.

> WebHdfs returns misleading error code and logs nothing if trying to create a file with no DNs in cluster
> --------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-5939
>                 URL: https://issues.apache.org/jira/browse/HDFS-5939
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs-client
>    Affects Versions: 2.3.0
>            Reporter: Yongjun Zhang
>            Assignee: Yongjun Zhang
>         Attachments: HDFS-5939.001.patch
>
>
> When trying to access hdfs via webhdfs, and when datanode is dead, user will see an exception below without any clue that it's caused by dead datanode:
> $ curl -i -X PUT ".../webhdfs/v1/t1?op=CREATE&user.name=<userName>&overwrite=false"
> ...
> {"RemoteException":{"exception":"IllegalArgumentException","javaClassName":"java.lang.IllegalArgumentException","message":"n must be positive"}}
> Need to fix the report to give user hint about dead datanode.

--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
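For context on where the "n must be positive" message comes from: it is the exception text that java.util.Random.nextInt(int) throws (on the JDKs contemporary with this report; newer JDKs say "bound must be positive") when its bound is not positive. A minimal sketch, assuming only that the node count reaching a random selection is 0 as in the scope/excludedScope example above; this is an illustration, not the actual NetworkTopology code:

```java
import java.util.Random;

public class ZeroNodeCountDemo {
    public static void main(String[] args) {
        // All nodes under the scope were excluded, so the count is 0.
        int numOfDatanodes = 0;
        try {
            // Random.nextInt(int bound) requires bound > 0, so this throws
            // IllegalArgumentException instead of picking a datanode.
            int index = new Random().nextInt(numOfDatanodes);
            System.out.println("picked index " + index);
        } catch (IllegalArgumentException e) {
            // Exact wording varies by JDK version, e.g. "n must be positive".
            System.out.println("IllegalArgumentException: " + e.getMessage());
        }
    }
}
```

This is why the client sees a bare IllegalArgumentException in the RemoteException JSON rather than a message about there being no live datanodes.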