[ 
https://issues.apache.org/jira/browse/HDFS-5939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908914#comment-13908914
 ] 

Yongjun Zhang commented on HDFS-5939:
-------------------------------------

Thanks Haohui. I appreciate your diligence in reviewing the solution, and your 
earlier comments helped me learn some 
important concepts. Please see my reply below:

{quote}
You can run it through mvn or eclipse. An analogy is that we don't write the 
code that calls System.out.println for every request.
{quote}
It's true that if we run in Eclipse, we can run a selected test function. 
However, very often we need to look at a nightly run result, 
where one output file contains the logs of multiple tests.
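
For completeness, a single test can also be run from the command line via 
Maven's surefire plugin; the exact flag syntax depends on the surefire 
version, and the test name below is just an example:

mvn test -Dtest=TestSafeMode#testInitializeReplQueuesEarly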

You used the example that "we don't write the code that calls 
System.out.println for every request." Writing a debug message for each 
request in production would indeed be annoying. But this is about unit tests, 
whose output is usually read by developers for debugging purposes. And it's 
just one message per test function, not many. I find it useful when looking 
at test results. In any case, it doesn't hurt much to have, right?
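
To make the point concrete, here is a minimal sketch (class and test names 
are placeholders, not code from the patch) of the kind of one-line-per-test 
logging I mean, using JUnit 4's TestName rule:

{code:java}
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TestName;

public class TestExample {
  // TestName exposes the name of the currently running test method.
  @Rule
  public final TestName testName = new TestName();

  @Test
  public void testSomething() {
    // One log line per test makes a combined nightly log easy to navigate.
    System.out.println("Running test: " + testName.getMethodName());
    // ... actual test body ...
  }
}
{code}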

About the two broken tests, I already explained them in my earlier update:
- For TestLoadGenerator, I filed HDFS-5993; it is now fixed by HADOOP-10355. 
You can look at HADOOP-10355 to see 
the failure reason.
- The other one, 
org.apache.hadoop.hdfs.TestSafeMode.testInitializeReplQueuesEarly, is not 
related to my change either. Let's see 
how the test result of version 004 looks.

About your comment:
{quote}
I can reproduce it on trunk:
curl -X PUT "http://localhost:50070/webhdfs/v1/asd?op=CREATE";
{"RemoteException":{"exception":"IllegalArgumentException","javaClassName":"java.lang.IllegalArgumentException","message":"n
 must be positive
{quote}

I did reproduce the problem the same way as you did, before I wrote the unit 
test. But that was in the context of reproducing it from the command line. 
When we write a unit test (JUnit) to reproduce the issue, if you try what 
you suggested, you will see that it won't reproduce; it actually fails 
in a different place. Failing in a different place is not a surprise to me, 
because having no datanodes is abnormal in itself. I'm not ready to file a bug 
for this yet, because we usually file bugs against code that is already 
committed, and I don't know of any existing code that demonstrates the same 
problem.
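
For reference, here is a minimal sketch of the kind of JUnit reproduction I 
mean: start a MiniDFSCluster with zero datanodes and attempt a WebHDFS 
create. The class name, path, and the way the WebHDFS FileSystem is obtained 
here are assumptions for illustration; the attached patch may wire it up 
differently.

{code:java}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.Test;

public class TestWebHdfsCreateNoDatanodes {
  @Test
  public void testCreateWithNoDatanodes() throws Exception {
    Configuration conf = new Configuration();
    // Zero datanodes: this is the abnormal condition being discussed.
    MiniDFSCluster cluster =
        new MiniDFSCluster.Builder(conf).numDataNodes(0).build();
    try {
      cluster.waitActive();
      // MiniDFSCluster rewrites the conf with the actual bound addresses,
      // so the NameNode HTTP address can be read back from it (assumption).
      String httpAddr = conf.get("dfs.namenode.http-address");
      FileSystem webhdfs =
          FileSystem.get(URI.create("webhdfs://" + httpAddr), conf);
      // Expected to fail; the question is whether the resulting error
      // gives any hint that there are no datanodes.
      webhdfs.create(new Path("/t1")).close();
    } finally {
      cluster.shutdown();
    }
  }
}
{code}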

Thanks.

> WebHdfs returns misleading error code and logs nothing if trying to create a 
> file with no DNs in cluster
> --------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-5939
>                 URL: https://issues.apache.org/jira/browse/HDFS-5939
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs-client
>    Affects Versions: 2.3.0
>            Reporter: Yongjun Zhang
>            Assignee: Yongjun Zhang
>         Attachments: HDFS-5939.001.patch, HDFS-5939.002.patch, 
> HDFS-5939.003.patch, HDFS-5939.004.patch
>
>
> When trying to access HDFS via WebHDFS while the datanodes are dead, the user will 
> see the exception below without any clue that it's caused by dead datanodes:
> $ curl -i -X PUT 
> ".../webhdfs/v1/t1?op=CREATE&user.name=<userName>&overwrite=false"
> ...
> {"RemoteException":{"exception":"IllegalArgumentException","javaClassName":"java.lang.IllegalArgumentException","message":"n must be positive"}}
> We need to fix the report to give the user a hint about the dead datanodes.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
