Thanks very much for the above. In 0.21 I found TestDFSIO in the
mapred-test.jar, but I didn't find TeraGen and TeraSort in any of the three
test jars (I found TeraGen and TeraSort in the 0.20.04 test jars, but they
cannot be executed on 0.21). When I run the TestDFSIO benchmark, I sometimes
hit the exception below. I ran into the same problem when testing on my
laptop with either one or two datanodes launched.

java.io.IOException: File /benchmarks/TestDFSIO/io_control/in_file_test_io_0 could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1448)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:691)
        at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:616)
        at org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:342)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1350)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1346)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:416)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:742)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1344)


Thanks.

Hao Yang

2011/11/29 Konstantin Boudnik <c...@apache.org>

> Once you have an instrumented cluster you can run some end-to-end tests
> (TeraSort has been suggested and it would do fine). The tricky part is
> assembling all the moving parts of the Hadoop cluster from three different
> projects (if you are on the 0.21+ branch); it is much easier for 0.20-based
> versions.
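>
> For reference, a typical TeraSort round trip looks roughly like this (the
> jar name varies per release, and if I recall correctly TeraGen/TeraSort
> ship in the mapred examples jar on 0.21 rather than the test jars, so it is
> worth checking there):
>
>   hadoop jar hadoop-mapred-examples-0.21.0.jar teragen 1000000 /tera/in
>   hadoop jar hadoop-mapred-examples-0.21.0.jar terasort /tera/in /tera/out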
>
> However, one fine point here is that your faults (implemented as aspects)
> have to be triggered somehow. You can do that logically (say, on every
> third attempt to get a new block), via the environment, or from the test
> application; a sketch of the first option follows.
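>
> As a concrete sketch of the "logical" trigger, a counter-based aspect could
> look like the following. The advised method named here is only an
> illustration (pick whichever join point your fault models; the FI
> framework's own aspects are a good reference):
>
>   import java.io.IOException;
>
>   // Sketch: fail every third entry into a block-allocation method.
>   // The pointcut target is an assumption; substitute your own.
>   public aspect EveryThirdBlockFault {
>     private int attempts = 0;
>
>     pointcut newBlock() : execution(
>       * org.apache.hadoop.hdfs.DFSClient.DFSOutputStream.locateFollowingBlock(..));
>
>     before() throws IOException : newBlock() {
>       if (++attempts % 3 == 0) {
>         throw new IOException("FI: injected fault, attempt " + attempts);
>       }
>     }
>   }
>
> Any trivial client that writes a file will then drive the advised path,
> for example:
>
>   import org.apache.hadoop.conf.Configuration;
>   import org.apache.hadoop.fs.FSDataOutputStream;
>   import org.apache.hadoop.fs.FileSystem;
>   import org.apache.hadoop.fs.Path;
>
>   public class FiWriteClient {
>     public static void main(String[] args) throws Exception {
>       // Reads core-site.xml/hdfs-site.xml from the classpath.
>       FileSystem fs = FileSystem.get(new Configuration());
>       FSDataOutputStream out = fs.create(new Path("/fi-test/file1"));
>       out.write(new byte[64 * 1024]); // force at least one block allocation
>       out.close();
>       fs.close();
>     }
>   }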
>
> Triggering from the test application might be the trickiest part, because
> you'll essentially have to communicate between different JVM processes,
> perhaps located on different physical hosts if you're running in a real
> cluster environment.
>
> To address these issues we have the Herriot project, which is based on
> bytecode injection (read: AspectJ) and allows for distributed fault
> injection in a real cluster. The system test framework adds a few APIs to
> the Hadoop daemons and allows calling them via standard Hadoop RPC.
>
> You can find the framework sources and test code in src/aop/system.
> There's a separate set of build targets that will help you instrument
> everything you need (jar-test-system should do). More info is here:
> http://is.gd/SBkjrC
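>
> For what it's worth, the flow is roughly this (exact target and property
> names may differ slightly between branches; check the doc above):
>
>   ant jar-test-system
>   ant test-system -Dhadoop.conf.dir.deployed=$HADOOP_CONF_DIR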
>
> One last point: Herriot isn't fully integrated into trunk's Maven build
> as of today.
>
> Hope it helps,
>  Cos
>
> >> From: Hao Yang [mailto:hao.yang0...@gmail.com]
> >> Sent: Monday, November 28, 2011 9:44 PM
> >> To: common-dev@hadoop.apache.org
> >> Subject: how to write hadoop client to test HDFS with fault injection
> >>
> >> Hi, all:
> >>
> >> I am a graduate student working on an HDFS fault-injection course
> >> project. I used the FI framework to create aspects, and hdfs-fi.jar was
> >> generated. How can I write Hadoop client code to test the injection?
> >> Thank you.
> >>
> >>
> >> Thank you very much for your time.
> >>
> >>
> >> Best regards
> >> Hao Yang
> >>
> >
>
