Once you have an instrumented cluster you can run some end-to-end tests
(TeraSort has been suggested, and it would do fine). The tricky part is
assembling all the moving parts of the Hadoop cluster from three different
projects (if you are on the 0.21+ branch); it is much easier for 0.20.x-based
versions.
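
If you'd rather drive the run from Java instead of the command line, here is a
minimal sketch using ToolRunner. The package names
(org.apache.hadoop.examples.terasort.*), paths, and row count are assumptions
and may need adjusting for your branch; the usual "hadoop jar" invocation of
the examples jar does the same thing.

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.examples.terasort.TeraGen;
  import org.apache.hadoop.examples.terasort.TeraSort;
  import org.apache.hadoop.util.ToolRunner;

  public class TeraSortSmokeTest {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      // Generate a small data set, then sort it; paths are illustrative.
      int rc = ToolRunner.run(conf, new TeraGen(),
          new String[] {"1000000", "/fi-test/tera-in"});
      if (rc == 0) {
        rc = ToolRunner.run(conf, new TeraSort(),
            new String[] {"/fi-test/tera-in", "/fi-test/tera-out"});
      }
      System.exit(rc);
    }
  }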

However, one fine point here is that your faults (implemented as aspects)
have to be triggered somehow. You can do it logically (say, on every third
attempt to get a new block), via the environment, or from the test
application.
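
For illustration, here is a minimal sketch of such a "logical" trigger written
as an annotation-style AspectJ aspect. The target join point (a hypothetical
readBlock method), the class names, and the thrown exception are made up for
the example; your real aspects would target actual HDFS code.

  import java.io.IOException;
  import java.util.concurrent.atomic.AtomicLong;

  import org.aspectj.lang.ProceedingJoinPoint;
  import org.aspectj.lang.annotation.Around;
  import org.aspectj.lang.annotation.Aspect;

  @Aspect
  public class EveryThirdBlockReadFault {
    private static final AtomicLong calls = new AtomicLong();

    // Hypothetical join point: adjust to the method you actually want to fail.
    @Around("execution(* org.apache.hadoop.hdfs.BlockReader.readBlock(..))")
    public Object maybeFail(ProceedingJoinPoint pjp) throws Throwable {
      if (calls.incrementAndGet() % 3 == 0) {
        // Simulated fault on every third attempt to get a new block.
        throw new IOException("FI: injected block read failure");
      }
      return pjp.proceed();
    }
  }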

The latter might be the trickiest part, because you essentially have to
communicate between different JVM processes, possibly located on different
physical hosts, if you're running in a real cluster environment.

To address these issues we have the Herriot project, which is based on
bytecode injection (read: AspectJ) and allows distributed fault injection in a
real cluster. The system test framework adds a few APIs to the Hadoop daemons
and allows them to be called via standard Hadoop RPC.
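
To give a feel for the mechanism (not Herriot's actual interfaces), here is a
minimal sketch of a test-side client calling a hypothetical daemon API over
standard Hadoop RPC. The FaultInjectionProtocol interface, its method, the
host, and the port are all invented for illustration.

  import java.net.InetSocketAddress;

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.ipc.RPC;
  import org.apache.hadoop.ipc.VersionedProtocol;

  public class FaultInjectionClientSketch {

    /** Hypothetical protocol a daemon could expose for tests. */
    public interface FaultInjectionProtocol extends VersionedProtocol {
      long VERSION = 1L;
      void enableFault(String faultName) throws java.io.IOException;
    }

    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      // Address/port of the daemon-side test API -- purely illustrative.
      InetSocketAddress addr = new InetSocketAddress("datanode-host", 50555);
      FaultInjectionProtocol proxy = (FaultInjectionProtocol) RPC.getProxy(
          FaultInjectionProtocol.class, FaultInjectionProtocol.VERSION,
          addr, conf);
      try {
        proxy.enableFault("every-third-block-read");
      } finally {
        RPC.stopProxy(proxy);
      }
    }
  }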

You can find the framework sources and test code in src/aop/system.
There's a separate set of build targets that will help you instrument
everything you need (jar-test-system should do). More info is here:
http://is.gd/SBkjrC

One last point: Herriot isn't fully integrated into trunk's Maven build as of
today.

Hope it helps,
  Cos

>> From: Hao Yang [mailto:hao.yang0...@gmail.com]
>> Sent: Monday, November 28, 2011 9:44 PM
>> To: common-dev@hadoop.apache.org
>> Subject: how to write hadoop client to test HDFS with fault injection
>>
>> Hi, all:
>>
>> I am a graduate student, now working on HDFS fault injection course
>> project. I used FI framework to create aspects, and hdfs-fi.jar was
>> generated. How can I write Hadoop Client code to test the injection? Thank
>> you.
>>
>>
>> Thank you very much for your time.
>>
>>
>> Best regards
>> Hao Yang
>>
>
