Personally I would just use HAR (Hadoop Archives) :) It sounds like an interesting
project. You might find this document helpful:
http://kazman.shidler.hawaii.edu/ArchDoc.html
It was designed to help contributors navigate the HDFS source tree.
-Joey
On Thu, Jan 19, 2012 at 11:52 AM, Sesha Kumar
Sorry for the delay. I'm trying to implement an IEEE paper that combines a
bunch of files into a single file; when one of the original files is
requested, the datanode extracts it from the block and sends it to the
DFSClient.
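For anyone following along, the scheme described above (many small files packed into one container file, with individual files extracted on demand) can be sketched in plain Java. The layout below, a name-to-(offset, length) index over concatenated payloads, is my own illustration and not necessarily the paper's actual format:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical illustration: pack small files into one byte[] with an
// in-memory index, then extract a single entry by name without reading
// the rest of the container.
public class PackedContainer {
    private final byte[] data;
    // name -> {offset, length}
    private final Map<String, long[]> index = new LinkedHashMap<>();

    public PackedContainer(Map<String, byte[]> files) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (Map.Entry<String, byte[]> e : files.entrySet()) {
            index.put(e.getKey(), new long[] { out.size(), e.getValue().length });
            out.write(e.getValue());
        }
        this.data = out.toByteArray();
    }

    // Analogous to what a modified datanode would do: locate the entry
    // inside the block and return only those bytes.
    public byte[] extract(String name) {
        long[] span = index.get(name);
        if (span == null) throw new IllegalArgumentException("no such entry: " + name);
        byte[] result = new byte[(int) span[1]];
        System.arraycopy(data, (int) span[0], result, 0, result.length);
        return result;
    }
}
```

In a real HDFS block the index would have to live somewhere durable (e.g. a header or a sidecar file), but the extract step is the same seek-and-copy either way.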
Hi Sesha,
Take a look at org.apache.hadoop.hdfs.server.datanode.BlockSender.java
Regards,
Uma
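To add some context to Uma's pointer: BlockSender is the datanode class that streams a block's bytes out to a client. A rough, self-contained sketch of the relevant operation, streaming only an (offset, length) slice of a block, making no assumptions about the real class's internals:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Simplified sketch of the byte-range streaming BlockSender performs:
// skip to an offset inside the block, then copy at most `length` bytes
// to the client. The real BlockSender also handles checksum verification
// and packet framing, which are omitted here.
public class RangeSender {
    public static long sendRange(InputStream block, OutputStream client,
                                 long offset, long length) throws IOException {
        long skipped = 0;
        while (skipped < offset) {
            long n = block.skip(offset - skipped);
            if (n <= 0) throw new IOException("offset beyond end of block");
            skipped += n;
        }
        byte[] buf = new byte[4096];
        long sent = 0;
        while (sent < length) {
            int toRead = (int) Math.min(buf.length, length - sent);
            int n = block.read(buf, 0, toRead);
            if (n < 0) break; // block ended before `length` bytes were sent
            client.write(buf, 0, n);
            sent += n;
        }
        return sent;
    }
}
```

Extracting one packed file then reduces to calling this with that file's offset and length from the container index.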
From: Sesha Kumar [sesha...@gmail.com]
Sent: Monday, January 16, 2012 7:50 PM
To: hdfs-user@hadoop.apache.org
Subject: Data processing in DFSClient
Hey guys,
Sorry
Sesha,
What kind of processing are you attempting to do? Maybe it makes more sense
to just implement a MapReduce job rather than modifying the datanodes?
-Joey
On Mon, Jan 16, 2012 at 9:20 AM, Sesha Kumar sesha...@gmail.com wrote:
Hey guys,
Sorry for the typo in my last message. I have