Hi,
  Probably a very stupid question.
I have this data in binary format, and the following piece of code works
for me in plain Java:


import java.io.File;
import java.io.FileInputStream;

import org.bson.BSONDecoder;

public class Parser {

    public static void main(String[] args) throws Exception {
        String filename = "sample.txt";
        File file = new File(filename);
        try (FileInputStream fis = new FileInputStream(file)) {
            System.out.println("Total file size to read (in bytes) : "
                    + fis.available());
            BSONDecoder bson = new BSONDecoder();
            System.out.println(bson.readObject(fis));
        }
    }
}


The last println prints the parsed object, which is the answer I want.
Now I want to implement this on Hadoop, but the challenge (I think) is
that I am not reading or parsing the data line by line; it's a stream of
binary data, right?
How do I replicate the above code's logic in Hadoop?
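In case it helps frame the question: since the input isn't line-based, the usual approach in Hadoop is a custom InputFormat/RecordReader that finds record boundaries itself. For BSON that's possible because every document starts with a 4-byte little-endian length prefix (which includes the 4 length bytes), so documents can be pulled off a stream one at a time. Below is a minimal, self-contained sketch of just that boundary logic, using hand-built empty BSON documents instead of a real file; `BsonRecordSketch` and `nextDocument` are names I made up for illustration, not part of any Hadoop or Mongo API.

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Sketch of the record-boundary logic a Hadoop RecordReader for BSON
// could use: each BSON document begins with a 4-byte little-endian
// total length, so whole documents can be read without any newlines.
public class BsonRecordSketch {

    // Read the next whole BSON document from the stream, or null at EOF.
    static byte[] nextDocument(DataInputStream in) throws IOException {
        int b0 = in.read();
        if (b0 < 0) return null;                 // clean end of stream
        int b1 = in.read(), b2 = in.read(), b3 = in.read();
        if (b3 < 0) throw new EOFException("truncated length prefix");
        // Length is little-endian and counts the 4 prefix bytes too.
        int len = b0 | (b1 << 8) | (b2 << 16) | (b3 << 24);
        byte[] doc = new byte[len];
        doc[0] = (byte) b0; doc[1] = (byte) b1;
        doc[2] = (byte) b2; doc[3] = (byte) b3;
        in.readFully(doc, 4, len - 4);           // rest of the document
        return doc;
    }

    public static void main(String[] args) throws IOException {
        // Two minimal (empty) BSON documents: length 5, no elements,
        // then the terminating NUL byte.
        byte[] data = { 5, 0, 0, 0, 0,   5, 0, 0, 0, 0 };
        List<byte[]> docs = new ArrayList<>();
        try (DataInputStream in =
                 new DataInputStream(new ByteArrayInputStream(data))) {
            for (byte[] d; (d = nextDocument(in)) != null; ) {
                docs.add(d);
            }
        }
        System.out.println("documents read: " + docs.size()); // prints 2
    }
}
```

A real RecordReader would additionally have to handle split boundaries (scan forward to the first document that starts inside its split); alternatively, the mongo-hadoop connector ships input formats for BSON files, which may save you from writing this yourself.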
