If it is just a GB, then you probably don't need Hadoop, unless there is some
serious processing involved that hasn't been explained, you already have the
data on HDFS, or you happen to have access to a Hadoop cluster and the amount
of data is going to grow. In that case it could be worth writing a map/reduce
job to load the data into a DB.
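
For example, a map-only job using Hadoop's DBOutputFormat could do the
loading. This is just a rough sketch: the table name "records", its single
"value" column, and the JDBC settings are placeholders you would replace
with your own.

    import java.io.IOException;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
    import org.apache.hadoop.mapreduce.lib.db.DBOutputFormat;
    import org.apache.hadoop.mapreduce.lib.db.DBWritable;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

    public class LoadIntoDb {

      // Record type that knows how to write itself to a JDBC statement.
      public static class RecordRow implements DBWritable {
        private String value;
        public RecordRow() {}
        public RecordRow(String value) { this.value = value; }
        public void write(PreparedStatement stmt) throws SQLException {
          stmt.setString(1, value);
        }
        public void readFields(ResultSet rs) throws SQLException {
          value = rs.getString(1);
        }
      }

      // Map-only: each input line becomes one row in the table.
      public static class LoadMapper
          extends Mapper<LongWritable, Text, RecordRow, NullWritable> {
        @Override
        protected void map(LongWritable key, Text line, Context ctx)
            throws IOException, InterruptedException {
          ctx.write(new RecordRow(line.toString()), NullWritable.get());
        }
      }

      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder JDBC driver, URL, and credentials.
        DBConfiguration.configureDB(conf, "com.mysql.jdbc.Driver",
            "jdbc:mysql://dbhost/mydb", "user", "password");

        Job job = Job.getInstance(conf, "load into db");
        job.setJarByClass(LoadIntoDb.class);
        job.setMapperClass(LoadMapper.class);
        job.setNumReduceTasks(0); // no aggregation needed, map-only
        job.setInputFormatClass(TextInputFormat.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));

        // Hypothetical table "records" with one column, "value".
        job.setOutputFormatClass(DBOutputFormat.class);
        DBOutputFormat.setOutput(job, "records", "value");
        job.setOutputKeyClass(RecordRow.class);
        job.setOutputValueClass(NullWritable.class);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

Whether that beats a plain bulk-load tool depends on how much processing
happens in the mapper; for a straight copy of one GB, a single-machine
loader is usually simpler.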

--Bobby

On 5/26/11 12:23 PM, "vishnu krishnan" <vgrkrish...@gmail.com> wrote:

Thank you,


So I just want to take a GB of data, hand it to map/reduce, and then store it
into the database?
