Hello,
I'm really new to Hadoop, and I was wondering: is the MapReduce programming
model from Hadoop a good choice only for processing large amounts of data,
whether from a file, a database, or a queue? Thanks!
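For context, the MapReduce model the question asks about is a two-phase
functional pattern and is not inherently tied to any data size or source; a
minimal pure-Python word-count sketch (no Hadoop involved, all names here are
illustrative, not Hadoop's API):

```python
from collections import defaultdict

def map_phase(record):
    # Emit (key, value) pairs for one input record (here: one line of text).
    for word in record.split():
        yield (word.lower(), 1)

def reduce_phase(key, values):
    # Combine all values emitted for a single key.
    return (key, sum(values))

def run_mapreduce(records):
    # "Shuffle" step: group intermediate values by key, then reduce each group.
    grouped = defaultdict(list)
    for record in records:
        for key, value in map_phase(record):
            grouped[key].append(value)
    return dict(reduce_phase(k, v) for k, v in grouped.items())

lines = ["hadoop is a framework", "hadoop is scalable"]
print(run_mapreduce(lines))
# → {'hadoop': 2, 'is': 2, 'a': 1, 'framework': 1, 'scalable': 1}
```

On a real cluster, Hadoop runs the map and reduce phases in parallel across
many machines, which is why the framework's overhead pays off mainly for large
inputs rather than small ones.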
Small correction to Ashwanth's post - Sqoop is now an Apache Incubator
project residing at http://incubator.apache.org/sqoop with a community
of its own.
On Fri, Jan 27, 2012 at 4:56 PM, Ashwanth Kumar
ashwanthku...@googlemail.com wrote:
Hadoop is very good at processing data from HDFS. Tools
Sorry Harsh, it was quite some time since I followed Sqoop. Thanks for the
update.
- Ashwanth
On Fri, Jan 27, 2012 at 5:07 PM, Harsh J ha...@cloudera.com wrote:
I think his question was somewhat different. Not sure, though.
On Fri, Jan 27, 2012 at 5:21 PM, Ashwanth Kumar
ashwanthku...@googlemail.com wrote:
I got this error before. I solved it as follows:
1. Check whether the currently installed protobuf is 2.4.1; if not, uninstall
it and install version 2.4.1.
2. For "error while loading shared libraries: libprotobuf.so.7: cannot open
shared object file: No such file or directory" -
this is a known issue on Linux, and
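The two steps above can be sketched as a shell snippet. The `version_matches`
helper is a hypothetical name added for illustration, and `/usr/local/lib` is
the typical prefix after a source build, not something stated in the original
post:

```shell
# Helper: does a `protoc --version` output line report the wanted version?
# (hypothetical helper, shown only so the check can be scripted)
version_matches() {
  case "$1" in
    "libprotoc $2") return 0 ;;
    *) return 1 ;;
  esac
}

# 1. Check the installed protobuf compiler version.
if ! version_matches "$(protoc --version 2>/dev/null)" "2.4.1"; then
  echo "protobuf is not 2.4.1 - uninstall it and build/install 2.4.1" >&2
fi

# 2. "libprotobuf.so.7: cannot open shared object file" usually means the
#    dynamic linker cache does not yet include the install prefix
#    (commonly /usr/local/lib after a source build). Refresh it with:
#      sudo ldconfig
#    or, for the current session only:
#      export LD_LIBRARY_PATH=/usr/local/lib:${LD_LIBRARY_PATH}
```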
Hi Nick,
Thanks for your reply. I don't think what you are saying is related, as the
problem happens when the data is transferred; it's not deserialized or
anything during that step. Note that my code isn't involved at all: it's
purely Hadoop's own code that's running here.
I have done