Hi guys,
I have been using Hadoop and HBase. For HBase to run reliably we need the
Hadoop-0.20-append jar files, so I am using the Hadoop-0.20-append jars,
which made both my Hadoop and HBase work fine.
Now I want to use Pig with my Hadoop and HBase clusters.
Try replacing the Hadoop jar in the Pig lib directory with the one from your
cluster.
-Joey
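The swap Joey describes can be sketched as a few shell steps. The paths and jar names below (PIG_HOME, HADOOP_HOME, the hadoop-core-* file names) are assumptions, not your actual layout; the sketch runs against a throwaway directory tree so it works anywhere:

```shell
# Sketch: swap Pig's bundled Hadoop jar for the jar your cluster runs.
# PIG_HOME / HADOOP_HOME and the jar names are placeholders -- adjust them.
# A throwaway layout is created here so the steps are runnable as-is.
PIG_HOME=$(mktemp -d)/pig
HADOOP_HOME=$(mktemp -d)/hadoop
mkdir -p "$PIG_HOME/lib" "$HADOOP_HOME"
touch "$PIG_HOME/lib/hadoop-core-0.20.2.jar"      # jar Pig ships with
touch "$HADOOP_HOME/hadoop-core-0.20-append.jar"  # jar the cluster runs

# Move the bundled jar aside (so you can roll back), then copy in the
# cluster's jar.
mv "$PIG_HOME/lib/hadoop-core-0.20.2.jar" \
   "$PIG_HOME/lib/hadoop-core-0.20.2.jar.bak"
cp "$HADOOP_HOME/hadoop-core-0.20-append.jar" "$PIG_HOME/lib/"

ls "$PIG_HOME/lib"
```

The point of the swap is that Pig's client classes then match the RPC version your 0.20-append cluster speaks.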
On Jul 2, 2011, at 0:38, praveenesh kumar praveen...@gmail.com wrote:
I wonder what happens if an HDFS client writes to 3 replicas but only one of
them succeeds and the other 2 fail. The client then considers the operation a
failure, but what happens to the successful block? If another client does a
later read and hits the successful block, it could read out data from a write
the writer considered failed.
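As I understand the 0.20-era behavior (hedged, worth verifying against your version), whether the write counts as failed hinges on the minimum-replication setting: the pipeline drops failed datanodes and the close succeeds as long as at least dfs.replication.min replicas (default 1) were written, so with one good replica the write would actually succeed and the block stays readable. Raising the minimum makes the scenario above a real failure. A sketch of the relevant hdfs-site.xml fragment, with dfs.replication.min being the 0.20-era property name (an assumption about your version):

```xml
<!-- hdfs-site.xml (sketch): target 3 replicas, but refuse to consider a
     write successful unless at least 2 replicas were actually written. -->
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
<property>
  <name>dfs.replication.min</name>
  <value>2</value>
</property>
```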
So given that it's a fairly unusual logging error, and that you're
using slf4j in your reducer, I'm wondering if your logging
configuration is somehow interfering with the logging config the child
process is using. Could you humor me, pull out your logging
code/dependencies, and give the job another try?
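One way to rule out interference while testing is to fall back to a minimal log4j.properties for the task child. This is only a sketch of a bare-bones config, not what Hadoop ships (its stock file routes child output through its own task-log appender):

```properties
# Minimal log4j.properties (sketch): everything at INFO to the console,
# with no custom appenders that could clash with slf4j bindings.
log4j.rootLogger=INFO,console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
```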
Hi,
Do we have to run multiple TaskTrackers when running multiple DataNodes on a
single computer?
Regards,
Xiaobo Gu
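For what it's worth, a single TaskTracker is normally enough regardless of how many DataNodes share the host, since it only provides map/reduce slots, not storage. What each extra DataNode instance does need is its own storage directory and its own ports. A sketch of an hdfs-site.xml for a second DataNode, where the paths and port numbers are assumptions and should be any free ones on your machine (property names are the 0.20-era ones):

```xml
<!-- hdfs-site.xml for a second DataNode on the same host (sketch).
     Directory and ports below are placeholders -- pick free ones. -->
<property>
  <name>dfs.data.dir</name>
  <value>/data/dn2</value>
</property>
<property>
  <name>dfs.datanode.address</name>
  <value>0.0.0.0:50011</value>
</property>
<property>
  <name>dfs.datanode.http.address</name>
  <value>0.0.0.0:50076</value>
</property>
<property>
  <name>dfs.datanode.ipc.address</name>
  <value>0.0.0.0:50021</value>
</property>
```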
-----Original Message-----
From: Xiaobo Gu [mailto:guxiaobo1...@gmail.com]
Sent: Friday, July 01, 2011 11:35 PM
To: hdfs-u...@hadoop.apache.org
Subject: How to run