It looks like the "HRegionInfo was null or empty in .META." bug has
reappeared in RC1. We're trying to do a large load -- 100GB on 19
nodes. Our boxes are 8 core, 8 GB RAM, and we've tuned the GC and
kernel with every suggestion I found in the wiki and mailing list.
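(For anyone who wants to reproduce the check: scanning .META. from the
shell and grepping for empty info:regioninfo cells is one way to spot
the bad rows. The sketch below assumes $HBASE_HOME points at the
install and guesses at the 0.20 shell's scan output format.)

# Dump .META. and look for rows whose info:regioninfo cell is empty.
echo "scan '.META.'" | ${HBASE_HOME}/bin/hbase shell > meta_dump.txt
grep "column=info:regioninfo" meta_dump.txt | grep "value=$"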
I've included every log I can think of.
Hello,
It looks like the load put on HDFS by HBase during this rapid upload of
data exceeds the I/O capacity of your cluster. What I would do in this case
is add more nodes until the load is sufficiently spread around.
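To confirm that theory while the upload is running, watching the disks
on a couple of region servers is usually enough (assuming the sysstat
package is installed):

# Extended per-device statistics, refreshed every 5 seconds. Data disks
# sitting near 100 %util with large await values mean you are I/O bound.
iostat -x 5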
You have a five-node cluster? How much RAM? How many CPU cores? How is
the disk setup?
Do you know how much data that accounts for? Do you think it would
make sense to enable compression on your family holding the HTML? If
so, please read http://wiki.apache.org/hadoop/UsingLzoCompression; it
may help you a lot.
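Once LZO is set up per that page, switching the family over is just a
few shell commands, roughly like this (untested here, and
'webtable'/'html' are placeholders for your own table and family names):

# Disable the table, turn on LZO for the family holding the HTML,
# then bring the table back online.
${HBASE_HOME}/bin/hbase shell <<'EOF'
disable 'webtable'
alter 'webtable', {NAME => 'html', COMPRESSION => 'LZO'}
enable 'webtable'
EOF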
J-D
On Thu, Aug 6, 2009 at 1:57 AM, Zheng Lv wrote:
> Hello,
> I adjust
Chen,
The main problem is that appends are not supported in HDFS, so HBase
simply cannot sync its logs to it. But we did some work to make that
story better. The latest revision in the 0.19 branch and 0.20 RC1 both
solve much of the data-loss problem, but it won't be near perfect until
we have append support.
Well, did you search for what that exception means? From that forum
post, http://su.pr/9Eqsre, it seems that your localhost is set to a
weird value. Please confirm that your OS network configuration is
valid.
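A quick way to check (adjust for your distribution):

hostname -f      # should print the node's real fully-qualified name
cat /etc/hosts   # 127.0.0.1 should map to localhost only; the box's
                 # real IP should map to its hostname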
Also which OS are you using?
Thx,
J-D
On Wed, Aug 5, 2009 at 11:12 PM, lei wang wrote:
Since your servers might be getting starved for I/O, it's a good idea
to check the throughput of the hard drives once.
As root, do:
hdparm -t /dev/sda1
That'll check the throughput of the sda1 drive. Take it from there.
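If the boxes have several data disks, a small loop saves typing (the
device names below are just an example):

# Sequential read throughput of the first partition on every sdX disk.
for d in /dev/sd?1; do hdparm -t "$d"; done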
On 8/5/09, Zheng Lv wrote:
> Hello,
> I adjusted the option "zookeeper.session.timeout"