David Alves wrote:
Hi guys

We've had HBase (0.18.0, r695089) and Hadoop (0.18.0, r686010) running for a while. Apart from the occasional regionserver stopping without notice (and without explanation, from what we can see in the logs), a problem we solve easily just by restarting it, we have now come to face a more serious problem of what I think is data loss.

What do you think it is, David? A hang? We've seen occasional hangups on HDFS. You could try thread dumping and see if you can figure out where things are blocked (you can do it in the UI on the problematic regionserver, or by sending QUIT to the JVM PID).
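For example, on the problematic regionserver, something along these lines should produce a dump (the pid file and log locations below assume the default hbase-daemon.sh settings; adjust for your install):

    # find the regionserver JVM pid (default pid dir is /tmp)
    cat /tmp/hbase-$USER-regionserver.pid

    # SIGQUIT makes the JVM print a full thread dump without exiting
    kill -QUIT <pid>

    # the dump goes to the daemon's stdout, normally the .out file
    less $HBASE_HOME/logs/hbase-$USER-regionserver-$(hostname).out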


We use HBase as a links and documents database (similar to Nutch) in a 3-node cluster (4GB of memory on each node). The links table has 4 regions and the documents table now has 200 regions, for a total of 216 (with META and ROOT).

How much RAM is allocated to HBase?  Does each table have a single column family or more?
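(For reference, the daemon heap is set via HBASE_HEAPSIZE in conf/hbase-env.sh; the value below is only an illustration, not a recommendation:)

    # conf/hbase-env.sh -- maximum heap, in MB, used by the HBase daemons
    export HBASE_HEAPSIZE=1000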

After the crawl task, which went OK (we now have 60GB/300GB full in HDFS), we proceeded to do a full table scan to create the indexes, and that's where things started to fail. We are seeing a problem in the logs (at the end of this email). This repeats until there is a RetriesExhaustedException and the task fails in the map phase. The Hadoop fsck tool tells us that HDFS is OK. I still have to explore the rest of the logs searching for some kind of error; I will post a new mail if I find anything.
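(The fsck check referred to above is the standard one, run roughly like this against the HBase root directory; the extra flags just list files and their blocks:)

    # report HDFS block health for everything under the HBase root dir
    ./bin/hadoop fsck /hbase -files -blocks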

    Any help would be greatly appreciated.

Is this file in your HDFS: hdfs://cyclops-prod-1:9000/hbase/document/153945136/docDatum/mapfiles/5163556575658593611/data? If so, can you fetch it using ./bin/hadoop fs -get FILENAME?
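For example (the path is taken from the exception above; the local destination is arbitrary):

    # check whether the mapfile data file is still visible in HDFS
    ./bin/hadoop fs -ls /hbase/document/153945136/docDatum/mapfiles/5163556575658593611/

    # if it is listed, try pulling it down to local disk
    ./bin/hadoop fs -get /hbase/document/153945136/docDatum/mapfiles/5163556575658593611/data /tmp/data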

What crawler are you using (out of interest)?
St.Ack
