There could be more than one reason why RegionTooBusyException is thrown.
Below are two (from HRegion):

  /**
   * We throw RegionTooBusyException if above memstore limit
   * and expect client to retry using some kind of backoff
   */
  private void checkResources()

  /**
   * Try to acquire a lock. Throw RegionTooBusyException
   * if failed to get the lock in time. Throw InterruptedIOException
   * if interrupted while waiting for the lock.
   */
  private void lock(final Lock lock, final int multiplier)
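
Since the server expects the client to back off and retry when the region is
above its memstore limit, one workaround while you diagnose the root cause is
an application-level retry loop with exponential backoff around the put. The
sketch below is self-contained and generic, not HBase API; the `withBackoff`
helper and its retry limits are illustrative names, and in real code you would
catch RegionTooBusyException (or RetriesExhaustedWithDetailsException) rather
than a plain RuntimeException:

```java
import java.util.concurrent.Callable;

public class BackoffRetry {
    // Retry an operation with exponential backoff. Assumes the thrown
    // exception signals a transient condition (e.g. region too busy).
    static <T> T withBackoff(Callable<T> op, int maxRetries, long baseMillis)
            throws Exception {
        for (int attempt = 0; ; attempt++) {
            try {
                return op.call();
            } catch (Exception e) {
                if (attempt >= maxRetries) {
                    throw e; // give up after maxRetries failed attempts
                }
                // Exponential backoff: baseMillis, 2x, 4x, ...
                Thread.sleep(baseMillis << attempt);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical flaky operation: fails twice, then succeeds.
        final int[] calls = {0};
        String result = withBackoff(() -> {
            if (++calls[0] < 3) throw new RuntimeException("region too busy");
            return "ok";
        }, 5, 10);
        System.out.println(result + " after " + calls[0] + " calls");
    }
}
```

Note that on the server side the blocking threshold in checkResources() is
derived from hbase.hregion.memstore.flush.size multiplied by
hbase.hregion.memstore.block.multiplier, so raising either (or flushing /
splitting the hot region) can also relieve the pressure.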

How many tasks may write to this row concurrently?

Which 0.98 release are you using?

Cheers

On Mon, Nov 10, 2014 at 11:10 AM, Brian Jeltema <
brian.jelt...@digitalenvoy.net> wrote:

> I’m running a map/reduce job against a table that is performing a large
> number of writes (probably updating every row).
> The job is failing with the exception below. This is a solid failure; it
> dies at the same point in the application,
> and at the same row in the table. So I doubt it’s a conflict with
> compaction (and the UI shows no compaction in progress),
> or that there is a load-related cause.
>
> ‘hbase hbck’ does not report any inconsistencies. The
> ‘waitForAllPreviousOpsAndReset’ leads me to suspect that
> there is an operation in progress that is hung and blocking the update. I
> don’t see anything suspicious in the HBase logs.
> The data at the point of failure is not unusual, and is identical to many
> preceding rows.
> Does anybody have any ideas of what I should look for to find the cause of
> this RegionTooBusyException?
>
> This is Hadoop 2.4 and HBase 0.98.
>
> 14/11/10 13:46:13 INFO mapreduce.Job: Task Id :
> attempt_1415210751318_0010_m_000314_1, Status : FAILED
> Error:
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed
> 1744 actions: RegionTooBusyException: 1744 times,
>         at
> org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:207)
>         at
> org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$1700(AsyncProcess.java:187)
>         at
> org.apache.hadoop.hbase.client.AsyncProcess.waitForAllPreviousOpsAndReset(AsyncProcess.java:1568)
>         at
> org.apache.hadoop.hbase.client.HTable.backgroundFlushCommits(HTable.java:1023)
>         at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:995)
>         at org.apache.hadoop.hbase.client.HTable.put(HTable.java:953)
>
> Brian
