bq. all the regions of this table were back on this same RS!

Interesting. Please check the master log around the time this RS was brought
online. You can pastebin the relevant snippet.
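
If it helps, here is a minimal sketch of turning the balancer switch on and
kicking a balance pass from the 0.98 Java client (HBaseAdmin). The shell
'balance_switch' / 'balancer' commands are the equivalent; the configuration
below just picks up whatever hbase-site.xml is on the classpath:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HBaseAdmin;

    public class BalancerKick {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        try {
          // Make sure the balancer switch is on; returns the previous state.
          boolean wasOn = admin.setBalancerRunning(true, true);
          System.out.println("balancer switch was previously: " + wasOn);
          // Ask the master for a balance pass; false means it declined
          // (switch off, regions in transition, etc.); the reason shows up
          // in the master log.
          boolean ran = admin.balancer();
          System.out.println("balance pass ran: " + ran);
        } finally {
          admin.close();
        }
      }
    }

If balancer() returns true but the table stays skewed, that log snippet is
what will tell us why.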

Thanks

On Fri, Feb 13, 2015 at 8:55 AM, Shahab Yunus <shahab.yu...@gmail.com>
wrote:

> Hi Ted.
>
> Yes, the cluster itself is balanced: on average 300 regions per node across
> 10 nodes.
>
> The number of tables is 53, of varying sizes.
>
> The balancer was invoked and it didn't do anything (i.e., no movement of
> regions), but we didn't check the master's logs. We can do that.
>
> Interestingly, we restarted the RS which was holding all the regions of
> this one table. The regions were nicely spread out across the remaining RSs.
> But when we brought this RS back, all the regions of this table were back on
> this same RS!
>
> Thanks.
>
>
> Regards,
> Shahab
>
> On Fri, Feb 13, 2015 at 11:46 AM, Ted Yu <yuzhih...@gmail.com> wrote:
>
> > How many tables are there in your cluster?
> >
> > Is the cluster balanced overall (in terms of number of regions per
> > server) but this table is not?
> >
> > What happens (check the master log) when you issue the 'balancer' command
> > through the shell?
> >
> > Cheers
> >
> > On Fri, Feb 13, 2015 at 8:19 AM, Shahab Yunus <shahab.yu...@gmail.com>
> > wrote:
> >
> > > CDH 5.3
> > > HBase 0.98.6
> > >
> > > We are writing data to an HBase table through a M/R job. We pre-split
> > > the table before each job run. The problem is that most of the regions
> > > end up on the same RS. This results in that one RS being severely
> > > overloaded, and subsequent M/R jobs fail when trying to write to the
> > > regions on that RS.
> > >
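One way to do the pre-split, if the table is (re)created before each run, is
to pass explicit split keys at create time, so the master can spread the
initial regions across region servers instead of opening them all in one
place. A minimal sketch with the 0.98 client API (the table name, column
family and key format below are made up; the shell equivalent is create with
SPLITS => [...]):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.HBaseAdmin;
    import org.apache.hadoop.hbase.util.Bytes;

    public class PreSplitCreate {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        try {
          HTableDescriptor desc =
              new HTableDescriptor(TableName.valueOf("my_table"));
          desc.addFamily(new HColumnDescriptor("cf"));
          // Nine split keys -> ten initial regions, assuming row keys that
          // start with a digit bucket ("1" .. "9").
          byte[][] splitKeys = new byte[9][];
          for (int i = 0; i < 9; i++) {
            splitKeys[i] = Bytes.toBytes(String.valueOf(i + 1));
          }
          admin.createTable(desc, splitKeys);
        } finally {
          admin.close();
        }
      }
    }
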
> > > The balancer is on and the split policy is default. No changes there.
> > > It is a 10-node cluster.
> > >
> > > All other related properties are defaults too.
> > >
> > > Any idea how we can force balancing of the new regions? Do we have to
> > > take compaction into account as well? Thanks.
> > >
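If the balancer keeps declining to touch an already-skewed table, the regions
can also be spread manually. A minimal sketch with the 0.98 client API;
passing null as the destination lets the master pick a target server for each
region (the shell 'move' command with only the encoded region name does the
same). The table name is illustrative:

    import java.util.List;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HRegionInfo;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.HBaseAdmin;

    public class SpreadTableRegions {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        try {
          List<HRegionInfo> regions =
              admin.getTableRegions(TableName.valueOf("my_table"));
          for (HRegionInfo region : regions) {
            // null destination: the master picks a target server.
            admin.move(region.getEncodedNameAsBytes(), null);
          }
        } finally {
          admin.close();
        }
      }
    }
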
> > > Regards,
> > > Shahab
> > >
> >
>
