Hi,
Thanks, Ted. We are using the default split policy, and our flush size is 64 MB.
The split size is calculated with the formula

Math.min(getDesiredMaxFileSize(), initialSize * tableRegionsCount * tableRegionsCount * tableRegionsCount);

If this value exceeds the max region size (10 GB), then the max region size is used instead.
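To make the growth of that threshold concrete, here is a minimal sketch of the calculation in Python. It assumes the defaults discussed above (64 MB flush size, 10 GB max region size) and assumes initialSize defaults to 2 * flush size, as IncreasingToUpperBoundRegionSplitPolicy does; treat it as an illustration of the formula, not HBase's actual implementation.

```python
# Sketch of the IncreasingToUpperBoundRegionSplitPolicy size formula.
# Assumed values (from the thread): 64 MB flush size, 10 GB max region size.
FLUSH_SIZE = 64 * 1024 ** 2      # hbase.hregion.memstore.flush.size
MAX_FILE_SIZE = 10 * 1024 ** 3   # hbase.hregion.max.filesize

def split_size(table_regions_count: int) -> int:
    """Effective split threshold for a table with this many regions
    on the region server, capped at the max region size."""
    initial_size = 2 * FLUSH_SIZE  # assumed default: 2 * flush size
    return min(MAX_FILE_SIZE, initial_size * table_regions_count ** 3)

for n in range(1, 6):
    print(n, split_size(n) // 1024 ** 2, "MB")
```

With these numbers the threshold grows as 128 MB, 1 GB, 3.375 GB, 8 GB, and then hits the 10 GB cap from the fifth region onward, which is why the cubic term stops mattering for large tables.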
Split policy may play a role here.
Please take a look at:
http://hbase.apache.org/book.html#_custom_split_policies
On Mon, May 15, 2017 at 1:48 AM, Rajeshkumar J
wrote:
Hi,
As we run MapReduce over HBase, it takes each region as the input for one
mapper. I have set the region max size to 10 GB. If a region holds only about
5 GB, will the mapper take that 5 GB of data as its input?
Thanks