Hi,
I am currently analyzing HBase coprocessors and I tried running
RowCountEndpoint on a single-node cluster, where it works fine. But when I
try to run it on a multi-node cluster, it does not throw an error but it
hangs, in the sense that it does not keep running. I have loaded the
coprocessor dynamically.
Thanks
-user@hbase to bcc (I encourage folks interested to subscribe to dev@hbase)
Personally, I'd rather see us concentrate on getting MOB out the door in
2.0.
branch-1 is now over 2 years old; I'd want some rigorous proof that we're
not setting our downstream users up for surprises by making major
Have a look at the classes VisibilityController and AccessController.
These are observers as well as Endpoints, which expose some APIs to the
client.
-Anoop-
On Mon, May 15, 2017 at 4:34 PM, Rajeshkumar J
wrote:
> Hi,
>
> Check this example in the below link
>
>
Hi.
Can we revive HBASE-15370: Backport Moderate Object Storage (MOB) to branch-1?
We also have customers using this feature, and it takes a lot of effort to
backport MOB patches from the master branch to the released versions because
the code bases differ significantly.
Regards,
Ashish
Hi,
Check this example in the below link
https://www.3pillarglobal.com/insights/hbase-coprocessors
Thanks
On Mon, May 15, 2017 at 4:00 PM, Cheyenne Forbes <
cheyenne.osanu.for...@gmail.com> wrote:
> I have a *Coprocessor* which overrides *postPut* and I also want to call a
>
I have a *Coprocessor* which overrides *postPut* and I also want to call a
custom function in this Coprocessor from the HBase client. In HBase 0.94 I
would use "*public class MyCoprocessor extends BaseRegionObserver
implements MyCoprocessorProtocol*", but *CoprocessorProtocol* no longer
exists in
Hi,
Thanks Ted. We are using the default split policy and our flush size is 64 MB.
The split size is calculated based on the formula
Math.min(getDesiredMaxFileSize(), initialSize * tableRegionsCount *
tableRegionsCount * tableRegionsCount);
If this size exceeds the max region size (10 GB), then max
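The formula above can be worked through as plain arithmetic. A minimal sketch, assuming the thread's values (64 MB flush size, 10 GB max region size) and assuming the initial size is 2 x the flush size, as in HBase's IncreasingToUpperBoundRegionSplitPolicy; this is a hand computation for illustration, not HBase's actual code:

```java
public class SplitSizeDemo {
    static final long MB = 1024L * 1024L;
    // Values quoted in the thread (assumptions for this sketch):
    static final long FLUSH_SIZE = 64 * MB;            // hbase.hregion.memstore.flush.size
    static final long MAX_FILE_SIZE = 10L * 1024 * MB; // hbase.hregion.max.filesize, 10 GB

    // min(desiredMaxFileSize, initialSize * count^3), with initialSize = 2 * flushSize
    static long splitSize(int tableRegionsCount) {
        long initialSize = 2 * FLUSH_SIZE; // 128 MB
        long grown = initialSize * tableRegionsCount * tableRegionsCount * tableRegionsCount;
        return Math.min(MAX_FILE_SIZE, grown);
    }

    public static void main(String[] args) {
        for (int count = 1; count <= 5; count++) {
            System.out.println(count + " region(s): split at " + splitSize(count) / MB + " MB");
        }
    }
}
```

With these numbers the thresholds grow as 128 MB, 1 GB, 3.375 GB, 8 GB, and from the fifth region onward the 10 GB cap takes over, so the policy behaves like a constant-size policy once the table has enough regions.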
Split policy may play a role here.
Please take a look at:
http://hbase.apache.org/book.html#_custom_split_policies
On Mon, May 15, 2017 at 1:48 AM, Rajeshkumar J
wrote:
> Hi,
>
> As we run mapreduce over hbase it will take each region as input for each
> mapper.
Hi,
As we run mapreduce over hbase it will take each region as input for each
mapper. I have given region max size as 10GB. If i have about 5 gb will it
take 5 gb of data as input of mappers??
Thanks