Re: Investigate hbase superuser permissions in the face of quota violation - HBASE-17978

2019-05-05 Thread Biju N
Hi Uma,
   Can you raise an issue if this is still a problem?

On Wed, Mar 13, 2019 at 10:55 AM Josh Elser  wrote:

> A superuser should still be able to initiate a compaction:
> https://issues.apache.org/jira/browse/HBASE-17978
>
> If the compaction didn't actually happen, that's a problem.
>
> On 3/13/19 3:09 AM, Uma wrote:
> > -- Forwarded message -
> > From: Uma 
> > Date: Wed 13 Mar, 2019, 6:54 AM
> > Subject: Investigate hbase superuser permissions in the face of quota
> > violation - HBASE-17978
> > To: user-subscr...@hbase.apache.org 
> >
> >
> > Hi Users,
> >
> > I observed that when a quota policy that disallows compactions is
> > enabled, a superuser is able to issue a compaction command and no error
> > is thrown to the user. But the compaction does not actually happen for
> > that table. The debug log prints the message below:
> >
> > “as an active space quota violation policy disallows compactions.”
> >
> > Is this the correct behaviour?
> >
> >
> >
> > Thanks,
> >
> > Uma
>
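
A minimal sketch of the sequence Uma describes, assuming the HBase 2.x
client API; the table name and size limit below are placeholders:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.quotas.QuotaSettingsFactory;
import org.apache.hadoop.hbase.quotas.SpaceViolationPolicy;

public class QuotaCompactionSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
            TableName tn = TableName.valueOf("quota_table"); // hypothetical table
            // 1 GB space quota; once violated, writes and compactions are disallowed
            admin.setQuota(QuotaSettingsFactory.limitTableSpace(
                tn, 1024L * 1024 * 1024, SpaceViolationPolicy.NO_WRITES_COMPACTIONS));
            // Per the thread: once the table is in violation, this call returns
            // without error but the compaction is silently skipped; HBASE-17978
            // is about letting a superuser bypass that check.
            admin.majorCompact(tn);
        }
    }
}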


Re: What is the benefit of using the HBase Spark Connector

2019-05-05 Thread Biju N
Predicate pushdown is one benefit that comes to mind. The project README
has more about the features: https://github.com/hortonworks-spark/shc.

On Wed, Mar 6, 2019 at 9:14 PM Kang Minwoo  wrote:

> Hello, Users.
>
> I wonder what the benefit of using the HBase Spark Connector is, instead
> of TableInputFormat.
>
> Best regards,
> Minwoo Kang
>
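
As an illustration of the pushdown point, a minimal sketch, assuming the
shc-core artifact is on the classpath; the table, catalog JSON, and column
names are hypothetical. With TableInputFormat the filter below would run in
Spark after a scan; SHC can push the row-key predicate down to HBase:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import static org.apache.spark.sql.functions.col;

public class ShcPushdownSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("shc-sketch").getOrCreate();
        // Hypothetical catalog mapping a DataFrame schema onto an HBase table
        String catalog = "{\"table\":{\"namespace\":\"default\",\"name\":\"t1\"},"
            + "\"rowkey\":\"key\","
            + "\"columns\":{"
            + "\"id\":{\"cf\":\"rowkey\",\"col\":\"key\",\"type\":\"string\"},"
            + "\"v\":{\"cf\":\"cf1\",\"col\":\"v\",\"type\":\"string\"}}}";
        Dataset<Row> df = spark.read()
            .option("catalog", catalog)
            .format("org.apache.spark.sql.execution.datasources.hbase")
            .load();
        // The equality predicate on the row-key column can become an HBase
        // point lookup instead of a client-side filter over a full scan.
        df.filter(col("id").equalTo("row42")).show();
        spark.stop();
    }
}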


Re: hbase 2.1.4: insert error: cannot get replica 0

2019-05-04 Thread Biju N
Major compactions can take a long time depending on the data size. Are you
moving from an earlier version of HBase to 2.1.4, i.e. did this loading
process work fine earlier? When you mention "200 puts as a batch in a java
thread", I assume you are using HBase batch mutation? How much of the 64 GB
of memory is allocated to HBase? Any frequent long GC pauses (this
information should be available in the RS GC logs)? If you are loading a
fresh table, is it pre-split to take advantage of all the RSes you have? Is
there any other workload during this data load?

Thanks

On Fri, Apr 26, 2019 at 8:03 AM Michael  wrote:

> The logs show nothing that looks like an error.
>
> Are long compactions that take up to 100 seconds OK?
>
> Otherwise I don't find any errors or exceptions in the logs.
>
> cheers
>  Michael
>
>
> Am 25.04.19 um 07:10 schrieb Stack:
> > Check the logs on the servers? Anything untoward? For example, if you
> look
> > in the master logs, any complaints about regions having trouble onlining?
> >
>
>
>
> > What version of hbase?
>
> version 2.1.4
>
>
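
A minimal sketch of the pre-split-then-batch-put pattern the questions above
probe, assuming the HBase 2.x client API; the table name, column family, and
region count are hypothetical:

import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.RegionSplitter;

public class BatchLoadSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
            TableName tn = TableName.valueOf("load_test"); // hypothetical table
            // Pre-split so all RSes take writes from the start of the load
            byte[][] splits = new RegionSplitter.HexStringSplit().split(16);
            admin.createTable(TableDescriptorBuilder.newBuilder(tn)
                .setColumnFamily(ColumnFamilyDescriptorBuilder.of("cf"))
                .build(), splits);
            // "200 puts as a batch": one client call per list of 200 mutations
            try (Table table = conn.getTable(tn)) {
                List<Put> batch = new ArrayList<>(200);
                for (int i = 0; i < 200; i++) {
                    Put p = new Put(Bytes.toBytes(String.format("%08x", i)));
                    p.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v" + i));
                    batch.add(p);
                }
                table.put(batch); // HBase batch mutation
            }
        }
    }
}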


Re: Region Stuck in "OPENING" state

2017-09-20 Thread Biju N
If you haven't tried already, use "hbase hbck" to fix the holes.

On Wed, Sep 20, 2017 at 8:45 AM, Adam Hunt  wrote:

> I'm running HBase 1.1.8 on MapR 5.2.1 and I have a table with a little
> over 1000 regions. All the regions load fine, except for one, which is
> stuck in the opening state. I've tried restarting the masters and all the
> region servers, but I'm getting the same result. The logs don't seem very
> helpful. They show the request to open the region, but it never
> succeeds.
>
> 2017-09-20 05:26:02,095 INFO
> [PriorityRpcServer.handler=3,queue=1,port=16020]
> regionserver.RSRpcServices: Open whois-versioned,bobandamy.net
> ,1505420182154.489a652d37429afe19955512f644f64f.
> 2017-09-20 05:26:02,103 INFO
> [StoreOpener-489a652d37429afe19955512f644f64f-1] hfile.CacheConfig:
> blockCache=LruBlockCache{blockCount=17212, currentSize=1451765224,
> freeSize=4952805912, maxSize=6404571136, heapSize=1451765224,
> minSize=6084342272, minFactor=0.95, multiSize=3042171136, multiFactor=0.5,
> singleSize=1521085568, singleFactor=0.25}, cacheDataOnRead=true,
> cacheDataOnWrite=false, cacheIndexesOnWrite=false,
> cacheBloomsOnWrite=false, cacheEvictOnClose=false,
> cacheDataCompressed=false, prefetchOnOpen=false
> 2017-09-20 05:26:02,103 INFO
> [StoreOpener-489a652d37429afe19955512f644f64f-1]
> compactions.CompactionConfiguration: size [134217728,
> 9223372036854775807);
> files [3, 10); ratio 1.20; off-peak ratio 5.00; throttle point
> 2684354560; major period 60480, major jitter 0.50, min locality to
> compact 0.00
> 2017-09-20 05:26:02,106 INFO
> [StoreOpener-489a652d37429afe19955512f644f64f-1] hfile.CacheConfig:
> blockCache=LruBlockCache{blockCount=17212, currentSize=1451765224,
> freeSize=4952805912, maxSize=6404571136, heapSize=1451765224,
> minSize=6084342272, minFactor=0.95, multiSize=3042171136, multiFactor=0.5,
> singleSize=1521085568, singleFactor=0.25}, cacheDataOnRead=true,
> cacheDataOnWrite=false, cacheIndexesOnWrite=false,
> cacheBloomsOnWrite=false, cacheEvictOnClose=false,
> cacheDataCompressed=false, prefetchOnOpen=false
>
> hbck reports this error:
>
> ERROR: Region { meta =>
> whois-versioned,bobandamy.net,1505420182154.489a652d37429afe19955512f644f6
> 4f.,
> hdfs =>
> maprfs:///hbase/data/default/whois-versioned/
> 489a652d37429afe19955512f644f64f,
> deployed => , replicaId => 0 } not deployed on any region server.
> 2017-09-20 05:29:10,602 INFO  [main] util.HBaseFsck: Handling overlap
> merges in parallel. set hbasefsck.overlap.merge.parallel to false to run
> serially.
> ERROR: There is a hole in the region chain between bobandamy.net and
> bodyinbalanceshop.com.  You need to create a new .regioninfo and region
> dir
> in hdfs to plug the hole.
> ERROR: Found inconsistency in table whois-versioned
>
> Repairing the table created more of a mess, and didn't fix this issue. I've
> tried assigning it from the shell, but the operation just times out.
> Any ideas on how to debug this issue in order to recover the data would be
> greatly appreciated. Thank you.
>
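
A sketch of the HBase 1.x hbck invocations that target this class of
inconsistency; verify the flags against "hbase hbck -h" on your version
before running anything, since the -fix* options mutate state:

hbase hbck -details whois-versioned
    (report-only: list the inconsistencies for the table)
hbase hbck -fixAssignments whois-versioned
    (re-assign regions that are not deployed on any region server)
hbase hbck -fixMeta -fixHdfsHoles whois-versioned
    (repair meta entries and create region dirs to plug holes in the chain)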


Re: HBase connection "expiration" on kerberized cluster

2017-08-07 Thread Biju N
Hi Sebastien,
   Can you also add these properties to your configuration and give it a
try?

configuration.set("hadoop.security.authentication", "Kerberos");
configuration.set("hbase.security.authentication", "Kerberos");
configuration.set("hbase.master.kerberos.principal", "hbase/_HOST@realm");
<- realm need to be replaced
configuration.set("hbase.regionserver.kerberos.principal",
"hbase/_HOST@realm");

Also eemove

configuration.set("hbase.client.kerberos.principal",
"myuser@myDomain");
configuration.set("hbase.client.keytab.file",
"/path/to/myuser/keytab");


On Mon, Aug 7, 2017 at 3:55 AM, schausson  wrote:

> Hi Sean,
>
> Unfortunately, I couldn't solve my issue...
> Below is the code of my utility class in charge of logging in and creating
> an HBase connection. I added the AuthUtil stuff as suggested in your
> answer, but probably missed something :(
>
> My web service basically invokes the GetHbaseConnection() method, and uses
> the returned connection to read/write data from/to HBase.
> At application startup, everything is fine : it successfully logs in,
> creates the HBase connection and my web service returns proper data.
> The problem comes up if I wait for a long while (> ticket lifetime). Then,
> when I invoke my web service again, I get the previously mentioned
> warnings and a socket timeout error...
> When I look at the AuthUtil.getAuthChore() source code, it invokes
> ugi.checkTGTAndReloginFromKeytab(), which is also what I do in the
> background thread that I create when logging in (cf. the
> SpawnAutoRenewalThread() method below).
>
> Just to make it clear: in your answer, you wrote "you'll need to provide a
> keytab that HBase can use to renew kerberos access over time." Does that
> mean I have to provide a specific keytab for HBase, or can I use a single
> keytab for everything?
>
> In the end, should I stop trying to reuse my HBase connection and re-create
> it every time (despite the heavy cost of re-creating it)?
>
> Sorry about my "newbie" questions, but I feel really confused about all
> this
> stuff...
>
> Thanks for your help
>
> Sebastien
>
> PS: Note that if I remove HBase requests from my web service and "just"
> perform some HDFS operations (listing files from a folder for instance),
> everything works fine, even if I wait for a long while, so the issue is
> HBase-related.
>
>
>
> private static Configuration configuration;
> private static boolean loggedOnCluster = false;
> private static Connection connection = null;
> private static ChoreService choreService = null;
>
> private static Configuration GetConfiguration() throws IOException {
>     if (configuration == null) {
>         configuration = HBaseConfiguration.create();
>         configuration.set("hbase.client.kerberos.principal", "myuser@myDomain");
>         configuration.set("hbase.client.keytab.file", "/path/to/myuser/keytab");
>     }
>     return configuration;
> }
>
> public static Connection GetHbaseConnection() {
>     try {
>         if (!loggedOnCluster) {
>             Configuration conf = GetConfiguration();
>             String userAccount = conf.get("hbase.client.kerberos.principal");
>             String keyTabPath = conf.get("hbase.client.keytab.file");
>             UserGroupInformation.setConfiguration(conf);
>             UserGroupInformation.loginUserFromKeytab(userAccount, keyTabPath);
>             loggedOnCluster = true;
>             SpawnAutoRenewalThread();
>         }
>     } catch (IOException e) {
>         LOGGER.error("!! Error while logging in !!");
>         e.printStackTrace();
>     }
>
>     if (connection == null || connection.isClosed() || connection.isAborted()) {
>         try {
>             final Configuration conf = GetConfiguration();
>             final ScheduledChore authChore = AuthUtil.getAuthChore(conf);
>             if (authChore != null) {
>                 choreService = new ChoreService("MY_APPLICATION");
>                 choreService.scheduleChore(authChore);
>             }
>             connection = ConnectionFactory.createConnection(conf);
>         } catch (IOException ex) {
>             LOGGER.error("!! Could not obtain connection to HBase !!");
>             ex.printStackTrace();
>             connection = null;
>         }
>     }
>     return connection;
> }
>
> private static void SpawnAutoRenewalThread() throws IOException {
>     Thread t = new Thread(new Runnable() {
>         @Override
>         public void run() {
>             while (true) {
>                 tr
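
A minimal sketch of the login-plus-renewal path discussed above, assuming an
HBase 1.x/2.x client; principals, realms, and paths are placeholders. Note
that AuthUtil.getAuthChore() appears to perform the keytab login itself from
hbase.client.keytab.file and hbase.client.kerberos.principal, so this sketch
keeps those keys set:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.AuthUtil;
import org.apache.hadoop.hbase.ChoreService;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ScheduledChore;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class SecureConnectionSketch {
    public static Connection open() throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hadoop.security.authentication", "kerberos");
        conf.set("hbase.security.authentication", "kerberos");
        conf.set("hbase.master.kerberos.principal", "hbase/_HOST@MY.REALM");       // placeholder realm
        conf.set("hbase.regionserver.kerberos.principal", "hbase/_HOST@MY.REALM"); // placeholder realm
        conf.set("hbase.client.kerberos.principal", "myuser@MY.REALM");            // placeholder principal
        conf.set("hbase.client.keytab.file", "/path/to/myuser.keytab");            // placeholder path

        // getAuthChore() logs in from the keytab and schedules periodic relogin,
        // replacing a hand-rolled renewal thread like SpawnAutoRenewalThread().
        ScheduledChore authChore = AuthUtil.getAuthChore(conf);
        if (authChore != null) {
            new ChoreService("AUTH_RENEWAL").scheduleChore(authChore);
        }
        return ConnectionFactory.createConnection(conf);
    }
}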

Re: HTableDescriptor

2017-06-19 Thread Biju N
HBASE-18241 <https://issues.apache.org/jira/browse/HBASE-18241> created.


On Mon, Jun 19, 2017 at 11:47 PM, Ted Yu  wrote:

> By 'should be' I meant the desired form.
>
> Do you mind logging a JIRA which changes the return type for the master
> branch?
>
> Cheers
>
> On Mon, Jun 19, 2017 at 8:44 PM, Biju N  wrote:
>
> > Thanks Ted for the quick response. I looked at the code in the "master"
> > branch and it seems to have HTableDescriptor as the return type. Am I
> > looking at the wrong branch?
> >
> > https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java#L200
> >
> > https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Table.java#L69
> >
> > On Mon, Jun 19, 2017 at 11:33 PM, Ted Yu  wrote:
> >
> > > I guess in branch-2, this is to keep backward compatibility.
> > >
> > > In master branch, TableDescriptor should be returned from these two
> > > methods.
> > >
> > > On Mon, Jun 19, 2017 at 8:27 PM, Biju N 
> wrote:
> > >
> > > > Hi There,
> > > > From the docs, HTableDescriptor is deprecated and will be removed in
> > > > 3.0. But both client.Table#getTableDescriptor and
> > > > client.Admin#getTableDescriptor return an HTableDescriptor object. Is
> > > > there a reason why these methods return the deprecated type? I am
> > > > looking to change code from prior versions to not use the deprecated
> > > > classes.
> > > >
> > > > Thanks,
> > > > Biju
> > > >
> > >
> >
>
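
A minimal sketch of the non-deprecated path as it exists in the HBase 2.x
client, with a hypothetical table name; Admin#getDescriptor and
Table#getDescriptor return the TableDescriptor interface instead of
HTableDescriptor:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.client.TableDescriptor;

public class DescriptorMigrationSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin();
             Table table = conn.getTable(TableName.valueOf("my_table"))) { // hypothetical table
            // Deprecated: admin.getTableDescriptor(...) / table.getTableDescriptor()
            TableDescriptor fromAdmin = admin.getDescriptor(table.getName());
            TableDescriptor fromTable = table.getDescriptor();
            System.out.println(fromAdmin.getTableName() + " has "
                + fromTable.getColumnFamilyCount() + " column families");
        }
    }
}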


Re: HTableDescriptor

2017-06-19 Thread Biju N
Thanks Ted for the quick response. I looked at the code in the "master"
branch and it seems to have HTableDescriptor as the return type. Am I
looking at the wrong branch?

https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java#L200

https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Table.java#L69

On Mon, Jun 19, 2017 at 11:33 PM, Ted Yu  wrote:

> I guess in branch-2, this is to keep backward compatibility.
>
> In master branch, TableDescriptor should be returned from these two
> methods.
>
> On Mon, Jun 19, 2017 at 8:27 PM, Biju N  wrote:
>
> > Hi There,
> > From the docs, HTableDescriptor is deprecated and will be removed in
> > 3.0. But both client.Table#getTableDescriptor and
> > client.Admin#getTableDescriptor return an HTableDescriptor object. Is
> > there a reason why these methods return the deprecated type? I am
> > looking to change code from prior versions to not use the deprecated
> > classes.
> >
> > Thanks,
> > Biju
> >
>


HTableDescriptor

2017-06-19 Thread Biju N
Hi There,
   From the docs, HTableDescriptor is deprecated and will be removed in
3.0. But both client.Table#getTableDescriptor and
client.Admin#getTableDescriptor return an HTableDescriptor object. Is there
a reason why these methods return the deprecated type? I am looking to
change code from prior versions to not use the deprecated classes.

Thanks,
Biju


Re: On HBase Read Replicas

2017-03-01 Thread Biju N
From the table definition. For example:
https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HTableDescriptor.html#getRegionReplication--

On Tue, Feb 28, 2017 at 3:30 PM, jeff saremi  wrote:

> Enis
>
> Just one more question: how would I go about getting the count of the
> replicas for a table or column family? Thanks
>
> 
> From: Enis Söztutar 
> Sent: Wednesday, February 22, 2017 1:38:41 PM
> To: hbase-user
> Subject: Re: On HBase Read Replicas
>
> If you are doing a get to a specific replica, it will execute as a read
> with retries to a single "copy". There will not be any backup / fallback
> RPCs to any other replica.
>
> Only in timeline consistency mode there will be fallback RPCs.
>
> Enis
>
> On Sun, Feb 19, 2017 at 9:43 PM, Anoop John  wrote:
>
> > Thanks Enis. I was not aware of the way to set the replica id
> > explicitly. So what will happen if that replica is down at read
> > time? Will the read go to another replica?
> >
> > -Anoop-
> >
> > On Sat, Feb 18, 2017 at 3:34 AM, Enis Söztutar 
> wrote:
> > > You can do gets using two different "modes":
> > >  - Do a read with backup RPCs. In this case, the algorithm that I gave
> > > above will be used: 1 RPC to the primary, and 2 more RPCs after the
> > > primary times out.
> > >  - Do a read to a single replica. In this case, there is only 1 RPC
> that
> > > will happen to that given replica.
> > >
> > > Enis
> > >
> > > On Fri, Feb 17, 2017 at 12:03 PM, jeff saremi 
> > > wrote:
> > >
> > >> Enis
> > >>
> > >> Thanks for taking the time to reply
> > >>
> > >> So I thought that a read request is sent to all replicas regardless.
> > >> If we have the option of sending to one, analyzing the response, and
> > >> then sending to another, this bodes well for our scenarios.
> > >>
> > >> Please confirm
> > >>
> > >> thanks
> > >>
> > >> 
> > >> From: Enis Söztutar 
> > >> Sent: Friday, February 17, 2017 11:38:42 AM
> > >> To: hbase-user
> > >> Subject: Re: On HBase Read Replicas
> > >>
> > >> You can use read-replicas to distribute the read-load if you are fine
> > with
> > >> stale reads. The read replicas normally have a "backup rpc" path,
> which
> > >> implements a logic like this:
> > >>  - Send the RPC to the primary replica
> > >>  - if no response for 100ms (or configured timeout), send RPCs to the
> > other
> > >> replicas
> > >>  - return the first non-exception response.
> > >>
> > >> However, there is also another feature for read replicas, where you
> can
> > >> indicate which exact replica_id you want to read from when you are
> > doing a
> > >> get. If you do this:
> > >> Get get = new Get(row);
> > >> get.setReplicaId(2);
> > >>
> > >> the Get RPC will only go to the replica_id=2. Note that if you have
> > region
> > >> replication = 3, then you will have regions with replica ids: {0, 1,
> 2}
> > >> where replica_id=0 is the primary.
> > >>
> > >> So you can do load-balancing with a get.setReplicaId(random() %
> > >> num_replicas) kind of pattern.
> > >>
> > >> Enis
> > >>
> > >>
> > >>
> > >> On Thu, Feb 16, 2017 at 9:41 AM, Anoop John 
> > wrote:
> > >>
> > >> > Never saw this kind of discussion.
> > >> >
> > >> > -Anoop-
> > >> >
> > >> > On Thu, Feb 16, 2017 at 10:13 PM, jeff saremi <
> jeffsar...@hotmail.com
> > >
> > >> > wrote:
> > >> > > Thanks Anoop.
> > >> > >
> > >> > > Understood.
> > >> > >
> > >> > > Have there been enhancement requests or discussions on load
> > balancing
> > >> by
> > >> > providing additional replicas in the past? Has anyone else come up
> > with
> > >> > anything on this?
> > >> > > thanks
> > >> > >
> > >> > > 
> > >> > > From: Anoop John 
> > >> > > Sent: Thursday, February 16, 2017 2:35:48 AM
> > >> > > To: user@hbase.apache.org
> > >> > > Subject: Re: On HBase Read Replicas
> > >> > >
> > >> > > The region replica feature came in to reduce MTTR and so
> > >> > > increase data availability. When the RS hosting the primary
> > >> > > region dies, clients can read from the secondary regions. But
> > >> > > keep one thing in mind: the data from the secondary regions can
> > >> > > be a bit out of sync, as the replicas are eventually consistent.
> > >> > > For this reason, changing the client to share the load across
> > >> > > different RSs might be tough.
> > >> > >
> > >> > > -Anoop-
> > >> > >
> > >> > > On Sun, Feb 12, 2017 at 8:13 AM, jeff saremi <
> > jeffsar...@hotmail.com>
> > >> > wrote:
> > >> > >> Yes indeed. Thank you very much, Ted.
> > >> > >>
> > >> > >> 
> > >> > >> From: Ted Yu 
> > >> > >> Sent: Saturday, February 11, 2017 3:40:50 PM
> > >> > >> To: user@hbase.apache.org
> > >> > >> Subject: Re: On HBase Read Replicas
> > >> > >>
> > >> > >> Please take a look at the design doc attached to
> > >> > >> https://issues.apache.org/jira/browse/HBASE-10070.
> > >> > >>
> > >> > >> Your first question would be answered by that d
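
A minimal sketch tying the thread together, assuming the HBase 2.x client
API and a table created with region replication enabled; the table and row
names are placeholders:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Consistency;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ReplicaReadSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table table = conn.getTable(TableName.valueOf("t1"))) { // hypothetical table
            // Replica count comes from the table definition, as noted above
            int numReplicas = table.getDescriptor().getRegionReplication();

            // Timeline-consistent read: primary first, backup RPCs to secondaries
            Get g1 = new Get(Bytes.toBytes("row1"));
            g1.setConsistency(Consistency.TIMELINE);
            Result r1 = table.get(g1);
            System.out.println("replicas=" + numReplicas + " stale=" + r1.isStale());

            // Pinned read: a single RPC to the chosen replica, no fallback
            Get g2 = new Get(Bytes.toBytes("row1"));
            g2.setReplicaId(2); // valid ids are 0..numReplicas-1; 0 is the primary
            Result r2 = table.get(g2);
            System.out.println(r2.isEmpty());
        }
    }
}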

Re: Show Regions list with both HostName and FQDN

2016-06-29 Thread Biju N
The image didn't come through, Karthik.

On Wed, Jun 29, 2016 at 8:30 AM, karthi keyan 
wrote:

> In my HBase cluster I have *only one* *RegionServer*. In the Web UI it
> displays the RegionServer count as *2*, with the hostname and the FQDN of
> the same machine, as below:
>
> [image: Inline image 1]
>
> Initially no regions were allocated (state=PENDING_OPEN) because it
> refers to the node by the FQDN of the RegionServer.
>
> This is HBase 1.1.5 on the Windows platform.
> The %HBASE_HOME%/conf/regionservers file lists the RegionServer with the
> FQDN of the machine.
> In hbase-site.xml all values are configured with the FQDN of the machine.
>
> I think this issue is related to DNS resolution.
>
> Could anyone guide me to resolve this issue?
>
>
> -Karthik
>


Re: after server restart - getting exception - java.io.IOException: Timed out waiting for lock for row

2016-06-22 Thread Biju N
Vishnu,
Are you using "local index" on any of the tables? We have seen similar
issues while using "local index".

On Wed, Jun 22, 2016 at 12:25 PM, vishnu rao  wrote:

> The server dies when trying to take the thread dump.
>
> i believe i am experiencing this bug
>
> https://issues.apache.org/jira/browse/PHOENIX-2508
>
> On Wed, Jun 22, 2016 at 5:03 PM, Heng Chen 
> wrote:
>
> > Which thread holds the row lock? Could you dump the jstack with 'jstack
> > -l <pid>'?
> >
> > 2016-06-22 16:14 GMT+08:00 vishnu rao :
> >
> > > hi Heng.
> > >
> > > 2016-06-22 08:13:42,256 WARN
> > > [B.defaultRpcServer.handler=32,queue=2,port=16020]
> regionserver.HRegion:
> > > Failed getting lock in batch put,
> > > row=\x01\xD6\xFD\xC9\xDC\xE4\x08\xC4\x0D\xBESM\xC2\x82\x14Z
> > >
> > > java.io.IOException: Timed out waiting for lock for row:
> > > \x01\xD6\xFD\xC9\xDC\xE4\x08\xC4\x0D\xBESM\xC2\x82\x14Z
> > >
> > > at
> > >
> > >
> >
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLockInternal(HRegion.java:5051)
> > >
> > > at
> > >
> > >
> >
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2944)
> > >
> > > at
> > >
> >
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2801)
> > >
> > > at
> > >
> >
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2743)
> > >
> > > at
> > >
> > >
> >
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:692)
> > >
> > > at
> > >
> > >
> >
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:654)
> > >
> > > at
> > >
> > >
> >
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2031)
> > >
> > > at
> > >
> > >
> >
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32213)
> > >
> > > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
> > >
> > > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
> > >
> > > at
> > >
> >
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
> > >
> > > at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
> > >
> > > at java.lang.Thread.run(Thread.java:745)
> > >
> > > On Wed, Jun 22, 2016 at 3:50 PM, Heng Chen 
> > > wrote:
> > >
> > > > Could you paste the whole jstack and the related RS logs? It seems the
> > > > row write lock was held by some thread. We need more information to
> > > > find it.
> > > >
> > > > 2016-06-22 13:48 GMT+08:00 vishnu rao :
> > > >
> > > > > Need some help. This has happened on 2 of my servers.
> > > > > -
> > > > >
> > > > > *[B.defaultRpcServer.handler=2,queue=2,port=16020]
> > > regionserver.HRegion:
> > > > > Failed getting lock in batch put,
> > > > > row=a\xF7\x1D\xCBdR\xBC\xEC_\x18D>\xA2\xD0\x95\xFF*
> > > > >
> > > > > *java.io.IOException: Timed out waiting for lock for row:
> > > > > a\xF7\x1D\xCBdR\xBC\xEC_\x18D>\xA2\xD0\x95\xFF*
> > > > >
> > > > > at
> > > > >
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLockInternal(HRegion.java:5051)
> > > > >
> > > > > at
> > > > >
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2944)
> > > > >
> > > > > at
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2801)
> > > > >
> > > > > at
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2743)
> > > > >
> > > > > at
> > > > >
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:692)
> > > > >
> > > > > at
> > > > >
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:654)
> > > > >
> > > > > at
> > > > >
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2031)
> > > > >
> > > > > at
> > > > >
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32213)
> > > > >
> > > > > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
> > > > >
> > > > > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
> > > > >
> > > > > at
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
> > > > >
> > > > > at
> > org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
> > > > >
> > > > > at java.lang.Thread.run(Thread.java:745)
> > > > >
> > > > > --
> > > > > with regards,
> > > > > ch Vishnu
> > > > > mash213.wordpress.com
> > > > > doodle-vishnu.blogspot.in
> > > > >
> > > >
> > >
> > >
> > >
> > > --
> > > with regards,
> > > ch Vishnu
> > > mash213.wordpress.com
> > > doodle-vishnu.blogspot.in
> > >
> >
>
>
>
> --
> with regards,
> ch Vishnu
> mash213.wordpress.com
> doodle-vishnu.blogspot.i