[jira] [Created] (HBASE-15755) SnapshotDescriptionUtils does not have any Interface audience marked

2016-05-02 Thread ramkrishna.s.vasudevan (JIRA)
ramkrishna.s.vasudevan created HBASE-15755:
--

 Summary: SnapshotDescriptionUtils does not have any Interface 
audience marked
 Key: HBASE-15755
 URL: https://issues.apache.org/jira/browse/HBASE-15755
 Project: HBase
  Issue Type: Bug
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan


SnapshotDescriptionUtils does not have any InterfaceAudience (IA) annotation 
marked. Should this be private or public?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Hangout on Slack?

2016-05-02 Thread Nick Dimiduk
I'm not such a huge Slack fan, but I'm also becoming curmudgeonly. Sure,
why not, if that's where people want to gather? Are you creating a room? How do
we make it "official"?

On Mon, Apr 25, 2016 at 4:22 PM, Stack  wrote:

> On Mon, Apr 25, 2016 at 4:05 PM, Apekshit Sharma 
> wrote:
>
> > "Committers should hang out in the #hbase room on irc.freenode.net for
> > real-time discussions."
> > -- HBase Book
> >
> > The room has a bunch of people, but none whom I recognize. I wonder what
> > happened, and how a room which I imagine must once have had thriving geeky
> > discussions just died.
> >
> > Anyway, let's revive the old tradition, because it will certainly be useful
> > to hang out in a room for real-time discussions. We can use the new kickass
> > service for that: Slack.
> >
> > What says the community?
> >
> >
> Or we could just revive the existing channel?
> St.Ack
>
>
>
> > -- Appy
> >
>


[jira] [Created] (HBASE-15754) Add testcase for AES encryption

2016-05-02 Thread Duo Zhang (JIRA)
Duo Zhang created HBASE-15754:
-

 Summary: Add testcase for AES encryption
 Key: HBASE-15754
 URL: https://issues.apache.org/jira/browse/HBASE-15754
 Project: HBase
  Issue Type: Sub-task
Reporter: Duo Zhang
Assignee: Duo Zhang


As discussed on the mailing list.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: hbaseAdmin tableExists create catalogTracker for every call

2016-05-02 Thread Ted Yu
YQ:
See HBASE-4495, where Mikhail removed this part of the code from HBaseAdmin.

For 0.98, I checked the source code - it is still in the following form.

On Mon, May 2, 2016 at 7:12 PM, Enis Söztutar  wrote:

> BTW, you can use dev@hbase.apache.org rather than issues@. The latter is
> more for emails from JIRA.
>
> On Mon, May 2, 2016 at 7:11 PM, Enis Söztutar  wrote:
>
> > Thanks for reporting.
> >
> > In master and branch-1, this part of the code is very different and no
> > longer has the problem.
> >
> > Did you check the latest 0.98 code base? It may not be worth fixing at
> > this point.
> >
> > Enis
> >
> > On Mon, May 2, 2016 at 6:21 AM, WangYQ 
> wrote:
> >
> >> the code :
> >>
> >>  private synchronized CatalogTracker getCatalogTracker()
> >>   throws ZooKeeperConnectionException, IOException {
> >> CatalogTracker ct = null;
> >> try {
> >>   ct = new CatalogTracker(this.conf);
> >>   ct.start();
> >> } catch (InterruptedException e) {
> >>   // Let it out as an IOE for now until we redo all so tolerate IEs
> >>   Thread.currentThread().interrupt();
> >>   throw new IOException("Interrupted", e);
> >> }
> >> return ct;
> >>   }
> >>
> >>
> >> I think we can make CatalogTracker a member of the HBaseAdmin class; this
> >> would avoid a lot of object creation and destruction and reduce client
> >> connections to ZK.
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >> At 2016-04-19 21:09:42, "WangYQ"  wrote:
> >>
> >> in hbase 0.98.10, class "HBaseAdmin",
> >> line 303, method "tableExists", will create a CatalogTracker for
> >> every call
> >>
> >>
> >> we can let an HBaseAdmin object use one CatalogTracker object, to reduce
> >> object creation, ZK connections, and so on
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >
> >
>
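For reference, a minimal sketch of the suggested change (assuming the 0.98-era
CatalogTracker API; the field name is illustrative, and a matching stop of the
tracker on HBaseAdmin close/cleanup would also be needed):

  private CatalogTracker catalogTracker; // reused across calls

  private synchronized CatalogTracker getCatalogTracker()
      throws ZooKeeperConnectionException, IOException {
    if (catalogTracker == null) {
      CatalogTracker ct = new CatalogTracker(this.conf);
      try {
        ct.start();
      } catch (InterruptedException e) {
        // Let it out as an IOE for now until we redo all so tolerate IEs
        Thread.currentThread().interrupt();
        throw new IOException("Interrupted", e);
      }
      catalogTracker = ct;
    }
    return catalogTracker;
  }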


Re: [DISCUSS] Make AsyncFSWAL the default WAL in 2.0

2016-05-02 Thread 张铎
Fine, will add the testcase.

And for the RPC, we only implement a new client-side DTP (data transfer
protocol) here and still use the original RPC.

Thanks.

2016-05-03 3:20 GMT+08:00 Gary Helmling :

> On Fri, Apr 29, 2016 at 6:24 PM 张铎  wrote:
>
> > Yes, it does. There is a testcase that enumerates all the possible
> > protection levels (authentication, integrity, and privacy) and encryption
> > algorithms (none, 3des, rc4).
> >
> >
> >
> https://github.com/apache/hbase/blob/master/hbase-server/src/test/java/org/apache/hadoop/hbase/io/asyncfs/TestSaslFanOutOneBlockAsyncDFSOutput.java
> >
> > I have also tested it in a secure cluster (hbase-2.0.0-SNAPSHOT and
> > hadoop-2.4.0).
> >
>
> Thanks.  Can you add in support for testing with AES
> (dfs.encrypt.data.transfer.cipher.suites=AES/CTR/NoPadding)?  This is only
> available in Hadoop 2.6.0+, but I think is far more likely to be used in
> production than 3des or rc4.


> Also, have you been following HADOOP-10768?  That is changing Hadoop RPC
> encryption negotiation to support more performant AES wrapping, similar to
> what is now supported in the data transfer pipeline.
>


[jira] [Created] (HBASE-15753) Site does not build with the instructions in the book

2016-05-02 Thread Enis Soztutar (JIRA)
Enis Soztutar created HBASE-15753:
-

 Summary: Site does not build with the instructions in the book
 Key: HBASE-15753
 URL: https://issues.apache.org/jira/browse/HBASE-15753
 Project: HBase
  Issue Type: Bug
Reporter: Enis Soztutar


Originally reported by [~clarax98007] in HBASE-15337. 
Instructions in the book say to run: 
{code}
mvn site -DskipTests
{code}

But it fails with javadoc-related errors. 
It seems that we are using this in the Jenkins job that [~misty] had set up 
(https://builds.apache.org/job/hbase_generate_website/): 
{code}mvn site -DskipTests -Dmaven.javadoc.skip=true -Dcheckstyle.skip=true 
-Dfindbugs.skip=true{code}

We should either fix the javadoc, or update the instructions. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-15720) Print row locks at the debug dump page

2016-05-02 Thread Heng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Heng Chen resolved HBASE-15720.
---
Resolution: Fixed

> Print row locks at the debug dump page
> --
>
> Key: HBASE-15720
> URL: https://issues.apache.org/jira/browse/HBASE-15720
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 1.2.1
>Reporter: Enis Soztutar
>Assignee: Heng Chen
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.5, 1.2.2, 0.98.20, 1.0.5
>
> Attachments: 4742C21D-B9CE-4921-9B32-CC319488EC64.png, 
> HBASE-15720-branch-1.0-addendum.patch, HBASE-15720-branch-1.2-addendum.patch, 
> HBASE-15720.patch
>
>
> We had to debug cases where some handlers are holding row locks for an 
> extended time (and maybe leaking them) and other handlers are getting timeouts 
> when obtaining row locks. 
> We should add row lock information to the debug page in the RS UI to be able 
> to live-debug such cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-15749) Shade guava dependency

2016-05-02 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-15749.
---
Resolution: Not A Problem

Resolving as not a problem.

We got here because HBASE-15737 flagged StopWatch in hbase-server as a 
'problem'. Attempts at eliciting why it was a problem got "Our codebase should 
be consistent w.r.t. the usage of stop watch" and "... reduce the chance of 
incompatibilities in case newer version of Guava is involved", and so on.

HBASE-15737 seems to have been prompted by HBASE-14963, where suggestions of a 
shaded client didn't carry because the version concerned was older -- 
pre-shaded-client. HBASE-14963 included a suggestion of shading guava that I 
repeated in HBASE-15737, when in this later context I should have talked up 
shaded modules instead.







> Shade guava dependency
> --
>
> Key: HBASE-15749
> URL: https://issues.apache.org/jira/browse/HBASE-15749
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>
> The HBase codebase uses the Guava library extensively.
> There have been JIRAs such as HBASE-14963 which tried to make the 
> compatibility story around Guava better.
> The long-term fix, as suggested over in HBASE-14963, is to shade the Guava 
> dependency. Future use of Guava in HBase would be more secure once shading 
> is done.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HBASE-15720) Print row locks at the debug dump page

2016-05-02 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey reopened HBASE-15720:
-

Reopening.

This broke compilation, at least on branch-1.2. Please fix ASAP, and in the 
future make sure the build passes (at least with {{-DskipTests}}) prior to 
pushing backports.
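For reference, a minimal pre-push check could be (assuming a standard Maven 
setup; adjust modules and flags as needed):

{code}
mvn clean install -DskipTests
{code}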

> Print row locks at the debug dump page
> --
>
> Key: HBASE-15720
> URL: https://issues.apache.org/jira/browse/HBASE-15720
> Project: HBase
>  Issue Type: Improvement
>Reporter: Enis Soztutar
>Assignee: Heng Chen
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.5, 1.2.2, 0.98.20, 1.0.5
>
> Attachments: 4742C21D-B9CE-4921-9B32-CC319488EC64.png, 
> HBASE-15720.patch
>
>
> We had to debug cases where some handlers are holding row locks for an 
> extended time (and maybe leaking them) and other handlers are getting timeouts 
> when obtaining row locks. 
> We should add row lock information to the debug page in the RS UI to be able 
> to live-debug such cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15752) ClassNotFoundException is encountered when custom WAL codec is not found in WALPlayer job

2016-05-02 Thread Ted Yu (JIRA)
Ted Yu created HBASE-15752:
--

 Summary: ClassNotFoundException is encountered when custom WAL 
codec is not found in WALPlayer job
 Key: HBASE-15752
 URL: https://issues.apache.org/jira/browse/HBASE-15752
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu


[~cartershanklin] reported the following when he tried out the backup / restore 
feature in a Phoenix-enabled deployment:
{code}
2016-05-02 18:57:58,578 FATAL [IPC Server handler 2 on 38194] 
org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: 
attempt_1462215011294_0001_m_00_0 - exited : java.io.IOException: Cannot 
get log reader
  at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:344)
  at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:266)
  at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:254)
  at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:403)
  at 
org.apache.hadoop.hbase.mapreduce.WALInputFormat$WALRecordReader.initialize(WALInputFormat.java:152)
  at 
org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:548)
  at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:786)
  at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
  at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:422)
  at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1708)
  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
Caused by: java.lang.UnsupportedOperationException: Unable to find 
org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec
  at 
org.apache.hadoop.hbase.util.ReflectionUtils.instantiateWithCustomCtor(ReflectionUtils.java:36)
  at 
org.apache.hadoop.hbase.regionserver.wal.WALCellCodec.create(WALCellCodec.java:103)
  at 
org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.getCodec(ProtobufLogReader.java:282)
  at 
org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initAfterCompression(ProtobufLogReader.java:292)
  at 
org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:82)
  at 
org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.init(ProtobufLogReader.java:149)
  at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:301)
  ... 12 more
Caused by: java.lang.ClassNotFoundException: 
org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec
  at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
  at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
  at java.lang.Class.forName0(Native Method)
  at java.lang.Class.forName(Class.java:264)
{code}
This was due to the IndexedWALEditCodec (specified through 
hbase.regionserver.wal.codec), used by Phoenix, being absent from the Hadoop 
classpath.

WALPlayer should handle this situation better by adding the jar containing the 
IndexedWALEditCodec class to the MapReduce job's dependencies.
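A hedged sketch of one possible fix in the WALPlayer job setup; the config key 
is taken from the report above, TableMapReduceUtil.addDependencyJars(conf, 
classes) is the existing helper, and the surrounding lookup is illustrative:

{code}
// Ship the jar of the configured WAL codec with the MR job, if any.
String codecClsName = conf.get("hbase.regionserver.wal.codec");
if (codecClsName != null) {
  try {
    TableMapReduceUtil.addDependencyJars(conf, Class.forName(codecClsName));
  } catch (ClassNotFoundException e) {
    throw new IOException("WAL codec class not found: " + codecClsName, e);
  }
}
{code}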



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15751) Fix HBase compilation failure with ZooKeeper 3.5 and bump HBase to use ZooKeeper 3.5

2016-05-02 Thread Yufeng Jiang (JIRA)
Yufeng Jiang created HBASE-15751:


 Summary: Fix HBase compilation failure with ZooKeeper 3.5 and 
bump HBase to use ZooKeeper 3.5
 Key: HBASE-15751
 URL: https://issues.apache.org/jira/browse/HBASE-15751
 Project: HBase
  Issue Type: Task
  Components: Zookeeper
Affects Versions: master
Reporter: Yufeng Jiang
 Fix For: master


From ZooKeeper 3.5 onwards, the runFromConfig(QuorumPeerConfig config) method 
throws AdminServerException.
HBase uses runFromConfig in HQuorumPeer.java and hence needs to throw this 
exception as well.

I've created a patch to make HBase compatible with zookeeper-3.5.1-alpha. 
However, since ZooKeeper 3.5+ does not have a stable release yet, I don't think 
we should commit this patch. Instead, I suggest using this JIRA to track the 
issue. Once ZooKeeper releases a stable version of 3.5+, I can create another 
patch to bump the ZooKeeper version in HBase trunk.
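A minimal sketch of the compilation fix (the method shape is illustrative; 
AdminServerException lives in org.apache.zookeeper.server.admin in 3.5+):

{code}
import java.io.IOException;
import org.apache.zookeeper.server.admin.AdminServer.AdminServerException;
import org.apache.zookeeper.server.quorum.QuorumPeerConfig;
import org.apache.zookeeper.server.quorum.QuorumPeerMain;

// The caller of runFromConfig() must now declare (or handle) the new
// checked exception introduced by ZooKeeper 3.5's embedded AdminServer.
private static void runZKServer(QuorumPeerConfig zkConfig)
    throws IOException, AdminServerException {
  new QuorumPeerMain().runFromConfig(zkConfig);
}
{code}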



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15750) Add on meta deserialization

2016-05-02 Thread Elliott Clark (JIRA)
Elliott Clark created HBASE-15750:
-

 Summary: Add on meta deserialization
 Key: HBASE-15750
 URL: https://issues.apache.org/jira/browse/HBASE-15750
 Project: HBase
  Issue Type: Sub-task
Reporter: Elliott Clark






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-15748) Don't link in static libunwind.

2016-05-02 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark resolved HBASE-15748.
---
Resolution: Fixed
  Assignee: Elliott Clark

> Don't link in static libunwind.
> ---
>
> Key: HBASE-15748
> URL: https://issues.apache.org/jira/browse/HBASE-15748
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HBASE-15748.patch
>
>
> A static libunwind compiled with gcc prevents clang from catching exceptions. 
> So just add the dynamic one. :-/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] Make AsyncFSWAL the default WAL in 2.0

2016-05-02 Thread Gary Helmling
On Fri, Apr 29, 2016 at 6:24 PM 张铎  wrote:

> Yes, it does. There is a testcase that enumerates all the possible protection
> levels (authentication, integrity, and privacy) and encryption algorithms
> (none, 3des, rc4).
>
>
> https://github.com/apache/hbase/blob/master/hbase-server/src/test/java/org/apache/hadoop/hbase/io/asyncfs/TestSaslFanOutOneBlockAsyncDFSOutput.java
>
> I have also tested it in a secure cluster (hbase-2.0.0-SNAPSHOT and
> hadoop-2.4.0).
>

Thanks.  Can you add in support for testing with AES
(dfs.encrypt.data.transfer.cipher.suites=AES/CTR/NoPadding)?  This is only
available in Hadoop 2.6.0+, but I think is far more likely to be used in
production than 3des or rc4.

Also, have you been following HADOOP-10768?  That is changing Hadoop RPC
encryption negotiation to support more performant AES wrapping, similar to
what is now supported in the data transfer pipeline.
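For reference, the Hadoop-side setting being discussed is an hdfs-site.xml
entry along these lines (a sketch; set on HDFS clients and datanodes, available
in Hadoop 2.6.0+):

  <property>
    <name>dfs.encrypt.data.transfer.cipher.suites</name>
    <value>AES/CTR/NoPadding</value>
  </property>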


[jira] [Created] (HBASE-15749) Shade guava dependency

2016-05-02 Thread Ted Yu (JIRA)
Ted Yu created HBASE-15749:
--

 Summary: Shade guava dependency
 Key: HBASE-15749
 URL: https://issues.apache.org/jira/browse/HBASE-15749
 Project: HBase
  Issue Type: Improvement
Reporter: Ted Yu


The HBase codebase uses the Guava library extensively.

There have been JIRAs such as HBASE-14963 which tried to make the compatibility 
story around Guava better.

The long-term fix, as suggested over in HBASE-14963, is to shade the Guava 
dependency. Future use of Guava in HBase would be more secure once shading is 
done.
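A hedged sketch of what shading typically looks like with the 
maven-shade-plugin (the relocated package prefix is illustrative, not a 
decided name):

{code}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>com.google.common</pattern>
            <shadedPattern>org.apache.hadoop.hbase.shaded.com.google.common</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
{code}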



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15748) Don't link in static libunwind.

2016-05-02 Thread Elliott Clark (JIRA)
Elliott Clark created HBASE-15748:
-

 Summary: Don't link in static libunwind.
 Key: HBASE-15748
 URL: https://issues.apache.org/jira/browse/HBASE-15748
 Project: HBase
  Issue Type: Sub-task
Reporter: Elliott Clark


A static libunwind compiled with gcc prevents clang from catching exceptions. 
So just add the dynamic one. :-/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-15739) Add region location lookup from meta

2016-05-02 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark resolved HBASE-15739.
---
Resolution: Fixed

> Add region location lookup from meta
> 
>
> Key: HBASE-15739
> URL: https://issues.apache.org/jira/browse/HBASE-15739
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HBASE-15739-v1.patch, HBASE-15739.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: What's going on? Two C++ clients being developed

2016-05-02 Thread Devaraj Das
(Meant to send this earlier but got delayed)
Thanks Clay for the inputs. Would like to give a quick update on where we are 
at this point and solicit thoughts on how to proceed from here:

1. The latest patch (from Vamsi) uploaded on RB has the copyright and other 
license-related stuff taken care of.

2. We are taking a look at making the configuration pluggable, and providing an 
implementation that works with XML files. Maybe something like this: if the 
configured conf directory has XML files, assume they are the default config 
files; if not, use the config loader meant for that particular type. The 
filename extension can be used to make this choice, I guess.

3. On the sync/async RPC implementation, we are continuing to investigate. 
This is something we could actively work on together with Elliott. On a related 
note, we have an implementation of the "batch" calls that does the gets/puts 
sequentially, one at a time. The HBase Java client's AsyncProcess does it so 
that multiple regionservers are reached out to in parallel, etc. We are looking 
at whether implementing the RPC as async from the get-go would obviate the need 
for AsyncProcess in the C++ client...

4. We can look at making smaller patches for the various client APIs and 
associated classes like GET, PUT, TableName, etc. (this is called out in the 
last mail from Enis). The way I see it, there is the front-end work of 
providing classes for the APIs, and there is the back-end work: connection 
management, RPC, AsyncProcess-like stuff. There is a good amount of work done 
in Vamsi's patch for the former, and there is an async RPC basis for the 
backend in Elliott's branch. We should see how we can leverage both and come up 
with one unified implementation if possible.

Thoughts?


From: Clay Baenziger (BLOOMBERG/ 731 LEX) 
Sent: Monday, April 25, 2016 11:01 AM
To: dev@hbase.apache.org
Subject: Re: What's going on? Two C++ clients being developed

From an operator's viewpoint, I would add:

I am concerned, as an operator who often has to build Hadoop ecosystem 
components and is chiefly interested in these C++ bindings, that a non-Apache, 
non-GNU, or otherwise not large-scale-community-supported open source utility 
in the build chain is a liability to this codebase and its adoption.

As to the configuration process, I would really like to keep with XML. I am 
looking to use Maven repositories to host the configurations of our clusters 
(e.g. a POM file per cluster hosting hbase-site.xml, hdfs-site.xml, etc.); it 
would be a pain to have to synchronize two configurations of the same 
information on both the publishing side and the client side depending on the 
use. It would be possible to duplicate all this information just because of 
different consumers, but XML should not be terribly difficult for C/C++ code to 
parse -- e.g. OpenSolaris used it in SMF, Zones, etc. Further, for an example of 
an incubator project which uses XML configs already, see Apache HAWQ's use of 
hdfs-client.xml and similar for YARN, with their pure non-Java implementation 
of the HDFS and YARN clients: 
https://github.com/apache/incubator-hawq/blob/9452055bc74e64f308a8b6cc2b7ab946e5584ba8/src/backend/utils/misc/etc/hdfs-client.xml.

I certainly would not be opposed to a pluggable configuration system. I'd 
imagine Apache Ambari could use that to not need to materialize XML configs 
from Postgres; I could see using Zookeeper akin to how Apache Solr Cloud uses 
Zookeeper for configuration information. But at this time, we have XML files 
for better or worse and a pluggable configuration system sounds like a great 
separate JIRA.

-Clay


From: dev@hbase.apache.org At: Apr 19 2016 15:31:25
To: dev@hbase.apache.org
Subject: Fwd:Re: What's going on? Two C++ clients being developed at the moment?

So there are a couple of technical topics that we can further discuss and
hopefully come to a conclusion on going forward.

1. Build system. I am in the auto-tools camp, unless there is a very good
reason to use a non-standard tool like Buck / Bazel, etc. Not sure whether
it makes sense to have two different build systems concurrently. Can we do
the main build with make, and create a wrapper with Buck?

2. XML based configuration versus something native. I strongly believe that
we should support standard hbase-site.xml. A lot of tooling in the Hadoop
ecosystem has already been developed for managing and deploying XML based
configurations over the years. Puppet / Chef scripts, Ambari, CM, etc all
understand and support hbase-site.xml. This is also true for hadoop
operators who should be familiar with modifying these files. So it would be
a real pain if we suddenly come up with yet another config format that
requires the operators and tools to learn how to deploy and manage. What if
there are both Java clients and C++ clients on the same nodes? How do you
keep two config files in sync? Then there is the issue of
hbase-default.xml. It shoul

[jira] [Created] (HBASE-15747) Fix illegal character in tablename in generated Maven site

2016-05-02 Thread Misty Stanley-Jones (JIRA)
Misty Stanley-Jones created HBASE-15747:
---

 Summary: Fix illegal character in tablename in generated Maven site
 Key: HBASE-15747
 URL: https://issues.apache.org/jira/browse/HBASE-15747
 Project: HBase
  Issue Type: Bug
  Components: documentation
Reporter: Misty Stanley-Jones
Priority: Trivial


See 
https://builds.apache.org/job/HBase%20Website%20Link%20Ckecker/37/artifact/link_report/warnX.html.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15746) RegionCoprocessor preClose() called 3 times

2016-05-02 Thread Matteo Bertozzi (JIRA)
Matteo Bertozzi created HBASE-15746:
---

 Summary: RegionCoprocessor preClose() called 3 times
 Key: HBASE-15746
 URL: https://issues.apache.org/jira/browse/HBASE-15746
 Project: HBase
  Issue Type: Bug
  Components: Coprocessors, regionserver
Affects Versions: 0.98.19, 1.1.4, 1.2.1, 2.0.0, 1.3.0
Reporter: Matteo Bertozzi
Priority: Minor


The preClose() region coprocessor call gets called 3 times via rpc.

The first one is when we receive the RPC
https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java#L1329

The second time is when we ask the RS to close the region
https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L2852

The third time is when the doClose() on the region is executed.
https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java#L1419

I'm pretty sure the first one can be removed, since there is no code between 
it and the second call, and they are copy-pasted.

The second one explicitly says that it is there to enforce ACLs before starting 
the operation, which suggests that the third one, in the region, gets executed 
too late in the process. But region.close() may be called by someone other than 
the RS (e.g. OpenRegionHandler on failure cleanup), so we should probably leave 
the preClose() in there.

Any ideas?
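For anyone wanting to see the duplication, a hypothetical debugging coprocessor 
(not a fix) that logs each invocation:

{code}
import java.io.IOException;
import java.util.concurrent.atomic.AtomicInteger;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;

public class PreCloseCounter extends BaseRegionObserver {
  private static final Log LOG = LogFactory.getLog(PreCloseCounter.class);
  private final AtomicInteger calls = new AtomicInteger();

  @Override
  public void preClose(ObserverContext<RegionCoprocessorEnvironment> c,
      boolean abortRequested) throws IOException {
    // Counts how many times the hook fires for one close; expected 1, seen 3.
    LOG.info("preClose #" + calls.incrementAndGet() + " for "
        + c.getEnvironment().getRegion().getRegionInfo().getRegionNameAsString()
        + ", abort=" + abortRequested);
  }
}
{code}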



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Successful: HBase Generate Website

2016-05-02 Thread Apache Jenkins Server
Build status: Successful

If successful, the website and docs have been generated. If failed, skip to the 
bottom of this email.

Use the following commands to download the patch and apply it to a clean branch 
based on origin/asf-site. If you prefer to keep the hbase-site repo around 
permanently, you can skip the clone step.

  git clone https://git-wip-us.apache.org/repos/asf/hbase-site.git

  cd hbase-site
  wget -O- 
https://builds.apache.org/job/hbase_generate_website/217/artifact/website.patch.zip
 | funzip > d1130582d54ca78bbd708c39e4fa02e7b89b232e.patch
  git fetch
  git checkout -b asf-site-d1130582d54ca78bbd708c39e4fa02e7b89b232e 
origin/asf-site
  git am --whitespace=fix d1130582d54ca78bbd708c39e4fa02e7b89b232e.patch

At this point, you can preview the changes by opening index.html or any of the 
other HTML pages in your local 
asf-site-d1130582d54ca78bbd708c39e4fa02e7b89b232e branch, and you can review 
the differences by running:

  git diff origin/asf-site

There are lots of spurious changes, such as timestamps and CSS styles in 
tables. To see a list of files that have been added, deleted, renamed, changed 
type, or are otherwise interesting, use the following command:

  git diff --name-status --diff-filter=ADCRTXUB origin/asf-site

To see only files that had 100 or more lines changed:

  git diff --stat origin/asf-site | grep -E '[1-9][0-9]{2,}'

When you are satisfied, publish your changes to origin/asf-site using this 
command:

  git push origin asf-site-d1130582d54ca78bbd708c39e4fa02e7b89b232e:asf-site

Changes take a couple of minutes to be propagated. You can then remove your 
asf-site-d1130582d54ca78bbd708c39e4fa02e7b89b232e branch:

  git checkout asf-site && git branch -d 
asf-site-d1130582d54ca78bbd708c39e4fa02e7b89b232e



If failed, see https://builds.apache.org/job/hbase_generate_website/217/console

[ANNOUNCE] Apache HBase 0.98.19 is now available for download

2016-05-02 Thread Andrew Purtell
Apache HBase 0.98.19 is now available for download. Get it from an Apache
mirror [1] or Maven repository. The list of changes in this release can be
found in the release notes [2] or at the bottom of this announcement.

Thanks to all who contributed to this release.

Best,
The HBase Dev Team

1. http://www.apache.org/dyn/closer.lua/hbase/
2. https://s.apache.org/z92R


HBASE-11830 TestReplicationThrottler.testThrottling failed on virtual boxes
HBASE-12148 Remove TimeRangeTracker as point of contention when many
threads writing a Store
HBASE-12511 namespace permissions - add support from table creation
privilege in a namespace 'C'
HBASE-12663 unify getTableDescriptors() and
listTableDescriptorsByNamespace()
HBASE-12674 Add permission check to getNamespaceDescriptor()
HBASE-13700 Allow Thrift2 HSHA server to have configurable threads
HBASE-14809 Grant / revoke Namespace admin permission to group
HBASE-14870 Backport namespace permissions to 98 branch
HBASE-14983 Create metrics for per block type hit/miss ratios
HBASE-15191 CopyTable and VerifyReplication - Option to specify batch size,
versions
HBASE-15212 RPCServer should enforce max request size
HBASE-15234 ReplicationLogCleaner can abort due to transient ZK issues
HBASE-15368 Add pluggable window support
HBASE-15386 PREFETCH_BLOCKS_ON_OPEN in HColumnDescriptor is ignored
HBASE-15389 Write out multiple files when compaction
HBASE-15400 Use DateTieredCompactor for Date Tiered Compaction
HBASE-15405 Synchronize final results logging single thread in PE, fix
wrong defaults in help message
HBASE-15412 Add average region size metric
HBASE-15460 Fix infer issues in hbase-common
HBASE-15475 Allow TimestampsFilter to provide a seek hint
HBASE-15479 No more garbage or beware of autoboxing
HBASE-15527 Refactor Compactor related classes
HBASE-15548 SyncTable: sourceHashDir is supposed to be optional but won't
work without
HBASE-15569 Make Bytes.toStringBinary faster
HBASE-15582 SnapshotManifestV1 too verbose when there are no regions
HBASE-15587 FSTableDescriptors.getDescriptor() logs stack trace erroneously
HBASE-15614 Report metrics from JvmPauseMonitor
HBASE-15621 Suppress HBase SnapshotHFile cleaner error messages when a
snapshot is going on
HBASE-15622 Superusers does not consider the keytab credentials
HBASE-15627 Miss space and closing quote in
AccessController#checkSystemOrSuperUser
HBASE-15629 Backport HBASE-14703 to 0.98+
HBASE-15637 TSHA Thrift-2 server should allow limiting call queue size
HBASE-15640 L1 cache doesn't give fair warning that it is showing partial
stats only when it hits limit
HBASE-15647 Backport HBASE-15507 to 0.98
HBASE-15650 Remove TimeRangeTracker as point of contention when many
threads reading a StoreFile
HBASE-15661 Hook up JvmPauseMonitor metrics in Master
HBASE-15662 Hook up JvmPauseMonitor to REST server
HBASE-15663 Hook up JvmPauseMonitor to ThriftServer
HBASE-15664 Use Long.MAX_VALUE instead of HConstants.FOREVER in
CompactionPolicy
HBASE-15665 Support using different StoreFileComparators for different
CompactionPolicies
HBASE-15672
hadoop.hbase.security.visibility.TestVisibilityLabelsWithDeletes fails
HBASE-15673 [PE tool] Fix latency metrics for multiGet
HBASE-15679 Assertion on wrong variable in
TestReplicationThrottler#testThrottling


Re: Could not seekToPreviousRow

2016-05-02 Thread Govind
I'm using hbase-1.1.2 and yes, file 3eac358ffb9d43018221fbddf9274ffd is
producing the same error every time. I tested the same code on another table
and it worked fine. What could be wrong with this file?
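For reference, one hedged way to sanity-check the file offline is the bundled
HFile pretty-printer (flags per recent 1.1.x releases; adjust as needed):

  hbase org.apache.hadoop.hbase.io.hfile.HFile -v -m -f \
    file:/data/hbase-1.1.2/data/hbase/data/default/dawikitable/c8cdadcd1247e04720972ab5a25597a7/outlinks/3eac358ffb9d43018221fbddf9274ffd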

On Mon, May 2, 2016 at 3:42 PM, Ted Yu  wrote:

> Which release of hbase are you using ?
>
> Does file 3eac358ffb9d43018221fbddf9274ffd always produce such error during
> reverse scan ?
>
> Thanks
>
> On Mon, May 2, 2016 at 3:04 AM, Govind  wrote:
>
> > Hi all,
> >
> > I'm getting an exception while performing a reverse scan on an HBase table.
> > Previously it was working fine, but now there is some problem with seeking
> > to the previous row. Any suggestions will be highly appreciated. The error
> > log follows:
> >
> > org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after
> > attempts=35, exceptions:
> > Mon May 02 10:59:29 CEST 2016,
> > RpcRetryingCaller{globalStartTime=1462179569123, pause=100, retries=35},
> > java.io.IOException: java.io.IOException: Could not seekToPreviousRow
> > StoreFileScanner[HFileScanner for reader
> >
> >
> reader=file:/data/hbase-1.1.2/data/hbase/data/default/dawikitable/c8cdadcd1247e04720972ab5a25597a7/outlinks/3eac358ffb9d43018221fbddf9274ffd,
> > compression=none, cacheConf=blockCache=LruBlockCache{blockCount=149348,
> > currentSize=9919772624, freeSize=2866589744, maxSize=12786362368,
> > heapSize=9919772624, minSize=12147044352, minFactor=0.95,
> > multiSize=6073522176, multiFactor=0.5, singleSize=3036761088,
> > singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false,
> > cacheIndexesOnWrite=false, cacheBloomsOnWrite=false,
> > cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false,
> > firstKey=Danmark2010-01-26T21:02:50Z/outlinks:.dk/1459765153334/Put,
> > lastKey=Motorveje i
> >
> >
> Danmark2010-08-24T14:03:07Z/outlinks:\xC3\x98ver\xC3\xB8d/1459766037971/Put,
> > avgKeyLen=70, avgValueLen=20, entries=49195292, length=4896832843,
> >
> >
> cur=Hj\xC3\xA6lp:Sandkassen2010-11-02T21:40:44Z/outlinks:Adriaterhav/1459771842796/Put/vlen=20/seqid=0]
> > to key
> >
> >
> Hj\xC3\xA6lp:Sandkassen2010-11-02T21:34:14Z/outlinks:\xC4\x8Crnomelj/1459771842779/Put/vlen=20/seqid=0
> > at
> >
> >
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:457)
> > at
> >
> >
> org.apache.hadoop.hbase.regionserver.ReversedKeyValueHeap.next(ReversedKeyValueHeap.java:136)
> > at
> >
> >
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:596)
> > at
> >
> >
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)
> > at
> >
> >
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:5486)
> > at
> >
> >
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:5637)
> > at
> >
> >
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5424)
> > at
> >
> >
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2395)
> > at
> >
> >
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
> > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
> > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
> > at
> >
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
> > at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
> > at java.lang.Thread.run(Thread.java:745)
> > Caused by: java.io.IOException: On-disk size without header provided is
> > 196736, but block header contains 65582. Block offset: -1, data starts
> > with:
> >
> >
> DATABLK*\x00\x01\x00.\x00\x01\x00\x1A\x00\x00\x00\x00\x8D\xA08\xE2\x01\x00\x00@
> > \x00\x00\x01\x00
> > at
> >
> >
> org.apache.hadoop.hbase.io.hfile.HFileBlock.validateOnDiskSizeWithoutHeader(HFileBlock.java:500)
> > at
> >
> org.apache.hadoop.hbase.io.hfile.HFileBlock.access$700(HFileBlock.java:85)
> > at
> >
> >
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1625)
> > at
> >
> >
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1470)
> > at
> >
> >
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:437)
> > at
> >
> >
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:673)
> > at
> >
> >
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:646)
> > at
> >
> >
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:425)
> > ... 13 more
> >
> > Regards,
> > Govind
> >
>


Re: Could not seekToPreviousRow

2016-05-02 Thread Ted Yu
Which release of hbase are you using ?

Does file 3eac358ffb9d43018221fbddf9274ffd always produce such error during
reverse scan ?

Thanks

On Mon, May 2, 2016 at 3:04 AM, Govind  wrote:

> Hi all,
>
> I'm getting an exception while performing a reverse scan on an HBase table.
> Previously it was working fine, but now there is some problem with seeking
> to the previous row. Any suggestions will be highly appreciated. The error
> log follows:
>
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after
> attempts=35, exceptions:
> Mon May 02 10:59:29 CEST 2016,
> RpcRetryingCaller{globalStartTime=1462179569123, pause=100, retries=35},
> java.io.IOException: java.io.IOException: Could not seekToPreviousRow
> StoreFileScanner[HFileScanner for reader
>
> reader=file:/data/hbase-1.1.2/data/hbase/data/default/dawikitable/c8cdadcd1247e04720972ab5a25597a7/outlinks/3eac358ffb9d43018221fbddf9274ffd,
> compression=none, cacheConf=blockCache=LruBlockCache{blockCount=149348,
> currentSize=9919772624, freeSize=2866589744, maxSize=12786362368,
> heapSize=9919772624, minSize=12147044352, minFactor=0.95,
> multiSize=6073522176, multiFactor=0.5, singleSize=3036761088,
> singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false,
> cacheIndexesOnWrite=false, cacheBloomsOnWrite=false,
> cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false,
> firstKey=Danmark2010-01-26T21:02:50Z/outlinks:.dk/1459765153334/Put,
> lastKey=Motorveje i
>
> Danmark2010-08-24T14:03:07Z/outlinks:\xC3\x98ver\xC3\xB8d/1459766037971/Put,
> avgKeyLen=70, avgValueLen=20, entries=49195292, length=4896832843,
>
> cur=Hj\xC3\xA6lp:Sandkassen2010-11-02T21:40:44Z/outlinks:Adriaterhav/1459771842796/Put/vlen=20/seqid=0]
> to key
>
> Hj\xC3\xA6lp:Sandkassen2010-11-02T21:34:14Z/outlinks:\xC4\x8Crnomelj/1459771842779/Put/vlen=20/seqid=0
> at
>
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:457)
> at
>
> org.apache.hadoop.hbase.regionserver.ReversedKeyValueHeap.next(ReversedKeyValueHeap.java:136)
> at
>
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:596)
> at
>
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)
> at
>
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:5486)
> at
>
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:5637)
> at
>
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5424)
> at
>
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2395)
> at
>
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
> at
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: On-disk size without header provided is
> 196736, but block header contains 65582. Block offset: -1, data starts
> with:
>
> DATABLK*\x00\x01\x00.\x00\x01\x00\x1A\x00\x00\x00\x00\x8D\xA08\xE2\x01\x00\x00@
> \x00\x00\x01\x00
> at
>
> org.apache.hadoop.hbase.io.hfile.HFileBlock.validateOnDiskSizeWithoutHeader(HFileBlock.java:500)
> at
> org.apache.hadoop.hbase.io.hfile.HFileBlock.access$700(HFileBlock.java:85)
> at
>
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1625)
> at
>
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1470)
> at
>
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:437)
> at
>
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:673)
> at
>
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:646)
> at
>
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:425)
> ... 13 more
>
> Regards,
> Govind
>


Successful: hbase.apache.org HTML Checker

2016-05-02 Thread Apache Jenkins Server
Successful

If successful, the HTML and link-checking report for http://hbase.apache.org is 
available at 
https://builds.apache.org/job/HBase%20Website%20Link%20Ckecker/37/artifact/link_report/index.html.

If failed, see 
https://builds.apache.org/job/HBase%20Website%20Link%20Ckecker/37/console.

Could not seekToPreviousRow

2016-05-02 Thread Govind
Hi all,

I'm getting an exception while performing a reverse scan on an HBase table.
Previously it was working fine, but now there is some problem with seeking
to the previous row. Any suggestions will be highly appreciated. The error
log follows:

org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after
attempts=35, exceptions:
Mon May 02 10:59:29 CEST 2016,
RpcRetryingCaller{globalStartTime=1462179569123, pause=100, retries=35},
java.io.IOException: java.io.IOException: Could not seekToPreviousRow
StoreFileScanner[HFileScanner for reader
reader=file:/data/hbase-1.1.2/data/hbase/data/default/dawikitable/c8cdadcd1247e04720972ab5a25597a7/outlinks/3eac358ffb9d43018221fbddf9274ffd,
compression=none, cacheConf=blockCache=LruBlockCache{blockCount=149348,
currentSize=9919772624, freeSize=2866589744, maxSize=12786362368,
heapSize=9919772624, minSize=12147044352, minFactor=0.95,
multiSize=6073522176, multiFactor=0.5, singleSize=3036761088,
singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false,
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false,
cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false,
firstKey=Danmark2010-01-26T21:02:50Z/outlinks:.dk/1459765153334/Put,
lastKey=Motorveje i
Danmark2010-08-24T14:03:07Z/outlinks:\xC3\x98ver\xC3\xB8d/1459766037971/Put,
avgKeyLen=70, avgValueLen=20, entries=49195292, length=4896832843,
cur=Hj\xC3\xA6lp:Sandkassen2010-11-02T21:40:44Z/outlinks:Adriaterhav/1459771842796/Put/vlen=20/seqid=0]
to key
Hj\xC3\xA6lp:Sandkassen2010-11-02T21:34:14Z/outlinks:\xC4\x8Crnomelj/1459771842779/Put/vlen=20/seqid=0
at
org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:457)
at
org.apache.hadoop.hbase.regionserver.ReversedKeyValueHeap.next(ReversedKeyValueHeap.java:136)
at
org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:596)
at
org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)
at
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:5486)
at
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:5637)
at
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5424)
at
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2395)
at
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
at
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: On-disk size without header provided is
196736, but block header contains 65582. Block offset: -1, data starts
with:
DATABLK*\x00\x01\x00.\x00\x01\x00\x1A\x00\x00\x00\x00\x8D\xA08\xE2\x01\x00\x00@
\x00\x00\x01\x00
at
org.apache.hadoop.hbase.io.hfile.HFileBlock.validateOnDiskSizeWithoutHeader(HFileBlock.java:500)
at
org.apache.hadoop.hbase.io.hfile.HFileBlock.access$700(HFileBlock.java:85)
at
org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1625)
at
org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1470)
at
org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:437)
at
org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:673)
at
org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:646)
at
org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:425)
... 13 more

Regards,
Govind