[jira] [Created] (HBASE-12635) Delete acl notify znode of table after the table is deleted

2014-12-04 Thread Liu Shaohui (JIRA)
Liu Shaohui created HBASE-12635:
---

 Summary: Delete acl notify znode of table after the table is 
deleted
 Key: HBASE-12635
 URL: https://issues.apache.org/jira/browse/HBASE-12635
 Project: HBase
  Issue Type: Bug
Reporter: Liu Shaohui
Assignee: Liu Shaohui
Priority: Minor


In our multi-tenant hbase cluster, we found that there are over 1M znodes under 
the acl node. The reason is that users frequently create and delete tables with 
different names, and the acl notify znodes are left behind after the tables are 
deleted.

A simple solution is to delete the acl notify znode of a table in AccessController 
when the table is deleted.
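
A minimal sketch of what such a cleanup could look like, assuming roughly 0.98-era 
APIs (ZKUtil, ZooKeeperWatcher) and a per-table acl notify znode under a parent 
znode; the helper and znode layout here are illustrative, not the actual patch:

{code}
import java.io.IOException;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.zookeeper.ZKUtil;
import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
import org.apache.zookeeper.KeeperException;

public class AclNotifyZNodeCleanup {
  /**
   * Hypothetical helper: remove the per-table acl notify znode once the
   * table itself has been deleted (e.g. called from a postDeleteTable hook
   * in AccessController).
   */
  public static void deleteAclNotifyZNode(ZooKeeperWatcher zkw,
      String aclZNodeParent, TableName tableName) throws IOException {
    String tableZNode = ZKUtil.joinZNode(aclZNodeParent, tableName.getNameAsString());
    try {
      // Recursively delete the table's acl notify znode.
      ZKUtil.deleteNodeRecursively(zkw, tableZNode);
    } catch (KeeperException e) {
      throw new IOException("Failed to delete acl notify znode " + tableZNode, e);
    }
  }
}
{code}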




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: First release candidate for HBase 0.99.2 (RC0) is available. Please vote by 12/06/2014

2014-12-04 Thread Enis Söztutar
I think this is still a case-by-case basis. The problem is that we cannot
keep track of which methods/classes Phoenix depends on unless Phoenix
tells us first. There is a special marker,
InterfaceAudience.LimitedPrivate(Phoenix), just for that purpose.
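
As a small illustration (the class below is made up, and the annotation's package
has moved between HBase versions, so treat the import as an assumption), the
marker is applied like this:

import org.apache.hadoop.hbase.classification.InterfaceAudience;

// Hypothetical internal class that Phoenix is known to depend on.
// LimitedPrivate documents that only the named project(s) may rely on it,
// so refactoring should take that consumer into account.
@InterfaceAudience.LimitedPrivate("Phoenix")
public class SomeInternalHelper {
  // ... internals that Phoenix calls into ...
}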

For some of the stuff (SplitTransaction, for example), I do not think
that we should have any restrictions in HBase in terms of refactoring. For
things like MetaReader, etc., I can see that having more stable interfaces
there will help both projects.

Enis

On Wed, Dec 3, 2014 at 7:44 AM, Ted Yu yuzhih...@gmail.com wrote:

 Compiling Phoenix master against 0.99.2, I got:
 http://pastebin.com/gaxCs8fT

 Some removed methods are in HBase classes that are marked
 with @InterfaceAudience.Private
 I want to get some opinion on whether such methods should be deprecated
 first.

 Cheers

 On Tue, Dec 2, 2014 at 10:28 PM, Enis Söztutar e...@apache.org wrote:

  First release candidate for HBase 0.99.2 (RC0) is available. Please vote
 by
  12/06/2014
 
  The first release candidate for the third release from branch-1, HBase
  0.99.2 RC0, is
  available for download at
 https://people.apache.org/~enis/hbase-0.99.2RC0/
 
  Maven artifacts are also available in the temporary repository
  https://repository.apache.org/content/repositories/orgapachehbase-1048/
 
  Signed with my code signing key E964B5FF. Can be found here:
  https://people.apache.org/keys/committer/enis.asc
 
  Signed tag in the repository can be found here:
 
 
 https://git-wip-us.apache.org/repos/asf?p=hbase.git;a=tag;h=080ab7230847776b7a2867dcbd3d421969d2dd37
 
  NOTE IN GIGANTIC LETTERS THAT THIS IS A DEVELOPER RELEASE.
  DO NOT USE THIS RELEASE IN PRODUCTION.
 
  HBase 0.99.2 is a developer preview release, and an odd-numbered
 release
  as
  defined in
 https://hbase.apache.org/book/upgrading.html#hbase.versioning.
  This release IS NOT intended for production use, and does not contain any
  backwards or forwards compatibility guarantees (even within minor
 versions
  0.99.x). Please refrain from deploying this over important data.
 
  0.99.2 release is provided from branch-1, which will be the basis for
  HBase-1.0
  release. This is the last planned 0.99.x release before 1.0. The reason
 for
  doing a developer preview release is to get more testing for the branch-1
  code
  that will be released soon as HBase-1.0.0. Thus, all contribution in
 terms
  of
  testing, benchmarking, checking API / source /wire compatibility,
 checking
  out
  documentation and further code contribution is highly appreciated. 1.0
 will
  be
  the first series in the 1.x line of releases which are expected to keep
  compatibility with previous 1.x releases. Thus it is very important to
  check
  the client side and server side APIs for compatibility and
 maintainability
  concerns for future releases.
 
  0.99.2 builds on top of all the changes that are in the 0.99.0 (an
 overview
  can be found at [1]) and 0.99.1 releases[2]. The theme of (eventual) 1.0
  release is to become a stable base for future 1.x series of releases. 1.0
  release will aim to achieve at least the same level of stability of 0.98
  releases without introducing too many new features.
 
  The work to clearly mark and differentiate client facing APIs, and redefine
  some of the client interfaces for improving semantics, ease of use and
  maintainability has continued in the 0.99.2 release. Marking/remarking of
  interfaces with InterfaceAudience has also been going on (HBASE-10462),
  which will identify areas for compatibility (with clients, coprocessors
  and dependent projects like Phoenix) for future releases.
 
  0.99.2 contains 190 issues fixed on top of 0.99.1. Some other notable
  improvements
  in this release are
   - [HBASE-12075] - Preemptive Fast Fail
   - [HBASE-12147] - Porting Online Config Change from 89-fb
   - [HBASE-12354] - Update dependencies in time for 1.0 release
   - [HBASE-12363] - Improve how KEEP_DELETED_CELLS works with MIN_VERSIONS
   - [HBASE-12434] - Add a command to compact all the regions in a
  regionserver
   - [HBASE-8707] - Add LongComparator for filter
   - [HBASE-12286] - [shell] Add server/cluster online load of
 configuration
  changes
   - [HBASE-12361] - Show data locality of region in table page
   - [HBASE-12496] - A blockedRequestsCount metric
   - Switch to using new style of client APIs internally (in a lot of
 places)
   - Improvements in visibility labels
   - Perf improvements
   - Some more documentation improvements
   - Numerous improvements in other areas and bug fixes.
 
  The release has these changes in default behavior (from 0.99.1):
   - Disabled the Distributed Log Replay feature by default. Similar to 0.98
 and earlier releases, Distributed Log Split is the default.
 
 
  Thanks to everybody who has contributed to this release. The full list of
 the
  issues
  can be found here:
 
 
  https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12310753&version=12328822
 
 
  

split failed caused by FileNotFoundException

2014-12-04 Thread 周帅锋
In our hbase clusters, splits sometimes fail because a file to be
split does not exist in the parent region. In 0.94.2 this will cause a
regionserver shutdown, because the split transaction has already reached the PONR state.
In 0.94.20 or 0.98, the split will fail and can be rolled back, because the split
transaction has only reached the offlined_parent state.

In 0.94.2, the error is like below:
2014-09-23 22:27:55,710 INFO org.apache.hadoop.hbase.catalog.MetaEditor:
Offlined parent region x in META
2014-09-23 22:27:55,820 INFO
org.apache.hadoop.hbase.regionserver.SplitRequest: Running rollback/cleanup
of failed split of x
Caused by: java.io.IOException: java.io.IOException:
java.io.FileNotFoundException: File does not exist: x
Caused by: java.io.IOException: java.io.FileNotFoundException: File does
not exist: x
Caused by: java.io.FileNotFoundException: File does not exist: x
2014-09-23 22:27:55,823 FATAL
org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server
xxx,60020,1411383568857: Abort; we got an error after point-of-no-return

The reason for the missing files is a little complex; the whole procedure
involves two failed splits and one compaction:
1) There are too many files in the region, so a compaction is requested, but it is
not executed because there are many CompactionRequests (compactionRunners) in the
compaction queue. The CompactionRequest holds a reference to the Store object, and
also holds the list of store files of that store to compact.

2) The region size is big enough, so a split is requested. The region is
offline during the split, and the store is closed. But the split fails while
splitting the files (for some reason, like busy IO, causing a timeout):
2014-09-23 18:26:02,738 INFO
org.apache.hadoop.hbase.regionserver.SplitRequest: Running rollback/cleanup
of failed split of x; Took too long to split the files and create the
references, aborting split

3) The split rolls back successfully, and the region is online again. During
the rollback procedure a new Store object is created, but the store in the
compaction queue is not removed, so there are two (or maybe more) Store
objects for the same store in the regionserver.

4) The compaction requested earlier on the old store of the region now runs;
some storefiles are compacted and removed, and new, bigger storefiles are created.
But the store reinitialized during the split rollback doesn't know about
the change to the storefiles.

5) A split runs on the region again and fails again, because some storefiles in the
parent region don't exist any more (removed by the compaction). Also, the split
transaction doesn't know that there is a new file created by the compaction.
In 0.94.2 this error can't be detected until the daughter regions open, but
by then it's too late: the split has failed at the PONR state, and this causes a
regionserver shutdown. In 0.94.20 and 0.98, splitStoreFiles looks into the
storefile in the parent region and can find the error before the PONR, so the
failed split can be rolled back.
 code in HRegionFileSystem.splitStoreFile:
 ...
 byte[] lastKey = f.createReader().getLastKey();

So this situation is a fatal error in earlier 0.94 versions, and still a
common bug in later 0.94 and higher versions. It is also the
reason why the storefile reader is sometimes null (it was closed by the first
failed split).


Re: Changing it so we do NOT archive hfiles by default

2014-12-04 Thread Stack
Thanks for the helpful discussion.  To close out this thread, in summary,
turning off 'archiving' is a non-starter; it is a fundamental at this
stage. Let me instead work on making delete do more work per cycle over in
HBASE-12626 "Archive cleaner cannot keep up; it maxes out at about 400k
deletes/hour" (multithread it or add a bulk delete to the NN as Matteo
suggests). I also opened HBASE-12627 "Add back snapshot batching facility
from HBASE-11360 dropped by HBASE-11742" (Thanks for chiming in, Mr. Latham).

Let me work on the above.
Thanks,
St.Ack

P.S. For those interested, the cluster was ~500 nodes. It was RAM
constrained; other processes on the box also need RAM. Over a period of days,
the loading was thrown off kilter because it started taking double writes
going from one schema to another (the cluster was running hot before the double
loading was enabled).  The single-threaded master cleaner was deleting an
archived file every 9ms on average, about 400k deletes an hour.  The
constrained RAM and their having 4-5 column families had them creating
files in excess of this rate, so we backed up.
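
To sketch the "multithread it" idea (not the actual HBASE-12626 patch; the class
name and pool size are assumptions), the cleaner could fan deletes out over a
small thread pool instead of paying one NameNode round trip per file from a
single thread:

import java.io.IOException;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ParallelArchiveDelete {
  // Issue archive-file deletes from several client threads so the cleaner
  // is not limited to ~one delete every 7-9ms from a single thread.
  public static void deleteAll(final FileSystem fs, List<Path> expiredFiles,
      int threads) throws InterruptedException {
    ExecutorService pool = Executors.newFixedThreadPool(threads);
    for (final Path p : expiredFiles) {
      pool.execute(new Runnable() {
        @Override
        public void run() {
          try {
            fs.delete(p, false); // non-recursive delete of a single hfile
          } catch (IOException e) {
            // A real cleaner would log and retry on the next chore run.
          }
        }
      });
    }
    pool.shutdown();
    pool.awaitTermination(1, TimeUnit.HOURS);
  }
}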


On Mon, Nov 24, 2014 at 5:35 PM, lars hofhansl la...@apache.org wrote:

 Ah. I agree. Let's get that back in. Should not be too hard.
 As for this specific issue - apparently there are no snapshots involved.
 Are we still doing the work then? Need to study the code again.
 If it is the NameNode stuck on doing deletes, deleting the files
 immediately won't help (well, it would save a rename in the NameNode).
 Chatted with Stack a bit. Seems it takes 7-8ms for the NN to delete a file
 from a single client thread.
 The NameNode can definitely do many more operations per second than that, so doing
 that locally on each region server as soon as the HFile expires might help. Or
 maybe we need multiple threads in the archiver.
 -- Lars
   From: Dave Latham lat...@davelink.net
  To: dev@hbase.apache.org; lars hofhansl la...@apache.org
  Sent: Monday, November 24, 2014 9:51 AM
  Subject: Re: Changing it so we do NOT archive hfiles by default

 Even with the manifest feature, it is still inefficient to iterate over
 every hfile in the archive and check with the namenode to see if there is a
 new snapshot manifest that may reference that single hfile rather than
 doing that check a single time for the list of all archived hfiles.



 On Sat, Nov 22, 2014 at 3:08 PM, lars hofhansl la...@apache.org wrote:

 Hit send too fast.
 I meant to say: Actually is HBASE-11360 still needed when we have
 manifests?
   From: lars hofhansl la...@apache.org
  To: dev@hbase.apache.org dev@hbase.apache.org; lars hofhansl 
 la...@apache.org
  Sent: Saturday, November 22, 2014 2:58 PM
  Subject: Re: Changing it so we do NOT archive hfiles by default

 Actually in HBASE-11360 when we have manifests? The problem was scanning
 all those reference files; now those are all replaced with a manifest, so
 maybe this is not a problem. -- Lars

   From: lars hofhansl la...@apache.org


  To: dev@hbase.apache.org dev@hbase.apache.org
  Sent: Saturday, November 22, 2014 2:41 PM
  Subject: Re: Changing it so we do NOT archive hfiles by default

 I agree. I did not realize we undid HBASE-11360. Offhand I see no reason
 why it had to be rolled back completely, rather than being adapted. We need
 to bring that functionality back.
 -- Lars
   From: Dave Latham lat...@davelink.net


  To: dev@hbase.apache.org
  Sent: Saturday, November 22, 2014 7:04 AM
  Subject: Re: Changing it so we do NOT archive hfiles by default

 If no snapshots are enabled, then I'll definitely be curious to hear more
 on the cause of not keeping up.
 I also think it's reasonable to delete files directly if there is no use
 for them in the archive.

 However, HBase does need to be able to handle large scale archive cleaning
 for those who are using archive based features.  One important way is
 processing the checks in batches rather than one at a time.  For
 HBASE-11360 as an example, even with the manifest file there's no reason we
 can't still check the archive files against the manifests in batches rather
 than reverting it to one at a time - that part of the fix is compatible and
 still important.

 I hope you guys are able to get past the issue for this cluster but that we
 can also address it at large.



 On Fri, Nov 21, 2014 at 3:16 PM, Esteban Gutierrez este...@cloudera.com
 wrote:

  For the specific case Stack mentioned here there are no snapshots enabled
  and it's a 0.94.x release, so no real need for this user to have the archive
  enabled. I've also seen this issue on 0.98 with a busy NN (deletions
  pile up).
 
  I think it should be fine to fall back to the old behavior if snapshots
 are
  not being used and delete compacted files or HFiles from a dropped table
  immediately.
 
  One problem with HBASE-11360 was to maintain better compatibility with
  snapshots in the current way they work in branch-1 with the manifest file.
 
  cheers,
  esteban.
 
 
 
 
  --
  Cloudera, Inc.
 
 
  On Fri, 

Re: First release candidate for HBase 0.99.2 (RC0) is available. Please vote by 12/06/2014

2014-12-04 Thread Ted Yu
We can continue discussion on API compatibility in some other thread.

Enis said the 1.0 RC would be cut on the 15th. I just wanted to raise some
awareness so that there is enough time for downstream projects to get ready
for the release.

Cheers

On Wed, Dec 3, 2014 at 8:07 AM, Sean Busbey bus...@cloudera.com wrote:

 On Wed, Dec 3, 2014 at 9:44 AM, Ted Yu yuzhih...@gmail.com wrote:

  Compiling Phoenix master against 0.99.2, I got:
  http://pastebin.com/gaxCs8fT
 
  Some removed methods are in HBase classes that are marked
  with @InterfaceAudience.Private
  I want to get some opinion on whether such methods should be deprecated
  first.
 
  Cheers
 
 
 I haven't looked through the whole list yet, but I can tell you you'll hit
 failures when you get to test compilation due to HBASE-12522. That part, at
 least, is expected.

 Given the discussion around HBASE-12566, this will probably require more of
 a feedback cycle than the RC will allow. Is there any down side to having
 that discussion prior to 1.0 instead of holding up 0.99.2?

 --
 Sean



Re: split failed caused by FileNotFoundException

2014-12-04 Thread 周帅锋
I rechecked the code in 0.98; this problem is solved by checking the Store
object in the compaction runner and cancelling the compaction.
HRegion.compact:

  byte[] cf = Bytes.toBytes(store.getColumnFamilyName());
  if (stores.get(cf) != store) {
    LOG.warn("Store " + store.getColumnFamilyName() + " on region " + this
        + " has been re-instantiated, cancel this compaction request. "
        + " It may be caused by the roll back of split transaction");
    return false;
  }


But is it better to replace the Store object with the new one and continue
the compaction on that store, instead of cancelling it?


2014-12-04 15:00 GMT+08:00 周帅锋 zhoushuaif...@gmail.com:

 In our hbase clusters, split sometimes failed because the file to be
 splited does not exist in parent region. In 0.94.2, this will cause
 regionserver shutdown because the split transction has reached  PONR state.
 In 0.94.20 or 0.98, split will fail and can roll back, because the split
 transction only reach  the state offlined_parent.

 In 0.94.2, the error is like below:
 2014-09-23 22:27:55,710 INFO org.apache.hadoop.hbase.catalog.MetaEditor:
 Offlined parent region x in META
 2014-09-23 22:27:55,820 INFO
 org.apache.hadoop.hbase.regionserver.SplitRequest: Running rollback/cleanup
 of failed split of x
 Caused by: java.io.IOException: java.io.IOException:
 java.io.FileNotFoundException: File does not exist: x
 Caused by: java.io.IOException: java.io.FileNotFoundException: File does
 not exist: x
 Caused by: java.io.FileNotFoundException: File does not exist: x
 2014-09-23 22:27:55,823 FATAL
 org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server
 xxx,60020,1411383568857: Abort; we got an error after point-of-no-return

 The reasion of missing files is a little complex, the whole procedure
 include two failure split and one compact:
 1) there are too many files in the region and compact is requested, but
 not execute because there are many CompactionRequests(compactionRunners) in
 the compaction queue. The compactionRequest hodes the object of the Store,
 and also hodes a storefile list to compact of the store.

 2) the region size is big enough, and split is requested. the region is
 offline during spliting,and the store is closed. but the split failed when
 spliting files(for some reason, like io busy, etc. causing time out)
 2014-09-23 18:26:02,738 INFO
 org.apache.hadoop.hbase.regionserver.SplitRequest: Running rollback/cleanup
 of failed split of x; Took too long to split the files and create the
 references, aborting split

 3) split successfully roll back, and the region is online again. During
 roll back procedure, a new Store object is created, but the store in the
 compaction queue did not removed, so there are two(or maybe more) store
 object in regionserver.

 4) the compaction on the store of the region requested before running, and
 some storefiles are compact and removed, new bigger storefiles are created.
 but the store reinitialized in the rollback split procedure doesn't know
 the change of the storefiles.

 5) split on region running again and fail again, because the storefiles in
 parrent region doesn't exist(removed by compaction). Also, the split
 transction doesn't know that there is a new file created by the compaction.
 In 0.94.2, this error can't be found until the daughter region open, but
 it's too late, the split failed at PONR state, and this will causing
 regionserver shutdown. In 0.94.20 and 0.98, when doing splitStoreFiles, it
 will looking into the storefile in the parent region and can found the
 error before PONR, so split failure can be roll back.
  code in HRegionFileSystem.splitStoreFile:
  ...
  byte[] lastKey = f.createReader().getLastKey();

 So, this situation is a fatal error in previous 0.94 version, and also a
 common bug in the later 0.94 and higher version. And this is also the
 reason why sometimes storefile reader is null(closed by the first failure
 split).



[jira] [Created] (HBASE-12636) Avoid too many write operations on zookeeper in replication

2014-12-04 Thread Liu Shaohui (JIRA)
Liu Shaohui created HBASE-12636:
---

 Summary: Avoid too many write operations on zookeeper in 
replication
 Key: HBASE-12636
 URL: https://issues.apache.org/jira/browse/HBASE-12636
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.94.11
Reporter: Liu Shaohui
Assignee: Liu Shaohui
 Fix For: 1.0.0


In our production cluster, we found there are over 1k write operations per 
second on zookeeper from hbase replication. The reason is that the 
replication source writes the log position to zookeeper for every edit 
shipment. If the WAL currently being replicated is the WAL that the regionserver 
is writing to, each shipment will be very small but the frequency is very high, 
which causes many write operations on zookeeper.

A simple solution is to write the log position to zookeeper only when the position 
diff or the number of skipped edits is larger than a threshold, instead of on every 
edit shipment.
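
A minimal sketch of the proposed throttling, with made-up field and threshold 
names (not the actual change to the replication source):

{code}
public class PositionLogThrottle {
  // Flush the position to zookeeper only when enough bytes or enough edits
  // have been shipped since the last write; both thresholds are assumptions.
  private final long positionDiffThreshold;
  private final long editCountThreshold;

  private long lastLoggedPosition = 0L;
  private long editsSinceLastLog = 0L;

  public PositionLogThrottle(long positionDiffThreshold, long editCountThreshold) {
    this.positionDiffThreshold = positionDiffThreshold;
    this.editCountThreshold = editCountThreshold;
  }

  /** Called after each shipment; true means write the position to zookeeper now. */
  public boolean shouldLogPosition(long currentPosition, long shippedEdits) {
    editsSinceLastLog += shippedEdits;
    boolean log = (currentPosition - lastLoggedPosition) >= positionDiffThreshold
        || editsSinceLastLog >= editCountThreshold;
    if (log) {
      lastLoggedPosition = currentPosition;
      editsSinceLastLog = 0L;
    }
    return log;
  }
}
{code}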

Suggestions are welcomed, thx~





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Region is out of bounds

2014-12-04 Thread Vladimir Rodionov
Kevin,

Thank you for your response. This is not a question about how to correctly
configure an HBase cluster for write-heavy workloads. This is an internal HBase
issue - something is wrong in the default logic of the compaction selection
algorithm in 0.94-0.98. It seems that nobody has ever tested importing data
with a very high hbase.hstore.blockingStoreFiles value (200 in our case).

-Vladimir Rodionov

On Wed, Dec 3, 2014 at 6:38 AM, Kevin O'dell kevin.od...@cloudera.com
wrote:

 Vladimir,

   I know you said, do not ask me why, but I am going to have to ask you
 why.  The fact you are doing this (this being blocking store files > 200)
 tells me there is something, or multiple somethings, wrong with your cluster
 setup.  A couple things come to mind:

 * During this heavy write period, could we use bulk loads?  If so, this
 should solve almost all of your problems

 * 1GB region size is WAY too small, and if you are pushing the volume of
 data you are talking about I would recommend 10 - 20GB region sizes; this
 should help keep your region count smaller as well, which will result in
 more optimal writes

 * Your cluster may be undersized, if you are setting the blocking to be
 that high, you may be pushing too much data for your cluster overall.

 Would you be so kind as to pass me a few pieces of information?

 1.) Cluster size
 2.) Average region count per RS
 3.) Heap size, Memstore global settings, and block cache settings
 4.) a RS log to pastebin and a time frame of high writes

 I can probably make some solid suggestions for you based on the above data.

 On Wed, Dec 3, 2014 at 1:04 AM, Vladimir Rodionov vladrodio...@gmail.com
 wrote:

  This is what we observed in our environment(s)
 
  The issue exists in CDH4.5, 5.1, HDP2.1, Mapr4
 
   If someone sets the number of blocking store files way above the default value,
   say 200, to avoid write stalls during intensive data loading (do not ask me
   why we do this), then one of the regions grows indefinitely and takes more
   than 99% of the overall table.
 
   It can't be split because it still has orphaned reference files. Some of
   the reference files are able to avoid compactions for a long time, obviously.
 
   The split policy is IncreasingToUpperBound, max region size is 1G. I do
   my tests on CDH4.5 mostly, but all other distros seem to have the same issue.
 
   My attempt to forcefully add reference files to the compaction list in
   Store.requestCompaction() when the region exceeds the recommended maximum size
   did not work out well - some weird results in our test cases (but HBase tests
   are OK: small, medium and large).
 
  What is so special with these reference files? Any ideas, what can be
 done
  here to fix the issue?
 
  -Vladimir Rodionov
 



 --
 Kevin O'Dell
 Systems Engineer, Cloudera



Re: First release candidate for HBase 0.99.2 (RC0) is available. Please vote by 12/06/2014

2014-12-04 Thread Stack
On Wed, Dec 3, 2014 at 8:07 AM, Sean Busbey bus...@cloudera.com wrote:

 On Wed, Dec 3, 2014 at 9:44 AM, Ted Yu yuzhih...@gmail.com wrote:

  Compiling Phoenix master against 0.99.2, I got:
  http://pastebin.com/gaxCs8fT
 
  Some removed methods are in HBase classes that are marked
  with @InterfaceAudience.Private
  I want to get some opinion on whether such methods should be deprecated
  first.
 


There is no room for 'opinion' in this area. A bunch of work has been done
to remove ambiguity around our guarantees. The law as it stands is:
"InterfaceAudience.Private means: APIs for HBase internals developers. No
guarantees on compatibility or availability in future versions." ... (From
http://hbase.apache.org/book.html#code.standards).  We can add verbiage but
you would have to have a perverse squint to interpret this "no guarantees"
as "no guarantees -- after a deprecation cycle".

That said, we are an accommodating lot.  I suggest you take the list over
to phoenix dev and that phoenix comes back with explicit asks rather than
this blanket list taken from a compile against their master branch.

As per Sean, this discussion does not belong on a dev release RC vote
thread, nor should it hold up its release.
St.Ack


Re: A face-lift for 1.0

2014-12-04 Thread lars hofhansl
+1
I just came across one of the various HBase vs. Cassandra articles, and one of 
the main tenets of the article was how much better the Cassandra 
documentation was. Thank god we have Misty now. :)

(not sure how much just a skin would help, but it surely won't hurt)

-- Lars
  From: Nick Dimiduk ndimi...@gmail.com
 To: hbase-dev dev@hbase.apache.org 
 Sent: Tuesday, December 2, 2014 9:46 AM
 Subject: A face-lift for 1.0
   
Heya,

In mind of the new release, I was thinking we should clean up our act a
little bit in regard to hbase.apache.org and our book. Just because the
project started in 2007 doesn't mean we need a site that looks like it's
from 2007. Phoenix's site looks great in this regard.

For the home page, I was thinking of converting it over to bootstrap [0] so
that it'll be easier to pick up a theme, either one of our own or something
pre-canned [1]. I'm no web designer, but the idea is this would make it
easier for someone who is to help us out.

For the book, I just want to skin it -- no intention of changing the docbook
part (such a decision I'll leave up to Misty). I'm less sure on this
project, but Riak's docs [2] are a nice inspiration.

What do you think? Do we know any web designers who can help out with the
CSS?

-n

[0]: http://getbootstrap.com
[1]: https://wrapbootstrap.com/
[2]: http://docs.basho.com/riak/latest/


   

[jira] [Resolved] (HBASE-12545) Fix backwards compatibility issue introduced with HBASE-12363

2014-12-04 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-12545.

   Resolution: Not a Problem
Fix Version/s: (was: 0.98.9)
   (was: 1.0.0)
 Assignee: (was: Lars Hofhansl)

Resolving as Not A Problem

 Fix backwards compatibility issue introduced with HBASE-12363
 -

 Key: HBASE-12545
 URL: https://issues.apache.org/jira/browse/HBASE-12545
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver
Reporter: Lars Hofhansl
 Attachments: 12545.txt






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Region is out of bounds

2014-12-04 Thread Andrew Purtell
Actually I have set hbase.hstore.blockingStoreFiles to 200 in testing
exactly :-), but must not have generated sufficient load to encounter the
issue you are seeing. Maybe it would be possible to adapt one of the ingest
integration tests to trigger this problem? Set blockingStoreFiles to 200 or
more. Tune down the region size to 128K or similar. If
it's reproducible like that please open a JIRA.
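
For anyone trying to adapt an ingest test along those lines, a minimal sketch of
the configuration described above (the class name is made up; the values come
from this thread):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class OutOfBoundsReproConf {
  public static Configuration buildConf() {
    Configuration conf = HBaseConfiguration.create();
    // Very high blocking-store-files limit, as in the setups reported here.
    conf.setInt("hbase.hstore.blockingStoreFiles", 200);
    // Tiny max region size so splits are requested constantly under load.
    conf.setLong("hbase.hregion.max.filesize", 128 * 1024L);
    return conf;
  }
}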

On Wed, Dec 3, 2014 at 9:07 AM, Vladimir Rodionov vladrodio...@gmail.com
wrote:

 Kevin,

 Thank you for your response. This is not a question on how to configure
 correctly HBase cluster for write heavy workloads. This is internal HBase
 issue - something is wrong in a default logic of compaction selection
 algorithm in 0.94-0.98. It seems that nobody has ever tested importing data
 with very high hbase.hstore.blockingStoreFiles value (200 in our case).

 -Vladimir Rodionov

 On Wed, Dec 3, 2014 at 6:38 AM, Kevin O'dell kevin.od...@cloudera.com
 wrote:

  Vladimir,
 
I know you said, do not ask me why, but I am going to have to ask you
  why.  The fact you are doing this(this being blocking store files  200)
  tells me there is something or multiple somethings wrong with your
 cluster
  setup.  A couple things come to mind:
 
  * During this heavy write period, could we use bulk loads?  If so, this
  should solve almost all of your problems
 
  * 1GB region size is WAY too small, and if you are pushing the volume of
  data you are talking about I would recommend 10 - 20GB region sizes this
  should help keep your region count smaller as well which will result in
  more optimal writes
 
  * Your cluster may be undersized, if you are setting the blocking to be
  that high, you may be pushing too much data for your cluster overall.
 
  Would you be so kind as to pass me a few pieces of information?
 
  1.) Cluster size
  2.) Average region count per RS
  3.) Heap size, Memstore global settings, and block cache settings
  4.) a RS log to pastebin and a time frame of high writes
 
  I can probably make some solid suggestions for you based on the above
 data.
 
  On Wed, Dec 3, 2014 at 1:04 AM, Vladimir Rodionov 
 vladrodio...@gmail.com
  wrote:
 
   This is what we observed in our environment(s)
  
   The issue exists in CDH4.5, 5.1, HDP2.1, Mapr4
  
   If some one sets # of blocking stores way above default value, say -
 200
  to
   avoid write stalls during intensive data loading (do not ask me , why
 we
  do
   this), then
   one of the regions grows indefinitely and takes more 99% of overall
  table.
  
   It can't be split because it still has orphaned reference files. Some
 of
  a
   reference files are able to avoid compactions for a long time,
 obviously.
  
   The split policy is IncreasingToUpperBound, max region size is 1G. I do
  my
   tests on CDH4.5 mostly but all other distros seem have the same issue.
  
   My attempt to add reference files forcefully to compaction list in
   Store.requetsCompaction() when region exceeds recommended maximum size
  did
   not work out well - some weird results in our test cases (but HBase
 tests
   are OK: small, medium and large).
  
   What is so special with these reference files? Any ideas, what can be
  done
   here to fix the issue?
  
   -Vladimir Rodionov
  
 
 
 
  --
  Kevin O'Dell
  Systems Engineer, Cloudera
 




-- 
Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein
(via Tom White)


Re: First release candidate for HBase 0.99.2 (RC0) is available. Please vote by 12/06/2014

2014-12-04 Thread Andrew Purtell
Phoenix PMC here, although I'm only speaking my own opinion. Concur, the
code takes liberties... We need to clean our own house.

On Wed, Dec 3, 2014 at 10:59 AM, Stack st...@duboce.net wrote:

 On Wed, Dec 3, 2014 at 8:07 AM, Sean Busbey bus...@cloudera.com wrote:

  On Wed, Dec 3, 2014 at 9:44 AM, Ted Yu yuzhih...@gmail.com wrote:
 
   Compiling Phoenix master against 0.99.2, I got:
   http://pastebin.com/gaxCs8fT
  
   Some removed methods are in HBase classes that are marked
   with @InterfaceAudience.Private
   I want to get some opinion on whether such methods should be deprecated
   first.
  


 There is no room for 'opinion' in this area. A bunch of work has been done
 to remove ambiguity around our guarantees. The law as it stands is:
 InterfaceAudience.Private means: APIs for HBase internals developers. No
 guarantees on compatibility or availability in future versions. ... (From
 http://hbase.apache.org/book.html#code.standards).  We can add verbiage
 but
 you would have to have a perverse squint to interpret this no guarantees
 as no guarantees -- after a deprecation cycle.

 That said, we are an accommodating lot.  I suggest you take the list over
 to phoenix dev and that phoenix comes back with explicit asks rather than
 this blanket list taken from a compile against their master branch.

 As per Sean, this discussion does not belong on a dev release RC vote
 thread, nor should it hold up its release.
 St.Ack




-- 
Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein
(via Tom White)


Re: First release candidate for HBase 0.99.2 (RC0) is available. Please vote by 12/06/2014

2014-12-04 Thread Andrew Purtell
See https://issues.apache.org/jira/browse/PHOENIX-1501. Pardon the noise on
a VOTE thread.

On Thu, Dec 4, 2014 at 10:31 AM, Andrew Purtell apurt...@apache.org wrote:

 Phoenix PMC here, although I'm only speaking my own opinion. Concur, the
 code takes liberties... We need to clean our own house.

 On Wed, Dec 3, 2014 at 10:59 AM, Stack st...@duboce.net wrote:

 On Wed, Dec 3, 2014 at 8:07 AM, Sean Busbey bus...@cloudera.com wrote:

  On Wed, Dec 3, 2014 at 9:44 AM, Ted Yu yuzhih...@gmail.com wrote:
 
   Compiling Phoenix master against 0.99.2, I got:
   http://pastebin.com/gaxCs8fT
  
   Some removed methods are in HBase classes that are marked
   with @InterfaceAudience.Private
   I want to get some opinion on whether such methods should be
 deprecated
   first.
  


 There is no room for 'opinion' in this area. A bunch of work has been done
 to remove ambiguity around our guarantees. The law as it stands is:
 InterfaceAudience.Private means: APIs for HBase internals developers. No
 guarantees on compatibility or availability in future versions. ... (From
 http://hbase.apache.org/book.html#code.standards).  We can add verbiage
 but
 you would have to have a perverse squint to interpret this no guarantees
 as no guarantees -- after a deprecation cycle.

 That said, we are an accommodating lot.  I suggest you take the list over
 to phoenix dev and that phoenix comes back with explicit asks rather than
 this blanket list taken from a compile against their master branch.

 As per Sean, this discussion does not belong on a dev release RC vote
 thread, nor should it hold up its release.
 St.Ack




 --
 Best regards,

- Andy

 Problems worthy of attack prove their worth by hitting back. - Piet Hein
 (via Tom White)




-- 
Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein
(via Tom White)


Please welcome our latest committer, Sean Busbey

2014-12-04 Thread Stack
Sean has been doing excellent work around these environs. Your PMC made him
a committer in recognition.  Welcome Sean!

St.Ack


Re: Please welcome our latest committer, Sean Busbey

2014-12-04 Thread Ted Yu
Congratulations, Sean !

On Thu, Dec 4, 2014 at 12:10 PM, Stack st...@duboce.net wrote:

 Sean has been doing excellent work around these environs. Your PMC made him
 a committer in recognition.  Welcome Sean!

 St.Ack



Re: Please welcome our latest committer, Sean Busbey

2014-12-04 Thread Esteban Gutierrez
Congratulations Sean! keep more of that great work coming!

esteban.

--
Cloudera, Inc.


On Thu, Dec 4, 2014 at 12:12 PM, Ted Yu yuzhih...@gmail.com wrote:

 Congratulations, Sean !

 On Thu, Dec 4, 2014 at 12:10 PM, Stack st...@duboce.net wrote:

  Sean has been doing excellent work around these environs. Your PMC made
 him
  a committer in recognition.  Welcome Sean!
 
  St.Ack
 



Re: Please welcome our latest committer, Sean Busbey

2014-12-04 Thread Srikanth Srungarapu
Congrats, Sean!

On Thu, Dec 4, 2014 at 12:15 PM, Esteban Gutierrez este...@cloudera.com
wrote:

 Congratulations Sean! keep more of that great work coming!

 esteban.

 --
 Cloudera, Inc.


 On Thu, Dec 4, 2014 at 12:12 PM, Ted Yu yuzhih...@gmail.com wrote:

  Congratulations, Sean !
 
  On Thu, Dec 4, 2014 at 12:10 PM, Stack st...@duboce.net wrote:
 
   Sean has been doing excellent work around these environs. Your PMC made
  him
   a committer in recognition.  Welcome Sean!
  
   St.Ack
  
 



Re: Please welcome our latest committer, Sean Busbey

2014-12-04 Thread Jonathan Hsieh
Congrats Sean!

On Thu, Dec 4, 2014 at 12:10 PM, Stack st...@duboce.net wrote:

 Sean has been doing excellent work around these environs. Your PMC made him
 a committer in recognition.  Welcome Sean!

 St.Ack




-- 
// Jonathan Hsieh (shay)
// HBase Tech Lead, Software Engineer, Cloudera
// j...@cloudera.com // @jmhsieh


Re: Region is out of bounds

2014-12-04 Thread Vladimir Rodionov
Andrew,

What HBase version have you run your test on?

This issue probably does not exist anymore in the latest Apache releases, but
it still exists in not-so-latest, but still actively used, versions of CDH,
HDP etc. We discovered it during large data set loading (100s of GB)
in our cluster (4 nodes).

-Vladimir

On Thu, Dec 4, 2014 at 10:23 AM, Andrew Purtell apurt...@apache.org wrote:

 Actually I have set hbase.hstore.blockingStoreFiles to 200 in testing
 exactly :-), but must not have generated sufficient load to encounter the
 issue you are seeing. Maybe it would be possible to adapt one of the ingest
 integration tests to trigger this problem? Set blockingStoreFiles to 200 or
 more. Tune down the region size to 128K or similar. If
 it's reproducible like that please open a JIRA.

 On Wed, Dec 3, 2014 at 9:07 AM, Vladimir Rodionov vladrodio...@gmail.com
 wrote:

  Kevin,
 
  Thank you for your response. This is not a question on how to configure
  correctly HBase cluster for write heavy workloads. This is internal HBase
  issue - something is wrong in a default logic of compaction selection
  algorithm in 0.94-0.98. It seems that nobody has ever tested importing
 data
  with very high hbase.hstore.blockingStoreFiles value (200 in our case).
 
  -Vladimir Rodionov
 
  On Wed, Dec 3, 2014 at 6:38 AM, Kevin O'dell kevin.od...@cloudera.com
  wrote:
 
   Vladimir,
  
 I know you said, do not ask me why, but I am going to have to ask
 you
   why.  The fact you are doing this(this being blocking store files 
 200)
   tells me there is something or multiple somethings wrong with your
  cluster
   setup.  A couple things come to mind:
  
   * During this heavy write period, could we use bulk loads?  If so, this
   should solve almost all of your problems
  
   * 1GB region size is WAY too small, and if you are pushing the volume
 of
   data you are talking about I would recommend 10 - 20GB region sizes
 this
   should help keep your region count smaller as well which will result in
   more optimal writes
  
   * Your cluster may be undersized, if you are setting the blocking to be
   that high, you may be pushing too much data for your cluster overall.
  
   Would you be so kind as to pass me a few pieces of information?
  
   1.) Cluster size
   2.) Average region count per RS
   3.) Heap size, Memstore global settings, and block cache settings
   4.) a RS log to pastebin and a time frame of high writes
  
   I can probably make some solid suggestions for you based on the above
  data.
  
   On Wed, Dec 3, 2014 at 1:04 AM, Vladimir Rodionov 
  vladrodio...@gmail.com
   wrote:
  
This is what we observed in our environment(s)
   
The issue exists in CDH4.5, 5.1, HDP2.1, Mapr4
   
If some one sets # of blocking stores way above default value, say -
  200
   to
avoid write stalls during intensive data loading (do not ask me , why
  we
   do
this), then
one of the regions grows indefinitely and takes more 99% of overall
   table.
   
It can't be split because it still has orphaned reference files. Some
  of
   a
reference files are able to avoid compactions for a long time,
  obviously.
   
The split policy is IncreasingToUpperBound, max region size is 1G. I
 do
   my
tests on CDH4.5 mostly but all other distros seem have the same
 issue.
   
My attempt to add reference files forcefully to compaction list in
Store.requetsCompaction() when region exceeds recommended maximum
 size
   did
not work out well - some weird results in our test cases (but HBase
  tests
are OK: small, medium and large).
   
What is so special with these reference files? Any ideas, what can be
   done
here to fix the issue?
   
-Vladimir Rodionov
   
  
  
  
   --
   Kevin O'Dell
   Systems Engineer, Cloudera
  
 



 --
 Best regards,

- Andy

 Problems worthy of attack prove their worth by hitting back. - Piet Hein
 (via Tom White)



Re: Please welcome our latest committer, Sean Busbey

2014-12-04 Thread lars hofhansl
Welcome Sean! 
 From: Stack st...@duboce.net
 To: HBase Dev List dev@hbase.apache.org 
 Sent: Thursday, December 4, 2014 12:10 PM
 Subject: Please welcome our latest committer, Sean Busbey
   
Sean has been doing excellent work around these environs. Your PMC made him
a committer in recognition.  Welcome Sean!

St.Ack


  

Re: Please welcome our latest committer, Sean Busbey

2014-12-04 Thread Nick Dimiduk
Nice work Sean, congratulations!

On Thu, Dec 4, 2014 at 12:10 PM, Stack st...@duboce.net wrote:

 Sean has been doing excellent work around these environs. Your PMC made him
 a committer in recognition.  Welcome Sean!

 St.Ack



Re: Please welcome our latest committer, Sean Busbey

2014-12-04 Thread Enis Söztutar
Congrats and welcome.

Enis

On Thu, Dec 4, 2014 at 12:53 PM, Nick Dimiduk ndimi...@gmail.com wrote:

 Nice work Sean, congratulations!

 On Thu, Dec 4, 2014 at 12:10 PM, Stack st...@duboce.net wrote:

  Sean has been doing excellent work around these environs. Your PMC made
 him
  a committer in recognition.  Welcome Sean!
 
  St.Ack
 



Re: Testing and CI -- Apache Jenkins Builds (WAS - Re: Testing)

2014-12-04 Thread Stack
FYI, hadoopqa is broken until the mighty apache infra succeeds in restoring the svn
server [1]
St.Ack

1.
https://blogs.apache.org/infra/entry/subversion_master_undergoing_emergency_maintenance

On Tue, Nov 4, 2014 at 12:17 PM, Andrew Purtell apurt...@apache.org wrote:

 ... and the 0.98 branch has a couple of well known flappers, which I will
 probably disable (will file JIRAs for this if so) so we can get consistent
 blue builds there as well.

 On Tue, Nov 4, 2014 at 9:38 AM, Stack st...@duboce.net wrote:

  Branch-1 and master have stabilized and now run mostly blue (give or take
  the odd failure) [1][2]. Having a mostly blue branch-1 has helped us
  identify at least one destabilizing commit in the last few days, maybe
 two;
  this is as it should be (smile).
 
  Lets keep our builds blue. If you commit a patch, make sure subsequent
  builds stay blue. You can subscribe to bui...@hbase.apache.org to get
  notice of failures if not already subscribed.
 
  Thanks,
  St.Ack
 
  1. https://builds.apache.org/view/H-L/view/HBase/job/HBase-1.0/
  2. https://builds.apache.org/view/H-L/view/HBase/job/HBase-TRUNK/
 
 
  On Mon, Oct 13, 2014 at 4:41 PM, Stack st...@duboce.net wrote:
 
   A few notes on testing.
  
   Too long to read, infra is more capable now and after some work, we are
   seeing branch-1 and trunk mostly running blue. Lets try and keep it
 this
   way going forward.
  
   Apache Infra has new, more capable hardware.
  
   A recent spurt of test fixing combined with more capable hardware seems
  to
   have gotten us to a new place; tests are mostly passing now on branch-1
  and
   master.  Lets try and keep it this way and start to trust our test runs
   again.  Just a few flakies remain.  Lets try and nail them.
  
   Our tests now run in parallel with other test suites where previous we
  ran
   alone. You can see this sometimes when our zombie detector reports
 tests
   from another project altogether as lingerers (To be fixed).  Some of
 our
   tests are failing because a concurrent hbase run is undoing classes and
   data from under it. Also, lets fix.
  
   Our tests are brittle. It takes 75minutes for them to complete.  Many
 are
   heavy-duty integration tests starting up multiple clusters and
 mapreduce
   all in the one JVM. It is a miracle they pass at all.  Usually
  integration
    tests have been cast as unit tests because there was nowhere else for
  them
   to get an airing.  We have the hbase-it suite now which would be a more
  apt
   place but until these are run on a regular basis in public for all to
  see,
   the fat integration tests disguised as unit tests will remain.  A
 review
  of
   our current unit tests weeding the old cruft and the no longer relevant
  or
   duplicates would be a nice undertaking if someone is looking to
  contribute.
  
   Alex Newman has been working on making our tests work up on travis and
   circle-ci.  That'll be sweet when it goes end-to-end.  He also added in
   some type categorizations -- client, filter, mapreduce -- alongside
 our
   old sizing categorizations of small/medium/large.  His thinking is
 that
   we can run these categorizations in parallel so we could run the total
   suite in about the time of the longest test, say 20-30minutes?  We
 could
   even change Apache to run them this way.
  
   FYI,
   St.Ack
  
  
  
  
  
  
  
 



 --
 Best regards,

- Andy

 Problems worthy of attack prove their worth by hitting back. - Piet Hein
 (via Tom White)



[jira] [Created] (HBASE-12637) Compilation with Hadoop-2.4- is broken

2014-12-04 Thread Enis Soztutar (JIRA)
Enis Soztutar created HBASE-12637:
-

 Summary: Compilation with Hadoop-2.4- is broken 
 Key: HBASE-12637
 URL: https://issues.apache.org/jira/browse/HBASE-12637
 Project: HBase
  Issue Type: Bug
Reporter: Enis Soztutar
Assignee: Enis Soztutar


HBASE-12554 introduced 
{code}
public static class MockMapping extends ScriptBasedMapping {
{code}

which breaks compilation with earlier hadoop versions: 
{code}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:2.5.1:testCompile 
(default-testCompile) on project hbase-server: Compilation failure: Compilation 
failure:
[ERROR] 
/Users/enis/projects/rc-test/hbase-0.99.2/hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestBaseLoadBalancer.java:[77,42]
 error: cannot inherit from final ScriptBasedMapping
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Region is out of bounds

2014-12-04 Thread Andrew Purtell
Most versions of 0.98 since 0.98.1, but I haven't run a punishing
high-scale bulk ingest for its own sake; the high-ish rate ingest and the
setting of blockingStoreFiles to 200 have been in service of getting data in
for subsequent testing.


On Thu, Dec 4, 2014 at 12:43 PM, Vladimir Rodionov vladrodio...@gmail.com
wrote:

 Andrew,

 What HBase version have you run your test on?

 This issue probably does not exist anymore in a latest Apache releases, but
 still exists in not so latest, but still actively used, versions of CDH,
 HDP etc. We have discovered it during large data set loading ( 100s of GB)
 in our cluster (4 nodes).

 -Vladimir

 On Thu, Dec 4, 2014 at 10:23 AM, Andrew Purtell apurt...@apache.org
 wrote:

  Actually I have set hbase.hstore.
 ​​
 blockingStoreFiles to 200 in testing
  exactly :-), but must not have generated sufficient load to encounter the
  issue you are seeing. Maybe it would be possible to adapt one of the
 ingest
  integration tests to trigger this problem? Set blockingStoreFiles to 200
 or
  more. Tune down the region size to 128K or similar. If
  it's reproducible like that please open a JIRA.
 
  On Wed, Dec 3, 2014 at 9:07 AM, Vladimir Rodionov 
 vladrodio...@gmail.com
  wrote:
 
   Kevin,
  
   Thank you for your response. This is not a question on how to configure
   correctly HBase cluster for write heavy workloads. This is internal
 HBase
   issue - something is wrong in a default logic of compaction selection
   algorithm in 0.94-0.98. It seems that nobody has ever tested importing
  data
   with very high hbase.hstore.blockingStoreFiles value (200 in our case).
  
   -Vladimir Rodionov
  
   On Wed, Dec 3, 2014 at 6:38 AM, Kevin O'dell kevin.od...@cloudera.com
 
   wrote:
  
Vladimir,
   
  I know you said, do not ask me why, but I am going to have to ask
  you
why.  The fact you are doing this(this being blocking store files 
  200)
tells me there is something or multiple somethings wrong with your
   cluster
setup.  A couple things come to mind:
   
* During this heavy write period, could we use bulk loads?  If so,
 this
should solve almost all of your problems
   
* 1GB region size is WAY too small, and if you are pushing the volume
  of
data you are talking about I would recommend 10 - 20GB region sizes
  this
should help keep your region count smaller as well which will result
 in
more optimal writes
   
* Your cluster may be undersized, if you are setting the blocking to
 be
that high, you may be pushing too much data for your cluster overall.
   
Would you be so kind as to pass me a few pieces of information?
   
1.) Cluster size
2.) Average region count per RS
3.) Heap size, Memstore global settings, and block cache settings
4.) a RS log to pastebin and a time frame of high writes
   
I can probably make some solid suggestions for you based on the above
   data.
   
On Wed, Dec 3, 2014 at 1:04 AM, Vladimir Rodionov 
   vladrodio...@gmail.com
wrote:
   
 This is what we observed in our environment(s)

 The issue exists in CDH4.5, 5.1, HDP2.1, Mapr4

 If some one sets # of blocking stores way above default value, say
 -
   200
to
 avoid write stalls during intensive data loading (do not ask me ,
 why
   we
do
 this), then
 one of the regions grows indefinitely and takes more 99% of overall
table.

 It can't be split because it still has orphaned reference files.
 Some
   of
a
 reference files are able to avoid compactions for a long time,
   obviously.

 The split policy is IncreasingToUpperBound, max region size is 1G.
 I
  do
my
 tests on CDH4.5 mostly but all other distros seem have the same
  issue.

 My attempt to add reference files forcefully to compaction list in
 Store.requetsCompaction() when region exceeds recommended maximum
  size
did
 not work out well - some weird results in our test cases (but HBase
   tests
 are OK: small, medium and large).

 What is so special with these reference files? Any ideas, what can
 be
done
 here to fix the issue?

 -Vladimir Rodionov

   
   
   
--
Kevin O'Dell
Systems Engineer, Cloudera
   
  
 
 
 
  --
  Best regards,
 
 - Andy
 
  Problems worthy of attack prove their worth by hitting back. - Piet Hein
  (via Tom White)
 




-- 
Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein
(via Tom White)


Re: Please welcome our latest committer, Sean Busbey

2014-12-04 Thread Andrew Purtell
Congratulations Sean!

On Thu, Dec 4, 2014 at 12:10 PM, Stack st...@duboce.net wrote:

 Sean has been doing excellent work around these environs. Your PMC made him
 a committer in recognition.  Welcome Sean!

 St.Ack




-- 
Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein
(via Tom White)


Re: First release candidate for HBase 0.99.2 (RC0) is available. Please vote by 12/06/2014

2014-12-04 Thread Enis Söztutar
Setting aside the Interface discussions, here is my +1 for the RC.

+1

Downloaded artifacts,
Checked sigs,
Checked crcs,
Checked the book
Checked dir layout in bin and src artifacts
Checked jars of hbase and hadoop in bin artifact
Checked version strings
Run in local mode
Run basic smoke tests in shell
Run LTT
Build the src artifact with hadoop versions 2.2.0, 2.3.0, 2.4.0, 2.5.0,
2.6.0. Compilation with 2.4.0 and before is broken, but it is ok for
this RC. See HBASE-12637
Checked maven repository artifacts by running the hbase-downstreamer
project test.


Reminder, Sat is the last day to vote on this RC. Please plan to spend some
time on the RC so that we can iron out issues for the next 1.0.0RC.

Enis

On Thu, Dec 4, 2014 at 10:38 AM, Andrew Purtell apurt...@apache.org wrote:

 See https://issues.apache.org/jira/browse/PHOENIX-1501. Pardon the noise
 on
 a VOTE thread.

 On Thu, Dec 4, 2014 at 10:31 AM, Andrew Purtell apurt...@apache.org
 wrote:

  Phoenix PMC here, although I'm only speaking my own opinion. Concur, the
  code takes liberties... We need to clean our own house.
 
  On Wed, Dec 3, 2014 at 10:59 AM, Stack st...@duboce.net wrote:
 
  On Wed, Dec 3, 2014 at 8:07 AM, Sean Busbey bus...@cloudera.com
 wrote:
 
   On Wed, Dec 3, 2014 at 9:44 AM, Ted Yu yuzhih...@gmail.com wrote:
  
Compiling Phoenix master against 0.99.2, I got:
http://pastebin.com/gaxCs8fT
   
Some removed methods are in HBase classes that are marked
with @InterfaceAudience.Private
I want to get some opinion on whether such methods should be
  deprecated
first.
   
 
 
  There is no room for 'opinion' in this area. A bunch of work has been
 done
  to remove ambiguity around our guarantees. The law as it stands is:
  InterfaceAudience.Private means: APIs for HBase internals developers.
 No
  guarantees on compatibility or availability in future versions. ...
 (From
  http://hbase.apache.org/book.html#code.standards).  We can add verbiage
  but
  you would have to have a perverse squint to interpret this no
 guarantees
  as no guarantees -- after a deprecation cycle.
 
  That said, we are an accommodating lot.  I suggest you take the list
 over
  to phoenix dev and that phoenix comes back with explicit asks rather
 than
  this blanket list taken from a compile against their master branch.
 
  As per Sean, this discussion does not belong on a dev release RC vote
  thread, nor should it hold up its release.
  St.Ack
 
 
 
 
  --
  Best regards,
 
 - Andy
 
  Problems worthy of attack prove their worth by hitting back. - Piet Hein
  (via Tom White)
 



 --
 Best regards,

- Andy

 Problems worthy of attack prove their worth by hitting back. - Piet Hein
 (via Tom White)



Re: Please welcome our latest committer, Sean Busbey

2014-12-04 Thread Niels Basjes
Congrats Sean!

On Thu, Dec 4, 2014 at 9:10 PM, Stack st...@duboce.net wrote:

 Sean has been doing excellent work around these environs. Your PMC made him
 a committer in recognition.  Welcome Sean!

 St.Ack




-- 
Best regards / Met vriendelijke groeten,

Niels Basjes


Re: First release candidate for HBase 0.99.2 (RC0) is available. Please vote by 12/06/2014

2014-12-04 Thread Ted Yu
+1

Checked the book
Checked dir layout in bin and src artifacts
Ran unit test suite - passed.

Will do some more validation, time permitting.

On Thu, Dec 4, 2014 at 2:35 PM, Enis Söztutar e...@apache.org wrote:

 Setting aside the Interface discussions, here is my +1 for the RC.

 +1

 Downloaded artifacts,
 Checked sigs,
 Checked crcs,
 Checked the book
 Checked dir layout in bin and src artifacts
 Checked jars of hbase and hadoop in bin artifact
 Checked version strings
 Run in local mode
 Run basic smoke tests in shell
 Run LTT
 Build the src artifact with hadoop versions 2.2.0,2.3.0,2.4.0, 2.5.0.
 2.6.0. Compilation with 2.4.0 and before is broken, but it is ok for
 this RC. See HBASE-12637
 Checked maven repository artifacts by running the hbase-downstreamer
 project test.


 Reminder, Sat is the last day to vote on this RC. Please plan to spend some
 time on the RC so that we can iron out issues for the next 1.0.0RC.

 Enis

 On Thu, Dec 4, 2014 at 10:38 AM, Andrew Purtell apurt...@apache.org
 wrote:

  See https://issues.apache.org/jira/browse/PHOENIX-1501. Pardon the noise
  on
  a VOTE thread.
 
  On Thu, Dec 4, 2014 at 10:31 AM, Andrew Purtell apurt...@apache.org
  wrote:
 
   Phoenix PMC here, although I'm only speaking my own opinion. Concur,
 the
   code takes liberties... We need to clean our own house.
  
   On Wed, Dec 3, 2014 at 10:59 AM, Stack st...@duboce.net wrote:
  
   On Wed, Dec 3, 2014 at 8:07 AM, Sean Busbey bus...@cloudera.com
  wrote:
  
On Wed, Dec 3, 2014 at 9:44 AM, Ted Yu yuzhih...@gmail.com wrote:
   
 Compiling Phoenix master against 0.99.2, I got:
 http://pastebin.com/gaxCs8fT

 Some removed methods are in HBase classes that are marked
 with @InterfaceAudience.Private
 I want to get some opinion on whether such methods should be
   deprecated
 first.

  
  
   There is no room for 'opinion' in this area. A bunch of work has been
  done
   to remove ambiguity around our guarantees. The law as it stands is:
   InterfaceAudience.Private means: APIs for HBase internals developers.
  No
   guarantees on compatibility or availability in future versions. ...
  (From
   http://hbase.apache.org/book.html#code.standards).  We can add
 verbiage
   but
   you would have to have a perverse squint to interpret this no
  guarantees
   as no guarantees -- after a deprecation cycle.
  
   That said, we are an accommodating lot.  I suggest you take the list
  over
   to phoenix dev and that phoenix comes back with explicit asks rather
  than
   this blanket list taken from a compile against their master branch.
  
   As per Sean, this discussion does not belong on a dev release RC vote
   thread, nor should it hold up its release.
   St.Ack
  
  
  
  
   --
   Best regards,
  
  - Andy
  
   Problems worthy of attack prove their worth by hitting back. - Piet
 Hein
   (via Tom White)
  
 
 
 
  --
  Best regards,
 
 - Andy
 
  Problems worthy of attack prove their worth by hitting back. - Piet Hein
  (via Tom White)
 



RE: split failed caused by FileNotFoundException

2014-12-04 Thread Bijieshan
Nice find, zhoushuaifeng:)

I suggest raising an issue for 0.94.

Jieshan.

From: 周帅锋 [zhoushuaif...@gmail.com]
Sent: Thursday, December 04, 2014 6:01 PM
To: dev
Subject: Re: split failed caused by FileNotFoundException

I rechecked the code in 0.98; this problem is solved by checking the store
object in the compaction runner and cancelling the compaction, in
HRegion.compact:

  byte[] cf = Bytes.toBytes(store.getColumnFamilyName());
  if (stores.get(cf) != store) {
    LOG.warn("Store " + store.getColumnFamilyName() + " on region " + this
        + " has been re-instantiated, cancel this compaction request. "
        + " It may be caused by the roll back of split transaction");
    return false;
  }


But is it better to replace the store object with the new one and continue
the compaction on that store, instead of cancelling?


2014-12-04 15:00 GMT+08:00 周帅锋 zhoushuaif...@gmail.com:

 In our HBase clusters, splits sometimes fail because a file to be
 split does not exist in the parent region. In 0.94.2, this causes a
 regionserver shutdown because the split transaction has reached the PONR state.
 In 0.94.20 or 0.98, the split fails and can be rolled back, because the split
 transaction has only reached the state offlined_parent.

 In 0.94.2, the error is like below:
 2014-09-23 22:27:55,710 INFO org.apache.hadoop.hbase.catalog.MetaEditor:
 Offlined parent region x in META
 2014-09-23 22:27:55,820 INFO
 org.apache.hadoop.hbase.regionserver.SplitRequest: Running rollback/cleanup
 of failed split of x
 Caused by: java.io.IOException: java.io.IOException:
 java.io.FileNotFoundException: File does not exist: x
 Caused by: java.io.IOException: java.io.FileNotFoundException: File does
 not exist: x
 Caused by: java.io.FileNotFoundException: File does not exist: x
 2014-09-23 22:27:55,823 FATAL
 org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server
 xxx,60020,1411383568857: Abort; we got an error after point-of-no-return

 The reason for the missing files is a little complex; the whole procedure
 involves two failed splits and one compaction:
 1) There are too many files in the region and a compaction is requested, but
 it does not execute because there are many CompactionRequests (compactionRunners) in
 the compaction queue. The CompactionRequest holds the Store object,
 and also holds a list of the store's storefiles to compact.

 2) The region size is big enough, and a split is requested. The region is
 offline during splitting, and the store is closed. But the split fails while
 splitting files (for some reason, like IO busy, etc., causing a timeout):
 2014-09-23 18:26:02,738 INFO
 org.apache.hadoop.hbase.regionserver.SplitRequest: Running rollback/cleanup
 of failed split of x; Took too long to split the files and create the
 references, aborting split

 3) The split successfully rolls back, and the region is online again. During
 the rollback procedure, a new Store object is created, but the store in the
 compaction queue was not removed, so there are two (or maybe more) Store
 objects in the regionserver.

 4) The compaction on the region's store that was requested earlier now runs, and
 some storefiles are compacted and removed; new, bigger storefiles are created.
 But the store reinitialized in the split rollback procedure doesn't know about
 the change in storefiles.

 5) The split on the region runs again and fails again, because some storefiles in
 the parent region don't exist (removed by compaction). Also, the split
 transaction doesn't know that a new file was created by the compaction.
 In 0.94.2, this error can't be found until the daughter region opens, but
 by then it's too late: the split has failed at the PONR state, and this causes a
 regionserver shutdown. In 0.94.20 and 0.98, when doing splitStoreFiles, it
 will look into the storefiles in the parent region and can find the
 error before the PONR, so the split failure can be rolled back.
  code in HRegionFileSystem.splitStoreFile:
  ...
  byte[] lastKey = f.createReader().getLastKey();

 So, this situation is a fatal error in earlier 0.94 versions, and also a
 common bug in the later 0.94 and higher versions. And this is also the
 reason why the storefile reader is sometimes null (closed by the first failed
 split).



[jira] [Resolved] (HBASE-12637) Compilation with Hadoop-2.4- is broken

2014-12-04 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar resolved HBASE-12637.
---
  Resolution: Fixed
Hadoop Flags: Reviewed

Pushed this to 1.0 and master. Thanks Ted for review. 

 Compilation with Hadoop-2.4- is broken 
 ---

 Key: HBASE-12637
 URL: https://issues.apache.org/jira/browse/HBASE-12637
 Project: HBase
  Issue Type: Bug
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 1.0.0, 2.0.0

 Attachments: hbase-12637_v1.patch


 HBASE-12554 introduced 
 {code}
 public static class MockMapping extends ScriptBasedMapping {
 {code}
 which breaks compilation with earlier hadoop versions: 
 {code}
 [ERROR] Failed to execute goal 
 org.apache.maven.plugins:maven-compiler-plugin:2.5.1:testCompile 
 (default-testCompile) on project hbase-server: Compilation failure: 
 Compilation failure:
 [ERROR] 
 /Users/enis/projects/rc-test/hbase-0.99.2/hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestBaseLoadBalancer.java:[77,42]
  error: cannot inherit from final ScriptBasedMapping
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-12619) Backport HBASE-11639 (Replicate the visibility of Cells as strings) to 0.98

2014-12-04 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-12619.

  Resolution: Fixed
Hadoop Flags: Reviewed

Pushed to 0.98

 Backport HBASE-11639 (Replicate the visibility of Cells as strings) to 0.98
 ---

 Key: HBASE-12619
 URL: https://issues.apache.org/jira/browse/HBASE-12619
 Project: HBase
  Issue Type: Improvement
  Components: security
Affects Versions: 0.98.8
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
  Labels: visibility
 Fix For: 0.98.9

 Attachments: HBASE-11639_0.98_1.patch


 Raising a back port issue for HBASE-11639. We have the patch ready just need 
 to see how we are handling the new CP hook in RS Observer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


NumTables/regions Quota management and RS groups - Core or coprocessors

2014-12-04 Thread Enis Söztutar
Hi,

We were chatting with Francis offline about these two features, which are
both related to NS and multi-tenant deployments.

The issue came up of whether these belong in core or should be left as
coprocessor-based implementations.

From a user's perspective, if we leave these as coprocessors, I feel that
they won't be used much; unless there is a very specific reason, most
users will not make use of them. However, for quotas especially, I think
that they should be enabled by default, with some reasonable default values
(similar to how Linux comes with default ulimit settings).

We just wanted to bring this to dev to see what you guys think.

https://issues.apache.org/jira/browse/HBASE-8410
https://issues.apache.org/jira/browse/HBASE-6721


Re: Please welcome our latest committer, Sean Busbey

2014-12-04 Thread ramkrishna vasudevan
Congrats Sean!

On Fri, Dec 5, 2014 at 5:08 AM, Niels Basjes ni...@basjes.nl wrote:

 Congrats Sean!

 On Thu, Dec 4, 2014 at 9:10 PM, Stack st...@duboce.net wrote:

  Sean has been doing excellent work around these environs. Your PMC made
 him
  a committer in recognition.  Welcome Sean!
 
  St.Ack
 



 --
 Best regards / Met vriendelijke groeten,

 Niels Basjes



ByteBuffer Backed Cell - New APIs (HBASE-12358)

2014-12-04 Thread ramkrishna vasudevan
Hi Devs

This write-up is to provide a brief idea of why we need a BB-backed cell
and what items we need to take care of before introducing new BB-backed
APIs in Cell.

Pls refer to https://issues.apache.org/jira/browse/HBASE-12358 also and its
parent JIRA https://issues.apache.org/jira/browse/HBASE-11425 for the
history.

Coming back to the discussion on new APIs, this discussion is based on
supporting BB in the read path (write path is not targeted now) so that we
could work with offheap BBs also. This would avoid copying of data from
BlockCache to the read path ByteBuffer.

Assuming we will be working with BBs in the read path, we will need to
introduce *getXXXBuffer()* APIs and also *hasArray()* in Cell itself
directly.
If we try to extend the cell or create a new Cell then *everywhere we need
to do an instanceof check or a type conversion*, and that is why adding the new
APIs to the Cell interface itself makes sense.

The plan is to use this *getXXXBuffer()* API throughout the read path *instead
of getXXXArray()*.

Now there are two ways to use it

1) Use getXXXBuffer() along with getXXXOffset(), getXXXLength(), just as we
use the getXXXArray() APIs with the offset and length today. Doing so would
ensure that everywhere in the filters and CPs one has to just replace
getXXXArray() with getXXXBuffer() and continue to use getXXXOffset() and
getXXXLength(). We would do some wrapping of the byte[] with a BB in case of
KeyValue-type cells so that getXXXBuffer along with offset and length
holds true everywhere. Note that here if hasArray is true (for the KV case) then
getXXXArray() would also work.

2) The other way is to use only the getXXXBuffer() API and
ensure that the BB is always duplicated/sliced so that only the portion of the
total BB which represents the individual component of the Cell is returned.
In this case there is no use for getXXXOffset() (as it is going to be 0) and
getXXXLength() is anyway going to be the sliced BB's limit.

But with the 2nd approach we may end up creating a lot of small objects, even
while doing comparisons.

Now the next problem is what to do with the getXXXArray() APIs.
If one sees hasArray() as false (a DBB-backed Cell) and uses the
getXXXArray() API along with offset and length - what should we do? Should
we create a byte[] from the DBB and return it? And in that case, what
should *getXXXOffset() return for a getXXXBuffer or getXXXArray()?*

If we go with the 2nd approach then getXXXBuffer() should be clearly
documented, saying that it has to be used without getXXXOffset() and
getXXXLength(), and that getXXXOffset() and getXXXLength() are to be used
only with getXXXArray().

Now if a Cell is backed by an on-heap BB then we could definitely return
getXXXArray() also - but what to return from getXXXOffset() would be
determined by which approach we take for getXXXBuffer() (based on (1) and
(2)).

We wanted to open up this topic now so as to get some feedback on what
could be an option here. Since it is a user-facing interface we need to be
careful with this.

I would suggest that whenever a Cell is *BB backed* (on-heap or offheap),
*hasArray() would always be false* in that Cell impl.

Everywhere we would use getXXXBuffer() along with getXXXOffset() and
getXXXLength(). Even in the case of KV we could wrap the byte[] with a BB so
that we have uniformity throughout the read code and we don't have too many
'if' else conditions.

Whenever *hasArray() is false* - using the getXXXArray() API would throw an
*UnsupportedOperationException*.

As said, if we want *getXXXArray()* to be supported in the existing way,
then getXXXBuffer() and getXXXOffset(), getXXXLength() should be clearly
documented.

Thoughts!!!

Regards
Ram & Anoop
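
For concreteness, below is a minimal sketch of what approach (1) could look like.
It is purely illustrative for this discussion - the interface and class names are
made up and this is not the committed Cell API: getXXXBuffer() and hasArray() sit
next to the existing offset/length accessors, a KeyValue-style cell wraps its
byte[] so both views agree, and a BB-backed cell reports hasArray() = false and
rejects getXXXArray().

import java.nio.ByteBuffer;

// Illustrative subset only: names and shape are assumptions for this
// discussion, not the committed HBase Cell API.
interface ValueAccessors {
  byte[] getValueArray();      // existing style
  int getValueOffset();
  int getValueLength();
  ByteBuffer getValueBuffer(); // proposed: used with the same offset/length (approach 1)
  boolean hasArray();          // proposed: false for BB-backed cells
}

// KeyValue-like cell backed by an on-heap byte[]: hasArray() is true and the
// buffer accessor simply wraps the array, so offset/length stay valid for both views.
class ArrayBackedCellSketch implements ValueAccessors {
  private final byte[] bytes;
  private final int offset, length;

  ArrayBackedCellSketch(byte[] bytes, int offset, int length) {
    this.bytes = bytes;
    this.offset = offset;
    this.length = length;
  }
  public byte[] getValueArray() { return bytes; }
  public int getValueOffset() { return offset; }
  public int getValueLength() { return length; }
  public ByteBuffer getValueBuffer() { return ByteBuffer.wrap(bytes); }
  public boolean hasArray() { return true; }
}

// Cell whose value lives in a (possibly direct) ByteBuffer, e.g. a block-cache
// buffer: hasArray() is false and getValueArray() throws, per the proposal.
class BufferBackedCellSketch implements ValueAccessors {
  private final ByteBuffer buf;
  private final int offset, length;

  BufferBackedCellSketch(ByteBuffer buf, int offset, int length) {
    this.buf = buf;
    this.offset = offset;
    this.length = length;
  }
  public byte[] getValueArray() { throw new UnsupportedOperationException(); }
  public int getValueOffset() { return offset; }
  public int getValueLength() { return length; }
  public ByteBuffer getValueBuffer() { return buf; }
  public boolean hasArray() { return false; }
}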


Re: Region is out of bounds

2014-12-04 Thread Qiang Tian
My attempt to add reference files forcefully to compaction list in
Store.requestCompaction() when region exceeds recommended maximum size did
not work out well - some weird results in our test cases (but HBase tests
are OK: small, medium and large).

interesting...perhaps it was filtered out in RatioBasedCompactionPolicy#
selectCompaction?



On Fri, Dec 5, 2014 at 5:20 AM, Andrew Purtell apurt...@apache.org wrote:

 Most versions of 0.98 since 0.98.1, but I haven't run a punishing
 high-scale bulk ingest for its own sake; high-ish rate ingest and a setting of
 blockingStoreFiles to 200 have been in service of getting data in for
 subsequent testing.


 On Thu, Dec 4, 2014 at 12:43 PM, Vladimir Rodionov vladrodio...@gmail.com
 
 wrote:

  Andrew,
 
  What HBase version have you run your test on?
 
  This issue probably does not exist anymore in a latest Apache releases,
 but
  still exists in not so latest, but still actively used, versions of CDH,
  HDP etc. We have discovered it during large data set loading ( 100s of
 GB)
  in our cluster (4 nodes).
 
  -Vladimir
 
  On Thu, Dec 4, 2014 at 10:23 AM, Andrew Purtell apurt...@apache.org
  wrote:
 
   Actually I have set hbase.hstore.blockingStoreFiles to 200 in testing
   exactly :-), but must not have generated sufficient load to encounter
 the
   issue you are seeing. Maybe it would be possible to adapt one of the
  ingest
   integration tests to trigger this problem? Set blockingStoreFiles to
 200
  or
   more. Tune down the region size to 128K or similar. If
   it's reproducible like that please open a JIRA.
  
   On Wed, Dec 3, 2014 at 9:07 AM, Vladimir Rodionov 
  vladrodio...@gmail.com
   wrote:
  
Kevin,
   
Thank you for your response. This is not a question on how to
 configure
correctly HBase cluster for write heavy workloads. This is internal
  HBase
issue - something is wrong in a default logic of compaction selection
algorithm in 0.94-0.98. It seems that nobody has ever tested
 importing
   data
with very high hbase.hstore.blockingStoreFiles value (200 in our
 case).
   
-Vladimir Rodionov
   
On Wed, Dec 3, 2014 at 6:38 AM, Kevin O'dell 
 kevin.od...@cloudera.com
  
wrote:
   
 Vladimir,

   I know you said, do not ask me why, but I am going to have to
 ask
   you
 why.  The fact you are doing this(this being blocking store files 
   200)
 tells me there is something or multiple somethings wrong with your
cluster
 setup.  A couple things come to mind:

 * During this heavy write period, could we use bulk loads?  If so,
  this
 should solve almost all of your problems

 * 1GB region size is WAY too small, and if you are pushing the
 volume
   of
 data you are talking about I would recommend 10 - 20GB region sizes
   this
 should help keep your region count smaller as well which will
 result
  in
 more optimal writes

 * Your cluster may be undersized, if you are setting the blocking
 to
  be
 that high, you may be pushing too much data for your cluster
 overall.

 Would you be so kind as to pass me a few pieces of information?

 1.) Cluster size
 2.) Average region count per RS
 3.) Heap size, Memstore global settings, and block cache settings
 4.) a RS log to pastebin and a time frame of high writes

 I can probably make some solid suggestions for you based on the
 above
data.

 On Wed, Dec 3, 2014 at 1:04 AM, Vladimir Rodionov 
vladrodio...@gmail.com
 wrote:

  This is what we observed in our environment(s)
 
  The issue exists in CDH4.5, 5.1, HDP2.1, Mapr4
 
  If some one sets # of blocking stores way above default value,
 say
  -
200
 to
  avoid write stalls during intensive data loading (do not ask me ,
  why
we
 do
  this), then
  one of the regions grows indefinitely and takes more 99% of
 overall
 table.
 
  It can't be split because it still has orphaned reference files.
  Some
of
 a
  reference files are able to avoid compactions for a long time,
obviously.
 
  The split policy is IncreasingToUpperBound, max region size is
 1G.
  I
   do
 my
  tests on CDH4.5 mostly but all other distros seem have the same
   issue.
 
  My attempt to add reference files forcefully to compaction list
 in
  Store.requetsCompaction() when region exceeds recommended maximum
   size
 did
  not work out well - some weird results in our test cases (but
 HBase
tests
  are OK: small, medium and large).
 
  What is so special with these reference files? Any ideas, what
 can
  be
 done
  here to fix the issue?
 
  -Vladimir Rodionov
 



 --
 Kevin O'Dell
 Systems Engineer, Cloudera

   
  
  
  
   --
   Best regards,
  
  - Andy
  
   Problems worthy of attack prove their worth by hitting back. - Piet
 

[jira] [Created] (HBASE-12638) use thrift2 C++ access hbase 0.94.10 fail.

2014-12-04 Thread zhbhhb (JIRA)
zhbhhb created HBASE-12638:
--

 Summary: use thrift2 C++ access hbase 0.94.10 fail.
 Key: HBASE-12638
 URL: https://issues.apache.org/jira/browse/HBASE-12638
 Project: HBase
  Issue Type: Bug
  Components: API
Affects Versions: 0.96.1.1, 0.94.10
 Environment: linux-2.6.32 amd64 thrift-0.6.1
Reporter: zhbhhb


Using the same thrift2 C++ code (only the IP of the thrift server differs) to add data: 
adding to hbase 0.94.10 fails, but adding to hbase 0.96.11 succeeds.
The code that throws the exception:
try {
  client.put(table, put);
} catch (TException e) {
  printf("error:%s\n", e.what());
  //std::cout << "ERROR" << e.what() << std::endl;
}
the exception:
error:Default TException.

The information in the exception is not clear, so I checked the 
hadoop-hbase-thrift2-xxx.log to find useful information:
0.94.10 hadoop-hbase-thrift2-xxx.log:
2014-12-05 09:54:48,533 INFO org.apache.zookeeper.ZooKeeper: Client 
environment:java.io.tmpdir=/tmp
2014-12-05 09:54:48,533 INFO org.apache.zookeeper.ZooKeeper: Client 
environment:java.compiler=NA
2014-12-05 09:54:48,533 INFO org.apache.zookeeper.ZooKeeper: Client 
environment:os.name=Linux
2014-12-05 09:54:48,533 INFO org.apache.zookeeper.ZooKeeper: Client 
environment:os.arch=amd64
2014-12-05 09:54:48,533 INFO org.apache.zookeeper.ZooKeeper: Client 
environment:os.version=2.6.32-279.2.1.el6.x86_64
2014-12-05 09:54:48,533 INFO org.apache.zookeeper.ZooKeeper: Client 
environment:user.name=hadoop
2014-12-05 09:54:48,533 INFO org.apache.zookeeper.ZooKeeper: Client 
environment:user.home=/usr/home/hadoop
2014-12-05 09:54:48,533 INFO org.apache.zookeeper.ZooKeeper: Client 
environment:user.dir=/usr/home/hadoop
2014-12-05 09:54:48,535 INFO org.apache.zookeeper.ZooKeeper: Initiating client 
connection, 

connectString=zk4.mars.grid.xxx.cn:2181,zk3.mars.grid.xxx.cn:2181,zk2.mars.grid.xxx.cn:2181,zk1.mars.grid.xxx.cn:2181,zk5.mars.grid.xxx.cn:2181
 sessionTimeout=18 

watcher=hconnection
2014-12-05 09:54:48,569 INFO 
org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier of this 
process is 23...@h112158.mars.grid.xxx.cn
2014-12-05 09:54:48,575 INFO org.apache.zookeeper.ClientCnxn: Opening socket 
connection to server 10.77.112.114/10.77.112.114:2181. Will not attempt to 
authenticate 

using SASL (unknown error)
2014-12-05 09:54:48,580 INFO org.apache.zookeeper.ClientCnxn: Socket connection 
established to 10.77.112.114/10.77.112.114:2181, initiating session
2014-12-05 09:54:48,592 INFO org.apache.zookeeper.ClientCnxn: Session 
establishment complete on server 10.77.112.114/10.77.112.114:2181, sessionid = 
0x44211bee3b3e00f, 

negotiated timeout = 6
2014-12-05 09:54:48,637 DEBUG 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation: 
Looked up root region location, 

connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@77f71eb7;
 serverName=10.77.112.160,60020,1417408116358
2014-12-05 09:54:48,801 DEBUG 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation: 
Cached location for .META.,,1.1028785192 is 

10.77.112.160:60020
2014-12-05 09:54:49,008 DEBUG org.apache.hadoop.hbase.client.MetaScanner: 
Scanning .META. starting at row=member,,00 for max=10 rows using 

org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@77f71eb7
2014-12-05 09:54:49,017 DEBUG 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation: 
Cached location for 

member,,1417606385770.9d875e683ae567944f148d58bbfe05fb. is 10.77.112.161:60020

hbase 0.96.10 hbase-hadoop-thrift2-xxx.log:
2014-12-04 10:45:40,886 INFO  [pool-1-thread-1] zookeeper.ZooKeeper: Client 
environment:java.library.path=/usr/local/hadoop-2.2.0/lib/native
2014-12-04 10:45:40,886 INFO  [pool-1-thread-1] zookeeper.ZooKeeper: Client 
environment:java.io.tmpdir=/tmp
2014-12-04 10:45:40,886 INFO  [pool-1-thread-1] zookeeper.ZooKeeper: Client 
environment:java.compiler=NA
2014-12-04 10:45:40,886 INFO  [pool-1-thread-1] zookeeper.ZooKeeper: Client 
environment:os.name=Linux
2014-12-04 10:45:40,886 INFO  [pool-1-thread-1] zookeeper.ZooKeeper: Client 
environment:os.arch=amd64
2014-12-04 10:45:40,887 INFO  [pool-1-thread-1] zookeeper.ZooKeeper: Client 
environment:os.version=2.6.32-358.el6.x86_64
2014-12-04 10:45:40,887 INFO  [pool-1-thread-1] zookeeper.ZooKeeper: Client 
environment:user.name=hadoop
2014-12-04 10:45:40,887 INFO  [pool-1-thread-1] zookeeper.ZooKeeper: Client 
environment:user.home=/usr/home/hadoop
2014-12-04 10:45:40,887 INFO  [pool-1-thread-1] zookeeper.ZooKeeper: Client 
environment:user.dir=/root
2014-12-04 10:45:40,889 INFO  [pool-1-thread-1] zookeeper.ZooKeeper: Initiating 
client connection, 

connectString=zk4.mars.grid.xxx.cn:2181,zk3.mars.grid.xxx.cn:2181,zk2.mars.grid.xxx.cn:2181,zk1.mars.grid.xxx.cn:2181,zk5.mars.grid.xxx.cn:2181
 sessionTimeout=3 


Re: ByteBuffer Backed Cell - New APIs (HBASE-12358)

2014-12-04 Thread Ted Yu
Thanks for the writeup, Ram.

This feature is targeting 2.0 release, right ?

bq. If one sees hasArray() as false (a DBB backed Cell) and uses the
getXXXArray()
API along with offset and length

Is there example of the above usage pattern ? Within HBase core, we can
make sure the above pattern doesn't exist, right ?

Cheers

On Thu, Dec 4, 2014 at 7:24 PM, ramkrishna vasudevan 
ramkrishna.s.vasude...@gmail.com wrote:

 Hi Devs

 This write up is to provide a brief idea  on why we need a BB backed cell
 and what are the items that we need to take care before introducing new
 APIs in Cell that are BB backed.

 Pls refer to https://issues.apache.org/jira/browse/HBASE-12358 also and
 its
 parent JIRA https://issues.apache.org/jira/browse/HBASE-11425 for the
 history.

 Coming back to the discussion on new APIs, this discussion is based on
 supporting BB in the read path (write path is not targeted now) so that we
 could work with offheap BBs also. This would avoid copying of data from
 BlockCache to the read path ByteBuffer.

 Assume we will be working with BBs in the read path, We will need to
  introduce *getXXXBuffer() *APIs and also *hasArray()* in Cell itself
 directly.
 If we try to extend the cell or create a new Cell then *everywhere we need
 to do instanceOf check or do type conversion *and that is why adding new
 APIs to Cell interface itself makes sense.

 Plan is to use this *getXXXBuffer()* API through out the read path *instead
 of getXXXArray()*.

 Now there are two ways to use it

 1) Use getXXXBuffer() along with getXXXOffset(), getXXXLength() like how we
 use now for getXXXArray() APIs with the offset and length. Doing so would
 ensure that every where in the filters and CP one has to just replace the
 getXXXArray() with getXXXBuffer() and continue to use getXXXOffset() and
 getXXXLength(). We would do some wrapping of the byte[] with a BB incase of
 KeyValue type of cells so that getXXXBuffer along with offset and length
 holds true everywhere. Note that here if hasArray is true(for KV case) then
 getXXXArray() would also work.

 2)The other way of using this is that use only getXXXBuffer() API and
 ensure that the BB is always duplicated/sliced and only the portion of the
 total BB is returned which represents the individual component of the Cell.
 In this case there is no use of getXXXOffset() (as it is going to be 0) and
 getXXXLength() is any way going to be the sliced BB's limit.

 But in the 2nd approach we may end up in creating lot of small objects even
 while doing comparison.

 Now the next problem that comes is what to do with the getXXXArray() APIs.
 If one sees hasArray() as false (a DBB backed Cell) and uses the
 getXXXArray() API along with offset and length - what should we do. Should
 we create a byte[] from the DBB and return it? Then in that case what would
 should the *getXXXOffset() return for a getXXXBuffer or getXXXArray()?*

 If we go with the 2nd approach then getXXXBuffer() should be clearly
 documented saying that it has to be used without getXXXOffset() and
 getXXXLength() and use getXXXOffset() and getXXXLength() only with
 getXXXArray().

 Now if a Cell is backed by on heap BB then we could definitely return
 getXXXArray() also - but what to return in the getXXXOffset() would be
 determined by what approach to use for getXXXBuffer(). (based on (1) and
 (2)).

 We wanted to open up this topic now so that to get some feedback on what
 could be an option here. Since it is an user facing Interface we need to be
 careful with this.

 I would suggest that whenever a Cell is *BB backed*(Onheap or offheap)
 always *hasArray() would be false* in that Cell impl.

 Every where we would use getXXXBuffer() along with getXXXOffest() and
 getXXXLength(). Even in case of KV we could wrap the byte[] with BB so that
 we have uniformity through the read code and we don't have too many 'if'
 else conditions.

 When ever *hasArray() is false* - using getXXXArray() API would throw
 *UnSupportedOperation
 Exception*.

 As said if we want *getXXXArray()* to be supported as per the existing way
 then getXXXBuffer() and getXXXOffset(), getXXXLength() should be clearly
 documented.

 Thoughts!!!

 Regards
 Ram  Anoop



Re: ByteBuffer Backed Cell - New APIs (HBASE-12358)

2014-12-04 Thread ramkrishna vasudevan
bq. Is there example of the above usage pattern?
Just take the cases of Filters and CPs where a Cell is exposed to the user in
the read path: that cell could have hasArray() as false (Cells backed
by DBB) or true (Cells that are coming from the Memstore).
bq. Within HBase core, we can make sure the above pattern doesn't exist, right?
In HBase core code we could definitely avoid the pattern and in fact always
go with getXXXBuffer everywhere (using getXXXOffset and getXXXLength); it
depends on which of the approaches (1) or (2) we take. (1) could be preferable
as multiple new small objects can be avoided.
It would also help KeyValue-type Cells to be used with getXXXBuffer
along with getXXXOffset and getXXXLength.

Regards
Ram


On Fri, Dec 5, 2014 at 9:24 AM, Ted Yu yuzhih...@gmail.com wrote:

 Thanks for the writeup, Ram.

 This feature is targeting 2.0 release, right ?

 bq. If one sees hasArray() as false (a DBB backed Cell) and uses the
 getXXXArray()
 API along with offset and length

 Is there example of the above usage pattern ? Within HBase core, we can
 make sure the above pattern doesn't exist, right ?

 Cheers

 On Thu, Dec 4, 2014 at 7:24 PM, ramkrishna vasudevan 
 ramkrishna.s.vasude...@gmail.com wrote:

  Hi Devs
 
  This write up is to provide a brief idea  on why we need a BB backed cell
  and what are the items that we need to take care before introducing new
  APIs in Cell that are BB backed.
 
  Pls refer to https://issues.apache.org/jira/browse/HBASE-12358 also and
  its
  parent JIRA https://issues.apache.org/jira/browse/HBASE-11425 for the
  history.
 
  Coming back to the discussion on new APIs, this discussion is based on
  supporting BB in the read path (write path is not targeted now) so that
 we
  could work with offheap BBs also. This would avoid copying of data from
  BlockCache to the read path ByteBuffer.
 
  Assume we will be working with BBs in the read path, We will need to
   introduce *getXXXBuffer() *APIs and also *hasArray()* in Cell itself
  directly.
  If we try to extend the cell or create a new Cell then *everywhere we
 need
  to do instanceOf check or do type conversion *and that is why adding new
  APIs to Cell interface itself makes sense.
 
  Plan is to use this *getXXXBuffer()* API through out the read path
 *instead
  of getXXXArray()*.
 
  Now there are two ways to use it
 
  1) Use getXXXBuffer() along with getXXXOffset(), getXXXLength() like how
 we
  use now for getXXXArray() APIs with the offset and length. Doing so would
  ensure that every where in the filters and CP one has to just replace the
  getXXXArray() with getXXXBuffer() and continue to use getXXXOffset() and
  getXXXLength(). We would do some wrapping of the byte[] with a BB incase
 of
  KeyValue type of cells so that getXXXBuffer along with offset and length
  holds true everywhere. Note that here if hasArray is true(for KV case)
 then
  getXXXArray() would also work.
 
  2)The other way of using this is that use only getXXXBuffer() API and
  ensure that the BB is always duplicated/sliced and only the portion of
 the
  total BB is returned which represents the individual component of the
 Cell.
  In this case there is no use of getXXXOffset() (as it is going to be 0)
 and
  getXXXLength() is any way going to be the sliced BB's limit.
 
  But in the 2nd approach we may end up in creating lot of small objects
 even
  while doing comparison.
 
  Now the next problem that comes is what to do with the getXXXArray()
 APIs.
  If one sees hasArray() as false (a DBB backed Cell) and uses the
  getXXXArray() API along with offset and length - what should we do.
 Should
  we create a byte[] from the DBB and return it? Then in that case what
 would
  should the *getXXXOffset() return for a getXXXBuffer or getXXXArray()?*
 
  If we go with the 2nd approach then getXXXBuffer() should be clearly
  documented saying that it has to be used without getXXXOffset() and
  getXXXLength() and use getXXXOffset() and getXXXLength() only with
  getXXXArray().
 
  Now if a Cell is backed by on heap BB then we could definitely return
  getXXXArray() also - but what to return in the getXXXOffset() would be
  determined by what approach to use for getXXXBuffer(). (based on (1) and
  (2)).
 
  We wanted to open up this topic now so that to get some feedback on what
  could be an option here. Since it is an user facing Interface we need to
 be
  careful with this.
 
  I would suggest that whenever a Cell is *BB backed*(Onheap or offheap)
  always *hasArray() would be false* in that Cell impl.
 
  Every where we would use getXXXBuffer() along with getXXXOffest() and
  getXXXLength(). Even in case of KV we could wrap the byte[] with BB so
 that
  we have uniformity through the read code and we don't have too many 'if'
  else conditions.
 
  When ever *hasArray() is false* - using getXXXArray() API would throw
  *UnSupportedOperation
  Exception*.
 
  As said if we want *getXXXArray()* to be supported as per the existing
 way
  then 

Re: ByteBuffer Backed Cell - New APIs (HBASE-12358)

2014-12-04 Thread Ted Yu
bq. (1) could be preferable as multiple new small objects can be avoided.

Agreed. +1

On Thu, Dec 4, 2014 at 8:03 PM, ramkrishna vasudevan 
ramkrishna.s.vasude...@gmail.com wrote:

 Is there example of the above usage pattern ?
 Just take cases of Filters and CP where a Cell is exposed to the user in
 the read path and that cell could be having hasArray - false (Cells backed
 by DBB) or true (Cells that are coming from Memstore).
 Within HBase core, we can
 make sure the above pattern doesn't exist, right ?
 In HBase core code we could definitely avoid the pattern and infact always
 with getXXXBuffer everywhere (use getXXXOffset and getXXXLength) depends on
 either (1) or (2) approach that we take. (1) could be preferable as
 multiple new small objects can be avoided.
 Also would help in KeyValue type Cells also to be used with getXXXBuffer
 along with getXXXOffset and getXXXLength.

 Regards
 Ram


 On Fri, Dec 5, 2014 at 9:24 AM, Ted Yu yuzhih...@gmail.com wrote:

  Thanks for the writeup, Ram.
 
  This feature is targeting 2.0 release, right ?
 
  bq. If one sees hasArray() as false (a DBB backed Cell) and uses the
  getXXXArray()
  API along with offset and length
 
  Is there example of the above usage pattern ? Within HBase core, we can
  make sure the above pattern doesn't exist, right ?
 
  Cheers
 
  On Thu, Dec 4, 2014 at 7:24 PM, ramkrishna vasudevan 
  ramkrishna.s.vasude...@gmail.com wrote:
 
   Hi Devs
  
   This write up is to provide a brief idea  on why we need a BB backed
 cell
   and what are the items that we need to take care before introducing new
   APIs in Cell that are BB backed.
  
   Pls refer to https://issues.apache.org/jira/browse/HBASE-12358 also
 and
   its
   parent JIRA https://issues.apache.org/jira/browse/HBASE-11425 for the
   history.
  
   Coming back to the discussion on new APIs, this discussion is based on
   supporting BB in the read path (write path is not targeted now) so that
  we
   could work with offheap BBs also. This would avoid copying of data from
   BlockCache to the read path ByteBuffer.
  
   Assume we will be working with BBs in the read path, We will need to
introduce *getXXXBuffer() *APIs and also *hasArray()* in Cell itself
   directly.
   If we try to extend the cell or create a new Cell then *everywhere we
  need
   to do instanceOf check or do type conversion *and that is why adding
 new
   APIs to Cell interface itself makes sense.
  
   Plan is to use this *getXXXBuffer()* API through out the read path
  *instead
   of getXXXArray()*.
  
   Now there are two ways to use it
  
   1) Use getXXXBuffer() along with getXXXOffset(), getXXXLength() like
 how
  we
   use now for getXXXArray() APIs with the offset and length. Doing so
 would
   ensure that every where in the filters and CP one has to just replace
 the
   getXXXArray() with getXXXBuffer() and continue to use getXXXOffset()
 and
   getXXXLength(). We would do some wrapping of the byte[] with a BB
 incase
  of
   KeyValue type of cells so that getXXXBuffer along with offset and
 length
   holds true everywhere. Note that here if hasArray is true(for KV case)
  then
   getXXXArray() would also work.
  
   2)The other way of using this is that use only getXXXBuffer() API and
   ensure that the BB is always duplicated/sliced and only the portion of
  the
   total BB is returned which represents the individual component of the
  Cell.
   In this case there is no use of getXXXOffset() (as it is going to be 0)
  and
   getXXXLength() is any way going to be the sliced BB's limit.
  
   But in the 2nd approach we may end up in creating lot of small objects
  even
   while doing comparison.
  
   Now the next problem that comes is what to do with the getXXXArray()
  APIs.
   If one sees hasArray() as false (a DBB backed Cell) and uses the
   getXXXArray() API along with offset and length - what should we do.
  Should
   we create a byte[] from the DBB and return it? Then in that case what
  would
   should the *getXXXOffset() return for a getXXXBuffer or getXXXArray()?*
  
   If we go with the 2nd approach then getXXXBuffer() should be clearly
   documented saying that it has to be used without getXXXOffset() and
   getXXXLength() and use getXXXOffset() and getXXXLength() only with
   getXXXArray().
  
   Now if a Cell is backed by on heap BB then we could definitely return
   getXXXArray() also - but what to return in the getXXXOffset() would be
   determined by what approach to use for getXXXBuffer(). (based on (1)
 and
   (2)).
  
   We wanted to open up this topic now so that to get some feedback on
 what
   could be an option here. Since it is an user facing Interface we need
 to
  be
   careful with this.
  
   I would suggest that whenever a Cell is *BB backed*(Onheap or offheap)
   always *hasArray() would be false* in that Cell impl.
  
   Every where we would use getXXXBuffer() along with getXXXOffest() and
   getXXXLength(). Even in case of KV we could wrap the byte[] with BB so
 

RE: Please welcome our latest committer, Sean Busbey

2014-12-04 Thread ashish singhi
Congrats, Sean!

-Original Message-
From: saint@gmail.com [mailto:saint@gmail.com] On Behalf Of Stack
Sent: 05 December 2014 01:40
To: HBase Dev List
Subject: Please welcome our latest committer, Sean Busbey

Sean has been doing excellent work around these environs. Your PMC made him a 
committer in recognition.  Welcome Sean!

St.Ack


[jira] [Created] (HBASE-12639) Backport HBASE-12565 Race condition in HRegion.batchMutate() causes partial data to be written when region closes

2014-12-04 Thread Keith David Winkler (JIRA)
Keith David Winkler created HBASE-12639:
---

 Summary: Backport HBASE-12565 Race condition in 
HRegion.batchMutate() causes partial data to be written when region closes
 Key: HBASE-12639
 URL: https://issues.apache.org/jira/browse/HBASE-12639
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.8
Reporter: Keith David Winkler


Backport HBASE-12565 Race condition in HRegion.batchMutate() causes partial 
data to be written when region closes



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12640) Add doAs support for Thrift Server

2014-12-04 Thread Srikanth Srungarapu (JIRA)
Srikanth Srungarapu created HBASE-12640:
---

 Summary: Add doAs support for Thrift Server
 Key: HBASE-12640
 URL: https://issues.apache.org/jira/browse/HBASE-12640
 Project: HBase
  Issue Type: Improvement
  Components: Thrift
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu


In HBASE-11349, impersonation support was added to the Thrift Server, but the 
limitation is that the thrift client must use the same set of credentials throughout 
the session. These changes will help circumvent this problem by allowing the user 
to populate the doAs parameter as needed. 
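
For background, the general Hadoop proxy-user pattern that a per-request doAs 
parameter maps onto looks roughly like the sketch below. This is only an 
illustration of UserGroupInformation.createProxyUser() plus doAs(), not the actual 
patch for this issue; handleRequest and its doAsUser parameter are hypothetical 
names, and the proxy-user whitelisting configuration required on a secure cluster 
is omitted.

import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.security.UserGroupInformation;

public class DoAsSketch {
  // Hypothetical handler: doAsUser would come from the per-request doAs
  // parameter; when present, the work runs as a proxy user layered on top
  // of the server's authenticated login user.
  static void handleRequest(final String doAsUser) throws Exception {
    UserGroupInformation serverUser = UserGroupInformation.getLoginUser();
    UserGroupInformation effectiveUser = (doAsUser == null)
        ? serverUser
        : UserGroupInformation.createProxyUser(doAsUser, serverUser);

    effectiveUser.doAs(new PrivilegedExceptionAction<Void>() {
      @Override
      public Void run() throws Exception {
        // Perform the HBase operation here (e.g. the thrift put/get) so that
        // authorization sees doAsUser instead of the server principal.
        return null;
      }
    });
  }
}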



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Please welcome our latest committer, Sean Busbey

2014-12-04 Thread mail list
Congrats, Sean!

On Dec 5, 2014, at 12:18, ashish singhi ashish.sin...@huawei.com wrote:

 Congrats, Sean!
 
 -Original Message-
 From: saint@gmail.com [mailto:saint@gmail.com] On Behalf Of Stack
 Sent: 05 December 2014 01:40
 To: HBase Dev List
 Subject: Please welcome our latest committer, Sean Busbey
 
 Sean has been doing excellent work around these environs. Your PMC made him a 
 committer in recognition.  Welcome Sean!
 
 St.Ack



Re: ByteBuffer Backed Cell - New APIs (HBASE-12358)

2014-12-04 Thread Jonathan Hsieh
Ram,

Can we essentially do both by creating the new getXxxBuffer, and then also
creating new offset and length APIs -- getXxx(Bb|Buffer|Buf|B)Offset and
getXxx(Bb|Buffer|Buf|B)Length?

The old getXxxArray could use the old getXxxOffset and getXxxLength calls.
Also we'd deprecate all of these and provide a sliced-and-diced version
that would always have offset 0.

This way we wouldn't conflate the Bb and byte[] offsets and lengths. Also
we could, behind the scenes, convert Bbs to byte[] arrays and convert
byte[]s into Bbs while maintaining the same interface.

Jon.
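
A minimal sketch of that idea, with made-up names rather than a committed API:
the byte[] accessors keep their existing offset/length and get deprecated, while
the buffer accessor gets its own offset/length so the two coordinate systems are
never conflated.

import java.nio.ByteBuffer;

// Illustrative names only, not a committed API: the array view and the buffer
// view each carry their own offset/length.
interface DualViewValue {
  @Deprecated byte[] getValueArray();   // old trio, kept for compatibility
  @Deprecated int getValueOffset();
  @Deprecated int getValueLength();

  ByteBuffer getValueBuffer();          // new trio, buffer coordinates
  int getValueBufferOffset();
  int getValueBufferLength();
}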



On Thu, Dec 4, 2014 at 7:24 PM, ramkrishna vasudevan 
ramkrishna.s.vasude...@gmail.com wrote:

 Hi Devs

 This write up is to provide a brief idea  on why we need a BB backed cell
 and what are the items that we need to take care before introducing new
 APIs in Cell that are BB backed.

 Pls refer to https://issues.apache.org/jira/browse/HBASE-12358 also and
 its
 parent JIRA https://issues.apache.org/jira/browse/HBASE-11425 for the
 history.

 Coming back to the discussion on new APIs, this discussion is based on
 supporting BB in the read path (write path is not targeted now) so that we
 could work with offheap BBs also. This would avoid copying of data from
 BlockCache to the read path ByteBuffer.

 Assume we will be working with BBs in the read path, We will need to
  introduce *getXXXBuffer() *APIs and also *hasArray()* in Cell itself
 directly.
 If we try to extend the cell or create a new Cell then *everywhere we need
 to do instanceOf check or do type conversion *and that is why adding new
 APIs to Cell interface itself makes sense.

 Plan is to use this *getXXXBuffer()* API through out the read path *instead
 of getXXXArray()*.

 Now there are two ways to use it

 1) Use getXXXBuffer() along with getXXXOffset(), getXXXLength() like how we
 use now for getXXXArray() APIs with the offset and length. Doing so would
 ensure that every where in the filters and CP one has to just replace the
 getXXXArray() with getXXXBuffer() and continue to use getXXXOffset() and
 getXXXLength(). We would do some wrapping of the byte[] with a BB incase of
 KeyValue type of cells so that getXXXBuffer along with offset and length
 holds true everywhere. Note that here if hasArray is true(for KV case) then
 getXXXArray() would also work.

 2)The other way of using this is that use only getXXXBuffer() API and
 ensure that the BB is always duplicated/sliced and only the portion of the
 total BB is returned which represents the individual component of the Cell.
 In this case there is no use of getXXXOffset() (as it is going to be 0) and
 getXXXLength() is any way going to be the sliced BB's limit.

 But in the 2nd approach we may end up in creating lot of small objects even
 while doing comparison.

 Now the next problem that comes is what to do with the getXXXArray() APIs.
 If one sees hasArray() as false (a DBB backed Cell) and uses the
 getXXXArray() API along with offset and length - what should we do. Should
 we create a byte[] from the DBB and return it? Then in that case what would
 should the *getXXXOffset() return for a getXXXBuffer or getXXXArray()?*

 If we go with the 2nd approach then getXXXBuffer() should be clearly
 documented saying that it has to be used without getXXXOffset() and
 getXXXLength() and use getXXXOffset() and getXXXLength() only with
 getXXXArray().

 Now if a Cell is backed by on heap BB then we could definitely return
 getXXXArray() also - but what to return in the getXXXOffset() would be
 determined by what approach to use for getXXXBuffer(). (based on (1) and
 (2)).

 We wanted to open up this topic now so that to get some feedback on what
 could be an option here. Since it is an user facing Interface we need to be
 careful with this.

 I would suggest that whenever a Cell is *BB backed*(Onheap or offheap)
 always *hasArray() would be false* in that Cell impl.

 Every where we would use getXXXBuffer() along with getXXXOffest() and
 getXXXLength(). Even in case of KV we could wrap the byte[] with BB so that
 we have uniformity through the read code and we don't have too many 'if'
 else conditions.

 When ever *hasArray() is false* - using getXXXArray() API would throw
 *UnSupportedOperation
 Exception*.

 As said if we want *getXXXArray()* to be supported as per the existing way
 then getXXXBuffer() and getXXXOffset(), getXXXLength() should be clearly
 documented.

 Thoughts!!!

 Regards
 Ram  Anoop




-- 
// Jonathan Hsieh (shay)
// HBase Tech Lead, Software Engineer, Cloudera
// j...@cloudera.com // @jmhsieh