[jira] [Commented] (HBASE-16612) Use array to cache Types for KeyValue.Type.codeToType

2016-09-11 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15483288#comment-15483288
 ] 

Phil Yang commented on HBASE-16612:
---

I ran a benchmark comparing three approaches: the current codeToType loop, an array lookup, and a map lookup.
{code}
// ARR is assumed to be a Type[256] lookup table and MAP a Map<Byte, Type>,
// both pre-populated from the Type enum, keyed by each type's code.
public static Type arrayToType(final byte b) {
  return ARR[0xff & b];
}

public static Type mapToType(final byte b) {
  return MAP.get(b);
}
{code}

{noformat}
Benchmark               Mode  Cnt          Score         Error  Units
Bench.codeToType(4)    thrpt   20  110060552.455 ± 1683158.201  ops/s
Bench.codeToType(14)   thrpt   20   89055208.888 ± 1478463.064  ops/s
Bench.arrayToType(4)   thrpt   20  298451203.883 ± 8411142.284  ops/s
Bench.arrayToType(14)  thrpt   20  299743528.870 ± 5011623.643  ops/s
Bench.mapToType(4)     thrpt   20  163042309.727 ± 3548380.929  ops/s
Bench.mapToType(14)    thrpt   20  163546726.415 ± 2980305.459  ops/s
{noformat}

4 is the code for Put and 14 is the code for DeleteFamily, which is the largest 
code among the normal types. We can see that the array has the best performance.
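
For reference, here is a minimal, self-contained sketch of the array-cache idea. 
It uses a simplified stand-in for the Type enum (a subset of the real codes) 
rather than the actual HBase patch; names and layout are illustrative only.
{code}
// Illustrative sketch only: a simplified stand-in for KeyValue.Type with a
// subset of the real codes, plus the array-based cache.
public class TypeCodeLookup {
  public enum Type {
    Minimum((byte) 0), Put((byte) 4), Delete((byte) 8),
    DeleteFamilyVersion((byte) 10), DeleteColumn((byte) 12),
    DeleteFamily((byte) 14), Maximum((byte) 255);

    private final byte code;
    Type(byte code) { this.code = code; }
    public byte getCode() { return code; }
  }

  // One slot per possible byte value; unused slots stay null.
  private static final Type[] CODE_ARRAY = new Type[256];
  static {
    for (Type t : Type.values()) {
      CODE_ARRAY[t.getCode() & 0xff] = t;
    }
  }

  // O(1) lookup instead of looping over values() on every call.
  public static Type codeToType(final byte b) {
    Type t = CODE_ARRAY[b & 0xff];
    if (t != null) {
      return t;
    }
    throw new RuntimeException("Unknown code " + b);
  }

  public static void main(String[] args) {
    System.out.println(codeToType((byte) 4));   // Put
    System.out.println(codeToType((byte) 14));  // DeleteFamily
  }
}
{code}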

> Use array to cache Types for KeyValue.Type.codeToType
> -
>
> Key: HBASE-16612
> URL: https://issues.apache.org/jira/browse/HBASE-16612
> Project: HBase
>  Issue Type: Bug
>Reporter: Phil Yang
>Assignee: Phil Yang
>Priority: Minor
>
> We don't rely on enum ordinals in KeyValue.Type; each Type has its own code. In 
> codeToType, we use a loop to find the Type, which is not a good idea. We can 
> just use an array[256] to cache all types.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16612) Use array to cache Types for KeyValue.Type.codeToType

2016-09-11 Thread Phil Yang (JIRA)
Phil Yang created HBASE-16612:
-

 Summary: Use array to cache Types for KeyValue.Type.codeToType
 Key: HBASE-16612
 URL: https://issues.apache.org/jira/browse/HBASE-16612
 Project: HBase
  Issue Type: Bug
Reporter: Phil Yang
Assignee: Phil Yang
Priority: Minor


We don't rely on enum ordinals in KeyValue.Type; each Type has its own code. In 
codeToType, we use a loop to find the Type, which is not a good idea. We can 
just use an array[256] to cache all types.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16592) Unify Delete request with AP

2016-09-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15483176#comment-15483176
 ] 

Hadoop QA commented on HBASE-16592:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
39s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
41s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
21s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
22s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
58s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
23m 56s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 51s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 84m 33s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
29s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 124m 50s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.TestHRegion |
| Timed out junit tests | 
org.apache.hadoop.hbase.security.token.TestGenerateDelegationToken |
|   | org.apache.hadoop.hbase.security.access.TestNamespaceCommands |
|   | org.apache.hadoop.hbase.snapshot.TestExportSnapshot |
|   | org.apache.hadoop.hbase.security.access.TestWithDisabledAuthorization |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:7bda515 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12827957/HBASE-16592.v1.patch |
| JIRA Issue | HBASE-16592 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux e85dbdf16ebd 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / c19d2ca |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| uni

[jira] [Commented] (HBASE-16607) Make NoncedRegionServerCallable extend CancellableRegionServerCallable

2016-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15483169#comment-15483169
 ] 

Hudson commented on HBASE-16607:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #1586 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/1586/])
HBASE-16607 Make NoncedRegionServerCallable extend (chenheng: rev 
c19d2cabbd4c6e312e4926f72d348a5e554cd3dd)
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/NoncedRegionServerCallable.java
* (edit) hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTable.java


> Make NoncedRegionServerCallable extend CancellableRegionServerCallable
> --
>
> Key: HBASE-16607
> URL: https://issues.apache.org/jira/browse/HBASE-16607
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Heng Chen
>Assignee: Heng Chen
> Attachments: HBASE-16607.patch, HBASE-16607.v1.patch
>
>
> This is the first step to unify append and increment with AP.
> After extending CancellableRegionServerCallable, we can remove lots of 
> duplicate code in NoncedRegionServerCallable.
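
As a toy illustration of the refactoring described above (hypothetical class and 
method names, not the real HBase client internals): shared cancellation state 
lives in the base class, so a nonce-aware subclass only has to add the nonce 
handling.
{code}
// Hypothetical sketch; not the actual HBase classes or signatures.
class CancellableCallable<T> {
  private volatile boolean cancelled;

  public void cancel() { cancelled = true; }
  public boolean isCancelled() { return cancelled; }

  // Common guard shared by all subclasses.
  public T call() throws Exception {
    if (cancelled) {
      throw new Exception("call was cancelled");
    }
    return doCall();
  }

  protected T doCall() throws Exception { return null; }
}

// The nonce-aware callable only adds nonce bookkeeping; the cancellation
// logic is inherited instead of being duplicated.
class NoncedCallable<T> extends CancellableCallable<T> {
  private final long nonce;

  NoncedCallable(long nonce) { this.nonce = nonce; }
  public long getNonce() { return nonce; }
}
{code}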



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16606) Remove some duplicate code in HTable

2016-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15483170#comment-15483170
 ] 

Hudson commented on HBASE-16606:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #1586 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/1586/])
HBASE-16606 Remove some duplicate code in HTable (chenheng: rev 
2c3b0f2c0b2d47dfd3a22e1f47f7eb1317d3514f)
* (edit) hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTable.java


> Remove some duplicate code in HTable
> 
>
> Key: HBASE-16606
> URL: https://issues.apache.org/jira/browse/HBASE-16606
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Heng Chen
>Assignee: Heng Chen
>Priority: Minor
> Attachments: HBASE-16606.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16609) Fake cells EmptyByteBufferedCell created in read path not implementing SettableSequenceId

2016-09-11 Thread Yu Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Sun updated HBASE-16609:
---
Description: 
I backported the 2.0 offheap work to hbase-1.1.2, and while testing I 
encountered a problem similar to HBASE-15379. Here is the stack trace:
{noformat}
java.io.IOException: java.lang.UnsupportedOperationException: Cell is not of 
type org.apache.hadoop.hbase.SettableSequenceId
at org.apache.hadoop.hbase.CellUtil.setSequenceId(CellUtil.java:915)
at 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.setCurrentCell(StoreFileScanner.java:203)
at 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.requestSeek(StoreFileScanner.java:338)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:321)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:279)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:821)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.seekAsDirection(StoreScanner.java:809)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:636)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:153)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:5611)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:5750)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5551)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:5528)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:5515)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2125)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2068)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32201
)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:790)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:102)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
{noformat}
This occurs in the read path when offheap is in use, mostly because the 
ByteBuffer-backed Cells don't implement the SettableSequenceId interface. 


  was:
I backport offheap in 2.0 to hbase-1.1.2, and when testing,I encounter a 
similar problem HBASE-14099 ,Here is the stack trace:
{noformat}
java.io.IOException: java.lang.UnsupportedOperationException: Cell is not of 
type org.apache.hadoop.hbase.SettableSequenceId
at org.apache.hadoop.hbase.CellUtil.setSequenceId(CellUtil.java:915)
at 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.setCurrentCell(StoreFileScanner.java:203)
at 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.requestSeek(StoreFileScanner.java:338)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:321)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:279)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:821)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.seekAsDirection(StoreScanner.java:809)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:636)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:153)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:5611)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:5750)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5551)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:5528)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:5515)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2125)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2068)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32201
)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:790)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:102)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
{noformat}
this will occur in read path when offheap is used. mostly due to ByteBuffer 
backed Cells

[jira] [Commented] (HBASE-16609) Fake cells EmptyByteBufferedCell created in read path not implementing SettableSequenceId

2016-09-11 Thread Yu Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15483138#comment-15483138
 ] 

Yu Sun commented on HBASE-16609:


Sorry, I made a mistake; it should be this JIRA: HBASE-15379.

> Fake cells EmptyByteBufferedCell  created in read path not implementing 
> SettableSequenceId 
> ---
>
> Key: HBASE-16609
> URL: https://issues.apache.org/jira/browse/HBASE-16609
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Yu Sun
>Assignee: Yu Sun
> Fix For: 2.0.0
>
> Attachments: HBASE-16609-v1.patch
>
>
> I backported the 2.0 offheap work to hbase-1.1.2, and while testing I 
> encountered a problem similar to HBASE-14099. Here is the stack trace:
> {noformat}
> java.io.IOException: java.lang.UnsupportedOperationException: Cell is not of 
> type org.apache.hadoop.hbase.SettableSequenceId
> at org.apache.hadoop.hbase.CellUtil.setSequenceId(CellUtil.java:915)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.setCurrentCell(StoreFileScanner.java:203)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.requestSeek(StoreFileScanner.java:338)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:321)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:279)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:821)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekAsDirection(StoreScanner.java:809)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:636)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:153)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:5611)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:5750)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5551)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:5528)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:5515)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2125)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2068)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32201
> )
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:790)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:102)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
> {noformat}
> This occurs in the read path when offheap is in use, mostly because the 
> ByteBuffer-backed Cells don't implement the SettableSequenceId interface. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14882) Provide a Put API that adds the provided family, qualifier, value without copying

2016-09-11 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15483085#comment-15483085
 ] 

Anoop Sam John commented on HBASE-14882:


Thanks for the tests.
1. Yes, ideally those interfaces are not required at the client end. Here it 
seems the Put#addImmutable() API is being used by some WAL replay path, and on 
the cells coming in for replay we tend to set the seqId. That is a fully 
server-side code path, but the client API is used by the server as well, so we 
need to be careful here. Can you check all usages of the affected Put APIs 
(addImmutable)?
2. Streamable is not a must anyway. If it is implemented, the Codec can 
serialize this Cell to the wire in an optimized way, in KV format. Since the 
serialization to the wire is always in KV format, deserialization by the codec 
stays simple: it just deserializes into a KV. So there is none of the confusion 
you mentioned above.

> Provide a Put API that adds the provided family, qualifier, value without 
> copying
> -
>
> Key: HBASE-14882
> URL: https://issues.apache.org/jira/browse/HBASE-14882
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 1.2.0
>Reporter: Jerry He
>Assignee: Xiang Li
> Fix For: 2.0.0
>
> Attachments: HBASE-14882.master.000.patch, 
> HBASE-14882.master.001.patch
>
>
> In the Put API, we have addImmutable()
> {code}
>  /**
>* See {@link #addColumn(byte[], byte[], byte[])}. This version expects
>* that the underlying arrays won't change. It's intended
>* for usage internal HBase to and for advanced client applications.
>*/
>   public Put addImmutable(byte [] family, byte [] qualifier, byte [] value)
> {code}
> But in the implementation, the family, qualifier and value are still copied 
> locally to create the KeyValue.
> We should provide an API that truly uses the immutable family, qualifier and 
> value without copying.
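
As an aside, a minimal sketch of the "wrap, don't copy" idea the description 
asks for, using a hypothetical cell class rather than the real Put/Cell APIs; 
the point is only that the caller's arrays are referenced directly instead of 
being copied into a KeyValue.
{code}
// Hypothetical illustration; not the real org.apache.hadoop.hbase.Cell API.
final class ImmutableArrayCell {
  private final byte[] family;
  private final byte[] qualifier;
  private final byte[] value;

  // The arrays are stored as-is (no Arrays.copyOf); the caller promises
  // not to mutate them afterwards -- the "immutable" contract.
  ImmutableArrayCell(byte[] family, byte[] qualifier, byte[] value) {
    this.family = family;
    this.qualifier = qualifier;
    this.value = value;
  }

  byte[] getFamilyArray()    { return family; }
  byte[] getQualifierArray() { return qualifier; }
  byte[] getValueArray()     { return value; }
}
{code}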



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15624) Move master branch/hbase-2.0.0 to jdk-8 only

2016-09-11 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15483079#comment-15483079
 ] 

Duo Zhang commented on HBASE-15624:
---

Perfect. Any concerns? [~busbey] [~stack]

Thanks.

> Move master branch/hbase-2.0.0 to jdk-8 only
> 
>
> Key: HBASE-15624
> URL: https://issues.apache.org/jira/browse/HBASE-15624
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Assignee: Duo Zhang
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-15624-branch-1.patch, HBASE-15624.patch, 
> hbase-15624.patch
>
>
> Set build and pom target jvm version as jdk8. We chatted about it here: 
> http://osdir.com/ml/general/2016-04/msg09691.html Set it as blocker on 2.0.0.
> We need to work on YETUS-369 before we can finish up this issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16416) Make NoncedRegionServerCallable extends RegionServerCallable

2016-09-11 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15483074#comment-15483074
 ] 

Guanghao Zhang commented on HBASE-16416:


Invalid; duplicate of HBASE-16607.

> Make NoncedRegionServerCallable extends RegionServerCallable
> 
>
> Key: HBASE-16416
> URL: https://issues.apache.org/jira/browse/HBASE-16416
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Minor
> Attachments: HBASE-16416.patch
>
>
> After HBASE-16308, there is a new class NoncedRegionServerCallable which 
> extends AbstractRegionServerCallable. But it has some duplicate methods with 
> RegionServerCallable, so we can make NoncedRegionServerCallable extend 
> RegionServerCallable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16610) Unify append, increment with AP

2016-09-11 Thread Heng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Heng Chen updated HBASE-16610:
--
Attachment: HBASE-16610.patch

blocked by HBASE-16592

> Unify append, increment with AP
> ---
>
> Key: HBASE-16610
> URL: https://issues.apache.org/jira/browse/HBASE-16610
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Heng Chen
>Assignee: Heng Chen
> Attachments: HBASE-16610.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16604) Scanner retries on IOException can cause the scans to miss data

2016-09-11 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-16604:
--
Fix Version/s: (was: 1.2.3)
   1.2.4

> Scanner retries on IOException can cause the scans to miss data 
> 
>
> Key: HBASE-16604
> URL: https://issues.apache.org/jira/browse/HBASE-16604
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Scanners
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 1.2.4
>
>
> Debugging an ITBLL failure, where the Verify did not "see" all the data in 
> the cluster, I've noticed that if we end up getting a generic IOException 
> from the HFileReader level, we may end up missing the rest of the data in the 
> region. I was able to manually test this, and this stack trace helps to 
> understand what is going on: 
> {code}
> 2016-09-09 16:27:15,633 INFO  [hconnection-0x71ad3d8a-shared--pool21-t9] 
> client.ScannerCallable(376): Open scanner=1 for 
> scan={"loadColumnFamiliesOnDemand":null,"startRow":"","stopRow":"","batch":-1,"cacheBlocks":true,"totalColumns":1,"maxResultSize":2097152,"families":{"testFamily":["testFamily"]},"caching":100,"maxVersions":1,"timeRange":[0,9223372036854775807]}
>  on region 
> region=testScanThrowsException,,1473463632707.b2adfb618e5d0fe225c1dc40c0eabfee.,
>  hostname=hw10676,51833,1473463626529, seqNum=2
> 2016-09-09 16:27:15,634 INFO  
> [B.fifo.QRpcServer.handler=5,queue=0,port=51833] 
> regionserver.RSRpcServices(2196): scan request:scanner_id: 1 number_of_rows: 
> 100 close_scanner: false next_call_seq: 0 client_handles_partials: true 
> client_handles_heartbeats: true renew: false
> 2016-09-09 16:27:15,635 INFO  
> [B.fifo.QRpcServer.handler=5,queue=0,port=51833] 
> regionserver.RSRpcServices(2510): Rolling back next call seqId
> 2016-09-09 16:27:15,635 INFO  
> [B.fifo.QRpcServer.handler=5,queue=0,port=51833] 
> regionserver.RSRpcServices(2565): Throwing new 
> ServiceExceptionjava.io.IOException: Could not reseek 
> StoreFileScanner[HFileScanner for reader 
> reader=hdfs://localhost:51795/user/enis/test-data/d6fb1c70-93c1-4099-acb7-5723fc05a737/data/default/testScanThrowsException/b2adfb618e5d0fe225c1dc40c0eabfee/testFamily/5a213cc23b714e5e8e1a140ebbe72f2c,
>  compression=none, cacheConf=blockCache=LruBlockCache{blockCount=0, 
> currentSize=1567264, freeSize=1525578848, maxSize=1527146112, 
> heapSize=1567264, minSize=1450788736, minFactor=0.95, multiSize=725394368, 
> multiFactor=0.5, singleSize=362697184, singleFactor=0.25}, 
> cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
> cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
> prefetchOnOpen=false, firstKey=aaa/testFamily:testFamily/1473463633859/Put, 
> lastKey=zzz/testFamily:testFamily/1473463634271/Put, avgKeyLen=35, 
> avgValueLen=3, entries=17576, length=866998, 
> cur=/testFamily:/OLDEST_TIMESTAMP/Minimum/vlen=0/seqid=0] to key 
> /testFamily:testFamily/LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0
> 2016-09-09 16:27:15,635 DEBUG 
> [B.fifo.QRpcServer.handler=5,queue=0,port=51833] ipc.CallRunner(110): 
> B.fifo.QRpcServer.handler=5,queue=0,port=51833: callId: 26 service: 
> ClientService methodName: Scan size: 26 connection: 192.168.42.75:51903
> java.io.IOException: Could not reseek StoreFileScanner[HFileScanner for 
> reader 
> reader=hdfs://localhost:51795/user/enis/test-data/d6fb1c70-93c1-4099-acb7-5723fc05a737/data/default/testScanThrowsException/b2adfb618e5d0fe225c1dc40c0eabfee/testFamily/5a213cc23b714e5e8e1a140ebbe72f2c,
>  compression=none, cacheConf=blockCache=LruBlockCache{blockCount=0, 
> currentSize=1567264, freeSize=1525578848, maxSize=1527146112, 
> heapSize=1567264, minSize=1450788736, minFactor=0.95, multiSize=725394368, 
> multiFactor=0.5, singleSize=362697184, singleFactor=0.25}, 
> cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
> cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
> prefetchOnOpen=false, firstKey=aaa/testFamily:testFamily/1473463633859/Put, 
> lastKey=zzz/testFamily:testFamily/1473463634271/Put, avgKeyLen=35, 
> avgValueLen=3, entries=17576, length=866998, 
> cur=/testFamily:/OLDEST_TIMESTAMP/Minimum/vlen=0/seqid=0] to key 
> /testFamily:testFamily/LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:224)
>   at 
> org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55)
>   at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:312)
>   at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:268)
>   at 
> org.apache.hadoop.hba

[jira] [Commented] (HBASE-15624) Move master branch/hbase-2.0.0 to jdk-8 only

2016-09-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15483019#comment-15483019
 ] 

Hadoop QA commented on HBASE-15624:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
58s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 13s 
{color} | {color:green} branch-1 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 15s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
31s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 30s 
{color} | {color:green} branch-1 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 15s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_111 {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 6s 
{color} | {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 14s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 14s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 7s 
{color} | {color:red} root in the patch failed with JDK v1.7.0_111. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 7s {color} 
| {color:red} root in the patch failed with JDK v1.7.0_111. {color} |
| {color:red}-1{color} | {color:red} mvneclipse {color} | {color:red} 0m 7s 
{color} | {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 0m 7s 
{color} | {color:red} The patch causes 7 errors with Hadoop v2.4.0. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 0m 13s 
{color} | {color:red} The patch causes 7 errors with Hadoop v2.4.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 0m 20s 
{color} | {color:red} The patch causes 7 errors with Hadoop v2.5.0. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 0m 26s 
{color} | {color:red} The patch causes 7 errors with Hadoop v2.5.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 0m 32s 
{color} | {color:red} The patch causes 7 errors with Hadoop v2.5.2. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 0m 38s 
{color} | {color:red} The patch causes 7 errors with Hadoop v2.6.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 0m 45s 
{color} | {color:red} The patch causes 7 errors with Hadoop v2.6.2. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 0m 51s 
{color} | {color:red} The patch causes 7 errors with Hadoop v2.6.3. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 0m 57s 
{color} | {color:red} The patch causes 7 errors with Hadoop v2.7.1. {color} |
| {color:red}-1{color} | {color:red} hbaseprotoc {color} | {color:red} 0m 7s 
{color} | {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 31s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 6s 
{color} | {color:red} root in the patch failed with JDK v1.7.0_111. {color} |
| {color:red}-1{color} | {col

[jira] [Updated] (HBASE-15624) Move master branch/hbase-2.0.0 to jdk-8 only

2016-09-11 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-15624:
--
Attachment: HBASE-15624-branch-1.patch

Trying the same patch on branch-1. It is supposed to fail, and the default Java 
should be 1.7.

> Move master branch/hbase-2.0.0 to jdk-8 only
> 
>
> Key: HBASE-15624
> URL: https://issues.apache.org/jira/browse/HBASE-15624
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Assignee: Duo Zhang
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-15624-branch-1.patch, HBASE-15624.patch, 
> hbase-15624.patch
>
>
> Set build and pom target jvm version as jdk8. We chatted about it here: 
> http://osdir.com/ml/general/2016-04/msg09691.html Set it as blocker on 2.0.0.
> We need to work on YETUS-369 before we can finish up this issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16609) Fake cells EmptyByteBufferedCell created in read path not implementing SettableSequenceId

2016-09-11 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-16609:
---
Fix Version/s: 2.0.0

> Fake cells EmptyByteBufferedCell  created in read path not implementing 
> SettableSequenceId 
> ---
>
> Key: HBASE-16609
> URL: https://issues.apache.org/jira/browse/HBASE-16609
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Yu Sun
>Assignee: Yu Sun
> Fix For: 2.0.0
>
> Attachments: HBASE-16609-v1.patch
>
>
> I backported the 2.0 offheap work to hbase-1.1.2, and while testing I 
> encountered a problem similar to HBASE-14099. Here is the stack trace:
> {noformat}
> java.io.IOException: java.lang.UnsupportedOperationException: Cell is not of 
> type org.apache.hadoop.hbase.SettableSequenceId
> at org.apache.hadoop.hbase.CellUtil.setSequenceId(CellUtil.java:915)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.setCurrentCell(StoreFileScanner.java:203)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.requestSeek(StoreFileScanner.java:338)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:321)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:279)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:821)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekAsDirection(StoreScanner.java:809)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:636)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:153)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:5611)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:5750)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5551)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:5528)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:5515)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2125)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2068)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32201
> )
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:790)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:102)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
> {noformat}
> This occurs in the read path when offheap is in use, mostly because the 
> ByteBuffer-backed Cells don't implement the SettableSequenceId interface. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16609) Fake cells EmptyByteBufferedCell created in read path not implementing SettableSequenceId

2016-09-11 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-16609:
---
Affects Version/s: 2.0.0

> Fake cells EmptyByteBufferedCell  created in read path not implementing 
> SettableSequenceId 
> ---
>
> Key: HBASE-16609
> URL: https://issues.apache.org/jira/browse/HBASE-16609
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Yu Sun
>Assignee: Yu Sun
> Fix For: 2.0.0
>
> Attachments: HBASE-16609-v1.patch
>
>
> I backported the 2.0 offheap work to hbase-1.1.2, and while testing I 
> encountered a problem similar to HBASE-14099. Here is the stack trace:
> {noformat}
> java.io.IOException: java.lang.UnsupportedOperationException: Cell is not of 
> type org.apache.hadoop.hbase.SettableSequenceId
> at org.apache.hadoop.hbase.CellUtil.setSequenceId(CellUtil.java:915)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.setCurrentCell(StoreFileScanner.java:203)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.requestSeek(StoreFileScanner.java:338)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:321)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:279)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:821)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekAsDirection(StoreScanner.java:809)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:636)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:153)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:5611)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:5750)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5551)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:5528)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:5515)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2125)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2068)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32201
> )
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:790)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:102)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
> {noformat}
> This occurs in the read path when offheap is in use, mostly because the 
> ByteBuffer-backed Cells don't implement the SettableSequenceId interface. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16609) Fake cells EmptyByteBufferedCell created in read path not implementing SettableSequenceId

2016-09-11 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15482984#comment-15482984
 ] 

Anoop Sam John commented on HBASE-16609:


bq.I encounter a similar problem HBASE-14099 ,
Is this correct jira? Typo?

> Fake cells EmptyByteBufferedCell  created in read path not implementing 
> SettableSequenceId 
> ---
>
> Key: HBASE-16609
> URL: https://issues.apache.org/jira/browse/HBASE-16609
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Yu Sun
>Assignee: Yu Sun
> Fix For: 2.0.0
>
> Attachments: HBASE-16609-v1.patch
>
>
> I backported the 2.0 offheap work to hbase-1.1.2, and while testing I 
> encountered a problem similar to HBASE-14099. Here is the stack trace:
> {noformat}
> java.io.IOException: java.lang.UnsupportedOperationException: Cell is not of 
> type org.apache.hadoop.hbase.SettableSequenceId
> at org.apache.hadoop.hbase.CellUtil.setSequenceId(CellUtil.java:915)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.setCurrentCell(StoreFileScanner.java:203)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.requestSeek(StoreFileScanner.java:338)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:321)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:279)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:821)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekAsDirection(StoreScanner.java:809)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:636)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:153)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:5611)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:5750)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5551)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:5528)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:5515)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2125)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2068)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32201
> )
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:790)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:102)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
> {noformat}
> This occurs in the read path when offheap is in use, mostly because the 
> ByteBuffer-backed Cells don't implement the SettableSequenceId interface. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16609) Fake cells EmptyByteBufferedCell created in read path not implementing SettableSequenceId

2016-09-11 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15482979#comment-15482979
 ] 

Anoop Sam John commented on HBASE-16609:


Oh! This issue was fixed for EmptyCell some time back, but we missed 
EmptyByteBufferedCell. +1. Thanks for the find.
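
To make the failure mode concrete, here is a small self-contained sketch with 
hypothetical names (not the actual patch): a utility in the shape of 
CellUtil.setSequenceId only works for cells that implement a settable 
interface, so a fake/empty cell type that omits the interface triggers exactly 
this UnsupportedOperationException.
{code}
// Hypothetical sketch of the failure mode; not the real HBase classes.
interface SettableSeqId {
  void setSequenceId(long seqId);
}

class Cells {
  // Mirrors the shape of CellUtil.setSequenceId: it only works when the
  // cell implements the settable interface, otherwise it throws.
  static void setSequenceId(Object cell, long seqId) {
    if (cell instanceof SettableSeqId) {
      ((SettableSeqId) cell).setSequenceId(seqId);
    } else {
      throw new UnsupportedOperationException(
          "Cell is not of type SettableSeqId");
    }
  }
}

// A fake/empty cell that implements the interface no longer throws.
class EmptyFakeCell implements SettableSeqId {
  private long seqId;

  @Override
  public void setSequenceId(long seqId) { this.seqId = seqId; }
}
{code}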

> Fake cells EmptyByteBufferedCell  created in read path not implementing 
> SettableSequenceId 
> ---
>
> Key: HBASE-16609
> URL: https://issues.apache.org/jira/browse/HBASE-16609
> Project: HBase
>  Issue Type: Bug
>Reporter: Yu Sun
>Assignee: Yu Sun
> Attachments: HBASE-16609-v1.patch
>
>
> I backported the 2.0 offheap work to hbase-1.1.2, and while testing I 
> encountered a problem similar to HBASE-14099. Here is the stack trace:
> {noformat}
> java.io.IOException: java.lang.UnsupportedOperationException: Cell is not of 
> type org.apache.hadoop.hbase.SettableSequenceId
> at org.apache.hadoop.hbase.CellUtil.setSequenceId(CellUtil.java:915)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.setCurrentCell(StoreFileScanner.java:203)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.requestSeek(StoreFileScanner.java:338)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:321)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:279)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:821)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekAsDirection(StoreScanner.java:809)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:636)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:153)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:5611)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:5750)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5551)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:5528)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:5515)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2125)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2068)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32201
> )
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:790)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:102)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
> {noformat}
> This occurs in the read path when offheap is in use, mostly because the 
> ByteBuffer-backed Cells don't implement the SettableSequenceId interface. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16591) Add a docker file only contains java 8 for running pre commit on master

2016-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15482976#comment-15482976
 ] 

Hudson commented on HBASE-16591:


SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #1585 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/1585/])
HBASE-16591 Add a docker file only contains java 8 for running pre (zhangduo: 
rev 7bda5151eee2febc03a8e0434705e0aa2d6a8c34)
* (add) dev-support/docker/Dockerfile


> Add a docker file only contains java 8 for running pre commit on master
> ---
>
> Key: HBASE-16591
> URL: https://issues.apache.org/jira/browse/HBASE-16591
> Project: HBase
>  Issue Type: Task
>  Components: build
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-16591.patch
>
>
> As described in YETUS-369, this is a workaround until YETUS-369 is done. 
> Hadoop's pre-commit is already Java 8 only, so I think we can just copy their 
> docker file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16592) Unify Delete request with AP

2016-09-11 Thread Heng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Heng Chen updated HBASE-16592:
--
Attachment: HBASE-16592.v1.patch

Rebased on master and addressed [~tedyu]'s nice suggestions.

> Unify Delete request with AP
> 
>
> Key: HBASE-16592
> URL: https://issues.apache.org/jira/browse/HBASE-16592
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Heng Chen
>Assignee: Heng Chen
> Attachments: HBASE-16592.patch, HBASE-16592.v1.patch
>
>
> This is the first step in trying to unify HTable with AP only. To let AP 
> process a single action, I introduced AbstractResponse; MultiResponse 
> and SingleResponse (introduced to deal with a single result) will extend this 
> class.
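
A rough sketch of the response hierarchy the description mentions, with 
hypothetical fields and methods (the real AbstractResponse, MultiResponse and 
SingleResponse live in the HBase client code):
{code}
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the hierarchy; names of fields/methods are
// illustrative, not the actual HBase client code.
abstract class AbstractResponse {
  enum ResponseType { SINGLE, MULTI }
  abstract ResponseType type();
}

// Wraps the result of a single action so AP can hand back one result.
class SingleResponse extends AbstractResponse {
  private Object result;
  void setResult(Object result) { this.result = result; }
  Object getResult() { return result; }
  @Override ResponseType type() { return ResponseType.SINGLE; }
}

// Existing multi-action result container, now sharing the same parent.
class MultiResponse extends AbstractResponse {
  private final List<Object> results = new ArrayList<>();
  void add(Object result) { results.add(result); }
  List<Object> getResults() { return results; }
  @Override ResponseType type() { return ResponseType.MULTI; }
}
{code}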



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16607) Make NoncedRegionServerCallable extend CancellableRegionServerCallable

2016-09-11 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15482950#comment-15482950
 ] 

stack commented on HBASE-16607:
---

Makes sense. I currently have a patch that does the same thing. Thanks [~chenheng]


> Make NoncedRegionServerCallable extend CancellableRegionServerCallable
> --
>
> Key: HBASE-16607
> URL: https://issues.apache.org/jira/browse/HBASE-16607
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Heng Chen
>Assignee: Heng Chen
> Attachments: HBASE-16607.patch, HBASE-16607.v1.patch
>
>
> This is the first step to unify append and increment with AP.
> After extending CancellableRegionServerCallable, we can remove lots of 
> duplicate code in NoncedRegionServerCallable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16607) Make NoncedRegionServerCallable extend CancellableRegionServerCallable

2016-09-11 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15482907#comment-15482907
 ] 

Heng Chen commented on HBASE-16607:
---

The flaky test case will be fixed in HBASE-16611.

> Make NoncedRegionServerCallable extend CancellableRegionServerCallable
> --
>
> Key: HBASE-16607
> URL: https://issues.apache.org/jira/browse/HBASE-16607
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Heng Chen
>Assignee: Heng Chen
> Attachments: HBASE-16607.patch, HBASE-16607.v1.patch
>
>
> This is the first step to unify append and increment with AP.
> After extending CancellableRegionServerCallable, we can remove lots of 
> duplicate code in NoncedRegionServerCallable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16611) Flakey org.apache.hadoop.hbase.client.TestReplicasClient.testCancelOfMultiGet

2016-09-11 Thread Heng Chen (JIRA)
Heng Chen created HBASE-16611:
-

 Summary: Flakey 
org.apache.hadoop.hbase.client.TestReplicasClient.testCancelOfMultiGet
 Key: HBASE-16611
 URL: https://issues.apache.org/jira/browse/HBASE-16611
 Project: HBase
  Issue Type: Bug
Reporter: Heng Chen


see 
https://builds.apache.org/job/PreCommit-HBASE-Build/3494/artifact/patchprocess/patch-unit-hbase-server.txt

{code}
testCancelOfMultiGet(org.apache.hadoop.hbase.client.TestReplicasClient)  Time 
elapsed: 4.026 sec  <<< FAILURE!
java.lang.AssertionError: null
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.hadoop.hbase.client.TestReplicasClient.testCancelOfMultiGet(TestReplicasClient.java:579)

Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 94.401 sec - 
in org.apache.hadoop.hbase.client.TestAdmin2
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.861 sec - in 
org.apache.hadoop.hbase.client.TestClientScannerRPCTimeout
Running 
org.apache.hadoop.hbase.client.TestRestoreSnapshotFromClientWithRegionReplicas
Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 261.925 sec <<< 
FAILURE! - in org.apache.hadoop.hbase.client.TestReplicasClient
testCancelOfMultiGet(org.apache.hadoop.hbase.client.TestReplicasClient)  Time 
elapsed: 4.522 sec  <<< FAILURE!
java.lang.AssertionError: null
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.hadoop.hbase.client.TestReplicasClient.testCancelOfMultiGet(TestReplicasClient.java:581)

Running org.apache.hadoop.hbase.client.TestFastFail
Tests run: 2, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 3.648 sec - in 
org.apache.hadoop.hbase.client.TestFastFail
Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 277.894 sec <<< 
FAILURE! - in org.apache.hadoop.hbase.client.TestReplicasClient
testCancelOfMultiGet(org.apache.hadoop.hbase.client.TestReplicasClient)  Time 
elapsed: 5.359 sec  <<< FAILURE!
java.lang.AssertionError: null
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.hadoop.hbase.client.TestReplicasClient.testCancelOfMultiGet(TestReplicasClient.java:579)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16607) Make NoncedRegionServerCallable extend CancellableRegionServerCallable

2016-09-11 Thread Heng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Heng Chen updated HBASE-16607:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to master.

> Make NoncedRegionServerCallable extend CancellableRegionServerCallable
> --
>
> Key: HBASE-16607
> URL: https://issues.apache.org/jira/browse/HBASE-16607
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Heng Chen
>Assignee: Heng Chen
> Attachments: HBASE-16607.patch, HBASE-16607.v1.patch
>
>
> This is the first step to unify append and increment with AP.
> After extending CancellableRegionServerCallable, we can remove lots of 
> duplicate code in NoncedRegionServerCallable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16373) precommit needs a dockerfile with hbase prereqs

2016-09-11 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15482894#comment-15482894
 ] 

Duo Zhang commented on HBASE-16373:
---

Two things here

1. Since we already have a customized docker file for master, we could also add 
a docker file for each branch.
2. The docker file in HBASE-16591 is copied from Hadoop; we should check whether 
there are any unnecessary dependencies.

> precommit needs a dockerfile with hbase prereqs
> ---
>
> Key: HBASE-16373
> URL: https://issues.apache.org/jira/browse/HBASE-16373
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Reporter: Sean Busbey
>Priority: Critical
>
> Specifically, we need protoc. Starting with the dockerfile used by default in 
> Yetus and adding it will probably suffice.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16606) Remove some duplicate code in HTable

2016-09-11 Thread Heng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Heng Chen updated HBASE-16606:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> Remove some duplicate code in HTable
> 
>
> Key: HBASE-16606
> URL: https://issues.apache.org/jira/browse/HBASE-16606
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Heng Chen
>Assignee: Heng Chen
>Priority: Minor
> Attachments: HBASE-16606.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15624) Move master branch/hbase-2.0.0 to jdk-8 only

2016-09-11 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15482888#comment-15482888
 ] 

Duo Zhang commented on HBASE-15624:
---

Seems fine? All javac-related subsystems are executed once and the default Java 
is '1.8.0_101'. The failed tests are unrelated, I think, since we do not change 
any Java code.

[~stack] [~busbey] Thanks.

> Move master branch/hbase-2.0.0 to jdk-8 only
> 
>
> Key: HBASE-15624
> URL: https://issues.apache.org/jira/browse/HBASE-15624
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Assignee: Duo Zhang
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-15624.patch, hbase-15624.patch
>
>
> Set build and pom target jvm version as jdk8. We chatted about it here: 
> http://osdir.com/ml/general/2016-04/msg09691.html Set it as blocker on 2.0.0.
> We need to work on YETUS-369 before we can finish up this issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16607) Make NoncedRegionServerCallable extend CancellableRegionServerCallable

2016-09-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15482882#comment-15482882
 ] 

Ted Yu commented on HBASE-16607:


Alright, +1

> Make NoncedRegionServerCallable extend CancellableRegionServerCallable
> --
>
> Key: HBASE-16607
> URL: https://issues.apache.org/jira/browse/HBASE-16607
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Heng Chen
>Assignee: Heng Chen
> Attachments: HBASE-16607.patch, HBASE-16607.v1.patch
>
>
> This is the first step to unify append, increment with AP.
> And after extends CancellableRegionServerCallable,  we could remove lots of 
> duplicate code in NoncedRegionServerCallable



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16606) Remove some duplicate code in HTable

2016-09-11 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15482877#comment-15482877
 ] 

Heng Chen commented on HBASE-16606:
---

Thanks [~tedyu] for your review.  Committed it to master.

> Remove some duplicate code in HTable
> 
>
> Key: HBASE-16606
> URL: https://issues.apache.org/jira/browse/HBASE-16606
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Heng Chen
>Assignee: Heng Chen
>Priority: Minor
> Attachments: HBASE-16606.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16607) Make NoncedRegionServerCallable extend CancellableRegionServerCallable

2016-09-11 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15482875#comment-15482875
 ] 

Heng Chen commented on HBASE-16607:
---

Thanks [~tedyu] for your review.  I checked it locally; the failure is not related to 
the patch.  It fails sometimes with or without this patch, so I will open another issue 
for the flaky test case.   Also, TestReplicasClient is not related to 
NoncedRegionServerCallable; it only involves Get/Scan requests. 

> Make NoncedRegionServerCallable extend CancellableRegionServerCallable
> --
>
> Key: HBASE-16607
> URL: https://issues.apache.org/jira/browse/HBASE-16607
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Heng Chen
>Assignee: Heng Chen
> Attachments: HBASE-16607.patch, HBASE-16607.v1.patch
>
>
> This is the first step to unify append, increment with AP.
> And after extends CancellableRegionServerCallable,  we could remove lots of 
> duplicate code in NoncedRegionServerCallable
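For context, a rough sketch of the shape of that change (hypothetical, heavily simplified 
classes, not the real hbase-client code): the nonce-carrying callable keeps only the nonce 
bookkeeping and inherits the cancellation plumbing from the shared base class instead of 
duplicating it.

{code}
// Hedged sketch only - simplified, hypothetical signatures, not the actual HBase classes.
abstract class CancellableRegionServerCallableSketch<T> {
  private volatile boolean cancelled = false;

  public void cancel() { cancelled = true; }

  public boolean isCancelled() { return cancelled; }

  // shared RPC-controller / retry plumbing would live here, usable by all subclasses
  public abstract T call(int callTimeout) throws Exception;
}

// After the change, the nonced callable only adds nonce bookkeeping on top of the base class.
abstract class NoncedRegionServerCallableSketch<T> extends CancellableRegionServerCallableSketch<T> {
  private final long nonceGroup;
  private final long nonce;

  NoncedRegionServerCallableSketch(long nonceGroup, long nonce) {
    this.nonceGroup = nonceGroup;
    this.nonce = nonce;
  }

  protected long getNonceGroup() { return nonceGroup; }

  protected long getNonce() { return nonce; }
}
{code}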



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15624) Move master branch/hbase-2.0.0 to jdk-8 only

2016-09-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15482874#comment-15482874
 ] 

Hadoop QA commented on HBASE-15624:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
52s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 15s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
6s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 37s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
7s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
24m 48s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 1m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 37s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 93m 52s {color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 136m 15s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | 
org.apache.hadoop.hbase.snapshot.TestSecureExportSnapshot |
|   | org.apache.hadoop.hbase.snapshot.TestMobExportSnapshot |
|   | org.apache.hadoop.hbase.snapshot.TestExportSnapshot |
|   | org.apache.hadoop.hbase.TestMovedRegionsCleaner |
|   | org.apache.hadoop.hbase.snapshot.TestMobRestoreFlushSnapshotFromClient |
|   | org.apache.hadoop.hbase.mapred.TestTableSnapshotInputFormat |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:7bda515 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12827940/HBASE-15624.patch |
| JIRA Issue | HBASE-15624 |
| Optional Tests |  asflicense  javac  javadoc  unit  xml  compile  |
| uname | Linux d182e688d421 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 7bda515 |
| Default Java | 1.8.0_101 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/3496/artifact/patchprocess/patch-unit-root.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/3496/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/3496/testReport/ |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/3496/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Move master branch/hbase-

[jira] [Commented] (HBASE-16607) Make NoncedRegionServerCallable extend CancellableRegionServerCallable

2016-09-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15482851#comment-15482851
 ] 

Ted Yu commented on HBASE-16607:


Have you verified that the failure in TestReplicasClient was not related to the 
patch?

> Make NoncedRegionServerCallable extend CancellableRegionServerCallable
> --
>
> Key: HBASE-16607
> URL: https://issues.apache.org/jira/browse/HBASE-16607
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Heng Chen
>Assignee: Heng Chen
> Attachments: HBASE-16607.patch, HBASE-16607.v1.patch
>
>
> This is the first step to unify append, increment with AP.
> And after extends CancellableRegionServerCallable,  we could remove lots of 
> duplicate code in NoncedRegionServerCallable



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16606) Remove some duplicate code in HTable

2016-09-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16606:
---
Priority: Minor  (was: Major)

+1

> Remove some duplicate code in HTable
> 
>
> Key: HBASE-16606
> URL: https://issues.apache.org/jira/browse/HBASE-16606
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Heng Chen
>Assignee: Heng Chen
>Priority: Minor
> Attachments: HBASE-16606.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16607) Make NoncedRegionServerCallable extend CancellableRegionServerCallable

2016-09-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16607:
---
Summary: Make NoncedRegionServerCallable extend 
CancellableRegionServerCallable  (was: Make NoncedRegionServerCallable extends 
CancellableRegionServerCallable)

> Make NoncedRegionServerCallable extend CancellableRegionServerCallable
> --
>
> Key: HBASE-16607
> URL: https://issues.apache.org/jira/browse/HBASE-16607
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Heng Chen
>Assignee: Heng Chen
> Attachments: HBASE-16607.patch, HBASE-16607.v1.patch
>
>
> This is the first step to unify append, increment with AP.
> And after extends CancellableRegionServerCallable,  we could remove lots of 
> duplicate code in NoncedRegionServerCallable



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16594) ROW_INDEX_V2 DBE

2016-09-11 Thread binlijin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15482816#comment-15482816
 ] 

binlijin commented on HBASE-16594:
--

[~anoop.hbase] [~ram_krish] [~saint@gmail.com] mind taking a look? Thanks 
very much.

> ROW_INDEX_V2 DBE
> 
>
> Key: HBASE-16594
> URL: https://issues.apache.org/jira/browse/HBASE-16594
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Reporter: binlijin
>Assignee: binlijin
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-16594-master_v1.patch, HBASE-16594-master_v2.patch
>
>
> See HBASE-16213, ROW_INDEX_V1 DataBlockEncoding.
> ROW_INDEX_V1 is the first version which have no storage optimization, 
> ROW_INDEX_V2 do storage optimization: store every row only once, store column 
> family only once in a HFileBlock.
> ROW_INDEX_V1 is : 
> /** 
>  * Store cells following every row's start offset, so we can binary search to 
> a row's cells. 
>  * 
>  * Format: 
>  * flat cells 
>  * integer: number of rows 
>  * integer: row0's offset 
>  * integer: row1's offset 
>  *  
>  * integer: dataSize 
>  * 
> */
> ROW_INDEX_V2 is :
>  * row1 qualifier timestamp type value tag
>  *  qualifier timestamp type value tag
>  *  qualifier timestamp type value tag
>  * row2 qualifier timestamp type value tag
>  * row3 qualifier timestamp type value tag
>  *  qualifier timestamp type value tag
>  *  
>  * integer: number of rows 
>  * integer: row0's offset 
>  * integer: row1's offset 
>  *  
>  * column family
>  * integer: dataSize 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-16594) ROW_INDEX_V2 DBE

2016-09-11 Thread binlijin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

binlijin reassigned HBASE-16594:


Assignee: binlijin

> ROW_INDEX_V2 DBE
> 
>
> Key: HBASE-16594
> URL: https://issues.apache.org/jira/browse/HBASE-16594
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Reporter: binlijin
>Assignee: binlijin
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-16594-master_v1.patch, HBASE-16594-master_v2.patch
>
>
> See HBASE-16213, ROW_INDEX_V1 DataBlockEncoding.
> ROW_INDEX_V1 is the first version which have no storage optimization, 
> ROW_INDEX_V2 do storage optimization: store every row only once, store column 
> family only once in a HFileBlock.
> ROW_INDEX_V1 is : 
> /** 
>  * Store cells following every row's start offset, so we can binary search to 
> a row's cells. 
>  * 
>  * Format: 
>  * flat cells 
>  * integer: number of rows 
>  * integer: row0's offset 
>  * integer: row1's offset 
>  *  
>  * integer: dataSize 
>  * 
> */
> ROW_INDEX_V2 is :
>  * row1 qualifier timestamp type value tag
>  *  qualifier timestamp type value tag
>  *  qualifier timestamp type value tag
>  * row2 qualifier timestamp type value tag
>  * row3 qualifier timestamp type value tag
>  *  qualifier timestamp type value tag
>  *  
>  * integer: number of rows 
>  * integer: row0's offset 
>  * integer: row1's offset 
>  *  
>  * column family
>  * integer: dataSize 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16610) Unify append, increment with AP

2016-09-11 Thread Heng Chen (JIRA)
Heng Chen created HBASE-16610:
-

 Summary: Unify append, increment with AP
 Key: HBASE-16610
 URL: https://issues.apache.org/jira/browse/HBASE-16610
 Project: HBase
  Issue Type: Sub-task
Reporter: Heng Chen
Assignee: Heng Chen






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-16594) ROW_INDEX_V2 DBE

2016-09-11 Thread binlijin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15476741#comment-15476741
 ] 

binlijin edited comment on HBASE-16594 at 9/12/16 2:15 AM:
---

I ran the test with one of our most important tables; most of the requests on this 
table are random gets. The table has 5 column families (why so many families? 
historical reasons).
I took one region's data and ran the random-get performance test on a regionserver.

This region's detail information is:
{code}
number of row : 3463153
5 family: a,b,c,d,f

family a : avgKeyLen=54,avgValueLen=12  entries=234100060  
length=4369389736(4.07GB)
family b : avgKeyLen=53,avgValueLen=10  entries=51913519   
length=981625160(936MB)
family c : avgKeyLen=50,avgValueLen=6   entries=14864860   
length=273820502(261MB)
family d : avgKeyLen=50,avgValueLen=6   entries=141422679  
length=3216604161(3GB)
family f : avgKeyLen=38,avgValueLen=13  entries=73084074   
length=1174375801(1.09GB)

avg cells per row
family a :  67.6
family b :  15
family c :  4.3
family d :  40.8
family f :  21.1

BlockSize=8k  COMPRESSION=LZO RegionSize=9.33GB  DATA_BLOCK_ENCODING => 'NONE'
BlockSize=16k COMPRESSION=LZO RegionSize=8.52GB  DATA_BLOCK_ENCODING => 'NONE'
BlockSize=32k COMPRESSION=LZO RegionSize=7.81GB  DATA_BLOCK_ENCODING => 'NONE'
BlockSize=64k COMPRESSION=LZO RegionSize=7.74GB  DATA_BLOCK_ENCODING => 'NONE'
BlockSize=32k COMPRESSION=LZO RegionSize=7.84GB  DATA_BLOCK_ENCODING => 
'ROW_INDEX_V1'
BlockSize=32k COMPRESSION=LZO RegionSize=6.24GB  DATA_BLOCK_ENCODING => 
'ROW_INDEX_V2'
{code}


was (Author: aoxiang):
I do test with one of our very important table. This table have 5 column 
family(why have so many families, this is for history reason.)
I get one region's data and do the random get performance on a regionserver.

This region's detail information is:
{code}
number of row : 3463153
5 family: a,b,c,d,f

family a : avgKeyLen=54,avgValueLen=12  entries=234100060  
length=4369389736(4.07GB)
family b : avgKeyLen=53,avgValueLen=10  entries=51913519   
length=981625160(936MB)
family c : avgKeyLen=50,avgValueLen=6   entries=14864860   
length=273820502(261MB)
family d : avgKeyLen=50,avgValueLen=6   entries=141422679  
length=3216604161(3GB)
family f : avgKeyLen=38,avgValueLen=13  entries=73084074   
length=1174375801(1.09GB)

avg cells per row
family a :  67.6
family b :  15
family c :  4.3
family d :  40.8
family f :  21.1

BlockSize=8k  COMPRESSION=LZO RegionSize=9.33GB  DATA_BLOCK_ENCODING => 'NONE'
BlockSize=16k COMPRESSION=LZO RegionSize=8.52GB  DATA_BLOCK_ENCODING => 'NONE'
BlockSize=32k COMPRESSION=LZO RegionSize=7.81GB  DATA_BLOCK_ENCODING => 'NONE'
BlockSize=64k COMPRESSION=LZO RegionSize=7.74GB  DATA_BLOCK_ENCODING => 'NONE'
BlockSize=32k COMPRESSION=LZO RegionSize=7.84GB  DATA_BLOCK_ENCODING => 
'ROW_INDEX_V1'
BlockSize=32k COMPRESSION=LZO RegionSize=6.24GB  DATA_BLOCK_ENCODING => 
'ROW_INDEX_V2'
{code}

> ROW_INDEX_V2 DBE
> 
>
> Key: HBASE-16594
> URL: https://issues.apache.org/jira/browse/HBASE-16594
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Reporter: binlijin
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-16594-master_v1.patch, HBASE-16594-master_v2.patch
>
>
> See HBASE-16213, ROW_INDEX_V1 DataBlockEncoding.
> ROW_INDEX_V1 is the first version which have no storage optimization, 
> ROW_INDEX_V2 do storage optimization: store every row only once, store column 
> family only once in a HFileBlock.
> ROW_INDEX_V1 is : 
> /** 
>  * Store cells following every row's start offset, so we can binary search to 
> a row's cells. 
>  * 
>  * Format: 
>  * flat cells 
>  * integer: number of rows 
>  * integer: row0's offset 
>  * integer: row1's offset 
>  *  
>  * integer: dataSize 
>  * 
> */
> ROW_INDEX_V2 is :
>  * row1 qualifier timestamp type value tag
>  *  qualifier timestamp type value tag
>  *  qualifier timestamp type value tag
>  * row2 qualifier timestamp type value tag
>  * row3 qualifier timestamp type value tag
>  *  qualifier timestamp type value tag
>  *  
>  * integer: number of rows 
>  * integer: row0's offset 
>  * integer: row1's offset 
>  *  
>  * column family
>  * integer: dataSize 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16594) ROW_INDEX_V2 DBE

2016-09-11 Thread binlijin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15482804#comment-15482804
 ] 

binlijin commented on HBASE-16594:
--

I took a part of column family a's data and tested it with ROW_INDEX_V2.
Second, the random-get QPS results are:
{code}
RegionServer Network out is about 1.8GB

8k   NONE  (CPU System/User 7/58)  QPS=167k
8k   Row_Index_V1  (CPU System/User 7/60)  QPS=164k
8k   Row_Index_V2  (CPU System/User 7/52)  QPS=164k

16k  NONE  (CPU System/User 7/59)  QPS=166.5k
16k  Row_Index_V1  (CPU System/User 7/55)  QPS=165.6k
16k  Row_Index_V2  (CPU System/User 7/54)  QPS=165k

32k  NONE  (CPU System/User 7/63)  QPS=165k
32k  Row_Index_V1  (CPU System/User 7/56)  QPS=166k
32k  Row_Index_V2  (CPU System/User 7/54)  QPS=164k

64k  NONE  (CPU System/User 7/65)  QPS=160k
64k  Row_Index_V1  (CPU System/User 7/56)  QPS=165k
64k  Row_Index_V2  (CPU System/User 7/53)  QPS=165k
{code}


> ROW_INDEX_V2 DBE
> 
>
> Key: HBASE-16594
> URL: https://issues.apache.org/jira/browse/HBASE-16594
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Reporter: binlijin
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-16594-master_v1.patch, HBASE-16594-master_v2.patch
>
>
> See HBASE-16213, ROW_INDEX_V1 DataBlockEncoding.
> ROW_INDEX_V1 is the first version which have no storage optimization, 
> ROW_INDEX_V2 do storage optimization: store every row only once, store column 
> family only once in a HFileBlock.
> ROW_INDEX_V1 is : 
> /** 
>  * Store cells following every row's start offset, so we can binary search to 
> a row's cells. 
>  * 
>  * Format: 
>  * flat cells 
>  * integer: number of rows 
>  * integer: row0's offset 
>  * integer: row1's offset 
>  *  
>  * integer: dataSize 
>  * 
> */
> ROW_INDEX_V2 is :
>  * row1 qualifier timestamp type value tag
>  *  qualifier timestamp type value tag
>  *  qualifier timestamp type value tag
>  * row2 qualifier timestamp type value tag
>  * row3 qualifier timestamp type value tag
>  *  qualifier timestamp type value tag
>  *  
>  * integer: number of rows 
>  * integer: row0's offset 
>  * integer: row1's offset 
>  *  
>  * column family
>  * integer: dataSize 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-16594) ROW_INDEX_V2 DBE

2016-09-11 Thread binlijin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15476741#comment-15476741
 ] 

binlijin edited comment on HBASE-16594 at 9/12/16 2:10 AM:
---

I ran the test with one of our most important tables. The table has 5 column 
families (why so many families? historical reasons).
I took one region's data and ran the random-get performance test on a regionserver.

This region's detail information is:
{code}
number of row : 3463153
5 family: a,b,c,d,f

family a : avgKeyLen=54,avgValueLen=12  entries=234100060  
length=4369389736(4.07GB)
family b : avgKeyLen=53,avgValueLen=10  entries=51913519   
length=981625160(936MB)
family c : avgKeyLen=50,avgValueLen=6   entries=14864860   
length=273820502(261MB)
family d : avgKeyLen=50,avgValueLen=6   entries=141422679  
length=3216604161(3GB)
family f : avgKeyLen=38,avgValueLen=13  entries=73084074   
length=1174375801(1.09GB)

avg cells per row
family a :  67.6
family b :  15
family c :  4.3
family d :  40.8
family f :  21.1

BlockSize=8k  COMPRESSION=LZO RegionSize=9.33GB  DATA_BLOCK_ENCODING => 'NONE'
BlockSize=16k COMPRESSION=LZO RegionSize=8.52GB  DATA_BLOCK_ENCODING => 'NONE'
BlockSize=32k COMPRESSION=LZO RegionSize=7.81GB  DATA_BLOCK_ENCODING => 'NONE'
BlockSize=64k COMPRESSION=LZO RegionSize=7.74GB  DATA_BLOCK_ENCODING => 'NONE'
BlockSize=32k COMPRESSION=LZO RegionSize=7.84GB  DATA_BLOCK_ENCODING => 
'ROW_INDEX_V1'
BlockSize=32k COMPRESSION=LZO RegionSize=6.24GB  DATA_BLOCK_ENCODING => 
'ROW_INDEX_V2'
{code}


was (Author: aoxiang):
I do test with one of our very important table. This table have 5 column 
family(why have so many families, this is for history reason.)
I get one region's data and do the random get performance on a regionserver.

This region's detail information is:
number of row : 3463153
5 family: a,b,c,d,f

family a : avgKeyLen=54,avgValueLen=12  entries=234100060  
length=4369389736(4.07GB)
family b : avgKeyLen=53,avgValueLen=10  entries=51913519   
length=981625160(936MB)
family c : avgKeyLen=50,avgValueLen=6   entries=14864860   
length=273820502(261MB)
family d : avgKeyLen=50,avgValueLen=6   entries=141422679  
length=3216604161(3GB)
family f : avgKeyLen=38,avgValueLen=13  entries=73084074   
length=1174375801(1.09GB)

avg cells per row
family a :  67.6
family b :  15
family c :  4.3
family d :  40.8
family f :  21.1


BlockSize=8k  COMPRESSION=LZO RegionSize=9.33GB  DATA_BLOCK_ENCODING => 'NONE'
BlockSize=16k COMPRESSION=LZO RegionSize=8.52GB  DATA_BLOCK_ENCODING => 'NONE'
BlockSize=32k COMPRESSION=LZO RegionSize=7.81GB  DATA_BLOCK_ENCODING => 'NONE'
BlockSize=64k COMPRESSION=LZO RegionSize=7.74GB  DATA_BLOCK_ENCODING => 'NONE'
BlockSize=32k COMPRESSION=LZO RegionSize=7.84GB  DATA_BLOCK_ENCODING => 
'ROW_INDEX_V1'
BlockSize=32k COMPRESSION=LZO RegionSize=6.24GB  DATA_BLOCK_ENCODING => 
'ROW_INDEX_V2'


> ROW_INDEX_V2 DBE
> 
>
> Key: HBASE-16594
> URL: https://issues.apache.org/jira/browse/HBASE-16594
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Reporter: binlijin
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-16594-master_v1.patch, HBASE-16594-master_v2.patch
>
>
> See HBASE-16213, ROW_INDEX_V1 DataBlockEncoding.
> ROW_INDEX_V1 is the first version which have no storage optimization, 
> ROW_INDEX_V2 do storage optimization: store every row only once, store column 
> family only once in a HFileBlock.
> ROW_INDEX_V1 is : 
> /** 
>  * Store cells following every row's start offset, so we can binary search to 
> a row's cells. 
>  * 
>  * Format: 
>  * flat cells 
>  * integer: number of rows 
>  * integer: row0's offset 
>  * integer: row1's offset 
>  *  
>  * integer: dataSize 
>  * 
> */
> ROW_INDEX_V2 is :
>  * row1 qualifier timestamp type value tag
>  *  qualifier timestamp type value tag
>  *  qualifier timestamp type value tag
>  * row2 qualifier timestamp type value tag
>  * row3 qualifier timestamp type value tag
>  *  qualifier timestamp type value tag
>  *  
>  * integer: number of rows 
>  * integer: row0's offset 
>  * integer: row1's offset 
>  *  
>  * column family
>  * integer: dataSize 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-16594) ROW_INDEX_V2 DBE

2016-09-11 Thread binlijin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15476744#comment-15476744
 ] 

binlijin edited comment on HBASE-16594 at 9/12/16 2:10 AM:
---

The performance on a single regionserver is :
{code}
BlockSize=8K  DATA_BLOCK_ENCODING => 'NONE' (CPU 4/42)  37k
BlockSize=16K DATA_BLOCK_ENCODING => 'NONE' (CPU 3/41)  41k
BlockSize=32K DATA_BLOCK_ENCODING => 'NONE' (CPU 3/45)  43k
BlockSize=64K DATA_BLOCK_ENCODING => 'NONE' (CPU 3/46)  36k
BlockSize=32k DATA_BLOCK_ENCODING => 'Row_Index_V1' (CPU 4/45)  45k
BlockSize=32k DATA_BLOCK_ENCODING => 'Row_Index_V2' (CPU 4/48)  64k

(CPU 4/42) which mean System CPU 4%,User CPU 42%.
{code}


was (Author: aoxiang):
The performance on a single regionserver is :
BlockSize=8K  DATA_BLOCK_ENCODING => 'NONE' (CPU 4/42)  37k
BlockSize=16K DATA_BLOCK_ENCODING => 'NONE' (CPU 3/41)  41k
BlockSize=32K DATA_BLOCK_ENCODING => 'NONE' (CPU 3/45)  43k
BlockSize=64K DATA_BLOCK_ENCODING => 'NONE' (CPU 3/46)  36k
BlockSize=32k DATA_BLOCK_ENCODING => 'Row_Index_V1' (CPU 4/45)  45k
BlockSize=32k DATA_BLOCK_ENCODING => 'Row_Index_V2' (CPU 4/48)  64k

(CPU 4/42) which mean System CPU 4%,User CPU 42%.

> ROW_INDEX_V2 DBE
> 
>
> Key: HBASE-16594
> URL: https://issues.apache.org/jira/browse/HBASE-16594
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Reporter: binlijin
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-16594-master_v1.patch, HBASE-16594-master_v2.patch
>
>
> See HBASE-16213, ROW_INDEX_V1 DataBlockEncoding.
> ROW_INDEX_V1 is the first version which have no storage optimization, 
> ROW_INDEX_V2 do storage optimization: store every row only once, store column 
> family only once in a HFileBlock.
> ROW_INDEX_V1 is : 
> /** 
>  * Store cells following every row's start offset, so we can binary search to 
> a row's cells. 
>  * 
>  * Format: 
>  * flat cells 
>  * integer: number of rows 
>  * integer: row0's offset 
>  * integer: row1's offset 
>  *  
>  * integer: dataSize 
>  * 
> */
> ROW_INDEX_V2 is :
>  * row1 qualifier timestamp type value tag
>  *  qualifier timestamp type value tag
>  *  qualifier timestamp type value tag
>  * row2 qualifier timestamp type value tag
>  * row3 qualifier timestamp type value tag
>  *  qualifier timestamp type value tag
>  *  
>  * integer: number of rows 
>  * integer: row0's offset 
>  * integer: row1's offset 
>  *  
>  * column family
>  * integer: dataSize 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16594) ROW_INDEX_V2 DBE

2016-09-11 Thread binlijin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15482795#comment-15482795
 ] 

binlijin commented on HBASE-16594:
--

I took a part of column family a's data and tested it with ROW_INDEX_V2.
First, the detailed info is:
{code}
number of rows : 456399

avgKeyLen=56
avgValueLen=11
entries=69742427
length=5609482650

avg cells per row : 69742427/456399=152.8
avg row size: (56+11) * 152.8=10237.6(10k)

COMPRESSION => 'NONE'
BlockSize=8k   DATA_BLOCK_ENCODING => 'NONE'  5671843807
BlockSize=8k   DATA_BLOCK_ENCODING => 'ROW_INDEX_V1'  5683168196
BlockSize=8k   DATA_BLOCK_ENCODING => 'ROW_INDEX_V2'  3354641599

BlockSize=16k  DATA_BLOCK_ENCODING => 'NONE'  5636883803
BlockSize=16k  DATA_BLOCK_ENCODING => 'ROW_INDEX_V1'  5643473654
BlockSize=16k  DATA_BLOCK_ENCODING => 'ROW_INDEX_V2'  3306460265

BlockSize=32k  DATA_BLOCK_ENCODING => 'NONE'  5618631549
BlockSize=32k  DATA_BLOCK_ENCODING => 'ROW_INDEX_V1'  5622842708
BlockSize=32k  DATA_BLOCK_ENCODING => 'ROW_INDEX_V2'  3284154231

BlockSize=64k  DATA_BLOCK_ENCODING => 'NONE'  5609482650(5.22GB)
BlockSize=64k  DATA_BLOCK_ENCODING => 'ROW_INDEX_V1'  5612502105(5.23GB)
BlockSize=64k  DATA_BLOCK_ENCODING => 'ROW_INDEX_V2'  3273791654(3.05GB) -41.6%


COMPRESSION => 'LZO'
BlockSize=8k   DATA_BLOCK_ENCODING => 'NONE'  1.13GB
BlockSize=8k   DATA_BLOCK_ENCODING => 'ROW_INDEX_V1'  1.13GB
BlockSize=8k   DATA_BLOCK_ENCODING => 'ROW_INDEX_V2'  997MB

BlockSize=16k  DATA_BLOCK_ENCODING => 'NONE'  1.03GB
BlockSize=16k  DATA_BLOCK_ENCODING => 'ROW_INDEX_V1'  1.03GB
BlockSize=16k  DATA_BLOCK_ENCODING => 'ROW_INDEX_V2'  884MB

BlockSize=32k  DATA_BLOCK_ENCODING => 'NONE'  981MB
BlockSize=32k  DATA_BLOCK_ENCODING => 'ROW_INDEX_V1'  983MB
BlockSize=32k  DATA_BLOCK_ENCODING => 'ROW_INDEX_V2'  800MB

BlockSize=64k  DATA_BLOCK_ENCODING => 'NONE'  970MB
BlockSize=64k  DATA_BLOCK_ENCODING => 'ROW_INDEX_V1'  971MB
BlockSize=64k  DATA_BLOCK_ENCODING => 'ROW_INDEX_V2'  744MB -23.3%
{code}
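The size win above comes from V2 not repeating the row key and the column family for every 
cell; lookups still work because each block ends with the start offset of every row, so a 
reader can binary search to the wanted row instead of scanning cell by cell. A minimal 
sketch of that offset lookup (hypothetical helper with assumed names, not the actual 
DataBlockEncoder/seeker code):

{code}
// Hedged sketch only - hypothetical helper, not the real ROW_INDEX encoder/seeker.
class RowIndexSeekSketch {
  private final int[] rowOffsets;   // offset of each row's first cell inside the block
  private final byte[][] rowKeys;   // row key found at each offset (decoded by the real seeker)

  RowIndexSeekSketch(int[] rowOffsets, byte[][] rowKeys) {
    this.rowOffsets = rowOffsets;
    this.rowKeys = rowKeys;
  }

  /** Returns the cell-data offset of the first row >= wantedRow, or -1 if none. */
  int seek(byte[] wantedRow) {
    int lo = 0, hi = rowOffsets.length - 1, answer = -1;
    while (lo <= hi) {
      int mid = (lo + hi) >>> 1;
      if (compare(rowKeys[mid], wantedRow) < 0) {
        lo = mid + 1;                 // row at mid is too small, search the right half
      } else {
        answer = rowOffsets[mid];     // candidate; keep searching left for the first match
        hi = mid - 1;
      }
    }
    return answer;
  }

  private static int compare(byte[] a, byte[] b) {
    int n = Math.min(a.length, b.length);
    for (int i = 0; i < n; i++) {
      int d = (a[i] & 0xff) - (b[i] & 0xff);
      if (d != 0) {
        return d;
      }
    }
    return a.length - b.length;
  }
}
{code}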

> ROW_INDEX_V2 DBE
> 
>
> Key: HBASE-16594
> URL: https://issues.apache.org/jira/browse/HBASE-16594
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Reporter: binlijin
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-16594-master_v1.patch, HBASE-16594-master_v2.patch
>
>
> See HBASE-16213, ROW_INDEX_V1 DataBlockEncoding.
> ROW_INDEX_V1 is the first version which have no storage optimization, 
> ROW_INDEX_V2 do storage optimization: store every row only once, store column 
> family only once in a HFileBlock.
> ROW_INDEX_V1 is : 
> /** 
>  * Store cells following every row's start offset, so we can binary search to 
> a row's cells. 
>  * 
>  * Format: 
>  * flat cells 
>  * integer: number of rows 
>  * integer: row0's offset 
>  * integer: row1's offset 
>  *  
>  * integer: dataSize 
>  * 
> */
> ROW_INDEX_V2 is :
>  * row1 qualifier timestamp type value tag
>  *  qualifier timestamp type value tag
>  *  qualifier timestamp type value tag
>  * row2 qualifier timestamp type value tag
>  * row3 qualifier timestamp type value tag
>  *  qualifier timestamp type value tag
>  *  
>  * integer: number of rows 
>  * integer: row0's offset 
>  * integer: row1's offset 
>  *  
>  * column family
>  * integer: dataSize 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16594) ROW_INDEX_V2 DBE

2016-09-11 Thread binlijin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

binlijin updated HBASE-16594:
-
Attachment: HBASE-16594-master_v2.patch

> ROW_INDEX_V2 DBE
> 
>
> Key: HBASE-16594
> URL: https://issues.apache.org/jira/browse/HBASE-16594
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Reporter: binlijin
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-16594-master_v1.patch, HBASE-16594-master_v2.patch
>
>
> See HBASE-16213, ROW_INDEX_V1 DataBlockEncoding.
> ROW_INDEX_V1 is the first version which have no storage optimization, 
> ROW_INDEX_V2 do storage optimization: store every row only once, store column 
> family only once in a HFileBlock.
> ROW_INDEX_V1 is : 
> /** 
>  * Store cells following every row's start offset, so we can binary search to 
> a row's cells. 
>  * 
>  * Format: 
>  * flat cells 
>  * integer: number of rows 
>  * integer: row0's offset 
>  * integer: row1's offset 
>  *  
>  * integer: dataSize 
>  * 
> */
> ROW_INDEX_V2 is :
>  * row1 qualifier timestamp type value tag
>  *  qualifier timestamp type value tag
>  *  qualifier timestamp type value tag
>  * row2 qualifier timestamp type value tag
>  * row3 qualifier timestamp type value tag
>  *  qualifier timestamp type value tag
>  *  
>  * integer: number of rows 
>  * integer: row0's offset 
>  * integer: row1's offset 
>  *  
>  * column family
>  * integer: dataSize 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-16603) Detect unavailability of hbase:backup table to avoid extraneous logging

2016-09-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HBASE-16603.

  Resolution: Fixed
Assignee: Ted Yu
Hadoop Flags: Reviewed

Thanks for the review, Stephen.

> Detect unavailability of hbase:backup table to avoid extraneous logging
> ---
>
> Key: HBASE-16603
> URL: https://issues.apache.org/jira/browse/HBASE-16603
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>  Labels: backup
> Attachments: 16603.v1.txt
>
>
> We observed the following when hbase:backup was not available:
> {code}
> 2016-09-07 13:32:11,471 ERROR [x,16000,1473269229816_ChoreService_1] 
> master.BackupLogCleaner: Failed to get hbase:backup table, therefore will 
> keep all files
> org.apache.hadoop.hbase.TableNotFoundException: hbase:backup
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1264)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1162)
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:300)
> at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:156)
> at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
> at 
> org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:326)
> at 
> org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:301)
> at 
> org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:166)
> at 
> org.apache.hadoop.hbase.client.ClientScanner.(ClientScanner.java:161)
> at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:794)
> at 
> org.apache.hadoop.hbase.backup.impl.BackupSystemTable.hasBackupSessions(BackupSystemTable.java:573)
> at 
> org.apache.hadoop.hbase.backup.master.BackupLogCleaner.getDeletableFiles(BackupLogCleaner.java:67)
> at 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteFiles(CleanerChore.java:233)
> at 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteEntries(CleanerChore.java:157)
> at 
> org.apache.hadoop.hbase.master.cleaner.CleanerChore.chore(CleanerChore.java:124)
> at org.apache.hadoop.hbase.ScheduledChore.run(ScheduledChore.java:185)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> {code}
> We should detect the unavailability of hbase:backup table and log at lower 
> level than ERROR.
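A minimal sketch of that fix direction (hypothetical method and class names, simplified, 
not the committed patch): probe for the table first, treat its absence as an expected 
condition, and log it at DEBUG instead of ERROR with a full stack trace.

{code}
// Hedged sketch only - assumed, simplified names, not the actual BackupLogCleaner patch.
import java.util.Collections;
import java.util.List;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

class BackupLogCleanerSketch {
  private static final Log LOG = LogFactory.getLog(BackupLogCleanerSketch.class);

  /** Returns the WAL files that may be deleted; keeps everything when backup is not set up. */
  List<String> getDeletableFiles(List<String> files, boolean backupTableExists) {
    if (!backupTableExists) {
      // Expected whenever the backup feature has never been used: no ERROR, no stack trace.
      LOG.debug("hbase:backup table is not present, keeping all WAL files");
      return Collections.emptyList();
    }
    // ... otherwise consult the backup system table and return the truly deletable files ...
    return files;
  }
}
{code}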



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-15624) Move master branch/hbase-2.0.0 to jdk-8 only

2016-09-11 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang reassigned HBASE-15624:
-

Assignee: Duo Zhang  (was: stack)

> Move master branch/hbase-2.0.0 to jdk-8 only
> 
>
> Key: HBASE-15624
> URL: https://issues.apache.org/jira/browse/HBASE-15624
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Assignee: Duo Zhang
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-15624.patch, hbase-15624.patch
>
>
> Set build and pom target jvm version as jdk8. We chatted about it here: 
> http://osdir.com/ml/general/2016-04/msg09691.html Set it as blocker on 2.0.0.
> We need to work on YETUS-369 before we can finish up this issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15624) Move master branch/hbase-2.0.0 to jdk-8 only

2016-09-11 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-15624:
--
Status: Patch Available  (was: In Progress)

> Move master branch/hbase-2.0.0 to jdk-8 only
> 
>
> Key: HBASE-15624
> URL: https://issues.apache.org/jira/browse/HBASE-15624
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Assignee: Duo Zhang
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-15624.patch, hbase-15624.patch
>
>
> Set build and pom target jvm version as jdk8. We chatted about it here: 
> http://osdir.com/ml/general/2016-04/msg09691.html Set it as blocker on 2.0.0.
> We need to work on YETUS-369 before we can finish up this issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15624) Move master branch/hbase-2.0.0 to jdk-8 only

2016-09-11 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-15624:
--
Attachment: HBASE-15624.patch

Retry.

> Move master branch/hbase-2.0.0 to jdk-8 only
> 
>
> Key: HBASE-15624
> URL: https://issues.apache.org/jira/browse/HBASE-15624
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Assignee: stack
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-15624.patch, hbase-15624.patch
>
>
> Set build and pom target jvm version as jdk8. We chatted about it here: 
> http://osdir.com/ml/general/2016-04/msg09691.html Set it as blocker on 2.0.0.
> We need to work on YETUS-369 before we can finish up this issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16591) Add a docker file only contains java 8 for running pre commit on master

2016-09-11 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-16591:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed to master. Thanks [~busbey] for reviewing.

> Add a docker file only contains java 8 for running pre commit on master
> ---
>
> Key: HBASE-16591
> URL: https://issues.apache.org/jira/browse/HBASE-16591
> Project: HBase
>  Issue Type: Task
>  Components: build
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-16591.patch
>
>
> As described in YETUS-369, this is a workaround before YETUS-369 done. 
> Hadoop's pre-commit has already been java 8 only, so I think we could just 
> copy their docker file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16591) Add a docker file only contains java 8 for running pre commit on master

2016-09-11 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15482496#comment-15482496
 ] 

Duo Zhang commented on HBASE-16591:
---

No. I think we could add docker files for the other branches in HBASE-16373 to add 
the protoc dependency.

> Add a docker file only contains java 8 for running pre commit on master
> ---
>
> Key: HBASE-16591
> URL: https://issues.apache.org/jira/browse/HBASE-16591
> Project: HBase
>  Issue Type: Task
>  Components: build
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-16591.patch
>
>
> As described in YETUS-369, this is a workaround before YETUS-369 done. 
> Hadoop's pre-commit has already been java 8 only, so I think we could just 
> copy their docker file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16591) Add a docker file only contains java 8 for running pre commit on master

2016-09-11 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15481846#comment-15481846
 ] 

Sean Busbey commented on HBASE-16591:
-

+1 sounds good to me. Should this close out HBASE-16373, or not since we'll 
only have protoc support on master?

> Add a docker file only contains java 8 for running pre commit on master
> ---
>
> Key: HBASE-16591
> URL: https://issues.apache.org/jira/browse/HBASE-16591
> Project: HBase
>  Issue Type: Task
>  Components: build
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-16591.patch
>
>
> As described in YETUS-369, this is a workaround before YETUS-369 done. 
> Hadoop's pre-commit has already been java 8 only, so I think we could just 
> copy their docker file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15624) Move master branch/hbase-2.0.0 to jdk-8 only

2016-09-11 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15481538#comment-15481538
 ] 

Duo Zhang commented on HBASE-15624:
---

This is the pre-commit build result with HBASE-16591 applied.

{noformat}
-1 overall

 _ _ __ 
|  ___|_ _(_) |_   _ _ __ ___| |
| |_ / _` | | | | | | '__/ _ \ |
|  _| (_| | | | |_| | | |  __/_|
|_|  \__,_|_|_|\__,_|_|  \___(_)



| Vote |   Subsystem |  Runtime   | Comment

|   0  | reexec  |  0m 6s | Docker mode activated. 
|  +1  |  hbaseanti  |  0m 0s | Patch does not have any anti-patterns. 
|  +1  |@author  |  0m 0s | The patch does not contain any @author 
|  | || tags.
|  -1  | test4tests  |  0m 0s | The patch doesn't appear to include any 
|  | || new or modified tests. Please justify
|  | || why no new tests are needed for this
|  | || patch. Also please list what manual
|  | || steps were performed to verify this
|  | || patch.
|  +1  | mvninstall  |  52m 27s   | master passed 
|  +1  |compile  |  2m 42s| master passed 
|  +1  | mvneclipse  |  3m 41s| master passed 
|  +1  |javadoc  |  2m 53s| master passed 
|  +1  | mvninstall  |  3m 10s| the patch passed 
|  +1  |compile  |  2m 47s| the patch passed 
|  +1  |  javac  |  2m 47s| the patch passed 
|  +1  | mvneclipse  |  1m 6s | the patch passed 
|  +1  | whitespace  |  0m 0s | The patch has no whitespace issues. 
|  +1  |xml  |  0m 1s | The patch has no ill-formed XML file. 
|  +1  |hadoopcheck  |  37m 21s   | Patch does not cause any errors with 
|  | || Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2
|  | || 2.6.1 2.6.2 2.6.3 2.7.1.
|  +1  |hbaseprotoc  |  1m 28s| the patch passed 
|  +1  |javadoc  |  1m 53s| the patch passed 
|  +1  | asflicense  |  0m 50s| The patch does not generate ASF License 
|  | || warnings.
|  | |  110m 30s  | 


|| Subsystem || Report/Notes ||

| Docker | Client=1.12.1 Server=1.12.1 Image:yetus/hbase:7bda515 |
| Optional Tests |  asflicense  javac  javadoc  unit  xml  compile  |
| uname | Linux 9a692c569b5e 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/zhangduo/hbase/code/dev-support/hbase-personality.sh |
| git revision | master / 7bda515 |
| Default Java | 1.8.0_101 |
| modules | C: . U: . |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |




  Finished build.


{noformat}

Seems OK?

> Move master branch/hbase-2.0.0 to jdk-8 only
> 
>
> Key: HBASE-15624
> URL: https://issues.apache.org/jira/browse/HBASE-15624
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Assignee: stack
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: hbase-15624.patch
>
>
> Set build and pom target jvm version as jdk8. We chatted about it here: 
> http://osdir.com/ml/general/2016-04/msg09691.html Set it as blocker on 2.0.0.
> We need to work on YETUS-369 before we can finish up this issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16608) Introducing the ability to merge ImmutableSegments without copy-compaction or SQM usage

2016-09-11 Thread Anastasia Braginsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15481462#comment-15481462
 ] 

Anastasia Braginsky commented on HBASE-16608:
-

The RB: https://reviews.apache.org/r/51785/

> Introducing the ability to merge ImmutableSegments without copy-compaction or 
> SQM usage
> ---
>
> Key: HBASE-16608
> URL: https://issues.apache.org/jira/browse/HBASE-16608
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Attachments: HBASE-16417-V02.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16608) Introducing the ability to merge ImmutableSegments without copy-compaction or SQM usage

2016-09-11 Thread Anastasia Braginsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anastasia Braginsky updated HBASE-16608:

External issue URL:   (was: https://reviews.apache.org/r/51785/)

> Introducing the ability to merge ImmutableSegments without copy-compaction or 
> SQM usage
> ---
>
> Key: HBASE-16608
> URL: https://issues.apache.org/jira/browse/HBASE-16608
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Attachments: HBASE-16417-V02.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16608) Introducing the ability to merge ImmutableSegments without copy-compaction or SQM usage

2016-09-11 Thread Anastasia Braginsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anastasia Braginsky updated HBASE-16608:

External issue URL: https://reviews.apache.org/r/51785/

> Introducing the ability to merge ImmutableSegments without copy-compaction or 
> SQM usage
> ---
>
> Key: HBASE-16608
> URL: https://issues.apache.org/jira/browse/HBASE-16608
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Attachments: HBASE-16417-V02.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16229) Cleaning up size and heapSize calculation

2016-09-11 Thread Anastasia Braginsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15481437#comment-15481437
 ] 

Anastasia Braginsky commented on HBASE-16229:
-

I have finished the code review. Please refer to the unanswered comments in 
the RB.
Looking forward to seeing your replies.
Thank you!!

> Cleaning up size and heapSize calculation
> -
>
> Key: HBASE-16229
> URL: https://issues.apache.org/jira/browse/HBASE-16229
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0
>
> Attachments: HBASE-16229.patch, HBASE-16229_V2.patch, 
> HBASE-16229_V3.patch, HBASE-16229_V4.patch, HBASE-16229_V5.patch, 
> HBASE-16229_V5.patch
>
>
> It is bit ugly now. For eg:
> AbstractMemStore
> {code}
> public final static long FIXED_OVERHEAD = ClassSize.align(
>   ClassSize.OBJECT +
>   (4 * ClassSize.REFERENCE) +
>   (2 * Bytes.SIZEOF_LONG));
>   public final static long DEEP_OVERHEAD = ClassSize.align(FIXED_OVERHEAD +
>   (ClassSize.ATOMIC_LONG + ClassSize.TIMERANGE_TRACKER +
>   ClassSize.CELL_SKIPLIST_SET + ClassSize.CONCURRENT_SKIPLISTMAP));
> {code}
> We include the heap overhead of Segment also here. It will be better the 
> Segment contains its overhead part and the Memstore impl uses the heap size 
> of all of its segments to calculate its size.
> Also this
> {code}
> public long heapSize() {
> return getActive().getSize();
>   }
> {code}
> HeapSize to consider all segment's size not just active's. I am not able to 
> see an override method in CompactingMemstore.
> This jira tries to solve some of these.
> When we create a Segment, we seems pass some initial heap size value to it. 
> Why?  The segment object internally has to know what is its heap size not 
> like some one else dictate it.
> More to add when doing this cleanup
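
A small sketch of the direction described above (hypothetical, simplified classes, not the 
actual hbase-server code): each segment accounts for its own overhead, and the memstore's 
heap size is simply the sum over its segments rather than a hard-coded constant that also 
covers segment internals.

{code}
// Hedged sketch only - simplified, hypothetical classes, not the real Segment/CompactingMemStore.
import java.util.ArrayList;
import java.util.List;

abstract class SegmentSketch {
  /** Each segment knows its own fixed/deep overhead plus the data it holds. */
  abstract long heapSize();
}

class CompactingMemStoreSketch {
  private final List<SegmentSketch> segments = new ArrayList<>();

  void addSegment(SegmentSketch segment) {
    segments.add(segment);
  }

  /** Memstore heap size = sum of all segments (active + pipeline), not just the active one. */
  long heapSize() {
    long size = 0;
    for (SegmentSketch segment : segments) {
      size += segment.heapSize();
    }
    return size;
  }
}
{code}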



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16086) TableCfWALEntryFilter and ScopeWALEntryFilter should not redundantly iterate over cells.

2016-09-11 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15481376#comment-15481376
 ] 

Heng Chen commented on HBASE-16086:
---

Not sure why there is a compile error; I suspect HBASE-16538, see 
https://issues.apache.org/jira/browse/HBASE-16538?focusedCommentId=15481021&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15481021

> TableCfWALEntryFilter and ScopeWALEntryFilter should not redundantly iterate 
> over cells.
> 
>
> Key: HBASE-16086
> URL: https://issues.apache.org/jira/browse/HBASE-16086
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: churro morales
>Assignee: Vincent Poon
> Fix For: 2.0.0, 1.4.0, 1.3.1
>
> Attachments: HBASE-16086.patch, HBASE-16086.v2.patch, 
> HBASE-16086.v3.patch
>
>
> TableCfWALEntryFilter and ScopeWALEntryFilter both filter by iterating over 
> cells.  Since the filters are chained we do this work twice.  Instead iterate 
> over cells once and apply the "cell filtering" logic to these cells.
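
A minimal sketch of the single-pass idea (hypothetical interfaces, not the actual 
WALEntryFilter/WALCellFilter API): the chain walks an entry's cells once and offers each 
cell to every registered cell filter, instead of each filter re-iterating the cells on 
its own.

{code}
// Hedged sketch only - hypothetical interfaces, not the real ChainWALEntryFilter code.
import java.util.ArrayList;
import java.util.List;
import java.util.ListIterator;

interface CellFilterSketch<C> {
  /** Returns the (possibly transformed) cell, or null to drop it from the entry. */
  C filterCell(C cell);
}

class ChainCellFilterSketch<C> {
  private final List<CellFilterSketch<C>> cellFilters = new ArrayList<>();

  void addCellFilter(CellFilterSketch<C> filter) {
    cellFilters.add(filter);
  }

  /** Single pass over the cells; every registered cell filter is applied to each cell. */
  void filterCells(List<C> cells) {
    ListIterator<C> it = cells.listIterator();
    while (it.hasNext()) {
      C cell = it.next();
      for (CellFilterSketch<C> filter : cellFilters) {
        cell = filter.filterCell(cell);
        if (cell == null) {
          break;               // dropped by one filter, no need to ask the rest
        }
      }
      if (cell == null) {
        it.remove();           // cell filtered out of the entry
      } else {
        it.set(cell);          // keep the (possibly transformed) cell
      }
    }
  }
}
{code}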



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16086) TableCfWALEntryFilter and ScopeWALEntryFilter should not redundantly iterate over cells.

2016-09-11 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15481372#comment-15481372
 ] 

Heng Chen commented on HBASE-16086:
---

The two failed test cases pass locally.  Let me keep an eye on them.

> TableCfWALEntryFilter and ScopeWALEntryFilter should not redundantly iterate 
> over cells.
> 
>
> Key: HBASE-16086
> URL: https://issues.apache.org/jira/browse/HBASE-16086
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: churro morales
>Assignee: Vincent Poon
> Fix For: 2.0.0, 1.4.0, 1.3.1
>
> Attachments: HBASE-16086.patch, HBASE-16086.v2.patch, 
> HBASE-16086.v3.patch
>
>
> TableCfWALEntryFilter and ScopeWALEntryFilter both filter by iterating over 
> cells.  Since the filters are chained we do this work twice.  Instead iterate 
> over cells once and apply the "cell filtering" logic to these cells.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16086) TableCfWALEntryFilter and ScopeWALEntryFilter should not redundantly iterate over cells.

2016-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15481298#comment-15481298
 ] 

Hudson commented on HBASE-16086:


SUCCESS: Integrated in Jenkins build HBase-1.3-JDK7 #14 (See 
[https://builds.apache.org/job/HBase-1.3-JDK7/14/])
HBASE-16086 TableCfWALEntryFilter and ScopeWALEntryFilter should not (chenheng: 
rev d4014078451325c2e1ba18a7f1775a43cde49305)
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/WALCellFilter.java
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/BulkLoadCellFilter.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/TableCfWALEntryFilter.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationWALEntryFilters.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/ScopeWALEntryFilter.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/ChainWALEntryFilter.java


> TableCfWALEntryFilter and ScopeWALEntryFilter should not redundantly iterate 
> over cells.
> 
>
> Key: HBASE-16086
> URL: https://issues.apache.org/jira/browse/HBASE-16086
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: churro morales
>Assignee: Vincent Poon
> Fix For: 2.0.0, 1.4.0, 1.3.1
>
> Attachments: HBASE-16086.patch, HBASE-16086.v2.patch, 
> HBASE-16086.v3.patch
>
>
> TableCfWALEntryFilter and ScopeWALEntryFilter both filter by iterating over 
> cells.  Since the filters are chained we do this work twice.  Instead iterate 
> over cells once and apply the "cell filtering" logic to these cells.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16086) TableCfWALEntryFilter and ScopeWALEntryFilter should not redundantly iterate over cells.

2016-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15481259#comment-15481259
 ] 

Hudson commented on HBASE-16086:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #1580 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/1580/])
HBASE-16086 TableCfWALEntryFilter and ScopeWALEntryFilter should not (chenheng: 
rev 80d8b2100d9f4dc2a01ea6bdbded6ec52d7e4263)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/ChainWALEntryFilter.java
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/BulkLoadCellFilter.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/TableCfWALEntryFilter.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/ScopeWALEntryFilter.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationWALEntryFilters.java
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/WALCellFilter.java


> TableCfWALEntryFilter and ScopeWALEntryFilter should not redundantly iterate 
> over cells.
> 
>
> Key: HBASE-16086
> URL: https://issues.apache.org/jira/browse/HBASE-16086
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: churro morales
>Assignee: Vincent Poon
> Fix For: 2.0.0, 1.4.0, 1.3.1
>
> Attachments: HBASE-16086.patch, HBASE-16086.v2.patch, 
> HBASE-16086.v3.patch
>
>
> TableCfWALEntryFilter and ScopeWALEntryFilter both filter by iterating over 
> cells.  Since the filters are chained we do this work twice.  Instead iterate 
> over cells once and apply the "cell filtering" logic to these cells.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)