[jira] [Commented] (HBASE-14614) Procedure v2: Core Assignment Manager

2017-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027676#comment-16027676
 ] 

Hadoop QA commented on HBASE-14614:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 31s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 100 new or modified 
test files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
19s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 3s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 5m 
20s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
38s {color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 4m 50s 
{color} | {color:red} hbase-protocol-shaded in master has 24 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 23s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 4m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 5m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
36s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 655 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
56m 26s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 3m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 17m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 8s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 2s 
{color} | {color:green} hbase-protocol-shaded in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 57s 
{color} | {color:green} hbase-procedure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 29s 
{color} | {color:green} hbase-hadoop-compat in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 39s 
{color} | {color:green} hbase-hadoop2-compat in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 23s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 187m 21s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 56s 
{color} | {color:green} hbase-rsgroup in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 40s 
{color} | {color:green} hbase-it in the patch 

[jira] [Commented] (HBASE-14178) regionserver blocks because of waiting for offsetLock

2017-05-27 Thread Chang chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027673#comment-16027673
 ] 

Chang chen commented on HBASE-14178:


In the case of a cache miss, if the missed row is accessed simultaneously by 
multiple clients, they have to access the cache one by one. Does this also block the RS?
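The blocking pattern in the jstack below can be sketched in plain Java. This is a simplified stand-in for `org.apache.hadoop.hbase.util.IdLock`, not the actual HBase implementation: concurrent cache misses on the *same* block offset serialize behind a per-offset lock (so only one thread reads from disk), while reads on different offsets stay parallel and cache hits take no lock at all.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;

// Simplified stand-in for IdLock-style per-offset locking in the block cache.
class OffsetLockSketch {
    private final Map<Long, String> cache = new ConcurrentHashMap<>();
    private final Map<Long, ReentrantLock> locks = new ConcurrentHashMap<>();
    final AtomicInteger diskReads = new AtomicInteger();

    String readBlock(long offset) {
        String cached = cache.get(offset);
        if (cached != null) {
            return cached;              // fast path: cache hit, no lock taken
        }
        ReentrantLock lock = locks.computeIfAbsent(offset, k -> new ReentrantLock());
        lock.lock();                    // this is where handler threads pile up
        try {
            // re-check: another handler may have loaded the block while we waited
            return cache.computeIfAbsent(offset, k -> {
                diskReads.incrementAndGet();
                return "block@" + k;    // stands in for the actual HDFS read
            });
        } finally {
            lock.unlock();
        }
    }
}
```

Note that with the block cache off (as in this issue), the second reader's re-check never hits, so every waiter ends up doing its own read after acquiring the lock, one by one.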



> regionserver blocks because of waiting for offsetLock
> -
>
> Key: HBASE-14178
> URL: https://issues.apache.org/jira/browse/HBASE-14178
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.98.6
>Reporter: Heng Chen
>Assignee: Heng Chen
> Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.2
>
> Attachments: HBASE-14178-0.98.patch, HBASE-14178-0.98_v8.patch, 
> HBASE-14178-branch_1_v8.patch, HBASE-14178.patch, HBASE-14178_v1.patch, 
> HBASE-14178_v2.patch, HBASE-14178_v3.patch, HBASE-14178_v4.patch, 
> HBASE-14178_v5.patch, HBASE-14178_v6.patch, HBASE-14178_v7.patch, 
> HBASE-14178_v8.patch, jstack
>
>
> My regionserver blocks, and all client RPCs time out. 
> I printed the regionserver's jstack; it seems a lot of threads were blocked 
> waiting for the offsetLock. Detailed information below:
> PS: my table's block cache is off
> {code}
> "B.DefaultRpcServer.handler=2,queue=2,port=60020" #82 daemon prio=5 os_prio=0 
> tid=0x01827000 nid=0x2cdc in Object.wait() [0x7f3831b72000]
>java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at java.lang.Object.wait(Object.java:502)
> at org.apache.hadoop.hbase.util.IdLock.getLockEntry(IdLock.java:79)
> - locked <0x000773af7c18> (a 
> org.apache.hadoop.hbase.util.IdLock$Entry)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:352)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:253)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:524)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:572)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:257)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:173)
> at 
> org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:313)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:269)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:695)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekAsDirection(StoreScanner.java:683)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:533)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:140)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:3889)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3969)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3847)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3820)
> - locked <0x0005e5c55ad0> (a 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3807)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4779)
> at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4753)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2916)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29583)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2027)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
> at java.lang.Thread.run(Thread.java:745)
>Locked ownable synchronizers:
> - <0x0005e5c55c08> (a 
> java.util.concurrent.locks.ReentrantLock$NonfairSync)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18114) Update the config of TestAsync*AdminApi to make test stable

2017-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027640#comment-16027640
 ] 

Hudson commented on HBASE-18114:


FAILURE: Integrated in Jenkins build HBase-HBASE-14614 #253 (See 
[https://builds.apache.org/job/HBase-HBASE-14614/253/])
HBASE-18114 Update the config of TestAsync*AdminApi to make test stable (zghao: 
rev 97484f2aaf3809137fd50180164dc2c741d05ee8)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncReplicationAdminApi.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncNamespaceAdminApi.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncSnapshotAdminApi.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncTableAdminApi.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncAdminBase.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncProcedureAdminApi.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncRegionAdminApi.java


> Update the config of TestAsync*AdminApi to make test stable
> ---
>
> Key: HBASE-18114
> URL: https://issues.apache.org/jira/browse/HBASE-18114
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-18114-v1.patch, HBASE-18114-v1.patch, 
> HBASE-18114-v1.patch, HBASE-18114-v2.patch, HBASE-18114-v2.patch, 
> HBASE-18114-v2.patch, HBASE-18114-v2.patch, HBASE-18114-v2.patch, 
> HBASE-18114-v2.patch
>
>
> {code}
> 2017-05-25 17:56:34,967 INFO  
> [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=50801] 
> master.HMaster$11(2297): Client=hao//127.0.0.1 disable testModifyColumnFamily
> 2017-05-25 17:56:37,974 INFO  [RpcClient-timer-pool1-t1] 
> client.AsyncHBaseAdmin$TableProcedureBiConsumer(2219): Operation: DISABLE, 
> Table Name: default:testModifyColumnFamily failed with Failed after 
> attempts=3, exceptions: 
> Thu May 25 17:56:35 CST 2017, , java.io.IOException: Call to 
> localhost/127.0.0.1:50801 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=294, waitTime=1008, 
> rpcTimeout=1000
> Thu May 25 17:56:37 CST 2017, , java.io.IOException: Call to 
> localhost/127.0.0.1:50801 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=295, waitTime=1299, 
> rpcTimeout=1000
> Thu May 25 17:56:37 CST 2017, , java.io.IOException: Call to 
> localhost/127.0.0.1:50801 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=296, waitTime=668, 
> rpcTimeout=660
> 2017-05-25 17:56:38,936 DEBUG 
> [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=50801] 
> procedure2.ProcedureExecutor(788): Stored procId=15, owner=hao, 
> state=RUNNABLE:DISABLE_TABLE_PREPARE, DisableTableProcedure 
> table=testModifyColumnFamily
> {code}
> For this disable table procedure, the master returns the procedure id when it 
> submits the procedure to the ProcedureExecutor. The procedure above took 4 
> seconds to submit, so the disable table call failed because the rpc timeout 
> is 1 second and the retry number is 3.
> For admin operations, I think we don't need to change the default timeout 
> config in unit tests, and the retry is not needed either. (Or we can set 
> retries > 1 to test the nonce handling.) Meanwhile, the default timeout is 60 
> seconds, so the test type may need to change to LargeTests.
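A back-of-envelope model of the arithmetic above (an illustrative sketch, not the actual HBase client retry code): each RPC attempt waits at most the rpc timeout, so with a fixed server-side latency, retrying never helps. The client simply burns attempts times timeout before giving up, roughly 3 seconds here, while the procedure submission took about 4 seconds, so every attempt had to time out.

```java
// Illustrative model of attempt-based RPC retries with a fixed timeout.
class RetryMath {
    // Milliseconds the client spends before giving up entirely
    // (simplification: retries assumed back-to-back with no pause).
    static long timeBeforeGivingUpMs(long rpcTimeoutMs, int attempts) {
        return rpcTimeoutMs * attempts;
    }

    // A single attempt succeeds only if the server answers within the timeout;
    // with deterministic latency, the attempt count is irrelevant.
    static boolean attemptSucceeds(long serverLatencyMs, long rpcTimeoutMs) {
        return serverLatencyMs <= rpcTimeoutMs;
    }
}
```

Under this model, the log's scenario (1000 ms timeout, 3 attempts, ~4000 ms submission) fails every attempt, while the 60-second default timeout mentioned above would have let a single attempt succeed.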





[jira] [Commented] (HBASE-14614) Procedure v2: Core Assignment Manager

2017-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027643#comment-16027643
 ] 

Hudson commented on HBASE-14614:


FAILURE: Integrated in Jenkins build HBase-HBASE-14614 #253 (See 
[https://builds.apache.org/job/HBase-HBASE-14614/253/])
HBASE-14614 Procedure v2 - Core Assignment Manager (Matteo Bertozzi) (stack: 
rev 657a5d46b4ac38cc173128acef3fbafd76a687a1)
* (edit) hbase-client/src/main/java/org/apache/hadoop/hbase/ClusterStatus.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/SimpleRpcServer.java
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/AssignProcedure.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionFileSystem.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestMasterProcedureEvents.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncTableGetMultiThreadedWithBasicCompaction.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestSimpleRpcScheduler.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/procedure/SimpleMasterProcedureManager.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompaction.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/assignment/AssignmentTestingUtil.java
* (edit) 
hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/MasterProtos.java
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/NoSuchProcedureException.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/util/hbck/TestOfflineMetaRebuildOverlap.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMetaShutdownHandler.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterWalManager.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController3.java
* (edit) 
hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/AbstractProcedureScheduler.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestDeleteTableProcedure.java
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/Util.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/SimpleLoadBalancer.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/FavoredStochasticBalancer.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentListener.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/RegionStateListener.java
* (edit) 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/master/MetricsAssignmentManagerSourceImpl.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestFavoredStochasticBalancerPickers.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MetricsAssignmentManager.java
* (edit) hbase-it/src/test/java/org/apache/hadoop/hbase/HBaseClusterManager.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/StochasticLoadBalancer.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterMetaBootstrap.java
* (delete) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/BulkAssigner.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterCoprocessorHost.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestTruncateTableProcedure.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionPlan.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/mob/MobFileCache.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestCreateTableProcedure.java
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/assignment/TestAssignmentManager.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterDumpServlet.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFiles.java
* (edit) hbase-protocol-shaded/src/main/protobuf/Master.proto
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactSplit.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMaster.java
* (edit) 
hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/QuotaProtos.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/TableStateManager.java
* (edit) 

[jira] [Commented] (HBASE-18115) Move SaslServer creation to HBaseSaslRpcServer

2017-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027641#comment-16027641
 ] 

Hudson commented on HBASE-18115:


FAILURE: Integrated in Jenkins build HBase-HBASE-14614 #253 (See 
[https://builds.apache.org/job/HBase-HBASE-14614/253/])
HBASE-18115 Move SaslServer creation to HBaseSaslRpcServer (zhangduo: rev 
efc7edc81a0d9da486ca37b8314baf5a7e75bc86)
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/security/SaslUtil.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/HBaseSaslRpcServer.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/ServerRpcConnection.java


> Move SaslServer creation to HBaseSaslRpcServer
> --
>
> Key: HBASE-18115
> URL: https://issues.apache.org/jira/browse/HBASE-18115
> Project: HBase
>  Issue Type: Sub-task
>  Components: IPC/RPC
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-18115.patch
>
>






[jira] [Commented] (HBASE-18042) Client Compatibility breaks between versions 1.2 and 1.3

2017-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027642#comment-16027642
 ] 

Hudson commented on HBASE-18042:


FAILURE: Integrated in Jenkins build HBase-HBASE-14614 #253 (See 
[https://builds.apache.org/job/HBase-HBASE-14614/253/])
HBASE-18042 Client Compatibility breaks between versions 1.2 and 1.3 (zhangduo: 
rev 6846b03944d7e72301b825d4d118732c0ca65577)
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestScanWithoutFetchingData.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAlwaysSetScannerId.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestLeaseRenewal.java


> Client Compatibility breaks between versions 1.2 and 1.3
> 
>
> Key: HBASE-18042
> URL: https://issues.apache.org/jira/browse/HBASE-18042
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, scan
>Affects Versions: 2.0.0, 1.4.0, 1.3.1
>Reporter: Karan Mehta
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.0.0, 1.4.0, 1.3.2
>
> Attachments: HBASE-18042-branch-1.3.patch, 
> HBASE-18042-branch-1.3-v1.patch, HBASE-18042-branch-1.patch, 
> HBASE-18042-branch-1.patch, HBASE-18042-branch-1-v1.patch, 
> HBASE-18042-branch-1-v1.patch, HBASE-18042.patch, HBASE-18042-v1.patch, 
> HBASE-18042-v2.patch
>
>
> OpenTSDB uses AsyncHBase as its client rather than the traditional HBase 
> client. Between versions 1.2 and 1.3, the {{ClientProtos}} changed: new 
> fields were added to the {{ScanResponse}} proto.
> A typical Scan request in 1.2 required the caller to make an OpenScanner 
> request, GetNextRows requests, and a CloseScanner request, driven by the 
> {{more_rows}} boolean field in the {{ScanResponse}} proto.
> However, in 1.3 a new parameter {{more_results_in_region}} was added, which 
> limits the results per region, so the client now has to manage sending the 
> requests for each region. Furthermore, if the results from a particular 
> region are exhausted, the {{ScanResponse}} will set 
> {{more_results_in_region}} to false while {{more_results}} can still be 
> true. Whenever the former is set to false, the {{RegionScanner}} is also 
> closed. 
> OpenTSDB makes an OpenScanner request and receives all its results in the 
> first {{ScanResponse}} itself, creating the condition described above. Since 
> {{more_rows}} is true, it proceeds to send the next request, at which point 
> {{RSRpcServices}} throws {{UnknownScannerException}}. Protobuf client 
> compatibility is maintained, but the expected behavior is modified.
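The 1.3-era client contract described above can be sketched as follows. The types and field names here are illustrative stand-ins, not the real generated {{ClientProtos}} API: a client must check *both* flags, and once {{more_results_in_region}} goes false it must open a fresh scanner on the next region instead of calling next() with the now-closed scanner id, which is exactly what earns AsyncHBase the {{UnknownScannerException}}.

```java
import java.util.ArrayList;
import java.util.List;

class ScanFlowSketch {
    // Illustrative stand-in for the ScanResponse proto fields discussed above.
    static class ScanResponse {
        final List<String> rows;
        final boolean moreResults;          // the scan has more rows somewhere
        final boolean moreResultsInRegion;  // THIS region's scanner still open
        ScanResponse(List<String> rows, boolean moreResults, boolean moreResultsInRegion) {
            this.rows = rows;
            this.moreResults = moreResults;
            this.moreResultsInRegion = moreResultsInRegion;
        }
    }

    interface Region {
        ScanResponse next();  // stands in for a ScanRequest on an open scanner id
    }

    // A flag-aware client loop: call next() only while the region reports an
    // open scanner; when moreResultsInRegion goes false, move on to the next
    // region (a fresh OpenScanner) instead of reusing the dead scanner id.
    static List<String> scanAll(List<Region> regions) {
        List<String> out = new ArrayList<>();
        for (Region region : regions) {
            ScanResponse resp;
            do {
                resp = region.next();
                out.addAll(resp.rows);
            } while (resp.moreResultsInRegion);
            if (!resp.moreResults) {
                break;  // whole scan exhausted; no need to touch later regions
            }
        }
        return out;
    }
}
```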





[jira] [Commented] (HBASE-15903) Delete Object

2017-05-27 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027627#comment-16027627
 ] 

Enis Soztutar commented on HBASE-15903:
---

bq. Can you elaborate the correct call ?
You should not call SetTimestamp() from the AddColumn or AddFamily variants. 
Calling SetTimestamp() will change the timestamp of the Delete object, which 
is not what we want. Please check the Java code. 
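The distinction being made can be modeled in a few lines. This is a simplified sketch of the semantics, not the real HBase Java or native-client Delete API: the Delete carries its own top-level timestamp, and the AddColumn/AddFamily variants that take a timestamp should record it on the cell only, never by routing through SetTimestamp().

```java
import java.util.ArrayList;
import java.util.List;

// Simplified model: per-cell timestamps vs. the Delete's own timestamp.
class DeleteSketch {
    static final long LATEST_TIMESTAMP = Long.MAX_VALUE;

    long deleteTs = LATEST_TIMESTAMP;              // timestamp of the whole Delete
    final List<long[]> cells = new ArrayList<>();  // {columnId, ts} pairs, illustrative

    // Correct shape: the per-column timestamp is stored with the cell only;
    // deleteTs is deliberately untouched.
    DeleteSketch addColumn(long columnId, long ts) {
        cells.add(new long[] { columnId, ts });
        return this;
    }

    // Changing this affects every family/column the Delete covers, which is
    // why the AddColumn variants must not call it internally.
    DeleteSketch setTimestamp(long ts) {
        this.deleteTs = ts;
        return this;
    }
}
```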

> Delete Object
> -
>
> Key: HBASE-15903
> URL: https://issues.apache.org/jira/browse/HBASE-15903
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sudeep Sunthankar
>Assignee: Ted Yu
> Attachments: 15903.v2.txt, 15903.v4.txt, 
> HBASE-15903.HBASE-14850.v1.patch
>
>
> Patch for creating Delete objects. These Delete objects are used by the Table 
> implementation to delete a rowkey from a table.





[jira] [Updated] (HBASE-14614) Procedure v2: Core Assignment Manager

2017-05-27 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-14614:
--
Attachment: HBASE-14614.master.046.patch

> Procedure v2: Core Assignment Manager
> -
>
> Key: HBASE-14614
> URL: https://issues.apache.org/jira/browse/HBASE-14614
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0
>Reporter: Stephen Yuan Jiang
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0
>
> Attachments: HBASE-14614.master.003.patch, 
> HBASE-14614.master.004.patch, HBASE-14614.master.005.patch, 
> HBASE-14614.master.006.patch, HBASE-14614.master.007.patch, 
> HBASE-14614.master.008.patch, HBASE-14614.master.009.patch, 
> HBASE-14614.master.010.patch, HBASE-14614.master.012.patch, 
> HBASE-14614.master.013.patch, HBASE-14614.master.014.patch, 
> HBASE-14614.master.015.patch, HBASE-14614.master.017.patch, 
> HBASE-14614.master.018.patch, HBASE-14614.master.019.patch, 
> HBASE-14614.master.020.patch, HBASE-14614.master.022.patch, 
> HBASE-14614.master.023.patch, HBASE-14614.master.024.patch, 
> HBASE-14614.master.025.patch, HBASE-14614.master.026.patch, 
> HBASE-14614.master.027.patch, HBASE-14614.master.028.patch, 
> HBASE-14614.master.029.patch, HBASE-14614.master.030.patch, 
> HBASE-14614.master.033.patch, HBASE-14614.master.038.patch, 
> HBASE-14614.master.039.patch, HBASE-14614.master.040.patch, 
> HBASE-14614.master.041.patch, HBASE-14614.master.042.patch, 
> HBASE-14614.master.043.patch, HBASE-14614.master.044.patch, 
> HBASE-14614.master.045.patch, HBASE-14614.master.045.patch, 
> HBASE-14614.master.046.patch
>
>
> New AssignmentManager implemented using proc-v2.
>  - AssignProcedure handles the assignment operation
>  - UnassignProcedure handles the unassign operation
>  - MoveRegionProcedure handles the move/balance operation
> Concurrent Assign operations are batched together and sent to the balancer.
> Concurrent Assign and Unassign operations that are ready to be sent to the 
> RS are batched together.
> This patch is an intermediate state where we add the new AM as 
> AssignmentManager2() to the master, to be reached by tests, but the new AM 
> will not be integrated with the rest of the system. Only the new AM 
> unit-tests will exercise the new assignment manager. The integration with 
> the master code is part of HBASE-14616
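The batching idea in the description can be sketched as follows. The names here are illustrative, not the actual AssignmentManager or LoadBalancer code: concurrent assign requests are queued, then drained as one batch handed to the balancer in a single round, instead of one balancer call per region.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of batching concurrent assigns for the balancer.
class AssignBatcher {
    private final List<String> pending = new ArrayList<>();

    synchronized void requestAssign(String regionName) {
        pending.add(regionName);  // callers just enqueue; no balancer call here
    }

    // Drains everything queued so far into a single balancer round.
    synchronized Map<String, String> assignBatch(List<String> servers) {
        Map<String, String> plan = new LinkedHashMap<>();
        for (int i = 0; i < pending.size(); i++) {
            // round-robin placement stands in for the real balancer's plan
            plan.put(pending.get(i), servers.get(i % servers.size()));
        }
        pending.clear();
        return plan;
    }
}
```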





[jira] [Commented] (HBASE-18027) Replication should respect RPC size limits when batching edits

2017-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027569#comment-16027569
 ] 

Hadoop QA commented on HBASE-18027:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 25s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
2s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} branch-1 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
56s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} branch-1 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 57s 
{color} | {color:red} hbase-server in branch-1 has 1 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} branch-1 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 6 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
16m 9s {color} | {color:green} The patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 89m 23s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 120m 58s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.client.TestMvccConsistentScanner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:58c504e |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12870218/HBASE-18027-branch-1.patch
 |
| JIRA Issue | HBASE-18027 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 4d60d71860ec 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HBASE-18066) Get with closest_row_before on "hbase:meta" can return empty Cell during region merge/split

2017-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027520#comment-16027520
 ] 

Hadoop QA commented on HBASE-18066:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 30s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
26s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 10s 
{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
32s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
23s {color} | {color:green} branch-1 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 3m 45s 
{color} | {color:red} hbase-server in branch-1 has 1 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
29m 38s {color} | {color:green} The patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 141m 47s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 193m 59s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.client.TestReplicasClient |
|   | hadoop.hbase.client.TestMvccConsistentScanner |
|   | hadoop.hbase.master.balancer.TestStochasticLoadBalancer2 |
|   | hadoop.hbase.client.TestFromClientSideWithCoprocessor |
|   | hadoop.hbase.regionserver.TestHRegion |
|   | hadoop.hbase.client.TestFromClientSide |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.03.0-ce Server=17.03.0-ce Image:yetus/hbase:58c504e |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12870213/HBASE-18066.branch-1.v3.patch
 |
| JIRA Issue | HBASE-18066 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux dc7e7bb1b5f6 4.8.3-std-1 #1 SMP Fri Oct 21 11:15:43 UTC 2016 
x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/hbase.sh |
| git revision | branch-1 / 1a37f3b |
| Default Java | 1.8.0_131 |
| findbugs | v3.0.0 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6982/artifact/patchprocess/branch-findbugs-hbase-server-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6982/artifact/patchprocess/patch-unit-hbase-server.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/6982/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6982/testReport/ |
| modules 

[jira] [Commented] (HBASE-14614) Procedure v2: Core Assignment Manager

2017-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027518#comment-16027518
 ] 

Hadoop QA commented on HBASE-14614:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 16s {color} 
| {color:red} HBASE-14614 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/0.3.0/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12870219/HBASE-14614.master.045.patch
 |
| JIRA Issue | HBASE-14614 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6985/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Procedure v2: Core Assignment Manager
> -
>
> Key: HBASE-14614
> URL: https://issues.apache.org/jira/browse/HBASE-14614
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0
>Reporter: Stephen Yuan Jiang
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0
>
> Attachments: HBASE-14614.master.003.patch, 
> HBASE-14614.master.004.patch, HBASE-14614.master.005.patch, 
> HBASE-14614.master.006.patch, HBASE-14614.master.007.patch, 
> HBASE-14614.master.008.patch, HBASE-14614.master.009.patch, 
> HBASE-14614.master.010.patch, HBASE-14614.master.012.patch, 
> HBASE-14614.master.013.patch, HBASE-14614.master.014.patch, 
> HBASE-14614.master.015.patch, HBASE-14614.master.017.patch, 
> HBASE-14614.master.018.patch, HBASE-14614.master.019.patch, 
> HBASE-14614.master.020.patch, HBASE-14614.master.022.patch, 
> HBASE-14614.master.023.patch, HBASE-14614.master.024.patch, 
> HBASE-14614.master.025.patch, HBASE-14614.master.026.patch, 
> HBASE-14614.master.027.patch, HBASE-14614.master.028.patch, 
> HBASE-14614.master.029.patch, HBASE-14614.master.030.patch, 
> HBASE-14614.master.033.patch, HBASE-14614.master.038.patch, 
> HBASE-14614.master.039.patch, HBASE-14614.master.040.patch, 
> HBASE-14614.master.041.patch, HBASE-14614.master.042.patch, 
> HBASE-14614.master.043.patch, HBASE-14614.master.044.patch, 
> HBASE-14614.master.045.patch, HBASE-14614.master.045.patch
>
>
> New AssignmentManager implemented using proc-v2.
>  - AssignProcedure handles assignment operations
>  - UnassignProcedure handles unassign operations
>  - MoveRegionProcedure handles move/balance operations
> Concurrent Assign operations are batched together and sent to the balancer.
> Concurrent Assign and Unassign operations that are ready to be sent to the RS
> are batched together.
> This patch is an intermediate state where we add the new AM to the master as
> AssignmentManager2() so it can be reached by tests, but the new AM is not yet
> integrated with the rest of the system. Only the new AM unit tests exercise
> the new assignment manager. Integration with the master code is part of
> HBASE-14616.
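The batching idea described above can be sketched as follows (hypothetical names, not the actual proc-v2 classes): concurrently requested assigns are collected and handed to the balancer in a single call, which returns a placement plan, instead of invoking the balancer once per region. A trivial round-robin stand-in plays the balancer here.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class AssignBatcher {
    /**
     * Batches a set of concurrently-requested region assignments and asks a
     * trivial round-robin "balancer" for a placement plan in one call,
     * instead of invoking the balancer once per region.
     */
    static Map<String, List<String>> roundRobin(List<String> regions, List<String> servers) {
        Map<String, List<String>> plan = new HashMap<>();
        for (String server : servers) {
            plan.put(server, new ArrayList<>());
        }
        // Spread regions over servers in request order.
        for (int i = 0; i < regions.size(); i++) {
            plan.get(servers.get(i % servers.size())).add(regions.get(i));
        }
        return plan;
    }
}
```

The real balancer is pluggable and stateful; the point of the sketch is only the one-call batching shape.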



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-14614) Procedure v2: Core Assignment Manager

2017-05-27 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-14614:
--
Attachment: HBASE-14614.master.045.patch

Retry

> Procedure v2: Core Assignment Manager
> -
>
> Key: HBASE-14614
> URL: https://issues.apache.org/jira/browse/HBASE-14614
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0
>Reporter: Stephen Yuan Jiang
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0
>
> Attachments: HBASE-14614.master.003.patch, 
> HBASE-14614.master.004.patch, HBASE-14614.master.005.patch, 
> HBASE-14614.master.006.patch, HBASE-14614.master.007.patch, 
> HBASE-14614.master.008.patch, HBASE-14614.master.009.patch, 
> HBASE-14614.master.010.patch, HBASE-14614.master.012.patch, 
> HBASE-14614.master.013.patch, HBASE-14614.master.014.patch, 
> HBASE-14614.master.015.patch, HBASE-14614.master.017.patch, 
> HBASE-14614.master.018.patch, HBASE-14614.master.019.patch, 
> HBASE-14614.master.020.patch, HBASE-14614.master.022.patch, 
> HBASE-14614.master.023.patch, HBASE-14614.master.024.patch, 
> HBASE-14614.master.025.patch, HBASE-14614.master.026.patch, 
> HBASE-14614.master.027.patch, HBASE-14614.master.028.patch, 
> HBASE-14614.master.029.patch, HBASE-14614.master.030.patch, 
> HBASE-14614.master.033.patch, HBASE-14614.master.038.patch, 
> HBASE-14614.master.039.patch, HBASE-14614.master.040.patch, 
> HBASE-14614.master.041.patch, HBASE-14614.master.042.patch, 
> HBASE-14614.master.043.patch, HBASE-14614.master.044.patch, 
> HBASE-14614.master.045.patch, HBASE-14614.master.045.patch
>
>
> New AssignmentManager implemented using proc-v2.
>  - AssignProcedure handles assignment operations
>  - UnassignProcedure handles unassign operations
>  - MoveRegionProcedure handles move/balance operations
> Concurrent Assign operations are batched together and sent to the balancer.
> Concurrent Assign and Unassign operations that are ready to be sent to the RS
> are batched together.
> This patch is an intermediate state where we add the new AM to the master as
> AssignmentManager2() so it can be reached by tests, but the new AM is not yet
> integrated with the rest of the system. Only the new AM unit tests exercise
> the new assignment manager. Integration with the master code is part of
> HBASE-14616.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18027) Replication should respect RPC size limits when batching edits

2017-05-27 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027513#comment-16027513
 ] 

Andrew Purtell commented on HBASE-18027:


Updated patches. Added some trace-level logging for debugging, if needed.

> Replication should respect RPC size limits when batching edits
> --
>
> Key: HBASE-18027
> URL: https://issues.apache.org/jira/browse/HBASE-18027
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.4.0, 1.3.1
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-18027-branch-1.patch, HBASE-18027-branch-1.patch, 
> HBASE-18027-branch-1.patch, HBASE-18027-branch-1.patch, HBASE-18027.patch, 
> HBASE-18027.patch, HBASE-18027.patch, HBASE-18027.patch, HBASE-18027.patch, 
> HBASE-18027.patch, HBASE-18027.patch, HBASE-18027.patch
>
>
> In HBaseInterClusterReplicationEndpoint#replicate we try to replicate in 
> batches. We create N lists. N is the minimum of configured replicator 
> threads, number of 100-waledit batches, or number of current sinks. Every 
> pending entry in the replication context is then placed in order by hash of 
> encoded region name into one of these N lists. Each of the N lists is then 
> sent all at once in one replication RPC. We do not test if the sum of data in 
> each N list will exceed RPC size limits. This code presumes each individual 
> edit is reasonably small. Not checking for aggregate size while assembling 
> the lists into RPCs is an oversight and can lead to replication failure when 
> that assumption is violated.
> We can fix this by generating as many replication RPC calls as we need to 
> drain a list, keeping each RPC under limit, instead of assuming the whole 
> list will fit in one.
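The proposed fix can be sketched as follows (a minimal illustration using plain integer entry sizes; the real change would live in HBaseInterClusterReplicationEndpoint#replicate): drain each per-sink list into consecutive sub-batches whose summed serialized size stays at or under the RPC limit, and send one RPC per sub-batch instead of one RPC per list.

```java
import java.util.ArrayList;
import java.util.List;

public class BatchSplitter {
    /**
     * Splits entries (represented here by their serialized sizes) into
     * consecutive sub-batches whose summed sizes stay at or under
     * maxBatchBytes. A single oversized entry still gets its own batch,
     * so nothing is dropped.
     */
    static List<List<Integer>> splitBySize(List<Integer> entrySizes, long maxBatchBytes) {
        List<List<Integer>> batches = new ArrayList<>();
        List<Integer> current = new ArrayList<>();
        long currentBytes = 0;
        for (int size : entrySizes) {
            if (!current.isEmpty() && currentBytes + size > maxBatchBytes) {
                batches.add(current);        // flush the full batch
                current = new ArrayList<>();
                currentBytes = 0;
            }
            current.add(size);
            currentBytes += size;
        }
        if (!current.isEmpty()) {
            batches.add(current);
        }
        return batches;
    }
}
```

Each resulting sub-batch would then go out as its own replication RPC, keeping every call under the size limit.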



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-18027) Replication should respect RPC size limits when batching edits

2017-05-27 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-18027:
---
Attachment: HBASE-18027-branch-1.patch
HBASE-18027.patch

> Replication should respect RPC size limits when batching edits
> --
>
> Key: HBASE-18027
> URL: https://issues.apache.org/jira/browse/HBASE-18027
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.4.0, 1.3.1
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-18027-branch-1.patch, HBASE-18027-branch-1.patch, 
> HBASE-18027-branch-1.patch, HBASE-18027-branch-1.patch, HBASE-18027.patch, 
> HBASE-18027.patch, HBASE-18027.patch, HBASE-18027.patch, HBASE-18027.patch, 
> HBASE-18027.patch, HBASE-18027.patch, HBASE-18027.patch
>
>
> In HBaseInterClusterReplicationEndpoint#replicate we try to replicate in 
> batches. We create N lists. N is the minimum of configured replicator 
> threads, number of 100-waledit batches, or number of current sinks. Every 
> pending entry in the replication context is then placed in order by hash of 
> encoded region name into one of these N lists. Each of the N lists is then 
> sent all at once in one replication RPC. We do not test if the sum of data in 
> each N list will exceed RPC size limits. This code presumes each individual 
> edit is reasonably small. Not checking for aggregate size while assembling 
> the lists into RPCs is an oversight and can lead to replication failure when 
> that assumption is violated.
> We can fix this by generating as many replication RPC calls as we need to 
> drain a list, keeping each RPC under limit, instead of assuming the whole 
> list will fit in one.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-18027) Replication should respect RPC size limits when batching edits

2017-05-27 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-18027:
---
Fix Version/s: (was: 1.3.2)
   Status: Patch Available  (was: Open)

> Replication should respect RPC size limits when batching edits
> --
>
> Key: HBASE-18027
> URL: https://issues.apache.org/jira/browse/HBASE-18027
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 1.3.1, 2.0.0, 1.4.0
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-18027-branch-1.patch, HBASE-18027-branch-1.patch, 
> HBASE-18027-branch-1.patch, HBASE-18027-branch-1.patch, HBASE-18027.patch, 
> HBASE-18027.patch, HBASE-18027.patch, HBASE-18027.patch, HBASE-18027.patch, 
> HBASE-18027.patch, HBASE-18027.patch, HBASE-18027.patch
>
>
> In HBaseInterClusterReplicationEndpoint#replicate we try to replicate in 
> batches. We create N lists. N is the minimum of configured replicator 
> threads, number of 100-waledit batches, or number of current sinks. Every 
> pending entry in the replication context is then placed in order by hash of 
> encoded region name into one of these N lists. Each of the N lists is then 
> sent all at once in one replication RPC. We do not test if the sum of data in 
> each N list will exceed RPC size limits. This code presumes each individual 
> edit is reasonably small. Not checking for aggregate size while assembling 
> the lists into RPCs is an oversight and can lead to replication failure when 
> that assumption is violated.
> We can fix this by generating as many replication RPC calls as we need to 
> drain a list, keeping each RPC under limit, instead of assuming the whole 
> list will fit in one.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18042) Client Compatibility breaks between versions 1.2 and 1.3

2017-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027494#comment-16027494
 ] 

Hudson commented on HBASE-18042:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #3086 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/3086/])
HBASE-18042 Client Compatibility breaks between versions 1.2 and 1.3 (zhangduo: 
rev 6846b03944d7e72301b825d4d118732c0ca65577)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestScanWithoutFetchingData.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAlwaysSetScannerId.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestLeaseRenewal.java


> Client Compatibility breaks between versions 1.2 and 1.3
> 
>
> Key: HBASE-18042
> URL: https://issues.apache.org/jira/browse/HBASE-18042
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, scan
>Affects Versions: 2.0.0, 1.4.0, 1.3.1
>Reporter: Karan Mehta
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.0.0, 1.4.0, 1.3.2
>
> Attachments: HBASE-18042-branch-1.3.patch, 
> HBASE-18042-branch-1.3-v1.patch, HBASE-18042-branch-1.patch, 
> HBASE-18042-branch-1.patch, HBASE-18042-branch-1-v1.patch, 
> HBASE-18042-branch-1-v1.patch, HBASE-18042.patch, HBASE-18042-v1.patch, 
> HBASE-18042-v2.patch
>
>
> OpenTSDB uses AsyncHBase as its client, rather than the traditional HBase
> client. From version 1.2 to 1.3, the {{ClientProtos}} changed: new fields
> were added to the {{ScanResponse}} proto.
> A typical Scan request in 1.2 requires the caller to make an OpenScanner
> request, GetNextRows requests, and a CloseScanner request, driven by the
> {{more_rows}} boolean field in the {{ScanResponse}} proto.
> However, in 1.3 a new parameter, {{more_results_in_region}}, was added, which
> limits the results per region, so the client now has to manage sending all
> the requests for each region. Furthermore, if the results from a particular
> region are exhausted, the {{ScanResponse}} will set
> {{more_results_in_region}} to false while {{more_results}} can still be true.
> Whenever the former is set to false, the {{RegionScanner}} is also closed.
> OpenTSDB makes an OpenScanner request and receives all its results in the
> first {{ScanResponse}} itself, creating the condition described above. Since
> {{more_rows}} is true, it proceeds to send the next request, at which point
> {{RSRpcServices}} throws {{UnknownScannerException}}. Protobuf client
> compatibility is maintained, but the expected behavior is modified.
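The compatibility hazard reduces to a small decision table (a hypothetical sketch, not the actual AsyncHBase code): a client talking to a 1.3+ server must stop fetching from the current region as soon as more_results_in_region is false, even while more_results is still true, because the server has already closed the RegionScanner.

```java
public class ScanStep {
    enum Next { FETCH_MORE, OPEN_NEXT_REGION, DONE }

    /** Decide the client's next action from the two ScanResponse booleans. */
    static Next next(boolean moreResults, boolean moreResultsInRegion) {
        if (!moreResults) {
            return Next.DONE;              // the scan is fully exhausted
        }
        if (!moreResultsInRegion) {
            // The server has closed the RegionScanner; issuing another
            // next() on this scanner id yields UnknownScannerException,
            // which is exactly the failure OpenTSDB hits.
            return Next.OPEN_NEXT_REGION;
        }
        return Next.FETCH_MORE;
    }
}
```

A 1.2-era client that only looks at the first boolean implicitly treats (true, false) as FETCH_MORE, which is the broken path.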



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (HBASE-18124) Make Hbase Communication Support Virtual Network

2017-05-27 Thread liubangchen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027286#comment-16027286
 ] 

liubangchen edited comment on HBASE-18124 at 5/27/17 3:26 PM:
--

Hi [~ted_yu], I think this feature is different from HBASE-12954. Our
requirement is:
1. use hbase.regionserver.hostname or hbase.master.hostname to locate a server
on the physical network
2. use another address to locate the server on the virtual network
3. both the vip (virtual IP address) and the pip (physical IP address) must be
published in ZooKeeper

I am not good at English; I will modify the description later, thanks.


was (Author: liubangchen):
Hi,[~ted_yu],I think this feature is different with HBASE-12954,our requirement 
is like this:
1. use hbase.regionserver.hostname or hbase.master.hostname or locate server in 
physical network
2. use other address to locate server in virtual network
3. must  vip (virtual ip address ) and pip (physical ip address) to be 
published in zookeeper

I am not good at English,I will modify the description later ,thanks.

> Make Hbase Communication Support Virtual Network
> 
>
> Key: HBASE-18124
> URL: https://issues.apache.org/jira/browse/HBASE-18124
> Project: HBase
>  Issue Type: New Feature
>  Components: Client, hbase, master
>Reporter: liubangchen
>Assignee: liubangchen
> Attachments: 1.jpg, HBASE-18124.patch, HBASE-18124.pdf
>
>
> HBase has only one way to locate the HMaster or an HRegionServer, unlike
> HDFS, which has two ways to locate a DataNode: by name or by hostname.
> I'm a cloud computing engineer in charge of offering HBase as a cloud
> service.
> Our HBase cloud service architecture is shown in 1.jpg:
> 1. VM
> User's HBase clients run in VMs and use virtual IP addresses to access the
> HBase cluster.
> 2. NAT
> Network Address Translation maps the vip (virtual network address) to the
> pip (physical network address).
> 3. HBase cluster service
>  The HBase cluster service runs on the physical network.
> Problem
>  VMs use the vip to communicate with the HBase cluster, but HBase has only
> one way to reach each server: the host set by the parameters
> hbase.regionserver.hostname or hbase.master.hostname. When the HMaster
> starts up, it publishes the master address and the meta region server
> address in ZooKeeper, and that address is the pip, because the HBase cluster
> runs on the physical network. The problem is that the address a VM gets from
> ZooKeeper is the pip, not the vip. If I set the host to the vip, it causes
> problems for communication inside the HBase cluster, so two addresses, set
> by parameters, are needed for communication.
> Solution
> 1. Protocol extension
>    Change the ServerName struct to:
>   {code}
>   message ServerName {
>     required string host_name = 1;
>     optional uint32 port = 2;
>     optional uint64 start_code = 3;
>     optional string name = 4;  // new field
>   }
>   {code}
>    It will be published in ZooKeeper. Clients can choose host_name or name
> via the parameter hbase.client.use.hostname.
> 2. Meta table extension
>    Add a column to hbase:meta named info:namelocation. The original column
> info:server is configured with hbase.regionserver.hostname, and the new
> column info:namelocation is configured with hbase.regionserver.servername.
> 3. hbase-server
>    When a regionserver starts up, hbase.regionserver.hostname is configured
> as the pip and hbase.regionserver.servername as the vip;
> hbase.regionserver.hostname is then written into ServerName's host_name and
> hbase.regionserver.servername into ServerName's name. When the hmaster
> starts up, hbase.hmaster.hostname is configured as the pip and
> hbase.hmaster.servername as the vip; hbase.hmaster.hostname is then written
> into ServerName's host_name and hbase.hmaster.servername into ServerName's
> name.
> 4. hbase-client
>    Add a parameter named hbase.client.use.hostname to choose the vip or the
> pip.
> This patch is based on HBase 1.3.0.
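The address-selection rule in the proposal can be sketched like this (a hypothetical helper; field names follow the extended ServerName message quoted above): with hbase.client.use.hostname=true the client dials the published host_name (pip), otherwise it prefers the new name field (vip), falling back to host_name when no name was published.

```java
public class AddressChooser {
    /**
     * Picks which published address a client should dial: the physical
     * host_name when hbase.client.use.hostname=true, or the virtual-network
     * name when the flag is false and a name was actually published.
     */
    static String choose(String hostName, String name, boolean useHostname) {
        if (useHostname || name == null || name.isEmpty()) {
            return hostName;  // physical address (pip), the pre-patch behavior
        }
        return name;          // virtual address (vip)
    }
}
```

Servers inside the physical network would set the flag to true and keep talking over the pip, while VM-side clients would set it to false and dial the vip.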



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-18124) Make Hbase Communication Support Virtual Network

2017-05-27 Thread liubangchen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liubangchen updated HBASE-18124:

Description: 
HBase has only one way to locate the HMaster or an HRegionServer, unlike HDFS,
which has two ways to locate a DataNode: by name or by hostname.

I'm a cloud computing engineer in charge of offering HBase as a cloud service.
Our HBase cloud service architecture is shown in 1.jpg:

1. VM
User's HBase clients run in VMs and use virtual IP addresses to access the
HBase cluster.
2. NAT
Network Address Translation maps the vip (virtual network address) to the pip
(physical network address).
3. HBase cluster service
The HBase cluster service runs on the physical network.

Problem
VMs use the vip to communicate with the HBase cluster, but HBase has only one
way to reach each server: the host set by the parameters
hbase.regionserver.hostname or hbase.master.hostname. When the HMaster starts
up, it publishes the master address and the meta region server address in
ZooKeeper, and that address is the pip, because the HBase cluster runs on the
physical network. The problem is that the address a VM gets from ZooKeeper is
the pip, not the vip. If I set the host to the vip, it causes problems for
communication inside the HBase cluster, so two addresses, set by parameters,
are needed for communication.

Solution
1. Protocol extension
   Change the ServerName struct to:
   {code}
   message ServerName {
     required string host_name = 1;
     optional uint32 port = 2;
     optional uint64 start_code = 3;
     optional string name = 4;  // new field
   }
   {code}
   It will be published in ZooKeeper. Clients can choose host_name or name via
the parameter hbase.client.use.hostname.

2. Meta table extension
   Add a column to hbase:meta named info:namelocation. The original column
info:server is configured with hbase.regionserver.hostname, and the new column
info:namelocation is configured with hbase.regionserver.servername.
3. hbase-server
   When a regionserver starts up, hbase.regionserver.hostname is configured as
the pip and hbase.regionserver.servername as the vip;
hbase.regionserver.hostname is then written into ServerName's host_name and
hbase.regionserver.servername into ServerName's name. When the hmaster starts
up, hbase.hmaster.hostname is configured as the pip and
hbase.hmaster.servername as the vip; hbase.hmaster.hostname is then written
into ServerName's host_name and hbase.hmaster.servername into ServerName's
name.
4. hbase-client
   Add a parameter named hbase.client.use.hostname to choose the vip or the
pip.

This patch is based on HBase 1.3.0.

  was:
Hbase only have one way to locate hmaster or hregionserver not like hdfs has 
two way to locate datanode use by name or hostname.

I’m a engineer of  cloud computing , and I’m in charge of to make Hbase as a 
cloud service,when we make hbase as a cloud service we need  hbase support 
other way to support locate hmaster or hregionserver
Our Hbase cloud service architectue shown as follows 1.jpg

1.VM
User’s Hbase client work in vm and use virtual ip address to access hbase 
cluster.
2.NAT
   Network Address Translation, vip(Virtual Network Address) to pip (Physical 
Network Address)
3. HbaseCluster Service
 HbaseCluster Service work in physical network

Problem
1.  View on vm
  On vm side vm use vip to communication,but hbase have only one way to 
communication use struct named
  ServerName. When Hmaster startup will store master address and meta 
region server address in zookeeper, 
   then the address is pip(Physical Network Address)   because hbase 
cluster work in physical network . when vm 
  get the address from zookeeper will not work because   vm use vip to 
communication,one way to  solve this is to 
  make physical machine host as vip like 192.168.0.1,but is not better to 
make this.
2.  View on Physical machine
Physical machine use pip to communication

Solution
1.   protocol extend change proto message to below:
  {code}
  message ServerName {
    required string host_name = 1;
    optional uint32 port = 2;
    optional uint64 start_code = 3;
    optional string name = 4;
  }
  {code}

 add a filed named name like hdfs’s datablock location
2.   metatable extend 
   add column to hbase:meta named info:namelocation
3.   hbase-server
  add params 
 {code}
 <property>
   <name>hbase.regionserver.servername</name>
   <value>10.0.1.1</value>
 </property>
  {code}
  to regionserver namelocation
  add params
 {code}
 <property>
   <name>hbase.master.servername</name>
   <value>10.0.1.2</value>
 </property>
 {code}
   to set master namelocation
4.   hbase-client
  add params 
{code}
 

[jira] [Updated] (HBASE-18124) Make Hbase Communication Support Virtual Network

2017-05-27 Thread liubangchen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liubangchen updated HBASE-18124:

Attachment: HBASE-18124.pdf

> Make Hbase Communication Support Virtual Network
> 
>
> Key: HBASE-18124
> URL: https://issues.apache.org/jira/browse/HBASE-18124
> Project: HBase
>  Issue Type: New Feature
>  Components: Client, hbase, master
>Reporter: liubangchen
>Assignee: liubangchen
> Attachments: 1.jpg, HBASE-18124.patch, HBASE-18124.pdf
>
>
> Hbase only have one way to locate hmaster or hregionserver not like hdfs has 
> two way to locate datanode use by name or hostname.
> I’m a engineer of  cloud computing , and I’m in charge of to make Hbase as a 
> cloud service,when we make hbase as a cloud service we need  hbase support 
> other way to support locate hmaster or hregionserver
> Our Hbase cloud service architectue shown as follows 1.jpg
> 1.VM
> User’s Hbase client work in vm and use virtual ip address to access hbase 
> cluster.
> 2.NAT
>Network Address Translation, vip(Virtual Network Address) to pip (Physical 
> Network Address)
> 3. HbaseCluster Service
>  HbaseCluster Service work in physical network
> Problem
> 1.  View on vm
>   On vm side vm use vip to communication,but hbase have only one way 
> to communication use struct named
>   ServerName. When Hmaster startup will store master address and meta 
> region server address in zookeeper, 
>then the address is pip(Physical Network Address)   because hbase 
> cluster work in physical network . when vm 
>   get the address from zookeeper will not work because   vm use vip to 
> communication,one way to  solve this is to 
>   make physical machine host as vip like 192.168.0.1,but is not better to 
> make this.
> 2.  View on Physical machine
> Physical machine use pip to communication
> Solution
> 1.   protocol extend change proto message to below:
>   {code}
>   message ServerName {
>     required string host_name = 1;
>     optional uint32 port = 2;
>     optional uint64 start_code = 3;
>     optional string name = 4;
>   }
>   {code}
>  add a filed named name like hdfs’s datablock location
> 2.   metatable extend 
>add column to hbase:meta named info:namelocation
> 3.   hbase-server
>   add params 
>  {code}
>  <property>
>    <name>hbase.regionserver.servername</name>
>    <value>10.0.1.1</value>
>  </property>
>   {code}
>   to regionserver namelocation
>   add params
>  {code}
>  <property>
>    <name>hbase.master.servername</name>
>    <value>10.0.1.2</value>
>  </property>
>  {code}
>to set master namelocation
> 4.   hbase-client
>   add params 
> {code}
> <property>
>   <name>hbase.client.use.hostname</name>
>   <value>true</value>
> </property>
> {code}
>  to choose which address to use
> This patch is base on Hbase-1.3.0



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-18124) Make Hbase Communication Support Virtual Network

2017-05-27 Thread liubangchen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liubangchen updated HBASE-18124:

Attachment: (was: HBASE-18124.pdf)

> Make Hbase Communication Support Virtual Network
> 
>
> Key: HBASE-18124
> URL: https://issues.apache.org/jira/browse/HBASE-18124
> Project: HBase
>  Issue Type: New Feature
>  Components: Client, hbase, master
>Reporter: liubangchen
>Assignee: liubangchen
> Attachments: 1.jpg, HBASE-18124.patch
>
>
> HBase has only one way to locate the HMaster or an HRegionServer, unlike
> HDFS, which can locate a DataNode in two ways: by name or by hostname.
> I am a cloud computing engineer in charge of offering HBase as a cloud
> service; to do that, HBase needs another way to locate the HMaster and the
> HRegionServers.
> Our HBase cloud service architecture is shown in 1.jpg:
> 1. VM
>   The user's HBase client runs in a VM and uses a virtual IP address to
> access the HBase cluster.
> 2. NAT
>   Network Address Translation maps a vip (virtual network address) to a
> pip (physical network address).
> 3. HBase cluster service
>   The HBase cluster runs on the physical network.
> Problem
> 1. View from the VM
>   On the VM side, communication uses vips, but HBase can only address a
> server through the struct named ServerName. When the HMaster starts up, it
> stores the master address and the meta region server address in ZooKeeper;
> because the cluster runs on the physical network, those addresses are pips
> (physical network addresses). When a VM reads such an address from
> ZooKeeper it cannot connect, since the VM communicates via vips. One
> workaround is to give the physical host a vip such as 192.168.0.1, but
> that is not a good solution.
> 2. View from the physical machine
>   Physical machines communicate using pips.
> Solution
> 1. Protocol extension: change the proto message as below:
> {code}
> message ServerName {
>   required string host_name = 1;
>   optional uint32 port = 2;
>   optional uint64 start_code = 3;
>   optional string name = 4;
> }
> {code}
>   This adds a field named {{name}}, similar to HDFS's data block location.
> 2. Meta table extension
>   Add a column named info:namelocation to hbase:meta.
> 3. hbase-server
>   Add the parameter hbase.regionserver.servername:
> {code}
> <property>
>   <name>hbase.regionserver.servername</name>
>   <value>10.0.1.1</value>
> </property>
> {code}
>   to set the region server's name location, and the parameter
> hbase.master.servername:
> {code}
> <property>
>   <name>hbase.master.servername</name>
>   <value>10.0.1.2</value>
> </property>
> {code}
>   to set the master's name location.
> 4. hbase-client
>   Add the parameter hbase.client.use.hostname:
> {code}
> <property>
>   <name>hbase.client.use.hostname</name>
>   <value>true</value>
> </property>
> {code}
>   to choose which address the client uses.
> This patch is based on HBase 1.3.0.
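The client-side choice the proposed hbase.client.use.hostname flag implies can be sketched as below. The ServerName class here is a hand-rolled stand-in for the extended proto message, and the direction of the flag (true meaning "keep using the physical host name") is an assumption, since the description does not pin it down:

```java
// Hypothetical sketch of the proposed address selection; not HBase's actual API.
public final class ServerAddressChooser {

    /** Minimal stand-in for the extended ServerName proto message. */
    static final class ServerName {
        final String hostName; // existing field: physical address (pip)
        final String name;     // proposed field 4: virtual address (vip)

        ServerName(String hostName, String name) {
            this.hostName = hostName;
            this.name = name;
        }
    }

    /** Pick the address a client should dial. */
    static String chooseAddress(ServerName sn, boolean useHostname) {
        // Fall back to the physical host name when the new field is absent,
        // which keeps old servers working with new clients.
        if (useHostname || sn.name == null || sn.name.isEmpty()) {
            return sn.hostName;
        }
        return sn.name;
    }

    public static void main(String[] args) {
        ServerName sn = new ServerName("10.0.1.1", "vip-rs-1");
        System.out.println(chooseAddress(sn, true));  // physical address
        System.out.println(chooseAddress(sn, false)); // virtual address
    }
}
```

The fallback keeps the change backward compatible: a server that never populates the new field behaves exactly as today.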





[jira] [Updated] (HBASE-18124) Make Hbase Communication Support Virtual Network

2017-05-27 Thread liubangchen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liubangchen updated HBASE-18124:

Summary: Make Hbase Communication Support Virtual Network  (was: Add 
Property name Of Strcut ServerName To Locate HMaster Or HRegionServer)

> Make Hbase Communication Support Virtual Network
> 
>
> Key: HBASE-18124
> URL: https://issues.apache.org/jira/browse/HBASE-18124
> Project: HBase
>  Issue Type: New Feature
>  Components: Client, hbase, master
>Reporter: liubangchen
>Assignee: liubangchen
> Attachments: 1.jpg, HBASE-18124.patch, HBASE-18124.pdf
>
>
> HBase has only one way to locate the HMaster or an HRegionServer, unlike
> HDFS, which can locate a DataNode in two ways: by name or by hostname.
> I am a cloud computing engineer in charge of offering HBase as a cloud
> service; to do that, HBase needs another way to locate the HMaster and the
> HRegionServers.
> Our HBase cloud service architecture is shown in 1.jpg:
> 1. VM
>   The user's HBase client runs in a VM and uses a virtual IP address to
> access the HBase cluster.
> 2. NAT
>   Network Address Translation maps a vip (virtual network address) to a
> pip (physical network address).
> 3. HBase cluster service
>   The HBase cluster runs on the physical network.
> Problem
> 1. View from the VM
>   On the VM side, communication uses vips, but HBase can only address a
> server through the struct named ServerName. When the HMaster starts up, it
> stores the master address and the meta region server address in ZooKeeper;
> because the cluster runs on the physical network, those addresses are pips
> (physical network addresses). When a VM reads such an address from
> ZooKeeper it cannot connect, since the VM communicates via vips. One
> workaround is to give the physical host a vip such as 192.168.0.1, but
> that is not a good solution.
> 2. View from the physical machine
>   Physical machines communicate using pips.
> Solution
> 1. Protocol extension: change the proto message as below:
> {code}
> message ServerName {
>   required string host_name = 1;
>   optional uint32 port = 2;
>   optional uint64 start_code = 3;
>   optional string name = 4;
> }
> {code}
>   This adds a field named {{name}}, similar to HDFS's data block location.
> 2. Meta table extension
>   Add a column named info:namelocation to hbase:meta.
> 3. hbase-server
>   Add the parameter hbase.regionserver.servername:
> {code}
> <property>
>   <name>hbase.regionserver.servername</name>
>   <value>10.0.1.1</value>
> </property>
> {code}
>   to set the region server's name location, and the parameter
> hbase.master.servername:
> {code}
> <property>
>   <name>hbase.master.servername</name>
>   <value>10.0.1.2</value>
> </property>
> {code}
>   to set the master's name location.
> 4. hbase-client
>   Add the parameter hbase.client.use.hostname:
> {code}
> <property>
>   <name>hbase.client.use.hostname</name>
>   <value>true</value>
> </property>
> {code}
>   to choose which address the client uses.
> This patch is based on HBase 1.3.0.





[jira] [Updated] (HBASE-18066) Get with closest_row_before on "hbase:meta" can return empty Cell during region merge/split

2017-05-27 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-18066:
--
Attachment: HBASE-18066.branch-1.v3.patch

Retry

> Get with closest_row_before on "hbase:meta" can return empty Cell during 
> region merge/split
> ---
>
> Key: HBASE-18066
> URL: https://issues.apache.org/jira/browse/HBASE-18066
> Project: HBase
>  Issue Type: Bug
>  Components: hbase, regionserver
>Affects Versions: 1.3.1
> Environment: Linux (16.04.2), MacOS 10.11.6.
> Standalone and distributed HBase setup.
>Reporter: Andrey Elenskiy
>Assignee: Zheng Hu
> Attachments: HBASE-18066.branch-1.1.v1.patch, 
> HBASE-18066.branch-1.1.v1.patch, HBASE-18066.branch-1.1.v1.patch, 
> HBASE-18066.branch-1.3.v1.patch, HBASE-18066.branch-1.3.v1.patch, 
> HBASE-18066.branch-1.v1.patch, HBASE-18066.branch-1.v2.patch, 
> HBASE-18066.branch-1.v3.patch, HBASE-18066.branch-1.v3.patch, 
> TestGetWithClosestRowBeforeWhenSplit.java
>
>
> During region split/merge there is a brief period where a "Get" with
> "closest_row_before=true" on "hbase:meta" may return an empty
> "GetResponse.result.cell" field even though the parent, splitA, and splitB
> regions are all in "hbase:meta". Both gohbase (https://github.com/tsuna/gohbase)
> and AsyncHBase (https://github.com/OpenTSDB/asynchbase) interpret this as
> "TableDoesNotExist", which is returned to the client.
> Here is a gist that reproduces the problem:
> https://gist.github.com/Timoha/c7a236b768be9220e85e53e1ca53bf96. Note that
> you have to use an older HTable client (I used 1.2.4), as current versions
> ignore the `Get.setClosestRowBefore(bool)` option.





[jira] [Commented] (HBASE-18122) Scanner id should include ServerName of region server

2017-05-27 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027448#comment-16027448
 ] 

stack commented on HBASE-18122:
---

+1 Excellent. (Are the unit failures related?) Thanks for the explanations;
reading HBASE-18121 helped. Suggest making your helpful explanation into a
release note.

> Scanner id should include ServerName of region server
> -
>
> Key: HBASE-18122
> URL: https://issues.apache.org/jira/browse/HBASE-18122
> Project: HBase
>  Issue Type: Bug
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-18122.v01.patch, HBASE-18122.v02.patch, 
> HBASE-18122.v03.patch
>
>
> Right now the scanner id is a long number counting up from 1 in each region
> server; every new scanner gets the next id.
> If a client holds a scanner whose id is x, and the RS restarts and its
> scanner id counter is again incremented to x (or a little beyond), there
> will be a scanner id collision.
> So scanner ids should not repeat across RS restarts. We can put the start
> timestamp in the highest several bits of the uint64 scanner id.
> And because HBASE-18121 is not easy to fix and there are many clients on
> old versions, we can also encode the server's host:port into the scanner
> id; that is, we can use the ServerName.
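The proposed id layout can be sketched as follows. The 32/32 bit split is illustrative only; the actual patch may reserve different widths for the start timestamp, the ServerName bits, and the counter:

```java
// Sketch of a restart-proof scanner id: the region server's start code
// (its start timestamp) occupies the high bits of the uint64 id, so a
// counter that reaches the same value after a restart still yields a
// different id. Bit widths here are assumptions, not the patch's.
public final class ScannerIdSketch {

    /** Combine a server start code and a per-server counter into one id. */
    static long encode(long serverStartCode, long counter) {
        return (serverStartCode << 32) | (counter & 0xFFFFFFFFL);
    }

    static long startCodeOf(long scannerId) {
        return scannerId >>> 32;
    }

    static long counterOf(long scannerId) {
        return scannerId & 0xFFFFFFFFL;
    }

    public static void main(String[] args) {
        long beforeRestart = encode(1_000_001L, 7);
        long afterRestart = encode(1_000_002L, 7); // same counter, new start code
        // Same counter value on both sides of the restart, distinct ids.
        System.out.println(beforeRestart != afterRestart); // true
    }
}
```

Because old clients treat the id as an opaque long, this encoding changes nothing on the wire; it only changes how the server mints ids.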





[jira] [Commented] (HBASE-18042) Client Compatibility breaks between versions 1.2 and 1.3

2017-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027423#comment-16027423
 ] 

Hudson commented on HBASE-18042:


SUCCESS: Integrated in Jenkins build HBase-1.3-JDK7 #172 (See 
[https://builds.apache.org/job/HBase-1.3-JDK7/172/])
HBASE-18042 Client Compatibility breaks between versions 1.2 and 1.3 (zhangduo: 
rev 2277c2b63680df2af9edb3c534f0359e0ea14b5d)
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestScanWithoutFetchingData.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAlwaysSetScannerId.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestLeaseRenewal.java


> Client Compatibility breaks between versions 1.2 and 1.3
> 
>
> Key: HBASE-18042
> URL: https://issues.apache.org/jira/browse/HBASE-18042
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, scan
>Affects Versions: 2.0.0, 1.4.0, 1.3.1
>Reporter: Karan Mehta
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.0.0, 1.4.0, 1.3.2
>
> Attachments: HBASE-18042-branch-1.3.patch, 
> HBASE-18042-branch-1.3-v1.patch, HBASE-18042-branch-1.patch, 
> HBASE-18042-branch-1.patch, HBASE-18042-branch-1-v1.patch, 
> HBASE-18042-branch-1-v1.patch, HBASE-18042.patch, HBASE-18042-v1.patch, 
> HBASE-18042-v2.patch
>
>
> OpenTSDB uses AsyncHBase as its client rather than the traditional HBase
> client. From version 1.2 to 1.3 the {{ClientProtos}} changed: new fields
> were added to the {{ScanResponse}} proto.
> A typical scan in 1.2 requires the caller to make an OpenScanner request,
> GetNextRows requests, and a CloseScanner request, driven by the
> {{more_rows}} boolean field in the {{ScanResponse}} proto.
> In 1.3, however, a new field {{more_results_in_region}} was added, which
> limits results to the current region, so the client now has to manage
> issuing the requests for each region itself. Furthermore, when the results
> from a particular region are exhausted, the {{ScanResponse}} sets
> {{more_results_in_region}} to false while {{more_results}} can still be
> true, and whenever the former is false the {{RegionScanner}} is also
> closed.
> OpenTSDB makes an OpenScanner request and receives all of its results in
> the first {{ScanResponse}} itself, creating exactly the condition described
> above. Since {{more_rows}} is true, it proceeds to send the next request,
> at which point {{RSRpcServices}} throws {{UnknownScannerException}}.
> Protobuf-level client compatibility is maintained, but the expected
> behavior is modified.
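The region-by-region loop a 1.3-aware client needs can be sketched like this. The ScanResponse class below is a hand-rolled stand-in for the generated protobuf message, and the RPC layer is reduced to an iterator of canned responses; this is not AsyncHBase's or HBase's actual client code:

```java
// Sketch: keep the region-local scanner open only while
// more_results_in_region is true, and move to the next region while
// more_results is still true.
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public final class ScanLoopSketch {

    /** Stand-in for the ScanResponse proto's fields used here. */
    static final class ScanResponse {
        final List<String> rows;
        final boolean moreResults;          // more rows anywhere in the table
        final boolean moreResultsInRegion;  // more rows in the current region

        ScanResponse(List<String> rows, boolean moreResults, boolean moreResultsInRegion) {
            this.rows = rows;
            this.moreResults = moreResults;
            this.moreResultsInRegion = moreResultsInRegion;
        }
    }

    /** Drain a pre-canned sequence of responses the way a client should. */
    static List<String> scanAll(Iterator<ScanResponse> rpc) {
        List<String> out = new ArrayList<>();
        while (rpc.hasNext()) {
            ScanResponse resp = rpc.next();
            out.addAll(resp.rows);
            if (!resp.moreResultsInRegion) {
                // The region is exhausted and the server has closed this
                // RegionScanner: sending another next() with the old scanner
                // id is what raises UnknownScannerException. The client must
                // instead open a scanner on the next region, or stop.
                if (!resp.moreResults) {
                    break; // whole scan is done
                }
                // else: "open" the next region's scanner, which in this
                // sketch is simply the next canned response.
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<ScanResponse> seq = List.of(
            new ScanResponse(List.of("r1", "r2"), true, false),
            new ScanResponse(List.of("r3"), false, false));
        System.out.println(scanAll(seq.iterator())); // [r1, r2, r3]
    }
}
```

A 1.2-era client that only checks {{more_rows}} skips the inner check and reuses the closed scanner id, which is the failure described above.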





[jira] [Commented] (HBASE-18042) Client Compatibility breaks between versions 1.2 and 1.3

2017-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027422#comment-16027422
 ] 

Hudson commented on HBASE-18042:


SUCCESS: Integrated in Jenkins build HBase-1.3-JDK8 #186 (See 
[https://builds.apache.org/job/HBase-1.3-JDK8/186/])
HBASE-18042 Client Compatibility breaks between versions 1.2 and 1.3 (zhangduo: 
rev 2277c2b63680df2af9edb3c534f0359e0ea14b5d)
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestScanWithoutFetchingData.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestLeaseRenewal.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAlwaysSetScannerId.java


> Client Compatibility breaks between versions 1.2 and 1.3
> 
>
> Key: HBASE-18042
> URL: https://issues.apache.org/jira/browse/HBASE-18042
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, scan
>Affects Versions: 2.0.0, 1.4.0, 1.3.1
>Reporter: Karan Mehta
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.0.0, 1.4.0, 1.3.2
>
> Attachments: HBASE-18042-branch-1.3.patch, 
> HBASE-18042-branch-1.3-v1.patch, HBASE-18042-branch-1.patch, 
> HBASE-18042-branch-1.patch, HBASE-18042-branch-1-v1.patch, 
> HBASE-18042-branch-1-v1.patch, HBASE-18042.patch, HBASE-18042-v1.patch, 
> HBASE-18042-v2.patch
>
>
> OpenTSDB uses AsyncHBase as its client rather than the traditional HBase
> client. From version 1.2 to 1.3 the {{ClientProtos}} changed: new fields
> were added to the {{ScanResponse}} proto.
> A typical scan in 1.2 requires the caller to make an OpenScanner request,
> GetNextRows requests, and a CloseScanner request, driven by the
> {{more_rows}} boolean field in the {{ScanResponse}} proto.
> In 1.3, however, a new field {{more_results_in_region}} was added, which
> limits results to the current region, so the client now has to manage
> issuing the requests for each region itself. Furthermore, when the results
> from a particular region are exhausted, the {{ScanResponse}} sets
> {{more_results_in_region}} to false while {{more_results}} can still be
> true, and whenever the former is false the {{RegionScanner}} is also
> closed.
> OpenTSDB makes an OpenScanner request and receives all of its results in
> the first {{ScanResponse}} itself, creating exactly the condition described
> above. Since {{more_rows}} is true, it proceeds to send the next request,
> at which point {{RSRpcServices}} throws {{UnknownScannerException}}.
> Protobuf-level client compatibility is maintained, but the expected
> behavior is modified.





[jira] [Commented] (HBASE-18042) Client Compatibility breaks between versions 1.2 and 1.3

2017-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027417#comment-16027417
 ] 

Hudson commented on HBASE-18042:


FAILURE: Integrated in Jenkins build HBase-1.4 #751 (See 
[https://builds.apache.org/job/HBase-1.4/751/])
HBASE-18042 Client Compatibility breaks between versions 1.2 and 1.3 (zhangduo: 
rev 1a37f3be82f3d4e111ff846a79583472da86da4d)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAlwaysSetScannerId.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestLeaseRenewal.java
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestScanWithoutFetchingData.java


> Client Compatibility breaks between versions 1.2 and 1.3
> 
>
> Key: HBASE-18042
> URL: https://issues.apache.org/jira/browse/HBASE-18042
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, scan
>Affects Versions: 2.0.0, 1.4.0, 1.3.1
>Reporter: Karan Mehta
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.0.0, 1.4.0, 1.3.2
>
> Attachments: HBASE-18042-branch-1.3.patch, 
> HBASE-18042-branch-1.3-v1.patch, HBASE-18042-branch-1.patch, 
> HBASE-18042-branch-1.patch, HBASE-18042-branch-1-v1.patch, 
> HBASE-18042-branch-1-v1.patch, HBASE-18042.patch, HBASE-18042-v1.patch, 
> HBASE-18042-v2.patch
>
>
> OpenTSDB uses AsyncHBase as its client rather than the traditional HBase
> client. From version 1.2 to 1.3 the {{ClientProtos}} changed: new fields
> were added to the {{ScanResponse}} proto.
> A typical scan in 1.2 requires the caller to make an OpenScanner request,
> GetNextRows requests, and a CloseScanner request, driven by the
> {{more_rows}} boolean field in the {{ScanResponse}} proto.
> In 1.3, however, a new field {{more_results_in_region}} was added, which
> limits results to the current region, so the client now has to manage
> issuing the requests for each region itself. Furthermore, when the results
> from a particular region are exhausted, the {{ScanResponse}} sets
> {{more_results_in_region}} to false while {{more_results}} can still be
> true, and whenever the former is false the {{RegionScanner}} is also
> closed.
> OpenTSDB makes an OpenScanner request and receives all of its results in
> the first {{ScanResponse}} itself, creating exactly the condition described
> above. Since {{more_rows}} is true, it proceeds to send the next request,
> at which point {{RSRpcServices}} throws {{UnknownScannerException}}.
> Protobuf-level client compatibility is maintained, but the expected
> behavior is modified.





[jira] [Updated] (HBASE-18042) Client Compatibility breaks between versions 1.2 and 1.3

2017-05-27 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-18042:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Pushed to branch-1.3+.

> Client Compatibility breaks between versions 1.2 and 1.3
> 
>
> Key: HBASE-18042
> URL: https://issues.apache.org/jira/browse/HBASE-18042
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, scan
>Affects Versions: 2.0.0, 1.4.0, 1.3.1
>Reporter: Karan Mehta
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.0.0, 1.4.0, 1.3.2
>
> Attachments: HBASE-18042-branch-1.3.patch, 
> HBASE-18042-branch-1.3-v1.patch, HBASE-18042-branch-1.patch, 
> HBASE-18042-branch-1.patch, HBASE-18042-branch-1-v1.patch, 
> HBASE-18042-branch-1-v1.patch, HBASE-18042.patch, HBASE-18042-v1.patch, 
> HBASE-18042-v2.patch
>
>
> OpenTSDB uses AsyncHBase as its client rather than the traditional HBase
> client. From version 1.2 to 1.3 the {{ClientProtos}} changed: new fields
> were added to the {{ScanResponse}} proto.
> A typical scan in 1.2 requires the caller to make an OpenScanner request,
> GetNextRows requests, and a CloseScanner request, driven by the
> {{more_rows}} boolean field in the {{ScanResponse}} proto.
> In 1.3, however, a new field {{more_results_in_region}} was added, which
> limits results to the current region, so the client now has to manage
> issuing the requests for each region itself. Furthermore, when the results
> from a particular region are exhausted, the {{ScanResponse}} sets
> {{more_results_in_region}} to false while {{more_results}} can still be
> true, and whenever the former is false the {{RegionScanner}} is also
> closed.
> OpenTSDB makes an OpenScanner request and receives all of its results in
> the first {{ScanResponse}} itself, creating exactly the condition described
> above. Since {{more_rows}} is true, it proceeds to send the next request,
> at which point {{RSRpcServices}} throws {{UnknownScannerException}}.
> Protobuf-level client compatibility is maintained, but the expected
> behavior is modified.





[jira] [Commented] (HBASE-18042) Client Compatibility breaks between versions 1.2 and 1.3

2017-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027399#comment-16027399
 ] 

Hudson commented on HBASE-18042:


SUCCESS: Integrated in Jenkins build HBase-1.3-IT #53 (See 
[https://builds.apache.org/job/HBase-1.3-IT/53/])
HBASE-18042 Client Compatibility breaks between versions 1.2 and 1.3 (zhangduo: 
rev 2277c2b63680df2af9edb3c534f0359e0ea14b5d)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestLeaseRenewal.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAlwaysSetScannerId.java
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestScanWithoutFetchingData.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java


> Client Compatibility breaks between versions 1.2 and 1.3
> 
>
> Key: HBASE-18042
> URL: https://issues.apache.org/jira/browse/HBASE-18042
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, scan
>Affects Versions: 2.0.0, 1.4.0, 1.3.1
>Reporter: Karan Mehta
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.0.0, 1.4.0, 1.3.2
>
> Attachments: HBASE-18042-branch-1.3.patch, 
> HBASE-18042-branch-1.3-v1.patch, HBASE-18042-branch-1.patch, 
> HBASE-18042-branch-1.patch, HBASE-18042-branch-1-v1.patch, 
> HBASE-18042-branch-1-v1.patch, HBASE-18042.patch, HBASE-18042-v1.patch, 
> HBASE-18042-v2.patch
>
>
> OpenTSDB uses AsyncHBase as its client rather than the traditional HBase
> client. From version 1.2 to 1.3 the {{ClientProtos}} changed: new fields
> were added to the {{ScanResponse}} proto.
> A typical scan in 1.2 requires the caller to make an OpenScanner request,
> GetNextRows requests, and a CloseScanner request, driven by the
> {{more_rows}} boolean field in the {{ScanResponse}} proto.
> In 1.3, however, a new field {{more_results_in_region}} was added, which
> limits results to the current region, so the client now has to manage
> issuing the requests for each region itself. Furthermore, when the results
> from a particular region are exhausted, the {{ScanResponse}} sets
> {{more_results_in_region}} to false while {{more_results}} can still be
> true, and whenever the former is false the {{RegionScanner}} is also
> closed.
> OpenTSDB makes an OpenScanner request and receives all of its results in
> the first {{ScanResponse}} itself, creating exactly the condition described
> above. Since {{more_rows}} is true, it proceeds to send the next request,
> at which point {{RSRpcServices}} throws {{UnknownScannerException}}.
> Protobuf-level client compatibility is maintained, but the expected
> behavior is modified.





[jira] [Commented] (HBASE-18122) Scanner id should include ServerName of region server

2017-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027398#comment-16027398
 ] 

Hadoop QA commented on HBASE-18122:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 50s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
47s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 22s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
27s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 7s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
57m 16s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 222m 9s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
28s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 306m 12s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hbase.client.TestAsyncTableScannerCloseWhileSuspending |
| Timed out junit tests | 
org.apache.hadoop.hbase.client.TestAsyncNonMetaRegionLocatorConcurrenyLimit |
|   | org.apache.hadoop.hbase.client.TestAsyncTableBatch |
|   | org.apache.hadoop.hbase.client.TestAsyncTableScanMetrics |
|   | org.apache.hadoop.hbase.client.TestAsyncTableAdminApi |
|   | org.apache.hadoop.hbase.client.TestAsyncQuotaAdminApi |
|   | org.apache.hadoop.hbase.replication.regionserver.TestWALEntryStream |
|   | org.apache.hadoop.hbase.client.TestAsyncTableGetMultiThreaded |
|   | org.apache.hadoop.hbase.client.TestAsyncTableScan |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.03.0-ce Server=17.03.0-ce Image:yetus/hbase:757bf37 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12870189/HBASE-18122.v03.patch 
|
| JIRA Issue | HBASE-18122 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 66652201939d 4.8.3-std-1 #1 SMP Fri Oct 21 11:15:43 UTC 2016 
x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / efc7edc |
| Default Java | 1.8.0_131 |
| findbugs | v3.0.0 |
| unit | 

[jira] [Updated] (HBASE-17678) FilterList with MUST_PASS_ONE lead to redundancy cells returned

2017-05-27 Thread Zheng Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-17678:
-
Summary: FilterList with MUST_PASS_ONE lead to redundancy cells returned  
(was: ColumnPaginationFilter in a FilterList gives different results when using 
MUST_PASS_ONE vs MUST_PASS_ALL and a cell has multiple values for a given 
timestamp)

> FilterList with MUST_PASS_ONE lead to redundancy cells returned
> ---
>
> Key: HBASE-17678
> URL: https://issues.apache.org/jira/browse/HBASE-17678
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 2.0.0, 1.3.0, 1.2.1
> Environment: RedHat 7.x
>Reporter: Jason Tokayer
>Assignee: Zheng Hu
> Attachments: HBASE-17678.v1.rough.patch, 
> TestColumnPaginationFilterDemo.java
>
>
> When combining ColumnPaginationFilter with a single-element filterList, 
> MUST_PASS_ONE and MUST_PASS_ALL give different results when there are 
> multiple cells with the same timestamp. This is unexpected since there is 
> only a single filter in the list, and I would believe that MUST_PASS_ALL and 
> MUST_PASS_ONE should only affect the behavior of the joined filter and not 
> the behavior of any one of the individual filters. If this is not a bug then 
> it would be nice if the documentation is updated to explain this nuanced 
> behavior.
> I know that there was a decision made in an earlier Hbase version to keep 
> multiple cells with the same timestamp. This is generally fine but presents 
> an issue when using the aforementioned filter combination.
> Steps to reproduce:
> In the shell create a table and insert some data:
> {code:none}
> create 'ns:tbl',{NAME => 'family',VERSIONS => 100}
> put 'ns:tbl','row','family:name','John',1
> put 'ns:tbl','row','family:name','Jane',1
> put 'ns:tbl','row','family:name','Gil',1
> put 'ns:tbl','row','family:name','Jane',1
> {code}
> Then, use a Scala client as:
> {code:none}
> import org.apache.hadoop.hbase.filter._
> import org.apache.hadoop.hbase.util.Bytes
> import org.apache.hadoop.hbase.client._
> import org.apache.hadoop.hbase.{CellUtil, HBaseConfiguration, TableName}
> import scala.collection.mutable._
> val config = HBaseConfiguration.create()
> config.set("hbase.zookeeper.quorum", "localhost")
> config.set("hbase.zookeeper.property.clientPort", "2181")
> val connection = ConnectionFactory.createConnection(config)
> val logicalOp = FilterList.Operator.MUST_PASS_ONE
> val limit = 1
> var resultsList = ListBuffer[String]()
> for (offset <- 0 to 20 by limit) {
>   val table = connection.getTable(TableName.valueOf("ns:tbl"))
>   val paginationFilter = new ColumnPaginationFilter(limit,offset)
>   val filterList: FilterList = new FilterList(logicalOp,paginationFilter)
>   println("@ filterList = "+filterList)
>   val results = table.get(new 
> Get(Bytes.toBytes("row")).setFilter(filterList))
>   val cells = results.rawCells()
>   if (cells != null) {
>   for (cell <- cells) {
> val value = new String(CellUtil.cloneValue(cell))
> val qualifier = new String(CellUtil.cloneQualifier(cell))
> val family = new String(CellUtil.cloneFamily(cell))
> val result = "OFFSET = "+offset+":"+family + "," + qualifier 
> + "," + value + "," + cell.getTimestamp()
> resultsList.append(result)
>   }
>   }
> }
> resultsList.foreach(println)
> {code}
> Here are the results for different limit and logicalOp settings:
> {code:none}
> Limit = 1 & logicalOp = MUST_PASS_ALL:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> Limit = 1 & logicalOp = MUST_PASS_ONE:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> OFFSET = 1:family,name,Gil,1
> OFFSET = 2:family,name,Jane,1
> OFFSET = 3:family,name,John,1
> Limit = 2 & logicalOp = MUST_PASS_ALL:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> Limit = 2 & logicalOp = MUST_PASS_ONE:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> OFFSET = 2:family,name,Jane,1
> {code}
> So, it seems that MUST_PASS_ALL gives the expected behavior, but
> MUST_PASS_ONE does not. Furthermore, MUST_PASS_ONE seems to return only a
> single (non-duplicated) cell within a page, but not across pages.





[jira] [Updated] (HBASE-17678) ColumnPaginationFilter in a FilterList gives different results when using MUST_PASS_ONE vs MUST_PASS_ALL and a cell has multiple values for a given timestamp

2017-05-27 Thread Zheng Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-17678:
-
Attachment: (was: HBASE-17678.v1.rough.patch)

> ColumnPaginationFilter in a FilterList gives different results when using 
> MUST_PASS_ONE vs MUST_PASS_ALL and a cell has multiple values for a given 
> timestamp
> -
>
> Key: HBASE-17678
> URL: https://issues.apache.org/jira/browse/HBASE-17678
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 2.0.0, 1.3.0, 1.2.1
> Environment: RedHat 7.x
>Reporter: Jason Tokayer
>Assignee: Zheng Hu
> Attachments: HBASE-17678.v1.rough.patch, 
> TestColumnPaginationFilterDemo.java
>
>
> When combining ColumnPaginationFilter with a single-element FilterList, 
> MUST_PASS_ONE and MUST_PASS_ALL give different results when there are 
> multiple cells with the same timestamp. This is unexpected, since there is 
> only a single filter in the list, and I believe that MUST_PASS_ALL and 
> MUST_PASS_ONE should only affect the behavior of the combined filter, not 
> the behavior of any individual filter. If this is not a bug, it would be 
> nice if the documentation were updated to explain this nuanced behavior.
> I know that a decision was made in an earlier HBase version to keep 
> multiple cells with the same timestamp. That is generally fine, but it 
> presents an issue when using the aforementioned filter combination.
> Steps to reproduce:
> In the shell create a table and insert some data:
> {code:none}
> create 'ns:tbl',{NAME => 'family',VERSIONS => 100}
> put 'ns:tbl','row','family:name','John',1
> put 'ns:tbl','row','family:name','Jane',1
> put 'ns:tbl','row','family:name','Gil',1
> put 'ns:tbl','row','family:name','Jane',1
> {code}
> Then, use a Scala client as:
> {code:none}
> import org.apache.hadoop.hbase.filter._
> import org.apache.hadoop.hbase.util.Bytes
> import org.apache.hadoop.hbase.client._
> import org.apache.hadoop.hbase.{CellUtil, HBaseConfiguration, TableName}
> import scala.collection.mutable._
> val config = HBaseConfiguration.create()
> config.set("hbase.zookeeper.quorum", "localhost")
> config.set("hbase.zookeeper.property.clientPort", "2181")
> val connection = ConnectionFactory.createConnection(config)
> val logicalOp = FilterList.Operator.MUST_PASS_ONE
> val limit = 1
> var resultsList = ListBuffer[String]()
> for (offset <- 0 to 20 by limit) {
>   val table = connection.getTable(TableName.valueOf("ns:tbl"))
>   val paginationFilter = new ColumnPaginationFilter(limit, offset)
>   val filterList: FilterList = new FilterList(logicalOp, paginationFilter)
>   println("@ filterList = " + filterList)
>   val results = table.get(new Get(Bytes.toBytes("row")).setFilter(filterList))
>   val cells = results.rawCells()
>   if (cells != null) {
>     for (cell <- cells) {
>       val value = new String(CellUtil.cloneValue(cell))
>       val qualifier = new String(CellUtil.cloneQualifier(cell))
>       val family = new String(CellUtil.cloneFamily(cell))
>       val result = "OFFSET = " + offset + ":" + family + "," + qualifier +
>         "," + value + "," + cell.getTimestamp()
>       resultsList.append(result)
>     }
>   }
>   table.close()  // release the per-iteration Table instance
> }
> resultsList.foreach(println)
> {code}
> Here are the results for different limit and logicalOp settings:
> {code:none}
> Limit = 1 & logicalOp = MUST_PASS_ALL:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> Limit = 1 & logicalOp = MUST_PASS_ONE:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> OFFSET = 1:family,name,Gil,1
> OFFSET = 2:family,name,Jane,1
> OFFSET = 3:family,name,John,1
> Limit = 2 & logicalOp = MUST_PASS_ALL:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> Limit = 2 & logicalOp = MUST_PASS_ONE:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> OFFSET = 2:family,name,Jane,1
> {code}
> So, it seems that MUST_PASS_ALL gives the expected behavior, but 
> MUST_PASS_ONE does not. Furthermore, MUST_PASS_ONE seems to give only a 
> single (non-duplicated) result within a page, but not across pages.
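The asymmetry in the results above can be sketched with a small stand-alone model. This is plain Python, not HBase's filter code, and the per-column vs per-version counting is an assumption about where the two behaviors diverge:

```python
# Toy model of the reported difference. Assumption: with MUST_PASS_ALL the
# pagination effectively advances one unit per distinct column, while the
# buggy MUST_PASS_ONE path advances one unit per stored cell version, so the
# four same-timestamp versions of a single column occupy four offsets.
versions = [("family:name", "Jane"), ("family:name", "Gil"),
            ("family:name", "Jane"), ("family:name", "John")]

def paginate(versions, limit, offset, per_column):
    """Return the page [offset, offset+limit) over columns or versions."""
    if per_column:
        seen, units = set(), []
        for col, val in versions:       # collapse to one unit per column
            if col not in seen:
                seen.add(col)
                units.append((col, val))
    else:
        units = list(versions)          # every stored version is a unit
    return units[offset:offset + limit]

# MUST_PASS_ALL-like: one distinct column -> exactly one page entry.
assert paginate(versions, 1, 0, per_column=True) == [("family:name", "Jane")]
assert paginate(versions, 1, 1, per_column=True) == []

# MUST_PASS_ONE-like: the same column reappears at offsets 0..3.
pages = [paginate(versions, 1, off, per_column=False) for off in range(4)]
assert [p[0][1] for p in pages] == ["Jane", "Gil", "Jane", "John"]
```

Under this model the duplicate "Jane" entries across pages fall out directly from counting versions instead of columns.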





[jira] [Updated] (HBASE-17678) ColumnPaginationFilter in a FilterList gives different results when using MUST_PASS_ONE vs MUST_PASS_ALL and a cell has multiple values for a given timestamp

2017-05-27 Thread Zheng Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-17678:
-
Attachment: HBASE-17678.v1.rough.patch






[jira] [Comment Edited] (HBASE-17678) ColumnPaginationFilter in a FilterList gives different results when using MUST_PASS_ONE vs MUST_PASS_ALL and a cell has multiple values for a given timestamp

2017-05-27 Thread Zheng Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027365#comment-16027365
 ] 

Zheng Hu edited comment on HBASE-17678 at 5/27/17 9:07 AM:
---

Uploaded HBASE-17678.v1.rough.patch. It's not a production-ready patch for the 
master branch; it just explains my solution for the bug, even though the 
previously failing UT passes now.

[~stack], [~zghaobac], [~Apache9], [~tedyu], could you have a look?



was (Author: openinx):
Uploaded HBASE-17678.v1.rough.patch. It's not a production-ready patch for the 
master branch; it just explains my solution for the bug, even though the 
previously failing UT passes now.

[~stack], [~sghao], [~Apache9], [~tedyu], could you have a look?



[jira] [Comment Edited] (HBASE-17678) ColumnPaginationFilter in a FilterList gives different results when using MUST_PASS_ONE vs MUST_PASS_ALL and a cell has multiple values for a given timestamp

2017-05-27 Thread Zheng Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027365#comment-16027365
 ] 

Zheng Hu edited comment on HBASE-17678 at 5/27/17 9:07 AM:
---

Uploaded HBASE-17678.v1.rough.patch. It's not a production-ready patch for the 
master branch; it just explains my solution for the bug, even though the 
previously failing UT passes now.

[~stack], [~zghaobac], [~Apache9], [~tedyu], could you have a look?



was (Author: openinx):
Uploaded HBASE-17678.v1.rough.patch. It's not a production-ready patch for the 
master branch; it just explains my solution for the bug, even though the 
previously failing UT passes now.

[~stack], [~zghaobac], [~Apache9], [~tedyu], could you have a look?



[jira] [Updated] (HBASE-17678) ColumnPaginationFilter in a FilterList gives different results when using MUST_PASS_ONE vs MUST_PASS_ALL and a cell has multiple values for a given timestamp

2017-05-27 Thread Zheng Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-17678:
-
Attachment: HBASE-17678.v1.rough.patch

Uploaded HBASE-17678.v1.rough.patch. It's not a production-ready patch for the 
master branch; it just explains my solution for the bug, even though the 
previously failing UT passes now.

[~stack], [~sghao], [~Apache9], [~tedyu], could you have a look?







[jira] [Commented] (HBASE-18042) Client Compatibility breaks between versions 1.2 and 1.3

2017-05-27 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027364#comment-16027364
 ] 

Duo Zhang commented on HBASE-18042:
---

TestReplicasClient.testCancelOfMultiGet is not related; it was already failing 
on branch-1.

Will commit later if no objections.

Thanks.

> Client Compatibility breaks between versions 1.2 and 1.3
> 
>
> Key: HBASE-18042
> URL: https://issues.apache.org/jira/browse/HBASE-18042
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, scan
>Affects Versions: 2.0.0, 1.4.0, 1.3.1
>Reporter: Karan Mehta
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.0.0, 1.4.0, 1.3.2
>
> Attachments: HBASE-18042-branch-1.3.patch, 
> HBASE-18042-branch-1.3-v1.patch, HBASE-18042-branch-1.patch, 
> HBASE-18042-branch-1.patch, HBASE-18042-branch-1-v1.patch, 
> HBASE-18042-branch-1-v1.patch, HBASE-18042.patch, HBASE-18042-v1.patch, 
> HBASE-18042-v2.patch
>
>
> OpenTSDB uses AsyncHBase as its client rather than the traditional HBase 
> client. Between versions 1.2 and 1.3 the {{ClientProtos}} changed: new 
> fields were added to the {{ScanResponse}} proto.
> A typical Scan request in 1.2 required the caller to make an OpenScanner 
> request, GetNextRows requests, and a CloseScanner request, driven by the 
> {{more_rows}} boolean field in the {{ScanResponse}} proto.
> However, in 1.3 a new parameter, {{more_results_in_region}}, was added, 
> which limits the results per region, so the client now has to manage 
> sending requests for each region. Furthermore, if the results in a 
> particular region are exhausted, the {{ScanResponse}} will set 
> {{more_results_in_region}} to false while {{more_results}} can still be 
> true. Whenever the former is set to false, the {{RegionScanner}} is also 
> closed.
> OpenTSDB makes an OpenScanner request and receives all of its results in 
> the first {{ScanResponse}} itself, creating exactly the condition described 
> above. Since {{more_rows}} is true, it proceeds to send the next request, 
> at which point {{RSRpcServices}} throws {{UnknownScannerException}}. 
> Protobuf client compatibility is maintained, but the expected behavior is 
> modified.
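The failure mode can be sketched stand-alone. These are hand-rolled stand-ins, not AsyncHBase or the real RPC stack; only the field names mirror the ScanResponse proto:

```python
class UnknownScannerException(Exception):
    pass

class FakeRegionServer:
    """Serves the whole region in the first response, then closes the
    scanner -- mimicking the condition described above."""
    def __init__(self, rows):
        self.rows, self.open = rows, True

    def next_rows(self):
        if not self.open:
            raise UnknownScannerException("scanner already closed")
        self.open = False  # region drained: server closes the RegionScanner
        return {"results": self.rows,
                "more_results": True,             # the scan may continue...
                "more_results_in_region": False}  # ...but not in this region

rs = FakeRegionServer(["row1", "row2"])
resp = rs.next_rows()
assert resp["more_results"] and not resp["more_results_in_region"]

# A 1.2-style client that only checks more_results asks again -- and fails.
try:
    rs.next_rows()
    raise AssertionError("expected UnknownScannerException")
except UnknownScannerException:
    pass  # a 1.3-aware client would instead open a scanner on the next region
```

The point is that a pre-1.3 client has no reason to look at `more_results_in_region`, so it walks straight into the closed scanner.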





[jira] [Commented] (HBASE-15160) Put back HFile's HDFS op latency sampling code and add metrics for monitoring

2017-05-27 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027352#comment-16027352
 ] 

Yu Li commented on HBASE-15160:
---

I've checked the patch; some comments:

bq. the callers of readBlock() do not know whether the returned block is read 
from disk, or comes from cache
I can see there's an {{if (cacheConf.isBlockCacheEnabled())}} check in 
{{HFileReaderImpl#readBlock}} where the cached block is returned on a hit, so 
we could simply update the metrics outside that if check? With the same 
approach we could also record the IO time of {{getMetaBlock}} in the finally 
clause (on a cache miss). Wdyt?

Previously the concern about {{readAtOffset}} completely made sense, but 
HBASE-17917 removed the stream lock, so there is no stream read anymore when 
{{pread}} is true, which makes it possible to move the metrics update up to 
the caller (smile).
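The suggestion might look roughly like this sketch. It is illustrative Python, not the HFileReaderImpl code, and names like `fs_read_latency` and `read_block` are made up for the example:

```python
import time

fs_read_latency = []  # stand-in for a metrics histogram

def read_block(offset, cache, read_from_disk):
    """Return a block, recording disk latency only on a cache miss."""
    if offset in cache:
        return cache[offset]      # cache hit: no disk latency to record
    start = time.perf_counter()
    try:
        block = read_from_disk(offset)
        cache[offset] = block
        return block
    finally:                      # runs even if the disk read throws
        fs_read_latency.append(time.perf_counter() - start)

cache = {}
read_block(0, cache, lambda off: b"data")
read_block(0, cache, lambda off: b"data")  # second call is served from cache
assert len(fs_read_latency) == 1
```

Recording in the `finally` clause captures the latency of failed reads too, while the early return on a hit keeps cached reads out of the histogram.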

> Put back HFile's HDFS op latency sampling code and add metrics for monitoring
> -
>
> Key: HBASE-15160
> URL: https://issues.apache.org/jira/browse/HBASE-15160
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0, 1.1.2
>Reporter: Yu Li
>Assignee: Yu Li
>Priority: Critical
> Attachments: HBASE-15160.patch, HBASE-15160_v2.patch, 
> HBASE-15160_v3.patch, hbase-15160_v4.patch, hbase-15160_v5.patch, 
> hbase-15160_v6.patch
>
>
> In HBASE-11586 all of the HDFS op latency sampling code, including 
> fsReadLatency, fsPreadLatency and fsWriteLatency, was removed. There was 
> some discussion about putting it back in a new JIRA, but that never 
> happened. In our experience these metrics are useful for judging whether an 
> issue lies in HDFS when slow requests occur, so we propose to put them back 
> in this JIRA and to add metrics for monitoring as well.





[jira] [Commented] (HBASE-18115) Move SaslServer creation to HBaseSaslRpcServer

2017-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027341#comment-16027341
 ] 

Hudson commented on HBASE-18115:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #3084 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/3084/])
HBASE-18115 Move SaslServer creation to HBaseSaslRpcServer (zhangduo: rev 
efc7edc81a0d9da486ca37b8314baf5a7e75bc86)
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/security/SaslUtil.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/ServerRpcConnection.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/HBaseSaslRpcServer.java


> Move SaslServer creation to HBaseSaslRpcServer
> --
>
> Key: HBASE-18115
> URL: https://issues.apache.org/jira/browse/HBASE-18115
> Project: HBase
>  Issue Type: Sub-task
>  Components: IPC/RPC
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-18115.patch
>
>






[jira] [Commented] (HBASE-18114) Update the config of TestAsync*AdminApi to make test stable

2017-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027340#comment-16027340
 ] 

Hudson commented on HBASE-18114:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #3084 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/3084/])
HBASE-18114 Update the config of TestAsync*AdminApi to make test stable (zghao: 
rev 97484f2aaf3809137fd50180164dc2c741d05ee8)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncReplicationAdminApi.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncRegionAdminApi.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncProcedureAdminApi.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncAdminBase.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncTableAdminApi.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncNamespaceAdminApi.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncSnapshotAdminApi.java


> Update the config of TestAsync*AdminApi to make test stable
> ---
>
> Key: HBASE-18114
> URL: https://issues.apache.org/jira/browse/HBASE-18114
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-18114-v1.patch, HBASE-18114-v1.patch, 
> HBASE-18114-v1.patch, HBASE-18114-v2.patch, HBASE-18114-v2.patch, 
> HBASE-18114-v2.patch, HBASE-18114-v2.patch, HBASE-18114-v2.patch, 
> HBASE-18114-v2.patch
>
>
> {code}
> 2017-05-25 17:56:34,967 INFO  
> [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=50801] 
> master.HMaster$11(2297): Client=hao//127.0.0.1 disable testModifyColumnFamily
> 2017-05-25 17:56:37,974 INFO  [RpcClient-timer-pool1-t1] 
> client.AsyncHBaseAdmin$TableProcedureBiConsumer(2219): Operation: DISABLE, 
> Table Name: default:testModifyColumnFamily failed with Failed after 
> attempts=3, exceptions: 
> Thu May 25 17:56:35 CST 2017, , java.io.IOException: Call to 
> localhost/127.0.0.1:50801 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=294, waitTime=1008, 
> rpcTimeout=1000
> Thu May 25 17:56:37 CST 2017, , java.io.IOException: Call to 
> localhost/127.0.0.1:50801 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=295, waitTime=1299, 
> rpcTimeout=1000
> Thu May 25 17:56:37 CST 2017, , java.io.IOException: Call to 
> localhost/127.0.0.1:50801 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=296, waitTime=668, 
> rpcTimeout=660
> 2017-05-25 17:56:38,936 DEBUG 
> [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=50801] 
> procedure2.ProcedureExecutor(788): Stored procId=15, owner=hao, 
> state=RUNNABLE:DISABLE_TABLE_PREPARE, DisableTableProcedure 
> table=testModifyColumnFamily
> {code}
> For this disable-table procedure, the master returns the procedure id when 
> it submits the procedure to the ProcedureExecutor, and the submission above 
> took 4 seconds. So the disable-table call failed because the rpc timeout is 
> 1 second and the retry number is 3.
> For admin operations, I think we don't need to change the default timeout 
> config in unit tests, and retries aren't needed either (or we can set 
> retries > 1 to test the nonce handling). Meanwhile, the default timeout is 
> 60 seconds, so the test type may need to change to LargeTests.
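The arithmetic behind the failure, as a quick sanity check (numbers taken from the log excerpt above):

```python
# Each attempt times out after ~1s and only 3 attempts are made, while the
# procedure submission alone took about 4s -- so every attempt had to fail
# regardless of retry backoff.
submit_seconds = 4.0
rpc_timeout_seconds = 1.0
attempts = 3

total_budget = attempts * rpc_timeout_seconds
assert total_budget < submit_seconds  # 3.0s of budget vs a 4.0s operation
```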





[jira] [Updated] (HBASE-15576) Scanning cursor to prevent blocking long time on ResultScanner.next()

2017-05-27 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-15576:
--
Attachment: HBASE-15576.v04.patch

Fix findbugs warning; the warning was indeed introduced by the patch.

> Scanning cursor to prevent blocking long time on ResultScanner.next()
> -
>
> Key: HBASE-15576
> URL: https://issues.apache.org/jira/browse/HBASE-15576
> Project: HBase
>  Issue Type: New Feature
>Reporter: Phil Yang
>Assignee: Phil Yang
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-15576.v01.patch, HBASE-15576.v02.patch, 
> HBASE-15576.v03.patch, HBASE-15576.v03.patch, HBASE-15576.v04.patch
>
>
> After 1.1.0 was released, we have the partial-result and heartbeat protocols 
> in scanning, which prevent responses from growing too large or timing out. 
> Still, ResultScanner.next() may block for longer than the timeout setting to 
> get a Result if the row is very large, the filter is sparse, or there are too 
> many delete markers in the files.
> However, in some scenarios we don't want it to block for too long. For 
> example, consider a web service handling requests from mobile devices whose 
> network is not stable, so we cannot set a long timeout (e.g. only 5 seconds) 
> between the mobile device and the web service. The service scans rows from 
> HBase and returns them to the mobile devices. Here the simplest design is to 
> make the web service stateless: apps on the mobile devices send several 
> requests one by one to fetch data until they have enough, just like paging 
> through a list, and each request carries a start position derived from the 
> last result the web service returned. Different requests can go to different 
> web service servers because the service is stateless.
> Therefore, the stateless web service needs a cursor from HBase telling how 
> far the RegionScanner has scanned when the HBase client receives an empty 
> heartbeat. The service returns that cursor to the mobile device even though 
> the response carries no data, and the next request can start at the cursor's 
> position. Without the cursor we would have to re-scan from the last returned 
> result and might time out forever. Even when the heartbeat message is not 
> empty, we can still use the cursor to avoid re-scanning the same rows/cells 
> that have already been skipped.
> Obviously, we give up consistency for the scan, because even the HBase client 
> is stateless, but that is acceptable in this scenario. Maybe we can keep the 
> mvcc in the cursor so we can still get a consistent view?
> HBASE-13099 had some discussion, but it has made no further progress so far.
> API:
> Scan needs a new method, setNeedCursorResult(true), to get the cursor row key 
> when there is an RPC response but the client cannot return any Result. In 
> this mode ResultScanner.next() will not block longer than the timeout 
> setting.
> {code}
> Result r;
> while ((r = scanner.next()) != null) {
>   if (r.isCursor()) {
>     // scanning has not ended; this is a cursor. Save its row key and close
>     // the scanner if you want, or just continue the loop to call next().
>   } else {
>     // handle the Result just like before
>   }
> }
> // scanning has ended
> {code}
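The stateless-paging idea above can be modeled without HBase at all. In the toy sketch below the "scanner" yields either a data row or a cursor-only heartbeat, and the client keeps the latest row key so the next request (possibly handled by a different stateless server) can resume from there. All names here are illustrative, not the proposed HBase API:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy model of stateless paging with cursors: both data rows and
// cursor-only heartbeats advance the resume position.
public class CursorPagingSketch {
    // A result is either real data or just a cursor position.
    record R(String rowKey, boolean cursor) {}

    // Drain the "scanner" and return the position the next request
    // should start from.
    static String lastPosition(Deque<R> scanner) {
        String resumeKey = null;
        R r;
        while ((r = scanner.poll()) != null) {
            // Cursor or data, both tell us how far the scan has gone.
            resumeKey = r.rowKey();
        }
        return resumeKey;
    }

    public static void main(String[] args) {
        Deque<R> scanner = new ArrayDeque<>();
        scanner.add(new R("row1", false));
        scanner.add(new R("row3", true)); // heartbeat carrying only a cursor
        scanner.add(new R("row5", false));
        System.out.println("next request starts at " + lastPosition(scanner));
    }
}
```

The point of the sketch is that the web service never needs per-client state: the resume key travels with each response and comes back with the next request.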



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-15576) Scanning cursor to prevent blocking long time on ResultScanner.next()

2017-05-27 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-15576:
--
Attachment: HBASE-15576.v03.patch

Retrying for the unit tests. The findbugs warning seems unrelated to the patch. [~Apache9] 
Can the response still return values == null? If not, we can remove the values != 
null check at line 508; if so, we should check it before line 462.

> Scanning cursor to prevent blocking long time on ResultScanner.next()
> -
>
> Key: HBASE-15576
> URL: https://issues.apache.org/jira/browse/HBASE-15576
> Project: HBase
>  Issue Type: New Feature
>Reporter: Phil Yang
>Assignee: Phil Yang
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-15576.v01.patch, HBASE-15576.v02.patch, 
> HBASE-15576.v03.patch, HBASE-15576.v03.patch
>
>
> After 1.1.0 was released, we have the partial-result and heartbeat protocols 
> in scanning, which prevent responses from growing too large or timing out. 
> Still, ResultScanner.next() may block for longer than the timeout setting to 
> get a Result if the row is very large, the filter is sparse, or there are too 
> many delete markers in the files.
> However, in some scenarios we don't want it to block for too long. For 
> example, consider a web service handling requests from mobile devices whose 
> network is not stable, so we cannot set a long timeout (e.g. only 5 seconds) 
> between the mobile device and the web service. The service scans rows from 
> HBase and returns them to the mobile devices. Here the simplest design is to 
> make the web service stateless: apps on the mobile devices send several 
> requests one by one to fetch data until they have enough, just like paging 
> through a list, and each request carries a start position derived from the 
> last result the web service returned. Different requests can go to different 
> web service servers because the service is stateless.
> Therefore, the stateless web service needs a cursor from HBase telling how 
> far the RegionScanner has scanned when the HBase client receives an empty 
> heartbeat. The service returns that cursor to the mobile device even though 
> the response carries no data, and the next request can start at the cursor's 
> position. Without the cursor we would have to re-scan from the last returned 
> result and might time out forever. Even when the heartbeat message is not 
> empty, we can still use the cursor to avoid re-scanning the same rows/cells 
> that have already been skipped.
> Obviously, we give up consistency for the scan, because even the HBase client 
> is stateless, but that is acceptable in this scenario. Maybe we can keep the 
> mvcc in the cursor so we can still get a consistent view?
> HBASE-13099 had some discussion, but it has made no further progress so far.
> API:
> Scan needs a new method, setNeedCursorResult(true), to get the cursor row key 
> when there is an RPC response but the client cannot return any Result. In 
> this mode ResultScanner.next() will not block longer than the timeout 
> setting.
> {code}
> Result r;
> while ((r = scanner.next()) != null) {
>   if (r.isCursor()) {
>     // scanning has not ended; this is a cursor. Save its row key and close
>     // the scanner if you want, or just continue the loop to call next().
>   } else {
>     // handle the Result just like before
>   }
> }
> // scanning has ended
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (HBASE-18124) Add Property name Of Strcut ServerName To Locate HMaster Or HRegionServer

2017-05-27 Thread liubangchen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027286#comment-16027286
 ] 

liubangchen edited comment on HBASE-18124 at 5/27/17 7:27 AM:
--

Hi [~ted_yu], I think this feature is different from HBASE-12954. Our requirement 
is as follows:
1. use hbase.regionserver.hostname or hbase.master.hostname to locate a server 
in the physical network
2. use another address to locate a server in the virtual network
3. both the vip (virtual ip address) and the pip (physical ip address) must be 
published in zookeeper

I am not good at English; I will revise the description later, thanks.


was (Author: liubangchen):
Hi,[~ted_yu],I think this feature is different with HBASE-12954,our requirement 
is like this:
1. use hbase.regionserver.hostname or hbase.master.hostname or locate server in 
physical network
2. use other address to locate server in virtual network
3. must  vip (virtual ip address ) and pip (physical ip address) to be 
published in zookeeper

I am not good at English,I will modify the Description later ,thanks.

> Add Property name Of Strcut ServerName To Locate HMaster Or HRegionServer
> -
>
> Key: HBASE-18124
> URL: https://issues.apache.org/jira/browse/HBASE-18124
> Project: HBase
>  Issue Type: New Feature
>  Components: Client, hbase, master
>Reporter: liubangchen
>Assignee: liubangchen
> Attachments: 1.jpg, HBASE-18124.patch, HBASE-18124.pdf
>
>
> HBase has only one way to locate the hmaster or a hregionserver, unlike hdfs, 
> which has two ways to locate a datanode: by name or by hostname.
> I'm an engineer in cloud computing, and I'm in charge of offering HBase as a 
> cloud service. To offer HBase as a cloud service, we need HBase to support 
> another way to locate the hmaster or a hregionserver.
> Our HBase cloud service architecture is shown in 1.jpg:
> 1. VM
> The user's HBase client runs in a vm and uses a virtual ip address to access 
> the hbase cluster.
> 2. NAT
> Network Address Translation, from vip (Virtual Network Address) to pip 
> (Physical Network Address).
> 3. HbaseCluster Service
> The HbaseCluster Service runs in the physical network.
> Problem
> 1. View from the vm
> On the vm side, the vm uses the vip to communicate, but hbase has only one 
> way to communicate, through the struct named ServerName. When the Hmaster 
> starts up, it stores the master address and the meta region server address in 
> zookeeper, and that address is the pip (Physical Network Address) because the 
> hbase cluster runs in the physical network. The address the vm gets from 
> zookeeper therefore does not work, because the vm uses the vip to 
> communicate. One workaround is to give the physical machine a host address 
> equal to the vip, like 192.168.0.1, but that is not a good solution.
> 2. View from the physical machine
> The physical machine uses the pip to communicate.
> Solution
> 1. protocol extension: change the proto message to the following:
> {code}
> message ServerName {
>   required string host_name = 1;
>   optional uint32 port = 2;
>   optional uint64 start_code = 3;
>   optional string name = 4;
> }
> {code}
> This adds a field named name, like hdfs's datablock location.
> 2. metatable extension
> Add a column to hbase:meta named info:namelocation.
> 3. hbase-server
> Add the param hbase.regionserver.servername
> {code}
> <property>
>   <name>hbase.regionserver.servername</name>
>   <value>10.0.1.1</value>
> </property>
> {code}
> to set the regionserver namelocation, and the param hbase.master.servername
> {code}
> <property>
>   <name>hbase.master.servername</name>
>   <value>10.0.1.2</value>
> </property>
> {code}
> to set the master namelocation.
> 4. hbase-client
> Add the param hbase.client.use.hostname
> {code}
> <property>
>   <name>hbase.client.use.hostname</name>
>   <value>true</value>
> </property>
> {code}
> to choose which address to use.
> This patch is based on HBase-1.3.0.
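The client-side choice in step 4 can be sketched in plain Java. ServerName here is an illustrative class, not the HBase proto type, and the config lookup is simplified to a map; the exact semantics of the proposed flag are an assumption:

```java
import java.util.Map;

// Sketch of the proposed address selection: the client dials either the
// physical hostname or the extra virtual-network "name" field, depending
// on the hbase.client.use.hostname setting. All names are hypothetical.
public class AddressSelection {
    record ServerName(String hostName, int port, long startCode, String name) {}

    static String addressToDial(ServerName sn, Map<String, String> conf) {
        boolean useHostname =
            Boolean.parseBoolean(conf.getOrDefault("hbase.client.use.hostname", "true"));
        // Fall back to host_name when the optional name field is absent.
        return (useHostname || sn.name() == null) ? sn.hostName() : sn.name();
    }

    public static void main(String[] args) {
        ServerName sn = new ServerName("10.0.1.1", 16020, 1L, "192.168.0.5");
        // A vm-side client would set the flag to false to use the virtual address.
        System.out.println(addressToDial(sn, Map.of("hbase.client.use.hostname", "false")));
    }
}
```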



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-14614) Procedure v2: Core Assignment Manager

2017-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027311#comment-16027311
 ] 

Hudson commented on HBASE-14614:


FAILURE: Integrated in Jenkins build HBase-HBASE-14614 #251 (See 
[https://builds.apache.org/job/HBase-HBASE-14614/251/])
HBASE-14614 Procedure v2 - Core Assignment Manager (Matteo Bertozzi) (stack: 
rev 6ae5b3bf1e1dccad719812f52c188349ab08d418)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestCreateTableProcedure.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/RequestConverter.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestDisableTableProcedure.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/MasterProcedureConstants.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALSplitter.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/master/RegionState.java
* (delete) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignCallable.java
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/assignment/TestAssignmentManager.java
* (delete) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/BulkReOpen.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/LoadBalancer.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterDumpServlet.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide3.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionUnassigner.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestModifyNamespaceProcedure.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterServices.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/mob/MobFileCache.java
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/AssignmentManager.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestEnableTable.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMaster.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/ProcedureSyncWait.java
* (delete) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestSplitTableRegionProcedure.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionFileSystem.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/RegionLocationFinder.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/CloseRegionHandler.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransactionOnCluster.java
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/AssignProcedure.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/util/hbck/TestOfflineMetaRebuildBase.java
* (edit) 
hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/master/MetricsAssignmentManagerSource.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/TableProcedureInterface.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsckTwoRS.java
* (delete) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/UnAssignCallable.java
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/GCRegionProcedure.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRolling.java
* (delete) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/BulkAssigner.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/RestoreSnapshotProcedure.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAdmin2.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/DeleteTableProcedure.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterBalanceThrottling.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestRestartCluster.java
* (edit) 
hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/namespace/NamespaceAuditor.java
* (add) 
hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/RemoteProcedureDispatcher.java
* (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/ProcedureInfo.java
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/MoveRegionProcedure.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/MockRegionServerServices.java
* (edit) 

[jira] [Updated] (HBASE-18124) Add Property name Of Strcut ServerName To Locate HMaster Or HRegionServer

2017-05-27 Thread liubangchen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liubangchen updated HBASE-18124:

Description: 
HBase has only one way to locate the hmaster or a hregionserver, unlike hdfs, 
which has two ways to locate a datanode: by name or by hostname.

I'm an engineer in cloud computing, and I'm in charge of offering HBase as a 
cloud service. To offer HBase as a cloud service, we need HBase to support 
another way to locate the hmaster or a hregionserver.
Our HBase cloud service architecture is shown in 1.jpg:

1. VM
The user's HBase client runs in a vm and uses a virtual ip address to access 
the hbase cluster.
2. NAT
Network Address Translation, from vip (Virtual Network Address) to pip 
(Physical Network Address).
3. HbaseCluster Service
The HbaseCluster Service runs in the physical network.

Problem
1. View from the vm
On the vm side, the vm uses the vip to communicate, but hbase has only one 
way to communicate, through the struct named ServerName. When the Hmaster 
starts up, it stores the master address and the meta region server address in 
zookeeper, and that address is the pip (Physical Network Address) because the 
hbase cluster runs in the physical network. The address the vm gets from 
zookeeper therefore does not work, because the vm uses the vip to 
communicate. One workaround is to give the physical machine a host address 
equal to the vip, like 192.168.0.1, but that is not a good solution.
2. View from the physical machine
The physical machine uses the pip to communicate.

Solution
1. protocol extension: change the proto message to the following:
{code}
message ServerName {
  required string host_name = 1;
  optional uint32 port = 2;
  optional uint64 start_code = 3;
  optional string name = 4;
}
{code}

This adds a field named name, like hdfs's datablock location.
2. metatable extension
Add a column to hbase:meta named info:namelocation.
3. hbase-server
Add the param hbase.regionserver.servername
{code}
<property>
  <name>hbase.regionserver.servername</name>
  <value>10.0.1.1</value>
</property>
{code}
to set the regionserver namelocation, and the param hbase.master.servername
{code}
<property>
  <name>hbase.master.servername</name>
  <value>10.0.1.2</value>
</property>
{code}
to set the master namelocation.
4. hbase-client
Add the param hbase.client.use.hostname
{code}
<property>
  <name>hbase.client.use.hostname</name>
  <value>true</value>
</property>
{code}
to choose which address to use.

This patch is based on HBase-1.3.0.

  was:
Hbase only have one way to locate hmaster or hregionserver not like hdfs has 
two way to locate datanode use by name or hostname.

I’m a engineer of  cloud computing , and I’m in charge of to make Hbase as a 
cloud service,when we make hbase as a cloud service we need  hbase support 
other way to support locate hmaster or hregionserver
Our Hbase cloud service architectue shown as follows 

{image}
1.jpg
{image}

1.VM
User’s Hbase client work in vm and use virtual ip address to access hbase 
cluster.
2.NAT
   Network Address Translation, vip(Virtual Network Address) to pip (Physical 
Network Address)
3. HbaseCluster Service
 HbaseCluster Service work in physical network

Problem
1.  View on vm
  On vm side vm use vip to communication,but hbase have only one way to 
communication use struct named
  ServerName. When Hmaster startup will store master address and meta 
region server address in zookeeper, 
   then the address is pip(Physical Network Address)   because hbase 
cluster work in physical network . when vm 
  get the address from zookeeper will not work because   vm use vip to 
communication,one way to  solve this is to 
  make physical machine host as vip like 192.168.0.1,but is not better to 
make this.
2.  View on Physical machine
Physical machine use pip to communication

Solution
1.   protocol extend change proto message to below:
  {code}
  message ServerName {
  required string host_name = 1;
 optional uint32 port = 2;
 optional uint64 start_code = 3;
  optional string name=4;
 }
  {code}

 add a filed named name like hdfs’s datablock location
2.   metatable extend 
   add column to hbase:meta named info:namelocation
3.   hbase-server
  add params 
 {code}
  hbase.regionserver.servername
  
hbase.regionserver.servername
10.0.1.1
 
  {code}
  to regionserver namelocation
  add params
 {code}
   hbase.master.servername 
   
   hbase.master.servername
   10.0.1.2
   
 {code}
   to set master namelocation
4.   hbase-client
  add params 
{code}
 hbase.client.use.hostname 
 
 hbase.client.use.hostname
 true
 
{code}
 to choose which address to use

This patch is base on Hbase-1.3.0


> Add Property name Of 

[jira] [Updated] (HBASE-18124) Add Property name Of Strcut ServerName To Locate HMaster Or HRegionServer

2017-05-27 Thread liubangchen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liubangchen updated HBASE-18124:

Description: 
HBase has only one way to locate the hmaster or a hregionserver, unlike hdfs, 
which has two ways to locate a datanode: by name or by hostname.

I'm an engineer in cloud computing, and I'm in charge of offering HBase as a 
cloud service. To offer HBase as a cloud service, we need HBase to support 
another way to locate the hmaster or a hregionserver.
Our HBase cloud service architecture is shown below:

{image}
1.jpg
{image}

1. VM
The user's HBase client runs in a vm and uses a virtual ip address to access 
the hbase cluster.
2. NAT
Network Address Translation, from vip (Virtual Network Address) to pip 
(Physical Network Address).
3. HbaseCluster Service
The HbaseCluster Service runs in the physical network.

Problem
1. View from the vm
On the vm side, the vm uses the vip to communicate, but hbase has only one 
way to communicate, through the struct named ServerName. When the Hmaster 
starts up, it stores the master address and the meta region server address in 
zookeeper, and that address is the pip (Physical Network Address) because the 
hbase cluster runs in the physical network. The address the vm gets from 
zookeeper therefore does not work, because the vm uses the vip to 
communicate. One workaround is to give the physical machine a host address 
equal to the vip, like 192.168.0.1, but that is not a good solution.
2. View from the physical machine
The physical machine uses the pip to communicate.

Solution
1. protocol extension: change the proto message to the following:
{code}
message ServerName {
  required string host_name = 1;
  optional uint32 port = 2;
  optional uint64 start_code = 3;
  optional string name = 4;
}
{code}

This adds a field named name, like hdfs's datablock location.
2. metatable extension
Add a column to hbase:meta named info:namelocation.
3. hbase-server
Add the param hbase.regionserver.servername
{code}
<property>
  <name>hbase.regionserver.servername</name>
  <value>10.0.1.1</value>
</property>
{code}
to set the regionserver namelocation, and the param hbase.master.servername
{code}
<property>
  <name>hbase.master.servername</name>
  <value>10.0.1.2</value>
</property>
{code}
to set the master namelocation.
4. hbase-client
Add the param hbase.client.use.hostname
{code}
<property>
  <name>hbase.client.use.hostname</name>
  <value>true</value>
</property>
{code}
to choose which address to use.

This patch is based on HBase-1.3.0.

  was:
Hbase only have one way to locate hmaster or hregionserver not like hdfs has 
two way to locate datanode use by name or hostname.

I’m a engineer of  cloud computing , and I’m in charge of to make Hbase as a 
cloud service,when we make hbase as a cloud service we need  hbase support 
other way to support locate hmaster or hregionserver
Our Hbase cloud service architectue shown as follows 1.jpg

1.VM
User’s Hbase client work in vm and use virtual ip address to access hbase 
cluster.
2.NAT
   Network Address Translation, vip(Virtual Network Address) to pip (Physical 
Network Address)
3. HbaseCluster Service
 HbaseCluster Service work in physical network

Problem
1.  View on vm
  On vm side vm use vip to communication,but hbase have only one way to 
communication use struct named
  ServerName. When Hmaster startup will store master address and meta 
region server address in zookeeper, 
   then the address is pip(Physical Network Address)   because hbase 
cluster work in physical network . when vm 
  get the address from zookeeper will not work because   vm use vip to 
communication,one way to  solve this is to 
  make physical machine host as vip like 192.168.0.1,but is not better to 
make this.
2.  View on Physical machine
Physical machine use pip to communication

Solution
1.   protocol extend change proto message to below:
  {code}
  message ServerName {
  required string host_name = 1;
 optional uint32 port = 2;
 optional uint64 start_code = 3;
  optional string name=4;
 }
  {code}

 add a filed named name like hdfs’s datablock location
2.   metatable extend 
   add column to hbase:meta named info:namelocation
3.   hbase-server
  add params 
 {code}
  hbase.regionserver.servername
  
hbase.regionserver.servername
10.0.1.1
 
  {code}
  to regionserver namelocation
  add params
 {code}
   hbase.master.servername 
   
   hbase.master.servername
   10.0.1.2
   
 {code}
   to set master namelocation
4.   hbase-client
  add params 
{code}
 hbase.client.use.hostname 
 
 hbase.client.use.hostname
 true
 
{code}
 to choose which address to use

This patch is base on Hbase-1.3.0


> Add Property name Of 

[jira] [Updated] (HBASE-18124) Add Property name Of Strcut ServerName To Locate HMaster Or HRegionServer

2017-05-27 Thread liubangchen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liubangchen updated HBASE-18124:

Description: 
HBase has only one way to locate the hmaster or a hregionserver, unlike hdfs, 
which has two ways to locate a datanode: by name or by hostname.

I'm an engineer in cloud computing, and I'm in charge of offering HBase as a 
cloud service. To offer HBase as a cloud service, we need HBase to support 
another way to locate the hmaster or a hregionserver.
Our HBase cloud service architecture is shown in 1.jpg:

1. VM
The user's HBase client runs in a vm and uses a virtual ip address to access 
the hbase cluster.
2. NAT
Network Address Translation, from vip (Virtual Network Address) to pip 
(Physical Network Address).
3. HbaseCluster Service
The HbaseCluster Service runs in the physical network.

Problem
1. View from the vm
On the vm side, the vm uses the vip to communicate, but hbase has only one 
way to communicate, through the struct named ServerName. When the Hmaster 
starts up, it stores the master address and the meta region server address in 
zookeeper, and that address is the pip (Physical Network Address) because the 
hbase cluster runs in the physical network. The address the vm gets from 
zookeeper therefore does not work, because the vm uses the vip to 
communicate. One workaround is to give the physical machine a host address 
equal to the vip, like 192.168.0.1, but that is not a good solution.
2. View from the physical machine
The physical machine uses the pip to communicate.

Solution
1. protocol extension: change the proto message to the following:
{code}
message ServerName {
  required string host_name = 1;
  optional uint32 port = 2;
  optional uint64 start_code = 3;
  optional string name = 4;
}
{code}

This adds a field named name, like hdfs's datablock location.
2. metatable extension
Add a column to hbase:meta named info:namelocation.
3. hbase-server
Add the param hbase.regionserver.servername
{code}
<property>
  <name>hbase.regionserver.servername</name>
  <value>10.0.1.1</value>
</property>
{code}
to set the regionserver namelocation, and the param hbase.master.servername
{code}
<property>
  <name>hbase.master.servername</name>
  <value>10.0.1.2</value>
</property>
{code}
to set the master namelocation.
4. hbase-client
Add the param hbase.client.use.hostname
{code}
<property>
  <name>hbase.client.use.hostname</name>
  <value>true</value>
</property>
{code}
to choose which address to use.

This patch is based on HBase-1.3.0.

  was:
Hbase only have one way to locate hmaster or hregionserver not like hdfs has 
two way to locate datanode use by name or hostname.

I’m a engineer of  cloud computing , and I’m in charge of to make Hbase as a 
cloud service,when we make hbase as a cloud service we need  hbase support 
other way to support locate hmaster or hregionserver

Tencent Hbase cloud service architectue shown as follows 1.jpg

1.VM
User’s Hbase client work in vm and use virtual ip address to access hbase 
cluster.
2.NAT
   Network Address Translation, vip(Virtual Network Address) to pip (Physical 
Network Address)
3. HbaseCluster Service
 HbaseCluster Service work in physical network

Problem
1.  View on vm
  On vm side vm use vip to communication,but hbase have only one way to 
communication use struct named
  ServerName. When Hmaster startup will store master address and meta 
region server address in zookeeper, 
   then the address is pip(Physical Network Address)   because hbase 
cluster work in physical network . when vm 
  get the address from zookeeper will not work because   vm use vip to 
communication,one way to  solve this is to 
  make physical machine host as vip like 192.168.0.1,but is not better to 
make this.
2.  View on Physical machine
Physical machine use pip to communication

Solution
1.   protocol extend change proto message to below:
  {code}
  message ServerName {
  required string host_name = 1;
 optional uint32 port = 2;
 optional uint64 start_code = 3;
  optional string name=4;
 }
  {code}

 add a filed named name like hdfs’s datablock location
2.   metatable extend 
   add column to hbase:meta named info:namelocation
3.   hbase-server
  add params 
 {code}
  hbase.regionserver.servername
  
hbase.regionserver.servername
10.0.1.1
 
  {code}
  to regionserver namelocation
  add params
 {code}
   hbase.master.servername 
   
   hbase.master.servername
   10.0.1.2
   
 {code}
   to set master namelocation
4.   hbase-client
  add params 
{code}
 hbase.client.use.hostname 
 
 hbase.client.use.hostname
 true
 
{code}
 to choose which address to use

This patch is base on Hbase-1.3.0


> Add Property name Of Strcut 

[jira] [Updated] (HBASE-18124) Add Property name Of Strcut ServerName To Locate HMaster Or HRegionServer

2017-05-27 Thread liubangchen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liubangchen updated HBASE-18124:

Attachment: HBASE-18124.pdf

> Add Property name Of Strcut ServerName To Locate HMaster Or HRegionServer
> -
>
> Key: HBASE-18124
> URL: https://issues.apache.org/jira/browse/HBASE-18124
> Project: HBase
>  Issue Type: New Feature
>  Components: Client, hbase, master
>Reporter: liubangchen
>Assignee: liubangchen
> Attachments: 1.jpg, HBASE-18124.patch, HBASE-18124.pdf
>
>
> HBase has only one way to locate the hmaster or a hregionserver, unlike hdfs, 
> which has two ways to locate a datanode: by name or by hostname.
> I'm an engineer in cloud computing, and I'm in charge of offering HBase as a 
> cloud service. To offer HBase as a cloud service, we need HBase to support 
> another way to locate the hmaster or a hregionserver.
> The Tencent HBase cloud service architecture is shown in 1.jpg:
> 1. VM
> The user's HBase client runs in a vm and uses a virtual ip address to access 
> the hbase cluster.
> 2. NAT
> Network Address Translation, from vip (Virtual Network Address) to pip 
> (Physical Network Address).
> 3. HbaseCluster Service
> The HbaseCluster Service runs in the physical network.
> Problem
> 1. View from the vm
> On the vm side, the vm uses the vip to communicate, but hbase has only one 
> way to communicate, through the struct named ServerName. When the Hmaster 
> starts up, it stores the master address and the meta region server address in 
> zookeeper, and that address is the pip (Physical Network Address) because the 
> hbase cluster runs in the physical network. The address the vm gets from 
> zookeeper therefore does not work, because the vm uses the vip to 
> communicate. One workaround is to give the physical machine a host address 
> equal to the vip, like 192.168.0.1, but that is not a good solution.
> 2. View from the physical machine
> The physical machine uses the pip to communicate.
> Solution
> 1. protocol extension: change the proto message to the following:
> {code}
> message ServerName {
>   required string host_name = 1;
>   optional uint32 port = 2;
>   optional uint64 start_code = 3;
>   optional string name = 4;
> }
> {code}
> This adds a field named name, like hdfs's datablock location.
> 2. metatable extension
> Add a column to hbase:meta named info:namelocation.
> 3. hbase-server
> Add the param hbase.regionserver.servername
> {code}
> <property>
>   <name>hbase.regionserver.servername</name>
>   <value>10.0.1.1</value>
> </property>
> {code}
> to set the regionserver namelocation, and the param hbase.master.servername
> {code}
> <property>
>   <name>hbase.master.servername</name>
>   <value>10.0.1.2</value>
> </property>
> {code}
> to set the master namelocation.
> 4. hbase-client
> Add the param hbase.client.use.hostname
> {code}
> <property>
>   <name>hbase.client.use.hostname</name>
>   <value>true</value>
> </property>
> {code}
> to choose which address to use.
> This patch is based on HBase-1.3.0.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-18124) Add Property name Of Strcut ServerName To Locate HMaster Or HRegionServer

2017-05-27 Thread liubangchen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liubangchen updated HBASE-18124:

Attachment: (was: HBASE-18124.pdf)

> Add Property name Of Strcut ServerName To Locate HMaster Or HRegionServer
> -
>
> Key: HBASE-18124
> URL: https://issues.apache.org/jira/browse/HBASE-18124
> Project: HBase
>  Issue Type: New Feature
>  Components: Client, hbase, master
>Reporter: liubangchen
>Assignee: liubangchen
> Attachments: 1.jpg, HBASE-18124.patch
>
>





[jira] [Updated] (HBASE-18124) Add Property name Of Strcut ServerName To Locate HMaster Or HRegionServer

2017-05-27 Thread liubangchen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liubangchen updated HBASE-18124:

Description: 
HBase has only one way to locate the HMaster or an HRegionServer, unlike HDFS, 
which has two ways to locate a DataNode: by name or by hostname.

I am a cloud computing engineer, and I am in charge of offering HBase as a 
cloud service. To do that, HBase needs to support another way to locate the 
HMaster and the HRegionServers.

The Tencent HBase cloud service architecture is shown in 1.jpg:

1. VM
   The user's HBase client runs in a VM and uses a virtual IP address to 
access the HBase cluster.
2. NAT
   Network Address Translation maps a vip (virtual IP address) to a pip 
(physical IP address).
3. HBase cluster service
   The HBase cluster runs in the physical network.

Problem
1. View from the VM
   On the VM side, communication uses vips, but HBase has only one way to 
address a server: the struct named ServerName. When the HMaster starts up, it 
stores the master address and the meta region server address in ZooKeeper, 
and those addresses are pips (physical IP addresses) because the HBase 
cluster runs in the physical network. When a VM reads such an address from 
ZooKeeper, it cannot use it, because the VM communicates via vips. One 
workaround is to give the physical machine's host a vip such as 192.168.0.1, 
but that is not a good solution.
2. View from the physical machine
   Physical machines communicate via pips.

Solution
1. Protocol extension: change the proto message as below:
{code}
message ServerName {
  required string host_name = 1;
  optional uint32 port = 2;
  optional uint64 start_code = 3;
  optional string name = 4;
}
{code}
   Add a field named name, analogous to an HDFS data block location.
2. Meta table extension: add a column named info:namelocation to hbase:meta.
3. hbase-server: add the parameter
{code}
<property>
  <name>hbase.regionserver.servername</name>
  <value>10.0.1.1</value>
</property>
{code}
   to set the region server's name location, and the parameter
{code}
<property>
  <name>hbase.master.servername</name>
  <value>10.0.1.2</value>
</property>
{code}
   to set the master's name location.
4. hbase-client: add the parameter
{code}
<property>
  <name>hbase.client.use.hostname</name>
  <value>true</value>
</property>
{code}
   to choose which address to use.

This patch is based on HBase 1.3.0.
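As a rough illustration of the proposal (this is a hedged sketch, not the actual patch code; the `ServerName` dataclass, the `resolve_address` helper, and the example addresses are all invented for illustration), the client-side address selection driven by `hbase.client.use.hostname` could look like this:

```python
# Sketch of the proposed dual-address ServerName resolution (hypothetical,
# not the real HBase patch): each server publishes both its physical
# hostname and an extra "name" locator (e.g. a vip-reachable address), and
# the client picks one based on the hbase.client.use.hostname switch.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ServerName:
    host_name: str            # existing field 1: physical hostname / pip
    port: int                 # existing field 2
    start_code: int           # existing field 3
    name: Optional[str] = None  # proposed field 4: alternate locator (vip side)

def resolve_address(server: ServerName, use_hostname: bool) -> str:
    """Mimic the proposed switch: True -> use host_name (physical network),
    False -> use the extra name field (virtual network), if published."""
    if use_hostname or server.name is None:
        return f"{server.host_name}:{server.port}"
    return f"{server.name}:{server.port}"

rs = ServerName("10.0.1.1", 16020, 1495900000000, name="192.168.0.5")
print(resolve_address(rs, use_hostname=True))   # physical-network client -> 10.0.1.1:16020
print(resolve_address(rs, use_hostname=False))  # VM-side client via vip -> 192.168.0.5:16020
```

A client falls back to the physical hostname when no alternate name was published, which keeps the behavior backward compatible with servers that never set field 4.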



> Add Property name Of 

[jira] [Commented] (HBASE-18124) Add Property name Of Strcut ServerName To Locate HMaster Or HRegionServer

2017-05-27 Thread liubangchen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027292#comment-16027292
 ] 

liubangchen commented on HBASE-18124:
-

Our requirement is like the HDFS DataNode block location, for example:
Name: 10.11.9.130:4001 (10.11.9.130)
Hostname: 10.11.9.130
Decommission Status : Normal

Which address an RPC call uses is chosen through the parameter 
dfs.datanode.use.datanode.hostname: false uses the name, true uses the 
hostname.

> Add Property name Of Strcut ServerName To Locate HMaster Or HRegionServer
> -
>
> Key: HBASE-18124
> URL: https://issues.apache.org/jira/browse/HBASE-18124
> Project: HBase
>  Issue Type: New Feature
>  Components: Client, hbase, master
>Reporter: liubangchen
>Assignee: liubangchen
> Attachments: 1.jpg, HBASE-18124.patch, HBASE-18124.pdf
>
>





[jira] [Commented] (HBASE-18124) Add Property name Of Strcut ServerName To Locate HMaster Or HRegionServer

2017-05-27 Thread liubangchen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027286#comment-16027286
 ] 

liubangchen commented on HBASE-18124:
-

Hi [~ted_yu], I think this feature is different from HBASE-12954. Our 
requirement is:
1. Use hbase.regionserver.hostname or hbase.master.hostname to locate a 
server in the physical network.
2. Use another address to locate a server in the virtual network.
3. Both the vip (virtual IP address) and the pip (physical IP address) must 
be published in ZooKeeper.

I am not good at English; I will revise the description later, thanks.

> Add Property name Of Strcut ServerName To Locate HMaster Or HRegionServer
> -
>
> Key: HBASE-18124
> URL: https://issues.apache.org/jira/browse/HBASE-18124
> Project: HBase
>  Issue Type: New Feature
>  Components: Client, hbase, master
>Reporter: liubangchen
>Assignee: liubangchen
> Attachments: 1.jpg, HBASE-18124.patch, HBASE-18124.pdf
>
>





[jira] [Updated] (HBASE-17678) ColumnPaginationFilter in a FilterList gives different results when using MUST_PASS_ONE vs MUST_PASS_ALL and a cell has multiple values for a given timestamp

2017-05-27 Thread Zheng Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-17678:
-
Affects Version/s: 2.0.0

> ColumnPaginationFilter in a FilterList gives different results when using 
> MUST_PASS_ONE vs MUST_PASS_ALL and a cell has multiple values for a given 
> timestamp
> -
>
> Key: HBASE-17678
> URL: https://issues.apache.org/jira/browse/HBASE-17678
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 2.0.0, 1.3.0, 1.2.1
> Environment: RedHat 7.x
>Reporter: Jason Tokayer
>Assignee: Zheng Hu
> Attachments: TestColumnPaginationFilterDemo.java
>
>
> When combining ColumnPaginationFilter with a single-element filterList, 
> MUST_PASS_ONE and MUST_PASS_ALL give different results when there are 
> multiple cells with the same timestamp. This is unexpected since there is 
> only a single filter in the list, and I would believe that MUST_PASS_ALL and 
> MUST_PASS_ONE should only affect the behavior of the joined filter and not 
> the behavior of any one of the individual filters. If this is not a bug then 
> it would be nice if the documentation is updated to explain this nuanced 
> behavior.
> I know that there was a decision made in an earlier Hbase version to keep 
> multiple cells with the same timestamp. This is generally fine but presents 
> an issue when using the aforementioned filter combination.
> Steps to reproduce:
> In the shell create a table and insert some data:
> {code:none}
> create 'ns:tbl',{NAME => 'family',VERSIONS => 100}
> put 'ns:tbl','row','family:name','John',1
> put 'ns:tbl','row','family:name','Jane',1
> put 'ns:tbl','row','family:name','Gil',1
> put 'ns:tbl','row','family:name','Jane',1
> {code}
> Then, use a Scala client as:
> {code:none}
> import org.apache.hadoop.hbase.filter._
> import org.apache.hadoop.hbase.util.Bytes
> import org.apache.hadoop.hbase.client._
> import org.apache.hadoop.hbase.{CellUtil, HBaseConfiguration, TableName}
> import scala.collection.mutable._
> val config = HBaseConfiguration.create()
> config.set("hbase.zookeeper.quorum", "localhost")
> config.set("hbase.zookeeper.property.clientPort", "2181")
> val connection = ConnectionFactory.createConnection(config)
> val logicalOp = FilterList.Operator.MUST_PASS_ONE
> val limit = 1
> var resultsList = ListBuffer[String]()
> for (offset <- 0 to 20 by limit) {
>   val table = connection.getTable(TableName.valueOf("ns:tbl"))
>   val paginationFilter = new ColumnPaginationFilter(limit,offset)
>   val filterList: FilterList = new FilterList(logicalOp,paginationFilter)
>   println("@ filterList = "+filterList)
>   val results = table.get(new Get(Bytes.toBytes("row")).setFilter(filterList))
>   val cells = results.rawCells()
>   if (cells != null) {
>   for (cell <- cells) {
> val value = new String(CellUtil.cloneValue(cell))
> val qualifier = new String(CellUtil.cloneQualifier(cell))
> val family = new String(CellUtil.cloneFamily(cell))
> val result = "OFFSET = "+offset+":"+family + "," + qualifier + "," + value + "," + cell.getTimestamp()
> resultsList.append(result)
>   }
>   }
> }
> resultsList.foreach(println)
> {code}
> Here are the results for different limit and logicalOp settings:
> {code:none}
> Limit = 1 & logicalOp = MUST_PASS_ALL:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> Limit = 1 & logicalOp = MUST_PASS_ONE:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> OFFSET = 1:family,name,Gil,1
> OFFSET = 2:family,name,Jane,1
> OFFSET = 3:family,name,John,1
> Limit = 2 & logicalOp = MUST_PASS_ALL:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> Limit = 2 & logicalOp = MUST_PASS_ONE:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> OFFSET = 2:family,name,Jane,1
> {code}
> So, it seems that MUST_PASS_ALL gives the expected behavior, but 
> MUST_PASS_ONE does not. Furthermore, MUST_PASS_ONE seems to return only a 
> single (not-duplicated) value within a page, but not across pages.





[jira] [Commented] (HBASE-17678) ColumnPaginationFilter in a FilterList gives different results when using MUST_PASS_ONE vs MUST_PASS_ALL and a cell has multiple values for a given timestamp

2017-05-27 Thread Zheng Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027285#comment-16027285
 ] 

Zheng Hu commented on HBASE-17678:
--

I think the problem is: 

A FilterList with MUST_PASS_ONE (e.g. FilterList(Operator.MUST_PASS_ONE, 
filter-A, filter-B)) has to check cells one by one against both filter-A and 
filter-B, even if filter-A returned NEXT_COL and filter-B returned SKIP for 
the previous cell (because we still cannot be sure whether the next cell fits 
filter-B; if it does, we should return it to the user).
So the list may still pass a cell whose column is the same as the previous 
cell's even though filter-A returned NEXT_COL for that previous cell, and if 
filter-A keeps column-relative state (e.g. a count) as a private member of 
its class, the bug occurs.

So I think that if filter-A is ColumnCountGetFilter (and probably other 
similar filters), the bug can occur too.
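To make that mechanism concrete, here is a small hedged Python model (not HBase code; the return codes, the per-column seek, and the scanner loop are simplified assumptions) of a stateful column-counting filter whose behavior diverges depending on whether its NEXT_COL seek hint is honored, as under MUST_PASS_ALL, or degraded to per-cell checking, as MUST_PASS_ONE must do:

```python
# Simplified model (not real HBase code) of why a stateful filter such as
# ColumnPaginationFilter misbehaves when its NEXT_COL seek hint is ignored.
INCLUDE_AND_NEXT_COL, NEXT_COL = "INCLUDE_AND_NEXT_COL", "NEXT_COL"

class ColumnPaginationFilter:
    """Keeps a private count of columns seen; it assumes it never sees two
    cells of the same column, because NEXT_COL normally makes the scanner
    seek straight to the next column."""
    def __init__(self, limit, offset):
        self.limit, self.offset, self.count = limit, offset, 0

    def filter_cell(self, cell):
        if self.count >= self.offset + self.limit:
            return NEXT_COL                      # past the requested page
        code = INCLUDE_AND_NEXT_COL if self.count >= self.offset else NEXT_COL
        self.count += 1                          # meant to be once per COLUMN
        return code

def scan(cells, flt, honor_seek_hints):
    out, skip_col = [], None
    for col, val in cells:
        if honor_seek_hints and col == skip_col:
            continue      # MUST_PASS_ALL-style: seek past the rest of the column
        code = flt.filter_cell((col, val))
        if code == INCLUDE_AND_NEXT_COL:
            out.append(val)
        skip_col = col    # both return codes request a seek to the next column
    return out

# Three cells of the SAME column (three versions), as in the repro above.
cells = [("name", "Jane"), ("name", "Gil"), ("name", "John")]
# Page at offset 1: with the hint honored, the column is counted once and the
# page is empty; with the hint degraded (MUST_PASS_ONE), count advances per
# CELL, so another version of the same column leaks into the page.
print(scan(cells, ColumnPaginationFilter(1, 1), honor_seek_hints=True))   # []
print(scan(cells, ColumnPaginationFilter(1, 1), honor_seek_hints=False))  # ['Gil']
```

The divergence matches the symptom in the report: per-page results look deduplicated, but the same column reappears at later offsets because the filter's private count was advanced once per cell instead of once per column.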

> ColumnPaginationFilter in a FilterList gives different results when using 
> MUST_PASS_ONE vs MUST_PASS_ALL and a cell has multiple values for a given 
> timestamp
> -
>
> Key: HBASE-17678
> URL: https://issues.apache.org/jira/browse/HBASE-17678
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 1.3.0, 1.2.1
> Environment: RedHat 7.x
>Reporter: Jason Tokayer
>Assignee: Zheng Hu
> Attachments: TestColumnPaginationFilterDemo.java
>
>

[jira] [Commented] (HBASE-18129) truncate_preserve fails when the truncate method doesn't exists on the master

2017-05-27 Thread Guangxu Cheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027283#comment-16027283
 ] 

Guangxu Cheng commented on HBASE-18129:
---

Hmm, I have just found that [~appy] created HBASE-16120 to add a test for 
truncate_preserve.
Should the UT go here or into HBASE-16120?

> truncate_preserve fails when the truncate method doesn't exists on the master
> -
>
> Key: HBASE-18129
> URL: https://issues.apache.org/jira/browse/HBASE-18129
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 2.0.0, 1.2.5
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
> Attachments: HBASE-18129-branch-1.patch
>
>
> Recently, I ran a rolling upgrade from HBase 0.98.x to HBase 1.2.5. While 
> the master had not been upgraded yet, I truncated a table with the 1.2.5 
> truncate_preserve command, but it failed.
> {code}
> hbase(main):001:0> truncate_preserve 'cf_logs'
> Truncating 'cf_logs' table (it may take a while):
>  - Disabling table...
>  - Truncating table...
>  - Dropping table...
>  - Creating table with region boundaries...
> ERROR: no method 'createTable' for arguments 
> (org.apache.hadoop.hbase.HTableDescriptor,org.jruby.java.proxies.ArrayJavaProxy)
>  on Java::OrgApacheHadoopHbaseClient::HBaseAdmin
> {code}
> After checking the code and commit history, I found that HBASE-12833 caused 
> this bug, so we should fix it.
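The failing fallback path can be sketched as the following hedged Python model (the FakeAdmin class and every method name on it are invented for illustration; the real shell drives HBaseAdmin from JRuby): truncate_preserve saves the region boundaries, tries the master's truncate, and on older masters falls back to drop-and-recreate, which is the step where the createTable signature mismatch bites.

```python
# Toy model (invented API, not the real HBaseAdmin) of the truncate_preserve
# fallback: try the master's truncate RPC, else drop and recreate the table
# with the region boundaries saved beforehand.
class UnsupportedOperation(Exception):
    pass

class FakeAdmin:
    """In-memory stand-in for an admin client; supports_truncate=False
    mimics an old master (0.98.x) that has no truncate method."""
    def __init__(self, supports_truncate):
        self.supports_truncate = supports_truncate
        self.tables = {"cf_logs": {"splits": [b"a", b"m"], "rows": 100}}

    def get_splits(self, table):
        return self.tables[table]["splits"]

    def truncate(self, table, preserve_splits):
        if not self.supports_truncate:
            raise UnsupportedOperation("master has no truncate method")
        self.tables[table]["rows"] = 0

    def drop(self, table):
        del self.tables[table]

    def create(self, table, splits):
        self.tables[table] = {"splits": splits, "rows": 0}

def truncate_preserve(admin, table):
    splits = admin.get_splits(table)            # save region boundaries first
    try:
        admin.truncate(table, preserve_splits=True)   # newer masters
    except UnsupportedOperation:
        admin.drop(table)                        # older masters: manual fallback
        admin.create(table, splits)              # recreate with same boundaries
    return admin.tables[table]

old_master = FakeAdmin(supports_truncate=False)
print(truncate_preserve(old_master, "cf_logs"))  # {'splits': [b'a', b'm'], 'rows': 0}
```

In the real shell the fallback's create call is where the reported "no method 'createTable'" error surfaces, because the argument types passed from JRuby no longer match any createTable overload after HBASE-12833.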





[jira] [Commented] (HBASE-18129) truncate_preserve fails when the truncate method doesn't exists on the master

2017-05-27 Thread Guangxu Cheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027273#comment-16027273
 ] 

Guangxu Cheng commented on HBASE-18129:
---

Uploaded the first patch, which adds a UT for truncate_preserve.

> truncate_preserve fails when the truncate method doesn't exists on the master
> -
>
> Key: HBASE-18129
> URL: https://issues.apache.org/jira/browse/HBASE-18129
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 2.0.0, 1.2.5
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
> Attachments: HBASE-18129-branch-1.patch
>
>


